dc.contributorUniversidade Estadual Paulista (Unesp)
dc.contributorPurdue Univ
dc.date.accessioned2014-05-20T13:28:54Z
dc.date.accessioned2022-10-05T13:27:03Z
dc.date.available2014-05-20T13:28:54Z
dc.date.available2022-10-05T13:27:03Z
dc.date.created2014-05-20T13:28:54Z
dc.date.issued1998-07-01
dc.identifierIEEE Transactions on Neural Networks. New York: IEEE-Inst Electrical Electronics Engineers Inc., v. 9, n. 4, p. 629-638, 1998.
dc.identifier1045-9227
dc.identifierhttp://hdl.handle.net/11449/9651
dc.identifier10.1109/72.701176
dc.identifierWOS:000074419800005
dc.identifier.urihttp://repositorioslatinoamericanos.uchile.cl/handle/2250/3885975
dc.description.abstractContinuous-time neural networks for solving convex nonlinear unconstrained programming problems without using gradient information of the objective function are proposed and analyzed. Thus, the proposed networks are nonderivative optimizers. First, networks for optimizing objective functions of one variable are discussed. Then, an existing one-dimensional optimizer is analyzed, and a new line search optimizer is proposed. It is shown that the proposed optimizer network is robust in the sense that it has disturbance rejection property. The network can be implemented easily in hardware using standard circuit elements. The one-dimensional net is used as a building block in multidimensional networks for optimizing objective functions of several variables. The multidimensional nets implement a continuous version of the coordinate descent method.
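The abstract describes networks that minimize a convex function using only function evaluations, with a one-dimensional line-search optimizer as the building block for a coordinate-descent scheme. A minimal software sketch of that idea is given below; it is a discrete-time analogue, not the paper's continuous-time analog circuit, and the function names, the golden-section line search, and the quadratic test function are all illustrative assumptions.

```python
import math

def golden_section(f, a, b, tol=1e-8):
    # Derivative-free line search: golden-section minimization of f on [a, b].
    invphi = (math.sqrt(5) - 1) / 2
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

def coordinate_descent(f, x0, span=10.0, sweeps=50):
    # Cyclic coordinate descent: minimize f along one coordinate axis at a
    # time, using only function evaluations (no gradient information).
    x = list(x0)
    for _ in range(sweeps):
        for i in range(len(x)):
            def f_line(t, i=i):
                y = x.copy()
                y[i] = t
                return f(y)
            x[i] = golden_section(f_line, x[i] - span, x[i] + span)
    return x

# Hypothetical convex test problem (not from the paper): minimum at (1, -2).
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 2.0) ** 2
x_star = coordinate_descent(f, [0.0, 0.0])
```

For a separable quadratic like this one, each sweep solves the subproblems exactly, so the iterate settles at the minimizer almost immediately; the paper's contribution is realizing the same line-search-plus-coordinate-descent loop in continuous time with standard circuit elements.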
dc.languageeng
dc.publisherInstitute of Electrical and Electronics Engineers (IEEE)
dc.relationIEEE Transactions on Neural Networks
dc.rightsRestricted access
dc.sourceWeb of Science
dc.subjectanalog networks
dc.subjectcoordinate descent
dc.subjectderivative free optimization
dc.subjectunconstrained optimization
dc.titleAnalog neural nonderivative optimizers
dc.typeArticle