dc.creator Cerqueira Jes J.F.
dc.creator Palhares Alvaro G.B.
dc.creator Madrid Marconi K.
dc.date 2000
dc.date 2015-06-30T19:52:04Z
dc.date 2015-11-26T14:47:31Z
dc.date.accessioned 2018-03-28T21:57:59Z
dc.date.available 2018-03-28T21:57:59Z
dc.identifier Proceedings of the International Joint Conference on Neural Networks, IEEE, Piscataway, NJ, United States, v. 4, p. 517-522, 2000.
dc.identifier http://www.scopus.com/inward/record.url?eid=2-s2.0-0033698502&partnerID=40&md5=001ff17d9791e2f571d3c5083ae8c7fc
dc.identifier http://www.repositorio.unicamp.br/handle/REPOSIP/107392
dc.identifier http://repositorio.unicamp.br/jspui/handle/REPOSIP/107392
dc.identifier 2-s2.0-0033698502
dc.identifier.uri http://repositorioslatinoamericanos.uchile.cl/handle/2250/1253311
dc.description A convergence analysis for learning algorithms based on gradient optimization methods is presented and applied to the back-propagation algorithm. The analysis, carried out using Lyapunov's second method, supplies an upper bound for the learning rate of the back-propagation algorithm. This upper bound is useful for finding solutions to the parameter-adjustment problem for the back-propagation algorithm, which is usually solved via empirical methods. The solution presented is based on the well-known stability criterion for nonlinear systems.
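To make the abstract's central idea concrete, here is a minimal sketch (not taken from the paper) of how a Lyapunov-style upper bound on the learning rate behaves in the simplest setting: plain gradient descent on a toy quadratic loss, where the classical stability condition is eta < 2 / lambda_max of the Hessian. The matrix A, the step counts, and the constants below are illustrative assumptions only; the paper derives its own, different bound for back-propagation.

    import numpy as np

    # Illustrative assumption: a toy quadratic loss f(w) = 0.5 * w^T A w with
    # symmetric positive-definite Hessian A. For this case, gradient descent
    # w <- w - eta * A w is stable iff eta < 2 / lambda_max(A); the paper's
    # bound for back-propagation is analogous but not this formula.
    A = np.diag([1.0, 10.0])
    eta_max = 2.0 / np.linalg.eigvalsh(A).max()   # classical upper bound

    def run_gd(eta, steps=60):
        """Run plain gradient descent on f(w) = 0.5 * w^T A w."""
        w = np.array([1.0, 1.0])
        for _ in range(steps):
            w = w - eta * (A @ w)   # gradient of the quadratic is A @ w
        return np.linalg.norm(w)

    print(f"upper bound eta_max = {eta_max:.3f}")           # 0.200 here
    print("just below the bound:", run_gd(0.9 * eta_max))   # shrinks toward 0
    print("just above the bound:", run_gd(1.1 * eta_max))   # grows without bound

Running the sketch shows the qualitative behavior the abstract describes: any rate below the bound drives the iterate norm toward zero, while a rate just above it diverges, which is why such a bound is useful for parameter adjustment.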
dc.language en
dc.publisher IEEE, Piscataway, NJ, United States
dc.relation Proceedings of the International Joint Conference on Neural Networks
dc.rights closed access (fechado)
dc.source Scopus
dc.title Complement to the Back-Propagation Algorithm: An Upper Bound for the Learning Rate
dc.type Conference proceedings

