Conference proceedings
Complement To The Back-propagation Algorithm: An Upper Bound For The Learning Rate
Published in:
Proceedings of the International Joint Conference on Neural Networks, IEEE, Piscataway, NJ, United States, v. 4, p. 517-522, 2000.
2-s2.0-0033698502
Authors
Cerqueira Jes J.F.
Palhares Alvaro G.B.
Madrid Marconi K.
Institution
Abstract
A convergence analysis for learning algorithms based on gradient optimization methods was carried out and applied to the back-propagation algorithm. The analysis, performed using Lyapunov's second method, supplies an upper bound for the learning rate of the back-propagation algorithm. This upper bound is useful for tuning the parameters of the back-propagation algorithm, a problem that is usually solved via empirical methods. The solution presented is based on a well-known stability criterion for nonlinear systems.
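The paper's Lyapunov-based bound is not reproduced in this record, but the general idea of a learning-rate upper bound can be illustrated with the classical result for gradient descent on a quadratic loss: the iteration converges only if the rate stays below 2 divided by the largest eigenvalue of the Hessian. The sketch below is an illustration of that textbook bound, not the paper's derivation; the function name and tolerance are illustrative choices.

```python
import numpy as np

# Illustration (not the paper's result): for gradient descent on the
# quadratic loss L(w) = 0.5 * w^T A w, the update w <- w - eta * A @ w
# converges iff eta < 2 / lambda_max(A), a classical learning-rate
# upper bound analogous in spirit to the one the paper derives.

def gd_converges(A, eta, steps=200):
    """Run plain gradient descent on 0.5*w^T A w; report convergence."""
    w = np.ones(A.shape[0])
    for _ in range(steps):
        w = w - eta * (A @ w)          # gradient of the quadratic is A @ w
    return bool(np.linalg.norm(w) < 1e-3)

A = np.diag([1.0, 4.0])                # lambda_max = 4 -> bound eta < 0.5
print(gd_converges(A, eta=0.4))        # below the bound: converges -> True
print(gd_converges(A, eta=0.6))        # above the bound: diverges -> False
```

For the diagonal matrix above, each coordinate evolves as w_k <- (1 - eta * lambda_k) * w_k, so the iterate shrinks exactly when |1 - eta * lambda_k| < 1 for every eigenvalue, which recovers the eta < 2/lambda_max condition.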