dc.contributor: Universidade Estadual Paulista (Unesp)
dc.date.accessioned: 2014-05-20T15:25:49Z
dc.date.available: 2014-05-20T15:25:49Z
dc.date.created: 2014-05-20T15:25:49Z
dc.date.issued: 2004-01-01
dc.identifier: 2004 IEEE International Joint Conference on Neural Networks, Vols 1-4, Proceedings. New York: IEEE, p. 1021-1026, 2004.
dc.identifier: 1098-7576
dc.identifier: http://hdl.handle.net/11449/36159
dc.identifier: 10.1109/IJCNN.2004.1380074
dc.identifier: WOS:000224941900177
dc.identifier: 4831789901823849
dc.identifier: 0000-0002-9984-9949
dc.description.abstract: The multilayer perceptron has become one of the most widely used neural networks for solving a wide variety of problems. Training follows the supervised method: inputs are presented to the network and its output is compared with a desired value. However, the algorithm exhibits convergence problems when the desired output has a small slope over the discrete time samples or is a quasi-constant value. This paper presents an alternative approach to this convergence problem: the desired output data set is pre-conditioned before training, and a post-conditioning step is applied when the generalization results are obtained (a minimal illustrative sketch of this idea follows the record below). Simulation results are presented to validate the proposed approach.
dc.language: eng
dc.publisher: IEEE
dc.relation: 2004 IEEE International Joint Conference on Neural Networks, Vols 1-4, Proceedings
dc.rights: Open access
dc.source: Web of Science
dc.title: An alternative approach to solve convergence problems in the backpropagation algorithm
dc.type: Conference proceedings
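
The abstract does not specify the exact pre-/post-conditioning transform, so the following is only a minimal sketch of the general idea under assumed choices: the quasi-constant desired output is linearly rescaled (min-max) into the sigmoid's active range [0.1, 0.9] before plain backpropagation training, and the inverse mapping is applied to the generalization outputs. The function names precondition and postcondition, the toy target, the network size, and the learning rate are illustrative assumptions, not the authors' implementation.

import numpy as np

def precondition(y, lo=0.1, hi=0.9):
    # Assumed transform: linearly map targets into [lo, hi] so the sigmoid
    # output sees an error surface with a usable slope.
    y_min, y_max = y.min(), y.max()
    scale = (hi - lo) / (y_max - y_min)
    return lo + scale * (y - y_min), (y_min, y_max, lo, hi)

def postcondition(y_scaled, params):
    # Invert the pre-conditioning to recover outputs in the original range.
    y_min, y_max, lo, hi = params
    return y_min + (y_scaled - lo) * (y_max - y_min) / (hi - lo)

# Toy data: a quasi-constant desired output (small slope over discrete samples).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
y = 5.0 + 0.01 * x                      # nearly flat target
y_train, params = precondition(y)

# Minimal one-hidden-layer MLP trained with plain backpropagation.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(5000):
    h = sigmoid(x @ W1 + b1)            # hidden activations, shape (200, 8)
    out = sigmoid(h @ W2 + b2)          # network output in (0, 1), shape (200, 1)
    d_out = (out - y_train) * out * (1.0 - out)   # output-layer delta
    d_h = (d_out @ W2.T) * h * (1.0 - h)          # hidden-layer delta
    W2 -= lr * h.T @ d_out / len(x)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * x.T @ d_h / len(x)
    b1 -= lr * d_h.mean(axis=0)

# Post-condition the generalization results back to the original scale.
y_pred = postcondition(sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2), params)
print("max abs error after post-conditioning:", float(np.abs(y_pred - y).max()))

In this sketch the conditioning keeps the targets away from the flat saturation regions of the sigmoid, which is one plausible reading of the pre-/post-conditioning proposal described in the abstract.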

