dc.creator: Alfonso, Marcelo
dc.creator: Kavka, Carlos
dc.creator: Printista, Alicia Marcela
dc.date: 2000-10
dc.date: 2012-11-01T14:40:14Z
dc.identifier: http://sedici.unlp.edu.ar/handle/10915/23442
dc.description: The back-propagation algorithm is one of the most widely used training algorithms for neural networks. Training a multilayer perceptron with this algorithm can take a very long time, which makes neural networks harder to adopt in practice. One approach to this problem is to parallelize the training algorithm. Many different approaches exist; however, most of them are tailored to specialized hardware. The idea of using a network of workstations as a general-purpose parallel computer is widely accepted, but the communication overhead imposes restrictions on the design of parallel algorithms. In this work, we propose a parallel implementation of the back-propagation algorithm that is suitable for a network of workstations. The objective is twofold: first, to increase the performance of the training phase of the algorithm with low communication overhead; second, to provide a dynamic assignment of tasks to processors in order to make the best use of the computational resources. (An illustrative sketch of such a scheme follows this record.)
dc.description: I Workshop de Agentes y Sistemas Inteligentes (WASI)
dc.description: Red de Universidades con Carreras en Informática (RedUNCI)
dc.format: application/pdf
dc.language: en
dc.relation: VI Congreso Argentino de Ciencias de la Computación
dc.rights: http://creativecommons.org/licenses/by-nc-sa/2.5/ar/
dc.rights: Creative Commons Attribution-NonCommercial-ShareAlike 2.5 Argentina (CC BY-NC-SA 2.5)
dc.subject: Computer Science (Ciencias Informáticas)
dc.title: A low communication overhead parallel implementation of the back-propagation algorithm
dc.type: Conference object
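
The paper itself is not reproduced in this record, so the following is only a hedged, minimal sketch of the kind of scheme the abstract describes: example-partitioned, data-parallel back-propagation in which each worker computes gradients over a slice of the training set and only the summed gradients cross process boundaries once per epoch (low communication overhead), with more slices than workers so idle processes pick up pending slices (a stand-in for dynamic task assignment). The network architecture, learning rate, function names, and the use of Python's multiprocessing and NumPy are all illustrative assumptions, not the authors' implementation, which targeted a network of workstations.

import numpy as np
from multiprocessing import Pool

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_gradients(args):
    # Forward and backward pass over one slice of the training set.
    # Biases are omitted for brevity.
    W1, W2, X, T = args
    H = sigmoid(X @ W1)               # hidden activations
    Y = sigmoid(H @ W2)               # network outputs
    dY = (Y - T) * Y * (1.0 - Y)      # output-layer deltas (MSE loss)
    dH = (dY @ W2.T) * H * (1.0 - H)  # hidden-layer deltas
    return H.T @ dY, X.T @ dH         # gradients for W2 and W1

def train_epoch(W1, W2, X, T, pool, n_tasks, lr=0.5):
    # More slices than workers, so the pool hands slices to idle
    # processes as they finish: a simple form of dynamic assignment.
    tasks = [(W1, W2, Xs, Ts)
             for Xs, Ts in zip(np.array_split(X, n_tasks),
                               np.array_split(T, n_tasks))]
    gW1 = np.zeros_like(W1)
    gW2 = np.zeros_like(W2)
    # One gradient reduction per epoch keeps communication low.
    for dW2, dW1 in pool.imap_unordered(local_gradients, tasks):
        gW2 += dW2
        gW1 += dW1
    return W1 - lr * gW1, W2 - lr * gW2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((512, 8))                 # toy inputs
    T = rng.random((512, 2))                 # toy targets
    W1 = 0.1 * rng.standard_normal((8, 16))  # input -> hidden weights
    W2 = 0.1 * rng.standard_normal((16, 2))  # hidden -> output weights
    with Pool(processes=4) as pool:
        for epoch in range(10):
            W1, W2 = train_epoch(W1, W2, X, T, pool, n_tasks=16)

Summing per-slice gradients before a single weight update is mathematically equivalent to batch back-propagation on the whole training set, which is what makes this partitioning safe; the trade-off is that workers synchronize once per epoch at the reduction step.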