On the approximation of the inverse dynamics of a robotic manipulator by a neural network trained with a stochastic learning algorithm

dc.creatorSegura, Enrique Carlos
dc.date2019-02-19T21:46:12Z
dc.date2013-12-31
dc.date.accessioned2023-10-03T19:51:55Z
dc.date.available2023-10-03T19:51:55Z
dc.identifierSegura, E. (2013). On the approximation of the inverse dynamics of a robotic manipulator by a neural network trained with a stochastic learning algorithm. INGE CUC, 9(2), 39-43. Retrieved from https://revistascientificas.cuc.edu.co/ingecuc/article/view/4
dc.identifier0122-6517, 2382-4700 (electronic)
dc.identifierhttp://hdl.handle.net/11323/2631
dc.identifier2382-4700
dc.identifierCorporación Universidad de la Costa
dc.identifier0122-6517
dc.identifierREDICUC - Repositorio CUC
dc.identifierhttps://repositorio.cuc.edu.co/
dc.identifier.urihttps://repositorioslatinoamericanos.uchile.cl/handle/2250/9172781
dc.descriptionThe SAGA algorithm is used to approximate the inverse dynamics of a robotic manipulator with two rotational joints. SAGA (Simulated Annealing Gradient Adaptation) is a stochastic strategy for the additive construction of an artificial neural network of the two-layer perceptron type, based on three essential elements: a) updating the network weights using gradient information from the cost function; b) accepting or rejecting the proposed change through a classical simulated annealing technique; and c) progressively growing the neural network as its structure proves insufficient, using a conservative strategy for adding units to the hidden layer. Experiments are performed and efficiency is analyzed in terms of the relation between mean relative errors (in the training and testing sets), network size, and computation time. The ability of the proposed technique to obtain good approximations while minimizing the complexity of the network's architecture and, hence, the required computational memory, is emphasized. Moreover, the evolution of the minimization process as the cost surface is modified is also discussed.
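The abstract leaves the target map implicit; as standard robotics background (not quoted from the paper), the inverse dynamics of a two-joint rigid manipulator takes the form

\tau = M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q)

where q \in \mathbb{R}^2 holds the joint angles, M(q) is the inertia matrix, C(q,\dot{q}) collects the Coriolis and centrifugal terms, and g(q) the gravity torques. The network thus learns the mapping (q, \dot{q}, \ddot{q}) \mapsto \tau from sampled trajectories.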
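Since the record carries no pseudocode, the following is a minimal sketch of a SAGA-style training loop assembled from the three elements the abstract names. It is a hypothetical illustration, not the paper's implementation: the tanh hidden layer, the geometric cooling schedule, the stall-counter growth trigger (`patience`), and all function and parameter names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(X, W1, b1, W2, b2):
    """Two-layer perceptron: one tanh hidden layer, linear output."""
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

def cost(X, Y, params):
    """Mean-squared error of the network on the data set."""
    return np.mean((forward(X, *params)[1] - Y) ** 2)

def gradients(X, Y, params):
    """Analytic gradient of the cost w.r.t. [W1, b1, W2, b2]."""
    W1, b1, W2, b2 = params
    H, out = forward(X, *params)
    dout = 2.0 * (out - Y) / len(X)          # d(cost)/d(output)
    dZ = (dout @ W2.T) * (1.0 - H ** 2)      # back through tanh
    return [X.T @ dZ, dZ.sum(0), H.T @ dout, dout.sum(0)]

def saga_style_fit(X, Y, hidden=2, max_hidden=16, T=1.0, cooling=0.999,
                   lr=0.05, steps=20000, patience=500):
    n_in, n_out = X.shape[1], Y.shape[1]
    params = [rng.normal(0.0, 0.1, (n_in, hidden)), np.zeros(hidden),
              rng.normal(0.0, 0.1, (hidden, n_out)), np.zeros(n_out)]
    E, stall = cost(X, Y, params), 0
    for _ in range(steps):
        # (a) gradient-informed proposal for the weight update
        trial = [p - lr * g for p, g in zip(params, gradients(X, Y, params))]
        E_new = cost(X, Y, trial)
        # (b) classical Metropolis accept/reject at current temperature T
        if E_new < E or rng.random() < np.exp(-(E_new - E) / T):
            params, E, stall = trial, E_new, 0
        else:
            stall += 1
        T *= cooling
        # (c) conservative growth: when progress stalls, add one hidden
        # unit with near-zero weights so the learned map barely changes
        if stall > patience and len(params[1]) < max_hidden:
            W1, b1, W2, b2 = params
            params = [np.hstack([W1, rng.normal(0.0, 1e-3, (n_in, 1))]),
                      np.append(b1, 0.0),
                      np.vstack([W2, rng.normal(0.0, 1e-3, (1, n_out))]),
                      b2]
            stall = 0
    return params, E

# Hypothetical usage: X holds (q, dq, ddq) samples for the 2-DOF arm,
# Y the corresponding joint torques from the known dynamics model.
```

The near-zero initialization of each added unit is one way to realize the "conservative" growth the abstract describes: the new unit barely perturbs the function already learned, so the accepted cost level is preserved while the architecture gains capacity.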
dc.formatapplication/pdf
dc.languageeng
dc.publisherCorporación Universidad de la Costa
dc.relationINGE CUC; Vol. 9, No. 2 (2013)
dc.rightsinfo:eu-repo/semantics/openAccess
dc.rightshttp://purl.org/coar/access_right/c_abf2
dc.sourceINGE CUC
dc.sourcehttps://revistascientificas.cuc.edu.co/ingecuc/article/view/4
dc.subjectNeural network
dc.subjectRobotic manipulator
dc.subjectMultilayer perceptron
dc.subjectStochastic learning
dc.subjectInverse dynamics
dc.titleOn the approximation of the inverse dynamics of a robotic manipulator by a neural network trained with a stochastic learning algorithm
dc.typeJournal article
dc.typehttp://purl.org/coar/resource_type/c_6501
dc.typeText
dc.typeinfo:eu-repo/semantics/article
dc.typeinfo:eu-repo/semantics/publishedVersion
dc.typehttp://purl.org/redcol/resource_type/ART
dc.typeinfo:eu-repo/semantics/acceptedVersion
dc.typehttp://purl.org/coar/version/c_ab4af688f83e57aa

