dc.creator: Leottau, David L.
dc.creator: Vatsyayan, Aashish
dc.creator: Ruiz del Solar, Javier
dc.creator: Babuška, Robert
dc.date.accessioned: 2019-05-29T13:39:19Z
dc.date.available: 2019-05-29T13:39:19Z
dc.date.created: 2019-05-29T13:39:19Z
dc.date.issued: 2017
dc.identifier: Lecture Notes in Computer Science (LNCS, volume 9776), 2017
dc.identifier: 16113349
dc.identifier: 03029743
dc.identifier: 10.1007/978-3-319-68792-6_31
dc.identifier: https://repositorio.uchile.cl/handle/2250/169053
dc.description.abstract: In this paper, decentralized reinforcement learning is applied to a control problem with a multidimensional action space. We propose a decentralized reinforcement learning architecture for a mobile robot, in which the individual components of the commanded velocity vector are learned in parallel by separate agents. We empirically demonstrate that the decentralized architecture outperforms its centralized counterpart in terms of learning time while using fewer computational resources. The method is validated on two problems: an extended version of the three-dimensional mountain car, and a ball-pushing behavior performed with a differential-drive robot, which is also tested on a physical setup.
dc.language: en
dc.publisher: Springer
dc.rights: http://creativecommons.org/licenses/by-nc-nd/3.0/cl/
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 Chile
dc.source: Lecture Notes in Computer Science
dc.subject: Decentralized control
dc.subject: Multiagent learning
dc.subject: Reinforcement learning
dc.subject: Robot soccer
dc.title: Decentralized reinforcement learning applied to mobile robots
dc.type: Journal article