dc.creatorLeiva Castro, Francisco
dc.creatorRuíz del Solar San Martín, Javier
dc.date.accessioned2020-11-02T21:06:03Z
dc.date.available2020-11-02T21:06:03Z
dc.date.created2020-11-02T21:06:03Z
dc.date.issued2020
dc.identifierIEEE Robotics and Automation Letters. Vol. 5, No. 4, (2020)
dc.identifier10.1109/LRA.2020.3010732
dc.identifierhttps://repositorio.uchile.cl/handle/2250/177506
dc.description.abstractIn this letter, we propose a robust approach to train map-less navigation policies that rely on variable-size 2D point clouds, using Deep Reinforcement Learning (Deep RL). The navigation policies are trained in simulation using the DDPG algorithm. Through experimental evaluations in simulated and real-world environments, we showcase the benefits of our approach when compared to more classical RL-based formulations: better performance, the possibility to interchange sensors at deployment time, and the ability to easily augment the environment's observability through sensor preprocessing and/or sensor fusion. Videos showing trajectories traversed by agents trained with the proposed approach can be found at https://youtu.be/AzvRJyN6rwQ.
dc.languageen
dc.publisher(IEEE) Institute of Electrical and Electronics Engineers
dc.rightshttp://creativecommons.org/licenses/by-nc-nd/3.0/cl/
dc.rightsAttribution-NonCommercial-NoDerivs 3.0 Chile
dc.sourceIEEE Robotics and Automation Letters
dc.subjectReinforcement learning
dc.subjectReactive and sensor-based planning
dc.subjectMap-less local planning
dc.titleRobust RL-based map-less local planning: Using 2D point clouds as observations
dc.typeJournal article