dc.contributor | Universidade Federal da Paraíba (UFPB) | |
dc.contributor | Universidade Estadual Paulista (UNESP) | |
dc.date.accessioned | 2022-04-28T19:29:14Z | |
dc.date.accessioned | 2022-12-20T01:12:41Z | |
dc.date.available | 2022-04-28T19:29:14Z | |
dc.date.available | 2022-12-20T01:12:41Z | |
dc.date.created | 2022-04-28T19:29:14Z | |
dc.date.issued | 2020-07-01 | |
dc.identifier | International Conference on Systems, Signals, and Image Processing, v. 2020-July, p. 217-222. | |
dc.identifier | 2157-8702 | |
dc.identifier | 2157-8672 | |
dc.identifier | http://hdl.handle.net/11449/221528 | |
dc.identifier | 10.1109/IWSSIP48289.2020.9145427 | |
dc.identifier | 2-s2.0-85089136198 | |
dc.identifier.uri | https://repositorioslatinoamericanos.uchile.cl/handle/2250/5401657 | |
dc.description.abstract | One of the fundamental problems of mobile robotics is using sensory information to localize an agent in geographic space. In this paper, we developed a global relocalization system, which we named SpaceYNet, to predict a robot's position from a monocular image and avoid unforeseen actions. We incorporated Inception layers into symmetric down-sampling and up-sampling layers to solve depth-scene and 6-DoF pose estimation simultaneously. We also compared SpaceYNet to PoseNet - a state-of-the-art CNN for robot pose regression - in order to evaluate it. The comparison comprised one public dataset and one created in a broad indoor environment. SpaceYNet achieved higher overall accuracy than PoseNet. | |
dc.language | eng | |
dc.relation | International Conference on Systems, Signals, and Image Processing | |
dc.source | Scopus | |
dc.subject | Dataset | |
dc.subject | depth-scene | |
dc.subject | pose | |
dc.subject | regression | |
dc.subject | robot | |
dc.title | SpaceYNet: A Novel Approach to Pose and Depth-Scene Regression Simultaneously | |
dc.type | Conference proceedings | |