dc.contributor: Universidade Federal da Paraíba (UFPB)
dc.contributor: Universidade Estadual Paulista (UNESP)
dc.date.accessioned: 2022-04-28T19:29:14Z
dc.date.accessioned: 2022-12-20T01:12:41Z
dc.date.available: 2022-04-28T19:29:14Z
dc.date.available: 2022-12-20T01:12:41Z
dc.date.created: 2022-04-28T19:29:14Z
dc.date.issued: 2020-07-01
dc.identifier: International Conference on Systems, Signals, and Image Processing, v. 2020-July, p. 217-222.
dc.identifier: 2157-8702
dc.identifier: 2157-8672
dc.identifier: http://hdl.handle.net/11449/221528
dc.identifier: 10.1109/IWSSIP48289.2020.9145427
dc.identifier: 2-s2.0-85089136198
dc.identifier.uri: https://repositorioslatinoamericanos.uchile.cl/handle/2250/5401657
dc.description.abstract: One of the fundamental challenges of mobile robotics is using sensory information to localize an agent in geographic space. In this paper, we developed a global relocalization system, named SpaceYNet, that predicts the robot's pose from a monocular image, helping it avoid unforeseen actions. We incorporated Inception layers into symmetric down-sampling and up-sampling layers to solve depth-scene and 6-DoF pose estimation simultaneously. We also compared SpaceYNet to PoseNet, a state-of-the-art CNN for robot pose regression, in order to evaluate it. The comparison comprised one public dataset and one we created in a broad indoor environment. SpaceYNet achieved higher overall accuracy than PoseNet.
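To make the dual-output idea in the abstract concrete, below is a minimal PyTorch sketch: an Inception-style encoder-decoder trunk whose decoder produces a per-pixel depth map while a pooled bottleneck regresses a 7-value pose (3D translation plus quaternion, PoseNet-style). Every class name, channel count, and layer choice here is an illustrative assumption, not the actual SpaceYNet configuration from the paper.

    # Hypothetical sketch of simultaneous depth-scene and 6-DoF pose regression.
    # All layer sizes and names are assumptions, not the paper's architecture.
    import torch
    import torch.nn as nn

    class InceptionBlock(nn.Module):
        """Parallel 1x1 / 3x3 / 5x5 convolutions concatenated along channels."""
        def __init__(self, in_ch: int, out_ch: int):
            super().__init__()
            b = out_ch // 4
            self.b1 = nn.Conv2d(in_ch, b, kernel_size=1)
            self.b3 = nn.Conv2d(in_ch, b, kernel_size=3, padding=1)
            self.b5 = nn.Conv2d(in_ch, out_ch - 2 * b, kernel_size=5, padding=2)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.act(torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1))

    class SpaceYNetSketch(nn.Module):
        """Symmetric down-/up-sampling trunk with a depth head and a pose head."""
        def __init__(self):
            super().__init__()
            self.enc1 = InceptionBlock(3, 32)   # down-sampling path
            self.enc2 = InceptionBlock(32, 64)
            self.pool = nn.MaxPool2d(2)
            self.dec1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)  # up-sampling path
            self.dec2 = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
            self.depth_head = nn.Conv2d(16, 1, kernel_size=1)  # per-pixel depth map
            # 6-DoF pose as 3D translation + 4D quaternion (7 values, PoseNet-style).
            self.pose_head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 7)
            )

        def forward(self, x):
            f = self.pool(self.enc1(x))                       # H/2 x W/2
            f = self.pool(self.enc2(f))                       # H/4 x W/4, shared bottleneck
            depth = self.depth_head(self.dec2(self.dec1(f)))  # decode back to H x W
            pose = self.pose_head(f)                          # pose regressed from bottleneck
            return depth, pose

    if __name__ == "__main__":
        model = SpaceYNetSketch()
        depth, pose = model(torch.randn(1, 3, 128, 128))
        print(depth.shape, pose.shape)  # (1, 1, 128, 128) and (1, 7)

Sharing one bottleneck between the two heads is the design point: both outputs are driven by the same learned features, so depth supervision can regularize the pose estimate and vice versa.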
dc.language: eng
dc.relation: International Conference on Systems, Signals, and Image Processing
dc.source: Scopus
dc.subject: Dataset
dc.subject: depth-scene
dc.subject: pose
dc.subject: regression
dc.subject: robot
dc.title: SpaceYNet: A Novel Approach to Pose and Depth-Scene Regression Simultaneously
dc.type: Conference proceedings

