dc.creatorPosada, Luis Felipe
dc.creatorVelasquez-Lopez, Alejandro
dc.creatorHoffmann, Frank
dc.creatorBertram, Torsten
dc.date.accessioned2021-04-12T21:14:33Z
dc.date.accessioned2022-09-23T21:35:38Z
dc.date.available2021-04-12T21:14:33Z
dc.date.available2022-09-23T21:35:38Z
dc.date.created2021-04-12T21:14:33Z
dc.date.issued2018-01-01
dc.identifier10504729
dc.identifier2577087X
dc.identifierWOS;000446394501070
dc.identifierSCOPUS;2-s2.0-85063162565
dc.identifierhttp://hdl.handle.net/10784/28959
dc.identifier10.1109/ICRA.2018.8461165
dc.identifier.urihttp://repositorioslatinoamericanos.uchile.cl/handle/2250/3531748
dc.description.abstractThis paper presents a purely visual semantic mapping framework using omnidirectional images. The approach rests upon the robust segmentation of the robot's local free space, replacing conventional range sensors for the generation of occupancy grid maps. The perceptions are mapped into a bird's-eye view that removes the non-linear distortions of the omnidirectional camera mirror, allowing the inverse sensor model to be applied directly. The system relies on a place category classifier to label the navigation-relevant categories: room, corridor, doorway, and open room. Each place class maintains a separate grid map that is fused with the range-based occupancy grid to build a dense semantic map.
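The fusion the abstract describes, one grid per place category combined with a range-style occupancy grid, can be pictured with a minimal sketch. The Python below is a hypothetical illustration, not the authors' implementation: the SemanticGridMap class, the log-odds update rule, and the p_free/p_occ values are assumptions made here for clarity.

import numpy as np

# Hypothetical sketch of per-class grid fusion (not the paper's code).
CLASSES = ["room", "corridor", "doorway", "open_room"]

def logodds(p):
    # Log-odds form keeps Bayesian cell updates additive.
    return np.log(p / (1.0 - p))

class SemanticGridMap:
    def __init__(self, shape):
        # One log-odds grid per place category, plus one occupancy grid.
        self.class_grids = {c: np.zeros(shape) for c in CLASSES}
        self.occupancy = np.zeros(shape)

    def update(self, free_mask, boundary_mask, class_label,
               p_free=0.3, p_occ=0.7):
        # Inverse-sensor-model style update from one bird's-eye perception:
        # segmented free cells push the grid toward "free"; the free-space
        # boundary plays the role of a range-sensor hit.
        self.occupancy[free_mask] += logodds(p_free)
        self.occupancy[boundary_mask] += logodds(p_occ)
        # Reinforce the classifier's category over the observed free cells.
        self.class_grids[class_label][free_mask] += logodds(p_occ)

    def semantic_map(self):
        # Dense semantic map: per free cell, the most supported category;
        # occupied or unknown cells are marked -1.
        stacked = np.stack([self.class_grids[c] for c in CLASSES])
        labels = stacked.argmax(axis=0)
        free = self.occupancy < 0.0
        return np.where(free, labels, -1)

# Example under the same assumptions: one corridor perception on a 200x200 grid.
grid = SemanticGridMap((200, 200))
free = np.zeros((200, 200), dtype=bool)
free[80:120, 80:120] = True          # segmented free space
edge = np.zeros_like(free)
edge[79, 80:120] = True              # free-space boundary acts as a "hit"
grid.update(free, edge, "corridor")
semantic = grid.semantic_map()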
dc.languageeng
dc.publisherIEEE Computer Society
dc.relationhttps://www.scopus.com/inward/record.uri?eid=2-s2.0-85063162565&doi=10.1109%2fICRA.2018.8461165&partnerID=40&md5=605724ef18669fcc66eec17070286d69
dc.rightshttps://v2.sherpa.ac.uk/id/publication/issn/1050-4729
dc.sourceIEEE International Conference on Robotics and Automation (ICRA)
dc.titleSemantic Mapping with Omnidirectional Vision
dc.typeinfo:eu-repo/semantics/conferencePaper
dc.typeconferencePaper
dc.typeinfo:eu-repo/semantics/publishedVersion
dc.typepublishedVersion