dc.contributorPaz Pérez, Lina María
dc.creatorDíaz Toro, Andrés Alejandro
dc.date.accessioned2020-04-03T21:04:02Z
dc.date.accessioned2023-09-07T19:15:17Z
dc.date.available2020-04-03T21:04:02Z
dc.date.available2023-09-07T19:15:17Z
dc.date.created2020-04-03T21:04:02Z
dc.date.issued2018
dc.identifierhttps://hdl.handle.net/10893/14886
dc.identifier.urihttps://repositorioslatinoamericanos.uchile.cl/handle/2250/8742952
dc.description.abstractSimultaneous Localization and Mapping (SLAM) algorithms estimate the pose of a robot as it moves through an unknown environment while incrementally building a map of that environment. The original goal of SLAM systems was to enable autonomous robots [83]. However, vision-based SLAM systems have great potential outside the field of mobile robotics, for example in augmented reality, wearable devices, user interfaces and surgical procedures, because they turn a camera into a general-purpose 3D position sensor. Most visual SLAM systems use sparse features such as corners [16], [58], edges [45], [81] or planes [78], and their principal goal is to localize the camera. Dense localization and mapping techniques do not rely on sparse features of the environment but use all the pixels of an image, yielding high-quality maps. To achieve real-time performance, concepts such as keyframes, bundle adjustment and parallel computing are used. Depth maps built with monocular cameras are affected by conditions commonly found in real environments, such as low-textured surfaces, highly reflective or translucent objects and changes in lighting. Even when a depth sensor is used, depth maps can be noisy and incomplete because of dark, highly reflective or translucent objects, out-of-range surfaces and occlusions. This degrades the dense reconstruction obtained by fusing raw depth maps. In this doctoral thesis we propose a novel technique for enhancing depth maps built with a monocular camera or obtained from a depth sensor by integrating depth data from an optimal model (shape prior) using a variational technique that, besides merging the depth data of the two sources, performs inpainting and denoising. This significantly improves the appearance and accuracy of the dense reconstruction, especially in the region of the object of interest. We present two approaches for dense localization, frame-to-frame and frame-to-model; the algorithm for building a depth map with a monocular camera; the optimization of the model with respect to pose, scale and shape; the proposed technique for integrating the shape prior; and the fusion of enhanced depth maps to create a dense reconstruction of the scene. We also present the conclusions of each chapter, the general conclusions and future work.
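As an illustration only (not taken from the thesis itself), a variational depth-fusion energy of the kind the abstract describes, merging a raw depth map d_r with a shape-prior depth map d_p while denoising and inpainting, is commonly written as a weighted TV-L1 problem. The weights w_r, w_p, the trade-off parameters \lambda_r, \lambda_p and the image domain \Omega are assumptions of this sketch; the thesis's exact formulation may differ:

E(u) = \int_\Omega |\nabla u|\,dx \;+\; \lambda_r \int_\Omega w_r(x)\,|u(x) - d_r(x)|\,dx \;+\; \lambda_p \int_\Omega w_p(x)\,|u(x) - d_p(x)|\,dx

Setting w_r(x) = 0 where the raw depth map has holes lets the shape-prior term and the total-variation regularizer fill in (inpaint) those regions, while the robust L1 data terms together with the TV term suppress noise in the fused depth map u.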
dc.languagespa
dc.publisherUniversidad del Valle
dc.publisherColombia
dc.publisherFACULTAD DE INGENIERÍA
dc.publisherDOCTORADO EN INGENIERÍA-ÉNFASIS EN INGENIERÍA ELÉCTRICA Y ELECTRÓNICA
dc.rightsinfo:eu-repo/semantics/openAccess
dc.titleReal time dense tracking and mapping using a monocular camera
dc.typeTrabajo de grado - Doctorado

