dc.contributor: Gomes, Rafael Beserra
dc.contributor: http://lattes.cnpq.br/3725377549115537
dc.contributor: http://lattes.cnpq.br/5849107545126304
dc.contributor: Gonçalves, Luiz Marcos Garcia
dc.contributor: 32541457120
dc.contributor: http://lattes.cnpq.br/1562357566810393
dc.contributor: Silva, Bruno Marques Ferreira da
dc.contributor: http://lattes.cnpq.br/7878437620254155
dc.contributor: Distante, Cosimo
dc.contributor: Clua, Esteban Walter Gonzalez
dc.contributor: http://lattes.cnpq.br/4791589931798048
dc.creator: Fernandez, Luis Enrique Ortiz
dc.date.accessioned: 2021-10-18T22:52:36Z
dc.date.accessioned: 2022-10-06T13:07:40Z
dc.date.available: 2021-10-18T22:52:36Z
dc.date.available: 2022-10-06T13:07:40Z
dc.date.created: 2021-10-18T22:52:36Z
dc.date.issued: 2021-08-02
dc.identifier: FERNANDEZ, Luis Enrique Ortiz. Method to measure, model, and predict depth and positioning errors of RGB-D Cameras in function of distance, velocity, and vibration. 2021. 118f. Tese (Doutorado em Engenharia Elétrica e de Computação) - Centro de Tecnologia, Universidade Federal do Rio Grande do Norte, Natal, 2021.
dc.identifier: https://repositorio.ufrn.br/handle/123456789/44632
dc.identifier.uri: http://repositorioslatinoamericanos.uchile.cl/handle/2250/3964269
dc.description.abstract: This thesis proposes a versatile methodology for measuring, modeling, and predicting errors, namely the Root Mean Square Error (RMSE) in depth and the Relative Positioning Error (RPE), using data captured from an RGB-D camera mounted on top of a low-cost mobile robot platform. The proposed method has three stages. The first consists of creating ground-truth data for both 3D points (mapping) and camera poses (localization) using the novel Smart Markers. The second is the acquisition of a data set for computing the RMSE and RPE errors using the mobile platform with the RGB-D camera. The third is modeling and predicting the errors in the camera's depth and positioning measurements as a function of distance, velocity, and vibration. For this modeling and prediction stage, a simple approach based on Multi-Layer Perceptron neural networks is used. The modeling results in two networks: NrmseZ for depth error prediction and NRPE for camera positioning error prediction. Experiments show that NrmseZ and NRPE have accuracies of ±1% and ±2.5%, respectively. The proposed methodology can be used directly in techniques that require an estimation of the dynamic error, for example in probabilistic robotics applications for mapping and localization with RGB-D cameras mounted on Unmanned Aerial Vehicles, Unmanned Ground Vehicles, and Unmanned Surface Vehicles (including sailboats). Tasks that use RGB-D sensors, such as environmental monitoring, maintenance of engineering works, and public security, could rely on this approach to obtain the error information associated with the camera measurements (depth and positioning).
dc.publisher: Universidade Federal do Rio Grande do Norte
dc.publisher: Brasil
dc.publisher: UFRN
dc.publisher: PROGRAMA DE PÓS-GRADUAÇÃO EM ENGENHARIA ELÉTRICA E DE COMPUTAÇÃO
dc.rights: Acesso Aberto (Open Access)
dc.subject: RGB-D cameras
dc.subject: Smart markers
dc.subject: Visual mapping
dc.subject: Visual localization
dc.subject: Depth error
dc.subject: Positioning error
dc.title: Method to measure, model, and predict depth and positioning errors of RGB-D Cameras in function of distance, velocity, and vibration
dc.type: doctoralThesis

