Doctoral thesis
Method to measure, model, and predict depth and positioning errors of RGB-D Cameras in function of distance, velocity, and vibration
Date
2021-08-02
Citation:
FERNANDEZ, Luis Enrique Ortiz. Method to measure, model, and predict depth and positioning errors of
RGB-D Cameras in function of distance, velocity, and vibration. 2021. 118f. Tese (Doutorado em Engenharia Elétrica e de Computação) - Centro de Tecnologia, Universidade Federal do Rio Grande do Norte, Natal, 2021.
Author
Fernandez, Luis Enrique Ortiz
Abstract
This thesis proposes a versatile methodology for measuring, modeling, and predicting errors such as the Root Mean Square Error (RMSE) in depth and the Relative Positioning Error (RPE), using data captured from an RGB-D camera mounted on top of a low-cost mobile robot platform. The proposed method has three stages. The first consists of creating ground-truth data for both 3D points (mapping) and camera poses (localization) using the novel Smart Markers. The next stage is the acquisition of a data set for computing the RMSE and RPE errors using the mobile platform with the RGB-D camera. Finally, the third stage is to model and predict the errors in the depth and positioning measurements of the camera as a function of distance, velocity, and vibration.
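As a rough illustration of the two error metrics named above (not the exact formulation used in the thesis), the following Python/NumPy sketch computes a depth RMSE over a depth map and a translation-only RPE between consecutive camera poses; the function names, the NaN encoding of invalid pixels, and the neglect of rotational error are all assumptions made for brevity.

    import numpy as np

    def depth_rmse(z_measured, z_ground_truth):
        # Root Mean Square Error of depth over valid pixels.
        # Sketch only: both maps must have the same shape, and invalid
        # measurements are assumed to be encoded as NaN.
        diff = z_measured - z_ground_truth
        valid = ~np.isnan(diff)
        return np.sqrt(np.mean(diff[valid] ** 2))

    def relative_positioning_error(est_positions, gt_positions):
        # Translation-only sketch of a Relative Positioning Error:
        # compares estimated vs. ground-truth displacement between
        # consecutive camera poses; rotational error is ignored here.
        est_delta = np.diff(np.asarray(est_positions), axis=0)
        gt_delta = np.diff(np.asarray(gt_positions), axis=0)
        return np.linalg.norm(est_delta - gt_delta, axis=1)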
For this modeling and prediction stage, a simple approach based on Multi-Layer Perceptron neural networks is used. The modeling results in two networks: NrmseZ for depth error prediction and NRPE for camera positioning error prediction. Experiments show that NrmseZ and NRPE have an accuracy of ±1% and ±2.5%, respectively.
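As a minimal sketch of how such a predictor could be trained, the snippet below uses scikit-learn's MLPRegressor as a stand-in for the NrmseZ network; the feature layout (distance, velocity, vibration), the layer sizes, and the numerical values are placeholders, not the architecture or data reported in the thesis.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # One row per operating condition: [distance (m), velocity (m/s), vibration amplitude].
    X = np.array([[1.0, 0.1, 0.02],
                  [2.0, 0.3, 0.05],
                  [3.5, 0.5, 0.10]])
    # Placeholder depth-RMSE targets measured under those conditions.
    y = np.array([0.004, 0.012, 0.031])

    # Hypothetical stand-in for the NrmseZ network described in the abstract.
    n_rmse_z = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
    )
    n_rmse_z.fit(X, y)

    # Predict the expected depth RMSE for an unseen distance/velocity/vibration setting.
    print(n_rmse_z.predict([[2.5, 0.4, 0.07]]))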
The proposed methodology can be used directly in techniques that require an estimate of the dynamic error, for example in probabilistic robotics applications for mapping and localization with RGB-D cameras mounted on Unmanned Aerial Vehicles, Unmanned Ground Vehicles, and Unmanned Surface Vehicles (including sailboats). Tasks that use RGB-D sensors, such as environmental monitoring, maintenance of engineering works, and public security, could rely on this approach to obtain the error information associated with the camera measurements (depth and positioning).