Doctoral Thesis
Predicting the best sensor fusion method for recognizing human activity using a machine learning approach based on a statistical signature meta-data set and its generalization to other domains
Date
2020-11
Registered in:
Aguileta, A. (2020). Predicting the best sensor fusion method for recognizing human activity using a machine learning approach based on a statistical signature meta-data set and its generalization to other domains [Tesis de doctorado]. Instituto Tecnológico y de Estudios Superiores de Monterrey. Monterrey, Nuevo León, México.
964978
Author
Aguileta Güemez, Antonio Armando
Institution
Abstract
Multi-sensor fusion refers to methods used to combine information from multiple (in some cases, different) sensors, either so that one sensor compensates for the weaknesses of others or to improve the overall accuracy or reliability of the decision-making process. One area where multi-sensor fusion has become relevant is human activity recognition (HAR). Sensor-based HAR has drawn attention in recent years because it can help provide proactive and personalized services in applications such as health, fitness monitoring, personal biometric signatures, urban computing, assistive technology, and elderly care, to name a few.

HAR research has made significant progress in recognizing activities through machine learning techniques applied to data from a single sensor. Nevertheless, relying on a single sensor for the activity recognition task has not proved reliable, because sensors suffer faults and failures during their operation. To address these faults and failures and achieve better activity recognition accuracy, a wide variety of multi-sensor data fusion methods have been proposed (hence their relevance). However, although progress has been made in identifying activities using these methods, researchers have focused mainly on improving recognition performance, paying little attention to explaining why a given method works for a particular data set. Consequently, it is not known which of these methods to choose for a specific data set.

In this work, we contribute a data-driven machine-learning approach that predicts (with 90% precision) the best fusion method for a given data set of human activity collected by an accelerometer and a gyroscope. We also contribute an extension of our approach.
This extended approach predicts (with 93% accuracy) the best fusion method in domains other than HAR, such as gas type identification (data collected by gas sensors) and grammatical facial expression recognition (data obtained by a depth camera), demonstrating its generalization capabilities.
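The core idea described above can be sketched in a few lines: summarize each data set by a "statistical signature" of meta-features, build a meta-dataset of (signature, best fusion method) pairs, and train a classifier that maps the signature of an unseen data set to a predicted best fusion method. The following is a minimal illustrative sketch, not the thesis's actual pipeline: the meta-features (mean, standard deviation, range), the 1-nearest-neighbour meta-classifier, the fusion-method labels, and all numbers are invented assumptions for the example.

```python
import statistics

def signature(samples):
    """Hypothetical statistical signature (meta-features) of one data set."""
    mean = statistics.fmean(samples)
    std = statistics.pstdev(samples)
    rng = max(samples) - min(samples)
    return (mean, std, rng)

def predict_best_fusion(meta_train, query_sig):
    """1-nearest-neighbour over signatures: return the fusion-method label
    of the training data set whose signature is closest to the query."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, best_method = min(meta_train, key=lambda pair: dist(pair[0], query_sig))
    return best_method

# Meta-dataset: signatures of known data sets, each labeled with the
# fusion method that performed best on it (labels are hypothetical).
meta_train = [
    (signature([0.1, 0.2, 0.15, 0.18]), "feature-level fusion"),
    (signature([5.0, 9.0, 1.0, 7.5]), "decision-level fusion"),
]

# Predict the best fusion method for an unseen data set.
new_dataset = [0.12, 0.19, 0.16, 0.2]
print(predict_best_fusion(meta_train, signature(new_dataset)))
# → feature-level fusion
```

In the thesis's setting the meta-features would be far richer and the meta-classifier a proper learner trained on many data sets; the sketch only shows the shape of the meta-learning loop.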