dc.contributor: 0000-0002-7635-4687
dc.contributor: https://orcid.org/0000-0002-7635-4687
dc.creator: García Ceja, Enrique
dc.creator: Galván Tejada, Carlos Eric
dc.creator: Brena, Ramón
dc.date.accessioned: 2020-05-21T18:44:27Z
dc.date.available: 2020-05-21T18:44:27Z
dc.date.created: 2020-05-21T18:44:27Z
dc.date.issued: 2018-03-10
dc.identifier: 1566-2535
dc.identifier: http://ricaxcan.uaz.edu.mx/jspui/handle/20.500.11845/1928
dc.identifier: https://doi.org/10.48779/f0cw-ft20
dc.description.abstract: Many Ambient Intelligence (AmI) systems rely on automatic human activity recognition to obtain crucial context information, so that they can provide personalized services based on the users' current state. Activity recognition provides core functionality to many types of systems, including Ambient Assisted Living, fitness trackers, behavior monitoring, and security. The advent of wearable devices, along with their diverse set of embedded sensors, opens new opportunities for ubiquitous context sensing. Recently, wearable devices such as smartphones and smartwatches have been used for activity recognition and monitoring. Most previous works use inertial sensors (accelerometers, gyroscopes) for activity recognition and combine them using an aggregation approach, i.e., extracting features from each sensor and aggregating them to build the final classification model. This is suboptimal, since each sensor data source has its own statistical properties. In this work, we propose the use of a multi-view stacking method to fuse data from heterogeneous types of sensors for activity recognition. Specifically, we used sound and accelerometer data collected with a smartphone and a wrist-band while performing home-task activities. The proposed method is based on multi-view learning and stacked generalization: it trains a model for each sensor view and combines them with stacking. Our experimental results showed that the multi-view stacking method outperformed the aggregation approach in terms of accuracy, recall, and specificity.
dc.language: eng
dc.publisher: Elsevier
dc.relation: generalPublic
dc.relation: https://www.sciencedirect.com/science/article/abs/pii/S1566253516301932
dc.source: Information Fusion, Vol. 40, pp. 45-56
dc.title: Multi-view stacking for activity recognition with sound and accelerometer data
dc.type: info:eu-repo/semantics/article
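The abstract describes the fusion scheme at a high level: one base classifier per sensor view, with a meta-learner stacked on the per-view predictions. Below is a minimal illustrative sketch of that idea in Python with scikit-learn. The paper does not publish code, so the feature dimensions, classifier choices, and names used here are assumptions for illustration, not the authors' implementation.

# Minimal multi-view stacking sketch (illustrative only; synthetic data
# stands in for the accelerometer and sound features used in the paper).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 200
X_accel = rng.normal(size=(n, 16))   # stand-in for accelerometer features
X_sound = rng.normal(size=(n, 13))   # stand-in for audio features (e.g., MFCCs)
y = rng.integers(0, 3, size=n)       # three hypothetical activity labels

views = [X_accel, X_sound]
base_models = [RandomForestClassifier(random_state=0) for _ in views]

# First level: one model per sensor view. Out-of-fold class probabilities
# serve as meta-features, so training labels do not leak into the second level.
meta_features = np.hstack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba")
    for m, X in zip(base_models, views)
])

# Second level: a meta-learner is stacked on the per-view predictions.
meta_model = LogisticRegression(max_iter=1000).fit(meta_features, y)

# Refit the base models on all data for use at prediction time.
for m, X in zip(base_models, views):
    m.fit(X, y)

def predict(new_views):
    """Predict activities for new data given as a list of per-view matrices."""
    z = np.hstack([m.predict_proba(X) for m, X in zip(base_models, new_views)])
    return meta_model.predict(z)

The key design point, as opposed to the aggregation baseline the abstract contrasts against, is that features from the two sensors are never concatenated at the input level; each view gets its own model, and only their predicted class probabilities are combined.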