dc.contributor: https://orcid.org/0000-0002-9498-6602
dc.contributor: 0000-0002-9498-6602
dc.creator: Galván Tejada, Carlos Eric
dc.creator: Galván Tejada, Jorge
dc.creator: Celaya Padilla, José María
dc.creator: Delgado Contreras, Juan Rubén
dc.creator: Magallanes Quintanar, Rafael
dc.creator: Martínez Fierro, Margarita de la Luz
dc.creator: Garza Veloz, Idalia
dc.creator: López Hernández, Yamilé
dc.creator: Gamboa Rosales, Hamurabi
dc.date.accessioned: 2020-03-25T02:52:46Z
dc.date.available: 2020-03-25T02:52:46Z
dc.date.created: 2020-03-25T02:52:46Z
dc.date.issued: 2016-11-23
dc.identifier: 1875-905X
dc.identifier: http://ricaxcan.uaz.edu.mx/jspui/handle/20.500.11845/1458
dc.description.abstract: This work presents a human activity recognition (HAR) model based on audio features. The use of sound as an information source for HAR models represents a challenge because sound wave analyses generate very large amounts of data. However, feature selection techniques can reduce the amount of data required to represent an audio signal sample. Among the audio features analyzed are the Mel-frequency cepstral coefficients (MFCC). Although MFCC are commonly used in voice and instrument recognition, their utility within HAR models had yet to be confirmed, and this work validates their usefulness. Additionally, statistical features were extracted from the audio samples to generate the proposed HAR model. The amount of information needed to build a HAR model directly impacts the accuracy of the model. This problem was also tackled in the present work; our results indicate that the proposed HAR model can recognize a human activity with an accuracy of 85%. This means that only minimal computational cost is needed, thus allowing portable devices to identify human activities using audio as an information source.
dc.language: eng
dc.publisher: Hindawi
dc.relation: generalPublic
dc.relation: http://dx.doi.org/10.1155/2016/1784101
dc.rights: http://creativecommons.org/licenses/by-nc-sa/3.0/us/
dc.rights: Attribution-NonCommercial-ShareAlike 3.0 United States
dc.source: Hindawi Vol. 2016, pp. 1-10
dc.title: An Analysis of Audio Features to Develop a Human Activity Recognition Model Using Genetic Algorithms, Random Forests, and Neural Networks
dc.type: info:eu-repo/semantics/article


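The abstract above describes extracting MFCC and statistical descriptors from audio and feeding them to classifiers such as random forests. Below is a minimal illustrative Python sketch of that kind of pipeline, not the authors' actual method: the window length, n_mfcc=13, the synthetic data, and the RandomForest settings are placeholder assumptions, and the paper's genetic-algorithm feature-selection step is omitted. It assumes librosa, scikit-learn, and numpy are available.

# Illustrative sketch only; parameters and data are placeholder assumptions.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def audio_features(signal, sr):
    """Summarize one audio window as MFCC statistics plus simple
    statistical descriptors of the raw waveform."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),          # MFCC summaries
        [signal.mean(), signal.std(),                 # waveform statistics
         np.abs(signal).max(), np.sqrt(np.mean(signal ** 2))],  # peak, RMS
    ])

# Synthetic stand-in data: 200 one-second windows at 16 kHz, 4 activity labels.
rng = np.random.default_rng(0)
sr = 16000
X = np.array([audio_features(rng.normal(size=sr).astype(np.float32), sr)
              for _ in range(200)])
y = rng.integers(0, 4, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

With real labeled recordings of activities in place of the synthetic windows, the same feature vector per window and classifier would reproduce the general shape of the pipeline the abstract summarizes.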