dc.creatorPlanet, Santiago
dc.creatorIriondo, Ignasi
dc.date.accessioned2019-12-03T10:36:45Z
dc.date.accessioned2023-03-07T19:25:24Z
dc.date.available2019-12-03T10:36:45Z
dc.date.available2023-03-07T19:25:24Z
dc.date.created2019-12-03T10:36:45Z
dc.identifier1989-1660
dc.identifierhttps://reunir.unir.net/handle/123456789/9606
dc.identifierhttp://dx.doi.org/10.9781/ijimai.2012.166
dc.identifier.urihttps://repositorioslatinoamericanos.uchile.cl/handle/2250/5903977
dc.description.abstractThe automatic analysis of speech to detect affective states may improve the way users interact with electronic devices. However, analysis at the acoustic level alone may not be enough to determine a user's emotion in a realistic scenario. In this paper we analyzed the spontaneous speech recordings of the FAU Aibo Corpus at the acoustic and linguistic levels to extract two sets of features. The acoustic set was reduced by a greedy procedure that selects the most relevant features to optimize the learning stage; we compared two versions of this greedy selection algorithm, searching for the relevant features forwards and backwards. We experimented with three classification approaches: Naïve Bayes, a support vector machine, and a logistic model tree, and with two fusion schemes: decision-level fusion, which merges the hard decisions of the acoustic and linguistic classifiers by means of a decision tree, and feature-level fusion, which concatenates both sets of features before the learning stage. Although the linguistic data alone performed poorly, combining it with the acoustic information yielded a dramatic improvement over the results achieved by the acoustic modality on its own. Despite its simplicity, the feature-level fusion scheme outperformed decision-level fusion. Moreover, the extremely reduced set of acoustic features obtained by the greedy forward search improved on the results provided by the full set. (A minimal sketch of the selection and fusion schemes appears after this record.)
dc.languagespa
dc.publisherInternational Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI)
dc.relation;vol. 01, nº 06
dc.relationhttps://www.ijimai.org/journal/node/277
dc.rightsopenAccess
dc.subjectacoustic and linguistic features
dc.subjectdecision-level and feature-level fusion
dc.subjectemotion recognition
dc.subjectspontaneous speech
dc.subjectIJIMAI
dc.titleComparative Study on Feature Selection and Fusion Schemes for Emotion Recognition from Speech
dc.typearticle
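
As an illustration of the two core techniques named in the abstract, here is a minimal Python sketch of greedy forward feature selection and of the two fusion schemes. It is not the authors' code: the data is synthetic (the paper uses the FAU Aibo Corpus), the feature counts and classifier choices are illustrative, and since scikit-learn provides no logistic model tree, an SVM and Gaussian Naïve Bayes stand in for the paper's classifiers.

# Sketch, under the assumptions above: synthetic two-class data standing in
# for the acoustic and linguistic feature sets of the FAU Aibo Corpus.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)
n = 400
y = rng.integers(0, 2, n)                             # toy emotion labels
X_ac = rng.normal(size=(n, 20)) + y[:, None] * 0.5    # "acoustic" features
X_lg = rng.normal(size=(n, 5)) + y[:, None] * 0.2     # "linguistic" features

def greedy_forward(X, y, clf, max_feats=5):
    """Add, one at a time, the feature whose inclusion most improves
    cross-validated accuracy; stop when no candidate helps or the cap is hit."""
    selected, best = [], 0.0
    for _ in range(max_feats):
        scores = {}
        for j in range(X.shape[1]):
            if j in selected:
                continue
            cols = selected + [j]
            scores[j] = cross_val_score(clf, X[:, cols], y, cv=3).mean()
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best:
            break
        selected.append(j_best)
        best = scores[j_best]
    return selected, best

sel, acc = greedy_forward(X_ac, y, SVC(), max_feats=5)
print("selected acoustic features:", sel, "cv accuracy: %.3f" % acc)

# Feature-level fusion: concatenate both feature sets before learning.
X_fused = np.hstack([X_ac[:, sel], X_lg])
print("feature-level fusion: %.3f"
      % cross_val_score(SVC(), X_fused, y, cv=3).mean())

# Decision-level fusion: train one classifier per modality, then let a
# decision tree merge their hard decisions.
Xa_tr, Xa_te, Xl_tr, Xl_te, y_tr, y_te = train_test_split(
    X_ac[:, sel], X_lg, y, test_size=0.3, random_state=0)
ac_clf = SVC().fit(Xa_tr, y_tr)
lg_clf = GaussianNB().fit(Xl_tr, y_tr)
hard_tr = np.column_stack([ac_clf.predict(Xa_tr), lg_clf.predict(Xl_tr)])
hard_te = np.column_stack([ac_clf.predict(Xa_te), lg_clf.predict(Xl_te)])
fuser = DecisionTreeClassifier().fit(hard_tr, y_tr)
print("decision-level fusion: %.3f" % fuser.score(hard_te, y_te))

Backward selection is the mirror image: start from the full feature set and greedily drop the feature whose removal hurts cross-validated accuracy least. The abstract reports that the forward variant produced the extremely reduced acoustic set that outperformed the full one.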