dc.creatorElkfury, Fernando
dc.creatorIerache, Jorge Salvador
dc.date2021
dc.date2021-09-20T12:20:06Z
dc.date.accessioned2023-07-15T03:25:36Z
dc.date.available2023-07-15T03:25:36Z
dc.identifierhttp://sedici.unlp.edu.ar/handle/10915/125145
dc.identifierisbn:978-950-34-2016-4
dc.identifier.urihttps://repositorioslatinoamericanos.uchile.cl/handle/2250/7465666
dc.descriptionComputer-human interaction is more frequent now than ever before, and the main goal of this research area is to make communication with computers as natural as possible. A key aspect of achieving such interaction is the affective component, often missing from the developments of the last decade. To improve computer-human interaction, in this paper we present a method to convert the discrete or categorical output of a CNN emotion classifier trained on Mel-scale spectrograms into a two-dimensional model, pursuing the integration of the human voice as a feature for multimodal emotional inference frameworks. Lastly, we discuss preliminary results obtained by presenting audiovisual stimuli to different subjects and comparing the dimensional arousal-valence results with their SAM surveys.
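As a rough illustration of this kind of discrete-to-dimensional conversion, the sketch below maps a categorical probability distribution produced by an emotion classifier onto a single valence-arousal point by taking a probability-weighted average of per-class anchor coordinates. The anchor values, class labels, and weighting scheme are assumptions loosely inspired by Russell's circumplex model, not the exact mapping described in the paper.

```python
# Illustrative sketch only: the (valence, arousal) anchors below are assumed
# values, not the authors' mapping.

# Assumed anchor coordinates in [-1, 1] for each discrete emotion class.
EMOTION_COORDS = {
    "angry":   (-0.6,  0.7),
    "happy":   ( 0.8,  0.5),
    "sad":     (-0.7, -0.4),
    "fearful": (-0.6,  0.6),
    "neutral": ( 0.0,  0.0),
}

def discrete_to_dimensional(class_probs):
    """Collapse a categorical emotion distribution (e.g. the softmax output of
    a spectrogram CNN) into one (valence, arousal) point via a
    probability-weighted average of the anchor coordinates."""
    valence = sum(p * EMOTION_COORDS[label][0] for label, p in class_probs.items())
    arousal = sum(p * EMOTION_COORDS[label][1] for label, p in class_probs.items())
    return valence, arousal

# Example: hypothetical softmax output for one utterance.
probs = {"angry": 0.05, "happy": 0.70, "sad": 0.05, "fearful": 0.05, "neutral": 0.15}
print(discrete_to_dimensional(probs))  # ~ (0.47, 0.40)
```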
dc.descriptionFacultad de Informática
dc.formatapplication/pdf
dc.format33-36
dc.languageen
dc.rightshttp://creativecommons.org/licenses/by-nc-sa/4.0/
dc.rightsCreative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
dc.subjectCiencias Informáticas
dc.subjectEmotions
dc.subjectMultimodal Framework
dc.subjectAffective computing
dc.titleSpeech emotion representation: A method to convert discrete to dimensional emotional models for emotional inference multimodal frameworks
dc.typeObjeto de conferencia