dc.creator | Elkfury, Fernando | |
dc.creator | Ierache, Jorge Salvador | |
dc.date | 2021 | |
dc.date | 2021-09-20T12:20:06Z | |
dc.date.accessioned | 2023-07-15T03:25:36Z | |
dc.date.available | 2023-07-15T03:25:36Z | |
dc.identifier | http://sedici.unlp.edu.ar/handle/10915/125145 | |
dc.identifier | isbn:978-950-34-2016-4 | |
dc.identifier.uri | https://repositorioslatinoamericanos.uchile.cl/handle/2250/7465666 | |
dc.description | Computer-human interaction is more frequent now than ever before; the main goal of this research area is therefore to make communication with computers as natural as possible. A key aspect of achieving such interaction is the affective component, often missing from the developments of the last decade. To improve computer-human interaction, in this paper we present a method to convert the discrete (categorical) output of a CNN emotion classifier trained on Mel-scale spectrograms into a two-dimensional model, pursuing the integration of the human voice as a feature in multimodal emotional inference frameworks. Lastly, we discuss preliminary results obtained by presenting audiovisual stimuli to different subjects and comparing the dimensional arousal-valence results with their SAM surveys. | |
dc.description | Facultad de Informática | |
dc.format | application/pdf | |
dc.format | 33-36 | |
dc.language | en | |
dc.rights | http://creativecommons.org/licenses/by-nc-sa/4.0/ | |
dc.rights | Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) | |
dc.subject | Ciencias Informáticas | |
dc.subject | Emotions | |
dc.subject | Multimodal Framework | |
dc.subject | Affective computing | |
dc.title | Speech emotion representation: A method to convert discrete to dimensional emotional models for emotional inference multimodal frameworks | |
dc.type | Objeto de conferencia | |