dc.creatorAsawa, Krishna
dc.creatorManchanda, Priyanka
dc.date.accessioned2020-02-05T13:43:01Z
dc.date.accessioned2023-03-07T19:25:59Z
dc.date.available2020-02-05T13:43:01Z
dc.date.available2023-03-07T19:25:59Z
dc.date.created2020-02-05T13:43:01Z
dc.identifier1989-1660
dc.identifierhttps://reunir.unir.net/handle/123456789/9803
dc.identifierhttp://dx.doi.org/10.9781/ijimai.2014.272
dc.identifier.urihttps://repositorioslatinoamericanos.uchile.cl/handle/2250/5904156
dc.description.abstractMulti-sensor information fusion is a rapidly developing research area that forms the backbone of numerous essential technologies such as intelligent robotic control, sensor networks, and video and image processing. In this paper, we develop a novel technique to analyze and correlate human emotions expressed in voice tone and facial expression. Audio and video streams are captured to populate audio and video bimodal data sets, sensing the expressed emotions in voice tone and facial expression respectively. An energy-based mapping is performed to overcome the inherent heterogeneity of the recorded bimodal signals. The fusion process uses the sampled and mapped energy signals of both modalities' data streams and recognizes the overall emotional component using a Support Vector Machine (SVM) classifier with an accuracy of 93.06%.
dc.languageeng
dc.publisherInternational Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI)
dc.relation;vol. 02, nº 07
dc.relationhttps://www.ijimai.org/journal/node/671
dc.rightsopenAccess
dc.subjectbimodal fusion
dc.subjectemotion recognition
dc.subjectintelligent systems
dc.subjectmachine learning
dc.subjectenergy mapping
dc.subjectIJIMAI
dc.titleRecognition of Emotions using Energy Based Bimodal Information Fusion and Correlation
dc.typearticle