dc.creatorRamirez Cornejo, Jadisha Yarif
dc.creatorPedrini, Helio
dc.date2016
dc.date2017-11-13T13:22:12Z
dc.date.accessioned2018-03-29T05:55:01Z
dc.date.available2018-03-29T05:55:01Z
dc.identifier978-1-4799-9988-0
dc.identifier2016 IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings. IEEE, p. 1298 - 1302, 2016.
dc.identifier1520-6149
dc.identifierWOS:000388373401088
dc.identifierhttp://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7471886
dc.identifierhttp://repositorio.unicamp.br/jspui/handle/REPOSIP/327843
dc.identifier.urihttp://repositorioslatinoamericanos.uchile.cl/handle/2250/1364868
dc.descriptionEmotion recognition based on facial expressions plays an important role in numerous applications, such as affective computing, behavior prediction, human-computer interaction, psychological health services, interpersonal relations, and social monitoring. In this work, we describe and analyze an emotion recognition system based on facial expressions that is made robust to occlusions through Census Transform Histogram (CENTRIST) features. Initially, occluded facial regions are reconstructed by applying Robust Principal Component Analysis (RPCA). CENTRIST features are extracted from the facial expression representation, as well as Local Binary Patterns (LBP), Local Gradient Coding (LGC), and an extended Local Gradient Coding (LGC-HD). The feature vector is then reduced through Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). For facial expression recognition, K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) classifiers are applied and tested. Experimental results on two public data sets demonstrate that the CENTRIST representation achieves competitive accuracy rates for occluded and non-occluded facial expressions compared to other state-of-the-art approaches in the literature.
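The classification pipeline the abstract describes (census-transform histogram features, PCA + LDA reduction, SVM classification) can be sketched roughly as follows. This is an illustrative reconstruction with scikit-learn on synthetic texture patches, not the authors' implementation: the RPCA occlusion-reconstruction step and the LBP/LGC descriptors are omitted, and `census_transform_histogram` is a minimal global CENTRIST (the paper's descriptor is typically computed over spatial blocks).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def census_transform_histogram(img):
    """Global CENTRIST-style descriptor: 8-bit census transform of each
    pixel against its 8 neighbours, then a normalized 256-bin histogram."""
    img = np.asarray(img, dtype=np.float64)
    center = img[1:-1, 1:-1]
    ct = np.zeros(center.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        ct |= (neigh >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(ct, bins=256, range=(0, 256))
    return hist / ct.size

# Synthetic stand-in for face crops: random texture vs. smooth gradient.
rng = np.random.default_rng(0)
samples, labels = [], []
ramp = np.tile(np.arange(32), (32, 1)) * 8.0
for _ in range(20):
    samples.append(census_transform_histogram(rng.integers(0, 256, (32, 32))))
    labels.append(0)
    samples.append(census_transform_histogram(ramp + rng.integers(0, 8, (32, 32))))
    labels.append(1)
X, y = np.array(samples), np.array(labels)

# Feature reduction (PCA then LDA) followed by an SVM, as in the abstract.
clf = make_pipeline(PCA(n_components=10),
                    LinearDiscriminantAnalysis(),
                    SVC(kernel="linear"))
clf.fit(X, y)
```

Swapping `SVC` for `KNeighborsClassifier` gives the KNN variant the abstract also evaluates.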
dc.description1298
dc.description1302
dc.descriptionIEEE International Conference on Acoustics, Speech, and Signal Processing
dc.descriptionMAR 20-25, 2016
dc.descriptionShanghai, People's Republic of China
dc.languageEnglish
dc.publisherIEEE
dc.publisherNew York
dc.relation2016 IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings
dc.rightsclosed access
dc.sourceWOS
dc.subjectEmotion Recognition
dc.subjectFacial Expression
dc.subjectOcclusion
dc.subjectFiducial Landmarks
dc.subjectFeature Descriptors
dc.titleRecognition of Occluded Facial Expressions Based on CENTRIST Features
dc.typeConference proceedings

