dc.creator: Quesada Quirós, Luis
dc.creator: Marín Raventós, Gabriela
dc.creator: Guerrero Blanco, Luis Alberto
dc.date.accessioned: 2018-04-09T13:48:07Z
dc.date.accessioned: 2022-10-20T01:18:04Z
dc.date.available: 2018-04-09T13:48:07Z
dc.date.available: 2022-10-20T01:18:04Z
dc.date.created: 2018-04-09T13:48:07Z
dc.date.issued: 2016-11-29
dc.identifier: https://link.springer.com/chapter/10.1007/978-3-319-48746-5_41
dc.identifier: 978-3-319-48746-5
dc.identifier: 978-3-319-48745-8
dc.identifier: https://hdl.handle.net/10669/74426
dc.identifier: 10.1007/978-3-319-48746-5_41
dc.identifier: 320-B5-291
dc.identifier.uri: https://repositorioslatinoamericanos.uchile.cl/handle/2250/4539244
dc.description.abstract: People with disabilities have fewer opportunities, and technological developments should be used to expand those opportunities. In this paper we present partial results of a research project that aims to help people with disabilities, specifically those who are deaf or hard of hearing. We present a sign language recognition model that takes advantage of natural user interfaces (NUI) and a classification algorithm (support vector machines). Moreover, we combine handshapes (signs) and non-manual markers (associated with emotions and facial gestures) in the recognition process to enhance the recognition of sign language expressivity. Additionally, a representation for non-manual markers is proposed, and an evaluation of the model is reported.
dc.language: en_US
dc.source: Part of the Lecture Notes in Computer Science book series (LNCS, volume 10069)
dc.subject: Sign language recognition
dc.subject: Handshapes recognition
dc.subject: Non-manual markers recognition
dc.subject: Intel RealSense
dc.title: Sign language recognition model combining non-manual markers and handshapes
dc.type: conference contribution