dc.creator: González-Crespo, Rubén
dc.creator: Verdú, Elena
dc.creator: Khari, Manju
dc.creator: Garg, Aditya Kumar
dc.date.accessioned: 2022-03-17T10:41:21Z
dc.date.accessioned: 2023-03-07T19:35:31Z
dc.date.available: 2022-03-17T10:41:21Z
dc.date.available: 2023-03-07T19:35:31Z
dc.date.created: 2022-03-17T10:41:21Z
dc.identifier: 1989-1660
dc.identifier: https://reunir.unir.net/handle/123456789/12662
dc.identifier: http://doi.org/10.9781/ijimai.2019.09.002
dc.identifier.uri: https://repositorioslatinoamericanos.uchile.cl/handle/2250/5906950
dc.description.abstract: Human-computer interaction has long been a fascinating field. With the rapid development of computer vision, gesture-based recognition systems have become an interesting and diverse research topic. However, recognizing human gestures in the form of sign language remains a complex and challenging task. Various traditional methods have recently been applied to sign language recognition, but achieving high accuracy is still difficult. This paper proposes an RGB and RGB-D static gesture recognition method based on a fine-tuned VGG19 model. The fine-tuned model uses a layer that concatenates features from RGB and RGB-D images to increase the accuracy of the neural network. The authors evaluate the proposed model on an American Sign Language (ASL) recognition dataset, achieve a 94.8% recognition rate, and compare the model with other CNN and traditional algorithms on the same dataset.
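The abstract describes fusing features from an RGB branch and an RGB-D branch via a concatenation layer before classification. A minimal sketch of that fusion step, using NumPy stand-ins for the fine-tuned VGG19 branch outputs (the 4096-dimensional feature size, the 26 ASL letter classes, and the random weights are all assumptions for illustration, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for per-image descriptors produced by the two fine-tuned
# VGG19 branches (4096 = size of a VGG19 fully connected layer).
rgb_features = rng.standard_normal((1, 4096))
rgbd_features = rng.standard_normal((1, 4096))

# Feature concatenate layer: join the two descriptors channel-wise.
fused = np.concatenate([rgb_features, rgbd_features], axis=1)  # shape (1, 8192)

# Toy softmax classifier head over 26 assumed ASL letter classes.
weights = rng.standard_normal((8192, 26)) * 0.01
logits = fused @ weights
exp = np.exp(logits - logits.max())
probs = exp / exp.sum()

print(fused.shape)                    # (1, 8192)
print(round(float(probs.sum()), 6))   # 1.0
```

In a real implementation both branches would be VGG19 networks fine-tuned on the gesture data, with the concatenated vector feeding trainable dense layers rather than fixed random weights.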
dc.language: eng
dc.publisher: International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI)
dc.relation: vol. 5, nº 7
dc.relation: https://www.ijimai.org/journal/bibcite/reference/2738
dc.rights: openAccess
dc.subject: image processing
dc.subject: gesture recognition
dc.subject: sign language
dc.subject: convolutional neural network (CNN)
dc.subject: IJIMAI
dc.title: Gesture Recognition of RGB and RGB-D Static Images Using Convolutional Neural Networks
dc.type: article

