Thesis
Design and implementation of a wireless system that allows people with motor limitations to interact with their environment using eye movement and voice commands.
Date
2018-03
Registered in:
Morales Montero, Hugo Marcelo; Yánez Jácome, Cristian Danilo. (2018). Diseño e implementación de un sistema inalámbrico que permita interactuar con el entorno a personas con limitación motriz utilizando el movimiento de ojos y comandos de voz. Escuela Superior Politécnica de Chimborazo. Riobamba.
Author
Morales Montero, Hugo Marcelo
Yánez Jácome, Cristian Danilo
Abstract
The objective of this research was the design and implementation of a wireless system that allows people with motor limitations to interact with their environment using eye movement and voice commands. The prototype was designed using NI Vision Acquisition for image processing. Image capture was implemented with a USB webcam connected to a Raspberry Pi 3, mounted near the eye, with image acquisition handled by a script developed in Python. The Raspberry Pi captures and stores the image of the eye, which is then analyzed and processed in LabVIEW to determine two movements: up and down. The input/output ports (GPIO) of the Raspberry Pi are then controlled according to the movement of the eye. The image is shared through a SAMBA server on the local network so that LabVIEW can process it using pattern and shape classification, assigning a score based on the stored samples. Once the position of the eye is determined, the command is sent to the corresponding actuator over the Wi-Fi wireless network using ESP8266 modules. The voice commands are implemented on Arduino development boards together with a Voice Recognition Module V3; through a microphone, the basic remote-control function is fulfilled using infrared (IR) wireless technology, in addition to the configuration of a voice command that activates image processing. Performance tests of the prototype were carried out with people with disabilities, determining that the prototype has a reliability of 80% with respect to its functionality and a maximum latency of 11.3 seconds in image processing. When training voice commands, it is recommended to pronounce and articulate the words clearly.
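The Raspberry Pi side of the pipeline described in the abstract (capturing an eye image with the USB webcam, storing it in the SAMBA-shared folder for LabVIEW, and classifying the vertical gaze as "up" or "down") could be sketched in Python roughly as follows. This is a minimal illustrative sketch: the use of OpenCV, the function names, the shared-folder path, and the dead-zone threshold are all assumptions, not the authors' actual script.

```python
# Hypothetical sketch of the Raspberry Pi acquisition step:
# grab a frame of the eye, write it to the SAMBA-shared folder,
# and classify the vertical gaze direction from the pupil centroid.
# Library choice (OpenCV), names, paths, and thresholds are assumed.

def classify_gaze(pupil_y, frame_height, dead_zone=0.1):
    """Classify the eye position as 'up', 'down', or 'center' from the
    vertical pixel position of the pupil centroid. A dead zone around
    the frame center avoids spurious commands from small movements."""
    center = frame_height / 2
    band = frame_height * dead_zone
    if pupil_y < center - band:
        return "up"
    if pupil_y > center + band:
        return "down"
    return "center"

def capture_and_share(shared_dir="/srv/samba/eye"):
    """Grab one frame from the USB webcam and write it into the folder
    exported by the SAMBA server, where LabVIEW reads it for pattern
    classification. Returns True if a frame was captured."""
    import cv2  # assumed library; the thesis only says "a script in Python"
    cam = cv2.VideoCapture(0)
    ok, frame = cam.read()
    cam.release()
    if ok:
        cv2.imwrite(f"{shared_dir}/eye.png", frame)
    return ok
```

In the prototype, the result of this classification would drive the Raspberry Pi GPIO ports and, over Wi-Fi, the ESP8266 actuator modules; those steps are omitted here.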