dc.contributorLUIS VILLASEÑOR PINEDA
dc.contributorCARLOS ALBERTO REYES GARCIA
dc.creatorJesus Salvador Garcia Salinas
dc.date2017-08
dc.date.accessioned2023-07-25T16:23:18Z
dc.date.available2023-07-25T16:23:18Z
dc.identifierhttp://inaoe.repositorioinstitucional.mx/jspui/handle/1009/1253
dc.identifier.urihttps://repositorioslatinoamericanos.uchile.cl/handle/2250/7806450
dc.descriptionInterest in using brain-computer interfaces as a communication channel has been increasing; however, many challenges remain before natural communication with this tool is achieved. In the particular case of imagined-speech-based brain-computer interfaces, extracting information from the brain signals is still difficult. The objective of this work is to propose a representation based on characteristic units, focused on EEG signals generated during imagined speech. From these characteristic units a representation is developed which, rather than recognizing a specific vocabulary, allows such a vocabulary to be extended. In this work, a set of characteristic units, i.e. a bag-of-features representation, is explored. This type of representation has proven useful in similar tasks. Nevertheless, determining an adequate bag of features for a specific problem requires adjusting many parameters. The proposed method aims at an automatic signal characterization, obtaining characteristic units and later generating a representative pattern from them. It finds a set of characteristic units for each class (i.e. each imagined word), which are used for the recognition and classification of a subject's imagined vocabulary. The characteristic units are generated by a clustering method: the resulting prototypes are the characteristic units, called codewords, and each codeword is an entry in a general dictionary called the codebook. To evaluate the method, a database composed of the electroencephalograms of twenty-seven native Spanish speakers was used. The data consist of five imagined Spanish words ("Arriba", "Abajo", "Izquierda", "Derecha", "Seleccionar"), each repeated thirty-three times with a rest period between repetitions; this database was obtained in [Torres-García et al., 2013].
The proposed method achieved results comparable to related works and has also been tested in different settings (i.e. transfer learning). The bag of features is able to incorporate frequency, temporal, and spatial information from the data. In addition, different representations that consider information from all channels, as well as several feature extraction methods, were explored. In further steps, the characteristic units extracted from the signals are expected to enable transfer learning to recognize new imagined words; these units can be seen as prototypes of each imagined word.
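The codebook idea described in the abstract can be sketched in a few lines: cluster feature vectors extracted from signal windows, treat the cluster centroids as codewords, and represent a trial as a histogram of nearest-codeword assignments. This is a minimal illustration only; the synthetic data, window count, feature dimension, and number of codewords are assumptions for the sketch, not parameters from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Plain k-means; the resulting centroids serve as the codebook."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each feature vector to its nearest centroid (codeword).
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        # Move each centroid to the mean of its assigned vectors.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def bag_of_features(windows, codebook):
    """Histogram of nearest-codeword assignments: the trial's representation."""
    labels = np.argmin(((windows[:, None] - codebook[None]) ** 2).sum(-1), axis=1)
    return np.bincount(labels, minlength=len(codebook))

# Synthetic stand-ins for feature vectors extracted from EEG windows
# (e.g. wavelet energies per window); shapes are illustrative.
train_windows = rng.normal(size=(200, 8))
codebook = kmeans(train_windows, k=5)        # 5 codewords in the codebook
trial_windows = rng.normal(size=(30, 8))     # windows of one imagined-word trial
histogram = bag_of_features(trial_windows, codebook)
print(histogram.sum())  # 30: every window maps to exactly one codeword
```

The resulting histograms can then be fed to any standard classifier, one model per subject, matching the per-subject recognition setup the abstract describes.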
dc.formatapplication/pdf
dc.languageeng
dc.publisherInstituto Nacional de Astrofísica, Óptica y Electrónica
dc.relationcitation:García Salinas, J. S., (2017). Bag of features for imagined speech classification in electroencephalograms, Tesis de Maestría, Instituto Nacional de Astrofísica, Óptica y Electrónica
dc.rightsinfo:eu-repo/semantics/openAccess
dc.rightshttp://creativecommons.org/licenses/by-nc-nd/4.0
dc.subjectinfo:eu-repo/classification/EEG/EEG
dc.subjectinfo:eu-repo/classification/Voz imaginada/Imagined speech
dc.subjectinfo:eu-repo/classification/Bolsa de características/Bag of features
dc.subjectinfo:eu-repo/classification/BCI/BCI
dc.subjectinfo:eu-repo/classification/Interfaz de computadora cerebral/Brain computer interface
dc.subjectinfo:eu-repo/classification/cti/1
dc.subjectinfo:eu-repo/classification/cti/12
dc.subjectinfo:eu-repo/classification/cti/1203
dc.subjectinfo:eu-repo/classification/cti/120323
dc.titleBag of features for imagined speech classification in electroencephalograms
dc.typeinfo:eu-repo/semantics/masterThesis
dc.typeinfo:eu-repo/semantics/acceptedVersion
dc.audiencestudents
dc.audienceresearchers
dc.audiencegeneralPublic

