dc.creator: Zablocki, Luciano Ivan
dc.creator: Mendoza, Agustín Nicolás
dc.creator: Nieto, Nicolás
dc.date: 2023-05
dc.date: 2023-08-23T18:09:38Z
dc.date.accessioned: 2024-07-24T03:42:22Z
dc.date.available: 2024-07-24T03:42:22Z
dc.identifier: http://sedici.unlp.edu.ar/handle/10915/156752
dc.identifier.uri: https://repositorioslatinoamericanos.uchile.cl/handle/2250/9534927
dc.description: Brain-Computer Interfaces are useful devices that can partially restore communication for severely compromised patients. Although advances in deep learning have significantly improved brain pattern recognition, large amounts of data are required to train these deep architectures. In recent years, the inner speech paradigm has drawn much attention, as it could potentially allow natural control of different devices. However, as of the date of this publication, only a small amount of data is available for this paradigm. In this work we show that, through transfer learning and domain adaptation methods, it is possible to make the most of the scarce data, enhancing the training of a deep learning architecture used in brain-computer interfaces.
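The following is a minimal, illustrative sketch (in PyTorch, not the authors' implementation) of the kind of cross-subject transfer learning the abstract describes: a small CNN is pre-trained on pooled data from source subjects, then its feature extractor is frozen and only the classification head is fine-tuned on the scarce target-subject data. All tensor shapes, layer choices, and hyperparameters are assumptions made for illustration only.

# Hypothetical transfer-learning sketch for an EEG classifier; shapes and
# hyperparameters are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

N_CHANNELS, N_SAMPLES, N_CLASSES = 128, 512, 4  # assumed EEG dimensions

class SmallEEGCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Generic temporal + spatial convolutions followed by a linear head.
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, kernel_size=(N_CHANNELS, 1), bias=False),
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 16)),
            nn.Flatten(),
        )
        self.classifier = nn.Linear(16 * 16, N_CLASSES)

    def forward(self, x):
        return self.classifier(self.features(x))

def train(model, x, y, epochs=5, lr=1e-3):
    # Only parameters with requires_grad=True are updated.
    opt = torch.optim.Adam(
        filter(lambda p: p.requires_grad, model.parameters()), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# 1) Pre-train on abundant source-subject trials (random tensors stand in for data).
source_x = torch.randn(64, 1, N_CHANNELS, N_SAMPLES)
source_y = torch.randint(0, N_CLASSES, (64,))
model = SmallEEGCNN()
train(model, source_x, source_y)

# 2) Transfer: freeze the feature extractor and fine-tune only the head on the
#    few target-subject trials available in the inner speech paradigm.
for p in model.features.parameters():
    p.requires_grad = False
target_x = torch.randn(8, 1, N_CHANNELS, N_SAMPLES)
target_y = torch.randint(0, N_CLASSES, (8,))
train(model, target_x, target_y, epochs=10)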
dc.description: Sociedad Argentina de Informática e Investigación Operativa
dc.format: application/pdf
dc.format: 67-81
dc.language: en
dc.rights: http://creativecommons.org/licenses/by-nc/4.0/
dc.rights: Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
dc.subject: Ciencias Informáticas
dc.subject: Deep Learning
dc.subject: Domain Adaptation
dc.subject: Transfer Learning
dc.subject: Convolutional Neural Network
dc.title: Domain adaptation and transfer learning methods enhance deep learning models used in inner speech based brain computer interfaces
dc.type: Articulo