dc.contributor: Antonio de Padua Braga
dc.contributor: Andre Paim Lemos
dc.contributor: Luis Antonio Aguirre
dc.contributor: Carlos Eduardo Pedreira
dc.contributor: Adriano Lorena Inacio de Oliveira
dc.creator: Euler Guimarães Horta
dc.date.accessioned: 2019-08-11T20:56:34Z
dc.date.accessioned: 2022-10-03T22:50:58Z
dc.date.available: 2019-08-11T20:56:34Z
dc.date.available: 2022-10-03T22:50:58Z
dc.date.created: 2019-08-11T20:56:34Z
dc.date.issued: 2015-10-07
dc.identifier: http://hdl.handle.net/1843/BUBD-A4BK3A
dc.identifier.uri: http://repositorioslatinoamericanos.uchile.cl/handle/2250/3811889
dc.description.abstract: The main objective of Active Learning is to choose only the most informative patterns to be labeled and learned. In the Active Learning scenario, a selection strategy analyzes an unlabeled pattern and decides whether its label should be requested from a specialist. This labeling process usually has a high cost, which motivates the study of strategies that minimize the number of labels required for learning. Traditional Active Learning approaches make unrealistic assumptions about the data, such as requiring linear separability or a uniform data distribution. Furthermore, traditional approaches require parameter fine-tuning, which means that some labels must be reserved for this purpose, increasing the cost. In this thesis we present two Active Learning strategies that make no assumptions about the data distribution and that require no parameter fine-tuning. The proposed algorithms are based on Extreme Learning Machines (ELM) with a Hebbian Perceptron with normalized weights in the output layer. Our strategies decide whether a pattern should be labeled using a simple convergence test, obtained by adapting the Perceptron Convergence Theorem. The proposed methods allow online learning, are practical and fast, and are able to obtain a good solution in terms of neural complexity and generalization capability. The experimental results show that our models perform similarly to regularized ELMs and SVMs with an ELM kernel. However, the proposed models learn from fewer labeled patterns, without any computationally expensive optimization process and without parameter fine-tuning.
dc.publisher: Universidade Federal de Minas Gerais
dc.publisher: UFMG
dc.rights: Open Access
dc.subject: Electrical engineering
dc.title: Aplicação de máquinas de aprendizado extremo ao problema de aprendizado ativo
dc.type: Doctoral thesis