dc.creatorCardellino, Cristian
dc.creatorAlonso i Alemany, Laura
dc.date2015
dc.date2016-04-08T12:29:22Z
dc.identifierhttp://sedici.unlp.edu.ar/handle/10915/52131
dc.identifierhttp://44jaiio.sadio.org.ar/sites/default/files/asai184-191.pdf
dc.identifierissn:2451-7585
dc.descriptionActive learning provides promising methods to optimize the cost of manually annotating a dataset. However, practitioners in many areas do not widely adopt such methods because they present technical difficulties and do not provide a guarantee of good performance, especially on skewed distributions with scarcely populated minority classes and an undefined, catch-all majority class, which are very common in human-related phenomena like natural language. In this paper we present a comparison of the simplest active learning technique, pool-based uncertainty sampling, and its opposite, which we call reversed uncertainty sampling. We show that both obtain results comparable to random sampling, arguing for a more insightful approach to active learning.
dc.descriptionSociedad Argentina de Informática e Investigación Operativa (SADIO)
dc.formatapplication/pdf
dc.format184-191
dc.languageen
dc.rightshttp://creativecommons.org/licenses/by-sa/3.0/
dc.rightsCreative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)
dc.subjectCiencias Informáticas
dc.subjectactive learning
dc.subjectpool-based uncertainty sampling
dc.subjectAprendizaje
dc.titleReversing uncertainty sampling to improve active learning schemes
dc.typeObjeto de conferencia