dc.creator: Cardellino, Cristian Adrián
dc.creator: Teruel, Milagro
dc.creator: Alonso i Alemany, Laura
dc.date.accessioned: 2021-12-30T12:41:42Z
dc.date.accessioned: 2022-10-14T18:31:57Z
dc.date.available: 2021-12-30T12:41:42Z
dc.date.available: 2022-10-14T18:31:57Z
dc.date.created: 2021-12-30T12:41:42Z
dc.date.issued: 2015
dc.identifier: http://hdl.handle.net/11086/22140
dc.identifier.uri: https://repositorioslatinoamericanos.uchile.cl/handle/2250/4274057
dc.description.abstract: Active learning provides promising methods to optimize the cost of manually annotating a dataset. However, practitioners in many areas do not widely adopt such methods because they present technical difficulties and offer no guarantee of good performance, especially on skewed distributions with scarcely populated minority classes and an undefined, catch-all majority class, which are very common in human-related phenomena like natural language. In this paper we present a comparison of the simplest active learning technique, pool-based uncertainty sampling, and its opposite, which we call reversed uncertainty sampling. We show that both obtain results comparable to random sampling, arguing for a more insightful approach to active learning.
dc.language: eng
dc.rights: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.source: ISSN: 2451-7585
dc.subject: Natural language processing
dc.subject: Active learning
dc.title: Reversing uncertainty sampling to improve active learning schemes
dc.type: conferenceObject
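
The abstract contrasts pool-based uncertainty sampling (query the instances the current model is least confident about) with reversed uncertainty sampling (query the most confident ones). The sketch below illustrates both query schemes under stated assumptions; it is not the paper's experimental setup: the scikit-learn classifier, the toy skewed pool built with make_classification, the seed-set size, and the batch size are all illustrative choices.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Toy pool: a skewed binary problem standing in for the paper's setting of
    # a scarcely populated minority class and a catch-all majority class.
    # (Dataset, classifier, and sizes are illustrative assumptions.)
    X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

    labeled = list(range(20))  # small seed set of labeled indices
    pool = [i for i in range(len(X)) if i not in labeled]

    def query_batch(clf, batch_size=10, reverse=False):
        """Select the next batch of pool indices by model confidence.

        Standard uncertainty sampling takes the LEAST confident instances;
        the reversed scheme takes the MOST confident ones instead.
        """
        clf.fit(X[labeled], y[labeled])
        confidence = clf.predict_proba(X[pool]).max(axis=1)  # top-class probability
        order = np.argsort(confidence)  # ascending: least confident first
        if reverse:
            order = order[::-1]         # reversed: most confident first
        return [pool[i] for i in order[:batch_size]]

    # One round of each scheme from the same starting state.
    clf = LogisticRegression(max_iter=1000)
    uncertain_batch = query_batch(clf, reverse=False)  # uncertainty sampling
    confident_batch = query_batch(clf, reverse=True)   # reversed uncertainty sampling

    # In a full active-learning loop, the chosen batch is labeled by the
    # oracle and moved from the pool into the labeled set before retraining:
    labeled.extend(uncertain_batch)
    pool = [i for i in pool if i not in uncertain_batch]

For a binary problem, ranking by lowest top-class probability coincides with the standard least-confidence criterion; other uncertainty measures (margin, entropy) would slot into the same loop without changing its structure.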

