dc.creator | Cardellino, Cristian Adrián | |
dc.creator | Teruel, Milagro | |
dc.creator | Alonso i Alemany, Laura | |
dc.date.accessioned | 2021-12-30T12:41:42Z | |
dc.date.accessioned | 2022-10-14T18:31:57Z | |
dc.date.available | 2021-12-30T12:41:42Z | |
dc.date.available | 2022-10-14T18:31:57Z | |
dc.date.created | 2021-12-30T12:41:42Z | |
dc.date.issued | 2015 | |
dc.identifier | http://hdl.handle.net/11086/22140 | |
dc.identifier.uri | https://repositorioslatinoamericanos.uchile.cl/handle/2250/4274057 | |
dc.description.abstract | Active learning provides promising methods to optimize the cost of manually annotating a dataset. However, practitioners in many areas rarely adopt such methods because they present technical difficulties and do not guarantee good performance, especially on skewed distributions with scarcely populated minority classes and an undefined, catch-all majority class, which are very common in human-related phenomena like natural language. In this paper we present a comparison of the simplest active learning technique, pool-based uncertainty sampling, and its opposite, which we call reversed uncertainty sampling. We show that both obtain results comparable to those of random sampling, arguing for a more insightful approach to active learning. | |
dc.language | eng | |
dc.rights | http://creativecommons.org/licenses/by-nc-nd/4.0/ | |
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | |
dc.source | ISSN: 2451-7585 | |
dc.subject | Natural language processing | |
dc.subject | Active learning | |
dc.title | Reversing uncertainty sampling to improve active learning schemes | |
dc.type | conferenceObject | |