dc.creator: Vera, Matías Alejandro
dc.creator: Rey Vega, Leonardo Javier
dc.creator: Piantanida, Pablo
dc.date.accessioned: 2019-11-13T17:26:42Z
dc.date.accessioned: 2022-10-15T04:00:34Z
dc.date.available: 2019-11-13T17:26:42Z
dc.date.available: 2022-10-15T04:00:34Z
dc.date.created: 2019-11-13T17:26:42Z
dc.date.issued: 2018-10
dc.identifier: Vera, Matías Alejandro; Rey Vega, Leonardo Javier; Piantanida, Pablo; Compression-based regularization with an application to multitask learning; Institute of Electrical and Electronics Engineers; IEEE Journal of Selected Topics in Signal Processing; 12; 5; 10-2018; 1063-1076
dc.identifier: 1932-4553
dc.identifier: http://hdl.handle.net/11336/88736
dc.identifier: CONICET Digital
dc.identifier: CONICET
dc.identifier.uri: https://repositorioslatinoamericanos.uchile.cl/handle/2250/4343075
dc.description.abstract: This paper investigates, on information-theoretic grounds, a learning problem based on the principle that any regularity in a given dataset can be exploited to extract compact features from the data, i.e., using fewer bits than needed to fully describe the data itself, in order to build meaningful representations of relevant content (multiple labels). We begin by studying a multitask learning (MTL) problem from the point of view of the average (over the tasks) misclassification probability, linking it with the popular cross-entropy criterion. Our approach allows an information-theoretic formulation of the MTL problem as a supervised learning framework in which the prediction models for several related tasks are learned jointly from common representations to achieve better generalization performance. More precisely, our formulation of the MTL problem can be interpreted as an information bottleneck problem with side information at the decoder. Based on this, we present an iterative algorithm for computing the optimal tradeoffs and study some of its convergence properties. An important feature of this algorithm is that it provides a natural safeguard against overfitting, because it minimizes the average risk while taking into account a penalization induced by the model complexity. Remarkably, empirical results illustrate that there exists an optimal information rate minimizing the excess risk, which depends on the nature and the amount of available training data. Applications to hierarchical text categorization and distributional word clustering are also investigated, extending previous works.
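The abstract describes an iterative, Arimoto-Blahut-style algorithm for an information bottleneck problem. As context for the record, the sketch below shows the *classic* information-bottleneck self-consistent update (encoder p(t|x), marginal p(t), decoder p(y|t)), not the paper's extended variant with side information at the decoder; all names and the trade-off parameter `beta` are illustrative assumptions.

```python
import numpy as np

def ib_iterations(p_xy, n_clusters, beta, n_iter=200, seed=0):
    """Sketch of the classic information-bottleneck iteration.

    p_xy: joint distribution over (X, Y), shape (|X|, |Y|), entries sum to 1.
    Returns the soft encoder p(t|x), shape (|X|, n_clusters).
    Note: this is a generic illustration, not the paper's algorithm,
    which additionally handles side information at the decoder.
    """
    rng = np.random.default_rng(seed)
    eps = 1e-12
    p_x = p_xy.sum(axis=1)                       # marginal p(x)
    p_y_given_x = p_xy / (p_x[:, None] + eps)    # conditional p(y|x)
    # random soft initialization of the encoder p(t|x)
    q = rng.random((p_xy.shape[0], n_clusters))
    q /= q.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        p_t = p_x @ q                            # p(t) = sum_x p(x) p(t|x)
        # decoder: p(y|t) = sum_x p(t|x) p(x,y) / p(t)
        p_y_given_t = (q.T @ p_xy) / (p_t[:, None] + eps)
        # KL(p(y|x) || p(y|t)) for every pair (x, t)
        log_ratio = np.log((p_y_given_x[:, None, :] + eps)
                           / (p_y_given_t[None, :, :] + eps))
        kl = np.einsum('xy,xty->xt', p_y_given_x, log_ratio)
        # encoder update: p(t|x) proportional to p(t) exp(-beta * KL)
        q = p_t[None, :] * np.exp(-beta * kl)
        q /= q.sum(axis=1, keepdims=True)
    return q
```

Larger `beta` weights prediction of Y more heavily (less compression); smaller `beta` yields more compact, lower-rate representations, which is the compression-based regularization effect the abstract refers to.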
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation: info:eu-repo/semantics/altIdentifier/url/https://ieeexplore.ieee.org/document/8379424
dc.relation: info:eu-repo/semantics/altIdentifier/doi/http://dx.doi.org/10.1109/JSTSP.2018.2846218
dc.rights: https://creativecommons.org/licenses/by-nc-sa/2.5/ar/
dc.rights: info:eu-repo/semantics/restrictedAccess
dc.subject: ARIMOTO-BLAHUT ALGORITHM
dc.subject: INFORMATION BOTTLENECK
dc.subject: MULTITASK LEARNING
dc.subject: REGULARIZATION
dc.subject: SIDE INFORMATION
dc.title: Compression-based regularization with an application to multitask learning
dc.type: info:eu-repo/semantics/article
dc.type: info:ar-repo/semantics/artículo
dc.type: info:eu-repo/semantics/publishedVersion

