info:eu-repo/semantics/article
Compression-based regularization with an application to multitask learning
Date
2018-10
Recorded in:
Vera, Matías Alejandro; Rey Vega, Leonardo Javier; Piantanida, Pablo; Compression-based regularization with an application to multitask learning; Institute of Electrical and Electronics Engineers; IEEE Journal of Selected Topics in Signal Processing; 12; 5; 10-2018; 1063-1076
1932-4553
CONICET Digital
CONICET
Author
Vera, Matías Alejandro
Rey Vega, Leonardo Javier
Piantanida, Pablo
Abstract
This paper investigates, on information-theoretic grounds, a learning problem based on the principle that any regularity in a given dataset can be exploited to extract compact features from data, i.e., using fewer bits than needed to fully describe the data itself, in order to build meaningful representations of relevant content (multiple labels). We begin by studying a multitask learning (MTL) problem from the point of view of the average (over the tasks) misclassification probability, linking it with the popular cross-entropy criterion. Our approach allows an information-theoretic formulation of the MTL problem as a supervised learning framework in which the prediction models for several related tasks are learned jointly from common representations to achieve better generalization performance. More precisely, our formulation of the MTL problem can be interpreted as an information bottleneck problem with side information at the decoder. Based on this, we present an iterative algorithm for computing the optimal tradeoffs, and some of its convergence properties are studied. An important feature of this algorithm is that it provides a natural safeguard against overfitting, since it minimizes the average risk while taking into account a penalty induced by the model complexity. Remarkably, empirical results illustrate that there exists an optimal information rate minimizing the excess risk, which depends on the nature and the amount of available training data. Applications to hierarchical text categorization and distributional word clustering are also investigated, extending previous work.
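To make the kind of iterative scheme mentioned in the abstract concrete, the sketch below implements the classical self-consistent information-bottleneck updates of Tishby et al. (alternating updates of the encoder p(t|x), the cluster prior p(t), and the decoder p(y|t)). This is an illustrative simplification only: the paper's actual algorithm additionally handles side information at the decoder, which is not modeled here, and all function and variable names are our own.

```python
import numpy as np

def ib_iterations(p_xy, n_clusters, beta, n_iter=200, seed=0):
    """Classical information-bottleneck updates (illustrative sketch).

    p_xy: joint distribution over (x, y), shape (n_x, n_y), entries sum to 1.
    Returns the soft encoder p(t|x), shape (n_x, n_clusters).
    NOTE: the paper's algorithm also uses decoder side information,
    which this simplified sketch omits.
    """
    rng = np.random.default_rng(seed)
    p_x = p_xy.sum(axis=1)                       # marginal p(x)
    p_y_given_x = p_xy / p_x[:, None]            # conditional p(y|x)
    n_x = p_xy.shape[0]

    # Random soft assignment for the encoder p(t|x).
    q = rng.random((n_x, n_clusters))
    p_t_given_x = q / q.sum(axis=1, keepdims=True)

    eps = 1e-300  # numerical floor to avoid log(0)
    for _ in range(n_iter):
        p_t = p_t_given_x.T @ p_x                # cluster prior p(t)
        p_xt = p_t_given_x * p_x[:, None]        # joint p(x, t)
        # Decoder p(y|t) = sum_x p(y|x) p(x|t).
        p_y_given_t = (p_xt.T @ p_y_given_x) / np.maximum(p_t[:, None], eps)
        # KL(p(y|x) || p(y|t)) for every (x, t) pair, shape (n_x, n_clusters).
        log_ratio = (np.log(np.maximum(p_y_given_x[:, None, :], eps))
                     - np.log(np.maximum(p_y_given_t[None, :, :], eps)))
        kl = (p_y_given_x[:, None, :] * log_ratio).sum(axis=2)
        # Self-consistent encoder update: p(t|x) ∝ p(t) exp(-beta * KL).
        logits = np.log(np.maximum(p_t[None, :], eps)) - beta * kl
        logits -= logits.max(axis=1, keepdims=True)  # stabilize the softmax
        p_t_given_x = np.exp(logits)
        p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)
    return p_t_given_x
```

The tradeoff parameter `beta` plays the role of the rate penalty discussed in the abstract: small `beta` favors heavy compression (a low information rate, hence a strong regularizer), while large `beta` favors predictive accuracy, which matches the observation that an intermediate rate minimizes the excess risk.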