dc.contributor: Sapiro, Guillermo
dc.creator: Ramirez, Ignacio
dc.date.accessioned: 2014-11-24T22:21:00Z
dc.date.accessioned: 2022-10-28T19:21:36Z
dc.date.available: 2014-11-24T22:21:00Z
dc.date.available: 2022-10-28T19:21:36Z
dc.date.created: 2014-11-24T22:21:00Z
dc.date.issued: 2011
dc.identifier: RAMIREZ, I. Second generation sparse models. Doctoral thesis. University of Minnesota, 2011.
dc.identifier: http://hdl.handle.net/20.500.12008/2854
dc.identifier.uri: https://repositorioslatinoamericanos.uchile.cl/handle/2250/4959515
dc.description.abstract: Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a learned dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many applications. The success of these models is largely attributed to two critical features: the use of sparsity as a robust mechanism for regularizing the linear coefficients that represent the data, and the flexibility provided by overcomplete dictionaries that are learned from the data. These features are controlled by two critical hyper-parameters: the desired sparsity of the coefficients, and the size of the dictionaries to be learned. However, lacking theoretical guidelines for selecting these critical parameters, applications based on sparse models often require hand-tuning and cross-validation to select them, for each application and each data set. This can be both inefficient and ineffective. On the other hand, there are multiple scenarios in which imposing additional constraints on the produced representations, including the sparse codes and the dictionary itself, can result in further improvements. This thesis is about improving and/or extending current sparse models by addressing the two issues discussed above, providing the elements for a new generation of more powerful and flexible sparse models. First, we seek to gain a better understanding of sparse models as data modeling tools, so that critical parameters can be selected automatically, efficiently, and in a principled way. Second, we explore new sparse modeling formulations for effectively exploiting the prior information present in different scenarios. In order to achieve these goals, we combine ideas and tools from information theory, statistics, machine learning, and optimization theory. The theoretical contributions are complemented with applications in audio, image and video processing.
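The sparse model the abstract describes, representing a signal x as a linear combination of a few atoms of a dictionary D, is commonly cast as the lasso problem min_a 0.5*||x - D a||^2 + lam*||a||_1. A minimal sketch of this sparse-coding step, using the standard ISTA (iterative soft-thresholding) algorithm rather than any specific method from the thesis, with a random dictionary and illustrative values for lam and the iteration count:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm: shrinks entries toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, D, lam=0.05, n_iter=500):
    """Approximately solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic data term
        a = soft_threshold(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))          # overcomplete dictionary: 50 atoms in R^20
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
a_true = np.zeros(50)
a_true[[3, 17, 40]] = [1.0, -0.5, 0.8]     # a 3-sparse ground-truth code
x = D @ a_true
a_hat = sparse_code(x, D)
print(np.count_nonzero(a_hat))             # only a few coefficients remain active
```

The two hyper-parameters the abstract highlights appear here directly: lam controls the sparsity of the code, and the dictionary size (50 atoms) controls the overcompleteness; both are hand-picked in this sketch, which is exactly the practice the thesis aims to replace with principled selection.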
dc.rights: Creative Commons Attribution-NonCommercial-NoDerivatives license (CC BY-NC-ND 4.0)
dc.rights: Works deposited in the Repository are governed by the Ordinance on Intellectual Property Rights of the Universidad de la República (Res. No. 91 of the C.D.C., 8/III/1994; D.O. 7/IV/1994) and by the Ordinance of the Open Repository of the Universidad de la República (Res. No. 16 of the C.D.C., 07/10/2014).
dc.title: Second generation sparse models
dc.type: Doctoral thesis