dc.contributorIzbicki, Rafael
dc.contributorhttp://lattes.cnpq.br/9991192137633896
dc.contributorhttp://lattes.cnpq.br/8136176856567377
dc.creatorOtto, Mateus Piovezan
dc.date.accessioned2023-04-03T17:47:30Z
dc.date.accessioned2023-09-04T20:26:17Z
dc.date.available2023-04-03T17:47:30Z
dc.date.available2023-09-04T20:26:17Z
dc.date.created2023-04-03T17:47:30Z
dc.date.issued2023-03-29
dc.identifierOTTO, Mateus Piovezan. Scalable and interpretable kernel methods based on random Fourier features. 2023. Dissertation (Master's in Statistics) – Universidade Federal de São Carlos, São Carlos, 2023. Available at: https://repositorio.ufscar.br/handle/ufscar/17579.
dc.identifierhttps://repositorio.ufscar.br/handle/ufscar/17579
dc.identifier.urihttps://repositorioslatinoamericanos.uchile.cl/handle/2250/8630189
dc.description.abstractKernel methods are a class of statistical machine learning models based on positive semidefinite kernels, which serve as a measure of similarity between data features. Examples of kernel methods include kernel ridge regression, support vector machines, and smoothing splines. Despite their widespread use, kernel methods face two main challenges. First, because they operate on all pairs of observations, they require a large amount of memory and computation, making them unsuitable for large datasets. This issue can be mitigated by approximating the kernel function via random Fourier features or by using preconditioners. Second, the most commonly used kernels treat all features as equally relevant, regardless of their actual impact on the prediction. This reduces interpretability, as the influence of irrelevant features is not mitigated. In this work, we extend the random Fourier features framework to Automatic Relevance Determination (ARD) kernels and propose a new kernel method that integrates the optimization of kernel parameters into training. The kernel parameters reduce the effect of irrelevant features and can be used for variable selection as a post-processing step. The proposed method is evaluated on several datasets and compared to conventional machine learning algorithms.
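As a rough illustration of the idea summarized in the abstract (not the author's implementation), the sketch below approximates an ARD Gaussian kernel with random Fourier features: each input dimension is scaled by a per-feature lengthscale before the random projection, so large lengthscales suppress irrelevant features. The function name `ard_random_fourier_features` and all parameter values are hypothetical.

```python
import numpy as np

def ard_random_fourier_features(X, lengthscales, n_features=100, rng=None):
    """Random Fourier features for an ARD Gaussian (RBF) kernel.

    Each input dimension d is divided by lengthscales[d], so a large
    lengthscale damps that feature's contribution (the ARD idea).
    Hypothetical sketch, not the dissertation's implementation.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Spectral sampling for the Gaussian kernel: frequencies ~ N(0, I)
    W = rng.standard_normal((d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    # ARD scaling applied to the inputs before the random projection
    Z = (X / lengthscales) @ W + b
    return np.sqrt(2.0 / n_features) * np.cos(Z)

# Usage: approximate kernel ridge regression in the random feature space,
# where k(x, x') is approximated by phi(x) . phi(x').
X = np.random.default_rng(0).standard_normal((200, 5))
y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(1).standard_normal(200)
lengthscales = np.ones(5)   # equal relevance here; learned during training in the proposed method
Phi = ard_random_fourier_features(X, lengthscales, n_features=300, rng=2)
alpha = 1e-2                # ridge penalty (illustrative value)
w = np.linalg.solve(Phi.T @ Phi + alpha * np.eye(Phi.shape[1]), Phi.T @ y)
y_hat = Phi @ w
```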
dc.languageeng
dc.publisherUniversidade Federal de São Carlos
dc.publisherUFSCar
dc.publisherPrograma Interinstitucional de Pós-Graduação em Estatística - PIPGEs
dc.publisherCâmpus São Carlos
dc.rightshttp://creativecommons.org/licenses/by/3.0/br/
dc.rightsAttribution 3.0 Brazil
dc.subjectImportância de covariáveis
dc.subjectMétodos de kernel
dc.subjectAprendizado de máquina
dc.subjectOtimização
dc.subjectKernel methods
dc.subjectFeature importance
dc.subjectMachine learning
dc.subjectOptimization
dc.titleScalable and interpretable kernel methods based on random Fourier features
dc.typeDissertação

