dc.creator: Becker C.O.
dc.creator: Ferreira P.A.V.
dc.date: 2013
dc.date: 2015-06-25T19:12:48Z
dc.date: 2015-11-26T15:10:07Z
dc.date.accessioned: 2018-03-28T22:20:19Z
dc.date.available: 2018-03-28T22:20:19Z
dc.identifier: Proceedings - 2013 12th International Conference on Machine Learning and Applications, ICMLA 2013. IEEE Computer Society, v. 2, p. 339-344, 2013.
dc.identifier: 10.1109/ICMLA.2013.145
dc.identifier: http://www.scopus.com/inward/record.url?eid=2-s2.0-84899461109&partnerID=40&md5=8ac106019099d58c5be4cd499e8a7d37
dc.identifier: http://www.repositorio.unicamp.br/handle/REPOSIP/88844
dc.identifier: http://repositorio.unicamp.br/jspui/handle/REPOSIP/88844
dc.identifier: 2-s2.0-84899461109
dc.identifier.uri: http://repositorioslatinoamericanos.uchile.cl/handle/2250/1257982
dc.description: Semi-supervised learning can be defined as the ability to improve the predictive performance of an algorithm by providing it with data that has not been previously labeled. Manifold Regularization is a semi-supervised learning approach that extends the regularization framework to include additional regularization penalties based on the graph Laplacian as the empirical estimator of the underlying manifold. The incorporation of such terms relies on additional hyper-parameters, which, together with the original kernel and regularization parameters, are known to influence algorithm behavior. This paper proposes a gradient approach to the optimization of these hyper-parameters based on the closed form of the generalized cross-validation estimate, which is valid whenever the learning optimality conditions can be represented as a linear system, as is the case for Laplacian Regularized Least Squares. For the subset of hyper-parameters that are integer quantities, as is the case for the Laplacian matrix hyper-parameters, we propose optimizing the weight components of a sum of base terms. Results of computational experiments are presented to illustrate the proposed technique. © 2013 IEEE.
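For concreteness, a minimal sketch of the quantities the abstract refers to follows. This is not the authors' code: it assumes the standard LapRLS linear system of Belkin, Niyogi, and Sindhwani (2006) and the classical generalized cross-validation (GCV) estimate; the function names, the gamma_A/gamma_I parameterization, and the weighted-sum surrogate for the integer graph hyper-parameters are illustrative assumptions.

```python
import numpy as np

def lap_rls_gcv(K, L, y, n_labeled, gamma_A, gamma_I):
    """Fit Laplacian RLS in closed form and return its GCV score.

    K         : (n, n) kernel matrix over labeled + unlabeled points
    L         : (n, n) graph Laplacian (empirical manifold estimator)
    y         : (n_labeled,) labels for the first n_labeled points
    gamma_A   : ambient-space regularization weight
    gamma_I   : intrinsic (manifold) regularization weight
    """
    n = K.shape[0]
    u = n - n_labeled
    J = np.zeros((n, n))                      # selects the labeled points
    J[:n_labeled, :n_labeled] = np.eye(n_labeled)
    y_ext = np.zeros(n)                       # labels padded with zeros
    y_ext[:n_labeled] = y

    # LapRLS optimality conditions as one linear system:
    #   (J K + gamma_A*l*I + gamma_I*l/(u+l)^2 * L K) alpha = J y_ext
    M = (J @ K
         + gamma_A * n_labeled * np.eye(n)
         + gamma_I * n_labeled / (u + n_labeled) ** 2 * (L @ K))
    alpha = np.linalg.solve(M, J @ y_ext)

    # Hat ("smoother") matrix on the labeled block: fitted values = S @ y.
    S = (K @ np.linalg.inv(M))[:n_labeled, :n_labeled]
    resid = y - S @ y
    gcv = (resid @ resid / n_labeled) / (
        (np.trace(np.eye(n_labeled) - S) / n_labeled) ** 2)
    return alpha, gcv

def combined_laplacian(base_laplacians, weights):
    """Continuous surrogate for integer graph hyper-parameters:
    a weighted sum of pre-built candidate Laplacians."""
    return sum(w * L_i for w, L_i in zip(weights, base_laplacians))
```

Because the fit is the solution of a linear system, the GCV score is differentiable in gamma_A, gamma_I, and the Laplacian weights, which is what makes a gradient-based hyper-parameter search of this kind possible.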
dc.description: Association for Machine Learning and Applications (AMLA), IEEE Computer Society
dc.language: en
dc.publisher: IEEE Computer Society
dc.relation: Proceedings - 2013 12th International Conference on Machine Learning and Applications, ICMLA 2013
dc.rights: closed access
dc.source: Scopus
dc.title: Gradient Hyper-parameter Optimization for Manifold Regularization
dc.type: Conference proceedings