dc.creatorLopes A.P.B.
dc.creatorDa Santos E.R.S.
dc.creatorDo Valle Jr. E.A.
dc.creatorDe Almeida J.M.
dc.creatorDe Araujo A.A.
dc.date2011
dc.date2015-06-30T20:30:37Z
dc.date2015-11-26T14:50:33Z
dc.date.accessioned2018-03-28T22:01:46Z
dc.date.available2018-03-28T22:01:46Z
dc.identifier9780769545486
dc.identifierProceedings - 24th SIBGRAPI Conference on Graphics, Patterns and Images, p. 352-359, 2011.
dc.identifier10.1109/SIBGRAPI.2011.41
dc.identifierhttp://www.scopus.com/inward/record.url?eid=2-s2.0-84857184838&partnerID=40&md5=6012fe35a567119f86462f4214fe2c84
dc.identifierhttp://www.repositorio.unicamp.br/handle/REPOSIP/108156
dc.identifierhttp://repositorio.unicamp.br/jspui/handle/REPOSIP/108156
dc.identifier2-s2.0-84857184838
dc.identifier.urihttp://repositorioslatinoamericanos.uchile.cl/handle/2250/1254175
dc.descriptionManually collecting action samples from realistic videos is a time-consuming and error-prone task. This is a serious bottleneck for research on video understanding, since the large intra-class variations of such videos demand training sets large enough to properly encompass those variations. Most authors dealing with this issue rely on (semi-)automated procedures to collect additional, generally noisy, examples. In this paper, we exploit a different approach, based on a Transfer Learning (TL) technique, to address the target task of action recognition. More specifically, we propose a framework that transfers knowledge about concepts from a previously labeled still-image database to the target action video database. It is assumed that, once identified in the target action database, these concepts provide contextual clues to the action classifier. Our experiments with the Caltech256 and Hollywood2 databases indicate: a) the feasibility of successfully using transfer learning techniques to detect concepts, and b) that it is indeed possible to enhance action recognition with the transferred knowledge of even a few concepts. In our case, only four concepts were enough to obtain statistically significant improvements for most actions. © 2011 IEEE.
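dc.descriptionThe transfer scheme the abstract describes can be illustrated with a minimal sketch: concept classifiers trained on a labeled still-image set are applied to video descriptors, and their concept probabilities are appended to the action feature vectors before training the action classifier. All names, dimensions, and the use of scikit-learn SVMs below are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch of the concept-transfer idea: detectors learned on a
# still-image source domain produce contextual concept scores for the
# action-video target domain. Feature dimensions and labels are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Source domain: still-image bag-of-features histograms with concept labels
# (four concepts, echoing the abstract's "only four concepts").
X_img = rng.random((200, 64))
y_concept = rng.integers(0, 4, 200)
concept_clf = SVC(probability=True).fit(X_img, y_concept)

# Target domain: video-level descriptors and action labels.
X_vid = rng.random((100, 64))
y_action = rng.integers(0, 2, 100)

# Transfer step: concept probabilities become extra contextual features.
concept_scores = concept_clf.predict_proba(X_vid)   # shape (100, 4)
X_aug = np.hstack([X_vid, concept_scores])          # shape (100, 68)

# Action classifier trained on the concept-augmented representation.
action_clf = SVC().fit(X_aug, y_action)
print(X_aug.shape)  # (100, 68)
```

The design choice mirrored here is that the source and target domains never share labels, only the learned concept detectors; the target classifier simply consumes their scores as additional dimensions.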
dc.descriptionPan, S.J., Yang, Q., A survey on transfer learning (2009) Transactions on Knowledge and Data Engineering (Pre-print)
dc.descriptionMarszalek, M., Laptev, I., Schmid, C., Actions in context (2009) CVPR '09, pp. 2929-2936. , June
dc.descriptionGriffin, G., Holub, A., Perona, P., (2007) Caltech-256 Object Category Dataset, , http://authors.library.caltech.edu/7694, California Institute of Technology, Tech. Rep. 7694. [Online]
dc.descriptionLaptev, I., Marszalek, M., Schmid, C., Rozenfeld, B., Learning realistic human actions from movies (2008) CVPR '08, pp. 1-8. , June
dc.descriptionDuchenne, O., Laptev, I., Sivic, J., Bach, F., Ponce, J., Automatic annotation of human actions in video (2009) ICCV '09
dc.descriptionMitchell, T.M., (1997) Machine Learning, , New York: McGraw-Hill
dc.descriptionUlges, A., Schulze, C., Koch, M., Breuel, T.M., Learning automatic concept detectors from online video (2009) Computer Vision and Image Understanding, , http://www.sciencedirect.com/science/article/B6WCX-4X1J787-3/2/944190566d7103b11000f88dcc2eb526, In Press, Corrected Proof. [Online]
dc.descriptionWu, T.-F., Lin, C.-J., Weng, R.C., Probability estimates for multiclass classification by pairwise coupling (2004) Journal of Machine Learning Research, 5, pp. 975-1005. , August
dc.descriptionKumar, N., Berg, A.C., Belhumeur, P.N., Nayar, S.K., Attribute and simile classifiers for face verification (2009) ICCV '09
dc.descriptionDuan, L., Tsang, I.W., Xu, D., Chua, T.-S., Domain adaptation from multiple sources via auxiliary classifiers (2009) Proceedings of the 26th International Conference on Machine Learning, pp. 289-296. , L. Bottou and M. Littman, Eds. Montreal: Omnipress, June
dc.descriptionVerbancsics, P., Stanley, K.O., Evolving static representations for task transfer (2010) J. Mach. Learn. Res., 11, pp. 1737-1769. , http://portal.acm.org/citation.cfm?id=1756006.1859909, August. [Online]
dc.descriptionLopes, A.P.B., Oliveira, R.S., De Almeida, J.M., De Albuquerque Araújo, A., Comparing alternatives for capturing dynamic information in bag of visual features approaches applied to human actions recognition (2009) Proceedings of MMSP '09
dc.descriptionEbadollahi, S., Xie, L., Chang, S.-F., Smith, J.R., Visual event detection using multi-dimensional concept dynamics (2006) 2006 IEEE International Conference on Multimedia and Expo, ICME 2006 - Proceedings, pp. 881-884. , DOI 10.1109/ICME.2006.262691
dc.descriptionKennedy, L., (2006) Revision of LSCOM Event/Activity Annotations, DTO Challenge Workshop on Large Scale Concept Ontology for Multimedia, , Columbia University, Tech. Rep., December
dc.descriptionKennedy, L., Hauptmann, A., (2006) LSCOM Lexicon Definitions and Annotations(Version 1.0), , Columbia University, Tech. Rep., March
dc.descriptionSun, J., Wu, X., Yan, S., Cheong, L.-F., Chua, T.-S., Li, J., Hierarchical spatio-temporal context modeling for action recognition (2009) Computer Vision and Pattern Recognition, 2009. CVPR 2009 IEEE Conference on, pp. 2004-2011. , June
dc.descriptionLowe, D.G., Object recognition from local scale-invariant features (1999) Proceedings of the IEEE International Conference on Computer Vision, 2, pp. 1150-1157
dc.descriptionWang, H., Ullah, M., Klaser, A., Laptev, I., Schmid, C., Evaluation of local spatio-temporal features for action recognition (2009) BMVC '09, pp. 1-5
dc.descriptionSchuldt, C., Laptev, I., Caputo, B., Recognizing human actions: A local SVM approach (2004) ICPR '04, 3, pp. 32-36
dc.descriptionFei-Fei, L., Perona, P., A bayesian hierarchical model for learning natural scene categories (2005) CVPR, pp. 524-531
dc.descriptionDai, W., Yang, Q., Xue, G., Yu, Y., (2007) Boosting for Transfer Learning, pp. 193-200
dc.descriptionRaina, R., Battle, A., Lee, H., Packer, B., Ng, A., (2007) Self-taught Learning: Transfer Learning from Unlabeled Data, pp. 759-766
dc.descriptionChang, C.-C., Lin, C.-J., (2001) LIBSVM: A Library for Support Vector Machines, , http://www.csie.ntu.edu.tw/~cjlin/libsvm, software
dc.descriptionLazebnik, S., Schmid, C., Ponce, J., Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories (2006) Proceedings - 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2006, 2, pp. 2169-2178. , DOI 10.1109/CVPR.2006.68
dc.descriptionDe Avila, S.E.F., Lopes, A.P.B.A., Da Luz Jr., A., De Albuquerque Araújo, A., VSUMM: A mechanism designed to produce static video summaries and a novel evaluation method (2011) Pattern Recogn. Lett., 32, pp. 56-68. , http://dx.doi.org/10.1016/j.patrec.2010.08.004, January. [Online]
dc.languageen
dc.relationProceedings - 24th SIBGRAPI Conference on Graphics, Patterns and Images
dc.rightsclosed
dc.sourceScopus
dc.titleTransfer Learning for Human Action Recognition
dc.typeConference proceedings