dc.contributor: Universidade Federal de São Carlos (UFSCar)
dc.contributor: Science and Technology of São Paulo
dc.contributor: Universidade Estadual Paulista (Unesp)
dc.date.accessioned: 2020-12-12T01:30:32Z
dc.date.accessioned: 2022-12-19T20:48:40Z
dc.date.available: 2020-12-12T01:30:32Z
dc.date.available: 2022-12-19T20:48:40Z
dc.date.created: 2020-12-12T01:30:32Z
dc.date.issued: 2020-10-01
dc.identifier: Applied Soft Computing Journal, v. 95.
dc.identifier: 1568-4946 (ISSN)
dc.identifier: http://hdl.handle.net/11449/199093
dc.identifier: 10.1016/j.asoc.2020.106513 (DOI)
dc.identifier: 2-s2.0-85087755333 (Scopus)
dc.identifier.uri: https://repositorioslatinoamericanos.uchile.cl/handle/2250/5379727
dc.description.abstract: Many current methods use 2D poses to represent and recognize human actions in videos. Most of them extract features (e.g., angles and trajectories) computed from raw 2D poses, based on the straight line segments that form the body parts of a 2D pose model. In this work, we propose a new way of representing 2D poses: instead of using the straight line segments directly, the 2D pose is first converted to a parameter space in which each segment is mapped to a point. Spatiotemporal features are then extracted from the parameter space, encoded with a Bag-of-Poses approach, and used for human action recognition in video. Experiments on two well-known public datasets, Weizmann and KTH, showed that encoding 2D poses in parameter space improves recognition rates, achieving accuracy competitive with state-of-the-art methods.
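The abstract describes two concrete steps that a short sketch can make tangible: mapping each body-part line segment to a point in a parameter space, and encoding the resulting descriptors with a Bag-of-Poses histogram. Below is a minimal Python sketch assuming a Hough-style (rho, theta) parameterization of each segment's supporting line and a k-means codebook; the function names, the choice of parameterization, and the codebook size are illustrative assumptions, not the paper's actual implementation.

import numpy as np
from sklearn.cluster import KMeans

def segment_to_parameter_point(p1, p2):
    # Hough-style (rho, theta) parameterization of the segment's
    # supporting line (an assumed mapping, for illustration only):
    # theta is the angle of the line's normal in [0, pi), and rho is
    # the signed distance of the line from the origin. Both endpoints
    # of the segment yield the same (rho, theta) point.
    x1, y1 = p1
    x2, y2 = p2
    theta = (np.arctan2(y2 - y1, x2 - x1) + np.pi / 2.0) % np.pi
    rho = x1 * np.cos(theta) + y1 * np.sin(theta)
    return rho, theta

def pose_to_parameter_space(segments):
    # One 2D pose = a list of ((x1, y1), (x2, y2)) body-part segments;
    # each segment becomes a single point in parameter space.
    return np.array([segment_to_parameter_point(p1, p2) for p1, p2 in segments])

def bag_of_poses(training_points, video_points, k=50):
    # Bag-of-Poses encoding: cluster parameter-space points from the
    # training set into k codewords, then describe one video as the
    # normalized histogram of its points' nearest codewords.
    codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(training_points)
    words = codebook.predict(video_points)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / max(hist.sum(), 1.0)

A histogram produced this way could then be fed to an off-the-shelf classifier (e.g., an SVM) for action recognition; the spatiotemporal part of the paper's features, which tracks how these points evolve across frames, is omitted here for brevity.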
dc.language: eng
dc.relation: Applied Soft Computing Journal
dc.source: Scopus
dc.subject: Bag-of-poses
dc.subject: Human action recognition
dc.subject: Spatiotemporal features
dc.subject: Surveillance systems
dc.subject: Video sequences
dc.title: Human action recognition in videos based on spatiotemporal features and bag-of-poses
dc.type: Journal articles

