dc.contributorMora Colque, Rensso Victor Hugo
dc.date.accessioned2023-02-16T21:14:55Z
dc.date.accessioned2023-05-30T23:31:46Z
dc.date.available2023-02-16T21:14:55Z
dc.date.available2023-05-30T23:31:46Z
dc.date.created2023-02-16T21:14:55Z
dc.date.issued2023
dc.identifier1076321
dc.identifierhttp://hdl.handle.net/20.500.12590/17445
dc.identifier.urihttps://repositorioslatinoamericanos.uchile.cl/handle/2250/6479218
dc.description.abstractForeground detection is the task of labelling pixels in a video sequence as foreground (moving objects) or background (static scene), and it depends on the context of the scene. For many years, methods based on a background model have been the most widely used approaches for detecting the foreground; however, these methods are sensitive to error propagation from the initial background-model estimates. To address this problem, we propose a U-Net-based architecture with a feature attention module, in which an encoding of the entire video sequence serves as the attention context for extracting features related to the background model. Furthermore, we add three spatial attention modules with the aim of highlighting regions with relevant features. We tested our network on sixteen scenes from the CDnet2014 dataset, obtaining an average F-measure of 97.84. The results also show that our model outperforms traditional and neural-network methods. Thus, we demonstrate that feature and spatial attention modules on a U-Net-based architecture can address the challenges of foreground detection.
dc.languageeng
dc.publisherUniversidad Católica San Pablo
dc.publisherPE
dc.rightshttps://creativecommons.org/licenses/by/4.0/
dc.rightsinfo:eu-repo/semantics/openAccess
dc.sourceUniversidad Católica San Pablo
dc.sourceRepositorio Institucional - UCSP
dc.subjectForeground Detection
dc.subjectU-Net
dc.subjectVideo Encoding
dc.subjectAttention
dc.titleForeground detection using attention modules and video encoding
dc.typeinfo:eu-repo/semantics/bachelorThesis