dc.contributorNiño Vásquez, Luis Fernando
dc.contributorLaboratorio de Investigación en Sistemas Inteligentes (LISI)
dc.creatorRoa García, Fabio Andrés
dc.date.accessioned2022-02-14T20:20:03Z
dc.date.available2022-02-14T20:20:03Z
dc.date.created2022-02-14T20:20:03Z
dc.date.issued2021-09-10
dc.identifierhttps://repositorio.unal.edu.co/handle/unal/80979
dc.identifierUniversidad Nacional de Colombia
dc.identifierRepositorio Institucional Universidad Nacional de Colombia
dc.identifierhttps://repositorio.unal.edu.co/
dc.description.abstractIn the field of biometrics and image analysis, important advances have been made in recent years; facial recognition techniques have been formalized through the use of convolutional neural networks supported by transfer learning and classification algorithms. Together, these techniques can be applied to video analysis, performing a series of additional steps to optimize processing times and model accuracy. The purpose of this work is to use the ResNet-34 model together with transfer learning for face recognition and identification on video sequences. (Text taken from the source.)
dc.description.abstractNowadays, thanks to technological innovation, there has been a significant increase in the production of multimedia content through devices such as tablets, cell phones, and computers. Most of this multimedia content is in video format, which creates a need to extract useful information from it; however, doing so is a tedious task, since analyzing videos typically demands excessive resources and long execution times. Fortunately, in the field of biometrics and image analysis there have been important advances in recent years: facial recognition techniques have been formalized through the use of convolutional neural networks supported by transfer learning and classification algorithms. Together, these techniques can be applied to video analysis, performing a series of additional steps to optimize processing times and model accuracy. The purpose of this work is to use the ResNet-34 model and transfer learning for face recognition and identification on video footage.
dc.languagespa
dc.publisherUniversidad Nacional de Colombia
dc.publisherBogotá - Ingeniería - Maestría en Ingeniería - Ingeniería de Sistemas y Computación
dc.publisherDepartamento de Ingeniería de Sistemas e Industrial
dc.publisherFacultad de Ingeniería
dc.publisherBogotá, Colombia
dc.publisherUniversidad Nacional de Colombia - Sede Bogotá
dc.relationM. Liu and Z. Liu, “Deep Reinforcement Learning Visual-Text Attention for Multimodal Video Classification,” in 1st International Workshop on Multimodal Understanding and Learning for Embodied Applications - MULEA ’19, pp. 13–21.
dc.relationS. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10. pp. 1345–1359, Oct-2010.
dc.relationX. Ran, H. Chen, Z. Liu, and J. Chen, “Delivering Deep Learning to Mobile Devices via Offloading,” in Proceedings of the Workshop on Virtual Reality and Augmented Reality Network - VR/AR Network ’17, pp. 42–47.
dc.relationO. I. Abiodun, A. Jantan, A. E. Omolara, K. V. Dada, N. A. Mohamed, and H. Arshad, “State-of-the-art in artificial neural network applications: A survey,” vol. 4, no. 11, p. e00938, 2018.
dc.relationG. Szirtes, D. Szolgay, Á. Utasi, D. Takács, I. Petrás, and G. Fodor, “Facing reality: an industrial view on large scale use of facial expression analysis,” in Proceedings of the 2013 on Emotion recognition in the wild challenge and workshop - EmotiW ’13, pp. 1–8.
dc.relationG. Levi and T. Hassner, “Emotion Recognition in the Wild via Convolutional Neural Networks and Mapped Binary Patterns,” in Proceedings of the 2015 ACM on International Conference on Multimodal Interaction - ICMI ’15, pp. 503–510.
dc.relationR. Ewerth, M. Mühling, and B. Freisleben, “Robust Video Content Analysis via Transductive Learning,” vol. 3, no. 3, pp. 1–26.
dc.relationM. Parchami, S. Bashbaghi, and E. Granger, “CNNs with cross-correlation matching for face recognition in video surveillance using a single training sample per person,” in 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 1–6.
dc.relationH. Khan, A. Atwater, and U. Hengartner, “Itus: an implicit authentication framework for android,” in Proceedings of the 20th annual international conference on Mobile computing and networking - MobiCom ’14, pp. 507–518.
dc.relationL. N. Huynh, Y. Lee, and R. K. Balan, “DeepMon: Mobile GPU-based Deep Learning Framework for Continuous Vision Applications,” in Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services, pp. 82–95.
dc.relationR. Iqbal, F. Doctor, B. More, S. Mahmud, and U. Yousuf, “Big data analytics: Computational intelligence techniques and application areas,” Technol. Forecast. Soc. Change, vol. 153, p. 119253, 2020.
dc.relationU. Schmidt-Erfurth, A. Sadeghipour, B. S. Gerendas, S. M. Waldstein, and H. Bogunović, “Artificial intelligence in retina,” vol. 67, pp. 1–29.
dc.relationM. Mittal et al., “An efficient edge detection approach to provide better edge connectivity for image analysis,” IEEE Access, vol. 7, pp. 33240–33255, 2019.
dc.relationD. Sirohi, N. Kumar, and P. S. Rana, “Convolutional neural networks for 5G-enabled Intelligent Transportation System : A systematic review,” vol. 153, pp. 459–498.
dc.relationA. Kumar, A. Kaur, and M. Kumar, “Face detection techniques: a review,” Artif. Intell. Rev., vol. 52, no. 2, pp. 927–948, 2019.
dc.relationK. S. Gautam and S. K. Thangavel, “Video analytics-based intelligent surveillance system for smart buildings,” Soft Comput., vol. 23, no. 8, pp. 2813–2837, 2019.
dc.relationJ. Yu, K. Sun, F. Gao, and S. Zhu, “Face biometric quality assessment via light CNN,” vol. 107, pp. 25–32.
dc.relationL. T. Nguyen-Meidine, E. Granger, M. Kiran, and L.-A. Blais-Morin, “A comparison of CNN-based face and head detectors for real-time video surveillance applications,” in 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), pp. 1–7.
dc.relationB. Chacua et al., “People Identification through Facial Recognition using Deep Learning,” in 2019 IEEE Latin American Conference on Computational Intelligence (LA-CCI), 2019.
dc.relationJ. Park, J. Chen, Y. K. Cho, D. Y. Kang, and B. J. Son, “CNN-based person detection using infrared images for night-time intrusion warning systems,” Sensors (Switzerland), vol. 20, no. 1, 2020.
dc.relationA. Bansal, C. Castillo, R. Ranjan, and R. Chellappa, “The Do’s and Don’ts for CNN-Based Face Verification,” in 2017 IEEE International Conference on Computer Vision Workshop (ICCVW), 2017, pp. 2545–2554.
dc.relationJ. Galbally, “A new Foe in biometrics: A narrative review of side-channel attacks,” vol. 96, p. 101902.
dc.relationY. Yao, H. Li, H. Zheng, and B. Y. Zhao, “Latent Backdoor Attacks on Deep Neural Networks,” in Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pp. 2041–2055.
dc.relationY. Akbulut, A. Sengur, U. Budak, and S. Ekici, “Deep learning based face liveness detection in videos,” in 2017 International Artificial Intelligence and Data Processing Symposium (IDAP), pp. 1–4.
dc.relationJ. Zhang, W. Li, P. Ogunbona, and D. Xu, “Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective,” vol. 52, no. 1, pp. 1–38.
dc.relationC. X. Lu et al., “Autonomous Learning for Face Recognition in the Wild via Ambient Wireless Cues,” in The World Wide Web Conference on - WWW ’19, pp. 1175–1186.
dc.relationJ. C. Hung, K.-C. Lin, and N.-X. Lai, “Recognizing learning emotion based on convolutional neural networks and transfer learning,” vol. 84, p. 105724.
dc.relationS. Zhang, X. Pan, Y. Cui, X. Zhao, and L. Liu, “Learning Affective Video Features for Facial Expression Recognition via Hybrid Deep Learning,” IEEE Access, vol. 7, pp. 32297–32304, 2019.
dc.relationC. Herrmann, T. Müller, D. Willersinn, and J. Beyerer, “Real-time person detection in low-resolution thermal infrared imagery with MSER and CNNs,” p. 99870I.
dc.relationF. An and Z. Liu, “Facial expression recognition algorithm based on parameter adaptive initialization of CNN and LSTM,” vol. 36, no. 3, pp. 483–498.
dc.relationZ. Zhang, P. Luo, C. C. Loy, and X. Tang, “Joint Face Representation Adaptation and Clustering in Videos,” in Computer Vision – ECCV 2016, vol. 9907, B. Leibe, J. Matas, N. Sebe, and M. Welling, Eds. Springer International Publishing, pp. 236–251.
dc.relationE. G. Ortiz, A. Wright, and M. Shah, “Face recognition in movie trailers via mean sequence sparse representation-based classification,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2013, pp. 3531–3538.
dc.relation“Privacy Protection for Life-log Video.” [Online]. Available: https://www.researchgate.net/publication/4249807_Privacy_Protection_for_Life-log_Video. [Accessed: 13-Jun-2021].
dc.relationSUPERINTENDENCIA DE INDUSTRIA Y COMERCIO, “Protección de datos personales en sistemas de videovigilancia,” 2016.
dc.relationS. Ebrahimi Kahou, V. Michalski, K. Konda, R. Memisevic, and C. Pal, “Recurrent Neural Networks for Emotion Recognition in Video,” in Proceedings of the 2015 ACM on International Conference on Multimodal Interaction - ICMI ’15, pp. 467–474.
dc.relationE. Flouty, O. Zisimopoulos, and D. Stoyanov, “FaceOff: Anonymizing Videos in the Operating Rooms,” in OR 2.0 Context-Aware Operating Theaters, Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis, vol. 11041, D. Stoyanov, Z. Taylor, D. Sarikaya, J. McLeod, M. A. González Ballester, N. C. F. Codella, A. Martel, L. Maier-Hein, A. Malpani, M. A. Zenati, S. De Ribaupierre, L. Xiongbiao, T. Collins, T. Reichl, K. Drechsler, M. Erdt, M. G. Linguraru, C. Oyarzun Laura, R. Shekhar, S. Wesarg, M. E. Celebi, K. Dana, and A. Halpern, Eds. Springer International Publishing, pp. 30–38.
dc.relationA. M. Turing, “Computing Machinery and Intelligence,” 1950.
dc.relationG. R. Yang and X. J. Wang, “Artificial Neural Networks for Neuroscientists: A Primer,” Neuron, vol. 107, no. 6, pp. 1048–1070, Sep. 2020.
dc.relationJ. Singh and R. Banerjee, “A Study on Single and Multi-layer Perceptron Neural Network,” in 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC), 2019.
dc.relationI. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.
dc.relationE. Stevens, L. Antiga, and T. Viehmann, “Deep Learning with PyTorch.”
dc.relationK. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition.”
dc.relationM. Kaya and H. Ş. Bilge, “Deep Metric Learning: A Survey,” Symmetry, vol. 11, no. 9, p. 1066, Aug. 2019.
dc.relationB. R. Vasconcellos, M. Rudek, and M. de Souza, “A Machine Learning Method for Vehicle Classification by Inductive Waveform Analysis,” IFAC-PapersOnLine, vol. 53, no. 2, pp. 13928–13932, Jan. 2020.
dc.rightsReconocimiento 4.0 Internacional
dc.rightshttp://creativecommons.org/licenses/by/4.0/
dc.rightsinfo:eu-repo/semantics/openAccess
dc.titleImplementing a face recognition and identification system on video sequences using a Convolutional Neural Network model and Transfer Learning
dc.typeTrabajo de grado - Maestría