dc.contributorTakahashi Rodríguez, Silvia
dc.creatorFlórez Castro, José Manuel
dc.date.accessioned2022-09-20T15:57:49Z
dc.date.available2022-09-20T15:57:49Z
dc.date.created2022-09-20T15:57:49Z
dc.date.issued2022-09-12
dc.identifierhttp://hdl.handle.net/1992/60742
dc.identifierinstname:Universidad de los Andes
dc.identifierreponame:Repositorio Institucional Séneca
dc.identifierrepourl:https://repositorio.uniandes.edu.co/
dc.description.abstractThe goal of this project is to find the best configuration of the VITON-GAN model by tuning different hyperparameters. To this end, the pretrained VGG19 (Visual Geometry Group 19) model used to compute the VGG perceptual loss was replaced with VGG16 and ResNet50. In addition, the negative-slope hyperparameter of the Leaky ReLU activation function was changed from 0.2 to 0.1. After training the model, ResNet50 was identified as yielding the best result both qualitatively and quantitatively.
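For illustration only, the sketch below (not taken from the thesis) shows how the two hyperparameters described in the abstract could be exposed in code: a perceptual loss whose frozen backbone can be swapped between VGG19, VGG16, and ResNet50, and a Leaky ReLU whose negative slope is configurable (0.2 vs. 0.1). It assumes PyTorch and torchvision; the layer cut-offs, the single-layer L1 feature distance, and the image size are illustrative assumptions, not details from the original work.

# Minimal sketch (assumptions noted above), not the thesis implementation.
import torch
import torch.nn as nn
from torchvision import models

def build_feature_extractor(name: str) -> nn.Module:
    """Return a frozen, truncated ImageNet backbone used to compare feature maps."""
    if name == "vgg19":
        net = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:36]
    elif name == "vgg16":
        net = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:30]
    elif name == "resnet50":
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        # Keep everything up to and including layer4, dropping pooling and fc.
        net = nn.Sequential(*list(backbone.children())[:-2])
    else:
        raise ValueError(f"unknown backbone: {name}")
    for p in net.parameters():
        p.requires_grad = False
    return net.eval()

class PerceptualLoss(nn.Module):
    """L1 distance between backbone features of generated and target images."""
    def __init__(self, backbone: str = "vgg19"):
        super().__init__()
        self.features = build_feature_extractor(backbone)
        self.criterion = nn.L1Loss()

    def forward(self, generated: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        return self.criterion(self.features(generated), self.features(target))

# The activation whose negative slope is the tuned hyperparameter
# (0.2 in the baseline configuration, 0.1 in the variant explored here).
activation = nn.LeakyReLU(negative_slope=0.1, inplace=True)

# Example usage: perceptual loss with the ResNet50 backbone on dummy images.
if __name__ == "__main__":
    loss_fn = PerceptualLoss(backbone="resnet50")
    fake = torch.rand(1, 3, 256, 192)   # 256x192 is a common try-on image size
    real = torch.rand(1, 3, 256, 192)
    print(loss_fn(fake, real).item())

Swapping the backbone only changes the feature space in which generated and target garments are compared; the rest of the training loop can stay unchanged, which is what makes this a hyperparameter-style experiment.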
dc.languagespa
dc.publisherUniversidad de los Andes
dc.publisherIngeniería de Sistemas y Computación
dc.publisherFacultad de Ingeniería
dc.publisherDepartamento de Ingeniería de Sistemas y Computación
dc.relationS. Montes, "El comercio electrónico en la región creció 66% en 2020 y llegó a US$66.765 millones," La República, Colombia, Mar. 29, 2021. Accessed: Jul. 09, 2021. [Online]. Available: https://www.larepublica.co/globoeconomia/el-e-commerce-en-latinoamerica-aumento-66-durante-2020-y-llego-a-us66765-millones-3145702
dc.relationX. González, "E-commerce facturará US$5.386 millones al finalizar el año según informe de BlackSip," La República, Colombia, Mar. 29, 2021. Accessed: Jul. 09, 2021. [Online]. Available: https://www.larepublica.co/especiales/la-industria-del-e-commerce/e-commerce-facturara-us5386-millones-al-finalizar-el-ano-segun-informe-de-blacksip-3088455
dc.relationL. Ma, X. Jia, Q. Sun, B. Schiele, T. Tuytelaars, and L. Van Gool, "Pose Guided Person Image Generation," 2017, doi: 10.48550/ARXIV.1705.09368.
dc.relationN. Jetchev and U. Bergmann, "The Conditional Analogy GAN: Swapping Fashion Articles on People Images," 2017, doi: 10.48550/ARXIV.1709.04695.
dc.relationX. Han, Z. Wu, Z. Wu, R. Yu, and L. S. Davis, "VITON: An Image-based Virtual Try-on Network," 2017, doi: 10.48550/ARXIV.1711.08447.
dc.relationY. Pozdniakov, "Changing clothing on people images using generative adversarial networks," Master of Science, Ukrainian Catholic University, Lviv, 2020. [Online]. Available: http://www.er.ucu.edu.ua/bitstream/handle/1/1904/Pozdniakov%20-%20Changing%20Clothing%20on%20People%20Images.pdf?sequence=1&isAllowed=y
dc.relationM. Lucic, K. Kurach, M. Michalski, S. Gelly, and O. Bousquet, "Are GANs Created Equal? A Large-Scale Study," 2017, doi: 10.48550/ARXIV.1711.10337.
dc.relationK. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," 2015, doi: 10.48550/ARXIV.1512.03385.
dc.relationC. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the Inception Architecture for Computer Vision," 2015, doi: 10.48550/ARXIV.1512.00567.
dc.relationK. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," 2014, doi: 10.48550/ARXIV.1409.1556.
dc.relationR. Mohammadi, "Transfer Learning-Based Automatic Detection of Coronavirus Disease 2019 (COVID-19) from Chest X-ray Images," J Biomed Phys Eng, vol. 10, no. 5, Oct. 2020, doi: 10.31661/jbpe.v0i0.2008-1153.
dc.relationS. Honda, "VITON-GAN: Virtual Try-on Image Generator Trained with Adversarial Loss," Eurographics 2019 - Posters, 2 pages, 2019, doi: 10.2312/EGP.20191043.
dc.relationB. Wang, H. Zheng, X. Liang, Y. Chen, L. Lin, and M. Yang, "Toward Characteristic-Preserving Image-based Virtual Try-On Network," 2018, doi: 10.48550/ARXIV.1807.07688.
dc.relationE. Metheniti, G. Neumann, and J. van Genabith, "Linguistically inspired morphological inflection with a sequence to sequence model," 2020, doi: 10.48550/ARXIV.2009.02073.
dc.relationH. Noh, S. Hong, and B. Han, "Learning Deconvolution Network for Semantic Segmentation," 2015, doi: 10.48550/ARXIV.1505.04366.
dc.relationS. Guan, N. Kamona, and M. Loew, "Segmentation of Thermal Breast Images Using Convolutional and Deconvolutional Neural Networks," in 2018 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, Oct. 2018, pp. 1-7. doi: 10.1109/AIPR.2018.8707379.
dc.relationA. Malekijoo and M. J. Fadaeieslam, "Convolution-deconvolution architecture with the pyramid pooling module for semantic segmentation," Multimed Tools Appl, vol. 78, no. 22, pp. 32379-32392, Nov. 2019, doi: 10.1007/s11042-019-07990-7.
dc.relationI. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to Sequence Learning with Neural Networks," 2014, doi: 10.48550/ARXIV.1409.3215.
dc.relationA. Lou, S. Guan, N. Kamona, and M. Loew, "Segmentation of Infrared Breast Images Using MultiResUnet Neural Network," 2020, doi: 10.48550/ARXIV.2011.00376.
dc.relationD. Fourure, R. Emonet, E. Fromont, D. Muselet, A. Tremeau, and C. Wolf, "Residual Conv-Deconv Grid Network for Semantic Segmentation," 2017, doi: 10.48550/ARXIV.1707.07958.
dc.relationL. Mou and X. X. Zhu, "IM2HEIGHT: Height Estimation from Single Monocular Imagery via Fully Residual Convolutional-Deconvolutional Network," 2018, doi: 10.48550/ARXIV.1802.10249.
dc.relationM. M. M. Islam and J.-M. Kim, "Vision-Based Autonomous Crack Detection of Concrete Structures Using a Fully Convolutional Encoder-Decoder Network," Sensors, vol. 19, no. 19, p. 4251, Sep. 2019, doi: 10.3390/s19194251.
dc.relationZ. Shou, J. Chan, A. Zareian, K. Miyazawa, and S.-F. Chang, "CDC: Convolutional-De-Convolutional Networks for Precise Temporal Action Localization in Untrimmed Videos," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, Jul. 2017, pp. 1417-1426. doi: 10.1109/CVPR.2017.155.
dc.relationL. Ke, M.-C. Chang, H. Qi, and S. Lyu, "Multi-Scale Structure-Aware Network for Human Pose Estimation," 2018, doi: 10.48550/ARXIV.1803.09894.
dc.relationC. Szegedy et al., "Going Deeper with Convolutions," 2014, doi: 10.48550/ARXIV.1409.4842.
dc.relationA. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Commun. ACM, vol. 60, no. 6, pp. 84-90, May 2017, doi: 10.1145/3065386.
dc.relationO. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation," 2015, doi: 10.48550/ARXIV.1505.04597.
dc.relationD. Rao, X.-J. Wu, H. Li, J. Kittler, and T. Xu, "UMFA: a photorealistic style transfer method based on U-Net and multi-layer feature aggregation," J. Electron. Imag., vol. 30, no. 05, Sep. 2021, doi: 10.1117/1.JEI.30.5.053013.
dc.relationS. Jadon, "A survey of loss functions for semantic segmentation," in 2020 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), Via del Mar, Chile, Oct. 2020, pp. 1-7. doi: 10.1109/CIBCB48159.2020.9277638.
dc.relationA. Abu-Srhan, M. A. M. Abushariah, and O. S. Al-Kadi, "The effect of loss function on conditional generative adversarial networks," Journal of King Saud University - Computer and Information Sciences, p. S1319157822000519, Mar. 2022, doi: 10.1016/j.jksuci.2022.02.018.
dc.relationA. R. Tej, S. S. Halder, A. P. Shandeelya, and V. Pankajakshan, "Enhancing Perceptual Loss with Adversarial Feature Matching for Super-Resolution," 2020, doi: 10.48550/ARXIV.2005.07502.
dc.relationI. J. Goodfellow et al., "Generative Adversarial Networks," 2014, doi: 10.48550/ARXIV.1406.2661.
dc.relationT. Karras, T. Aila, S. Laine, and J. Lehtinen, "Progressive Growing of GANs for Improved Quality, Stability, and Variation," 2017, doi: 10.48550/ARXIV.1710.10196.
dc.relationX. Mao, Q. Li, H. Xie, R. Y. K. Lau, Z. Wang, and S. P. Smolley, "Least Squares Generative Adversarial Networks," 2016, doi: 10.48550/ARXIV.1611.04076.
dc.relationJ.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks," 2017, doi: 10.48550/ARXIV.1703.10593.
dc.relationP. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-Image Translation with Conditional Adversarial Networks," 2016, doi: 10.48550/ARXIV.1611.07004.
dc.relationL. A. Gatys, A. S. Ecker, and M. Bethge, "Image Style Transfer Using Convolutional Neural Networks," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, Jun. 2016, pp. 2414-2423. doi: 10.1109/CVPR.2016.265.
dc.relationK. He, X. Zhang, S. Ren, and J. Sun, "Identity Mappings in Deep Residual Networks," 2016, doi: 10.48550/ARXIV.1603.05027.
dc.rightsAttribution 4.0 International
dc.rightshttp://creativecommons.org/licenses/by/4.0/
dc.rightsinfo:eu-repo/semantics/openAccess
dc.rightshttp://purl.org/coar/access_right/c_abf2
dc.titleOptimización de hiperparámetros en la arquitectura Viton-GAN
dc.typeTrabajo de grado - Pregrado

