dc.contributor | Tibaduiza Burgos, Diego Alexander | |
dc.contributor | Leon-Medina, Jersson Xavier | |
dc.contributor | Grupo de Investigación en Electrónica de Alta Frecuencia y Telecomunicaciones (Cmun) | |
dc.contributor | Diego F. Godoy-Rojas [0000-0002-1639-7992] | |
dc.creator | Godoy Rojas, Diego Fernando | |
dc.date.accessioned | 2023-06-02T14:26:53Z | |
dc.date.accessioned | 2023-06-07T00:06:38Z | |
dc.date.available | 2023-06-02T14:26:53Z | |
dc.date.available | 2023-06-07T00:06:38Z | |
dc.date.created | 2023-06-02T14:26:53Z | |
dc.date.issued | 2022 | |
dc.identifier | https://repositorio.unal.edu.co/handle/unal/83956 | |
dc.identifier | Universidad Nacional de Colombia | |
dc.identifier | Repositorio Institucional Universidad Nacional de Colombia | |
dc.identifier | https://repositorio.unal.edu.co/ | |
dc.identifier.uri | https://repositorioslatinoamericanos.uchile.cl/handle/2250/6651736 | |
dc.description.abstract | En el presente documento se detalla el flujo de trabajo llevado a cabo para el desarrollo de modelos de aprendizaje profundo para la estimación de temperatura de pared media en dos hornos de arco eléctrico pertenecientes a la empresa Cerro Matoso S.A. El documento inicia con una introducción al contexto bajo el cual se desarrolló el trabajo final de maestría, dando paso a la descripción teórica de todos los aspectos relevantes y generalidades sobre el funcionamiento de la planta, las series de tiempo y el aprendizaje profundo requeridos durante el desarrollo del proyecto. El flujo de trabajo se divide en una metodología de 3 pasos: empieza por el estudio y preparación del conjunto de datos brindado por CMSA; sigue con el desarrollo, entrenamiento y selección de diversos modelos de aprendizaje profundo usados en predicciones con datos de un conjunto de prueba, obteniendo errores RMSE entre 1 y 2 °C; y finaliza con una etapa de validación que estudia el desempeño de los diversos modelos obtenidos frente a diversas variaciones en las condiciones de los parámetros de entrenamiento. (Texto tomado de la fuente) | |
dc.description.abstract | This document details the workflow followed to develop deep learning models for estimating the mean wall temperature in two electric arc furnaces belonging to the company Cerro Matoso S.A. The document begins by establishing the context in which the final master's degree project was developed, then gives a theoretical description of the relevant aspects and generalities about the operation of the plant, time series, and deep learning required during the project. The workflow is divided into a 3-step methodology: first, the study and preparation of the data set provided by CMSA; second, the development, training, and selection of various deep learning models used for predictions on a test set, obtaining RMSE errors between 1 and 2 °C; and finally, a validation stage that studies the performance of the models obtained under various changes to the training-parameter conditions. | |
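The abstract reports test-set RMSE errors between 1 and 2 °C. As a minimal illustrative sketch (not the thesis code; the temperature values below are hypothetical, not data from CMSA), the RMSE metric used to score the models can be computed as:

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between observed and predicted values (here, °C)."""
    assert len(y_true) == len(y_pred) and len(y_true) > 0
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Hypothetical wall-temperature readings vs. model predictions (°C)
observed = [851.0, 852.5, 850.8, 853.1]
predicted = [850.2, 853.9, 849.5, 854.0]
print(round(rmse(observed, predicted), 2))  # → 1.13, within the 1–2 °C range cited
```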
dc.language | spa | |
dc.publisher | Universidad Nacional de Colombia | |
dc.publisher | Bogotá - Ingeniería - Maestría en Ingeniería - Automatización Industrial | |
dc.publisher | Facultad de Ingeniería | |
dc.publisher | Bogotá, Colombia | |
dc.publisher | Universidad Nacional de Colombia - Sede Bogotá | |
dc.relation | D. Tibaduiza et al., “Structural Health Monitoring System for Furnace Refractory Wall
Thickness Measurements at Cerro Matoso SA”, Lecture Notes in Civil Engineering, pp.
414-423, 2021. DOI: 10.1007/978-3-030-64594-6_41 | |
dc.relation | F. Pozo et al., “Structural health monitoring and condition monitoring applications: sensing, distributed communication and processing”, International Journal of Distributed Sensor Networks, vol. 16, no. 9, pp. 1-3, 2020. DOI: 10.1177/1550147720963270 | |
dc.relation | J. Birat, “A futures study analysis of the technological evolution of the EAF by 2010”,
Revue de Métallurgie, vol. 97, no. 11, pp. 1347-1363, 2000. DOI: 10.1051/metal:2000114 | |
dc.relation | “Redes neuronales profundas - Tipos y Características - Código Fuente”, Código Fuente, 2021. [Online]. Available: https://www.codigofuente.org/redes-neuronales-profundas-tipos-caracteristicas/. [Accessed: 17-Jul-2021]. | |
dc.relation | “Illustrated Guide to LSTM’s and GRU’s: A step by step explanation”, Medium, 2021. [Online]. Available: https://towardsdatascience.com/illustrated-guide-to-lstms-and-grus-a-step-by-step-explanation-44e9eb85bf21. [Accessed: 17-Jul-2021]. | |
dc.relation | “Major Mines & Projects | Cerro Matoso Mine”, Miningdataonline.com, 2021. [Online]. Available: https://miningdataonline.com/property/336/Cerro-Matoso-Mine.aspx. [Accessed: 25-Nov-2021] | |
dc.relation | J. Janzen, T. Gerritsen, N. Voermann, E. R. Veloza, and R. C. Delgado, “Integrated Furnace Controls: Implementation on a Covered-Arc (Shielded Arc) Furnace at Cerro Matoso”, in Proceedings of the 10th International Ferroalloys Congress, Cape Town, South Africa, Feb. 1-4, 2004, pp. 659-669. | |
dc.relation | R. Garcia-Segura, J. Vázquez Castillo, F. Martell-Chavez, O. Longoria-Gandara, and J.
Ortegón Aguilar, “Electric Arc Furnace Modeling with Artificial Neural Networks and Arc
Length with Variable Voltage Gradient,” Energies, vol. 10, no. 9, p. 1424, Sep. 2017 | |
dc.relation | C. Chen, Y. Liu, M. Kumar, and J. Qin, “Energy Consumption Modelling Using Deep
Learning Technique — A Case Study of EAF”, Procedia CIRP, vol. 72, pp. 1063-1068,
2018. DOI: 10.1016/j.procir.2018.03.095. | |
dc.relation | S. Ismaeel, A. Miri, A. Sadeghian, and D. Chourishi, “An Extreme Learning Machine (ELM) Predictor for Electric Arc Furnaces’ v-i Characteristics,” 2015 IEEE 2nd International Conference on Cyber Security and Cloud Computing, 2015, pp. 329-334, DOI: 10.1109/CSCloud.2015.94. | |
dc.relation | J. Mesa Fernández, V. Cabal, V. Montequin and J. Balsera, “Online estimation
of electric arc furnace tap temperature by using fuzzy neural networks”, Engineering Applications of Artificial Intelligence, vol. 21, no. 7, pp. 1001-1012, 2008. DOI:
10.1016/j.engappai.2007.11.008. | |
dc.relation | M. Kordos, M. Blachnik and T. Wieczorek, “Temperature Prediction in Electric Arc Furnace with Neural Network Tree”, Lecture Notes in Computer Science, pp. 71-78, 2011. DOI: 10.1007/978-3-642-21738-8_10. | |
dc.relation | J. Camacho et al., “A Data Cleaning Approach for a Structural Health Monitoring
System in a 75 MW Electric Arc Ferronickel Furnace”, Proceedings of 7th International
Electronic Conference on Sensors and Applications, 2020. DOI: 10.3390/ecsa-7-08245. | |
dc.relation | J. Leon-Medina et al., “Deep Learning for the Prediction of Temperature Time Series in
the Lining of an Electric Arc Furnace for Structural Health Monitoring at Cerro Matoso
S.A. (CMSA)”, Proceedings of 7th International Electronic Conference on Sensors and
Applications, 2020. DOI: 10.3390/ecsa-7-08246. | |
dc.relation | J. Leon-Medina et al., “Temperature Prediction Using Multivariate Time Series Deep
Learning in the Lining of an Electric Arc Furnace for Ferronickel Production”, Sensors,
vol. 21, no. 20, p. 6894, 2021. DOI: 10.3390/s21206894. | |
dc.relation | R. Wan, S. Mei, J. Wang, M. Liu, and F. Yang, “Multivariate Temporal Convolutional
Network: A Deep Neural Networks Approach for Multivariate Time Series Forecasting”,
Electronics, vol. 8, no. 8, p. 876, 2019. DOI: 10.3390/electronics8080876 | |
dc.relation | S. Shih, F. Sun, and H. Lee, “Temporal pattern attention for multivariate time series forecasting”, Machine Learning, vol. 108, no. 8-9, pp. 1421-1441, 2019. DOI: 10.1007/s10994-019-05815-0 | |
dc.relation | S. Du, T. Li, Y. Yang and S. Horng, “Multivariate time series forecasting via attention-based encoder–decoder framework”, Neurocomputing, vol. 388, pp. 269-279, 2020. DOI:
10.1016/j.neucom.2019.12.118. | |
dc.relation | S. Huang, D. Wang, X. Wu, and A. Tang, “DSANet: Dual Self-Attention Network for
Multivariate Time Series Forecasting”, Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 2019. DOI: 10.1145/3357384.3358132 | |
dc.relation | CMSA, PR032018OP - Manual del Sistema de Control Estructural del Horno Eléctrico
412-FC-01, 02 ed., 2017. | |
dc.relation | D. F. Godoy-Rojas et al., “Attention-Based Deep Recurrent Neural Network to Forecast
the Temperature Behavior of an Electric Arc Furnace Side-Wall,” Sensors, vol. 22, no. 4,
p. 1418, Feb. 2022, doi: 10.3390/s22041418. | |
dc.relation | American Petroleum Institute (API), “API RP 551 - Process Measurement”, 2nd ed., pp. 30-36, February 2016. Available: https://standards.globalspec.com/std/9988220/API%20RP | |
dc.relation | “Specification for temperature-electromotive force (EMF) tables for standardized thermocouples”, ASTM. DOI: 10.1520/e0230_e0230m-17. | |
dc.relation | W. W. S. Wei, “Time Series analysis”, Oxford Handbooks Online, pp. 458–485, 2013. | |
dc.relation | J. D. Hamilton, “Time Series analysis”, Princeton, NJ: Princeton University Press, 2020. | |
dc.relation | P. P. Shinde and S. Shah, “A Review of Machine Learning and Deep Learning Applications”, 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 2018, pp. 1-6, doi: 10.1109/ICCUBEA.2018.8697857. | |
dc.relation | Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning”, Nature, vol. 521, pp. 436-444, 2015. DOI: 10.1038/nature14539. | |
dc.relation | L. Zhang, J. Tan, D. Han, and H. Zhu, “From machine learning to Deep Learning:
Progress in Machine Intelligence for Rational Drug Discovery”, Drug Discovery Today,
vol. 22, no. 11, pp. 1680–1685, 2017. | |
dc.relation | C. M. Bishop, “Neural networks and their applications”, Review of Scientific Instruments, vol. 65, no. 6, pp. 1803–1832, 1994. | |
dc.relation | K. Suzuki, Ed., “Artificial Neural Networks - Architectures and Applications”, Jan.
2013, doi: 10.5772/3409. | |
dc.relation | C. Zanchettin and T. B. Ludermir, “A methodology to train and improve artificial
neural networks weights and connections”, The 2006 IEEE International Joint Conference
on Neural Network Proceedings. | |
dc.relation | P. Sibi, S. A. Jones, and P. Siddarth, “Analysis of different activation functions using back propagation neural networks”, Journal of Theoretical and Applied Information Technology, vol. 47, no. 3, pp. 1264-1268, 2013. | |
dc.relation | A. D. Rasamoelina, F. Adjailia and P. Sincák, “A Review of Activation Function for Artificial Neural Network”, 2020 IEEE 18th World Symposium on Applied Machine Intelligence and Informatics (SAMI), Herlany, Slovakia, 2020, pp. 281-286, doi: 10.1109/SAMI48414.2020.9108717 | |
dc.relation | S. Sharma, S. Sharma, and A. Athaiya, “Activation functions in neural networks”, Towards Data Science, vol. 6, no. 12, pp. 310-316, 2017. | |
dc.relation | D. L. Elliott, “A better activation function for artificial neural networks”, 1993. | |
dc.relation | J. Xu et al., “A semantic loss function for deep learning with symbolic knowledge”, International Conference on Machine Learning, PMLR, 2018, pp. 5502-5511. | |
dc.relation | T.-H. Lee, “Loss functions in time series forecasting”, International Encyclopedia of the Social Sciences, 2008, pp. 495-502. | |
dc.relation | T. O. Hodson, “Root-mean-square error (RMSE) or mean absolute error (MAE): when to use them or not”, Geoscientific Model Development, vol. 15, no. 14, pp. 5481-5487, 2022. | |
dc.relation | C. Alippi, “Weight update in back-propagation neural networks: The role of activation
functions”, 1991 IEEE International Joint Conference on Neural Networks, 1991. | |
dc.relation | D. Svozil, V. Kvasnicka, and Pospichal Jirí, “Introduction to multi-layer feed-forward
neural networks”, Chemometrics and Intelligent Laboratory Systems, vol. 39, no. 1, pp.
43–62, 1997. | |
dc.relation | S. Ruder, “An overview of gradient descent optimization algorithms”, arXiv:1609.04747 | |
dc.relation | S.-I. Amari, “Backpropagation and stochastic gradient descent method”, Neurocomputing, vol. 5, no. 4-5, pp. 185–196, 1993. | |
dc.relation | L. N. Smith, “A disciplined approach to neural network hyper-parameters: Part 1 - learning rate, batch size, momentum, and weight decay”, arXiv:1803.09820, 2018. | |
dc.relation | N. Bacanin, T. Bezdan, E. Tuba, I. Strumberger, and M. Tuba, “Optimizing convolutional neural network hyperparameters by enhanced swarm intelligence metaheuristics”,
Algorithms, vol. 13, no. 3, p. 67, 2020. | |
dc.relation | M. Kuan and K. Hornik, “Convergence of learning algorithms with constant learning
rates”, IEEE Transactions on Neural Networks, vol. 2, no. 5, pp. 484-489, Sept. 1991, doi:
10.1109/72.134285. | |
dc.relation | D. R. Wilson and T. R. Martinez, “The need for small learning rates on large problems”, IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222), Washington, DC, USA, 2001, pp. 115-119, vol. 1, doi: 10.1109/IJCNN.2001.939002 | |
dc.relation | Y. Bengio, “Gradient-based optimization of hyperparameters”, Neural Computation,
vol. 12, no. 8, pp. 1889–1900, 2000. | |
dc.relation | “Recurrent neural networks architectures”, Wiley Series in Adaptive and Learning Systems for Signal Processing, Communications and Control, pp. 69–89. | |
dc.relation | A. L. Caterini and D. E. Chang, “Recurrent neural networks”, Deep Neural Networks
in a Mathematical Framework, pp. 59–79, 2018. | |
dc.relation | I. Sutskever, “Training recurrent neural networks”, Ph.D. thesis, University of Toronto, Toronto, ON, Canada, 2013. | |
dc.relation | A. Sherstinsky, “Fundamentals of Recurrent Neural Network (RNN) and long short-term memory (LSTM) network”, Physica D: Nonlinear Phenomena, vol. 404, p. 132306,
2020 | |
dc.relation | S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory”, In Neural Computation, vol. 9, no. 8, pp. 1735-1780, 15 Nov. 1997, doi: 10.1162/neco.1997.9.8.1735. | |
dc.relation | Y. Hua, Z. Zhao, R. Li, X. Chen, Z. Liu and H. Zhang, “Deep Learning with Long
Short-Term Memory for Time Series Prediction”, In IEEE Communications Magazine,
vol. 57, no. 6, pp. 114-119, June 2019, doi: 10.1109/MCOM.2019.1800155. | |
dc.relation | X. Song, Y. Liu, L. Xue, J. Wang, J. Zhang, J. Wang, L. Jiang, and Z. Cheng, “Time-series well performance prediction based on Long Short-Term Memory (LSTM) neural
network model”, Journal of Petroleum Science and Engineering, vol. 186, p. 106682, 2020. | |
dc.relation | R. Dey and F. M. Salem, “Gate-variants of Gated Recurrent Unit (GRU) neural networks”, 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), Boston, MA, USA, 2017, pp. 1597-1600, doi: 10.1109/MWSCAS.2017.8053243. | |
dc.relation | Y. Wang, W. Liao, and Y. Chang, “Gated recurrent unit network-based short-term
photovoltaic forecasting”, Energies, vol. 11, no. 8, p. 2163, 2018. | |
dc.relation | H. Lin, A. Gharehbaghi, Q. Zhang, S. S. Band, H. T. Pai, K.-W. Chau, and A. Mosavi,
“Time Series-based groundwater level forecasting using gated recurrent unit deep neural
networks”, Engineering Applications of Computational Fluid Mechanics, vol. 16, no. 1,
pp. 1655–1672, 2022 | |
dc.relation | S. H. Park, B. Kim, C. M. Kang, C. C. Chung and J. W. Choi, “Sequence-to-Sequence Prediction of Vehicle Trajectory via LSTM Encoder-Decoder Architecture”, 2018
IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 2018, pp. 1672-1678, doi:
10.1109/IVS.2018.8500658. | |
dc.relation | R. Laubscher, “Time-series forecasting of coal-fired power plant reheater metal temperatures using encoder-decoder recurrent neural networks”, Energy, vol. 189, p. 116187,
2019. | |
dc.relation | C. Olah and S. Carter, “Attention and augmented recurrent neural networks”, Distill,
08-Sep-2016. [Online]. Disponible: https://distill.pub/2016/augmented-rnns/. [Acceso: 15-
Sep-2022]. | |
dc.relation | Z. Niu, G. Zhong, and H. Yu, “A review on the attention mechanism of Deep Learning”,
Neurocomputing, vol. 452, pp. 48–62, 2021. | |
dc.relation | Y. Qin, D. Song, H. Chen, W. Cheng, G. Jiang, and G. W. Cottrell, “A dual-stage
attention-based recurrent neural network for time series prediction”, Proceedings of the
Twenty-Sixth International Joint Conference on Artificial Intelligence, 2017. | |
dc.relation | IT-0003-A28-C3-V1-18.11.2019 - Informe preliminar con análisis estadístico de datos y
correlaciones posibles. | |
dc.relation | IT-O3O4-C15C34.2.3-V1-17.06.2020 - Informe técnico de caracterización e identificación
de variables del horno línea 1 FC01. | |
dc.relation | IT-O3O4.C38.2.1-V1-04.10.2021 - Informe técnico de caracterización e identificación de
variables del horno línea 2 FC150. | |
dc.rights | Atribución-NoComercial 4.0 Internacional | |
dc.rights | http://creativecommons.org/licenses/by-nc/4.0/ | |
dc.rights | info:eu-repo/semantics/openAccess | |
dc.title | Aprendizaje profundo para la predicción de temperatura en las paredes refractarias de un horno de arco eléctrico | |
dc.type | Trabajo de grado - Maestría | |