dc.contributor | Giraldo Trujillo, Luis Felipe | |
dc.contributor | Zambrano Jacobo, Andrés Felipe | |
dc.creator | Amaya Carreño, Juan David | |
dc.date.accessioned | 2022-07-29T12:25:59Z | |
dc.date.available | 2022-07-29T12:25:59Z | |
dc.date.created | 2022-07-29T12:25:59Z | |
dc.date.issued | 2022-06-28 | |
dc.identifier | http://hdl.handle.net/1992/59341 | |
dc.identifier | instname:Universidad de los Andes | |
dc.identifier | reponame:Repositorio Institucional Séneca | |
dc.identifier | repourl:https://repositorio.uniandes.edu.co/ | |
dc.description.abstract | This work continues the use of Deep Reinforcement Learning (DRL) in Real-Time Hybrid Simulation (RTHS) to design an agent capable of performing both tracking control and phase-lead compensation between the responses of the numerical and experimental partitions of the simulation environment. Results from several tests show that the agent achieved good performance, even outperforming previously developed alternatives. | |
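The abstract frames the controller design as a reinforcement-learning problem: the agent observes the numerical partition's reference and the experimental partition's measured response, and must learn to both track the reference and lead it in phase to cancel actuator lag. The record contains no code, so the following is only a minimal sketch of that problem formulation; the first-order actuator dynamics, sinusoidal reference, observation set, and quadratic-error reward are all assumptions, not the benchmark's actual definitions.

```python
import numpy as np

# Illustrative sketch only -- not the thesis code. The toy first-order
# plant stands in for the RTHS transfer system (actuator + experimental
# partition); its lag is what forces the agent to learn phase-lead
# compensation on top of plain tracking.

class RTHSTrackingEnv:
    def __init__(self, dt=1e-3, tau=0.02, freq_hz=5.0, horizon=2.0):
        self.dt = dt              # integration step (s)
        self.tau = tau            # actuator time constant -> phase lag (s)
        self.freq_hz = freq_hz    # frequency of the toy reference signal
        self.horizon = horizon    # episode length (s)
        self.reset()

    def reset(self):
        self.t = 0.0
        self.y = 0.0              # measured "experimental" displacement
        return self._obs()

    def _reference(self):
        # Stand-in for the numerical partition's commanded displacement.
        return np.sin(2.0 * np.pi * self.freq_hz * self.t)

    def _obs(self):
        r = self._reference()
        return np.array([r, self.y, r - self.y])  # reference, output, error

    def step(self, action):
        # First-order lag between command u and response y: tracking the
        # reference well requires commanding ahead of it in phase.
        u = float(action)
        self.y += (self.dt / self.tau) * (u - self.y)
        self.t += self.dt
        err = self._reference() - self.y
        reward = -err ** 2        # penalize squared tracking error
        done = self.t >= self.horizon
        return self._obs(), reward, done


# Baseline rollout with a hand-tuned proportional policy; a trained DDPG
# actor network would replace `policy` below.
env = RTHSTrackingEnv()
policy = lambda obs: 2.0 * obs[0] + 5.0 * obs[2]
obs, total_reward, done = env.reset(), 0.0, False
while not done:
    obs, reward, done = env.step(policy(obs))
    total_reward += reward
print(f"Return of the proportional baseline: {total_reward:.2f}")
```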
dc.language | spa | |
dc.publisher | Universidad de los Andes | |
dc.publisher | Ingeniería Electrónica | |
dc.publisher | Facultad de Ingeniería | |
dc.publisher | Departamento de Ingeniería Eléctrica y Electrónica | |
dc.relation | A. F. Niño, A. P. Betacur, P. Miranda, J. D. Amaya, M. G. Soto, C. E. Silva and L. F. Giraldo, Using Deep Reinforcement Learning to design a tracking controller for a Real-Time Hybrid Simulation benchmark problem, BA thesis, Departamento de Ingeniería Eléctrica y Electrónica, Universidad de los Andes, Bogotá, 2021. | |
dc.relation | M. J. Harris and R. E. Christenson. (2020, Jul.). Real-time hybrid simulation analysis of moat impacts in a base-isolated structure, Frontiers in Built Environment. DOI: http://dx.doi.org/10.3389/fbuil.2020.00120 | |
dc.relation | G. Ou, A. Maghareh, X. Gao, N. Castaneda and S. J. Dyke. (n.d.). Cyber-Physical Instrument for Real-time Hybrid Structural Testing (MRI). [Online]. Available at https://engineering.purdue.edu/Bowen/Projects/All/cyberphysical-instrument-for-realtime-hybrid-structural-testing-mri/CIRST-Highlights.pdf | |
dc.relation | E. E. Bas and M. A. Moustafa. (2020, Jun.). Performance and Limitations of Real-Time Hybrid Simulation with Nonlinear Computational Substructures, Experimental Techniques, vol. 44, pp. 715-734. DOI: https://doi.org/10.1007/s40799-020-00385-6 | |
dc.relation | C. E. Silva, D. Gomez, A. Maghareh, S. J. Dyke and B. F. Spencer Jr. (2020, Jan.). Benchmark control problem for real-time hybrid simulation, Mechanical Systems and Signal Processing, vol. 135, no. 106381. DOI: https://doi.org/10.1016/j.ymssp.2019.106381 | |
dc.relation | V. Mnih et al. (2015, Feb.). Human-level control through deep reinforcement learning, Nature, vol. 518, pp. 529-533. DOI: https://doi.org/10.1038/nature14236 | |
dc.relation | D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra and M. Riedmiller, Deterministic Policy Gradient Algorithms, in Proceedings of the 31st International Conference on Machine Learning, Beijing, China, 2014. In Proceedings of Machine Learning Research, vol. 32(1), pp. 387-395. Available at https://www.davidsilver.uk/wp-content/uploads/2020/03/deterministic-policy-gradients.pdf | |
dc.relation | R. S. Sutton, D. McAllester, S. Singh and Y. Mansour, Policy gradient methods for reinforcement learning with function approximation, in Neural Information Processing Systems 12, Florham Park, New Jersey, 1999, pp. 1057-1063. | |
dc.relation | T. P. Lillicrap et al. (2016). Continuous control with deep reinforcement learning, CoRR, abs/1509.02971. Available at https://arxiv.org/abs/1509.02971 | |
dc.relation | S. Ioffe and C. Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, in International Conference on Machine Learning, 2015. Available at https://arxiv.org/abs/1502.03167 | |
dc.relation | MathWorks. (n.d.). Deep Learning in MATLAB. [Online]. Available at https://www.mathworks.com/help/deeplearning/ug/deep-learning-in-matlab.html | |
dc.relation | N. P. Lawrence, M. G. Forbes, P. D. Loewen, D. G. McClement, J. U. Backstrom and R. B. Gopaluni. (2022). Deep Reinforcement Learning with Shallow Controllers: An Experimental Application to PID Tuning, ArXiv, abs/2111.07171. Available at https://arxiv.org/abs/2111.07171 | |
dc.relation | L. Chan. (2021, Mar. 06). 3 Common Problems with Neural Network Initialization. [Online]. Available at https://towardsdatascience.com/3-common-problems-with-neural-network-initialisation-5e6cacfcd8e6 | |
dc.relation | L. N. Smith, Cyclical Learning Rates for Training Neural Networks, in 2017 IEEE Winter Conference on Applications of Computer Vision, 2017, pp. 464-472. Available at https://arxiv.org/abs/1506.01186 | |
dc.relation | MathWorks. (n.d.). rlDDPGAgentOptions. [Online]. Available at https://www.mathworks.com/help/reinforcement-learning/ref/rlddpgagentoptions.html | |
dc.relation | S. Zhang and R. S. Sutton. (2017). A Deeper Look at Experience Replay, ArXiv, abs/1712.01275. Available at https://arxiv.org/abs/1712.01275 | |
dc.relation | J. E. Duque. (n.d.). 3. Controladores y acciones de control. [Online]. Available at http://www.geocities.ws/joeldupar/control2/pid | |
dc.relation | Fundamentos del control de procesos usando el programa LVPROSIM, Lab-Volt Ltda., Canada, 2004. Available at http://biblio3.url.edu.gt/Publi/Libros/2013/ManualesIng/FundamenP-O.pdf | |
dc.relation | MathWorks. (n.d.). Train Reinforcement Learning Agents. [Online]. Available at https://www.mathworks.com/help/reinforcement-learning/ug/train-reinforcement-learning-agents.html | |
dc.relation | C. Wu, A. Alomar and S. Jagwani. (2021, Apr. 29). Lecture 19: Off-Policy Learning. [Online]. Available at https://web.mit.edu/6.246/www/notes/L18-notes.pdf | |
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | |
dc.rights | https://repositorio.uniandes.edu.co/static/pdf/aceptacion_uso_es.pdf | |
dc.rights | info:eu-repo/semantics/openAccess | |
dc.rights | http://purl.org/coar/access_right/c_abf2 | |
dc.title | Aprendizaje por refuerzo profundo para la compensación de fase y control de seguimiento en simulación híbrida | |
dc.type | Undergraduate thesis | |