dc.contributor | Lozano Martínez, Fernando Enrique | |
dc.contributor | Jiménez Estévez, Guillermo Andrés | |
dc.contributor | Marín Collazos, Luis Gabriel | |
dc.contributor | Giraldo Trujillo, Luis Felipe | |
dc.contributor | Mendoza Araya, Patricio | |
dc.creator | Garrido Urbano, César Daniel | |
dc.date.accessioned | 2022-07-14T16:40:03Z | |
dc.date.available | 2022-07-14T16:40:03Z | |
dc.date.created | 2022-07-14T16:40:03Z | |
dc.date.issued | 2021 | |
dc.identifier | http://hdl.handle.net/1992/58828 | |
dc.identifier | instname:Universidad de los Andes | |
dc.identifier | reponame:Repositorio Institucional Séneca | |
dc.identifier | repourl:https://repositorio.uniandes.edu.co/ | |
dc.description.abstract | The increasing use of distributed and renewable energy resources poses a challenge for traditional control methods. This is due to the higher complexity and uncertainty introduced by these new technologies, especially in smaller self-sufficient systems such as microgrids. To address this challenge, reinforcement learning algorithms are used to design and implement an energy management system (EMS) for different microgrid configurations. Reinforcement Learning (RL) approaches seek to train an agent through its interaction with the environment rather than from direct data, as in supervised learning. With this in mind, the energy management problem is formulated as a Markov decision process and solved using different state-of-the-art Deep Reinforcement Learning (DRL) algorithms, such as Deep Q-Networks (DQN), Proximal Policy Optimization (PPO) and Twin Delayed Deep Deterministic Policy Gradient (TD3). Additionally, these results are compared against traditional EMS implementations, such as Rule-Based control and Model Predictive Control (MPC), used as benchmarks. Simulations are run with the novel Pymgrid module, built for this purpose. Results show that DRL EMS agents achieve results comparable to some of the classical implementations, with possible benefits for generic and specific use cases. The source code of this project can be found at: https://github.com/Cesard97/DRL-Microgrid-Control | |
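As a rough illustration of the approach summarized in the abstract, the sketch below trains a DRL agent on a Gym-style microgrid environment. It is not the thesis code: the MicrogridEnv class, its toy load/PV profiles, and the import-cost reward are simplified stand-ins for the Pymgrid environments used in the project, and Stable-Baselines3 PPO is only one possible choice of DRL implementation.

import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO


class MicrogridEnv(gym.Env):
    """Toy microgrid EMS environment: observation = (load, pv, soc),
    action = normalized battery charge/discharge power."""

    def __init__(self, horizon=24):
        super().__init__()
        self.horizon = horizon
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        self.observation_space = gym.spaces.Box(0.0, 1.5, shape=(3,), dtype=np.float32)

    def _obs(self):
        load = 0.6 + 0.3 * np.sin(2 * np.pi * self.t / self.horizon)  # demand profile
        pv = max(0.0, np.sin(np.pi * self.t / self.horizon))          # solar profile
        return np.array([load, pv, self.soc], dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.soc = 0, 0.5
        return self._obs(), {}

    def step(self, action):
        load, pv, _ = self._obs()
        batt = 0.1 * float(action[0])                        # battery power this step
        self.soc = float(np.clip(self.soc + batt, 0.0, 1.0))
        grid_import = max(0.0, load - pv - batt)             # energy bought from the grid
        reward = -grid_import                                # minimize import cost
        self.t += 1
        return self._obs(), reward, self.t >= self.horizon, False, {}


env = MicrogridEnv()
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)

obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
print("dispatch action for the first hour:", action)

The same training loop applies to the other agents in the comparison, e.g. DQN with a discretized action space, or TD3 for continuous dispatch.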
dc.language | eng | |
dc.publisher | Universidad de los Andes | |
dc.publisher | Maestría en Ingeniería Eléctrica | |
dc.publisher | Facultad de Ingeniería | |
dc.publisher | Departamento de Ingeniería Eléctrica y Electrónica | |
dc.relation | Zhang, Z., Zhang, D., & Qiu, R. C. (2020). Deep reinforcement learning for power system applications: An overview. CSEE Journal of Power and Energy Systems, 6(1), 213-225. | |
dc.relation | Ye, Y., Qiu, D., Wu, X., Strbac, G., & Ward, J. (2020). Model-Free Real-Time Autonomous Control for a Residential Multi-Energy System Using Deep Reinforcement Learning. IEEE Transactions on Smart Grid, 11(4), 3068-3082. | |
dc.relation | Henri, G., Levent, T., Halev, A., Alami, R., & Cordier, P. (2020). pymgrid: An Open-Source Python Microgrid Simulator for Applied Artificial Intelligence Research. arXiv. | |
dc.relation | Zia, M. F., Elbouchikhi, E., & Benbouzid, M. (2018). Microgrids energy management systems: A critical review on methods, solutions, and prospects. Applied Energy, 222, 1033-1055. doi:10.1016/j.apenergy.2018.04.103 | |
dc.relation | Boqtob, O., Moussaoui, H., El Markhi, H., & Lamhamdi, T. (2019). Microgrid energy management system: A state-of-the-art review. Journal of Electrical Systems, 15, 53-67. | |
dc.relation | Espín-Sarzosa, D., Palma-Behnke, R., & Núñez-Mata, O. (2020). Energy Management Systems for Microgrids: Main Existing Trends in Centralized Control Architectures. Energies, 13(3), 547. doi:10.3390/en13030547 | |
dc.relation | Mbuwir, B., Ruelens, F., Spiessens, F., & Deconinck, G. (2017). Battery Energy Management in a Microgrid Using Batch Reinforcement Learning. Energies, 10(11), 1846. doi:10.3390/en10111846 | |
dc.relation | Hu, J., Shan, Y., Guerrero, J. M., Ioinovici, A., Chan, K. W., & Rodriguez, J. (2021). Model predictive control of microgrids: An overview. Renewable and Sustainable Energy Reviews, 136, 110422. doi:10.1016/j.rser.2020.110422 | |
dc.relation | van Hasselt, H., Guez, A., & Silver, D. (2016). Deep Reinforcement Learning with Double Q-Learning. Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 2094-2100. Phoenix, Arizona: AAAI Press. | |
dc.relation | Lincoln, R., Galloway, S., Stephen, B., & Burt, G. (2012). Comparing Policy Gradient and Value Function Based Reinforcement Learning Methods in Simulated Electrical Power Trade. IEEE Transactions on Power Systems, 27(1), 373-380. doi:10.1109/TPWRS.2011.2166091 | |
dc.relation | Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction. Cambridge, MA, USA: A Bradford Book. | |
dc.relation | Watkins, C. J. C. H. (1989). Learning from Delayed Rewards (Doctoral dissertation). King's College, Cambridge, UK. | |
dc.relation | Williams, R. J. (1992). Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Machine Learning, 8(3-4), 229-256. doi:10.1007/BF00992696 | |
dc.relation | Ceusters, G., Rodríguez, R. C., García, A. B., Franke, R., Deconinck, G., Helsen, L., & Camargo, L. R. (2021). Model-predictive control and reinforcement learning in multi-energy system case studies. Applied Energy, 303, 117634. doi:10.1016/j.apenergy.2021.117634 | |
dc.relation | Ministerio de Minas y Energía, República de Colombia. (2014). Ley N.° 1715. | |
dc.relation | Tomin, N., Zhukov, A., & Domyshev, A. (2019). Deep Reinforcement Learning for Energy Microgrids Management Considering Flexible Energy Sources. EPJ Web of Conferences, 217, 01016. doi:10.1051/epjconf/201921701016 | |
dc.relation | Marín, L. G., Sumner, M., Muñoz-Carpintero, D., Köbrich, D., Pholboon, S., Sáez, D., & Núñez, A. (2019). Hierarchical Energy Management System for Microgrid Operation Based on Robust Model Predictive Control. Energies, 12(23). doi:10.3390/en12234453 | |
dc.relation | Akiba, T., Sano, S., Yanase, T., Ohta, T., & Koyama, M. (2019). Optuna: A Next-generation Hyperparameter Optimization Framework. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. | |
dc.relation | Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. A. (2013). Playing Atari with Deep Reinforcement Learning. CoRR, abs/1312.5602. | |
dc.relation | Hasselt, H. V. (2010). Double Q-Learning. In Lafferty, J. D., Williams, C. K. I., Shawe-Taylor, J., Zemel, R. S., & Culotta, A. (Eds.), Advances in Neural Information Processing Systems (pp. 2613-2621). New York: Curran Associates, Inc. | |
dc.relation | Hill, A., Raffin, A., Ernestus, M., Gleave, A., Kanervisto, A., Traore, R., & Wu, Y. (2018). Stable Baselines. GitHub repository. | |
dc.relation | Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal Policy Optimization Algorithms. CoRR, abs/1707.06347. | |
dc.relation | Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D., & Kavukcuoglu, K. (2016). Asynchronous Methods for Deep Reinforcement Learning. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research, 48, 1928-1937. Available from https://proceedings.mlr.press/v48/mniha16.html | |
dc.relation | Fujimoto, S., Hoof, H., & Meger, D. (2018). Addressing Function Approximation Error in Actor-Critic Methods. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research, 80, 1587-1596. Available from https://proceedings.mlr.press/v80/fujimoto18a.html | |
dc.relation | OpenAI. (2019). Introduction to RL: Kinds of RL Algorithms. https://spinningup.openai.com/en/latest/spinningup/rl_intro2.html | |
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | |
dc.rights | https://repositorio.uniandes.edu.co/static/pdf/aceptacion_uso_es.pdf | |
dc.rights | info:eu-repo/semantics/openAccess | |
dc.rights | http://purl.org/coar/access_right/c_abf2 | |
dc.title | Energy management system for microgrids based on deep reinforcement learning | |
dc.type | Trabajo de grado - Maestría (Master's thesis) | |