dc.contributorLozano Martínez, Fernando Enrique
dc.contributorJiménez Estévez, Guillermo Andrés
dc.contributorMarín Collazos, Luis Gabriel
dc.contributorGiraldo Trujillo, Luis Felipe
dc.contributorMendoza Araya, Patricio
dc.creatorGarrido Urbano, César Daniel
dc.date.accessioned2022-07-14T16:40:03Z
dc.date.available2022-07-14T16:40:03Z
dc.date.created2022-07-14T16:40:03Z
dc.date.issued2021
dc.identifierhttp://hdl.handle.net/1992/58828
dc.identifierinstname:Universidad de los Andes
dc.identifierreponame:Repositorio Institucional Séneca
dc.identifierrepourl:https://repositorio.uniandes.edu.co/
dc.description.abstractThe increasing use of distributed and renewable energy resources poses a challenge for traditional control methods. This is due to the higher complexity and uncertainty introduced by these new technologies, especially in smaller self-sufficient systems such as microgrids. To address this challenge, reinforcement learning algorithms are used to design and implement an energy management system (EMS) for different microgrid configurations. Reinforcement Learning (RL) approaches seek to train an agent through its interaction with the environment rather than from direct data, as in supervised learning. With this in mind, the energy management problem is formulated as a Markov decision process and solved using different state-of-the-art Deep Reinforcement Learning (DRL) algorithms, such as Deep Q-Networks (DQN), Proximal Policy Optimization (PPO), and Twin Delayed Deep Deterministic Policy Gradient (TD3). These results are compared against traditional EMS implementations, namely Rule-Based control and Model Predictive Control (MPC), used as benchmarks. Simulations are run with the novel Pymgrid module built for this purpose. Results show that DRL EMS agents perform comparably to some of the classical implementations, with possible benefits for generic and specific use cases. The source code of this project can be found at: https://github.com/Cesard97/DRL-Microgrid-Control
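As a companion to the abstract, the following is a minimal, hypothetical sketch of the kind of training loop it describes. The thesis itself uses the pymgrid simulator (Henri et al., 2020) and Stable Baselines (Hill et al., 2018); the ToyMicrogridEnv below is an illustrative stand-in with a synthetic PV/load profile, not pymgrid's actual API, and stable-baselines3 under the classic Gym interface (gym < 0.26, stable-baselines3 < 2.0) is assumed for the DQN implementation.

# Hypothetical sketch only: a toy battery-dispatch environment stands in
# for pymgrid, and DQN comes from stable-baselines3. Assumes the classic
# Gym API (reset returns obs; step returns a 4-tuple).
import numpy as np
import gym
from gym import spaces
from stable_baselines3 import DQN


class ToyMicrogridEnv(gym.Env):
    """Hourly battery dispatch against a synthetic net load (load - PV).

    Actions: 0 = charge 1 kW, 1 = discharge 1 kW, 2 = idle. Any residual
    imbalance is traded with the main grid at asymmetric prices.
    """

    def __init__(self, horizon=24, capacity=10.0):
        super().__init__()
        self.horizon, self.capacity = horizon, capacity
        self.action_space = spaces.Discrete(3)
        # Observation: [hour, battery state of charge (kWh), net load (kW)]
        self.observation_space = spaces.Box(
            low=np.array([0.0, 0.0, -5.0], dtype=np.float32),
            high=np.array([horizon, capacity, 5.0], dtype=np.float32),
        )

    def _net_load(self, t):
        # Evening-peaking load minus a midday PV peak (both synthetic).
        load = 3.0 * np.sin(np.pi * t / self.horizon)
        pv = 4.0 * max(0.0, np.sin(np.pi * (t - 6.0) / 12.0))
        return load - pv

    def _obs(self):
        return np.array([self.t, self.soc, self._net_load(self.t)],
                        dtype=np.float32)

    def reset(self):
        self.t, self.soc = 0, self.capacity / 2.0
        return self._obs()

    def step(self, action):
        net = self._net_load(self.t)
        power = {0: -1.0, 1: 1.0, 2: 0.0}[int(action)]  # battery output, kW
        # Respect stored energy and headroom over a 1-hour step.
        power = float(np.clip(power, self.soc - self.capacity, self.soc))
        self.soc -= power                 # discharging drains the battery
        grid_import = net - power         # positive -> buy from the grid
        price = 1.0 if grid_import > 0 else 0.2
        reward = -grid_import * price     # minimize operating cost
        self.t += 1
        done = self.t >= self.horizon
        return self._obs(), reward, done, {}


env = ToyMicrogridEnv()
model = DQN("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=20_000)

# Roll out the learned policy over one simulated day.
obs, total = env.reset(), 0.0
for _ in range(env.horizon):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, _ = env.step(action)
    total += reward
print(f"Episode reward (negative cost): {total:.2f}")

Swapping DQN for PPO is a one-line change under the same API; TD3, being a continuous-control method, would additionally require a Box action space (e.g., a continuous battery power set-point) rather than the discrete actions used here.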
dc.languageeng
dc.publisherUniversidad de los Andes
dc.publisherMaestría en Ingeniería Eléctrica
dc.publisherFacultad de Ingeniería
dc.publisherDepartamento de Ingeniería Eléctrica y Electrónica
dc.relationZhang, Z., Zhang, D., & Qiu, R. C. (2020). Deep reinforcement learning for power system applications: An overview. CSEE Journal of Power and Energy Systems, 6(1), 213-225.
dc.relationYe, Y., Qiu, D., Wu, X., Strbac, G., & Ward, J. (2020). Model-Free Real-Time Autonomous Control for a Residential Multi-Energy System Using Deep Reinforcement Learning. IEEE Transactions on Smart Grid, 11(4), 3068-3082.
dc.relationHenri, G., Levent, T., Halev, A., Alami, R., & Cordier, P. (2020). pymgrid: An Open-Source Python Microgrid Simulator for Applied Artificial Intelligence Research. arXiv.
dc.relationZia, M. F., Elbouchikhi, E., & Benbouzid, M. (2018). Microgrids energy management systems: A critical review on methods, solutions, and prospects. Applied Energy, 222, 1033-1055. doi:10.1016/j.apenergy.2018.04.103
dc.relationBoqtob, O., Moussaoui, H., El Markhi, H., & Lamhamdi, T. (2019). Microgrid energy management system: A state-of-the-art review. Journal of Electrical Systems, 15, 53-67.
dc.relationEspín-Sarzosa, D., Palma-Behnke, R., & Núñez-Mata, O. (2020). Energy Management Systems for Microgrids: Main Existing Trends in Centralized Control Architectures. Energies, 13(3), 547. doi:10.3390/en13030547
dc.relationMbuwir, B., Ruelens, F., Spiessens, F., & Deconinck, G. (2017). Battery Energy Management in a Microgrid Using Batch Reinforcement Learning. Energies, 10(11), 1846. doi:10.3390/en10111846
dc.relationHu, J., Shan, Y., Guerrero, J. M., Ioinovici, A., Chan, K. W., & Rodriguez, J. (2021). Model predictive control of microgrids: An overview. Renewable and Sustainable Energy Reviews, 136, 110422. doi:10.1016/j.rser.2020.110422
dc.relationvan Hasselt, H., Guez, A., & Silver, D. (2016). Deep Reinforcement Learning with Double Q-Learning. Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 2094-2100. Phoenix, Arizona: AAAI Press.
dc.relationLincoln, R., Galloway, S., Stephen, B., & Burt, G. (2012). Comparing Policy Gradient and Value Function Based Reinforcement Learning Methods in Simulated Electrical Power Trade. IEEE Transactions on Power Systems, 27(1), 373-380. doi:10.1109/TPWRS.2011.2166091
dc.relationSutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction. Cambridge, MA, USA: A Bradford Book.
dc.relationWatkins, C. J. C. H. (1989). Learning from Delayed Rewards. King's College, Cambridge, UK.
dc.relationWilliams, R. J. (1992). Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Mach. Learn., 8(3-4), 229-256. doi:10.1007/BF00992696
dc.relationCeusters, G., Rodríguez, R. C., García, A. B., Franke, R., Deconinck, G., Helsen, L., & Camargo, L. R. (2021). Model-predictive control and reinforcement learning in multi-energy system case studies. Applied Energy, 303, 117634. doi:10.1016/j.apenergy.2021.117634
dc.relationMinisterio de Minas y Energía, República de Colombia. (2014). Ley N.° 1715.
dc.relationTomin, N., Zhukov, A., & Domyshev, A. (2019). Deep Reinforcement Learning for Energy Microgrids Management Considering Flexible Energy Sources. EPJ Web of Conferences, 217, 01016. doi:10.1051/epjconf/201921701016
dc.relationMarín, L. G., Sumner, M., Muñoz-Carpintero, D., Köbrich, D., Pholboon, S., Sáez, D., & Núñez, A. (2019). Hierarchical Energy Management System for Microgrid Operation Based on Robust Model Predictive Control. Energies, 12(23). doi:10.3390/en12234453
dc.relationAkiba, T., Sano, S., Yanase, T., Ohta, T., & Koyama, M. (2019). Optuna: A Next-generation Hyperparameter Optimization Framework. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
dc.relationMnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. A. (2013). Playing Atari with Deep Reinforcement Learning. CoRR, abs/1312.5602.
dc.relationvan Hasselt, H. (2010). Double Q-Learning. In Lafferty, J. D., Williams, C. K. I., Shawe-Taylor, J., Zemel, R. S., & Culotta, A. (Eds.), Advances in Neural Information Processing Systems (pp. 2613-2621). New York: Curran Associates, Inc.
dc.relationHill, A., Raffin, A., Ernestus, M., Gleave, A., Kanervisto, A., Traore, R., & Wu, Y. (2018). Stable Baselines. GitHub repository.
dc.relationSchulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal Policy Optimization Algorithms. CoRR, abs/1707.06347.
dc.relationMnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D., & Kavukcuoglu, K. (2016). Asynchronous Methods for Deep Reinforcement Learning. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research, 48, 1928-1937. Available from https://proceedings.mlr.press/v48/mniha16.html
dc.relationFujimoto, S., Hoof, H., & Meger, D. (2018). Addressing Function Approximation Error in Actor-Critic Methods. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research, 80, 1587-1596. Available from https://proceedings.mlr.press/v80/fujimoto18a.html
dc.relationOpenAI. (2019). Introduction to RL: Kinds of RL Algorithms. https://spinningup.openai.com/en/latest/spinningup/rl_intro2.html
dc.rightsAttribution-NonCommercial-NoDerivatives 4.0 International
dc.rightshttps://repositorio.uniandes.edu.co/static/pdf/aceptacion_uso_es.pdf
dc.rightsinfo:eu-repo/semantics/openAccess
dc.rightshttp://purl.org/coar/access_right/c_abf2
dc.titleEnergy management system for microgrids based on deep reinforcement learning
dc.typeTrabajo de grado - Maestría (Master's thesis)