TY - GEN
T1 - Multi-Objective Energy Management System for Isolated Solar Microgrids using Pareto Q learning
AU - Esan, Ayodele Benjamin
AU - Shareef, Hussain
AU - Saeed, Nasir
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Microgrids play a crucial role in the development of smart energy grids in developed countries and in addressing energy poverty in developing countries. However, accurately modeling the uncertainties associated with renewable distributed generation technologies (RDGs) proves challenging due to their stochastic nature, particularly when considering the non-convex constraints of microgrid components. This study presents a multi-objective formulation for an islanded solar microgrid that aims to minimize operational costs (OC) while keeping the loss of load probability (LOLP) to a minimum. Applying the principle of Pareto optimality, the problem is represented as a Markov Decision Process and solved using a Pareto Q-learning (PQL) algorithm. Real-time data from the IEEE open dataset repository were used to train the microgrid agent. Results for the seven-day period considered show that the agent achieved an optimal policy for each day while adhering to the state-of-charge constraint and simultaneously obtaining the Pareto front for each system state. Compared with three baseline methods, the PQL agent achieved an overall improvement of 25-45% in the reward values obtained, along with OC improvements of 40-43%.
AB - Microgrids play a crucial role in the development of smart energy grids in developed countries and in addressing energy poverty in developing countries. However, accurately modeling the uncertainties associated with renewable distributed generation technologies (RDGs) proves challenging due to their stochastic nature, particularly when considering the non-convex constraints of microgrid components. This study presents a multi-objective formulation for an islanded solar microgrid that aims to minimize operational costs (OC) while keeping the loss of load probability (LOLP) to a minimum. Applying the principle of Pareto optimality, the problem is represented as a Markov Decision Process and solved using a Pareto Q-learning (PQL) algorithm. Real-time data from the IEEE open dataset repository were used to train the microgrid agent. Results for the seven-day period considered show that the agent achieved an optimal policy for each day while adhering to the state-of-charge constraint and simultaneously obtaining the Pareto front for each system state. Compared with three baseline methods, the PQL agent achieved an overall improvement of 25-45% in the reward values obtained, along with OC improvements of 40-43%.
KW - Energy management system
KW - Microgrids
KW - Q learning
KW - Reinforcement learning
KW - Reliability
UR - http://www.scopus.com/inward/record.url?scp=85185801972&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85185801972&partnerID=8YFLogxK
U2 - 10.1109/ETFG55873.2023.10407230
DO - 10.1109/ETFG55873.2023.10407230
M3 - Conference contribution
AN - SCOPUS:85185801972
T3 - 2023 IEEE International Conference on Energy Technologies for Future Grids, ETFG 2023
BT - 2023 IEEE International Conference on Energy Technologies for Future Grids, ETFG 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 IEEE International Conference on Energy Technologies for Future Grids, ETFG 2023
Y2 - 3 December 2023 through 6 December 2023
ER -