Augmented deep reinforcement learning for the energy management of microgrids considering renewable stochastic parameters

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

The increasing integration of renewable energy resources in microgrids calls for intelligent and efficient scheduling strategies that address the stochastic nature of sources such as solar and wind, alongside the nonlinear operating characteristics of conventional diesel generators. This paper proposes a two-phase augmented framework that combines deep reinforcement learning (DRL) with quadratic programming (QP): in the first phase, the DRL algorithms identify feasible action regions; in the second phase, QP refines these discrete outputs into continuous, precise schedules. Four DRL algorithms, namely Deep Q Networks (DQN), Double Deep Q Networks (DDQN), Dueling DQN (D2QN), and Dueling DDQN (D3QN), were implemented and evaluated on a realistic microgrid representing a locality in New South Wales, Australia. Among them, DDQN demonstrated superior performance during training, achieving the highest cumulative reward and the lowest operating cost of $1,243,227, with constraint violations under 0.2%. In the testing phase, DDQN also yielded the lowest cost ($8127) and zero constraint violations, outperforming the other three algorithms. All algorithms completed test executions in under 0.1 s, confirming their real-time applicability. Sensitivity analysis revealed that setting equal weights (k1 = k2 = 1) in the reward function consistently reduced operating costs across all DRL variants, with reductions of 10.6–25.6% for DQN, 7.4–8.0% for DDQN, 34.6% for D2QN, and 5.2–9.7% for D3QN, depending on the discount factor γ. When benchmarked against a classical genetic algorithm (GA), all DRL-based approaches achieved faster runtimes and lower operating costs: the total costs obtained from GA were 25.4%, 20.9%, 26.5%, and 20.4% higher than those achieved by D3QN, D2QN, DDQN, and DQN, respectively, highlighting the effectiveness of the proposed QP-augmented DRL framework for dynamic, cost-sensitive microgrid energy management.
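The second (QP refinement) phase described in the abstract can be sketched as follows. This is a minimal illustration only, assuming a quadratic diesel fuel-cost model, a single power-balance constraint, and a proximity penalty that keeps the refined dispatch near the region selected by the DRL agent; all numerical values (cost coefficients, unit limits, demand) are illustrative assumptions, not figures from the paper.

```python
# Hypothetical sketch of the QP refinement step: a DRL agent supplies a coarse,
# discrete generator setpoint, and a small quadratic program snaps it to a
# continuous dispatch that satisfies the power balance. Parameters are assumed.
import numpy as np
from scipy.optimize import minimize

# Assumed quadratic fuel-cost coefficients for two diesel units: a*P^2 + b*P
a = np.array([0.02, 0.03])
b = np.array([12.0, 10.0])
p_min, p_max = np.array([10.0, 10.0]), np.array([80.0, 60.0])

def refine_schedule(discrete_action, residual_demand):
    """Refine a coarse DRL action (kW setpoints) into a continuous dispatch.

    residual_demand is the load minus renewable generation (kW) that the
    diesel units must cover in this interval.
    """
    def cost(p):
        # Quadratic operating cost plus a small penalty for drifting away
        # from the action region identified by the DRL agent.
        return np.sum(a * p**2 + b * p) + 0.1 * np.sum((p - discrete_action) ** 2)

    constraints = [{"type": "eq", "fun": lambda p: np.sum(p) - residual_demand}]
    bounds = list(zip(p_min, p_max))
    res = minimize(cost, x0=discrete_action, bounds=bounds, constraints=constraints)
    return res.x

# Example: the agent chose the 50/30 kW bins; net demand after renewables is 72 kW.
print(refine_schedule(np.array([50.0, 30.0]), 72.0))
```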

Original language: English
Article number: 112785
Journal: Engineering Applications of Artificial Intelligence
Volume: 162
DOIs
Publication status: Published - Dec 26 2025

Keywords

  • Deep Q networks
  • Double Deep Q Networks
  • Dueling Deep Q Networks
  • Dueling Double Deep Q Networks
  • Energy management
  • Microgrids

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Electrical and Electronic Engineering
  • Artificial Intelligence
