Towards DQN Reinforcement Learning for Energy Management for Bidirectional Charging of EVs

Keywords

Reinforcement Learning
DQN
MicroGrid
Bidirectional Charging

Abstract

This study applies Deep Q-Network (DQN) reinforcement learning to optimize bidirectional EV charging in a microgrid with dynamic pricing and renewable energy. The environment includes an EV, a wind turbine, a stationary battery, flexible household loads, and a grid connection. DQN agents learn to minimize energy costs by charging during low-price periods and discharging during high-price periods. Simulations across four scenarios show improvements in cumulative reward and grid efficiency. Future work will address stochastic elements, realistic EV availability, and continuous action spaces to enhance adaptability and performance in real-world applications.
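The core idea described in the abstract, an agent that learns to charge when prices are low and discharge when prices are high, can be sketched with a toy example. The paper uses a DQN (a neural-network Q-function); the tabular Q-learning variant below over a discretized state-of-charge is only an illustration of the same reward structure. The price series, discretization, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative price signal: repeating low/high tariff periods (EUR/kWh).
prices = np.array([0.10, 0.10, 0.30, 0.30] * 6)  # 24 time steps

N_SOC = 5             # discretized battery state-of-charge levels
ACTIONS = [-1, 0, 1]  # discharge, idle, charge (one SoC level per step)

# Q-table indexed by (time step, SoC level, action).
Q = np.zeros((len(prices), N_SOC, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.2

for episode in range(2000):
    soc = 2  # start each episode at mid state of charge
    for t in range(len(prices) - 1):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(Q[t, soc]))
        new_soc = min(max(soc + ACTIONS[a], 0), N_SOC - 1)
        # Reward: pay the current price when charging,
        # earn it when discharging (bidirectional exchange).
        reward = -prices[t] * (new_soc - soc)
        # Standard Q-learning update toward the bootstrapped target.
        Q[t, soc, a] += alpha * (
            reward + gamma * np.max(Q[t + 1, new_soc]) - Q[t, soc, a]
        )
        soc = new_soc

# The learned greedy policy tends to charge in low-price periods
# and discharge in high-price periods.
```

In the paper's setting, the tabular Q above is replaced by a neural network over a richer state (prices, renewable generation, EV availability), which is what DQN provides; the reward shaping idea is the same.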

https://doi.org/10.60643/urai.v2025p31

This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright (c) 2025 Rainer Gasper, Michael Quarti, Nick Abermeth, Yannik Heizmann, Joshua Ruf, Markus Portugal, Bennet Märtin