Community energy management based on reinforcement learning
Sun, Yunqiang (2025)
Bachelor's thesis
School of Energy Systems, Electrical Engineering
The permanent address of this publication is
https://urn.fi/URN:NBN:fi-fe2025051240358
Abstract
This Bachelor's thesis presents a comparative analysis of community energy management (CEM) methods based on non-reinforcement-learning (non-RL) algorithms and several reinforcement learning (RL) algorithms. The non-RL methods examined are baseline control (BLC) and rule-based control (RBC). The RL algorithms studied are deep deterministic policy gradient (DDPG), soft actor-critic (SAC), SAC combined with RBC (SAC-RBC), decentralized-coordinated multi-agent reinforcement learning with iterative sequential action selection (MARLISA), and SAC implemented with Stable-Baselines3 (SB3-SAC). To improve the key performance indicator (KPI) results of the algorithms, new reward functions are designed for SAC and MARLISA and compared with the default reward function provided by the Python library CityLearn. The study uses CityLearn for simulation experiments and the dataset from the CityLearn Challenge 2023 to evaluate and compare the effectiveness and KPI performance of the different methods. The findings show that RL-based CEM methods, including SAC, SAC-RBC, and DDPG, achieve better KPI optimization than MARLISA and the non-RL-based methods. In addition, the study demonstrates through CityLearn that the algorithms using the designed reward functions achieve better KPI optimization than those using the default reward function.
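To illustrate the experimental setup summarized above, the following is a minimal sketch of wrapping a CityLearn environment for training with Stable-Baselines3 SAC (SB3-SAC). It is based on the public CityLearn 2.x API; the dataset key, wrapper classes, and hyperparameters shown are assumptions for illustration, not the exact configuration used in this thesis.

```python
# Minimal sketch, assuming CityLearn >= 2.x and Stable-Baselines3 are installed.
# The dataset key below refers to a CityLearn Challenge 2023 bundle and is an assumption.
from citylearn.citylearn import CityLearnEnv
from citylearn.wrappers import NormalizedObservationWrapper, StableBaselines3Wrapper
from stable_baselines3 import SAC

# Build the environment from a bundled dataset; central_agent=True exposes a
# single-agent interface that a single SAC policy can control directly.
env = CityLearnEnv('citylearn_challenge_2023_phase_1', central_agent=True)
env = NormalizedObservationWrapper(env)  # normalize observations for training
env = StableBaselines3Wrapper(env)       # expose the Gym-style API expected by SB3

# Train an SB3-SAC agent; the number of timesteps here is illustrative only.
model = SAC('MlpPolicy', env, verbose=0)
model.learn(total_timesteps=env.unwrapped.time_steps - 1)

# CityLearn reports KPIs (e.g. electricity consumption, cost, emissions)
# relative to the baseline control via the evaluate() method.
kpis = env.unwrapped.evaluate()
print(kpis)
```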