Sseguya et al. (2025) Deep Reinforcement Learning for Optimized Reservoir Operation and Flood Risk Mitigation
⚠️ Warning: This summary was generated from the abstract only, as the full text was not available.
Identification
- Journal: Water
- Year: 2025
- Date: 2025-11-11
- Authors: Fred Sseguya, Kyung Soo Jun
- DOI: 10.3390/w17223226
Research Groups
Not explicitly stated in the abstract. The study focuses on the Soyang River Dam, South Korea.
Short Summary
This study applies deep reinforcement learning (DRL) models (DQN, PPO, DDPG) to optimize reservoir operations at the Soyang River Dam, South Korea, using 30 years of daily hydrometeorological data. The DRL framework effectively balances flood risk mitigation and water supply, with PPO and DQN in particular outperforming observed operations during high-inflow periods by increasing storage buffers and reducing peak discharge.
Objective
- To optimize reservoir operations at the Soyang River Dam, South Korea, using deep reinforcement learning (DRL) models to balance flood risk mitigation, water supply reliability, and operational stability under evolving hydrological conditions.
Study Configuration
- Spatial Scale: Soyang River Dam, South Korea.
- Temporal Scale: 30 years of daily data (1993–2022).
Methodology and Data
- Models used: three deep reinforcement learning (DRL) algorithms, namely Deep Q-Network (DQN), Proximal Policy Optimization (PPO), and Deep Deterministic Policy Gradient (DDPG).
- Data sources: 30 years of daily hydrometeorological data (1993–2022), combining observed and remotely sensed variables such as precipitation, temperature, and soil moisture. Discharge is computed via mass balance (see the sketch below).
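The abstract notes only that discharge is computed via mass balance, without giving the formulation. A minimal sketch follows, assuming a daily water balance of the form Q = I − ΔS/Δt; the function name, units, and example values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def discharge_from_mass_balance(inflow, storage, dt=86_400.0):
    """Back-calculate mean daily discharge from a reservoir mass balance,
    dS/dt = I - Q, i.e. Q = I - dS/dt. Units are assumed:
    inflow in m^3/s, storage in m^3, dt in seconds (default: one day)."""
    inflow = np.asarray(inflow, dtype=float)         # length n
    storage = np.asarray(storage, dtype=float)       # length n + 1 (end-of-day)
    dS = np.diff(storage)                            # daily storage change, m^3
    return inflow - dS / dt                          # mean daily discharge, m^3/s

# Illustrative values only: rising storage implies discharge below inflow.
inflow = [300.0, 450.0, 500.0]                       # m^3/s
storage = [1.00e9, 1.01e9, 1.03e9, 1.04e9]           # m^3
print(discharge_from_mass_balance(inflow, storage))  # ~[184.3, 218.5, 384.3]
```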
Main Results
- PPO achieved the highest cumulative reward and the most stable actions but incurred six flood control violations.
- DQN recorded a single flood control violation, maintaining larger storage buffers and strong flood control compliance.
- DDPG provided smooth, intermediate responses with one flood control violation.
- No DRL model exceeded the total storage capacity.
- A consistent operational pattern emerged: retain water on the rising limb of an event, moderate the crest, and release on the recession to keep Flood Risk (FR) < 0 (illustrated in the sketch after this list).
- During high-inflow days, DRL optimization consistently outperformed observed operation by increasing storage buffers and typically reducing peak discharge, thereby mitigating flood risk.
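The abstract does not disclose the state, action, or reward design used for training. The toy environment below sketches how the reported pattern could be encoded: a hand-coded rule stands in for a trained DRL policy (retain on the rise, moderate the crest, release on the recession), and a reward penalizes both flood-risk storage excursions and supply deficits. Every capacity value, threshold, and weight is an assumption for illustration; in the study these decisions are learned by DQN, PPO, or DDPG rather than hand-coded.

```python
import numpy as np

class ToyReservoirEnv:
    """Illustrative daily reservoir environment (not the authors' model).
    State: (storage fraction, next day's inflow). Action: release in m^3/s.
    Reward: penalizes storage above an assumed flood-control threshold and
    releases below an assumed water-supply demand."""
    CAPACITY = 2.9e9      # total storage, m^3 (assumed)
    FLOOD_FRAC = 0.85     # storage fraction treated as flood-risk zone (assumed)
    DEMAND = 150.0        # water-supply target, m^3/s (assumed)
    DT = 86_400.0         # one day, in seconds

    def __init__(self, inflows, start_frac=0.6):
        self.inflows = np.asarray(inflows, dtype=float)
        self.t = 0
        self.storage = start_frac * self.CAPACITY

    def step(self, release):
        inflow = self.inflows[self.t]
        release = float(np.clip(release, 0.0, 3000.0))      # feasible bounds (assumed)
        self.storage += (inflow - release) * self.DT        # mass balance update
        self.storage = float(np.clip(self.storage, 0.0, self.CAPACITY))
        frac = self.storage / self.CAPACITY
        reward = (-10.0 * max(0.0, frac - self.FLOOD_FRAC)  # flood-risk penalty
                  - 0.01 * max(0.0, self.DEMAND - release)) # supply-deficit penalty
        self.t += 1
        done = self.t >= len(self.inflows)
        next_inflow = self.inflows[self.t] if not done else 0.0
        return (frac, next_inflow), reward, done

def rule_policy(frac, inflow):
    """Hand-coded stand-in for a trained policy, mimicking the reported
    pattern: retain on the rise, moderate the crest, release on recession."""
    if frac < 0.75:
        return 0.5 * inflow    # rising limb: retain most of the inflow
    if frac < 0.85:
        return inflow          # near the crest: pass inflow through
    return 1.5 * inflow        # recession: draw storage back down

env = ToyReservoirEnv(inflows=[200, 800, 1500, 900, 400, 250])
obs, done = (0.6, env.inflows[0]), False
while not done:
    obs, reward, done = env.step(rule_policy(*obs))
    print(f"day {env.t}: storage frac = {obs[0]:.3f}, reward = {reward:+.3f}")
```

A trained agent acting on the same observation would replace rule_policy; wrapping the environment in a standard RL interface (e.g., gymnasium) would let off-the-shelf DQN, PPO, or DDPG implementations train against it.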
Contributions
- First comparative analysis of multiple DRL algorithms (DQN, PPO, DDPG) for optimizing real-world reservoir operations under complex hydrological conditions.
- Integration of observed and remotely sensed hydrometeorological variables within a DRL framework for adaptive storage decisions.
- Demonstration that DRL-optimized reservoir operations can significantly outperform traditional observed operations in mitigating flood risk during high-inflow events by strategically managing storage and discharge.
Funding
Not explicitly stated in the abstract.
Citation
@article{Sseguya2025Deep,
  author  = {Sseguya, Fred and Jun, Kyung Soo},
  title   = {Deep Reinforcement Learning for Optimized Reservoir Operation and Flood Risk Mitigation},
  journal = {Water},
  year    = {2025},
  doi     = {10.3390/w17223226},
  url     = {https://doi.org/10.3390/w17223226}
}
Original Source: https://doi.org/10.3390/w17223226