Lee et al. (2025) Enhancement of hydrologic model optimization with single-step reinforcement learning
Identification
- Journal: Journal of Hydrology
- Year: 2025
- Date: 2025-11-11
- Authors: Byeongwon Lee, Hyemin Jeong, Younghun Lee, Gregory W. McCarty, Xuesong Zhang, Sangchul Lee
- DOI: 10.1016/j.jhydrol.2025.134595
Research Groups
- Division of Environmental Science & Ecological Engineering, College of Life Sciences & Biotechnology, Korea University, Seoul, Republic of Korea
- USDA-ARS, Hydrology and Remote Sensing Laboratory, Beltsville, MD, USA
Short Summary
This study proposes a single-step reinforcement learning approach (PPO-1) for efficient calibration of hydrological models with static parameters. It demonstrates that PPO-1 achieves better or comparable calibration accuracy than traditional methods such as SUFI-2 while requiring substantially less computation.
Objective
- To develop and evaluate a single-step reinforcement learning (PPO-1) approach for efficient and accurate calibration of hydrological models with static parameters, addressing the high computational demand of traditional calibration methods.
Study Configuration
- Spatial Scale: Tuckahoe Creek Watershed (220.7 km²) in the U.S. and Miho River Watershed (1,855 km²) in South Korea.
- Temporal Scale: The RL method was run for 1,000 episodes, with performance evaluated at 500 and 1,000 episodes, and compared against 1,500 SUFI-2 simulations. The hydrological models themselves simulate processes spanning years to decades.
Methodology and Data
- Models used: Soil and Water Assessment Tool (SWAT), Single-step Proximal Policy Optimization (PPO-1) algorithm, Sequential Uncertainty Fitting version 2 (SUFI-2) for comparison.
- Data sources: Not explicitly detailed in the provided text.
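The core idea of PPO-1 is that calibration of static parameters collapses the RL episode to a single step: the policy proposes a complete parameter set, one model run returns a fitness score (here NSE) as the reward, and PPO's clipped surrogate objective updates the policy. The sketch below illustrates this with a toy linear-reservoir model standing in for SWAT and a hand-rolled Gaussian policy; the reservoir equations, hyperparameters, and update rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the hydrologic model (NOT SWAT): a linear reservoir
# driven by random rainfall, with two static parameters k (release rate)
# and c (rainfall multiplier).
rain = rng.gamma(2.0, 2.0, size=200)

def simulate(k, c):
    s, flow = 0.0, []
    for p in rain:
        s += c * p          # rainfall enters storage
        q = k * s           # release proportional to storage
        s -= q
        flow.append(q)
    return np.array(flow)

obs = simulate(0.35, 0.80) + rng.normal(0.0, 0.05, 200)  # synthetic "observations"

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def reward(a):
    # One "episode" = one model run; reward = NSE of the simulated flow.
    k, c = np.clip(a, 1e-3, 0.999)
    return nse(simulate(k, c), obs)

# Single-step PPO: a stateless Gaussian policy over the parameter vector,
# advantage = reward minus the batch baseline, clipped surrogate update.
mu, sigma = np.array([0.5, 0.5]), 0.1
lr, clip_eps, batch = 0.02, 0.2, 32
r_init = reward(mu)

for _ in range(60):
    mu_old = mu.copy()
    acts = mu_old + sigma * rng.standard_normal((batch, 2))
    rew = np.array([reward(a) for a in acts])
    adv = (rew - rew.mean()) / (rew.std() + 1e-8)        # baseline + normalize
    logp_old = -np.sum((acts - mu_old) ** 2, axis=1) / (2 * sigma**2)
    for _ in range(4):                                   # PPO epochs on one batch
        logp = -np.sum((acts - mu) ** 2, axis=1) / (2 * sigma**2)
        ratio = np.exp(logp - logp_old)
        clipped = ((adv > 0) & (ratio > 1 + clip_eps)) | \
                  ((adv < 0) & (ratio < 1 - clip_eps))   # zero gradient if clipped
        grad = ((~clipped) * adv * ratio)[:, None] * (acts - mu) / sigma**2
        mu = np.clip(mu + lr * grad.mean(axis=0), 0.01, 0.99)

print("initial NSE:", round(r_init, 3), "calibrated NSE:", round(reward(mu), 3))
```

Because each episode is one model run, the number of episodes maps directly onto the simulation budget, which is where the runtime savings over SUFI-2's sampling come from.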
Main Results
- In the Tuckahoe Creek Watershed, RL with 500 episodes achieved Nash-Sutcliffe Efficiency (NSE) values of 0.67–0.72 for calibration and 0.70–0.80 for validation, outperforming SUFI-2 (NSE: 0.62 for calibration and 0.61–0.63 for validation).
- Simulation time in the Tuckahoe Creek Watershed was reduced by 69 %, requiring 3.3 hours for RL compared to 12.5 hours for SUFI-2.
- In the Miho River Watershed, RL with 500 episodes yielded NSE values of 0.63–0.65 for both calibration and validation, which were comparable to SUFI-2, while reducing runtime from 575 hours to 260 hours.
- Single-step RL offers better or comparable calibration accuracy using fewer computational resources, making it effective for hydrological models with static parameter structures.
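The Nash–Sutcliffe Efficiency quoted above compares model error to the variance of the observations: NSE = 1 is a perfect fit, NSE = 0 means the model predicts no better than the observed mean. A minimal illustration with made-up numbers (not the study's data):

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe Efficiency: 1 - (sum of squared errors / variance of observations)."""
    sim, obs = np.asarray(sim, dtype=float), np.asarray(obs, dtype=float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.0, 2.0, 4.0, 3.0, 2.0])
print(nse(obs, obs))                      # perfect fit -> 1.0
print(nse(np.full(5, obs.mean()), obs))   # mean predictor -> 0.0
```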
Contributions
- Proposes and validates a novel single-step reinforcement learning (PPO-1) approach tailored to hydrological models with static parameters, where the absence of sequential state transitions has been a common obstacle to RL-based calibration.
- Demonstrates significant computational efficiency gains (up to 69 % reduction in runtime) while maintaining or improving calibration accuracy compared to traditional methods like SUFI-2.
- Highlights the potential for transferability of the single-step RL approach to broader environmental modeling applications beyond hydrology.
Funding
- Not explicitly detailed in the provided text.
Citation
@article{Lee2025Enhancement,
author = {Lee, Byeongwon and Jeong, Hyemin and Lee, Younghun and McCarty, Gregory W. and Zhang, Xuesong and Lee, Sangchul},
title = {Enhancement of hydrologic model optimization with single-step reinforcement learning},
journal = {Journal of Hydrology},
year = {2025},
doi = {10.1016/j.jhydrol.2025.134595},
url = {https://doi.org/10.1016/j.jhydrol.2025.134595}
}
Original Source: https://doi.org/10.1016/j.jhydrol.2025.134595