Hydrology and Climate Change Article Summaries

Liu et al. (2025) From RNNs to Transformers: benchmarking deep learning architectures for hydrologic prediction

Identification

Research Groups

Short Summary

This study introduces a deep learning framework that benchmarks 11 Transformer-based architectures against a baseline Long Short-Term Memory (LSTM) model, and additionally evaluates pretrained Large Language Models (LLMs) and Time Series Attention Models (TSAMs) across diverse hydrologic prediction tasks. The LSTM remains strongest for standard regression, but attention-based models surpass it on more complex tasks such as autoregression and zero-shot forecasting.
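Benchmarking studies of this kind typically score models with the Nash–Sutcliffe efficiency (NSE), the standard skill metric for streamflow prediction (1 is a perfect fit, 0 matches always predicting the observed mean, negative is worse than the mean). As a hedged illustration of how such a comparison is scored (the paper's exact metric suite is not reproduced here), a minimal NSE sketch in numpy:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE(sim) / SSE(mean predictor)."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Toy comparison of two hypothetical model outputs against observed flow
obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
good = np.array([1.1, 2.0, 2.9, 4.1, 5.0])   # tracks the observations
poor = np.array([3.0, 3.0, 3.0, 3.0, 3.0])   # always predicts the mean
print(round(nse(obs, good), 3))  # 0.997
print(round(nse(obs, poor), 3))  # 0.0
```

The mean predictor scoring exactly 0 is what makes NSE a convenient baseline-relative metric when comparing LSTM and Transformer variants across many basins.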

Objective

Study Configuration

Methodology and Data

Main Results

Contributions

Funding

Citation

@article{Liu2025From,
  author = {Liu, Jiangtao and Shen, Chaopeng and O’Donncha, Fearghal and Song, Yalan and Wei, Zhi and Beck, Hylke E. and Bindas, Tadd and Kraabel, Nicholas and Lawson, Kathryn},
  title = {From RNNs to Transformers: benchmarking deep learning architectures for hydrologic prediction},
  journal = {Hydrology and Earth System Sciences},
  year = {2025},
  doi = {10.5194/hess-29-6811-2025},
  url = {https://doi.org/10.5194/hess-29-6811-2025}
}

Original Source: https://doi.org/10.5194/hess-29-6811-2025