Wang et al. (2026) Rainfall intensity estimation at night using deep learning and urban surveillance cameras in Jiangsu Province, China
Identification
- Journal: Journal of Hydrology: Regional Studies
- Year: 2026
- Date: 2026-01-14
- Authors: Xing Wang, Haiqin Chen, Ang Zhou, Ye Chen
- DOI: 10.1016/j.ejrh.2026.103112
Research Groups
- School of Computer Engineering, Nanjing Institute of Technology, Nanjing, China
- China Meteorological Administration Radar Meteorology Key Laboratory, Nanjing, China
- Key Laboratory for Mesoscale Severe Weather/Ministry of Education and School of Atmospheric Science, Nanjing University, Nanjing, China
Short Summary
This study proposes NightRAIN-Net, a novel deep learning framework for nighttime rainfall intensity estimation from urban surveillance cameras, addressing challenges such as low visibility and complex backgrounds. The framework achieves a Mean Absolute Error (MAE) of 3.22 mm/h and a Root Mean Squared Error (RMSE) of 3.88 mm/h, outperforming state-of-the-art methods and enabling scalable, near-continuous urban hydrological monitoring.
Objective
- To develop NightRAIN-Net, a deep learning framework specifically designed for robust nighttime rainfall intensity estimation from surveillance video, overcoming challenges such as low visibility, uneven illumination, and complex background noise.
Study Configuration
- Spatial Scale: Yangtze River Delta of eastern China, focusing on the highly urbanized corridor of Nanjing, Yangzhou, and Wuxi (Jiangsu Province).
- Temporal Scale: Nighttime rainfall video data collected from 2022 to 2025.
Methodology and Data
- Models used: NightRAIN-Net, a deep learning framework integrating:
- Multimodal Feature Extraction: Rain-Adaptive Channel Enhancement (RACE), Selective Raindrop Localization (SRL), and a ResNet block (RES) for background features.
- Temporal Modeling: A hybrid approach combining Long Short-Term Memory (LSTM) and Transformer architectures.
- Data sources:
- Nighttime surveillance video data from over 30 cameras deployed across Nanjing, Yangzhou, and Wuxi, Jiangsu Province, China.
- Ground truth rainfall intensity values from Two-Dimensional Video Disdrometer (2-DVD) and tipping bucket gauges, located up to 1 kilometer from the cameras.
- A self-constructed dataset of 120 hours of video covering 40 nighttime rainfall events (2022–2025), including 12,500 rain-containing clips and 2000 rain-free clips.
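The internals of the Rain-Adaptive Channel Enhancement (RACE) module are not reproduced in this summary; as an illustrative sketch only, a squeeze-and-excitation-style channel attention (a common mechanism for adaptively reweighting feature channels, assumed here rather than taken from the paper) could look like the following. All weights, dimensions, and the reduction ratio are placeholders.

```python
import numpy as np

def channel_attention(feat: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Squeeze-and-excitation-style channel gating (illustrative, not the paper's RACE).

    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) bottleneck weights.
    Returns the feature map with each channel scaled by a learned gate in (0, 1).
    """
    # Squeeze: global average pool over the spatial dimensions -> (C,)
    z = feat.mean(axis=(1, 2))
    # Excitation: bottleneck MLP (ReLU) followed by a sigmoid gate
    s = np.maximum(w1 @ z, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))  # (C,), each entry in (0, 1)
    # Scale each channel by its gate value
    return feat * gate[:, None, None]

# Toy usage with random weights (reduction ratio r = 4)
rng = np.random.default_rng(0)
C, H, W, r = 16, 8, 8, 4
feat = rng.standard_normal((C, H, W))
out = channel_attention(feat,
                        rng.standard_normal((C // r, C)),
                        rng.standard_normal((C, C // r)))
print(out.shape)  # (16, 8, 8)
```

Because the gate lies strictly between 0 and 1, attenuated channels shrink toward zero while important ones pass nearly unchanged; the paper's actual module may differ in structure and training details.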
Main Results
- NightRAIN-Net achieved an overall Mean Absolute Error (MAE) of 3.22 mm/h and a Root Mean Squared Error (RMSE) of 3.88 mm/h aggregated over two real-world rainfall events.
- It demonstrated superior performance compared to existing algorithms (Lee et al., 2023 and Wang et al., 2023b), with R²/NSE of 0.97 and KGE of 0.95.
- RMSE was reduced by 27.5–55.8 % and MAE by 22.4–39.2 % relative to the comparison algorithms.
- The optimal configuration identified through ablation experiments for 15-frame input was 5 RACE/SRL/RES layers combined with 2 LSTM layers and 1 Transformer layer, achieving a Mean Absolute Percentage Error (MAPE) of 19.9 %.
- The model exhibited strong robustness to variations in camera parameters and maintained consistent accuracy across different types of rainfall events.
- Bland-Altman analysis showed NightRAIN-Net had the smallest bias (+1.25 mm/h) and tightest limits of agreement [−5.88, 8.39] mm/h, indicating superior numerical agreement and robustness under heavy rainfall.
- Achieved accumulated rainfall MAPE of 4.94 % for Rainfall1 and 7.97 % for Rainfall2, with corresponding absolute errors of 3.73 mm and 7.01 mm, respectively.
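For reference, the headline metrics above (MAE, RMSE, MAPE, KGE) and the Bland-Altman quantities (bias and 95 % limits of agreement) can be computed as in the sketch below. This is a generic implementation of the standard definitions with toy rainfall-intensity values, not the paper's evaluation code or data.

```python
import math

def mae(pred, obs):
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def mape(pred, obs):
    # Mean absolute percentage error, in %; observations must be nonzero
    return 100.0 * sum(abs(p - o) / abs(o) for p, o in zip(pred, obs)) / len(obs)

def kge(pred, obs):
    # Kling-Gupta Efficiency (2009 form): correlation, variability, bias terms
    n = len(obs)
    mp, mo = sum(pred) / n, sum(obs) / n
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred) / n)
    so = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    r = sum((p - mp) * (o - mo) for p, o in zip(pred, obs)) / (n * sp * so)
    return 1.0 - math.sqrt((r - 1) ** 2 + (sp / so - 1) ** 2 + (mp / mo - 1) ** 2)

def bland_altman(pred, obs):
    """Mean difference (bias) and 95 % limits of agreement (bias ± 1.96·SD)."""
    diffs = [p - o for p, o in zip(pred, obs)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Toy estimate/observation pairs in mm/h (illustrative values only)
obs  = [2.0, 5.5, 10.0, 20.0, 35.0]
pred = [2.4, 5.0, 11.2, 18.5, 37.0]
print(round(mae(pred, obs), 2), round(rmse(pred, obs), 2))  # → 1.12 1.27
```

A Bland-Altman bias of +1.25 mm/h with limits of [−5.88, 8.39] mm/h, as reported for NightRAIN-Net, means 95 % of per-sample differences from the reference instrument are expected to fall within that interval.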
Contributions
- Proposes NightRAIN-Net, a novel deep learning framework specifically tailored for nighttime rainfall intensity estimation using urban surveillance cameras, addressing the unique challenges of low visibility, uneven illumination, and complex backgrounds.
- Introduces an attention-based Multimodal Feature Extraction module (Rain-Adaptive Channel Enhancement and Selective Raindrop Localization) to effectively enhance raindrop visibility and mitigate background interference.
- Employs a hybrid temporal modeling approach combining LSTM and Transformer architectures to capture both short-term fluctuations and long-range dependencies in rainfall intensity across diverse precipitation types.
- Constructed a comprehensive multi-year (2022–2025) nighttime rainfall video dataset from real-world surveillance cameras for robust model training and evaluation.
- Demonstrates superior performance and robustness to varying camera parameters, offering a "plug-and-play" solution that eliminates the need for per-camera calibration and reduces deployment costs.
- Presents a practical and potentially cost-effective solution for city-scale rainfall monitoring by leveraging existing urban surveillance infrastructure, with minimal incremental computational and integration costs.
Funding
- National Natural Science Foundation of China (NSFC) (No. 42405140)
- China Postdoctoral Science Foundation (No. 2024M761383)
- China Postdoctoral Science Foundation Special Funding Program (No. 2025T180080)
- Fundamental Research Funds for the Central Universities—Cemac “GeoX” Interdisciplinary Program (No. 020714380210, 020714380222, 020714380217)
- Open Grants of China Meteorological Administration Radar Meteorology Key Laboratory (No. 2024LRM-A01 and 2024LRM-A02)
- Fundamental Research Funds for the Central Universities (No. 14380231)
Citation
@article{Wang2026Rainfall,
author = {Wang, Xing and Chen, Haiqin and Zhou, Ang and Chen, Ye},
title = {Rainfall intensity estimation at night using deep learning and urban surveillance cameras in Jiangsu Province, China},
journal = {Journal of Hydrology: Regional Studies},
year = {2026},
doi = {10.1016/j.ejrh.2026.103112},
url = {https://doi.org/10.1016/j.ejrh.2026.103112}
}
Original Source: https://doi.org/10.1016/j.ejrh.2026.103112