Chen et al. (2026) PAFNet: A Parallel Attention Fusion Network for Water Body Extraction of Remote Sensing Images
⚠️ Warning: This summary was generated from the abstract only, as the full text was not available.
Identification
- Journal: Remote Sensing
- Year: 2026
- Date: 2026-01-03
- Authors: Shaochuan Chen, Chenlong Ding, Mutian Li, Xin Lyu, X. L. Li, Zhennan Xu, Yiwei Fang, Heng Li
- DOI: 10.3390/rs18010153
Research Groups
Not specified in the provided text.
Short Summary
This paper proposes the Parallel Attention Fusion Network (PAFNet) to overcome limitations of deep convolutional neural networks (DCNNs) in water body extraction from remote sensing imagery, reporting superior performance through effective multi-scale feature aggregation and attention mechanisms.
Objective
- To address the limitations of Deep Convolutional Neural Networks (DCNNs) in water body extraction, specifically channel redundancy and ineffective feature fusion, by proposing a Parallel Attention Fusion Network (PAFNet) for more effective multi-scale feature aggregation, precise boundary recovery, and robust noise suppression.
Study Configuration
- Spatial Scale: Remote sensing imagery, typically covering various geographic extents and resolutions (e.g., meters to tens of meters per pixel) suitable for water body mapping.
- Temporal Scale: Static image analysis; no temporal dynamics or change detection are discussed.
Methodology and Data
- Models used: Parallel Attention Fusion Network (PAFNet), which includes:
  - Feature Refinement Module (FRM) utilizing multi-branch asymmetric convolutions.
  - Parallel Attention Module (PAM) applying spatial and channel attention in parallel.
  - Semantic Feature Fusion Module (SFM) integrating multi-level features via adaptive channel weighting.
- Data sources: Four representative remote sensing datasets: GID, LandCover.ai, QTPL, and LoveDA.
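The abstract does not detail how the FRM's multi-branch asymmetric convolutions are built, but the general idea behind asymmetric convolutions, replacing a k×k kernel with cheaper k×1 and 1×k kernels, can be illustrated directly: a k×1 followed by a 1×k convolution is equivalent to convolving with their rank-1 outer-product k×k kernel. A minimal NumPy sketch of that equivalence (the kernels and shapes are illustrative assumptions, not the paper's design):

```python
import numpy as np

def conv2d_valid(img, ker):
    """Naive 'valid' 2D cross-correlation (explicit loops kept for clarity)."""
    kh, kw = ker.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (img[i:i + kh, j:j + kw] * ker).sum()
    return out

img = np.random.default_rng(1).normal(size=(10, 10))
k_col = np.array([[1.0], [2.0], [1.0]])   # 3x1 asymmetric kernel
k_row = np.array([[1.0, 0.0, -1.0]])      # 1x3 asymmetric kernel

# Two cheap asymmetric passes...
separable = conv2d_valid(conv2d_valid(img, k_col), k_row)
# ...match one pass with the equivalent rank-1 3x3 kernel.
full = conv2d_valid(img, k_col @ k_row)
print(np.allclose(separable, full))  # True
```

The practical appeal is parameter count: two 3×1 kernels cost 6 weights versus 9 for a full 3×3, and running several such branches in parallel yields multi-scale responses cheaply.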
Main Results
- PAFNet demonstrates superior performance compared to existing state-of-the-art methods for water body extraction.
- Achieved high accuracy metrics across diverse datasets:
  - GID: 94.29% Overall Accuracy (OA) and 95.95% F1-Score.
  - LandCover.ai: 86.17% OA and 88.70% F1-Score.
  - QTPL: 98.99% OA and 98.96% F1-Score.
  - LoveDA: 89.01% OA and 85.59% F1-Score.
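For reference, the two reported metrics are standard: Overall Accuracy is the fraction of correctly classified pixels, and the F1-score is the harmonic mean of precision and recall on the water class. A quick sketch of how they are computed from binary confusion counts (the counts below are illustrative, not from the paper):

```python
def oa_and_f1(tp, tn, fp, fn):
    """Overall Accuracy and F1-score from binary confusion counts."""
    oa = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return oa, f1

# Illustrative counts for a water / non-water pixel map.
oa, f1 = oa_and_f1(tp=900, tn=8600, fp=250, fn=250)
print(f"OA = {oa:.2%}, F1 = {f1:.2%}")  # OA = 95.00%, F1 = 78.26%
```

Note that OA can be inflated when non-water pixels dominate, which is why water body extraction papers typically report F1 alongside it.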
Contributions
- Proposed PAFNet, a novel deep learning architecture designed to enhance multi-scale feature aggregation and address channel redundancy in water body extraction from remote sensing imagery.
- Introduced the Feature Refinement Module (FRM) for effective multi-scale feature extraction and channel redundancy suppression.
- Developed the Parallel Attention Module (PAM) to improve discriminative representation of water features and mitigate interference from spectrally similar land covers.
- Designed the Semantic Feature Fusion Module (SFM) for precise boundary recovery and robust noise suppression.
- Demonstrated state-of-the-art performance of PAFNet across multiple challenging and diverse remote sensing datasets.
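The abstract does not specify how the PAM's parallel spatial and channel attention is implemented. As a rough illustration of the general pattern (two independent gating branches modulating the same feature map, fused by summation), here is a minimal NumPy sketch; the pooling choices, sigmoid gates, and sum fusion are assumptions, not the authors' design:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def parallel_attention(feat):
    """Toy parallel spatial/channel attention on a (C, H, W) feature map.

    Channel branch: global average pooling -> per-channel sigmoid gate.
    Spatial branch: cross-channel mean -> per-pixel sigmoid gate.
    Both branches see the same input (parallel, not serial) and their
    gated outputs are summed.
    """
    # Channel attention: one weight per channel from its global mean.
    chan_gate = sigmoid(feat.mean(axis=(1, 2)))   # shape (C,)
    chan_out = feat * chan_gate[:, None, None]
    # Spatial attention: one weight per pixel from the cross-channel mean.
    spat_gate = sigmoid(feat.mean(axis=0))        # shape (H, W)
    spat_out = feat * spat_gate[None, :, :]
    # Parallel fusion: element-wise sum of the two branches.
    return chan_out + spat_out

feat = np.random.default_rng(0).normal(size=(8, 16, 16))
out = parallel_attention(feat)
print(out.shape)  # (8, 16, 16)
```

A parallel arrangement lets each branch attend to the unmodified features, whereas a serial design (channel then spatial, as in CBAM-style modules) conditions the second branch on the first.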
Funding
Not specified in the provided text.
Citation
@article{Chen2026PAFNet,
  author  = {Chen, Shaochuan and Ding, Chenlong and Li, Mutian and Lyu, Xin and Li, X. L. and Xu, Zhennan and Fang, Yiwei and Li, Heng},
  title   = {PAFNet: A Parallel Attention Fusion Network for Water Body Extraction of Remote Sensing Images},
  journal = {Remote Sensing},
  year    = {2026},
  doi     = {10.3390/rs18010153},
  url     = {https://doi.org/10.3390/rs18010153}
}
Original Source: https://doi.org/10.3390/rs18010153