Sultana et al. (2025) ArcticNet for Semantic Segmentation of Meltpond Regions in the Arctic Sea Ice
Identification
- Journal: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
- Year: 2025
- Date: 2025-11-10
- Authors: Aqsa Sultana, Vijayan K. Asari, Ivan Sudakow, Lee W. Cooper
- DOI: 10.1109/jstars.2025.3631391
Research Groups
- Vision Lab and Department of Electrical and Computer Engineering, University of Dayton, USA
- School of Mathematics and Statistics, The Open University, U.K.
- Center for Environmental Science, University of Maryland, USA
Short Summary
This paper introduces ArcticNet, a novel UNet-based deep learning architecture augmented with recurrent, residual, and attention operations for semantic segmentation of meltpond regions in Arctic sea ice. ArcticNet delineates meltponds, open water, and snow more accurately than existing state-of-the-art models.
Objective
- To develop a robust deep learning technique, ArcticNet, for pixel-level multi-class semantic segmentation of meltpond regions, open water, and snow in high-resolution Arctic sea ice imagery.
Study Configuration
- Spatial Scale: Arctic Basin (76.2° N, 157.9° W to 80.5° N, 7.6° E) and Chukchi Sea. Image resolutions range from 5 cm/pixel to 25 cm/pixel.
- Temporal Scale: Summer melt season (August 12 to September 26, 2005 for HOTRAX; July 2016 for Operation IceBridge).
Methodology and Data
- Models used: ArcticNet (novel architecture combining UNet, R2UNet, WNet with recurrent, residual, and attention operations, and a cross-network skip connection), UNet, R2UNet, WNet.
- Data sources:
- Healy–Oden Trans Arctic Expedition (HOTRAX) aerial images (2005).
- NASA’s Operation IceBridge Digital Mapping System (DMS) Level 1B imagery (2016).
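The dual-network layout named above (two U-Net stages joined by a cross-network skip from the first decoder to the second encoder) can be sketched schematically. The stand-in `encode`/`decode` functions and the concatenation point are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def encode(x):
    """Stand-in encoder: returns a feature map (identity here for brevity)."""
    return x

def decode(f):
    """Stand-in decoder: returns a same-sized feature map."""
    return f

def arcticnet_forward(x):
    """Schematic two-stage pass. The decoder output of stage 1 is
    concatenated onto the input of stage 2 along the channel axis,
    mirroring the cross-network skip connection described in the paper."""
    dec1 = decode(encode(x))                        # first U-Net stage
    stage2_in = np.concatenate([x, dec1], axis=-1)  # cross-network skip
    return decode(encode(stage2_in))                # second U-Net stage
```

The sketch only conveys the topology: stage 2 sees both the raw input and the stage-1 decoder features, which is what enables the hierarchical refinement the authors describe.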
Main Results
- ArcticNet achieved superior semantic segmentation performance for meltponds, open water, and snow compared to UNet, R2UNet, and WNet.
- On the HOTRAX dataset, ArcticNet obtained an F1 score of 0.919, an accuracy of 96.39%, and a mean Intersection over Union (mIoU) of 0.854. This represents a 7.0% relative mIoU gain over UNet.
- On the Operation IceBridge dataset, ArcticNet achieved an F1 score of 0.969, an accuracy of 97.85%, and an mIoU of 0.941. This represents a 6.3% relative mIoU gain over UNet.
- The attention mechanism contributed significantly: compared to an ArcticNet variant without attention, it yielded relative mIoU gains of 1.9% on HOTRAX and 1.4% on Operation IceBridge.
- ArcticNet demonstrated enhanced boundary delineation and robustness in capturing complex meltpond shapes and subtle surface transitions.
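The metrics reported above follow standard definitions for multi-class segmentation. A minimal sketch of how mIoU and macro-averaged F1 are computed from a per-pixel confusion matrix (the three classes here stand for meltpond, open water, and snow; the paper's exact evaluation code is not reproduced):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """Per-pixel confusion matrix: rows = true class, cols = predicted class."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true.ravel(), y_pred.ravel()):
        cm[t, p] += 1
    return cm

def mean_iou(cm):
    """mIoU: mean over classes of intersection / union (TP / (TP + FP + FN))."""
    tp = np.diag(cm).astype(float)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp
    return float(np.mean(tp / union))

def macro_f1(cm):
    """Macro F1: per-class harmonic mean of precision and recall, averaged."""
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)
    recall = tp / cm.sum(axis=1)
    return float(np.mean(2 * precision * recall / (precision + recall)))
```

For a toy 2x3 label map with one mislabeled pixel per class boundary, these functions return the familiar averaged scores; on real imagery the same formulas are applied over millions of pixels per scene.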
Contributions
- Proposed ArcticNet, a novel deep convolutional neural network for semantic segmentation of Arctic sea ice imagery, integrating strengths from UNet, R2UNet, WNet, and an attention mechanism.
- Introduced a dual-network structure with a cross-network skip connection from the decoder of the first UNet to the encoder of the second UNet for hierarchical feature propagation and refinement.
- Incorporated recurrent and residual operations within network blocks to enhance feature accumulation and propagation and to enlarge the effective receptive field for segmentation.
- Utilized an attention mechanism to adaptively weight features, focusing on target regions of diverse shapes and sizes, thereby improving model sensitivity and prediction accuracy.
- Validated the model's effectiveness using high-resolution RGB aerial images from the HOTRAX and NASA’s Operation IceBridge datasets, demonstrating superior performance over existing state-of-the-art methods.
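The attention contribution above can be illustrated with an additive attention gate of the kind popularized by Attention U-Net (Oktay et al.); ArcticNet's exact formulation may differ, so the projection shapes (`W_x`, `W_g`, `psi`) below are assumptions for a minimal sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(skip, gate, W_x, W_g, psi):
    """Additive attention gate: a decoder 'gate' signal scores each spatial
    location of the encoder 'skip' feature map, and the skip features are
    re-weighted before being passed to the decoder.
    skip, gate: (H, W, C) feature maps; W_x, W_g: (C, F); psi: (F,)."""
    # Project both inputs into a shared intermediate space and combine (ReLU).
    q = np.maximum(skip @ W_x + gate @ W_g, 0.0)   # (H, W, F)
    # One attention coefficient in (0, 1) per spatial location.
    alpha = sigmoid(q @ psi)                       # (H, W)
    # Down-weight irrelevant regions of the skip connection.
    return skip * alpha[..., None]
```

Because `alpha` lies in (0, 1), the gate can only suppress features, which is how it focuses the decoder on target regions of diverse shapes and sizes.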
Funding
- U.S. National Science Foundation (NSF), Grant No. 2102906.
Citation
@article{Sultana2025ArcticNet,
author = {Sultana, Aqsa and Asari, Vijayan K. and Sudakow, Ivan and Cooper, Lee W.},
title = {ArcticNet for Semantic Segmentation of Meltpond Regions in the Arctic Sea Ice},
journal = {IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
year = {2025},
doi = {10.1109/jstars.2025.3631391},
url = {https://doi.org/10.1109/jstars.2025.3631391}
}
Original Source: https://doi.org/10.1109/jstars.2025.3631391