Kang et al. (2025) Multi-Satellite Image Matching and Deep Learning Segmentation for Detection of Daytime Sea Fog Using GK2A AMI and GK2B GOCI-II
⚠️ Warning: This summary was generated from the abstract only, as the full text was not available.
Identification
- Journal: Remote Sensing
- Year: 2025
- Date: 2025-12-23
- Authors: JongGu Kang, Hiroyuki Miyazaki, Seung Hee Kim, M. Kafatos, Daesun Kim, Jinsoo Kim, Yangwon Lee
- DOI: 10.3390/rs18010034
Research Groups
Not explicitly mentioned in the provided text.
Short Summary
This study aimed to enhance sea fog detection accuracy and reliability by integrating multi-satellite imagery using a deep learning-based co-registration technique and autotuning state-of-the-art semantic segmentation models. The approach, particularly with multi-satellite fusion, significantly improved detection performance, outperforming existing operational products and reducing the omission of disaster-critical information.
Objective
- To achieve higher accuracy and reliability in sea fog detection by employing a deep learning-based advanced co-registration technique for multi-satellite image fusion and autotuning-based optimization of State-of-the-Art (SOTA) semantic segmentation models.
Study Configuration
- Spatial Scale: Regional, covering the Korean Peninsula and surrounding marine areas as observed by geostationary satellites.
- Temporal Scale: Continuous monitoring, characteristic of geostationary satellite operations.
Methodology and Data
- Methods: a deep learning-based advanced co-registration technique for multi-satellite image fusion, combined with autotuning-based hyperparameter optimization.
- Models used: Swin Transformer, Mask2Former, and SegNeXt (semantic segmentation models).
- Data sources: Advanced Meteorological Imager (AMI) sensor on the Geostationary Korea Multi-Purpose Satellite 2A (GK2A) and GOCI-II sensor on the Geostationary Korea Multi-Purpose Satellite 2B (GK2B).
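The abstract does not describe the co-registration technique itself, only that it is deep learning-based and enables fusion of AMI and GOCI-II imagery. As a purely illustrative sketch of the downstream fusion step (not the authors' method), the snippet below resamples one sensor's band onto the other's grid by nearest neighbour and stacks the two as model input channels; the function name and grid assumptions are hypothetical.

```python
import numpy as np

def fuse_channels(ami: np.ndarray, goci: np.ndarray) -> np.ndarray:
    """Nearest-neighbour resample a GOCI-II band onto the AMI grid, then stack.

    ami:  (H, W) single AMI band on the target grid
    goci: (h, w) GOCI-II band on its native grid (finer or coarser)
    Returns a (2, H, W) channel stack suitable as segmentation-model input.
    """
    H, W = ami.shape
    h, w = goci.shape
    # Map each target pixel index to its nearest source pixel index.
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    goci_on_ami = goci[np.ix_(rows, cols)]
    return np.stack([ami, goci_on_ami])
```

In practice the paper's learned co-registration would replace the naive index mapping above with a predicted geometric alignment before stacking.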
Main Results
- Swin Transformer achieved an Intersection over Union (IoU) of 77.24% and an F1-score of 87.16%.
- Multi-satellite fusion significantly improved the Recall score from 88.78% (single AMI product) to 92.01%, effectively mitigating the omission of disaster information.
- Swin Transformer, Mask2Former, and SegNeXt demonstrated balanced and excellent performance across overall metrics.
- The developed deep learning approach was superior to both the officially operational GK2A AMI Fog and GK2B GOCI-II Marine Fog (MF) products.
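The reported metrics (IoU, F1, Recall) follow the standard definitions for binary segmentation. As a minimal sketch of how such scores are computed from predicted and reference fog masks (illustrative only; not the paper's evaluation code):

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Compute IoU, F1, and Recall for binary masks (1 = fog, 0 = clear)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)    # fog correctly detected
    fp = np.sum(pred & ~truth)   # false alarms
    fn = np.sum(~pred & truth)   # missed fog (the "omission" the paper targets)
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"iou": iou, "f1": f1, "recall": recall}

# Toy 2x2 example: 3 true fog pixels; prediction hits 2 and adds 1 false alarm.
pred = np.array([[1, 1], [1, 0]])
truth = np.array([[1, 1], [0, 1]])
m = segmentation_metrics(pred, truth)  # iou = 0.5, f1 ≈ 0.667, recall ≈ 0.667
```

The paper's emphasis on Recall makes sense in this framing: for disaster monitoring, false negatives (fn) are costlier than false alarms, so raising Recall from 88.78% to 92.01% directly reduces missed fog events.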
Contributions
- Introduction of a deep learning-based advanced co-registration technique for multi-satellite image fusion to improve sea fog detection.
- Application and autotuning-based optimization of State-of-the-Art (SOTA) semantic segmentation models (Swin Transformer, Mask2Former, SegNeXt) for sea fog detection.
- Demonstrated significant improvement in sea fog detection accuracy and reliability through multi-satellite data fusion, particularly in enhancing Recall.
- Established a superior sea fog detection method compared to existing officially operational satellite products.
Funding
Not explicitly mentioned in the provided text.
Citation
@article{Kang2025MultiSatellite,
  author  = {Kang, JongGu and Miyazaki, Hiroyuki and Kim, Seung Hee and Kafatos, M. and Kim, Daesun and Kim, Jinsoo and Lee, Yangwon},
  title   = {Multi-Satellite Image Matching and Deep Learning Segmentation for Detection of Daytime Sea Fog Using GK2A AMI and GK2B GOCI-II},
  journal = {Remote Sensing},
  year    = {2025},
  doi     = {10.3390/rs18010034},
  url     = {https://doi.org/10.3390/rs18010034}
}
Original Source: https://doi.org/10.3390/rs18010034