Jin et al. (2026) Deep-Learning Spatial and Temporal Fusion Model for Land Surface Temperature Based on a Spatially Adaptive Feature and Temperature-Adaptive Correction Module
⚠️ Warning: This summary was generated from the abstract only, as the full text was not available.
Identification
- Journal: Remote Sensing
- Year: 2026
- Date: 2026-01-12
- Authors: Chenhao Jin, Jiasheng Li, Yao Shen
- DOI: 10.3390/rs18020238
Research Groups
Not mentioned in the available abstract.
Short Summary
This study develops a Deep-Learning Spatial and Temporal Fusion Model (DLSTFM) to fuse Landsat-8 and MODIS Land Surface Temperature (LST) data. The model addresses limitations of existing spatiotemporal fusion methods, producing clearer surface features and reaching a mean absolute error of approximately 2.1 K.
Objective
- To develop a Deep-Learning Spatial and Temporal Fusion Model (DLSTFM) for Landsat-8 and MODIS LST imagery to address challenges in existing spatiotemporal fusion methods, particularly for heterogeneous surfaces and high-precision applications, by producing clearer surface features and more accurate temperatures.
Study Configuration
- Spatial Scale: Fusion of Landsat-8 (e.g., 100 m thermal band) and MODIS (1 km) LST imagery, aiming for high-spatial-resolution output (e.g., 30 m to 100 m). Test areas include Griffith and Ardlethan, Australia.
- Temporal Scale: Fusion of daily MODIS LST and 16-day revisit Landsat-8 LST, aiming for daily high spatial resolution LST products.
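The temporal-gap-filling idea behind this study configuration can be illustrated with the classic difference-propagation scheme used by baseline methods such as STARFM. This is a minimal sketch of that general idea, not the paper's DLSTFM: it predicts a fine-resolution LST at a date where only a coarse observation exists by adding the coarse-scale temporal change to the last available fine image. The function name and toy values are hypothetical.

```python
import numpy as np

def naive_fusion(fine_t1, coarse_t1, coarse_t2):
    """Difference-based spatiotemporal fusion sketch (STARFM-style baseline).

    All inputs are 2-D LST arrays in kelvin, resampled to the same fine grid.
    The coarse temporal change (t2 - t1) is propagated onto the fine image.
    """
    return fine_t1 + (coarse_t2 - coarse_t1)

# Toy 2x2 scene: Landsat-like LST at t1, MODIS-like LST at t1 and t2.
fine_t1 = np.array([[300.0, 302.0],
                    [298.0, 301.0]])
coarse_t1 = np.full((2, 2), 300.0)
coarse_t2 = np.full((2, 2), 303.0)  # scene warmed by 3 K at the coarse scale

pred_t2 = naive_fusion(fine_t1, coarse_t1, coarse_t2)
print(pred_t2)  # each fine pixel shifted by the +3 K coarse change
```

Deep-learning fusion models such as DLSTFM aim to improve on this purely additive baseline, which cannot handle heterogeneous surfaces where the coarse change is not uniform within a coarse pixel.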
Methodology and Data
- Models used: Deep-Learning Spatial and Temporal Fusion Model (DLSTFM). This model employs a dual-branch structure for dual-temporal and multi-source feature fusion, incorporating a Spatial Adaptive Feature Modulation (SAFM) module for adaptive multi-scale feature fusion and a Temperature Adaptive Correction Module (TCM) for pixel-wise adjustments using reference data.
- Data sources: Landsat-8 Land Surface Temperature (LST) imagery and MODIS Land Surface Temperature (LST) imagery. Reference data is also used for the Temperature Adaptive Correction Module (TCM).
Main Results
- DLSTFM significantly outperforms both traditional and existing deep-learning fusion methods in LST spatiotemporal fusion.
- The model achieves clearer surface features in the fused LST products.
- DLSTFM demonstrates a mean absolute temperature error of approximately 2.1 K.
- The model exhibited excellent generalization performance in an independent test area (Ardlethan) without requiring retraining.
Contributions
- Development of a novel Deep-Learning Spatial and Temporal Fusion Model (DLSTFM) with a unique dual-branch architecture for enhanced LST fusion.
- Introduction of the Spatial Adaptive Feature Modulation (SAFM) module, enabling adaptive multi-scale feature fusion.
- Introduction of the Temperature Adaptive Correction Module (TCM), providing pixel-wise temperature adjustments using reference data for improved accuracy.
- Achieves superior accuracy (mean absolute error of approximately 2.1 K) and clearer surface feature preservation compared to state-of-the-art methods.
- Demonstrates strong generalization capabilities across different geographical areas without retraining, highlighting its practical utility for high-accuracy LST monitoring.
Funding
Not mentioned in the available abstract.
Citation
@article{Jin2026DeepLearning,
author = {Jin, Chenhao and Li, Jiasheng and Shen, Yao},
title = {Deep-Learning Spatial and Temporal Fusion Model for Land Surface Temperature Based on a Spatially Adaptive Feature and Temperature-Adaptive Correction Module},
journal = {Remote Sensing},
year = {2026},
doi = {10.3390/rs18020238},
url = {https://doi.org/10.3390/rs18020238}
}
Original Source: https://doi.org/10.3390/rs18020238