Zhao et al. (2025) UAV multi-source data fusion with super-resolution for accurate soybean leaf area index estimation
Identification
- Journal: Frontiers in Plant Science
- Year: 2025
- Date: 2025-11-20
- Authors: Zhenqing Zhao, Hong Yao, Depeng Zeng, Zhenfeng Jiang, Xihai Zhang
- DOI: 10.3389/fpls.2025.1700660
Research Groups
- College of Electrical Engineering and Information, Northeast Agricultural University, Harbin, China
- National Key Laboratory of Smart Farm Technologies and Systems, Harbin, China
- College of Agriculture, Northeast Agricultural University, Harbin, China
Short Summary
This study developed a UAV multi-source data fusion framework with super-resolution to accurately estimate soybean Leaf Area Index (LAI) across varying flight altitudes. It demonstrated that combining super-resolution-enhanced RGB and multispectral data significantly improves LAI estimation accuracy, mitigating the negative impact of higher flight altitudes.
Objective
- To investigate the integration of super-resolution (SR) image reconstruction with multi-sensor data to enhance LAI estimation for soybeans across varying UAV flight altitudes.
- To enhance the resolution of UAV imagery with SR techniques, estimate soybean LAI from the enhanced images, and select an appropriate estimation model.
- To determine which combination of remote sensing imagery datasets (RGB+MS, MS-only, RGB-only) yields the highest estimation precision.
- To identify the key image features associated with LAI.
Study Configuration
- Spatial Scale: Experimental plots at Xiangyang Farm, Harbin City, Heilongjiang Province, China (45°45′N, 126°54′E, average elevation 150 m). 100 soybean varieties were cultivated. UAV flight altitudes were 15 m, 30 m, 45 m, and 60 m, corresponding to spatial resolutions of 0.1875 cm, 0.375 cm, 0.5625 cm, and 0.75 cm per pixel, respectively (the linear altitude-to-resolution scaling is sketched after this list).
- Temporal Scale: Data collection from August to September 2024, during three distinct growth stages: V6 (sixth trifoliate leaf), R1 (beginning bloom), and R3 (beginning pod). Specific flight dates were August 7, August 23, and September 11, 2024.
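The four reported resolutions scale linearly with altitude, so each follows from the 15 m / 0.1875 cm reference pair. A minimal sketch of that relationship, assuming simple proportional scaling (the function is illustrative, not from the paper):

```python
# Illustrative only: derives per-pixel resolution from flight altitude,
# assuming the linear scaling implied by the study's reported values.
REF_ALTITUDE_M = 15.0  # reference flight altitude (m)
REF_GSD_CM = 0.1875    # spatial resolution reported at 15 m (cm/pixel)

def resolution_at_altitude(altitude_m: float) -> float:
    """Per-pixel spatial resolution (cm) under linear altitude scaling."""
    return REF_GSD_CM * altitude_m / REF_ALTITUDE_M

for h in (15, 30, 45, 60):
    print(f"{h} m -> {resolution_at_altitude(h):.4f} cm/pixel")
# Prints 0.1875, 0.3750, 0.5625, 0.7500, matching the values above.
```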
Methodology and Data
- Models used:
- Super-resolution (SR) algorithms: Super-Resolution Convolutional Neural Network (SRCNN), Enhanced Deep Super-Resolution Network (EDSR), Real-Enhanced Super-Resolution Generative Adversarial Network (Real-ESRGAN), and Swin Transformer for Image Restoration (SwinIR); a minimal SRCNN sketch appears after this list.
- LAI regression models: Random Forest (RF), Extreme Gradient Boosting (XGBoost).
- Feature selection method: SelectFromModel (SFM); a combined SFM and XGBoost sketch also appears after this list.
- Data sources:
- UAV-based RGB images captured by a DJI ZENMUSE P1 camera.
- UAV-based multispectral (MS) images captured by an MS600 Pro MS camera (six monochromatic channels: NIR (840 ± 15 nm), Red Edge 750 nm (750 ± 5 nm), Red Edge 720 nm (720 ± 5 nm), Red (660 ± 11 nm), Green (550 ± 14 nm), Blue (450 ± 15 nm)).
- Ground truth LAI measurements collected using an AccuPAR LP-80 Plant Canopy Analyzer.
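As referenced in the model list above, here is a minimal PyTorch sketch of the SRCNN baseline, assuming the commonly cited 9-1-5 layer configuration from the original SRCNN work; the study's actual training data, patch sizes, and hyperparameters are not reproduced here.

```python
# Minimal SRCNN sketch (PyTorch); the 9-1-5 layer sizes follow the common
# SRCNN configuration, not necessarily the exact setup used in this study.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        self.patch_extract = nn.Conv2d(channels, 64, kernel_size=9, padding=4)
        self.nonlinear_map = nn.Conv2d(64, 32, kernel_size=1)
        self.reconstruct = nn.Conv2d(32, channels, kernel_size=5, padding=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SRCNN refines an already-upsampled image rather than upscaling itself.
        x = F.relu(self.patch_extract(x))
        x = F.relu(self.nonlinear_map(x))
        return self.reconstruct(x)

# Usage: bicubically upsample a low-resolution tile first, then refine it.
lr_tile = torch.rand(1, 3, 60, 60)  # dummy low-resolution RGB tile
upsampled = F.interpolate(lr_tile, scale_factor=4, mode="bicubic", align_corners=False)
sr_tile = SRCNN()(upsampled)        # super-resolved output, same size as input
```

EDSR, Real-ESRGAN, and SwinIR are substantially larger architectures; SRCNN is sketched only because it is compact enough to show in full.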
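Similarly, a hedged sketch of the SelectFromModel (SFM) plus XGBoost pipeline, using scikit-learn and the xgboost package; the feature matrix, train/test split, and hyperparameters below are placeholders, not the study's.

```python
# Illustrative SFM + XGBoost LAI regression pipeline; data and settings are
# stand-ins, not the study's fused RGB+MS features or tuned parameters.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 40))        # stand-in for fused RGB texture + MS index features
y = 1.0 + 5.0 * rng.random(200)  # stand-in for ground-truth LAI values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# SelectFromModel keeps features whose importance exceeds the mean importance.
selector = SelectFromModel(XGBRegressor(n_estimators=200, random_state=0))
selector.fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

model = XGBRegressor(n_estimators=400, learning_rate=0.05, random_state=0)
model.fit(X_tr_sel, y_tr)
pred = model.predict(X_te_sel)

print("R^2:", r2_score(y_te, pred))
# Mean relative error, the metric reported in Main Results (e.g., 4.16%):
print("relative error (%):", 100 * np.mean(np.abs(pred - y_te) / y_te))
```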
Main Results
- Super-resolution (SR) performance declined with increasing altitude, with SwinIR achieving higher image reconstruction quality (PSNR and SSIM values) than SRCNN, EDSR, and Real-ESRGAN; a PSNR/SSIM computation sketch appears after this list.
- Texture features extracted from RGB images showed strong sensitivity to LAI.
- The XGBoost model leveraging fused RGB and multispectral data achieved the highest accuracy for LAI estimation (relative error: 4.16%), significantly outperforming models using only RGB data (5.25%) or only multispectral data (9.17%).
- The application of SR techniques significantly improved model accuracy at 30 m and 45 m altitudes. At 30 m, models incorporating Real-ESRGAN and SwinIR achieved an average R² of 0.86, while at 45 m, these methods yielded models with an average R² of 0.77. No noticeable precision enhancement was observed at 60 m.
- Among the features, MSAVI2 and NDVI were identified as the most influential for LAI prediction; both indices are computed in the sketch after this list.
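A short sketch of how the PSNR and SSIM values cited in the first result can be computed, here with scikit-image's metrics; the paper's exact implementation and data ranges may differ.

```python
# Illustrative PSNR/SSIM computation with scikit-image; inputs are dummy
# float images in [0, 1], standing in for ground-truth and SR-reconstructed tiles.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((256, 256, 3))  # stand-in ground-truth tile
reconstructed = np.clip(reference + 0.01 * rng.standard_normal((256, 256, 3)), 0, 1)

psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=1.0)
ssim = structural_similarity(reference, reconstructed, data_range=1.0, channel_axis=-1)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")  # higher is better for both
```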
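And a sketch of the two most influential indices, NDVI and MSAVI2, computed from NIR (840 nm) and Red (660 nm) reflectance using their standard formulas; the random inputs and band handling here are illustrative.

```python
# NDVI and MSAVI2 from NIR and Red reflectance rasters (standard formulas);
# the random arrays below stand in for calibrated MS600 Pro band images.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - R) / (NIR + R)."""
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero

def msavi2(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Modified Soil-Adjusted Vegetation Index 2 (self-adjusting soil factor)."""
    return (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2

rng = np.random.default_rng(0)
nir_band = rng.random((128, 128))  # stand-in NIR reflectance
red_band = rng.random((128, 128))  # stand-in Red reflectance
print(ndvi(nir_band, red_band).mean(), msavi2(nir_band, red_band).mean())
```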
Contributions
- Introduces a novel method for soybean LAI estimation by integrating UAV-based multi-sensor data with super-resolution (SR) techniques.
- Demonstrates that SR-enhanced RGB imagery, when fused with multispectral data, can effectively mitigate the negative impact of higher UAV flight altitudes on LAI estimation accuracy, allowing for more efficient large-scale monitoring.
- Provides a robust and efficient framework for UAV-based crop monitoring, enhancing data-driven decision-making in precision agriculture.
- Identifies the superior performance of Transformer-based (SwinIR) and GAN-based (Real-ESRGAN) SR methods over traditional CNN-based methods (SRCNN, EDSR) for agricultural image enhancement.
- Highlights the importance of multi-source data fusion (RGB+MS) and advanced machine learning models (XGBoost) for achieving high-precision LAI estimation.
Funding
- National Key Laboratory of Smart Farm Technologies and Systems
- Key R&D Program Project of Heilongjiang Province of China (Grant No. JD2023GJ01-13)
- Natural Science Foundation of Heilongjiang Province, China (Grant No. ZL2024C004)
- Programs for Science and Technology Development of Heilongjiang Province of China (Grant No. 2024ZX01A07)
- Key R&D Program of Heilongjiang Province of China (Grant No. 2022ZX01A23)
Citation
@article{Zhao2025UAV,
  author  = {Zhao, Zhenqing and Yao, Hong and Zeng, Depeng and Jiang, Zhenfeng and Zhang, Xihai},
  title   = {UAV multi-source data fusion with super-resolution for accurate soybean leaf area index estimation},
  journal = {Frontiers in Plant Science},
  year    = {2025},
  doi     = {10.3389/fpls.2025.1700660},
  url     = {https://doi.org/10.3389/fpls.2025.1700660}
}
Original Source: https://doi.org/10.3389/fpls.2025.1700660