Li et al. (2025) Achieving precise cropland parcel extraction from remote sensing images through integration of segment anything model and adaptive mask refinement
Identification
- Journal: Computers and Electronics in Agriculture
- Year: 2025
- Date: 2025-12-27
- Authors: H. G. Li, Jianyu Zhu, Xing Mao, Xueli Hao, S. W. Li, Qiangyi Yu, Yun Shi, Jianping Qian
- DOI: 10.1016/j.compag.2025.111347
Research Groups
- State Key Laboratory of Efficient Utilization of Arable Land in China, Institute of Agricultural Resources and Regional Planning, Chinese Academy of Agricultural Sciences, Beijing, China
- Xinjiang Institute of Ecology and Geography, Chinese Academy of Sciences, Urumqi, China
- Institute of Agricultural Information, Jiangsu Academy of Agricultural Sciences, Nanjing, China
- College of Mechanical and Electronic Engineering, Northwest A&F University, Yangling, China
Short Summary
This study proposes a novel unsupervised methodology integrating the Segment Anything Model (SAM) with an adaptive mask refinement strategy to precisely extract cropland parcels from remote sensing images under minimal supervision. The method significantly improves extraction accuracy over baseline SAM and outperforms five state-of-the-art methods, demonstrating strong generalization across diverse agricultural landscapes.
Objective
- To develop and evaluate a novel, unsupervised methodology that integrates the Segment Anything Model (SAM) with an adaptive mask refinement strategy to achieve precise cropland parcel extraction from remote sensing images under minimal supervision, addressing SAM's challenges in handling diverse and heterogeneous cropland types.
Study Configuration
- Spatial Scale: Approximately 160 km² across seven representative regions in China, the United States, and South Africa.
- Temporal Scale: Seasonal analysis, with images captured during the sowing period yielding the highest extraction accuracy.
Methodology and Data
- Models used: Segment Anything Model (SAM) integrated with an adaptive mask refinement strategy, which comprises:
- An adaptive prompt point module (leveraging superpixels).
- An overlap filtering module.
- A boundary-matching stitching module.
- Data sources: Diverse satellite images.
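The paper describes the three refinement modules only at a high level. As an illustration, a minimal sketch of two of them — generating one SAM prompt point per superpixel and filtering heavily overlapping masks — might look as follows. The function names are hypothetical, and the integer label map stands in for the output of a superpixel algorithm (the paper's specific choice is not reproduced here); the IoU threshold is likewise an assumed parameter, not a value from the paper.

```python
import numpy as np

def prompt_points_from_segments(labels: np.ndarray) -> np.ndarray:
    """Return one (row, col) prompt point per segment: its centroid.

    `labels` is an integer map of shape (H, W) in which each value marks one
    segment, standing in for a superpixel output (algorithm unspecified).
    """
    points = []
    for seg_id in np.unique(labels):
        rows, cols = np.nonzero(labels == seg_id)
        points.append((rows.mean(), cols.mean()))
    return np.array(points)

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over Union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / union if union else 0.0

def filter_overlapping_masks(masks: list, iou_thresh: float = 0.8) -> list:
    """Greedy overlap filtering: visit masks from largest to smallest and
    drop any mask whose IoU with an already-kept mask exceeds `iou_thresh`."""
    kept = []
    for m in sorted(masks, key=lambda m: m.sum(), reverse=True):
        if all(mask_iou(m, k) <= iou_thresh for k in kept):
            kept.append(m)
    return kept
```

In this sketch, each centroid would be passed to SAM as a foreground point prompt, and the filtering step de-duplicates the resulting per-point masks before stitching.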
Main Results
- The proposed approach achieved a recall (R) of 0.971, an Intersection over Union (IoU) of 0.908, and a Global Total Classification error (GTC) of 0.124, notable improvements over the baseline SAM.
- It outperformed five state-of-the-art methods, achieving a precision (P) of 0.960.
- The method demonstrated strong generalization across different cropland configurations, including large, regular parcels (e.g., Xinjiang, Illinois) and fragmented landscapes (e.g., Guangdong, Western Cape).
- Seasonal analysis confirmed that images captured during the sowing period yielded the highest extraction accuracy.
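For reference on the metrics quoted above, pixel-wise precision, recall, and IoU over binary extraction masks can be computed as in the generic sketch below. This does not reproduce the paper's exact per-parcel evaluation protocol or its GTC boundary metric, whose definition is not given in this summary.

```python
import numpy as np

def extraction_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Pixel-wise precision (P), recall (R), and IoU for boolean masks."""
    tp = np.logical_and(pred, truth).sum()    # true positives
    fp = np.logical_and(pred, ~truth).sum()   # false positives
    fn = np.logical_and(~pred, truth).sum()   # false negatives
    return {
        "P": tp / (tp + fp) if tp + fp else 0.0,
        "R": tp / (tp + fn) if tp + fn else 0.0,
        "IoU": tp / (tp + fp + fn) if tp + fp + fn else 0.0,
    }
```

A perfect extraction yields P = R = IoU = 1.0; the paper's reported values (P 0.960, R 0.971, IoU 0.908) sit close to that ceiling.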
Contributions
- Proposes a novel unsupervised methodology that integrates SAM with an adaptive mask refinement strategy for accurate cropland parcel extraction with minimal supervision.
- Introduces a three-component refinement strategy (adaptive prompt point, overlap filtering, boundary-matching stitching) to enhance SAM's performance on diverse and heterogeneous cropland types.
- Demonstrates significant quantitative improvements in key metrics (R, IoU, GTC, P) over both baseline SAM and existing state-of-the-art methods.
- Validates the method's robust generalization capability across varied agricultural landscapes and identifies optimal seasonal timing for image acquisition, highlighting its potential for scalable and accurate mapping.
Funding
- Not explicitly mentioned in the provided paper text.
Citation
@article{Li2025Achieving,
author = {Li, H. G. and Zhu, Jianyu and Mao, Xing and Hao, Xueli and Li, S. W. and Yu, Qiangyi and Shi, Yun and Qian, Jianping},
title = {Achieving precise cropland parcel extraction from remote sensing images through integration of segment anything model and adaptive mask refinement},
journal = {Computers and Electronics in Agriculture},
year = {2025},
doi = {10.1016/j.compag.2025.111347},
url = {https://doi.org/10.1016/j.compag.2025.111347}
}
Original Source: https://doi.org/10.1016/j.compag.2025.111347