Sun et al. (2025) Canopy3D-Net: Semantic segmentation of fruit tree canopies based on 3D point clouds
Identification
- Journal: Smart Agricultural Technology
- Year: 2025
- Date: 2025-11-25
- Authors: Zhilei Sun, Kangting Yan, Shaozhen Lin, Yeqing Lin, Zhijie Zhang, Wei Peng, Yubin Lan, Yali Zhang
- DOI: 10.1016/j.atech.2025.101673
Research Groups
- College of Engineering, South China Agricultural University, Guangzhou 510642, China
- College of Electronic Engineering (College of Artificial Intelligence), South China Agricultural University, Guangzhou 510642, China
- Department of Mechatronic Engineering, Guangdong Polytechnic Normal University, Guangzhou 510665, China
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology, Guangzhou 510642, China
Short Summary
This paper proposes Canopy3D-Net, a semantic segmentation network for 3D point clouds that accurately delineates fruit tree canopies in complex agricultural environments. The network achieves high segmentation performance and strong generalization, offering an efficient solution for precision agriculture and forestry.
Objective
- To develop an efficient, accurate, and robust semantic segmentation network (Canopy3D-Net) for fruit tree canopies from 3D point clouds, addressing challenges like irregular geometry and variable point density in agricultural settings.
- To provide a reliable algorithmic solution to support variable-rate operations, such as precision pesticide application and fertilization, as well as growth monitoring in precision agriculture.
Study Configuration
- Spatial Scale: A self-built citrus orchard dataset from Huangtian Town, Sihui City, Zhaoqing, Guangdong Province, China, comprising 456 point cloud samples (450 individual canopies and 6 background ground point clouds). Evaluated generalization on the public Semantic3D dataset, which includes over 4 billion points across 30 large-scale outdoor scenes.
- Temporal Scale: Data acquisition for the self-built dataset was conducted on November 7, 2024. Model performance was visualized across six citrus orchard datasets representing different growth states (spring and autumn).
Methodology and Data
- Models used: Canopy3D-Net (proposed), RandLA-Net, PointNet, PointNet++, Point Transformer, VoxelNet, Point Pillars, SPG, KPConv, GACNet, ShellNet, NeiEA-Net.
- Data sources:
- Self-built dataset: 3D point cloud data of a citrus orchard acquired using a DJI Matrice 300 RTK unmanned aerial vehicle (UAV) equipped with a Zenmuse L1 LiDAR sensor. Data processed with DJI Terra software.
- Public dataset: Semantic3D dataset for generalization evaluation.
Main Results
- On the self-built citrus orchard dataset, Canopy3D-Net achieved a mean Intersection over Union (mIoU) of 0.849, an Overall Accuracy (OA) of 0.938, an IoU for canopy of 0.832, and an IoU for background of 0.968.
- The proposed Height-Aware Random Sampling (HARS) method demonstrated faster convergence and higher initial overall accuracy than standard random sampling.
- The Local Multi-Feature Fusion (LMFF) module, particularly with Local Geometric Feature Enhancement (LGFE) in the first two encoder layers, significantly improved mIoU by 3.2%, OA by 1.7%, and IoU for canopy by 11.1% compared to a baseline without LGFE, while maintaining computational efficiency.
- The Adaptive Multi-Channel Attention (AMCA) mechanism improved mIoU by 1-3.3%, OA by 1.5-4.3%, and IoU for canopy by 8.6-12% compared to other attention mechanisms.
- A hybrid loss function combining Focal Loss and Dice Loss further improved segmentation accuracy, increasing mIoU by 3%, OA by 2.9%, and IoU for canopy by 3% compared to using Focal Loss alone, effectively mitigating class imbalance and enhancing boundary delineation.
- On the public Semantic3D dataset, Canopy3D-Net achieved an mIoU of 0.752 and an OA of 0.945, outperforming RandLA-Net by 1.7% in mIoU and 0.7% in OA, demonstrating strong generalization capabilities across diverse environments.
- Canopy3D-Net exhibited excellent portability and suitability for lightweight deployment, with a competitive parameter count (1.36 million), an inference time of 192 seconds on a large-scale scene, and low memory usage (2.87 gigabytes).
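The hybrid loss reported above combines Focal Loss (to counter the canopy/background class imbalance) and Dice Loss (to sharpen boundary delineation). The paper's exact formulation and weighting are not reproduced in this summary; the following is a minimal NumPy sketch of a standard binary Focal + Dice combination, with illustrative hyperparameters (`gamma`, `alpha`, and the mixing weights are assumptions, not the authors' values):

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: down-weights easy, well-classified points."""
    probs = np.clip(probs, eps, 1.0 - eps)
    pt = np.where(targets == 1, probs, 1.0 - probs)   # probability of the true class
    at = np.where(targets == 1, alpha, 1.0 - alpha)   # class-balancing factor
    return float(np.mean(-at * (1.0 - pt) ** gamma * np.log(pt)))

def dice_loss(probs, targets, eps=1e-7):
    """Soft Dice loss: 1 minus the overlap ratio between prediction and label."""
    inter = np.sum(probs * targets)
    denom = np.sum(probs) + np.sum(targets)
    return float(1.0 - (2.0 * inter + eps) / (denom + eps))

def hybrid_loss(probs, targets, w_focal=1.0, w_dice=1.0):
    """Weighted sum of the two terms; equal weights here are illustrative."""
    return w_focal * focal_loss(probs, targets) + w_dice * dice_loss(probs, targets)
```

Focal Loss handles the per-point class imbalance (background dominates the scene), while the Dice term operates on region overlap and so rewards accurate canopy boundaries; summing them targets both failure modes at once.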
Contributions
- Proposed Canopy3D-Net, a novel semantic segmentation network specifically designed for fruit tree canopy point clouds in agricultural and forestry environments.
- Introduced a Height-Aware Random Sampling (HARS) method to efficiently sample points while prioritizing upper canopy regions, leading to faster network convergence.
- Developed a Local Multi-Feature Fusion (LMFF) module that incorporates local geometric features (normal vectors and density) to enrich contextual information and better capture canopy edge structures.
- Integrated an Adaptive Multi-Channel Attention (AMCA) mechanism to enable the network to autonomously learn and prioritize discriminative feature channels for semantic segmentation.
- Designed a hybrid loss function (Focal Loss + Dice Loss) to effectively mitigate class imbalance and enhance the accuracy of boundary delineation between canopy and background.
- Demonstrated superior segmentation performance and robust generalization capability on both a self-built citrus orchard dataset and the public Semantic3D dataset, outperforming existing state-of-the-art models in agroforestry scenarios.
- Provided an efficient and accurate solution for point cloud processing in agricultural and forestry applications, supporting precision agriculture tasks such as variable-rate pesticide application and growth monitoring.
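The summary describes HARS only as random sampling that prioritizes upper canopy regions; the authors' exact scheme is not given here. A plausible sketch, assuming the bias is implemented as height-weighted sampling without replacement (the weighting function and `beta` exponent are assumptions for illustration):

```python
import numpy as np

def height_aware_sample(points, n_samples, beta=2.0, rng=None):
    """Subsample a point cloud with a bias toward higher (larger z) points.

    points:    (N, 3) array of xyz coordinates.
    n_samples: number of points to keep, drawn without replacement.
    beta:      > 0 sharpens the height bias; beta = 0 reduces to uniform sampling.
    """
    rng = np.random.default_rng(rng)
    z = points[:, 2]
    # Normalize heights to [0, 1]; the small floor keeps ground points selectable.
    z_norm = (z - z.min()) / (z.max() - z.min() + 1e-9)
    weights = (z_norm + 0.05) ** beta
    probs = weights / weights.sum()
    idx = rng.choice(len(points), size=n_samples, replace=False, p=probs)
    return points[idx]
```

Because the sampling probability grows with normalized height, upper-canopy structure is preserved at aggressive downsampling ratios while some low-lying points still survive, which is consistent with the faster-convergence behavior reported for HARS.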
Funding
- National Key Research and Development Program (Development of Long-Endurance, Heavy-Load, Gasoline-Powered Rotary-Wing Agricultural Drones) (2023YFD2000202).
Citation
@article{Sun2025Canopy3DNet,
author = {Sun, Zhilei and Yan, Kangting and Lin, Shaozhen and Lin, Yeqing and Zhang, Zhijie and Peng, Wei and Lan, Yubin and Zhang, Yali},
title = {Canopy3D-Net: Semantic segmentation of fruit tree canopies based on 3D point clouds},
journal = {Smart Agricultural Technology},
year = {2025},
doi = {10.1016/j.atech.2025.101673},
url = {https://doi.org/10.1016/j.atech.2025.101673}
}
Original Source: https://doi.org/10.1016/j.atech.2025.101673