Zhou et al. (2025) FoScenes: A high-fidelity, large-scale 3D forest plant area density product derived from open-access airborne lidar data
Identification
- Journal: Remote Sensing of Environment
- Year: 2025
- Date: 2025-11-21
- Authors: Chao Zhou, Tiangang Yin, Shanshan Wei, Bruce D. Cook, Weiwei Tan, Wai Yeung Yan, Qi Chen, Jean-Philippe Gastellu-Etchegorry
- DOI: 10.1016/j.rse.2025.115150
Research Groups
- Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
- JC STEM Lab of Earth Observations, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
- Research Centre for Artificial Intelligence in Geomatics, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
- Research Institute for Land and Space, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
- Centre for Remote Imaging, Sensing and Processing, National University of Singapore, Singapore
- Biospheric Science Laboratory, NASA Goddard Space Flight Center, Greenbelt, MD, USA
- Department of Civil Engineering, Toronto Metropolitan University, Toronto, Ontario, Canada
- Centre d’Etudes Spatiales de la Biosphère – UT3, CNES, CNRS, IRD, Université de Toulouse, 31401 Toulouse, Cedex 9, France
Short Summary
This study develops LS-PVlad, a novel workflow for large-scale 3D forest reconstruction from airborne lidar data, and introduces FoScenes, a high-fidelity plant area density product comprising 40 seamless scenes from 28 diverse forest sites, validated against field measurements and satellite products.
Objective
- To develop the Large-Scale Path Volume Leaf Area Density (LS-PVlad) workflow, a novel 3D forest reconstruction approach capable of producing extensive high-resolution 3D voxelized forest scenes (up to 110 km² with ≤2 m voxel size) from worldwide open-access airborne laser scanning (ALS) data.
- To establish and release FoScenes, a high-fidelity plant area density (PAD) product, addressing the limited spatial coverage of current lidar-based voxelization methods and thereby facilitating broad forest studies and Earth Observation Satellite (EOS) data interpretation.
Study Configuration
- Spatial Scale: Individual scenes range from approximately 50 to 11,000 hectares (0.5 to 110 square kilometers). The product comprises 40 seamless scenes from 28 diverse forest sites across North America (USA and Mexico). Voxel size is typically 2 meters, with some tests at 1 meter.
- Temporal Scale: ALS data acquired from 2011 to 2021, including multi-year and multi-season (leaf-on/leaf-off) acquisitions for specific sites (e.g., SERC 2012, 2017, 2021). Comparisons with MODIS (8-day product) and Sentinel-2 (cloud-free acquisitions within ±15 days of ALS).
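To give a sense of the data volumes these configurations imply, the arithmetic below sizes a float32 PAD voxel grid for the largest scene. The 110 km² extent and 2 m voxel size come from the summary above; the 50 m canopy height is an assumed illustrative value, not a figure from the paper.

```python
# Illustrative sizing of a voxelized PAD scene. Only the 110 km^2 extent and
# 2 m voxel size are from the product description; the canopy height is assumed.
def voxel_grid_size(area_km2: float, voxel_m: float, canopy_height_m: float):
    """Return (n_columns, n_levels, n_voxels, gibibytes) for a float32 PAD grid."""
    area_m2 = area_km2 * 1e6
    n_columns = area_m2 / voxel_m**2          # horizontal voxel_m x voxel_m cells
    n_levels = canopy_height_m / voxel_m      # vertical layers
    n_voxels = n_columns * n_levels
    gib = n_voxels * 4 / 2**30                # 4 bytes per float32 value
    return int(n_columns), int(n_levels), int(n_voxels), gib

cols, levels, voxels, gib = voxel_grid_size(110.0, 2.0, 50.0)
# ~27.5 million columns x 25 layers: roughly 2.6 GiB of raw float32 values,
# which is why scalability is a central design concern for LS-PVlad.
```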
Methodology and Data
- Models used:
  - Large-Scale Path Volume Leaf Area Density (LS-PVlad) workflow (developed in this study)
  - Path Volume Leaf Area Density (PVlad) model (core algorithm)
  - Discrete Anisotropic Radiative Transfer (DART) model (for RTM applications and simulations)
  - mSCOPE (referenced as a 1-D RTM)
  - PROSAIL with neural-network inversion (underlying the Sentinel-2 LAI product)
  - Sentinel-2 Level 2 Prototype Processor (SL2P)
  - VoxLAD model (for TLS-based LAD profile comparison in previous work)
- Data sources:
  - Airborne Laser Scanning (ALS) data from NASA Goddard’s LiDAR, Hyperspectral & Thermal Imager (G-LiHT) campaigns (open access; 40 acquisitions from 28 sites across the USA and Mexico).
  - Ground measurements for validation:
    - Litter collection (annual reference LAI for eight 100 m × 100 m plots at the Smithsonian Environmental Research Center (SERC), Maryland, USA).
    - Digital Hemispherical Photography (DHP) images (from the NEON and GBOV RM7 products; 14 plots at SERC).
  - Earth Observation Satellite (EOS) LAI products for inter-comparison:
    - MODIS Aqua and Terra combined 500 m 8-day LAI product (MCD15A2H, Collection 6.1).
    - Sentinel-2 (Sen-2) 20 m LAI product (derived from L1C/L2A reflectance images using the ESA SNAP biophysical processor).
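All of the validation and inter-comparison figures quoted below are root-mean-square errors in m²/m². As a reminder of what those numbers measure, here is a minimal RMSE sketch; the sample values are invented for illustration and are not the paper's plot data.

```python
import math

def rmse(pred, ref):
    """Root-mean-square error between predicted and reference LAI values."""
    assert len(pred) == len(ref) and len(pred) > 0
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(pred))

# Toy example (illustrative values only):
estimated = [4.8, 5.1, 3.9, 4.4]   # e.g., LS-PVlad plot-level LAI, m^2/m^2
reference = [5.0, 5.0, 4.2, 4.3]   # e.g., litter-collection reference LAI
err = rmse(estimated, reference)
```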
Main Results
- Developed FoScenes, the first high-fidelity Plant Area Density (PAD) product, comprising 40 seamless 3D voxelized forest scenes from 28 diverse sites across North America, covering a total forested area of approximately 74,760 hectares.
- LS-PVlad leaf area estimates were validated with high accuracy against field measurements at a deciduous forest site (SERC):
- Litter collection: Best RMSE = 0.35 m²/m² (with known VLIA), RMSE = 0.61 m²/m² (spherical VLIA assumption), RMSE = 0.56 m²/m² (leaf-only LAD).
- Digital Hemispherical Photography (DHP): RMSE = 0.46 m²/m² (overall), RMSE = 0.20 m²/m² (for temporally aligned data).
- FoScenes demonstrated the capability to capture forest structural changes due to disturbances (e.g., logging, beaver activity) and canopy development over time through multi-dimensional analyses (3D scenes, 2D PAI maps, and 1D vertical PAD profiles).
- Broad comparison between FoScenes PAI and MODIS LAI products showed high consistency (R² = 0.70, RMSE = 0.86 m²/m²), with evergreen needleleaf forests showing strong agreement.
- FoScenes PAI maps provide significantly greater spatial detail and more heterogeneous textures compared to Sentinel-2 SNAP LAI products, which generally underestimated LAI.
- Leaf area estimates at 1 m and 2 m voxel sizes showed minimal differences; the 1 m reconstruction slightly improved accuracy against litter-collection validation (RMSE reductions of 0.49 m²/m² for 2012 and 0.07 m²/m² for 2017).
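The 2D PAI maps compared against MODIS and Sentinel-2 LAI above follow from the 3D PAD product by vertical integration: PAI (m²/m²) is the sum of PAD (m²/m³) over the column's layers, multiplied by the voxel height. A minimal sketch of that collapse, using an invented 3-layer, 2×2 grid and the product's typical 2 m voxel size:

```python
# Sketch: collapsing a 3D PAD voxel grid to a 2D PAI map by vertical
# integration. The tiny grid below is invented for illustration.
VOXEL_DZ = 2.0  # vertical voxel size in metres (typical for FoScenes)

pad = [  # pad[z][y][x], plant area density in m^2/m^3 per voxel
    [[0.10, 0.05], [0.20, 0.00]],  # lowest layer
    [[0.30, 0.15], [0.25, 0.10]],
    [[0.05, 0.00], [0.10, 0.20]],  # top layer
]

def pai_map(pad_grid, dz):
    """Collapse a PAD grid indexed (z, y, x) to a 2D PAI map (y, x) in m^2/m^2."""
    ny, nx = len(pad_grid[0]), len(pad_grid[0][0])
    return [[sum(layer[y][x] for layer in pad_grid) * dz for x in range(nx)]
            for y in range(ny)]

pai = pai_map(pad, VOXEL_DZ)
```

At 20 m (Sentinel-2) or 500 m (MODIS) resolution, the comparison then amounts to averaging these column-wise PAI values over each coarser pixel footprint.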
Contributions
- Development of LS-PVlad, a novel, efficient, and scalable workflow for large-scale 3D forest reconstruction from airborne lidar data, overcoming limitations of previous methods in spatial coverage and computational cost.
- Creation and open release of FoScenes, the first high-fidelity, large-scale (up to 110 km² per scene) 3D voxel-based forest Plant Area Density (PAD) product, providing multi-dimensional forest characterizations for 28 diverse sites.
- Rigorous validation of the LS-PVlad model and FoScenes product against multi-year field measurements (litter collection, DHP) and broad inter-comparison with established satellite LAI products (MODIS, Sentinel-2), demonstrating high accuracy and consistency.
- Enabling extensive 3D Radiative Transfer Model (RTM) applications at various scales by providing essential, realistic scene inputs, which was previously hampered by the scarcity of such large-scale, detailed forest scenes.
- Facilitating continuous evaluation and optimization of global LAI products and supporting the generation of realistic training datasets for deep learning inversion models in remote sensing.
Funding
- Hong Kong RGC Early Career Scheme (Grant No. 25236824)
- Hong Kong Polytechnic University (Project ID: WZ87)
Citation
@article{Zhou2025FoScenes,
author = {Zhou, Chao and Yin, Tiangang and Wei, Shanshan and Cook, Bruce D. and Tan, Weiwei and Yan, Wai Yeung and Chen, Qi and Gastellu-Etchegorry, Jean-Philippe},
title = {FoScenes: A high-fidelity, large-scale 3D forest plant area density product derived from open-access airborne lidar data},
journal = {Remote Sensing of Environment},
year = {2025},
doi = {10.1016/j.rse.2025.115150},
url = {https://doi.org/10.1016/j.rse.2025.115150}
}
Original Source: https://doi.org/10.1016/j.rse.2025.115150