Wen et al. (2026) Semantic-Aware Remote Sensing Visual Question Answering via Segmentation-Guided Learning
Identification
- Journal: IEEE Transactions on Geoscience and Remote Sensing
- Year: 2026
- Date: 2026-01-01
- Authors: Shuyi Wen, Aihua Mao, Ran Yi, Yong-Jin Liu
- DOI: 10.1109/tgrs.2026.3663435
Research Groups
[Information not available from the provided text]
Short Summary
This paper proposes a semantic-aware framework for Visual Question Answering (VQA) on remote sensing imagery, using segmentation-guided learning to inject pixel-level semantic cues into the question-answering process.
Objective
- To develop a novel framework for remote sensing Visual Question Answering that integrates semantic awareness through segmentation-guided learning to improve the accuracy and relevance of answers.
Study Configuration
- Spatial Scale: Remote sensing imagery (e.g., satellite and aerial images).
- Temporal Scale: Not specified; likely single-date (static) image analysis.
Methodology and Data
- Models used: deep learning architectures that combine a Visual Question Answering (VQA) model with a semantic segmentation model, integrating segmentation information into the VQA task.
- Data sources: remote sensing image datasets, likely annotated with question-answer pairs and semantic segmentation masks for training and evaluation.
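The available text does not describe the paper's actual architecture. Purely as a generic illustration of what "segmentation-guided" fusion can mean in a VQA pipeline, the sketch below up-weights image-region features whose segmentation label is relevant to the question before pooling them; all names, shapes, and the weighting scheme are assumptions, not the authors' method.

```python
# Generic sketch of segmentation-guided feature pooling for VQA.
# NOT the authors' architecture: all names and weights are illustrative.

def segmentation_guided_pool(region_features, seg_labels, relevant_classes,
                             background_weight=0.1):
    """Pool per-region visual features, up-weighting regions whose
    segmentation class is relevant to the question.

    region_features: list of feature vectors (list of lists of floats)
    seg_labels: one segmentation class id per region
    relevant_classes: set of class ids implied by the question
    """
    # Full weight for question-relevant regions, small weight otherwise
    weights = [1.0 if c in relevant_classes else background_weight
               for c in seg_labels]
    total = sum(weights)
    dim = len(region_features[0])
    pooled = [0.0] * dim
    for w, feat in zip(weights, region_features):
        for i, v in enumerate(feat):
            pooled[i] += (w / total) * v
    return pooled


# Example: two regions; the question concerns class 3 (e.g., "water"),
# so the first region dominates the pooled representation.
features = [[1.0, 0.0], [0.0, 1.0]]
labels = [3, 7]
pooled = segmentation_guided_pool(features, labels, relevant_classes={3})
```

In a real system the pooled feature would be fused with a question embedding and decoded into an answer; this sketch only shows the segmentation-guided weighting step.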
Main Results
[Information not available from the provided text]
Contributions
[Information not available from the provided text]
Funding
[Information not available from the provided text]
Citation
@article{Wen2026SemanticAware,
  author  = {Wen, Shuyi and Mao, Aihua and Yi, Ran and Liu, Yong-Jin},
  title   = {Semantic-Aware Remote Sensing Visual Question Answering via Segmentation-Guided Learning},
  journal = {IEEE Transactions on Geoscience and Remote Sensing},
  year    = {2026},
  doi     = {10.1109/tgrs.2026.3663435},
  url     = {https://doi.org/10.1109/tgrs.2026.3663435}
}
Original Source: https://doi.org/10.1109/tgrs.2026.3663435