Article

Multi-Branch Attention Fusion Network for Cloud and Cloud Shadow Segmentation

1 Collaborative Innovation Center on Atmospheric Environment and Equipment Technology, Nanjing University of Information Science and Technology, Nanjing 210044, China
2 College of Information Science and Technology, Nanjing Forestry University, Nanjing 210000, China
3 Department of Computer Science, University of Reading, Whiteknights, Reading RG6 6DH, UK
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(13), 2308; https://doi.org/10.3390/rs16132308
Submission received: 29 May 2024 / Revised: 15 June 2024 / Accepted: 18 June 2024 / Published: 24 June 2024

Abstract

In remote sensing image processing, the segmentation of clouds and their shadows is a fundamental and vital task. For cloud images, traditional deep learning methods often generalize poorly and are prone to interference from ground objects and noise, which results not only in poor boundary segmentation but also in false and missed detections of small targets. To address these issues, we propose a multi-branch attention fusion network (MAFNet). In the encoder, dual branches of ResNet50 and the Swin Transformer extract features jointly. A multi-branch attention fusion module (MAFM) uses positional encoding to add position information, and the multi-branch aggregation attention (MAA) within the MAFM fully fuses the same-level deep features extracted by ResNet50 and the Swin Transformer, enhancing boundary segmentation and small-target detection. To address the difficulty of detecting small cloud and shadow targets, an information deep aggregation module (IDAM) performs multi-scale deep feature aggregation, supplementing high-level semantic information and improving small-target detection. To address rough segmentation boundaries, a recovery guided module (RGM) in the decoder enables the model to allocate attention effectively to complex boundary information, strengthening the network's focus on boundaries. Experimental results on the Cloud and Cloud Shadow, HRC-WHU, and SPARCS datasets show that MAFNet surpasses existing advanced semantic segmentation methods.
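The dual-branch fusion idea described above can be illustrated with a minimal, self-contained sketch. This is not the authors' released implementation: the class name BranchFusionBlock, the channel widths, and the simple channel-attention gate are assumptions standing in for the paper's MAFM/MAA design, shown here only to make concrete how same-level ResNet50 and Swin Transformer features might be merged before decoding.

```python
# Hypothetical sketch of fusing same-level CNN and Transformer features
# with a channel-attention gate; details are assumptions, not the paper's code.
import torch
import torch.nn as nn


class BranchFusionBlock(nn.Module):
    """Fuses one level of CNN (ResNet-style) and Transformer (Swin-style) features."""

    def __init__(self, cnn_channels: int, swin_channels: int, out_channels: int):
        super().__init__()
        # Project both branches to a common channel width.
        self.proj_cnn = nn.Conv2d(cnn_channels, out_channels, kernel_size=1)
        self.proj_swin = nn.Conv2d(swin_channels, out_channels, kernel_size=1)
        # Simple channel-attention gate over the concatenated features
        # (a stand-in for the paper's multi-branch aggregation attention).
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * out_channels, out_channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, 2 * out_channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(2 * out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, cnn_feat: torch.Tensor, swin_feat: torch.Tensor) -> torch.Tensor:
        # Resample the transformer features to the CNN spatial grid if needed.
        if swin_feat.shape[-2:] != cnn_feat.shape[-2:]:
            swin_feat = nn.functional.interpolate(
                swin_feat, size=cnn_feat.shape[-2:], mode="bilinear", align_corners=False
            )
        x = torch.cat([self.proj_cnn(cnn_feat), self.proj_swin(swin_feat)], dim=1)
        x = x * self.gate(x)   # re-weight channels contributed by both branches
        return self.fuse(x)    # merged feature map passed on to the decoder


if __name__ == "__main__":
    block = BranchFusionBlock(cnn_channels=512, swin_channels=384, out_channels=256)
    cnn_feat = torch.randn(1, 512, 32, 32)    # e.g. one ResNet50 stage output
    swin_feat = torch.randn(1, 384, 16, 16)   # e.g. one Swin Transformer stage output
    print(block(cnn_feat, swin_feat).shape)   # torch.Size([1, 256, 32, 32])
```

In the paper, such a fusion block would sit at each encoder level, with the IDAM and RGM operating on the aggregated multi-scale outputs; those modules are not reproduced here.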
Keywords: cloud and cloud shadow; multi-branch; boundary segmentation; small target detection

Share and Cite

MDPI and ACS Style

Gu, H.; Gu, G.; Liu, Y.; Lin, H.; Xu, Y. Multi-Branch Attention Fusion Network for Cloud and Cloud Shadow Segmentation. Remote Sens. 2024, 16, 2308. https://doi.org/10.3390/rs16132308

AMA Style

Gu H, Gu G, Liu Y, Lin H, Xu Y. Multi-Branch Attention Fusion Network for Cloud and Cloud Shadow Segmentation. Remote Sensing. 2024; 16(13):2308. https://doi.org/10.3390/rs16132308

Chicago/Turabian Style

Gu, Hongde, Guowei Gu, Yi Liu, Haifeng Lin, and Yao Xu. 2024. "Multi-Branch Attention Fusion Network for Cloud and Cloud Shadow Segmentation" Remote Sensing 16, no. 13: 2308. https://doi.org/10.3390/rs16132308

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.

