Search Results (251)

Search Parameters:
Keywords = cloud and cloud shadow

32 pages, 30788 KiB  
Article
Illumination and Shadows in Head Rotation: Experiments with Denoising Diffusion Models
by Andrea Asperti, Gabriele Colasuonno and Antonio Guerra
Electronics 2024, 13(15), 3091; https://doi.org/10.3390/electronics13153091 - 5 Aug 2024
Abstract
Accurately modeling the effects of illumination and shadows during head rotation is critical in computer vision for enhancing image realism and reducing artifacts. This study delves into the latent space of denoising diffusion models to identify compelling trajectories that can express continuous head rotation under varying lighting conditions. A key contribution of our work is the generation of additional labels from the CelebA dataset, categorizing images into three groups based on prevalent illumination direction: left, center, and right. These labels play a crucial role in our approach, enabling more precise manipulations and improved handling of lighting variations. Leveraging a recent embedding technique for Denoising Diffusion Implicit Models (DDIM), our method achieves noteworthy manipulations, encompassing a wide rotation angle of ±30°, while preserving individual distinct characteristics even under challenging illumination conditions. Our methodology involves computing trajectories that approximate clouds of latent representations of dataset samples with different yaw rotations through linear regression. Specific trajectories are obtained by analyzing subsets of data that share significant attributes with the source image, including light direction. Notably, our approach does not require any specific training of the generative model for the task of rotation; we merely compute and follow specific trajectories in the latent space of a pre-trained face generation model. This article showcases the potential of our approach and its current limitations through a qualitative discussion of notable examples. This study contributes to the ongoing advancements in representation learning and the semantic investigation of the latent space of generative models.
(This article belongs to the Special Issue Generative AI and Its Transformative Potential)
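The trajectory computation the abstract describes — fitting a line through clouds of latent codes ordered by yaw angle via linear regression — can be sketched as follows. This is a minimal NumPy illustration; the function names, array shapes, and the plain least-squares setup are assumptions, not the authors' implementation.

```python
import numpy as np

def fit_latent_trajectory(latents, yaws):
    """Fit a linear trajectory z(yaw) = z0 + yaw * d through latent codes.

    latents: (N, D) latent representations of dataset samples.
    yaws:    (N,) yaw angles (degrees) of those samples.
    Returns the intercept z0 and direction d, each of shape (D,).
    """
    # Ordinary least squares with design matrix [1, yaw], solved jointly
    # for every latent dimension.
    A = np.stack([np.ones_like(yaws), yaws], axis=1)      # (N, 2)
    coeffs, *_ = np.linalg.lstsq(A, latents, rcond=None)  # (2, D)
    return coeffs[0], coeffs[1]

def follow_trajectory(z_src, yaw_src, yaw_dst, d):
    """Move a source latent along the fitted direction to a target yaw."""
    return z_src + (yaw_dst - yaw_src) * d
```

In the paper's setting the fit would be restricted to the subset of samples sharing attributes (e.g., light direction) with the source image; here the fit simply uses all supplied samples.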

25 pages, 20390 KiB  
Article
A New and Robust Index for Water Body Extraction from Sentinel-2 Imagery
by Zhenfeng Su, Longwei Xiang, Holger Steffen, Lulu Jia, Fan Deng, Wenliang Wang, Keyu Hu, Jingjing Guo, Aile Nong, Haifu Cui and Peng Gao
Remote Sens. 2024, 16(15), 2749; https://doi.org/10.3390/rs16152749 - 27 Jul 2024
Abstract
Land surface water is a key part of the global ecosystem balance and hydrological cycle. Remote sensing has become an effective tool for its spatio-temporal monitoring. However, remote sensing results exemplified in so-called water indices are subject to several limitations. This paper proposes a new and effective water index called the Sentinel Multi-Band Water Index (SMBWI) to extract water bodies in complex environments from Sentinel-2 satellite imagery. Individual tests explore the effectiveness of the SMBWI in eliminating the interference of various cover features. The Iterative Self-Organizing Data Analysis Technique Algorithm (ISODATA) method and the confusion matrix, along with derived accuracy evaluation indicators, are used to provide a threshold reference when extracting water bodies and to evaluate the accuracy of the extraction results, respectively. The SMBWI and eight other commonly used water indices are compared qualitatively through visual inspection and quantitatively through accuracy evaluation indicators. Here, the SMBWI is proven to be the most effective at suppressing interference from buildings and their shadows, cultivated lands, vegetation, clouds and their shadows, and alpine terrain with bare ground and glaciers when extracting water bodies. The overall accuracy in all tests was consistently greater than 96.5%. The SMBWI is proven to have a high ability to identify mixed pixels of water and non-water, with the lowest total error among the nine water indices. Most notably, better results are obtained when extracting water bodies under interfering environments of cover features. Therefore, we propose that our novel and robust water index, the SMBWI, is ready to be used for mapping land surface water with high accuracy.
(This article belongs to the Special Issue Remote Sensing for Surface Water Monitoring)
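The accuracy evaluation described — a confusion matrix with derived indicators such as overall accuracy — can be sketched for a binary water/non-water mask. A minimal generic illustration; the function names are not from the paper.

```python
import numpy as np

def confusion_matrix(truth, pred):
    """2x2 confusion matrix for binary water (1) / non-water (0) masks."""
    truth = np.asarray(truth).ravel()
    pred = np.asarray(pred).ravel()
    cm = np.zeros((2, 2), dtype=int)
    for t, p in zip(truth, pred):
        cm[t, p] += 1  # rows: reference class, columns: mapped class
    return cm

def overall_accuracy(cm):
    """Fraction of pixels on the diagonal (correctly classified)."""
    return np.trace(cm) / cm.sum()
```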

23 pages, 12771 KiB  
Article
Harmonized Landsat and Sentinel-2 Data with Google Earth Engine
by Elias Fernando Berra, Denise Cybis Fontana, Feng Yin and Fabio Marcelo Breunig
Remote Sens. 2024, 16(15), 2695; https://doi.org/10.3390/rs16152695 - 23 Jul 2024
Abstract
Continuous and dense time series of satellite remote sensing data are needed for several land monitoring applications, including vegetation phenology, in-season crop assessments, and improving land use and land cover classification. Supporting such applications at medium to high spatial resolution may be challenging with a single optical satellite sensor, as the frequency of good-quality observations can be low. To optimize good-quality data availability, some studies propose harmonized databases. This work aims at developing an ‘all-in-one’ Google Earth Engine (GEE) web-based workflow to produce harmonized surface reflectance data from Landsat-7 (L7) ETM+, Landsat-8 (L8) OLI, and Sentinel-2 (S2) MSI top of atmosphere (TOA) reflectance data. Six major processing steps to generate a new source of near-daily Harmonized Landsat and Sentinel (HLS) reflectance observations at 30 m spatial resolution are proposed and described: band adjustment, atmospheric correction, cloud and cloud shadow masking, view and illumination angle adjustment, co-registration, and reprojection and resampling. The HLS is applied to six equivalent spectral bands, resulting in a surface nadir BRDF-adjusted reflectance (NBAR) time series gridded to a common pixel resolution, map projection, and spatial extent. The spectrally corresponding bands and derived Normalized Difference Vegetation Index (NDVI) were compared, and their sensor differences were quantified by regression analyses. Examples of HLS time series are presented for two potential applications: agricultural and forest phenology. The HLS product is also validated against ground measurements of NDVI, achieving very similar temporal trajectories and magnitude of values (R2 = 0.98). The workflow and script presented in this work may be useful for the scientific community aiming to take advantage of multi-sensor harmonized time series of optical data.
(This article belongs to the Section Forest Remote Sensing)
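The band comparison workflow — computing NDVI from harmonized bands and quantifying sensor differences by regression — can be sketched as below. The function names and the simple per-sample setup are illustrative assumptions, not the published GEE script.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from two reflectance bands."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

def sensor_regression(ndvi_a, ndvi_b):
    """Least-squares slope, intercept, and R^2 between two sensors' NDVI."""
    slope, intercept = np.polyfit(ndvi_a, ndvi_b, 1)
    r = np.corrcoef(ndvi_a, ndvi_b)[0, 1]
    return slope, intercept, r ** 2
```

A slope near 1, intercept near 0, and R² near 1 would indicate that the two sensors' harmonized NDVI series agree, as reported for the HLS validation.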

26 pages, 2861 KiB  
Article
Attention Guide Axial Sharing Mixed Attention (AGASMA) Network for Cloud Segmentation and Cloud Shadow Segmentation
by Guowei Gu, Zhongchen Wang, Liguo Weng, Haifeng Lin, Zikai Zhao and Liling Zhao
Remote Sens. 2024, 16(13), 2435; https://doi.org/10.3390/rs16132435 - 2 Jul 2024
Abstract
Segmenting clouds and their shadows is a critical challenge in remote sensing image processing. The shape, texture, lighting conditions, and background of clouds and their shadows impact the effectiveness of cloud detection. Currently, architectures that maintain high resolution throughout the entire information-extraction process are rapidly emerging. This parallel architecture, combining high and low resolutions, produces detailed high-resolution representations, enhancing segmentation prediction accuracy. This paper continues the parallel high- and low-resolution architecture. When handling high- and low-resolution images, this paper employs a hybrid approach combining the Transformer and CNN models. This method facilitates interaction between the two models, enabling the extraction of both semantic and spatial details from the images. To address the challenge of inadequate fusion and significant information loss between high- and low-resolution images, this paper introduces a method based on ASMA (Axial Sharing Mixed Attention). This approach establishes pixel-level dependencies between high-resolution and low-resolution images, aiming to enhance the efficiency of image fusion. In addition, to enhance the effective focus on critical information in remote sensing images, the AGM (Attention Guide Module) is introduced to integrate attention elements from the original features into ASMA, alleviating the insufficient channel modeling of the self-attention mechanism. Our experimental results on the Cloud and Cloud Shadow dataset, the SPARCS dataset, and the CSWV dataset demonstrate the effectiveness of our method, surpassing state-of-the-art techniques for cloud and cloud shadow segmentation.

29 pages, 26734 KiB  
Article
Variational-Based Spatial–Temporal Approximation of Images in Remote Sensing
by Majid Amirfakhrian and Faramarz F. Samavati
Remote Sens. 2024, 16(13), 2349; https://doi.org/10.3390/rs16132349 - 27 Jun 2024
Abstract
Cloud cover and shadows often hinder the accurate analysis of satellite images, impacting various applications, such as digital farming, land monitoring, environmental assessment, and urban planning. This paper presents a new approach to enhancing cloud-contaminated satellite images using a novel variational model for approximating the combination of the temporal and spatial components of satellite imagery. Leveraging this model, we derive two spatial–temporal methods containing an algorithm that computes the missing or contaminated data in cloudy images using the seamless Poisson blending method. In the first method, we extend the Poisson blending method to compute the spatial–temporal approximation. The pixel-wise temporal approximation is used as a guiding vector field for Poisson blending. In the second method, we use the rate of change in the temporal domain to divide the missing region into low-variation and high-variation sub-regions to better guide Poisson blending; this variation-based method considers the temporal variation in specific regions to further refine the spatial–temporal approximation. The proposed methods have the same complexity as conventional methods, which is linear in the number of pixels in the region of interest. Our comprehensive evaluation demonstrates the effectiveness of the proposed methods through quantitative metrics, including the Root Mean Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Metric (SSIM), revealing significant improvements over existing approaches. Additionally, the evaluations offer insights into how to choose between our first and second methods for specific scenarios. This consideration takes into account the temporal and spatial resolutions, as well as the scale and extent of the missing data.
(This article belongs to the Special Issue Remote Sensing in Environmental Modelling)
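Two of the quantitative metrics named in the evaluation, RMSE and PSNR, can be computed as follows (a minimal generic sketch; SSIM is omitted because it requires windowed statistics):

```python
import numpy as np

def rmse(ref, est):
    """Root Mean Square Error between a reference and a reconstruction."""
    ref = np.asarray(ref, dtype=float)
    est = np.asarray(est, dtype=float)
    return np.sqrt(np.mean((ref - est) ** 2))

def psnr(ref, est, peak=1.0):
    """Peak Signal-to-Noise Ratio in dB for images scaled to [0, peak]."""
    e = rmse(ref, est)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)
```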

22 pages, 1266 KiB  
Article
Multi-Branch Attention Fusion Network for Cloud and Cloud Shadow Segmentation
by Hongde Gu, Guowei Gu, Yi Liu, Haifeng Lin and Yao Xu
Remote Sens. 2024, 16(13), 2308; https://doi.org/10.3390/rs16132308 - 24 Jun 2024
Abstract
In remote sensing image processing, the segmentation of clouds and their shadows is a fundamental and vital task. For cloud images, traditional deep learning methods often have weak generalization capabilities and are prone to interference from ground objects and noise, which not only results in poor boundary segmentation but also causes false and missed detections of small targets. To address these issues, we proposed a multi-branch attention fusion network (MAFNet). In the encoder section, the dual branches of ResNet50 and the Swin transformer extract features together. A multi-branch attention fusion module (MAFM) uses positional encoding to add position information. Additionally, multi-branch aggregation attention (MAA) in the MAFM fully fuses the same level of deep features extracted by ResNet50 and the Swin transformer, which enhances the boundary segmentation ability and small target detection capability. To address the challenge of detecting small cloud and shadow targets, an information deep aggregation module (IDAM) was introduced to perform multi-scale deep feature aggregation, which supplements high semantic information, improving small target detection. For the problem of rough segmentation boundaries, a recovery guided module (RGM) was designed in the decoder section, which enables the model to effectively allocate attention to complex boundary information, enhancing the network’s focus on boundary information. Experimental results on the Cloud and Cloud Shadow dataset, HRC-WHU dataset, and SPARCS dataset indicate that MAFNet surpasses existing advanced semantic segmentation techniques.

20 pages, 7213 KiB  
Article
Improvement of High-Resolution Daytime Fog Detection Algorithm Using GEO-KOMPSAT-2A/Advanced Meteorological Imager Data with Optimization of Background Field and Threshold Values
by Ji-Hye Han, Myoung-Seok Suh, Ha-Yeong Yu and So-Hyeong Kim
Remote Sens. 2024, 16(11), 2031; https://doi.org/10.3390/rs16112031 - 5 Jun 2024
Cited by 1
Abstract
This study aimed to improve the daytime fog detection algorithm GK2A_HR_FDA using the GEO-KOMPSAT-2A (GK2A) satellite by increasing the resolution (2 km to 500 m), improving the surface temperature predicted by the numerical model, and optimizing some threshold values. GK2A_HR_FDA uses the temperature predicted by the numerical model to distinguish between fog and low clouds and evaluates the fog detection level using ground observation visibility data. To correct the errors of the predicted temperature, a dynamic bias correction (DBC) technique was developed that reflects the geographic location, time, and altitude in real time. As the predicted temperature improved significantly after DBC application, the fog detection level improved (FAR: −0.02–−0.06; bias: −0.07–−0.23) regardless of the training and validation cases and the validation method. In most cases, the fog detection level improved due to DBC and threshold adjustment. Still, the detection level was abnormally low in some cases due to background reflectance problems caused by cloud shadow effects and navigation errors. After removing the navigation errors and cloud shadow effects, the fog detection level was greatly improved. Therefore, it is necessary to improve navigation accuracy and develop techniques for removing cloud shadows to further improve fog detection.
(This article belongs to the Section Satellite Missions for Earth and Planetary Exploration)
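The core idea of a dynamic bias correction — estimating the numerical model's temperature bias from recent model/observation pairs and subtracting it from the current forecast — can be sketched as below. This is a deliberate simplification: the published DBC also reflects geographic location, time of day, and altitude in real time, which this per-pixel placeholder ignores.

```python
import numpy as np

def dynamic_bias_correction(model_temp, history_model, history_obs):
    """Subtract the model's recent mean temperature bias, per pixel.

    model_temp:    (H, W) current predicted surface temperature.
    history_model: (T, H, W) recent predicted temperatures.
    history_obs:   (T, H, W) matching observed temperatures.
    """
    # Mean bias over the recent window; a positive bias means the model
    # runs warm, so it is subtracted from the current forecast.
    bias = np.mean(np.asarray(history_model, dtype=float)
                   - np.asarray(history_obs, dtype=float), axis=0)
    return np.asarray(model_temp, dtype=float) - bias
```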

19 pages, 4887 KiB  
Article
AFMUNet: Attention Feature Fusion Network Based on a U-Shaped Structure for Cloud and Cloud Shadow Detection
by Wenjie Du, Zhiyong Fan, Ying Yan, Rui Yu and Jiazheng Liu
Remote Sens. 2024, 16(9), 1574; https://doi.org/10.3390/rs16091574 - 28 Apr 2024
Abstract
Cloud detection technology is crucial in remote sensing image processing. While cloud detection is a mature research field, challenges persist in detecting clouds on reflective surfaces like ice, snow, and sand. Particularly, the detection of cloud shadows remains a significant area of concern within cloud detection technology. To address the above problems, a convolutional self-attention mechanism feature fusion network model based on a U-shaped structure is proposed. The model employs an encoder–decoder structure based on UNet. The encoder performs down-sampling to extract deep features, while the decoder uses up-sampling to reconstruct the feature map. To capture the key features of the image, the Channel Spatial Attention Module (CSAM) is introduced in this work. This module incorporates an attention mechanism for adaptive field-of-view adjustments. In the up-sampling process, different channels are selected to obtain rich information. Contextual information is integrated to improve the extraction of edge details. Feature fusion at the same layer between up-sampling and down-sampling is carried out. The Feature Fusion Module (FFM) facilitates the positional distribution of the image on a pixel-by-pixel basis. A clear boundary is distinguished using an innovative loss function. Finally, the experimental results on the dataset GF1_WHU show that the segmentation results of this method are better than the existing methods. Hence, our model is of great significance for practical cloud shadow segmentation.
(This article belongs to the Special Issue Remote Sensing Image Classification and Semantic Segmentation)

40 pages, 18945 KiB  
Article
Sensitivity of Sentinel-1 Backscatter to Management-Related Disturbances in Temperate Forests
by Sietse van der Woude, Johannes Reiche, Frank Sterck, Gert-Jan Nabuurs, Marleen Vos and Martin Herold
Remote Sens. 2024, 16(9), 1553; https://doi.org/10.3390/rs16091553 - 27 Apr 2024
Abstract
The rapid and accurate detection of forest disturbances in temperate forests has become increasingly crucial as policy demands and climate pressure on these forests rise. The cloud-penetrating Sentinel-1 radar constellation provides frequent and high-resolution observations with global coverage, but few studies have assessed its potential for mapping disturbances in temperate forests. This study investigated the sensitivity of temporally dense C-band backscatter data from Sentinel-1 to varying management-related disturbance intensities in temperate forests, and the influence of confounding factors such as seasonality, radar shadow, and layover on the backscatter signal at a pixel level. A unique network of 14 experimental sites in the Netherlands was used, in which trees were removed to simulate different levels of management-related forest disturbances across a range of representative temperate forest species. Results from six years (2016–2022) of Sentinel-1 observations indicated that backscatter seasonality depends on species phenology and the degree of canopy cover. The backscatter change magnitude was sensitive to medium- and high-severity disturbances, with radar layover having a stronger impact on the backscatter disturbance signal than radar shadow. Combining ascending and descending orbits and complementary polarizations, compared to a single orbit or polarization, resulted in a 34% mean increase in disturbance detection sensitivity across all disturbance severities. This study underlines the importance of linking high-quality experimental ground-based data to dense satellite time series to improve future forest disturbance mapping. It suggests a key role for C-band backscatter time series in the rapid and accurate large-area monitoring of temperate forests and, in particular, of the disturbances imposed by logging practices or tree mortality driven by climate change factors.

19 pages, 8487 KiB  
Article
MRFA-Net: Multi-Scale Receptive Feature Aggregation Network for Cloud and Shadow Detection
by Jianxiang Wang, Yuanlu Li, Xiaoting Fan, Xin Zhou and Mingxuan Wu
Remote Sens. 2024, 16(8), 1456; https://doi.org/10.3390/rs16081456 - 20 Apr 2024
Abstract
The effective segmentation of clouds and cloud shadows is crucial for surface feature extraction, climate monitoring, and atmospheric correction, but it remains a critical challenge in remote sensing image processing. Cloud features are intricate, with varied distributions and unclear boundaries, making accurate extraction difficult, with only a few networks addressing this challenge. To tackle these issues, we introduce a multi-scale receptive field aggregation network (MRFA-Net). The MRFA-Net comprises an MRFA-Encoder and MRFA-Decoder. Within the encoder, the net includes the asymmetric feature extractor module (AFEM) and multi-scale attention, which capture diverse local features and enhance contextual semantic understanding, respectively. The MRFA-Decoder includes the multi-path decoder module (MDM) for blending features and the global feature refinement module (GFRM) for optimizing information via learnable matrix decomposition. Experimental results demonstrate that our model excelled in generalization and segmentation performance when addressing various complex backgrounds and different category detections, exhibiting advantages in terms of parameter efficiency and computational complexity, with the MRFA-Net achieving a mean intersection over union (MIoU) of 94.12% on our custom Cloud and Shadow dataset, and 87.54% on the open-source HRC_WHU dataset, outperforming other models by at least 0.53% and 0.62%. The proposed model demonstrates applicability in practical scenarios where features are difficult to distinguish.
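The headline metric here, mean intersection over union (MIoU), can be computed from label maps as follows (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def mean_iou(truth, pred, num_classes):
    """Mean intersection over union across classes present in the data."""
    truth = np.asarray(truth).ravel()
    pred = np.asarray(pred).ravel()
    ious = []
    for c in range(num_classes):
        inter = np.sum((truth == c) & (pred == c))
        union = np.sum((truth == c) | (pred == c))
        if union > 0:          # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```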

21 pages, 20756 KiB  
Article
A Novel Method for Cloud and Cloud Shadow Detection Based on the Maximum and Minimum Values of Sentinel-2 Time Series Images
by Kewen Liang, Gang Yang, Yangyan Zuo, Jiahui Chen, Weiwei Sun, Xiangchao Meng and Binjie Chen
Remote Sens. 2024, 16(8), 1392; https://doi.org/10.3390/rs16081392 - 15 Apr 2024
Abstract
Automatic and accurate detection of clouds and cloud shadows is a critical aspect of optical remote sensing image preprocessing. This paper provides a time series maximum and minimum mask method (TSMM) for cloud and cloud shadow detection. Firstly, the Cloud Score+S2_HARMONIZED (CS+S2) is employed as a preliminary mask for clouds and cloud shadows. Secondly, we calculate the ratio of the maximum and sub-maximum values of the blue band in the time series, as well as the ratio of the minimum and sub-minimum values of the near-infrared band in the time series, to eliminate noise from the time series data. Finally, the maximum value of the clear blue band and the minimum value of the near-infrared band after noise removal are employed for cloud and cloud shadow detection, respectively. A national and a global dataset were used to validate the TSMM, and it was quantitatively compared against five other advanced methods or products. When clouds and cloud shadows are detected simultaneously, in the S2ccs dataset, the overall accuracy (OA) reaches 0.93 and the F1 score reaches 0.85. Compared with the most advanced CS+S2, there are increases of 3% and 9%, respectively. In the CloudSEN12 dataset, compared with CS+S2, the producer’s accuracy (PA) and F1 score show increases of 10% and 4%, respectively. Additionally, when applied to Landsat-8 images, TSMM outperforms Fmask, demonstrating its strong generalization capability.
(This article belongs to the Special Issue Satellite-Based Cloud Climatologies)
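The noise-removal step of TSMM — comparing the time-series maximum of the blue band against its runner-up, and the minimum of the near-infrared band against its runner-up — can be sketched as below. The ratio threshold is an illustrative placeholder, not the paper's value, and the final cloud/shadow thresholding against these references is omitted.

```python
import numpy as np

def tsmm_reference(blue, nir, ratio_thr=1.5):
    """Noise-cleaned time-series extremes used as the detection reference.

    blue, nir: (T, H, W) reflectance stacks for the blue and NIR bands.
    ratio_thr is an illustrative placeholder value.
    Returns per-pixel clear blue maxima and clear NIR minima.
    """
    blue_sorted = np.sort(np.asarray(blue, dtype=float), axis=0)
    nir_sorted = np.sort(np.asarray(nir, dtype=float), axis=0)
    blue_max, blue_submax = blue_sorted[-1], blue_sorted[-2]
    nir_min, nir_submin = nir_sorted[0], nir_sorted[1]
    # An extreme far away from its runner-up is treated as time-series
    # noise and replaced by the sub-extreme value.
    noisy_max = blue_max / np.maximum(blue_submax, 1e-6) > ratio_thr
    noisy_min = nir_submin / np.maximum(nir_min, 1e-6) > ratio_thr
    clear_blue_max = np.where(noisy_max, blue_submax, blue_max)
    clear_nir_min = np.where(noisy_min, nir_submin, nir_min)
    return clear_blue_max, clear_nir_min
```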

18 pages, 21669 KiB  
Article
Shadow-Aware Point-Based Neural Radiance Fields for High-Resolution Remote Sensing Novel View Synthesis
by Li Li, Yongsheng Zhang, Ziquan Wang, Zhenchao Zhang, Zhipeng Jiang, Ying Yu, Lei Li and Lei Zhang
Remote Sens. 2024, 16(8), 1341; https://doi.org/10.3390/rs16081341 - 11 Apr 2024
Abstract
Novel view synthesis using neural radiance fields (NeRFs) for remote sensing images is important for various applications. Traditional methods often use implicit representations for modeling, which have slow rendering speeds and cannot directly obtain the structure of the 3D scene. Some studies have introduced explicit representations, such as point clouds and voxels, but this kind of method often produces holes when processing large-scale scenes from remote sensing images. In addition, NeRFs with explicit 3D expression are more susceptible to transient phenomena (shadows and dynamic objects) and even plane holes. In order to address these issues, we propose an improved method for synthesizing new views of remote sensing images based on Point-NeRF. Our main idea focuses on two aspects: filling in the spatial structure and reconstructing ray-marching rendering using shadow information. First, we introduce hole detection, conducting inverse projection to acquire candidate points that are adjusted during training to fill the holes. We also design incremental weights to reduce the probability of pruning the plane points. We introduce a geometrically consistent shadow model based on a point cloud to divide the radiance into albedo and irradiance, allowing the model to predict the albedo of each point, rather than directly predicting the radiance. Intuitively, our proposed method uses a sparse point cloud generated with traditional methods for initialization and then builds the dense radiance field. We evaluate our method on the LEVIR_NVS dataset, demonstrating its superior performance compared to state-of-the-art methods. Overall, our work provides a promising approach for synthesizing new viewpoints of remote sensing images.

46 pages, 18613 KiB  
Article
Improved Landsat Operational Land Imager (OLI) Cloud and Shadow Detection with the Learning Attention Network Algorithm (LANA)
by Hankui K. Zhang, Dong Luo and David P. Roy
Remote Sens. 2024, 16(8), 1321; https://doi.org/10.3390/rs16081321 - 9 Apr 2024
Abstract
Landsat cloud and cloud shadow detection has a long heritage based on the application of empirical spectral tests to single image pixels, including the Landsat product Fmask algorithm, which uses spectral tests applied to optical and thermal bands to detect clouds and uses the sun-sensor-cloud geometry to detect shadows. Since the Fmask was developed, convolutional neural network (CNN) algorithms, and in particular U-Net algorithms (a type of CNN with a U-shaped network structure), have been developed and are applied to pixels in square patches to take advantage of both spatial and spectral information. The purpose of this study was to develop and assess a new U-Net algorithm that classifies Landsat 8/9 Operational Land Imager (OLI) pixels with higher accuracy than the Fmask algorithm. The algorithm, termed the Learning Attention Network Algorithm (LANA), is a form of U-Net but with an additional attention mechanism (a type of network structure) that, unlike conventional U-Net, uses more spatial pixel information across each image patch. The LANA was trained using 16,861 512 × 512 30 m pixel annotated Landsat 8 OLI patches extracted from 27 images and 69 image subsets that are publicly available and have been used by others for cloud mask algorithm development and assessment. The annotated data were manually refined to improve the annotation and were supplemented with another four annotated images selected to include clear, completely cloudy, and developed land images. The LANA classifies image pixels as either clear, thin cloud, cloud, or cloud shadow. To evaluate the classification accuracy, five annotated Landsat 8 OLI images (composed of >205 million 30 m pixels) were classified, and the results compared with the Fmask and a publicly available U-Net model (U-Net Wieland). The LANA had a 78% overall classification accuracy considering cloud, thin cloud, cloud shadow, and clear classes. 
As the LANA, Fmask, and U-Net Wieland algorithms have different class legends, their classification results were harmonized to the same three common classes: cloud, cloud shadow, and clear. Considering these three classes, the LANA had the highest (89%) overall accuracy, followed by Fmask (86%), and then U-Net Wieland (85%). The LANA had the highest F1-scores for cloud (0.92), cloud shadow (0.57), and clear (0.89), and the other two algorithms had lower F1-scores, particularly for cloud (Fmask 0.90, U-Net Wieland 0.88) and cloud shadow (Fmask 0.45, U-Net Wieland 0.52). In addition, a time-series evaluation was undertaken to examine the prevalence of undetected clouds and cloud shadows (i.e., omission errors). The band-specific temporal smoothness index (TSIλ) was applied to a year of Landsat 8 OLI surface reflectance observations after discarding pixel observations labelled as cloud or cloud shadow. This was undertaken independently at each gridded pixel location in four 5000 × 5000 30 m pixel Landsat analysis-ready data (ARD) tiles. The TSIλ results broadly reflected the classification accuracy results and indicated that the LANA had the smallest cloud and cloud shadow omission errors, whereas the Fmask had the greatest cloud omission error and the second greatest cloud shadow omission error. Detailed visual examination, true color image examples and classification results are included and confirm these findings. The TSIλ results also highlight the need for algorithm developers to undertake product quality assessment in addition to accuracy assessment. The LANA model, training and evaluation data, and application codes are publicly available for other researchers. Full article
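The band-specific temporal smoothness index (TSIλ) screening described in this abstract can be sketched per pixel as follows. This is a minimal illustration, not the authors' implementation: it assumes roughly equally spaced observations and defines smoothness as the mean absolute deviation of each retained observation from the midpoint of its two temporal neighbours; larger values suggest residual (undetected) cloud or shadow contamination.

```python
import numpy as np

def tsi(reflectance, clear_mask):
    """Sketch of a band-specific temporal smoothness index for one pixel.

    reflectance : 1-D array of surface reflectance values, time-ordered,
                  e.g. one year of observations for a single band.
    clear_mask  : boolean array, True where the observation was labelled
                  clear (cloud/shadow-labelled observations are discarded
                  before the index is computed, as in the study).

    Returns the mean absolute deviation of each interior clear observation
    from the midpoint of its temporal neighbours (NaN if too few remain).
    """
    rho = np.asarray(reflectance, dtype=float)[np.asarray(clear_mask, dtype=bool)]
    if rho.size < 3:
        return float("nan")  # too few clear observations to assess smoothness
    # midpoint of each observation's two temporal neighbours
    midpoint = 0.5 * (rho[:-2] + rho[2:])
    return float(np.mean(np.abs(rho[1:-1] - midpoint)))
```

An undetected cloud appears as a spike in the clear-labelled time series and inflates the index, which is why larger TSIλ values indicate greater cloud and cloud shadow omission error.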
(This article belongs to the Special Issue Deep Learning on the Landsat Archive)
Show Figures

Figure 1

25 pages, 4590 KiB  
Article
Intercomparison of Same-Day Remote Sensing Data for Measuring Winter Cover Crop Biophysical Traits
by Alison Thieme, Kusuma Prabhakara, Jyoti Jennewein, Brian T. Lamb, Greg W. McCarty and Wells Dean Hively
Sensors 2024, 24(7), 2339; https://doi.org/10.3390/s24072339 - 6 Apr 2024
Cited by 1 | Viewed by 1735
Abstract
Winter cover crops are planted during the fall to reduce nitrogen losses and soil erosion and improve soil health. Accurate estimations of winter cover crop performance and biophysical traits including biomass and fractional vegetative groundcover support accurate assessment of environmental benefits. We examined the comparability of measurements between ground-based and spaceborne sensors as well as between processing levels (e.g., surface vs. top-of-atmosphere reflectance) in estimating cover crop biophysical traits. This research examined the relationships between SPOT 5, Landsat 7, and WorldView-2 same-day paired satellite imagery and handheld multispectral proximal sensors on two days during the 2012–2013 winter cover crop season. We compared two processing levels from three satellites with spatially aggregated proximal data for red and green spectral bands as well as the normalized difference vegetation index (NDVI). We then compared NDVI-estimated fractional green cover to in-situ photographs, and we derived cover crop biomass estimates from NDVI using existing calibration equations. We used slope and intercept contrasts to test whether estimates of biomass and fractional green cover differed statistically between sensors and processing levels. Compared to top-of-atmosphere imagery, surface reflectance imagery was more closely correlated with proximal sensors, with intercepts closer to zero, regression slopes nearer to the 1:1 line, and less variance between measured values. Additionally, surface reflectance NDVI derived from satellites showed strong agreement with passive handheld multispectral proximal sensor-estimated fractional green cover and biomass (adj. R2 = 0.96 and 0.95; RMSE = 4.76% and 259 kg ha−1, respectively).
Although active handheld multispectral proximal sensor-derived fractional green cover and biomass estimates showed high accuracies (R2 = 0.96 and 0.96, respectively), they also demonstrated large intercept offsets (−25.5 and 4.51, respectively). Our results suggest that many passive multispectral remote sensing platforms may be used interchangeably to assess cover crop biophysical traits, whereas SPOT 5 required an adjustment in NDVI intercept. Active sensors may require separate calibrations or intercept correction prior to combination with passive sensor data. Although surface reflectance products were highly correlated with proximal sensors, the standardized cloud mask failed to completely capture cloud shadows in Landsat 7, which dampened the signal of NIR and red bands in shadowed pixels. Full article
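The estimation chain this abstract describes is a two-step computation: derive NDVI from red and NIR reflectance, then map NDVI to a biophysical trait through a linear calibration equation. The sketch below illustrates the chain only; the slope and intercept values are placeholders, not the study's fitted coefficients, which are sensor-specific.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

def trait_from_ndvi(ndvi_value, slope, intercept):
    """Linear calibration from NDVI to a biophysical trait such as
    fractional green cover (%) or biomass (kg/ha).

    slope and intercept must come from sensor-specific calibration
    equations like those used in the study; any values passed here
    are illustrative assumptions only.
    """
    return slope * np.asarray(ndvi_value, dtype=float) + intercept
```

The intercept offsets reported for the active sensors would be handled in this framing by refitting `intercept` for that sensor class before combining its estimates with passive sensor data.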
(This article belongs to the Section Environmental Sensing)
Show Figures

Figure 1

22 pages, 5870 KiB  
Article
Hierarchical Integration of UAS and Sentinel-2 Imagery for Spruce Bark Beetle Grey-Attack Detection by Vegetation Index Thresholding Approach
by Grigorijs Goldbergs and Emīls Mārtiņš Upenieks
Forests 2024, 15(4), 644; https://doi.org/10.3390/f15040644 - 2 Apr 2024
Viewed by 1860
Abstract
This study aimed to examine the efficiency of the vegetation index (VI) thresholding approach for mapping deadwood caused by spruce bark beetle outbreak. For this, the study used upscaling from individual dead spruce detection by unmanned aerial system (UAS) imagery as reference data for continuous spruce deadwood mapping at a stand/landscape level by VI thresholding binary masks calculated from satellite Sentinel-2 imagery. The study found that the Normalized Difference Vegetation Index (NDVI) was most effective for distinguishing dead spruce from healthy trees, with an accuracy of 97% using UAS imagery. The study results showed that the NDVI minimises the effects of cloud and dominant tree shadows and of illumination differences during UAS imagery acquisition, keeping the NDVI relatively stable across sunny and cloudy weather conditions. As in the UAS case, the NDVI calculated from Sentinel-2 (S2) imagery was the most reliable index for spruce deadwood cover mapping using a binary threshold mask at a landscape scale. Based on accuracy assessment, the summer leaf-on period (June–July) was found to be the most appropriate for spruce deadwood mapping by S2 imagery, with an accuracy of 85% and a deadwood detection rate of 83% in dense, close-canopy mixed conifer forests. The study found that spruce deadwood was successfully classified by S2 imagery when the isolated dead tree cluster covered at least 5–7 Sentinel-2 pixels. Full article
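The VI-thresholding binary mask described above can be sketched in two steps: threshold NDVI to flag candidate deadwood pixels, then discard connected components smaller than a minimum cluster size, mirroring the finding that isolated dead-tree clusters need roughly 5–7 Sentinel-2 pixels to be detected reliably. The NDVI cutoff below is an illustrative assumption, not the study's calibrated value, which is scene- and season-specific.

```python
import numpy as np

def deadwood_mask(nir, red, ndvi_threshold=0.4, min_pixels=5):
    """Binary deadwood mask from an NDVI threshold (sketch).

    nir, red        : 2-D reflectance arrays for one Sentinel-2 scene.
    ndvi_threshold  : pixels with NDVI below this are candidate deadwood
                      (illustrative value, not the study's threshold).
    min_pixels      : minimum 4-connected cluster size to retain.
    """
    ndvi = (np.asarray(nir, dtype=float) - np.asarray(red, dtype=float)) / \
           (np.asarray(nir, dtype=float) + np.asarray(red, dtype=float))
    mask = ndvi < ndvi_threshold

    # drop 4-connected components smaller than min_pixels (flood fill)
    keep = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    rows, cols = mask.shape
    for r0 in range(rows):
        for c0 in range(cols):
            if mask[r0, c0] and not seen[r0, c0]:
                stack, component = [(r0, c0)], []
                seen[r0, c0] = True
                while stack:
                    r, c = stack.pop()
                    component.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols \
                                and mask[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            stack.append((rr, cc))
                if len(component) >= min_pixels:
                    for r, c in component:
                        keep[r, c] = True
    return keep
```

Single low-NDVI pixels (e.g. residual shadow) are removed by the cluster-size filter, which is one way to express the reported detection limit as a post-processing rule.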
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
Show Figures

Figure 1

Back to TopTop