Search Results (461)

Search Parameters:
Keywords = CBAM

16 pages, 1007 KiB  
Article
Image Recognition and Classification of Farmland Pests Based on Improved Yolox-tiny Algorithm
by Yuxue Wang, Hao Dong, Songyu Bai, Yang Yu and Qingwei Duan
Appl. Sci. 2024, 14(13), 5568; https://doi.org/10.3390/app14135568 - 26 Jun 2024
Viewed by 65
Abstract
To rapidly detect pest types in farmland and mitigate their adverse effects on agricultural production, we proposed an improved Yolox-tiny-based target detection method for farmland pests. The method enhances detection accuracy by limiting downsampling and incorporating the Convolutional Block Attention Module (CBAM). In the experiments, images of seven common farmland pest species that are particularly harmful to crops were, after preprocessing and partial target expansion, used to train and test both the original and the improved Yolox-tiny models for comparison. The results indicate that the improved Yolox-tiny model increased the average precision by 7.18 percentage points, from 63.55% to 70.73%, demonstrating enhanced precision in detecting farmland pest targets compared to the original model. Full article
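Many of the results in this listing rely on the same attention recipe. For readers unfamiliar with CBAM, its two stages (channel attention from pooled descriptors through a shared MLP, then a spatial attention map) can be sketched in NumPy as follows. This is an illustrative simplification, not any paper's code: the weight shapes are assumptions, and the scalar-weighted spatial map stands in for the 7×7 convolution the published module learns.

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Squeeze x (C, H, W) to per-channel avg/max stats, pass both through a shared MLP, gate channels."""
    avg, mx = x.mean(axis=(1, 2)), x.max(axis=(1, 2))   # (C,) descriptors
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)         # shared two-layer MLP with ReLU
    gate = _sigmoid(mlp(avg) + mlp(mx))                  # (C,) values in (0, 1)
    return x * gate[:, None, None]

def spatial_attention(x, k_avg=0.5, k_max=0.5):
    """Collapse channels to avg/max maps and gate locations (scalar weights stand in
    for CBAM's learned 7x7 convolution over the two maps)."""
    gate = _sigmoid(k_avg * x.mean(axis=0) + k_max * x.max(axis=0))  # (H, W)
    return x * gate[None, :, :]

def cbam(x, w1, w2):
    """CBAM ordering: channel attention first, then spatial attention."""
    return spatial_attention(channel_attention(x, w1, w2))
```

Because both gates are sigmoids, the module only rescales features toward zero; it never amplifies them, which is why it can be dropped into an existing backbone with little risk.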
22 pages, 1891 KiB  
Article
Fire-RPG: An Urban Fire Detection Network Providing Warnings in Advance
by Xiangsheng Li and Yongquan Liang
Fire 2024, 7(7), 214; https://doi.org/10.3390/fire7070214 - 26 Jun 2024
Viewed by 117
Abstract
Urban fires are characterized by concealed ignition points and rapid escalation, making the traditional methods of detecting early stage fire accidents inefficient. Thus, we focused on the features of early stage fire accidents, such as faint flames and thin smoke, and established a dataset. We found that these features are mostly medium-sized and small-sized objects. We proposed a model based on YOLOv8s, Fire-RPG. Firstly, we introduced an extra very small object detection layer to enhance the detection performance for early fire features. Next, we optimized the model structure with the bottleneck in GhostV2Net, which reduced the computational time and the parameters. The Wise-IoUv3 loss function was utilized to decrease the harmful effects of low-quality data in the dataset. Finally, we integrated the low-cost yet high-performance RepVGG block and the CBAM attention mechanism to enhance learning capabilities. The RepVGG block enhances the extraction ability of the backbone and neck structures, while CBAM focuses the attention of the model on specific size objects. Our experiments showed that Fire-RPG achieved an mAP of 81.3%, an improvement of 2.2%. In addition, Fire-RPG maintained high detection performance across various fire scenarios. Therefore, our model can provide timely warnings and accurate detection services. Full article
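The Wise-IoUv3 loss mentioned above is a reweighted variant of the standard bounding-box IoU. The underlying overlap measure it builds on can be sketched as follows (plain IoU only; the dynamic focusing terms that distinguish WIoU are omitted):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2): intersection area over union area."""
    iw = min(a[2], b[2]) - max(a[0], b[0])   # intersection width (may be negative)
    ih = min(a[3], b[3]) - max(a[1], b[1])   # intersection height (may be negative)
    inter = max(0.0, iw) * max(0.0, ih)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union
```

IoU-family losses (1 − IoU plus various penalty terms) are what the detectors throughout this listing minimize for box regression.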
18 pages, 1130 KiB  
Article
Intelligent Identification of Liquid Aluminum Leakage in Deep Well Casting Production Based on Image Segmentation
by Junwei Yan, Xin Li and Xuan Zhou
Appl. Sci. 2024, 14(13), 5470; https://doi.org/10.3390/app14135470 - 24 Jun 2024
Viewed by 226
Abstract
This study proposes a method based on image segmentation for accurately identifying liquid aluminum leakage during deep well casting, which is crucial for providing early warnings and preventing potential explosions in aluminum processing. Traditional DeepLabV3+ models in this domain encounter challenges such as prolonged training duration, the requirement for abundant data, and insufficient understanding of the liquid surface characteristics of casting molds. This work presents an enhanced DeepLabV3+ method to address the restrictions and increase the accuracy of calculating liquid surface areas for casting molds. This algorithm substitutes the initial feature extraction network with ResNet-50 and integrates the CBAM attention mechanism and transfer learning techniques. The results of ablation experiments and comparative trials demonstrate that the proposed algorithm can achieve favorable segmentation performance, delivering an MIoU of 91.88%, an MPA of 96.53%, and an inference speed of 55.05 FPS. Furthermore, this study presents a technique utilizing OpenCV to accurately measure variations in the surface areas of casting molds when there are leakages of liquid aluminum. In addition, this work introduces a measurement to quantify these alterations and establish an abnormal threshold by utilizing the Interquartile Range (IQR) method. Empirical tests confirm that the threshold established in this study can accurately detect instances of liquid aluminum leakage. Full article
(This article belongs to the Section Applied Industrial Technologies)
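The IQR-based abnormal threshold described in this abstract follows the standard Tukey-fence construction; a minimal sketch (the k = 1.5 multiplier and the quantile convention are assumptions, since the abstract does not specify them):

```python
import numpy as np

def iqr_upper_threshold(samples, k=1.5):
    """Tukey upper fence Q3 + k * (Q3 - Q1); values above it are flagged as abnormal."""
    q1, q3 = np.percentile(samples, [25, 75])
    return q3 + k * (q3 - q1)
```

Applied to a history of measured surface areas, any new area change exceeding the fence would be reported as a possible leak.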
15 pages, 7934 KiB  
Article
SIGAN: A Multi-Scale Generative Adversarial Network for Underwater Sonar Image Super-Resolution
by Chengyang Peng, Shaohua Jin, Gang Bian and Yang Cui
J. Mar. Sci. Eng. 2024, 12(7), 1057; https://doi.org/10.3390/jmse12071057 - 24 Jun 2024
Viewed by 218
Abstract
Super-resolution (SR) is a technique that restores image details based on existing information, enhancing the resolution of images to prevent quality degradation. Despite significant achievements in deep-learning-based SR models, their application in underwater sonar scenarios is limited due to the lack of underwater sonar datasets and the difficulty in recovering texture details. To address these challenges, we propose a multi-scale generative adversarial network (SIGAN) for super-resolution reconstruction of underwater sonar images. The generator is built on a residual dense network (RDN), which extracts rich local features through densely connected convolutional layers. Additionally, a Convolutional Block Attention Module (CBAM) is incorporated to capture detailed texture information by focusing on different scales and channels. The discriminator employs a multi-scale discriminative structure, enhancing the detail perception of both generated and high-resolution (HR) images. Considering the increased noise in super-resolved sonar images, our loss function emphasizes the PSNR metric and incorporates the L2 loss function to improve the quality of the output images. Meanwhile, we constructed a dataset for side-scan sonar experiments (DNASI-I). We compared our method with the current state-of-the-art super-resolution image reconstruction methods on the public dataset KLSG-II and our self-built dataset DNASI-I. The experimental results show that at a scale factor of 4, the average PSNR value of our method was 3.5 dB higher than that of other methods, and the accuracy of target detection using the super-resolution reconstructed images can be improved to 91.4%. Through subjective qualitative comparison and objective quantitative analysis, we demonstrated the effectiveness and superiority of the proposed SIGAN in the super-resolution reconstruction of side-scan sonar images. Full article
(This article belongs to the Section Ocean Engineering)
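For reference, the PSNR metric emphasized in the loss above is the usual log-scaled ratio of peak signal power to mean squared error; a minimal sketch:

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means the test image is closer to ref."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)
```

A "3.5 dB higher" PSNR thus corresponds to roughly a 2.2× reduction in mean squared reconstruction error.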
15 pages, 3358 KiB  
Article
Deep Learning Evaluation of Glaucoma Detection Using Fundus Photographs in Highly Myopic Populations
by Yen-Ying Chiang, Ching-Long Chen and Yi-Hao Chen
Biomedicines 2024, 12(7), 1394; https://doi.org/10.3390/biomedicines12071394 - 23 Jun 2024
Viewed by 275
Abstract
Objectives: This study aimed to use deep learning to identify glaucoma and normal eyes in groups with high myopia using fundus photographs. Methods: Patients who visited Tri-Services General Hospital from 1 November 2018 to 31 October 2022 were retrospectively reviewed. Patients with high myopia (spherical equivalent refraction of ≤−6.0 D) were included in the current analysis. Meanwhile, patients with pathological myopia were excluded. The participants were then divided into the high myopia group and high myopia glaucoma group. We used two classification models with the convolutional block attention module (CBAM), an attention mechanism module that enhances the performance of convolutional neural networks (CNNs), to investigate glaucoma cases. The learning data of this experiment were evaluated through fivefold cross-validation. The images were categorized into training, validation, and test sets in a ratio of 6:2:2. Grad-CAM visualization improved the interpretability of the CNN results. The performance indicators for evaluating the model included the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. Results: A total of 3088 fundus photographs were used for the deep-learning model, including 1540 and 1548 fundus photographs for the high myopia glaucoma and high myopia groups, respectively. The average refractive powers of the high myopia glaucoma group and the high myopia group were −8.83 ± 2.9 D and −8.73 ± 2.6 D, respectively (p = 0.30). Based on a fivefold cross-validation assessment, the ConvNeXt_Base+CBAM architecture had the best performance, with an AUC of 0.894, accuracy of 82.16%, sensitivity of 81.04%, specificity of 83.27%, and F1 score of 81.92%. Conclusions: Glaucoma in individuals with high myopia was identified from their fundus photographs. Full article
(This article belongs to the Special Issue Glaucoma: New Diagnostic and Therapeutic Approaches)
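The 6:2:2 partition described above can be sketched as a simple shuffled index split. The seeding and rounding behavior here are illustrative assumptions, and the study's fivefold cross-validation layer on top of this is not shown:

```python
import random

def split_indices(n, ratios=(0.6, 0.2, 0.2), seed=0):
    """Shuffle indices 0..n-1 and cut them into train/val/test lists by the given ratios."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)            # deterministic shuffle for reproducibility
    n_train, n_val = int(n * ratios[0]), int(n * ratios[1])
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```

With the study's 3088 photographs this yields 1852 training, 617 validation, and 619 test images (the remainder lands in the test split).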
18 pages, 6237 KiB  
Article
Crop Land Change Detection with MC&N-PSPNet
by Yuxin Chen, Yulin Duan, Wen Zhang, Chang Wang, Qiangyi Yu and Xu Wang
Appl. Sci. 2024, 14(13), 5429; https://doi.org/10.3390/app14135429 - 22 Jun 2024
Viewed by 260
Abstract
To enhance the accuracy of agricultural area classification and enable remote sensing monitoring of agricultural regions, this paper investigates classification models and their application in change detection within rural areas, proposing the MC&N-PSPNet (CBAM into MobileNetV2 and NAM into PSPNet) network model. Initially, the HRSCD (High Resolution Semantic Change Detection) dataset labels undergo binary redrawing. Subsequently, to efficiently extract image features, the original PSPNet (Pyramid Scene Parsing Network) backbone network, ResNet50 (Residual Network-50), is substituted with the MobileNetV2 (Inverted Residuals and Linear Bottlenecks) model. Furthermore, to enhance the model’s training efficiency and classification accuracy, the NAM (Normalization-Based Attention Module) attention mechanism is introduced into the improved PSPNet model to obtain the categories of land cover changes in remote sensing images before and after the designated periods. Finally, the change detection results are obtained by performing a difference operation on the classification results for the two periods. Through experimental analysis, this paper demonstrates the proposed method’s superior capability in segmenting agricultural areas, which is crucial for effective agricultural area change detection. The model achieves commendable performance metrics, including overall accuracy, Kappa value, MIoU, and MPA values of 95.03%, 88.15%, 93.55%, and 88.90%, respectively, surpassing other models. Moreover, the model exhibits robust performance in final change detection, achieving an overall accuracy and Kappa value of 93.24% and 92.29%, respectively. The results of this study show that the MC&N-PSPNet model has significant advantages in the detection of changes in agricultural zones, which provides a scientific basis and technical support for agricultural resource management and policy formulation. Full article
(This article belongs to the Section Agricultural Science and Technology)
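The final step described above, differencing per-period classification maps and scoring agreement with a Kappa value, can be sketched as follows for the binary case (an illustrative simplification, not the authors' pipeline):

```python
import numpy as np

def change_map(before, after):
    """Mark pixels whose class label differs between the two dates as changed (1)."""
    return (np.asarray(before) != np.asarray(after)).astype(np.uint8)

def cohens_kappa(pred, truth):
    """Agreement between two binary maps, corrected for chance agreement."""
    pred, truth = np.ravel(pred).astype(float), np.ravel(truth).astype(float)
    po = np.mean(pred == truth)                      # observed agreement
    pe = (pred.mean() * truth.mean()                 # chance agreement on 1s
          + (1 - pred).mean() * (1 - truth).mean())  # chance agreement on 0s
    return (po - pe) / (1.0 - pe)
```

Kappa is preferred over raw accuracy here because changed pixels are rare, so a classifier that predicts "no change" everywhere already scores high accuracy but near-zero Kappa.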
16 pages, 7590 KiB  
Article
Cable Conduit Defect Recognition Algorithm Based on Improved YOLOv8
by Fanfang Kong, Yi Zhang, Lulin Zhan, Yuling He, Hai Zheng and Derui Dai
Electronics 2024, 13(13), 2427; https://doi.org/10.3390/electronics13132427 - 21 Jun 2024
Viewed by 225
Abstract
The underground cable conduit system, a vital component of urban power transmission and distribution infrastructure, faces challenges in maintenance and residue detection. Traditional detection methods, such as Closed-Circuit Television (CCTV), rely heavily on the expertise and prior experience of professional inspectors, leading to time-consuming and subjective results acquisition. To address these issues and automate defect detection in underground cable conduits, this paper proposes a defect recognition algorithm based on an enhanced YOLOv8 model. Firstly, we replace the Spatial Pyramid Pooling (SPPF) module in the original model with the Atrous Spatial Pyramid Pooling (ASPP) module to capture multi-scale defect features effectively. Secondly, to enhance feature representation and reduce noise interference, we integrate the Convolutional Block Attention Module (CBAM) into the detection head. Finally, we enhance the YOLOv8 backbone network by replacing the C2f module with the base module of ShuffleNet V2, reducing the number of model parameters and optimizing the model efficiency. Experimental results demonstrate the efficacy of the proposed algorithm in recognizing pipe misalignment and residual foreign objects. The precision and mean average precision (mAP) reach 96.2% and 97.6%, respectively, representing improvements over the original YOLOv8 model. This study significantly improves the capability of capturing and characterizing defect characteristics, thereby enhancing the maintenance efficiency and accuracy of underground cable conduit systems. Full article
(This article belongs to the Special Issue Image and Video Processing Based on Deep Learning)
21 pages, 5940 KiB  
Article
Improved YOLOv5 Network for Aviation Plug Defect Detection
by Li Ji and Chaohang Huang
Aerospace 2024, 11(6), 488; https://doi.org/10.3390/aerospace11060488 - 19 Jun 2024
Viewed by 257
Abstract
Ensuring the integrity of aviation plug components is crucial for maintaining the safety and functionality of the aerospace industry. Traditional methods for detecting surface defects often show low detection probabilities, highlighting the need for more advanced automated detection systems. This paper enhances the YOLOv5 model by integrating the Generalized Efficient Layer Aggregation Network (GELAN), which optimizes feature aggregation and boosts model robustness, replacing the conventional Convolutional Block Attention Module (CBAM). The upgraded YOLOv5 architecture, incorporating GELAN, effectively aggregates multi-scale and multi-layer features, thus preserving essential information across the network’s depth. This capability is vital for maintaining high-fidelity feature representations, critical for detecting minute and complex defects. Additionally, the Focal EIOU loss function effectively tackles class imbalance and concentrates the model’s attention on difficult detection areas, thus significantly improving its sensitivity and overall accuracy in identifying defects. Replacing the traditional coupled head with a lightweight decoupled head improves the separation of localization and classification tasks, enhancing both accuracy and convergence speed. The lightweight decoupled head also reduces computational load without compromising detection efficiency. Experimental results demonstrate that the enhanced YOLOv5 architecture significantly improves detection probability, achieving a detection rate of 78.5%. This improvement occurs with only a minor increase in inference time per image, underscoring the efficiency of the proposed model. The optimized YOLOv5 model with GELAN proves highly effective, offering significant benefits for the precision and reliability required in aviation component inspections. Full article
15 pages, 41936 KiB  
Article
An Improved YOLOv8 Model for Lotus Seedpod Instance Segmentation in the Lotus Pond Environment
by Jie Ma, Yanke Zhao, Wanpeng Fan and Jizhan Liu
Agronomy 2024, 14(6), 1325; https://doi.org/10.3390/agronomy14061325 - 19 Jun 2024
Viewed by 239
Abstract
Lotus seedpod maturity detection and segmentation in pond environments play a significant role in yield prediction and picking pose estimation for lotus seedpods. However, it is a great challenge to accurately detect and segment lotus seedpods due to insignificant phenotypic differences between adjacent maturity stages, changing illumination, overlap, and occlusion of lotus seedpods. The existing research pays attention to lotus seedpod detection while ignoring maturity detection and segmentation problems. Therefore, a semantic segmentation dataset of lotus seedpods was created, where a copy-and-paste data augmentation tool was employed to eliminate the class-imbalanced problem and improve model generalization ability. Afterwards, an improved YOLOv8-seg model was proposed to detect and segment the maturity of lotus seedpods. In the model, the convolutional block attention module (CBAM) was embedded in the neck network to extract distinguished features of different maturity stages with negligible computation cost. The Wise-Intersection over Union (WIoU) regression loss function was adopted to refine the regression inference bias and improve the bounding box prediction accuracy. The experimental results showed that the proposed YOLOv8-seg model provides an effective method for “ripe” and “overripe” lotus seedpod detection and instance segmentation, where the mean average precision of segmentation mask (mAPmask) reaches 97.4% and 98.6%, respectively. In addition, the improved YOLOv8-seg exhibits high robustness and adaptability to complex illumination in a challenging environment. Comparative experiments were conducted using the proposed YOLOv8-seg and other state-of-the-art instance segmentation methods. The results showed that the improved model is superior to the Mask R-CNN and YOLACT models, with recall, precision, mAPbox, and mAPmask being 96.5%, 94.3%, 97.8%, and 98%, respectively.
The average running time and weight size of the proposed model are 25.9 ms and 7.4 M, respectively. The proposed model obtained the highest mAP for lotus seedpod maturity detection and segmentation while maintaining an appropriate model size and speed. Furthermore, based on the obtained segmentation model, 3D visualization of the lotus pond scene is performed, and cloud point of lotus seedpods is generated, which provides a theoretical foundation for robot harvesting in the lotus pond. Full article
(This article belongs to the Section Precision and Digital Agriculture)
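The copy-and-paste augmentation used above to rebalance rare classes can be sketched in NumPy for a single-channel image and its instance mask. This is an illustrative simplification: the actual tool, blending, and placement policy are not specified in the abstract, and the placement here assumes the patch fits inside the destination image.

```python
import numpy as np

def copy_paste(dst_img, dst_mask, src_img, src_mask, top, left):
    """Paste the masked object pixels of src into dst at (top, left), updating image and mask."""
    h, w = src_mask.shape
    out_img, out_mask = dst_img.copy(), dst_mask.copy()
    region = (slice(top, top + h), slice(left, left + w))
    m = src_mask.astype(bool)
    out_img[region][m] = src_img[m]    # region is a basic-slice view, so this writes through
    out_mask[region][m] = src_mask[m]
    return out_img, out_mask
```

Pasting crops of an under-represented maturity class into new backgrounds synthesizes extra labeled instances without new annotation effort.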
15 pages, 2560 KiB  
Article
Study on the Detection Mechanism of Multi-Class Foreign Fiber under Semi-Supervised Learning
by Xue Zhou, Wei Wei, Zhen Huang and Zhiwei Su
Appl. Sci. 2024, 14(12), 5246; https://doi.org/10.3390/app14125246 - 17 Jun 2024
Viewed by 250
Abstract
Foreign fibers directly impact the quality of raw cotton, affecting the prices of textile products and the economic efficiency of cotton textile enterprises. The accurate differentiation and labeling of foreign fibers require domain-specific knowledge, and labeling scattered cotton foreign fibers in images consumes substantial time and labor costs. In this study, we propose a semi-supervised foreign fiber detection approach that uses unlabeled image information and a small amount of labeled data for model training. Our proposed method, Efficient YOLOv5-cotton, first introduces CBAM to address the issue of missed and false detections of small-sized cotton foreign fibers against complex backgrounds. Second, it adds a multiscale feature information extraction network, SPPFCSPC, which improves its ability to generalize to fibers of different shapes. Lastly, to reduce the increased network parameters and computational complexity introduced by the SPPFCSPC module, we replace the C3 layer with the C3Ghost module. We evaluate Efficient YOLOv5 for detecting various types of foreign fibers. The results demonstrate that the improved Efficient YOLOv5-cotton achieves a 1.6% increase in mAP@0.5 (mean average precision) compared with the original Efficient YOLOv5 and reduces model parameters by 10% compared to the original Efficient YOLOv5 with SPPFCSPC. Our experiments show that our proposed method enhances the accuracy of foreign fiber detection using Efficient YOLOv5-cotton and considers the trade-off between the model size and computational cost. Full article
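Semi-supervised detection pipelines of this kind commonly rely on pseudo-labeling: predictions on unlabeled images are kept only above a confidence threshold and folded back into the training set. A minimal sketch of that selection step (the threshold, data layout, and helper names are assumptions for illustration; the abstract does not detail the authors' exact scheme):

```python
def select_pseudo_labels(predictions, conf_thresh=0.9):
    """Keep only (image_id, label) pairs whose model confidence clears the threshold."""
    return [(img, lab) for img, lab, conf in predictions if conf >= conf_thresh]

def build_training_set(labeled, unlabeled_preds, conf_thresh=0.9):
    """Next-round training set: ground-truth data plus high-confidence pseudo-labels."""
    return list(labeled) + select_pseudo_labels(unlabeled_preds, conf_thresh)
```

A high threshold trades coverage of the unlabeled pool for label quality, which matters when false pseudo-labels would reinforce the detector's existing mistakes.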
18 pages, 5611 KiB  
Article
A Visible and Synthetic Aperture Radar Image Fusion Algorithm Based on a Transformer and a Convolutional Neural Network
by Liushun Hu, Shaojing Su, Zhen Zuo, Junyu Wei, Siyang Huang, Zongqing Zhao, Xiaozhong Tong and Shudong Yuan
Electronics 2024, 13(12), 2365; https://doi.org/10.3390/electronics13122365 - 17 Jun 2024
Viewed by 322
Abstract
For visible and Synthetic Aperture Radar (SAR) image fusion, this paper proposes a visible and SAR image fusion algorithm based on a Transformer and a Convolutional Neural Network (CNN). Firstly, in this paper, the Restormer Block is used to extract cross-modal shallow features. Then, we introduce an improved Transformer–CNN Feature Extractor (TCFE) with a two-branch residual structure. This includes a Transformer branch that introduces the Lite Transformer (LT) and DropKey for extracting global features and a CNN branch that introduces the Convolutional Block Attention Module (CBAM) for extracting local features. Finally, the fused image is output based on global features extracted by the Transformer branch and local features extracted by the CNN branch. The experiments show that the algorithm proposed in this paper can effectively achieve the extraction and fusion of global and local features of visible and SAR images, so that high-quality visible and SAR fusion images can be obtained. Full article
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)
20 pages, 8244 KiB  
Article
Diagnosis of Citrus Greening Using Artificial Intelligence: A Faster Region-Based Convolutional Neural Network Approach with Convolution Block Attention Module-Integrated VGGNet and ResNet Models
by Ruihao Dong, Aya Shiraiwa, Achara Pawasut, Kesaraporn Sreechun and Takefumi Hayashi
Plants 2024, 13(12), 1631; https://doi.org/10.3390/plants13121631 - 13 Jun 2024
Viewed by 307
Abstract
The vector-transmitted Citrus Greening (CG) disease, also called Huanglongbing, is one of the most destructive diseases of citrus. Since no measures for directly controlling this disease are available at present, current disease management integrates several measures, such as vector control, the use of disease-free trees, the removal of diseased trees, etc. The most essential issue in integrated management is how CG-infected trees can be detected efficiently. For CG detection, digital image analyses using deep learning algorithms have attracted much interest from both researchers and growers. Models using transfer learning with the Faster R-CNN architecture were constructed and compared with two pre-trained Convolutional Neural Network (CNN) models, VGGNet and ResNet. Their efficiency was examined by integrating the Convolution Block Attention Module (CBAM) into their feature extraction stages to create VGGNet+CBAM and ResNet+CBAM variants. ResNet models performed best. Moreover, the integration of CBAM notably improved CG disease detection precision and the overall performance of the models. Efficient models with transfer learning using Faster R-CNN were loaded on web applications to facilitate access for real-time diagnosis by farmers via the deployment of in-field images. The practical ability of the applications to detect CG disease is discussed. Full article
17 pages, 7019 KiB  
Article
Colorectal Polyp Detection Model by Using Super-Resolution Reconstruction and YOLO
by Shaofang Wang, Jun Xie, Yanrong Cui and Zhongju Chen
Electronics 2024, 13(12), 2298; https://doi.org/10.3390/electronics13122298 - 12 Jun 2024
Viewed by 338
Abstract
Colorectal cancer (CRC) is the second leading cause of cancer-related deaths worldwide. Colonoscopy is the primary method to prevent CRC. However, traditional polyp detection methods face problems such as low image resolution and the possibility of missing polyps. In recent years, deep learning techniques have been extensively employed in the detection of colorectal polyps. However, these algorithms have not yet addressed the issue of detection in low-resolution images. In this study, we propose a novel YOLO-SRPD model by integrating SRGAN and YOLO to address the issue of low-resolution colonoscopy images. Firstly, the SRGAN with integrated ACmix is used to convert low-resolution images to high-resolution images. The generated high-resolution images are then used as the training set for polyp detection. Then, the C3_Res2Net is integrated into the YOLOv5 backbone to enhance multiscale feature extraction. Finally, CBAM modules are added before the prediction head to enhance attention to polyp information. The experimental results indicate that YOLO-SRPD achieves a mean average precision (mAP) of 94.2% and a precision of 95.2%. Compared to the original model (YOLOv5), the average accuracy increased by 1.8% and the recall rate increased by 5.6%. These experimental results confirm that YOLO-SRPD can address the low-resolution problem during colorectal polyp detection and exhibit exceptional robustness. Full article
19 pages, 1217 KiB  
Article
Measuring the Cost of the European Union’s Carbon Border Adjustment Mechanism on Moroccan Exports
by Wissal Morchid, Eduardo A. Haddad and Luc Savard
Sustainability 2024, 16(12), 4967; https://doi.org/10.3390/su16124967 - 11 Jun 2024
Viewed by 454
Abstract
The ‘Fit for 55’ policy package was presented in the European Commission’s Green Deal framework, comprising a set of proposals to improve existing energy and climate legislation. Among its main proposals was a revision of the European Union’s Emission Trading System to expand its sectoral coverage. Anticipating the possible loss of competitiveness with carbon pricing within the EU—which may lead to ‘carbon leakage’—a carbon border adjustment mechanism (CBAM) was included in the package. This scheme takes the form of an export tax levied by the European Union on some goods manufactured in non-carbon-taxing countries. In this paper, we provide a first-order estimate of the potential impact of CBAM on Morocco’s exports using an input–output approach. Our main findings suggest that the scheme would yield a carbon bill ranging from USD 20 to 34 million annually to Moroccan exporters in its initial phase. Morocco can mitigate such economic losses by instituting a national Emission Trading System, a tax reform, or speeding up the decarbonization of its economy. Full article
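Note that in this last result CBAM is the EU's Carbon Border Adjustment Mechanism, not the attention module appearing elsewhere in this listing. First-order input–output estimates of this kind are conventionally computed by pricing the Leontief-embodied emissions of covered exports; a minimal sketch with toy numbers (the matrices, sector coverage, and carbon price below are illustrative assumptions, not the paper's data):

```python
import numpy as np

def cbam_export_bill(A, direct_intensity, exports_eu, carbon_price):
    """First-order CBAM cost: CO2 embodied in covered exports times the carbon price.

    A: (n, n) technical-coefficients matrix of the input-output model;
    direct_intensity: direct tCO2 per unit of output, by sector;
    exports_eu: CBAM-covered exports to the EU, by sector;
    carbon_price: price per tCO2.
    """
    n = A.shape[0]
    # Leontief inverse turns direct intensities into total (direct + indirect) intensities.
    embodied = direct_intensity @ np.linalg.inv(np.eye(n) - A)
    return carbon_price * float(embodied @ exports_eu)
```

The Leontief inverse is what makes the estimate "first-order economy-wide": it charges exporters for emissions in their upstream supply chains, not just at their own plants.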
24 pages, 18033 KiB  
Article
Full-Scale Aggregated MobileUNet: An Improved U-Net Architecture for SAR Oil Spill Detection
by Yi-Ting Chen, Lena Chang and Jung-Hua Wang
Sensors 2024, 24(12), 3724; https://doi.org/10.3390/s24123724 - 7 Jun 2024
Viewed by 333
Abstract
Oil spills are a major threat to marine and coastal environments. Their unique radar backscatter intensity can be captured by synthetic aperture radar (SAR), resulting in dark regions in the images. However, many marine phenomena can lead to erroneous detections of oil spills. In addition, SAR images of the ocean include multiple targets, such as sea surface, land, ships, and oil spills and their look-alikes. The training of a multi-category classifier will encounter significant challenges due to the inherent class imbalance. Addressing this issue requires extracting target features more effectively. In this study, a lightweight U-Net-based model, Full-Scale Aggregated MobileUNet (FA-MobileUNet), was proposed to improve the detection performance for oil spills using SAR images. First, a lightweight MobileNetv3 model was used as the backbone of the U-Net encoder for feature extraction. Next, atrous spatial pyramid pooling (ASPP) and a convolutional block attention module (CBAM) were used to improve the capacity of the network to extract multi-scale features and to increase the speed of module calculation. Finally, full-scale features from the encoder were aggregated to enhance the network’s competence in extracting features. The proposed modified network enhanced the extraction and integration of features at different scales to improve the accuracy of detecting diverse marine targets. The experimental results showed that the mean intersection over union (mIoU) of the proposed model reached more than 80% for the detection of five types of marine targets, namely sea surface, land, ships, and oil spills and their look-alikes. In addition, the IoU of the proposed model reached 75.85% and 72.67% for oil spill and look-alike detection, 18.94 and 25.55 percentage points higher than that of the original U-Net model, respectively.
Compared with other segmentation models, the proposed network can more accurately classify the black regions in SAR images into oil spills and their look-alikes. Furthermore, the detection performance and computational efficiency of the proposed model were also validated against other semantic segmentation models. Full article
(This article belongs to the Special Issue Intelligent SAR Target Detection and Recognition)
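The per-class IoU and mIoU figures reported above are conventionally derived from a confusion matrix over all pixels; a minimal sketch:

```python
import numpy as np

def mean_iou(conf):
    """Mean IoU from a confusion matrix (rows: ground truth, cols: prediction)."""
    tp = np.diag(conf).astype(float)                   # true positives per class
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp   # TP + FP + FN per class
    return float(np.mean(tp / union))
```

Because each class contributes equally to the mean, mIoU rewards good performance on rare classes like oil spills, unlike pixel accuracy, which the dominant sea-surface class would swamp.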