Article

YOLOv7t-CEBC Network for Underwater Litter Detection

1 Research Institute Laboratory of Underwater Vehicles and Intelligent Systems, University of Shanghai for Science and Technology, Jungong Road 516, Shanghai 200093, China
2 Shanghai Engineering Research Center of Intelligent Maritime Search & Rescue and Underwater Vehicles, Shanghai Maritime University, Haigang Avenue 1550, Shanghai 201306, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2024, 12(4), 524; https://doi.org/10.3390/jmse12040524
Submission received: 22 February 2024 / Revised: 17 March 2024 / Accepted: 19 March 2024 / Published: 22 March 2024
(This article belongs to the Special Issue Navigation and Detection Fusion for Autonomous Underwater Vehicles)

Abstract

The issue of marine litter has been an important concern for marine environmental protection for a long time, especially underwater litter. It is not only challenging to clean up, but its prolonged presence underwater can cause damage to marine ecosystems and biodiversity. This has led to underwater robots equipped with powerful visual detection algorithms becoming the mainstream alternative to human labor for cleaning up underwater litter. This study proposes an enhanced underwater litter detection algorithm, YOLOv7t-CEBC, based on YOLOv7-tiny, to assist underwater robots in target identification. The research introduces some modules tailored for marine litter detection within the model framework, addressing inter-class similarity and intra-class variability inherent in underwater waste while balancing detection precision and speed. Experimental results demonstrate that, on the Deep Plastic public dataset, YOLOv7t-CEBC achieves a detection accuracy (mAP) of 81.8%, markedly surpassing common object detection algorithms. Moreover, the detection frame rate reaches 118 FPS, meeting the operational requirements of underwater robots. The findings affirm that the enhanced YOLOv7t-CEBC network serves as a reliable tool for underwater debris detection, contributing to the maintenance of marine health.

1. Introduction

In the contemporary marine environment, a substantial amount of pollution is attributed to waste, with the predominant contributor being plastic debris. According to statistics, approximately 15 million tons of plastic are annually discharged into the oceans, and this figure exhibits exponential growth on an annual basis [1]. The prolonged deposition of plastic waste in water leads to its decomposition into minuscule particles that are imperceptible to the naked eye or entanglement with underwater organisms, causing detrimental impacts on ecological environments and species diversity. This critical issue compels us to intensify efforts in cleaning marine plastic litter [2]. Currently, two primary methods are employed for marine litter removal: manual cleaning, which suffers from low efficiency, high costs, and potential safety hazards for workers, primarily targeting surface litter; and the utilization of intelligent devices, such as underwater robots, for cleaning. The latter approach offers advantages of high efficiency, reliability, and safety, capable of addressing both surface and underwater litter, thereby gradually emerging as the mainstream option in litter cleaning operations [3]. An exemplary object detection algorithm could be considered as the eyes of underwater robots, furnishing them with real-time and precise object information to guide the completion of litter collection tasks [4]. Due to the heightened complexity of underwater environments in practical applications, the difficulty of object detection in marine settings increases. Therefore, research specifically addressing underwater object detection methods for marine litter becomes particularly crucial [5].
Object detection is a computer vision task that aims to identify and locate specific items within images or video footage: it recognizes the various objects in a picture and localizes each one, typically with bounding boxes. The progression of object detection can be divided into two phases: the era of conventional object detection algorithms (1998–2012) and the era of deep-learning-based object detection algorithms (2012–present). Traditional object detection relied on manually designed feature extraction. By contrast, deep-learning-based object detection systems employ convolutional neural networks (CNNs) to automatically extract target characteristics, learning these features from the training data. This approach replaced the manual design of filters, offering improved generalization ability and robustness, and traditional object detection algorithms have consequently fallen out of use [6]. Currently, deep-learning-based object detection algorithms can be categorized into two main types: region-proposal-based algorithms (two-stage detectors) and regression-based algorithms (one-stage detectors). The former, represented by R-FCN [7] and the R-CNN [8] series of algorithms, first identifies potential target regions and then performs classification. While these algorithms exhibit high precision, their detection speed is relatively slow, making them less suitable for practical production applications. The latter, exemplified by SSD [9] (single-shot multi-box detector) and the YOLO [10] series (You Only Look Once: a widely used family of models known for speed and precision, first introduced by Joseph Redmon et al. in 2016 and since refined through several iterations), directly extracts features and performs object classification and localization with CNNs. This type of algorithm has been widely embraced in object detection due to its faster detection speed [11].
Underwater robots rely on underwater video footage or images for underwater litter detection [12]. Common methods for video and image acquisition include satellite remote sensing, sonar, and optical cameras. However, satellite remote sensing is primarily used for detecting large debris targets on the water surface, is ineffective for underwater debris detection, and is difficult to deploy on mobile platforms such as underwater robots. While sonar can detect underwater debris and is suitable for deployment on mobile devices, its high cost and susceptibility to noise interference limit its generalizability. Optical cameras, being economical, stable, and flexible, are the preferred choice for video and image capture [13]. Nevertheless, the intricate underwater space and lighting conditions often lead to indistinct optical images, posing challenges to the localization and identification of underwater litter [14]. Chen et al. [15] proposed a small target detection network, SWIPENet, which incorporates the sample-reweighting algorithm IMA; this approach aims to mitigate the impact of underwater environmental noise on detection results. Lin et al. [16] introduced an underwater image-enhancement method, RoIMix, which synthesizes enhanced samples from multiple images as training data, with an evident enhancement effect. These image-enhancement methods effectively address challenges in complex underwater images, such as uneven lighting, color distortion, and noise, thereby providing significant assistance in underwater litter detection. Currently, numerous deep-learning-based object detection methods are employed for underwater litter detection. Tian et al. [4] proposed an underwater litter detection method for underwater robots based on an improved YOLOv4, attaining rapid and accurate object detection. To address the limited storage space and computational capability of underwater mobile devices, Wu et al. [17] proposed an underwater litter detection algorithm based on an improved YOLOv5s, ensuring high detection precision while reducing the model size. Serious inter-class similarity (deformation) and intra-class variability (discoloration) exist in plastic litter deposited in the ocean [18]. This means that the attributes of marine litter of the same category are no longer uniform, while different categories of marine litter may display resemblances in the photographs taken of them. To deal with this problem, Ma et al. [19] presented a highly effective and accurate deep learning technique, MLDet, for marine litter detection; this method does not rely on general detection frameworks and has demonstrated favorable results.
In order to balance the precision and real-time detection performance of the object detection algorithm and effectively address the inter-class similarity and intra-class variability of underwater litter, we chose to improve the YOLOv7-tiny model from the YOLOv7 [20] series. We incorporated a series of efficient components designed for the detection of underwater plastic litter, resulting in the proposed YOLOv7t-CEBC model. Compared to previous object detection models, YOLOv7t-CEBC demonstrates improvements in both precision and detection speed, making it suitable for portable mobile devices. Experimental validation on underwater litter images confirms the efficacy of the improved model for detecting underwater litter. In this article, we will discuss what makes YOLOv7t-CEBC stand out and how it compares to other object detection algorithms. The innovation of this article can be summarized as follows:
(1)
The ConvNeXt Block (CNeB) [21] full-convolution module was incorporated into the backbone network, making use of its simplicity and efficiency in full convolution. By drawing inspiration from the structural advantages of models like Swin Transformer [22], ResNeXt [23], and MobileNetV2 [24], the model’s ability to learn from input feature maps was enhanced.
(2)
The introduction of the EMA (efficient multiscale attention) [25] mechanism in the backbone network resulted in an improved feature extraction capacity for capturing global target features. The inclusion of the Biformer (bi-level routing attention) [26] attention mechanism in the head network leads to enhanced detection performance for small and densely packed targets.
(3)
In the head network, the upsampling layer was replaced with the universal upsampling operator CARAFE (content-aware reassembly of features) [27]. A larger receptive field was provided by CARAFE during feature reassembly, and the reassembly process was guided based on input information. Superior performance was achieved compared to regular upsampling, with minimal additional parameters and computational overhead being introduced.
An overview of the sections of this article: Section 2 presents a detailed description of the YOLOv7-tiny algorithm and introduces the dataset used in the experiments. Section 3 presents the proposed YOLOv7t-CEBC model. Section 4 validates the effectiveness of the YOLOv7t-CEBC model through experiments on an underwater litter dataset and analyzes the model's limitations. Section 5 concludes the article.

2. Related Works

2.1. YOLOv7-Tiny

Chien-Yao Wang, Alexey Bochkovskiy, et al. proposed the YOLOv7 series of models in 2022. YOLOv7 integrates techniques such as E-ELAN (extended efficient layer-aggregation network), cascaded model scaling, and model reparameterization, and offers seven versions tailored to the demands of various application scenarios and computing resources [20]. The YOLOv7-tiny used in this article is a lightweight network designed for use on portable devices. Figure 1 displays its network diagram.
The preprocessing phase serves as the input layer of the YOLOv7-tiny model, incorporating mosaic and mixup data-augmentation approaches. Furthermore, it employs the adaptive anchor box computation technique derived from YOLOv5. The color image input is scaled to 640 × 640 to comply with the backbone network's input size specifications [28], guaranteeing that images are uniformly adjusted to the desired dimensions.
The backbone component primarily serves the purpose of feature extraction and comprises the CBL module, the ELAN-T module, and the MaxPool module. The CBL module is a versatile component consisting of a convolution, batch normalization [29], and the Leaky ReLU activation function. It is used in three main configurations, each serving a specific purpose in the feature extraction process: for feature extraction itself, a 3 × 3 convolutional kernel with a stride of 1 captures crucial details from the input data; for downsampling and altering the size of the feature map, a 3 × 3 convolutional kernel with a stride of 2 is used; and for smoothing and refining the extracted features, a 1 × 1 convolutional kernel with a stride of 1 is applied. This modular approach ensures efficient and effective feature extraction in various applications. The ELAN-T module is a lightweight version of the E-ELAN (extended ELAN) module [20], an efficient network configuration; by splitting and merging gradient paths, the network gains robustness and is able to learn diverse features. The MaxPool module incorporates maximum pooling (MaxPool) to augment the network's capability to extract features by aggregating pertinent information from local regions.
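For concreteness, a minimal PyTorch sketch of the CBL building block described above is given below; the Leaky ReLU negative slope (0.1) and the example channel counts are illustrative assumptions rather than values taken from the official YOLOv7 implementation.
```python
import torch
import torch.nn as nn

class CBL(nn.Module):
    """Convolution + BatchNorm + LeakyReLU, the basic block of YOLOv7-tiny."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        padding = kernel_size // 2  # keeps the spatial size when stride == 1
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.LeakyReLU(0.1, inplace=True)   # slope 0.1 is an assumption

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

# The three configurations mentioned in the text (channel counts are illustrative):
feature_extract = CBL(64, 64, kernel_size=3, stride=1)   # 3x3, stride 1: feature extraction
downsample      = CBL(64, 128, kernel_size=3, stride=2)  # 3x3, stride 2: downsampling
channel_smooth  = CBL(128, 64, kernel_size=1, stride=1)  # 1x1, stride 1: channel smoothing
```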
YOLOv7-tiny’s head network is built on the Path Aggregation Network (PANet) [30] architecture and includes the CBL module, the SPPCSP module, the UP module, and the ELAN-T module. The SPPCSP module is a lightweight SPPCSPC module [20] comprising two sections. The main purpose of SPP is to expand the receptive field, enabling the algorithm to handle images of varying resolutions; it achieves distinct receptive fields by using maximum pooling with different kernel sizes. CSP first splits the features into two parts: one undergoes ordinary convolution processing, while the other passes through the SPP structure. Combining these two parts roughly halves the computational load, resulting in increased speed and improved precision. The UP module utilizes UPsample, an upsampling module that employs nearest-neighbor interpolation.
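As an illustration of the SPP idea inside SPPCSP, the following sketch concatenates the input with max-pooled copies of itself at several kernel sizes; the specific kernel sizes (5, 9, 13) are a common SPP configuration assumed here, not values reported in this article.
```python
import torch
import torch.nn as nn

class SPPBranch(nn.Module):
    """Spatial pyramid pooling: concatenate the input with max-pooled versions of
    itself at several kernel sizes, so the output mixes multiple receptive fields."""
    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):
        return torch.cat([x] + [pool(x) for pool in self.pools], dim=1)

x = torch.randn(1, 256, 20, 20)
print(SPPBranch()(x).shape)  # torch.Size([1, 1024, 20, 20]); a 1x1 conv would then fuse the channels
```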
The head network integrates feature maps from several levels to build feature maps containing multiscale information, with the aim of improving the accuracy of object detection. After receiving the multiscale feature maps produced by the head network, the prediction network performs object detection. It leverages anchor boxes to predict the positions, sizes, and classes of the objects present in the input image, and then refines the predicted boxes by applying non-maximum suppression (NMS) as post-processing to discard redundant detections and improve the precision of the model.
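The post-processing step can be sketched as follows: confidence filtering followed by class-wise non-maximum suppression. The confidence and IoU thresholds below are placeholder values, and torchvision's batched NMS is used for brevity.
```python
import torch
from torchvision.ops import batched_nms

def postprocess(boxes, scores, class_ids, conf_thres=0.25, iou_thres=0.45):
    """Discard low-confidence predictions, then suppress overlapping boxes per class.

    boxes:     (N, 4) tensor in (x1, y1, x2, y2) format
    scores:    (N,) objectness * class confidence
    class_ids: (N,) predicted class indices
    """
    keep = scores > conf_thres
    boxes, scores, class_ids = boxes[keep], scores[keep], class_ids[keep]
    # batched_nms performs NMS independently for each class label
    kept_idx = batched_nms(boxes, scores, class_ids, iou_thres)
    return boxes[kept_idx], scores[kept_idx], class_ids[kept_idx]
```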

2.2. Introduction to the Dataset

The marine litter dataset utilized in this study was primarily sourced from the publicly available Deep Plastic dataset on GitHub (https://github.com/gautamtata/DeepPlastic, accessed on 3 April 2023). To lessen the likelihood of overfitting during training, we supplemented and pruned the dataset. The final dataset was composed of the following components: 1. field images captured at Lake Tahoe, San Francisco Bay, and Bodega Bay in California; 2. internet images (comprising less than 20% of the dataset) collected from a Google Image search; 3. underwater litter images obtained from the JAMSTEC JEDI dataset. Figure 2 shows a selection of the dataset images.
The dataset was categorized into three main types, with “plastic” representing soft litter, “plastic bottle” representing bottle-shaped litter, and “jellyfish” representing shallow-sea organisms. This diverse composition makes it well-suited for scientific research in shallow-sea litter collection. The dataset comprises a total of 1570 images. We randomly divided all the images into three subsets using an 8:1:1 ratio, resulting in 1256 training images, 157 validation images, and 157 test images. Each image was annotated with one of three class labels: “plastic”, “plastic bottle”, or “jellyfish”. During the data annotation process, we employed the open-source tool LabelImg to label each image using single-class bounding box annotations. Figure 3 presents an analysis of the dataset labels; in Figure 3A, a visual analysis of the annotated data is displayed, revealing the quantity of each category. Notably, there are 1168 instances of plastic, 965 instances of plastic bottles, and 930 instances of jellyfish. The incorporation of jellyfish targets serves to enhance the model’s resistance to interference. From Figure 3B, the normalized target location map, it can be discerned that the majority of targets are located within the central areas of the dataset images. Figure 3C shows a normalized target size map, showing that target sizes are relatively scattered, with most being tiny. (In Figure 3B,C, darker colors indicate a higher density of targets.)
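A small sketch of the 8:1:1 random split described above; the directory layout, file extension, and random seed are illustrative only.
```python
import random
from pathlib import Path

def split_dataset(image_dir, seed=0, ratios=(0.8, 0.1, 0.1)):
    """Randomly partition image files into train/val/test lists at an 8:1:1 ratio."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)
    n = len(images)
    n_train, n_val = int(ratios[0] * n), int(ratios[1] * n)
    return {
        "train": images[:n_train],
        "val": images[n_train:n_train + n_val],
        "test": images[n_train + n_val:],
    }

# e.g. 1570 images -> 1256 / 157 / 157, matching the split used in this study
```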

3. Methods

To address the impact of complex underwater environments on optical images and the influence of inter-class similarity and intra-class variability of underwater litter on detection precision, several components designed for marine litter detection were introduced into the improved YOLOv7t-CEBC. The specific introductions made are discussed in the following subsections.

3.1. CNeB (ConvNeXt Block)

ConvNeXt Block [21] is a convolution-based visual model proposed by the FAIR team. Building upon the design principles of Swin Transformer [22], ConvNeXt [21] adjusts the block stacking ratio of ResNet50's stages to (3, 3, 9, 3) and replaces the stem downsampling module with a patchify layer. Borrowing the concept of group convolution from ResNeXt [23], ConvNeXt adopts a more aggressive depthwise convolution and adjusts the initial channel count from 64 to 96, consistent with Swin Transformer. Additionally, ConvNeXt incorporates the inverted bottleneck module from MobileNetV2 [24] and follows Swin Transformer in enlarging the depthwise convolution kernel from 3 × 3 to 7 × 7. In terms of activation functions and normalization layers, ConvNeXt reduces the use of activation functions and batch normalization (BN), replacing ReLU with GELU as the activation function and swapping BN for the layer normalization (LN) used in transformers. Lastly, a separate downsampling layer is introduced, composed of layer normalization and a convolution layer with a kernel size of 2 and a stride of 2. A structure diagram of the ConvNeXt Block is shown in Figure 4, which synthesizes the aforementioned improvements, offering multifaceted optimizations for model performance.
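A minimal PyTorch sketch of the block structure summarized above (7 × 7 depthwise convolution, layer normalization, an inverted-bottleneck pair of pointwise layers with GELU, and Layer Scale with a residual connection); the expansion ratio and Layer Scale initial value follow common ConvNeXt settings and are assumptions here.
```python
import torch
import torch.nn as nn

class ConvNeXtBlock(nn.Module):
    """ConvNeXt Block: 7x7 depthwise conv -> LayerNorm -> 1x1 expand -> GELU -> 1x1 project,
    with Layer Scale and a residual connection."""
    def __init__(self, dim, expansion=4, layer_scale_init=1e-6):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)  # depthwise
        self.norm = nn.LayerNorm(dim)                      # applied over channels (channels-last)
        self.pwconv1 = nn.Linear(dim, expansion * dim)     # inverted bottleneck: expand
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(expansion * dim, dim)     # project back
        self.gamma = nn.Parameter(layer_scale_init * torch.ones(dim))  # Layer Scale

    def forward(self, x):                    # x: (N, C, H, W)
        shortcut = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)            # to channels-last for LayerNorm / Linear
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = self.gamma * x
        x = x.permute(0, 3, 1, 2)            # back to channels-first
        return shortcut + x

print(ConvNeXtBlock(96)(torch.randn(1, 96, 56, 56)).shape)  # torch.Size([1, 96, 56, 56])
```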
ConvNeXt is a purely convolutional model that leverages the advantages of various state-of-the-art models, demonstrating faster inference speed and higher accuracy while retaining the inherent simplicity and efficiency of standard ConvNets. Exploiting these outstanding features, we integrated ConvNeXt into the backbone network of YOLOv7t-CEBC to improve the backbone's feature extraction and learning capabilities for better handling of underwater litter, especially the non-rigid, easily deformed character of plastic waste. Moreover, this fusion efficiently mitigates the problem of information loss, allowing the network to attain increased depth without gradient vanishing, and enhances the network's sensitivity to variations in network weights. Consequently, it improves the overall detection efficiency and precision of the model.

3.2. EMA (Efficient Multiscale Attention)

EMA, a novel and efficient multiscale attention mechanism proposed in [25], does not require dimensionality reduction. It primarily relies on channel attention and embeds spatial attention into the channel attention to enhance feature fusion (Figure 5 shows the EMA network structure). EMA utilizes feature grouping and adopts the shared 1 × 1 convolution component of the CA [33] attention mechanism as its 1 × 1 branch. This branch decomposes the input tensor into two parallel 1D feature encoding vectors. For an input tensor $X \in \mathbb{R}^{C \times H \times W}$, where $C$ is the number of input channels and $H$ and $W$ are the spatial dimensions of the input features, the 1D global average pooling along the horizontal direction for channel $c$ at height $H$, denoted $z_c^H(H)$, can be expressed as:
$$ z_c^H(H) = \frac{1}{W} \sum_{0 \le i < W} x_c(H, i) $$
Similarly, the 1D global average pooling along the vertical direction for channel $c$ at width $W$, denoted $z_c^W(W)$, which encodes global information along the vertical direction, can be expressed as:
$$ z_c^W(W) = \frac{1}{H} \sum_{0 \le j < H} x_c(j, W) $$
Here, $x_c$ denotes the input feature of channel $c$, and $i$ and $j$ denote positions along the width and height dimensions, respectively, in the pooling operations. These two 1D global average pooling operations encode global information along the two spatial directions and capture long-range spatial interactions in different directions, helping the network improve its understanding of the feature maps. The two branches are then concatenated and processed, and a Sigmoid is applied to recalibrate the weights of each channel. A 3 × 3 convolution is placed in parallel with the 1 × 1 branch because the 3 × 3 branch in EMA serves to expand the receptive field, capturing short-range spatial interactions and aggregating multiscale spatial structural information. Furthermore, these parallel substructures help the network avoid deeper, more sequential processing, effectively establishing both short- and long-range feature dependencies, which improves training and inference speed and ultimately achieves better performance.
Next, a method for aggregating cross-spatial information across different spatial dimensions is used to achieve richer feature aggregation, utilizing 2D global average pooling ($z_c$), as follows:
$$ z_c = \frac{1}{H \times W} \sum_{j=1}^{H} \sum_{i=1}^{W} x_c(i, j) $$
This was applied separately to encode the 1 × 1 branch and the 3 × 3 branch. After applying softmax functions to the results, they were fused separately with the original outputs of the 3 × 3 branch and the 1 × 1 branch. This process generates two spatial attention maps, preserving the complete and precise spatial positional information, which is crucial for capturing pixel-level relationships. Finally, the two generated feature maps were aggregated, and after passing through a sigmoid function, they were then multiplied with the input feature map, resulting in a feature map with weight redistribution.
The goal of EMA is to minimize computational cost while preserving the information in every channel. This is accomplished by reshaping part of the channel dimension into the batch dimension, avoiding dimensionality reduction through generic convolutions. Additionally, the channel dimension is partitioned into several sub-features, guaranteeing an equitable distribution of spatial semantic information within each feature group. In addition to encoding global information in each parallel branch to adjust the per-channel weights, the output features of the two parallel branches are combined through cross-dimensional interaction to capture pairwise pixel-level associations.
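A condensed PyTorch sketch of the EMA computation described above, following the structure of [25]: feature grouping, a 1 × 1 branch with directional pooling and sigmoid recalibration, a parallel 3 × 3 branch, and cross-spatial aggregation of the two branches. The number of groups and some layer choices are assumptions, not values taken from this article.
```python
import torch
import torch.nn as nn

class EMA(nn.Module):
    """Efficient Multiscale Attention (simplified sketch)."""
    def __init__(self, channels, groups=8):
        super().__init__()
        self.groups = groups
        cg = channels // groups
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # 1D average pooling along width  -> (H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # 1D average pooling along height -> (1, W)
        self.gap = nn.AdaptiveAvgPool2d(1)              # 2D global average pooling
        self.conv1x1 = nn.Conv2d(cg, cg, kernel_size=1)
        self.conv3x3 = nn.Conv2d(cg, cg, kernel_size=3, padding=1)
        self.gn = nn.GroupNorm(cg, cg)

    def forward(self, x):
        b, c, h, w = x.shape
        g = x.reshape(b * self.groups, -1, h, w)                       # feature grouping
        # --- 1x1 branch: directional pooling, shared 1x1 conv, sigmoid recalibration ---
        x_h = self.pool_h(g)                                           # (bg, cg, h, 1)
        x_w = self.pool_w(g).permute(0, 1, 3, 2)                       # (bg, cg, w, 1)
        y = self.conv1x1(torch.cat([x_h, x_w], dim=2))
        x_h, x_w = torch.split(y, [h, w], dim=2)
        branch1 = self.gn(g * x_h.sigmoid() * x_w.permute(0, 1, 3, 2).sigmoid())
        # --- 3x3 branch: short-range multiscale context ---
        branch3 = self.conv3x3(g)
        # --- cross-spatial aggregation of the two branches ---
        a1 = torch.softmax(self.gap(branch1).flatten(1), dim=1).unsqueeze(1)   # (bg, 1, cg)
        a3 = torch.softmax(self.gap(branch3).flatten(1), dim=1).unsqueeze(1)
        m1 = branch3.flatten(2)                                                # (bg, cg, h*w)
        m3 = branch1.flatten(2)
        weights = (a1 @ m1 + a3 @ m3).reshape(b * self.groups, 1, h, w)
        return (g * weights.sigmoid()).reshape(b, c, h, w)

print(EMA(64)(torch.randn(2, 64, 40, 40)).shape)  # torch.Size([2, 64, 40, 40])
```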
The ELAN-T module borrows the design of the ELAN module [34] and consists of two parts: the initial branch undergoes a 1 × 1 convolution operation to adjust the channel count, while the second branch utilizes a 1 × 1 convolution module to change the channel number, followed by two 3 × 3 convolution modules for feature extraction. The final feature extraction result is obtained by aggregating four features (as shown in Figure 6). We integrated EMA into the ELAN-T module’s second branch before the 3 × 3 convolution, forming the ELAN-EMA module (as illustrated in Figure 6), aiming to retain the feature information inputted into the 3 × 3 convolution channels of the second branch and alleviate feature degradation during the feature extraction process. Such modifications make the network topology more efficient, avoiding information loss caused by dimension reduction through generic convolutions and enhancing the network’s robustness. This customized modification for underwater litter detection better preserves learned litter features, strengthens cross-channel feature fusion, and reduces computational and parameter overheads.
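Reusing the CBL and EMA sketches above, the ELAN-EMA topology described in this subsection can be outlined as follows; the final 1 × 1 fusion convolution and the channel widths are assumptions based on common ELAN implementations, not details taken from the paper.
```python
import torch
import torch.nn as nn

class ELAN_EMA(nn.Module):
    """ELAN-T with EMA inserted in the second branch, before the two 3x3 convolutions."""
    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.branch1 = CBL(in_ch, mid_ch, kernel_size=1)    # first branch: 1x1 channel adjustment
        self.branch2 = CBL(in_ch, mid_ch, kernel_size=1)    # second branch: 1x1 channel adjustment
        self.ema = EMA(mid_ch)                              # attention before feature extraction
        self.conv3a = CBL(mid_ch, mid_ch, kernel_size=3)
        self.conv3b = CBL(mid_ch, mid_ch, kernel_size=3)
        self.fuse = CBL(4 * mid_ch, out_ch, kernel_size=1)  # aggregate the four features (assumed 1x1 fusion)

    def forward(self, x):
        y1 = self.branch1(x)
        y2 = self.branch2(x)
        y3 = self.conv3a(self.ema(y2))
        y4 = self.conv3b(y3)
        return self.fuse(torch.cat([y1, y2, y3, y4], dim=1))
```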

3.3. BiFormer (Bi-Level Routing Attention)

BiFormer, proposed in [26], is designed around an innovative bi-level routing attention and achieves content-aware sparse attention patterns in a query-adaptive manner. It employs bi-level routing attention as its fundamental building block; in other words, BiFormer is a dynamic, query-aware sparse attention mechanism. The main idea is to eliminate most of the irrelevant key–value pairs at a coarse region level, keeping only a small set of routed regions, and then to apply fine-grained token-to-token attention within these routed regions to better capture the relationships and contextual information between tokens.
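A simplified, single-head sketch of bi-level routing attention (region-level routing followed by token-level attention over the gathered regions); it omits BiFormer's local context enhancement and multi-head handling, and the region count and top-k value are illustrative assumptions.
```python
import torch
import torch.nn as nn

class BiLevelRoutingAttention(nn.Module):
    """Coarse region-to-region routing selects the top-k most relevant regions for each
    query region; fine-grained token-to-token attention then runs over the gathered tokens."""
    def __init__(self, dim, num_regions=7, topk=4):
        super().__init__()
        self.s, self.topk = num_regions, topk
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):                                   # x: (B, H, W, C), H and W divisible by s
        B, H, W, C = x.shape
        s, k = self.s, self.topk
        hr, wr = H // s, W // s                             # tokens per region along each axis
        q, kx, v = self.qkv(x).chunk(3, dim=-1)

        def to_regions(t):                                  # (B, H, W, C) -> (B, s*s, hr*wr, C)
            return (t.reshape(B, s, hr, s, wr, C)
                     .permute(0, 1, 3, 2, 4, 5)
                     .reshape(B, s * s, hr * wr, C))
        q, kx, v = map(to_regions, (q, kx, v))
        # --- level 1: region-to-region routing on mean-pooled descriptors ---
        q_r, k_r = q.mean(dim=2), kx.mean(dim=2)            # (B, s*s, C)
        affinity = q_r @ k_r.transpose(-1, -2)              # (B, s*s, s*s)
        topk_idx = affinity.topk(k, dim=-1).indices         # (B, s*s, k) routed regions
        # --- gather key/value tokens from the routed regions ---
        batch_idx = torch.arange(B, device=x.device)[:, None, None]
        k_g = kx[batch_idx, topk_idx].reshape(B, s * s, k * hr * wr, C)
        v_g = v[batch_idx, topk_idx].reshape(B, s * s, k * hr * wr, C)
        # --- level 2: fine-grained token-to-token attention within the routed regions ---
        attn = torch.softmax((q @ k_g.transpose(-1, -2)) * self.scale, dim=-1)
        out = attn @ v_g                                    # (B, s*s, hr*wr, C)
        out = (out.reshape(B, s, s, hr, wr, C)
                  .permute(0, 1, 3, 2, 4, 5)
                  .reshape(B, H, W, C))
        return self.proj(out)

x = torch.randn(1, 28, 28, 64)
print(BiLevelRoutingAttention(64)(x).shape)  # torch.Size([1, 28, 28, 64])
```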
BiFormer exploits sparsity to reduce computation and memory usage while relying exclusively on GPU-friendly dense matrix multiplications, enabling more flexible computation allocation and content awareness; it is thus a sparse, dynamic, query-aware mechanism. Because BiFormer attends to a subset of relevant tokens in a query-adaptive manner, without being distracted by other irrelevant tokens, it performs well in terms of both accuracy and computational efficiency, making it particularly suitable for detecting small and dense targets. We appended BiFormer after the ELAN-T module to form the ELAN-BIF module, as shown in Figure 6. The purpose is to strengthen deep-network feature extraction and fusion, enhancing the network's capability to detect small and densely packed litter.

3.4. CARAFE (Content-Aware Reassembly of Features)

The CARAFE method, a lightweight general-purpose upsampling operator proposed in [27], involves two main stages: the kernel prediction module and the content-aware reassembly module that generates the output. Given an input feature map of size $H \times W \times C$ and an upsampling factor $\sigma$, and assuming a predicted upsampling kernel size of $k_{up} \times k_{up}$ (larger kernels yield a larger receptive field at the cost of increased computational complexity), kernel prediction begins by compressing the channel count of the input feature map to $C_m$ through a convolutional operation. Subsequently, the compressed feature map is encoded using a convolutional layer with a kernel size of $k_{encoder} \times k_{encoder}$. The encoded result is then unfolded spatially to obtain a collection of upsampling kernels of size $\sigma H \times \sigma W \times k_{up}^2$. Following this, normalization is applied so that the weights of each upsampling kernel sum to 1. The content-aware reassembly module then performs the upsampling: each position in the output feature map is mapped back to the input feature map, a $k_{up} \times k_{up}$ region centered at that position is extracted, and the output value is obtained as the dot product between the extracted region and the predicted upsampling kernel at this point. (All channels share the same upsampling kernel at the same location.) Ultimately, this process yields a feature map of size $\sigma H \times \sigma W \times C$. Figure 7 shows the workflow of the universal upsampling operator CARAFE.
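A naive, memory-heavy sketch of CARAFE following the description above (the official implementation uses an optimized CUDA kernel); the compressed channel count $C_m$ = 64 and the default kernel sizes are assumptions for illustration.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CARAFE(nn.Module):
    """Kernel prediction predicts a content-aware k_up x k_up reassembly kernel for every
    output position; content-aware reassembly applies it to the matching input neighbourhood."""
    def __init__(self, channels, scale=2, c_mid=64, k_encoder=3, k_up=5):
        super().__init__()
        self.scale, self.k_up = scale, k_up
        self.compress = nn.Conv2d(channels, c_mid, kernel_size=1)          # channel compressor
        self.encode = nn.Conv2d(c_mid, scale * scale * k_up * k_up,        # content encoder
                                kernel_size=k_encoder, padding=k_encoder // 2)
        self.shuffle = nn.PixelShuffle(scale)                               # spread kernels to sH x sW

    def forward(self, x):                                # x: (B, C, H, W)
        B, C, H, W = x.shape
        s, k = self.scale, self.k_up
        # --- kernel prediction ---
        kernels = self.shuffle(self.encode(self.compress(x)))               # (B, k*k, sH, sW)
        kernels = F.softmax(kernels, dim=1)                                  # each kernel sums to 1
        # --- content-aware reassembly ---
        patches = F.unfold(x, kernel_size=k, padding=k // 2)                 # (B, C*k*k, H*W)
        patches = patches.reshape(B, C, k * k, H, W)
        patches = F.interpolate(                                             # replicate each source
            patches.reshape(B, C * k * k, H, W), scale_factor=s, mode="nearest"
        ).reshape(B, C, k * k, s * H, s * W)                                 # neighbourhood to sH x sW
        return (patches * kernels.unsqueeze(1)).sum(dim=2)                   # (B, C, sH, sW)

x = torch.randn(1, 128, 20, 20)
print(CARAFE(128)(x).shape)  # torch.Size([1, 128, 40, 40])
```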
We replaced the upsampling module in the YOLOv7t-CEBC head network with the CARAFE module. The CARAFE module covers a wider range of input information and can more finely utilize the surrounding details during the upsampling process. It can utilize content information from the lower layers to predict recombination kernels and recombine features within predefined neighboring regions. Based on content information, CARAFE can adaptively and optimally use the recombination kernels at different positions. This enables CARAFE to design corresponding feature recombination processes according to the different shapes of litter features, thereby enhancing the shape perception capability of YOLOv7t-CEBC. Compared to mainstream upsampling operations such as interpolation or deconvolution, CARAFE achieves a better performance and can minimize parameter count as much as possible, maintaining a lightweight network model.

3.5. YOLOv7t-CEBC

After incorporating various modules designed for underwater litter detection, the detailed structure of the improved YOLOv7t-CEBC is illustrated in Figure 8. (Red boxes represent modules designed specifically for underwater litter detection.)

4. Experiment

This section offers a comprehensive account of the configuration of the experiment, comprising the arrangement of the environment, hyperparameters, evaluation criteria, and analysis of the experimental results. The experimental findings demonstrate that the YOLOv7t-CEBC model significantly enhanced the precision of underwater object identification while just slightly increasing the parameter count. This has been confirmed to be effective and superior in underwater detection situations.

4.1. Experimental Environment

The experimental platform employed an Intel® Core™ i9-10900X X-series CPU @ 3.7 GHz and an NVIDIA GeForce RTX 4090 GPU with 24 GB of memory for graphics processing. The system ran a 64-bit Windows 11 operating system. The experimental runtime environment comprised PyTorch, CUDA 11.8, cuDNN 8.2.2, and Python 3.10.

4.2. Experimental Parameter Setting

Identical starting training conditions were established for each batch of experiments in this investigation. The model was trained for 500 epochs with input images resized beforehand; hyperparameters such as the learning rate, momentum, and weight decay were set, and the Adam optimizer was used. The specific parameters are listed in Table 1.

4.3. Model Evaluation Metrics

The primary metrics used for object detection are precision, recall, IOU, AP, and mAP. The IOU indicator measures how much the ground-truth bounding boxes in the original image and the predicted bounding boxes generated by the algorithm overlap, and it is commonly employed to assess the precision of a detection model. A higher IOU value indicates greater overlap between the model's detection results and the actual scenario, and thus better detection performance. IOU is computed as the ratio of the intersection to the union of the detection result and the ground truth, as follows:
$$ IOU = \frac{DetectionResult \cap GroundTruth}{DetectionResult \cup GroundTruth} $$
The experiment established the IOU threshold. When the IOU value, which measures the overlap between the detection result and the true value, exceeds the specified threshold, the detection result can be considered as a true positive (TP), indicating accurate target recognition. On the contrary, when the IOU value is less than the threshold, the detection result can be considered to be a false positive (FP), indicating an error in target recognition. The number of undetected targets would be called false negative (FN). This statistic represents the number of ground truths that do not have related detection results.
Precision can be defined as the ratio of correctly identified positive instances in the recognition image, expressed as a percentage.
$$ Precision = \frac{TP}{TP + FP} $$
The recall rate refers to the ratio of correctly identified positive samples in the test set to all positive samples.
$$ Recall = \frac{TP}{TP + FN} $$
AP and mAP comprehensively consider precision and recall and are typically used to assess the effectiveness of models across multiple categories. The precision–recall rate (PR) curve represents the relationship between the precision and recall rates of a classifier. The precision is plotted on the vertical axis, while the recall rate is plotted on the horizontal axis. This curve illustrates how well the classifier can detect positive examples and include all of them. The average precision (AP) is a numerical measure of the area under the precision–recall rate (PR) curve. A higher AP value indicates the superior performance of the classifier.
$$ AP = \int_0^1 Precision(Recall) \, dRecall $$
Object detection models typically identify multiple types of targets, and each type yields a precision–recall (PR) curve from which an average precision (AP) value is calculated. The mAP is then obtained by averaging the AP values across all categories.
$$ mAP = \frac{1}{\text{class number}} \sum_{i=1}^{\text{class number}} AP_i $$
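The AP and mAP computations can be sketched as follows, using the standard all-points interpolation of the PR curve; different toolkits differ slightly in how they interpolate, so this is illustrative rather than the exact procedure used by the training framework.
```python
import numpy as np

def average_precision(recalls, precisions):
    """Area under the precision-recall curve with all-points interpolation:
    precision is made monotonically non-increasing before integrating over recall."""
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([1.0], precisions, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]        # non-increasing precision envelope
    idx = np.where(r[1:] != r[:-1])[0]              # points where recall changes
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

def mean_average_precision(per_class_curves):
    """mAP = mean of per-class AP values; `per_class_curves` maps class name to a
    (recalls, precisions) pair, both ordered by increasing recall."""
    return float(np.mean([average_precision(r, p) for r, p in per_class_curves.values()]))

# toy single-class example
recalls    = np.array([0.1, 0.4, 0.8, 1.0])
precisions = np.array([1.0, 0.9, 0.7, 0.5])
print(average_precision(recalls, precisions))  # 0.75
```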

4.4. Experimental Results and Analysis

4.4.1. Experimental Results

The detection performance of the proposed YOLOv7t-CEBC model was experimentally evaluated on the Deep Plastic dataset, and the experimental results are shown in Figure 9. The results demonstrate that the improved model exhibits enhanced detection across all categories compared to the baseline model. The most noticeable improvement is observed for the “plastic” category, whose average precision (AP) increased by 7.2%. The model’s overall mean average precision (mAP) was computed to be 81.8%.

4.4.2. Comparison of Different Object Detection Models

To further validate the superiority of the improved YOLOv7t-CEBC model, we compared it with well-known object detection algorithms such as SSD, Faster-RCNN, Retinanet, YOLOv3, YOLOv4, YOLOv5s, YOLOXs, YOLOv8n, and the original YOLOv7-tiny. The Deep Plastic dataset served as the experimental dataset, and the evaluation included metrics such as precision, recall, mAP@0.5, model computational complexity (GFLOPs), parameter count, and frame rate (FPS) (refer to Table 2 for details). From Table 2, it is evident that our proposed YOLOv7t-CEBC significantly outperforms all other object detection algorithms in terms of precision and recall. Regarding the detection precision metric mAP@0.5, our improved algorithm showed a 3.8% increase compared to the original YOLOv7-tiny, attaining a clear advantage over the other detection algorithms. In terms of computational complexity (GFLOPs), YOLOv7t-CEBC is higher than YOLOv8n, YOLOv7-tiny, YOLOXs, and YOLOv5s, and its parameter count is also higher than that of YOLOv7-tiny and YOLOv8n. This increase is attributed to the integration of modules specifically designed for underwater litter detection, which expands YOLOv7t-CEBC's overall size. However, the improvement in mAP and the outstanding performance on the other evaluation metrics compensate for these drawbacks. The algorithm achieved a detection speed of 118 FPS, slightly lower than the original YOLOv7-tiny but significantly surpassing current mainstream detection algorithms, ensuring good real-time performance. The experimental results robustly demonstrate the outstanding performance of YOLOv7t-CEBC in underwater litter detection, making it more suitable for marine litter detection than other mainstream object detectors.

4.4.3. Ablation Experiment of Deep Plastic Dataset

This study used ablation experiments to assess the contribution of each enhancement to model performance. The ablation experimental results are shown in Table 3.
Table 3 clearly demonstrates that adding the crucial CNeB module to the backbone network increased the parameter count by 1.46M compared to the baseline model, resulting in a 2.1% improvement in mAP. Furthermore, the addition of the EMA and Biformer attention mechanisms on this basis resulted in a 0.7% enhancement in mean average precision (mAP), with an increase of 0.25M parameters relative to the baseline model. Lastly, replacing the upsampling module of the improved model with the CARAFE upsampling operator enhanced precision further: in comparison with the baseline model, the change led to a 3.8% rise in mAP with an additional 0.89M parameters. If only the CNeB module and the CARAFE upsampling operator were added to the baseline model, the mAP improved by only 0.6% compared to the baseline, but the parameter count increased by 35%. In comparison to the final improved model, the addition of the EMA and Biformer attention mechanisms not only reduced the parameter count and network complexity but also contributed to enhanced model precision. Throughout the entire ablation experiment, there was a slight decrease in detection speed, resulting in a final speed of 118 FPS, which is sufficient to meet the model's rapid detection requirements.
Furthermore, the values of $k_{encoder}$ and $k_{up}$ in CARAFE also affect the final results. Comparative experiments (see Table 4) showed that increasing $k_{up}$ requires a larger $k_{encoder}$, because the content encoder needs a larger receptive field to predict a larger reassembly kernel. Simultaneously increasing both $k_{encoder}$ and $k_{up}$ improved detection precision, whereas increasing only one of them did not. This can be summarized by the relation:
$$ k_{encoder} = k_{up} - 2 $$
The experimental results indicated that the three combinations of ($k_{encoder}$, $k_{up}$)—(1, 3), (3, 5), and (5, 7)—were all good choices. The larger the kernel sizes used, the better the results, and increasing the kernel size neither led to a significant increase in the number of parameters nor had a substantial impact on detection speed. Therefore, the CARAFE upsampling operator with the kernel combination (5, 7) was chosen as the model improvement module.

4.5. Model Performance Discussion

Deep learning networks are often seen as inscrutable black boxes, posing difficulties in interpretation. Comprehending the model's recognition process is essential for examining its internal operations, structure, training data, feature extraction, and prediction processes [35]. In our experiments, we introduced Grad-CAM [36], a technique for visualizing the attention of deep learning models. This method backpropagates the model's gradient information to the input image, generating a class activation map (CAM) that highlights the input image regions most crucial for the model's predictions. The Grad-CAM heatmaps are illustrated in Figure 10. The results demonstrate that the enhanced model exhibits a stronger capability for recognizing and extracting underwater litter features compared to the original model. Moreover, it is less susceptible to the influence of complex underwater environmental factors. The improved model is therefore better equipped to handle the inter-class similarity and intra-class variability of underwater litter while effectively filtering out irrelevant information.
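A minimal hook-based sketch of the Grad-CAM procedure described above, written for a classification-style scalar score; applying it to a detection head requires choosing a suitable scalar (for example, the class confidence of a selected box), which is not shown here.
```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx):
    """Capture the activations and gradients of `target_layer`, weight each activation
    channel by its spatially averaged gradient, and keep the positive part as the CAM."""
    activations, gradients = {}, {}

    def fwd_hook(module, inp, out):
        activations["value"] = out.detach()

    def bwd_hook(module, grad_in, grad_out):
        gradients["value"] = grad_out[0].detach()

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        model.zero_grad()
        score = model(image)[0, class_idx]   # assumes the model returns per-class scores
        score.backward()
    finally:
        h1.remove()
        h2.remove()

    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)          # GAP over H, W
    cam = F.relu((weights * activations["value"]).sum(dim=1))            # (1, h, w)
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                        mode="bilinear", align_corners=False).squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)            # normalised heat map
```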
To better evaluate the visual capability of the enhanced algorithm, we implemented detection on the validation set of the dataset. Experimental findings show that, in comparison with the traditional YOLOv7-tiny algorithm, the presented YOLOv7t-CEBC performed better in harsh underwater scenarios. As shown in Figure 11, an analysis of the detection outcomes on the Deep Plastic dataset between YOLOv7-tiny and YOLOv7t-CEBC demonstrated that the proposed YOLOv7t-CEBC exhibited significantly improved detection performance over YOLOv7-tiny. It not only accurately detected more targets but also achieved higher precision.
However, the model still has limitations; for example, YOLOv7t-CEBC still produces false detections and missed detections in complex underwater environments, as shown in Figure 12, and thus requires further research and improvement. Furthermore, due to the influence of immersion time and lighting conditions, significant changes are observed in the shape and color of underwater litter. Despite integrating modules tailored to underwater litter detection, the impact of inter-class similarity and intra-class variability on detection precision could not be fully resolved, and missed detections and false alarms persist. Collecting images of marine litter with different immersion times and forms to enrich our dataset is therefore necessary to enhance the model's transferability and detection capacity. Because our dataset currently focuses on shallow-water litter detection, the influence of factors such as lighting and noise on the target images is relatively weak and has little impact on the detection outcomes; consequently, the method proposed in this article does not involve image preprocessing. Furthermore, mainstream image-enhancement methods are primarily conducted offline and have not been deployed on real-time detection models. There is therefore a pressing need for advanced image-enhancement algorithms that can be deployed on detection models for real-time image processing, in order to meet the demands of deepwater object detection in intricate circumstances. The "tiny" network architecture is a targeted solution for resource-constrained environments, aiming to achieve efficient detection performance with limited resources. When underwater robots perform tasks, they typically rely on portable computing devices whose computational resources are severely constrained; such compact network architectures are therefore needed to achieve the required functionality and improve the overall efficiency of the robots. Although we adopted the "tiny" network structure as the foundation for our improvements, there remains room for further lightweight optimization. These improvement strategies will be the focus of our next phase of work.

5. Conclusions

To tackle the complex issue of detecting litter in underwater environments, this study introduced a series of components specifically designed for underwater litter detection, proposing the underwater litter detection algorithm YOLOv7t-CEBC. Firstly, the CNeB module was incorporated into the backbone network to enhance the learning capability for underwater litter features, expedite network convergence, and elevate detection precision. Secondly, the EMA and Biformer attention mechanisms were introduced to improve the network's ability to capture targets' global characteristics and to detect densely clustered litter targets, while effectively reducing the network's complexity. Finally, the conventional upsampling module was replaced with CARAFE to enlarge the receptive field, better utilize surrounding information, and enhance the deep network's ability to extract underwater litter features. Experiments were conducted on the underwater litter dataset Deep Plastic, and YOLOv7t-CEBC was compared with state-of-the-art object detection algorithms. The results demonstrated that the YOLOv7t-CEBC model attained a mean average precision (mAP) of 81.8% for detecting litter in intricate underwater settings, with a detection speed of 118 FPS, surpassing the most advanced object detection models and meeting real-time requirements. At present, the method proposed in this paper largely fulfills the requirements for shallow-water litter detection by underwater robots and can guide a mechanical arm in collecting underwater litter. In the future, to improve the model's generalization performance and detection capacity, we will concentrate on gathering a large and varied dataset of underwater litter. Advanced image processing techniques will be employed to restore underwater optical images, thereby improving detection precision and extending the robot's working range to deeper waters. Additionally, efforts will be made to make the model more lightweight, alleviating the computational burden on underwater robots and helping them execute litter-cleaning tasks more efficiently. In summary, this research is of great importance for underwater litter detection and subsequent cleanup efforts. Through the continuous refinement of model functionalities, this research will contribute significantly to the protection of the marine environment in the future.

Author Contributions

Conceptualization, X.Z., D.Z. and W.G.; methodology, X.Z., D.Z. and W.G.; software, X.Z.; writing—original draft preparation, X.Z.; writing—review and editing, D.Z. and W.G.; funding acquisition, D.Z. and W.G. All authors have read and agreed to the published version of the manuscript.

Funding

This project is supported by the National Natural Science Foundation of China (62033009), Creative Activity Plan for Science and Technology Commission of Shanghai (23550730300, 21DZ2293500).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Martin, C.; Young, C.A.; Valluzzi, L.; Duarte, C.M. Ocean sediments as the global sink for marine micro-and mesoplastics. Limnol. Oceanogr. Lett. 2022, 7, 235–243. [Google Scholar] [CrossRef]
  2. Madricardo, F.; Ghezzo, M.; Nesto, N.; Mc Kiver, W.J.; Faussone, G.C.; Fiorin, R.; Riccato, F.; Mackelworth, P.C.; Basta, J.; De Pascalis, F. How to deal with seafloor marine litter: An overview of the state-of-the-art and future perspectives. Front. Mar. Sci. 2020, 7, 505134. [Google Scholar] [CrossRef]
  3. Akib, A.; Tasnim, F.; Biswas, D.; Hashem, M.B.; Rahman, K.; Bhattacharjee, A.; Fattah, S.A. Unmanned floating waste collecting robot. In Proceedings of the TENCON 2019–2019 IEEE Region 10 Conference (TENCON), Kochi, India, 17–20 October 2019; pp. 2645–2650. [Google Scholar]
  4. Tian, M.; Li, X.; Kong, S.; Wu, L.; Yu, J. A modified YOLOv4 detection method for a vision-based underwater garbage cleaning robot. Front. Inf. Technol. Electron. Eng. 2022, 23, 1217–1228. [Google Scholar] [CrossRef]
  5. Li, P.; Fan, Y.; Cai, Z.; Lyu, Z.; Ren, W. Detection Method of Marine Biological Objects Based on Image Enhancement and Improved YOLOv5S. J. Mar. Sci. Eng. 2022, 10, 1503. [Google Scholar] [CrossRef]
  6. Xu, S.; Zhang, M.; Song, W.; Mei, H.; He, Q.; Liotta, A. A systematic review and analysis of deep learning-based underwater object detection. Neurocomputing 2023, 527, 204–232. [Google Scholar] [CrossRef]
  7. Dai, J.; Li, Y.; He, K.; Sun, J. R-FCN: Object detection via region-based fully convolutional networks. In Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 379–387. [Google Scholar]
  8. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
  9. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part I 14, 2016. pp. 21–37. [Google Scholar]
  10. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  11. Wang, J.; Li, Q.; Fang, Z.; Zhou, X.; Tang, Z.; Han, Y.; Ma, Z. YOLOv6-ESG: A lightweight seafood detection method. J. Mar. Sci. Eng. 2023, 11, 1623. [Google Scholar] [CrossRef]
  12. Sun, Y.; Zheng, W.; Du, X.; Yan, Z. Underwater small target detection based on yolox combined with mobilevit and double coordinate attention. J. Mar. Sci. Eng. 2023, 11, 1178. [Google Scholar] [CrossRef]
  13. Gaya, J.O.; Gonçalves, L.T.; Duarte, A.C.; Zanchetta, B.; Drews, P.; Botelho, S.S. Vision-based obstacle avoidance using deep learning. In Proceedings of the 2016 XIII Latin American Robotics Symposium and IV Brazilian Robotics Symposium (LARS/SBR), Recife, Brazil, 8–12 October 2016; pp. 7–12. [Google Scholar]
  14. Fulton, M.; Hong, J.; Islam, M.J.; Sattar, J. Robotic detection of marine litter using deep visual detection models. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 5752–5758. [Google Scholar]
  15. Chen, L.; Liu, Z.; Tong, L.; Jiang, Z.; Wang, S.; Dong, J.; Zhou, H. Underwater object detection using Invert Multi-Class Adaboost with deep learning. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–8. [Google Scholar]
  16. Lin, W.-H.; Zhong, J.-X.; Liu, S.; Li, T.; Li, G. Roimix: Proposal-fusion among multiple images for underwater object detection. In Proceedings of the ICASSP 2020–2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 2588–2592. [Google Scholar]
  17. Wu, C.; Sun, Y.; Wang, T.; Liu, Y. Underwater trash detection algorithm based on improved YOLOv5s. J. Real-Time Image Process. 2022, 19, 911–920. [Google Scholar] [CrossRef]
  18. Xue, B.; Huang, B.; Wei, W.; Chen, G.; Li, H.; Zhao, N.; Zhang, H. An efficient deep-sea debris detection method using deep neural networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 12348–12360. [Google Scholar] [CrossRef]
  19. Ma, D.; Wei, J.; Li, Y.; Zhao, F.; Chen, X.; Hu, Y.; Yu, S.; He, T.; Jin, R.; Li, Z. MLDet: Towards efficient and accurate deep learning method for Marine Litter Detection. Ocean Coast. Manag. 2023, 243, 106765. [Google Scholar] [CrossRef]
  20. Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 7464–7475. [Google Scholar]
  21. Liu, Z.; Mao, H.; Wu, C.-Y.; Feichtenhofer, C.; Darrell, T.; Xie, S. A convnet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 11976–11986. [Google Scholar]
  22. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 10012–10022. [Google Scholar]
  23. Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1492–1500. [Google Scholar]
  24. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
  25. Ouyang, D.; He, S.; Zhang, G.; Luo, M.; Guo, H.; Zhan, J.; Huang, Z. Efficient Multi-Scale Attention Module with Cross-Spatial Learning. In Proceedings of the ICASSP 2023–2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; pp. 1–5. [Google Scholar]
  26. Zhu, L.; Wang, X.; Ke, Z.; Zhang, W.; Lau, R.W. BiFormer: Vision Transformer with Bi-Level Routing Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 10323–10333. [Google Scholar]
  27. Wang, J.; Chen, K.; Xu, R.; Liu, Z.; Loy, C.C.; Lin, D. Carafe: Content-aware reassembly of features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 3007–3016. [Google Scholar]
  28. Liu, K.; Sun, Q.; Sun, D.; Peng, L.; Yang, M.; Wang, N. Underwater target detection based on improved YOLOv7. J. Mar. Sci. Eng. 2023, 11, 677. [Google Scholar] [CrossRef]
  29. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 7–9 July 2015; pp. 448–456. [Google Scholar]
  30. Mei, Y.; Fan, Y.; Zhang, Y.; Yu, J.; Zhou, Y.; Liu, D.; Fu, Y.; Huang, T.S.; Shi, H. Pyramid Attention Network for Image Restoration. Int. J. Comput. Vis. 2023, 131, 3207–3225. [Google Scholar] [CrossRef]
  31. Dollár, P.; Singh, M.; Girshick, R. Fast and accurate model scaling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 924–932. [Google Scholar]
  32. Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
  33. Hou, Q.; Zhou, D.; Feng, J. Coordinate attention for efficient mobile network design. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13713–13722. [Google Scholar]
  34. Zhang, X.; Zeng, H.; Guo, S.; Zhang, L. Efficient long-range attention network for image super-resolution. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 649–667. [Google Scholar]
  35. Zhao, K.; Zhao, L.; Zhao, Y.; Deng, H. Study on Lightweight Model of Maize Seedling Object Detection Based on YOLOv7. Appl. Sci. 2023, 13, 7731. [Google Scholar] [CrossRef]
  36. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
Figure 1. Simplified diagram of the YOLOv7-tiny network architecture.
Figure 2. The Deep Plastic dataset. Different forms of litter are displayed in different underwater environments. Jellyfish are added as interference terms to help improve the dataset’s richness.
Figure 3. (A) Bar graph displaying the quantity of targets in each category; (B) normalized target position diagram indicating the positions of all objects in the dataset images; (C) normalized target size map indicating the sizes of all objects in the dataset images.
Figure 4. CNeB (ConvNeXt Block) structure diagram. “Layer Norm”—layer normalization operation; “Layer Scale” operation—scale the data of each channel.
Figure 5. The EMA network structure. “g”—divided groups; “X Avg Pool”—1D horizontal global pooling; “Y Avg Pool”—1D vertical global pooling; “Group Norm”—group normalization; “Matmul”—matrix multiplication; “*”—matrix addition.
Figure 6. ELAN-T—the lightweight E-ELAN module; ELAN-EMA—the EMA attention mechanism is added to ELAN-T; ELAN-BIF—the BiFormer attention mechanism is added to ELAN-T.
Figure 7. The overall framework of the upsampling module CARAFE. The CARAFE module consists of two essential components: the kernel prediction module and the content-aware reassembly module. A feature map of size H × W × C is upsampled by a factor of σ (=2) in this figure; “⊗”: dot product operation.
Figure 8. The network structure of YOLOv7t-CEBC.
Figure 9. The precision–recall curves of YOLOv7-tiny (left) and YOLOv7t-CEBC (right) on the Deep Plastic dataset. A better precision–recall curve bulges toward the top-right corner.
Figure 10. The Grad-CAM heat maps. Through the color overlays, we can observe which areas of the image the network emphasizes most in order to achieve precise detection.
Figure 11. Detection results of YOLOv7-tiny (top) and YOLOv7t-CEBC (bottom) in harsh underwater scenes. The detection precision of YOLOv7t-CEBC is significantly better than YOLOv7-tiny.
Figure 12. Missed detections by YOLOv7t-CEBC in a highly complex underwater environment, marked by black boxes in (A,B); false detections by YOLOv7t-CEBC in a highly complex underwater environment, marked by black boxes in (C,D).
Table 1. Experimental configuration.
Parameter | Value
Input size (pixels) | 640 × 640
Training epochs | 500
Learning rate | 0.01
Momentum | 0.937
Weight decay | 0.0005
Optimizer | Adam
Batch size | 16
Table 2. Performance comparison of YOLOv7t-CEBC and other models.
Method | Precision | Recall | mAP@0.5 | GFLOPs | Parameters | FPS
SSD (vgg) | 0.767 | 0.616 | 0.720 | 62.8G | 26.29M | 90
Faster-RCNN (resnet50) | 0.636 | 0.683 | 0.658 | 370.2G | 137.12M | 31
Retinanet | 0.782 | 0.701 | 0.795 | 164.2G | 36.37M | 69
YOLOv3 | 0.802 | 0.730 | 0.804 | 65.6G | 61.54M | 72
YOLOv4 | 0.824 | 0.679 | 0.798 | 60.0G | 63.95M | 80
YOLOv5s | 0.763 | 0.742 | 0.767 | 17.1G | 7.23M | 94
YOLOXs | 0.786 | 0.714 | 0.805 | 26.8G | 8.43M | 73
YOLOv7-tiny | 0.801 | 0.703 | 0.780 | 13.2G | 6.22M | 156
YOLOv8n | 0.769 | 0.693 | 0.784 | 8.7G | 3.16M | 72
YOLOv7t-CEBC (ours) | 0.844 | 0.744 | 0.818 | 29.5G | 6.90M | 118
Table 3. Ablation comparison of model performance improvement on the Deep Plastic dataset.
Index | CNeB | EMA | Biformer | CARAFE | AP (Plastic) | AP (Plastic Bottle) | AP (Jellyfish) | mAP | FPS | Parameters
YOLOv7-tiny | – | – | – | – | 0.773 | 0.739 | 0.829 | 0.780 | 157 | 6.22M
1 | ✓ | – | – | – | 0.810 | 0.749 | 0.844 | 0.801 | 140 | 7.47M
2 | ✓ | – | – | ✓ | 0.798 | 0.713 | 0.818 | 0.786 | 117 | 8.11M
3 | – | ✓ | ✓ | – | 0.756 | 0.722 | 0.839 | 0.772 | 136 | 5.85M
4 | ✓ | ✓ | ✓ | – | 0.791 | 0.728 | 0.843 | 0.787 | 134 | 6.26M
5 | ✓ | ✓ | ✓ | ✓ | 0.845 | 0.744 | 0.864 | 0.818 | 118 | 6.90M
Table 4. Detection results with various encoder kernel sizes $k_{encoder}$ and reassembly kernel sizes $k_{up}$.
k_encoder | k_up | mAP | FPS | Parameters
1 | 3 | 0.798 | 131 | 6.28M
3 | 3 | 0.791 | 130 | 6.32M
1 | 5 | 0.776 | 127 | 6.29M
3 | 5 | 0.807 | 122 | 6.40M
5 | 5 | 0.801 | 121 | 6.60M
5 | 7 | 0.818 | 118 | 6.90M
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

