Search Results (2,696)

Search Parameters:
Keywords = YOLOv7

15 pages, 4275 KiB  
Article
Real-Time Detection of Microplastics Using an AI Camera
by Md Abdul Baset Sarker, Masudul H. Imtiaz, Thomas M. Holsen and Abul B. M. Baki
Sensors 2024, 24(13), 4394; https://doi.org/10.3390/s24134394 (registering DOI) - 6 Jul 2024
Viewed by 129
Abstract
Microplastics (MPs, size ≤ 5 mm) have emerged as a significant worldwide concern, threatening marine and freshwater ecosystems, and the lack of MP detection technologies is notable. The main goal of this research is the development of a camera sensor for the detection of MPs and measuring their size and velocity while in motion. This study introduces a novel methodology involving computer vision and artificial intelligence (AI) for the detection of MPs. Three different camera systems, including fixed-focus 2D and autofocus (2D and 3D), were implemented and compared. A YOLOv5-based object detection model was used to detect MPs in the captured image. DeepSORT was then implemented for tracking MPs through consecutive images. In real-time testing in a laboratory flume setting, the precision in MP counting was found to be 97%, and during field testing in a local river, the precision was 96%. This study provides foundational insights into utilizing AI for detecting MPs in different environmental settings, contributing to more effective efforts and strategies for managing and mitigating MP pollution. Full article
(This article belongs to the Section Sensing and Imaging)
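
The counting pipeline described above pairs a frame-level detector with DeepSORT association. A minimal sketch of such a detect-then-track counting loop, assuming the public ultralytics/yolov5 hub model and the deep-sort-realtime package rather than the authors' exact models and thresholds:

```python
# Sketch of a detect-then-track counting loop (YOLOv5 + DeepSORT).
# Assumes: pip install torch opencv-python deep-sort-realtime
import cv2
import torch
from deep_sort_realtime.deepsort_tracker import DeepSort

model = torch.hub.load("ultralytics/yolov5", "yolov5s")   # stand-in weights
tracker = DeepSort(max_age=30)                            # drop stale tracks
counted_ids = set()

cap = cv2.VideoCapture("flume.mp4")                       # hypothetical input
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    det = model(frame[..., ::-1]).xyxy[0].cpu().numpy()   # x1,y1,x2,y2,conf,cls
    # DeepSORT expects ([left, top, w, h], confidence, class) tuples.
    dets = [([x1, y1, x2 - x1, y2 - y1], conf, int(c))
            for x1, y1, x2, y2, conf, c in det if conf > 0.5]
    for t in tracker.update_tracks(dets, frame=frame):
        if t.is_confirmed():
            counted_ids.add(t.track_id)                   # each ID counted once

print("MP count:", len(counted_ids))
```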

17 pages, 9234 KiB  
Article
Algorithm for Corn Crop Row Recognition during Different Growth Stages Based on ST-YOLOv8s Network
by Zhihua Diao, Shushuai Ma, Dongyan Zhang, Jingcheng Zhang, Peiliang Guo, Zhendong He, Suna Zhao and Baohua Zhang
Agronomy 2024, 14(7), 1466; https://doi.org/10.3390/agronomy14071466 (registering DOI) - 6 Jul 2024
Viewed by 144
Abstract
Corn crop row recognition during different growth stages is a major difficulty faced by the current development of visual navigation technology for agricultural robots. In order to solve this problem, an algorithm for recognizing corn crop rows during different growth stages is presented based on the ST-YOLOv8s network. Firstly, a dataset of corn crop rows during different growth stages, covering the seedling and mid-growth stages, is constructed; secondly, an improved YOLOv8s network, in which the backbone is replaced by the Swin Transformer (ST), is proposed for detecting corn crop row segments; after that, an improved super-green method is introduced to segment crop rows from the background within each detection frame; finally, the corn crop row lines are identified using the proposed local–global detection method, which detects the local crop rows first and then the global crop rows. Crop row segment detection experiments show that the mean average precision (mAP) of the ST-YOLOv8s network during different growth stages increases by 7.34%, 11.92%, and 4.03% on average over the YOLOv5s, YOLOv7, and YOLOv8s networks, respectively, indicating better crop row segment detection than the comparison networks. Crop row line detection experiments show that, compared with the comparison method, the accuracy of the proposed local–global detection method improves by 17.38%, 10.47%, and 5.99%; the average angle error falls by 3.78°, 1.61°, and 0.7°; and the average fitting time falls by 5.30 ms, 18 ms, and 33.77 ms, indicating better crop row line detection. In summary, the proposed algorithm accomplishes corn crop row recognition during different growth stages well and contributes to the development of crop row detection technology. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture—2nd Edition)
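
The "super-green" segmentation step referenced above is commonly built on the excess-green (ExG) index. A hedged baseline sketch (the paper's specific improvement is not reproduced), where plant pixels are separated from soil by ExG plus Otsu thresholding:

```python
# Excess-green (ExG = 2G - R - B) segmentation with Otsu thresholding,
# a standard baseline for the "super-green" family of methods.
import cv2
import numpy as np

def exg_mask(bgr: np.ndarray) -> np.ndarray:
    b, g, r = cv2.split(bgr.astype(np.float32))
    exg = 2.0 * g - r - b
    exg = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu picks the crop/background split automatically.
    _, mask = cv2.threshold(exg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

# Usage: mask = exg_mask(cv2.imread("row_crop.jpg"))  # hypothetical image
```

Crop-row lines can then be fitted to the foreground pixels of each detected segment, for example with cv2.fitLine.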

19 pages, 8273 KiB  
Article
An Efficient and Accurate Surface Defect Detection Method for Wood Based on Improved YOLOv8
by Rijun Wang, Fulong Liang, Bo Wang, Guanghao Zhang, Yesheng Chen and Xiangwei Mou
Forests 2024, 15(7), 1176; https://doi.org/10.3390/f15071176 (registering DOI) - 6 Jul 2024
Viewed by 152
Abstract
Accurate detection of wood surface defects plays a pivotal role in enhancing wood grade sorting precision, maintaining high standards in wood processing quality, and safeguarding forest resources. This paper introduces an efficient and precise approach to detecting wood surface defects, building upon enhancements to the YOLOv8 model, with marked gains on the multi-scale and small-target defects commonly found in wood. The proposed method incorporates the dilation-wise residual (DWR) module in the backbone and the deformable large kernel attention (DLKA) module in the neck of the YOLOv8 architecture to enhance the network's capability to extract and fuse multi-scale defect features. To further improve the detection accuracy of small-target defects, the model replaces all the detector heads of YOLOv8 with dynamic heads and adds an additional small-target dynamic detector head in the shallower layers. Additionally, to facilitate faster and more efficient regression, the original complete intersection over union (CIoU) loss function of YOLOv8 is replaced with the IoU with minimum points distance (MPDIoU) loss function. Experimental results indicate that, compared with the YOLOv8n baseline model, the proposed method improves the mean average precision (mAP) by 5.5%, with enhanced detection accuracy across all seven defect types tested. These findings suggest a superior ability to detect wood surface defects accurately. Full article
(This article belongs to the Special Issue Wood Quality and Wood Processing)
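
Several papers in this list swap CIoU for MPDIoU, which additionally penalises the distances between corresponding corner points of the predicted and ground-truth boxes, normalised by the input image size. A sketch following the published MPDIoU definition; the tensor layout is an assumption:

```python
import torch

def mpdiou_loss(pred, target, img_w, img_h, eps=1e-7):
    """Sketch of the MPDIoU loss; boxes are (x1, y1, x2, y2), shape [N, 4].

    MPDIoU = IoU - d1^2/(w^2 + h^2) - d2^2/(w^2 + h^2), where d1/d2 are the
    distances between the top-left and bottom-right corners of the two boxes
    and (w, h) is the input image size.
    """
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    # Squared corner distances, normalised by the image diagonal term.
    d1 = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    d2 = (pred[:, 2] - target[:, 2]) ** 2 + (pred[:, 3] - target[:, 3]) ** 2
    norm = img_w ** 2 + img_h ** 2
    mpdiou = iou - d1 / norm - d2 / norm
    return (1.0 - mpdiou).mean()
```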

17 pages, 15406 KiB  
Article
YOLOv8 Model for Weed Detection in Wheat Fields Based on a Visual Converter and Multi-Scale Feature Fusion
by Yinzeng Liu, Fandi Zeng, Hongwei Diao, Junke Zhu, Dong Ji, Xijie Liao and Zhihuan Zhao
Sensors 2024, 24(13), 4379; https://doi.org/10.3390/s24134379 - 5 Jul 2024
Viewed by 185
Abstract
Accurate weed detection is essential for the precise control of weeds in wheat fields, but weeds and wheat occlude each other and have no consistent size, making weeds in wheat difficult to detect accurately. To achieve precise identification of weeds, wheat weed datasets were constructed, and a wheat field weed detection model, YOLOv8-MBM, based on an improved YOLOv8s, was proposed. In this study, a lightweight vision transformer (MobileViTv3) was introduced into the C2f module to enhance the detection accuracy of the model by integrating input, local (CNN), and global (ViT) features. Secondly, a bidirectional feature pyramid network (BiFPN) was introduced to enhance the performance of multi-scale feature fusion. Furthermore, to address the weak generalization and slow convergence of the CIoU loss function on detection tasks, the MPDIoU bounding box regression loss function was used instead, improving the convergence speed of the model and further enhancing detection performance. Finally, the model performance was tested on the wheat weed datasets. The experiments show that the proposed YOLOv8-MBM outperforms Fast R-CNN, YOLOv3, YOLOv4-tiny, YOLOv5s, YOLOv7, YOLOv9, and other mainstream models in detection performance. The accuracy of the improved model reaches 92.7%. Compared with the original YOLOv8s model, the precision, recall, mAP1, and mAP2 increase by 10.6%, 8.9%, 9.7%, and 9.3%, respectively. In summary, the YOLOv8-MBM model meets the requirements for accurate weed detection in wheat fields. Full article
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)
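
BiFPN's core operation is a learned, fast-normalised weighted sum at each fusion node. A minimal PyTorch sketch of that fusion step; shapes and channel counts are illustrative only:

```python
import torch
import torch.nn as nn

class FastNormalizedFusion(nn.Module):
    """BiFPN-style fusion: out = sum(w_i * x_i) / (eps + sum(w_i)), w_i >= 0."""
    def __init__(self, n_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))
        self.eps = eps

    def forward(self, xs):
        w = torch.relu(self.w)           # keep the learned weights non-negative
        w = w / (w.sum() + self.eps)     # fast normalisation (no softmax)
        return sum(wi * xi for wi, xi in zip(w, xs))

# Usage: fuse two same-shape feature maps from different pyramid paths.
fuse = FastNormalizedFusion(2)
p4 = torch.randn(1, 64, 40, 40)          # illustrative shapes
p4_td = torch.randn(1, 64, 40, 40)
out = fuse([p4, p4_td])
```
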
15 pages, 1258 KiB  
Article
Detection of Cervical Lesion Cell/Clumps Based on Adaptive Feature Extraction
by Gang Li, Xingguang Li, Yuting Wang, Shu Gong, Yanting Yang and Chuanyun Xu
Bioengineering 2024, 11(7), 686; https://doi.org/10.3390/bioengineering11070686 (registering DOI) - 5 Jul 2024
Viewed by 184
Abstract
Automated detection of cervical lesion cells/clumps in cervical cytological images is essential for computer-aided diagnosis. In this task, the shape and size of the lesion cells/clumps vary considerably, reducing detection performance. To address this issue, we propose an adaptive feature extraction network for cervical lesion cell/clump detection, called AFE-Net. Specifically, we propose an adaptive module to acquire the features of cervical lesion cells/clumps, while introducing a global bias mechanism to acquire global average information, combining the adaptive features with the global information to improve the representation of the target features and thus enhance detection performance. Furthermore, we analyze the effect of popular bounding box losses on the model and propose a new bounding box loss, tendency-IoU (TIoU). The network achieves a mean Average Precision (mAP) of 64.8% on the CDetector dataset with 30.7 million parameters. Compared with YOLOv7 (62.6% mAP, 34.8M parameters), the model improves mAP by 2.2% and reduces the number of parameters by 11.8%. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) for Medical Image Processing)

20 pages, 11128 KiB  
Article
Enhancing Badminton Game Analysis: An Approach to Shot Refinement via a Fusion of Shuttlecock Tracking and Hit Detection from Monocular Camera
by Yi-Hua Hsu, Chih-Chang Yu and Hsu-Yung Cheng
Sensors 2024, 24(13), 4372; https://doi.org/10.3390/s24134372 - 5 Jul 2024
Viewed by 161
Abstract
Extracting the flight trajectory of the shuttlecock in a single turn in badminton games is important for automated sports analytics. This study proposes a novel method to extract shots in badminton games from a monocular camera. First, TrackNet, a deep neural network designed for tracking small objects, is used to extract the flight trajectory of the shuttlecock. Second, the YOLOv7 model is used to identify whether the player is swinging. As both TrackNet and YOLOv7 may produce missed and false detections, this study proposes a shot refinement algorithm to obtain the correct hitting moment. By doing so, we can extract shots in rallies and classify the type of shot. Our proposed method achieves an accuracy of 89.7%, a recall of 91.3%, and an F1 score of 90.5% across 69 matches (1582 rallies) of Badminton World Federation (BWF) match videos. This is a significant improvement over using TrackNet alone, which yields 58.8% accuracy, 93.6% recall, and a 72.3% F1 score. Furthermore, the accuracy of shot type classification at three different thresholds is 72.1%, 65.4%, and 54.1%. These results are superior to those of TrackNet, demonstrating that our method effectively recognizes different shot types. The experimental results demonstrate the feasibility and validity of the proposed method. Full article
(This article belongs to the Section Sensing and Imaging)
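
The shot-refinement fusion can be read as: accept a hitting moment only where a direction change in the shuttlecock track coincides with a detected swing. A hedged sketch of that rule; the function name, window size, and peak criterion are assumptions, not the paper's algorithm:

```python
# Candidate hits = direction changes in the shuttlecock track that are
# confirmed by a swing detection within a small frame window (assumed rule).
import numpy as np
from scipy.signal import find_peaks

def refine_hits(ys: np.ndarray, swing_frames: set, window: int = 5):
    """ys: shuttlecock y-coordinate per frame; swing_frames: frames where a
    swing was detected. Returns frames accepted as hitting moments."""
    peaks, _ = find_peaks(ys)       # local maxima of the vertical track
    valleys, _ = find_peaks(-ys)    # local minima
    candidates = np.sort(np.concatenate([peaks, valleys]))
    return [int(f) for f in candidates
            if any(abs(f - s) <= window for s in swing_frames)]

# Toy usage with a synthetic oscillating trajectory:
t = np.linspace(0, 6 * np.pi, 300)
hits = refine_hits(np.sin(t), swing_frames={25, 75, 125}, window=8)
```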

21 pages, 10870 KiB  
Article
An Improved Instance Segmentation Method for Fast Assessment of Damaged Buildings Based on Post-Earthquake UAV Images
by Ran Zou, Jun Liu, Haiyan Pan, Delong Tang and Ruyan Zhou
Sensors 2024, 24(13), 4371; https://doi.org/10.3390/s24134371 - 5 Jul 2024
Viewed by 235
Abstract
Quickly and accurately assessing the damage level of buildings is a challenging task for post-disaster emergency response. Most existing research adopts semantic segmentation and object detection methods, which have yielded good results. However, for high-resolution Unmanned Aerial Vehicle (UAV) imagery, these methods may assign multiple damage categories to a single building and fail to accurately extract building edges, hindering post-disaster rescue and fine-grained assessment. To address this issue, we proposed an improved instance segmentation model that enhances classification accuracy by incorporating a Mixed Local Channel Attention (MLCA) mechanism in the backbone and improves small-object segmentation accuracy by refining the Neck part. The method was tested on UAV images of the Yangbi earthquake. The experimental results indicated that the modified model outperformed the original model by 1.07% and 1.11% on the two mean Average Precision (mAP) evaluation metrics, mAPbbox50 and mAPseg50, respectively. Importantly, the classification accuracy of the intact category was improved by 2.73% on both metrics, while the collapse category saw improvements of 2.58% and 2.14%. In addition, the proposed method was compared with state-of-the-art instance segmentation models, e.g., Mask R-CNN and YOLOv9-Seg. The results demonstrated that the proposed model exhibits advantages in both accuracy and efficiency, running three times faster than other models of similar accuracy. The proposed method can provide a valuable solution for fine-grained building damage evaluation. Full article

24 pages, 9559 KiB  
Article
Research on a Recognition Algorithm for Traffic Signs in Foggy Environments Based on Image Defogging and Transformer
by Zhaohui Liu, Jun Yan and Jinzhao Zhang
Sensors 2024, 24(13), 4370; https://doi.org/10.3390/s24134370 - 5 Jul 2024
Viewed by 211
Abstract
The efficient and accurate identification of traffic signs is crucial to the safety and reliability of active driving assistance and driverless vehicles. However, accurate detection of traffic signs under extreme conditions remains challenging. Aiming at the missed and false detections that occur in traffic sign recognition in foggy traffic scenes, this paper proposes a recognition algorithm for traffic signs based on pix2pixHD+YOLOv5-T. Firstly, a defogging model is generated by training the pix2pixHD network to serve the downstream visual task. Secondly, in order to better match the defogging algorithm with the target detection algorithm, the YOLOv5-Transformer algorithm is proposed by introducing a transformer module into the backbone of YOLOv5. Finally, the defogging algorithm pix2pixHD is combined with the improved YOLOv5 detection algorithm to complete the recognition of traffic signs in foggy environments. Comparative experiments proved that the proposed traffic sign recognition algorithm can effectively reduce the impact of a foggy environment on traffic sign recognition, achieving an overall improvement over the YOLOv5-T and YOLOv5 algorithms in moderate fog. The precision of traffic sign recognition in the foggy traffic scene reached 78.5%, the recall rate was 72.2%, and mAP@0.5 was 82.8%. Full article
(This article belongs to the Section Sensing and Imaging)
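
The two-stage design runs the detector on restored rather than raw frames. A schematic sketch of that chaining, with defog_generator and detector as hypothetical handles for the trained pix2pixHD generator and YOLOv5-T model:

```python
import torch

@torch.no_grad()
def detect_in_fog(frame, defog_generator, detector, device="cuda"):
    """Two-stage inference: restore the foggy frame, then detect signs.

    defog_generator: image-to-image model (e.g., a trained pix2pixHD G).
    detector: detection model returning boxes for a clean image.
    Both are hypothetical handles; only the chaining is illustrated.
    """
    x = torch.from_numpy(frame).permute(2, 0, 1).float().div(255)
    x = x.unsqueeze(0).to(device)
    restored = defog_generator(x)     # fog removal
    return detector(restored)         # detection on the restored frame
```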

19 pages, 18726 KiB  
Article
A Small-Object Detection Model Based on Improved YOLOv8s for UAV Image Scenarios
by Jianjun Ni, Shengjie Zhu, Guangyi Tang, Chunyan Ke and Tingting Wang
Remote Sens. 2024, 16(13), 2465; https://doi.org/10.3390/rs16132465 - 5 Jul 2024
Viewed by 232
Abstract
Small object detection for unmanned aerial vehicle (UAV) image scenarios is a challenging task in the computer vision field. Some problems require further study, such as dense small objects and background noise in high-altitude aerial photography images. To address these issues, an enhanced YOLOv8s-based model for detecting small objects is presented. The proposed model incorporates a parallel multi-scale feature extraction module (PMSE), which enhances the feature extraction capability for small objects by generating adaptive weights with different receptive fields through parallel dilated convolution and deformable convolution, and integrating the generated weight information into shallow feature maps. Then, a scale compensation feature pyramid network (SCFPN) is designed to integrate the spatial feature information derived from the shallow neural network layers with the semantic data extracted from the higher layers of the network, thereby enhancing the network's capacity for representing features. Furthermore, the largest-object detection layer is removed from the original detection layers, and an ultra-small-object detection layer is added, with the objective of improving the network's detection performance for small objects. Finally, the WIoU loss function is employed to balance high- and low-quality samples in the dataset. The results of experiments conducted on two public datasets illustrate that the proposed model can enhance object detection accuracy in UAV image scenarios. Full article
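
The WIoU loss mentioned above weights the IoU loss by a distance-based focusing factor computed from the smallest enclosing box. A sketch following the Wise-IoU v1 formulation, offered as an approximation since the abstract does not restate the formula:

```python
import torch

def wiou_v1_loss(pred, target, eps=1e-7):
    """Approximation of Wise-IoU v1: a centre-distance focusing factor
    multiplies the IoU loss. Boxes are (x1, y1, x2, y2), shape [N, 4]."""
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    inter = (rb - lt).clamp(min=0).prod(dim=1)
    area_p = (pred[:, 2:] - pred[:, :2]).prod(dim=1)
    area_t = (target[:, 2:] - target[:, :2]).prod(dim=1)
    iou = inter / (area_p + area_t - inter + eps)
    # Smallest enclosing box; its size is detached in the focusing term.
    enc_wh = (torch.max(pred[:, 2:], target[:, 2:])
              - torch.min(pred[:, :2], target[:, :2]))
    cp = (pred[:, :2] + pred[:, 2:]) / 2      # predicted centre
    ct = (target[:, :2] + target[:, 2:]) / 2  # target centre
    dist2 = ((cp - ct) ** 2).sum(dim=1)
    r = torch.exp(dist2 / (enc_wh ** 2).sum(dim=1).detach())
    return (r * (1.0 - iou)).mean()
```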

22 pages, 7578 KiB  
Article
EC-YOLO: Improved YOLOv7 Model for PCB Electronic Component Detection
by Shiyi Luo, Fang Wan, Guangbo Lei, Li Xu, Zhiwei Ye, Wei Liu, Wen Zhou and Chengzhi Xu
Sensors 2024, 24(13), 4363; https://doi.org/10.3390/s24134363 - 5 Jul 2024
Viewed by 174
Abstract
Electronic components are the main components of PCBs (printed circuit boards), so the detection and classification of ECs (electronic components) is an important aspect of recycling used PCBs. However, due to the variety and quantity of ECs, traditional target detection methods for EC classification still suffer from slow detection speed and low performance, and detection accuracy needs to be improved. To overcome these limitations, this study proposes an enhanced YOLO (you only look once) network (EC-YOLOv7) for detecting EC targets. The network uses ACmix (a mixed model that enjoys the benefits of both self-attention and convolution) as a substitute for the 3 × 3 convolutional modules in the E-ELAN (Extended ELAN) architecture and implements branch links and 1 × 1 convolutional arrays between the ACmix modules to improve the speed of feature retrieval and network inference. Furthermore, the ResNet-ACmix module is engineered to prevent the loss of feature information and to minimise calculation time. Subsequently, the SPPCSPS (spatial pyramid pooling connected spatial pyramid convolution) block has been improved by replacing the serial channels with concurrent channels, which improves the fusion speed of the image features. To effectively capture spatial information and improve detection accuracy, DyHead (a dynamic detection head) is utilised to enhance the model's scale, task, and spatial awareness. A new bounding-box loss regression method, WIoU-Soft-NMS, is finally suggested to facilitate prediction regression and improve localisation accuracy. The experimental results demonstrate that the enhanced YOLOv7 network surpasses the initial YOLOv7 model and other common EC detection methods. The proposed EC-YOLOv7 network reaches a mean average precision (mAP@0.5) of 94.4% on the PCB dataset and exhibits a higher FPS than the original YOLOv7 model. In conclusion, it can significantly enhance high-density EC target recognition. Full article
(This article belongs to the Section Electronic Sensors)

17 pages, 13174 KiB  
Article
Enhanced YOLOv7 for Improved Underwater Target Detection
by Daohua Lu, Junxin Yi and Jia Wang
J. Mar. Sci. Eng. 2024, 12(7), 1127; https://doi.org/10.3390/jmse12071127 - 4 Jul 2024
Viewed by 291
Abstract
Some underwater targets are relatively small, have low contrast, and are surrounded by abundant interference, leading to a high missed-detection rate and low recognition accuracy; to address this, a new improved YOLOv7 underwater target detection algorithm is proposed. First, the original YOLOv7 anchor box information is updated by the K-Means algorithm to generate anchor box sizes and ratios suitable for the underwater target dataset; second, we use the PConv (Partial Convolution) module instead of part of the standard convolution in the multi-scale feature fusion module to reduce the amount of computation and number of parameters, thus improving the detection speed; then, the existing CIoU loss function is replaced with the ShapeIoU_NWD loss function, which allows the model to learn more feature information during training; finally, we introduce the SimAM attention mechanism after the multi-scale feature fusion module to increase attention to small-feature information, which improves the detection accuracy. This method achieves an average accuracy of 85.7% on the marine organisms dataset, and the detection speed reaches 122.9 frames/s, reducing the number of parameters by 21% and the amount of computation by 26% compared with the original YOLOv7 algorithm. The experimental results show that the improved algorithm greatly improves detection speed and accuracy. Full article
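
Updating anchors with K-Means, as in the first step above, conventionally clusters the dataset's box width-height pairs under a 1 − IoU distance. A compact sketch of that recipe; dataset loading is omitted:

```python
# K-Means anchor estimation under a 1 - IoU distance, the usual way
# YOLO anchor sizes are re-estimated for a new dataset.
import numpy as np

def kmeans_anchors(wh: np.ndarray, k: int = 9, iters: int = 100, seed: int = 0):
    """wh: array [N, 2] of box (width, height) pairs. Returns k anchors."""
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        # IoU between every box and every center, compared purely by (w, h).
        inter = (np.minimum(wh[:, None, 0], centers[None, :, 0])
                 * np.minimum(wh[:, None, 1], centers[None, :, 1]))
        union = (wh[:, 0] * wh[:, 1])[:, None] \
                + (centers[:, 0] * centers[:, 1])[None, :] - inter
        assign = np.argmax(inter / union, axis=1)     # max IoU = min distance
        new = np.array([wh[assign == i].mean(axis=0) if np.any(assign == i)
                        else centers[i] for i in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers[np.argsort(centers.prod(axis=1))]  # sort anchors by area

# Usage (toy boxes): anchors = kmeans_anchors(np.array([[23, 31], [62, 45]]), k=2)
```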

17 pages, 10158 KiB  
Article
A Multi-Scale-Enhanced YOLO-V5 Model for Detecting Small Objects in Remote Sensing Image Information
by Jing Li, Haochen Sun and Zhiyong Zhang
Sensors 2024, 24(13), 4347; https://doi.org/10.3390/s24134347 - 4 Jul 2024
Viewed by 231
Abstract
As a typical component of remote sensing signals, remote sensing image (RSI) information plays a strong role in showing macro, dynamic, and accurate information on the earth's surface and environment, which is critical to many application fields. One of the core technologies is object detection (OD) for RSI signals (RSISs). The majority of existing OD algorithms only consider medium and large objects, neglecting small-object detection, which results in unsatisfactory detection precision and a high miss rate for small objects. To boost the overall OD performance of RSISs, an improved detection framework, I-YOLO-V5, was proposed for OD in high-altitude RSISs. Firstly, the idea of a residual network is employed to construct a new residual unit to improve network feature extraction. Then, to avoid vanishing gradients in the network, densely connected blocks are integrated into the structure of the algorithm. Meanwhile, a fourth detection layer is employed in the algorithm structure to reduce the deficiency of small-object detection in RSISs in complex environments, and its effectiveness is verified. The experimental results confirm that, compared with existing advanced OD algorithms, the average accuracy of the proposed I-YOLO-V5 is improved by 15.4%, and the miss rate is reduced by 46.8% on the RSOD dataset. Full article
(This article belongs to the Section Remote Sensors)
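
The residual unit and dense connections are described only at a high level, so the following is a generic PyTorch illustration of the two ideas (an identity-shortcut unit plus concatenative reuse of earlier features), not the paper's exact blocks:

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """Generic residual unit: the block learns a correction added to its
    input, which keeps gradients flowing through deep backbones."""
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(ch), nn.SiLU(),
            nn.Conv2d(ch, ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(ch),
        )
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(x + self.body(x))   # identity shortcut

class DenseStage(nn.Module):
    """Dense connectivity: the stage fuses the concatenated outputs of all
    units, countering vanishing gradients in deeper stacks (illustrative)."""
    def __init__(self, ch: int, n: int = 3):
        super().__init__()
        self.units = nn.ModuleList(ResidualUnit(ch) for _ in range(n))
        self.fuse = nn.Conv2d(ch * (n + 1), ch, 1)

    def forward(self, x):
        feats = [x]
        for u in self.units:
            feats.append(u(feats[-1]))
        return self.fuse(torch.cat(feats, dim=1))
```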

17 pages, 5346 KiB  
Article
Improvement of the YOLOv8 Model in the Optimization of the Weed Recognition Algorithm in Cotton Field
by Lu Zheng, Junchao Yi, Pengcheng He, Jun Tie, Yibo Zhang, Weibo Wu and Lyujia Long
Plants 2024, 13(13), 1843; https://doi.org/10.3390/plants13131843 - 4 Jul 2024
Viewed by 359
Abstract
Cotton weeds grow in complex field environments with many species, dense distribution, partial occlusion, and small targets, so YOLO-type detectors are prone to low detection accuracy and serious misdetection. In this study, we propose a YOLOv8-DMAS model for the detection of cotton weeds in complex environments based on the YOLOv8 detection algorithm. To enhance the ability of the model to capture multi-scale features of different weeds, all BottleNeck modules are replaced by the Dilation-wise Residual module (DWR) in the C2f network, and the Multi-Scale module (MSBlock) is added to the last layer of the backbone. Additionally, a small-target detection layer is added to the head structure to avoid missing small-target weeds, and the Adaptively Spatial Feature Fusion mechanism (ASFF) is used to improve the detection head and solve the spatial inconsistency problem of feature fusion. Finally, the original Non-Maximum Suppression (NMS) method is replaced with SoftNMS to improve accuracy under dense weed detection. In comparison to YOLOv8s, the experimental results show that the improved YOLOv8-DMAS improves accuracy, recall, mAP0.5, and mAP0.5:0.95 by 1.7%, 3.8%, 2.1%, and 3.7%, respectively. Furthermore, compared to the mature target detection algorithms YOLOv5s, YOLOv7, and SSD, it improves mAP0.5:0.95 by 4.8%, 4.5%, and 5.9%, respectively. The results show that the improved model can accurately detect cotton weeds in complex field environments in real time and provide technical support for intelligent weeding research. Full article
(This article belongs to the Section Plant Modeling)
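
SoftNMS, used above in place of hard NMS, decays the scores of overlapping boxes instead of deleting them, which helps keep true positives in dense scenes. A sketch of the standard Gaussian variant; the thresholds are illustrative:

```python
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thr=0.001):
    """Gaussian Soft-NMS: instead of deleting boxes that overlap the current
    top box, decay their scores by exp(-IoU^2 / sigma).
    boxes: [N, 4] as (x1, y1, x2, y2); scores: [N]. Returns kept indices."""
    boxes, scores = boxes.copy().astype(float), scores.copy().astype(float)
    keep = []
    idx = np.arange(len(scores))
    while len(idx) > 0:
        top = idx[np.argmax(scores[idx])]
        keep.append(int(top))
        idx = idx[idx != top]
        if len(idx) == 0:
            break
        # IoU of the remaining boxes with the selected one.
        lt = np.maximum(boxes[idx, :2], boxes[top, :2])
        rb = np.minimum(boxes[idx, 2:], boxes[top, 2:])
        inter = np.prod(np.clip(rb - lt, 0, None), axis=1)
        union = np.prod(boxes[idx, 2:] - boxes[idx, :2], axis=1) \
              + np.prod(boxes[top, 2:] - boxes[top, :2]) - inter
        iou = inter / union
        scores[idx] *= np.exp(-(iou ** 2) / sigma)   # Gaussian decay
        idx = idx[scores[idx] > score_thr]           # drop negligible boxes
    return keep
```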

25 pages, 22898 KiB  
Article
Research on Segmentation Method of Maize Seedling Plant Instances Based on UAV Multispectral Remote Sensing Images
by Tingting Geng, Haiyang Yu, Xinru Yuan, Ruopu Ma and Pengao Li
Plants 2024, 13(13), 1842; https://doi.org/10.3390/plants13131842 - 4 Jul 2024
Viewed by 547
Abstract
The accurate instance segmentation of individual crop plants is crucial for achieving a high-throughput phenotypic analysis of seedlings and smart field management in agriculture. Current crop monitoring techniques employing remote sensing predominantly focus on population analysis, thereby lacking precise estimations for individual plants. This study concentrates on maize, a critical staple crop, and leverages multispectral remote sensing data sourced from unmanned aerial vehicles (UAVs). A large-scale SAM image segmentation model is employed to efficiently annotate maize plant instances, thereby constructing a dataset for maize seedling instance segmentation. The study evaluates the experimental accuracy of six instance segmentation algorithms: Mask R-CNN, Cascade Mask R-CNN, PointRend, YOLOv5, Mask Scoring R-CNN, and YOLOv8, employing various combinations of multispectral bands for a comparative analysis. The experimental findings indicate that the YOLOv8 model exhibits exceptional segmentation accuracy, notably in the NRG band, with bbox_mAP50 and segm_mAP50 accuracies reaching 95.2% and 94%, respectively, surpassing other models. Furthermore, YOLOv8 demonstrates robust performance in generalization experiments, indicating its adaptability across diverse environments and conditions. Additionally, this study simulates and analyzes the impact of different resolutions on the model’s segmentation accuracy. The findings reveal that the YOLOv8 model sustains high segmentation accuracy even at reduced resolutions (1.333 cm/px), meeting the phenotypic analysis and field management criteria. Full article
(This article belongs to the Section Plant Modeling)
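
SAM-assisted annotation as described can be approximated with the official segment-anything package's automatic mask generator; the checkpoint path and the area filter below are assumptions:

```python
# Sketch of SAM-assisted instance annotation, assuming the official
# segment-anything package and a downloaded ViT-H checkpoint.
import cv2
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")  # hypothetical path
generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("maize_tile.png"), cv2.COLOR_BGR2RGB)
masks = generator.generate(image)  # dicts with 'segmentation', 'area', 'bbox'

# Keep plant-sized proposals as candidate instance labels for human review;
# the size bounds are illustrative, not from the paper.
candidates = [m for m in masks if 200 < m["area"] < 20000]
```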

25 pages, 11915 KiB  
Article
Improving YOLO Detection Performance of Autonomous Vehicles in Adverse Weather Conditions Using Metaheuristic Algorithms
by İbrahim Özcan, Yusuf Altun and Cevahir Parlak
Appl. Sci. 2024, 14(13), 5841; https://doi.org/10.3390/app14135841 - 4 Jul 2024
Viewed by 266
Abstract
Despite the rapid advances in deep learning (DL) for object detection, existing techniques still face several challenges. In particular, object detection in adverse weather conditions (AWCs) requires complex and computationally costly models to achieve high accuracy rates, and the generalization capabilities of these methods struggle to deliver consistent performance under different conditions. This work focuses on improving object detection using You Only Look Once (YOLO) versions 5, 7, and 9 in AWCs for autonomous vehicles. Although the default values of the hyperparameters are successful for images without AWCs, the optimum values must be found for AWCs. Given the large number and wide range of hyperparameters, determining them through trial and error is particularly challenging. In this study, the Gray Wolf Optimizer (GWO), Artificial Rabbit Optimizer (ARO), and Chimpanzee Leader Selection Optimization (CLEO) are independently applied to optimize the hyperparameters of YOLOv5, YOLOv7, and YOLOv9. The results show that the applied metaheuristics significantly improve object detection performance: overall performance on the AWC object detection task increased by 6.146% for YOLOv5, by 6.277% for YOLOv7 + CLEO, and by 6.764% for YOLOv9 + GWO. Full article
(This article belongs to the Special Issue Deep Learning in Object Detection)
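
Of the three metaheuristics, GWO is the simplest to sketch: candidate solutions move toward the three best wolves (alpha, beta, delta) each iteration. A toy implementation in which fitness would wrap a (costly) YOLO train-and-evaluate run; here it is a stand-in quadratic:

```python
import numpy as np

def gwo(fitness, bounds, wolves=10, iters=30, seed=0):
    """Minimal Grey Wolf Optimizer (minimization). Each position is a
    hyperparameter vector; `fitness` would wrap a YOLO training/eval run."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, (wolves, len(lo)))
    for t in range(iters):
        f = np.array([fitness(x) for x in X])
        alpha, beta, delta = X[np.argsort(f)[:3]]   # three best wolves
        a = 2 - 2 * t / iters                       # linearly decreasing
        for i in range(wolves):
            x_new = np.zeros_like(X[i])
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2 * a * r1 - a, 2 * r2
                x_new += leader - A * np.abs(C * leader - X[i])
            X[i] = np.clip(x_new / 3, lo, hi)       # average of three pulls
    return X[np.argmin([fitness(x) for x in X])]

# Toy usage: tune two "hyperparameters" on a stand-in objective.
best = gwo(lambda x: (x[0] - 0.01) ** 2 + (x[1] - 0.9) ** 2,
           np.array([[0.0, 0.1], [0.5, 1.0]]))
```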
