  • Article (No Access)

    Transparent Component Defect Detection Method Based on Improved YOLOv7 Algorithm

    Transparent components such as glass and fiber-reinforced plastics are widely used in engineering practice. They are prone to defects that change their surface and internal structure and pose great risks to the performance and stability of products. To address these problems, we first studied defect detection for transparent components and proposed an improved YOLOv7 (You Only Look Once v7) algorithm, replacing the CIoU (Complete Intersection over Union) loss function of the network model with Wise-IoU (Wise Intersection over Union) to improve convergence. Second, a Global Attention Mechanism (GAM) is embedded in the backbone, and a dynamic detection head is used in the output layer to combine the standard head with an attention function, improving the network's attention to micro defects and ensuring their detection accuracy. Third, an intelligent defect detection platform was designed by combining mechanical engineering, visual perception, information processing, and other technologies, and 150 rounds of comparative ablation experiments were conducted on typical transparent components. The improved algorithm raises the mean Average Precision (mAP) by 2.6% compared with the original algorithm. The improved model has better detection performance for micro defects and higher recognition accuracy; it can effectively locate and classify defects and screen out defective components, which is consistent with actual engineering conditions. It meets the practical needs of product quality testing in the production process and provides reference experience for the industrial application of defect detection methods.
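
    The loss replacement described above can be illustrated with a short PyTorch sketch. This is a minimal illustration of a Wise-IoU v1 style bounding-box loss, not the authors' implementation; the (N, 4) corner-format box tensors and the eps constant are assumptions made for the example.

    ```python
    # Minimal sketch (not the paper's code) of a Wise-IoU v1 style loss
    # that could replace CIoU in a YOLO-family regression branch.
    # Boxes are assumed to be (N, 4) tensors in (x1, y1, x2, y2) format.
    import torch

    def wise_iou_v1(pred, target, eps=1e-7):
        """IoU loss scaled by a distance-based focusing term (WIoU v1)."""
        # Intersection area
        ix1 = torch.max(pred[:, 0], target[:, 0])
        iy1 = torch.max(pred[:, 1], target[:, 1])
        ix2 = torch.min(pred[:, 2], target[:, 2])
        iy2 = torch.min(pred[:, 3], target[:, 3])
        inter = (ix2 - ix1).clamp(0) * (iy2 - iy1).clamp(0)

        # Union and IoU
        area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
        area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
        iou = inter / (area_p + area_t - inter + eps)

        # Smallest enclosing box dimensions
        cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
        ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])

        # Distance between box centers
        dx = (pred[:, 0] + pred[:, 2] - target[:, 0] - target[:, 2]) / 2
        dy = (pred[:, 1] + pred[:, 3] - target[:, 1] - target[:, 3]) / 2

        # Focusing term; the denominator is detached so it acts as a scale only
        r_wiou = torch.exp((dx ** 2 + dy ** 2) / (cw ** 2 + ch ** 2 + eps).detach())
        return r_wiou * (1.0 - iou)
    ```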

  • Article (Open Access)

    Optimization of YOLOv7 Based on PConv, SE Attention and Wise-IoU

    With the rapid development of deep learning, object detection algorithms have made significant breakthroughs in computer vision. However, because of the complexity and computational requirements of deep Convolutional Neural Networks (CNNs), these models face many challenges in practical applications, especially on resource-constrained edge devices. To address this problem, researchers have proposed many lightweight methods that aim to reduce model size and computational complexity while maintaining high performance. The popularity of mobile devices and embedded systems has further increased the demand for lightweight models, yet existing lightweight methods often incur accuracy loss, limiting their feasibility in practice. How to make a model lightweight while maintaining high accuracy has therefore become an urgent problem. To address this challenge, this paper proposes a lightweight YOLOv7 method based on PConv, the Squeeze-and-Excitation (SE) attention mechanism, and Wise-IoU (WIoU), which we refer to as YOLOv7-PSW. PConv effectively reduces the number of parameters and the computational complexity. The SE block helps the model focus on important feature information, thereby improving performance. WIoU is introduced to measure the similarity between the detection box and the ground truth, allowing the model to reduce the false-positive rate. By applying these techniques to YOLOv7, we obtain a lightweight model while maintaining high detection accuracy. Experimental results on the PASCAL VOC dataset show that YOLOv7-PSW outperforms the original YOLOv7 on object detection tasks: the number of parameters is reduced by 12.3%, FLOPs are reduced by 18.86%, and accuracy is improved by about 0.5%. Detection accuracy is thus maintained, and even slightly improved, while FLOPs and parameter count are greatly reduced, achieving a degree of lightweighting. The proposed method can provide new ideas and directions for subsequent research on lightweight object detection and is expected to promote its application on edge devices. YOLOv7-PSW can also be applied to other computer vision tasks to improve their performance and efficiency. In summary, the proposed YOLOv7-PSW lightweight method makes the model lightweight while maintaining high accuracy, which is of great significance for promoting the application of object detection algorithms on edge devices.
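
    As a sketch of the building blocks named above, the following PyTorch code shows a standard Squeeze-and-Excitation block and a FasterNet-style partial convolution (PConv). It is illustrative only and not the paper's YOLOv7-PSW code; the channel count, the reduction ratio of 16, and the partial ratio of 1/4 are assumed values.

    ```python
    # Minimal sketch (not the paper's implementation) of an SE attention
    # block and a partial convolution (PConv) as used in lightweight CNNs.
    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        """Squeeze-and-Excitation: reweight channels using global context."""
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):
            b, c, _, _ = x.shape
            w = self.fc(x.mean(dim=(2, 3)))   # squeeze: global average pooling
            return x * w.view(b, c, 1, 1)     # excite: per-channel rescaling

    class PConv(nn.Module):
        """Partial conv: convolve a fraction of channels, pass the rest through."""
        def __init__(self, channels, ratio=0.25, kernel_size=3):
            super().__init__()
            self.conv_ch = int(channels * ratio)
            self.conv = nn.Conv2d(self.conv_ch, self.conv_ch, kernel_size,
                                  padding=kernel_size // 2, bias=False)

        def forward(self, x):
            x1, x2 = torch.split(x, [self.conv_ch, x.size(1) - self.conv_ch], dim=1)
            return torch.cat((self.conv(x1), x2), dim=1)

    # Usage sketch on a hypothetical 256-channel feature map
    feat = torch.randn(1, 256, 40, 40)
    out = SEBlock(256)(PConv(256)(feat))
    ```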