
  Bestsellers

  • article · Open Access

    Real-time fire detection and response system using machine vision for industrial safety

    As industrialization accelerates, the risk and damage caused by fires in industrial settings have become increasingly severe. Current fire detection and response systems suffer from slow response times and inadequate accuracy, failing to meet the demands of modern industrial safety. This study presents the design and implementation of a real-time fire detection and response system based on machine vision. The system employs high-precision fire-source recognition algorithms and intelligent control algorithms, using cameras for real-time fire monitoring and deep learning techniques to accurately locate fire sources; firefighting robots then promptly extinguish the identified fires. Experimental results demonstrate that the system achieves a fire-source detection accuracy of up to 95% and an average response time of less than 3 s in simulated industrial environments, significantly enhancing the intelligence and effectiveness of industrial fire protection. The system can also monitor and raise alerts automatically, transmitting fire information to relevant personnel in real time, thereby providing robust technological support and assurance for industrial safety management. Moving forward, the research team will optimize the existing algorithms and introduce new deep learning models to maintain high-efficiency fire detection in complex and dynamic industrial environments. IoT integration and multi-sensor fusion will further enhance the system’s monitoring and response capabilities, and we will also explore deploying the system at actual industrial sites and study its feasibility and scalability in other high-risk environments.
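    The monitor–detect–respond pipeline the abstract describes can be sketched roughly as below. The detector is a hypothetical stand-in for the paper's deep-learning fire-source locator (a simple brightness threshold), and the robot dispatch and personnel alert are modeled as log entries; none of these names come from the paper itself.

```python
ALERT_LOG = []

def detect_fire(frame):
    """Hypothetical stand-in for the deep-learning fire locator:
    return the (x, y) of the brightest pixel above a threshold, else None."""
    hot = [(v, x, y) for y, row in enumerate(frame)
                     for x, v in enumerate(row) if v > 200]
    if not hot:
        return None
    _, x, y = max(hot)
    return (x, y)

def respond(frames):
    """Monitoring loop: locate a fire source, dispatch the robot to it,
    and notify personnel (both modeled here as log entries)."""
    for frame in frames:
        source = detect_fire(frame)
        if source is not None:
            ALERT_LOG.append(("dispatch_robot", source))
            ALERT_LOG.append(("notify_personnel", source))
    return ALERT_LOG

frames = [[[10, 20], [30, 40]],    # no fire
          [[10, 250], [30, 40]]]   # hot spot at x=1, y=0
print(respond(frames))
```

    A real system would replace `detect_fire` with camera capture plus model inference and would time-stamp the alerts to measure the sub-3-second response the paper reports.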

  • article · Open Access

    An Industrial System for Inspecting Product Quality Based on Machine Vision and Deep Learning

    With the breakthrough development of technology in the Industry 4.0 digitalization era, computer vision and deep learning have emerged as promising technologies for industrial quality inspection. By leveraging the power of machine learning algorithms, computer vision systems can automatically detect and classify defects in industrial products with high precision and efficiency. As the system processes more data and identifies more complicated defects, it becomes more accurate and efficient at detecting imperfections and ensuring product quality. This paper proposes an inspection system integrated with the YOLOv8 network to assess product quality based on surface appearance. A multi-threaded data-handling mechanism is also applied to ensure real-time processing. Experimental results show that the proposed system achieves high detection accuracy, above 90%, across different defect types. The model also reveals that scratch defects are the most difficult to detect, requiring the longest decision-analysis time.
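    The multi-threaded mechanism the abstract mentions is a classic producer–consumer split: a capture thread feeds frames into a bounded queue while a worker thread runs detection, so slow inference never blocks acquisition. A minimal sketch, with a trivial threshold stub standing in for the actual YOLOv8 detector (the stub's logic is purely illustrative):

```python
import queue
import threading

def detect_defects(frame):
    """Stand-in for the YOLOv8 surface-defect detector (assumption:
    flag any pixel value below 50 as a defect candidate)."""
    return [i for i, px in enumerate(frame) if px < 50]

def inference_worker(frames_in, results_out):
    """Consume frames from the capture thread and publish detections."""
    while True:
        frame = frames_in.get()
        if frame is None:            # sentinel: capture finished
            results_out.put(None)
            break
        results_out.put(detect_defects(frame))

def run_pipeline(frames):
    """Decouple capture from inference with a bounded queue so a slow
    detection step does not stall real-time acquisition."""
    frames_in, results_out = queue.Queue(maxsize=8), queue.Queue()
    worker = threading.Thread(target=inference_worker,
                              args=(frames_in, results_out))
    worker.start()
    for frame in frames:             # the "capture" loop
        frames_in.put(frame)
    frames_in.put(None)
    detections = []
    while (r := results_out.get()) is not None:
        detections.append(r)
    worker.join()
    return detections

print(run_pipeline([[200, 30, 180], [90, 90], [10, 240]]))
```

    In the real system, `detect_defects` would call the trained network, and the queue bound would be tuned so frames are dropped rather than buffered when inference falls behind the camera rate.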

  • article · Open Access

    Self-Supervised Texture Image Anomaly Detection by Fusing Normalizing Flow and Dictionary Learning

    A common study area in anomaly identification is industrial image anomaly detection against textured backgrounds. Interference from the texture itself and the minuteness of texture anomalies are the main reasons many existing models fail to detect them. To address these issues, we propose an anomaly detection strategy that combines dictionary learning with normalizing flow. Our method enhances the existing two-stage anomaly detection approach: to improve on the baseline, it adds normalizing flow to the representation-learning stage and combines deep learning with dictionary learning. After experimental validation, the improved algorithm exceeds 95% detection accuracy on all MVTec AD texture-type data and shows strong robustness. On the Carpet data, the baseline method achieved 67.9% detection accuracy; our improvements raise it to 99.7%.

  • article · Open Access

    DEVELOPMENT OF A MACHINE-VISION-BASED SYSTEM FOR RECORDING OF FORCE CALIBRATION DATA

    This paper presents the development of a new system for recording force calibration data using machine vision technology. A real-time camera and computer system were used to capture images of the readings from the instruments during calibration. The measurement images were then transformed and translated into numerical data using an optical character recognition (OCR) technique. These numerical data, along with the raw images, were automatically saved to memory as calibration database files.

    With this new system, human recording error is eliminated. Verification experiments were carried out by using the system to record measurement results from an amplifier (DMP 40) with a load cell (HBM-Z30-10kN). The NIMT's 100-kN deadweight force standard machine (DWM-100kN) was used to generate test forces. The experiments were set up in three categories: 1) dynamic condition (recording during load changes), 2) static condition (recording during a fixed load), and 3) full calibration in accordance with ISO 376:2011.

    In the dynamic-condition experiment, more than 94% of the captured images were free of overlapping digits; in the static-condition experiment, more than 98% were. All measurement images without overlapping digits were translated into numbers by the developed program with 100% accuracy, and the full calibration experiments likewise gave 100% accurate results. Moreover, in case of an incorrect translation of any result, it is possible to trace back to the raw calibration image to check and correct it. This machine-vision-based system and program should therefore be appropriate for recording force calibration data.
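    The capture → OCR → database pipeline, including the traceability back to the raw image that the paper emphasizes, can be sketched as below. The OCR step is a hypothetical stub (a real system would run an OCR engine on the instrument display image), and the "pixel rows encode digits" convention is an assumption purely for illustration.

```python
import json

def ocr_reading(pixels):
    """Hypothetical stand-in for the OCR step; a real system would run
    an OCR engine on the captured display image."""
    # Assumption for illustration: each value encodes one digit.
    return "".join(str(p % 10) for p in pixels)

def record_calibration(images):
    """Translate each captured display image into a numeric reading,
    keeping the image id alongside it so any mistranslation can be
    traced back to the raw calibration image."""
    database = []
    for image_id, pixels in images:
        database.append({
            "image_id": image_id,             # link back to the raw image
            "reading": float(ocr_reading(pixels)),
        })
    return database

db = record_calibration([("img001", [1, 2, 3]), ("img002", [4, 5])])
print(json.dumps(db))
```

    Storing the image id with every reading is what makes the paper's correction workflow possible: a suspect number can always be re-checked against its source image.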

  • article · Open Access

    Multicamera 3D Viewpoint Adjustment for Robotic Surgery via Deep Reinforcement Learning

    While robot-assisted minimally invasive surgery (RMIS) procedures afford a variety of benefits over open surgery and manual laparoscopic operations (including increased tool dexterity; reduced patient pain, incision size, trauma, and recovery time; and lower infection rates [1]), lack of spatial awareness remains an issue. Typical laparoscopic imaging can lack sufficient depth cues, and haptic feedback, if provided, rarely reflects realistic tissue–tool interactions. This work is part of a larger ongoing research effort to reconstruct 3D surfaces using multiple viewpoints in RMIS to increase visual perception. The manual placement and adjustment of multicamera systems in RMIS are nonideal and prone to error [2], and other autonomous approaches focus on tool tracking and do not consider reconstruction of the surgical scene [3,4,5]. The group’s previous work investigated a novel, context-aware autonomous camera positioning method [6], which incorporated both tool location and scene coverage for multiple camera viewpoint adjustments. In this paper, the authors expand upon that work by implementing a streamlined deep reinforcement learning approach between the optimal viewpoints calculated with the prior method [6], encouraging discovery of otherwise unobserved, additional camera viewpoints. Combining the framework and robustness of the previous work with the efficiency and additional viewpoints of the augmentations presented here yields improved performance and scene coverage, promising for real-time implementation.

  • article · Open Access

    THE MACHINE VISION BLIND GUIDE SYSTEM

    The available orientation and mobility aids for the blind are the cane, the guide dog, and electronic guide devices. A cane easily detects obstacles in front of the user, but not obstacles above waist height, so cane users are sometimes struck by overhead obstructions. A guide dog is a very effective mobility guide but is expensive, and training and caring for the dogs is difficult; hence, guide dogs are not common in many countries. Electronic guide devices such as the laser cane, sonic glasses, and sonic guide can detect only a single point at a time rather than a whole view. We propose a machine vision blind guide system: a CCD camera grabs an image of the front view and divides it into nine blocks. A distance measure is computed for each block, yielding multipoint data that guide the blind user through a converted voice signal.
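    The nine-block partition is straightforward to sketch. Below, a grayscale image (a 2D list) is split into a 3×3 grid and one value is computed per block; the paper does not specify its distance computation, so the block mean is used here purely as a stand-in.

```python
def block_distances(image):
    """Divide a grayscale image into a 3x3 grid of blocks and return one
    value per block (the block mean stands in for the paper's unspecified
    distance computation)."""
    rows, cols = len(image), len(image[0])
    bh, bw = rows // 3, cols // 3          # block height and width
    distances = []
    for br in range(3):                    # block row
        for bc in range(3):                # block column
            block = [image[r][c]
                     for r in range(br * bh, (br + 1) * bh)
                     for c in range(bc * bw, (bc + 1) * bw)]
            distances.append(sum(block) / len(block))
    return distances                       # nine multipoint values

# 6x6 test image whose pixel value equals its block index,
# so each block mean should be exactly that index.
img = [[(r // 2) * 3 + (c // 2) for c in range(6)] for r in range(6)]
print(block_distances(img))
```

    The nine values would then be mapped to a voice signal, e.g. announcing which of the nine regions contains the nearest obstacle.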

  • article · Open Access

    ARTIFICIAL NEURAL NETWORKS BASED SLEEP MOTION RECOGNITION USING NIGHT VISION CAMERAS

    Body movement is one of the most important factors in evaluating sleep quality. In practice, sleep motion is rarely investigated, because observing a patient’s motion from pre-recorded video played back at high speed takes a long time. This paper proposes an image-based solution for recognizing sleep motions. We use a contact-free, IR-based night vision camera to capture video frames during the patient’s sleep. The frames are used to recognize body positions and body directions such as “body up”, “body down”, “body right”, and “body left”. In addition to the image processing, the proposed artificial neural network (ANN) sleep motion recognition solution is composed of two neural networks organized in a cascade configuration. The first ANN identifies body-position features from the images, and the second ANN, built on the features identified by the first, recognizes the body direction. Finally, the implementation and practical results of this work are illustrated in the paper.
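    The cascade structure, where the second network consumes the first network's feature output, can be sketched as two tiny fully connected stages. The weights below are illustrative constants, not trained values, and the two-element "frame statistics" input is an assumption; the real system would feed image-derived features through trained networks.

```python
def dense(x, weights, bias):
    """One fully connected layer with a ReLU activation."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, bias)]

# First ANN: maps raw frame statistics to body-position features.
def position_net(frame_stats):
    return dense(frame_stats, [[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0])

# Second ANN: consumes the first net's features and scores each direction.
DIRECTIONS = ["body up", "body down", "body right", "body left"]

def direction_net(features):
    scores = dense(features,
                   [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-1.0, -1.0]],
                   [0.0, 0.0, 0.0, 0.0])
    return DIRECTIONS[scores.index(max(scores))]

def classify(frame_stats):
    """Cascade: position-feature extraction, then direction recognition."""
    return direction_net(position_net(frame_stats))

print(classify([0.9, 0.1]))
```

    The key design point the paper makes is that the second ANN is trained on the first ANN's outputs rather than on raw pixels, which keeps each stage small and its task well-defined.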