World Scientific

MOTION TARGET MONITORING AND RECOGNITION IN VIDEO SURVEILLANCE USING CLOUD–EDGE–IOT AND MACHINE LEARNING TECHNIQUES

https://doi.org/10.1142/S0218348X25400134
Cited by: 0 (Source: Crossref)

Autonomous vehicles rely on camera and LiDAR data pipelines, using sensor images to perform autonomous object identification. While current research yields reasonable results, it falls short of offering practical solutions. For example, lane markings and traffic signs may be obscured by buildup on the road surface, making it unsafe for a self-driving car to navigate. Moreover, the car’s sensors may be severely hindered by heavy rain, snow, fog, or dust storms, endangering human safety. This research therefore introduces Multi-Sensor Fusion and Segmentation for Deep Q-Network (DQN)-based Multi-Object Tracking in Autonomous Vehicles. An Improved Adaptive Extended Kalman Filter (IAEKF) is used for noise reduction, Normalized Gamma Transformation-based CLAHE (NGT-CLAHE) for contrast enhancement, and an Improved Adaptive Weighted Mean Filter (IAWMF) for adaptive thresholding. A novel multi-segmentation scheme, which applies several segmentation methods at degrees dependent on image orientation, is employed. DenseNet (D Net)-based multi-image fusion provides faster processing speeds and increased efficiency. Grid-map-based paths and lanes are selected using the Energy Valley Optimizer (EVO) technique, which achieves flexibility, robustness, and scalability by simplifying complex activities. Finally, the YOLOv7 model is used for detection and classification. Metrics such as velocity, accuracy rate, success rate, success ratio, and mean-squared error are used to evaluate the proposed method.
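The abstract's preprocessing chain pairs a normalized gamma transformation with CLAHE for contrast enhancement, but does not give the exact formulation. The sketch below is a minimal NumPy illustration of that idea only: the adaptive gamma rule (gamma derived from mean luminance) and the use of global histogram equalization as a simplified stand-in for tile-based CLAHE are assumptions, as are the function names.

```python
import numpy as np

def normalized_gamma_transform(img, eps=1e-6):
    """Brighten dark frames and compress bright ones using a gamma derived
    from mean luminance (gamma = 1 when the mean is mid-gray).
    Illustrative stand-in for the paper's NGT step, not its exact formula."""
    x = img.astype(np.float64) / 255.0
    mean = np.clip(x.mean(), eps, 1.0 - eps)
    gamma = np.log(0.5) / np.log(mean)  # mean < 0.5 -> gamma < 1 -> brighten
    return np.clip(x ** gamma * 255.0, 0, 255).astype(np.uint8)

def equalize_contrast(img):
    """Global histogram equalization: a simplified proxy for CLAHE,
    which additionally works per tile and clips the histogram."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
    return cdf[img].astype(np.uint8)

def enhance(img):
    """Gamma normalization followed by equalization, mirroring NGT-CLAHE order."""
    return equalize_contrast(normalized_gamma_transform(img))
```

Applied to a dark frame, the pipeline raises mean brightness and spreads the intensity histogram while preserving shape and dtype.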