
  • Article (No Access)

    Foreground Detection by Competitive Learning for Varying Input Distributions

    One of the most important challenges in computer vision applications is background modeling, especially when the background is dynamic and the input distribution may be non-stationary, i.e. the distribution of the input data can change over time (e.g. changing illumination, waving trees, water, etc.). In this work, an unsupervised learning neural network is proposed that is able to cope with progressive changes in the input distribution. It is based on a dual learning mechanism that manages changes in the input distribution separately from cluster detection. The proposal is suited to scenes where the background varies slowly. The performance of the method is tested against several state-of-the-art foreground detectors, both quantitatively and qualitatively, with favorable results.
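
    The abstract does not give implementation details, so the following is only a minimal sketch of a per-pixel competitive-learning background model with two learning rates: a faster winner-take-all rate for cluster detection and a slower rate that follows gradual drift of the input distribution. The class name, the number of clusters, the learning rates, and the threshold are all illustrative assumptions, not the authors' algorithm.

    ```python
    import numpy as np

    class CompetitiveBackgroundModel:
        """Illustrative per-pixel competitive-learning background model (not the paper's exact method)."""

        def __init__(self, height, width, n_clusters=3,
                     cluster_lr=0.05, drift_lr=0.005, fg_threshold=30.0):
            # K grey-level cluster centres per pixel, initially spread over [0, 255]
            self.centres = np.linspace(0, 255, n_clusters)[None, None, :].repeat(
                height, axis=0).repeat(width, axis=1).astype(np.float32)
            self.cluster_lr = cluster_lr      # fast rate: winner-take-all cluster detection
            self.drift_lr = drift_lr          # slow rate: follows gradual input drift
            self.fg_threshold = fg_threshold  # distance beyond which a pixel is foreground

        def apply(self, frame):
            """frame: (H, W) grey-scale uint8 array -> boolean foreground mask."""
            f = frame.astype(np.float32)[..., None]          # (H, W, 1)
            dist = np.abs(self.centres - f)                  # (H, W, K)
            winner = dist.argmin(axis=2)                     # index of the closest centre
            foreground = dist.min(axis=2) > self.fg_threshold

            # Winner-take-all update: pull the winning centre of each background
            # pixel towards the current sample (fast learning rate).
            rows, cols = np.indices(winner.shape)
            bg = ~foreground
            idx = (rows[bg], cols[bg], winner[bg])
            self.centres[idx] += self.cluster_lr * (f[bg, 0] - self.centres[idx])

            # Separate, much slower update that lets every centre of a background
            # pixel follow gradual drift of the input distribution
            # (e.g. slow illumination changes), independently of cluster assignment.
            self.centres[bg] += self.drift_lr * (f[bg] - self.centres[bg])

            return foreground
    ```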

  • Article (No Access)

    Fusing Self-Organized Neural Network and Keypoint Clustering for Localized Real-Time Background Subtraction

    Moving object detection in video streams plays a key role in many computer vision applications. In particular, separating background from foreground items is a main prerequisite for more complex tasks such as object classification, vehicle tracking, and person re-identification. Despite the progress made in recent years, a main challenge of moving object detection remains the management of dynamic aspects, including bootstrapping and illumination changes. In addition, the recent widespread adoption of Pan-Tilt-Zoom (PTZ) cameras has made the management of these aspects even more complex in terms of performance, due to their mixed movements (i.e. pan, tilt, and zoom). In this paper, a combined keypoint clustering and neural background subtraction method, based on a Self-Organized Neural Network (SONN), is proposed for real-time moving object detection in video sequences acquired by PTZ cameras. Initially, the method performs spatio-temporal tracking of the sets of moving keypoints to recognize the foreground areas and establish the background. Then it adopts a neural background subtraction, localized to these areas, to accomplish foreground detection able to manage bootstrapping and gradual illumination changes. Experimental results on three well-known public datasets, and comparisons with key works in the current literature, show the efficiency of the proposed method in terms of modeling and background subtraction.
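
    The paper's SONN-based subtractor is not reproduced here; as a rough illustration of the overall pipeline (clustering of moving keypoints, followed by background subtraction restricted to the clustered regions), the sketch below substitutes standard off-the-shelf components: ORB keypoints with Lucas-Kanade tracking, DBSCAN clustering, and OpenCV's MOG2 subtractor in place of the SONN. All thresholds are arbitrary assumptions.

    ```python
    import cv2
    import numpy as np
    from sklearn.cluster import DBSCAN

    # Stand-ins for the paper's components: ORB keypoints instead of the paper's
    # keypoint detector, MOG2 instead of the SONN background model.
    orb = cv2.ORB_create(nfeatures=1000)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

    def moving_keypoint_regions(prev_gray, gray, max_static_motion=1.0,
                                eps=40.0, min_samples=5):
        """Track keypoints between two grey frames and cluster the ones that moved."""
        kp = orb.detect(prev_gray, None)
        if not kp:
            return []
        pts = np.float32([k.pt for k in kp]).reshape(-1, 1, 2)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        ok = status.ravel() == 1
        motion = np.linalg.norm(nxt[ok] - pts[ok], axis=2).ravel()
        moving = nxt[ok].reshape(-1, 2)[motion > max_static_motion]
        if len(moving) < min_samples:
            return []
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(moving)
        boxes = []
        for lab in set(labels) - {-1}:           # -1 marks DBSCAN noise points
            cluster = moving[labels == lab].astype(np.int32)
            boxes.append(cv2.boundingRect(cluster))
        return boxes

    def localized_foreground(frame, boxes):
        """Run background subtraction, keeping only the clustered keypoint regions."""
        fg = subtractor.apply(frame)             # full-frame update keeps the model current
        mask = np.zeros_like(fg)
        for x, y, w, h in boxes:
            mask[y:y + h, x:x + w] = fg[y:y + h, x:x + w]
        return mask
    ```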

  • Article (No Access)

    Automated Counting and Tracking of Vehicles

    A robust traffic surveillance system is crucial to improving the control and management of traffic systems. Vehicle flow processing primarily involves counting and tracking vehicles; however, in complex situations such as brightness changes and partial vehicle occlusions, traditional image segmentation methods are unable to segment and count vehicles correctly. This paper presents a novel framework for vision-based vehicle counting and tracking, which consists of four main procedures: foreground detection, feature extraction, feature analysis, and vehicle counting/tracking. Foreground detection generates regions of interest in an image, which are used to produce significant feature points. Vehicle counting and tracking are achieved by analyzing clusters of feature points. When tested on recorded traffic videos, the proposed framework is shown to separate occluded vehicles and count the number of vehicles accurately and efficiently. Compared with other methods, the proposed framework achieves the highest occlusion segmentation rate and counting accuracy.
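
    The abstract names the four stages but not their internals, so the following is only a minimal single-frame sketch of how such a pipeline could be arranged, using generic OpenCV pieces (MOG2 subtraction, Shi-Tomasi corners, DBSCAN clustering) as placeholders for the paper's actual foreground detector, features, and occlusion handling; the parameters are illustrative assumptions.

    ```python
    import cv2
    import numpy as np
    from sklearn.cluster import DBSCAN

    subtractor = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=False)

    def count_vehicles(frame, eps=30.0, min_cluster_points=8):
        """One pass of the four illustrative stages on a single BGR frame."""
        # 1) Foreground detection: regions of interest from background subtraction.
        fg = subtractor.apply(frame)
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

        # 2) Feature extraction: corner points restricted to the foreground mask.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners = cv2.goodFeaturesToTrack(gray, maxCorners=500, qualityLevel=0.01,
                                          minDistance=5, mask=fg)
        if corners is None:
            return 0, []

        # 3) Feature analysis: group feature points into per-vehicle clusters,
        #    which is also what allows partially occluded vehicles to be separated.
        pts = corners.reshape(-1, 2)
        labels = DBSCAN(eps=eps, min_samples=min_cluster_points).fit_predict(pts)

        # 4) Counting/tracking: each dense cluster is treated as one vehicle;
        #    a real tracker would also associate clusters across frames.
        boxes = []
        for lab in set(labels) - {-1}:           # -1 marks DBSCAN noise points
            cluster = pts[labels == lab].astype(np.int32)
            boxes.append(cv2.boundingRect(cluster))
        return len(boxes), boxes
    ```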