India’s construction industry contributes over 13% to the country’s total GDP. Materials, plant, machinery, equipment, and labor are the building blocks, or assets, of any project. Cost is also associated with procurement, storage, use, tracking, repair, maintenance, mobilization, and demobilization, among other activities. Human errors, such as incorrect data recording, issues with transportation, safety, and security, and system difficulties, such as inventory and transparency problems, can lead to stakeholder distrust and wasted time and money. Construction personnel data is vast, and capturing and analyzing it automatically is currently a top priority in the industry. This paper describes an Internet of Things (IoT) based system for tracking labor together with motion status. The collected data is then used to identify participants, their activities, average productivity, overtime hours worked, estimated productivity, and the wages paid to the participants.
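The abstract does not spell out its analytics pipeline; as a rough illustration of the kind of post-processing it mentions (average productivity, overtime, and wages computed from captured labor data), the Python sketch below aggregates hypothetical per-day IoT log records. The record fields, the eight-hour standard day, and the wage rates are all assumptions, not details from the paper.

```python
from dataclasses import dataclass

# Hypothetical log record produced by an IoT tag: one entry per worker per day.
@dataclass
class WorkLog:
    worker_id: str
    hours_active: float   # hours in "moving/working" motion status
    hours_on_site: float  # total hours between check-in and check-out
    units_done: float     # output units attributed to the worker

def summarize(logs, std_hours=8.0, rate=1.0, ot_multiplier=1.5):
    """Aggregate per-worker productivity, overtime, and wages from IoT logs."""
    summary = {}
    for log in logs:
        s = summary.setdefault(log.worker_id,
                               {"hours": 0.0, "units": 0.0, "overtime": 0.0, "wages": 0.0})
        overtime = max(0.0, log.hours_on_site - std_hours)
        s["hours"] += log.hours_on_site
        s["units"] += log.units_done
        s["overtime"] += overtime
        s["wages"] += (log.hours_on_site - overtime) * rate + overtime * rate * ot_multiplier
    for s in summary.values():
        s["avg_productivity"] = s["units"] / s["hours"] if s["hours"] else 0.0
    return summary
```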
This paper addresses the tracking control challenge in the diving motion system of a specific class of autonomous underwater vehicles (AUVs) characterized by a torpedo-like shape. A decoupled and reduced-order three degrees-of-freedom linearized diving motion model is employed for depth position control. A control law is synthesized using the immersion and invariance (I&I) technique to achieve the control objectives. The primary aim is to attain tracking by immersing a stable, lower-order target (second-order) dynamic system into a three-dimensional manifold, upon which the closed-loop system evolves. We address the regulation problem as a specialized instance of the tracking problem, with the reference input set as a predetermined known depth that requires regulation. The efficacy of the proposed control law is evaluated through simulation studies involving various scenarios. Robustness tests are conducted to assess the control law’s performance under modeling uncertainties and underwater disturbances. The computer simulation employs an AUV named MAYA, utilizing experimentally validated diving motion parameters. A comparative analysis is performed between the proposed control law and other benchmark controllers to gauge its performance. Additionally, the effectiveness of the proposed control law is confirmed by validating its application to the nonlinear model of the diving motion system.
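For readers unfamiliar with the I&I technique, the following block restates the standard immersion and invariance conditions (in the sense of Astolfi and Ortega) in generic form; it is background only, not the paper's specific construction for the AUV diving-motion model.

```latex
% Plant \dot{x} = f(x) + g(x)u with x \in \mathbb{R}^n; target \dot{\xi} = \alpha(\xi) with \xi \in \mathbb{R}^p, p < n.
\begin{align}
  f(\pi(\xi)) + g(\pi(\xi))\,c(\pi(\xi)) &= \frac{\partial \pi}{\partial \xi}\,\alpha(\xi)
      && \text{(immersion condition)}\\
  \{\,x \,:\, \phi(x) = 0\,\} &= \{\,x \,:\, x = \pi(\xi)\ \text{for some } \xi\,\}
      && \text{(implicit description of the manifold)}\\
  \dot{z} &= \frac{\partial \phi}{\partial x}\bigl(f(x) + g(x)\,\psi(x,z)\bigr),\qquad z = \phi(x)
      && \text{(off-manifold dynamics)}
\end{align}
% The control u = \psi(x,z) must render z = 0 attractive with bounded closed-loop trajectories;
% on the manifold, the closed loop reduces to the p-dimensional target dynamics.
```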
This paper proposes a concurrent implementation of a background subtraction algorithm suitable for monochromatic video sequences. In the training period, a set of frames is used to estimate the scene background as well as the background noise. In the test period, each new frame of the video sequence is compared to the background model, and foreground objects are obtained taking the background noise estimate into account. Shadows and highlights are also detected and removed. Concurrent programming tools and techniques, such as multithreading, are used to implement a concurrent solution to this problem and are combined with other optimizations (such as destination word accumulation for morphological operators and the use of lookup tables for shadow detection/removal) to achieve the highest frame rate possible. Experimental results compare performance as each optimization is progressively added, and they show that the fully optimized algorithm is much faster than a naive sequential implementation. Experiments performed on a two-way processor computer and on a dual-core processor computer also indicate that the proposed algorithm is well suited to recent multi-core technology.
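The paper's optimizations (destination word accumulation, lookup tables, shadow removal) are not reproduced here; the sketch below only illustrates the basic idea of a thread-parallel background subtraction, where each thread classifies one horizontal strip of the frame against a per-pixel mean/noise model. The strip decomposition, the 3-sigma threshold, and the use of NumPy with a thread pool are assumptions for the sake of the example.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def train_background(frames):
    """Estimate per-pixel background mean and noise (std) from training frames."""
    stack = np.stack(frames).astype(np.float32)
    return stack.mean(axis=0), stack.std(axis=0) + 1e-6

def segment_strip(frame, bg_mean, bg_std, rows, k=3.0):
    """Classify one horizontal strip: foreground if deviation exceeds k * noise."""
    r0, r1 = rows
    diff = np.abs(frame[r0:r1].astype(np.float32) - bg_mean[r0:r1])
    return (diff > k * bg_std[r0:r1]).astype(np.uint8)

def segment_frame(frame, bg_mean, bg_std, n_threads=4):
    """Process strips concurrently and stitch the foreground mask back together."""
    h = frame.shape[0]
    bounds = [(i * h // n_threads, (i + 1) * h // n_threads) for i in range(n_threads)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        parts = pool.map(lambda b: segment_strip(frame, bg_mean, bg_std, b), bounds)
    return np.vstack(list(parts))
```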
Positioning an implanted device in a wireless body area network (WBAN) is difficult because signal strength varies across human tissues and body movement introduces shadowing noise. We propose a novel tracking algorithm for WBANs. To estimate position across various human tissues, we adopt a sliding linear window over past position information. By adjusting the window parameters, the proposed algorithm reduces noise while tracking position. It incurs only a small computational overhead compared with the α-β tracker. Through simulations, we verify strong position-tracking performance and insensitivity to the initial parameters.
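The exact window update rule is not given in the abstract; as a minimal sketch of the general idea, the following Python class fits a straight line to the last W noisy position samples and evaluates it at the current time, so that W trades noise rejection against lag. The window length and the least-squares fit are illustrative assumptions.

```python
import numpy as np
from collections import deque

class SlidingLinearWindow:
    """Smooth noisy 1D position measurements by fitting a line over a sliding
    window of past samples; window length trades noise rejection against lag."""

    def __init__(self, window=8):
        self.t_hist = deque(maxlen=window)
        self.z_hist = deque(maxlen=window)

    def update(self, t, z):
        self.t_hist.append(t)
        self.z_hist.append(z)
        if len(self.z_hist) < 2:
            return z
        # Least-squares line fit z ~ a*t + b over the window, evaluated at t.
        a, b = np.polyfit(np.array(self.t_hist), np.array(self.z_hist), deg=1)
        return a * t + b
```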
The Minimum Description Length (MDL) criterion is used to fit a facet model of a car to an image. The best fit is achieved when the difference image between the car and the background has the greatest compression. MDL overcomes the overfitting and parameter precision problems which hamper the more usual maximum likelihood method of model fitting. Some preliminary results are shown.
In this paper we address the issue of how form and motion can be integrated in order to provide suitable information to attentively track multiple moving objects. Such integration is designed in a Bayesian framework, and a Belief Propagation technique is exploited to perform coherent form/motion labeling of regions of the observed scene. Experiments on both synthetic and real data are presented and discussed.
This paper presents a thorough study of some particle filter (PF) strategies dedicated to human motion capture from a trinocular vision surveillance setup. An experimental procedure based on a commercial motion capture ring is used to provide ground truth. Metrics are proposed to assess performance in terms of accuracy, robustness, and also estimator dispersion, which is often neglected elsewhere. Relative performances are discussed through quantitative and qualitative evaluations on a video database. PF strategies based on quasi-Monte Carlo sampling, a scheme that is surprisingly seldom exploited in the vision community, provide an interesting avenue to explore. Future work is finally discussed.
A novel method is proposed to achieve robust, real-time ball tracking in broadcast soccer videos. In sports video, the soccer ball is small, often occluded, and moves at high speed, so it is difficult to detect the ball in a single frame. To solve this problem, rather than locating the ball in individual frames through detection or tracking, we find the ball by optimizing its motion trajectory over successive frames. The proposed method comprises three processing levels: object level, intra-trajectory level, and inter-trajectory level. At the object level, multiple objects rather than a single ball are detected, and all of them are taken as ball candidates based on shape and color features. At the intra-trajectory level, each ball candidate is tracked by a Kalman filter and verified by detection in successive frames, which yields many short initial trajectories in a video shot. These trajectories are then scored and filtered according to their length and spatio-temporal relationships in a time-line model. From these trajectories we construct a distance graph, in which a node represents a trajectory and an edge encodes the distance between two trajectories. At the inter-trajectory level we then obtain the optimal path in the graph using the Dijkstra algorithm. The optimal path is composed of a sequence of initial trajectories that together form a route that is smooth and long in duration. To obtain a complete and reasonable path, we finally apply cubic spline interpolation to bridge the gaps between adjacent trajectories (the intervals during which the ball is occluded). We select three representative real FIFA 2006 soccer video clips (16,500 frames in total), manually and carefully label each frame, and use them as ground truth to evaluate the algorithm. The average F-score is 80.59%. The algorithm has been used in our soccer analysis system and tested on a wide range of real soccer videos, with satisfactory results in all cases. The algorithm is effective, and it runs far faster than real time: 35.6 fps on MPEG-2 data on the Intel Conroe platform.
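The object-level detection, Kalman verification, scoring, and spline interpolation steps are omitted here; the sketch below only illustrates the inter-trajectory step, assuming each short trajectory is summarized by its first/last frame and first/last position and that the gap cost is a simple spatial-plus-temporal distance. The 30-frame linking window is an arbitrary choice.

```python
import heapq

def gap_cost(ti, tj):
    """Spatio-temporal gap between the end of trajectory ti and the start of tj.
    Each trajectory is a dict: {'t0','t1','p0','p1'} (frame range, start/end (x, y))."""
    dt = tj['t0'] - ti['t1']
    if dt <= 0 or dt > 30:          # only link forward in time, within 30 frames
        return None
    dx = tj['p0'][0] - ti['p1'][0]
    dy = tj['p0'][1] - ti['p1'][1]
    return (dx * dx + dy * dy) ** 0.5 + dt   # spatial gap plus temporal gap

def best_path(trajs, src, dst):
    """Dijkstra over the trajectory graph: nodes are short trajectories,
    edges are admissible gaps; returns the lowest-cost chain from src to dst."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue
        for v in range(len(trajs)):
            if v == u:
                continue
            w = gap_cost(trajs[u], trajs[v])
            if w is None:
                continue
            if d + w < dist.get(v, float('inf')):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path, node = [], dst
    while node in prev:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1] if node == src else []
```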
The number of mobile devices such as smartphones and tablet PCs has increased dramatically in recent years. New mobile devices are equipped with integrated cameras and large displays that make interaction with the device easier and more efficient. Although most previous work on interaction between humans and mobile devices is based on 2D touch-screen displays, camera-based interaction opens a new way to manipulate objects in the 3D space behind the device, within the camera's field of view. In this paper, our gestural interaction relies on particular patterns of local image orientation called rotational symmetries. The approach is based on finding, from a large set of rotational symmetries of different orders, the most suitable pattern, which yields a reliable detector for fingertips and user gestures. Consequently, gesture detection and tracking can be used as an efficient tool for 3D manipulation in various virtual/augmented reality applications.
Eye–hand coordination (EHC) is of great importance in the research areas of human visual perception, computer vision, and robotic vision. A computer-using robot (CUBot) is designed for investigating the EHC mechanism, and its implementation is presented in this paper. The CUBot can operate a computer with a mouse as a human does. Mirroring the three phases of a person using a computer with a mouse, i.e. watching the screen, recognizing the graphical objects on the screen, and controlling the mouse so that the cursor approaches the target, the CUBot perceives information purely through its vision and controls the mouse with its robotic hand, without any physical data connection to the operated computer. The CUBot is mainly composed of a “Mouse-Hand” for operating the mouse and a “mind” that realizes object perception, cursor tracking, and EHC. Two experiments, testing the EHC algorithm and the CUBot's perception, confirm the feasibility of the proposed approach.
The detection of airborne targets may be thought simple because aircraft, helicopters, UAVs, and drones stand out against a clear-sky background. When changes in the background are considered, however, brightness variation of the sky complicates the process, and changes in the shapes and types of clouds add another challenge. The tracking process depends directly on the detection process and on the type of data stream. Practical systems for video detection and tracking of airborne targets are operated manually, and manual structures have drawbacks compared with automatic ones. For video surveillance, guidance, regional security, and defense applications in dense environments, automatic detection and tracking may be an obligation rather than a preference. In this study, an automatic detection and tracking algorithm for video streams of airborne targets is proposed. A land-based moving camera captures the video data, so not only the flying objects but also the camera may be in motion. Although detecting and tracking moving objects with moving sensors is a relatively arduous task, this is the prevalent case in real-life scenarios: video detection and tracking systems have one or more moving video sensors, while one or more flying air vehicles are in the operation area. The proposed algorithm includes an image processing stage for detection and a tracking stage for track initiation and continuation. An assessment study conducted on actual video data shows that the proposed method yields successful results for detection, track formation, and track continuation.
Fault recognition is a difficult problem in the interpretation of seismic exploration data, and no existing solution performs well in terms of both accuracy and signal-to-noise ratio. To address this, a novel fault recognition method based on a region energy algorithm is proposed, which determines the direction of fault tracking from region energy when identifying fault points. First, the third-generation coherence cube algorithm is used to compute the coherence attribute of the seismic data volume. Then, fault tracking is performed on each seismic section. During fault tracking, the seismic samples are scanned and identified one by one. If a sample is a fault point, it is assigned to the corresponding fault in the connected area, and tracking then proceeds from the current pixel in one of three directions: front left, directly ahead, or front right. The tracking direction is selected according to the energy of the corresponding region in each candidate direction; the direction with the highest energy is followed until the complete fault is tracked or a stopping condition is reached. If a point is not judged to be a fault point, tracking continues downward for a certain distance and the path is stored temporarily; if a fault point is then reached, the stored path is classified as part of the fault, otherwise scanning resumes. When all sample points on the seismic section have been scanned, fault tracking on that section is complete. Subsequently, the fault points are fitted using a least-squares fitting algorithm to obtain the fault line. Finally, comparative experiments on actual seismic data validate the effectiveness of the proposed method.
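The abstract does not fix the region size, the attribute used for energy, or the geometry of the three directions; the sketch below assumes the section is a 2D array whose values grow with fault likelihood (e.g. 1 − coherence), a small square region around the candidate point, and "one row down" as the forward direction, and simply picks the candidate direction with the highest region energy.

```python
import numpy as np

def region_energy(attr, r, c, dr, dc, size=3):
    """Sum of the fault-likelihood attribute in a small region offset from (r, c)
    along direction (dr, dc); attr is a 2D section, e.g. 1 - coherence."""
    r0, c0 = r + dr, c + dc
    patch = attr[max(r0 - size, 0): r0 + size + 1,
                 max(c0 - size, 0): c0 + size + 1]
    return patch.sum()

def next_direction(attr, r, c):
    """Choose front-left, directly-ahead, or front-right (here: one row down,
    column shifted by -1, 0, +1) as the direction with the highest region energy."""
    candidates = [(1, -1), (1, 0), (1, 1)]
    energies = [region_energy(attr, r, c, dr, dc) for dr, dc in candidates]
    return candidates[int(np.argmax(energies))]
```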
Motion provides extra information that can aid in the recognition of objects. One of the most commonly seen objects is, perhaps, the human body. Yet little attention has been paid to the analysis of human motion. One of the key steps required for a successful motion analysis system is the ability to track moving objects. In this paper, we describe a new system called Log-Tracker, which was recently developed for tracking the motion of the different parts of the human body. Occlusion of body parts is termed a forking condition. Two classes of forks as well as the attributes required to classify them are described. Experimental results from two gymnastics sequences indicate that the system is able to track the body parts even when they are occluded for a short period of time. Occlusions that extend for a long period of time still pose problems to Log-Tracker.
The ability to dynamically control imaging parameters such as camera position, focus, and aperture is provided by special-purpose hardware such as a robotic head. This paper presents the design and control aspects of the Harvard Head, a binocular image acquisition system. We present three applications of the head in vision tasks, concentrating on the computation of depth from controlled camera motion.
In this paper, we show the value of the notion of 3D discrete surfaces for the extraction of object contours. We introduce some notions related to surfaces of 18- and 26-connected objects in 3D discrete images, and a new sequential algorithm to extract the surface and contours of 26-connected objects. Then, we present a PRAM-based algorithm to construct the successor function of the surface graph. The complexity of the algorithm is O(log N) for an N×N×N image, with N³ processors.
This paper proposes a robust method for tracking an object contour in a sequence of images, in which the object extraction and tracking problems are solved simultaneously. Furthermore, the method is applicable to the tracking of arbitrary shapes, since it requires no a priori knowledge about the object shapes. The contour tracking utilizes energy-minimizing elastic contour models, which are newly presented in this paper. The tracking is formulated as an optimization problem: finding the position that minimizes both the elastic energy of the contour model and the potential energy derived from the edge potential image containing the target object contour. We also present an algorithm that efficiently solves such energy minimization problems within a dynamic programming framework; it obtains an optimal solution even when the variables to be optimized are not ordered. We show the validity and usefulness of the proposed method with experimental results.
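The paper's contribution is a dynamic programming scheme that works even when the variables are not ordered; as background, the sketch below shows the standard ordered DP minimization for an open contour whose points may each shift by a few pixels, given a precomputed edge-potential image. The candidate offsets, the quadratic smoothness term, and the assumption that contour points lie away from the image border are illustrative.

```python
import numpy as np

def dp_contour(potential, init_pts, offsets=(-1, 0, 1), alpha=1.0):
    """Standard dynamic-programming refinement of an open contour: each point may
    move by one of a few vertical offsets; the energy is the edge potential at the
    point plus alpha * squared relative displacement between neighbouring points.
    potential: 2D array (low on edges); init_pts: list of (row, col), assumed to
    lie away from the image border so that row + offset stays in bounds."""
    n, m = len(init_pts), len(offsets)
    cost = np.full((n, m), np.inf)
    back = np.zeros((n, m), dtype=int)
    for j, off in enumerate(offsets):
        r, c = init_pts[0]
        cost[0, j] = potential[r + off, c]
    for i in range(1, n):
        r, c = init_pts[i]
        for j, off in enumerate(offsets):
            ext = potential[r + off, c]
            for k, off_prev in enumerate(offsets):
                total = cost[i - 1, k] + ext + alpha * (off - off_prev) ** 2
                if total < cost[i, j]:
                    cost[i, j], back[i, j] = total, k
    # Backtrack the minimal-energy configuration.
    j = int(np.argmin(cost[-1]))
    path = []
    for i in range(n - 1, -1, -1):
        r, c = init_pts[i]
        path.append((r + offsets[j], c))
        j = back[i, j]
    return path[::-1]
```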
In this paper, a nonlinear system aimed at reducing the signal transmission rate in a networked control system is constructed by adding nonlinear constraints to a linear feedback control system, and its stability is investigated in detail. It turns out that this nonlinear system exhibits very interesting dynamical behaviors: in addition to local stability, its trajectories may converge to a non-origin equilibrium, become periodic, or appear random. Furthermore, it exhibits sensitive dependence on initial conditions, a sign of chaos. The system also exhibits complicated bifurcation phenomena. Control of the chaotic system is then discussed. All of these issues are studied in detail for the scalar case, and some difficulties involved in the study of this type of system are analyzed. Finally, an example demonstrates the effectiveness of the scheme in the framework of networked control systems.
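The abstract does not state which nonlinear constraint is used; purely as an illustration of the general idea (a transmission-reducing nonlinearity wrapped around a linear feedback loop), the sketch below simulates a scalar plant whose sensor transmits its value only when it has drifted more than a deadband delta from the last transmitted value. All parameter values are arbitrary, and the behavior shown is not claimed to match the paper's analysis.

```python
def simulate(a=2.0, b=1.0, K=1.9, delta=0.4, x0=0.35, steps=200):
    """Scalar networked loop: the sensor transmits the state to the controller
    only when it has drifted more than delta from the last transmitted value
    (a deadband-type constraint added to a stabilizing linear feedback)."""
    x, x_sent = x0, x0
    history, transmissions = [], 0
    for _ in range(steps):
        if abs(x - x_sent) > delta:      # nonlinear transmission constraint
            x_sent = x
            transmissions += 1
        u = -K * x_sent                  # controller acts on the held value
        x = a * x + b * u
        history.append(x)
    return history, transmissions

traj, n_tx = simulate()
print(f"transmissions: {n_tx} / 200, final |x| = {abs(traj[-1]):.3f}")
```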
In this paper, based on a generalized Lyapunov function, a simple proof is given that improves the estimate of the globally attractive and positive invariant set of the Lorenz system. In particular, a new estimate is derived for the variable x. On the globally attractive set, the Lorenz system satisfies a Lipschitz condition, which is very useful in the study of chaos control and chaos synchronization. Applications are presented for globally exponentially tracking periodic solutions, stabilizing equilibrium points, and synchronizing two Lorenz systems.
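The paper derives its feedback gains from the Lipschitz estimate on the attractive set; the sketch below only illustrates the qualitative synchronization claim, using an arbitrary, conservatively large linear feedback gain and simple Euler integration of a drive-response Lorenz pair.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system with the classical parameters."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def synchronize(k=30.0, dt=0.001, steps=5000):
    """Drive-response Lorenz pair: the response receives the simple linear
    feedback u = k * (drive - response), added to each state equation."""
    drive = np.array([1.0, 1.0, 1.0])
    resp = np.array([-5.0, 7.0, 20.0])
    errs = []
    for _ in range(steps):
        u = k * (drive - resp)                 # linear feedback control
        drive = drive + dt * lorenz(drive)
        resp = resp + dt * (lorenz(resp) + u)
        errs.append(np.linalg.norm(drive - resp))
    return errs

errors = synchronize()
print(f"initial error {errors[0]:.2f}, final error {errors[-1]:.2e}")
```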
In this paper, we first give a constructive proof of the existence of a globally exponentially attractive set for Chua's system with a smooth nonlinear function. We then derive a series of simple algebraic sufficient conditions under which two smooth Chua's systems of the same type are globally exponentially synchronized using simple linear feedback controls. As special cases of chaos synchronization, we also consider global tracking and globally exponential tracking of periodic motions, as well as global stabilization and globally exponential stabilization of equilibrium points in smooth Chua's systems, and we construct a series of simple, easily applicable feedback control laws. Computer simulation results are presented to verify the theoretical predictions.
In this paper, a new method for controlling spatiotemporal chaos in discrete-time spatially extended systems modeled by coupled map lattices is proposed. The method is based on quasi-sliding-mode control using a Lyapunov function and drives the chaotic motion toward any desired trajectory via a sliding surface in the error space. The controller also guarantees finite-time convergence of the state trajectory. The main advantages of the method are its robustness with respect to additive uncertainties and its applicability to all types of coupled map lattices. A diffusively coupled map lattice is used as an example to demonstrate the method. Simulation results reveal the robustness and effectiveness of the method in controlling spatiotemporal chaos in coupled map lattices.
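The quasi-sliding-mode law itself is not reproduced here; the sketch below simulates a diffusively coupled logistic map lattice and, as a simplified stand-in, applies a one-step (deadbeat) correction per site that cancels the error to a target fixed point in a single iteration. Lattice size, coupling strength, and the target value are arbitrary choices.

```python
import numpy as np

def f(x, a=4.0):
    """Fully chaotic logistic map."""
    return a * x * (1.0 - x)

def cml_step(x, eps=0.3):
    """Diffusively coupled map lattice with periodic boundary conditions."""
    return (1 - eps) * f(x) + 0.5 * eps * (f(np.roll(x, 1)) + f(np.roll(x, -1)))

def controlled_run(n_sites=50, steps=100, target=0.75, t_on=50, seed=0):
    """Free-running chaotic lattice until step t_on, then a one-step (deadbeat)
    control is applied at every site (target = 0.75 is the fixed point of the
    logistic map with a = 4, so the controlled lattice stays there)."""
    rng = np.random.default_rng(seed)
    x = rng.random(n_sites)
    errors = []
    for t in range(steps):
        nxt = cml_step(x)
        if t >= t_on:
            u = target - nxt               # deadbeat control: cancel the error in one step
            nxt = nxt + u
        x = nxt
        errors.append(np.abs(x - target).max())
    return errors

err = controlled_run()
print(f"max |x - target| before control: {max(err[:50]):.2f}, after: {max(err[60:]):.2e}")
```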