  Bestsellers

  • Article (No Access)

    DESIGNING A FUSION-DRIVEN SENSOR NETWORK TO SELECTIVELY TRACK MOBILE TARGETS

    Sensor networks that can support time-critical operations pose challenging problems for tracking events of interest. We propose an architecture for a sensor network that autonomously adapts in real time to data fusion requirements, so as not to miss events of interest, and provides accurate real-time tracking of mobile targets. In the proposed architecture, the sensed data are processed in an abstract space called the Information Space, and the communication between nodes is modeled in an abstract space called the Network Design Space. The two abstract spaces are connected through an interaction interface called InfoNet, which seamlessly translates messages between the two. The proposed architecture is validated experimentally on a laboratory testbed for multiple scenarios.

  • Article (No Access)

    Semantics-Fusion: Radar Semantic Information-Based Radar–Camera Fusion for 3D Object Detection

    The fusion of millimeter-wave radar and camera data for three-dimensional (3D) object detection is a pivotal technology for autonomous driving, yet it is not without inherent challenges. First, the radar point cloud contains clutter, which can result in impure radar features. Second, the radar point cloud is sparse, which makes it difficult to fully extract the radar features; this can cause loss of object information, leading to misdetection, omission, and reduced robustness. To address these issues, a 3D object detection method based on the semantic information of radar features and camera fusion (Semantics-Fusion) is proposed. Initially, image features are extracted by a centroid detection network, producing preliminary 3D bounding boxes for the objects. Subsequently, the radar point cloud is clustered based on the objects' position and velocity, thereby eliminating irrelevant points and clutter. The clustered radar point cloud is projected onto the image plane to form a 2D radar pseudo-image, which is then input to the designed 2D convolution module, enabling full extraction of the semantic information of the radar features. Ultimately, the radar features are fused with the image features, and secondary regression is employed to achieve robust 3D object detection. The performance of our method was evaluated on the nuScenes dataset, achieving a mean average precision (mAP) of 0.325 and a nuScenes detection score (NDS) of 0.462.
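
    As an illustration of the projection step described in this abstract, the sketch below maps clustered 3D radar points onto the image plane to build a sparse two-channel pseudo-image (depth and radial velocity). It is not the paper's implementation; the camera intrinsics, point data and channel layout are assumptions made for the example.

```python
import numpy as np

def project_radar_to_image(points_xyz, velocities, K, image_shape):
    """Project 3D radar points (camera frame) onto the image plane.

    Returns an (H, W, 2) pseudo-image whose channels hold depth and
    radial velocity at each projected pixel, and zero elsewhere.
    """
    h, w = image_shape
    pseudo = np.zeros((h, w, 2), dtype=np.float32)
    for (x, y, z), vel in zip(points_xyz, velocities):
        if z <= 0:            # point behind the camera
            continue
        u = int(round(K[0, 0] * x / z + K[0, 2]))
        v = int(round(K[1, 1] * y / z + K[1, 2]))
        if 0 <= u < w and 0 <= v < h:
            pseudo[v, u, 0] = z      # depth channel
            pseudo[v, u, 1] = vel    # radial-velocity channel
    return pseudo

# Hypothetical example: three clustered radar returns and a 640x480 camera.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
points = np.array([[1.0, 0.2, 12.0], [-2.0, 0.1, 25.0], [0.5, 0.0, 8.0]])
velocities = np.array([3.1, -1.2, 0.4])
pseudo_image = project_radar_to_image(points, velocities, K, (480, 640))
print(np.argwhere(pseudo_image[..., 0] > 0))   # pixels that received a return
```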

  • Article (No Access)

    COMBINING AUDIO AND VIDEO SURVEILLANCE WITH A MOBILE ROBOT

    This paper presents a Distributed Perception System for intelligent surveillance applications. The system prototype presented in this paper is composed of a static acoustic agent and a static vision agent cooperating with a mobile vision agent mounted on a mobile robot. The audio and video sensors distributed in the environment are used as a single sensor to reveal and track the presence of a person in the surveilled environment. The robot extends the capabilities of the system by adding a mobile sensor (in this work, an omnidirectional camera). The mobile omnidirectional camera can be used to take a closer look at the scene or to inspect portions of the environment not covered by the fixed sensory agents. In this paper, the hardware and software architecture of the system and its sensors are presented. Experiments on the integration of the audio and video localization data are reported.

  • Article (No Access)

    COMBINING MULTIPLE SENSOR FEATURES FOR STRESS DETECTION USING COMBINATORIAL FUSION

    Physiological sensors have been used to detect different stress levels in order to improve human health and well-being. When analyzing these sensor data, features are generated from the experiment, and a subset of the features is selected and then combined using a host of informatics techniques (machine learning, data mining, or information fusion). Our previous work studied feature selection using correlation and diversity, as well as feature combination using five methods: C4.5, Naïve Bayes, Linear Discriminant Function, Support Vector Machine, and k-Nearest Neighbors. In this paper, we use combinatorial fusion, based on a performance criterion (CF-P) and cognitive diversity (CF-CD), to combine those multiple sensor features. Our results showed that: (a) the combinatorial fusion method based on the performance criterion (CF-P) is distinctly better than CF-CD and the other algorithms, and (b) CF-CD is as good as the other five feature combination methods and better in most cases.
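
    The core of combinatorial fusion is to treat each selected feature as a scoring system, derive its rank function, and combine systems either by averaging scores or by averaging ranks. The sketch below illustrates that mechanic only; the scoring systems and data are hypothetical, and the paper's performance-based (CF-P) and diversity-based (CF-CD) selection of which systems to combine is not reproduced here.

```python
import numpy as np

def rank_from_scores(scores):
    """Rank function of a scoring system: rank 1 = highest score."""
    order = np.argsort(-scores)
    ranks = np.empty(len(scores), dtype=int)
    ranks[order] = np.arange(1, len(scores) + 1)
    return ranks

def score_combination(score_matrix):
    """Average the min-max normalized scores of the individual systems."""
    mins = score_matrix.min(axis=1, keepdims=True)
    spans = np.ptp(score_matrix, axis=1, keepdims=True) + 1e-12
    return ((score_matrix - mins) / spans).mean(axis=0)

def rank_combination(score_matrix):
    """Average the rank functions of the individual systems (lower = better)."""
    ranks = np.array([rank_from_scores(row) for row in score_matrix])
    return ranks.mean(axis=0)

# Hypothetical example: 3 feature-based scoring systems over 5 data windows.
scores = np.array([
    [0.9, 0.4, 0.7, 0.1, 0.5],
    [0.8, 0.6, 0.5, 0.2, 0.3],
    [0.2, 0.9, 0.6, 0.4, 0.1],
])
print("score combination ranking:", np.argsort(-score_combination(scores)))
print("rank combination ranking: ", np.argsort(rank_combination(scores)))
```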

  • Article (No Access)

    FUSION OF THE MAGNETIC AND OPTICAL INFORMATION FOR MOTION CAPTURING

    We propose a sensor fusion technique for a motion capture system. In our system, two kinds of sensors are used for mutual assistance. Six magnetic sensors are attached to a performer's arms and feet to assist the twelve optical markers on the arms and the six optical markers on the feet, respectively. The optical marker information is not always complete, because the optical markers can be hidden by obstacles. In this case, the magnetic sensor information is used to link the discontinuous optical marker information. We use a system identification technique to model the relation between the two signals of sensor and marker, and we determine the best model from a set of candidate models using the canonical system identification technique. To show the efficiency of the proposed system, experiments are performed on motion capture data obtained from both the optical and the magnetic motion capture systems, and the animation results are shown.
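
    The gap-bridging idea in this abstract, fitting a model from the magnetic sensor signal to the optical marker signal and using it while the marker is occluded, can be illustrated with a plain linear least-squares map (not the canonical system identification technique the paper uses; all data below are synthetic).

```python
import numpy as np

def fit_linear_map(magnetic, optical, visible):
    """Least-squares fit of  optical ≈ magnetic @ A + b  on visible frames."""
    X = np.hstack([magnetic[visible], np.ones((visible.sum(), 1))])
    W, *_ = np.linalg.lstsq(X, optical[visible], rcond=None)
    return W

def fill_gaps(magnetic, optical, visible):
    """Replace occluded optical-marker samples with the model prediction."""
    W = fit_linear_map(magnetic, optical, visible)
    X = np.hstack([magnetic, np.ones((len(magnetic), 1))])
    filled = optical.copy()
    filled[~visible] = (X @ W)[~visible]
    return filled

# Synthetic data: 100 frames, 3-axis magnetic signal, 3D marker position.
rng = np.random.default_rng(0)
magnetic = rng.normal(size=(100, 3))
optical = magnetic @ rng.normal(size=(3, 3)) + 0.01 * rng.normal(size=(100, 3))
visible = np.ones(100, dtype=bool)
visible[40:60] = False                      # simulated occlusion
bridged = fill_gaps(magnetic, optical, visible)
print(np.abs(bridged[40:60] - optical[40:60]).max())   # small reconstruction error
```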

  • Article (No Access)

    BODY IMAGE CONSTRUCTED FROM MOTOR AND TACTILE IMAGES WITH VISUAL INFORMATION

    This paper proposes a learning model that enables a robot to acquire a body image for parts of its body that are invisible to it. The model associates spatial perception based on motor experience and the motor image with perception based on the activations of touch sensors and the tactile image, both of which are supported by visual information. The tactile image can be acquired with the help of the motor image, which is thought to be the basis for spatial perception, because all spatial perceptions originate in motor experiences. Based on the proposed model, the robot estimates invisible hand positions using the Jacobian between the displacement of the joint angles and the optical flow of the hand. When the hand touches one of the invisible tactile sensor units on the face, the robot associates this sensor unit with the estimated hand position. The simulation results show that the spatial arrangement of the tactile sensors is successfully acquired by the proposed model.
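
    A minimal sketch of the Jacobian idea mentioned above: the optical flow of the hand is modeled as flow ≈ J · Δq, J is estimated by least squares while the hand is visible, and the predicted flow is integrated to estimate the hand position once it leaves the field of view. Joint counts, scales and data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def estimate_jacobian(dq_samples, flow_samples):
    """Least-squares estimate of J in  flow ≈ J @ dq  from visible-hand frames."""
    J_t, *_ = np.linalg.lstsq(dq_samples, flow_samples, rcond=None)
    return J_t.T                      # shape (2, n_joints)

def track_unseen_hand(p0, J, dq_sequence):
    """Integrate predicted optical flow to track the hand while it is unseen."""
    p = np.array(p0, dtype=float)
    for dq in dq_sequence:
        p += J @ dq
    return p

# Hypothetical setup: 3 joints, 2D image position of the hand (pixels per radian).
rng = np.random.default_rng(1)
J_true = np.array([[30.0, -12.0, 5.0],
                   [4.0, 22.0, -9.0]])
dq = rng.normal(scale=0.01, size=(200, 3))
flow = dq @ J_true.T + 0.05 * rng.normal(size=(200, 2))
J_hat = estimate_jacobian(dq, flow)
print(track_unseen_hand([120.0, 80.0], J_hat, rng.normal(scale=0.01, size=(10, 3))))
```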

  • Article (No Access)

    Estimation and Stabilization of Humanoid Flexibility Deformation Using Only Inertial Measurement Units and Contact Information

    Most robots today are controlled as if they were entirely rigid. Often, however, as in the HRP-2 robot, there are flexible parts, intended for example to absorb impacts. The deformation of this flexibility modifies the orientation of the robot and endangers balance. Nevertheless, robots are usually equipped with inertial measurement units (IMUs) to reconstruct their orientation based on gravity and inertial effects. Moreover, humanoids usually have to ensure firm contact with the ground, which provides reliable information on the surrounding environment. We show in this study how important it is to take this information into account to improve IMU-based position/orientation reconstruction. We use an extended Kalman filter to reconstruct the deformation, fusing IMU and contact information without making any assumption on the dynamics of the flexibility. We show how, with this simple setting, we are able to compensate for perturbations and to stabilize the end-effector's position/orientation in the world reference frame. We also show that this estimation is reliable enough to enable closed-loop stabilization of the flexibility and control of the center of mass (CoM) position with the simplest possible model.

  • Article (No Access)

    Synchronous and Asynchronous Application of a Filtering Method for Underwater Robot Localization

    This paper reports a method that fuses multiple sensor measurements for location estimation of an underwater robot. Synchronous and asynchronous (AS) implementations of the method are also proposed. An extended Kalman filter (EKF) is used to fuse four types of measurements: linear velocity from a Doppler velocity log (DVL), angular velocity from a gyroscope, ranges to acoustic beacons, and depth. The EKF approach is implemented in three ways to deal with asynchrony of the measurements in the correction step: synchronous collective (SC), synchronous individual (SI), and AS application. These methods are verified and compared through simulation and test-tank experiments. The tests reveal that the application method needs to be selected depending on the measurement properties: the dependency between measurements and the degree of asynchrony. The distinctive features proposed in this study are the three application methods together with the derivation of an EKF approach to sensor fusion for underwater navigation.
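
    The contrast between the application modes can be sketched with a toy EKF: in the individual/asynchronous modes each measurement triggers its own correction as it arrives, while the synchronous collective mode stacks simultaneous measurements into a single correction. The state, measurement models and noise values below are illustrative assumptions, not those of the paper.

```python
import numpy as np

class SimpleEKF:
    """Minimal EKF skeleton used to contrast individual vs. collective correction."""

    def __init__(self, x0, P0):
        self.x = np.array(x0, dtype=float)
        self.P = np.array(P0, dtype=float)

    def predict(self, F, Q):
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def correct(self, z, h, H, R):
        y = z - h(self.x)                                  # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P

# Toy 2D position state, with a depth-like and a beacon-range-like measurement.
beacon = np.array([10.0, 0.0])
h_depth = lambda x: x[1:2]
H_depth = np.array([[0.0, 1.0]])
h_range = lambda x: np.array([np.linalg.norm(x - beacon)])
def H_range(x):
    d = np.linalg.norm(x - beacon)
    return ((x - beacon) / d).reshape(1, 2)

ekf = SimpleEKF([0.0, 5.0], np.eye(2))
ekf.predict(np.eye(2), 0.01 * np.eye(2))
# Individual/asynchronous style: apply each correction when its measurement arrives.
ekf.correct(np.array([4.8]), h_depth, H_depth, np.array([[0.1]]))
ekf.correct(np.array([11.0]), h_range, H_range(ekf.x), np.array([[0.5]]))
print(ekf.x)
```

    A collective correction would instead stack the depth and range measurements (and their Jacobians) into one measurement vector and call `correct` once per time step.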

  • Article (No Access)

    DATA FUSION OF ROBOT WRIST FORCES BASED ON FINGER FORCE SENSORS AND MLF NEURAL NETWORK

    Quantitative analysis of wrist forces for robot grippers is an important issue for robot control and operational safety. An approach is proposed to deduce the wrist forces from distributed force sensors in the robot fingers. A multi-layer forward (MLF) neural network is designed to fuse the data from the finger force sensors. The experimental results demonstrate that the maximum deduction error of the wrist forces is decreased from 18.7% to 4.8% compared with previous sensor fusion methods.
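
    A small feed-forward regression network can stand in for the MLF fusion described above; here scikit-learn's MLPRegressor is trained on synthetic finger-force readings to predict wrist-force components. The sensor counts, data-generating model and network size are assumptions for the sketch, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic data: 8 finger force-sensor readings -> 3 wrist-force components.
rng = np.random.default_rng(2)
finger_forces = rng.uniform(0.0, 5.0, size=(500, 8))
wrist_forces = finger_forces @ rng.normal(size=(8, 3)) + 0.05 * rng.normal(size=(500, 3))

# Multi-layer feed-forward network fusing the finger-force channels.
net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
net.fit(finger_forces[:400], wrist_forces[:400])

pred = net.predict(finger_forces[400:])
rel_err = np.abs(pred - wrist_forces[400:]).max() / np.abs(wrist_forces[400:]).max()
print(f"max relative deduction error on held-out samples: {rel_err:.1%}")
```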

  • Article (No Access)

    FAULT DIAGNOSIS OF AN INDUSTRIAL MACHINE THROUGH SENSOR FUSION

    In this paper, a four-layer neuro-fuzzy architecture for multi-sensor fusion is developed for a fault diagnosis system, which is applied to an industrial fish cutting machine. An important characteristic of the fault diagnosis approach developed in this paper is that it makes an accurate decision on the machine condition by fusing information acquired from three types of sensors: an accelerometer, a microphone, and a charge-coupled device (CCD) camera. Feature vectors for the vibration and sound signals are defined and extracted from their fast Fourier transform (FFT) frequency spectra. A feature-based vision method is applied for object tracking in the machine, to detect and track the fish moving on the conveyor. A four-layer neural network including a fuzzy hidden layer is developed to analyze and diagnose existing faults. Feature vectors of vibration, sound and vision are provided as inputs to the neuro-fuzzy network for fault detection and diagnosis. By proper training of the neural network using data samples for typical faults, six crucial faults in the fish cutting machine are detected with high reliability and robustness. On this basis, not only can the condition of the machine be determined for possible retuning and maintenance, but alarms to warn about impending faults may also be generated during machine operation.
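
    The FFT-based feature extraction step can be sketched as summing spectral energy over a few frequency bands of each vibration or sound frame; the band count, sampling rate and test signal below are placeholders, not the paper's feature definition.

```python
import numpy as np

def fft_band_features(frame, n_bands=8):
    """Summarize a vibration/sound frame by the energy in n_bands FFT bands."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    return np.array([band.sum() for band in np.array_split(spectrum, n_bands)])

# Hypothetical accelerometer frame: 1 kHz sampling, a 60 Hz fault tone plus noise.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
frame = np.sin(2 * np.pi * 60.0 * t) + 0.2 * np.random.default_rng(3).normal(size=t.size)
print(fft_band_features(frame))   # most energy falls in the lowest band
```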

  • Article (No Access)

    MULTIMODAL PEOPLE TRACKING AND IDENTIFICATION FOR SERVICE ROBOTS

    In order for a service robot to approach humans and provide the services it has been designed for, an efficient system for people tracking and identification must be developed. This paper presents a novel solution to the problem that makes use of different sensors and data fusion techniques. The robot uses a laser device and a PTZ color camera to detect, respectively, human legs and faces. The relative information is integrated in real time using a sequential implementation of the Unscented Kalman Filter. Furthermore, thanks to a histogram comparison with a measure based on the Bhattacharyya coefficient, people are also identified and labelled according to their clothes. This measure is also used to improve the robustness of the data association process. The effectiveness of the proposed method is shown by experiments with a real mobile robot in challenging situations.
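
    The clothing-based identification step relies on comparing color histograms with the Bhattacharyya coefficient; a minimal sketch of that measure follows (the bin count and histograms are made up for the example).

```python
import numpy as np

def bhattacharyya_coefficient(hist_p, hist_q):
    """Similarity between two histograms after normalization; 1.0 = identical."""
    p = hist_p / hist_p.sum()
    q = hist_q / hist_q.sum()
    return float(np.sum(np.sqrt(p * q)))

# Hypothetical 16-bin clothing-color histograms.
rng = np.random.default_rng(4)
person_model = rng.random(16)
new_detection = person_model + 0.1 * rng.random(16)   # similar clothing
stranger = rng.random(16)                             # different clothing
print(bhattacharyya_coefficient(person_model, new_detection))   # close to 1
print(bhattacharyya_coefficient(person_model, stranger))        # noticeably lower
```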

  • Article (No Access)

    DEVELOPMENT OF A MOBILE MANIPULATOR SYSTEM WITH RFID-BASED SENSOR FUSION FOR HOME SERVICE: A CASE STUDY ON MOBILE MANIPULATION OF CHAIRS

    A mobile manipulator system, a versatile platform for home service, has been developed by the authors toward practical realization. Two features of the system are addressed. One is the efficient and easy recognition of the environment and of objects, covering their geometrical, physical and additional information, by an RFID (Radio Frequency IDentification)-based sensor fusion system. The other is manipulation, including mobile manipulation, of various objects based on such geometrical, physical and additional information. In this paper, mobile manipulation of various chairs for home service is described. The methods for recognition and mobile manipulation are proposed, and experimental results are given to show the feasibility of the proposed methods.

  • Article (No Access)

    User-Generated Video Composition Based on Device Context Measurements

    Instant sharing of user-generated video recordings has become a widely used service on platforms such as YouNow, Facebook.Live or uStream. Yet, providing such services with a high QoE for viewers is still challenging, given that mobile upload speed and capacity are limited, and the recording quality on mobile devices greatly depends on the users' capabilities. One proposed solution to address these issues is video composition, which allows switching between multiple recorded video streams, selecting the best source at any given time, to compose a live video with a better overall quality for the viewers. Previous approaches have required an in-depth visual analysis of the video streams, which usually limited the scalability of these systems. In contrast, our work allows the stream selection to be realized solely from context information, based on video- and service-quality aspects derived from sensor and network measurements.

    The implemented monitoring service for context-aware upload of video streams is evaluated under different network conditions and with diverse user behavior, including camera shaking and user mobility. We have evaluated the system's performance in two studies. First, in a user study, we show that our proposed system achieves a more efficient video upload as well as a better QoE for viewers. Second, by examining the overall delay for switching between streams based on sensor readings, we show that a composition view change can be achieved efficiently, in approximately four seconds.
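
    A context-only stream selection of the kind described above can be reduced to scoring each source from its sensor and network measurements and switching to the best one; the fields, weights and thresholds below are invented for the sketch and are not the paper's model.

```python
from dataclasses import dataclass

@dataclass
class StreamContext:
    """One device's context sample (all fields are illustrative)."""
    device_id: str
    upload_kbps: float       # measured upload throughput
    shake_level: float       # accelerometer-derived shakiness, 0 (steady) .. 1
    resolution_score: float  # normalized recording quality, 0 .. 1

def select_stream(contexts, w_net=0.4, w_shake=0.4, w_res=0.2):
    """Pick the source with the best context score; no visual analysis involved."""
    def score(c):
        net = min(c.upload_kbps / 3000.0, 1.0)   # saturate at ~3 Mbit/s
        return w_net * net + w_shake * (1.0 - c.shake_level) + w_res * c.resolution_score
    return max(contexts, key=score).device_id

streams = [
    StreamContext("phone-A", upload_kbps=2500.0, shake_level=0.7, resolution_score=0.9),
    StreamContext("phone-B", upload_kbps=1800.0, shake_level=0.1, resolution_score=0.8),
]
print(select_stream(streams))   # phone-B wins: steadier despite lower bandwidth
```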

  • Article (No Access)

    Decoupled Iterative Deep Sensor Fusion for 3D Semantic Segmentation

    One of the key tasks for autonomous vehicles or robots is a robust perception of their 3D environment, which is why such platforms are equipped with a wide range of different sensors. Building upon a robust sensor setup, the next important step is to understand and interpret the 3D environment. Semantic segmentation of 3D sensor data, e.g. point clouds, provides valuable information for this task and is often seen as a key enabler for 3D scene understanding. This work presents an iterative deep fusion architecture for semantic segmentation of 3D point clouds, which builds upon a range image representation of the point clouds and additionally exploits camera features to increase accuracy and robustness. In contrast to other approaches, which fuse lidar and camera features only once, the proposed fusion strategy iteratively combines and refines lidar and camera features at different scales inside the network architecture. Additionally, the proposed approach can deal with camera failure as well as jointly predict lidar and camera segmentation. We demonstrate the benefits of the presented iterative deep fusion approach on two challenging datasets, outperforming all range image-based lidar and fusion approaches. An in-depth evaluation underlines the effectiveness of the proposed fusion strategy and the potential of camera features for 3D semantic segmentation.
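
    The range image representation that the fusion network builds on is a spherical projection of the point cloud; a generic sketch of that projection is given below, with the resolution, field of view and input scan chosen arbitrarily for illustration.

```python
import numpy as np

def point_cloud_to_range_image(points, h=32, w=512, fov_up=10.0, fov_down=-30.0):
    """Spherical projection of an (N, 3) point cloud into an (h, w) range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                        # horizontal angle
    pitch = np.arcsin(z / np.maximum(r, 1e-9))    # vertical angle
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)

    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w
    v = np.clip(((fov_up_r - pitch) / (fov_up_r - fov_down_r) * h).astype(int), 0, h - 1)

    image = np.zeros((h, w), dtype=np.float32)
    image[v, u] = r        # later points overwrite earlier ones; fine for a sketch
    return image

# Hypothetical scan: 1000 random points around the sensor.
points = np.random.default_rng(5).uniform(-20.0, 20.0, size=(1000, 3))
print(point_cloud_to_range_image(points).shape)   # (32, 512)
```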

  • Article (No Access)

    A Comparison of SLAM Prediction Densities Using the Kolmogorov Smirnov Statistic

    Unmanned Systems, 01 Oct 2016

    Accurate pose and trajectory estimates are necessary components of an autonomous robot navigation system. A wide variety of Simultaneous Localization and Mapping (SLAM) and localization algorithms have been developed by the robotics community to cater to this requirement. The sensor fusion algorithms employed by SLAM and localization algorithms include the Particle Filter, the Gaussian Particle Filter, the Extended Kalman Filter, the Unscented Kalman Filter, and the Central Difference Kalman Filter. To guarantee rapid convergence of the state estimate to the ground truth, the prediction density of the sensor fusion algorithm must be as close to the true vehicle prediction density as possible. This paper presents a Kolmogorov–Smirnov statistic-based method to compare the prediction densities of the algorithms listed above. The algorithms are compared using simulations of noisy inputs provided to an autonomous robotic vehicle, and the obtained results are analyzed. The results are then validated using data obtained from a robot moving in controlled trajectories similar to the simulations.
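
    The comparison criterion itself is easy to reproduce: draw samples from each filter's prediction density and from the reference density, and compute the two-sample Kolmogorov–Smirnov statistic, where smaller values mean a closer match. The Gaussian samples below are stand-ins, not the paper's simulation data.

```python
import numpy as np
from scipy import stats

# Stand-in 1D samples: ground-truth density and two filters' prediction densities.
rng = np.random.default_rng(6)
ground_truth = rng.normal(loc=0.0, scale=1.0, size=5000)
filter_a = rng.normal(loc=0.05, scale=1.10, size=5000)
filter_b = rng.normal(loc=0.01, scale=1.02, size=5000)

# Smaller KS statistic => prediction density closer to the true density.
for name, samples in [("filter A", filter_a), ("filter B", filter_b)]:
    result = stats.ks_2samp(ground_truth, samples)
    print(f"{name}: KS statistic = {result.statistic:.4f}")
```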

  • Article (No Access)

    Evidential SLAM Fusing 2D Laser Scanner and Stereo Camera

    Unmanned Systems, 01 Jul 2019

    This work introduces a new, complete Simultaneous Localization and Mapping (SLAM) framework that uses an enriched representation of the world based on sensor fusion and is able to simultaneously provide an accurate localization of the vehicle. A method to create an Evidential grid representation from two very different sensors, a laser scanner and a stereo camera, allows better handling of the dynamic aspects of the urban environment and proper management of errors, yielding a more reliable map and thus a more precise localization. A life-long layer with high-level states is presented; it maintains a global map of the entire vehicle's trajectory and distinguishes between static and dynamic obstacles. Finally, we propose a method that, at each map creation, estimates the vehicle's position with a grid-matching algorithm based on image registration techniques. Results on a real road dataset show that the environment mapping data can be improved by adding relevant information that could be missed without the proposed approach. Moreover, the proposed localization method is able to reduce drift and improve localization compared to other methods using similar configurations.

  • Article (No Access)

    Loosely-Coupled Ultra-wideband-Aided Scale Correction for Monocular Visual Odometry

    Unmanned Systems, 17 Mar 2020

    In this paper, we propose a method to address the problem of scale uncertainty in monocular visual odometry (VO), which includes scale ambiguity and scale drift, using distance measurements from a single ultra-wideband (UWB) anchor. A variant of the Levenberg–Marquardt (LM) nonlinear least squares regression method is proposed to rectify the unscaled position data from monocular odometry with 1D point-to-point distance measurements. As a loosely-coupled approach, our method is flexible in that each input block can be replaced with one's preferred choice of monocular odometry/SLAM algorithm and UWB sensor. Furthermore, we do not require the location of the UWB anchor as prior knowledge and estimate both the scale and the anchor location simultaneously. However, it is noted that a good initial guess for the anchor position can result in more accurate scale estimation. The performance of our method is compared with the state of the art on both public datasets and real-life experiments.
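
    The scale and anchor estimation described above amounts to a small nonlinear least-squares problem: find the scale s and anchor a that make ||s·p_i − a|| match the UWB ranges d_i. A sketch using SciPy's Levenberg–Marquardt solver follows; the trajectory, noise level and initial guess are synthetic assumptions, not the paper's data.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, vo_positions, uwb_ranges):
    """||s * p_i - anchor|| - d_i  for every unscaled VO position p_i."""
    s, anchor = params[0], params[1:4]
    return np.linalg.norm(s * vo_positions - anchor, axis=1) - uwb_ranges

# Synthetic data: up-to-scale monocular VO positions and ranges to one UWB anchor.
rng = np.random.default_rng(7)
true_scale, true_anchor = 2.5, np.array([4.0, -1.0, 0.5])
vo_positions = rng.uniform(-3.0, 3.0, size=(200, 3))
uwb_ranges = np.linalg.norm(true_scale * vo_positions - true_anchor, axis=1)
uwb_ranges += 0.05 * rng.normal(size=200)               # ranging noise

# Levenberg-Marquardt refinement from a rough initial guess of scale and anchor.
x0 = np.array([1.0, 0.0, 0.0, 0.0])
sol = least_squares(residuals, x0, args=(vo_positions, uwb_ranges), method="lm")
print("estimated scale:", sol.x[0])
print("estimated anchor:", sol.x[1:])
```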

  • Article (No Access)

    Multi-Sensor Fusion for Navigation and Mapping in Autonomous Vehicles: Accurate Localization in Urban Environments

    Unmanned Systems, 01 Jul 2020

    The combination of data from multiple sensors, also known as sensor fusion or data fusion, is a key aspect of the design of autonomous robots. In particular, algorithms able to accommodate sensor fusion techniques enable increased accuracy and are more resilient against the malfunction of individual sensors. The development of algorithms for autonomous navigation, mapping and localization has seen big advancements over the past two decades. Nonetheless, challenges remain in developing robust solutions for accurate localization in dense urban environments, where so-called last-mile delivery occurs. In these scenarios, local motion estimation is combined with the matching of real-time data against a detailed pre-built map. In this paper, we use data gathered with an autonomous delivery robot to compare different sensor fusion techniques and evaluate which algorithms provide the highest accuracy depending on the environment. The techniques we analyze and propose in this paper utilize 3D lidar data, inertial data, GNSS data and wheel encoder readings. We show how lidar scan matching combined with other sensor data can be used to increase the accuracy of the robot localization and, in consequence, its navigation. Moreover, we propose a strategy to reduce the impact on navigation performance when a change in the environment renders map data invalid or part of the available map is corrupted.

  • Article (No Access)

    Survey on Localization Systems and Algorithms for Unmanned Systems

    Unmanned Systems, 05 Feb 2021

    Intelligent unmanned systems have important applications, such as pesticide spraying in agriculture, robot-based warehouse management systems, and missile-firing drones. The underlying assumption behind all autonomy is that the agent knows its relative position or egomotion with respect to some reference or scene. Thousands of localization systems exist in the literature. These localization systems use various combinations of sensors and algorithms, such as visual/visual-inertial SLAM, to achieve robust localization; the majority of the methods use one or more sensors among LIDAR, camera, IMU, UWB, GPS, compass, tracking systems, etc. This survey presents a systematic review and analysis of published algorithms and techniques in chronological order, and we introduce various highly impactful works. We provide an insightful investigation and taxonomy of the sensory data formation principle, feature association principle, egomotion estimation formulation, and fusion model for each type of system. Finally, some open problems and directions for future research are included. We aim to survey the literature comprehensively to provide a complete understanding of localization methodologies, their performance, advantages and limitations, and evaluations of various methods, shedding light on future research.

  • Article (Free Access)

    Cooperative Localization Using the 3D Euler–Lagrange Vehicle Model

    Kalman filter-based cooperative localization (CL) algorithms have been shown to significantly improve pose estimation within networks of vehicles, but they have relied predominantly on two-dimensional kinematic models of the member agents. An inherent deficiency of the commonly employed kinematic vehicle model is the ineffectiveness of CL with only relative position measurements. In this work, we present a singularity-free CL algorithm using the full three-dimensional (3D) nonlinear dynamic vehicle model, suitable for decentralized control and navigation of heterogeneous networks. We develop the algorithm, present Monte Carlo simulation results with relative pose measurements, and assess the algorithm's performance as the number of measurements increases. We further demonstrate that CL with only relative position measurements is effective when using the dynamic model and benefits from an increasing number of measurements. We also evaluate the performance of CL with respect to measurement task distribution, which is important in the cooperative control of autonomous vehicles.