  • Article (No Access)

    A Robust Visual-Inertial Navigation Method for Illumination-Challenging Scenes

    Unmanned Systems, 24 Feb 2025

    Visual-inertial odometry (VIO) has proven valuable for robot positioning and navigation. However, existing VIO algorithms rely heavily on good lighting, and positioning and navigation accuracy degrades substantially in illumination-challenging scenes. This paper develops a robust visual-inertial navigation method. We construct an effective low-light image enhancement model using a deep curve estimation network (DCE) and a lightweight convolutional neural network to recover the texture information of dark images. In addition, a brightness consistency inference method based on the Kalman filter is proposed to cope with illumination variations across image sequences. Multiple sequences from the UrbanNav and M2DRG datasets are used to test the proposed algorithm, and we also conduct a real-world experiment. Both sets of experimental results demonstrate that our algorithm outperforms other state-of-the-art algorithms. Compared to the baseline algorithm VINS-mono, the tracking time improves from 22.0% to 68.2% and the localization accuracy improves from 0.489 m to 0.258 m on the darkest sequences.
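The abstract does not give the paper's exact brightness consistency model, but the core idea of tracking a slowly varying illumination level with a Kalman filter can be sketched as a 1D filter over mean frame brightness. This is a minimal illustrative sketch assuming a random-walk brightness model; the noise parameters `q` and `r` are our placeholders, not values from the paper:

```python
def track_brightness(measurements, q=1e-4, r=1e-2):
    """1D Kalman filter over mean frame brightness (random-walk model).

    measurements: sequence of per-frame mean brightness values.
    q: assumed process noise (how fast illumination may drift).
    r: assumed measurement noise of a single frame's mean brightness.
    """
    x, p = measurements[0], 1.0      # state estimate and its variance
    filtered = []
    for z in measurements[1:]:
        p += q                       # predict: brightness drifts slowly
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # update with the new frame's brightness
        p *= (1.0 - k)
        filtered.append(x)
    return filtered
```

A filtered brightness track like this could then flag frames whose measured brightness deviates sharply from the prediction, i.e., an illumination change rather than scene content.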

  • Article (No Access)

    Ultra-Wideband-Based Localization for Quadcopter Navigation

    Unmanned Systems, 01 Jan 2016

    Micro unmanned aerial vehicles (UAVs) are poised to play increasingly important roles in both civilian and military activities. Currently, UAV navigation depends critically on the localization service provided by the Global Positioning System (GPS), which suffers from multipath effects and line-of-sight blockage and fails to work in indoor, forest, or urban environments. In this paper, we establish a localization system for quadcopters based on ultra-wideband (UWB) range measurements. A UWB module installed on the quadcopter actively sends ranging requests to fixed UWB modules at known positions (anchors). Each measured distance is first calibrated and then passed through outlier detection before being fed to the localization algorithm, which is initialized by trilateration and sustained by the extended Kalman filter (EKF). The resulting position and velocity estimates are fed to the control loop to aid the navigation of the quadcopter. Flight tests in various environments have been conducted to validate the performance of UWB ranging and the localization algorithm.
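The trilateration initialization mentioned above can be sketched as a linear least-squares solve, obtained by subtracting the first anchor's range equation from the others. This is the generic textbook formulation, not the paper's code; variable names are ours:

```python
import numpy as np

def trilaterate(anchors, dists):
    """Linearized trilateration: position from anchor positions and ranges.

    Subtracting the first range equation from the rest cancels the
    quadratic term |x|^2, leaving a linear system A x = b.
    """
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With four or more non-coplanar anchors this gives a unique 3D fix, which is exactly the kind of coarse estimate an EKF can then refine recursively.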

  • Article (No Access)

    An Efficient Fast-Mapping SLAM Method for UAS Applications Using Only Range Measurements

    Unmanned Systems, 01 Apr 2016

    This paper deals with 3D Simultaneous Localization and Mapping (SLAM) in which a UAS uses only range measurements to build a local map of an unknown environment and to self-localize within that map. In recent years, Range-Only (RO) SLAM has attracted significant interest: it tolerates non-line-of-sight conditions and poor lighting, making it superior to visual SLAM in some problems. However, issues such as delays in map building and low map and UAS estimation accuracies constrain its applicability in practice. This paper proposes a 3D RO-SLAM scheme for UAS that specifically focuses on improving map-building delays and accuracy without compromising efficiency in the consumption of resources. The scheme integrates sonar measurements together with range measurements between the robot and beacons deployed in the scenario. It presents two main advantages: (1) it integrates direct range measurements between the robot and the beacons as well as range measurements between beacons (called inter-beacon measurements), which significantly reduce map-building times and improve map and UAS localization accuracies; and (2) it is endowed with a supervisory module that self-adapts which measurements are integrated in SLAM, reducing computational, bandwidth, and energy consumption. Field experiments with an octorotor UAS showed that the proposed scheme improved map-building times by 72%, map accuracy by 40%, and UAS localization accuracy by 12%.

  • Article (No Access)

    A Comparison of SLAM Prediction Densities Using the Kolmogorov Smirnov Statistic

    Unmanned Systems, 01 Oct 2016

    Accurate pose and trajectory estimates are necessary components of an autonomous robot navigation system. A wide variety of Simultaneous Localization and Mapping (SLAM) and localization algorithms have been developed by the robotics community to meet this requirement. Sensor fusion algorithms employed by SLAM and localization algorithms include the particle filter, the Gaussian Particle Filter, the Extended Kalman Filter, the Unscented Kalman Filter, and the Central Difference Kalman Filter. To guarantee rapid convergence of the state estimate to the ground truth, the prediction density of the sensor fusion algorithm must be as close as possible to the true vehicle prediction density. This paper presents a Kolmogorov–Smirnov statistic-based method to compare the prediction densities of the algorithms listed above. The algorithms are compared in simulations with noisy inputs to an autonomous robotic vehicle, and the obtained results are analyzed. The results are then validated using data from a robot moving in controlled trajectories similar to the simulations.
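The two-sample Kolmogorov–Smirnov statistic underlying such a comparison is the maximum gap between the two empirical CDFs, here between samples drawn from a filter's prediction density and from the reference density. A minimal generic implementation (not the paper's code):

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample KS statistic: sup |F_x - F_y| over the pooled sample."""
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    pooled = np.concatenate([x, y])
    # Empirical CDF of each sample, evaluated at every pooled point.
    cdf_x = np.searchsorted(x, pooled, side="right") / len(x)
    cdf_y = np.searchsorted(y, pooled, side="right") / len(y)
    return float(np.max(np.abs(cdf_x - cdf_y)))
```

A smaller statistic means the filter's predicted sample distribution is closer to the reference, which is the ordering criterion the paper's comparison relies on.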

  • Article (No Access)

    Swift Path Planning: Vehicle Localization by Visual Odometry Trajectory Tracking and Mapping

    Unmanned Systems, 01 Oct 2018

    Accurate localization is the key component of navigation in intelligent vehicles. With rapid development, especially in urban areas, increasing numbers of high-rise buildings create urban canyons, and road networks have become more complex. These factors degrade vehicle navigation performance, particularly when the Global Positioning System (GPS) signal is poor. It is therefore essential to develop a perceptive localization system to overcome this problem. This paper proposes a localization approach that exploits the advantages of Visual Odometry (VO) in low-cost data fusion to reduce vehicle localization error and improve its response rate in path selection. The data are sourced from a camera as the visual sensor, a low-cost GPS receiver, and the free digital map from OpenStreetMap. These data are fused by a Particle filter (PF), where our method estimates a curvature similarity score between the VO trajectory curve and candidate ways extracted from the map. We evaluate the robustness of the proposed approach under three types of GPS error, namely random noise, biased noise, and GPS signal loss, in an instance of ambiguous road decision. Our results show that the method is able to detect and select the correct path simultaneously, which contributes to swift path planning.
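The curvature similarity idea can be illustrated with a discrete curvature estimate (heading change per unit arc length) and a simple distance-based score. This is our illustrative sketch under that assumption, not the paper's actual scoring function:

```python
import numpy as np

def curvature(path):
    """Discrete curvature along a 2D polyline: heading change / arc length."""
    d = np.diff(np.asarray(path, dtype=float), axis=0)
    headings = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))
    dtheta = np.diff(headings)
    ds = np.linalg.norm(d[1:], axis=1)
    return dtheta / ds

def curvature_similarity(vo_path, candidate_way):
    """Higher score = candidate way's curvature profile better matches VO."""
    k1, k2 = curvature(vo_path), curvature(candidate_way)
    n = min(len(k1), len(k2))
    return -float(np.linalg.norm(k1[:n] - k2[:n]))
```

Scoring each candidate way extracted from the map this way lets a particle filter prefer the road geometry that best matches the shape of the VO trajectory, even when GPS is ambiguous.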

  • Article (No Access)

    Loosely-Coupled Ultra-wideband-Aided Scale Correction for Monocular Visual Odometry

    Unmanned Systems, 17 Mar 2020

    In this paper, we propose a method to address the problem of scale uncertainty in monocular visual odometry (VO), which includes both scale ambiguity and scale drift, using distance measurements from a single ultra-wideband (UWB) anchor. A variant of the Levenberg–Marquardt (LM) nonlinear least squares method is proposed to rectify unscaled position data from monocular odometry using 1D point-to-point distance measurements. As a loosely-coupled approach, our method is flexible: each input block can be replaced with one's preferred monocular odometry/SLAM algorithm and UWB sensor. Furthermore, we do not require the location of the UWB anchor as prior knowledge; both the scale and the anchor location are estimated simultaneously, although a good initial guess for the anchor position can yield more accurate scale estimation. The performance of our method is compared with the state of the art on both public datasets and real-life experiments.
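The paper proposes an LM variant; a plain Gauss-Newton sketch of the same joint scale-and-anchor estimation conveys the structure of the problem. This is our simplification with a numerical Jacobian and synthetic names (LM would add a damping term to the normal equations):

```python
import numpy as np

def residuals(theta, pts, dists):
    """theta = [scale, anchor_x, anchor_y]; one residual per range sample."""
    s, anchor = theta[0], theta[1:]
    return np.linalg.norm(s * pts - anchor, axis=1) - dists

def estimate_scale_and_anchor(pts, dists, theta0, iters=50):
    """Gauss-Newton on UWB range residuals over unscaled VO positions."""
    theta = np.asarray(theta0, dtype=float)
    eps = 1e-6
    for _ in range(iters):
        r = residuals(theta, pts, dists)
        J = np.zeros((len(r), len(theta)))
        for j in range(len(theta)):          # forward-difference Jacobian
            t = theta.copy()
            t[j] += eps
            J[:, j] = (residuals(t, pts, dists) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        theta = theta + step
    return theta
```

As the abstract notes, a reasonable initial guess matters: the range residual is nonconvex in the anchor position, so a poor start can land in a local minimum.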

  • Article (No Access)

    Multi-Sensor Fusion for Navigation and Mapping in Autonomous Vehicles: Accurate Localization in Urban Environments

    Unmanned Systems, 01 Jul 2020

    The combination of data from multiple sensors, also known as sensor fusion or data fusion, is a key aspect of the design of autonomous robots. In particular, algorithms able to accommodate sensor fusion techniques achieve increased accuracy and are more resilient against the malfunction of individual sensors. The development of algorithms for autonomous navigation, mapping, and localization has seen major advances over the past two decades. Nonetheless, challenges remain in developing robust solutions for accurate localization in dense urban environments, where so-called last-mile delivery occurs. In these scenarios, local motion estimation is combined with the matching of real-time data against a detailed pre-built map. In this paper, we use data gathered with an autonomous delivery robot to compare different sensor fusion techniques and evaluate which algorithms provide the highest accuracy depending on the environment. The techniques we analyze and propose utilize 3D lidar data, inertial data, GNSS data, and wheel encoder readings. We show how lidar scan matching combined with other sensor data can increase the accuracy of robot localization and, in consequence, its navigation. Moreover, we propose a strategy to reduce the impact on navigation performance when a change in the environment renders map data invalid or part of the available map is corrupted.
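As a generic illustration of the covariance-weighted combination that underlies such multi-sensor pipelines (not the paper's specific algorithm), two independent Gaussian position estimates can be fused in information form, with the more certain estimate receiving more weight:

```python
import numpy as np

def fuse_estimates(x1, P1, x2, P2):
    """Information-form fusion of two independent Gaussian estimates."""
    W1, W2 = np.linalg.inv(P1), np.linalg.inv(P2)  # information matrices
    P = np.linalg.inv(W1 + W2)                     # fused covariance
    x = P @ (W1 @ x1 + W2 @ x2)                    # covariance-weighted mean
    return x, P
```

Note that the fused covariance is smaller than either input covariance, which is why adding a sensor (e.g., lidar scan matching on top of wheel odometry) tightens the localization estimate rather than merely averaging it.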

  • Article (No Access)

    Survey on Localization Systems and Algorithms for Unmanned Systems

    Unmanned Systems, 05 Feb 2021

    Intelligent unmanned systems have important applications, such as pesticide spraying in agriculture, robot-based warehouse management systems, and missile-firing drones. The underlying assumption behind all autonomy is that the agent knows its relative position or egomotion with respect to some reference or scene. Thousands of localization systems exist in the literature, using various combinations of sensors and algorithms, such as visual/visual-inertial SLAM, to achieve robust localization. Most methods use one or more sensors among LIDAR, camera, IMU, UWB, GPS, compass, tracking system, etc. This survey presents a systematic, chronological review and analysis of published algorithms and techniques, and introduces various highly impactful works. We provide an in-depth investigation and taxonomy of the sensory data formation principle, feature association principle, egomotion estimation formulation, and fusion model for each type of system. Finally, open problems and directions for future research are included. We aim to survey the literature comprehensively, providing a complete understanding of localization methodologies, their performance, advantages, limitations, and evaluations, and shedding light on future research.

  • Article (No Access)

    Fault Diagnosis and Reconfiguration for Mobile Robot Localization Based on Multi-Sensors Data Fusion

    Unmanned Systems, 10 Jun 2021

    In this paper, we propose a new approach to fault-tolerant localization using multi-sensor data fusion for a unicycle-type mobile robot. The main contribution is a new architecture for fault diagnosis and reconfiguration in mobile robot localization, based on multi-sensor data fusion and the duplication/comparison approach. Four sensors commonly embedded in mobile robots (camera, IMU, GPS, and odometer) are considered, and six different sensor-pair combinations are used for sensor data fusion and for duplicating the localization and estimation system. To this end, three filters (EKF, SVSF, and ASVSF) are proposed and compared. For each selected filter, a comparison mechanism computes residuals by comparing the estimated robot positions from each sensor pair separately. Faults are then detected using the structural residual diagnosis method, under the assumption that a single fault occurs at a given time. A reconfiguration mechanism then selects the healthy sensor pair and its corresponding fusion filter. Several scenarios are considered for navigation-based fault-tolerant localization. Simulation results illustrate the advantages and performance of the proposed architecture, and the proposed solutions are implemented and validated successfully using the V-REP simulator.
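The single-fault isolation logic can be sketched as follows: a sensor pair whose fused estimate deviates from a robust reference is flagged, and under the single-fault assumption the faulty sensor is the one common to every flagged pair. Variable names and the threshold are illustrative, not from the paper:

```python
import numpy as np

def isolate_faulty_sensor(pair_estimates, threshold=0.5):
    """pair_estimates: {(sensor_a, sensor_b): fused position estimate}."""
    all_est = np.array([np.asarray(e, float) for e in pair_estimates.values()])
    ref = np.median(all_est, axis=0)        # robust reference position
    bad = [pair for pair, est in pair_estimates.items()
           if np.linalg.norm(np.asarray(est, float) - ref) > threshold]
    if not bad:
        return None                         # no fault detected
    common = set(bad[0])
    for pair in bad[1:]:                    # single-fault assumption:
        common &= set(pair)                 # the fault is in every bad pair
    return common.pop() if len(common) == 1 else None
```

With four sensors and six pairs, a single faulty sensor appears in exactly three pairs, so the intersection of the flagged pairs isolates it; reconfiguration then switches to a pair (and filter) that excludes that sensor.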

  • Article (No Access)

    Optimal Autonomous Pursuit of an Intruder on a Grid Aided by Local Node and Edge Sensors

    Unmanned Systems, 10 Jul 2021

    Timely detection of intruders ensures the safety and security of high-value assets within a protected area. This problem takes on particular significance across international borders and becomes challenging when the terrain is porous, rugged, and treacherous. Keeping an effective vigil against intruders over large tracts of land is a tedious task; currently, it is performed primarily by security personnel, with automatic detection systems in passive supporting roles. This paper discusses an alternative autonomous approach that uses one or more Unmanned Vehicles (UVs), aided by smart sensors on the ground, to detect and localize an intruder. To facilitate autonomous UV operations, the region is equipped with Unattended Ground Sensors (UGSs) and laser fencing. Together, these sensors provide time-stamped location information (node and edge detections) of the intruder to a UV. For security reasons, we assume the sensors are not networked (a central node could be disabled, bringing the whole system down), so the UVs must visit the vicinity of each sensor to gather its information. This makes the problem challenging in that pursuit must proceed with local and likely delayed information. We discretize time and space by considering a 2D grid for the area and unit speed for the UV, i.e., it takes one time unit to travel from one node to an adjacent node; the intruder is slower and takes two time steps to complete the same move. We compute the min-max optimal capture time, i.e., the minimum number of steps to capture the intruder under worst-case intruder actions, for different numbers of rows and columns in the grid and for both one and two pursuers.