
  • article (No Access)

    UMS-VINS: Unified Monocular-Stereo Features for Visual-Inertial Tightly Coupled Odometry in Degenerated Scenarios

    Unmanned Systems, 01 Feb 2025

    This paper proposes a Unified Monocular-Stereo Visual-Inertial State Estimator (UMS-VINS) that combines monocular vision, stereo vision, and inertial measurements for vehicle localization in degenerated scenarios. UMS-VINS is a tightly coupled visual-inertial odometry (VIO) that requires a stereo camera and a low-cost inertial measurement unit (IMU). On the one hand, we introduce additional two-dimensional sub-pixel features from the left and/or right cameras. With these monocular-stereo features, UMS-VINS improves positioning accuracy and robustness by enhancing both the quality and the quantity of features. On the other hand, a mode-selection-based visual-inertial initialization strategy is designed to dynamically choose between stereo visual odometry and VIO according to the inertial motion state and initialization status, which guarantees successful initialization. Performance on both new real-world datasets and public datasets demonstrates its effectiveness in terms of localization accuracy, localization robustness, and environmental adaptability.
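
    As an illustration of the mode-selection idea, here is a minimal sketch in Python (the thresholds and function names are hypothetical, not from the paper): initialization falls back to stereo visual odometry until the IMU is excited enough for the visual-inertial states to become observable.

        import numpy as np

        def select_init_mode(gyro, accel, vio_initialized,
                             gyro_thresh=0.05, accel_thresh=0.25):
            """Pick an odometry mode from recent IMU samples (hypothetical logic).

            gyro, accel: (N, 3) arrays of recent IMU readings.
            Returns "VIO" once the filter is initialized or the IMU is
            sufficiently excited; otherwise falls back to stereo-only VO.
            """
            if vio_initialized:
                return "VIO"
            # Excitation measured as the spread of the IMU reading magnitudes.
            gyro_excitation = np.linalg.norm(gyro, axis=1).std()
            accel_excitation = np.linalg.norm(accel, axis=1).std()
            if gyro_excitation > gyro_thresh and accel_excitation > accel_thresh:
                return "VIO"          # enough motion to initialize the IMU states
            return "STEREO_VO"        # degenerate motion: stereo-only odometry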

  • article (No Access)

    A Robust Visual-Inertial Navigation Method for Illumination-Challenging Scenes

    Unmanned Systems, 24 Feb 2025

    Visual-inertial odometry (VIO) has great value in robot positioning and navigation. However, existing VIO algorithms rely heavily on good lighting, and the accuracy of robot positioning and navigation degrades sharply in illumination-challenging scenes. A robust visual-inertial navigation method is developed in this paper. We construct an effective low-light image enhancement model using a deep curve estimation network (DCE) and a lightweight convolutional neural network to recover the texture information of dark images. Meanwhile, a brightness-consistency inference method based on the Kalman filter is proposed to cope with illumination variations in image sequences. Multiple sequences from the UrbanNav and M2DGR datasets are used to test the proposed algorithm, and we also conduct a real-world experiment. Both sets of results demonstrate that our algorithm outperforms other state-of-the-art algorithms. Compared to the baseline algorithm VINS-Mono, the tracking time is improved from 22.0% to 68.2% and the localization accuracy is improved from 0.489 m to 0.258 m on the darkest sequences.
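
    The brightness-consistency step can be pictured as a one-dimensional Kalman filter over mean frame brightness. The sketch below is an illustration under our own assumptions (the noise parameters q and r and the class name are ours, not the paper's):

        import numpy as np

        class BrightnessKalman:
            """1-D Kalman filter tracking mean image brightness across frames."""

            def __init__(self, q=1e-3, r=1e-1):
                self.x = None   # filtered brightness estimate
                self.p = 1.0    # estimate variance
                self.q = q      # process noise: how fast illumination may drift
                self.r = r      # measurement noise: per-frame brightness jitter

            def update(self, frame):
                z = float(frame.mean()) / 255.0    # normalized brightness measurement
                if self.x is None:
                    self.x = z                     # initialize on the first frame
                    return self.x
                self.p += self.q                   # predict
                k = self.p / (self.p + self.r)     # Kalman gain
                self.x += k * (z - self.x)         # correct
                self.p *= (1.0 - k)
                return self.x

    A tracker could then rescale each incoming frame by (reference brightness / filtered brightness) so that feature matching sees a roughly constant exposure.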

  • article (No Access)

    Generalization of Parameter Recovery in Binocular Vision for a Planar Scene

    In this paper, we consider a mobile platform with two cameras directed towards the floor. In earlier work, this specific problem geometry has been considered under the assumption that the cameras have been mounted at the same height. This paper extends the previous work by removing the height constraint, as it is hard to realize in real-life applications.

    We develop a method based on an equivalent problem geometry and show that much of the previous work can be reused with small modifications to account for the height difference. A fast solver for the resulting nonconvex optimization problem is devised. Furthermore, we propose a second method for estimating the height difference by constraining the mobile platform to pure translations. This is intended to simulate a calibration sequence, which is not uncommon to impose. Experiments are conducted on synthetic data, and the results demonstrate a robust method for determining the relative parameters, comparable to previous work.
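
    To make the pure-translation calibration idea concrete, the following sketch estimates the height ratio of the two cameras from optical flow (this is our simplified reading of the geometry, not the paper's solver): for a floor-facing camera at height h, a translation t parallel to the floor induces image flow of magnitude f|t|/h, so the flow-magnitude ratio of two rigidly mounted cameras with equal focal lengths is the inverse of their height ratio.

        import numpy as np

        def relative_height(flow_a, flow_b):
            """Estimate h_b / h_a of two floor-facing cameras from optical
            flow measured during one pure platform translation.

            flow_a, flow_b: (N, 2) flow vectors from cameras a and b.
            """
            mag_a = np.linalg.norm(flow_a, axis=1).mean()
            mag_b = np.linalg.norm(flow_b, axis=1).mean()
            return mag_a / mag_b   # larger flow means a lower camera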

  • article (No Access)

    From Local Understanding to Global Regression in Monocular Visual Odometry

    The most significant part of any autonomous intelligent robot is the localization module, which gives the robot knowledge of its position and orientation. This knowledge helps the robot move to its desired goal and complete its task. Visual Odometry (VO) measures the displacement of the robot's camera across consecutive frames, from which the robot's position and orientation are estimated. Deep learning now makes it possible to learn rich and informative features for VO and to estimate frame-by-frame camera motion. Recent deep-learning-based VO methods train an end-to-end network that solves VO directly as a regression problem, without inspecting the distribution of the training labels during training. In this paper, a new approach to training Convolutional Neural Networks (CNNs) for regression problems such as VO is proposed. The proposed method first converts the problem into a classification problem in order to learn distinct subspaces of similar observations. Once the classification problem is solved, the task is converted back to the original regression problem, which is solved using the knowledge gained in the classification step. This approach helps the CNN solve the regression problem globally within the local domains learned in the classification step, and improves the performance of the regression module by approximately 10%.
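
    A minimal PyTorch sketch of the two-stage recipe follows; the architecture, the bin count K, and the target range are placeholders of ours, not the paper's network:

        import torch
        import torch.nn as nn

        # Stage 1 turns the continuous motion target into K classes so the
        # backbone learns subspaces of similar observations; stage 2 swaps in
        # a regression head and fine-tunes on the original continuous targets.
        K = 32
        backbone = nn.Sequential(
            nn.Conv2d(6, 16, 3, stride=2), nn.ReLU(),   # 6 ch: two stacked RGB frames
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())

        edges = torch.linspace(-1.0, 1.0, K - 1)   # fixed bin edges over the target range

        def to_class(y):
            """Map a continuous target (e.g. yaw change) to one of K bin labels."""
            return torch.bucketize(y, edges)

        # Stage 1: train backbone + cls_head with nn.CrossEntropyLoss on to_class(y).
        cls_head = nn.Linear(32, K)
        # Stage 2: keep the backbone, fine-tune with nn.MSELoss on the raw targets.
        reg_head = nn.Linear(32, 6)                # 6-DoF frame-to-frame motion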

  • article (No Access)

    Feature-Based Correspondence Filtering Using Structural Similarity Index for Visual Odometry

    The stereo correspondence problem is one of the most prominent problems in a stereo vision system. With the right correspondences, a stereo vision system can help solve diverse problems; wrong correspondences, on the other hand, can be costly. While the performance of feature-based correspondence approaches is exceptional, they can still produce wrong correspondences. This work presents an amalgam of feature-based and correlation-based correspondence, in which the local pixels around a feature pair are compared using the Structural SIMilarity index (SSIM) to vet the correspondences, together with a semantic-based filtering module that further filters the matched features using semantic data whenever it is detected in both images of the stereo pair. While approaches in the literature focus on finding better features and representations, the proposed approach advocates that correlation-based verification can filter out bad correspondences and can additionally be aided by semantic-level filtering. These two modules establish the novelty of the work. The proposed correspondence matching algorithm is used to solve the problem of visual odometry, letting a low-cost robot compute its pose in a novel environment. The experimental results show adequate filtering of wrong feature correspondences across different environments and lighting conditions, and the proposed approach outperformed numerous state-of-the-art approaches in the literature. The visual odometry algorithm using the proposed correspondence matching is compared against classical methods and a deep learning method, and it delivers lower trajectory errors in most scenarios on the KITTI dataset sequences.
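
    The correlation-based verification module can be sketched in a few lines with scikit-image (the window size and threshold are assumed values, not those of the paper):

        import numpy as np
        from skimage.metrics import structural_similarity as ssim

        def filter_matches_ssim(img_l, img_r, pts_l, pts_r, win=11, thresh=0.6):
            """Keep only correspondences whose local patches agree under SSIM.

            img_l, img_r: grayscale uint8 images.
            pts_l, pts_r: (N, 2) matched pixel coordinates as (x, y).
            Returns the indices of the matches that survive.
            """
            half = win // 2
            keep = []
            for i, ((xl, yl), (xr, yr)) in enumerate(
                    zip(pts_l.astype(int), pts_r.astype(int))):
                pl = img_l[yl - half:yl + half + 1, xl - half:xl + half + 1]
                pr = img_r[yr - half:yr + half + 1, xr - half:xr + half + 1]
                if pl.shape != (win, win) or pr.shape != (win, win):
                    continue          # window falls off the image border
                if ssim(pl, pr) > thresh:
                    keep.append(i)    # patches are structurally similar
            return np.asarray(keep)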

  • article (No Access)

    Multispectral Visual Odometry Using SVSF for Mobile Robot Localization

    Unmanned Systems, 27 Nov 2021

    In this paper, we propose a novel method for mobile robot localization and navigation based on multispectral visual odometry (MVO). The proposed approach combines visible and infrared images to localize the mobile robot under different conditions (day, night, indoor, and outdoor). The depth image acquired by the Kinect sensor is very sensitive to IR luminosity, which makes it of little use for outdoor localization. We therefore propose an efficient solution to this Kinect limitation based on three navigation modes: indoor localization based on RGB/depth images, night localization based on depth/IR images, and outdoor localization using multispectral RGB/IR stereovision. For automatic selection of the appropriate navigation mode, we propose a fuzzy logic controller based on image energies. To overcome the limitations of multimodal visual navigation (MMVN), especially during navigation mode switching, a smooth variable structure filter (SVSF) is implemented to fuse the MVO pose with the wheel odometry (WO) pose based on variable structure theory. The proposed approaches are successfully validated in trajectory-tracking experiments on a Pioneer P3-AT mobile robot.
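
    The mode selector can be approximated with crisp thresholds on image energy; the sketch below is a simplified stand-in for the paper's fuzzy logic controller (the thresholds, mode names, and decision rules are assumptions of ours):

        import numpy as np

        def image_energy(img):
            """Mean squared normalized intensity, the cue used for mode selection."""
            return float(np.mean((img.astype(np.float32) / 255.0) ** 2))

        def select_mode(rgb, ir, low=0.05, high=0.30):
            """Crisp stand-in for the fuzzy selector over three navigation modes."""
            e_rgb = image_energy(rgb)
            e_ir = image_energy(ir)
            if e_rgb < low and e_ir > low:
                return "NIGHT_DEPTH_IR"      # visible image too dark, IR informative
            if e_rgb > high:
                return "INDOOR_RGB_DEPTH"    # visible image strong, depth assumed usable
            return "OUTDOOR_RGB_IR"          # multispectral stereo otherwise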

  • chapter (No Access)

    VISUAL ODOMETRY TECHNIQUE USING CIRCULAR MARKER IDENTIFICATION FOR MOTION PARAMETER ESTIMATION

    This paper presents a new visual odometry approach for mobile robot self-localization that exploits natural circular invariant features observed during motion. The on-board camera acquires sequences of overlapping images and senses the distance and orientation of the vehicle with respect to identified markers. The paper uses an effective convolution-based image filtering technique to localize the natural markers in the images. The proposed approach simplifies the problem of feature localization and allows a robust estimation of the vehicle's trajectory. Initial experiments are carried out on a mobile robot and results are presented.
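
    A convolution with a ring-shaped, zero-mean kernel is one way to realize such a filter; the sketch below assumes the marker radius is known (the kernel design and names are ours, not the chapter's):

        import numpy as np
        from scipy.signal import fftconvolve

        def ring_kernel(radius, width=2):
            """Zero-mean annular matched filter for a circle of the given radius."""
            size = 2 * (radius + width) + 1
            c = size // 2
            yy, xx = np.mgrid[:size, :size]
            r = np.hypot(yy - c, xx - c)
            k = (np.abs(r - radius) <= width).astype(np.float32)
            return k - k.mean()      # zero mean: flat image regions score zero

        def locate_marker(gray, radius):
            """Return the (row, col) of the strongest ring response in the image."""
            resp = fftconvolve(gray.astype(np.float32), ring_kernel(radius),
                               mode="same")
            return np.unravel_index(np.argmax(resp), resp.shape)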

  • chapter (No Access)

    EXPERIMENTAL STUDY ON TRACK-TERRAIN INTERACTION DYNAMICS IN AN INTEGRATED ENVIRONMENT: TEST RIG

    An understanding of track-terrain interaction dynamics and vehicle slip is crucial for skid-steered tracked vehicles traversing soft terrain. There is a lack of experimental data for validating the dynamic, kinematic, and control models developed for tracked vehicles on soft terrains. The objective of this paper is to develop a test rig that generates experimental data for autonomous tracked vehicles following a steady-state circular trajectory on soft terrains. The data will be used in the future to validate a traversability model for predicting track thrusts, a visual odometry technique for predicting vehicle slip, and a controller for autonomous tracked vehicles following a steady-state circular trajectory on soft terrains, all developed at King's College London.

  • chapter (No Access)

    Camera Assisted Navigation for Planetary Rover

    Robotic rovers sent for planetary exploration are largely tele-operated by ground-based drivers, so it is difficult to keep track of a planetary rover's movements, especially in unexplored environments. The ground-based drivers generate commands using the 3D visual images sent back by the rover, and these commands take a certain amount of time to execute due to communication lag. For this reason, commands are usually generated only once every Martian solar day (sol). This leads to an increasing need for an onboard, real-time, reliable autonomous navigation solution. This paper proposes a camera-assisted inertial odometry suitable for planetary environments where GPS cannot be employed. The effectiveness of the proposed solution is established using the open-source KITTI benchmark dataset.

  • chapter (No Access)

    EGO-MOTION SENSOR FOR UNMANNED AERIAL VEHICLES BASED ON A SINGLE-BOARD COMPUTER

    This paper describes the design and implementation of a ground-related odometry sensor suitable for micro aerial vehicles. The sensor is based on a ground-facing camera and a single-board Linux-based embedded computer with a multimedia System on a Chip (SoC). The SoC features a hardware video encoder which is used to estimate the optical flow online. The optical flow is then used in combination with a distance sensor to estimate the vehicle’s velocity. The proposed sensor is compared to a similar existing solution and evaluated in both indoor and outdoor environments.
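
    The core relation behind such a sensor is simple: for a down-facing camera at distance d above the ground, a pixel flow u over a frame interval dt corresponds to a metric velocity v = u * d / (f * dt). A minimal sketch of this relation follows (the function name and the use of the median are our assumptions):

        import numpy as np

        def velocity_from_flow(flow_px, distance_m, focal_px, dt):
            """Ground velocity of a down-facing camera from optical flow.

            flow_px:    (N, 2) per-block motion vectors, e.g. the video
                        encoder's motion vectors, in pixels per frame.
            distance_m: distance to the ground from the range sensor.
            focal_px:   focal length in pixels; dt: frame interval in seconds.
            """
            mean_flow = np.median(flow_px, axis=0)   # median is robust to outlier blocks
            return mean_flow * distance_m / (focal_px * dt)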