A Robust Visual-Inertial Navigation Method for Illumination-Challenging Scenes

    https://doi.org/10.1142/S2301385026500068 · Cited by: 0 (Source: Crossref)

    Visual-inertial odometry (VIO) is of great value for robot positioning and navigation. However, existing VIO algorithms rely heavily on well-lit environments, and their positioning and navigation accuracy degrades significantly in illumination-challenging scenes. This paper develops a robust visual-inertial navigation method. We construct an effective low-light image enhancement model using a deep curve estimation (DCE) network and a lightweight convolutional neural network to recover the texture information of dark images. Meanwhile, a brightness consistency inference method based on the Kalman filter is proposed to cope with illumination variations in image sequences. Multiple sequences from the UrbanNav and M2DGR datasets are used to test the proposed algorithm, and we also conduct a real-world experiment. The experimental results demonstrate that our algorithm outperforms other state-of-the-art algorithms. Compared to the baseline algorithm VINS-Mono, the tracking time improves by 22.0% to 68.2%, and the localization accuracy improves from 0.489 m to 0.258 m on the darkest sequences.
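
    The abstract does not detail the enhancement model beyond naming deep curve estimation, so the following is a minimal sketch of the published DCE (Zero-DCE) light-enhancement curve, LE(x) = LE(x) + α(x)·LE(x)·(1 − LE(x)), applied iteratively. The function name apply_dce_curves and the constant α maps are illustrative placeholders; in the paper's pipeline the pixel-wise α maps would be predicted by the lightweight CNN.

```python
import numpy as np

def apply_dce_curves(image: np.ndarray, alpha_maps: list) -> np.ndarray:
    """Iteratively brighten a [0, 1]-normalized image with pixel-wise
    quadratic light-enhancement curves, as in deep curve estimation:
    LE(x) <- LE(x) + alpha(x) * LE(x) * (1 - LE(x))."""
    enhanced = image.astype(np.float32)
    for alpha in alpha_maps:
        enhanced = enhanced + alpha * enhanced * (1.0 - enhanced)
    return np.clip(enhanced, 0.0, 1.0)

# Placeholder usage: a synthetic dark frame and constant alpha maps
# (alpha = 0.6, 8 iterations); a trained network would supply these.
dark = 0.2 * np.random.rand(480, 640).astype(np.float32)
alphas = [np.full_like(dark, 0.6)] * 8
bright = apply_dce_curves(dark, alphas)
```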
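    The abstract likewise does not specify the state or measurement model of the Kalman-filter-based brightness consistency inference. Below is a minimal sketch under the assumption of a scalar random-walk state tracking mean frame brightness, used to rescale frames toward a smoothed brightness level; the names BrightnessKalman and equalize_brightness, and the noise values, are hypothetical.

```python
import numpy as np

class BrightnessKalman:
    """1-D Kalman filter over mean frame brightness (random-walk model)."""

    def __init__(self, process_var: float = 1e-4, meas_var: float = 1e-2):
        self.x = None          # filtered brightness estimate
        self.p = 1.0           # estimate variance
        self.q = process_var   # process (drift) noise
        self.r = meas_var      # measurement noise

    def update(self, z: float) -> float:
        if self.x is None:                 # initialize on the first frame
            self.x = z
            return self.x
        self.p += self.q                   # predict: brightness drifts slowly
        k = self.p / (self.p + self.r)     # Kalman gain
        self.x += k * (z - self.x)         # correct with the measured mean
        self.p *= 1.0 - k
        return self.x

def equalize_brightness(frame: np.ndarray, kf: BrightnessKalman) -> np.ndarray:
    """Rescale a [0, 1]-normalized frame toward the filtered estimate,
    damping abrupt illumination changes between consecutive frames."""
    z = float(frame.mean())
    target = kf.update(z)
    return np.clip(frame * (target / max(z, 1e-6)), 0.0, 1.0)
```

    A filter like this keeps the photometric assumption of feature tracking approximately valid: sudden gain changes are absorbed by the smoothed estimate rather than passed to the tracker.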

    This paper was recommended for publication in its revised form by editorial board member Jinwen Hu.