A Robust Visual-Inertial Navigation Method for Illumination-Challenging Scenes
Abstract
Visual-inertial odometry (VIO) is of great value for robot positioning and navigation. However, existing VIO algorithms rely heavily on well-lit environments, and their positioning and navigation accuracy degrades substantially in illumination-challenging scenes. This paper develops a robust visual-inertial navigation method. We construct an effective low-light image enhancement model that combines a deep curve estimation network (DCE) with a lightweight convolutional neural network to recover the texture information of dark images. Meanwhile, a brightness-consistency inference method based on the Kalman filter is proposed to cope with illumination variations in image sequences. The proposed algorithm is evaluated on multiple sequences from the UrbanNav and M2DGR datasets, and we further validate it in a real-world experiment. Both sets of experimental results demonstrate that our algorithm outperforms other state-of-the-art algorithms. Compared with the baseline algorithm VINS-Mono, the tracking time is improved from 22.0% to 68.2% and the localization accuracy from 0.489 m to 0.258 m on the darkest sequences.