The extraction of feature points is crucial to computer vision tasks such as self-calibration of binocular camera extrinsic parameters, pose estimation, and structure from motion (SFM). In the context of autonomous driving, there are numerous unstructured feature points, as well as structured feature points with shapes such as L-type, Y-type, star-type, and centroid. Typically, feature points are extracted without discrimination and fed to feature-based visual algorithms in a generalized manner; however, the influence of their structural characteristics on the performance of such algorithms remains largely unexplored. To address this issue, we propose a multi-stream feature point classification network based on circular patch extraction (CPE). CPE uses concentric circles centered on a given feature point to capture the intensity distribution around that point. The resulting circular patches are then converted into square patches ordered by radius and polar angle. Each stream of the multi-stream classification network receives one square patch as input, learns the intensity distribution features, and classifies the feature points into Y-type, centroid, and unstructured categories. Finally, we experimentally verify how structured and unstructured points affect related autonomous driving visual algorithms. The results indicate that the proposed network can effectively classify feature points by structure, which in turn enhances the performance of feature-based vision algorithms.
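The circular-to-square patch conversion described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sampling parameters (`num_radii`, `num_angles`, `max_radius`) and nearest-neighbor sampling are assumptions chosen for clarity.

```python
import numpy as np

def circular_to_square_patch(image, center, num_radii=8, num_angles=32, max_radius=16):
    """Sample intensities on concentric circles around a feature point and
    unroll them into a (num_radii x num_angles) patch: rows ordered by
    radius, columns by polar angle. Parameters are illustrative assumptions."""
    cy, cx = center
    radii = np.linspace(1, max_radius, num_radii)
    angles = np.linspace(0, 2 * np.pi, num_angles, endpoint=False)
    patch = np.zeros((num_radii, num_angles), dtype=image.dtype)
    for i, r in enumerate(radii):
        # Nearest-neighbor sampling on the circle of radius r, clipped to the image.
        ys = np.clip(np.round(cy + r * np.sin(angles)).astype(int), 0, image.shape[0] - 1)
        xs = np.clip(np.round(cx + r * np.cos(angles)).astype(int), 0, image.shape[1] - 1)
        patch[i] = image[ys, xs]
    return patch
```

Each such patch could then serve as the input to one stream of the classification network, with different `max_radius` values yielding different streams.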
The accuracy of feature-based vision algorithms, including the self-calibration of binocular camera extrinsic parameters used in autonomous driving environment perception, relies heavily on the quality of the features extracted from the images. This study investigates how the depth distance between objects and the camera, feature points in different object regions, and feature points in dynamic object regions influence the self-calibration of binocular camera extrinsic parameters. To achieve this, the study first filters out different types of objects in the image through semantic segmentation. It then identifies the regions of dynamic objects and extracts the feature points in static object regions for self-calibration. By calculating the baseline error of the binocular camera and the row alignment error of the matched feature points, the study evaluates the influence of feature points in dynamic object regions, feature points in different object regions, and feature points at different distances on the self-calibration algorithm. The experimental results demonstrate that feature points on static objects close to the camera benefit the self-calibration of binocular camera extrinsic parameters.
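The row alignment error mentioned above can be illustrated with a short sketch. For a properly rectified stereo pair, matched feature points should lie on the same image row, so the mean absolute difference of their vertical coordinates is a natural error measure; the exact metric and point format here are assumptions, not the paper's definition.

```python
import numpy as np

def row_alignment_error(pts_left, pts_right):
    """Mean absolute row (v-coordinate) difference between matched feature
    points in a rectified stereo pair. Points are (N, 2) arrays of (u, v)
    pixel coordinates; a well-calibrated rig yields values near zero."""
    pts_left = np.asarray(pts_left, dtype=float)
    pts_right = np.asarray(pts_right, dtype=float)
    return float(np.mean(np.abs(pts_left[:, 1] - pts_right[:, 1])))
```

In this setting, restricting `pts_left`/`pts_right` to matches found on static object regions (after masking out dynamic objects via semantic segmentation) would give the error actually used to score the self-calibration.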