To address tracking drift caused by weakly discriminative feature information and the inability of a fixed template to adapt to changes in object appearance, this paper proposes an object tracking algorithm that combines an attention mechanism with correlation filter theory within a fully convolutional Siamese network framework. First, the appearance information is processed with attention: the object and search-region features are refined by spatial-attention and channel-attention modules, and a cross-attention module is applied to the template branch and the search-region branch, respectively, to make full use of the diverse context information in the search region. Then, a background-aware correlation filter model with scale adaptation and learning-rate adjustment is built into the model as a network layer to update the object template. Finally, the optimal object location is determined from the confidence map obtained by the similarity calculation. Experimental results show that the designed method effectively improves tracking performance in various challenging environments; the success rate increases by 16.2% and the accuracy by 16%.
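As a rough illustration of the channel- and spatial-attention re-weighting described above, the sketch below applies a simple gated re-weighting to template and search-region feature maps in Python/NumPy. The pooling and gating choices (global average pooling with a sigmoid gate), the feature sizes, and the omission of the cross-attention module are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of channel- and spatial-attention re-weighting of feature maps.
# The gating form (global average pooling + sigmoid) is an assumed, simplified
# design; the cross-attention between branches is omitted.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """feat: (C, H, W). Re-weight each channel by its global average response."""
    gap = feat.mean(axis=(1, 2))                 # (C,) global average pooling
    weights = sigmoid(gap - gap.mean())          # simple sigmoid gate (assumed)
    return feat * weights[:, None, None]

def spatial_attention(feat):
    """feat: (C, H, W). Re-weight spatial positions by channel-pooled energy."""
    pooled = feat.mean(axis=0)                   # (H, W) average over channels
    weights = sigmoid(pooled - pooled.mean())    # simple sigmoid gate (assumed)
    return feat * weights[None, :, :]

# Refine template and search-region features before the correlation step
# (hypothetical feature sizes).
template = np.random.rand(256, 6, 6).astype(np.float32)
search = np.random.rand(256, 22, 22).astype(np.float32)
template = spatial_attention(channel_attention(template))
search = spatial_attention(channel_attention(search))
```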
Kernelized Correlation Filters (KCF) for visual tracking have received much attention due to their fast speed and outstanding performance in real scenarios. However, KCF can still fail to track targets whose scale changes, and it may drift because the target response is fixed and the original histogram of oriented gradients (HOG) features cannot represent the targets well. In this paper, we propose a novel fast tracker based on KCF that is insensitive to scale changes: it learns two independent correlation filters (CFs), one for position estimation and the other for scale estimation. In addition, the target response is changed adaptively, and multiple features are integrated to improve the tracker's performance. Finally, we employ an adaptive high-confidence filter-update scheme to avoid erroneous updates. Evaluated on the popular OTB50 and OTB100 datasets, the proposed tracker shows superior efficiency and accuracy compared with existing methods.
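For readers unfamiliar with the underlying KCF machinery, the sketch below shows a minimal single-channel kernelized correlation filter trained and applied in the Fourier domain (Gaussian kernel, ridge regression in the dual form). It illustrates only the base position filter that the abstract builds on; the scale filter, multi-feature integration, and high-confidence update scheme of the proposed tracker are not reproduced, and windowing and label details are simplified.

```python
# Minimal single-channel kernelized correlation filter (KCF-style), trained and
# evaluated in the Fourier domain. Cosine windowing, multi-channel features,
# scale estimation and model updating are omitted for brevity.
import numpy as np

def gaussian_kernel_correlation(x, z, sigma=0.5):
    """Gaussian kernel correlation of two equally sized patches, via the FFT."""
    xf, zf = np.fft.fft2(x), np.fft.fft2(z)
    cross = np.real(np.fft.ifft2(xf * np.conj(zf)))
    dist = (np.sum(x ** 2) + np.sum(z ** 2) - 2.0 * cross) / x.size
    return np.exp(-np.maximum(dist, 0.0) / (sigma ** 2))

def train(x, y, lam=1e-4):
    """Dual ridge regression: alpha_f = y_f / (k_f(x, x) + lambda)."""
    kf = np.fft.fft2(gaussian_kernel_correlation(x, x))
    return np.fft.fft2(y) / (kf + lam)

def detect(alphaf, x, z):
    """Response map for search patch z, given template x and dual coefficients."""
    kzf = np.fft.fft2(gaussian_kernel_correlation(z, x))
    return np.real(np.fft.ifft2(alphaf * kzf))

# Usage: train on a patch with a Gaussian label, then find the response peak
# on a shifted copy of the same patch.
H, W = 64, 64
ys, xs = np.mgrid[0:H, 0:W]
label = np.exp(-((ys - H // 2) ** 2 + (xs - W // 2) ** 2) / (2.0 * 3.0 ** 2))
x_patch = np.random.rand(H, W)
alphaf = train(x_patch, np.fft.fftshift(label))
z_patch = np.roll(x_patch, (3, 5), axis=(0, 1))
response = detect(alphaf, x_patch, z_patch)
dy, dx = np.unravel_index(np.argmax(response), response.shape)
```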
This paper focuses on integrating information from the RGB and thermal infrared modalities to perform RGB-T object tracking in the correlation filter framework. Our baseline tracker is Staple (Sum of Template and Pixel-wise LEarners), which combines complementary cues in the correlation filter framework with high efficiency. Given the input RGB and thermal videos, we adopt this baseline because of its high accuracy and speed. Unlike previous correlation filter-based methods, we perform fusion at both the pixel-fusion and decision-fusion levels. Our tracker is robust to the dataset's challenges, and owing to the efficiency of the FFT it maintains high speed with superior performance. Extensive experiments on the RGBT234 dataset demonstrate the effectiveness of our approach.
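The two fusion levels mentioned above can be pictured with the toy sketch below: pixel-level fusion blends the RGB and thermal frames before feature extraction, and decision-level fusion blends the per-modality response maps before locating the peak. The fixed weight and the plain weighted-sum rule are assumptions for illustration, not the fusion scheme used in the paper.

```python
# Toy sketch of pixel-level and decision-level fusion of RGB and thermal data.
# Fixed weights and a plain weighted sum are illustrative assumptions.
import numpy as np

def pixel_fusion(rgb, thermal, w=0.5):
    """Blend an RGB frame (H, W, 3) with a thermal frame (H, W), per pixel."""
    thermal_3ch = np.repeat(thermal[..., None], 3, axis=2)
    return w * rgb + (1.0 - w) * thermal_3ch

def decision_fusion(resp_rgb, resp_thermal, w=0.5):
    """Blend the per-modality response maps and return the fused peak location."""
    fused = w * resp_rgb + (1.0 - w) * resp_thermal
    peak = np.unravel_index(np.argmax(fused), fused.shape)
    return peak, fused

# Usage with toy data (intensities in [0, 1]):
rgb_frame = np.random.rand(240, 320, 3)
thermal_frame = np.random.rand(240, 320)
fused_frame = pixel_fusion(rgb_frame, thermal_frame)   # input to the tracker
peak, fused_response = decision_fusion(np.random.rand(50, 50),
                                       np.random.rand(50, 50))
```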
Since correlation filters appeared in the field of video object tracking, they have been very popular owing to their excellent performance. Correlation filter-based tracking algorithms are highly competitive in accuracy, speed, and robustness. However, there is still room for improvement. First, the background information that can be exploited during classifier training is very limited, and the cosine window reduces it further; both factors weaken the discriminative power of the classifier. This paper therefore introduces more global background information on top of the DCF tracker to improve the classifier's discriminative ability. Second, tracking loss occurs easily in some complex scenes, and the tracker may then treat background information as the object. To solve this problem, this paper introduces a novel re-detection component. Finally, current correlation filter-based trackers update the model by linear interpolation, which cannot adapt to object changes in time; this paper proposes an adaptive model update strategy to improve the robustness of the tracker. Experimental results on multiple datasets show that the proposed tracking algorithm performs excellently.
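As an illustration of replacing fixed linear interpolation with a confidence-driven update, the sketch below scales the learning rate by an APCE-style sharpness measure of the response map, so a confident, sharp-peaked response updates the model faster than an ambiguous one. The APCE criterion, reference value, and scaling rule are assumptions for illustration, not the adaptive strategy proposed in the paper.

```python
# Sketch of a confidence-driven model update: the learning rate is scaled by an
# APCE-style sharpness score of the response map instead of being fixed.
# The reference value and linear scaling rule are assumptions for illustration.
import numpy as np

def apce(response):
    """Average peak-to-correlation energy: higher means a sharper, cleaner peak."""
    peak, low = response.max(), response.min()
    return (peak - low) ** 2 / (np.mean((response - low) ** 2) + 1e-12)

def adaptive_update(model, new_estimate, response, base_rate=0.02, ref_apce=20.0):
    """Blend old and new models with a rate proportional to response confidence."""
    rate = base_rate * min(1.0, apce(response) / ref_apce)
    return (1.0 - rate) * model + rate * new_estimate

# Usage: a sharp-peaked response updates the model faster than a flat one.
model = np.zeros((64, 64))
sharp = np.zeros((64, 64))
sharp[32, 32] = 1.0
flat = 0.5 + 0.01 * np.random.rand(64, 64)
model = adaptive_update(model, np.ones((64, 64)), sharp)   # full base rate
model = adaptive_update(model, np.ones((64, 64)), flat)    # heavily damped rate
```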