Text-independent handwriting identification methods require that features such as texture be extracted from a lengthy document image, while text-dependent methods require that the contents of the documents being compared be identical. To overcome these limitations, this paper presents a novel Chinese handwriting identification technique. First, Chinese characters are segmented from the handwriting document, and keywords are extracted by matching and voting on character-level local features. The same-content keywords are then used to build training sets, and the training sets of the two documents are compared. Because keywords resemble signatures, the handwriting identification problem is transformed into a signature verification problem. Experiments on the HIT-MW, HIT-SW and CASIA datasets show that this method outperforms many text-independent handwriting identification methods.
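The matching-and-voting step for keyword extraction could be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the descriptor representation, the Lowe-style ratio test, and the function names (`match_votes`, `select_keywords`) are all assumptions introduced here.

```python
import numpy as np

def match_votes(char_descriptors, template_descriptors, thresh=0.8):
    """Count how many local descriptors of a segmented character match a
    keyword template, using nearest-neighbour search with a ratio test.
    (Illustrative stand-in for the paper's matching-and-voting scheme.)"""
    votes = 0
    for d in char_descriptors:
        dists = np.linalg.norm(template_descriptors - d, axis=1)
        order = np.argsort(dists)
        # Accept a match only if the best candidate is clearly better
        # than the second best (Lowe-style ratio test).
        if len(order) > 1 and dists[order[0]] < thresh * dists[order[1]]:
            votes += 1
    return votes

def select_keywords(characters, template_descriptors, min_votes=3):
    """Keep the indices of characters whose descriptor votes pass a threshold."""
    return [i for i, c in enumerate(characters)
            if match_votes(c, template_descriptors) >= min_votes]
```

Characters selected this way for both documents would then play the role of "signatures" in the subsequent verification stage.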
With the development of biometric recognition technology, sketch face recognition has been widely applied to help the police confirm the identity of criminal suspects. Most existing recognition methods use image features directly, so the key facial parts are not exploited sufficiently. This paper presents a sketch face recognition method based on weighted fusion of multiple P-HOG features. First, the global face image and the local face images containing key facial components are divided into patches based on a spatial scale pyramid, and global and local P-HOG features are extracted, respectively. The dimensionality of the global and local features is then reduced using PCA and NLDA. Finally, the features are weighted according to their sensitivity and fused, and a nearest-neighbor classifier completes the recognition. Experimental results on different databases show that the proposed method outperforms state-of-the-art methods.
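The extract-reduce-fuse-classify pipeline can be sketched with plain numpy. This is a simplified stand-in: `patch_hog` computes single-scale patch orientation histograms rather than true pyramid P-HOG, the fusion weight `w` is a placeholder for the paper's sensitivity-based weighting, and NLDA is omitted.

```python
import numpy as np

def patch_hog(img, patch=8, bins=9):
    """Orientation histogram per non-overlapping patch (a simplified,
    single-scale stand-in for P-HOG)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            a = ang[i:i + patch, j:j + patch].ravel()
            m = mag[i:i + patch, j:j + patch].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-8))
    return np.concatenate(feats)

def pca_project(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def fuse_and_classify(global_feats, local_feats, labels, query_g, query_l, w=0.6):
    """Weighted fusion of global/local features, then nearest neighbour.
    (w is an illustrative fixed weight, not the paper's learned sensitivity.)"""
    gallery = np.hstack([w * global_feats, (1 - w) * local_feats])
    query = np.hstack([w * query_g, (1 - w) * query_l])
    d = np.linalg.norm(gallery - query, axis=1)
    return labels[int(np.argmin(d))]
```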
Person re-identification methods currently encounter challenges in feature learning, primarily due to difficulties in expressing the correlation between local features and in integrating global and local features effectively. To address these issues, a pose-guided person re-identification method with Two-Level Channel–Spatial Feature Integration (TLCSFI) is proposed. TLCSFI implements a two-level integration mechanism. At the first level, it integrates the spatial information of local features to generate fine-grained spatial features. At the second level, the fine-grained spatial feature and the coarse-grained channel feature are integrated to complete channel–spatial feature integration. Specifically, a Pose-based Spatial Feature Integration (PSFI) module is introduced to generate the pose union feature; it calculates intra-body affinity to guide the integration of spatial information among local pose feature maps. A Channel and Spatial Union Feature Integration (CSUFI) module is then proposed to efficiently integrate the channel information of the global feature and the spatial information of the pose union feature. In CSUFI, two separate networks extract channel and spatial information, respectively, which are then weighted and integrated. Experiments on three publicly available datasets demonstrate the competitive performance of TLCSFI.
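The two integration levels could look roughly like the numpy sketch below. This is only one plausible reading of the abstract: the tensor shapes, the affinity-as-dot-product choice, and both function names are assumptions, not the published PSFI/CSUFI definitions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pose_spatial_integration(part_maps):
    """First level (PSFI-like sketch): fuse K local pose feature maps of
    shape (K, C, H, W) into one pose-union map, weighting each part by
    its affinity with the other body parts."""
    pooled = part_maps.mean(axis=(2, 3))             # (K, C) part descriptors
    affinity = softmax(pooled @ pooled.T, axis=1)    # (K, K) intra-body affinity
    weights = affinity.mean(axis=0)                  # overall weight per part
    return np.tensordot(weights, part_maps, axes=1)  # (C, H, W) pose union

def channel_spatial_integration(global_feat, pose_union):
    """Second level (CSUFI-like sketch): channel weights from the global
    feature and spatial weights from the pose-union map gate each other."""
    chan = softmax(global_feat.mean(axis=(1, 2)))    # (C,) channel weights
    spat = pose_union.mean(axis=0)                   # (H, W) spatial map
    return global_feat * chan[:, None, None] + pose_union * spat[None]
```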
Most parameter-based online signature verification methods achieve correspondence between the points of two signatures by minimizing the accumulation of their local feature distances. Matching based on local feature distances alone is inadequate, since a point carries not only local features but also the distribution of the remaining points relative to it. One useful way to obtain correspondences between points on two shapes and measure their similarity is the shape context, since this descriptor describes the distributive relationship between a reference point and the remaining points on a shape. In this paper, we introduce a shape context descriptor for describing an online signature point that contains both 2D spatial information and a time stamp. A common algorithm, dynamic time warping (DTW), is used for the elastic matching between two signatures. When combining shape contexts and local features, we achieve better results than when using local features alone. We evaluate the proposed method on a signature database from the First International Signature Verification Competition (SVC2004). Experimental results demonstrate that the shape context is a good feature and provides useful complementary information for describing a signature point. The best result, obtained by combining shape contexts and local features, yields an Equal Error Rate (EER) of 6.77% for five references.
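The elastic-matching core described above is standard DTW with a pluggable point-distance function. A minimal sketch follows; the combined cost in `point_dist` (weighted sum of a local-feature distance and a chi-square shape-context distance, with illustrative weights) is an assumption about how the combination might be realized, not the paper's exact formula.

```python
import numpy as np

def dtw(seq_a, seq_b, dist):
    """Dynamic time warping: minimal accumulated distance over all
    monotonic alignments of two point sequences."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def point_dist(p, q, w_local=0.5, w_context=0.5):
    """Hypothetical combined cost: a local-feature distance plus a
    chi-square distance between shape-context histograms."""
    local = np.linalg.norm(p["local"] - q["local"])
    hp, hq = p["context"], q["context"]
    context = 0.5 * np.sum((hp - hq) ** 2 / (hp + hq + 1e-8))
    return w_local * local + w_context * context
```

A genuine signature compared against itself accumulates zero cost along the diagonal alignment, which is the sanity check one would run first.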
In face recognition (FR), many algorithms use only a single type of facial feature, either global or local, and cannot maintain good performance under complicated variations of facial images. To extract robust facial features, this paper proposes a novel Semi-Supervised Discriminant Analysis (SSDA) criterion that nonlinearly combines global and local features. To further enhance the discriminant power of SSDA features, the geometric distribution weight information of the training data is also incorporated into the criterion. Based on the SSDA criterion, we design an iterative algorithm that automatically determines the combination parameters and the optimal projection matrix, with the combination parameters guaranteed to fall into the interval [0, 1]. The proposed SSDA method is evaluated on the ORL, FERET and CMU PIE face databases, and the experimental results demonstrate that it achieves superior performance.
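The idea of choosing a combination parameter constrained to [0, 1] can be illustrated with a much simpler stand-in than the paper's iterative SSDA algorithm: a linear (not nonlinear) combination of global and local features, with the weight picked by grid search over a Fisher-style discriminability score. All function names and the scoring criterion here are illustrative assumptions.

```python
import numpy as np

def scatter(X, y):
    """Between-class (Sb) and within-class (Sw) scatter matrices."""
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    mu = X.mean(axis=0)
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
        Sw += (Xc - mc).T @ (Xc - mc)
    return Sb, Sw

def fisher_score(X, y):
    """Trace-ratio discriminability of a feature set."""
    Sb, Sw = scatter(X, y)
    return np.trace(Sb) / (np.trace(Sw) + 1e-8)

def best_alpha(Xg, Xl, y, grid=np.linspace(0, 1, 21)):
    """Search the combination weight alpha over a grid in [0, 1], so the
    constraint holds by construction (a simplified stand-in for the
    paper's iterative optimization)."""
    scores = [fisher_score(a * Xg + (1 - a) * Xl, y) for a in grid]
    return float(grid[int(np.argmax(scores))])
```

With informative global features and uninformative local ones, the search correctly pushes the weight to the global side.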