Recently, face recognition (FR) has become an important research topic due to the increase in video surveillance. However, surveillance images may contain vague non-frontal faces, often with unidentifiable face poses or captured in unconstrained environments with poor illumination or darkness. As a result, most FR algorithms do not perform well on such images. Moreover, it is common in the surveillance field that only a Single Sample per Person (SSPP) is available for identification. To address these issues, visible-spectrum infrared images are used, which work in entirely dark conditions without any light variation. Furthermore, to improve FR under both the low-quality SSPP and unidentifiable-pose problems, this paper proposes an approach that synthesizes a 3D face model and pose variations. A 2D frontal face image is used to generate a 3D face model, from which several virtual face test images with different poses are synthesized. The well-known Surveillance Camera's Face (SCface) database is used to evaluate the proposed algorithm with PCA, LDA, KPCA, KFA, RSLDA, LRPP-GRR, deep KNN and DLIB deep learning. Simulations verify the effectiveness of the proposed method, with increases in average recognition rates of up to 10%, 27.69%, 14.62%, 25.38%, 57.46%, 57.43%, 37.69% and 63.28%, respectively, on the SCface database.
On the basis of the recent binary signal detection theory (BSDT), optimal recognition algorithms for complex images are constructed and their optimal performance is calculated. A methodology for comparing BSDT predictions with measured human performance is developed and applied to explain a particular face recognition experiment. The BSDT makes possible computer codes with recognition performance better than that of humans, and its fundamental discreteness is consistent with the experiment. Related neurobiological and behavioral effects are briefly discussed.
A small change in an image can cause a dramatic change in its signals. The visual system must be able to ignore these changes, yet remain specific enough to perform recognition. This work provides biologically grounded insights into 2D translation and scaling invariance and 3D pose invariance, without imposing a strain on memory. The model can be divided into lower and higher visual stages. The lower visual stage models the visual pathway from the retina to the striate cortex (V1), whereas the modeling of the higher visual stage is based mainly on current psychophysical evidence.
To counter the effects of illumination changes and differences in personal features on the face recognition rate, this paper presents a new face recognition algorithm based on Gabor wavelets and Locality Preserving Projections (LPP). The high dimensionality of the Gabor filter banks is handled effectively, and the sensitivity of LPP to illumination changes is overcome. First, global image features are extracted by exploiting the good spatial locality and orientation selectivity of Gabor wavelet filters. Then the dimensionality is reduced using LPP, which preserves the local information of the image well. The experimental results show that the algorithm can effectively extract features relating to facial expression, pose and other information. It also effectively reduces the influence of illumination changes and differences in personal features, improving the face recognition rate to 99.2%.
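As a rough illustration of the Gabor stage described above, the following sketch (Python with NumPy; the kernel size, wavelengths and orientations are arbitrary illustrative choices, not the paper's) builds a small bank of Gabor kernels and stacks the filter responses into one high-dimensional feature vector, the kind of vector LPP would then reduce:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    """Real part of a Gabor kernel: a Gaussian envelope times a cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(image, wavelengths=(4, 8), n_orientations=4):
    """Concatenate responses over scales/orientations into one feature vector."""
    F = np.fft.fft2(image)
    feats = []
    for lam in wavelengths:
        for k in range(n_orientations):
            kern = gabor_kernel(15, lam, k * np.pi / n_orientations, sigma=lam / 2)
            # circular convolution via the FFT, for brevity
            resp = np.real(np.fft.ifft2(F * np.fft.fft2(kern, image.shape)))
            feats.append(resp.ravel())
    return np.concatenate(feats)
```

With two wavelengths and four orientations the feature vector is eight times the pixel count, which is why a dimensionality reduction step such as LPP follows.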
Face recognition is a vastly researched topic in the field of computer vision. A great deal of work has been done on facial recognition in both two and three dimensions, but work on face recognition that is invariant to image processing attacks is very limited. This paper examines three classes of image processing attacks on face recognition systems: image enhancement attacks, geometric attacks and image noise attacks. Well-known machine learning techniques are used to train and test the face recognition system on two databases, the Bosphorus Database and the University of Milano Bicocca three-dimensional (3D) Face Database (UMBDB). Three classes of classification models, namely discriminant analysis, support vector machines and k-nearest neighbors, along with ensemble techniques, are implemented, and the significance of the machine learning techniques is discussed. Visual verification is performed under multiple image processing attacks.
This paper presents a real-time face recognition system. To keep the system real time, no external time-consuming feature extraction is used; instead, the gray-level values of the raw pixels that make up the face pattern are fed directly to the recognizer. To absorb the resulting high dimensionality of the input space, support vector machines (SVMs), which are known to work well even in high-dimensional spaces, are used as the face recognizer. Furthermore, a modified polynomial kernel (the local correlation kernel) is utilized to take account of prior knowledge about facial structure and serves as an alternative feature extractor. Since SVMs were originally developed for two-class classification, their basic scheme is extended to multiface recognition by adopting a one-per-class decomposition. To make a final classification from the several one-per-class SVM outputs, a neural network (NN) is used as the arbitrator. Experiments with the ORL database show a recognition rate of 97.9% and a speed of 0.22 seconds per face with 40 classes.
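The one-per-class decomposition above can be sketched in a few lines. This toy version swaps in a kernel perceptron as a lightweight stand-in for SVM training (the decomposition and kernel are the same idea), and arbitrates with a simple max-score rule rather than the paper's NN arbitrator; all function names are illustrative:

```python
import numpy as np

def poly_kernel(A, B, degree=2):
    """Polynomial kernel (x.y + 1)^d between rows of A and rows of B."""
    return (A @ B.T + 1.0) ** degree

def train_one_per_class(X, y, n_classes, degree=2, epochs=50):
    """One binary machine per class, trained with a kernel perceptron
    (a simple stand-in for SVM optimization; the scheme is the same)."""
    K = poly_kernel(X, X, degree)
    alphas = np.zeros((n_classes, len(X)))
    for c in range(n_classes):
        t = np.where(y == c, 1.0, -1.0)            # one-vs-rest targets
        for _ in range(epochs):
            for j in range(len(X)):
                if t[j] * ((alphas[c] * t) @ K[:, j]) <= 0:
                    alphas[c, j] += 1.0            # misclassified: update
    return alphas

def predict_one_per_class(alphas, X, y, X_new, n_classes, degree=2):
    """Arbitrate between the per-class machines by taking the highest score."""
    K = poly_kernel(X, X_new, degree)
    scores = [(alphas[c] * np.where(y == c, 1.0, -1.0)) @ K
              for c in range(n_classes)]
    return np.stack(scores).argmax(axis=0)
```

Feeding raw pixel vectors directly into such a kernel machine is what lets the system skip a separate feature extraction stage.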
The Hausdorff distance is a deformation-tolerant measure between two sets of points. Its main advantage is that it needs no explicit correspondence between the points of the two sets. This paper presents the application of a novel supervised Hausdorff-based measure to automatic face recognition. The measure is designed to minimize the distance between sets of the same class (subject) while maximizing the distance between sets of different classes.
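The classical (unsupervised) Hausdorff distance the measure builds on can be stated in a few lines; the supervised class-aware weighting the paper introduces is not shown here:

```python
import numpy as np

def directed_hausdorff(A, B):
    """h(A, B): the farthest any point of A is from its nearest point of B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return d.min(axis=1).max()

def hausdorff(A, B):
    """Symmetric Hausdorff distance; no point correspondence is required."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))
```

Note the asymmetry of the directed version, which is why the symmetric measure takes the maximum of both directions.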
Human faces are difficult to interpret because they are highly variable. Over the last few decades, various techniques have been proposed for computer recognition of human faces. In this paper, we introduce an Elastic Graph Dynamic Link Model to automate the process of facial recognition. It is integrated with the Active Contour Model to provide an effective and efficient means of facial contour extraction and recognition. A portrait gallery of 100 distinct facial images is used for network training. Experimental results are presented for a database of 1,020 tested face images, which were obtained under conditions of widely varying facial expressions, viewing perspectives and image sizes. An overall average correct recognition rate of over 86% is attained.
The profile view of a face provides complementary structure that is not seen in the frontal view. A classification system combining both frontal and profile views of faces can improve classification accuracy, and it is harder to deceive, because it is difficult to fool profile face identification with a mask. This paper proposes a new face recognition approach, applicable to both frontal and profile faces, to build a robust combined multiple-view face identification system. The recognition employs a novel facial corner coding and matching method and integrates the outline and interior facial parts in profile matching. The proposed multiview modified Hausdorff distance fuses multiple views of faces to achieve improved system performance.
The integration of multiple classifiers promises higher classification accuracy and robustness than can be obtained with a single classifier. We address two problems: (a) automatic recognition of human faces using a novel fusion approach based on an adaptive LVQ network architecture, and (b) improving face recognition accuracy to nearly 100% while keeping the learning time per face image constant, which is a scalability issue. The learning time per face image remains constant irrespective of the data size. The integration incorporates "divide and conquer" modularity principles: divide the learning data into small modules, train individual modules separately using a compact LVQ model structure that still encompasses all the variance, and fuse the trained modules to achieve a recognition rate of nearly 100%. The concept of Merged Classes (MCs) is introduced to enhance the accuracy rate. The proposed integrated architecture has shown its feasibility on a collection of 1130 face images of 158 subjects from three standard databases: ORL, PICS and KU. Empirical results yield an accuracy rate of 100% on the face recognition task for 40 subjects at 0.056 seconds per image. Thus, the system has shown potential for adoption in real-time application domains.
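The LVQ learning at the core of each module can be illustrated with a minimal LVQ1 sketch (one prototype per class, illustrative parameters); the paper's modular divide-and-conquer fusion and Merged Classes are not reproduced here:

```python
import numpy as np

def train_lvq1(X, y, n_classes, lr=0.05, epochs=30, seed=0):
    """LVQ1: the winning prototype is pulled toward same-class samples
    and pushed away from different-class samples."""
    rng = np.random.default_rng(seed)
    # initialise each class prototype at its class mean
    protos = np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            w = np.linalg.norm(protos - X[i], axis=1).argmin()  # winner
            step = lr * (X[i] - protos[w])
            protos[w] += step if w == y[i] else -step
    return protos

def predict_lvq(protos, X):
    """Nearest-prototype classification."""
    return np.linalg.norm(X[:, None] - protos[None], axis=-1).argmin(axis=1)
```

Because the model keeps only a fixed number of prototypes per class, the learning cost per presented image stays constant as the dataset grows, which is the scalability property the abstract emphasizes.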
This paper introduces a novel method for the recognition of human faces in two-dimensional digital images, using a new feature extraction method and a Radial Basis Function (RBF) neural network with a Hybrid Learning Algorithm (HLA) as the classifier. The proposed feature extraction method includes human face localization derived from shape information, using a proposed distance measure called the Facial Candidate Threshold (FCT), together with the Pseudo Zernike Moment Invariant (PZMI) and a newly defined parameter, the Correct Information Ratio (CIR), for disregarding irrelevant information in face images. The effect of these parameters on improving the recognition rate is studied, and we also evaluate the effect of the PZMI order on the recognition rate and learning speed of the proposed technique. Simulation results on the Olivetti Research Laboratory (ORL) face database indicate that high-order PZMI, together with the derived face localization technique for feature extraction, yields a recognition rate of 99.3%.
In this paper, we present a survey of pattern recognition applications of Support Vector Machines (SVMs). Since SVMs show good generalization performance on many real-life datasets and the approach is properly motivated theoretically, they have been applied to a wide range of applications. This paper gives a brief introduction to SVMs and summarizes their various pattern recognition applications.
In this paper, a novel image projection analysis method (UIPDA) is first developed for image feature extraction. In contrast to Liu's projection discriminant method, UIPDA has the desirable property that the projected feature vectors are mutually uncorrelated. Also, a new LDA technique called EULDA is presented for further feature extraction. The proposed methods are tested on the ORL and the NUST603 face databases. The experimental results demonstrate that: (i) UIPDA is superior to Liu's projection discriminant method and more efficient than Eigenfaces and Fisherfaces; (ii) EULDA outperforms the existing PCA plus LDA strategy; (iii) UIPDA plus EULDA is a very effective two-stage strategy for image feature extraction.
A two-stage face recognition method is presented in this paper. In the first stage, the set of candidate patterns is narrowed down, taking global similarity into account. In the second stage, a synergetic approach is employed to perform further recognition. The face image is segmented into meaningful regions, each of which is represented as a prototype vector. The similarity between a given region of the test pattern and a stored sample is obtained as the order parameter, which serves as an element of the order vector. Finally, a modified definition of the potential function is given, and the dynamic model of recognition is derived from it. The effectiveness of the proposed method is experimentally confirmed.
Fisher Linear Discriminant Analysis (LDA) has been successfully used as a data discrimination technique for face recognition. This paper develops a novel subspace approach to determining the optimal projection. The algorithm effectively solves the small sample size problem and eliminates the possibility of losing discriminative information. Through theoretical derivation, we compare our method with typical PCA-based LDA methods and show the relationship between our new method and the perturbation-based method. The feasibility of the new algorithm is demonstrated by comprehensive evaluation and comparison experiments with existing LDA-based methods.
This paper presents a new regularization technique to deal with the small sample size (S3) problem in linear discriminant analysis (LDA) based face recognition. Regularization of the within-class scatter matrix Sw has been shown to be a good direction for solving the S3 problem, because the solution is found in the full space instead of a subspace. The main limitation of regularization is the very high computation required to determine the optimal parameters. In view of this limitation, this paper re-defines the three-parameter regularization of the within-class scatter matrix in a form suitable for parameter reduction. Based on this new definition, we derive an explicit single-parameter (t) expression for determining the three parameters and develop a one-parameter regularization of the within-class scatter matrix, together with a simple and efficient method for determining the value of t. It is also proven that the new regularized within-class scatter matrix approaches the original within-class scatter matrix Sw as the single parameter tends to zero. A novel one-parameter regularized linear discriminant analysis (1PRLDA) algorithm is then developed. The proposed 1PRLDA method for face recognition has been evaluated on two publicly available databases, namely the ORL and FERET databases. The average recognition accuracies over 50 runs are 96.65% for ORL and 94.00% for FERET. Compared with existing LDA-based methods for solving the S3 problem, the proposed 1PRLDA method gives the best performance.
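The core idea, making the singular within-class scatter matrix invertible with a regularizer that vanishes as its parameter tends to zero, can be sketched as follows. This uses the simplest possible regularizer, Sw + t*I, rather than the paper's three-parameter form, and all names are illustrative:

```python
import numpy as np

def scatter_matrices(X, y, classes):
    """Within- and between-class scatter for labelled samples (rows of X)."""
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
    return Sw, Sb

def regularized_lda(X, y, classes, t=1e-3, n_components=1):
    """Replace singular Sw with Sw + t*I so the Fisher criterion is solvable.
    As t -> 0 the regularized matrix tends to Sw itself."""
    Sw, Sb = scatter_matrices(X, y, classes)
    Sw_reg = Sw + t * np.eye(Sw.shape[0])
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw_reg, Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs[:, order[:n_components]].real
```

When there are fewer samples than dimensions, Sw is rank-deficient and plain LDA fails; any t > 0 makes Sw_reg full rank, while small t keeps the solution close to the unregularized criterion.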
In face recognition tasks, Fisher discriminant analysis (FDA) is one of the promising methods for dimensionality reduction and discriminant feature extraction. The objective of FDA is to find an optimal projection matrix that maximizes the between-class distance and simultaneously minimizes the within-class distance. The main limitation of traditional FDA is the so-called Small Sample Size (S3) problem: it renders the within-class scatter matrix singular, so traditional FDA cannot be applied directly to pattern classification. To overcome the S3 problem, this paper proposes a novel two-step single parameter regularization Fisher discriminant (2SRFD) algorithm for face recognition. The first, semi-regularized step is based on a rank lifting theorem and adjusts both the projection directions and their corresponding weights. Our previous three-to-one parameter regularized technique is exploited in the second stage, which changes only the weights of the projection directions. It is shown that the final regularized within-class scatter matrix approaches the original within-class scatter matrix as the single parameter tends to zero. The method also has favorable computational complexity. The proposed method has been tested and evaluated on three publicly available databases, namely the ORL, CMU PIE and FERET face databases. Compared with existing state-of-the-art FDA-based methods for solving the S3 problem, the proposed 2SRFD approach gives the best performance.
Despite the variety of approaches and tools studied, face recognition is not accurate or robust enough to be used in uncontrolled environments. Recently, infrared (IR) imagery of human faces has been considered as a promising alternative to visible imagery. IR face recognition is a biometric that offers the security of fingerprints with the convenience of face recognition. However, IR has its own limitations; for example, the presence of eyeglasses influences IR more than visible imagery. In this paper, a method based on Log-Gabor wavelets for IR face recognition is proposed. The method first derives a Log-Gabor feature vector from the IR face image, then obtains independent Log-Gabor features using independent component analysis (ICA). Experimental results show that the proposed method works well, even in challenging situations.
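A minimal frequency-domain sketch of the radial Log-Gabor transfer function is shown below (center frequency and bandwidth are illustrative; the orientation components and the ICA stage that follow in the paper are omitted). Unlike an ordinary Gabor filter, a Log-Gabor filter has no DC component:

```python
import numpy as np

def log_gabor_radial(shape, f0=0.1, sigma_ratio=0.55):
    """Radial Log-Gabor transfer function G(f) = exp(-log(f/f0)^2 / (2 log(s)^2))
    on the FFT frequency grid; the DC bin is explicitly zeroed."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)
    f[0, 0] = 1.0                       # placeholder to avoid log(0)
    G = np.exp(-np.log(f / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    G[0, 0] = 0.0                       # no DC response, by construction
    return G

def log_gabor_response(image, f0=0.1):
    """Filter an image in the frequency domain; return the real response."""
    G = log_gabor_radial(image.shape, f0)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))
```

The zero DC response is what gives the feature vector some built-in insensitivity to overall brightness, a useful property for IR imagery.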
Accurate measurement of pose and expression can increase the efficiency of recognition systems by avoiding the recognition of spurious faces. This paper presents a novel and robust pose- and expression-invariant face recognition method that improves on existing face recognition techniques. First, we apply the TSL color model to detect the facial region and estimate the X-Y-Z pose vector of the face using connected-components analysis. Second, the input face is mapped by a deformable 3D facial model. Third, the mapped face is transformed to a frontal face appropriate for recognition, using the estimated pose vector and the expression action unit. Finally, the damaged regions that occur during normalization are reconstructed using PCA. Several empirical tests validate the face detection model and the method for estimating facial pose and expression. In addition, the tests suggest that the recognition rate is greatly boosted by the normalization of pose and expression.
This paper describes how a facial albedo map can be recovered from a single image using a statistical model that captures variations in surface normal direction. We fit the model to intensity data using constraints on the surface normal direction provided by Lambert's law and then use the differences between observed and reconstructed image brightness to estimate the albedo. We show that this process is stable under varying illumination. We then show how eigenfaces trained on albedo maps may provide a better representation for illumination insensitive recognition than those trained on raw image intensity.
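The Lambert's-law inversion at the heart of this approach can be illustrated in a few lines (the statistical surface-normal model and the eigenface training are not shown; the function name and shapes are illustrative). Given an intensity image, per-pixel unit normals and a unit light direction, the albedo map follows by dividing out the shading:

```python
import numpy as np

def lambertian_albedo(intensity, normals, light, eps=1e-6):
    """Invert Lambert's law I = albedo * max(n . l, 0) pixel-wise.
    normals: (H, W, 3) unit surface normals; light: unit 3-vector."""
    shading = np.clip(normals @ light, eps, None)   # n . l per pixel
    return intensity / shading
```

Because the recovered albedo is independent of the light direction used to form the image, features computed from it are naturally less illumination-sensitive than raw intensities.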