Multispectral palmprint recognition is a promising biometric technology that has attracted increasing attention in security applications due to its high recognition accuracy and ease of use. It is worth noting that, although multispectral palmprint data contain rich complementary information, multispectral palmprint recognition methods remain vulnerable to adversarial attacks: even if only the image of a single spectrum is attacked, the impact on the recognition results can be catastrophic. We therefore propose a robustness-enhanced multispectral palmprint recognition method comprising a model-interpretability-based adversarial detection module and a robust multispectral fusion module. Inspired by model interpretation techniques, we found that clean palmprint images and adversarial examples differ markedly after CAM visualization, and that building an adversarial detector on the visualized images leads to better detection results. Finally, the weights of clean images and adversarial examples in the fusion layer are dynamically adjusted to obtain correct recognition results. Experiments show that our method makes full use of the image features that are not attacked and effectively improves the robustness of the model.
This paper proposes an intelligent 2ν-support vector machine based match-score fusion algorithm that improves the performance of face and iris recognition by integrating image quality. The proposed algorithm applies the redundant discrete wavelet transform to evaluate the underlying linear and non-linear features present in the image. A composite quality score is computed to determine the extent of smoothness, sharpness, noise, and other pertinent features present in each subband of the image. The match score and the corresponding quality score of an image are fused using the 2ν-support vector machine to improve verification performance. The proposed algorithm is experimentally validated using the FERET face database and the CASIA iris database. The verification performance and statistical evaluation show that the proposed algorithm outperforms existing fusion algorithms.
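As an illustration of quality-aware score fusion, the following minimal Python sketch trains a ν-SVM on two-dimensional inputs combining a match score and a quality score. scikit-learn's NuSVC stands in for the 2ν-SVM of the paper, and all data and parameter values are hypothetical, not taken from the reported experiments.

```python
# Minimal sketch of quality-aware match-score fusion, assuming the match and
# quality scores are already computed. NuSVC is a stand-in for the 2nu-SVM.
import numpy as np
from sklearn.svm import NuSVC

rng = np.random.default_rng(0)

# Hypothetical training data: each row is [match_score, quality_score];
# label 1 = genuine comparison, label 0 = impostor comparison.
X_train = np.vstack([
    rng.normal([0.8, 0.7], 0.1, size=(200, 2)),   # genuine pairs
    rng.normal([0.4, 0.6], 0.1, size=(200, 2)),   # impostor pairs
])
y_train = np.concatenate([np.ones(200), np.zeros(200)])

fusion_svm = NuSVC(nu=0.2, kernel="rbf", gamma="scale")
fusion_svm.fit(X_train, y_train)

# Fused accept/reject decision for a new (score, quality) pair.
decision = fusion_svm.predict([[0.75, 0.9]])
print("accept" if decision[0] == 1 else "reject")
```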
The brain activity observed at EEG electrodes is influenced by volume conduction and by the functional connectivity of the person performing a task. When the task is a biometric test, the EEG signals represent a unique “brain print”, which is defined by the functional connectivity reflected in the interactions between electrodes, whereas the conduction components cause trivial correlations. Orthogonalization using autoregressive modeling minimizes the conduction components, and the residuals are then related to features correlated with the functional connectivity. However, the orthogonalization can be unreliable for high-dimensional EEG data. We have found that the dimensionality can be significantly reduced if the baselines required for estimating the residuals are modeled using relevant electrodes. In our approach, the required models are learnt by a Group Method of Data Handling (GMDH) algorithm, which we have made capable of discovering reliable models from multidimensional EEG data. In our experiments on the EEG-MMI benchmark data, which include 109 participants, the proposed method correctly identified all the subjects and provided a statistically significant (p < 0.01) improvement in identification accuracy. The experiments have shown that the proposed GMDH method can learn new features from multi-electrode EEG data that are capable of improving the accuracy of biometric identification.
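To make the residual-feature idea concrete, the sketch below models the baseline of one electrode from a few "relevant" electrodes and keeps the residual as a connectivity-related feature. Ordinary least squares is used as a simplified linear stand-in for the GMDH models of the paper; the channel indices and synthetic data are illustrative assumptions.

```python
# Minimal sketch: model one channel's baseline from relevant channels and
# keep the residual as a feature (linear stand-in for the GMDH models).
import numpy as np

def channel_residual(eeg, target, relevant):
    """eeg: array (n_samples, n_channels); target: channel index;
    relevant: indices of channels used to model the baseline."""
    X = eeg[:, relevant]
    y = eeg[:, target]
    # Least-squares baseline y_hat = Xb @ w (with an intercept column).
    Xb = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return y - Xb @ w

# Toy usage with synthetic data standing in for a 64-channel recording.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((1000, 64))
r = channel_residual(eeg, target=0, relevant=[3, 7, 21])
print(r.var())  # residual variance after removing the modeled baseline
```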
Palmprint identification refers to searching a database for the palmprint template that comes from the same palm as a given palmprint input. The identification process involves preprocessing, feature extraction, feature matching, and decision-making. As a key step in this process, we propose in this paper a new feature extraction method that converts a palmprint image from the spatial domain to the frequency domain using the Fourier transform. The features extracted in the frequency domain are used as indexes to the palmprint templates in the database, and the search for the best match is conducted in a layered fashion. The experimental results show that palmprint identification based on feature extraction in the frequency domain is effective in terms of both accuracy and efficiency.
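A minimal sketch of frequency-domain indexing is given below, assuming a preprocessed grayscale palmprint ROI: the ring-wise energies of the Fourier magnitude spectrum serve as coarse index features. The ring layout and normalization are illustrative choices, not the exact scheme of the paper.

```python
# Minimal sketch: FFT magnitude-spectrum ring energies as index features.
import numpy as np

def fft_ring_features(roi, n_rings=8):
    spectrum = np.fft.fftshift(np.abs(np.fft.fft2(roi)))
    h, w = roi.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    r_max = radius.max()
    features = []
    for i in range(n_rings):
        mask = (radius >= i * r_max / n_rings) & (radius < (i + 1) * r_max / n_rings)
        features.append(spectrum[mask].sum())
    features = np.asarray(features)
    return features / features.sum()  # normalize so vectors are comparable

# Usage on a synthetic ROI; templates with the closest feature vectors would
# be examined first in a layered (coarse-to-fine) search.
roi = np.random.default_rng(0).random((128, 128))
print(fft_ring_features(roi))
```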
The biometric verification task is to determine whether or not an input and a template belong to the same individual. In the context of automatic fingerprint verification the task consists of three steps: feature extraction, where features (typically minutiae) are extracted from each fingerprint; scoring, where the degree of match between the two sets of features is determined; and decision, where the score is used to accept or reject the hypothesis that the input and template belong to the same individual. This paper focuses on the final decision step, which is a binary classification problem involving a single score variable. The commonly used decision method is to learn a score threshold from a labeled set of inputs and templates by first determining the receiver operating characteristic (ROC) of the task. The ROC method works well when the fingerprint image is well registered. The paper shows that when there is uncertainty due to fingerprint quality, e.g. when the input is a latent print or a partial print, the decision method can be improved by using the likelihood ratio of match/non-match. The likelihood ratio is obtained by modeling the distributions of same-finger and different-finger scores using parametric distributions. The parametric forms considered are Gaussian and Gamma distributions whose parameters are learnt from labeled training samples. The performances of the likelihood and ROC methods are compared for varying numbers of minutiae points available for verification. Using either Gaussian or Gamma parametric distributions, the likelihood method has a lower error rate than the ROC method when few minutiae points are available, and the two methods converge to the same accuracy as more minutiae points become available.
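The likelihood-ratio decision can be sketched as follows, assuming labeled genuine (same-finger) and impostor (different-finger) scores are available. Gaussian and Gamma models are fit with scipy; the synthetic scores and the unit threshold are illustrative, not values from the paper.

```python
# Minimal sketch of the likelihood-ratio decision with parametric score models.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
genuine_scores = rng.normal(60, 10, 500)    # hypothetical same-finger scores
impostor_scores = rng.gamma(2.0, 8.0, 500)  # hypothetical different-finger scores

# Parametric models learned from the labeled training scores.
mu, sigma = stats.norm.fit(genuine_scores)
a, loc, scale = stats.gamma.fit(impostor_scores, floc=0)

def likelihood_ratio(score):
    p_same = stats.norm.pdf(score, mu, sigma)
    p_diff = stats.gamma.pdf(score, a, loc=loc, scale=scale)
    return p_same / p_diff

score = 55.0
print("accept" if likelihood_ratio(score) > 1.0 else "reject")
```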
The paper describes an integrated recognition-by-parts architecture for reliable and robust face recognition. Reliability refers to the ability to deploy full-fledged, operational biometric engines, while robustness refers to handling adverse image conditions that include, among others, uncooperative subjects, occlusion, and temporal variability. The proposed architecture is model-free and non-parametric. Its conceptual framework draws support from discriminative methods using likelihood ratios. At the conceptual level it links forensics and biometrics, while at the implementation level it links the Bayesian framework and statistical learning theory (SLT). Layered categorization starts with face detection using implicit rather than explicit segmentation. It proceeds with face authentication, which involves feature selection of local patch instances including dimensionality reduction, exemplar-based clustering of patches into parts, and data fusion for matching using boosting driven by parts that play the role of weak learners. Face authentication shares the same implementation with face detection. The implementation, driven by transduction, employs proximity and typicality (ranking), realized using strangeness and p-values, respectively. The feasibility and reliability of the proposed architecture are illustrated using FRGC data. The paper concludes with suggestions for augmenting and enhancing the scope and utility of the proposed architecture.
Modeling and analyzing the dynamic shape of human motion is a challenging task owing to temporal variations in shape and to multiple sources of observed shape variation such as viewpoint, motion speed, and clothing. We present a new framework for dynamic shape analysis based on temporal normalization and factorized shape-style analysis. Using a nonlinear generative model with motion-manifold embedding in a low-dimensional space, we detect cycles of periodic motion, such as gait, in different views and synthesize temporally aligned shape sequences from the same type of motion at different speeds. Bilinear analysis of the temporally aligned shape sequences decomposes dynamic motion into time-invariant shape-style factors and time-dependent motion factors. We extend the bilinear model into a tensor shape model, a multilinear decomposition of dynamic shape sequences for view-invariant shape-style representations. The shape style is a view-invariant, time-invariant, and speed-invariant shape signature and is used as a feature vector for human identification. The shape style can be adapted to new environmental conditions by iterative estimation of style and content factors to reflect new observation conditions. We present experimental results of gait recognition using the CMU MoBo gait database and the USF gait challenge database.
Most existing iris recognition algorithms focus on the processing and recognition of ideal iris images acquired in a controlled environment. In this paper, we process nonideal iris images that are captured in unconstrained situations and are severely affected by gaze deviation, eyelid and eyelash occlusions, nonuniform intensity, motion blur, reflections, etc. The proposed iris recognition algorithm has three novelties compared to previous works: first, we deploy a region-based active contour model to segment a nonideal iris image with intensity inhomogeneity; second, genetic algorithms (GAs) are deployed to select a subset of informative texture features without compromising recognition accuracy; third, to speed up the matching process and to control the misclassification error, we apply a combined approach called adaptive asymmetrical support vector machines (AASVMs). The verification and identification performance of the proposed scheme is validated on three challenging iris image datasets, namely, the ICE 2005, the WVU Nonideal, and the UBIRIS Version 1 datasets.
Region covariance matrices (RCMs) have been developed as feature descriptors owing to their low dimensionality and their independence of scale and illumination. How to define a feature mapping vector that gives the constructed RCMs strong discriminating ability is still an open issue. In this paper, we focus on finding a more efficient feature mapping vector for RCMs as palmprint descriptors based on Gabor magnitude and phase (GMP) information. In particular, the Gabor magnitude (GM) features of each palmprint image approximately follow a lognormal distribution. For palmprint recognition, the logarithmic transformation of the GM proves to be important for the discriminating ability of the corresponding RCMs. All experiments are performed on the public Hong Kong Polytechnic University (PolyU) Palmprint Database of 7752 images. The results demonstrate the efficiency of our proposed method and also show that adding pixel locations and the intensity component to the feature mapping vector has a negative effect on palmprint recognition performance for our proposed Log_GMP-based RCM method.
The two-dimensional (2D) Gabor function has been recognized as a very useful tool for image feature extraction, owing to its optimal localization properties in both the spatial and frequency domains. This paper presents a novel palmprint feature extraction method based on the statistics of the decomposition coefficients of the Gabor wavelet transform. It is experimentally found that the magnitude coefficients of the Gabor wavelet transform within each subband approximately follow a lognormal distribution. Based on this observation, we create the palmprint representation using two simple statistics (mean and standard deviation) as feature components, computed after a logarithmic transformation of the Gabor-filtered magnitude coefficients for each subband at different orientations and scales. The optimal number of Gabor filters and the orientation of each Gabor filter are determined experimentally. For palmprint recognition, the popular Fisher linear discriminant (FLD) analysis is further applied to the constructed feature vectors to extract discriminative features and reduce dimensionality. All experiments are conducted on both the CCD-based Hong Kong PolyU Palmprint Database of 7752 images and the scanner-based BJTU_PalmprintDB (V1.0) of 3460 images. The results demonstrate the effectiveness of the proposed palmprint representation in achieving improved recognition performance.
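The log-Gabor-magnitude statistics can be sketched as follows, assuming a preprocessed grayscale palmprint ROI. skimage's Gabor filter is used; the numbers of scales and orientations here are illustrative, not the tuned values reported in the paper.

```python
# Minimal sketch: per-subband mean and standard deviation of log Gabor magnitudes.
import numpy as np
from skimage.filters import gabor

def log_gabor_stats(roi, frequencies=(0.1, 0.2, 0.3), n_orientations=6):
    features = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(roi, frequency=f, theta=theta)
            magnitude = np.hypot(real, imag)
            log_mag = np.log(magnitude + 1e-8)  # lognormal -> roughly normal
            features += [log_mag.mean(), log_mag.std()]
    return np.asarray(features)

roi = np.random.default_rng(0).random((64, 64))
vec = log_gabor_stats(roi)
print(vec.shape)  # two statistics per (scale, orientation) subband
```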
Distance metrics are widely used for similarity estimation, which plays a key role in fingerprint recognition. In this work we present a detailed comparison of 29 distinct distance metrics. Features of fingerprint images are extracted using the Fast Fourier Transform (FFT). Recognition rate, the receiver operating characteristic (ROC) curve, and time and space complexity are used to evaluate each distance metric. To consolidate our conclusions we used the standard fingerprint database available from the University of Bologna and the FVC2000 databases. After evaluating the 29 distance metrics, we found that the Sorgel distance metric performs best: its genuine acceptance rate (GAR) is observed to be about 5% higher than that of the traditional Euclidean distance metric at low false acceptance rates (FAR). The Sorgel distance gives a good GAR at low FAR with moderate computational complexity.
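For reference, the Sorgel (Soergel) distance compared above is shown in the minimal sketch below alongside the Euclidean distance; inputs are assumed to be non-negative feature vectors of equal length, and the example vectors are illustrative.

```python
# Minimal sketch of the Sorgel (Soergel) distance versus Euclidean distance.
import numpy as np

def sorgel_distance(x, y):
    # Sum of absolute differences normalized by the sum of element-wise maxima.
    return np.abs(x - y).sum() / np.maximum(x, y).sum()

def euclidean_distance(x, y):
    return np.linalg.norm(x - y)

a = np.array([0.2, 0.5, 0.1, 0.9])
b = np.array([0.3, 0.4, 0.2, 0.8])
print(sorgel_distance(a, b), euclidean_distance(a, b))
```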
This paper investigates the utility of texture and color for iris recognition systems. It contributes an improvement in system accuracy with a reduced feature vector size of just 1 × 3 and a reduction of the false acceptance rate (FAR) and false rejection rate (FRR). It avoids the iris normalization process traditionally used in iris recognition systems. The proposed method is compared with existing methods. Experimental results indicate that the proposed method using only color achieves an accuracy of 99.9993, an FAR of 0.0160, and an FRR of 0.0813. The computational time achieved is 947.7 ms.
Nowadays many techniques are being used to increase the reliability of human identification systems. The iris is a part of the human body that is well suited to biometric identification and has favorable characteristics. In this paper, we focus on the fact that the iris is a fractal phenomenon. During the production of new fractals, features are extracted by the Chaos Game mechanism; these features are useful and effective for iris identification. Iris identification with fractals and Chaos Game Theory involves three steps: the first step is generating a new fractal; the second step is extracting features during the first step; and the third step is iris identification based on the extracted features. We have named this technique Iris Identification based on Fractal and Chaos Game Theory (Iris-IFCGT). The technique has fractal properties such as stability under zoom, robustness to removal of part of the iris image, and insensitivity to rotation, as well as desirable speed, which helps avoid a time-consuming pattern recognition process.
Due to the intensive use of mobile phones for different purposes, these devices usually contain confidential information that must not be accessed by anyone other than the owner of the device. Furthermore, new-generation phones commonly incorporate an accelerometer that can be used to capture the acceleration signals produced by the owner's gait. Gait identification based on acceleration signals is now being considered as a new biometric technique that allows blocking the device when another person is carrying it. Although distance-based approaches such as Euclidean distance and dynamic time warping have been applied to this identification problem, they have difficulties when dealing with gaits at different speeds. For this reason, in this paper, a method to extract an average template from instances of the gait at different velocities is presented. This method has been tested on the gait signals of 34 subjects walking at different speeds (slow, normal, and fast) and has been shown to improve the performance of Euclidean distance and classical dynamic time warping.
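The idea of averaging gait instances recorded at different speeds can be sketched as follows: each instance is aligned to a reference cycle with plain dynamic time warping, and the aligned samples are averaged on the reference time axis. This alignment-and-averaging scheme is a simplified illustration, not the paper's exact procedure, and the sinusoidal signals are synthetic stand-ins for accelerometer cycles.

```python
# Minimal sketch: DTW-based alignment and averaging of gait cycles at
# different speeds into a single template on the reference time axis.
import numpy as np

def dtw_path(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack the optimal warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def average_template(reference, instances):
    sums = np.array(reference, dtype=float)
    counts = np.ones(len(reference))
    for inst in instances:
        for i, j in dtw_path(reference, inst):
            sums[i] += inst[j]
            counts[i] += 1
    return sums / counts

# Toy usage: the same cycle sampled at three speeds.
slow = np.sin(np.linspace(0, 2 * np.pi, 140))    # slow walk: longer cycle
normal = np.sin(np.linspace(0, 2 * np.pi, 100))  # reference cycle
fast = np.sin(np.linspace(0, 2 * np.pi, 70))     # fast walk: shorter cycle
template = average_template(normal, [slow, fast])
print(template.shape)
```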
This paper presents an efficient IrisCode classifier built from phase features, which uses AdaBoost to select the Gabor wavelet bandwidths. The final iris classifier consists of a weighted combination of weak classifiers. As weak classifiers we use three-split decision trees that identify a candidate based on the Levenshtein distance between the phase vectors of the respective iris images. Our experiments show that the Levenshtein distance discriminates better than the Hamming distance when comparing IrisCodes. Our approach also differs from existing methods in that the wavelengths of the Gabor filters used, and their final weights in the decision function, are chosen by the robust final classifier instead of being fixed and/or limited by the designer, thus yielding higher iris recognition rates. A pyramidal strategy of cascading filters with increasing complexity makes the system suitable for real-time operation. We have designed a processor array to accelerate the computation of the Levenshtein distance. The processing elements are simple basic cells, interconnected by relatively short paths, which makes the design suitable for a VLSI implementation.
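The Levenshtein (edit) distance between two IrisCode-like phase sequences can be sketched with a plain dynamic-programming recurrence, as below; the hardware processor array described in the paper computes the same recurrence in parallel. The quantized phase strings here are hypothetical.

```python
# Minimal sketch of the Levenshtein distance between quantized phase sequences.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Two hypothetical phase sequences (2 bits per position -> symbols 0-3).
code_a = "0132201312300123"
code_b = "0132010312310123"
print(levenshtein(code_a, code_b))
```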
Iris recognition is one of the most reliable personal identification methods. This paper presents a novel algorithm for iris recognition encompassing iris segmentation and the fusion of statistical and co-occurrence features extracted from curvelet- and ridgelet-transformed images. In this work, the pupil and iris boundaries are detected using the equation of a circle defined by three points on its circumference. Canny edge detection is used, and the iris radius value is chosen empirically based on rigorous experimentation. Eyelash removal is performed with a horizontal 1-D rank filter. Iris normalization is done by mapping the detected iris region from the polar domain to the rectangular domain, and multi-resolution transforms, namely the curvelet and ridgelet transforms, are applied for feature extraction. Classification is performed using the Manhattan distance (Md) and a multiclass classifier with a logistic function, and the two results are compared. The benchmark database CASIA-IRIS-V3 (Interval) is used for identification and recognition. It is observed that the ridgelet transform increases the iris recognition rate.
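Fitting a circle to three points on its circumference (the construction used for the pupil and iris boundaries) reduces to solving the perpendicular-bisector equations for the circumcenter, as in the minimal sketch below. The three points are illustrative; in practice they would be taken from boundary edge pixels.

```python
# Minimal sketch: circle (center, radius) from three circumference points.
import numpy as np

def circle_from_three_points(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Perpendicular-bisector equations for the center (cx, cy).
    A = np.array([[x2 - x1, y2 - y1],
                  [x3 - x1, y3 - y1]], dtype=float)
    b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                        x3**2 - x1**2 + y3**2 - y1**2])
    cx, cy = np.linalg.solve(A, b)
    radius = np.hypot(x1 - cx, y1 - cy)
    return (cx, cy), radius

center, r = circle_from_three_points((10, 0), (0, 10), (-10, 0))
print(center, r)  # expect a center near (0, 0) and a radius near 10
```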
In this paper, a biometric technique based on gesture recognition is proposed to improve the security of operations requiring authentication on mobile phones. Users are authenticated by making a gesture of their own invention while holding, in their hand, a mobile phone with an embedded accelerometer. An analysis method based on sequence alignment is proposed and evaluated in different experiments. First, a test of the distinctiveness of gestures yielded an equal error rate (EER) of 4.98% on a database of 30 users with four repetitions. With the same database, a second experiment representing the uniqueness of access attempts resulted in an EER of 1.92%. Finally, a third experiment evaluating the robustness of the technique examined a database of 40 users with eight repetitions and real forgery attempts, performed by three impostors who studied recordings of the original gestures being carried out, resulting in an EER of 2.5%.
In this paper, we propose a hybrid computational-geometry and gray-scale algorithm that greatly enhances fingerprint images. The algorithm extracts the local minima points positioned on the ridges of a fingerprint and then generates a Delaunay triangulation from these points of interest. This triangulation, together with the local orientations, gives an accurate distance- and orientation-based ridge frequency. Finally, a tuned anisotropic filter is applied locally and the enhanced output fingerprint image is obtained. When the algorithm is applied to fingerprint images from the FVC2004 DB2 database that were rejected by the VeriFinger application, these images pass, and experimental results show a low false and missed minutiae rate with an almost uniform distribution over the database. Moreover, the proposed algorithm enables the extraction of features from all low-quality fingerprint images, and the equal error rate of verification decreases from 6.50% to 5% using the nondamaged low-quality images in the database.
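The Delaunay step can be sketched as below, assuming the ridge minima have already been detected and are given as (x, y) points. scipy's triangulation is used, and the median edge length serves only as a rough proxy for a local ridge-distance estimate; it is not the paper's exact frequency formula.

```python
# Minimal sketch: Delaunay triangulation of ridge-minima points and a rough
# inter-minima distance statistic derived from the triangulation edges.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
minima = rng.random((200, 2)) * 300  # hypothetical ridge-minima coordinates

tri = Delaunay(minima)

# Collect the unique triangulation edges and their lengths.
edges = set()
for simplex in tri.simplices:
    for i in range(3):
        edges.add(tuple(sorted((simplex[i], simplex[(i + 1) % 3]))))
lengths = np.array([np.linalg.norm(minima[a] - minima[b]) for a, b in edges])

print("median inter-minima distance:", np.median(lengths))
```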
Ear biometrics has attracted the attention of researchers in computer vision and machine learning for its use in many applications. In this paper, we present a fully automated system for recognition from ear images based upon sparse representation. In sparse representation, features extracted from the training data are used to develop a dictionary, and classification is achieved by representing the extracted features of the test data as a linear combination of entries in the dictionary. In general, this problem has many solutions, and the goal is to find the sparsest one. We use a relatively new algorithm, the smoothed l0 norm, to find the sparsest solution, and Gabor wavelet features are used to build the dictionary. Furthermore, we extend the proposed approach to gender classification from ear images; several previous studies have addressed this problem using facial images. We introduce a novel approach based on majority voting for gender classification. Experiments conducted on the University of Notre Dame (UND) collection J data set, which contains large appearance, pose, and lighting variations, resulted in a gender classification rate of 89.49%. Furthermore, the proposed method is evaluated on the WVU data set and classification rates for different view angles are presented. The results show an improvement and great robustness in gender classification over existing methods.
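Sparse-representation classification can be sketched as follows, assuming feature vectors have already been extracted and stacked as dictionary columns. Orthogonal Matching Pursuit from scikit-learn is a stand-in for the smoothed-l0 solver used in the paper; the class with the smallest reconstruction residual is selected. All data and sizes are illustrative.

```python
# Minimal sketch of sparse-representation classification (OMP as a stand-in
# for the smoothed-l0 solver): classify by per-class reconstruction residual.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(dictionary, labels, test_vec, n_nonzero=10):
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(dictionary, test_vec)
    coef = omp.coef_
    residuals = {}
    for c in np.unique(labels):
        coef_c = np.where(labels == c, coef, 0.0)  # keep only class-c atoms
        residuals[c] = np.linalg.norm(test_vec - dictionary @ coef_c)
    return min(residuals, key=residuals.get)

# Toy usage: 2 classes, 20 training feature vectors of dimension 100 as columns.
rng = np.random.default_rng(0)
D = rng.standard_normal((100, 20))
y = np.repeat([0, 1], 10)
x = D[:, 3] + 0.05 * rng.standard_normal(100)  # noisy copy of a class-0 atom
print(src_classify(D, y, x))
```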
Automatic kinship verification aims at recognizing the degree of kinship of two individuals from their facial images, and it has possible applications in image retrieval and annotation, forensics, and historical studies. This is a recent and challenging problem, which must deal with different degrees of kinship and variations in age and gender. Our work explores the computer identification of parent–child pairs using a combination of (i) features of different natures, based on geometric and textural data, (ii) feature selection, and (iii) state-of-the-art classifiers. Experiments show that the proposed approach provides a valuable solution to the kinship verification problem, as suggested by its comparison with different methods on the same data under the same experimental protocols. We further show the good generalization capabilities of our method in several cross-database experiments.
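The combination of pair features, feature selection, and a classifier can be sketched as a simple pipeline, as below. The feature extraction is mocked with random data, and the particular selector and classifier (ANOVA-based selection and an RBF SVM) are illustrative choices, not necessarily those of the paper.

```python
# Minimal sketch: feature selection plus a classifier for kin / not-kin pairs.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pairs, n_features = 400, 300          # hypothetical pair descriptors
X = rng.standard_normal((n_pairs, n_features))
y = rng.integers(0, 2, n_pairs)         # 1 = kin pair, 0 = unrelated pair

pipeline = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=50),       # keep the 50 most discriminative features
    SVC(kernel="rbf", C=1.0, gamma="scale"),
)
pipeline.fit(X[:300], y[:300])
print("held-out accuracy:", pipeline.score(X[300:], y[300:]))
```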