These days, identification of a person is an integral part of many computer-based solutions. It is a key characteristic for access control, customized services, and proof of identity. Over the last couple of decades, many new techniques have been introduced for identifying human faces. This approach investigates human face identification based on frontal images by producing ratios from the distances between facial features and their locations. Moreover, this extended version adds an investigation of identification based on the side profile, in which the extracted feature sets are expressed as geometric ratios and assembled into feature vectors. The last stage uses weighted means to calculate the resemblance. The approach follows an explainable Artificial Intelligence (XAI) methodology. Findings based on a small dataset indicate that the approach offers promising results, and further research could have a great influence on how faces and face profiles are identified. The performance of the proposed system is validated using metrics such as Precision, False Acceptance Rate, False Rejection Rate, and True Positive Rate. Multiple simulations indicate an Equal Error Rate of 0.89. This work is an extended version of the paper submitted to ACIIDS 2020.
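As a rough illustration of the ratio-and-weighted-mean matching described above, the sketch below builds a small feature vector of distance ratios from frontal-face landmarks and compares two such vectors; the landmark names, chosen ratios and weights are illustrative assumptions rather than the authors' exact feature set.

```python
# Minimal sketch of ratio-based face matching, assuming facial landmarks
# (eye centres, nose tip, mouth centre, cheek points) have already been located.
import numpy as np

def ratio_features(landmarks):
    """Build a feature vector of distance ratios from 2-D landmark points."""
    d = lambda a, b: np.linalg.norm(np.asarray(landmarks[a]) - np.asarray(landmarks[b]))
    eye_dist   = d("left_eye", "right_eye")
    nose_mouth = d("nose_tip", "mouth_center")
    face_width = d("left_cheek", "right_cheek")
    # Ratios are scale-invariant, so the comparison does not depend on image size.
    return np.array([eye_dist / face_width,
                     nose_mouth / face_width,
                     eye_dist / nose_mouth])

def resemblance(f1, f2, weights=(0.4, 0.3, 0.3)):
    """Weighted-mean similarity of two ratio vectors (1.0 = identical)."""
    w = np.asarray(weights, dtype=float)
    per_ratio = 1.0 - np.abs(f1 - f2) / np.maximum(f1, f2)   # 1 when equal, <1 otherwise
    return float(np.dot(w, per_ratio) / w.sum())

probe = ratio_features({"left_eye": (120, 90), "right_eye": (180, 92),
                        "nose_tip": (150, 130), "mouth_center": (150, 165),
                        "left_cheek": (100, 135), "right_cheek": (200, 135)})
gallery = ratio_features({"left_eye": (60, 45), "right_eye": (90, 46),
                          "nose_tip": (75, 65), "mouth_center": (75, 82),
                          "left_cheek": (50, 68), "right_cheek": (100, 68)})
print(resemblance(probe, gallery))
```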
Digitalization has brought security and privacy challenges to every field. Among the numerous authentication methods, biometrics has become popular because it relies on an individual's behavioral and physical characteristics. In this context, numerous unimodal and multimodal biometric systems have been proposed and tested in the last decade. In this paper, the authors present a comprehensive survey of existing biometric systems, highlighting their respective challenges, advantages, and limitations. The paper also discusses the present market value of biometric technology, its scope, and its practical applications in diverse sectors. The goal of this review is to offer a compact outline of the various advances in unimodal and multimodal biometric technology and their potential applications, providing a base for future biometric research.
Nowadays, biometric systems have replaced password- or token-based authentication in many fields to improve the security level. However, biometric systems are also vulnerable to security threats. Unlike passwords, biometric templates cannot be replaced if lost or compromised. To deal with the issue of a compromised biometric template, template protection schemes have evolved that make it possible to replace the template. Cancelable biometrics is such a template protection scheme: it replaces a biometric template when the stored template is stolen or lost. It is a feature-domain transformation in which a distorted version of the biometric template is generated and matching is performed in the transformed domain. This paper reviews the state of the art and analyzes different existing biometric authentication and cancelable biometric methods, with an elaborate focus on cancelable biometrics in order to show its advantages over standard biometric systems through generalized standards and guidelines drawn from the literature. We also propose a highly secure method for cancelable biometrics using a non-invertible function based on the Discrete Cosine Transformation (DCT) and Huffman encoding. We tested and evaluated the proposed method on 50 users and achieved good results.
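The non-invertible DCT-based transform can be sketched roughly as below; the key-driven coefficient selection and quantization are illustrative assumptions, and the Huffman-encoding stage is only indicated in a comment rather than implemented.

```python
# Minimal sketch of a DCT-based cancelable transform, assuming the biometric
# template is already available as a fixed-size 2-D feature block.
import numpy as np
from scipy.fft import dctn

def cancelable_template(feature_block, user_key, keep=64):
    """Non-invertible transform: 2-D DCT, then a key-selected coefficient subset."""
    coeffs = dctn(feature_block, norm="ortho").ravel()
    rng = np.random.default_rng(user_key)            # per-user/application key
    idx = rng.choice(coeffs.size, size=keep, replace=False)
    selected = coeffs[idx]                           # discarding coefficients loses information,
                                                     # so the original block cannot be recovered
    quantized = np.round(selected / 4).astype(np.int16)
    return quantized                                 # this vector would then be Huffman-encoded for storage

block = np.random.rand(32, 32)                       # stand-in for a real feature block
stolen = cancelable_template(block, user_key=1234)
reissued = cancelable_template(block, user_key=9999) # revoking a template = issuing a new key
print(stolen.shape, np.array_equal(stolen, reissued))  # (64,) False
```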
In recent years, biometric authentication systems have remained a hot research topic, as they can recognize or authenticate a person by comparing their data to biometric data stored in a database. Fingerprints, palm prints, hand veins, finger veins, palm veins, and other anatomical or behavioral features have all been used to develop a variety of biometric approaches. Finger vein recognition (FVR), which examines the patterns of the finger veins for authentication, is a common method among the various biometrics. Finger vein acquisition, preprocessing, feature extraction, and authentication are all part of the proposed intelligent deep learning-based FVR (IDL-FVR) model. Finger vein images are primarily captured with infrared imaging devices. Furthermore, a region-of-interest extraction process is carried out in order to retain the finger part. The shark smell optimization algorithm is used to tune the hyperparameters of the bidirectional long short-term memory model. Finally, an authentication process based on Euclidean distance is performed, which compares the features of the current finger vein image to those in the database: authentication succeeds when the Euclidean distance is small and fails when it is large. The IDL-FVR model surpassed earlier methods by accomplishing a maximum accuracy of 99.93%.
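The closing Euclidean-distance authentication step can be sketched as follows, assuming feature vectors have already been extracted by the trained model; the threshold value and database layout are illustrative assumptions.

```python
# Minimal sketch of the final Euclidean-distance authentication step.
import numpy as np

def authenticate(probe_features, enrolled_db, threshold=0.6):
    """Return (user_id, distance) of the closest enrolled template, or (None, d) if rejected."""
    best_id, best_dist = None, float("inf")
    for user_id, template in enrolled_db.items():
        dist = np.linalg.norm(probe_features - template)   # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    # Small distance -> genuine match; large distance -> rejection.
    return (best_id, best_dist) if best_dist < threshold else (None, best_dist)

db = {"alice": np.random.rand(128), "bob": np.random.rand(128)}
probe = db["alice"] + 0.01 * np.random.randn(128)          # near-duplicate of Alice's template
print(authenticate(probe, db))
```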
A new personal recognition system using the palm vein pattern is presented in this article. It is the first time that the palm vein pattern is used for personal recognition. The texture feature of palm vein is extracted by wavelet decomposition. With our palm vein image database, we employed the nearest neighbor (NN) classifier to test the performance of the system. Experimental results show that the algorithm based on wavelet transform can reach a correct recognition rate (CRR) of 98.8%.
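A rough sketch of wavelet-texture extraction followed by nearest-neighbour matching is given below; the wavelet family, decomposition level and sub-band energy features are illustrative choices, not necessarily those of the paper.

```python
# Minimal sketch of wavelet-texture features with a nearest-neighbour (NN) match,
# assuming palm-vein ROI images are already extracted and size-normalised.
import numpy as np
import pywt

def wavelet_features(img, wavelet="db4", level=3):
    """Mean absolute energy of each wavelet sub-band as a texture descriptor."""
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
    feats = [np.mean(np.abs(coeffs[0]))]                     # approximation band
    for (cH, cV, cD) in coeffs[1:]:                          # detail bands per level
        feats.extend([np.mean(np.abs(cH)), np.mean(np.abs(cV)), np.mean(np.abs(cD))])
    return np.array(feats)

def nn_classify(probe_feats, gallery):
    """Nearest-neighbour classification by Euclidean distance."""
    labels, feats = zip(*gallery)
    dists = [np.linalg.norm(probe_feats - f) for f in feats]
    return labels[int(np.argmin(dists))]

gallery = [("user%d" % i, wavelet_features(np.random.rand(128, 128))) for i in range(5)]
print(nn_classify(wavelet_features(np.random.rand(128, 128)), gallery))
```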
Biometric authentication technologies are used for the machine identification of individuals. The human-generated patterns used may be primarily physiological or behavioral, but usually contain elements of both components. Examples include voice, handwriting, face, eye and fingerprint identification. In this paper, we look at these technologies and their applications in general, developing a systematic approach to classifying, analyzing and evaluating them. A general system model is shown and test results for a number of technologies are considered.
In this paper, a system based on image descriptors and Local Histogram Concatenation (LHC) for finger vein recognition is introduced. The LHCs of image descriptors such as LBP, LDP and CLBP cannot be inverted back to the original images, so they provide good security if stored as enrolled data. On the other hand, the LHC technique does not preserve precise spatial information, so it is expected to be less sensitive to image misalignment when a measure such as the histogram difference (dX2) is used for recognition. The use of the histogram difference makes the system more robust to misalignment than pixel-by-pixel measures such as the Hamming Distance (HD). The LHC approach is implemented by dividing the image descriptor into non-overlapping grids; the histogram within each grid is calculated and concatenated with the histograms of the preceding grids, and finally the concatenated histograms of two images are compared using the dX2 measure. Two datasets, UTFVP and SDUMLA-HMT, are used for testing the performance of the system. The results show that the Identification Recognition Rate (IRR) improves when LHCs of the image descriptors with the dX2 measure are used, compared to using only the image descriptors with the HD measure. For the UTFVP dataset, the IRR values were 97.44%, 95% and 98.37% when LHC and dX2 were used with LBP, LDP and CLBP, respectively, while these values were 89.44%, 92.63% and 92.92% when only LBP, LDP and CLBP with HD were used. For the SDUMLA-HMT dataset, the IRR values were 98.43%, 98.69% and 98.85% when LHC and dX2 were used with LBP, LDP and CLBP, respectively, while these values were 97.6%, 98.24% and 97.27% when only LBP, LDP and CLBP with HD were used.
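The LHC pipeline lends itself to a short sketch: compute a descriptor map (LBP here), split it into non-overlapping grids, concatenate the per-grid histograms, and compare two concatenations with the dX2 (chi-square) difference. Grid size, LBP parameters and the small epsilon are illustrative assumptions.

```python
# Minimal sketch of Local Histogram Concatenation (LHC) with a chi-square distance,
# using LBP as the image descriptor.
import numpy as np
from skimage.feature import local_binary_pattern

def lhc_descriptor(img, grid=(4, 4), P=8, R=1):
    """LBP map split into non-overlapping grids; per-grid histograms concatenated."""
    lbp = local_binary_pattern(img, P, R, method="uniform")
    n_bins = P + 2                                            # uniform LBP bin count
    gh, gw = img.shape[0] // grid[0], img.shape[1] // grid[1]
    hists = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            cell = lbp[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            h, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins), density=True)
            hists.append(h)
    return np.concatenate(hists)

def chi_square(h1, h2, eps=1e-10):
    """Histogram (chi-square) difference used for matching."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

a = lhc_descriptor((np.random.rand(128, 64) * 255).astype(np.uint8))
b = lhc_descriptor((np.random.rand(128, 64) * 255).astype(np.uint8))
print(chi_square(a, b))
```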
Palmprint identification refers to searching a database for the palmprint template that comes from the same palm as a given palmprint input. The identification process involves preprocessing, feature extraction, feature matching and decision-making. As a key step in this process, we propose in this paper a new feature extraction method that converts a palmprint image from the spatial domain to the frequency domain using the Fourier Transform. The features extracted in the frequency domain are used as indexes to the palmprint templates in the database, and the search for the best match is conducted in a layered fashion. The experimental results show that palmprint identification based on feature extraction in the frequency domain is effective in terms of accuracy and efficiency.
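A minimal sketch of the frequency-domain indexing idea is shown below, assuming the palmprint ROI is already cropped and aligned; the low-frequency block size and the simple coarse-then-fine search stand in for the paper's layered matching.

```python
# Minimal sketch of frequency-domain indexing features for palmprint matching.
import numpy as np

def fourier_index(img, block=8):
    """Low-frequency FFT magnitudes (DC-centred block) used as an index key."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = np.array(spectrum.shape) // 2
    low = np.abs(spectrum[cy - block // 2:cy + block // 2,
                          cx - block // 2:cx + block // 2])
    return (low / low.sum()).ravel()                          # normalised coarse descriptor

def layered_search(probe_img, database, shortlist=3):
    """Layer 1: shortlist by index distance. Layer 2: finer comparison on the shortlist."""
    probe_idx = fourier_index(probe_img)
    coarse = sorted(database, key=lambda rec: np.linalg.norm(probe_idx - rec["index"]))[:shortlist]
    # Layer 2 (placeholder): compare full images only for the shortlisted templates.
    return min(coarse, key=lambda rec: np.linalg.norm(probe_img - rec["image"]))["id"]

db = []
for i in range(10):
    im = np.random.rand(128, 128)
    db.append({"id": i, "image": im, "index": fourier_index(im)})
print(layered_search(db[4]["image"] + 0.01 * np.random.randn(128, 128), db))
```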
As a promising biometric technology, multispectral palmprint recognition methods have attracted increasing attention in security applications due to their high recognition accuracy and ease of use. It is worth noting that although multispectral palmprint data contains rich complementary information, multispectral palmprint recognition methods are still vulnerable to adversarial attacks: even if only the image of one spectrum is attacked, the recognition results can be affected catastrophically. Therefore, we propose a robustness-enhanced multispectral palmprint recognition method that includes a model interpretability-based adversarial detection module and a robust multispectral fusion module. Inspired by model interpretation techniques, we found that there is a large difference between clean palmprint images and adversarial examples after CAM visualization, and using the visualized images to build an adversarial detector leads to better detection results. Finally, the weights of clean images and adversarial examples in the fusion layer are dynamically adjusted to obtain correct recognition results. Experiments show that our method makes full use of the image features that are not attacked and effectively improves the robustness of the model.
Image classification is the complicated process of classifying an image based on its visual representation. This paper shows the need to adapt and apply a suitable image enhancement and denoising technique in order to successfully classify remotely captured data. Biometric properties that are widely explored today are very important for authentication purposes. Noise may lead to incorrect vein detection in the acquired image, thus explaining the need for a better enhancement technique. This work provides a subjective and objective analysis of the performance of various image enhancement filters in the spatial domain. After these pre-processing steps, the vein map and the corresponding vein graph can be obtained with minimal extraction steps, and an appropriate graph matching method can then be used to compare hand vein graphs and thus perform person authentication. The analysis results indicate which of the evaluated filters performs best for enhancement, and image quality measures (IQMs) are tabulated for the evaluation of image quality.
The IT security paradigm is evolving from secret-based to biometric identity-based authentication. Biometric identification has gradually become more popular in recent years for handheld devices. Privacy preservation is a key concern when biometrics is used in today's authentication systems. Nowadays, the declaration of biometric traits is demanded not only by governments but also by many private entities, and there are no proper mechanisms or assurances that biometric traits will be kept safe by such entities. Encrypting biometric traits to avoid privacy attacks is a major challenge. Hence, state-of-the-art safety and security solutions must be devised to prevent the loss and misuse of such biometric traits. In this paper, we identify different cancelable biometrics methods, the possible attacks on biometric traits, and directions on possible countermeasures for designing a secure and privacy-preserving biometric authentication system. We also propose a highly secure method for cancelable biometrics using a non-invertible function based on the Discrete Cosine Transformation and Index-of-Max hashing. We tested and evaluated the proposed method on a standard dataset and achieved good results.
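The Index-of-Max hashing step can be illustrated with the following sketch, which keeps only the winning index of each random projection; matrix sizes and the per-user seed are illustrative assumptions, and the DCT stage of the proposed method is omitted for brevity.

```python
# Minimal sketch of an Index-of-Max (IoM) style hashing step for cancelable templates,
# assuming a fixed-length real-valued biometric feature vector.
import numpy as np

def iom_hash(features, user_seed, m=64, q=16):
    """m hashed digits; each is the argmax index of a random q-dimensional projection."""
    rng = np.random.default_rng(user_seed)                    # user/application-specific key
    d = features.shape[0]
    codes = []
    for _ in range(m):
        W = rng.standard_normal((q, d))                       # random projection matrix
        codes.append(int(np.argmax(W @ features)))            # only the winning index is kept
    return np.array(codes)                                    # ranking-based, hard to invert

def match_score(code_a, code_b):
    """Fraction of matching hashed digits (higher = more similar)."""
    return float(np.mean(code_a == code_b))

feat = np.random.rand(256)
enrolled = iom_hash(feat, user_seed=42)
probe = iom_hash(feat + 0.02 * np.random.randn(256), user_seed=42)
print(match_score(enrolled, probe))
```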
Distance metrics are widely used in similarity estimation, which plays a key role in fingerprint recognition. In this work we present a detailed comparison of 29 distinct distance metrics. Features of the fingerprint images are extracted using the Fast Fourier Transform (FFT). Recognition rate, the receiver operating characteristic (ROC) curve, and time and space complexity are used to evaluate each distance metric. To consolidate our conclusions we used the standard fingerprint database from Bologna University and the FVC2000 databases. After evaluating the 29 distance metrics we found that the Sorgel distance metric performs best. The genuine acceptance rate (GAR) of the Sorgel distance metric is observed to be about 5% higher than that of the traditional Euclidean distance metric at a low false acceptance rate (FAR). The Sorgel distance gives a good GAR at a low FAR with moderate computational complexity.
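The Sorgel distance itself is simple to state in code; the sketch below implements it next to the Euclidean distance on FFT-magnitude features, with the retained coefficient block chosen purely for illustration.

```python
# Minimal sketch of the Sorgel distance used for matching FFT-based fingerprint
# features, alongside the Euclidean distance it is compared against.
import numpy as np

def sorgel_distance(x, y, eps=1e-12):
    """Sorgel distance: sum of absolute differences over sum of element-wise maxima."""
    return np.sum(np.abs(x - y)) / (np.sum(np.maximum(x, y)) + eps)

def euclidean_distance(x, y):
    return float(np.linalg.norm(x - y))

def fft_features(img, keep=32):
    """Magnitude of the lowest-frequency FFT coefficients as the feature vector."""
    mag = np.abs(np.fft.fft2(img))
    return mag[:keep, :keep].ravel()

a = fft_features(np.random.rand(96, 96))
b = fft_features(np.random.rand(96, 96))
print(sorgel_distance(a, b), euclidean_distance(a, b))
```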
Automatic Face Recognition (FR) presents a challenging task in the field of pattern recognition and, despite the huge amount of research in the past several decades, it still remains an open research problem. This is primarily due to the variability of facial images, such as non-uniform illumination, low resolution, occlusion, and/or variation in pose. Due to its non-intrusive nature, FR is an attractive biometric modality and has gained a lot of attention in the biometric research community. Driven by the enormous number of potential application domains, many algorithms have been proposed for FR. This paper presents an overview of the state-of-the-art FR algorithms, focusing on their performance on publicly available databases. We highlight the conditions of the image databases with regard to the recognition rate of each approach. This is useful both as a quick research overview and for practitioners choosing an algorithm for their specific FR application. To provide a comprehensive survey, the paper divides the FR algorithms into three categories: (1) intensity-based, (2) video-based, and (3) 3D-based FR algorithms. In each category, the most commonly used algorithms and their performance on standard face databases are reported, and a brief critical discussion is carried out.
Finger-vein verification is a highly secure biometric authentication modality that has been widely investigated in recent years. One of its challenges, however, is the possible degradation of image quality, which results in spurious and missing vein patterns and increases the verification error. Despite recent advances in finger-vein quality assessment, the proposed solutions are limited because they depend on human expertise and domain knowledge to extract handcrafted features for assessing quality. We recently proposed the first deep neural network (DNN) framework for assessing finger-vein quality that does not require manual labeling of high- and low-quality images, as is the case for state-of-the-art methods, but instead infers such annotations automatically from an objective indicator, the biometric verification decision. This framework significantly outperformed existing methods, whether the input image is grayscale or binary. Motivated by these results, we propose in this work a representation learning approach to finger-vein image quality, in which a DNN jointly takes the grayscale and binary versions of the input image to predict vein quality. Our model learns a joint representation from grayscale and binary images for quality assessment. The experimental results, obtained on a large public dataset, demonstrate that our proposed method accurately identifies high- and low-quality images and outperforms other techniques in terms of equal error rate (EER) minimization, including our previous DNN models based on either grayscale or binary input.
Person identification using periocular images has emerged as a challenging scenario in efficient biometric analysis, particularly in less constrained environments; accurate recognition is significant in rendering effective measures during the COVID-19 pandemic. In this research paper, the person identification process is performed with a deep learning model. Several effective methods have already been developed, but certain drawbacks still exist, such as deteriorated image quality, high computational cost, increased error, limited training ability, high storage requirements and degraded accuracy. Hence, the proposed work introduces a Hybrid Optimal Dense Capsule network-based Periocular biometric system (HodCP) to overcome these drawbacks. The proposed work involves pre-processing, dimensionality reduction, hybrid feature extraction and image matching. The pre-processing step uses Parabolic Contrast Enhancement (PCE) to balance the image contrast and enhance image quality. Then Two-Dimensional Principal Component Analysis (2D_PCA) is employed to reduce the image dimensionality. Deep features are extracted in the hybrid feature extraction process using a Dense Convolutional-121 Capsule Network (DenseCapsNet). The net loss and hyperparameter tuning are handled through the African Vultures Optimization (AVO) algorithm. Finally, image matching is performed using Weighted Distance Similarity (WDS), which identifies the similarity between the query image and a set of image samples based on a distance score. The simulations are implemented in Python, and the data required for the proposed work are collected from four benchmark datasets. The proposed work achieves accuracy rates of 99.01% on CASIA-Iris-Mobile-V1.0, 99.12% on UBIPr, 98.67% on the Facemask detection dataset and 98.83% on the Glasses versus without glasses dataset, which is superior to the existing methods.
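Among the listed stages, the 2D_PCA dimensionality-reduction step is standard enough to sketch; the number of retained projection vectors below is an illustrative choice.

```python
# Minimal sketch of the 2D-PCA dimensionality-reduction step, assuming a stack
# of equally sized, pre-processed periocular images.
import numpy as np

def two_d_pca(images, n_components=8):
    """2D-PCA: project each image matrix onto the top eigenvectors of the image covariance."""
    A = np.stack(images).astype(float)                        # shape (M, h, w)
    mean = A.mean(axis=0)
    centred = A - mean
    # Image scatter matrix G (w x w), averaged over the training set.
    G = np.einsum("mij,mik->jk", centred, centred) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)
    X = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]  # top eigenvectors (w x d)
    return [img @ X for img in A], X                          # features of shape (h, d)

imgs = [np.random.rand(64, 48) for _ in range(20)]
features, projection = two_d_pca(imgs)
print(features[0].shape)                                      # (64, 8)
```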
Automatic fingerprint identification methods have become the most widely used technology in rapidly growing bioidentification applications. In this paper, different image enhancement approaches presented in the scientific literature are reviewed. Fingerprint verification can be divided into image acquisition, enhancement, feature extraction and matching steps. The enhancement step is needed to improve image quality prior to feature extraction. By far the most common approach relies on the filtering of the fingerprint images with filters adapted to local ridge orientation, but alternative approaches based on Fourier domain processing, direct ridge following and global features also exist. Methods of comparing the performance of enhancement methods are discussed. An example of the performance of different methods is given. Conclusions are made regarding the importance of effective enhancement, especially for noisy or low quality images.
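The dominant approach mentioned above, filtering adapted to the local ridge orientation, can be sketched with a per-block Gabor filter; the crude gradient-based orientation estimate, block size and filter parameters below are illustrative assumptions.

```python
# Minimal sketch of orientation-adapted (Gabor) enhancement, assuming the local
# ridge orientation can be roughly estimated per block from image gradients.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import gabor

def enhance_block(block, orientation, ridge_freq=0.1):
    """Filter one block with a Gabor filter tuned to its ridge orientation."""
    real, _ = gabor(block, frequency=ridge_freq, theta=orientation)
    return real

def enhance_fingerprint(img, block_size=32):
    """Estimate a crude per-block orientation from gradients, then Gabor-filter each block."""
    out = np.zeros_like(img, dtype=float)
    gy, gx = np.gradient(gaussian_filter(img, 1.0))
    for r in range(0, img.shape[0], block_size):
        for c in range(0, img.shape[1], block_size):
            sl = (slice(r, r + block_size), slice(c, c + block_size))
            # Least-squares gradient orientation of the block, rotated 90 degrees
            # to follow the ridges rather than cross them.
            theta = 0.5 * np.arctan2(2 * np.sum(gx[sl] * gy[sl]),
                                     np.sum(gx[sl] ** 2 - gy[sl] ** 2)) + np.pi / 2
            out[sl] = enhance_block(img[sl], theta)
    return out

print(enhance_fingerprint(np.random.rand(128, 128)).shape)
```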
Biometrics is a technology designed to automatically recognize a person by his or her natural and distinctive characteristics. Recently it has come into the limelight as an effective method of authentication. With the great interest in biometrics, the need for reliable evaluation of these technologies increases, and research on objective and quantitative performance estimation methodologies is being actively pursued. In this paper, we give a comprehensive overview of biometric technology and performance evaluation covering more than 100 publications, with a special focus on fingerprints. After this thorough review, we propose a promising evaluation method based on affecting factors.
This paper proposes an intelligent 2ν-support vector machine based match score fusion algorithm to improve the performance of face and iris recognition by integrating the quality of images. The proposed algorithm applies redundant discrete wavelet transform to evaluate the underlying linear and non-linear features present in the image. A composite quality score is computed to determine the extent of smoothness, sharpness, noise, and other pertinent features present in each subband of the image. The match score and the corresponding quality score of an image are fused using 2ν-support vector machine to improve the verification performance. The proposed algorithm is experimentally validated using the FERET face database and the CASIA iris database. The verification performance and statistical evaluation show that the proposed algorithm outperforms existing fusion algorithms.
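A hedged sketch of quality-augmented score fusion is given below, using scikit-learn's standard SVM as a stand-in for the 2ν-SVM of the paper and synthetic scores in place of real face/iris match and quality scores.

```python
# Minimal sketch of quality-augmented match-score fusion with a standard SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Each sample: [face match score, face quality, iris match score, iris quality]
genuine  = np.column_stack([rng.normal(0.8, 0.1, 200), rng.uniform(0.5, 1.0, 200),
                            rng.normal(0.75, 0.1, 200), rng.uniform(0.5, 1.0, 200)])
impostor = np.column_stack([rng.normal(0.3, 0.1, 200), rng.uniform(0.5, 1.0, 200),
                            rng.normal(0.35, 0.1, 200), rng.uniform(0.5, 1.0, 200)])
X = np.vstack([genuine, impostor])
y = np.concatenate([np.ones(200), np.zeros(200)])

fusion = SVC(kernel="rbf", probability=True).fit(X, y)        # fused accept/reject decision
probe = np.array([[0.72, 0.9, 0.68, 0.8]])
print(fusion.predict_proba(probe))                            # [P(impostor), P(genuine)]
```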
The brain activity observed on EEG electrodes is influenced by volume conduction and the functional connectivity of a person performing a task. When the task is a biometric test, the EEG signals represent a unique “brain print”, which is defined by the functional connectivity represented by the interactions between electrodes, whilst the conduction components cause trivial correlations. Orthogonalization using autoregressive modeling minimizes the conduction components, so that the residuals are related to features correlated with the functional connectivity. However, the orthogonalization can be unreliable for high-dimensional EEG data. We have found that the dimensionality can be significantly reduced if the baselines required for estimating the residuals are modeled using only the relevant electrodes. In our approach, the required models are learnt by a Group Method of Data Handling (GMDH) algorithm which we have made capable of discovering reliable models from multidimensional EEG data. In our experiments on the EEG-MMI benchmark data, which include 109 participants, the proposed method correctly identified all the subjects and provided a statistically significant (p<0.01) improvement in identification accuracy. The experiments show that the proposed GMDH method can learn new features from multi-electrode EEG data which improve the accuracy of biometric identification.
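The orthogonalization idea can be sketched as follows, with ordinary least squares standing in for the GMDH-learnt baseline models and arbitrarily chosen electrode subsets standing in for the relevant electrodes discovered by the algorithm.

```python
# Minimal sketch of the orthogonalization idea: model each electrode's signal from
# a small set of other (relevant) electrodes and keep the residuals as
# connectivity-related features.
import numpy as np

def residual_features(eeg, relevant):
    """eeg: (n_channels, n_samples). relevant: dict target_channel -> list of predictor channels."""
    feats = []
    for target, predictors in relevant.items():
        X = eeg[predictors].T                                 # (n_samples, k)
        y = eeg[target]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)          # baseline explained by conduction
        residual = y - X @ beta                               # what remains relates to connectivity
        feats.append([residual.std(), np.corrcoef(residual, y)[0, 1]])
    return np.array(feats).ravel()

eeg = np.random.randn(8, 1000)                                # 8 channels, 1000 samples (synthetic)
relevant = {0: [1, 2], 3: [2, 4], 7: [5, 6]}                  # illustrative electrode subsets
print(residual_features(eeg, relevant).shape)
```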
The paper describes an integrated recognition-by-parts architecture for reliable and robust face recognition. Reliability refers to the ability to deploy full-fledged, operational biometric engines, while robustness refers to handling adverse image conditions that include, among others, uncooperative subjects, occlusion, and temporal variability. The proposed architecture is model-free and non-parametric. The conceptual framework draws support from discriminative methods using likelihood ratios. At the conceptual level it links forensics and biometrics, while at the implementation level it links the Bayesian framework and statistical learning theory (SLT). Layered categorization starts with face detection using implicit rather than explicit segmentation. It proceeds with face authentication, which involves feature selection of local patch instances including dimensionality reduction, exemplar-based clustering of patches into parts, and data fusion for matching using boosting driven by parts that play the role of weak learners. Face authentication shares the same implementation with face detection. The implementation, driven by transduction, employs proximity and typicality (ranking) realized using strangeness and p-values, respectively. The feasibility and reliability of the proposed architecture are illustrated using FRGC data. The paper concludes with suggestions for augmenting and enhancing the scope and utility of the proposed architecture.
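The strangeness and p-value (ranking) machinery can be illustrated in a few lines, assuming fixed-length part descriptors; the k-nearest-neighbour strangeness below is a common transductive formulation and is offered only as a sketch, not the paper's exact implementation.

```python
# Minimal sketch of strangeness and p-value (ranking) for transductive matching.
# Strangeness is the ratio of within-class to between-class nearest-neighbour
# distances; the p-value is the rank of the test strangeness among training ones.
import numpy as np

def strangeness(sample, same_class, other_class, k=3):
    d_same  = np.sort([np.linalg.norm(sample - s) for s in same_class])[:k].sum()
    d_other = np.sort([np.linalg.norm(sample - o) for o in other_class])[:k].sum()
    return d_same / (d_other + 1e-12)          # small = typical of the claimed class

def p_value(test_strangeness, training_strangeness):
    """Fraction of training strangeness values at least as large as the test one."""
    training = np.asarray(training_strangeness)
    return float((np.sum(training >= test_strangeness) + 1) / (len(training) + 1))

rng = np.random.default_rng(1)
claimed = rng.normal(0, 1, (20, 16))
others = rng.normal(3, 1, (40, 16))
train_alpha = [strangeness(x, np.delete(claimed, i, 0), others) for i, x in enumerate(claimed)]
probe = rng.normal(0, 1, 16)
print(p_value(strangeness(probe, claimed, others), train_alpha))
```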