  Bestsellers

  • A Survey on Biometrics and Cancelable Biometrics Systems (No Access)

    Nowadays, biometric systems have replaced password- or token-based authentication in many fields to improve the level of security. However, biometric systems are also vulnerable to security threats. Unlike passwords, biometric templates cannot be replaced if lost or compromised. To deal with the issue of compromised biometric templates, template protection schemes have evolved to make it possible to replace a biometric template. Cancelable biometrics is one such template protection scheme, in which a biometric template is replaced when the stored template is stolen or lost. It is a feature-domain transformation where a distorted version of a biometric template is generated and matched in the transformed domain. This paper presents a review of the state of the art and an analysis of different existing biometric authentication systems and cancelable biometric systems, with an elaborate focus on cancelable biometrics in order to show its advantages over standard biometric systems through generalized standards and guidelines drawn from the literature. We also propose a highly secure method for cancelable biometrics using a non-invertible function based on the Discrete Cosine Transformation (DCT) and Huffman encoding. We tested and evaluated the proposed method on 50 users and achieved good results.
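
    For concreteness, the sketch below illustrates the general cancelable-biometrics idea of a key-dependent, non-invertible feature-domain transform built around a DCT. It is a minimal illustration only, not the DCT-and-Huffman scheme proposed in the paper; the function names, the coefficient-dropping step and the random sign flip are assumptions made for the example.

```python
# Minimal sketch (assumed design, not the paper's DCT + Huffman scheme):
# a key-dependent, non-invertible transform of a biometric feature vector.
import numpy as np
from scipy.fft import dct

def cancelable_template(features, user_key, keep=32):
    """Return a revocable template: DCT, then key-dependent coefficient selection."""
    coeffs = dct(np.asarray(features, dtype=float), norm="ortho")
    rng = np.random.default_rng(user_key)      # the user key drives the distortion
    idx = rng.choice(coeffs.size, size=keep, replace=False)
    signs = rng.choice([-1.0, 1.0], size=keep)
    return coeffs[idx] * signs                 # discarding coefficients -> non-invertible

# Matching happens in the transformed domain, e.g. with cosine similarity.
enrolled = cancelable_template(np.random.rand(128), user_key=42)
probe = cancelable_template(np.random.rand(128), user_key=42)
score = enrolled @ probe / (np.linalg.norm(enrolled) * np.linalg.norm(probe))
```

    Because coefficients are discarded, the original feature vector cannot be recovered from the template, and issuing a new key produces a fresh template from the same biometric.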

  • A REVIEW ON STATE-OF-THE-ART FACE RECOGNITION APPROACHES (No Access)

    Fractals, 01 Apr 2017

    Automatic Face Recognition (FR) presents a challenging task in the field of pattern recognition and, despite the extensive research of the past several decades, it still remains an open research problem. This is primarily due to the variability in facial images, such as non-uniform illumination, low resolution, occlusion, and/or variation in pose. Due to its non-intrusive nature, FR is an attractive biometric modality and has gained a lot of attention in the biometric research community. Driven by the enormous number of potential application domains, many algorithms have been proposed for FR. This paper presents an overview of state-of-the-art FR algorithms, focusing on their performance on publicly available databases. We highlight the conditions of the image databases with regard to the recognition rate of each approach. This is useful both as a quick research overview and as a guide for practitioners choosing an algorithm for a specific FR application. To provide a comprehensive survey, the paper divides the FR algorithms into three categories: (1) intensity-based, (2) video-based, and (3) 3D-based FR algorithms. In each category, the most commonly used algorithms are described, their performance on standard face databases is reported, and a brief critical discussion is carried out.

  • PALMPRINT IDENTIFICATION BY FOURIER TRANSFORM (No Access)

    Palmprint identification refers to searching a database for the palmprint template that comes from the same palm as a given palmprint input. The identification process involves preprocessing, feature extraction, feature matching and decision-making. As a key step in this process, we propose in this paper a new feature extraction method that converts a palmprint image from the spatial domain to the frequency domain using the Fourier Transform. The features extracted in the frequency domain are used as indexes to the palmprint templates in the database, and the search for the best match is conducted in a layered fashion. The experimental results show that palmprint identification based on feature extraction in the frequency domain is effective in terms of accuracy and efficiency.
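
    The following minimal sketch illustrates the general idea of frequency-domain palmprint features: low-frequency FFT magnitudes of the image are collected into a compact descriptor that can index templates for a coarse-to-fine (layered) search. It is an illustration under assumed parameters, not the paper's exact feature definition.

```python
# Minimal sketch (assumed parameters, not the paper's exact features): low-frequency
# FFT magnitudes of a palmprint image as a compact index for a layered database search.
import numpy as np

def fft_index(palm_image, size=8):
    spectrum = np.fft.fftshift(np.fft.fft2(palm_image))
    h, w = spectrum.shape
    centre = np.abs(spectrum[h // 2 - size:h // 2 + size, w // 2 - size:w // 2 + size])
    return (centre / centre.sum()).ravel()    # normalised low-frequency energy

# Coarse-to-fine idea: compare compact descriptors first, refine on the shortlist.
query = fft_index(np.random.rand(128, 128))
gallery = [fft_index(np.random.rand(128, 128)) for _ in range(5)]
best = min(range(len(gallery)), key=lambda i: np.linalg.norm(query - gallery[i]))
```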

  • Finger-Vein Quality Assessment Based on Deep Features From Grayscale and Binary Images (No Access)

    Finger-vein verification is a highly secure biometric authentication modality that has been widely investigated in recent years. One of its challenges, however, is the possible degradation of image quality, which results in spurious and missing vein patterns and increases the verification error. Despite recent advances in finger-vein quality assessment, the proposed solutions are limited, as they depend on human expertise and domain knowledge to extract handcrafted features for assessing quality. We recently proposed the first deep neural network (DNN) framework for assessing finger-vein quality that does not require manual labeling of high- and low-quality images, as is the case for state-of-the-art methods, but infers such annotations automatically from an objective indicator: the biometric verification decision. This framework significantly outperformed existing methods, whether the input image is grayscale or binary. Motivated by this performance, we propose in this work a representation learning approach to finger-vein image quality, where a DNN jointly takes the grayscale and binary versions of the input image to predict vein quality. Our model learns a joint representation from grayscale and binary images for quality assessment. The experimental results, obtained on a large public dataset, demonstrate that our proposed method accurately identifies high- and low-quality images and outperforms other techniques in terms of equal error rate (EER) minimization, including our previous DNN models based either on grayscale or binary input.
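
    A rough sketch of how a network might consume the grayscale and binary versions of the same finger-vein image jointly is given below (PyTorch). The architecture is an assumption for illustration; the paper's DNN and its training labels (derived from verification decisions) are not reproduced here.

```python
# Illustrative sketch only (assumed architecture, not the authors' network): a
# small CNN that takes the grayscale and binarised finger-vein images as two
# input channels and predicts a scalar quality score.
import torch
import torch.nn as nn

class VeinQualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)                   # quality logit

    def forward(self, grayscale, binary):
        x = torch.cat([grayscale, binary], dim=1)      # joint grayscale + binary input
        return self.head(self.features(x).flatten(1))

# Training labels would come from the verification decision, as described above.
model = VeinQualityNet()
quality = model(torch.rand(4, 1, 64, 128), torch.rand(4, 1, 64, 128))
```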

  • FUNDAMENTALS OF BIOMETRIC AUTHENTICATION TECHNOLOGIES (No Access)

    Biometric authentication technologies are used for the machine identification of individuals. The human-generated patterns used may be primarily physiological or behavioral, but usually contain elements of both components. Examples include voice, handwriting, face, eye and fingerprint identification. In this paper, we look at these technologies and their applications in general, developing a systematic approach to classifying, analyzing and evaluating them. A general system model is shown and test results for a number of technologies are considered.

  • Finger Vein Recognition Model for Biometric Authentication Using Intelligent Deep Learning (No Access)

    In recent years, biometric authentication systems have remained a hot research topic, as they can recognize or authenticate a person by comparing their data to biometric data stored in a database. Fingerprints, palm prints, hand veins, finger veins, palm veins, and other anatomic or behavioral features have all been used to develop a variety of biometric approaches. Finger vein recognition (FVR), which examines the patterns of the finger veins for authentication, is a common method among the various biometrics. Finger vein acquisition, preprocessing, feature extraction, and authentication are all part of the proposed intelligent deep learning-based FVR (IDL-FVR) model. Finger-vein images are primarily captured using infrared imaging devices. Furthermore, a region-of-interest extraction process is carried out in order to retain the finger part. The shark smell optimization algorithm is used to properly tune the hyperparameters of the bidirectional long short-term memory model. Finally, an authentication process based on Euclidean distance is performed, which compares the features of the current finger vein image to those in the database; authentication is successful when the Euclidean distance is small, and unsuccessful otherwise. The IDL-FVR model surpassed earlier methods by accomplishing a maximum accuracy of 99.93%.
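
    The final Euclidean-distance matching step can be sketched in a few lines; the threshold value below is illustrative only, not the one used in the paper.

```python
# Minimal sketch of the Euclidean-distance authentication step; the threshold
# value is illustrative only.
import numpy as np

def authenticate(probe_features, enrolled_features, threshold=0.8):
    distance = np.linalg.norm(np.asarray(probe_features) - np.asarray(enrolled_features))
    return distance < threshold               # small distance -> same finger -> accept

print(authenticate(np.random.rand(64), np.random.rand(64)))
```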

  • In Your Face: Person Identification Through Ratios and Distances Between Facial Features (Open Access)

    These days, identification of a person is an integral part of many computer-based solutions. It is a key characteristic for access control, customized services, and proof of identity. Over the last couple of decades, many new techniques have been introduced for identifying human faces. This approach investigates human face identification based on frontal images by producing ratios from distances between the different facial features and their locations. Moreover, this extended version includes an investigation of identification based on the side profile by extracting and diagnosing the feature sets with geometric ratio expressions, which are calculated into feature vectors. The last stage uses weighted means to calculate the resemblance. The method follows an explainable Artificial Intelligence (XAI) approach. Findings, based on a small dataset, show that the approach offers promising results. Further research could have a great influence on how faces and face profiles can be identified. Performance of the proposed system is validated using metrics such as Precision, False Acceptance Rate, False Rejection Rate, and True Positive Rate. Multiple simulations indicate an Equal Error Rate of 0.89. This work is an extended version of the paper submitted to ACIIDS 2020.
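
    A minimal sketch of the ratio-and-distance idea follows: pairwise distances between facial landmarks are normalized into scale-free ratios, and two faces are compared by a weighted mean of per-ratio similarities. The landmark names, normalization and equal weights are assumptions for illustration, not the paper's exact feature set.

```python
# Hypothetical sketch: scale-free ratios from pairwise landmark distances,
# compared by a weighted mean of per-ratio similarities.
import numpy as np
from itertools import combinations

def ratio_vector(landmarks):
    """landmarks: dict name -> (x, y). Returns pairwise distances normalised by the largest."""
    pts = list(landmarks.values())
    d = np.array([np.hypot(ax - bx, ay - by)
                  for (ax, ay), (bx, by) in combinations(pts, 2)])
    return d / d.max()

def resemblance(a, b, weights=None):
    sims = 1.0 - np.abs(a - b) / np.maximum(a, b)     # per-ratio similarity in [0, 1]
    return float(np.average(sims, weights=weights))

face1 = {"eye_l": (30, 40), "eye_r": (70, 40), "nose": (50, 60), "mouth": (50, 80)}
face2 = {"eye_l": (32, 41), "eye_r": (69, 39), "nose": (51, 62), "mouth": (50, 82)}
print(resemblance(ratio_vector(face1), ratio_vector(face2)))
```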

  • Feature Extraction with GMDH-Type Neural Networks for EEG-Based Person Identification (No Access)

    The brain activity observed on EEG electrodes is influenced by volume conduction and by the functional connectivity of a person performing a task. When the task is a biometric test, the EEG signals represent a unique “brain print”, which is defined by the functional connectivity represented by the interactions between electrodes, whilst the conduction components cause trivial correlations. Orthogonalization using autoregressive modeling minimizes the conduction components, and the residuals are then related to features correlated with the functional connectivity. However, the orthogonalization can be unreliable for high-dimensional EEG data. We have found that the dimensionality can be significantly reduced if the baselines required for estimating the residuals are modeled using relevant electrodes. In our approach, the required models are learnt by a Group Method of Data Handling (GMDH) algorithm which we have made capable of discovering reliable models from multidimensional EEG data. In our experiments on the EEG-MMI benchmark data, which includes 109 participants, the proposed method correctly identified all the subjects and provided a statistically significant (p<0.01) improvement in identification accuracy. The experiments have shown that the proposed GMDH method can learn new features from multi-electrode EEG data that are capable of improving the accuracy of biometric identification.
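
    The residual idea can be sketched as follows, with an ordinary least-squares baseline standing in for the learnt GMDH model: each electrode is predicted from a few other electrodes, and the residual, with the conduction-related component reduced, is summarized as a feature. Electrode choices and data are illustrative.

```python
# Simplified sketch (ordinary least squares stands in for the learnt GMDH model;
# electrode choices are arbitrary): predict an electrode from a few others and keep
# the residual, with the conduction-related part reduced, as a scalar feature.
import numpy as np

def residual_feature(eeg, target, predictors):
    """eeg: array of shape (n_electrodes, n_samples)."""
    X = eeg[predictors].T                     # baseline regressors
    y = eeg[target]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ beta                   # conduction-reduced component
    return residual.var()

eeg = np.random.randn(64, 1000)               # 64 electrodes, 1000 samples
features = [residual_feature(eeg, t, [0, 1, 2]) for t in range(3, 64)]
```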

  • FACE AUTHENTICATION USING RECOGNITION-BY-PARTS, BOOSTING AND TRANSDUCTION (No Access)

    The paper describes an integrated recognition-by-parts architecture for reliable and robust face recognition. Reliability and robustness refer, respectively, to the ability to deploy full-fledged, operational biometric engines and to the handling of adverse image conditions that include, among others, uncooperative subjects, occlusion, and temporal variability. The architecture proposed is model-free and non-parametric. The conceptual framework draws support from discriminative methods using likelihood ratios. At the conceptual level it links forensics and biometrics, while at the implementation level it links the Bayesian framework and statistical learning theory (SLT). Layered categorization starts with face detection using implicit rather than explicit segmentation. It proceeds with face authentication, which involves feature selection of local patch instances including dimensionality reduction, exemplar-based clustering of patches into parts, and data fusion for matching using boosting driven by parts that play the role of weak learners. Face authentication shares the same implementation with face detection. The implementation, driven by transduction, employs proximity and typicality (ranking) realized using strangeness and p-values, respectively. The feasibility and reliability of the proposed architecture are illustrated using FRGC data. The paper concludes with suggestions for augmenting and enhancing the scope and utility of the proposed architecture.
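
    As an illustration of the strangeness and p-value machinery mentioned above, the sketch below computes a nearest-neighbour strangeness score and a rank-based p-value; the distance, the value of k and the data are assumptions for the example, not the paper's exact formulation.

```python
# Rough sketch of strangeness and p-values (assumed formulation): strangeness is
# the ratio of distances to same-class neighbours over distances to other-class
# neighbours; the p-value is the candidate's rank among reference strangeness values.
import numpy as np

def strangeness(x, same_class, other_class, k=3):
    d_same = np.sort(np.linalg.norm(same_class - x, axis=1))[:k].sum()
    d_other = np.sort(np.linalg.norm(other_class - x, axis=1))[:k].sum()
    return d_same / (d_other + 1e-12)

def p_value(candidate, references):
    references = np.asarray(references)
    return (np.sum(references >= candidate) + 1) / (references.size + 1)

rng = np.random.default_rng(0)
genuine, impostor = rng.normal(0, 1, (20, 8)), rng.normal(3, 1, (20, 8))
refs = [strangeness(g, np.delete(genuine, i, axis=0), impostor)
        for i, g in enumerate(genuine)]
print(p_value(strangeness(rng.normal(0, 1, 8), genuine, impostor), refs))
```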

  • IMPROVEMENT OF IRIS RECOGNITION PERFORMANCE USING REGION-BASED ACTIVE CONTOURS, GENETIC ALGORITHMS AND SVMs (No Access)

    Most existing iris recognition algorithms focus on the processing and recognition of ideal iris images acquired in a controlled environment. In this paper, we process nonideal iris images that are captured in an unconstrained situation and are affected severely by gaze deviation, eyelid and eyelash occlusions, nonuniform intensity, motion blur, reflections, etc. The proposed iris recognition algorithm has three novelties compared to previous works: first, we deploy a region-based active contour model to segment a nonideal iris image with intensity inhomogeneity; second, genetic algorithms (GAs) are deployed to select the subset of informative texture features without compromising the recognition accuracy; third, to speed up the matching process and to control the misclassification error, we apply a combined approach called adaptive asymmetrical support vector machines (AASVMs). The verification and identification performance of the proposed scheme is validated on three challenging iris image datasets, namely the ICE 2005, the WVU Nonideal, and the UBIRIS Version 1.

  • Ear Recognition Based on Fusion of Ear and Tragus Under Different Challenges (No Access)

    This paper proposes a 2D ear recognition approach based on the fusion of the ear and the tragus using a score-level fusion strategy. An attempt is made to overcome the effects of partial occlusion, pose variation and weak illumination, since the accuracy of ear recognition may be reduced when one or more of these challenges is present. In this study, the effect of each of the aforementioned challenges is estimated separately, and many ear samples that are affected by two different challenges concurrently are also considered. The tragus is used as a biometric trait because it is often free from occlusion; it also provides discriminative features even under different poses and illuminations. The features are extracted using local binary patterns, and the evaluation has been done on three datasets of the USTB database. It has been observed that the fusion of ear and tragus can improve the recognition performance compared to the unimodal systems. Experimental results show that the proposed method enhances the recognition rates by fusing the non-occluded parts with the tragus in the cases of partial occlusion, pose variation and weak illumination. The proposed method performs better than feature-level fusion methods and most state-of-the-art ear recognition systems.
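
    A minimal sketch of the LBP-plus-score-level-fusion idea is given below; the LBP parameters, the histogram matching score and the fusion weight are assumptions for illustration, not the values used in the paper.

```python
# Illustrative sketch (LBP parameters, matching score and fusion weight are
# assumptions): LBP histograms for ear and tragus, matched separately, then
# combined by weighted-sum score-level fusion.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(region, P=8, R=1):
    lbp = local_binary_pattern(region, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def match_score(h1, h2):
    return 1.0 - 0.5 * np.abs(h1 - h2).sum()          # 1 means identical histograms

def fused_score(ear_probe, tragus_probe, ear_ref, tragus_ref, w_ear=0.6):
    s_ear = match_score(lbp_hist(ear_probe), lbp_hist(ear_ref))
    s_tragus = match_score(lbp_hist(tragus_probe), lbp_hist(tragus_ref))
    return w_ear * s_ear + (1.0 - w_ear) * s_tragus   # score-level fusion
```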

  • RBS: A ROBUST BIMODAL SYSTEM FOR FACE RECOGNITION (No Access)

    During the last few years, many algorithms have been proposed for face recognition using classical 2D images. However, it is necessary to deal with occlusions when the subject is wearing sunglasses, scarves and the like. At the same time, ear recognition is emerging as a promising new biometric for people recognition, even if the related literature appears to be somewhat underdeveloped. In this paper, several hybrid face/ear recognition systems are investigated. The system is based on Iterated Function Systems (IFS) theory, applied to both the face and the ear, resulting in a bimodal architecture. One advantage is that the information used for the indexing and recognition of the face/ear can be made local, which makes the method more robust to possible occlusions. The distribution of similarities in the input images is exploited as a signature for the identity of the subject. The amount of information provided by each component of the face and the ear image has been assessed, first independently and then jointly. Finally, the results show that the system significantly outperforms existing state-of-the-art approaches.

  • Enhancement of Vascular Patterns in Palm Images Using Various Image Enhancement Techniques for Person Identification (No Access)

    Image classification is a complicated process of classifying an image based on its visual representation. This paper portrays the need to adapt and apply a suitable image enhancement and denoising technique in order to arrive at a successful classification of remotely captured data. Biometric properties that are widely explored today are very important for authentication purposes. Noise may result in incorrect vein detection in the captured image, thus explaining the need for a better enhancement technique. This work provides a subjective and objective analysis of the performance of various image enhancement filters in the spatial domain. After performing these pre-processing steps, the vein map and the corresponding vein graph can be easily obtained with minimal extraction steps, after which an appropriate graph matching method can be used to compare hand-vein graphs and thus perform person authentication. The analysis results compare the enhancement filters and identify the one that performs best. Image quality measures (IQMs) are also tabulated for the evaluation of image quality.
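
    As an illustration of comparing spatial-domain enhancement filters with an image quality measure, the sketch below ranks a few common filters by a crude contrast measure; the filters and the quality measure are chosen for demonstration and are not necessarily those evaluated in the paper.

```python
# Demonstration sketch (filters and quality measure chosen for illustration):
# rank a few spatial-domain enhancement filters on a vein image by a crude
# contrast-based image quality measure.
import numpy as np
from scipy.ndimage import median_filter
from skimage import exposure

def local_contrast(image):
    return float(np.std(image))                       # simple IQM stand-in

def compare_filters(image):
    candidates = {
        "histogram_equalization": exposure.equalize_hist(image),
        "clahe": exposure.equalize_adapthist(image, clip_limit=0.02),
        "median": median_filter(image, size=3),
    }
    return sorted(((local_contrast(img), name) for name, img in candidates.items()),
                  reverse=True)

print(compare_filters(np.random.rand(128, 128)))
```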

  • Adversarial Detection and Fusion Method for Multispectral Palmprint Recognition (No Access)

    As a kind of promising biometric technology, multispectral palmprint recognition methods have attracted increasing attention in security applications due to their high recognition accuracy and ease of use. It is worth noting that although multispectral palmprint data contain rich complementary information, multispectral palmprint recognition methods are still vulnerable to adversarial attacks. Even if only the image of one spectrum is attacked, it can have a catastrophic impact on the recognition results. Therefore, we propose a robustness-enhanced multispectral palmprint recognition method, comprising a model-interpretability-based adversarial detection module and a robust multispectral fusion module. Inspired by model interpretation techniques, we found that there is a large difference between clean palmprint images and adversarial examples after CAM visualization. Using the visualized images to build an adversarial detector leads to better detection results. Finally, the weights of clean images and adversarial examples in the fusion layer are dynamically adjusted to obtain the correct recognition results. Experiments have shown that our method makes full use of the image features that are not attacked and effectively improves the robustness of the model.

  • INTEGRATING IMAGE QUALITY IN 2ν-SVM BIOMETRIC MATCH SCORE FUSION (No Access)

    This paper proposes an intelligent 2ν-support vector machine based match score fusion algorithm to improve the performance of face and iris recognition by integrating the quality of images. The proposed algorithm applies redundant discrete wavelet transform to evaluate the underlying linear and non-linear features present in the image. A composite quality score is computed to determine the extent of smoothness, sharpness, noise, and other pertinent features present in each subband of the image. The match score and the corresponding quality score of an image are fused using 2ν-support vector machine to improve the verification performance. The proposed algorithm is experimentally validated using the FERET face database and the CASIA iris database. The verification performance and statistical evaluation show that the proposed algorithm outperforms existing fusion algorithms.
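
    The fusion step can be illustrated with a two-dimensional [match score, quality score] input to an SVM; scikit-learn's standard SVC is used below as a stand-in for the 2ν-SVM formulation, and the synthetic scores are for demonstration only.

```python
# Stand-in sketch: fuse a match score with an image quality score by training a
# standard SVM on two-dimensional [match, quality] vectors (scikit-learn's SVC
# replaces the 2nu-SVM of the paper; all scores here are synthetic).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
genuine = np.column_stack([rng.normal(0.8, 0.1, 200), rng.uniform(0.3, 1.0, 200)])
impostor = np.column_stack([rng.normal(0.4, 0.1, 200), rng.uniform(0.3, 1.0, 200)])
X = np.vstack([genuine, impostor])
y = np.r_[np.ones(200), np.zeros(200)]

fusion = SVC(kernel="rbf", probability=True).fit(X, y)
accept_probability = fusion.predict_proba([[0.75, 0.9]])[0, 1]   # fused verification score
```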

  • COMPARISON OF ROC AND LIKELIHOOD DECISION METHODS IN AUTOMATIC FINGERPRINT VERIFICATION (No Access)

    The biometric verification task is to determine whether or not an input and a template belong to the same individual. In the context of automatic fingerprint verification the task consists of three steps: feature extraction, where features (typically minutiae) are extracted from each fingerprint, scoring, where the degree of match between the two sets of features is determined, and decision, where the score is used to accept or reject the hypothesis that the input and template belong to the same individual. The paper focuses on the final decision step, which is a binary classification problem involving a single score variable. The commonly used decision method is to learn a score threshold from a labeled set of inputs and templates, by first determining the receiver operating characteristic (ROC) of the task. The ROC method works well when there is a well-registered fingerprint image. The paper shows that when there is uncertainty due to fingerprint quality, e.g. the input is a latent print or a partial print, the decision method can be improved by using the likelihood ratio of match/non match. The likelihood ratio is obtained by modeling the distributions of same finger and different finger scores using parametric distributions. The parametric forms considered are Gaussian and Gamma distributions whose parameters are learnt from labeled training samples. The performances of the likelihood and ROC methods are compared for varying numbers of minutiae points available for verification. Using either Gaussian or Gamma parametric distributions, the likelihood method has a lower error rate than the ROC method when few minutiae points are available. Likelihood and ROC methods converge to the same accuracy as more minutiae points are available.
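
    A sketch of the likelihood-ratio decision follows, using Gaussian score models fitted with SciPy; a Gamma model can be substituted in the same way. The scores and threshold are synthetic and illustrative.

```python
# Sketch of the likelihood-ratio decision with Gaussian score models (a Gamma
# model can be fitted with scipy.stats.gamma in the same way); scores are synthetic.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
same_scores = rng.normal(0.7, 0.1, 500)       # labelled same-finger training scores
diff_scores = rng.normal(0.3, 0.1, 500)       # labelled different-finger training scores

mu_s, sd_s = norm.fit(same_scores)
mu_d, sd_d = norm.fit(diff_scores)

def accept(score, tau=1.0):
    """Accept the same-finger hypothesis when the likelihood ratio exceeds tau."""
    lr = norm.pdf(score, mu_s, sd_s) / norm.pdf(score, mu_d, sd_d)
    return lr > tau

print(accept(0.55))
```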

  • REGION COVARIANCE MATRICES AS FEATURE DESCRIPTORS FOR PALMPRINT RECOGNITION USING GABOR FEATURES (No Access)

    Region covariance matrices (RCMs) have been developed as feature descriptors due to their low dimensionality and their independence of scale and illumination. How to define a feature-mapping vector that yields RCMs with strong discriminating ability is still an open issue. This paper focuses on finding a more efficient feature-mapping vector for RCMs as palmprint descriptors based on Gabor magnitude and phase (GMP) information. Specifically, the Gabor magnitude (GM) features of each palmprint image approximately follow a lognormal distribution. For palmprint recognition, the logarithmic transformation of the GM proves to be important for the discriminating ability of the corresponding RCMs. All experiments are performed on the public Hong Kong Polytechnic University (PolyU) Palmprint Database of 7752 images. The results demonstrate the efficiency of our proposed method, and also show that adding pixel locations and the intensity component to the feature-mapping vector has a negative effect on palmprint recognition performance for our proposed Log_GMP-based RCM method.
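
    The construction of an RCM descriptor from log-Gabor-magnitude feature maps can be sketched as follows; the small Gabor filter bank is an assumption for illustration, and the paper's exact feature-mapping vector may differ.

```python
# Simplified sketch: a region covariance matrix (RCM) descriptor built from
# log-transformed Gabor magnitude maps. The small filter bank is an assumption.
import numpy as np
from skimage.filters import gabor

def log_gabor_magnitude_maps(image, frequencies=(0.1, 0.2),
                             thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    maps = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(image, frequency=f, theta=t)
            maps.append(np.log1p(np.hypot(real, imag)))   # log of Gabor magnitude
    return np.stack(maps)

def region_covariance(image):
    F = log_gabor_magnitude_maps(image)
    F = F.reshape(F.shape[0], -1)                         # d feature maps x n pixels
    return np.cov(F)                                      # d x d RCM descriptor

rcm = region_covariance(np.random.rand(64, 64))
```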

  • PERFORMANCE EVALUATION OF DISTANCE METRICS: APPLICATION TO FINGERPRINT RECOGNITION (No Access)

    Distance metrics are widely used in similarity estimation, which plays a key role in fingerprint recognition. In this work we present a detailed comparison of 29 distinct distance metrics. Features of fingerprint images are extracted using the Fast Fourier Transform (FFT). Recognition rate, receiver operating characteristic (ROC) curve, and time and space complexity are used for the evaluation of each distance metric. To consolidate our conclusions we used the standard fingerprint database available at Bologna University and the FVC2000 databases. After evaluating the 29 distinct distance metrics we found that the Sorgel distance metric performs best. The genuine acceptance rate (GAR) of the Sorgel distance metric is observed to be ~5% higher than that of the traditional Euclidean distance metric at low false acceptance rates (FAR). The Sorgel distance gives good GAR at low FAR with moderate computational complexity.
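
    For reference, the Sorgel (Soergel) distance on non-negative feature vectors is the sum of absolute differences divided by the sum of element-wise maxima; a short sketch with illustrative FFT-magnitude features follows.

```python
# Sketch of the Sorgel (Soergel) distance on non-negative feature vectors,
# here applied to simple illustrative FFT-magnitude descriptors.
import numpy as np

def soergel(x, y):
    x, y = np.abs(np.asarray(x, float)), np.abs(np.asarray(y, float))
    return np.abs(x - y).sum() / np.maximum(x, y).sum()

def fft_features(image, keep=64):
    mags = np.abs(np.fft.fft2(image)).ravel()
    return np.sort(mags)[::-1][:keep]         # largest spectral magnitudes as a descriptor

a, b = np.random.rand(96, 96), np.random.rand(96, 96)
print(soergel(fft_features(a), fft_features(b)))
```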

  • Geometric and Textural Cues for Automatic Kinship Verification (No Access)

    Automatic kinship verification aims to recognize the degree of kinship between two individuals from their facial images; it has possible applications in image retrieval and annotation, forensics and historical studies. This is a recent and challenging problem, which must deal with different degrees of kinship and variations in age and gender. Our work explores the computer identification of parent–child pairs using a combination of (i) features of different natures, based on geometric and textural data, (ii) feature selection and (iii) state-of-the-art classifiers. Experiments show that the proposed approach provides a valuable solution to the kinship verification problem, as suggested by its comparison with different methods on the same data and under the same experimental protocols. We further show the good generalization capabilities of our method in several cross-database experiments.

  • Fuzzy Brain Storm Optimization and Adaptive Thresholding for Multimodal Vein-Based Recognition System (No Access)

    Nowadays, the conventional security method of using passwords can easily be forged by an unauthorized person. Hence, biometric cues such as fingerprints, voice, palm print, and face are preferable for recognition. To preserve liveness, another important biometric trait is the vein pattern, which is formed by the subcutaneous blood vessels and contains all the achievable recognition properties. Accordingly, in this paper, we propose a multibiometric system using the palm vein, hand vein, and finger vein. Here, a holoentropy-based thresholding mechanism is newly developed for extracting the vein patterns. Also, a Fuzzy Brain Storm Optimization (FBSO) method is proposed for score-level fusion to achieve better recognition performance. These two contributions are effectively included in the biometric recognition system, and the performance analysis of the proposed method is carried out using benchmark datasets of palm vein, finger vein, and hand vein images. The quantitative results are analyzed with the help of FAR, FRR, and accuracy. The outcomes show that the proposed FBSO approach attained a higher accuracy (81.3%) than the existing methods.