  Bestsellers

  • PALMPRINT IDENTIFICATION BY FOURIER TRANSFORM

    Palmprint identification refers to searching a database for the palmprint template that comes from the same palm as a given palmprint input. The identification process involves preprocessing, feature extraction, feature matching and decision-making. Addressing a key step in this process, we propose a new feature extraction method that converts a palmprint image from the spatial domain to the frequency domain using the Fourier transform. The features extracted in the frequency domain are used as indexes to the palmprint templates in the database, and the search for the best match is conducted in a layered fashion. Experimental results show that palmprint identification based on feature extraction in the frequency domain is effective in terms of both accuracy and efficiency.
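
    As a rough illustration of frequency-domain feature extraction (not the paper's exact indexing scheme), the sketch below sums 2D Fourier magnitude energy over concentric frequency rings; the ring count and normalization are assumptions.

```python
import numpy as np

def fourier_ring_features(palm: np.ndarray, n_rings: int = 8) -> np.ndarray:
    """Summarize a palmprint image by the energy in concentric frequency rings."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(palm)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - cy, xx - cx)
    r_max = radius.max()
    features = np.empty(n_rings)
    for i in range(n_rings):
        ring = (radius >= i * r_max / n_rings) & (radius < (i + 1) * r_max / n_rings)
        features[i] = spectrum[ring].sum()
    return features / features.sum()   # relative energy distribution across rings

# Candidate templates whose ring features are closest to the query would be
# examined first, giving a coarse-to-fine (layered) search over the database.
query_features = fourier_ring_features(np.random.rand(128, 128))
```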

  • COMPARISON OF ROC AND LIKELIHOOD DECISION METHODS IN AUTOMATIC FINGERPRINT VERIFICATION

    The biometric verification task is to determine whether or not an input and a template belong to the same individual. In the context of automatic fingerprint verification the task consists of three steps: feature extraction, where features (typically minutiae) are extracted from each fingerprint; scoring, where the degree of match between the two sets of features is determined; and decision, where the score is used to accept or reject the hypothesis that the input and template belong to the same individual. The paper focuses on the final decision step, which is a binary classification problem involving a single score variable. The commonly used decision method is to learn a score threshold from a labeled set of inputs and templates by first determining the receiver operating characteristic (ROC) of the task. The ROC method works well when the fingerprint image is well registered. The paper shows that when there is uncertainty due to fingerprint quality, e.g. when the input is a latent or partial print, the decision method can be improved by using the likelihood ratio of match/non-match. The likelihood ratio is obtained by modeling the distributions of same-finger and different-finger scores using parametric distributions. The parametric forms considered are Gaussian and Gamma distributions whose parameters are learned from labeled training samples. The performances of the likelihood and ROC methods are compared for varying numbers of minutiae points available for verification. Using either Gaussian or Gamma parametric distributions, the likelihood method has a lower error rate than the ROC method when few minutiae points are available, and the two methods converge to the same accuracy as more minutiae points become available.
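
    A minimal sketch of the two decision rules on synthetic scores is given below; the Gamma score models and the 1% target false accept rate are placeholders for illustration only.

```python
import numpy as np
from scipy import stats

# Hypothetical labeled training scores (same-finger vs. different-finger).
rng = np.random.default_rng(0)
genuine = rng.gamma(shape=9.0, scale=5.0, size=500)     # same-finger scores
impostor = rng.gamma(shape=2.0, scale=5.0, size=500)    # different-finger scores

# ROC-style decision: pick the score threshold giving a target false accept rate.
target_far = 0.01
threshold = np.quantile(impostor, 1.0 - target_far)

# Likelihood-ratio decision: fit parametric models to both score distributions
# (Gamma here; stats.norm.fit would give the Gaussian variant) and accept when
# P(score | same finger) / P(score | different finger) exceeds 1.
g_params = stats.gamma.fit(genuine, floc=0)
i_params = stats.gamma.fit(impostor, floc=0)

def accept_by_likelihood(score: float) -> bool:
    lr = stats.gamma.pdf(score, *g_params) / stats.gamma.pdf(score, *i_params)
    return lr > 1.0

print("ROC threshold:", threshold, "LR accepts score 40:", accept_by_likelihood(40.0))
```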

  • FACE AUTHENTICATION USING RECOGNITION-BY-PARTS, BOOSTING AND TRANSDUCTION

    The paper describes an integrated recognition-by-parts architecture for reliable and robust face recognition. Reliability here refers to the ability to deploy full-fledged, operational biometric engines, while robustness refers to handling adverse image conditions that include, among others, uncooperative subjects, occlusion, and temporal variability. The proposed architecture is model-free and non-parametric. The conceptual framework draws support from discriminative methods using likelihood ratios. At the conceptual level it links forensics and biometrics, while at the implementation level it links the Bayesian framework and statistical learning theory (SLT). Layered categorization starts with face detection using implicit rather than explicit segmentation. It proceeds with face authentication, which involves feature selection of local patch instances including dimensionality reduction, exemplar-based clustering of patches into parts, and data fusion for matching using boosting driven by parts that play the role of weak learners. Face authentication shares the same implementation with face detection. The implementation, driven by transduction, employs proximity and typicality (ranking) realized using strangeness and p-values, respectively. The feasibility and reliability of the proposed architecture are illustrated using FRGC data. The paper concludes with suggestions for augmenting and enhancing the scope and utility of the proposed architecture.
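
    The strangeness and p-value machinery can be sketched in a few lines using the usual transductive k-nearest-neighbor formulation; the paper's patch-level features and boosting stage are not reproduced here.

```python
import numpy as np

def strangeness(sample, same_class, other_class, k=3):
    """Strangeness = sum of k nearest same-class distances divided by
    sum of k nearest other-class distances (smaller = more typical)."""
    d_same = np.sort(np.linalg.norm(same_class - sample, axis=1))[:k].sum()
    d_other = np.sort(np.linalg.norm(other_class - sample, axis=1))[:k].sum()
    return d_same / (d_other + 1e-12)

def p_value(candidate_strangeness, reference_strangeness):
    """Transductive p-value: fraction of reference examples at least as strange
    as the candidate (with the usual +1 smoothing)."""
    ref = np.asarray(reference_strangeness)
    return (np.sum(ref >= candidate_strangeness) + 1) / (ref.size + 1)

# Toy usage with random "patch descriptors".
rng = np.random.default_rng(1)
same, other = rng.standard_normal((20, 16)), rng.standard_normal((40, 16)) + 2.0
s = strangeness(rng.standard_normal(16), same, other)
print(p_value(s, [strangeness(x, same, other) for x in same]))
```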

  • DYNAMIC SHAPE STYLE ANALYSIS: BILINEAR AND MULTILINEAR HUMAN IDENTIFICATION WITH TEMPORAL NORMALIZATION

    Modeling and analyzing the dynamic shape of human motion is a challenging task owing to temporal variations in the shape and to multiple sources of observed shape variation such as viewpoint, motion speed, clothing, etc. We present a new framework for dynamic shape analysis based on temporal normalization and factorized shape style analysis. Using a nonlinear generative model with motion manifold embedding in a low-dimensional space, we detect cycles of periodic motion such as gait in different views and synthesize temporally aligned shape sequences from the same type of motion at different speeds. Bilinear analysis of the temporally aligned shape sequences decomposes dynamic motion into time-invariant shape style factors and time-dependent motion factors. We extend the bilinear model into a tensor shape model, a multilinear decomposition of dynamic shape sequences for view-invariant shape style representations. The shape style is a view-invariant, time-invariant, and speed-invariant shape signature and is used as a feature vector for human identification. The shape style can be adapted to new environmental conditions by iterative estimation of style and content factors to reflect new observation conditions. We present experimental results for gait recognition using the CMU MoBo gait database and the USF gait challenge database.
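
    A minimal sketch of an asymmetric bilinear (style/content) factorization by SVD, on placeholder data standing in for temporally normalized shape sequences; the nonlinear manifold embedding and the tensor extension are not reproduced.

```python
import numpy as np

# Hypothetical data: one temporally normalized shape sequence per subject,
# flattened to a row vector (n_subjects x (n_frames * shape_dim)).
rng = np.random.default_rng(2)
Y = rng.standard_normal((20, 32 * 50))

# Asymmetric bilinear fit by SVD: Y ~ style @ content, where each row of
# `style` is a time-invariant signature and `content` spans the shared,
# time-dependent motion basis.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
k = 5                                   # number of style/content dimensions kept
style = U[:, :k] * s[:k]                # per-subject style vectors (identity features)
content = Vt[:k]                        # shared time-dependent content basis

# A new sequence would be identified by projecting onto the content basis and
# nearest-neighbor matching of the resulting style vector.
new_style = Y[0] @ np.linalg.pinv(content)
```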

  • IMPROVEMENT OF IRIS RECOGNITION PERFORMANCE USING REGION-BASED ACTIVE CONTOURS, GENETIC ALGORITHMS AND SVMs

    Most existing iris recognition algorithms focus on the processing and recognition of ideal iris images acquired in a controlled environment. In this paper, we process nonideal iris images that are captured in an unconstrained situation and are severely affected by gaze deviation, eyelid and eyelash occlusions, nonuniform intensity, motion blur, reflections, etc. The proposed iris recognition algorithm has three novelties compared with previous works: first, we deploy a region-based active contour model to segment a nonideal iris image with intensity inhomogeneity; second, genetic algorithms (GAs) are deployed to select a subset of informative texture features without compromising the recognition accuracy; third, to speed up the matching process and to control the misclassification error, we apply a combined approach called adaptive asymmetrical support vector machines (AASVMs). The verification and identification performance of the proposed scheme is validated on three challenging iris image datasets, namely the ICE 2005, the WVU Nonideal, and the UBIRIS Version 1 datasets.
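
    A toy genetic-algorithm feature-subset search is sketched below; the Fisher-style separability fitness and the synthetic data are stand-ins for the recognition-accuracy-driven selection in the paper, and the active contour and AASVM stages are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical texture features: 200 samples, 2 classes, 64 candidate features.
X = rng.standard_normal((200, 64))
y = rng.integers(0, 2, 200)
X[y == 1, :8] += 1.5                       # only the first 8 features are informative

def fitness(mask):
    """Fisher-style class separability of the selected subset (a proxy for accuracy)."""
    if mask.sum() == 0:
        return -np.inf
    a, b = X[y == 0][:, mask], X[y == 1][:, mask]
    sep = (a.mean(0) - b.mean(0)) ** 2 / (a.var(0) + b.var(0) + 1e-9)
    return sep.mean() - 0.01 * mask.sum()  # mild penalty on subset size

pop = rng.random((30, 64)) < 0.5           # population of binary feature masks
for _ in range(50):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]               # elitist selection
    children = []
    while len(children) < len(pop):
        p1, p2 = parents[rng.integers(0, 10, 2)]
        cut = rng.integers(1, 63)
        child = np.concatenate([p1[:cut], p2[cut:]])      # one-point crossover
        flip = rng.random(64) < 0.02                      # mutation
        children.append(np.where(flip, ~child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("features kept:", int(best.sum()))
```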

  • REGION COVARIANCE MATRICES AS FEATURE DESCRIPTORS FOR PALMPRINT RECOGNITION USING GABOR FEATURES

    Region covariance matrices (RCMs) have been developed as feature descriptors because of their low dimensionality and their independence of scale and illumination. How to define a feature mapping vector that yields RCMs with strong discriminating ability is still an open issue. This paper focuses on finding a more efficient feature mapping vector for RCMs as palmprint descriptors based on Gabor magnitude and phase (GMP) information. Specifically, the Gabor magnitude (GM) features of each palmprint image approximate a lognormal distribution. For palmprint recognition, the logarithmic transformation of the GM proves to be important for the discriminating ability of the corresponding RCMs. All experiments are performed on the public Hong Kong Polytechnic University (PolyU) Palmprint Database of 7752 images. The results demonstrate the efficiency of the proposed method, and also show that adding pixel locations and an intensity component to the feature mapping vector has a negative effect on palmprint recognition performance for the proposed Log_GMP-based RCM method.
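
    A minimal sketch of building a region covariance descriptor and comparing two of them with the usual affine-invariant metric; the per-pixel feature maps (e.g. log-magnitude Gabor responses, with locations and raw intensity deliberately excluded) are assumed to be given.

```python
import numpy as np

def region_covariance(feature_maps: np.ndarray) -> np.ndarray:
    """Covariance of per-pixel feature vectors over a region.

    feature_maps: array of shape (d, H, W), one channel per mapped feature.
    """
    d = feature_maps.shape[0]
    vectors = feature_maps.reshape(d, -1)   # d variables, H*W observations
    return np.cov(vectors)

def riemannian_distance(c1: np.ndarray, c2: np.ndarray) -> float:
    """Affine-invariant distance between covariance matrices, the metric
    commonly used to compare RCM descriptors."""
    eigvals = np.linalg.eigvals(np.linalg.solve(c1, c2)).real
    return float(np.sqrt(np.sum(np.log(eigvals) ** 2)))

# Toy usage with random feature maps standing in for Gabor responses.
rcm_a = region_covariance(np.random.rand(6, 32, 32))
rcm_b = region_covariance(np.random.rand(6, 32, 32))
print(riemannian_distance(rcm_a, rcm_b))
```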

  • MEAN AND STANDARD DEVIATION AS FEATURES FOR PALMPRINT RECOGNITION BASED ON GABOR FILTERS

    The two-dimensional (2D) Gabor function has been recognized as a very useful tool for image feature extraction, owing to its optimal localization properties in both the spatial and the frequency domain. This paper presents a novel palmprint feature extraction method based on the statistics of the decomposition coefficients of the Gabor wavelet transform. It is experimentally found that the magnitude coefficients of the Gabor wavelet transform within each subband closely approximate a lognormal distribution. Based on this fact, we create the palmprint representation using two simple statistics (mean and standard deviation) as feature components, computed after applying a logarithmic transformation to the Gabor-filtered magnitude coefficients of each subband at different orientations and scales. The optimal number of Gabor filters and the orientation of each Gabor filter are determined experimentally. For palmprint recognition, the popular Fisher linear discriminant (FLD) analysis is further applied to the constructed feature vectors to extract discriminative features and reduce dimensionality. All experiments are executed on both the CCD-based Hong Kong PolyU Palmprint Database of 7752 images and the scanner-based BJTU_PalmprintDB (V1.0) of 3460 images. The results demonstrate the effectiveness of the proposed palmprint representation in achieving improved recognition performance.
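
    A compact sketch of the mean/standard-deviation representation, assuming scikit-image's gabor filter and placeholder frequencies and orientations rather than the experimentally tuned settings described above.

```python
import numpy as np
from skimage.filters import gabor

def log_gabor_stats(image: np.ndarray,
                    frequencies=(0.1, 0.2, 0.3),
                    n_orientations: int = 6) -> np.ndarray:
    """Mean and standard deviation of log-transformed Gabor magnitudes per subband."""
    feats = []
    for f in frequencies:
        for i in range(n_orientations):
            real, imag = gabor(image, frequency=f, theta=i * np.pi / n_orientations)
            log_mag = np.log1p(np.hypot(real, imag))   # log of the magnitude response
            feats.extend([log_mag.mean(), log_mag.std()])
    return np.asarray(feats)

# The resulting vectors would then be projected with FLD/LDA before matching.
feature_vector = log_gabor_stats(np.random.rand(64, 64))
```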

  • PERFORMANCE EVALUATION OF DISTANCE METRICS: APPLICATION TO FINGERPRINT RECOGNITION

    Distance metrics are widely used in similarity estimation, which plays a key role in fingerprint recognition. In this work we present a detailed comparison of 29 distinct distance metrics. Features of fingerprint images are extracted using the Fast Fourier Transform (FFT). Recognition rate, the receiver operating characteristic (ROC) curve, and time and space complexity are used to evaluate each distance metric. To consolidate our conclusions we used the standard fingerprint database available from Bologna University and the FVC2000 databases. After evaluating the 29 distance metrics we found that the Sorgel distance metric performs best. The genuine acceptance rate (GAR) of the Sorgel distance metric is observed to be ~5% higher than that of the traditional Euclidean distance metric at low false acceptance rates (FAR). The Sorgel distance gives a good GAR at low FAR with moderate computational complexity.
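
    For reference, the Sorgel (also spelled Soergel) and Euclidean distances on non-negative feature vectors (e.g. FFT magnitudes) can be written as follows.

```python
import numpy as np

def sorgel(x: np.ndarray, y: np.ndarray) -> float:
    """Sorgel distance: sum of absolute differences normalized by the sum of
    element-wise maxima (assumes non-negative feature values)."""
    return float(np.abs(x - y).sum() / np.maximum(x, y).sum())

def euclidean(x: np.ndarray, y: np.ndarray) -> float:
    return float(np.linalg.norm(x - y))

a, b = np.random.rand(64), np.random.rand(64)
print(sorgel(a, b), euclidean(a, b))
```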

  • COMPARISON OF COLOR AND TEXTURE FOR IRIS RECOGNITION

    This paper examines the utility of texture and color for iris recognition systems. It improves system accuracy with a reduced feature vector size of just 1 × 3 and lowers the false acceptance rate (FAR) and false rejection rate (FRR). It also avoids the iris normalization process traditionally used in iris recognition systems. The proposed method is compared with existing methods. Experimental results indicate that the proposed method, using only color, achieves an accuracy of 99.9993, an FAR of 0.0160, and an FRR of 0.0813. The computational time achieved is 947.7 ms.

  • TOWARDS A FAST METHOD FOR IRIS IDENTIFICATION WITH FRACTAL AND CHAOS GAME THEORY

    Nowadays many techniques are used to increase the reliability of human identification systems. The iris is a part of the human body that is well suited to biometric identification and has favorable characteristics. In this paper we focus on the fact that the iris is a fractal phenomenon. During the production of new fractals, features are extracted by the Chaos Game mechanism; these features are useful and effective for iris identification. Iris identification with fractals and Chaos Game Theory involves three steps: the first step is generating a new fractal; the second step is extracting features during that generation; and the third step is identifying the iris based on the extracted features. We have named this technique Iris Identification based on Fractal and Chaos Game Theory (Iris-IFCGT). The technique exhibits fractal properties such as stability under zooming, robustness to removal of part of the iris image, and insensitivity to rotation, and it offers the speed needed to avoid a time-consuming pattern recognition process.
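
    For reference, the bare chaos-game mechanism is sketched below; the anchor points, contraction ratio and histogram features are hypothetical illustrations, not the Iris-IFCGT feature set.

```python
import numpy as np

def chaos_game(anchor_points: np.ndarray, n_iter: int = 5000,
               ratio: float = 0.5, seed: int = 0) -> np.ndarray:
    """Generic chaos-game iteration: repeatedly jump a fixed fraction of the way
    toward a randomly chosen anchor point, tracing out a fractal point cloud."""
    rng = np.random.default_rng(seed)
    point = anchor_points.mean(axis=0)
    trace = np.empty((n_iter, 2))
    for i in range(n_iter):
        target = anchor_points[rng.integers(len(anchor_points))]
        point = point + ratio * (target - point)
        trace[i] = point
    return trace

# Example: anchors could be salient iris points (a hypothetical choice); the
# visit statistics of the cloud are then usable as features.
cloud = chaos_game(np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]]))
hist, _, _ = np.histogram2d(cloud[:, 0], cloud[:, 1], bins=16)
features = hist.ravel() / hist.sum()
```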

  • SPEED-INDEPENDENT GAIT IDENTIFICATION FOR MOBILE DEVICES

    Because mobile phones are used intensively for many purposes, these devices usually contain confidential information that must not be accessed by anyone other than the owner of the device. Furthermore, new-generation phones commonly incorporate an accelerometer that can capture the acceleration signals produced by the owner's gait. Gait identification on the basis of acceleration signals is now being considered as a biometric technique that allows the device to be blocked when another person is carrying it. Although distance-based approaches such as the Euclidean distance or dynamic time warping have been applied to this identification problem, they have difficulty dealing with gaits at different speeds. For this reason, this paper presents a method to extract an average template from instances of the gait at different velocities. The method has been tested with the gait signals of 34 subjects walking at different speeds (slow, normal and fast) and has been shown to improve on the performance of the Euclidean distance and classical dynamic time warping.
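
    A minimal sketch of classical dynamic time warping and of a crude speed-robust template built by resampling and averaging; the averaging step is an illustrative stand-in, not the paper's template extraction method.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classical dynamic time warping between two 1-D acceleration sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

def average_template(instances, length: int = 128) -> np.ndarray:
    """Resample slow/normal/fast gait instances to a common length and average them."""
    resampled = [np.interp(np.linspace(0, 1, length),
                           np.linspace(0, 1, len(x)), x) for x in instances]
    return np.mean(resampled, axis=0)

walks = [np.sin(np.linspace(0, 6 * np.pi, n)) for n in (90, 120, 150)]
template = average_template(walks)
print(dtw_distance(template, walks[0]))
```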

  • IRIS RECOGNITION USING ADABOOST AND LEVENSHTEIN DISTANCES

    This paper presents an efficient IrisCode classifier, built from phase features, that uses AdaBoost to select Gabor wavelet bandwidths. The final iris classifier consists of a weighted contribution of weak classifiers. As weak classifiers we use three-split decision trees that identify a candidate based on the Levenshtein distance between the phase vectors of the respective iris images. Our experiments show that the Levenshtein distance discriminates better than the Hamming distance when comparing IrisCodes. Our process also differs from existing methods in that the wavelengths of the Gabor filters used, and their final weights in the decision function, are chosen by the robust final classifier instead of being fixed and/or limited by the programmer, thus yielding higher iris recognition rates. A pyramidal strategy that cascades filters of increasing complexity makes the system suitable for real-time operation. We have also designed a processor array to accelerate the computation of the Levenshtein distance. The processing elements are simple basic cells, interconnected by relatively short paths, which makes the design suitable for a VLSI implementation.
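
    The Levenshtein distance at the core of the weak classifiers is the standard edit-distance recurrence; a compact sketch on short code strings (not full IrisCodes) is shown below.

```python
import numpy as np

def levenshtein(a: str, b: str) -> int:
    """Edit distance between two code strings; unlike the Hamming distance it
    tolerates local insertions and deletions (e.g. from small segmentation shifts)."""
    n, m = len(a), len(b)
    d = np.zeros((n + 1, m + 1), dtype=int)
    d[:, 0] = np.arange(n + 1)
    d[0, :] = np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i, j] = min(d[i - 1, j] + 1,                          # deletion
                          d[i, j - 1] + 1,                          # insertion
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1])) # substitution
    return int(d[n, m])

print(levenshtein("01101001", "01011001"))
```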

  • IRIS RECOGNITION USING COMBINED STATISTICAL AND CO-OCCURRENCE MULTI-RESOLUTIONAL FEATURES

    Iris recognition is one of the most reliable personal identification methods. This paper presents a novel algorithm for iris recognition encompassing iris segmentation and the fusion of statistical and co-occurrence features extracted from curvelet- and ridgelet-transformed images. In this work, the pupil and iris boundaries are detected using the equation of a circle determined from three points on its circumference. Using Canny edge detection, the iris radius value is chosen empirically based on rigorous experimentation. Eyelash removal is done using a horizontal 1-D rank filter. Iris normalization is done by mapping the detected iris region from the polar domain to the rectangular domain, and multi-resolution transforms such as the curvelet and ridgelet transforms are applied for feature extraction. Classification is done using the Manhattan distance (Md) and a multiclass classifier with a logistic function, and the two results are compared. The benchmark database CASIA-IRIS-V3 (Interval) is used for identification and recognition. It is observed that the ridgelet transform increases the iris recognition rate.
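
    The circle-from-three-points step has a closed-form solution via the perpendicular-bisector equations; a small sketch with hypothetical edge points follows.

```python
import numpy as np

def circle_from_three_points(p1, p2, p3):
    """Center and radius of the circle through three boundary points, obtained by
    solving the two perpendicular-bisector equations as a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = np.array([[x2 - x1, y2 - y1],
                  [x3 - x1, y3 - y1]], dtype=float)
    b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                        x3**2 - x1**2 + y3**2 - y1**2])
    center = np.linalg.solve(a, b)
    radius = float(np.hypot(*(center - np.array(p1, dtype=float))))
    return center, radius

# e.g. three points picked on a detected pupil edge (hypothetical values)
center, r = circle_from_three_points((0, 1), (1, 0), (0, -1))
print(center, r)   # approximately (0, 0) and 1
```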

  • A SEQUENCE ALIGNMENT APPROACH APPLIED TO A MOBILE AUTHENTICATION TECHNIQUE BASED ON GESTURES

    In this paper, a biometric technique based on gesture recognition is proposed to improve the security of operations requiring authentication on mobile phones. Users are authenticated by performing a gesture of their own invention while holding a mobile phone with an embedded accelerometer in their hand. An analysis method based on sequence alignment is proposed and evaluated in different experiments. First, a test of the distinctiveness of gestures obtained an equal error rate (EER) of 4.98% with a database of 30 users and four repetitions. With the same database, a second experiment representing the uniqueness of access attempts resulted in an EER of 1.92%. Finally, a third experiment evaluating the robustness of the technique examined a database of 40 users with eight repetitions and real falsification attempts, performed by three impostors who studied recordings of the original gestures being carried out, resulting in an EER of 2.5%.
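
    A minimal sketch of global sequence alignment (Needleman–Wunsch) on quantized accelerometer symbols; the scoring parameters and the quantization scheme are assumptions, not the paper's.

```python
import numpy as np

def global_alignment_score(a, b, match=1.0, mismatch=-1.0, gap=-1.0) -> float:
    """Needleman-Wunsch global alignment score between two symbol sequences."""
    n, m = len(a), len(b)
    s = np.zeros((n + 1, m + 1))
    s[:, 0] = gap * np.arange(n + 1)
    s[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = s[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            s[i, j] = max(diag, s[i - 1, j] + gap, s[i, j - 1] + gap)
    return float(s[n, m])

def quantize(signal: np.ndarray, levels: int = 8) -> np.ndarray:
    """Map each acceleration sample to one of `levels` symbols (hypothetical scheme)."""
    edges = np.linspace(signal.min(), signal.max(), levels + 1)[1:-1]
    return np.digitize(signal, edges)

g1 = quantize(np.sin(np.linspace(0, 4 * np.pi, 60)))
g2 = quantize(np.sin(np.linspace(0, 4 * np.pi, 70)))
print(global_alignment_score(g1, g2))
```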

  • RIDGE FREQUENCY ESTIMATION FOR LOW-QUALITY FINGERPRINT IMAGES ENHANCEMENT USING DELAUNAY TRIANGULATION

    In this paper, we propose a hybrid computational-geometry/gray-scale algorithm that greatly enhances fingerprint images. The algorithm extracts the local minima points positioned on the ridges of a fingerprint and then generates a Delaunay triangulation from these points of interest. This triangulation, together with the local orientations, gives an accurate distance- and orientation-based ridge frequency. Finally, a tuned anisotropic filter is applied locally and the enhanced output fingerprint image is obtained. When the algorithm is applied to fingerprint images from the FVC2004 DB2 database that were rejected by the VeriFinger application, these images pass, and experimental results show a low rate of false and missed minutiae with an almost uniform distribution over the database. Moreover, the proposed algorithm enables the extraction of features from all low-quality fingerprint images, and the equal error rate of verification decreases from 6.50% to 5% using non-damaged low-quality images in the database.
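
    A simplified, global version of the Delaunay-based ridge-frequency idea is sketched below (the paper estimates the frequency locally and combines it with orientation information).

```python
import numpy as np
from scipy.spatial import Delaunay

def ridge_frequency_estimate(minima_points: np.ndarray) -> float:
    """Estimate ridge frequency as the inverse of the median Delaunay edge length
    between ridge local-minima points (cycles per pixel)."""
    tri = Delaunay(minima_points)
    edges = set()
    for simplex in tri.simplices:
        for k in range(3):
            edges.add(tuple(sorted((simplex[k], simplex[(k + 1) % 3]))))
    lengths = [np.linalg.norm(minima_points[i] - minima_points[j]) for i, j in edges]
    return 1.0 / float(np.median(lengths))

# Toy usage with random points standing in for detected ridge minima.
freq = ridge_frequency_estimate(np.random.rand(200, 2) * 300)
print(freq)
```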

  • EAR BIOMETRICS AND SPARSE REPRESENTATION BASED ON SMOOTHED l0 NORM

    Ear biometrics has attracted the attention of researchers in computer vision and machine learning because of its use in many applications. In this paper, we present a fully automated system for recognition from ear images based upon sparse representation. In sparse representation, features extracted from the training data are used to build a dictionary, and classification is achieved by representing the extracted features of the test data as a linear combination of entries in the dictionary. This problem has many solutions, and the goal is to find the sparsest one. We use a relatively new algorithm, the smoothed l0 norm, to find the sparsest solution, and Gabor wavelet features are used to build the dictionary. Furthermore, we extend the proposed approach to gender classification from ear images. Several studies have addressed this issue using facial images; we introduce a novel approach based on majority voting for gender classification from the ear. Experiments conducted on the University of Notre Dame (UND) Collection J data set, which contains large appearance, pose, and lighting variations, resulted in a gender classification rate of 89.49%. The proposed method is also evaluated on the WVU data set, and classification rates for different view angles are presented. The results show improved and highly robust gender classification compared with existing methods.
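
    A minimal sketch of the smoothed-l0 (SL0) sparse recovery algorithm on a toy dictionary; the parameter values are placeholders.

```python
import numpy as np

def sl0(A: np.ndarray, y: np.ndarray, sigma_min: float = 1e-3,
        sigma_decrease: float = 0.5, mu: float = 2.0, inner_iters: int = 3) -> np.ndarray:
    """Smoothed-l0 sparse recovery: maximize a Gaussian approximation of the l0
    'norm' while staying on the solution set of A x = y."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                       # minimum-l2 initial solution
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            delta = x * np.exp(-x**2 / (2 * sigma**2))
            x = x - mu * delta                       # gradient step toward sparsity
            x = x - A_pinv @ (A @ x - y)             # project back onto A x = y
        sigma *= sigma_decrease                      # gradually sharpen the approximation
    return x

# Toy usage: recover a sparse code from an overcomplete "dictionary".
rng = np.random.default_rng(4)
D = rng.standard_normal((64, 256))
code = np.zeros(256)
code[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
recovered = sl0(D, D @ code)
```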

  • Geometric and Textural Cues for Automatic Kinship Verification

    Automatic kinship verification aims at recognizing the degree of kinship of two individuals from their facial images, with possible applications in image retrieval and annotation, forensics and historical studies. It is a recent and challenging problem, which must deal with different degrees of kinship and with variations in age and gender. Our work explores the computer identification of parent–child pairs using a combination of (i) features of different natures, based on geometric and textural data, (ii) feature selection and (iii) state-of-the-art classifiers. Experiments show that the proposed approach provides a valuable solution to the kinship verification problem, as suggested by its comparison with different methods on the same data under the same experimental protocols. We further show the good generalization capabilities of our method in several cross-database experiments.
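
    A generic sketch of the feature-selection-plus-classifier stage on placeholder pair descriptors; the specific geometric/textural features, selector and classifier used in the paper are not reproduced here.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical pair descriptors: geometric cues (e.g. landmark-distance
# differences) concatenated with textural cues (e.g. texture-histogram distances).
rng = np.random.default_rng(5)
X = rng.standard_normal((400, 120))          # one row per candidate face pair
y = rng.integers(0, 2, 400)                  # 1 = kin, 0 = non-kin (placeholder labels)

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=40),   # feature selection step
                    SVC(kernel="rbf"))              # stand-in classifier
clf.fit(X[:300], y[:300])
print("verification accuracy:", clf.score(X[300:], y[300:]))
```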

  • Fuzzy Brain Storm Optimization and Adaptive Thresholding for Multimodal Vein-Based Recognition System

    Conventional security based on passwords can easily be compromised by an unauthorized person. Biometric cues such as fingerprints, voice, palm print, and face are therefore preferable for recognition; to preserve liveness, another important biometric trait is the vein pattern, which is formed by the subcutaneous blood vessels and contains all the properties needed for recognition. Accordingly, in this paper we propose a multibiometric system using palm vein, hand vein, and finger vein. A holoentropy-based thresholding mechanism is newly developed for extracting the vein patterns, and a Fuzzy Brain Storm Optimization (FBSO) method is proposed for score-level fusion to achieve better recognition performance. These two contributions are incorporated into the biometric recognition system, and the performance of the proposed method is analyzed using benchmark palm vein, finger vein, and hand vein image datasets. The quantitative results are analyzed in terms of FAR, FRR, and accuracy and show that the proposed FBSO approach attains a higher accuracy (81.3%) than the existing methods.
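
    For orientation, a plain min–max normalization with weighted-sum score-level fusion is sketched below; the weights are placeholders and the FBSO optimizer itself is not reproduced.

```python
import numpy as np

def fuse_scores(palm_vein, hand_vein, finger_vein, weights=(0.4, 0.3, 0.3)):
    """Generic weighted-sum score-level fusion after min-max normalization.
    The weights here are placeholders; the paper tunes the fusion with FBSO."""
    def minmax(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    scores = [minmax(palm_vein), minmax(hand_vein), minmax(finger_vein)]
    return sum(w * s for w, s in zip(weights, scores))

fused = fuse_scores(np.random.rand(10), np.random.rand(10), np.random.rand(10))
```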

  • Ear Recognition Based on Fusion of Ear and Tragus Under Different Challenges

    This paper proposes a 2D ear recognition approach based on the fusion of ear and tragus using a score-level fusion strategy. An attempt is made to overcome the effects of partial occlusion, pose variation and weak illumination, since the accuracy of ear recognition may be reduced when one or more of these challenges is present. In this study, the effect of each of these challenges is estimated separately, and many ear samples affected by two different challenges concurrently are also considered. The tragus is used as a biometric trait because it is often free from occlusion, and it provides discriminative features even under different poses and illuminations. The features are extracted using local binary patterns, and the evaluation is performed on three datasets of the USTB database. It is observed that the fusion of ear and tragus improves recognition performance compared with unimodal systems. Experimental results show that the proposed method enhances recognition rates by fusing the non-occluded parts with the tragus in cases of partial occlusion, pose variation and weak illumination. The proposed method performs better than feature-level fusion methods and most state-of-the-art ear recognition systems.
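
    A minimal sketch of local binary pattern histogram features with weighted score-level fusion of ear and tragus; the weights, LBP settings and similarity measure are illustrative assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(region: np.ndarray, p: int = 8, r: int = 1) -> np.ndarray:
    """Uniform LBP histogram of one region (ear or tragus crop)."""
    codes = local_binary_pattern(region, P=p, R=r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

def fused_match_score(probe_ear, probe_tragus, gallery_ear, gallery_tragus,
                      w_ear: float = 0.6) -> float:
    """Score-level fusion: weighted sum of per-trait histogram similarities."""
    def similarity(a, b):
        return 1.0 - 0.5 * np.abs(lbp_histogram(a) - lbp_histogram(b)).sum()
    return w_ear * similarity(probe_ear, gallery_ear) \
        + (1 - w_ear) * similarity(probe_tragus, gallery_tragus)

# Toy usage with random uint8 crops standing in for segmented regions.
img = lambda: (np.random.rand(80, 60) * 255).astype(np.uint8)
print(fused_match_score(img(), img(), img(), img()))
```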

  • Finger-Vein Quality Assessment Based on Deep Features From Grayscale and Binary Images

    Finger-vein verification is a highly secure form of biometric authentication that has been widely investigated in recent years. One of its challenges, however, is the possible degradation of image quality, which results in spurious and missing vein patterns and increases the verification error. Despite recent advances in finger-vein quality assessment, existing solutions are limited because they depend on human expertise and domain knowledge to extract handcrafted features for assessing quality. We recently proposed the first deep neural network (DNN) framework for assessing finger-vein quality that does not require manual labeling of high- and low-quality images, as state-of-the-art methods do, but infers such annotations automatically from an objective indicator, the biometric verification decision. That framework significantly outperformed existing methods, whether the input image is grayscale or binary. Motivated by this performance, in this work we propose representation learning of finger-vein image quality in which a DNN takes the grayscale and binary versions of the input image jointly to predict vein quality. Our model learns a joint representation from grayscale and binary images for quality assessment. Experimental results obtained on a large public dataset demonstrate that the proposed method accurately identifies high- and low-quality images and outperforms other techniques, including our previous DNN models based on either grayscale or binary input, in terms of equal error rate (EER) minimization.
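
    A hypothetical two-branch network sketch showing how grayscale and binary inputs can be encoded jointly; the layer sizes and head are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TwoBranchQualityNet(nn.Module):
    """Toy two-branch CNN: one branch encodes the grayscale finger-vein image,
    the other its binary vein map; the concatenated embedding predicts
    low/high quality. Training labels would come from the biometric
    verification decision rather than manual annotation."""
    def __init__(self):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.gray_branch = branch()
        self.binary_branch = branch()
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, gray, binary):
        joint = torch.cat([self.gray_branch(gray), self.binary_branch(binary)], dim=1)
        return self.head(joint)          # logits for low/high quality

model = TwoBranchQualityNet()
logits = model(torch.rand(4, 1, 64, 128), torch.rand(4, 1, 64, 128))
```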