The field of biometrics utilizes computer models of the physical and behavioral characteristics of human beings with a view to reliable personal identification. The human characteristics of interest include visual images, speech, and indeed anything which might help to uniquely identify the individual.
The other side of the biometrics coin is biometric synthesis — rendering biometric phenomena from their corresponding computer models. For example, we could generate a synthetic face from its corresponding computer model. Such a model could include muscular dynamics to model the full gamut of human emotions conveyed by facial expressions.
This book is a collection of carefully selected papers presenting the fundamental theory and practice of various aspects of biometric data processing in the context of pattern recognition. The traditional task of biometric technologies — human identification by analysis of biometric data — is extended to include the new discipline of biometric synthesis.
Sample Chapter(s)
Chapter 1: Introduction to Synthesis in Biometrics (1,072 KB)
https://doi.org/10.1142/9789812770677_fmatter
https://doi.org/10.1142/9789812770677_others01
https://doi.org/10.1142/9789812770677_0001
The primary application focus of biometric technology is the verification and identification of humans using their biological (anatomical, physiological, and behavioral) characteristics. Recent advances in the processing of individual biometric modalities (sources of identification data such as facial features, iris patterns, voice, gait, ear topology, etc.) encompass all aspects of system integration, privacy and security, reliability and countermeasures to attack, as well as accompanying problems such as testing and evaluation, operating standards, and ethical issues.
https://doi.org/10.1142/9789812770677_0002
Despite the wide appreciation of biometric principles in security applications, biometric solutions are far from being affordable and available "on demand" anytime and anywhere. Many security-oriented biometric solutions require dedicated devices for data acquisition, which delays their deployment and limits their scope. This chapter focuses primarily on the analysis of data taken from a person's signature for authentication or identification. It also introduces a system developed to identify and authenticate individuals based on their signatures and/or handwriting. The issues of pervasive services are addressed (i) by integrating unique data acquisition and processing techniques capable of communicating with a variety of off-the-shelf devices such as pressure-sensitive pens, mice, and touch pads, (ii) by using sequence processing techniques (such as matching, alignment, and filtering) for signature analysis and comparison, (iii) by using self-learning database solutions to achieve accurate results, and (iv) by utilizing signature synthesis techniques for benchmarking and testing.
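Point (ii) above refers to sequence matching and alignment; a common, generic way to align two variable-length signature traces is dynamic time warping (DTW). The sketch below illustrates that idea only; it is not the chapter's own algorithm, and the per-sample feature layout (x, y, pressure) is an assumption.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two signature sequences.

    Each sequence is an (n, d) array of per-sample features,
    e.g. pen x/y position and pressure.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m] / (n + m)   # length-normalised score

# Toy usage: two noisy copies of the same pen trajectory should score low.
t = np.linspace(0, 1, 100)
ref = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), t], axis=1)
probe = ref + 0.01 * np.random.randn(*ref.shape)
print(dtw_distance(ref, probe))
```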
https://doi.org/10.1142/9789812770677_0003
Multiresolution has been extensively used in many areas of computer science, including biometrics. We introduce local multiresolution filters for quadratic and cubic B-splines that satisfy the first and second levels of smoothness, respectively. For constructing these filters, we use a reverse subdivision method. We also show how to use and extend these filters for tensor-product surfaces and 2D/3D images. For some types of data, such as curves and surfaces, boundary interpolation is strongly desired; to maintain this condition, we introduce extraordinary filters for the boundaries. For images and other cases in which interpolating the boundaries is not required, or is even undesirable, a particular arrangement is needed to apply the regular filters. As a solution, we propose a technique based on symmetric extension. Practical issues for efficient implementation of multiresolution are discussed. Finally, we discuss some example applications in biometrics, including iris synthesis and volumetric data rendering.
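The symmetric extension mentioned above can be illustrated separately from the B-spline filters themselves: the data are mirrored about their endpoints so that the regular (interior) filters can be applied near the boundary. A minimal one-dimensional sketch, assuming whole-sample symmetry (the chapter's filters and extension rules may differ):

```python
import numpy as np

def symmetric_extend(signal: np.ndarray, pad: int) -> np.ndarray:
    """Mirror the signal about its endpoints so regular (non-boundary)
    filters can be applied near the edges."""
    return np.concatenate([signal[pad:0:-1], signal, signal[-2:-pad - 2:-1]])

x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
print(symmetric_extend(x, 2))   # [ 4.  2.  1.  2.  4.  8. 16.  8.  4.]
```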
https://doi.org/10.1142/9789812770677_0004
The rapid development of biometric technologies is one of the modern world's notable phenomena, driven by society's strong need for increased security and by new technological developments coming from industry. This chapter examines a unique aspect of the problem: the development of new approaches and methodologies for biometric identification, verification, and synthesis utilizing the notion of proximity and the topological properties of biometric identifiers. The use of recently developed advanced techniques in computational geometry and image processing is examined with the purpose of finding the common denominator between the different biometric problems and identifying the most promising methodologies. The material of the chapter is enhanced with recently obtained experimental results for fingerprint identification, facial expression modeling, iris synthesis, and hand boundary tracing.
https://doi.org/10.1142/9789812770677_others02
https://doi.org/10.1142/9789812770677_0005
The biometric verification task is one of determining whether an input consisting of measurements from an unknown individual matches the corresponding measurements of a known individual. This chapter describes a statistical learning methodology for determining whether a pair of biometric samples belongs to the same individual. The methodology involves four parts. First, discriminating elements, or features, are extracted from each sample. Second, similarities between the corresponding elements of the two samples are computed. Third, using conditional probability estimates of each difference, the log-likelihood ratio (LLR) is computed for the hypotheses that the samples correspond to the same individual and to different individuals; the conditional probability estimates are determined in a learning phase that involves estimating the parameters of various distributions such as the Gaussian, the gamma, or a gamma/Gaussian mixture. Fourth, the LLR is analyzed with the Tippett plot to provide a measure of the strength of evidence. The methods are illustrated in two biometric modalities: friction ridge prints, a physical biometric, and handwriting, a behavioural biometric. The statistical methodology has two advantages over conventional methods such as thresholds based on receiver operating characteristics: improved accuracy and a natural provision for combining with other biometric modalities.
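To make the third step concrete, here is a minimal sketch of an LLR for a single scalar difference score under two Gaussian models, one for same-individual pairs and one for different-individual pairs. The Gaussian choice and the parameter values are illustrative assumptions; the chapter also considers gamma and mixture models, and in practice the parameters are learned from data.

```python
from scipy.stats import norm

# Illustrative parameters; in the chapter these would be learned from
# genuine (same-individual) and impostor (different-individual) pairs.
same_mu, same_sigma = 0.2, 0.1     # distances between samples of one person
diff_mu, diff_sigma = 0.8, 0.2     # distances between different people

def log_likelihood_ratio(distance: float) -> float:
    """LLR for the hypothesis 'same individual' vs 'different individuals'."""
    return (norm.logpdf(distance, same_mu, same_sigma)
            - norm.logpdf(distance, diff_mu, diff_sigma))

for d in (0.15, 0.5, 0.9):
    print(d, round(log_likelihood_ratio(d), 2))
```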
https://doi.org/10.1142/9789812770677_0006
It has been a standard assumption that handwritten signatures possess significant within-class variation, and that feature extraction and pattern recognition should be used to perform automatic recognition and verification. Described here is a simple way to reliably compare signatures in a quite direct fashion. Reasonable speeds and very high success rates have been achieved. Comparisons are made to other methods, and a four-algorithm voting scheme is used to achieve over 99% success.
https://doi.org/10.1142/9789812770677_0007
The overall objective in defining a feature space is to reduce the dimensionality of the original pattern space whilst maintaining discriminatory power for classification. To meet this objective in the context of ear biometrics, a new force field transformation is presented which treats the image as an array of mutually attracting particles that act as the source of a Gaussian force field. Underlying the force field there is a scalar potential energy field, which in the case of an ear takes the form of a smooth surface that resembles a small mountain with a number of peaks joined by ridges. The peaks correspond to potential energy wells and, to extend the analogy, the ridges correspond to potential energy channels. Since the transform also turns out to be invertible, and since the surface is otherwise smooth, information theory suggests that much of the information is transferred to these features, thus confirming their efficacy.
We describe how field line feature extraction, using an algorithm similar to gradient descent, exploits the directional properties of the force field to automatically locate these channels and wells, which then form the basis of the characteristic ear features. We also show how an analysis of this algorithm leads to a separate closed analytical description based on the divergence of force direction.
The technique is validated by performing recognition on a database of ears selected from the XM2VTS face database, and by comparing the results with the more established technique of Principal Components Analysis (PCA). This confirms not only that ears do indeed appear to have potential as a biometric, but also that the new approach is well suited to their description.
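For intuition, the potential energy surface underlying a force field transform of this kind can be sketched by brute force: every pixel contributes a potential that is proportional to its intensity and falls off with distance, by analogy with a gravitational field. This is only an O(N^2) illustration of the idea, not the chapter's implementation; an efficient version would use convolution, and the toy image below is a placeholder.

```python
import numpy as np

def potential_field(image: np.ndarray) -> np.ndarray:
    """Potential energy surface of a force field transform (brute-force sketch).

    Every pixel is treated as a particle whose contribution to the potential
    at another pixel falls off as 1/distance, weighted by its intensity.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    vals = image.ravel().astype(float)
    pot = np.zeros(h * w)
    for k, p in enumerate(coords):
        r = np.linalg.norm(coords - p, axis=1)
        r[k] = np.inf                      # a pixel exerts no force on itself
        pot[k] = np.sum(vals / r)
    return pot.reshape(h, w)

# Tiny toy image; real ear images would first be cropped and downsampled.
img = np.zeros((16, 16))
img[6:10, 6:10] = 1.0
print(potential_field(img).max())
```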
https://doi.org/10.1142/9789812770677_0008
Biometrics are automated methods of recognizing a person based on a physiological or behavioural characteristic. Among the features measured are the face, fingerprints, hand geometry, handwriting, iris, retina, veins, and voice. Facial recognition technologies are becoming one of the foundations of an extensive array of highly secure identification and personal verification solutions. Meanwhile, a smart environment is one that is able to identify people, interpret their actions, and react appropriately. One of the most important building blocks of smart environments is a person identification system. Face recognition devices are ideal for such systems, since they have recently become fast, cheap, and unobtrusive, and, when combined with voice identification, are very robust against changes in the environment.
As with all biometrics, facial recognition follows the same four steps: sample capture, feature extraction and representation, template comparison, and matching. Feature representation of facial images plays a key role in a facial recognition system; in other words, the performance of facial recognition technology is very closely tied to the quality of the facial image representation. In this chapter, we develop a new method for facial feature representation using a nontensor product bivariate wavelet transform. New nontensor product bivariate wavelet filter banks with linear phase are constructed from centrally symmetric matrices. Our investigations demonstrate that these filter banks have a matrix factorization and are capable of describing the features of face images. The implementation of our algorithm consists of three parts: first, by performing a 2-level wavelet transform with the new nontensor product wavelet filter, a face image is represented by the lowest-resolution subband after decomposition. Second, the Principal Component Analysis (PCA) feature selection scheme is adopted to reduce the computational complexity of the feature representation. Finally, to test the robustness of the proposed facial feature representation, Support Vector Machines (SVM) are applied for classification. The experimental results show that our method is superior to other methods in terms of recognition accuracy and efficiency.
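The three-part pipeline (wavelet subband representation, PCA reduction, SVM classification) can be sketched with standard components. Note the caveats: the separable "db2" wavelet below is only a stand-in, since the chapter's nontensor product bivariate filter banks are not available in off-the-shelf libraries, and the face data here are random placeholders.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def lowband_features(img: np.ndarray, levels: int = 2) -> np.ndarray:
    """Represent a face image by its lowest-resolution wavelet subband."""
    coeffs = pywt.wavedec2(img, "db2", level=levels)
    return coeffs[0].ravel()          # approximation subband only

# Hypothetical data: rows of `faces` are grayscale images, `labels` identities.
rng = np.random.default_rng(0)
faces = rng.random((40, 32, 32))
labels = np.repeat(np.arange(8), 5)
X = np.array([lowband_features(f) for f in faces])

pca = PCA(n_components=20).fit(X)     # reduce feature dimensionality
clf = SVC(kernel="rbf").fit(pca.transform(X), labels)
print(clf.score(pca.transform(X), labels))
```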
https://doi.org/10.1142/9789812770677_0009
Wavelet theory has become a hot research topic in recent years because of its important characteristics, such as sub-band coding, multi-resolution analysis, and filter banks. In this chapter, we propose a novel method of feature extraction for palmprint identification based on the wavelet transform, which is very efficient for handling the textural characteristics of palmprint images at low resolution. In order to achieve high accuracy, four sets of statistical features (mean, energy, variance, and kurtosis) are extracted based on the wavelet transform. Five classifier combination strategies are then presented. The experimental comparison of the various feature sets and different fusion schemes shows that the individual feature set of energy has the best classification ability, and that the fusion schemes of the Median rule and the Majority rule have the best performance.
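A rough sketch of the statistical feature extraction and of one of the fusion rules follows. The "haar" wavelet, the decomposition depth, and the random palmprint image are stand-ins; the chapter's actual wavelet and classifier ensemble may differ.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

def wavelet_stats(img: np.ndarray, level: int = 2) -> np.ndarray:
    """Mean, energy, variance and kurtosis of each wavelet subband."""
    coeffs = pywt.wavedec2(img, "haar", level=level)
    bands = [coeffs[0]] + [b for trio in coeffs[1:] for b in trio]
    feats = []
    for b in bands:
        v = b.ravel()
        feats.append([v.mean(), np.sum(v ** 2), v.var(), kurtosis(v)])
    return np.array(feats)   # one row of statistics per subband

def majority_rule(decisions) -> int:
    """Majority-rule fusion of per-feature-set class decisions."""
    vals, counts = np.unique(decisions, return_counts=True)
    return int(vals[np.argmax(counts)])

print(wavelet_stats(np.random.rand(64, 64)).shape)
print(majority_rule([3, 3, 7, 3, 1]))
```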
https://doi.org/10.1142/9789812770677_0010
Traditional biometric technologies such as fingerprint or iris recognition systems require special hardware devices for biometric data collection. This makes them unsuitable for online computer user monitoring, which, to be effective, should be non-intrusive and carried out passively. Behavioural biometrics based on human-computer interaction devices such as the mouse and keyboard do not carry this limitation, and as such are good candidates for online computer user monitoring. In this chapter we present artificial-intelligence-based techniques that can be used to analyze and process keystroke and mouse dynamics to achieve passive user monitoring.
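As an illustration of the kind of raw measurements keystroke dynamics works with (not the chapter's own feature set or AI techniques), the sketch below derives dwell and flight times from hypothetical key press/release timestamps:

```python
import numpy as np

def keystroke_features(events):
    """Dwell and flight times from a stream of (key, press_t, release_t) tuples.

    Dwell time  = how long each key is held down.
    Flight time = gap between releasing one key and pressing the next.
    """
    dwell = [rel - press for _, press, rel in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return np.array(dwell), np.array(flight)

# Hypothetical timings (seconds) for the word "pass".
sample = [("p", 0.00, 0.09), ("a", 0.15, 0.22), ("s", 0.30, 0.41), ("s", 0.50, 0.58)]
dwell, flight = keystroke_features(sample)
print(dwell.mean(), flight.mean())
```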
https://doi.org/10.1142/9789812770677_others03
https://doi.org/10.1142/9789812770677_0011
In addition to law enforcement applications, many civil applications will require biometrics-based identification systems, and a large percentage of these are predicted to rely on fingerprints as the identifier. Even though fingerprints have been used as a biometric in many identification applications, these applications have mostly been semi-automatic, and their results often need to be validated by human experts. With the increased use of biometric identification systems in many real-time applications, the challenges for large-scale biometric identification are significant, both in terms of improving accuracy and of response time. In this chapter, we briefly review the tools, terminology, and methods used in large-scale biometric identification applications. The performance of the identification algorithms needs to be significantly improved to successfully handle millions of persons in the biometrics database while matching thousands of transactions per day.
https://doi.org/10.1142/9789812770677_0012
In this chapter, we provide a short introduction to the main concepts related to evolutionary algorithms, including basic terminology, a brief description of their main paradigms, and some of their representative applications reported in the specialized literature. In the second part of the chapter, we discuss several case studies on the use of evolutionary algorithms in both physiological and behavioural biometrics. The case studies include fingerprint compression, facial modeling, hand-based feature selection, handwritten character recognition, keystroke dynamics identity verification, and speaker verification. These case studies show the success that different evolutionary algorithms have had in a variety of biometrics-related applications, either as standalone approaches or combined with other heuristics (e.g., neural networks or Support Vector Machines). In the final part of the chapter, we provide a few guidelines regarding potential research trends for the near future. Such research trends include the use of alternative metaheuristics (e.g., particle swarm optimization, artificial immune systems, and the ant system), as well as the use of alternative approaches to model the problems (e.g., through the use of multiobjective optimization or genetic programming).
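One of the listed case studies, hand-based feature selection, can be illustrated with a generic genetic algorithm over binary feature masks. Everything below (the toy fitness function, the synthetic hand-geometry data, and the GA parameters) is a hypothetical sketch rather than the approach evaluated in the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(mask, X, y):
    """Toy fitness: class separation on selected features, minus a size penalty."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask.astype(bool)]
    centroids = np.array([Xs[y == c].mean(axis=0) for c in np.unique(y)])
    return np.linalg.norm(centroids[0] - centroids[1]) - 0.05 * mask.sum()

def ga_feature_selection(X, y, pop=20, gens=30, p_mut=0.1):
    n = X.shape[1]
    population = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        scores = np.array([fitness(ind, X, y) for ind in population])
        parents = population[np.argsort(scores)[-pop // 2:]]      # truncation selection
        children = []
        while len(children) < pop:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([a[:cut], b[cut:]])             # one-point crossover
            flip = rng.random(n) < p_mut                           # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        population = np.array(children)
    best = population[np.argmax([fitness(ind, X, y) for ind in population])]
    return best

# Hypothetical hand-geometry features for two people; the first three features
# are made informative on purpose so the GA has something to find.
X = rng.random((60, 12))
y = np.repeat([0, 1], 30)
X[y == 1, :3] += 1.0
print(ga_feature_selection(X, y))
```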
https://doi.org/10.1142/9789812770677_0013
Some concerns of measurement for biometric analysis and synthesis are investigated. This research reexamines the basic definition of the "measurement" of distance between two objects or image patterns, which is essential for comparing the "similarity" of patterns. According to a recent International Workshop on Biometric Technologies: Modeling and Simulation, held at the University of Calgary, Canada, in June 2004, biometrics refers to the study of the analysis, synthesis, modeling, and simulation of human behavior by computers, mainly including the recognition of hand-printed words, machine-printed characters, handwriting, fingerprints, signatures, facial expressions, speech, voice, emotion, iris, etc.
The key idea is the "measurement" that defines the similarity between different input data that can be represented as image data. This chapter deals with the fundamental phenomenon of "measurement" in these studies and analyses. Preliminary findings and observations show that the concepts of "segmentation" and "disambiguation" are extremely important and have long been ignored. Even though computer and information professionals and researchers have spent much effort, energy, and time developing methods that may reach accuracy rates as high as 99.9999% for character and symbol recognition, a poorly or ill-designed poster board or input pattern can easily destroy this effectiveness and lower the overall accuracy rate to less than 50%; the more data such a system handles, the worse the results, with its overall accuracy decreasing proportionally. Take road safety as an example: if street direction signs are poorly designed, then even the most intelligent robot driver, with perfect vision and a 100% accurate rate of symbol recognition, still has at least a 50% error rate. That is, it can achieve at most 50% accuracy in determining which direction to follow, and it is not just a matter of being slow or losing time.
In more serious and urgent life-threatening situations, such as fires, accidents, terrorist attacks, or natural disasters, it is a matter of life and death, so the impact is enormous and widespread. The ideas and concepts of "ambiguity," "disambiguation," "measurement," and "learning," their impact on biometric image pattern analysis and recognition, and their applications are illustrated through several examples.
https://doi.org/10.1142/9789812770677_0014
This chapter outlines the role of Human Biometric Sensor Interaction (HBSI), which examines topics such as ergonomics, the environment, biometric sample quality, and device selection, and how these factors influence the successful implementation of a biometric system. Research conducted at Purdue University's Biometric Standards, Performance, and Assurance Laboratory has shown that the interaction of these factors has a significant impact on the performance of a biometric system. This chapter examines that impact for a number of modalities, including two-dimensional and three-dimensional face recognition, fingerprint recognition, and dynamic signature verification. Applying ergonomic principles to device design, together with an understanding of the environment in which the biometric sensor will be placed, can positively affect the quality of a biometric sample and thus improve performance.
https://doi.org/10.1142/9789812770677_0015
This chapter describes the fundamental concepts of biometric-based training system design and the training methodology for users of biometric-based access control systems. Such systems utilize the information provided by various measurements of an individual's biometrics. The goal of training is to develop the user's decision-making skills based on two types of information about the individual: biometric information collected during pre-screening or surveillance, and information collected during the authorization check itself. The training system directly benefits users of security systems covering a broad spectrum of social activities, including airport and seaport surveillance and control, immigration services, border control, important public events, hospitals, banking, etc.
https://doi.org/10.1142/9789812770677_bmatter