
  • Article (No Access)

    FACE AUTHENTICATION USING RECOGNITION-BY-PARTS, BOOSTING AND TRANSDUCTION

    The paper describes an integrated recognition-by-parts architecture for reliable and robust face recognition. Reliability and robustness refer, respectively, to the ability to deploy full-fledged and operational biometric engines, and to the handling of adverse image conditions that include, among others, uncooperative subjects, occlusion, and temporal variability. The architecture proposed is model-free and non-parametric. The conceptual framework draws support from discriminative methods using likelihood ratios. At the conceptual level it links forensics and biometrics, while at the implementation level it links the Bayesian framework and statistical learning theory (SLT). Layered categorization starts with face detection using implicit rather than explicit segmentation. It proceeds with face authentication, which involves feature selection of local patch instances including dimensionality reduction, exemplar-based clustering of patches into parts, and data fusion for matching using boosting driven by parts that play the role of weak learners. Face authentication shares the same implementation with face detection. The implementation, driven by transduction, employs proximity and typicality (ranking) realized using strangeness and p-values, respectively. The feasibility and reliability of the proposed architecture are illustrated using FRGC data. The paper concludes with suggestions for augmenting and enhancing the scope and utility of the proposed architecture.
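    The strangeness and p-value machinery mentioned in this abstract comes from transductive (conformal) inference. A minimal sketch of one common formulation, assuming Euclidean features and a k-nearest-neighbor strangeness measure; the function names and parameters here are illustrative, not the paper's implementation:

    ```python
    import numpy as np

    def strangeness(x, same_class, other_class, k=3):
        """Strangeness of sample x: summed distances to its k nearest
        same-class neighbors divided by summed distances to its k nearest
        other-class neighbors. Larger values mean x fits its class less well."""
        d_same = np.sort(np.linalg.norm(same_class - x, axis=1))[:k]
        d_other = np.sort(np.linalg.norm(other_class - x, axis=1))[:k]
        return d_same.sum() / d_other.sum()

    def p_value(test_strangeness, training_strangeness):
        """Typicality (p-value): the fraction of strangeness values at least
        as large as the test sample's, among training values plus the test
        value. Low p-values flag atypical (e.g. impostor) samples."""
        alphas = np.append(training_strangeness, test_strangeness)
        return np.mean(alphas >= test_strangeness)
    ```

    In a recognition-by-parts setting, each part would contribute its own strangeness score, with boosting weighting the parts as weak learners.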

  • Article (No Access)

    From the Universe to Subsystems: Why Quantum Mechanics Appears More Stochastic than Classical Mechanics

    By means of the examples of classical and Bohmian quantum mechanics, we illustrate the well-known ideas of Boltzmann as to how one obtains, from laws defined for the universe as a whole, the dynamical relations describing the evolution of subsystems. We explain how probabilities enter into this process, what quantum and classical probabilities have in common, and where exactly their difference lies.

  • Article (No Access)

    Typicality in Pure Wave Mechanics

    Hugh Everett III's pure wave mechanics is a deterministic physical theory with no probabilities. He nevertheless sought to show how his theory might be understood as making the same statistical predictions as the standard collapse formulation of quantum mechanics. We will consider Everett's argument for pure wave mechanics, how it depends on the notion of branch typicality, and the relationship between the predictions of pure wave mechanics and the standard quantum probabilities.

  • Article (No Access)

    Phase randomization and typicality in the interference of two condensates

    Interference is observed when two independent Bose–Einstein condensates expand and overlap. This phenomenon is typical, in the sense that the overwhelming majority of wave functions of the condensates, uniformly sampled out of a suitable portion of the total Hilbert space, display interference with maximal visibility. We focus here on the phases of the condensates and their (pseudo) randomization, which naturally emerges when two-body scattering processes are considered. The relationship to typicality is discussed and analyzed.

  • Article (No Access)

    Randomness: Quantum versus classical

    The recent tremendous development of quantum information theory has led to a number of quantum technological projects, e.g. quantum random generators. This development has stimulated a new wave of interest in quantum foundations. One of the most intriguing problems of quantum foundations is the elaboration of a consistent and commonly accepted interpretation of a quantum state. A closely related problem is the clarification of the notion of quantum randomness and its interrelation with classical randomness. In this short review, we discuss the basics of the classical theory of randomness (which is itself very complex and characterized by a diversity of approaches) and compare it with irreducible quantum randomness. We also briefly discuss "digital philosophy", its role in physics (classical and quantum), and its coupling to the information interpretation of quantum mechanics (QM).

  • Article (No Access)

    Musical Similarity and Commonness Estimation Based on Probabilistic Generative Models of Musical Elements

    This paper proposes a novel concept we call musical commonness: the similarity of a song to a set of songs, in other words, its typicality. Commonness can be used to retrieve representative songs from a set of songs (e.g. songs released in the 80s or 90s). Previous research on musical similarity has compared two songs but has not evaluated the similarity of a song to a set of songs. The methods presented here for estimating the similarity and commonness of polyphonic musical audio signals are based on a unified framework of probabilistic generative modeling of four musical elements (vocal timbre, musical timbre, rhythm, and chord progression). To estimate commonness, we use a generative model trained from a song set instead of estimating the musical similarities of all possible song pairs using a model trained from each song. In the experimental evaluation, we used two song sets: 3278 Japanese popular music songs and 415 English songs. Twenty estimated song-pair similarities for each element and each song set were compared with ratings by a musician. The comparison with the expert ratings suggests that the proposed methods estimate musical similarity appropriately. Estimated musical commonness is evaluated on the basis of the Pearson product-moment correlation coefficient between the estimated commonness of each song and the number of songs having high similarity with that song. The results of the commonness evaluation show that a song with higher commonness is indeed similar to the songs of the song set.
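    The core idea of commonness, scoring a song against one model trained on the whole set rather than comparing it pairwise with every member, can be sketched with a deliberately simple stand-in model. A diagonal Gaussian over feature vectors replaces the paper's per-element generative models; all names and parameters below are illustrative assumptions, not the authors' implementation:

    ```python
    import numpy as np

    def fit_set_model(song_features):
        """Fit a diagonal Gaussian to the feature vectors of a song set
        (a stand-in for the paper's per-element generative models)."""
        mu = song_features.mean(axis=0)
        var = song_features.var(axis=0) + 1e-6  # guard against zero variance
        return mu, var

    def commonness(x, model):
        """Log-likelihood of one song's feature vector under the set model:
        higher values mean the song is more typical of the set."""
        mu, var = model
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    ```

    Training one model per set keeps the cost linear in the number of songs, whereas pairwise similarity over all song pairs is quadratic, which is the efficiency argument the abstract alludes to.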