
  Bestsellers

  • Article (Open Access)

    Decoding Continuous Tracking Eye Movements from Cortical Spiking Activity

    Eye movements are the primary way primates interact with the world. Understanding how the brain controls the eyes is therefore crucial for improving human health and designing visual rehabilitation devices. However, brain activity is challenging to decipher. Here, we leveraged machine learning algorithms to reconstruct tracking eye movements from high-resolution neuronal recordings. We found that continuous eye position could be decoded with high accuracy using spiking data from only a few dozen cortical neurons. We tested eight decoders and found that neural network models yielded the highest decoding accuracy. Simpler models performed well above chance with a substantial reduction in training time. We measured the impact of data quantity (e.g. number of neurons) and data format (e.g. bin width) on training time, inference time, and generalizability. Training models with more input data improved performance, as expected, but the format of the behavioral output was critical for emphasizing or omitting specific oculomotor events. Our results provide the first demonstration, to our knowledge, of continuously decoded eye movements across a large field of view. Our comprehensive investigation of predictive power and computational efficiency for common decoder architectures provides a much-needed foundation for future work on real-time gaze-tracking devices.
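
    As a purely illustrative sketch of this kind of decoding pipeline (synthetic data and assumed parameters throughout, not the authors' actual models), a linear least-squares decoder can map binned spike counts from a few dozen simulated neurons to continuous 2-D eye position:

```python
import numpy as np

# Illustrative sketch only: linear decoding of 2-D eye position from
# binned spike counts. Neuron count, bin count, and the Poisson tuning
# model are assumptions for demonstration, not taken from the paper.
rng = np.random.default_rng(0)

def simulate_session(n_neurons=30, n_bins=2000):
    """Random-walk gaze trace plus Poisson spike counts tuned to it."""
    eye = np.cumsum(rng.normal(0.0, 0.1, size=(n_bins, 2)), axis=0)
    tuning = rng.normal(0.0, 1.0, size=(2, n_neurons))
    rates = np.exp((eye @ tuning) / (np.abs(eye).max() + 1e-9))
    return rng.poisson(rates), eye

def fit_decoder(spikes, eye):
    """Least-squares weights from counts (plus a bias column) to position."""
    X = np.column_stack([spikes, np.ones(len(spikes))])
    W, *_ = np.linalg.lstsq(X, eye, rcond=None)
    return W

def decode(spikes, W):
    return np.column_stack([spikes, np.ones(len(spikes))]) @ W

spikes, eye = simulate_session()
W = fit_decoder(spikes[:1500], eye[:1500])      # train on early bins
pred = decode(spikes[1500:], W)                 # decode held-out bins
corr = [np.corrcoef(pred[:, i], eye[1500:, i])[0, 1] for i in range(2)]
```

    Even this simple model decodes held-out position well above chance, consistent with the abstract's observation that simpler decoders trade some accuracy for much shorter training time.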

  • Article (No Access)

    A SURVEY OF AUTOMATIC PERSON RECOGNITION USING EYE MOVEMENTS

    The human eye is rich in physical and behavioral attributes that can be used for automatic person recognition. Physical attributes such as the iris attracted early attention and yielded strong recognition results, but, like most physical biometrics, they have several disadvantages, including intrusive acquisition and vulnerability to spoofing attacks. Consequently, over the last decade the behavioral attributes extracted from human eyes have steadily gained interest in the automatic person recognition research community. In this first-of-its-kind survey, we present the studies that utilize the behavioral attributes of human eyes for automatic person recognition. We propose a unique classification based on the type of stimulus used to elicit the behavioral attributes. In addition, for each approach we carefully examine the common steps involved in automatic person recognition, from database acquisition and feature extraction to classification. Lastly, we present a comparison of the recognition results obtained by each approach.

  • Article (No Access)

    INTEGRATED SEGMENTATION AND RECOGNITION THROUGH EXHAUSTIVE SCANS OR LEARNED SACCADIC JUMPS

    This paper advances two approaches to integrating handwritten character segmentation and recognition within one system, where the underlying function is learned by a backpropagation neural network. Integrated segmentation and recognition is necessary when characters overlap or touch, or when an individual character is broken up. The first approach exhaustively scans a field of characters, effectively creating a possible segmentation at each scan point. A neural net is trained to both identify when its input window is centered over a character, and if it is, to classify the character. This approach is similar to most recently advanced approaches to integrating segmentation and recognition, and has the common flaw of generating too many possible segmentations to be truly efficient. The second approach overcomes this weakness without reducing accuracy by training a neural network to mimic the ballistic and corrective saccades (eye movements) of human vision. A single neural net learns to jump from character to character, making corrective jumps when necessary, and to classify the centered character when properly fixated. The significant aspect of this system is that the neural net learns to both control what is in its input window as well as to recognize what is in the window. High accuracy results are reported for a standard database of handprinted digits for both approaches.

  • Article (No Access)

    CONTROL OF EYE AND ARM MOVEMENTS USING ACTIVE, ATTENTIONAL VISION

    Recent related approaches in the areas of vision, motor control and planning are attempting to reduce the computational requirements of each process by restricting the class of problems that can be addressed. Active vision, differential kinematics and reactive planning are all characterized by their minimal use of representations, which simplifies both the required computations and the acquisition of models. This paper describes an approach to visually-guided motor control that is based on active vision and differential kinematics, and is compatible with reactive planning. Active vision depends on an ability to choose a region of the visual environment for task-specific processing. Visual attention provides a mechanism for choosing the region to be processed in a task-specific way. In addition, this attentional mechanism provides the interface between the vision and motor systems by representing visual position information in a 3-D retinocentric coordinate frame. Coordinates in this frame are transformed into eye and arm motor coordinates using kinematic relations expressed differentially. A real-time implementation of these visuomotor mechanisms has been used to develop a number of visually-guided eye and arm movement behaviors.
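
    The differential-kinematics step described above, mapping an attended target's displacement to joint motion, can be sketched for a hypothetical 2-link planar arm. Link lengths, joint angles, and the target step below are invented for illustration and are not the paper's robot:

```python
import numpy as np

# Hypothetical 2-link planar arm, invented for illustration; link lengths
# and joint angles are assumptions, not the paper's hardware.
def planar_jacobian(q, l1=0.3, l2=0.25):
    """Jacobian of the end-effector position w.r.t. the two joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def joint_step(jacobian, dx):
    """dq = pinv(J) @ dx: least-squares joint motion for a Cartesian step."""
    return np.linalg.pinv(jacobian) @ dx

q = np.array([0.4, 0.8])     # current joint angles (rad)
dx = np.array([0.01, 0.0])   # small Cartesian step toward the attended target
dq = joint_step(planar_jacobian(q), dx)
```

    Expressing the relation differentially means only the Jacobian is needed at each instant; the pseudoinverse gives the minimum-norm joint motion, one common way to realize such a mapping.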

  • Article (No Access)

    RECIPROCAL-WEDGE TRANSFORM IN ACTIVE STEREO

    The Reciprocal-Wedge Transform (RWT) facilitates space-variant image representation. In this paper a V-plane projection method is presented as a model for imaging using the RWT. It is then shown that space-variant sensing with this new RWT imaging model is suitable for fixation control in active stereo that exhibits vergence and versional eye movements and scanpath behaviors. A computational interpretation of stereo fusion in relation to disparity limit in space-variant imagery leads to the development of a computational model for binocular fixation. The vergence-version movement sequence is implemented as an effective fixation mechanism using the RWT imaging. A fixation system is presented to show the various modules of camera control, vergence and version.
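
    For orientation, one common formulation of the reciprocal-wedge transform maps image coordinates (x, y) to (1/x, y/x). This minimal sketch (an assumption about the exact convention, which the paper may refine) shows the mapping and its inverse:

```python
# One common formulation of the reciprocal-wedge transform (an assumption
# about the exact convention used in the paper): (x, y) -> (1/x, y/x).
def rwt(x, y):
    """Forward transform; undefined on the x = 0 line."""
    return 1.0 / x, y / x

def rwt_inverse(u, v):
    """Inverse mapping: x = 1/u, y = v/u."""
    return 1.0 / u, v / u

u, v = rwt(4.0, 2.0)
x_back, y_back = rwt_inverse(u, v)
```

    Since dx = x² du for constant steps in u, equal increments in the transformed coordinate sample the periphery increasingly coarsely, which is the space-variant property the fixation model exploits.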

  • Article (No Access)

    FRACTAL-BASED ANALYSIS OF THE INFLUENCE OF AUDITORY STIMULI ON EYE MOVEMENTS

    Fractals, 01 Jun 2018

    Analyzing the influence of external stimuli on human eye movements is an important challenge in vision research. In this paper, we investigate the plasticity of eye movements under applied auditory stimuli (music). For this purpose, we use fractal theory, which provides tools such as the fractal dimension as an indicator of process complexity. This study reveals, for the first time, the correlation between the fractal dynamics of eye movements and the fractal dynamics of auditory stimuli. Based on the performed analysis, the fractal structure of the eye movements shifts toward the fractal structure of the applied auditory stimuli: greater variation in the fractal dynamics of the auditory stimuli causes greater variation in the fractal dynamics of the eye movements. The observed behavior is explained through the nervous system. For rehabilitation purposes, the methodology employed in this research could be investigated in patients with vision problems, for whom the applied music could potentially improve vision.
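
    The fractal-dimension indicator mentioned above can be illustrated with the Higuchi estimator, a standard choice for 1-D signals such as eye-movement traces (the parameters here are illustrative assumptions, not the authors' exact settings):

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D signal (≈1 smooth, ≈2 noise-like)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, kmax + 1)
    curve_lengths = []
    for k in ks:
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            # Mean absolute increment of the subsampled series,
            # rescaled to the full series length, normalised by k.
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)
            lk.append(dist * norm / k)
        curve_lengths.append(np.mean(lk))
    # Slope of log L(k) versus log(1/k) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(curve_lengths), 1)
    return slope

rng = np.random.default_rng(1)
fd_smooth = higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 2000)))  # near 1
fd_noise = higuchi_fd(rng.normal(size=2000))                     # near 2
```

    A smooth trace yields a dimension near 1 and a noise-like trace a dimension near 2, which is what makes the measure usable as a complexity indicator for gaze signals.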

  • Article (Open Access)

    ANALYSIS OF THE CORRELATION BETWEEN EYES AND BRAIN ACTIVITIES IN RESPONSE TO MOVING VISUAL STIMULI

    Fractals, 03 Nov 2021

    This paper analyzes the coupling between the reactions of the eyes and the brain in response to visual stimuli. Since eye movements and electroencephalography (EEG) signals, as features of eye and brain activity, have complex patterns, we utilized fractal theory and sample entropy to decode the correlation between them. In the experiment, subjects looked at a dot that moved along different random paths (dynamic visual stimuli) on a computer screen in front of them while we recorded their EEG signals and eye movements simultaneously. The results indicated that the changes in the complexity of eye movements and EEG signals are coupled (r = 0.8043 for the fractal dimension and r = 0.9259 for sample entropy), reflecting the coupling between brain and eye activities. This analysis could be extended to evaluate the correlation between the activities of other organs and the brain.
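
    As a sketch of the sample-entropy measure used here (the standard definition; the embedding length, tolerance, and test signals are illustrative assumptions):

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy -ln(A/B): B counts length-m template matches and A
    counts length-(m+1) matches within tolerance r (Chebyshev distance)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        total = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            total += int(np.sum(d <= r))
        return total

    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(2)
se_regular = sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 1000)))
se_irregular = sample_entropy(rng.normal(size=1000))
```

    A regular signal produces many repeated templates and hence low entropy, while an irregular one produces few, which is why the measure can track complexity changes in both EEG and gaze recordings.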

  • Article (No Access)

    Information-based Analysis of the Coupling between Dynamic Visual Stimuli, Eye Movements, and Brain Signals

    Our eyes constantly explore the surrounding environment, and the brain controls their activity through the nervous system. Hence, analyzing the correlation between the activities of the eyes and the brain is an important area of research in vision science. This paper evaluates the coupling between the reactions of the eyes and the brain in response to different moving visual stimuli. Since both eye movements and EEG signals (as an indicator of brain activity) contain information, we employed Shannon entropy to decode the coupling between them. Ten subjects looked at four moving objects (dynamic visual stimuli) with different information contents while we recorded their EEG signals and eye movements. The results demonstrated that the changes in the information contents of eye movements and EEG signals are strongly correlated (r = 0.7084), indicating a strong correlation between brain and eye activities. This analysis could be extended to evaluate the correlation between the activities of other organs and the brain.
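
    The Shannon-entropy measure invoked above can be sketched as the entropy of a signal's amplitude histogram (the bin count and test signals are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

def shannon_entropy(x, bins=32):
    """Shannon entropy (bits) of a signal's amplitude histogram."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                       # 0·log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(3)
h_uniform = shannon_entropy(rng.uniform(-1.0, 1.0, 5000))  # spreads over all bins
h_peaked = shannon_entropy(rng.normal(0.0, 1.0, 5000))     # mass concentrated centrally
```

    A signal whose amplitudes spread evenly across the bins approaches the maximum of log2(bins) bits, while a concentrated signal scores lower, so the statistic serves as a simple information-content index for gaze and EEG traces.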

  • Article (No Access)

    Analysis of the Correlation between Static Visual Stimuli, Eye Movements, and Brain Signals

    Analysis of the correlation between the activities of the eyes and the brain is an important research area in physiological science. In this paper, we analyzed the correlation between the reactions of the eyes and the brain during rest and while watching different visual stimuli. Since every external stimulus transfers information to the human brain, and eye movements and EEG signals in turn contain information, we utilized Shannon entropy to evaluate the coupling between them. In the experiment, 10 subjects looked at 4 images with different information contents while we recorded their EEG signals and eye movements simultaneously. According to the results, the information contents of eye fluctuations, EEG signals, and visual stimuli are coupled, which reflects the coupling between brain and eye activities. Similar analyses could be performed to evaluate the correlation between the activities of other organs and the brain.

  • Article (No Access)

    OVERT AND COVERT VISUAL SEARCH IN PRIMATES: REACTION TIMES AND GAZE SHIFT STRATEGIES

    In order to investigate the search performance and strategies of nonhuman primates, two macaque monkeys were trained to search for a target template among differently oriented distractors in both free-gaze and fixed-gaze viewing conditions (overt and covert search). In free-gaze search, reaction times (RT) and eye movements revealed the theoretically predicted characteristics of exhaustive and self-terminating serial search, with certain exceptions that are also observed in humans. RT was linearly related to the number of fixations but not necessarily to the number of items on display. Animals scanned the scenes in a nonrandom manner spending notably more time on targets and items inspected last (just before reaction). The characteristics of free-gaze search were then compared with search performance under fixed gaze (covert search) and with the performance of four human subjects tested in similar experiments. By and large the performance characteristics of both groups were similar; monkeys were slightly faster, and humans more accurate. Both species produced shorter RT in fixed-gaze than in free-gaze search. But while RT slopes of the human subjects still showed the theoretically predicted difference between hits and rejections, slopes of the two monkeys appeared to collapse. Despite considerable priming and short-term learning when similar tests were continuously repeated, no substantial long-term training effects were seen when test conditions and set sizes were frequently varied. Altogether, the data reveal many similarities between human and monkey search behavior but indicate that search is not necessarily restricted to exclusively serial processes.

  • Article (No Access)

    Diagnosis of mild Alzheimer disease through the analysis of eye movements during reading

    Reading requires the integration of several central cognitive subsystems, ranging from attention and oculomotor control to word identification and language comprehension. Reading saccades and fixations contain information that can be correlated with word properties. When reading a sentence, the brain must decide where to direct the next saccade according to what has been read up to the current fixation. In this process, retrieval memory brings information about the current word's features and attributes into working memory. Based on this information, the prefrontal cortex predicts and triggers the next saccade. The frequency and cloze predictability of the fixated word, the preceding words, and the upcoming ones affect when and where the eyes will move next. In this paper we present a diagnostic technique for early-stage cognitive impairment detection based on analyzing eye movements while reading proverbs. We performed a case-control study involving 20 patients with probable Alzheimer's disease and 40 age-matched healthy controls. The measurements were analyzed using linear mixed-effects models, revealing that eye movement behavior while reading can provide valuable information about whether a person is cognitively impaired. To the best of our knowledge, this is the first study to use word-based properties, proverbs, and linear mixed-effects models for identifying cognitive abnormalities.

  • Article (No Access)

    ACTIVE 3D VISION THROUGH GAZE RELOCATION IN A HUMANOID ROBOT

    Motion parallax, the relative motion of 3D space at different distances experienced by a moving agent, is one of the most informative visual cues of depth and distance. While motion parallax is typically investigated during navigation, it also occurs in most robotic head/eye systems during rotations of the cameras. In these systems, as in the eyes of many species, the optical nodal points do not lie on the axes of rotation. Thus, a camera rotation shifts an object's projection on the sensor by an amount that depends not only on the rotation amplitude, but also on the distance of the object with respect to the camera. Several species rely on this cue to estimate distance. An oculomotor parallax is present also in the human eye, and during normal eye movements, displaces the stimulus on the retina by an amount that is well within the range of sensitivity of the visual system. We developed an anthropomorphic robot equipped with an oculomotor system specifically designed to reproduce the images impinging on the human retina. In this study, we thoroughly characterize the oculomotor parallax emerging while replicating human eye movements and describe a method for combining 3D information resulting from pan and tilt rotations of the cameras. We show that emulation of the dynamic strategy by which humans scan a visual scene gives accurate estimation of distance within the space surrounding the robot.
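
    The geometry described above can be sketched with a pinhole model: when the rotation axis is offset from the nodal point, a rotation also translates the nodal point, and that translation shifts a target's projection in inverse proportion to its distance. The focal length, offset, and angles below are illustrative assumptions, not the robot's calibration:

```python
import math

# Pinhole sketch of oculomotor parallax (illustrative values, not the
# robot's calibration). A rotation by theta about an axis offset metres
# behind the nodal point translates the nodal point laterally by
# t = 2*offset*sin(theta/2); that translation alone shifts the image of
# a target at distance z by f*t/z, so z can be recovered from the
# residual shift once the pure-rotation component has been compensated.

def nodal_translation(offset_m, theta_rad):
    return 2.0 * offset_m * math.sin(theta_rad / 2.0)

def parallax_shift(f_px, offset_m, theta_rad, z_m):
    """Image shift (pixels) due only to the nodal-point translation."""
    return f_px * nodal_translation(offset_m, theta_rad) / z_m

def distance_from_parallax(f_px, offset_m, theta_rad, shift_px):
    """Invert the parallax shift to estimate target distance."""
    return f_px * nodal_translation(offset_m, theta_rad) / shift_px

f_px, offset_m, theta = 800.0, 0.006, math.radians(10)  # assumed values
true_z = 0.5
shift = parallax_shift(f_px, offset_m, theta, true_z)
est_z = distance_from_parallax(f_px, offset_m, theta, shift)
```

    In this noiseless sketch the inversion recovers the distance exactly; in practice the shift must first be separated from the much larger rotational image motion, which is where the camera-control strategy matters.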

  • Article (No Access)

    Effects of English Capitals on Reading Performance of Chinese Learners: Evidence from Eye Tracking

    Native English speakers need more time to recognize capital letters in reading, yet the influence of capitals on Chinese learners' reading performance is seldom studied. We conducted an eye-tracking experiment to explore the cognitive features of Chinese learners reading texts containing capital letters, and also examined the effect of English proficiency on capital-letter reading. The results showed that capitals significantly increase the cognitive load in Chinese learners' reading process, complicate their cognitive processing, and lower their reading efficiency. Chinese learners' perception of capital letters is found to be an isolated event that may influence the word-superiority effect. English majors, who possess relatively stronger English logical-thinking capability than non-English majors, face the same difficulty as non-English majors when they have had no practice reading capital letters.

  • Chapter (No Access)

    Chapter 20: Perceived Cognitive Challenge Predicts Eye Movements While Viewing Contemporary Paintings

    Neuroaesthetics, 01 Jan 2025

    Contemporary art is often challenging for the viewer, especially when it violates classic rules of representation. Also, viewers usually have little knowledge about this type of art, making its reception even more difficult. Our main research question was how the cognitive challenge associated with contemporary art affects eye movement. In particular, we aimed to assess the impact on eye movements of (a) object-related cognitive challenge in terms of image properties (syntactic and semantic violations) and (b) subject-related cognitive challenge (composite subjective estimate of image inconsistency, ambiguity, and complexity). The eye movements of expert and naive participants were recorded while they freely viewed digital copies of contemporary paintings (four groups of five paintings each, differing in the presence of semantic and syntactic violations). We found that neither violations nor art expertise alone predicted eye movements, although perceived, subjectively experienced cognitive challenge did. In particular, subject-related cognitive challenge was associated with an increase in visual exploration (longer and more numerous fixations, bigger area of exploration, and longer viewing time). The roles of object-related and subject-related indicators of cognitive challenge in perception of contemporary art are discussed.

  • Chapter (No Access)

    Chapter 23: Rembrandt Portraits: Implicitly Detecting the Original Perspective

    Neuroaesthetics, 01 Jan 2025

    The original left-front perspective of portraits by Rembrandt was detected by Chinese students with a higher recognition rate as compared to the right-front perspective or the mirror reversals of both perspectives. Oculomotor patterns indicated that the eye regions provided essential information for such implicit detection.
