
  • Article (No Access)

    Towards Adaptive Ontology Visualization — Predicting User Success from Behavioral Data

    Ontology visualization plays an important role in human–data interaction by offering clarity and insight for complex structured datasets. Recent usability studies of ontology visualization techniques have added to our understanding of the features desired when assisting users in the interactive process. However, user behavioral data such as eye gaze and event logs have largely been used as indirect evidence to explain why a user may have carried out certain tasks in a controlled environment, as opposed to direct input that informs the underlying visualization system. Although findings from usability studies have contributed to the refinement of ontology visualizations as a whole, the visualization techniques themselves remain a one-size-fits-all approach, where all users are presented with the same visualizations and interactive features. By contrast, this paper investigates the feasibility of using behavioral data, such as user gaze and event logs, as real-time indicators of how appropriate or effective a given visualization may be for a specific user at a moment in time, which in turn may be used to inform the adaptation of the visualization to the user on the fly. To this end, we apply established predictive modeling techniques from machine learning to predict user success using gaze data and event logs. We present a detailed analysis from a controlled experiment and demonstrate that such predictions are not only feasible, but can also be significantly better than a baseline classifier during visualization usage. These predictions can then be used to drive the adaptation of visualization systems, providing ad hoc visualizations on a per-user basis, which in turn may increase individual user success and performance. Furthermore, we demonstrate the prediction performance using several different feature sets, and report on the results generated from several notable classifiers, where a decision tree-based learning model using a boosting algorithm produced the best overall results.
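
    As a rough illustration of the paper's best-performing setup (a boosted decision-tree ensemble compared against a baseline classifier), the sketch below trains scikit-learn's GradientBoostingClassifier on a hypothetical table of aggregated gaze and event-log features. The feature names, file name, and preprocessing are placeholders, not the authors' actual pipeline.

```python
# Hedged sketch: boosted-tree prediction of user success from gaze/event features.
# The CSV file and feature columns are hypothetical stand-ins for the paper's
# feature sets, which are not reproduced here.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("gaze_event_features.csv")    # one row per user-task trial
X = df[["mean_fixation_ms", "fixation_count", "saccade_amplitude",
        "clicks_per_min", "pan_zoom_events"]]   # illustrative features only
y = df["task_success"]                          # 1 = task completed, 0 = not

baseline = DummyClassifier(strategy="most_frequent")
model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)

print("baseline accuracy:    ", cross_val_score(baseline, X, y, cv=5).mean())
print("boosted-tree accuracy:", cross_val_score(model, X, y, cv=5).mean())
```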

  • Article (No Access)

    An Evaluation Method of Visualization Using Visual Momentum Based on Eye-Tracking Data

    A new method based on eye-tracking data — visual momentum (VM) — was introduced to quantitatively evaluate a dynamic interactive visualization interface. We extracted dimensionless factors from the raw eye-tracking data, including the fixation time factor T, the saccade amplitude factor D, and the fixation number factor N. A predictive regression model of VM was derived from these eye movement factors and the performance response time (RT). In Experiment 1, the experimental visualization materials were designed with six effectiveness levels according to the design techniques proposed by Woods to improve VM (total replacement, fixed-format data replacement, long shot, perceptual landmark, and spatial representation) and were tested in six parallel subject groups. The coefficients of the regression model were calculated from the data of 42 valid subjects in Experiment 1. The mean VM of each group increased with the number of design techniques applied. Performance and eye-tracking data differed significantly among the combined high-, middle-, and low-VM groups, and the data analysis indicates that these results were consistent with previous qualitative research on VM. We then tested and verified the regression model in Experiment 2 with another dynamic interactive visualization. The results indicated that the VM calculated by the regression model was significantly correlated with the performance data. Therefore, the derived parameter VM can serve as a quantitative indicator for evaluating dynamic visualization, and it could be a useful evaluation method for dynamic visualizations in general working environments.
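
    The abstract does not give the functional form of the VM regression, so the sketch below assumes a simple linear model, VM = b0 + bT·T + bD·D + bN·N, fitted by ordinary least squares; the factor values and the response-time-derived VM criterion are synthetic placeholders.

```python
# Hedged sketch: a linear visual-momentum (VM) regression fit by least squares.
# The paper's actual functional form and coefficients are not stated in the
# abstract; all data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 42                            # the study reports 42 valid subjects
T = rng.uniform(0.2, 1.0, n)      # fixation time factor (dimensionless)
D = rng.uniform(0.2, 1.0, n)      # saccade amplitude factor
N = rng.uniform(0.2, 1.0, n)      # fixation number factor
rt = rng.uniform(1.0, 5.0, n)     # response time (s)
vm = 1.0 / rt                     # illustrative criterion: faster response, higher VM

X = np.column_stack([np.ones(n), T, D, N])
coef, *_ = np.linalg.lstsq(X, vm, rcond=None)
print("fitted model: VM = %.3f + %.3f*T + %.3f*D + %.3f*N" % tuple(coef))
```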

  • Article (No Access)

    Vision-Based Global Localization of Points of Gaze in Sport Climbing

    Investigating realistic visual exploration is quite challenging in sport climbing, but it promises a deeper understanding of how performers adjust their perception-action couplings during task completion. However, the number of participants and trials analyzed in such experiments is often reduced to a minimum because of the time-consuming processing of eye-tracking data. Notably, mapping successive points of gaze from local views to the global scene is generally performed manually by watching eye-tracking video data frame by frame, a procedure that is not suitable for processing large numbers of datasets. Consequently, this study developed an automatic method for global point-of-gaze localization in indoor sport climbing. Specifically, an eye-tracking device was used to acquire local image frames and points of gaze from a climber’s local views. Artificial landmarks, designed as four-color-disk groups, were distributed on the wall to facilitate localization. Global points of gaze were computed based on planar homography transforms between the local and global positions of the detected landmarks. Thirty climbing trials were recorded and processed by the proposed method. The success rates (mean ± SD) were up to 85.72% ± 13.90%, and the errors (mean ± SD) were up to 0.1302 ± 0.2051 m. The proposed method will be employed to compute global points of gaze in our current climbing dataset, toward understanding the dynamic intertwining of gaze and motor behaviors during climbs.
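
    A minimal sketch of the core homography step described above, using OpenCV: the landmark pixel positions, wall coordinates, and gaze point below are made-up placeholders, and the actual system detects the color-disk landmarks automatically.

```python
# Hedged sketch: mapping a point of gaze from a local (scene-camera) frame to
# global wall coordinates via a planar homography. Four matched landmarks give
# an exact solution; with more detections, a robust method (e.g. RANSAC) applies.
import cv2
import numpy as np

# Pixel positions of detected landmarks in the eye tracker's scene image...
local_pts = np.array([[120, 80], [530, 95], [510, 400], [140, 390]], dtype=np.float32)
# ...and the same landmarks' known positions on the wall (in metres).
global_pts = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.5], [0.0, 1.5]], dtype=np.float32)

H, _ = cv2.findHomography(local_pts, global_pts)

gaze_local = np.array([[[330.0, 240.0]]], dtype=np.float32)  # gaze point (pixels)
gaze_global = cv2.perspectiveTransform(gaze_local, H)
print("global point of gaze (m):", gaze_global.ravel())
```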

  • Article (No Access)

    A Deep Learning Approach to Imputation of Dynamic Pupil Size Data and Prediction of ADHD

    Attention-deficit/hyperactivity disorder (ADHD) is a common neurodevelopmental disorder in children and adolescents. Traditional diagnosis of ADHD focuses on observed behavior and reported symptoms, which may lead to misdiagnosis. Studies have explored computer-aided systems that improve the objectivity and accuracy of ADHD diagnosis by utilizing psychophysiological data measured with devices such as EEG and MRI; despite their performance, the low accessibility of these devices has prevented their widespread adoption. We propose a novel ADHD prediction method based on pupil size dynamics measured using eye tracking. Such data typically contain missing values owing to blinks and other anomalies or outliers, which negatively impact classification. We therefore applied an end-to-end deep learning model designed to impute the dynamic pupil size data and predict ADHD simultaneously. We used a recorded dataset from an experiment involving 28 children with ADHD and 22 children as a control group; each subject performed an eight-second visuospatial working memory task 160 times, and we treated each trial as an independent data sample. The proposed model effectively imputes missing values and outperforms other models in predicting ADHD (AUC of 0.863). Thus, given its high accessibility and low cost, the proposed approach is promising for objective ADHD diagnosis.
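
    A minimal sketch, under assumed architecture details, of a network that jointly imputes masked pupil-size samples and emits an ADHD logit; the layer sizes, loss weighting, and sampling rate below are illustrative assumptions, not the authors' model.

```python
# Hedged sketch: joint imputation + classification on pupil-size sequences.
# Missing samples are zeroed and flagged by a mask channel; one LSTM encoder
# feeds both a per-step reconstruction head and a sequence-level ADHD head.
import torch
import torch.nn as nn

class ImputeClassify(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.impute_head = nn.Linear(hidden, 1)  # reconstructs pupil size per step
        self.cls_head = nn.Linear(hidden, 1)     # ADHD logit from final hidden state

    def forward(self, pupil, mask):
        # pupil: (B, T, 1) with zeros at missing samples; mask: (B, T, 1), 1 = observed
        x = torch.cat([pupil, mask], dim=-1)
        out, (h, _) = self.encoder(x)
        return self.impute_head(out), self.cls_head(h[-1])

model = ImputeClassify()
pupil = torch.randn(8, 240, 1)                    # e.g. 8 trials, 8 s at 30 Hz (assumed)
mask = (torch.rand(8, 240, 1) > 0.2).float()      # simulate 20% missing samples
labels = torch.randint(0, 2, (8, 1)).float()      # placeholder ADHD labels
recon, logit = model(pupil * mask, mask)
loss = (nn.functional.mse_loss(recon * mask, pupil * mask)
        + nn.functional.binary_cross_entropy_with_logits(logit, labels))
loss.backward()
```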

  • Article (No Access)

    Machine Learning Prediction of Locomotion Intention from Walking and Gaze Data

    In many applications of human–computer interaction, a prediction of the human’s next intended action is highly valuable. To control the direction and orientation of the body when walking towards a goal, a walking person relies on visual input obtained through eye and head movements. The analysis of these parameters might allow us to infer the intended goal of the walker. However, predicting human locomotion intentions is a challenging task, since the interactions between these parameters are nonlinear and highly dynamic. We employed machine learning models to investigate whether walking and gaze data can be used for locomotor prediction. We collected training data for the models in a virtual reality experiment in which 18 participants walked freely through a virtual environment while performing various tasks (walking in a curve, avoiding obstacles, and searching for a target). The recorded position, orientation, and eye-tracking data were used to train an LSTM model to predict the future position of the walker on two different time scales: short-term predictions of 50 ms and long-term predictions of 2.5 s. The trained LSTM model predicted free walking paths with a mean error of 5.14 mm for the short-term prediction and 65.73 cm for the long-term prediction. We then investigated how much the different features (direction and orientation of the head and body, and direction of gaze) contributed to prediction quality. For short-term predictions, position was the most important feature, while orientation and gaze did not provide a substantial benefit. For long-term predictions, gaze and the orientation of the head and body provided significant contributions. Gaze offered the greatest predictive utility in situations in which participants walked short distances or changed their walking speed.
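
    A minimal sketch of an LSTM position regressor in the spirit of the model above; the feature layout, window length, and output parameterization are assumptions rather than the paper's exact setup.

```python
# Hedged sketch: predicting a walker's future position from a window of
# position, orientation, and gaze features with an LSTM. Feature dimensions
# and the prediction horizon are illustrative assumptions.
import torch
import torch.nn as nn

FEATS = 12  # e.g. position (3) + head/body orientation (6) + gaze direction (3)

class WalkPredictor(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(FEATS, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)   # predicted (x, z) ground-plane position

    def forward(self, seq):               # seq: (batch, time, FEATS)
        _, (h, _) = self.lstm(seq)
        return self.out(h[-1])

model = WalkPredictor()
window = torch.randn(16, 50, FEATS)       # 16 samples, 50 past time steps
future_xy = model(window)                 # position at the chosen horizon
print(future_xy.shape)                    # torch.Size([16, 2])
```

    Separate models (or output heads) would be trained for the 50 ms and 2.5 s horizons, and feature-importance comparisons can be made by retraining with individual feature groups dropped from FEATS.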

  • Article (No Access)

    Mask R-CNN Method for Dashboard Feature Extraction in Eye Tracking

    Traditional dashboard information extraction techniques are easily affected by external factors and lack robustness. To improve the safety of pilots’ interaction with the dashboard, this paper proposes a method for extracting dashboard feature information in eye tracking, which acquires the gaze point on a simulated dashboard. It then uses Mask R-CNN to detect the gaze area and extract the target feature information. Finally, it fuses the two sets of data to determine the target gaze area the pilot attends to in the scene. Experimental results show that the proposed dashboard information extraction method achieves better accuracy.
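
    As an illustration of the gaze-area detection step, the sketch below runs torchvision's COCO-pretrained Mask R-CNN on a dashboard frame and tests which detected region the gaze point falls on; the paper trains its own dashboard-specific model, and the image path and gaze coordinates here are placeholders.

```python
# Hedged sketch: fusing an eye-tracker gaze point with Mask R-CNN instance
# masks. A COCO-pretrained model stands in for the paper's dashboard-trained one.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

img = convert_image_dtype(read_image("dashboard_frame.png"), torch.float)  # placeholder path
with torch.no_grad():
    pred = model([img])[0]   # dict with "boxes", "labels", "scores", "masks"

gaze_x, gaze_y = 412, 300    # gaze point from the eye tracker (pixels), placeholder
for mask, score in zip(pred["masks"], pred["scores"]):
    # mask: (1, H, W) soft mask; threshold it and test the gaze pixel
    if score.item() > 0.5 and mask[0, gaze_y, gaze_x] > 0.5:
        print("gaze falls on a detected region (score %.2f)" % score.item())
```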

  • Article (Open Access)

    Towards Eye-Tracking-Based Technology on Sight Interpretation Performance Improvement

    This study used the recording of eye-tracking-based target domain (TD) fixations as the primary approach to explore the correlation between occupying fixation and sight interpretation (SI) performance. We recorded gaze plots and gaze durations during sight interpretation and analyzed their correlation with interpretation performance. First, we designed a nine-point tracking calibration for a noninvasive study. Second, we carried out pre-experiments to determine the best experimental conditions. Finally, after a period of eye rest, we performed the formal test. Extensive experiments were conducted to verify the factors that affect SI performance, including the number of TD occupying fixations, the time cost of TD occupying fixations, and the concentration of TD occupying fixations. Statistical analysis of the experiments indicated that the psychological dictionary, long-term memory (LTM) information, and bilingual conversion skills are the main factors affecting the number and duration of eye-tracking TD occupying fixation spots.
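
    A minimal sketch of how the three TD fixation measures named above (count, time cost, concentration) might be computed from a fixation table; the column names, AOI rectangle, and the dispersion-based concentration proxy are assumptions, not the study's actual data format.

```python
# Hedged sketch: target-domain (TD) fixation metrics from an exported fixation
# table. The CSV layout and the TD area-of-interest bounds are hypothetical.
import pandas as pd

fix = pd.read_csv("fixations.csv")   # assumed columns: x, y, duration_ms
td = fix[fix.x.between(200, 600) & fix.y.between(100, 400)]  # TD area of interest

n_fixations = len(td)                          # number of TD occupying fixations
time_cost = td.duration_ms.sum()               # total TD fixation time (ms)
concentration = td[["x", "y"]].std().mean()    # dispersion: lower = more concentrated

print(n_fixations, time_cost, concentration)
```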