To recognize students’ emotions in English teaching effectively and accurately, and to regulate those emotions in a timely manner, a method of emotion recognition and regulation in English teaching based on affective computing technology is proposed. A skin color model is used to search for skin regions in the students’ facial images captured by the classroom camera, and the students’ facial expression images are detected. The detected expression images are preprocessed by size normalization and grayscale normalization. A binarization method then locates the eyes and mouth, the main facial organs that affect emotion, in the preprocessed expression image. Edge features of the expression image and features of the eyes and mouth are extracted, and all extracted features are fed into the model, which outputs the students’ emotion categories. Teaching strategies are adjusted and students’ emotions are regulated according to the recognized categories, finally realizing emotion recognition and regulation in English teaching. Experiments show that this method can effectively and comprehensively detect the facial expression images of students learning English and is efficient for emotion recognition in English teaching.
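A minimal sketch of the described detection and feature-extraction pipeline, assuming an OpenCV/NumPy environment; the YCrCb skin band, the 64x64 normalization size, and the eye/mouth region coordinates are illustrative assumptions, not the paper’s exact parameters.

```python
import cv2
import numpy as np

def detect_face_region(frame_bgr):
    """Search for skin-color regions and return the largest one as the face candidate."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    # Assumed, commonly used YCrCb skin band.
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return frame_bgr[y:y + h, x:x + w]

def preprocess(face_bgr, size=(64, 64)):
    """Size normalization and grayscale normalization of the expression image."""
    gray = cv2.cvtColor(cv2.resize(face_bgr, size), cv2.COLOR_BGR2GRAY)
    return cv2.equalizeHist(gray)

def extract_features(gray):
    """Edge features plus binarized eye/mouth regions (region coordinates are assumed)."""
    edges = cv2.Canny(gray, 50, 150)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    eyes = binary[10:30, :]       # assumed upper band containing the eyes
    mouth = binary[42:60, 12:52]  # assumed lower band containing the mouth
    feats = np.concatenate([edges.ravel(), eyes.ravel(), mouth.ravel()])
    return feats.astype(np.float32) / 255.0
```

The resulting feature vector would then be passed to an emotion classifier that outputs the emotion categories driving the teaching-strategy adjustment.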
Since pose-varying face images form a nonlinear convex manifold in high-dimensional image space, it is difficult to model their pose distribution with a simple probability density function. To address this difficulty, we divide the pose space into many constituent pose classes and treat the continuous pose estimation problem as a discrete pose-class identification problem. We propose hierarchically structured ML (Maximum Likelihood) pose classifiers in a reduced feature space to decrease the computation time for pose identification, where the pose space is divided into several pose groups and each group consists of a number of similar neighboring poses. We use the CONDENSATION algorithm to find a newly appearing face and track the face across a variety of poses in real time. Simulation results show that the proposed pose identification using hierarchically structured ML pose classifiers is faster than conventional pose identification using flat-structured ML pose classifiers. A real-time facial pose tracking system is built with the high-speed hierarchically structured ML pose classifiers.
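A toy sketch of the two-level ML identification step, assuming Gaussian class-conditional densities in the reduced feature space; the group-level Gaussian, the data structures, and all names here are illustrative assumptions.

```python
import numpy as np

class GaussianPoseClass:
    """ML pose class modeled by a Gaussian density in the reduced feature space."""
    def __init__(self, mean, cov):
        self.mean = np.asarray(mean, dtype=float)
        self.inv = np.linalg.inv(np.asarray(cov, dtype=float))
        self.logdet = np.linalg.slogdet(cov)[1]

    def log_likelihood(self, x):
        d = np.asarray(x, dtype=float) - self.mean
        return -0.5 * (d @ self.inv @ d + self.logdet + len(d) * np.log(2 * np.pi))

def hierarchical_ml_identify(x, groups):
    """groups: {name: (group_level_classifier, {pose_name: pose_classifier})}.

    Only the winning group's member poses are evaluated, which is where the
    speedup over a flat search across all pose classes comes from."""
    best_group = max(groups, key=lambda g: groups[g][0].log_likelihood(x))
    members = groups[best_group][1]
    best_pose = max(members, key=lambda p: members[p].log_likelihood(x))
    return best_group, best_pose

# Example wiring (illustrative): two groups, each with two Gaussian pose classes.
eye = np.eye(5)
groups = {
    "frontal": (GaussianPoseClass(np.zeros(5), eye),
                {"0deg": GaussianPoseClass(np.zeros(5), eye),
                 "15deg": GaussianPoseClass(np.full(5, 0.3), eye)}),
    "profile": (GaussianPoseClass(np.full(5, 2.0), eye),
                {"60deg": GaussianPoseClass(np.full(5, 1.8), eye),
                 "90deg": GaussianPoseClass(np.full(5, 2.2), eye)}),
}
print(hierarchical_ml_identify(np.full(5, 0.25), groups))  # -> ('frontal', '15deg')
```

With G groups of K poses each, a frame requires evaluating roughly G + K densities instead of the G·K a flat classifier evaluates, which is the source of the reported speedup.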
In this paper, an automatic rotation-invariant multiview face detection method, which utilizes a modified Skin Color Model (SCM), is presented. First, hybrid models based on a Gaussian Mixture Model (GMM) and a Support Vector Machine (SVM) are used to classify human skin regions in color images. The novelty of the adaptive hybrid model is its ability to predict the chromatic skin color band for each image, accounting for camera calibration differences and the luminance conditions of the environment. Classified skin regions are then converted to a grayscale image with a threshold based on the predicted chromatic skin color bands, which further enhances detection performance. Next, Principal Component Analysis (PCA) is applied to the gray segmented regions. Face detection is carried out on the PCA-extracted features, along with selected features, using support vector regression. The output of this procedure is used to report the final face detection result. The proposed method is also beneficial for the rotation-invariant face recognition problem.
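A rough scikit-learn sketch of the two stages described above: a GMM/SVM hybrid skin classifier on per-pixel chroma and a PCA-plus-support-vector-regression face scorer. The feature layout, thresholds, and the way the two skin scores are fused are assumptions of this sketch, not the paper’s exact design.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import PCA
from sklearn.svm import SVC, SVR

# --- Hybrid skin model: GMM likelihood of skin chroma, refined by an SVM ---
def fit_skin_models(skin_pixels, nonskin_pixels, n_components=4):
    """skin_pixels / nonskin_pixels: (N, 2) arrays of chromatic values (e.g., Cb, Cr)."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full").fit(skin_pixels)
    X = np.vstack([skin_pixels, nonskin_pixels])
    y = np.r_[np.ones(len(skin_pixels)), np.zeros(len(nonskin_pixels))]
    svm = SVC(kernel="rbf").fit(X, y)
    return gmm, svm

def skin_mask(chroma_image, gmm, svm, loglik_thresh=-8.0):
    """Per-pixel skin decision; the log-likelihood threshold is an assumed value."""
    h, w, _ = chroma_image.shape
    pixels = chroma_image.reshape(-1, 2)
    gmm_ok = gmm.score_samples(pixels) > loglik_thresh
    svm_ok = svm.predict(pixels) > 0.5
    return (gmm_ok & svm_ok).reshape(h, w)

# --- PCA features of the gray segmented region, scored by support vector regression ---
def fit_face_scorer(gray_patches, labels, n_components=20):
    """gray_patches: (N, H*W) flattened gray regions; labels: 1.0 for face, 0.0 for non-face."""
    pca = PCA(n_components=n_components).fit(gray_patches)
    svr = SVR(kernel="rbf").fit(pca.transform(gray_patches), labels)
    return pca, svr

def is_face(gray_patch, pca, svr, score_thresh=0.5):
    """Report the final detection decision from the regression score."""
    return svr.predict(pca.transform(gray_patch.reshape(1, -1)))[0] > score_thresh
```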
In this paper, a monitoring system for supermarkets that can distinguish normal behaviors from abnormal ones based on the trajectory of the palm is designed. Our system brings together several traditional algorithms and insights to construct a framework for a new field called supermarket monitoring. In this project, only the moving hands are considered. To fulfill the automated monitoring task, a self-adaptive background subtraction technique and the YIQ skin color model are combined to detect the moving hands. A new method is developed to localize the palm accurately. After the moving hands are detected, a linear prediction model is employed to realize object tracking. ART is used to distinguish normal behaviors from abnormal ones by analyzing the trajectory characteristics of the moving hands. Experimental results show that the system is robust and effective.
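A minimal sketch of the moving-hand detection step, combining a self-adaptive (running-average) background model with a YIQ skin band; the NTSC conversion coefficients are standard, while the adaptation rate and the I-channel thresholds are assumed values.

```python
import numpy as np

# Standard (approximate) NTSC RGB -> YIQ conversion matrix.
RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

def yiq_skin_mask(frame_rgb, i_low=12.0, i_high=80.0):
    """Skin mask from the I (in-phase) channel; the band is an assumed, typical range."""
    yiq = frame_rgb.astype(np.float32) @ RGB_TO_YIQ.T
    i_chan = yiq[..., 1]
    return (i_chan > i_low) & (i_chan < i_high)

class AdaptiveBackground:
    """Self-adaptive background subtraction via a running average of grayscale frames."""
    def __init__(self, first_frame_gray, alpha=0.05, diff_thresh=25.0):
        self.bg = first_frame_gray.astype(np.float32)
        self.alpha = alpha          # adaptation rate (assumed value)
        self.diff_thresh = diff_thresh

    def foreground(self, frame_gray):
        frame = frame_gray.astype(np.float32)
        moving = np.abs(frame - self.bg) > self.diff_thresh
        # Adapt the background more slowly where motion is detected.
        rate = np.where(moving, self.alpha * 0.1, self.alpha)
        self.bg += rate * (frame - self.bg)
        return moving

def moving_hand_mask(frame_rgb, frame_gray, bg_model):
    """Moving hands = foreground pixels that also fall inside the YIQ skin band."""
    return bg_model.foreground(frame_gray) & yiq_skin_mask(frame_rgb)
```

The resulting hand mask would feed the palm localization, trajectory tracking, and trajectory-based classification stages described above.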