
  Bestsellers

  • Article (Open Access)

    An Automated Quiet Sleep Detection Approach in Preterm Infants as a Gateway to Assess Brain Maturation

    Sleep state development in preterm neonates can provide crucial information regarding functional brain maturation and give insight into neurological well-being. However, visual labeling of sleep stages from EEG requires expertise and is very time-consuming, prompting the need for an automated procedure. We present a robust method for automated detection of preterm sleep from EEG, over a wide postmenstrual age (PMA=gestational age+postnatal age) range, focusing first on Quiet Sleep (QS) as an initial marker for sleep assessment. Our algorithm, CLuster-based Adaptive Sleep Staging (CLASS), detects QS if it remains relatively more discontinuous than non-QS over PMA. CLASS was optimized on a training set of 34 recordings aged 27–42 weeks PMA, and performance was then assessed on a distinct test set of 55 recordings over the same age range. Results were compared to visual QS labeling from two independent raters (inter-rater agreement Kappa=0.93), using Sensitivity, Specificity, Detection Factor (DF=proportion of visual QS periods correctly detected by CLASS) and Misclassification Factor (MF=proportion of CLASS-detected QS periods that are misclassified). CLASS performance proved optimal across recordings at 31–38 weeks (median DF=1.0, median MF 0–0.25, median Sensitivity 0.93–1.0, and median Specificity 0.80–0.91 across this age range), with minimal misclassifications at 35–36 weeks (median MF=0). To illustrate the potential of CLASS in facilitating clinical research, normal maturational trends over PMA were derived from CLASS-estimated QS periods, visual QS estimates, and nonstate-specific periods (containing QS and non-QS) in the EEG recording. CLASS QS trends agreed with those from visual QS, with both showing stronger correlations than nonstate-specific trends. This highlights the benefit of automated QS detection for exploring brain maturation.
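The Detection Factor and Misclassification Factor above are overlap proportions between visually labeled and automatically detected QS periods. A minimal sketch of how such interval-overlap metrics can be computed; the half-open interval representation and function names are ours, not taken from the CLASS paper:

```python
def overlaps(a, b):
    """True if half-open intervals a=(start, end) and b=(start, end) overlap."""
    return a[0] < b[1] and b[0] < a[1]

def detection_factor(visual_qs, detected_qs):
    """Proportion of visual QS periods overlapped by at least one detection."""
    if not visual_qs:
        return 0.0
    hits = sum(any(overlaps(v, d) for d in detected_qs) for v in visual_qs)
    return hits / len(visual_qs)

def misclassification_factor(visual_qs, detected_qs):
    """Proportion of detected QS periods overlapping no visual QS period."""
    if not detected_qs:
        return 0.0
    misses = sum(not any(overlaps(d, v) for v in visual_qs) for d in detected_qs)
    return misses / len(detected_qs)
```

For example, with one of two visual periods found and one spurious detection, both metrics come out at 0.5.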

  • Article (Open Access)

    Improved Activity Recognition Combining Inertial Motion Sensors and Electroencephalogram Signals

    Human activity recognition and neural activity analysis are the basis for human computational neuroethology research, which deals with the simultaneous analysis of behavioral ethogram descriptions and neural activity measurements. Wireless electroencephalography (EEG) and wireless inertial measurement units (IMUs) allow experimental data recording with improved ecological validity, where the subjects can carry out natural activities while data recording remains minimally invasive. Specifically, we aim to show that EEG and IMU data fusion allows improved human activity recognition in a natural setting. We defined an experimental protocol composed of natural sitting, standing and walking activities, and we recruited subjects at two sites: in-house (N=4) and out-house (N=12) populations with different demographics. Experimental protocol data capture was carried out with validated commercial systems. Classifier model training and validation were carried out with the scikit-learn open-source machine learning Python package. EEG features consist of the amplitudes of the standard EEG frequency bands. Inertial features were the instantaneous positions of the tracked body points after moving-average smoothing to remove noise. We carried out three validation processes: (a) a 10-fold cross-validation process per experimental protocol repetition, (b) the inference of the ethograms, and (c) transfer learning from each experimental protocol repetition to the remaining repetitions. The in-house accuracy results were lower and much more variable than the out-house session results. In general, random forest was the best-performing classifier model. The best cross-validation results, ethogram accuracy, and transfer learning were achieved from the fusion of EEG and IMU data. Transfer learning behaved poorly compared to classification on the same protocol repetition, but its accuracy was still greater than 0.75 on average for the out-house data sessions. Transfer learning accuracy among repetitions of the same subject was above 0.88 on average. Ethogram prediction accuracy was above 0.96 on average. Therefore, we conclude that wireless EEG and IMUs allow for the definition of natural experimental designs with high ecological validity toward human computational neuroethology research. The fusion of EEG and IMU signals improves activity and ethogram recognition.
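The inertial features above are tracked-point positions after moving-average smoothing, concatenated with EEG band amplitudes before classification. A toy sketch of that smoothing-plus-fusion step; the trailing-window smoothing and function names are our simplifications, not the paper's exact pipeline:

```python
def moving_average(x, w):
    """Smooth a 1-D sequence with a trailing window of up to w samples."""
    return [sum(x[max(0, i - w + 1): i + 1]) / len(x[max(0, i - w + 1): i + 1])
            for i in range(len(x))]

def fuse_features(eeg_bands, imu_positions, w=3):
    """Concatenate per-sample EEG band amplitudes with smoothed IMU positions.

    eeg_bands: T x B list of band-amplitude vectors.
    imu_positions: list of per-channel position sequences, each of length T.
    """
    smoothed = [moving_average(ch, w) for ch in imu_positions]
    return [list(eeg_bands[t]) + [ch[t] for ch in smoothed]
            for t in range(len(eeg_bands))]
```

The fused vectors can then be fed to any scikit-learn classifier, e.g. a random forest as in the study.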

  • Article (Open Access)

    Spatio-Temporal Image-Based Encoded Atlases for EEG Emotion Recognition

    Emotion recognition plays an essential role in human–human interaction since it is a key to understanding the emotional states and reactions of human beings when they are subject to events and engagements in everyday life. Moving towards human–computer interaction, the study of emotions becomes fundamental because it is at the basis of the design of advanced systems to support a broad spectrum of application areas, including forensic, rehabilitative, educational, and many others. An effective method for discriminating emotions is based on ElectroEncephaloGraphy (EEG) data analysis, which is used as input for classification systems. Collecting brain signals on several channels and for a wide range of emotions produces cumbersome datasets that are hard to manage, transmit, and use in varied applications. In this context, the paper introduces the Empátheia system, which explores a different EEG representation by encoding EEG signals into images prior to their classification. In particular, the proposed system extracts spatio-temporal image encodings, or atlases, from EEG data through the Processing and transfeR of Interaction States and Mappings through Image-based eNcoding (PRISMIN) framework, thus obtaining a compact representation of the input signals. The atlases are then classified through the Empátheia architecture, which comprises branches based on convolutional, recurrent, and transformer models designed and tuned to capture the spatial and temporal aspects of emotions. Extensive experiments were conducted on the Shanghai Jiao Tong University (SJTU) Emotion EEG Dataset (SEED) public dataset, where the proposed system significantly reduced the data size while retaining high performance. The results obtained highlight the effectiveness of the proposed approach and suggest new avenues for data representation in emotion recognition from EEG signals.

  • Article (Open Access)

    Combining EEG Features and Convolutional Autoencoder for Neonatal Seizure Detection

    Neonatal epilepsy is a common emergency in neonatal intensive care units (NICUs) that requires timely attention, early identification, and treatment. Traditional detection methods mostly use supervised learning with enormous amounts of labeled data. Hence, this study offers a semi-supervised hybrid architecture for detecting seizures, called Fd-CAE, which combines an extracted electroencephalogram (EEG) feature dataset with a convolutional autoencoder. First, various features in the time domain and entropy domain are extracted to characterize the EEG signal, which helps distinguish epileptic seizures subsequently. Then, the unlabeled EEG features are fed into the convolutional autoencoder (CAE) for training, which effectively represents EEG features by optimizing the loss between the input and output features. This unsupervised feature learning process can better combine and optimize EEG features from unlabeled data. After that, the pre-trained encoder part of the model is used for further feature learning on labeled data to obtain its low-dimensional feature representation and achieve classification. The model was evaluated on the neonatal EEG dataset collected at the University of Helsinki Hospital and shows high discriminative ability for seizure detection, with an accuracy of 92.34%, precision of 93.61%, recall of 98.74%, and F1-score of 95.77%. The results show that unsupervised learning by the CAE is beneficial to the characterization of EEG signals, and the proposed Fd-CAE method significantly improves classification performance.
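The first stage extracts time-domain and entropy-domain features per EEG window. A small illustrative sketch of such features; the specific descriptors and bin count are our assumptions, since the abstract does not list the exact feature set:

```python
import math

def time_domain_features(window):
    """Simple time-domain descriptors of one EEG window."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    line_length = sum(abs(window[i] - window[i - 1]) for i in range(1, n))
    return {"mean": mean, "variance": var, "line_length": line_length}

def shannon_entropy(window, bins=8):
    """Histogram-based Shannon entropy (bits) of the amplitude distribution."""
    lo, hi = min(window), max(window)
    width = (hi - lo) / bins or 1.0   # guard against a constant window
    counts = [0] * bins
    for x in window:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts if c)
```

A constant window yields zero entropy, while an amplitude spread uniform across all bins yields the maximum log2(bins).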

  • Article (Open Access)

    Effects of Emotional Olfactory Stimuli on Modulating Angry Driving Based on an EEG Connectivity Study

    Effectively regulating angry driving has become critical to ensuring road safety, yet existing research lacks a feasible exploration of anger-driving regulation. This paper delves into the effects and neural mechanisms of emotional olfactory stimuli (EOS) on regulating angry driving based on EEG. First, this study designed an angry-driving regulation experiment based on EOS to record EEG signals. Second, brain activation patterns under various EOS conditions are explored by analyzing functional brain networks (FBNs). Additionally, the paper analyzes dynamic alterations in anger-related characteristics to explore the intensity and persistence of anger regulation under different EOS. Finally, the paper studies how the frequency-band energy of the EEG changes under EOS through time–frequency analysis. The results indicate that EOS can effectively regulate a driver's anger, with banana odor showing superior effects: under the banana odor stimulus, synchronization between the parietal and temporal lobes significantly decreased, the regulatory effect was both optimal and persistent, and the distribution of high-energy activation states in the parietal lobe region was significantly reduced. Our findings provide new insights into the dynamic characterization of functional connectivity during anger-driving regulation and demonstrate the potential of EOS as a reliable tool for regulating angry driving.
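Functional brain networks are built from pairwise coupling between EEG channels. The abstract does not name the connectivity metric, so as a hedged illustration we use plain Pearson correlation between channel signals as the edge weight:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length channel signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def connectivity_matrix(channels):
    """Symmetric channel-by-channel correlation matrix (one FBN snapshot)."""
    k = len(channels)
    return [[pearson(channels[i], channels[j]) for j in range(k)]
            for i in range(k)]
```

Thresholding such a matrix gives the adjacency structure of one FBN; the paper's decreased parietal–temporal synchronization would appear as smaller off-diagonal entries between those channel groups.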

  • Article (Open Access)

    Mental Workload Artificial Intelligence Assessment of Pilots’ EEG Based on Multi-Dimensional Data Fusion and LSTM with Attention Mechanism Model

    EEG has proven to be an effective tool for studying cognition and mental workload by detecting changes in brain activity potentials. The mental workload of pilots in aviation flight is closely related to the characteristics of the flight tasks. Previous methods suffer from a lack of objectivity, weak EEG analysis ability and the absence of real-time analysis. To solve these problems, this paper proposes a multi-dimensional data-fusion mental workload calculation method based on flight effect evaluation, which integrates vision, operational behavior and visual gaze, and classifies and analyzes them in combination with EEG data. The method evaluates the mental workload of pilots from three aspects, visual gaze behavior, control behavior and flight effect, in a simulated flight experimental environment, realizing a more objective mental workload analysis. Then, the synchronously collected EEG data are segmented and sampled to form a dataset, and an LSTM neural network model integrating an attention mechanism is established, in which the attention mechanism improves the network's feature processing ability for classifying complex EEG data. After training, the final model achieves 94% detection accuracy on 2-s EEG segments and is capable of real-time analysis in the application environment. Compared with a previous similar LSTM model, the accuracy is improved by 6%, which also shows the effectiveness of the model.
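The attention mechanism weights the LSTM's per-time-step outputs before classification. A dependency-free sketch of that attention-pooling step; scoring each step by a dot product with a learned vector is our simplification, as the paper's exact mechanism is not specified here:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(hidden_states, score_weights):
    """Weight per-time-step hidden states by attention and sum them.

    hidden_states: T x D list of feature vectors (e.g. LSTM outputs).
    score_weights: length-D vector scoring each time step via a dot product.
    Returns (pooled vector, attention weights).
    """
    scores = [sum(h * w for h, w in zip(state, score_weights))
              for state in hidden_states]
    alphas = softmax(scores)
    dim = len(hidden_states[0])
    pooled = [sum(a * state[d] for a, state in zip(alphas, hidden_states))
              for d in range(dim)]
    return pooled, alphas
```

With uniform scores the pooling reduces to a plain average; informative scores let the classifier focus on the most discriminative 2-s segment portions.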

  • Article (Open Access)

    EFFECTS OF FUNCTIONAL GAMES USING NEUROFEEDBACK ON COGNITIVE FUNCTIONS AND ELECTROENCEPHALOGRAPHY FOR PEOPLE WITH DEVELOPMENTAL DISABILITIES

    The purpose of this study was to investigate the effects of functional games using neurofeedback on the cognitive function and brain waves of people with developmental disabilities. Toward this goal, the MiND RACER program, developed by Minders Co., Ltd. in 2018, was used with 5 people with developmental disabilities enrolled in the continuing education course at D University in Gyeongsangbuk-do, Korea. It was carried out once a week from October 13 to December 15, 2020; pre- and post-tests were conducted one week before and one week after the program execution. Electroencephalography measurements were performed 3 times, as pre-, intermediate, and post-tests. As a result, the functional game using neurofeedback produced a remarkable change in the cognitive function of the participants with developmental disabilities and significant changes in α waves.

  • Article (Open Access)

    EEG SIGNAL-DRIVEN HUMAN–COMPUTER INTERACTION EMOTION RECOGNITION MODEL USING AN ATTENTIONAL NEURAL NETWORK ALGORITHM

    The bar for the human–machine interaction experience is rising as artificial intelligence develops rapidly. An important trend in this application is the improvement of the friendliness, harmony, and simplicity of human–machine communication. Electroencephalogram (EEG) signal-driven emotion identification has recently gained popularity in the area of human–computer interaction (HCI) because EEG signals are simple to extract, difficult to conceal, and reflect differences in real time. The corresponding research is ultimately aimed at imbuing computers with feelings to enable fully harmonious and organic human–computer connections. This study applies three-dimensional convolutional neural networks (3DCNNs) and attention mechanisms to an HCI environment and offers a dual-attention 3D convolutional neural network (DA-3DCNN) model from the standpoint of spatio-temporal convolution. To extract more representative spatio-temporal characteristics, the new model first thoroughly mines the spatio-temporal distribution information of EEG signals using a 3DCNN, taking into account the temporal fluctuation of EEG data. Meanwhile, a dual-attention technique based on EEG channels is utilized to strengthen or weaken feature information and capture the links between various brain regions and emotional activities, highlighting the variations in the spatio-temporal aspects of various emotions. Finally, three sets of experiments (cross-subject emotion classification, channel selection, and ablation) were conducted on the Database for Emotion Analysis using Physiological Signals (DEAP) dataset to show the validity and viability of the DA-3DCNN model for HCI emotion recognition applications. The outcomes show that the new model can significantly increase accuracy in recognizing emotions, acquire the spatial relationships of channels, and more thoroughly extract dynamic information from EEG.

  • Article (Open Access)

    DOMAIN-ADAPTIVE TSK FUZZY SYSTEM BASED ON MULTISOURCE DATA FUSION FOR EPILEPTIC EEG SIGNAL CLASSIFICATION

    In recent years, machine learning methods based on epileptic signals have shown good results with brain–computer interfaces (BCIs). With the continuous expansion of their applications, the demand for labeled epileptic signals is increasing, yet for many data-driven models collecting enough labeled signals is impractical, as it extends the calibration cycle. Therefore, a new domain-adaptive TSK fuzzy system model based on multisource data fusion (DA-TSK) is proposed. The purpose of DA-TSK is to maintain high classification performance when the amount of labeled data is insufficient. The DA-TSK model not only has a strong ability to learn characteristic information from EEG data but is also interpretable, which aids in understanding the model's analytic process for medical purposes. In particular, the model can make full use of a small amount of labeled EEG data in the source and target domains through domain adaptation. Therefore, the DA-TSK model can reduce data dependence to a certain extent and improve the generalization performance of the target classifier. Experiments on public epileptic EEG datasets evaluate the effectiveness of the DA-TSK model, which obtains satisfactory accuracy when the labeled data in the target domain are insufficient.
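A TSK fuzzy system combines per-rule firing strengths, computed from fuzzy membership functions over the inputs, with rule consequents. A minimal zero-order TSK inference sketch; Gaussian memberships and constant consequents are common choices assumed here for illustration, not details taken from the DA-TSK paper:

```python
import math

def gaussian_mf(x, center, sigma):
    """Gaussian membership degree of x in a fuzzy set."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def tsk_predict(x, rules):
    """Zero-order TSK inference: firing-strength-weighted average of outputs.

    rules: list of (centers, sigmas, consequent), one (center, sigma) pair
    per input dimension and a scalar consequent per rule.
    """
    strengths, outputs = [], []
    for centers, sigmas, consequent in rules:
        w = 1.0
        for xi, c, s in zip(x, centers, sigmas):
            w *= gaussian_mf(xi, c, s)   # product t-norm over antecedents
        strengths.append(w)
        outputs.append(consequent)
    total = sum(strengths)
    return sum(w * o for w, o in zip(strengths, outputs)) / total
```

The rule structure is what makes the model interpretable: each rule reads as "if the features are near these centers, the output is this value", and domain adaptation in DA-TSK adjusts such parameters across source and target domains.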

  • Article (Open Access)

    A CONSUMER SENTIMENT ANALYSIS METHOD BASED ON EEG SIGNALS AND A RESNEXT MODEL

    E-commerce is becoming increasingly dependent on technologies such as consumer sentiment research. Its purpose is to recognize and comprehend the feelings and dispositions of customers by analyzing customer language and behavior as expressed in social media, online reviews, and other forms of digital communication. The proliferation of digital technology has increased the number and variety of channels through which customers can communicate their feelings. A comprehensive understanding of consumer sentiment can be of great use to businesses, enabling them to better satisfy customer demands, enhance their products and services, improve their brand reputation, and ultimately increase their competitiveness. As a result, consumer sentiment research has evolved into an essential tool for decision-making in e-commerce and for the management of customer relationships. Within this scope, this study uses deep learning models to improve the precision of consumer sentiment research. The primary contributions of this paper are as follows. (1) It advances the use of EEG signals as the basis for a method of analyzing customer feelings; this technique measures brain activity directly, avoiding the restrictions and ambiguities of relying on verbal expression. (2) It improves the overall performance of the sentiment analysis model by incorporating an attention mechanism into the ResNeXt model, augmenting the model's capacity to extract subtle characteristics. (3) Experimental results show that the described strategy is effective in improving EEG-based sentiment analysis performance. Compared to standard text-based sentiment analysis approaches, this sentiment analysis model demonstrates greater objectivity, real-time capability, and multidimensionality when applied to consumer sentiment analysis in e-commerce.

  • Article (Open Access)

    SSDNet: A SEMISUPERVISED DEEP GENERATIVE ADVERSARIAL NETWORK FOR ELECTROENCEPHALOGRAM-BASED EMOTION RECOGNITION

    This research introduces a novel method for emotion recognition using Electroencephalography (EEG) signals, leveraging advancements in emotion computing and EEG signal processing. The proposed method utilizes a semisupervised deep convolutional generative adversarial network (SSDNet) as the central model. The model fully integrates the feature extraction methods of the generative adversarial network (GAN), deep convolutional GAN (DCGAN), spectrally normalized GAN (SSGAN) and the encoder, maximizing the advantages of each method to construct a more accurate emotion classification model. In our study, we introduce a flow-form-consistent merging pattern, which successfully addresses mismatches between the data by fusing the EEG data with the features. Implementing this merging pattern not only enhances the uniformity of the input but also decreases the computational load on the network, resulting in a more efficient model. By conducting experiments on the DEAP and SEED datasets, we evaluate the SSDNet model proposed in this paper in detail. The experimental results show that the accuracy of the proposed algorithm is improved by 6.4% and 8.3% on the DEAP and SEED datasets, respectively, compared to the traditional GAN. This significant improvement in performance validates the effectiveness and feasibility of the SSDNet model. The research contributions of this paper are threefold. First, we propose and implement an SSDNet model that integrates multiple feature extraction methods, providing a more accurate and comprehensive solution for emotion recognition tasks. Second, by introducing a flow-form-consistent merging pattern, we successfully address the problem of interdata mismatches and improve the generalization performance of the model. 
Finally, we experimentally demonstrate that the method in this paper achieves a significant improvement in accuracy over the traditional GANs on the DEAP and SEED datasets, providing an innovative solution in the field of EEG-based emotion recognition.

  • Article (Open Access)

    MAC: EPILEPSY EEG SIGNAL RECOGNITION BASED ON THE MLP-SELF-ATTENTION MODEL AND COSINE DISTANCE

    In current epilepsy research, accurate identification of epilepsy electroencephalogram (EEG) signals is crucial for improving diagnostic efficiency and developing personalized treatment plans. This study proposes an innovative epilepsy recognition model, MAC, which combines the unique advantages of a multilayer perceptron (MLP), a self-attention mechanism and the cosine distance. The model uses an MLP as the basic model and effectively reduces individual differences among epilepsy patients through its superior linear fitting ability. To more accurately measure the difference between two EEG signals, we introduced the cosine distance as a new feature metric. This metric enhances the performance of epilepsy EEG classification by using the cosine of the angle between two vectors in feature space to precisely assess the difference between two signals. In addition, we introduced a self-attention mechanism into the model to enhance the impact of various factors on the final EEG data. Our experiments employed the EEG database of the Epilepsy Research Center of the University of Bonn. Comparative experiments showed that the proposed MAC model achieved significant improvement in performance on the epilepsy EEG signal recognition task. This study fills an existing research gap in the field of epilepsy identification and provides a powerful tool for the accurate diagnosis of epilepsy in the future. We believe that the introduction of the MAC model will promote new breakthroughs in epilepsy EEG signal recognition and lay a solid foundation for the development of related fields. This research provides an important theoretical and practical reference for advancing the field of epilepsy identification.
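The cosine distance used as the feature metric is standard: one minus the cosine of the angle between two feature vectors. A direct sketch:

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity between two EEG feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)
```

Because it depends only on the angle, not the magnitude, two feature vectors that differ only by a scale factor have distance 0, while orthogonal vectors have distance 1; this scale invariance is one reason it helps compare signals across patients.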

  • Article (Open Access)

    AN EEG-BASED EMOTION RECOGNITION MODEL USING AN INTERACTION DESIGN FRAMEWORK AND DEEP LEARNING

    This research makes significant advances in the field of emotion recognition by presenting a new generative adversarial network (GAN) model that integrates deep learning with electroencephalography (EEG). To achieve more accurate data production and real data matching, the model utilizes self-attention and residual neural networks. Additionally, this process is accomplished by substituting an autoencoder for the discriminator in the GAN, and incorporating a reconstruction loss function. We include the self-attention mechanism and residual block in the building of the model to overcome the vanishing gradient problem. This allows the model to acquire information related to emotions in a more in-depth manner, which ultimately results in an improvement in the emotion detection accuracy. The DEAP and MAHNOB-HCI datasets are chosen for the experimental validation portion of this research. These datasets are subsequently compared and analyzed with traditional deep learning methods and well-known emotion identification algorithms. Based on these findings, it is evident that the model that we propose performs exceptionally well on the emotion recognition test, which offers substantial support for studies and applications in this field. In addition, within the context of emotion detection systems, this study places particular emphasis on the crucial role that interaction design frameworks play in enhancing both the user experience and the usability of the system. By pushing the emotion recognition technology boundaries, a new paradigm for the application of deep learning in EEG emotion recognition is provided with this comprehensive research contribution.

  • Article (Open Access)

    M4EEG: MATCHING NETWORK-BASED MENTAL HEALTH STATUS ASSESSMENT MODEL USING EEG SIGNALS

    Mental health is critical to an individual’s life and social functioning and affects emotions, cognition and behavior. Mental health status assessments can help individuals understand their own psychological status, identify potential problems in real time and implement effective interventions to promote favorable mental health. In this study, a deep learning approach was used to construct a simple and flexible model, M4EEG, for electroencephalogram (EEG)-based mental health status assessment. The model is suitable not only for supervised learning tasks containing a large amount of labeled data but also for few-shot classification tasks in special cases. During execution, certain components of a pretrained transformer model are utilized as the model’s foundation. After deriving feature values from different inputs, these features are decoupled by cross-connecting them into the relation module. Finally, the correlation between the outputs and the classification results is determined by a relation score. In experiments, the Database for Emotion Analysis using Physiological Signals (DEAP) and Affective Mood and Interpersonal Goals in the School Environment (AMIGOS) datasets were partitioned into K-shot files as the input information, and the classification results were derived from the M4EEG model. The results showed that the M4EEG model is capable of assessing mental health status through EEG and can obtain results that existing models cannot achieve without comparable data labeling.

  • Article (Open Access)

    AN EMOTION ANALYSIS METHOD THAT INTEGRATES EEG SIGNALS AND MUSIC THERAPY USING A TRANSFORMER MODEL

    The need for sentiment analysis in the mental health field is increasing, and electroencephalogram (EEG) signals and music therapy have attracted extensive attention from researchers as breakthrough ideas. However, the existing methods still face the challenge of integrating temporal and spatial features when combining these two types, especially when considering the volume conduction differences among multichannel EEG signals and the different response speeds of subjects; moreover, the precision and accuracy of emotion analysis have yet to be improved. To solve this problem, we integrate the idea of top-k selection into the classic transformer model and construct a novel top-k sparse transformer model. This model captures emotion-related information in a finer way by selecting k data segments from an EEG signal with distinct signal features. However, this optimization process is not without its challenges, and we need to balance the selected k values to ensure that the important features are preserved while avoiding excessive information loss. Experiments conducted on the DEAP dataset demonstrate that our approach achieves significant improvements over other models. By enhancing the sensitivity of the model to the emotion-related information contained in EEG signals, our method achieves an overall emotion classification accuracy improvement and obtains satisfactory results when classifying different emotion dimensions. This study fills a research gap in the field of sentiment analysis involving EEG signals and music therapy, provides a novel and effective method, and is expected to lead to new ideas regarding the application of deep learning in sentiment analysis.
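Top-k selection keeps the k data segments with the most distinct signal features. As an illustration of that sparsification idea, here is a sketch using variance as a stand-in score; the paper's actual scoring criterion inside the transformer is not specified here:

```python
def segment(signal, seg_len):
    """Split a 1-D signal into non-overlapping segments of length seg_len."""
    return [signal[i:i + seg_len]
            for i in range(0, len(signal) - seg_len + 1, seg_len)]

def top_k_segments(signal, seg_len, k):
    """Keep the k segments with the largest variance (illustrative score)."""
    def variance(s):
        m = sum(s) / len(s)
        return sum((x - m) ** 2 for x in s) / len(s)
    return sorted(segment(signal, seg_len), key=variance, reverse=True)[:k]
```

The balancing problem the abstract mentions shows up directly here: too small a k discards informative segments, while too large a k keeps noise and loses the sparsity benefit.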

  • Article (Open Access)

    AN INTELLIGENT DEPRESSION DETECTION MODEL BASED ON MULTIMODAL FUSION TECHNOLOGY

    Depression is a prevalent mental condition, and it is essential to diagnose and treat patients as soon as possible to maximize their chances of rehabilitation and recovery. This study proposes an intelligent detection model based on multimodal fusion technology to address the difficulties associated with depression detection. Text data and electroencephalogram (EEG) data are used in the model as representatives of subjective and objective signals, respectively, and are processed by a BERT–TextCNN model and a CNN–LSTM model. The BERT–TextCNN model adequately captures the semantic information contained in text data, while the CNN–LSTM model handles time-series data effectively, enabling the model to consider the features specific to each data type. A weighted fusion technique is utilized to combine the information contained in the two modal datasets: the outcome of each modality’s processing is assigned a weight according to its contribution to the ultimate depression detection result. The proposed model demonstrates strong validity and robustness on the depression identification task, as shown by experimental validation on a dataset we constructed ourselves. It provides a viable and intelligent solution for the early identification of depression, is likely to be widely utilized in clinical practice, and offers new ideas and approaches for the growth of precision medicine.
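The weighted fusion step assigns each modality's output a weight reflecting its contribution. A toy sketch of such late fusion over class probabilities; the weight values, two-class layout, and decision threshold are illustrative assumptions, not the paper's tuned settings:

```python
def weighted_fusion(text_probs, eeg_probs, w_text=0.6, w_eeg=0.4):
    """Weighted average of per-class probabilities from the two modalities."""
    assert abs(w_text + w_eeg - 1.0) < 1e-9   # weights must sum to 1
    return [w_text * t + w_eeg * e for t, e in zip(text_probs, eeg_probs)]

def decide(probs, threshold=0.5):
    """'depressed' if the positive-class score clears the threshold."""
    return "depressed" if probs[1] >= threshold else "not depressed"
```

In the paper's setting, the text branch (BERT–TextCNN) and EEG branch (CNN–LSTM) would each emit such a probability vector, and the weights would be chosen from each modality's measured contribution.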

  • Article (Open Access)

    A NOVEL SLEEP STAGE CLASSIFICATION METHOD FOR ELDERLY PEOPLE USING THE SLEEP EEG SIGNAL AND SENET MODELS

    First, research on sleep staging in older individuals is highly valuable because it provides insight into the changes in sleep quality and structure that occur with age and their correlations with cognitive functioning, the immune system, and mental health, among other factors. Second, identifying the features of older persons in various stages of sleep could lead to more specific guidance for managing and treating sleep on a personalized basis, significantly improving quality of life. Finally, in-depth research on sleep in older persons can assist in the development of preventative and intervention measures, helping to reduce the negative consequences that age-related sleep issues have on general health. This research focuses on categorizing sleep EEG data and presents an improved squeeze-and-excitation network (SENet) for the classification process. Electroencephalogram (EEG) features were extracted using the continuous wavelet transform, which can more accurately depict the changes and characteristics of sleep stages. Subsequently, the lightweight context transform (LCT), a combination of normalization, linear transformation, and SENet, is applied to achieve sleep EEG classification. Incorporating the LCT reduces the computational cost of the model and makes it more applicable to real-world situations. Experiments conducted on two public datasets, Sleep-EDF-20 and Sleep-EDF-78, demonstrated that the strategy presented in this study achieves the requisite classification performance and that the model converges faster.
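The squeeze-and-excitation core recalibrates channels by pooling each one to a scalar ("squeeze"), passing the pooled vector through a small bottleneck ("excitation"), and rescaling the channels by the resulting gates. A dependency-free sketch of that core; the toy bias-free dense layers are our simplification, and this is the plain SE block rather than the paper's LCT variant:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def squeeze_excite(feature_maps, w1, w2):
    """SE core: global-average squeeze, two-layer excitation, channel rescale.

    feature_maps: C x N per-channel feature vectors.
    w1: reduced x C weight rows (bottleneck layer, ReLU).
    w2: C x reduced weight rows (expansion layer, sigmoid gates).
    """
    squeezed = [sum(ch) / len(ch) for ch in feature_maps]           # squeeze
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed)))   # ReLU
              for row in w1]
    gates = [sigmoid(sum(w * h for w, h in zip(row, hidden)))       # excite
             for row in w2]
    return [[g * x for x in ch] for g, ch in zip(gates, feature_maps)]
```

The bottleneck (reduced < C) is what keeps the block lightweight, which is the property the LCT pushes further.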

  • articleOpen Access

    A STUDENT SENTIMENT ANALYSIS METHOD BASED ON MULTIMODAL DEEP LEARNING

    In this study, we propose a multimodal affective analysis approach based on electroencephalography (EEG) and video data to examine the affective states of university students. EEG signals and video data were collected from 50 college students in various emotional states and processed in detail: the EEG signals were pre-processed to extract multi-view features, while the video data underwent frame extraction, face detection, and convolutional neural network (CNN) operations for feature extraction. A feature-splicing strategy merges the EEG and video features into a time-series input, realizing the fusion of multimodal features. We then developed and trained a model for emotional state classification based on a long short-term memory network (LSTM). Experiments were carried out with cross-validation, dividing the dataset into training and test sets, and performance was evaluated with four metrics: accuracy, precision, recall, and F1-score. Compared with single-modal sentiment analysis, the multimodal approach combining EEG and video shows considerable advantages in sentiment detection, achieving significantly higher accuracy. The study also investigates the respective contributions of the EEG and video features to emotion detection, finding that they complement each other across a variety of emotional states and can improve the overall recognition results. The LSTM-based multimodal sentiment analysis method thus offers high accuracy and robustness in recognizing the affective states of college students, which is especially important for enhancing the quality of education and supporting mental health.
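The feature-splicing step that fuses the two modalities can be sketched as a per-timestep concatenation. The window count and feature dimensions below are hypothetical, chosen only to show the shape arithmetic of the fused sequence fed to the LSTM.

```python
import numpy as np

def splice_features(eeg_feats, video_feats):
    """Concatenate per-timestep EEG and video feature vectors.

    eeg_feats:   (T, d_eeg) array, e.g. multi-view EEG features per window
    video_feats: (T, d_vid) array, e.g. CNN face embeddings per frame window
    returns:     (T, d_eeg + d_vid) fused sequence for the LSTM
    """
    assert eeg_feats.shape[0] == video_feats.shape[0], "time axes must align"
    return np.concatenate([eeg_feats, video_feats], axis=1)

eeg = np.zeros((120, 32))     # 120 time windows, 32 EEG features each
vid = np.zeros((120, 128))    # 120 windows, 128-dim video embeddings
fused = splice_features(eeg, vid)   # shape (120, 160)
```

The main practical requirement is that both modalities are segmented on the same time grid before splicing, so each fused vector describes one window.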

  • articleOpen Access

    A CORRELATION ANALYSIS METHOD BETWEEN STUDENTS’ DIGITAL LITERACY AND MENTAL HEALTH

    This study explores the correlation between college students’ digital literacy and mental health, proposing a method that combines Twin Support Vector Machine (TWSVM) classification with chi-square correlation analysis. First, digital literacy data were collected from a group of college students through a questionnaire covering digital skills, information literacy, technology application, and related aspects, comprehensively evaluating each student’s digital literacy level. The collected data were classified with a TWSVM to obtain digital literacy assessment results. Next, electroencephalogram (EEG) signals were recorded from the same group; power spectral density (PSD) features were extracted and used to train a TWSVM classification model that yields a mental health identification result for each student. Finally, a chi-square test was applied to the digital literacy assessments and mental health identifications to evaluate the association between the two. The analysis showed that students with higher digital literacy were more likely to have good mental health, whereas students with lower digital literacy were more prone to mental health problems. This study thus reveals a significant association between college students’ digital literacy and mental health, providing theoretical support and practical guidance for educators and mental health professionals. Improving students’ digital literacy not only supports their academic and career development but may also have a positive impact on their mental health, thereby promoting their overall development.
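The chi-square association step can be sketched on a toy contingency table. The counts below are invented for illustration, not data from the study; the function computes the standard Pearson chi-square statistic for a 2×2 table without continuity correction.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = digital literacy (high, low),
# columns = mental health status (good, at-risk).
stat = chi_square_2x2([[45, 10], [20, 25]])
CRITICAL_05 = 3.841   # chi-square critical value, dof = 1, alpha = 0.05
significant = stat > CRITICAL_05
```

A statistic above the critical value rejects independence between literacy level and mental health status at the 5% level; note that chi-square indicates association between the categorical outcomes, not the strength or direction of a linear relationship.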

  • articleOpen Access

    Time-Frequency-Domain Copula-Based Granger Causality and Application to Corticomuscular Coupling in Stroke

    The characterization of corticomuscular coupling (CMC) between the motor cortex and muscles during motion control is a valid biomarker of motor system function after stroke and can improve clinical decision-making. However, traditional CMC analysis is mainly based on the coherence method, which cannot determine the coupling direction, while Granger causality (GC) is limited to identifying linear cause–effect relationships. In this paper, a time-frequency-domain copula-based GC (copula-GC) method is proposed to assess CMC characteristics. Thirty-two-channel electroencephalogram (EEG) signals over the scalp and electromyography (EMG) signals from the upper limb were recorded while five stroke patients and five healthy controls controlled and maintained steady-state force output. Time-frequency copula-GC analysis was then applied to evaluate the CMC strength in both directions. Experimental results show that, in the time domain, the CMC strength in the descending direction is greater than that in the ascending direction for healthy controls. The bi-directional CMC strength increases with grip strength and is larger for the right hand than for the left. In addition, the bi-directional CMC strength of stroke patients is lower than that of healthy controls. In the frequency domain, the strongest CMC is observed in the beta band; the descending CMC strength is slightly larger than the ascending strength in healthy controls, whereas it is lower than the ascending strength in stroke patients. We suggest that the proposed time-frequency-domain copula-GC analysis can effectively detect complex functional coupling between cortical oscillations and muscle activities, providing a potential quantitative measure for motion control and rehabilitation evaluation.
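For orientation, the directional logic that copula-GC builds on can be sketched with classical linear Granger causality: x Granger-causes y if adding x’s past to an autoregressive model of y shrinks the prediction error. This is the plain linear version on synthetic signals, not the paper’s copula-based estimator, and the AR coefficients and lag order are illustrative assumptions.

```python
import numpy as np

def granger_causality(x, y, lag=2):
    """Linear GC from x to y: log ratio of restricted vs full residual variance.

    A value near 0 means x's past adds nothing to predicting y;
    larger values mean a stronger directed influence.
    """
    T = len(y)
    Y = y[lag:]
    # restricted model: y predicted from its own past only
    Xr = np.column_stack([y[lag - k: T - k] for k in range(1, lag + 1)])
    # full model: also include the past of x
    Xf = np.column_stack([Xr] + [x[lag - k: T - k] for k in range(1, lag + 1)])
    res_r = Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]
    res_f = Y - Xf @ np.linalg.lstsq(Xf, Y, rcond=None)[0]
    return float(np.log(res_r.var() / res_f.var()))

# Synthetic pair: y is driven by lagged x, but not vice versa.
rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

gc_xy = granger_causality(x, y)   # expected clearly positive
gc_yx = granger_causality(y, x)   # expected near zero
```

The copula-GC method in the paper replaces the linear regression with copula-based dependence modeling, so it can also capture the nonlinear couplings that this linear sketch would miss.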