
  • Article (No Access)

    EEG SIGNAL COMPRESSION USING RADIAL BASIS NEURAL NETWORKS

    This paper describes a two-stage lossless compression scheme for electroencephalographic (EEG) signals using radial basis neural network predictors. Two variants of the radial basis network, namely the radial basis function network and the generalized regression neural network, are used in the first stage, and their performances are evaluated in terms of compression ratio. The networks are trained using two schemes: a single-block scheme and a block-adaptive scheme. The compression ratios achieved by these networks, when used together with arithmetic encoders in the two-stage scheme, are obtained for different EEG test files. The generalized regression neural network is found to perform better than other neural network models, such as multilayer perceptrons and the Elman network, and than a linear FIR predictor.
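
    A minimal, hedged sketch of the two-stage idea described above (illustrative only, not the authors' code): a generalized-regression-style kernel predictor estimates each sample from the previous p samples, and the entropy of the integer residuals stands in for the second-stage arithmetic coder when estimating an achievable compression ratio. The window length, kernel bandwidth, and synthetic signal are assumptions.

```python
import numpy as np

def grnn_predict(train_X, train_y, query, sigma=0.5):
    """Nadaraya-Watson kernel regression, the core of a GRNN-style predictor."""
    d2 = np.sum((train_X - query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    if w.sum() < 1e-12:          # query far from all training vectors
        return float(train_y.mean())
    return float(np.dot(w, train_y) / w.sum())

def residual_entropy_bits(residuals):
    """Empirical entropy (bits/sample) of integer residuals -- a proxy for
    what an arithmetic coder could achieve in stage two."""
    _, counts = np.unique(residuals, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Illustrative 8-bit "EEG" signal (assumption; real files would be loaded here)
rng = np.random.default_rng(0)
x = np.cumsum(rng.integers(-3, 4, size=2000)).clip(-128, 127).astype(int)

p = 4                                # predictor order (assumed)
X = np.array([x[i - p:i] for i in range(p, len(x))], dtype=float)
y = x[p:].astype(float)

# "Single block" style training: fit on the first half, predict the rest
half = len(X) // 2
preds = np.array([grnn_predict(X[:half], y[:half], q) for q in X[half:]])
res = np.round(y[half:] - preds).astype(int)

bits = residual_entropy_bits(res)
print(f"residual entropy: {bits:.2f} bits/sample "
      f"(vs. 8 bits raw -> approx. CR {8.0 / bits:.2f})")
```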

  • Article (No Access)

    ANALYSIS AND VISUALIZATION OF HUMAN ELECTROENCEPHALOGRAMS SEEN AS FRACTAL TIME SERIES

    The paper presents a novel technique of nonlinear spectral analysis. The technique is based on the concept of a generalized entropy of a probability distribution, known as the Rényi entropy. This concept allows the generalized fractal dimension of the electroencephalogram (EEG) to be defined and the fractal spectra of encephalographic signals to be determined. These spectra contain information on both the frequency and amplitude characteristics of the EEG and can be used together with well-accepted techniques of EEG analysis as an enhancement of the latter. Combined with volume visualization of brain activity, the method provides new clues for understanding mental processes in humans.
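
    For reference, the standard definitions behind this approach (stated here for orientation, not quoted from the paper): the Rényi entropy of order q of the box probabilities \(p_i(\varepsilon)\) obtained at box size \(\varepsilon\), and the generalized (fractal) dimension derived from it, are

\[
R_q(\varepsilon) = \frac{1}{1-q}\,\ln \sum_i p_i^{\,q}(\varepsilon),
\qquad
D_q = \lim_{\varepsilon \to 0} \frac{R_q(\varepsilon)}{\ln(1/\varepsilon)},
\]

    so that \(D_0\), \(D_1\) (the limit \(q \to 1\)), and \(D_2\) reduce to the box-counting, information, and correlation dimensions, respectively.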

  • Article (No Access)

    AUTOMATIC IDENTIFICATION OF EPILEPTIC EEG SIGNALS USING NONLINEAR PARAMETERS

    Epilepsy is a brain disorder that causes people to have recurring seizures. The electroencephalogram (EEG) records the electrical activity of the brain and can be used to diagnose epilepsy. The EEG signal is highly nonlinear and nonstationary in nature and may contain indicators of current disease or warnings about impending disease. Chaotic measures such as the correlation dimension (CD), Hurst exponent (H), and approximate entropy (ApEn) can be used to characterize the signal. The extracted features can be used for automatic detection of seizure onsets, which would help patients take appropriate precautions. These nonlinear features have been reported to be a promising approach for differentiating among normal, pre-ictal (background), and epileptic EEG signals. In this work, these features were used to train both Gaussian mixture model (GMM) and support vector machine (SVM) classifiers. The performance of the two classifiers was evaluated using receiver operating characteristic (ROC) curves. Our results show that the GMM classifier performed better, with an average classification efficiency of 95% and sensitivity and specificity of 92.22% and 100%, respectively.
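
    A hedged sketch of the feature-plus-GMM pipeline described above (illustrative only; the correlation dimension is omitted for brevity, synthetic signals stand in for real EEG, and the simple rescaled-range Hurst estimate is one of several possible estimators):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def approx_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r) with Chebyshev distance (self-matches included)."""
    r = r_factor * np.std(x)
    def phi(mm):
        n = len(x) - mm + 1
        emb = np.array([x[i:i + mm] for i in range(n)])
        counts = [np.mean(np.max(np.abs(emb - emb[i]), axis=1) <= r) for i in range(n)]
        return np.mean(np.log(counts))
    return phi(m) - phi(m + 1)

def hurst_rs(x, windows=(16, 32, 64, 128)):
    """Hurst exponent via a simple rescaled-range (R/S) slope fit."""
    rs = []
    for w in windows:
        ratios = []
        for s in range(0, len(x) - w + 1, w):
            seg = x[s:s + w]
            dev = np.cumsum(seg - seg.mean())
            if seg.std() > 0:
                ratios.append((dev.max() - dev.min()) / seg.std())
        rs.append(np.mean(ratios))
    slope, _ = np.polyfit(np.log(windows), np.log(rs), 1)
    return slope

def features(sig):
    return [approx_entropy(sig), hurst_rs(sig)]

# Synthetic two-class toy data (assumption): "normal" vs. noisier "ictal-like"
rng = np.random.default_rng(1)
normal = [np.sin(0.1 * np.arange(512)) + 0.3 * rng.standard_normal(512) for _ in range(20)]
ictal = [rng.standard_normal(512).cumsum() * 0.1 for _ in range(20)]

X0 = np.array([features(s) for s in normal])
X1 = np.array([features(s) for s in ictal])

# One GMM per class; classify by the larger log-likelihood
g0 = GaussianMixture(n_components=1, random_state=0).fit(X0[:15])
g1 = GaussianMixture(n_components=1, random_state=0).fit(X1[:15])
test = np.vstack([X0[15:], X1[15:]])
labels = np.array([0] * 5 + [1] * 5)
pred = (g1.score_samples(test) > g0.score_samples(test)).astype(int)
print("toy accuracy:", (pred == labels).mean())
```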

  • Article (No Access)

    CLASSIFICATION OF EEG SIGNALS IN NORMAL AND DEPRESSION CONDITIONS BY ANN USING RWE AND SIGNAL ENTROPY

    EEG is useful for the analysis of the functional activity of the brain, and a detailed assessment of this non-stationary waveform can provide crucial parameters indicative of the mental state of patients. The complex nature of EEG signals calls for automated analysis using various signal processing methods. This paper attempts to classify the EEG signals of normal subjects and depression patients using well-established signal processing techniques involving relative wavelet energy (RWE) and an artificial feedforward neural network. High-frequency noise present in the recorded signal is removed using total variation filtering (TVF). Classification of the frequency bands of the EEG signals into appropriate detail levels and an approximation level is carried out using an eight-level multiresolution decomposition based on the discrete wavelet transform (DWT). Parseval's theorem is used to calculate the energy at the different resolution levels. RWE analysis gives information about the distribution of signal energy across the decomposition levels. Both RWE and the feedforward network are used to classify the signals from normal controls and depression patients. The performance of the artificial neural network was evaluated using classification accuracy, and its value of 98.11% indicates great potential for classifying normal and depression signals.
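
    A minimal sketch of the RWE feature extraction followed by a feedforward network (illustrative; PyWavelets' wavedec with a db4 mother wavelet is an assumption, since the abstract does not name the wavelet, and random data stands in for denoised EEG epochs):

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def relative_wavelet_energy(signal, wavelet="db4", level=8):
    """Eight-level DWT; by Parseval's relation the coefficient energies partition
    the signal energy, so normalizing them gives the RWE feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA8, cD8, ..., cD1]
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

# Toy epochs standing in for normal vs. depression EEG (assumption):
# the second class gets extra slow-wave content so the RWE profiles differ.
rng = np.random.default_rng(0)
epochs, labels = [], []
for i in range(40):
    sig = rng.standard_normal(2048)
    if i % 2:
        sig += 2.0 * np.sin(2 * np.pi * np.arange(2048) / 128.0)
    epochs.append(sig)
    labels.append(i % 2)

X = np.array([relative_wavelet_energy(e) for e in epochs])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X[:30], labels[:30])
print("toy accuracy:", clf.score(X[30:], labels[30:]))
```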

  • Article (No Access)

    AUTOMATED IDENTIFICATION OF EPILEPTIC AND ALCOHOLIC EEG SIGNALS USING RECURRENCE QUANTIFICATION ANALYSIS

    Epilepsy is a common neurological disorder characterized by recurrent seizures. Alcoholism causes organic changes in the brain, resulting in seizure attacks similar to epileptic fits. Hence, it is challenging to differentiate whether the cause of the fits is epilepsy or alcoholism, which is important for deciding on treatment in the neurology ward. The focus of this paper is to automatically differentiate epileptic, normal, and alcoholic electroencephalogram (EEG) signals. As EEG signals are non-linear and dynamic in nature, it is difficult to discern the subtle changes in these signals with linear techniques or by the naked eye. Therefore, to analyze the normal (control), epileptic, and alcoholic EEG signals, two non-linear methods, recurrence plots (RPs) and recurrence quantification analysis (RQA), are adopted. Approximately 10 RQA parameters have been used to classify the EEG signals into three distinct classes, i.e., normal, epileptic, and alcoholic. Six classifiers, namely support vector machine (SVM), radial basis probabilistic neural network (RBPNN), decision tree (DT), Gaussian mixture model (GMM), k-nearest neighbor (kNN), and fuzzy Sugeno classifiers, have been developed to accomplish this task. Results show that the GMM classifier outperformed the other classifiers with a classification sensitivity of 99.6%, specificity of 98.3%, and accuracy of 98.6%.
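
    A small illustrative sketch of a recurrence plot and two of the roughly ten RQA parameters mentioned above, recurrence rate (RR) and determinism (DET); the embedding parameters and threshold are assumptions, and this is not the authors' implementation:

```python
import numpy as np

def recurrence_plot(x, dim=3, tau=1, eps=None):
    """Binary recurrence matrix of a time-delay embedded signal."""
    n = len(x) - (dim - 1) * tau
    emb = np.array([x[i:i + dim * tau:tau] for i in range(n)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    if eps is None:
        eps = 0.2 * np.std(x) * np.sqrt(dim)   # heuristic threshold (assumption)
    return (dists <= eps).astype(int)

def rqa_measures(rp, lmin=2):
    """Recurrence rate (RR) and determinism (DET), excluding the line of identity."""
    n = rp.shape[0]
    rr = rp[~np.eye(n, dtype=bool)].mean()
    diag_points = total_points = 0
    for k in range(-(n - 1), n):
        if k == 0:
            continue
        d = np.diagonal(rp, offset=k)
        total_points += d.sum()
        run = 0
        for v in list(d) + [0]:                # count points on diagonal runs >= lmin
            if v:
                run += 1
            else:
                if run >= lmin:
                    diag_points += run
                run = 0
    det = diag_points / total_points if total_points else 0.0
    return rr, det

rng = np.random.default_rng(0)
sig = np.sin(0.2 * np.arange(400)) + 0.1 * rng.standard_normal(400)
rr, det = rqa_measures(recurrence_plot(sig))
print(f"RR = {rr:.3f}, DET = {det:.3f}")
```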

  • Article (No Access)

    THE CREATIVE INVESTIGATION OF BRAIN ACTIVITY WITH EEG FOR GENDER AND LEFT/RIGHT-HANDED DIFFERENCES

    This paper studied gender and left/right-handedness differences from a neuroscience perspective through task-related alpha power changes during the generation of creative ideas. Investigating these differences helps in understanding the specific neural processes of different genders and of left- and right-handed groups. We used the B-Alert X10® electroencephalography (EEG) system, computed for the left and right hemispheres, to determine whether EEG metrics differentiated between the gender and left/right-handed groups. This study assessed EEG power spectral density (PSD) while 17 healthy participants worked on the alternative uses (AU) task. The results showed that (1) the creativity level has no relation to gender; there is no obvious difference between males and females in the process of creative idea generation. (2) The creativity level is highly related to the cultivation of innovative ability; there are clearly higher alpha power changes in the posterior regions of the right hemisphere than in the left hemisphere for the highly original group, and a stronger task-related alpha synchronization appears in the right hemisphere than in the left for the low-originality group. (3) There is comparatively lower alpha power in the parietal region of the left hemisphere than of the right for left-handed participants, and higher alpha power in the frontal region for left-handed and in the parietal region for right-handed participants. The comparison among different genders and left/right-handed participants can help us understand more about how creative thinking is manifested in the human brain.
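
    As a hedged illustration of the task-related alpha power measure referred to above, the sketch below estimates alpha-band power with a Welch PSD and reports the percent change from a rest interval to a task interval; the sampling rate, band limits, and toy signals are assumptions:

```python
import numpy as np
from scipy.signal import welch

def alpha_power(sig, fs=256.0, band=(8.0, 12.0)):
    """Alpha-band power integrated from a Welch PSD estimate."""
    f, pxx = welch(sig, fs=fs, nperseg=int(2 * fs))
    sel = (f >= band[0]) & (f <= band[1])
    return pxx[sel].sum() * (f[1] - f[0])

def task_related_power_change(rest, task, fs=256.0):
    """Percent change in alpha power from a reference (rest) interval to the
    task interval -- the usual definition of task-related (de)synchronization."""
    p_rest, p_task = alpha_power(rest, fs), alpha_power(task, fs)
    return 100.0 * (p_task - p_rest) / p_rest

# Toy single-channel signals (assumption; real data would come from the headset)
rng = np.random.default_rng(0)
fs = 256.0
t = np.arange(0, 4.0, 1.0 / fs)
rest = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
task = 0.7 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(f"task-related alpha change: {task_related_power_change(rest, task, fs):.1f}%")
```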

  • Article (No Access)

    A NOVEL APPROACH TO DETECT EPILEPTIC SEIZURES USING A COMBINATION OF TUNABLE-Q WAVELET TRANSFORM AND FRACTAL DIMENSION

    The detection and quantification of seizures can be achieved through the analysis of nonstationary electroencephalogram (EEG) signals. The detection of these intractable seizures in human beings is a challenging and difficult task. Analysis of the EEG by human inspection is prone to errors and may lead to false conclusions. Computer-aided systems have been developed to assist neurophysiologists in identifying seizure activity accurately. We propose a new machine learning and signal processing-based automated system that can detect epileptic episodes accurately. The proposed algorithm employs a promising time-frequency tool called the tunable-Q wavelet transform (TQWT) to decompose EEG signals into various sub-bands (SBs). The fractal dimensions (FDs) of the SBs are used as the discriminating features. The TQWT has many attractive features, such as a tunable oscillatory attribute and a time-invariance property, which are favorable for the analysis of nonstationary and transient signals. The fractal dimension is a nonlinear chaotic trait that has proven very useful in the analysis and classification of nonstationary signals, including EEG. First, we decompose the EEG signals into the desired SBs. Then, we compute the FD of each SB. These FDs are applied to a least-squares support vector machine (LS-SVM) classifier with a radial basis function (RBF) kernel. We use 10-fold cross-validation to ensure reliable performance and avoid possible over-fitting of the model. In the proposed study, we investigate the following four popular classification tasks (CTs) involving different classes of EEG signals: (i) normal versus seizure, (ii) seizure-free versus seizure, (iii) nonseizure versus seizure, and (iv) normal versus seizure-free. The proposed model surpassed existing models in the area under the receiver operating characteristic (ROC) curve, the Matthews correlation coefficient (MCC), the average classification accuracy (ACA), and the average classification sensitivity (ACS). The proposed system attained a perfect 100% ACS for all CTs considered in this study, and it achieved the highest classification accuracy as well as the largest area under the ROC curve (AUC) for all classes. A salient feature of our proposed model is that, although many models in the literature report high ACA, their performance has not been evaluated using MCC and AUC along with ACA simultaneously; evaluating performance in terms of ACA alone may be misleading. Hence, the performance of the proposed model has been assessed not only in terms of ACA but also in terms of AUC and MCC. Moreover, the performance of the model is found to be almost equivalent to that of a perfect model and surpasses existing models for the CTs investigated here. Therefore, the proposed model is expected to assist clinicians in analyzing seizures accurately in less time and without error.
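
    A hedged sketch of the sub-band fractal-dimension pipeline (illustrative only): an ordinary DWT stands in for the TQWT, the Katz estimator stands in for the unspecified fractal dimension, and scikit-learn's RBF-kernel SVC stands in for the LS-SVM, evaluated with 10-fold cross-validation as in the abstract:

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def katz_fd(x):
    """Katz fractal dimension -- one common FD estimator (the abstract does not
    say which FD the authors used, so this choice is an assumption)."""
    steps = np.abs(np.diff(x))
    L, a = steps.sum(), steps.mean()
    d = np.max(np.abs(x - x[0]))
    n = L / a
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

def subband_fd_features(signal, wavelet="db4", level=4):
    """FD of each sub-band. NOTE: an ordinary DWT stands in here for the TQWT,
    which is not part of standard scientific-Python distributions."""
    return np.array([katz_fd(c) for c in pywt.wavedec(signal, wavelet, level=level)])

# Toy "seizure-free" vs. "seizure" epochs (assumption)
rng = np.random.default_rng(0)
normal = [0.5 * rng.standard_normal(1024) for _ in range(30)]
seizure = [np.sin(0.3 * np.arange(1024)) + 0.5 * rng.standard_normal(1024) for _ in range(30)]

X = np.array([subband_fd_features(s) for s in normal + seizure])
y = np.array([0] * 30 + [1] * 30)

# RBF-kernel SVM as a stand-in for the LS-SVM; 10-fold cross-validation
clf = SVC(kernel="rbf", gamma="scale", C=1.0)
print("10-fold CV accuracy:", cross_val_score(clf, X, y, cv=10).mean())
```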

  • Article (Open Access)

    EFFECTS OF FUNCTIONAL GAMES USING NEUROFEEDBACK ON COGNITIVE FUNCTIONS AND ELECTROENCEPHALOGRAPHY FOR PEOPLE WITH DEVELOPMENTAL DISABILITIES

    The purpose of this study was to investigate the effects of functional games using neurofeedback on the cognitive function and brain-wave changes of people with developmental disabilities. Toward this goal, the MiND RACER program, developed by Minders Co., Ltd. in 2018, was used with 5 people with developmental disabilities enrolled in the continuing education course at D University in Gyeongsangbuk-do, Korea. The program was carried out once a week from October 13 to December 15, 2020; pre- and post-tests were conducted one week before and one week after the program. Electroencephalography measurements were performed three times, as pre-, intermediate, and post-tests. As a result, the functional game using neurofeedback produced a remarkable change in the cognitive function of the people with developmental disabilities and significant changes in their α waves.

  • Article (Open Access)

    EEG SIGNAL-DRIVEN HUMAN–COMPUTER INTERACTION EMOTION RECOGNITION MODEL USING AN ATTENTIONAL NEURAL NETWORK ALGORITHM

    As artificial intelligence develops rapidly, the bar for the human–machine interaction experience continues to rise. An important trend in this application is the improvement of the friendliness, harmony, and simplicity of human–machine communication. Electroencephalogram (EEG) signal-driven emotion identification has recently gained popularity in the area of human–computer interaction (HCI) because of its advantages: it is simple to extract, difficult to conceal, and reflects differences in real time. The corresponding research is ultimately aimed at imbuing computers with feelings to enable fully harmonious and organic human–computer connections. This study applies three-dimensional convolutional neural networks (3DCNNs) and attention mechanisms to an HCI environment and offers a dual-attention 3D convolutional neural network (DA-3DCNN) model from the standpoint of spatio-temporal convolution. With the purpose of extracting more representative spatio-temporal characteristics, the new model first thoroughly mines the spatio-temporal distribution information of EEG signals using 3DCNN, taking into account the temporal fluctuation of EEG data. At the same time, a dual-attention technique based on EEG channels is used to strengthen or weaken feature information and capture the links between various brain regions and emotional activities, highlighting the variations in the spatio-temporal aspects of different emotions. Finally, three sets of experiments (cross-subject emotion classification, channel selection, and ablation) were conducted on the Database for Emotion Analysis using Physiological Signals (DEAP) dataset to show the validity and viability of the DA-3DCNN model for HCI emotion recognition applications. The outcomes show that the new model can significantly increase emotion recognition accuracy, capture the spatial relationships among channels, and more thoroughly extract dynamic information from the EEG.
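
    One plausible, hedged reading of a 3DCNN with channel-based attention is sketched below in PyTorch (a squeeze-and-excitation style gate over feature channels); the layer sizes and the 9x9 electrode-grid input are illustrative assumptions, not the DA-3DCNN architecture itself:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate over feature channels (an assumed
    interpretation of 'attention based on EEG channels', not the paper's code)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                      # x: (batch, C, T, H, W)
        w = self.fc(x.mean(dim=(2, 3, 4)))     # global average pool -> channel gate
        return x * w[:, :, None, None, None]

class Tiny3DCNN(nn.Module):
    """Minimal 3D-CNN-plus-attention emotion classifier sketch."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            ChannelAttention(8),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            ChannelAttention(16),
            nn.AdaptiveAvgPool3d(1))
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.head(self.conv(x).flatten(1))

# Toy input: 4 samples, 1 feature map, 32 time steps, 9x9 electrode grid (assumption)
x = torch.randn(4, 1, 32, 9, 9)
print(Tiny3DCNN()(x).shape)    # torch.Size([4, 2])
```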

  • Article (Open Access)

    DOMAIN-ADAPTIVE TSK FUZZY SYSTEM BASED ON MULTISOURCE DATA FUSION FOR EPILEPTIC EEG SIGNAL CLASSIFICATION

    In recent years, machine learning methods based on epileptic signals have shown good results in brain–computer interfaces (BCIs). As their applications continue to expand, the demand for labeled epileptic signals is increasing; however, for many data-driven models, acquiring enough labeled signals is difficult, as it extends the calibration cycle. Therefore, a new domain-adaptive TSK fuzzy system based on multisource data fusion (DA-TSK) is proposed. The purpose of DA-TSK is to maintain high classification performance when the amount of labeled data is insufficient. The DA-TSK model not only has a strong ability to learn characteristic information from EEG data but is also interpretable, which aids understanding of the model's analytic process for medical purposes. In particular, the model can make full use of a small amount of labeled EEG data in the source and target domains through domain adaptation. Therefore, the DA-TSK model can reduce data dependence to a certain extent and improve the generalization performance of the target classifier. Experiments are performed to evaluate the effectiveness of the DA-TSK model on public EEG datasets of epileptic signals. The DA-TSK model obtains satisfactory accuracy when the labeled data in the target domain are insufficient.
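
    To make the TSK idea concrete, here is a minimal zero-order TSK fuzzy system sketch (Gaussian antecedents with centers drawn from the data, consequents fitted by least squares); it is illustrative only and omits the multisource domain adaptation that distinguishes DA-TSK:

```python
import numpy as np

class TinyTSK:
    """Zero-order TSK fuzzy system sketch: Gaussian antecedents, least-squares
    consequents. Not the paper's DA-TSK; domain adaptation is not included."""
    def __init__(self, n_rules=5, sigma=1.0, seed=0):
        self.n_rules, self.sigma, self.seed = n_rules, sigma, seed

    def _firing(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2 * self.sigma ** 2))
        return w / (w.sum(axis=1, keepdims=True) + 1e-12)   # normalized firing strengths

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        self.centers = X[rng.choice(len(X), self.n_rules, replace=False)]
        F = self._firing(X)
        self.consequents, *_ = np.linalg.lstsq(F, y, rcond=None)
        return self

    def predict(self, X):
        return (self._firing(X) @ self.consequents > 0.5).astype(int)

# Toy two-class EEG-feature data (assumption)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
model = TinyTSK(n_rules=6, sigma=1.5).fit(X[::2], y[::2])
print("toy accuracy:", (model.predict(X[1::2]) == y[1::2]).mean())
```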

  • Article (Free Access)

    EPILEPTIC EEG SIGNALS RHYTHMS ANALYSIS IN THE DETECTION OF FOCAL AND NON-FOCAL SEIZURES BASED ON OPTIMISED MACHINE LEARNING AND DEEP NEURAL NETWORK ARCHITECTURE

    Objective: Most studies in epileptic seizure detection and classification have focused on classifying different types of epileptic seizures. However, localization of the epileptogenic zone in the epilepsy patient's brain is paramount to assist the doctor in locating a focal region in patients screened for surgery. Therefore, this paper proposes robust models for the localization of epileptogenic areas to support the success of epilepsy surgery. Method: Effective feature extraction techniques were proposed based on electroencephalogram (EEG) rhythms extracted with the Fourier–Bessel Series Expansion Multivariate Empirical Wavelet Transform (FBSE-MEWT). The extracted EEG rhythms (δ, θ, α, β, and γ) were used to obtain joint instantaneous frequency and amplitude components using a sub-band alignment approach. The features are used in a Sparse Autoencoder (SAE), a Deep Belief Network (DBN), and a Support Vector Machine (SVM) with optimized capability to develop three new models: (1) FMEWT–SVM, (2) FMEWT–SAE–SVM, and (3) FMEWT–DBN–SVM. The EEG signal was preprocessed using a proposed Multiscale Principal Component Analysis (mPCA) to remove the noise embedded in the signal. Main results: The developed models show a significant performance improvement, with the SAE–SVM outperforming the other proposed models and some recently reported works in the literature, achieving an accuracy of 99.7% using δ-rhythms in channels 1 and 2. Significance: This study validates the EEG rhythm as a means of discriminating the embedded features in epileptic EEG signals to locate the focal and non-focal regions in the epileptic patient's brain, increasing the success of surgery and reducing computational cost.
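
    A hedged sketch of the rhythm-extraction step: zero-phase Butterworth band-pass filters stand in for the FBSE-MEWT decomposition (which requires a dedicated implementation), and the sampling rate and band edges are common conventions rather than the paper's settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Conventional EEG rhythm bands in Hz (delta, theta, alpha, beta, gamma)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def extract_rhythms(signal, fs=256.0):
    """Split one EEG channel into the five classical rhythms.
    NOTE: Butterworth band-pass filters stand in here for FBSE-MEWT."""
    rhythms = {}
    for name, (lo, hi) in BANDS.items():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
        rhythms[name] = filtfilt(b, a, signal)
    return rhythms

def rhythm_energy_features(signal, fs=256.0):
    """Simple per-rhythm energy vector for a downstream SVM/SAE/DBN classifier."""
    r = extract_rhythms(signal, fs)
    return np.array([np.sum(r[name] ** 2) for name in BANDS])

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)          # toy stand-in for a focal/non-focal EEG epoch
print(dict(zip(BANDS, np.round(rhythm_energy_features(x), 1))))
```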

  • Article (Open Access)

    A CONSUMER SENTIMENT ANALYSIS METHOD BASED ON EEG SIGNALS AND A RESNEXT MODEL

    E-commerce is becoming increasingly dependent on technologies such as consumer sentiment research. Its purpose is to recognize and comprehend the feelings and dispositions of customers by analyzing customer language and behavior as expressed in social media, online reviews, and other forms of digital communication. The proliferation of digital technology has increased the number and variety of channels through which customers can communicate their feelings. Gaining a comprehensive understanding of consumer sentiment may be of great use to businesses, as it enables them to better satisfy customer demands, enhance their products and services, improve their brand reputation, and ultimately increase their competitiveness. As a result, consumer sentiment research has evolved into an essential tool for decision-making in e-commerce and for the management of customer relationships. Within this scope, this study uses deep learning models to improve the precision of consumer sentiment research. The primary contributions of this paper are as follows. (1) Advancing the use of EEG signals as the basis of a method for analyzing customer feelings; this technique measures brain activity directly, avoiding the restrictions and ambiguities that come with relying on verbal expression. (2) Improving the overall performance of the sentiment analysis model by incorporating an attention mechanism into the ResNeXt model; this attention mechanism is intended to augment the model's capacity to extract subtle characteristics. (3) The experimental results show that the strategy described in this study is effective in improving EEG-based sentiment analysis performance. Compared with standard text-based sentiment analysis approaches, this sentiment analysis model demonstrates greater objectivity, real-time capability, and multidimensionality when applied to consumer sentiment analysis in e-commerce.

  • Article (Open Access)

    SSDNet: A SEMISUPERVISED DEEP GENERATIVE ADVERSARIAL NETWORK FOR ELECTROENCEPHALOGRAM-BASED EMOTION RECOGNITION

    This research introduces a novel method for emotion recognition using electroencephalography (EEG) signals, leveraging advancements in emotion computing and EEG signal processing. The proposed method utilizes a semisupervised deep convolutional generative adversarial network (SSDNet) as the central model. The model fully integrates the feature extraction methods of the generative adversarial network (GAN), the deep convolutional GAN (DCGAN), the spectrally normalized GAN (SSGAN), and the encoder, maximizing the advantages of each to construct a more accurate emotion classification model. In our study, we introduce a flow-form-consistent merging pattern, which successfully addresses mismatches between the data by fusing the EEG data with the features. Implementing this merging pattern not only enhances the uniformity of the input but also decreases the computational load on the network, resulting in a more efficient model. Through experiments on the DEAP and SEED datasets, we evaluate the proposed SSDNet model in detail. The experimental results show that the accuracy of the proposed algorithm is improved by 6.4% and 8.3% on the DEAP and SEED datasets, respectively, compared with the traditional GAN. This significant improvement in performance validates the effectiveness and feasibility of the SSDNet model. The research contributions of this paper are threefold. First, we propose and implement an SSDNet model that integrates multiple feature extraction methods, providing a more accurate and comprehensive solution for emotion recognition tasks. Second, by introducing the flow-form-consistent merging pattern, we successfully address the problem of inter-data mismatches and improve the generalization performance of the model. Finally, we experimentally demonstrate that the method achieves a significant improvement in accuracy over traditional GANs on the DEAP and SEED datasets, providing an innovative solution in the field of EEG-based emotion recognition.

  • Article (Open Access)

    MAC: EPILEPSY EEG SIGNAL RECOGNITION BASED ON THE MLP-SELF-ATTENTION MODEL AND COSINE DISTANCE

    In current epilepsy research, accurate identification of epileptic electroencephalogram (EEG) signals is crucial for improving diagnostic efficiency and developing personalized treatment plans. This study proposes an innovative epilepsy recognition model, MAC, which combines the unique advantages of a multilayer perceptron (MLP), a self-attention mechanism, and the cosine distance. The model uses an MLP as its base and effectively reduces individual differences among epilepsy patients through the MLP's superior linear fitting ability. To measure the difference between two EEG signals more accurately, we introduce the cosine distance as a new feature metric; by using the cosine of the angle between vectors in feature space to assess the difference between two individuals precisely, this metric enhances the performance of epilepsy EEG classification. In addition, we introduce a self-attention mechanism into the model to enhance the impact of various factors on the final EEG data. Our experiments employed the EEG database of the Epilepsy Research Center of the University of Bonn. Comparative experiments show that the proposed MAC model achieves a significant performance improvement on the epilepsy EEG signal recognition task. This study fills an existing research gap in the field of epilepsy identification and provides a powerful tool for the accurate diagnosis of epilepsy in the future. We believe that the introduction of the MAC model will promote new breakthroughs in epilepsy EEG signal recognition and lay a solid foundation for the development of related fields. This research provides an important theoretical and practical reference for advancing the field of epilepsy identification.
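
    The cosine-distance feature metric referred to above is the standard quantity sketched below; the toy feature vectors are assumptions, and this is not the MAC model itself:

```python
import numpy as np

def cosine_distance(u, v):
    """1 - cosine similarity: 0 for identical directions, up to 2 for opposite ones."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy feature vectors standing in for EEG epochs (assumption)
rng = np.random.default_rng(0)
a = rng.standard_normal(64)
b = a + 0.1 * rng.standard_normal(64)   # a slightly perturbed copy of a
c = rng.standard_normal(64)             # an unrelated epoch
print(f"similar pair:    {cosine_distance(a, b):.3f}")
print(f"dissimilar pair: {cosine_distance(a, c):.3f}")
```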

  • Article (Open Access)

    AN EEG-BASED EMOTION RECOGNITION MODEL USING AN INTERACTION DESIGN FRAMEWORK AND DEEP LEARNING

    This research makes significant advances in the field of emotion recognition by presenting a new generative adversarial network (GAN) model that integrates deep learning with electroencephalography (EEG). To achieve more accurate data generation and real-data matching, the model utilizes self-attention and residual neural networks; this is accomplished by substituting an autoencoder for the discriminator in the GAN and incorporating a reconstruction loss function. We include the self-attention mechanism and residual blocks in the model to overcome the vanishing gradient problem. This allows the model to acquire emotion-related information in a more in-depth manner, which ultimately improves the emotion detection accuracy. The DEAP and MAHNOB-HCI datasets are chosen for the experimental validation portion of this research, on which the model is compared and analyzed against traditional deep learning methods and well-known emotion identification algorithms. Based on these findings, it is evident that the proposed model performs exceptionally well on the emotion recognition test, which offers substantial support for studies and applications in this field. In addition, within the context of emotion detection systems, this study places particular emphasis on the crucial role that interaction design frameworks play in enhancing both the user experience and the usability of the system. By pushing the boundaries of emotion recognition technology, this comprehensive research contribution provides a new paradigm for the application of deep learning in EEG emotion recognition.

  • Article (Open Access)

    M4EEG: MATCHING NETWORK-BASED MENTAL HEALTH STATUS ASSESSMENT MODEL USING EEG SIGNALS

    Mental health is critical to an individual's life and social functioning and affects emotions, cognition, and behavior. Mental health status assessments can help individuals understand their own psychological status, identify potential problems in real time, and implement effective interventions to promote good mental health. In this study, a deep learning approach was used to construct a simple and flexible model for electroencephalogram (EEG)-based mental health status assessment, the M4EEG model. This model is suitable not only for supervised learning tasks containing a large amount of labeled data but also for few-shot classification tasks in special cases. During execution, certain components of a pretrained transformer model are utilized as the model's foundation. After feature values are derived from the different inputs, these features are decoupled by cross-connecting them into the relation module. Finally, the correlation between the outputs and the classification results is determined by a relation score. In the experiments, the Database for Emotion Analysis using Physiological Signals (DEAP) and AMIGOS datasets were partitioned into K-shot files as the input, and the classification results were derived from the M4EEG model. These results show that the M4EEG model is capable of assessing mental health status from EEG and can obtain results that existing models cannot achieve without comparable data labeling.

  • Article (Open Access)

    AN EMOTION ANALYSIS METHOD THAT INTEGRATES EEG SIGNALS AND MUSIC THERAPY USING A TRANSFORMER MODEL

    The need for sentiment analysis in the mental health field is increasing, and electroencephalogram (EEG) signals and music therapy have attracted extensive attention from researchers as breakthrough ideas. However, existing methods still face the challenge of integrating temporal and spatial features when combining the two, especially considering the volume-conduction differences among multichannel EEG signals and the different response speeds of subjects; moreover, the precision and accuracy of emotion analysis have yet to be improved. To address this problem, we integrate the idea of top-k selection into the classic transformer model and construct a novel top-k sparse transformer model. This model captures emotion-related information in a finer way by selecting the k data segments with the most distinctive signal features from an EEG signal. This optimization is not without challenges: the value of k must be balanced so that important features are preserved while excessive information loss is avoided. Experiments conducted on the DEAP dataset demonstrate that our approach achieves significant improvements over other models. By enhancing the sensitivity of the model to the emotion-related information contained in EEG signals, our method improves overall emotion classification accuracy and obtains satisfactory results when classifying different emotion dimensions. This study fills a research gap in the field of sentiment analysis involving EEG signals and music therapy, provides a novel and effective method, and is expected to lead to new ideas regarding the application of deep learning in sentiment analysis.
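
    A hedged PyTorch sketch of the top-k idea: score each EEG segment, keep only the k highest-scoring segments, and run them through a standard Transformer encoder; the dimensions, the linear scorer, and the pooling are illustrative assumptions rather than the authors' architecture:

```python
import torch
import torch.nn as nn

class TopKSparseEncoder(nn.Module):
    """Sketch of a top-k front end for a Transformer encoder. This is one
    plausible reading of the abstract, not the authors' model."""
    def __init__(self, d_model=32, k=8, n_classes=2):
        super().__init__()
        self.k = k
        self.scorer = nn.Linear(d_model, 1)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                    # x: (batch, n_segments, d_model)
        scores = self.scorer(x).squeeze(-1)  # one score per segment
        idx = scores.topk(self.k, dim=1).indices
        gather = idx.unsqueeze(-1).expand(-1, -1, x.size(-1))
        selected = torch.gather(x, 1, gather)     # keep only the k segments
        encoded = self.encoder(selected)          # (batch, k, d_model)
        return self.head(encoded.mean(dim=1))     # pooled classification logits

# Toy batch: 4 trials, 32 segments, 32-dim segment features (assumption)
x = torch.randn(4, 32, 32)
print(TopKSparseEncoder()(x).shape)   # torch.Size([4, 2])
```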

  • Article (Open Access)

    AN INTELLIGENT DEPRESSION DETECTION MODEL BASED ON MULTIMODAL FUSION TECHNOLOGY

    Depression is a prevalent mental condition, and it is essential to diagnose and treat patients as soon as possible to maximize their chances of rehabilitation and recovery. In this study, an intelligent detection model based on multimodal fusion technology is proposed to address the difficulties associated with depression detection. Text data and electroencephalogram (EEG) data are used in the model as representative subjective and objective modalities, respectively, and are processed by a BERT–TextCNN model and a CNN–LSTM model. The BERT–TextCNN model adequately captures the semantic information contained in the text data, while the CNN–LSTM model handles the time-series data effectively, enabling the model to account for the features associated with each type of data. A weighted fusion strategy is used to combine the information from the two modalities: the output of each modality is assigned a weight according to its contribution to the final depression detection result. Experimental validation on a dataset we constructed ourselves shows that the proposed model demonstrates strong validity and robustness on the depression identification task. The proposed model provides a viable, intelligent solution for the early identification of depression, which is likely to be useful in clinical practice and to provide new ideas and approaches for the growth of precision medicine.
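
    The weighted late-fusion step can be illustrated in a few lines; the class layout and weights below are assumptions, not the values used in the paper:

```python
import numpy as np

def weighted_fusion(p_text, p_eeg, w_text=0.6, w_eeg=0.4):
    """Late fusion: weight each branch's class probabilities by its assumed
    contribution and renormalize. The weights here are illustrative."""
    fused = w_text * np.asarray(p_text) + w_eeg * np.asarray(p_eeg)
    return fused / fused.sum()

# Toy outputs for classes [not depressed, depressed] from the two branches (assumption)
p_text = [0.30, 0.70]   # e.g. from a BERT-TextCNN text branch
p_eeg = [0.55, 0.45]    # e.g. from a CNN-LSTM EEG branch
fused = weighted_fusion(p_text, p_eeg)
print("fused probabilities:", np.round(fused, 3), "-> prediction:", int(fused.argmax()))
```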

  • Article (Open Access)

    A NOVEL SLEEP STAGE CLASSIFICATION METHOD FOR ELDERLY PEOPLE USING THE SLEEP EEG SIGNAL AND SENET MODELS

    First, research on sleep staging in older individuals is highly valuable because it provides insights into the changes in sleep quality and structure that occur in older persons; such insights help in understanding the correlations with cognitive functioning, the immune system, and mental health, among other factors. Second, identifying the features of older persons in various stages of sleep could lead to more specific guidance for managing and treating personalized sleep, significantly improving the quality of life of these individuals. Finally, in-depth research on sleep in older persons can assist in the development of preventative and intervention measures for older adults, which can help to reduce the negative consequences that age-related sleep issues have on general health. The focus of this research is the categorization of sleep EEG data, for which an improved squeeze-and-excitation network (SENet) is presented for the classification process. For the purposes of this study, electroencephalogram (EEG) features were extracted using the continuous wavelet transform. Subsequently, the lightweight context transform (LCT), a combination of normalization, linear transformation, and SENet, is applied to achieve sleep EEG classification. Extraction using the continuous wavelet transform has the potential to depict the changes and characteristics of sleep stages more accurately. By incorporating the LCT, the computational cost of the model is reduced, and the model becomes more applicable to real-world situations. Experiments conducted on two public datasets, Sleep-EDF-20 and Sleep-EDF-78, demonstrated that the strategy presented in this study achieves the requisite classification performance and that the model converges faster.
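
    A minimal sketch of the continuous-wavelet-transform feature extraction for one sleep epoch (the Morlet wavelet, frequency grid, and 100 Hz Sleep-EDF sampling rate are assumptions; the LCT/SENet classifier is not reproduced here):

```python
import numpy as np
import pywt

def cwt_scalogram(epoch, fs=100.0, wavelet="morl"):
    """Continuous-wavelet-transform magnitude map for one 30-s sleep epoch."""
    freqs = np.arange(0.5, 32.5, 0.5)                       # target frequencies in Hz (assumed)
    scales = pywt.central_frequency(wavelet) * fs / freqs   # convert frequencies to scales
    coeffs, out_freqs = pywt.cwt(epoch, scales, wavelet, sampling_period=1.0 / fs)
    return np.abs(coeffs), out_freqs                        # (n_freqs, n_samples) feature map

rng = np.random.default_rng(0)
epoch = rng.standard_normal(30 * 100)     # toy 30-s epoch at 100 Hz (assumption)
scalogram, freqs = cwt_scalogram(epoch)
print(scalogram.shape, np.round(freqs[:3], 2))
```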

  • Article (Open Access)

    A STUDENT SENTIMENT ANALYSIS METHOD BASED ON MULTIMODAL DEEP LEARNING

    Based on electroencephalography (EEG) and video data, we propose a multimodal affective analysis approach in this study to examine the affective states of university students. The EEG signals and video data were obtained from 50 college students experiencing various emotional states and were then processed in detail. The EEG signals are pre-processed to extract their multi-view characteristics, and the video data are processed by frame extraction, face detection, and convolutional neural network (CNN) operations to extract features. We adopt a feature-splicing strategy that merges the EEG and video data into a time-series input, realizing the fusion of multimodal features. In addition, we developed and trained a model for the classification of emotional states based on a long short-term memory network (LSTM). The experiments were carried out using cross-validation, with the dataset divided into training and test sets, and the model's performance was evaluated with four metrics: accuracy, precision, recall, and F1-score. The results demonstrate that the multimodal approach combining EEG and video has considerable advantages over single-modal sentiment analysis; in particular, its accuracy is significantly higher. The study also investigates the respective contributions of the EEG and video features to emotion detection and finds that they complement each other across a variety of emotional states, improving the overall recognition results. The LSTM-based multimodal sentiment analysis method offers high accuracy and robustness in recognizing the affective states of college students, which is especially valuable for enhancing the quality of education and supporting mental health.
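
    A hedged sketch of the splice-then-LSTM fusion described above: per-time-step EEG and video feature vectors are concatenated and fed to an LSTM classifier; all dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn

class FusionLSTM(nn.Module):
    """Sketch of the feature-splicing idea: concatenate EEG and video features
    at each time step and classify with an LSTM. Feature sizes and the
    two-class output are assumptions, not the paper's configuration."""
    def __init__(self, eeg_dim=32, video_dim=64, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(eeg_dim + video_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, eeg_seq, video_seq):
        fused = torch.cat([eeg_seq, video_seq], dim=-1)   # (batch, T, eeg+video)
        out, _ = self.lstm(fused)
        return self.head(out[:, -1])                       # classify from the last step

# Toy batch: 8 trials, 20 time steps (assumption)
eeg = torch.randn(8, 20, 32)
video = torch.randn(8, 20, 64)
print(FusionLSTM()(eeg, video).shape)   # torch.Size([8, 2])
```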