As artificial intelligence develops rapidly, expectations for the human–machine interaction experience continue to rise, and making human–machine communication friendlier, more harmonious, and simpler has become an important trend. Emotion recognition driven by electroencephalogram (EEG) signals has recently gained popularity in human–computer interaction (HCI) because EEG signals are easy to acquire, difficult to disguise, and available in real time. The ultimate aim of this line of research is to endow computers with the ability to perceive emotion, enabling harmonious and natural human–computer interaction. This study applies three-dimensional convolutional neural networks (3DCNNs) and attention mechanisms to an HCI setting and proposes a dual-attention 3D convolutional neural network (DA-3DCNN) model from the standpoint of spatio-temporal convolution. To extract more representative spatio-temporal characteristics, the new model first thoroughly mines the spatio-temporal distribution information of EEG signals using 3DCNN, taking the temporal fluctuation of EEG data into account. At the same time, a dual-attention mechanism based on EEG channels is used to strengthen or weaken feature information and to capture the links between different brain regions and emotional activity, highlighting the differences in the spatio-temporal characteristics of different emotions. Finally, three sets of experiments were designed on the Database for Emotion Analysis using Physiological Signals (DEAP) dataset: cross-subject emotion classification, channel selection, and ablation experiments, demonstrating the validity and viability of the DA-3DCNN model for HCI emotion recognition applications. The results show that the new model significantly improves emotion recognition accuracy, captures the spatial relationships among channels, and extracts dynamic information from EEG more thoroughly.
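As a rough illustration of the architecture this abstract describes, the PyTorch sketch below combines a 3D convolutional backbone over a time-by-electrode-grid input with two attention branches, one over grid locations and one over feature maps. The layer sizes, the 9x9 grid mapping, and the exact form of the two attention branches are assumptions made for illustration, not the configuration reported in the paper.

```python
# Minimal sketch of a dual-attention 3D-CNN for EEG emotion recognition.
# All sizes and the attention design are illustrative assumptions.
import torch
import torch.nn as nn

class DualAttention3DCNN(nn.Module):
    def __init__(self, n_classes=2, grid=9):
        super().__init__()
        # 3D convolution over (time, height, width) of the electrode grid
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(),
            nn.MaxPool3d((2, 1, 1)),
            nn.Conv3d(16, 32, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((8, grid, grid)),
        )
        # Spatial attention: one weight per electrode-grid location
        self.spatial_att = nn.Sequential(
            nn.Conv3d(32, 1, kernel_size=1), nn.Sigmoid()
        )
        # Feature-map attention: one weight per convolutional feature map
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, 32), nn.Sigmoid()
        )
        self.classifier = nn.Linear(32 * 8 * grid * grid, n_classes)

    def forward(self, x):                      # x: (batch, 1, T, 9, 9)
        feats = self.backbone(x)               # (batch, 32, 8, 9, 9)
        feats = feats * self.spatial_att(feats)                       # strengthen/weaken locations
        feats = feats * self.channel_att(feats)[..., None, None, None]  # strengthen/weaken feature maps
        return self.classifier(feats.flatten(1))

# Example: one DEAP-style segment mapped to a 9x9 grid over 128 time steps
logits = DualAttention3DCNN()(torch.randn(4, 1, 128, 9, 9))
```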
This research introduces a novel method for emotion recognition from electroencephalography (EEG) signals, leveraging advances in affective computing and EEG signal processing. The proposed method uses a semisupervised deep convolutional generative adversarial network (SSDNet) as its central model. The model integrates the feature extraction strategies of the generative adversarial network (GAN), the deep convolutional GAN (DCGAN), the spectrally normalized GAN (SSGAN), and an encoder, combining the advantages of each to construct a more accurate emotion classification model. We also introduce a flow-form-consistent merging pattern, which addresses mismatches between the data by fusing the EEG data with the extracted features. This merging pattern not only enhances the uniformity of the input but also decreases the computational load on the network, resulting in a more efficient model. We evaluate the proposed SSDNet model in detail through experiments on the DEAP and SEED datasets. The results show that the accuracy of the proposed algorithm improves by 6.4% and 8.3% on the DEAP and SEED datasets, respectively, compared with the traditional GAN, validating the effectiveness and feasibility of the SSDNet model. The contributions of this paper are threefold. First, we propose and implement an SSDNet model that integrates multiple feature extraction methods, providing a more accurate and comprehensive solution for emotion recognition tasks. Second, by introducing the flow-form-consistent merging pattern, we address the problem of interdata mismatches and improve the generalization performance of the model. Finally, we experimentally demonstrate that the proposed method achieves a significant accuracy improvement over traditional GANs on the DEAP and SEED datasets, providing an innovative solution for EEG-based emotion recognition.
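The sketch below illustrates only the generic semisupervised-GAN training idea that SSDNet builds on: the discriminator predicts the K emotion classes plus one extra "fake" class, so labeled EEG features, unlabeled EEG features, and generator output all contribute to training. The DCGAN/SSGAN details, the encoder branch, the flow-form-consistent merging step, and all layer sizes are omitted and should be read as illustrative assumptions rather than the paper's actual model.

```python
# Minimal semisupervised-GAN training loop sketch (PyTorch).
# Network sizes and the feature dimension D are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 2          # e.g. high/low valence
D = 160        # assumed length of one EEG feature vector

gen = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, D))
disc = nn.Sequential(nn.Linear(D, 128), nn.LeakyReLU(0.2), nn.Linear(128, K + 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)

def d_step(x_lab, y_lab, x_unlab):
    """One discriminator update using labeled, unlabeled, and generated data."""
    opt_d.zero_grad()
    fake = gen(torch.randn(x_unlab.size(0), 64)).detach()
    # Supervised term: labeled EEG features land in their emotion class.
    loss = F.cross_entropy(disc(x_lab), y_lab)
    # Generated samples should be assigned to the extra "fake" class K.
    loss = loss + F.cross_entropy(disc(fake), torch.full((fake.size(0),), K, dtype=torch.long))
    # Unlabeled real samples should NOT be assigned to the fake class.
    p_fake = F.softmax(disc(x_unlab), dim=1)[:, K]
    loss = loss - torch.log(1.0 - p_fake + 1e-6).mean()
    loss.backward()
    opt_d.step()

def g_step(batch_size=32):
    """Generator tries to make its output look like real EEG features."""
    opt_g.zero_grad()
    p_fake = F.softmax(disc(gen(torch.randn(batch_size, 64))), dim=1)[:, K]
    loss = -torch.log(1.0 - p_fake + 1e-6).mean()
    loss.backward()
    opt_g.step()

# Example training step with random placeholder data
d_step(torch.randn(8, D), torch.randint(0, K, (8,)), torch.randn(16, D))
g_step()
```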
The need for sentiment analysis in the mental health field is increasing, and electroencephalogram (EEG) signals and music therapy have attracted extensive attention from researchers as promising directions. However, existing methods still struggle to integrate temporal and spatial features when combining these two types of features, especially given the volume-conduction differences among multichannel EEG signals and the different response speeds of subjects; moreover, the precision and accuracy of emotion analysis still need improvement. To address this problem, we integrate the idea of top-k selection into the classic transformer model and construct a novel top-k sparse transformer model. The model captures emotion-related information at a finer granularity by selecting the k segments of an EEG signal with the most distinctive features. This selection is not without its challenges: the value of k must be balanced so that important features are preserved while excessive information loss is avoided. Experiments conducted on the DEAP dataset demonstrate that our approach achieves significant improvements over other models. By enhancing the sensitivity of the model to the emotion-related information contained in EEG signals, our method improves overall emotion classification accuracy and obtains satisfactory results when classifying different emotion dimensions. This study fills a research gap in sentiment analysis involving EEG signals and music therapy, provides a novel and effective method, and is expected to inspire new ideas for applying deep learning to sentiment analysis.
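The top-k selection step described above can be sketched as follows: each EEG segment is scored, only the k highest-scoring segments are kept, and those tokens are passed through a standard transformer encoder. The scoring rule, the segment length, and all layer sizes here are illustrative assumptions rather than the paper's actual configuration.

```python
# Minimal sketch of a top-k sparse transformer for EEG segments (PyTorch).
import torch
import torch.nn as nn

class TopKSparseTransformer(nn.Module):
    def __init__(self, n_channels=32, seg_len=16, k=8, d_model=64, n_classes=2):
        super().__init__()
        self.k = k
        self.embed = nn.Linear(n_channels * seg_len, d_model)   # one token per EEG segment
        self.score = nn.Linear(d_model, 1)                      # learned "distinctiveness" score
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, segments):              # segments: (batch, n_seg, channels*seg_len)
        tokens = self.embed(segments)         # (batch, n_seg, d_model)
        scores = self.score(tokens).squeeze(-1)                 # (batch, n_seg)
        top = scores.topk(self.k, dim=1).indices                # keep the k highest-scoring segments
        idx = top.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
        selected = tokens.gather(1, idx)      # (batch, k, d_model)
        encoded = self.encoder(selected)
        return self.head(encoded.mean(dim=1))

# Example: a DEAP-style trial split into 32 segments of 32 channels x 16 samples
logits = TopKSparseTransformer()(torch.randn(4, 32, 32 * 16))
```

Choosing k trades coverage against sparsity, echoing the balance the abstract describes: a small k risks discarding emotion-relevant segments, while a large k reintroduces the redundant segments the selection was meant to prune.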