With the advancement of smart grid technology, power system network security has become increasingly critical. To fully exploit the power grid’s vast data resources and improve the efficiency of anomaly detection, this paper proposes an improved decision tree (DT)-based approach for automatically identifying anomalies in electric power big data. The method characterizes power data time series using six-dimensional features extracted along three dimensions: volatility, trend, and variability. These features feed a hybrid DT-SVM-LSTM framework that combines the strengths of DTs, support vector machines, and long short-term memory networks. Experimental results demonstrate that the proposed method achieves an accuracy of 96.8%, a precision of 95.3%, a recall of 94.8%, and an F1-score of 95.0%, outperforming several state-of-the-art methods cited in the literature. Moreover, the approach is robust to noise, maintaining high detection accuracy even under low signal-to-noise ratio conditions. These findings highlight the method’s effectiveness in detecting anomalies efficiently and handling noise interference.
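The abstract does not spell out the exact six features; as a hedged illustration, one plausible reading assigns two simple statistics to each of the three named dimensions. All definitions below are assumptions for illustration, not taken from the paper:

```python
# Hypothetical six-dimensional time-series features: two statistics per
# dimension (volatility, trend, variability). The specific choices here
# are assumptions, not the paper's definitions.
from statistics import mean, pstdev

def extract_features(series):
    n = len(series)
    diffs = [series[i + 1] - series[i] for i in range(n - 1)]
    # Volatility: dispersion of the raw values and of their first differences.
    volatility_std = pstdev(series)
    volatility_diff = mean(abs(d) for d in diffs)
    # Trend: slope of a least-squares line and the net change over the window.
    xs = list(range(n))
    x_bar, y_bar = mean(xs), mean(series)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, series)) \
            / sum((x - x_bar) ** 2 for x in xs)
    net_change = series[-1] - series[0]
    # Variability: spread relative to the mean, and the full range.
    cv = pstdev(series) / abs(y_bar) if y_bar else 0.0
    rng = max(series) - min(series)
    return [volatility_std, volatility_diff, slope, net_change, cv, rng]

features = extract_features([1.0, 2.0, 3.0, 4.0, 5.0])
print(len(features))  # 6 features per window
```

In the described pipeline, such per-window feature vectors would then be passed to the DT-SVM-LSTM classifier rather than the raw series.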
Emotion recognition plays an essential role in human–human interaction since it is key to understanding the emotional states and reactions of human beings when they are subject to events and engagements in everyday life. Moving towards human–computer interaction, the study of emotions becomes fundamental because it underlies the design of advanced systems supporting a broad spectrum of application areas, including forensic, rehabilitative, educational, and many others. An effective method for discriminating emotions is based on ElectroEncephaloGraphy (EEG) data analysis, which is used as input for classification systems. Collecting brain signals on several channels and for a wide range of emotions produces cumbersome datasets that are hard to manage, transmit, and use in varied applications. In this context, the paper introduces the Empátheia system, which explores a different EEG representation by encoding EEG signals into images prior to their classification. In particular, the proposed system extracts spatio-temporal image encodings, or atlases, from EEG data through the Processing and transfeR of Interaction States and Mappings through Image-based eNcoding (PRISMIN) framework, thus obtaining a compact representation of the input signals. The atlases are then classified through the Empátheia architecture, which comprises branches based on convolutional, recurrent, and transformer models designed and tuned to capture the spatial and temporal aspects of emotions. Extensive experiments were conducted on the Shanghai Jiao Tong University (SJTU) Emotion EEG Dataset (SEED) public dataset, where the proposed system significantly reduced the data size while retaining high performance. The results obtained highlight the effectiveness of the proposed approach and suggest new avenues for data representation in emotion recognition from EEG signals.
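The PRISMIN atlas encoding itself is not detailed in the abstract; as a much-simplified, hypothetical stand-in, the core idea of turning a channels-by-time EEG window into a grayscale image can be sketched as a min-max mapping to pixel intensities:

```python
def eeg_to_image(eeg, lo=None, hi=None):
    """Map a channels x time EEG window to 0-255 grayscale pixel rows.
    This is a simplified illustration, not the PRISMIN encoding."""
    flat = [v for ch in eeg for v in ch]
    lo = min(flat) if lo is None else lo
    hi = max(flat) if hi is None else hi
    span = (hi - lo) or 1.0  # avoid division by zero on flat signals
    return [[round(255 * (v - lo) / span) for v in ch] for ch in eeg]

img = eeg_to_image([[-1.0, 0.0], [0.0, 1.0]])
print(img)  # [[0, 128], [128, 255]]
```

An image classifier (convolutional, recurrent, or transformer branch, as in Empátheia) would then operate on such pixel grids instead of the raw multichannel signal.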
Long short-term memory (LSTM) networks, with their significantly increased complexity and large number of parameters, face a computing-power bottleneck arising from limited memory capacity. Hardware acceleration of LSTM using memristor circuits is an effective solution. This paper presents a complete design of a memristive LSTM network system. Both the LSTM cell and the fully connected layer are implemented with memristor crossbars, and the one-transistor-one-memristor (1T1R) design avoids sneak-current interference, which helps improve the accuracy of network computation. To reduce power consumption, the word-embedding dimensionality was reduced using the GloVe model, and the number of features in the hidden layer was reduced as well. The effectiveness of the proposed scheme is verified on a text classification task with the IMDB dataset, where the hardware training accuracy reached 88.58%.
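The crossbar idea can be sketched in software: output column currents sum the input voltages weighted by cell conductances, and signed weights are commonly mapped onto a differential pair of conductances. This is an illustrative model of the general technique, not the paper’s circuit, and the conductance range is an assumed example:

```python
# Illustrative software model of a memristor-crossbar matrix-vector multiply.
# Signed weights are mapped to paired conductances G+ and G-; the differential
# read (I+ - I-) recovers the signed dot product up to a scale factor.
def crossbar_mvm(weights, voltages, g_min=1e-6, g_max=1e-4):
    w_max = max(abs(w) for row in weights for w in row) or 1.0
    scale = (g_max - g_min) / w_max
    currents = []
    for row in weights:
        i_pos = sum((g_min + scale * max(w, 0.0)) * v for w, v in zip(row, voltages))
        i_neg = sum((g_min + scale * max(-w, 0.0)) * v for w, v in zip(row, voltages))
        currents.append((i_pos - i_neg) / scale)  # rescale back to weight units
    return currents

print(crossbar_mvm([[1.0, -2.0], [0.5, 0.5]], [1.0, 1.0]))  # ~[-1.0, 1.0]
```

In a real 1T1R array, the access transistor isolates unselected cells, which is what suppresses the sneak currents mentioned above.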
With the accelerated construction of 5G and IoT infrastructure, more and more 5G base stations are being erected. However, as their number grows, power management for 5G base stations is becoming a bottleneck. In this paper, we address this problem by designing a cloud monitoring system for 5G base station lithium batteries. First, the lithium battery acquisition hardware is designed. Second, a new communication protocol is established based on Modbus. Third, the Windows desktop host software and the cloud-based monitoring system are designed. Finally, we design the improved ResLSTM algorithm, which fuses ResNet-style residual connections into a stacked LSTM. In comparative tests, the proposed algorithm outperforms both SVM and LSTM baselines, and the communication tests as well as the training and testing of ResLSTM yield strong results. The designed 5G base station lithium-ion battery cloud monitoring system meets the stated requirements and is of practical significance for engineering deployment. More importantly, the ResLSTM algorithm can better guide lithium-ion battery state-of-charge (SOC) estimation.
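The abstract gives no architectural details for ResLSTM; a plausible reading of fusing ResNet into a stacked LSTM is a shortcut connection around each recurrent layer, which can be sketched generically (the layer here is a stand-in, not an actual LSTM):

```python
def residual_block(layer, x):
    """ResNet-style shortcut: the layer's output is added element-wise to its
    input. In a stacked-LSTM setting, `layer` would be one LSTM layer whose
    output dimension matches its input; here it is an arbitrary function."""
    return [a + b for a, b in zip(layer(x), x)]

double = lambda seq: [2 * v for v in seq]  # toy stand-in for an LSTM layer
print(residual_block(double, [1.0, 2.0, 3.0]))  # [3.0, 6.0, 9.0]
```

Stacking several such blocks lets gradients flow through the shortcuts, which is the usual motivation for adding residual connections to deep recurrent stacks.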
This study was conducted to evaluate the effect of computer vision-based respiratory rehabilitation. Chronic obstructive pulmonary disease (COPD) is one of the primary respiratory diseases worldwide. Recently, image-capturing devices have been increasingly used for physical therapy during rehabilitation treatment, and among these technologies, action recognition plays a critical role in physical exercise and rehabilitation evaluation. This study proposes a respiratory training program consisting of a series of six actions. A video camera was placed in front of the participants to record their movements, and a hybrid algorithm combining a convolutional neural network with long short-term memory models was employed to recognize actions from the recordings. The model achieved a reliable classification accuracy of 82.35% across the six actions, demonstrating the validity of the proposed approach for multi-category action recognition and its effectiveness for evaluating exercises performed without medical guidance in home-based rehabilitation. Furthermore, the model was lightweight, so processing time did not need to be considered.
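In such CNN-LSTM pipelines, the video is typically split into fixed-length clips of per-frame feature vectors before the temporal model is applied. A minimal sketch of that windowing step (the clip length and stride are hypothetical, not from the study):

```python
def make_clips(frame_features, clip_len, stride):
    """Slide a fixed-length window over per-frame feature vectors to build
    clips suitable for a temporal model such as an LSTM."""
    clips = []
    for start in range(0, len(frame_features) - clip_len + 1, stride):
        clips.append(frame_features[start:start + clip_len])
    return clips

# 10 frames of 4-dimensional features, 6-frame clips with stride 2
frames = [[float(i)] * 4 for i in range(10)]
clips = make_clips(frames, clip_len=6, stride=2)
print(len(clips), len(clips[0]))  # 3 clips, 6 frames each
```

Each clip would then receive one action label, with per-frame features coming from the CNN stage.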
Based on electroencephalography (EEG) and video data, this study proposes a multimodal affective analysis approach to examine the affective states of university students. EEG signals and video data were collected from 50 college students experiencing various emotional states and processed in detail. The EEG signals were pre-processed to extract multi-view characteristics, while the video data underwent frame extraction, face detection, and convolutional neural network (CNN) operations to extract features. A feature-splicing strategy merges the EEG and video features into a time-series input, realizing the fusion of multimodal features. We then developed and trained a long short-term memory (LSTM) network model to classify emotional states. Experiments were carried out with cross-validation, dividing the dataset into training and test sets, and the model’s performance was evaluated with four metrics: accuracy, precision, recall, and F1-score. Compared with single-modal sentiment analysis, the multimodal approach combining EEG and video shows considerable advantages in emotion detection, achieving significantly higher accuracy. The study also investigates the respective contributions of the EEG and video features to emotion detection and finds that they complement each other across a variety of emotional states, improving the overall recognition results. The LSTM-based multimodal sentiment analysis method thus offers high accuracy and robustness in recognizing the affective states of college students.
This is especially important for enhancing the quality of education and providing support for mental health.
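The feature-splicing (early fusion) step described above amounts to concatenating the time-aligned per-step feature vectors of the two modalities. A minimal sketch, with hypothetical feature dimensions:

```python
def splice_features(eeg_seq, video_seq):
    """Concatenate per-timestep EEG and video feature vectors into one
    multimodal sequence (a simple early-fusion strategy)."""
    assert len(eeg_seq) == len(video_seq), "modalities must be time-aligned"
    return [e + v for e, v in zip(eeg_seq, video_seq)]

eeg = [[0.1, 0.2], [0.3, 0.4]]               # 2 timesteps, 2 EEG features
video = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]   # 2 timesteps, 3 video features
fused = splice_features(eeg, video)
print(len(fused), len(fused[0]))  # 2 timesteps, 5 features per step
```

The fused sequence then serves as the time-series input to the LSTM classifier.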
Assessing the impacts of climate change on hydrological systems requires accurate downscaled climate projections. In the past two decades, various statistical and machine-learning techniques have been developed and tested for climate downscaling; however, there is no consensus regarding which technique is the most reliable for climate downscaling and hydrological impact assessment. In this study, an advanced machine-learning technique, the Long Short-Term Memory (LSTM) neural network, is used to build multi-model ensembles for downscaling climate projections from a wide range of global and regional climate models, and its performance is compared with a number of traditional statistical and machine-learning methods, such as ensemble average, linear regression, Multi-layer Perceptron, Time-lagged Feed-forward Neural Network, and Nonlinear Auto-regression Network with exogenous inputs. The downscaling input consists of temperature and precipitation projections provided by regional climate models, such as CanRCM4, CRCM5, RCA4, and HIRHAM5, and the output is observation data collected from meteorological stations. Performance of the developed LSTM ensemble is evaluated for two case studies in Canada and China. The downscaled climate projections are further used to assess the hydrological impacts in a southwestern mountainous area of China, with the assistance of a fully distributed hydrological model, MIKE SHE. The results can support future applications of LSTM neural networks and other similar data-driven techniques for climate downscaling and hydrological impact assessment.
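The simplest of the compared baselines, the ensemble average, can be sketched directly; the LSTM and the other methods effectively learn a more flexible version of this combination of per-model projections (the weights and values below are hypothetical):

```python
def ensemble_average(model_outputs, weights=None):
    """Combine downscaled series from several climate models into one series.
    With uniform weights this is the plain ensemble-average baseline; learned
    methods replace the fixed weights with a fitted mapping."""
    m = len(model_outputs)
    weights = weights or [1.0 / m] * m
    n = len(model_outputs[0])
    return [sum(w * series[t] for w, series in zip(weights, model_outputs))
            for t in range(n)]

# Three hypothetical model projections for the same station and two timesteps
outs = [[10.0, 12.0], [14.0, 12.0], [12.0, 12.0]]
print(ensemble_average(outs))  # ~[12.0, 12.0]
```

Evaluating each combined series against station observations is what allows the comparison among the ensemble methods described above.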
Protein domain boundary prediction is usually an early step toward understanding protein function and structure. Most current computational domain boundary prediction methods suffer from low accuracy, are limited in the multi-domain types they can handle, or cannot be applied to certain targets at all, such as proteins with discontinuous domains. We developed DeepDom, an ab initio protein domain boundary predictor based on a stacked bidirectional LSTM model in deep learning. The model is trained on a large set of protein sequences without feature engineering such as sequence profiles; hence, prediction with our method is much faster than with others, and the trained model can be applied to any type of target protein without constraint. We evaluated DeepDom by 10-fold cross-validation and by applying it to targets in different categories from CASP 8 and CASP 9. Comparison with other methods shows that DeepDom outperforms most current ab initio methods and even achieves better results than the top-level template-based method in certain cases. The code of DeepDom and the test data used for CASP 8 and 9 can be accessed through GitHub at https://github.com/yuexujiang/DeepDom.
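Since the model consumes raw sequences without profiles, the input encoding is presumably something as simple as per-residue one-hot vectors over the 20 standard amino acids. A hedged sketch of that encoding step (the handling of unknown residues is an assumption):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot_encode(sequence):
    """Encode a protein sequence as per-residue one-hot vectors, the kind of
    raw, profile-free input a stacked bidirectional LSTM could consume."""
    index = {aa: i for i, aa in enumerate(AMINO_ACIDS)}
    encoded = []
    for residue in sequence.upper():
        vec = [0] * len(AMINO_ACIDS)
        if residue in index:  # unknown residues (e.g. 'X') stay all-zero
            vec[index[residue]] = 1
        encoded.append(vec)
    return encoded

x = one_hot_encode("MKV")
print(len(x), len(x[0]))  # 3 residues, 20-dimensional vectors
```

A per-residue classifier on top of the bidirectional LSTM would then emit one boundary/non-boundary score per position of the encoded sequence.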