
  Bestsellers

• Article (No Access)

    Research on the Construction Method of Curriculum Teaching Knowledge Graph Based on Bi-LSTM and CNN Algorithm

The aim of this paper is to explore a method of constructing a curriculum teaching knowledge graph by combining the Bi-LSTM and convolutional neural network (CNN) algorithms. The field of education is constantly seeking innovation to improve teaching results and the student learning experience. The knowledge graph, as an advanced technology for the structured representation of knowledge, is expected to provide effective support for teaching management and personalized learning. First, this paper introduces the background and significance of the curriculum teaching knowledge graph. By establishing knowledge graphs, we can present the knowledge system and its correlations in the curriculum more clearly, which helps teachers design more targeted teaching content and provide personalized learning paths for students. However, traditional knowledge graph construction methods often face problems such as incomplete information capture and inaccurate semantic association, so advanced deep learning algorithms need to be introduced to improve the quality of the knowledge graph. Second, this paper elaborates on the construction method that fuses the Bi-LSTM and CNN algorithms. Bi-LSTM, as a recurrent neural network capable of capturing sequence information, can better model the evolution of knowledge in the course, while CNN, being good at extracting local features, can effectively capture the spatial structure information in the knowledge graph. By integrating the two, we can improve the expressive ability and reasoning accuracy of the knowledge graph. Further, the experimental results show that the fused Bi-LSTM and CNN algorithm significantly improves the accuracy of information capture and inference compared with traditional methods. In summary, this paper proposes an innovative construction method for the curriculum teaching knowledge graph by integrating the Bi-LSTM and CNN algorithms, which provides new ideas and solutions for informatization and personalized teaching in the field of education. In the future, the applicability of this method in different disciplines and teaching scenarios can be further explored, and more advanced technologies can be combined to continuously improve and extend this research.
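The fusion idea above (global sequence context from a Bi-LSTM combined with local features from a CNN) can be illustrated with a toy sketch. This is not the authors' implementation: a real model would use trainable layers in a deep learning framework, whereas here forward/backward running means stand in for the recurrent states and a sliding-window max stands in for convolution plus pooling.

```python
# Toy sketch (not the paper's model): fuse a Bi-LSTM-style bidirectional
# sequence summary with CNN-style local window features.

def bidirectional_summary(seq):
    """Forward and backward running means over a 1-D feature sequence."""
    n = len(seq)
    fwd, bwd = [0.0] * n, [0.0] * n
    acc = 0.0
    for i, x in enumerate(seq):            # forward pass
        acc += x
        fwd[i] = acc / (i + 1)
    acc = 0.0
    for i in range(n - 1, -1, -1):         # backward pass
        acc += seq[i]
        bwd[i] = acc / (n - i)
    return fwd, bwd

def local_window_features(seq, k=3):
    """Sliding-window max: a stand-in for a conv filter + max pooling."""
    return [max(seq[i:i + k]) for i in range(len(seq) - k + 1)]

def fuse(seq, k=3):
    """Concatenate global bidirectional context with local features."""
    fwd, bwd = bidirectional_summary(seq)
    return fwd + bwd + local_window_features(seq, k)

features = fuse([1.0, 2.0, 3.0, 4.0], k=2)
```

The fused vector carries both a whole-sequence view (useful for modeling how knowledge evolves across a course) and local structure (useful for spatial associations in the graph).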

• Article (No Access)

A hybrid model using 1D-CNN with Bi-LSTM, GRU, and various ML regressors for forecasting the consumption of electrical energy

To address power consumption challenges using Artificial Intelligence (AI) techniques, this research presents an innovative hybrid time series forecasting approach. The suggested model combines GRU-BiLSTM with several regressors and is benchmarked against three other models to guarantee optimum reliability. It uses a specialized dataset from the Ministry of Electricity in Baghdad, Iraq. For every model architecture, three optimizers are tested: Adam, RMSprop and Nadam. Performance assessments show that the hybrid model is highly reliable, offering a practical option for sequence-modeling applications that need fast computation and comprehensive context knowledge. Notably, the Adam optimizer works better than the others by promoting faster convergence and avoiding entrapment in local minima. Adam adapts the learning rate for each parameter separately according to estimates of the first and second moments of its gradients. Furthermore, because of its tolerance for outliers and its emphasis on fitting within a certain margin, the SVR regressor performs better than the stepwise and polynomial regressors, obtaining a lower MSE of 0.008481 with the Adam optimizer. The SVR’s regularization also reduces overfitting, especially when paired with Adam’s adaptive learning rates. The research concludes that the properties of the target dataset, processing demands and task complexity should all be considered when selecting a model and optimizer.
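The per-parameter Adam update described above can be written out in a few lines. This is the standard textbook update rule, not code from the paper; the hyperparameter values are the common defaults, and the toy objective is chosen only for illustration.

```python
# Minimal sketch of the Adam update rule: each parameter's step is scaled
# by bias-corrected estimates of the first moment (mean) and second
# moment (uncentered variance) of its gradients.
import math

def adam_minimize(grad, x0, lr=0.1, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=200):
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g          # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g      # second-moment estimate
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# Toy objective f(x) = (x - 3)^2, gradient 2(x - 3): converges near x = 3.
x_min = adam_minimize(lambda x: 2 * (x - 3), x0=0.0)
```

Because the step is normalized by the second-moment estimate, the effective step size stays close to `lr` while gradients are consistent, which is the "faster convergence" behavior the abstract refers to.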

• Article (No Access)

    Deep Ensemble Model for Brain Age Prediction in MRI with Hybrid Optimal Feature Selection

Deep learning (DL) has tremendous potential for accurate brain age prediction using neuroimaging data, but its performance is frequently limited by the size of the training dataset and the computer’s memory requirements. To address this, a unique brain age prediction model is developed in the following five phases. The input image is first pre-processed via median filtering, which preserves the edges of the raw image while removing noise. The pre-processed images are segmented using the improved Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) algorithm. Then, features including statistics (mean, median, standard deviation), Improved Median Robust Extended LBP (MRELBP), and a sharpness score are extracted from these segmented images. The suggested Coot Interfaced Archimedes with Gaussian Map Estimation (CIAGME) optimization approach, which combines the Archimedes and coot optimization techniques, selects the optimal features from those extracted. The deep ensemble approach, which includes deep classifiers such as CNN, Bi-LSTM, and Deep Maxout, provides a prediction based on the selected optimal features. Lastly, the suggested CIAGME-based prediction model is evaluated and shown to outperform existing approaches.

• Article (Open Access)

    EEG-BASED SLEEP STAGE CLASSIFICATION FOR ENTREPRENEURIAL STUDENTS: A LIGHTWEIGHT AND EFFICIENT DEEP LEARNING APPROACH

    This study aims to classify sleep stages using electroencephalogram (EEG) signals to investigate the potential impact of entrepreneurial stress on the sleep quality of entrepreneurial students. Due to high stress and irregular schedules, entrepreneurial students are prone to sleep issues, making accurate detection and analysis of their sleep states highly significant in practice. This study proposes a lightweight deep learning model that combines Depthwise Separable Convolution (DSC) with a Bidirectional Long Short-Term Memory (Bi-LSTM) network to capture the spatiotemporal features of EEG signals. DSC effectively extracts spatial features from EEG data, reducing model complexity and computational cost, while Bi-LSTM enhances the model’s ability to capture temporal dependencies, thereby improving the identification of different sleep stages (W, N1, N2, N3, and REM). This approach balances efficiency and accuracy, making it suitable for environments with limited computational resources. Experiments were conducted on both the public Sleep-EDF dataset and a custom dataset collected from entrepreneurial students. The results show that the model achieved a sleep stage classification accuracy of 93.59% on the Sleep-EDF dataset and 88.98% on the custom entrepreneurial student dataset, demonstrating strong generalization and robustness. Additionally, the model maintained high F1-scores across different sleep stages, with particularly outstanding performance in the classification of N2 and REM stages. This study provides an efficient and interpretable tool for monitoring the sleep health of entrepreneurial students, contributing to further understanding of the relationship between sleep and entrepreneurial psychological states. It offers scientific support for enhancing the health management and learning efficiency of entrepreneurial students.
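The complexity reduction that motivates the DSC layer above can be checked with back-of-envelope arithmetic: a standard k×k convolution couples every input channel to every output channel, while a depthwise separable convolution factorizes it into a per-channel k×k filter plus a 1×1 pointwise mixing step. The channel sizes below are illustrative, not taken from the paper.

```python
# Parameter-count comparison: standard convolution vs. depthwise
# separable convolution (DSC), ignoring bias terms.

def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def dsc_params(k, c_in, c_out):
    depthwise = k * k * c_in        # one k x k filter per input channel
    pointwise = c_in * c_out        # 1 x 1 conv mixes the channels
    return depthwise + pointwise

# Example: 3x3 kernel, 64 -> 128 channels.
std = standard_conv_params(3, 64, 128)   # 73728 parameters
dsc = dsc_params(3, 64, 128)             # 576 + 8192 = 8768 parameters
ratio = std / dsc                        # roughly 8x fewer parameters
```

This roughly k²-fold saving is what makes the model suitable for the resource-limited environments the abstract targets.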

• Article (No Access)

    Deep Learning Recognition of Paroxysmal Kinesigenic Dyskinesia Based on EEG Functional Connectivity

    Paroxysmal kinesigenic dyskinesia (PKD) is a rare neurological disorder marked by transient involuntary movements triggered by sudden actions. Current diagnostic approaches, including genetic screening, face challenges in identifying secondary cases due to symptom overlap with other disorders. This study introduces a novel PKD recognition method utilizing a resting-state electroencephalogram (EEG) functional connectivity matrix and a deep learning architecture (AT-1CBL). Resting-state EEG data from 44 PKD patients and 44 healthy controls (HCs) were collected using a 128-channel EEG system. Functional connectivity matrices were computed and transformed into graph data to examine brain network property differences between PKD patients and controls through graph theory. Source localization was conducted to explore neural circuit differences in patients. The AT-1CBL model, integrating 1D-CNN and Bi-LSTM with attentional mechanisms, achieved a classification accuracy of 93.77% on phase lag index (PLI) features in the Theta band. Graph theoretic analysis revealed significant phase synchronization impairments in the Theta band of the functional brain network in PKD patients, particularly in the distribution of weak connections compared to HCs. Source localization analyses indicated greater differences in functional connectivity in sensorimotor regions and the frontal-limbic system in PKD patients, suggesting abnormalities in motor integration related to clinical symptoms. This study highlights the potential of deep learning models based on EEG functional connectivity for accurate and cost-effective PKD diagnosis, supporting the development of portable EEG devices for clinical monitoring and diagnosis. However, the limited dataset size may affect generalizability, and further exploration of multimodal data integration and advanced deep learning architectures is necessary to enhance the robustness of PKD diagnostic models.
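The phase lag index (PLI) feature used above has a compact definition: the absolute mean sign of the instantaneous phase difference between two channels, which is 1 for a consistent nonzero lag and 0 for no consistent lag. The sketch below uses synthetic phase series; in a real pipeline the phases would come from a Hilbert transform of band-filtered EEG (e.g. via scipy), which is not shown here.

```python
# Sketch of the phase lag index between two channels' phase series:
# PLI = | mean( sign( sin(phi_a - phi_b) ) ) |, a value in [0, 1].
import math

def pli(phases_a, phases_b):
    signs = []
    for pa, pb in zip(phases_a, phases_b):
        d = math.sin(pa - pb)
        signs.append(0.0 if d == 0 else math.copysign(1.0, d))
    return abs(sum(signs) / len(signs))

# Channel B lags channel A by a constant quarter cycle -> PLI = 1.
t = [0.02 * math.pi * i for i in range(500)]
lagged = pli(t, [p - math.pi / 4 for p in t])   # consistent lag
zero = pli(t, t)                                # identical phases -> 0
```

Because PLI discards zero-lag differences, it is relatively insensitive to volume conduction, which is one reason it is a common EEG connectivity measure.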

• Article (No Access)

    A spatial-temporal approach for traffic status analysis and prediction based on Bi-LSTM structure

Urban traffic control has become a major issue in traffic management in recent years. With the explosion of data, Intelligent Transportation Systems (ITS) are developing rapidly. ITS is an advanced data-based method for traffic control, which requires a timely and effective information supply. This research aims at providing real-time and accurate traffic flow data through an intelligent prediction method. Applying multiple road traffic flow datasets from the Caltrans Performance Measurement System (PeMS) and separating the time series, the mechanism of spatial-temporal differences was taken into consideration. Based on the basic Long Short-Term Memory (LSTM) model, an improved LSTM model with Dropout and a bidirectional structure (Bi-LSTM) for traffic flow prediction is presented. In the prediction process, three models were applied in the experiment: the improved Bi-LSTM model, a Gated Recurrent Unit (GRU) model and Linear Regression, and they were compared in terms of model structure complexity, operating efficiency and prediction accuracy. To validate the portability of the prediction model, the features of traffic flow from different datasets were further analyzed. The experimental results show that the improved Bi-LSTM model performs best in traffic flow prediction with comprehensive rationality, reaching an accuracy of about 92% when temporal differences are considered. In particular, the specific traffic situations and locations for which the improved Bi-LSTM model is most applicable are summarized with respect to spatial differences. This research proposes an advanced and accurate model to provide real-time, short-term traffic flow prediction data, which is of great help to intelligent traffic control. Considering the mechanism linking the model and road traffic properties, the results suggest that it is most applicable in urban commercial areas.

• Article (No Access)

    Modified GAN with Proposed Feature Set for Text-to-Image Synthesis

Automated synthesis of realistic images from text could be useful and interesting; however, present AI systems are still far from this objective. Nevertheless, in recent years, powerful and generic Recurrent Neural Network (RNN) structures have been introduced to learn discriminative text feature representations. Meanwhile, deep convolutional GANs have begun producing highly convincing images of specific categories, such as room interiors, album covers, and faces. In this research work, we develop a new model for text-to-image synthesis comprising three important phases: (i) feature extraction, (ii) text encoding, and (iii) optimal image synthesis. Initially, text features such as improved TF–IDF, bag of words, and N-grams are extracted from the text and trained with a Bi-LSTM. During the encoding of an image from text, cross-modal feature grouping is performed. Further, the image is synthesized using a modified GAN (MGAN) with a new loss function. Here, for precise synthesis of images, the weights of the GAN are optimized using the Self-improved Social Ski-Driver (SI-SSD) optimization algorithm. Eventually, the superiority of the suggested model is examined via an assessment against existing schemes.
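Of the text features listed above, the classic TF–IDF weighting is easy to show in full. The paper uses an "improved" TF–IDF variant whose exact form is not given here, so the sketch below implements only the standard formulation on a tiny toy corpus.

```python
# Standard TF-IDF: term frequency within a document, weighted by the log
# inverse document frequency across the corpus.
import math

def tf_idf(docs):
    """Return per-document {term: tf-idf weight} maps for tokenized docs."""
    n = len(docs)
    df = {}                                   # document frequency per term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        w = {}
        for term in doc:
            tf = doc.count(term) / len(doc)   # term frequency
            idf = math.log(n / df[term])      # inverse document frequency
            w[term] = tf * idf
        weights.append(w)
    return weights

docs = [["cat", "sat", "mat"], ["cat", "cat", "hat"], ["dog", "ran"]]
w = tf_idf(docs)
```

Terms that appear in many documents ("cat") are down-weighted relative to rare ones ("dog"), which is what makes the representation discriminative before it is fed to a sequence model.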

• Article (No Access)

    Emotion Recognition from Facial Expression Using Hybrid CNN–LSTM Network

Facial Expression Recognition (FER) is a prominent research area in Computer Vision and Artificial Intelligence that plays a crucial role in human–computer interaction. Existing FER systems focus on spatial features for identifying emotion, which suffers when recognizing emotions from a dynamic sequence of facial expressions in real time. Deep learning techniques based on the fusion of convolutional neural networks (CNN) and long short-term memory (LSTM) are presented in this paper for recognizing emotion and identifying the relationships within a sequence of facial expressions. In this approach, a hyperparameter-tweaked VGG-19 backbone is employed to extract spatial features automatically from a sequence of images, which avoids the shortcomings of conventional feature extraction methods. Second, these features are fed into a bidirectional LSTM (Bi-LSTM) to extract spatiotemporal features of the time series in two directions, which recognizes emotion from a sequence of expressions. The proposed method’s performance is evaluated using the CK+ benchmark as well as an in-house dataset captured from the designed IoT kit. Finally, this approach has been verified through hold-out cross-validation. The proposed technique achieves an accuracy of 0.92 on CK+ and 0.84 on the in-house dataset. The experimental results reveal that the proposed method outperforms baseline methods and state-of-the-art approaches. Furthermore, precision, recall, F1-score, and ROC curve metrics have been used to evaluate the performance of the proposed system.

• Article (No Access)

    Ensemble Model for Stock Price Forecasting: MapReduce Framework for Big Data Handling: An Optimal Trained Hybrid Model for Classification

This work examines how huge volumes of data can be classified effectively. A novel big data classification paradigm is introduced through the work’s preprocessing, feature extraction and classification techniques. Data normalization is carried out in the preprocessing stage. The MapReduce framework is then utilized to manage the massive data. Statistical features (mean, median, min/max and SD), higher-order statistical features (skewness, kurtosis and enhanced entropy), and correlation-based features are all extracted prior to classification. The Bi-LSTM and deep maxout hybrid classification model classifies the data during the reduce stage. To ensure classification accuracy, training is carried out with the new Hybrid Butterfly Positioned Coot Optimization (HBPCO) algorithm. The proposed method’s accuracy of 97.45% beats that of NN (85.13%), CNN (83.78%), RNN (78.37%), Bi-LSTM (82.43%) and SVM (87.83%).
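The statistical feature set named above (mean, median, min/max, SD, plus higher-order skewness and kurtosis) follows the standard moment formulas, sketched below. The paper's "enhanced entropy" is not specified, so it is omitted; the sample data are illustrative only.

```python
# Statistical and higher-order moment features for a 1-D data vector,
# using the standard (population) moment definitions.
import statistics

def moment_features(x):
    mu = statistics.mean(x)
    sd = statistics.pstdev(x)                 # population std deviation
    m3 = sum((v - mu) ** 3 for v in x) / len(x)   # third central moment
    m4 = sum((v - mu) ** 4 for v in x) / len(x)   # fourth central moment
    return {
        "mean": mu,
        "median": statistics.median(x),
        "min": min(x), "max": max(x),
        "std": sd,
        "skewness": m3 / sd ** 3,
        "kurtosis": m4 / sd ** 4 - 3.0,       # excess kurtosis
    }

# A right-skewed sample: the outlier 9.0 pulls skewness positive.
feats = moment_features([1.0, 2.0, 2.0, 3.0, 9.0])
```

In a MapReduce setting, the sums and counts that these formulas need are exactly the kind of per-partition partial aggregates that a map step can emit and a reduce step can combine.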

• Article (No Access)

    An Efficient Deep Learning Mechanism for Predicting Fake News/Reviews in Twitter Data

Recently, social media platforms have been widely utilized as information sources due to their effortless accessibility and reduced costs. However, online platforms like Instagram, Twitter and Facebook get influenced by their users via fake news/reviews. The main intention of spreading fake news is to mislead other network users, which strongly affects businesses, political parties, etc. Thus, an effective methodology is needed to predict fake news from social media automatically. The major objective of this proposed study is to identify and classify the given Twitter input data as real or fake through deep learning mechanisms. The proposed study involves four stages: pre-processing, embedded word analysis, feature extraction, and fake news/review prediction. Initially, pre-processing is performed to enhance the quality of the data with the help of tokenization, stemming and stop word removal. Embedded word analysis is done using Advanced Word2Vec and GloVe modeling to enhance the performance of the proposed prediction model. Then, the hybrid deep learning model named Dense Convolutional assisted Gannet Optimal Bi-directional Network (DC_GO_BiNet) is introduced for feature extraction and prediction. A Dense Convolutional Neural Network (DCNN) is hybridized with a bi-directional long short-term memory (Bi-LSTM) model to extract the essential features and predict fake news from the given input text. Also, the proposed model’s parameters are fine-tuned by adopting the gannet optimization (GO) algorithm. The proposed study used three different datasets and obtained high classification accuracies: 99.5% on Fake News Detection on Twitter EDA, 99.59% on FakeNewsNet and 99.51% on ISOT. The analysis shows that the proposed model attains higher prediction results on each dataset than others.
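The pre-processing stage described above (tokenization, stemming, stop-word removal) can be sketched in a few lines. The stemmer here is a toy suffix stripper, not the Porter stemmer a real pipeline would typically use, and the stop-word list is a tiny illustrative subset.

```python
# Toy text pre-processing pipeline: lowercase tokenization, stop-word
# removal, and naive suffix-stripping "stemming".
import re

STOP_WORDS = {"the", "is", "a", "an", "and", "of", "to", "in"}

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def stem(token):
    # Naive stemming: strip a common suffix if enough of the word remains.
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text):
    return [stem(t) for t in tokenize(text) if t not in STOP_WORDS]

tokens = preprocess("The spreading of fake news is misleading users")
```

The cleaned token stream is what would then be mapped to dense vectors by Word2Vec or GloVe before reaching the DCNN/Bi-LSTM stages.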

• Article (No Access)

    Wireless Capsule Endoscopy Infected Images Detection and Classification Using MobileNetV2-BiLSTM Model

Wireless capsule endoscopy (WCE) is an efficient tool for painless imaging and examination of gastrointestinal tract illnesses of the intestine. Performance, safety, tolerance, and efficacy are several concerns that challenge its adoption and wide applicability. In addition, automatic analysis of WCE datasets is of great importance for detecting abnormalities. These issues are addressed by numerous vision-based and computer-aided solutions; however, they require further enhancement and do not yet achieve the desired level of accuracy. To solve these issues, this paper presents the detection and classification of WCE infected images by a deep neural network and utilizes a bleed image recognizer (BIR) that incorporates the MobileNetV2 design to classify the infected WCE images. For the first-level evaluation, the BIR uses the MobileNetV2 model for its minimal computation power requirement, and the outcome is then sent to the CNN for further processing. Then, a Bi-LSTM with an attention mechanism is used to improve the performance of the model. The hybrid attention Bi-LSTM design yields more accurate classification outcomes. The proposed scheme is implemented on the Python platform and the performance is evaluated by Cohen’s kappa, F1-score, recall, accuracy, and precision. The implementation outcomes show that the introduced scheme achieved a maximum accuracy of 0.996 with data augmentation on the WCE image dataset, which is higher than the others.

• Article (Open Access)

    FINE-GRAINED AND MULTI-SCALE MOTIF FEATURES FOR CROSS-SUBJECT MENTAL WORKLOAD ASSESSMENT USING BI-LSTM

Mental workload (MW) assessment is crucial for understanding human mental state, and cross-subject MW analysis based on electroencephalogram (EEG) signals is an important approach. In this paper, a fine-grained and multi-scale motif (FGMSM) feature extraction method is proposed, and the proposed features together with the original EEG data are used as the input of a bidirectional long short-term memory (Bi-LSTM) network to evaluate cross-subject mental workload. First, the EEG signal of each channel is decomposed with the improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) algorithm. Second, for the motif structure consisting of three nodes, multi-scale detection is carried out in each intrinsic mode function, and the proportion of each motif structure is extracted as a new feature. Then, the statistical differences of the extracted features between different MW levels are analyzed using the t-test, and the features with statistical differences are selected for cross-subject MW assessment. Finally, on a public dataset with 26 subjects, Bi-LSTM and a variety of machine learning algorithms are used to classify the levels of cross-subject MW. The results show that the Bi-LSTM classification method with the original EEG data and the proposed features gives the best results. Therefore, the FGMSM features proposed in this paper, together with Bi-LSTM, provide a new technique for the assessment of cross-subject MW based on EEG signals.
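The t-test-based feature selection step described above can be sketched directly: features whose values differ significantly between two MW levels are kept. Welch's t-statistic is computed by hand below; a real pipeline would obtain p-values from `scipy.stats.ttest_ind` rather than thresholding |t|, and the toy data here are illustrative, not from the dataset.

```python
# Feature selection via Welch's t-statistic between two groups of
# feature vectors (e.g. low vs. high mental workload trials).
import math
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

def select_features(group_a, group_b, t_threshold=2.0):
    """Keep indices of features with |t| above threshold across groups."""
    kept = []
    for j in range(len(group_a[0])):
        a = [row[j] for row in group_a]
        b = [row[j] for row in group_b]
        if abs(welch_t(a, b)) > t_threshold:
            kept.append(j)
    return kept

# Feature 0 separates the groups; feature 1 is overlapping noise.
low = [[0.10, 0.50], [0.20, 0.48], [0.15, 0.52], [0.12, 0.49]]
high = [[0.80, 0.51], [0.90, 0.47], [0.85, 0.53], [0.88, 0.50]]
kept = select_features(low, high)
```

Discarding features that do not discriminate between MW levels both shrinks the classifier input and reduces the risk of fitting subject-specific noise, which matters in the cross-subject setting.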

• Article (No Access)

    ENSEMBLE MODEL WITH IMPROVED U-NET-BASED SEGMENTATION FOR LEUKEMIA DETECTION

An essential component of the immune system that aids in the fight against pathogens is white blood cells. One of the most prevalent blood diseases, leukemia can be fatal if not properly diagnosed, and diagnosing it at an early stage may reduce its severity. This research proposes an ensemble model with improved U-Net for leukemia detection (EMIULD) comprising the following four phases: preprocessing, segmentation, feature extraction and detection. The preprocessing step involves preprocessing the blood smear image, which includes filtering and scaling. The segmentation phase is applied to the preprocessed image, using U-Net-based segmentation. Features are then extracted from the segmented images, including improved Local Gabor XOR Pattern (LGXP), area, and grid-based shape features. The extracted features are fed into the suggested ensemble model, which consists of Deep Convolutional Neural Network (DCNN), Support Vector Machine (SVM) and Random Forest (RF) classifiers, with the purpose of detecting leukemia. Finally, the proposed Bidirectional Long Short-Term Memory (Bi-LSTM) network is used to predict whether the given blood smear image indicates leukemia. The suggested model attained the best outcome when evaluated against existing approaches.