To enhance the accuracy of diagnosing bearing faults in steam turbines, a novel approach focused on extracting key fault features from vibration signals is introduced. Recognizing the complex, non-linear, and non-stationary nature of bearing vibration signals, the strategy involves a sensitivity analysis utilizing a multivariate diagnostic algorithm. The process begins with collecting vibration data from defective bearings via the TMI system. These data are then subjected to Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN), which integrates adaptive noise to extract in-depth information. The decomposed signals are then analyzed in both the time and frequency domains, after a Fast Fourier Transform (FFT), to form a database of diagnostic features. To streamline data analysis and boost the model’s computational efficiency, a combination of eXtreme Gradient Boosting (XGBoost) and the Mutual Information Criterion (MIC) is applied for dimensionality reduction. A deep belief network (DBN) is then implemented to develop a precise fault diagnosis model for bearings in rotating machinery. By incorporating sensitivity analysis, a diagnostic matrix is constructed, facilitating highly accurate fault identification. The superiority of this diagnostic algorithm is corroborated by testing with real on-site data and a benchmark database, demonstrating enhanced diagnostic capability relative to other feature selection techniques.
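The time- and frequency-domain feature step described above can be sketched in Python. This is a minimal illustration assuming a generic set of common vibration diagnostics (RMS, kurtosis, crest factor, spectral centroid), not the paper's exact feature database:

```python
import numpy as np

def extract_features(signal, fs):
    """Compute a few common time- and frequency-domain diagnostic
    features from one vibration segment (names are illustrative)."""
    # Time-domain statistics
    rms = np.sqrt(np.mean(signal ** 2))
    kurtosis = np.mean((signal - signal.mean()) ** 4) / signal.std() ** 4
    crest = np.max(np.abs(signal)) / rms

    # Frequency domain via FFT (one-sided spectrum)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)  # spectral centroid

    return {"rms": rms, "kurtosis": kurtosis, "crest": crest,
            "freq_centroid": centroid}

# Example: a pure 50 Hz sine sampled at 1 kHz for 1 s
t = np.arange(0, 1, 1e-3)
feats = extract_features(np.sin(2 * np.pi * 50 * t), fs=1000)
```

For a clean sinusoid the spectral centroid recovers the driving frequency; on real bearing signals each segment's feature dictionary would become one row of the diagnostics database.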
Deep architectures are now widely used in machine learning. Deep Belief Networks (DBNs) are deep architectures that stack Restricted Boltzmann Machines (RBMs) to build a powerful generative model of the training data. In this paper, we present an improvement to a common method used in training RBMs. The new method uses free energy as a criterion to obtain elite samples from the generative model. We argue that these samples estimate the gradient of the log probability of the training data more accurately. According to the results, an error rate of 0.99% was achieved on the MNIST test set. This shows that the proposed method outperforms the method presented in the paper that first introduced DBNs (1.25% error rate) and general classification methods such as SVM (1.4% error rate) and KNN (1.6% error rate). In another test using the ISOLET dataset, the letter classification error dropped to 3.59%, compared to the 5.59% error rate reported in earlier papers using this dataset.
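The free-energy criterion for ranking samples can be illustrated with the standard binary-RBM free energy, F(v) = -v·b - Σⱼ log(1 + exp(cⱼ + v·W·ⱼ)). The weights and samples below are random placeholders, not a trained model:

```python
import numpy as np

def free_energy(v, W, b_vis, b_hid):
    """Free energy of a binary RBM for visible vector(s) v:
    F(v) = -v.b_vis - sum_j log(1 + exp(b_hid_j + (v W)_j))."""
    wx_b = v @ W + b_hid
    return -(v @ b_vis) - np.sum(np.logaddexp(0.0, wx_b), axis=-1)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(6, 4))   # 6 visible, 4 hidden units
b_vis = np.zeros(6)
b_hid = np.zeros(4)
samples = rng.integers(0, 2, size=(10, 6)).astype(float)

# Lower free energy means higher unnormalized probability under the
# model, so the "elite" samples are those with the smallest F(v).
elite = samples[np.argsort(free_energy(samples, W, b_vis, b_hid))[:3]]
```

Because F(v) is computable without the intractable partition function, it is a practical criterion for picking the most model-probable negative samples during training.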
Day-ahead prediction of wind speed is a basic and key problem for large-scale wind power penetration. Many current techniques fail to satisfy practical engineering requirements because of the strongly nonlinear features of wind speed, which is influenced by many complex factors, and because general models cannot automatically learn features. It is well recognized that wind speed varies in different patterns. In this paper, we propose a deep feature learning (DFL) approach to wind speed forecasting because of its advantages in both multi-layer feature extraction and unsupervised learning. A deep belief network (DBN) model for regression, with an architecture of 144 input and 144 output nodes, was constructed using restricted Boltzmann machines (RBMs). Day-ahead prediction experiments were then carried out. Comparison of the experimental results showed that the prediction errors, with respect to both magnitude and stability, of a DBN model with only three hidden layers were smaller than those of three other typical approaches: support vector regression (SVR), single-hidden-layer neural networks (SHL-NN), and neural networks with three hidden layers (THL-NN). In addition, the DBN model can learn complex features of wind speed through its strong nonlinear mapping ability, which effectively improves its prediction precision. Moreover, prediction errors are minimized when the number of hidden layers in the DBN model reaches a threshold; above this number, further increasing the number of hidden layers does not improve prediction accuracy. Thus, the DBN method has high practical value for wind speed prediction.
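The 144-input/144-output architecture corresponds to one day of wind speed at 10-minute resolution. One plausible way to build (previous-day, next-day) regression pairs is sketched below; this windowing scheme is an assumption, as the paper's exact construction is not reproduced here:

```python
import numpy as np

def make_day_ahead_pairs(series, steps_per_day=144):
    """Slice a wind-speed series (10-min resolution, 144 points/day)
    into (previous-day input, next-day target) pairs matching a
    144-input / 144-output regression architecture."""
    n_days = len(series) // steps_per_day
    days = np.asarray(series[:n_days * steps_per_day],
                      dtype=float).reshape(n_days, steps_per_day)
    return days[:-1], days[1:]   # X: day d, y: day d+1

# Toy series: 5 days of synthetic data
X, y = make_day_ahead_pairs(np.arange(144 * 5, dtype=float))
```

Each row of X would feed the 144 input nodes and the corresponding row of y supervises the 144 output nodes during fine-tuning.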
The increasingly diverse demands of image feature recognition and the complicated relationships among image pixels cannot be fully and effectively handled by traditional single-method image recognition. To effectively improve classification accuracy in image processing, a deep belief network (DBN) classification model based on probability-measure rough set theory is proposed in our research.
First, the incomplete and inaccurate fuzzy information in the original image is preprocessed by the rough set method based on probability measure. Second, the attribute features of the image information are extracted, the attribute feature set is reduced to generate the classification rules, and key components are extracted as the input of the DBN. Third, the network structure of the DBN is determined by the extracted classification rules, the importance of the rough set attributes is integrated, and the weights of the neuronal nodes are corrected by the backpropagation (BP) algorithm. Last, the DBN is trained to classify images. Experimental analysis of the proposed method on medical imagery shows that it is more effective than either the single rough set approach or standard deep learning methods.
Image processing plays a significant role in various fields such as the military, business, healthcare, and science. Ultrasound (US), Magnetic Resonance Imaging (MRI), and Computed Tomography (CT) are imaging tests used in the treatment of cancer. Detecting liver tumors from these tests is a complex process. Hence, in this research work, a novel deep learning approach is used: a Deep Belief Network (DBN) with Opposition-Based Learning (OBL) Grey Wolf Optimization (GWO) for the classification of liver cancer. The approach comprises five major processes. Initially, in pre-processing, the color contrast is improved by Contrast Limited Adaptive Histogram Equalization (CLAHE) and noise is removed by Wiener Filtering (WF). After pre-processing, the liver is segmented by adaptive thresholding. Following that, the kernelized Fuzzy C-Means (FCM) method is used to segment the tumor area. Shape, color, and texture features are then extracted during the feature extraction process. Finally, these features are classified using the DBN, and OBL-GWO is employed to enhance system performance. The entire evaluation is done on the Liver Tumor Segmentation (LiTS) benchmark dataset. The performance of the proposed DBN-OBL-GWO is compared with other models and its advantages are demonstrated. The proposed DBN-OBL-GWO achieves a better accuracy of 0.995, a precision of 0.948, and a false positive rate (FPR) of 0.116.
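The tumor segmentation step uses kernelized FCM; as a hedged illustration, here is a plain (non-kernelized) fuzzy C-means in NumPy, with toy 1-D data standing in for image feature vectors:

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=50, seed=0):
    """Standard fuzzy C-means (Euclidean form only; the paper's
    kernelized variant replaces the distance with a kernel-induced one).
    Returns cluster centers and the fuzzy membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                       # memberships: columns sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1))            # update memberships from distances
        U /= U.sum(axis=0)
    return centers, U

# Two well-separated 1-D "intensity" clusters
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
centers, U = fuzzy_cmeans(X)
```

On image data each pixel's feature vector would be a row of X, and a pixel is assigned to the tumor region when its membership to the tumor cluster dominates.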
Accurate aircraft positioning is key to constructing a reliable network topology when aircraft are used to assist 6G cellular networks in ground communications. Distance Measuring Equipment (DME) has been widely used in aircraft positioning with the help of multiple ground-based radar stations. In this paper, a learning-based health prediction method for the airborne DME receiver is proposed, using signal processing techniques to achieve quantitative health status assessment and failure degradation trend prediction when the DME measures the distance between ground-based radar stations and the airborne receiver. First, a quantitative health evaluation model for the receiving channel of the airborne DME device is established, which takes as input the Automatic Gain Control (AGC) attenuation value and the measured distance between the ground beacon station and the airborne DME receiver, and calculates the receiving channel’s AGC attenuation deviation and gain loss. The model can be used to build the mapping relationship between the receiver channel gain loss and the DME functional range, and further to establish the calculation model of the receiving channel’s health index. Second, a multi-model fusion fault prediction framework based on Deep Belief Network (DBN) techniques is proposed. In this framework, the insufficient generalization and robustness of the traditional DBN model are addressed by introducing the Dropout mechanism into the DBN structure, and an improved weighted voting method is utilized as the model fusion algorithm to eliminate the deviation of prediction results caused by environmental load differences and to improve the accuracy of fault prediction. Finally, extensive experiments are conducted to show the feasibility of the proposed method, and the results show that it performs well.
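The weighted-voting fusion step can be sketched as follows; the fixed weights and class labels are illustrative assumptions, since the paper's improved weighting scheme (derived from per-model behavior under different environmental loads) is not reproduced here:

```python
def weighted_vote(predictions, weights):
    """Fuse several sub-models' class predictions by weighted voting:
    each model adds its weight to the score of its predicted label,
    and the label with the highest total score wins."""
    scores = {}
    for pred, w in zip(predictions, weights):
        scores[pred] = scores.get(pred, 0.0) + w
    return max(scores, key=scores.get)

# Three sub-models disagree; the combined weight of the two models
# predicting "healthy" (0.5 + 0.2) outvotes the one predicting "degraded".
label = weighted_vote(["healthy", "degraded", "healthy"], [0.5, 0.3, 0.2])
```

In a fusion framework the weights would typically be set from each sub-model's validation accuracy rather than fixed by hand.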
To enhance the accuracy of building change detection using high-resolution remote sensing (HRRS) images, a novel method combining tensors and a deep belief network (DBN) is proposed. To better describe the essential characteristics of building changes, a tensor-based structure covering time-space-spectrum-shadow features (TSSS-Cube) is integrated into the model. The changes, expressed as a combination of shadow and spectral features and spatio-temporal autocorrelation at each pixel, are represented by a third-order tensor to maintain the structural information and the constraint integrity between them. Then, a restricted Boltzmann machine that can directly process TSSS-Cube data (TC-RBM) is designed, and a support tensor machine (STM) replaces the conventional backpropagation neural network at the top of the DBN to construct a multi-tensor deep belief network (MTR-DBN) composed of multi-layer TC-RBMs and an STM classifier. Finally, the multi-layer TC-RBMs in the MTR-DBN are trained layer by layer, and the global parameters of the MTR-DBN are optimized by combining a limited amount of labeled data and fine-tuning the STM classifier. The combination of supervised and unsupervised learning further increases the change detection accuracy of the MTR-DBN. Three representative sub-regions are selected from the original experimental area for building change detection experiments, and a dataset composed of bi-temporal HRRS images from 2012 and 2016 is used. The experimental results show that the proposed MTR-DBN attains both higher average change detection accuracy and better detection efficiency than other similar methods.
Big data is important in knowledge manipulation, assessment, and prediction. However, extracting and analyzing knowledge from big databases is complex because of imbalanced data distributions, which lead to wrong decisions and biased classification outputs. Hence, an effective and optimal big data classification approach is designed using the proposed Bird Swarm Deer Hunting Optimization-Deep Belief Network (BSDHO-based DBN) algorithm, based on the Spark architecture with its master and slave nodes. The proposed BSDHO is obtained by combining the Deer Hunting Optimization algorithm and the Bird Swarm Algorithm. The developed model comprises two node types, namely slave and master nodes. The training data is initially given to the master node in the Spark architecture to perform data transformation. Here, the transformation is done with an exponential log kernel, and feature selection is then done with sequential forward selection to choose suitable features for enhanced processing. Subsequently, oversampling is performed with Fuzzy K-Nearest Neighbor (Fuzzy KNN) in the slave node using the selected features to manage the imbalanced data. Then, in the master node, classification is done with the Deep Belief Network, trained using the developed BSDHO algorithm. The test data, in turn, is fed to the slave node for data transformation, and the transformed data is given to the master node for classification based on the proposed BSDHO. The proposed BSDHO-based DBN provided enhanced outcomes, with a highest specificity of 97.92%, accuracy of 96.92%, and sensitivity of 96.9%.
Atrial fibrillation (AF) is a common atrial arrhythmia in clinical practice and can be diagnosed using the electrocardiogram (ECG) signal. The conventional diagnostic features of the ECG signal are not enough to quantify the pathological variations during AF. Therefore, automated detection of AF pathology using new diagnostic features of the ECG signal is required. This paper proposes a novel method for the detection of AF using ECG signals. In this work, we use a novel nonlinear method, namely two-stage variational mode decomposition (VMD), to analyze the ECG, and a deep belief network (DBN) for automated AF detection. First, the ECG signals of both normal sinus rhythm (NSR) and AF classes are decomposed into different modes using VMD. The first mode is decomposed again in the second stage, as this mode captures the atrial activity (AA) information during AF; the remaining modes capture the ventricular activity information. The sample entropy (SE) and the VMD-estimated center frequency features are extracted from the sub-modes of the AA mode and the ventricular activity modes. These extracted features coupled with the DBN classifier are able to classify normal and AF ECG signals with accuracy, sensitivity, and specificity values of 98.27%, 97.77%, and 98.67%, respectively. We have also developed an atrial fibrillation diagnosis index (AFDI) using selected SE and center frequency features to detect AF with a single number. The system can be tested on larger databases and used in hospitals to detect AF ECG classes.
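The sample entropy (SE) feature can be sketched directly. Below is a standard SampEn(m, r) implementation in NumPy, where the tolerance r is taken as a fraction of the signal's standard deviation (a common convention, assumed here):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r): -ln(A/B), where B and A count the
    pairs of length-m and length-(m+1) templates whose Chebyshev
    distance is below the tolerance r (given as a fraction of std)."""
    x = np.asarray(x, dtype=float)
    r *= x.std()

    def count_matches(length):
        templates = np.array([x[i:i + length]
                              for i in range(len(x) - length + 1)])
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        n = len(templates)
        return (np.sum(d < r) - n) / 2       # unordered pairs, no self-matches

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 20 * np.pi, 400))  # predictable waveform
noisy = rng.standard_normal(400)                   # white noise
se_reg = sample_entropy(regular)
se_noise = sample_entropy(noisy)
```

A regular waveform should yield lower sample entropy than noise, which is why SE discriminates between the organized ventricular modes and the disorganized atrial activity during AF.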
Software bug prediction is mainly used for testing and code inspection, and over the decades it has been carried out using network measures. However, classical fault prediction methods fail to capture the semantic differences among various programs, which degrades the performance of prediction models designed on these aspects. Capturing these semantic differences is necessary to design the prediction model accurately and effectively. A software defect prediction system faces many difficulties in identifying defective modules, such as correlation, irrelevant aspects, data redundancy, and missing samples or values. Consequently, many methods have been designed for software bug prediction that categorise faulty and non-faulty modules using software metrics, but only a few works have focussed on mitigating the class imbalance problem in bug prediction. To overcome this problem, an efficient software bug prediction method with an enhanced classifier is required. For this experimentation, the input data are taken from standard online data sources. Initially, the input data undergo a pre-processing phase, and the pre-processed data are then provided as input to feature extraction utilising an Auto-Encoder. The obtained features are used to derive optimal fused features with the help of a new Hybrid Honey Badger Cat Swarm Algorithm (HHBCSA). Finally, these features are fed as input to the Optimised Parallel Cascaded Deep Network (OPCDP), in which an Extreme Learning Machine (ELM) and a Deep Belief Network (DBN) are used for the prediction of software bugs, with the parameters of both classifiers optimised by the proposed HHBCSA algorithm. The investigations show that the recommended method offers faster bug prediction, which helps to detect and remove software bugs easily and accurately.
13-8 PH steel is a rare, extremely hard metal used in high-end applications such as nuclear reactor components, injection molds, machine tools, and aerospace. Its hardness makes it one of the most challenging materials for conventional machining processes. At the same time, because of its applications, 13-8 PH steel requires a high degree of surface quality (low Surface Roughness, SR) and precision, yet most conventional machining processes fail on it due to its extensive physical and mechanical properties. Hence, Die Sinking Electrical Discharge Machining (DSEDM) of 13-8 PH steel is suggested in this study. The experimentation was designed and performed using a Response Surface Methodology (RSM)-based Box-Behnken design (BBD). Machining parameters such as Peak Current (IP), Pulse On time (TON), Pulse Off time (TOFF), and Tool Lift Time (TLT) were analyzed against the Material Removal Rate (MRR), Tool Wear Rate (TWR), and SR. In addition, this investigation proposes a novel hybrid Deep Belief Network-based Human Eye Vision Algorithm (DBN-HEVA) for predicting and optimizing the process parameters. The results show that the required input parameters are IP = 11.89 A, TON = 97.12 μs, TOFF = 30.25 μs, and TLT = 3.17 μs to obtain an optimal outcome of 142.041 g/min MRR, 2.306 g/min TWR, and 5.89 μm SR. During process prediction, the proposed hybrid DBN-HEVA algorithm outperformed the existing methods when predicting the MRR, TWR, and SR responses by 118.5%, 144.3%, and 166.2%, respectively.
Medical data classification is the process of transforming descriptions of medical diagnoses and procedures into universal medical code numbers. The diagnoses and procedures are usually taken from a variety of sources within the healthcare record, such as transcriptions of physicians’ notes, laboratory results, radiologic results, and other sources. However, many frequency distribution problems exist in these domains. Hence, this paper develops an advanced and precise medical data classification approach for diabetes and breast cancer datasets. With knowledge of the features and challenges of state-of-the-art classification methods, a deep learning-based medical data classification methodology is proposed here. It is well known that deep learning networks learn directly from the data. In this paper, the medical data is dimensionally reduced using Principal Component Analysis (PCA). The dimensionally reduced data are transformed by multiplying by a weighting factor, optimized using the Whale Optimization Algorithm (WOA), to maximize the distance between the features. As a result, the data are transformed into a label-distinguishable plane on which a Deep Belief Network (DBN) performs the deep learning process and the data classification is carried out. Further, the proposed WOA-based DBN (WOADBN) method is compared with Neural Network (NN), DBN, Genetic Algorithm-based NN (GANN), GA-based DBN (GADBN), Particle Swarm Optimization-based NN (PSONN), PSO-based DBN (PSODBN), and WOA-based NN (WOANN) techniques, and the results show the superiority of the proposed algorithm over conventional methods.
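The PCA reduction followed by a multiplicative weighting can be sketched as below; the fixed weight is a stand-in for the WOA-optimized factor, which is not reproduced here:

```python
import numpy as np

def pca_reduce(X, k):
    """Dimensionality reduction with PCA via SVD of the centered data:
    project onto the top-k principal axes."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))   # toy data: 100 records, 10 attributes
Z = pca_reduce(X, k=3)

# The paper then multiplies the reduced data by a weighting factor
# tuned by WOA; a fixed illustrative weight stands in for that step.
w = 1.5
Z_weighted = w * Z
```

In the full method, Z_weighted would be the label-distinguishable representation fed into the DBN for classification.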
Speech recognition is a rapidly emerging research area, as the speech signal contains linguistic and speaker information that can be used in applications including surveillance, authentication, and forensics. The performance of speech recognition systems degrades rapidly due to channel degradations, mismatches, and noise. To provide better speech recognition performance, the Taylor-Deep Belief Network (Taylor-DBN) classifier is proposed, which modifies the Gradient Descent (GD) algorithm with the Taylor series in the existing DBN classifier. Initially, the noise present in the speech signal is removed through speech signal enhancement. Features such as Holoentropy with the eXtended Linear Prediction using autocorrelation Snapshot (HXLPS), spectral kurtosis, and spectral skewness are extracted from the enhanced speech signal and fed to the Taylor-DBN classifier, which identifies the speech of impaired persons. The experimentation is done using the TensorFlow speech recognition database, a real database, and the ESC-50 dataset. The accuracy, False Acceptance Rate (FAR), False Rejection Rate (FRR), and Mean Square Error (MSE) of the Taylor-DBN on the TensorFlow speech recognition database are 96.95%, 3.04%, 3.04%, and 0.045, respectively; on the real database, 96.67%, 3.32%, 3.32%, and 0.0499, respectively; and on the ESC-50 dataset, 96.81%, 3.18%, 3.18%, and 0.047, respectively. The results imply that the Taylor-DBN provides better performance compared to existing conventional methods.
The cytochrome P450 (CYP) superfamily, found in the human liver, is responsible for more than 90% of the metabolism of clinical drugs. It is therefore necessary to adopt computer simulation methods that can predict the inhibition capability of compounds for a specific CYP isoform. In this work, a model is presented for the classification of CYP450 1A2 inhibitors and non-inhibitors based on a multi-tiered deep belief network (DBN) trained on a large dataset. The dataset, composed of more than 13,000 heterogeneous compounds, was acquired from PubChem. First, 139 2D and 53 3D descriptors are calculated and preprocessed. Then, unsupervised learning is used to train the DBN model to automatically extract multiple levels of distributed representation from the descriptors of the training set. Finally, using a testing set and an external validation set, we evaluate the classification performance of the DBN for the inhibition of CYP1A2. The proposed model is also compared with shallow machine learning models (support vector machine (SVM) and artificial neural network (ANN)), and we discuss the performance of the DBN with different feature combinations. The experimental results showed that the DBN has better prediction ability than SVM and ANN, and that models combining the 2D and 3D features obtain the best forecast accuracy.
This paper studied music feature recognition and classification. First, the common signal features were analyzed, and the signal pre-processing method was introduced. Then, the Mel–Phon coefficient (MPC) was proposed as a feature for subsequent recognition and classification. The deep belief network (DBN) model was applied and improved by the gray wolf optimization (GWO) algorithm to obtain the GWO–DBN model. The experiments were conducted on the GTZAN and Free Music Archive (FMA) datasets. It was found that the best hidden-layer structure of the DBN was 1440-960-480-300. Compared with machine learning methods such as decision trees, the DBN model had better performance in recognizing and classifying music types. The classification accuracy of the GWO–DBN model reached 75.67%. The experimental results demonstrate the reliability of the GWO–DBN model, which can be further promoted and applied in actual music research.
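The GWO component can be illustrated with a minimal grey wolf optimizer. In the paper GWO tunes the DBN; here, as an assumption-laden sketch, it minimizes a toy sphere function instead (the hyperparameter encoding is not reproduced):

```python
import numpy as np

def gwo_minimize(f, dim, n_wolves=10, n_iter=100, lb=-5.0, ub=5.0, seed=0):
    """Minimal grey wolf optimization: each wolf moves toward the three
    best solutions found so far (alpha, beta, delta)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(n_iter):
        a = 2.0 * (1 - t / n_iter)               # 'a' decreases linearly 2 -> 0
        order = np.argsort([f(x) for x in X])
        leaders = X[order[:3]]                   # alpha, beta, delta wolves
        for i in range(n_wolves):
            moves = []
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                moves.append(leader - A * np.abs(C * leader - X[i]))
            X[i] = np.clip(np.mean(moves, axis=0), lb, ub)
    best = min(X, key=f)
    return best, f(best)

# Toy objective: sphere function, optimum at the origin.
best, val = gwo_minimize(lambda x: float(np.sum(x ** 2)), dim=3)
```

For DBN tuning, the objective f would instead decode each wolf's position into hyperparameters (e.g. layer sizes or learning rate) and return the validation loss.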
Facial emotion recognition (FER) is the technology or process of identifying and interpreting human emotions based on the analysis of facial expressions. It involves using computer algorithms and machine learning techniques to detect and classify emotional states from images or videos of human faces. FER plays a vital role in recognizing and understanding human emotions to better interpret someone’s feelings, intentions, and attitudes. At present, it is widely used in fields such as healthcare, human–computer interaction, law enforcement, and security, with a wide range of practical applications across industries including emotion monitoring, adaptive learning, and virtual assistants. This paper presents a comparative analysis of FER algorithms, focusing on deep learning approaches. Performance on different datasets, including FER2013, JAFFE, AffectNet, and Cohn–Kanade, is evaluated using convolutional neural networks (CNNs), DeepFace, attentional convolutional networks (ACNs), and deep belief networks (DBNs). Among the tested algorithms, DBNs outperformed the others, reaching the highest accuracy of 98.82%. These results emphasize the effectiveness of deep learning techniques, particularly DBNs, in FER. Additionally, outlining the advantages and disadvantages of current research on facial emotion identification may direct future research efforts toward the most promising directions.
With the rapid growth of the biomedical literature, the development of biomedical text mining technology is becoming increasingly important. However, research progress on other semantic types, such as diseases, has been hindered by the lack of adequately annotated corpora. For any disease-centric information extraction task, the correct identification of disease entities is the key issue for further improvement. In this paper, a machine learning-based approach is proposed that uses a deep belief network as the basic architecture, combined with simple orthographic features, for disease mention recognition; it achieves results comparable to or better than the state of the art on the Arizona Disease Corpus. The paper also discusses why simple orthographic features can improve performance under a deep belief network architecture.