Neonatal epilepsy is a common emergency in neonatal intensive care units (NICUs) that requires timely attention, early identification, and treatment. Traditional detection methods mostly rely on supervised learning with large amounts of labeled data. Hence, this study proposes a semi-supervised hybrid architecture for seizure detection, called Fd-CAE, which combines an extracted electroencephalogram (EEG) feature dataset with a convolutional autoencoder. First, various time-domain and entropy-domain features are extracted to characterize the EEG signal, which helps distinguish epileptic seizures in the subsequent stages. Then, the unlabeled EEG features are fed into the convolutional autoencoder (CAE) for training, which represents the EEG features effectively by optimizing the loss between the input and output features. This unsupervised feature learning process better combines and optimizes EEG features from unlabeled data. The pre-trained encoder is then used for further feature learning on labeled data to obtain a low-dimensional feature representation and perform classification. The model is evaluated on the neonatal EEG dataset collected at the University of Helsinki Hospital and shows high discriminative ability for seizure detection, with an accuracy of 92.34%, precision of 93.61%, recall of 98.74%, and F1-score of 95.77%. The results show that unsupervised learning with the CAE benefits the characterization of EEG signals, and the proposed Fd-CAE method significantly improves classification performance.
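The first step of the pipeline above, extracting time-domain and entropy-domain features per EEG window, can be sketched as follows. The abstract does not list the exact feature set, so the features below (mean, standard deviation, peak-to-peak amplitude, line length, Shannon entropy) are illustrative assumptions, not the paper's definitive choices:

```python
import numpy as np

def eeg_window_features(window, n_bins=16):
    """Extract simple time-domain and entropy-domain features from one
    EEG window (1-D array of samples). Illustrative feature set only."""
    feats = {
        "mean": float(np.mean(window)),
        "std": float(np.std(window)),
        "ptp": float(np.ptp(window)),                      # peak-to-peak amplitude
        "line_length": float(np.sum(np.abs(np.diff(window)))),
    }
    # Shannon entropy of the amplitude histogram (entropy-domain feature)
    hist, _ = np.histogram(window, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    feats["shannon_entropy"] = float(-np.sum(p * np.log2(p)))
    return feats

rng = np.random.default_rng(0)
window = rng.standard_normal(256)     # one mock 256-sample EEG window
features = eeg_window_features(window)
```

Feature vectors like this, computed over many windows, would form the unlabeled dataset fed to the CAE for unsupervised pretraining.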
With the rapid development of the Internet and network technology, network security problems have become increasingly prominent. For the important task of network attack detection, traditional methods often struggle to capture and identify new types of attack behavior, whereas models based on deep learning and evolutionary algorithms are considered better suited to complex, ever-changing attack environments. This paper explores how to combine a convolutional autoencoder (CAE) and a genetic algorithm (GA) to construct an efficient network attack detection model and improve network security defenses. First, the convolutional autoencoder learns an effective feature representation of the network data; combined with a hierarchical attention mechanism, appropriate weights are assigned to the classification tasks under different features and feature fusion is performed. A GA then adaptively optimizes a random forest to improve its performance and robustness in network attack detection, ultimately achieving better network security protection and more effectively preventing and combating various security threats. Experiments are conducted on multiple datasets and compared with other benchmark methods. The results show that the model achieves significant improvement in detecting text-based network attacks and classifies various attack types more effectively, bringing new technological breakthroughs and application prospects for network attack detection. It also provides new ideas and methods for the further development of network security, helping to ensure the safe and stable operation of network systems.
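The GA-based optimization step described above can be sketched with a minimal genetic algorithm. The abstract does not specify the genome encoding or fitness function, so the hyperparameter ranges (`n_estimators`, `max_depth` of a random forest) and the toy fitness below are assumptions standing in for validation accuracy:

```python
import random

# Hypothetical random-forest hyperparameter search space (assumed).
N_TREES = range(10, 201)
MAX_DEPTH = range(2, 21)

def fitness(genome):
    """Toy stand-in for validation accuracy: peaks at 120 trees, depth 10."""
    n_trees, depth = genome
    return 1.0 - abs(n_trees - 120) / 200 - abs(depth - 10) / 40

def mutate(genome, rate=0.3):
    n_trees, depth = genome
    if random.random() < rate:
        n_trees = random.choice(N_TREES)
    if random.random() < rate:
        depth = random.choice(MAX_DEPTH)
    return (n_trees, depth)

def crossover(a, b):
    # One-point crossover: trees from parent a, depth from parent b.
    return (a[0], b[1])

def evolve(pop_size=20, generations=30, seed=0):
    random.seed(seed)
    pop = [(random.choice(N_TREES), random.choice(MAX_DEPTH))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In the paper's setting, `fitness` would train a random forest with the candidate hyperparameters on the CAE-derived features and return its validation score.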
Functional Magnetic Resonance Imaging (fMRI) has for many decades served as a potential aid in diagnosing medical problems. Several successful machine learning algorithms have been proposed in the literature to extract valuable knowledge from fMRI. One of these is the convolutional neural network (CNN), which is highly capable of learning optimal abstractions of fMRI data. This is because the CNN learns features in a manner similar to the human brain: it preserves local structure and avoids distortion of the global feature space. Building on the achievements of CNNs for fMRI, the Deep Convolutional Auto-Encoder (DCAE) benefits from a data-driven approach with the CNN's optimal features to strengthen fMRI classification. In this paper, a new deep discriminative approach for classifying fMRI images, composed of two consecutive multi-layer DCAEs, is proposed. The first DCAE is an unsupervised sub-model composed of four CNNs. It focuses on learning weights that exploit the discriminative characteristics of the extracted features for robust, lower-dimensional reconstruction of fMRI, capturing tiny details and refining them through its deep multiple layers. The second DCAE is a supervised sub-model that trains on labels to reach better results. The proposed approach proved effective and improved on previously reported results on a large brain-disorder fMRI dataset.
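The two properties the abstract attributes to CNNs, local-structure preservation via convolution and the reconstruction objective of the unsupervised DCAE sub-model, can be illustrated with a naive numpy sketch. The kernel, image size, and loss here are illustrative assumptions; a real DCAE uses learned, optimized convolution layers:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2-D valid cross-correlation: each output pixel is a weighted
    sum of a local patch, which is how convolution preserves local structure."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def reconstruction_mse(x, x_hat):
    """Loss minimized by the unsupervised (reconstruction) sub-model."""
    return float(np.mean((x - x_hat) ** 2))

rng = np.random.default_rng(1)
slice_ = rng.standard_normal((16, 16))                # mock fMRI slice
feat = conv2d_valid(slice_, np.ones((3, 3)) / 9.0)    # 3x3 averaging kernel
```

The supervised second DCAE would then take such feature maps (after the full encoder stack) and train against diagnostic labels.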
At present, deep clustering analysis of time series data (TSD) is a research hotspot with great research value and practical significance. However, three problems remain: (1) in deep clustering based on joint optimization, the unavoidable mutual interference between deep feature representation learning and clustering makes model training difficult, especially in the initial stage, and can cause feature space distortion and inaccurate, weak feature representations; (2) existing deep clustering methods struggle to define time series similarity intuitively and rely heavily on complex feature extraction networks and clustering algorithms; (3) multidimensional time series have high dimensionality, complex relationships between dimensions, and variable data forms, generating a huge feature space from which existing methods find it difficult to select discriminative features, resulting in generally low accuracy. To address these three problems, we propose a novel, general two-stage multi-view deep clustering method based on multi-dimensional spatial features, 1DCAE-TSSAMC (one-dimensional deep convolutional auto-encoder based two-stage stepwise amplification multi-clustering). We conducted verification and analysis on important real-world multi-scenario data, comparing against many benchmarks ranging from the most classic approaches, such as K-means and hierarchical clustering, to state-of-the-art deep learning approaches, such as Deep Temporal Clustering (DTC) and the Temporal Clustering Network (TCN). Experimental results show that the new method outperforms the other benchmarks and provides more accurate, richer, and more reliable analysis results, with significant improvement in accuracy and the linear separability of the feature space.
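The two-stage idea, encode each series with a 1-D convolutional model, then cluster in the learned feature space, can be sketched minimally. The fixed strided-averaging "encoder" and plain k-means below are illustrative stand-ins; the actual 1DCAE learns its kernels, and TSSAMC's stepwise amplification multi-clustering is not reproduced here:

```python
import numpy as np

def encode_1d(series, kernel_size=4, stride=4):
    """Crude stand-in for a 1-D convolutional encoder: strided local
    averaging mapping each series to a shorter feature vector."""
    n = (len(series) - kernel_size) // stride + 1
    return np.array([series[i * stride : i * stride + kernel_size].mean()
                     for i in range(n)])

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means on the encoded features (second stage)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(2)
low = rng.standard_normal((10, 32))           # cluster of low-level series
high = rng.standard_normal((10, 32)) + 5.0    # cluster shifted upward
X = np.array([encode_1d(s) for s in np.vstack([low, high])])
labels = kmeans(X, k=2)
```

On well-separated data like this toy example, clustering in the compressed feature space recovers the two groups; the paper's contribution lies in making the features discriminative when the separation is far less obvious.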
In medical imaging, Computed Tomography (CT) is one of the most frequently used modalities for diagnosing different disorders. Deep learning has become significant in medical imaging and has been investigated specifically for low-dose CT. In recent CT scanners, Low-Dose CT (LDCT) reconstruction is handled with a post-processing approach that uses deep learning-based methods to reduce the noise level. Applying a low radiation dose reduces harm to patients, but the projection data are corrupted by noise due to lower intensity and fewer angle measurements, resulting in excessive noise in the reconstructed CT image. The presence of noise and artifacts in LDCT images therefore limits their potential use. Here, a vector quantized convolutional encoder network is proposed for the image reconstruction task. The network is trained on the LoDoPaB-CT dataset and tested on chest CT images. Qualitatively and quantitatively, it produces better results than recent state-of-the-art deep learning-based methods. The quality of the results is further improved with perceptual and bias-reducing loss functions.
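The core of a vector quantized encoder is the quantization step: each latent vector produced by the encoder is snapped to its nearest codebook entry. A minimal numpy sketch of that lookup follows; the convolutional encoder/decoder and codebook learning (e.g. VQ-VAE style commitment losses) are omitted, and the codebook here is random for illustration:

```python
import numpy as np

def vector_quantize(latents, codebook):
    """Replace each latent vector with its nearest codebook entry
    (Euclidean distance); returns the quantized vectors and code indices."""
    d = np.linalg.norm(latents[:, None, :] - codebook[None, :, :], axis=2)
    indices = d.argmin(axis=1)
    return codebook[indices], indices

rng = np.random.default_rng(3)
codebook = rng.standard_normal((8, 4))     # 8 codes, 4-dim latent space
# Mock encoder outputs: codes 0, 5, 5 plus small perturbations.
latents = codebook[[0, 5, 5]] + 0.01 * rng.standard_normal((3, 4))
quantized, idx = vector_quantize(latents, codebook)
```

In the full network, the decoder reconstructs the CT image from `quantized`, and the perceptual and bias-reducing losses are applied to that reconstruction.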