Background and Objective: Alzheimer’s disease is now the most common cause of dementia. It is a degenerative neurological pathology affecting the brain that progressively leads the patient to a state of total dependence, creating a very complex and difficult situation for the family providing care. Early diagnosis is a primary objective and offers the hope of intervening during the development phase of the disease. Methods: In this paper, a method to automatically detect the presence of Alzheimer’s disease by exploiting deep learning is proposed. Five different convolutional neural networks are considered: ALEX_NET, VGG16, FAB_CONVNET, STANDARD_CNN and FCNN. The first two networks are state-of-the-art models, while the last three are designed by the authors. We classify brain images into one of the following classes: non-demented, very mild demented and mild demented. Moreover, we highlight on the image the areas symptomatic of Alzheimer’s disease, thus providing a visual explanation of the model’s diagnosis. Results: The experimental analysis, conducted on more than 6000 magnetic resonance images, demonstrated the effectiveness of the proposed neural networks in comparison with state-of-the-art models for Alzheimer’s disease diagnosis and localization. The best results are obtained with STANDARD_CNN and FCNN, with accuracy, precision and recall between 95% and 98%. Excellent results are also obtained from a qualitative point of view with Grad-CAM for localization and visual explainability. Conclusions: The analysis of the heatmaps produced by the Grad-CAM algorithm shows that in almost all cases the heatmaps highlight regions such as the ventricles and the cerebral cortex. Future work will focus on the realization of a network capable of analyzing the three anatomical views simultaneously.
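The Grad-CAM step used for visual explainability above combines a convolutional layer's feature maps, weighted by the gradient of the class score with respect to each map. A minimal NumPy sketch of that formula follows; the `grad_cam` function name, array shapes and (C, H, W) layout are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Minimal Grad-CAM sketch (assumed shapes, not the paper's code).

    activations: (K, H, W) feature maps of a chosen conv layer
    gradients:   (K, H, W) gradients of the class score w.r.t. them
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Channel weights alpha_k: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))             # shape (K,)
    # Weighted combination of the feature maps.
    cam = np.tensordot(weights, activations, axes=1)  # shape (H, W)
    # ReLU keeps only features with positive influence on the class.
    cam = np.maximum(cam, 0.0)
    # Normalize for display as a heatmap over the MR slice.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In practice the heatmap would be upsampled to the input image size and overlaid on the MR slice to highlight regions such as the ventricles and cortex.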
The introduction of Optical Coherence Tomography (OCT) in ophthalmology has resulted in significant progress in the early detection of glaucoma. Traditional approaches to identifying retinal diseases comprise an analysis of medical history and manual assessment of retinal images. Manual diagnosis is time-consuming and requires considerable human expertise, without which errors could be costly to human sight. The use of artificial intelligence, such as machine learning techniques, in image analysis has been gaining ground in recent years for accurate, fast and cost-effective diagnosis from retinal images. This work proposes a Directed Acyclic Graph (DAG) network combined with Depthwise Convolution (DC) to decisively recognize early-stage retinal glaucoma from OCT images. The proposed method leverages the benefits of both depthwise convolution and the DAG structure. The Convolutional Neural Network (CNN) information obtained in the proposed architecture is processed according to the partial order over the nodes. The Grad-CAM method is adopted to quantify and visualize normal and glaucomatous OCT heatmaps to improve diagnostic interpretability. The experiments were performed on the LFH_Glaucoma dataset, composed of 1105 glaucoma and 1049 healthy OCT scans. The proposed faster hybrid Depthwise-Directed Acyclic Graph Network (D-DAGNet) achieved an accuracy of 0.9995, precision of 0.9989, recall of 1.0, F1-score of 0.9994 and AUC of 0.9995 with only 0.0047 M learnable parameters. The hybrid D-DAGNet enhances network training efficacy and significantly reduces the learnable parameters required to identify the features of interest. The proposed network overcomes the problems of overfitting and performance degradation caused by the accretion of layers in a deep network, and is thus useful for real-time identification of glaucoma features from retinal OCT images.
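The parameter savings claimed for depthwise convolution come from giving each input channel its own filter instead of mixing all channels: for C channels and a k×k kernel it needs C·k·k weights, versus C_in·C_out·k·k for a standard convolution. A minimal sketch of the operation, assuming a (C, H, W) layout and valid padding (not the D-DAGNet code itself):

```python
import numpy as np

def depthwise_conv2d(x, kernels):
    """Valid-mode depthwise convolution (cross-correlation, as in most
    deep-learning frameworks): each input channel is filtered by its
    own kernel, with no mixing across channels.

    x:       (C, H, W) input feature map
    kernels: (C, kH, kW) one filter per channel
    """
    C, H, W = x.shape
    _, kH, kW = kernels.shape
    out = np.zeros((C, H - kH + 1, W - kW + 1))
    for c in range(C):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                # Per-channel sliding-window dot product.
                out[c, i, j] = np.sum(x[c, i:i + kH, j:j + kW] * kernels[c])
    return out
```

In a real network this is typically followed by a 1x1 pointwise convolution to mix channels, which is where most of the remaining parameter budget goes.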
Genetic perturbation of T cell receptor (TCR) T cells is a promising method to unlock better TCR T cell performance and create more powerful cancer immunotherapies, but understanding the changes in T cell behavior induced by genetic perturbations remains a challenge. Prior studies have evaluated the effect of different genetic modifications with cytokine production and metabolic activity assays. Live-cell imaging is an inexpensive and robust approach to capture TCR T cell responses to cancer. Most methods to quantify T cell responses in live-cell imaging data use simple approaches to count T cells and cancer cells across time, effectively quantifying how much space in the 2D well each cell type covers, leaving actionable information unexplored. In this study, we characterize changes in TCR T cells' interactions with cancer cells from live-cell imaging data using explainable artificial intelligence (AI). We train convolutional neural networks to distinguish behaviors of TCR T cells with CRISPR knockouts of CUL5, RASA2, and a safe-harbor control knockout. We use explainable AI to identify specific interaction types that define the different knockout conditions. We find that T cell and cancer cell coverage is a strong marker of TCR T cell modification when comparing similar experimental time points, but differences in cell aggregation characterize CUL5KO and RASA2KO behavior across all time points. Our pipeline for discovery in live-cell imaging data can be used to characterize complex behaviors in arbitrary live-cell imaging datasets, and we describe best practices for this goal.
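The simple coverage metric described above, how much of the 2D well a cell type occupies at each time point, could be computed from per-frame segmentation masks roughly as follows. This is a hypothetical NumPy sketch, not the study's pipeline; the `coverage_over_time` name and (T, H, W) mask layout are assumptions:

```python
import numpy as np

def coverage_over_time(masks):
    """Per-frame area coverage for one cell type.

    masks: (T, H, W) boolean stack of segmentation masks, one frame
    per imaging time point, True where the cell type is present.
    Returns an array of T coverage fractions in [0, 1].
    """
    masks = np.asarray(masks, dtype=bool)
    # Flatten each frame and take the mean of the boolean mask,
    # which equals the fraction of pixels covered.
    return masks.reshape(masks.shape[0], -1).mean(axis=1)
```

Comparing these coverage curves between knockout conditions at matched time points would reproduce the kind of simple baseline the study contrasts with its explainable-AI analysis of richer interaction features.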