With the sharp rise in deaths caused by the novel coronavirus (nCoV), immunocompromised persons are at high risk, and lung cancer patients are no exception. The primary aim of this paper is to classify lung cancer and Covid-19 in chest CT images. For this, we propose a deep ensemble neural network (VGG16, DenseNet121, ResNet50, and a custom CNN) to detect Covid-19 and lung cancer. We validate our model on three different datasets: the SPIE-AAPM Lung CT Challenge (1503 images), the Covid CT dataset (349 images), and the SARS-CoV-2 CT-scan dataset (1252 images). Using k(=5)-fold cross-validation on the individual deep neural networks (DNNs) and the custom-designed CNN architecture, we achieve a benchmark accuracy of 96.30%, with sensitivity and precision of 96.39% and 98.44%, respectively. The proposed model effectively leverages the diversity of its constituent models. To the best of our knowledge, this is the first work to use an ensemble of DNNs on chest CT images to separate lung cancer from Covid-19 (and vice versa).
Because our aim is to classify Covid-19 and lung cancer from chest CT images, the approach helps prioritize immunocompromised persons with Covid-19 for better patient care. Mass screening also becomes possible, especially in resource-constrained regions, since CT scans are relatively inexpensive. The long-term goal is to determine whether AI-guided tools can prioritize patients at high risk (e.g., with lung disease) during any future infectious disease outbreak.
Over the last decades, with the rapid growth of technological progress, interest in digital imaging modalities such as computed tomography (CT) and magnetic resonance imaging, which emerged in the 1970s, has continued to grow. Such medical data can be applied to numerous visual recognition tasks. In this context, these data may be segmented to generate a precise 3D representation of an organ that can be visualized and manipulated to aid surgeons during surgical interventions. Traditionally, the segmentation process is performed manually using image processing software. Within this framework, multiple notable approaches have been elaborated; however, they proved inefficient and required human intervention to select the segmentation region appropriately. Over the last few years, automatic methods based on deep learning have outperformed state-of-the-art segmentation approaches thanks to Convolutional Neural Networks. In this paper, segmentation of preoperative patients' CT scans based on a deep learning architecture was carried out to determine the target organ's shape. The segmented 2D CT images are then used to generate a patient-specific biomechanical 3D model. To assess the efficiency and reliability of the proposed approach, the 3DIRCADb dataset was used. The segmentation results were obtained through the implementation of a U-net architecture with good accuracy.
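The U-net used above follows the familiar encoder-decoder pattern with skip connections. As a minimal sketch (the depth, channel widths, and sigmoid output head below are assumptions, not the paper's exact configuration), a two-level U-net in PyTorch looks like this:

```python
import torch
import torch.nn as nn


def double_conv(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with ReLU, the basic U-net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))


class MiniUNet(nn.Module):
    """Hedged two-level U-net sketch for binary organ segmentation;
    real implementations use four or five levels and more channels."""

    def __init__(self, in_ch: int = 1, out_ch: int = 1):
        super().__init__()
        self.enc1 = double_conv(in_ch, 16)
        self.enc2 = double_conv(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = double_conv(64, 32)   # 64 = 32 upsampled + 32 skip
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = double_conv(32, 16)   # 32 = 16 upsampled + 16 skip
        self.head = nn.Conv2d(16, out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)                          # full resolution
        e2 = self.enc2(self.pool(e1))              # 1/2 resolution
        b = self.bottleneck(self.pool(e2))         # 1/4 resolution
        # Decoder: upsample and concatenate the matching encoder features.
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))        # per-pixel mask in [0, 1]
```

The skip connections are what let the decoder recover the fine organ boundaries needed to build an accurate patient-specific 3D model from the stacked 2D masks.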