Automatic analysis of medical images, and endoscopic images in particular, has been an attractive research topic in recent years. Achieving this goal requires many tasks, for example lesion detection, segmentation, and classification. However, existing methods for these problems still face challenges due to the spreading characteristic of the lesions as well as artifacts caused by motion, specularities, low contrast, bubbles, debris, bodily fluid, and blood. As a consequence, segmentation or detection alone cannot deal with these issues. Besides, although deep learning has achieved impressive results in many tasks, the lack of a large annotated dataset can lead to overfitting. In this paper, we tackle these issues by taking the particular characteristics of the lesions of interest and the advantages of deep segmentation models into account. We propose a dual-path framework (namely, DCS-UNet) that combines segmentation and classification. We first segment lesions from the image using a U-Net architecture with different encoder backbones (ResNet-50, VGG-16, DenseNet-201). The segmented regions are then refined in the second path, where we classify every patch in the whole image or in the Inner and Outer regions extended from the contours given by the segmentation results. For this refinement scheme, we utilize color-dependent and texture features, following doctors’ advice, and deploy a support vector machine (SVM) to classify each patch as diseased or not. Extensive experiments were conducted on three datasets of gastroesophageal reflux disease (GERD) endoscopic images, namely GradeM, GradeA, and GradeB, defined according to the modified Los Angeles classification. The experimental results show the improvements of the proposed method over a single U-Net and a segmentation scheme based on hand-designed features.
The proposed method improves mDice and mIoU by 0.5% and 0.36% on the GradeA dataset, the most challenging one owing to the ambiguous separation of GERD and normal regions, and by 0.82% and 0.81% on the GradeM dataset, in which GERD regions are clearer to segment. The proposed framework shows the potential for developing a diagnosis assistance system to help endoscopists reduce the burden of examining GERD in the future.
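The second-path refinement described above can be illustrated with a minimal sketch. This is not the authors' code: the patch size, the color/texture features (per-channel mean and standard deviation standing in for the paper's color-dependent and texture descriptors), and the synthetic data are all illustrative assumptions; only the overall idea — an SVM re-classifying patches to refine a coarse segmentation mask — comes from the abstract.

```python
import numpy as np
from sklearn.svm import SVC

def patch_features(patch):
    # Illustrative stand-in for the paper's color-dependent and texture
    # features: per-channel mean (color) and per-channel std (texture proxy).
    return np.concatenate([patch.mean(axis=(0, 1)), patch.std(axis=(0, 1))])

def refine_mask(image, coarse_mask, clf, patch=8):
    """Second path: re-classify every patch with the SVM and keep the
    coarse (first-path) mask only where the SVM agrees it is diseased."""
    h, w = coarse_mask.shape
    refined = np.zeros_like(coarse_mask)
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            feat = patch_features(image[y:y + patch, x:x + patch])
            if clf.predict(feat[None, :])[0] == 1:
                refined[y:y + patch, x:x + patch] = \
                    coarse_mask[y:y + patch, x:x + patch]
    return refined

# Synthetic demo: "diseased" patches are reddish, "normal" ones grey.
rng = np.random.default_rng(0)
dis = rng.uniform(0.6, 1.0, (50, 8, 8, 3)); dis[..., 1:] *= 0.3
nor = rng.uniform(0.2, 0.5, (50, 8, 8, 3))
X = np.array([patch_features(p) for p in np.concatenate([dis, nor])])
y = np.array([1] * 50 + [0] * 50)
clf = SVC().fit(X, y)

img = rng.uniform(0.2, 0.5, (32, 32, 3))             # normal tissue
red = rng.uniform(0.6, 1.0, (16, 16, 3)); red[..., 1:] *= 0.3
img[:16, :16] = red                                  # "lesion" quadrant
coarse = np.ones((32, 32), dtype=int)                # over-segmented first path
refined = refine_mask(img, coarse, clf)              # lesion quadrant survives
```

In the sketch, the first-path mask deliberately over-segments; the SVM pass removes false-positive patches, which is the role the refinement path plays for the U-Net output in the paper.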
Breast cancer is the most common cancer in women worldwide. Computer-Aided Detection (CADe) has attracted increasing research interest in recent years. Data exploration and lesion detection in medical images are tedious but can be accelerated using computational intelligence. One approach is to adapt and configure recent deep learning-based object detectors from computer vision to detect abnormalities in medical images. This chapter starts with three state-of-the-art detectors, namely Faster R-CNN, YOLO v2, and Grad-CAM, to determine the location of lesions in breast Magnetic Resonance Images (MRI). Each individual detector is first tuned by adjusting network hyper-parameters and backbone architectures in order to maximize Average Precision (AP). Two different integration approaches, namely cascaded and parallel integration, are then proposed and implemented to improve the AP further. The cascaded integration builds on a coarse-to-fine strategy: it uses Grad-CAM to compute coarse bounding-box locations for lesions and then applies YOLO v2 or Faster R-CNN as a fine detector. In the parallel integration, the detection result is a combination of the results from the three detectors, with the aim of reducing missed detections. The integrated deep learning models exhibit enhanced performance on the detection of breast lesions in MRIs.
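The parallel integration can be sketched as pooling the boxes from all detectors and merging duplicates with greedy non-maximum suppression, so a lesion found by any one detector survives. This is an assumption about the merging rule, not the chapter's implementation; the box format, IoU threshold, and example scores are all illustrative.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2, score) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def parallel_merge(detector_outputs, iou_thr=0.5):
    """Pool boxes from every detector, then greedy NMS keeps the
    highest-scoring box of each overlapping group."""
    pooled = sorted((b for boxes in detector_outputs for b in boxes),
                    key=lambda b: -b[4])
    kept = []
    for box in pooled:
        if all(iou(box, k) < iou_thr for k in kept):
            kept.append(box)
    return kept

# One lesion seen by two detectors, plus one lesion only a third found:
merged = parallel_merge([
    [(0, 0, 10, 10, 0.9)],    # e.g. Faster R-CNN
    [(1, 1, 11, 11, 0.8)],    # e.g. YOLO v2, same lesion
    [(20, 20, 30, 30, 0.7)],  # e.g. Grad-CAM-derived box, missed by the others
])
```

The third box overlaps nothing, so it is kept: this is how pooling detector outputs reduces missed detections even though any single detector would have missed that lesion.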