  • Article (No Access)

    Classifying Breast Density in Mammographic Images Using Wavelet-Based and Fine-Tuned Sensory Neural Networks

    In modern biomedical practice, breast density classification is an important part of breast cancer diagnosis. This research aims to determine a patient's breast density from mammogram images using computer-aided techniques and machine learning algorithms, thereby assisting the radiologist. To this end, the paper introduces a deep-learning Convolutional Neural Network (CNN) model based on wavelet transformation and fine-tuning that automatically classifies a patient's breast density. In this method, the last two fully connected layers of a pre-trained AlexNet model are removed and replaced with two newly added layers, which improves the classification process. Unlike typical CNN-based methods, the model accepts either original or preprocessed images at the input level, which also makes it compatible with redundant wavelet coefficients. Because distinguishing scattered density from heterogeneous density is particularly important in radiology, this distinction is the main objective of the research. The proposed method achieves an accuracy of 82.2%, and its effectiveness and performance compare favorably with traditional fine-tuned CNN models. These comparative results suggest that the proposed method is a helpful tool for radiologists and may act as a second eye for doctors when classifying breast density categories during breast cancer screening.
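    To make this concrete, the sketch below shows one plausible reading of the described architecture in PyTorch: a pre-trained AlexNet whose last two fully connected layers are removed and replaced with two new layers, fed with redundant (stationary) wavelet sub-bands stacked as input channels. The class count, new layer widths, and wavelet choice are illustrative assumptions, not the authors' exact configuration.

    ```python
    # A minimal sketch, assuming PyTorch, torchvision, and PyWavelets are available.
    import numpy as np
    import pywt
    import torch
    import torch.nn as nn
    import torchvision.models as models

    NUM_CLASSES = 4  # e.g., four BI-RADS density categories (assumption)

    def wavelet_channels(image: np.ndarray) -> np.ndarray:
        """Redundant (stationary) wavelet transform: every sub-band keeps the
        original spatial size, so the sub-bands can be stacked as channels.
        Image dimensions must be even for a single SWT level."""
        approx, (horiz, vert, diag) = pywt.swt2(image, wavelet="haar", level=1)[0]
        return np.stack([approx, horiz, vert, diag], axis=0).astype(np.float32)

    def build_finetuned_alexnet() -> nn.Module:
        """Drop AlexNet's last two fully connected layers and append two new ones."""
        net = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
        kept = list(net.classifier.children())[:3]  # Dropout, Linear(9216->4096), ReLU
        net.classifier = nn.Sequential(
            *kept,
            nn.Linear(4096, 256),         # newly added layer 1 (width is an assumption)
            nn.ReLU(inplace=True),
            nn.Linear(256, NUM_CLASSES),  # newly added layer 2
        )
        # Re-initialize the first conv layer for the 4 wavelet sub-band channels.
        net.features[0] = nn.Conv2d(4, 64, kernel_size=11, stride=4, padding=2)
        return net

    model = build_finetuned_alexnet()
    img = np.random.rand(224, 224)                            # stand-in mammogram patch
    x = torch.from_numpy(wavelet_channels(img)).unsqueeze(0)  # (1, 4, 224, 224)
    print(model(x).shape)                                     # -> torch.Size([1, 4])
    ```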

  • Article (Open Access)

    A Novel Deep Learning Method for Brain Tumor Segmentation in Magnetic Resonance Images Based on Residual Units and Modified U-Net Model

    Brain tumors are among the deadliest forms of cancer, as the brain is a crucial organ for human activity, and early detection and treatment are key to recovery. An expert's final diagnosis mainly depends on the evaluation of Magnetic Resonance Imaging (MRI) scans, but the traditional manual assessment process is time-consuming and error-prone, and it depends on the doctor's experience and knowledge, along with other variable factors. An automated brain tumor detection system can therefore assist radiologists and internal medicine experts in detecting and diagnosing brain tumors. This study proposes a novel deep learning model that combines residual units with a modified U-Net framework for brain tumor segmentation in brain MR images. The U-Net-based framework is implemented with stacks of neural units and residual units and uses the Leaky Rectified Linear Unit (LReLU) as the model's activation function. First, neural units are added before the first downsampling and upsampling layers to enhance feature propagation and reuse. Then, stacked residual blocks are applied to extract deep semantic information during downsampling and to classify pixels during upsampling. Finally, a single-layer convolution outputs the predicted segmentation. Experimental results show that the model achieves a segmentation Dice Similarity Coefficient of 90.79% and demonstrates better segmentation accuracy than other research models.
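    The sketch below gives one plausible reading of this architecture in PyTorch: residual blocks with LReLU activations on both the downsampling and upsampling paths, and a single-layer convolution producing the segmentation map. The channel widths, depth, and use of batch normalization are illustrative assumptions, not the paper's exact configuration.

    ```python
    # A minimal sketch of a residual U-Net variant, assuming PyTorch.
    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """Two 3x3 convolutions with a skip connection and LReLU activations."""
        def __init__(self, in_ch: int, out_ch: int):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.LeakyReLU(0.01, inplace=True),
                nn.Conv2d(out_ch, out_ch, 3, padding=1),
                nn.BatchNorm2d(out_ch),
            )
            self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
            self.act = nn.LeakyReLU(0.01, inplace=True)

        def forward(self, x):
            return self.act(self.body(x) + self.skip(x))  # residual addition

    class ResUNet(nn.Module):
        """U-Net-style encoder/decoder built from residual blocks."""
        def __init__(self, in_ch: int = 1, n_classes: int = 1, width: int = 32):
            super().__init__()
            self.enc1 = ResidualBlock(in_ch, width)
            self.enc2 = ResidualBlock(width, width * 2)
            self.bottleneck = ResidualBlock(width * 2, width * 4)
            self.pool = nn.MaxPool2d(2)
            self.up2 = nn.ConvTranspose2d(width * 4, width * 2, 2, stride=2)
            self.dec2 = ResidualBlock(width * 4, width * 2)
            self.up1 = nn.ConvTranspose2d(width * 2, width, 2, stride=2)
            self.dec1 = ResidualBlock(width * 2, width)
            self.head = nn.Conv2d(width, n_classes, 1)  # single-layer conv output

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            b = self.bottleneck(self.pool(e2))
            d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
            d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
            return self.head(d1)  # per-pixel tumor logits

    mask = ResUNet()(torch.randn(1, 1, 128, 128))  # stand-in MR slice
    print(mask.shape)                              # -> torch.Size([1, 1, 128, 128])
    ```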

  • Article (No Access)

    Evaluating the Pertinence of Pose Estimation Model for Sign Language Translation

    Sign language is the natural language of the hearing-impaired community. Because it is used by a comparatively small part of society, it must be converted into a commonly understandable form. Automatic sign language interpreters can convert signs into text or audio by interpreting hand movements and the corresponding facial expressions; these two modalities work in tandem to give complete meaning to each word. In verbal communication, emotions can be conveyed by changing the tone and pitch of the voice, but in sign language they are expressed through nonmanual movements that include body posture and facial muscle movements. Each such subtle movement should be treated as a feature and extracted using a suitable model. This paper proposes three models for different levels of sign language: the first experiment uses a Convex Hull-based Sign Language Recognition (SLR) model for fingerspelling, the second a Convolutional Neural Network-based model (CNN-SLR) for fingerspelling, and the third a pose-based SLR model for word-level sign language. The experiments show that the pose-based SLR model, which captures features as landmarks or key points, achieves better recognition accuracy than the Convex Hull-based and CNN-based models.
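    As a rough illustration of the pose-based variant, the sketch below assumes per-frame landmarks have already been extracted (for example, with a pose estimator such as MediaPipe Holistic) and classifies the keypoint sequence into word-level signs. The landmark count, sequence length, vocabulary size, and the recurrent classifier itself are illustrative assumptions, not the paper's model.

    ```python
    # A minimal sketch, assuming PyTorch; landmarks arrive as (x, y) coordinates.
    import torch
    import torch.nn as nn

    N_LANDMARKS = 75   # e.g., 33 body + 21 keypoints per hand (assumption)
    SEQ_LEN = 30       # frames sampled per sign (assumption)
    N_WORDS = 100      # word-level sign vocabulary size (assumption)

    class PoseSLR(nn.Module):
        """Encode a sequence of 2D keypoints with a GRU and classify the sign."""
        def __init__(self):
            super().__init__()
            self.rnn = nn.GRU(input_size=N_LANDMARKS * 2, hidden_size=128,
                              num_layers=2, batch_first=True)
            self.head = nn.Linear(128, N_WORDS)

        def forward(self, keypoints):                # (batch, SEQ_LEN, N_LANDMARKS, 2)
            b, t, k, c = keypoints.shape
            feats = keypoints.reshape(b, t, k * c)   # flatten keypoints per frame
            _, hidden = self.rnn(feats)              # final hidden state summarizes the sequence
            return self.head(hidden[-1])             # (batch, N_WORDS) logits

    model = PoseSLR()
    dummy = torch.randn(2, SEQ_LEN, N_LANDMARKS, 2)  # stand-in extracted landmarks
    print(model(dummy).shape)                        # -> torch.Size([2, 100])
    ```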