
  • Article (No Access)

    PIRAP: A Study on Optimized Multi-Language Classification and Text Categorization Using Supervised Hybrid Machine Learning Approaches

    Nowadays, records in many languages are available in digital form. For easy retrieval, these digitized documents must be assigned to a class according to their content. Text Categorization, an area of Text Mining, addresses this challenge: Text Classification is the act of assigning classes to documents. This paper investigates Text Classification work done on foreign languages, regional languages and a list of books’ content. Text available in different languages poses difficulties for NLP approaches. This study shows that supervised ML algorithms such as Logistic Regression, the Naive Bayes classifier, the k-Nearest-Neighbor classifier, Decision Trees and SVMs perform well on Text Classification tasks. Automated document classification is useful in day-to-day life for identifying the language of a document and for categorizing departmental books by their text content. We classify documents in several foreign and regional languages: Tamil, Telugu, Kannada, Bengali, English, Spanish, French, Russian and German. We use one-versus-all SVMs for multi-class classification with 3-fold cross-validation in all cases and observe that SVMs outperform the other classifiers. The implementation uses hybrid classifiers, and we report experiments with both soft-margin linear SVMs and kernel-based SVMs.
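The one-versus-all decision rule with 3-fold cross-validation mentioned above can be sketched as follows. This is a toy illustration, not the paper's implementation: a simple nearest-centroid scorer stands in for the soft-margin SVMs, and the data and feature vectors are synthetic.

```python
# Toy sketch of one-vs-all multi-class classification with 3-fold CV.
# A nearest-centroid score stands in for each binary SVM's decision
# function; real text features (e.g. TF-IDF vectors) are assumed.
import numpy as np

def one_vs_all_fit(X, y):
    """One scorer per class; here each 'scorer' is just the mean
    feature vector (centroid) of its positive class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def one_vs_all_predict(models, X):
    """Pick the class whose scorer responds most strongly
    (negative distance to the class centroid)."""
    classes = sorted(models)
    scores = np.stack(
        [-np.linalg.norm(X - models[c], axis=1) for c in classes], axis=1)
    return np.array([classes[i] for i in scores.argmax(axis=1)])

def cv3_accuracy(X, y, seed=0):
    """Mean accuracy over 3 cross-validation folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, 3)
    accs = []
    for k in range(3):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(3) if j != k])
        models = one_vs_all_fit(X[train], y[train])
        pred = one_vs_all_predict(models, X[test])
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))
```

The one-vs-all scheme trains one scorer per language class and predicts by taking the highest-scoring class, which is also how multi-class SVM decomposition works in practice.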

  • Article (No Access)

    Medical Image Analysis Methods for Accurate Segmentation and Quantification of Acute-Subacute Ischemic Stroke Lesion — A Systematic Review

    Acute ischemic stroke is a medical emergency that necessitates immediate treatment: the longer neural tissue is without blood flow, the greater the risk of irreversible damage and disability. Depending on the extent and location of the blood-flow blockage, the lesion may vary in size and location, and can significantly affect a person’s cognitive, motor, and sensory function. The process of identifying and classifying regions of affected neural tissue during an ischemic stroke is referred to as “ischemic stroke lesion segmentation.” Medical imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI), together with specialized software tools, are typically used to accomplish this. The goal of segmentation is to provide a detailed and accurate map of the extent and location of the stroke damage, which can be used to guide treatment decisions and monitor the patient’s recovery. Although manual segmentation is regarded as the most accurate method, it is time-consuming and subject to inter-observer variability, which leads to inconsistent results. Approaches to segmenting ischemic stroke lesions range from manual delineation by a trained clinician to fully automated algorithms that use machine learning and other advanced techniques. Automated segmentation methods are faster and more objective, but they may require large training sets and can be sensitive to imaging artifacts and other sources of noise. Although fully automatic methods hold promise, semi-automatic methods remain the preferred approach in clinical research. We performed a systematic review of the literature to explore the latest advancements and trends in analyzing ischemic stroke lesions using automated methods developed in the past five years. Our search of IEEE Xplore, Springer, ScienceDirect, Taylor & Francis, and other databases yielded 1580 papers, from which we selected 50 for detailed analysis. Of these studies, 12 employed supervised segmentation, 12 used unsupervised segmentation and 27 employed deep-learning segmentation methods. Only a limited number of studies validated their fully automatic methods using longitudinal samples, and a mere eight included validation using clinical parameters. Furthermore, only 23 of the 50 studies made their methods publicly available. To advance the field, fully automatic methods validated with longitudinal samples and clinical parameters are needed. Moreover, making methods publicly available is essential for promoting reproducibility and facilitating comparison of results.
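As a minimal illustration of the segmentation-and-quantification task the review surveys (not any of the reviewed methods), the sketch below thresholds a synthetic image volume into a binary lesion mask and converts the mask into a lesion volume. Real pipelines replace the fixed threshold with learned models; all array values and the voxel size here are made up.

```python
# Toy baseline for lesion segmentation and quantification:
# intensity thresholding plus voxel counting on a synthetic volume.
import numpy as np

def segment_by_threshold(image, threshold):
    """Binary lesion mask: voxels whose intensity exceeds a threshold."""
    return image > threshold

def lesion_volume_ml(mask, voxel_volume_mm3):
    """Lesion volume in millilitres, given the per-voxel volume in mm^3."""
    return mask.sum() * voxel_volume_mm3 / 1000.0
```

Quantities like this (lesion volume, extent, location) are what segmentation maps feed into when guiding treatment decisions and monitoring recovery.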

  • Article (No Access)

    Human Unsupervised and Supervised Learning as a Quantitative Distinction

    SUSTAIN (Supervised and Unsupervised STratified Adaptive Incremental Network) is a network model of human category learning. SUSTAIN initially assumes a simple category structure. If simple solutions prove inadequate and SUSTAIN is confronted with a surprising event (e.g. it is told that a bat is a mammal, not a bird), SUSTAIN recruits an additional cluster to represent the surprising event. Newly recruited clusters are available to explain future events and can themselves evolve into prototypes/attractors/rules. SUSTAIN has expanded the scope of findings that models of human category learning can address. This paper extends SUSTAIN to account for both supervised and unsupervised learning data through a common mechanism. The modified model, uSUSTAIN (unified SUSTAIN), is successfully applied to human learning data comparing unsupervised and supervised learning performance.
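The surprise-driven cluster recruitment described above can be sketched in a few lines. This is a schematic of the recruitment idea only, not the published SUSTAIN equations; the distance measure, learning rate, and data are assumptions of this sketch.

```python
# Sketch of surprise-driven cluster recruitment: items join their
# nearest cluster unless its label disagrees with supervised feedback,
# in which case a new cluster is recruited for the surprising event.
import math

def nearest(clusters, item):
    """Index of the cluster whose center is closest to the item."""
    return min(range(len(clusters)),
               key=lambda i: math.dist(clusters[i]["center"], item))

def learn(stream, lr=0.2):
    """stream: iterable of (feature_vector, label). Returns clusters."""
    clusters = []
    for item, label in stream:
        if not clusters:
            clusters.append({"center": list(item), "label": label})
            continue
        i = nearest(clusters, item)
        if clusters[i]["label"] != label:
            # Surprising event (e.g. a bat labeled mammal, not bird):
            # recruit a new cluster to represent it.
            clusters.append({"center": list(item), "label": label})
        else:
            # Otherwise nudge the winning cluster toward the item.
            c = clusters[i]["center"]
            for j in range(len(c)):
                c[j] += lr * (item[j] - c[j])
    return clusters
```

Recruited clusters persist and keep adapting, which is how they can evolve into prototype- or rule-like representations over further training.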

  • Chapter (Open Access)

    Machine learning algorithms for simultaneous supervised detection of peaks in multiple samples and cell types

    Joint peak detection is a central problem when comparing samples in epigenomic data analysis, but current algorithms for this task are unsupervised and limited to at most 2 sample types. We propose PeakSegPipeline, a new genome-wide multi-sample peak calling pipeline for epigenomic data sets. It performs peak detection using a constrained maximum likelihood segmentation model with essentially only one free parameter that needs to be tuned: the number of peaks. To select the number of peaks, we propose to learn a penalty function based on user-provided labels that indicate genomic regions with or without peaks in specific samples. In comparisons with state-of-the-art peak detection algorithms, PeakSegPipeline achieves similar or better accuracy, and a more interpretable model with overlapping peaks that occur in exactly the same positions across all samples. Our novel approach is able to learn that predicted peak sizes vary by experiment type.
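The label-driven model selection described above (one free parameter, the number of peaks, chosen from user-provided region labels) can be illustrated with a toy version. This is not PeakSegPipeline's constrained maximum-likelihood segmentation model; the peak caller below simply keeps the k largest local maxima, and the label format is an assumption of this sketch.

```python
# Toy illustration of label-based selection of the number of peaks.
def call_peaks(coverage, k):
    """Positions of the k largest local maxima in a coverage vector."""
    maxima = [i for i in range(1, len(coverage) - 1)
              if coverage[i - 1] < coverage[i] >= coverage[i + 1]]
    maxima.sort(key=lambda i: coverage[i], reverse=True)
    return sorted(maxima[:k])

def label_errors(peaks, labels):
    """labels: list of (start, end, has_peak) regions. Counts regions
    whose labeled state disagrees with the called peaks."""
    errs = 0
    for start, end, has_peak in labels:
        found = any(start <= p < end for p in peaks)
        errs += (found != has_peak)
    return errs

def select_k(coverage, labels, max_k=10):
    """Choose the peak count that minimizes label errors."""
    return min(range(max_k + 1),
               key=lambda k: label_errors(call_peaks(coverage, k), labels))
```

The real pipeline learns a penalty function from such labels rather than scanning k directly, but the principle is the same: labels on regions with and without peaks drive the choice of model size.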

  • Chapter (No Access)

    A New Clustering with Estimation of Cluster Number Based on Genetic Algorithms

    Clustering is primarily used to uncover the true underlying structure of a given data set. Most clustering algorithms depend on initial guesses of the cluster centers and on assumptions about the number of subgroups present in the data. In this paper, we propose a method for fuzzy clustering that requires no initial guess of the cluster number. Our method assumes that clusters follow a normal distribution; it can automatically estimate the cluster number and form the clusters accordingly. Within it, Genetic Algorithms (GAs) with two chromosome coding techniques are evaluated: graph-structured coding achieves higher fitness values, while linear-structured coding reduces the number of generations required.
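The core idea of estimating the cluster number with a GA can be sketched as follows. This toy uses neither of the chapter's two chromosome codings: a chromosome here is simply a list of 1-D cluster centers, so its length is the estimated cluster number, and the fitness is a penalized squared error (the penalty weight is an assumption of this sketch, loosely reflecting the normal-cluster assumption).

```python
# Toy GA that estimates the number of clusters in 1-D data: variable-
# length chromosomes of cluster centers, mutated by adding, dropping,
# or perturbing a center; fitness penalizes extra clusters.
import random

def fitness(centers, data, penalty=2.0):
    """Negative (within-cluster squared error + per-cluster penalty)."""
    sse = sum(min((x - c) ** 2 for c in centers) for x in data)
    return -(sse + penalty * len(centers))

def mutate(centers, data, rng):
    centers = list(centers)
    move = rng.random()
    if move < 0.3 and len(centers) > 1:
        centers.pop(rng.randrange(len(centers)))   # drop a cluster
    elif move < 0.6:
        centers.append(rng.choice(data))           # add a cluster
    else:
        i = rng.randrange(len(centers))            # perturb a center
        centers[i] += rng.gauss(0, 0.2)
    return centers

def estimate_clusters(data, pop=20, gens=60, seed=0):
    """Evolve center lists; the fittest list's length estimates the
    cluster number, with no initial guess required."""
    rng = random.Random(seed)
    popn = [[rng.choice(data)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda c: fitness(c, data), reverse=True)
        survivors = popn[: pop // 2]
        popn = survivors + [mutate(rng.choice(survivors), data, rng)
                            for _ in range(pop - len(survivors))]
    return max(popn, key=lambda c: fitness(c, data))
```

Because the penalty charges each extra cluster more than the small error reduction it buys once the true structure is found, selection settles on the correct cluster count without it being specified in advance.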