
  Bestsellers

  • A Method for Estimating the Posture of Yoga Asanas Exercises Based on Lightweight OpenPose

    To substantially reduce computational complexity and achieve efficient pose estimation for yoga practice, this paper studies a pose estimation method for fitness yoga based on lightweight OpenPose. First, a background model of the video image is constructed through HSV color space conversion and used to segment the foreground and background of the yoga practice footage. A lightweight OpenPose pose estimation model is then established. This model uses a 10-layer VGG19 network to process the segmented yoga pose foreground image and extract feature maps, which are fed into a multi-stage, two-branch convolutional neural network to compute the joint heat maps and joint affinity field maps of the yoga poses. To capture and track human joints in real time, poses are tracked across consecutive video frames using the inter-frame pose distance and the maximum-weight bipartite matching algorithm. A least squares support vector machine classifier integrated into the softmax layer of the lightweight OpenPose network outputs the final pose estimation results. Experiments show that this method effectively performs the RGB-to-HSV color space conversion and foreground-background segmentation of fitness yoga images, accurately identifies the yoga postures corresponding to different frames of a fitness yoga video, and performs well in practice.
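
The inter-frame tracking step described above can be sketched with SciPy's Hungarian solver (a minimal sketch: the pose distance here is the mean joint distance, an illustrative assumption, and minimising total distance is equivalent to the maximum-weight matching the abstract names):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def track_poses(prev_poses, curr_poses):
    """Match poses across consecutive frames by minimising total
    joint distance (a maximum-weight bipartite matching with
    weights equal to negative distances).

    prev_poses, curr_poses: arrays of shape (n_poses, n_joints, 2).
    Returns a list of (prev_idx, curr_idx) pairs.
    """
    # Pose distance: mean Euclidean distance over corresponding joints.
    cost = np.array([[np.mean(np.linalg.norm(p - c, axis=1))
                      for c in curr_poses] for p in prev_poses])
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

# Two poses that swap their list order between frames.
prev = np.array([[[0, 0], [0, 1]], [[5, 5], [5, 6]]], dtype=float)
curr = np.array([[[5.1, 5.0], [5.1, 6.0]], [[0.1, 0.0], [0.1, 1.0]]])
print(track_poses(prev, curr))  # [(0, 1), (1, 0)]
```

The assignment correctly follows each pose even though their order in the detection list changed between frames.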

  • Optimised Continually Evolved Classifier for Few-Shot Learning Overcoming Catastrophic Forgetting

    Catastrophic forgetting (CF) poses one of the most important challenges for neural networks in continual learning. During training, many current approaches replay precedent data, which deviates from the constraints of an optimal continual learning system. In this work, an optimisation-enabled continually evolved classifier is used to address CF. Moreover, a few-shot continual learning model is exploited to mitigate CF. Initially, images undergo pre-processing using Contrast Limited Adaptive Histogram Equalisation (CLAHE). Then, the pre-processed outputs are classified by Continually Evolved Classifiers based on few-shot incremental learning. Here, initial training is done using a Convolutional Neural Network (CNN) model, followed by a pseudo-incremental learning phase. Furthermore, to enhance performance, an optimisation approach called Serial Exponential Sand Cat Swarm Optimisation (SExpSCSO) is developed. SExpSCSO modifies Sand Cat Swarm Optimisation by incorporating the serial exponential weighted moving average concept. The proposed SExpSCSO is applied to train the continually evolved classifier by optimising its weights, thereby improving the classifier's performance. Finally, the experimental analysis reveals that the adopted system acquired a maximal accuracy of 0.677, maximal specificity of 0.592, maximal precision of 0.638, recall of 0.716 and F-measure of 0.675.
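
The CLAHE pre-processing step can be illustrated with a minimal single-tile sketch in NumPy (full CLAHE, e.g. OpenCV's createCLAHE, applies this per tile with bilinear interpolation between tiles; the clip limit and test image here are illustrative assumptions):

```python
import numpy as np

def clipped_hist_equalize(img, clip_limit=0.02, n_bins=256):
    """Single-tile sketch of CLAHE's core step: clip the histogram,
    redistribute the excess uniformly, then equalise via the CDF.

    img: 2-D uint8 array. clip_limit: max fraction of pixels per bin.
    """
    hist = np.bincount(img.ravel(), minlength=n_bins).astype(float)
    limit = clip_limit * img.size
    excess = np.sum(np.maximum(hist - limit, 0))
    hist = np.minimum(hist, limit) + excess / n_bins  # redistribute excess
    cdf = np.cumsum(hist)
    lut = np.round((n_bins - 1) * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]

img = np.tile(np.arange(0, 64, dtype=np.uint8), (64, 1))  # low-contrast ramp
out = clipped_hist_equalize(img)
print(out.min(), out.max())  # equalisation stretches the intensity range
```

The clip limit is what distinguishes CLAHE from plain histogram equalisation: it caps how much any single grey level can steepen the mapping, limiting noise amplification.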

  • ANALYSIS AND AUTOMATIC IDENTIFICATION OF SLEEP STAGES USING HIGHER ORDER SPECTRA

    Electroencephalogram (EEG) signals are widely used to study the activity of the brain, such as to determine sleep stages. These EEG signals are nonlinear and non-stationary in nature, which makes sleep staging by visual interpretation and linear techniques difficult. Thus, we use a nonlinear technique, higher order spectra (HOS), to extract hidden information in the sleep EEG signal. In this study, unique bispectrum and bicoherence plots for various sleep stages were proposed. These can be used as a visual aid for various diagnostic applications. A number of HOS-based features were extracted from these plots for the various sleep stages (Wakefulness, Rapid Eye Movement (REM), Stages 1-4 Non-REM) and were found to be statistically significant, with p-values lower than 0.001 in an ANOVA test. These features were fed to a Gaussian mixture model (GMM) classifier for automatic identification. Our results indicate that the proposed system is able to identify sleep stages with an accuracy of 88.7%.
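
A direct (FFT-based) bispectrum estimate of the kind underlying such plots can be sketched as follows (the segment length, window, and test signal are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def bispectrum(x, seg_len=64):
    """Direct bispectrum estimate, averaged over non-overlapping
    segments:  B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)].
    Returns the magnitude over the region f1 + f2 < seg_len // 2.
    """
    n_segs = len(x) // seg_len
    half = seg_len // 2
    B = np.zeros((half, half), dtype=complex)
    for s in range(n_segs):
        X = np.fft.fft(x[s * seg_len:(s + 1) * seg_len] * np.hanning(seg_len))
        for f1 in range(half):
            for f2 in range(half - f1):
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return np.abs(B) / n_segs

# Quadratic phase coupling: components at f1, f2 and f1 + f2 whose
# phases sum consistently -- exactly what the bispectrum detects.
t = np.arange(4096)
x = (np.cos(2 * np.pi * 6 / 64 * t) + np.cos(2 * np.pi * 10 / 64 * t)
     + np.cos(2 * np.pi * 16 / 64 * t))
B = bispectrum(x)
```

The largest off-DC entry of B sits at the coupled frequency pair (6, 10), which a power spectrum, being blind to phase relations, could not reveal.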

  • AUTOMATED DIAGNOSIS OF EPILEPSY USING CWT, HOS AND TEXTURE PARAMETERS

    Epilepsy is a chronic brain disorder that manifests as recurrent seizures. Electroencephalogram (EEG) signals are generally analyzed to study the characteristics of epileptic seizures. In this work, we propose a method for the automated classification of EEG signals into normal, interictal and ictal classes using the Continuous Wavelet Transform (CWT), Higher Order Spectra (HOS) and textures. First, the CWT plot was obtained for the EEG signals, and then the HOS and texture features were extracted from these plots. The statistically significant features were then fed to four classifiers, namely Decision Tree (DT), K-Nearest Neighbor (KNN), Probabilistic Neural Network (PNN) and Support Vector Machine (SVM), to select the best classifier. We observed that the SVM classifier with a Radial Basis Function (RBF) kernel yielded the best results, with an average accuracy of 96%, average sensitivity of 96.9% and average specificity of 97% for 23.6 s of EEG data. Our proposed technique can be used as automatic seizure monitoring software. It can also assist doctors in cross-checking the efficacy of their prescribed drugs.

  • APPLICATION OF HIGHER ORDER CUMULANT FEATURES FOR CARDIAC HEALTH DIAGNOSIS USING ECG SIGNALS

    The electrocardiogram (ECG) records the electrical activity of the heart, indicated by the P, Q-R-S and T waves. Minute changes in the amplitude and duration of the ECG depict particular types of cardiac abnormality, and it is very difficult to decipher the hidden information present in this nonlinear and nonstationary signal. An automatic diagnostic system that characterizes cardiac activities in ECG signals would provide more insight into these phenomena, thereby revealing important clinical information. Various methods have been proposed to detect cardiac abnormalities in ECG recordings. Application of higher order spectra (HOS) features is a promising approach because it can capture the nonlinear and dynamic nature of the ECG signals. In this paper, we have automatically classified five types of beats using HOS features (higher order cumulants) with two different approaches. The five types of ECG beats are normal (N), right bundle branch block (RBBB), left bundle branch block (LBBB), atrial premature contraction (APC) and ventricular premature contraction (VPC). In the first approach, cumulant features of the segmented ECG signal were used for classification, whereas in the second approach cumulants of discrete wavelet transform (DWT) coefficients were used as features for the classifiers. In both approaches, the cumulant features were subjected to data reduction using principal component analysis (PCA) and classified using three-layer feed-forward neural network (NN) and least squares support vector machine (LS-SVM) classifiers. In this study, we obtained the highest average accuracy of 94.52%, sensitivity of 98.61% and specificity of 98.41% using the first approach with the NN classifier. The developed system is clinically ready to run on large datasets.
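
The cumulant features can be sketched as follows (zero-lag cumulants only; cumulant features in practice are generally lag-dependent, so this is a simplified illustration, and the QRS-like test pulse is an assumption):

```python
import numpy as np

def cumulants(x):
    """Second-, third- and fourth-order zero-lag cumulants of a
    zero-mean signal -- the kind of HOS feature computed per beat.
    """
    x = x - np.mean(x)                  # enforce zero mean
    c2 = np.mean(x ** 2)                # variance
    c3 = np.mean(x ** 3)                # related to skewness
    c4 = np.mean(x ** 4) - 3 * c2 ** 2  # excess-kurtosis numerator
    return c2, c3, c4

# For a Gaussian signal c3 and c4 vanish in expectation; a spiky,
# QRS-like waveform gives a large positive fourth-order cumulant.
t = np.linspace(-1, 1, 400)
spike = np.exp(-(t / 0.05) ** 2)        # narrow QRS-like pulse
c2, c3, c4 = cumulants(spike)
print(c4 > 0)  # True: spiky waveforms have positive c4
```

Because Gaussian processes have vanishing cumulants above order two, these features isolate exactly the non-Gaussian, nonlinear structure of the beat morphology.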

  • APPLICATION OF INTRINSIC TIME-SCALE DECOMPOSITION (ITD) TO EEG SIGNALS FOR AUTOMATED SEIZURE PREDICTION

    Intrinsic time-scale decomposition (ITD) is a new nonlinear method of time-frequency representation that can decipher the minute changes in nonlinear EEG signals. In this work, we have automatically classified normal, interictal and ictal EEG signals using features derived from the ITD representation. The energy, fractal dimension and sample entropy features computed on the ITD representation, coupled with a decision tree classifier, yielded an average classification accuracy of 95.67%, and sensitivity and specificity of 99% and 99.5%, respectively, using a 10-fold cross-validation scheme. With the application of the nonlinear ITD representation, along with the conceptual advancement and improvement in accuracy, the developed system is clinically ready for mass screening in resource-constrained and emerging-economy scenarios.
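
The sample entropy feature can be sketched as follows (a standard SampEn variant; the defaults m = 2 and r = 0.2 x std are common choices in the literature, not necessarily the paper's):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r): negative log of the conditional
    probability that sequences matching for m points also match for
    m + 1 points. Lower values indicate a more regular signal.
    """
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_matches(m):
        templ = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
        count = 0
        for i in range(len(templ)):
            # Chebyshev distance from template i to every template
            d = np.max(np.abs(templ - templ[i]), axis=1)
            count += np.sum(d < tol) - 1   # exclude the self-match
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 8 * np.pi, 500))
noisy = rng.standard_normal(500)
print(sample_entropy(regular) < sample_entropy(noisy))  # True
```

A clean sinusoid scores far lower than white noise, which is why the feature separates the more regular ictal rhythms from background activity.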

  • SUPPORT VECTOR MACHINE CLASSIFICATION OF PHYSICAL AND BIOLOGICAL DATASETS

    The support vector machine (SVM) is used in the classification of sonar signals and DNA-binding proteins. Our study on the classification of sonar signals shows that the SVM produces a result better than those obtained from other classification methods, which is consistent with the findings of other studies. The testing accuracy of classification is 95.19%, compared with 90.4% from a multilayered neural network and 82.7% from a nearest neighbor classifier. From our results on the classification of DNA-binding proteins, one finds that the SVM gives a testing accuracy of 82.32%, which is slightly better than that obtained from an earlier study of SVM classification of protein–protein interactions. Hence, our study indicates the usefulness of the SVM in the identification of DNA-binding proteins. Further improvements in the SVM algorithm and parameters are suggested.

  • Fuzzy-NN approach with statistical features for description and classification of efficient image retrieval

    Content-based image retrieval (CBIR) relies heavily not only on the type of descriptors used, but also on the processing steps that follow. It has become an extensively used methodology for finding and retrieving images from large image databases, and many methods have been proposed to improve CBIR performance by retrieving images based on their visual content. In the proposed method, a Neuro-Fuzzy classifier and a Deep Neural Network classifier are used to classify the images in a given dataset. The proposed approach obtained the highest accuracy in terms of Precision, Recall, and F-measure. To show its efficiency and effectiveness, statistical testing is applied using standard deviation, skewness, and kurtosis. The results reveal that the proposed algorithm outperforms other approaches with low computational effort.
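
The statistical descriptors named above (standard deviation, skewness, kurtosis) can be sketched per image channel as follows (the toy "image" is an illustrative assumption):

```python
import numpy as np

def statistical_features(channel):
    """Standard deviation, skewness and kurtosis of one image
    channel -- compact global descriptors of its intensity histogram.
    """
    x = np.asarray(channel, dtype=float).ravel()
    mu, sigma = np.mean(x), np.std(x)
    skew = np.mean((x - mu) ** 3) / sigma ** 3
    kurt = np.mean((x - mu) ** 4) / sigma ** 4  # equals 3.0 for a Gaussian
    return sigma, skew, kurt

x = np.array([0, 0, 0, 0, 255], dtype=float)   # one bright pixel
sigma, skew, kurt = statistical_features(x)
print(round(skew, 2))  # 1.5: a strongly right-skewed histogram
```

A mostly dark image with a few bright pixels yields positive skewness and high kurtosis, so the three numbers summarise the shape of the intensity distribution cheaply.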

  • FORMAL ASPECTS OF A MULTIPLE-RULE CLASSIFIER

    This paper deals with the multiple-rule problem, which arises when several decision rules (of different classes) match ("fire" for) an unseen input object to be classified. The paper focuses on the formal aspects of, and a theoretical methodology for, this problem.

    The general definitions of the notions of a Designer, Learner and Classifier are presented in a formal manner, including the parameters usually attached to these concepts, such as rule consistency, completeness, quality, matching rate, etc. We thus provide minimum-requirement definitions as necessary conditions for these concepts. Any designer (decision-system builder) of a new multiple-rule system may start with these minimum requirements.

    We only require that the Classifier makes its decisions according to a decision scheme induced as a knowledge base (theory, model, concept description). Two case studies are also discussed. We conclude with a general flow chart for a decision-system builder, who can follow it and select the parameters of a Learner and Classifier according to the minimum requirements provided.

  • NEURAL-ASSOCIATION OF MICROCALCIFICATION PATTERNS FOR THEIR RELIABLE CLASSIFICATION IN DIGITAL MAMMOGRAPHY

    Breast cancer continues to be the most common cause of cancer deaths in women, and early detection is significant for a better prognosis. Digital mammography currently offers the best control strategy for the early detection of breast cancer. The research work in this paper investigates the significance of neural association of microcalcification patterns for their reliable classification in digital mammograms. The proposed technique explores the auto-associative abilities of a neural network approach to regenerate the composite of its learned patterns most consistent with the new information; the regenerated patterns can thus uniquely signify each input class and improve the overall classification. Two types of features, computer-extracted (gray-level-based statistical) features and human-extracted (radiologists' interpretation) features, are used for the classification of the calcification type of breast abnormalities. The proposed technique attained the highest classification rate of 90.5% on the calcification testing dataset.

  • MULTICLASS CLASSIFICATION BASED ON META PROBABILITY CODES

    This paper proposes a new approach to improve multiclass classification performance by employing a Stacked Generalization structure and a One-Against-One decomposition strategy. The proposed approach encodes the outputs of all pairwise classifiers by implicitly embedding two-class discriminative information in a probabilistic manner. The encoded outputs, called Meta Probability Codes (MPCs), are interpreted as projections of the original features. It is observed that MPC, compared to the original features, provides features more amenable to clustering. Based on MPC, we introduce a cluster-based multiclass classification algorithm, called MPC-Clustering. The MPC-Clustering algorithm uses the proposed approach to project an original feature space to MPC, and then it employs a clustering scheme to cluster MPCs. Subsequently, it trains individual multiclass classifiers on the produced clusters to complete the procedure of multiclass classifier induction. The performance of the proposed algorithm is extensively evaluated on 20 datasets from the UCI machine learning database repository. The results imply that MPC-Clustering is quite efficient, with an improvement of 2.4% in overall classification rate compared to state-of-the-art multiclass classifiers.

  • Bayesian Classifier for Sparsity-Promoting Feature Selection

    A Bayesian classifier for sparsity-promoting feature selection is developed in this paper, where a set of nonlinear mappings of the original data is applied as a pre-processing step. The linear classification model with such mappings from the original input space to a nonlinear transformation space can not only construct a nonlinear classification boundary, but also realize feature selection for the original data. A zero-mean Gaussian prior with Gamma precision and a finite approximation of the Beta process prior are used to promote sparsity in the utilization of features and nonlinear mappings, respectively. We derive the Variational Bayesian (VB) inference algorithm for the proposed linear classifier. Experimental results on a synthetic data set, a measured radar data set, a high-dimensional gene expression data set, and several benchmark data sets demonstrate the aggressive and robust feature selection capability and comparable classification accuracy of our method compared with some other existing classifiers.

  • Improved LRC Based on Combined Virtual Training Samples for Face Recognition

    A lack of training samples always affects the performance and robustness of face recognition. Generating virtual samples is one of the effective methods for expanding the training set. When the virtual samples are able to simulate the variations of facial images, including variations in illumination, facial posture and facial expression, robustness is enhanced and accuracy is noticeably improved in the face recognition problem. In this paper, an improved linear representation-based classification combined with virtual samples (ILRCVS) is proposed. First, we design a new objective function that simultaneously considers the information of the virtual training samples and the virtual test sample. Second, an alternating minimization algorithm is proposed to solve the optimization problem of the objective function. Finally, a new classification criterion combining the virtual training and test samples is proposed. Experimental results on the Georgia Tech, FERET and Yale B face databases show that the proposed method is more robust than three state-of-the-art face recognition methods: LRC, SRC and CRC.
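
Plain LRC, the baseline this paper improves on, can be sketched as follows: regress the test vector on each class's gallery of training samples and pick the class with the smallest reconstruction residual (the toy data, dimensions and gallery sizes are illustrative assumptions):

```python
import numpy as np

def lrc_predict(class_galleries, y):
    """Linear representation-based classification: represent the test
    vector y as a least-squares combination of each class's training
    matrix and return the class index with the smallest residual.

    class_galleries: list of (dim, n_samples_c) matrices, one per class.
    """
    residuals = []
    for X in class_galleries:
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        residuals.append(np.linalg.norm(y - X @ beta))
    return int(np.argmin(residuals))

# Toy "faces": each class's samples lie near one direction in R^50.
rng = np.random.default_rng(1)
base0, base1 = rng.standard_normal((2, 50))
X0 = np.column_stack([base0 + 0.01 * rng.standard_normal(50) for _ in range(3)])
X1 = np.column_stack([base1 + 0.01 * rng.standard_normal(50) for _ in range(3)])
y = base1 + 0.01 * rng.standard_normal(50)  # an unseen class-1 sample
print(lrc_predict([X0, X1], y))  # 1
```

The test sample is reconstructed almost perfectly by the gallery of its own class and poorly by the other, which is exactly the subspace assumption that virtual samples are meant to strengthen when real training images are scarce.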

  • Functional Correlations in the Pursuit of Performance Assessment of Classifiers

    In statistical classification and machine learning, as well as in social and other sciences, a number of measures of association have been proposed for assessing and comparing individual classifiers and raters, as well as their groups. In this paper, we introduce, justify, and explore several new measures of association, which we call CO-, ANTI-, and COANTI-correlation coefficients, and demonstrate that they are powerful tools for classifying confusion matrices. We illustrate the performance of these new coefficients using a number of examples, from which we also conclude that the coefficients are new objects in the sense that they differ from those already in the literature.

  • ON THE DESIGN OF A TREE CLASSIFIER AND ITS APPLICATION TO SPEECH RECOGNITION

    A new two-step algorithm for constructing an entropy-reduction-based decision tree classifier for large reference-class sets is proposed. The d-dimensional feature space is first mapped onto a line, allowing a dynamic choice of features, and the resultant linear space is then partitioned into two sections while minimizing the average system entropy. The classes of each section, again considered as a collection of reference classes in a d-dimensional feature space, can be further split in a similar manner should the collection still be considered excessively large, thus forming a binary decision tree of nodes with overlapping members. The advantage of using such a classifier is that the need to match a test feature vector exhaustively against all the references is avoided. We demonstrate in this paper that discrete syllable recognition with dynamic programming equipped with such a classifier can reduce the recognition time by a factor of 40 to 100. The recognition speed is one third to one half of that using hidden Markov models (HMMs), while the recognition rate is somewhat higher. The theory is considerably simpler than that of HMMs, but the decision tree can occupy a lot of memory space.
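
The entropy-minimising binary partition of the projected linear space can be sketched as follows (the 1-D projection is assumed already computed; the exhaustive midpoint search and toy data are illustrative simplifications):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label collection, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_split(scores, labels):
    """Cut 1-D projected scores into two sections at the point that
    minimises the size-weighted average entropy of the two sides.
    """
    order = np.argsort(scores)
    s, l = scores[order], labels[order]
    best = (np.inf, None)
    for i in range(1, len(s)):
        if s[i] == s[i - 1]:
            continue  # cannot cut between equal scores
        w = (i * entropy(l[:i]) + (len(s) - i) * entropy(l[i:])) / len(s)
        if w < best[0]:
            best = (w, (s[i - 1] + s[i]) / 2)
    return best[1]

scores = np.array([0.1, 0.2, 0.3, 0.8, 0.9, 1.0])
labels = np.array([0, 0, 0, 1, 1, 1])
print(best_split(scores, labels))  # 0.55: a clean cut between the classes
```

Applying the same criterion recursively to each section is what grows the binary tree the abstract describes.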

  • A Single-Node Classifier Implementation on Chua Oscillator within a Physical Reservoir Computing Framework

    This study lies in the field of physical reservoir computing and aims to develop a classifier for the benchmark Fisher Iris dataset. A single Chua chaotic oscillator acts as the physical reservoir, and the study was performed using computer simulation. The features of the Iris flowers are represented as a sequence of short pulses at a constant level of a control parameter, which is fed to the oscillator, changing its dynamics. During classification, the oscillator works without being reset, so each input pulse changes the phase trajectory and makes it unique for each Iris flower. Finally, estimating the symmetry of the attractor makes it possible to connect each species of Iris with the properties of the attractor.

    The resulting architecture of the classifier comprises a single-node, externally driven Chua oscillator with time-delayed input. Working in chaotic mode, the classifier makes two mistakes in classifying the 75-sample dataset.
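
A Chua oscillator of the kind used as the reservoir can be simulated as follows (a minimal sketch: the parameter values are the textbook double-scroll set and the fixed-step RK4 integrator is an assumption, not necessarily the paper's configuration):

```python
import numpy as np

def chua_deriv(state, alpha=15.6, beta=28.0, m0=-1.143, m1=-0.714):
    """Dimensionless Chua circuit with the classic piecewise-linear
    diode; these parameters give the well-known double-scroll attractor.
    """
    x, y, z = state
    h = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))
    return np.array([alpha * (y - x - h), x - y + z, -beta * y])

def simulate(state, dt=0.005, steps=10000):
    """Fixed-step 4th-order Runge-Kutta integration of the oscillator."""
    traj = np.empty((steps, 3))
    for i in range(steps):
        k1 = chua_deriv(state)
        k2 = chua_deriv(state + dt / 2 * k1)
        k3 = chua_deriv(state + dt / 2 * k2)
        k4 = chua_deriv(state + dt * k3)
        state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = state
    return traj

traj = simulate(np.array([0.7, 0.0, 0.0]))
# A chaotic but bounded attractor: the trajectory keeps oscillating
# without settling to a point or diverging.
print(np.all(np.abs(traj) < 10), traj[-1000:].std() > 0.1)
```

Perturbing the input (as the paper's feature pulses do) permanently shifts such a trajectory, which is the sensitivity the reservoir exploits.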

  • Supervised Classification of UML Class Diagrams Based on F-KNB

    Most software development does not start from scratch but applies previously developed artifacts. These reusable artifacts are involved in various phases of the software life cycle, ranging from requirements to maintenance. Software design, as the high-level part of the software development process, has an important impact on the following stages, so its reuse is gaining more and more attention. The Unified Modeling Language (UML) class diagram has become a de facto standard modeling tool for software design, and thus its reuse has also become a concern. So far, research on the reuse of UML class diagrams has focused on matching and retrieval; as large numbers of class diagrams enter a repository for reuse, classification becomes an essential task. Classification is divided into unsupervised classification (also known as clustering) and supervised classification. In our previous work, we discussed the clustering of UML class diagrams. In this paper, we focus only on the supervised classification of UML class diagrams and propose a supervised classification method. A novel ensemble classifier, F-KNB, combining both dependent and independent construction ideas is built. The similarity of class diagrams is described, in which semantic, structural and hybrid matching are defined, respectively. The extracted feature elements are used in the base classifiers F-KNN and F-NB, which are constructed based on improved K-nearest neighbors (KNN) and Naive Bayes (NB), respectively. A series of experimental results shows that the proposed ensemble classifier F-KNB achieves good classification quality and efficiency under varying sizes and distributions of training samples.

  • WIRELESS DISTRIBUTED IMPLEMENTATION OF A FUZZY NEURAL CLASSIFICATION SYSTEM

    Recent years have seen a surge of interest in the field of pervasive context-aware computing. Within this framework we propose a novel, real-world implementation of an adaptive self-configurable system, applied within the scope of wireless ad-hoc networks. WiDFuNC is an integrated system consisting of an intelligent unit implemented on a real PDA, a number of sensors, and a remote server device, forming an efficient prototype system that can be applied in various domains. This implementation of WiDFuNC focuses on pure classification problems, with satisfactory experimental results demonstrating great adaptability and context-awareness.

  • Ensemble Method of Effective AdaBoost Algorithm for Decision Tree Classifiers

    This article introduces a novel ensemble method named eAdaBoost (Effective Adaptive Boosting), a meta-classifier developed by enhancing the existing AdaBoost algorithm to handle its time complexity and produce the best classification accuracy. eAdaBoost reduces the error rate compared with existing methods and achieves its accuracy by reweighting each feature for further processing. The results of an extensive experimental evaluation of the proposed method are reported on datasets from the UCI machine learning repository, with accuracy and statistical test comparisons made against various boosting algorithms. The proposed eAdaBoost has also been implemented with different decision tree classifiers, namely C4.5, Decision Stump, NB Tree and Random Forest, and has been evaluated on various datasets with different weight thresholds. For some datasets, the proposed method produces better results using Random Forest and NB Tree as base classifiers than with Decision Stump and C4.5. Overall, eAdaBoost gives better classification and prediction accuracy with lower execution time compared with other classifiers.
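
The underlying AdaBoost procedure that eAdaBoost extends can be sketched with decision stumps (this is standard AdaBoost.M1, not the proposed eAdaBoost; the feature-reweighting extension is omitted, and the 1-D toy data is an illustrative assumption):

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """Standard AdaBoost with depth-1 stumps. y must be in {-1, +1}.
    Returns a list of (feature, threshold, polarity, alpha) stumps.
    """
    n, d = X.shape
    w = np.full(n, 1 / n)               # example weights
    model = []
    for _ in range(n_rounds):
        best = (np.inf, None)
        for f in range(d):              # exhaustive stump search
            for thr in np.unique(X[:, f]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, f] <= thr, 1, -1)
                    err = np.sum(w[pred != y])
                    if err < best[0]:
                        best = (err, (f, thr, pol))
        err, (f, thr, pol) = best
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = pol * np.where(X[:, f] <= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)  # up-weight the mistakes
        w /= w.sum()
        model.append((f, thr, pol, alpha))
    return model

def predict(model, X):
    score = sum(alpha * pol * np.where(X[:, f] <= thr, 1, -1)
                for f, thr, pol, alpha in model)
    return np.sign(score)

# A 1-D "interval" concept no single stump can represent:
X = np.array([[0.1], [0.2], [0.4], [0.5], [0.6], [0.8], [0.9]])
y = np.array([-1, -1, 1, 1, 1, -1, -1])
model = train_adaboost(X, y, n_rounds=3)
print(np.mean(predict(model, X) == y))  # 1.0
```

The best single stump still misclassifies two points here, but three boosting rounds combine stumps into an exact fit, which is the error-rate mechanism eAdaBoost builds on.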

  • A VISUALIZATION FRAMEWORK FOR THE ANALYSIS OF HYPERDIMENSIONAL DATA

    The purpose of this article is to describe a new visualization framework for the analysis of hyperdimensional data. The framework was developed to facilitate the study of a new class of classifiers designated class cover catch digraphs, an original random-graph technique for constructing classifiers on high-dimensional data. The framework allows the user to study the geometric structure of hyperdimensional data sets by reducing the original hyperdimensional space to a cover with a small number of balls, and to elicit geometric and other structures through the visualization of the relationships between the balls and each other and the observations they cover.
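
The ball-cover idea behind class cover catch digraphs can be sketched with a greedy approximation (a simplified illustration of the cover concept, not the paper's exact random-graph construction; the toy clusters are assumptions):

```python
import numpy as np

def greedy_class_cover(target, other):
    """Greedy class cover: around each target point, grow a ball up to
    the distance of the nearest other-class point, then repeatedly keep
    the ball covering the most still-uncovered target points.
    Returns (centre_index, radius) pairs.
    """
    # Radius of each candidate ball: distance to the closest other-class point.
    radii = np.min(np.linalg.norm(target[:, None] - other[None, :], axis=2), axis=1)
    # cover[i, j] is True when ball i covers target point j.
    dists = np.linalg.norm(target[:, None] - target[None, :], axis=2)
    cover = dists < radii[:, None]
    uncovered = np.ones(len(target), dtype=bool)
    balls = []
    while uncovered.any():
        gains = (cover & uncovered).sum(axis=1)
        i = int(np.argmax(gains))
        balls.append((i, float(radii[i])))
        uncovered &= ~cover[i]
    return balls

# Two tight clusters of the target class, far from the other class:
# two balls suffice to summarise the geometry of hundreds of dimensions' worth of points.
target = np.array([[0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]])
other = np.array([[2.5, 2.5]])
print(len(greedy_class_cover(target, other)))  # 2
```

Reducing a class to a handful of such balls is what makes the framework's visualization of relationships between balls, and between balls and observations, tractable.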