Diversity and accuracy are two important ingredients of the generalization error of an ensemble classifier system. However, enhancing diversity comes at the expense of the accuracy of the individual classifiers, so balancing diversity and accuracy is crucial for constructing a good ensemble. In this paper, a new ensemble method is proposed that selects classifiers for the ensemble via a transformation of the individual classifiers based on diversity and accuracy. The transformation produces new individual classifiers from the original classifiers and the true labels in order to enhance the diversity of the ensemble. The approach is similar to principal component analysis (PCA), but differs essentially in that the covariance matrix is constructed from the true labels rather than from the sample mean as in PCA. A selection rule is then constructed from two measures of classification performance; it selects suitable new classifiers for the ensemble so as to ensure the accuracy of the resulting ensemble, eliminating individuals with poor or identical performance. Notably, a new classifier produced by the transformation is equivalent to a linear combination of the original classifiers, which indicates that the proposed method enhances diversity through different transformations rather than by constructing different training subsets. Experimental results show that the proposed method performs better than the other methods, and kappa-error diagrams confirm that it achieves greater diversity than they do.
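The label-centered covariance step can be sketched as follows. This is a minimal, hypothetical illustration: the function name, the toy data, and the use of per-sample scores in [0, 1] are assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical sketch: transform base-classifier outputs with a PCA-like
# step whose covariance matrix is centered on the true labels (as the
# abstract describes) rather than on the sample mean.
def label_centered_components(outputs, labels):
    """outputs: (n_samples, n_classifiers) scores in [0, 1];
    labels: (n_samples,) true labels as floats in {0.0, 1.0}."""
    # Center each classifier's outputs on the true labels, not the mean.
    D = outputs - labels[:, None]
    cov = D.T @ D / len(labels)          # label-centered "covariance"
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Each column of eigvecs defines a new classifier that is a linear
    # combination of the originals; np.linalg.eigh orders them by
    # ascending eigenvalue (mean squared deviation from the labels).
    return outputs @ eigvecs, eigvals

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=100)
outputs = np.clip(labels[:, None] + rng.normal(0, 0.3, (100, 3)), 0, 1)
new_outputs, errs = label_centered_components(outputs, labels.astype(float))
print(new_outputs.shape)  # -> (100, 3)
```

Each column of `new_outputs` is a transformed classifier; a selection rule as in the abstract would then keep only the columns whose performance meets its two criteria.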
To the best of our knowledge, this paper provides the first hardware implementation of a complete decision tree inference algorithm. Evolving decision trees in hardware is motivated by a significant improvement in evolution time compared with software evolution, and by the efficient use of decision trees in various embedded applications (robotic navigation systems, image processing systems, etc.) where run-time adaptive learning is of particular interest. Several architectures are presented for the hardware evolution of single oblique or nonlinear decision trees, and of ensembles composed of oblique or nonlinear decision trees. The proposed architectures are suitable for implementation using both Field Programmable Gate Arrays (FPGAs) and Application Specific Integrated Circuits (ASICs). Experimental results obtained on 29 datasets from the standard UCI Machine Learning Repository suggest that the FPGA implementations offer a significant improvement in inference time compared with traditional software implementations. In the case of single decision tree evolution, the FPGA implementation of the H_DTS2 architecture has on average a 26 times shorter inference time than the software implementation, whereas the FPGA implementation of the H_DTE2 architecture has on average a 693 times shorter inference time than the software implementation.
In this paper, several hardware architectures for the realization of ensembles of axis-parallel, oblique, and nonlinear decision trees (DTs) are presented. Hardware architectures for the implementation of a number of ensemble combination rules are also presented. These architectures are universal and can be used to combine predictions from any type of classifier, such as decision trees, artificial neural networks (ANNs), and support vector machines (SVMs). The proposed architectures are suitable for implementation using Field Programmable Gate Arrays (FPGAs) and Application Specific Integrated Circuits (ASICs). Experimental results obtained on 29 datasets from the standard UCI Machine Learning Repository suggest that the FPGA implementations offer a significant improvement in classification time compared with traditional software implementations. The greatest improvement is achieved by the SP2-P architecture implemented on the FPGA, with a classification speed on average 416.53 times faster than the software implementation. This result was achieved with the FPGA running at 135.51 MHz on average, which is 33.21 times slower than the operating frequency of the general-purpose computer on which the software implementation was executed.
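As a software analogue of one of the simplest universal combination rules the abstract mentions, a majority vote over label predictions can be sketched as follows; the function and data are illustrative assumptions, not the paper's hardware design:

```python
from collections import Counter

# Minimal software analogue of a universal combination rule: a majority
# vote that combines label predictions from any classifier type (DTs,
# ANNs, SVMs, ...), mirroring the role of the hardware combiner blocks.
def majority_vote(predictions):
    """predictions: list of predicted labels, one per base classifier."""
    return Counter(predictions).most_common(1)[0][0]

print(majority_vote([1, 0, 1, 1, 2]))  # -> 1
```

Because the rule only looks at the predicted labels, it is agnostic to how each base classifier was trained, which is what makes such combiner blocks reusable across classifier types.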
Atrial Fibrillation (A-Fib), Atrial Flutter (AFL), and Ventricular Fibrillation (V-Fib) are fatal cardiac abnormalities that commonly affect people of advanced age and indicate a life-threatening condition. The Electrocardiogram (ECG) is the clinical tool most commonly used to detect these abnormal rhythms. Non-linearities concealed in the ECG signal can be clearly unraveled using the Recurrence Quantification Analysis (RQA) technique. In this paper, RQA features are applied to classify four classes of ECG beats, namely Normal Sinus Rhythm (NSR), A-Fib, AFL, and V-Fib, using ensemble classifiers. The clinically significant (p < 0.05) features are ranked and fed independently to three classifiers, viz. Decision Tree (DT), Random Forest (RAF), and Rotation Forest (ROF) ensemble methods, to select the best classifier. Training and testing of the feature set are accomplished using a 10-fold cross-validation strategy. The RQA coefficients using ROF provided an overall accuracy of 98.37%, against 96.29% and 94.14% for RAF and DT, respectively. These results clearly confirm the superiority of the ROF ensemble classifier in the diagnosis of A-Fib, AFL, and V-Fib. The precision of the four classes is measured using class-specific accuracy (%), and the reliability of the performance is assessed using Cohen's kappa statistic (κ). The developed approach can be used in therapeutic devices and can help physicians in the automatic monitoring of fatal tachycardia rhythms.
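The 10-fold cross-validation protocol used above can be sketched as follows; this is a generic, hypothetical illustration of the splitting scheme, not the authors' code:

```python
# Generic sketch of k-fold cross-validation: every sample appears in
# exactly one test fold, and the remaining samples form the training set.
def k_fold_indices(n_samples, k=10):
    """Yield (train, test) index lists for k-fold cross-validation."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size

folds = list(k_fold_indices(100, k=10))
print(len(folds))  # -> 10
```

With 10 folds, each classifier (DT, RAF, ROF) is trained on 90% of the beats and tested on the held-out 10%, and the reported accuracy is the average over the ten test folds.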
This paper presents an algorithm to generate an ensemble classifier by joint optimization of accuracy and diversity. For an ensemble classifier to be more accurate, its base classifiers are expected to be both accurate and diverse (i.e., complementary in terms of errors). We adopt a multi-objective evolutionary algorithm (MOEA) for the joint optimization of accuracy and diversity on our recently developed nonuniform layered cluster oriented ensemble classifier (NULCOEC). In NULCOEC, the data set is partitioned into a variable number of clusters at different layers, and base classifiers are then trained on the clusters at each layer. The performance of NULCOEC is therefore a function of the vector of the numbers of layers and clusters. The research presented in this paper investigates the implications of applying an MOEA to generate NULCOEC: accuracy and diversity of the ensemble classifier are expressed as functions of layers and clusters, and the MOEA searches for the combinations of layers and clusters that yield the nondominated set of (accuracy, diversity) pairs. We compare the results of single-objective optimization (i.e., optimizing either accuracy or diversity alone) with the results of the MOEA on sixteen UCI data sets. The results show that the MOEA improves the performance of the ensemble classifier.
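The nondominated filtering at the heart of any such MOEA search can be sketched as follows; this is a minimal, hypothetical illustration with made-up (accuracy, diversity) pairs, not the paper's algorithm:

```python
# Sketch of Pareto (nondominated) filtering when both accuracy and
# diversity are maximised: a point survives unless some other point is
# at least as good on both objectives.
def nondominated(points):
    """points: list of (accuracy, diversity) pairs; both maximised."""
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

candidates = [(0.90, 0.2), (0.85, 0.5), (0.80, 0.4), (0.95, 0.1)]
print(nondominated(candidates))  # -> [(0.9, 0.2), (0.85, 0.5), (0.95, 0.1)]
```

The point (0.80, 0.4) is dropped because (0.85, 0.5) beats it on both objectives; the surviving front is the trade-off set the MOEA returns, from which a final (layers, clusters) configuration can be chosen.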
The success of investors in obtaining substantial financial rewards from the stock market depends on their ability to predict the direction of the stock market index. The purpose of this study is to evaluate the efficacy of several ensemble prediction models (Boosted, RUS-Boosted, Subspace Disc, Bagged, and Subspace KNN) in predicting the daily direction of the Johannesburg Stock Exchange (JSE) All-Share index, compared with other commonly used machine learning techniques including support vector machines (SVM), logistic regression, and k-nearest neighbors (KNN). The findings show that, among the ensemble models, the Boosted algorithm is the best performer, followed by RUS-Boosted. The ensemble approach (represented by Boosted) also outperformed the non-ensemble techniques, followed by KNN, logistic regression, and SVM, respectively. These findings suggest that investors should include ensemble models among their index prediction models if they want to make substantial profits in the stock markets. However, not all investors can benefit from this, as the models may suffer from alpha decay when more and more investors use them, implying that successful algorithms have a limited shelf life.
Ensemble classification is one of the hot topics in machine learning and has been successfully applied in many practical applications. Since the construction of an optimal ensemble remains an open and complex problem, several heuristics for constructing good ensembles have been introduced over the years. One alternative consists of integrating rough set reducts into ensemble systems. To the best of our knowledge, almost all existing methods neglect knowledge imperfection, even though several real-world databases suffer from various kinds of uncertainty and incompleteness. In this paper, we develop an ensemble Evidential Editing k-Nearest Neighbors classifier (EEk-NN) based on rough set reducts for addressing data with evidential attributes. Experiments on several real databases have been carried out with the aim of comparing our proposal to another existing approach.
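A k-NN classifier restricted to a rough set reduct (i.e., a subset of attribute indices) can be sketched as follows; this hypothetical illustration omits the evidential editing step of EEk-NN and uses made-up data:

```python
from collections import Counter

# Hypothetical sketch: k-NN where distances are computed only over a
# rough set reduct, i.e. a subset of attribute indices. The evidential
# editing step of EEk-NN is not modelled here.
def knn_on_reduct(train, labels, query, reduct, k=3):
    """train: list of feature vectors; reduct: attribute indices kept."""
    def dist(x, y):
        # Squared Euclidean distance over the reduct attributes only.
        return sum((x[i] - y[i]) ** 2 for i in reduct)
    ranked = sorted(range(len(train)), key=lambda j: dist(train[j], query))
    votes = [labels[j] for j in ranked[:k]]
    return Counter(votes).most_common(1)[0][0]

train = [[0.1, 5.0, 0.2], [0.2, -3.0, 0.1],
         [0.9, 4.0, 0.8], [1.0, -2.0, 0.9]]
labels = [0, 0, 1, 1]
# Attribute 1 is excluded by the reduct, so its large values are ignored.
print(knn_on_reduct(train, labels, [0.15, 9.9, 0.15], reduct=[0, 2]))  # -> 0
```

In an ensemble of such classifiers, each member would use a different reduct, so diversity comes from the attribute subsets rather than from resampling the training data.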