Fast Linear SVM Validation Based on Early Stopping in Iterative Learning
Abstract
Classification is an important field in machine learning and pattern recognition. Among the various types of classifiers, such as nearest-neighbor, neural-network, and Bayesian classifiers, the support vector machine (SVM) is known as a particularly powerful one.
One advantage of SVM over these methods is its efficient and adjustable generalization capability. The performance of an SVM classifier depends on its parameters, especially the regularization parameter C, which is usually selected by cross-validation. Despite its strong generalization, SVM suffers from limitations such as its considerably slow training phase. Cross-validation is a very time-consuming part of training, because for every candidate value of the parameter C, the entire process of training and validation must be repeated from scratch.
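The multiplicative cost described above can be made concrete with a small sketch (the grid of C values and fold count here are hypothetical, chosen only for illustration): a naive grid search retrains the SVM once per candidate C on every fold, so the number of full training runs is the product of the two.

```python
def cv_training_runs(C_grid, k_folds):
    """Number of full SVM trainings a naive grid search performs:
    every candidate C restarts training from scratch on each fold."""
    return len(C_grid) * k_folds

# A modest 8-value logarithmic grid with 5-fold cross-validation
# already requires 40 complete training runs.
C_grid = [2.0 ** p for p in range(-3, 5)]  # hypothetical candidates
print(cv_training_runs(C_grid, 5))  # -> 40
```

Any speed-up applied inside a single training run therefore multiplies across the whole grid, which is why accelerating the per-candidate train/validate cycle matters.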
In this paper, we propose a novel early-stopping approach for the SVM learning algorithm. Early stopping is achieved by integrating the validation step into the optimization part of SVM training, without losing generality or degrading the performance of the classifier. Moreover, the method can be combined with other available acceleration methods, since it has no dependency on them and therefore introduces no redundancy. Our method was tested and verified on various UCI repository datasets, and the results indicate that it speeds up the learning phase of SVM without affecting the final model of the classifier.
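To make the general idea concrete, the following is a minimal sketch, not the paper's actual algorithm: a linear SVM trained by sub-gradient descent on the regularized hinge loss, where validation accuracy is checked after each epoch and training halts once it stops improving. The function names, learning rate, and patience threshold are illustrative assumptions.

```python
import random

def accuracy(w, b, X, y):
    """Fraction of points classified with the correct sign."""
    correct = sum(
        1 for xi, yi in zip(X, y)
        if yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) > 0
    )
    return correct / len(X)

def train_hinge_sgd(X, y, C=1.0, epochs=200, lr=0.01,
                    X_val=None, y_val=None, patience=5):
    """Sub-gradient descent on the L2-regularized hinge loss (linear SVM).

    If a validation set is supplied, training stops early once validation
    accuracy fails to improve for `patience` consecutive epochs -- a
    stand-in for folding the validation step into the optimization loop.
    """
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    best_acc, best_model, stall = -1.0, (w[:], b), 0
    for epoch in range(epochs):
        order = list(range(n))
        random.shuffle(order)
        for i in order:
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            # Regularization step, scaled by 1/(C*n) so that a larger C
            # weakens regularization, matching the usual SVM convention.
            for j in range(d):
                w[j] -= lr * w[j] / (C * n)
            if margin < 1:  # hinge sub-gradient is nonzero only here
                for j in range(d):
                    w[j] += lr * y[i] * X[i][j]
                b += lr * y[i]
        if X_val is not None:
            acc = accuracy(w, b, X_val, y_val)
            if acc > best_acc:
                best_acc, best_model, stall = acc, (w[:], b), 0
            else:
                stall += 1
                if stall >= patience:  # validation plateaued: stop early
                    break
    return best_model if X_val is not None else (w, b)
```

On a linearly separable toy problem, `train_hinge_sgd` typically terminates well before the epoch budget because validation accuracy saturates immediately; the time saved per candidate C is what such an early-stopping scheme exploits across a cross-validation grid.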