Generalization bounds of incremental SVM
Abstract
Incremental learning is an effective approach to learning from accumulated training samples and large-scale datasets. Its main advantages are that it makes full use of historical information, greatly reduces the training scale, and saves both space and time. Despite extensive research on incremental support vector machine (SVM) learning algorithms, most of this work assumes independent and identically distributed (i.i.d.) samples. Moreover, there has been no theoretical analysis of the generalization performance of incremental SVM learning algorithms. In this paper, we study the generalization bounds of incremental SVM learning algorithms whose samples are drawn from uniformly ergodic Markov chains or from exponentially strongly mixing sequences. As a special case, we also obtain generalization bounds for i.i.d. samples.
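As a rough, hypothetical illustration of the incremental scheme the abstract alludes to (not the exact algorithm analyzed in this paper), the sketch below retrains an SVM on each new batch together with only the support vectors retained from earlier batches, which is one common way incremental SVM learning reuses historical information while shrinking the training scale. The helper `incremental_svm` and the batch format are assumptions for illustration, using scikit-learn's `SVC`.

```python
import numpy as np
from sklearn.svm import SVC

def incremental_svm(batches, C=1.0):
    """Hypothetical sketch of incremental SVM training.

    After each batch, keep only the support vectors and retrain on them
    together with the next batch, so historical data is summarized by a
    small set of points rather than stored in full.
    """
    n_features = batches[0][0].shape[1]
    X_keep = np.empty((0, n_features))
    y_keep = np.empty((0,))
    clf = None
    for X_new, y_new in batches:
        X = np.vstack([X_keep, X_new])
        y = np.concatenate([y_keep, y_new])
        clf = SVC(kernel="rbf", C=C).fit(X, y)
        # Carry forward only the support vectors: they fully determine
        # the current decision function, which keeps the next training
        # set small.
        X_keep, y_keep = X[clf.support_], y[clf.support_]
    return clf
```

In this kind of scheme, the samples arriving in successive batches need not be i.i.d.; the bounds studied in this paper cover the dependent case where they form, e.g., a uniformly ergodic Markov chain.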