In this paper, we consider the online regularized pairwise learning (ORPL) algorithm with the least squares loss function for non-independent and identically distributed (non-i.i.d.) observations. We first establish new Bennett-type inequalities for α-mixing sequences, geometrically β-mixing sequences, V-geometrically ergodic Markov chains and uniformly ergodic Markov chains. We then derive convergence rates for the last iterate of the ORPL algorithm with polynomially decaying step sizes and varying regularization parameters under non-i.i.d. observations. These results extend the previously known results for ORPL from i.i.d. observations to the non-i.i.d. case, and the rate obtained for α-mixing observations nearly matches the optimal rate of ORPL for i.i.d. observations in the L2-norm.
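To make the iteration concrete, the following is a minimal linear-model sketch of an ORPL-style update with the least squares pairwise loss: at step t, the current observation is paired with all previous observations, and the iterate is updated with a polynomially decaying step size and a varying regularization parameter. The function name `orpl_least_squares` and the specific schedules eta_t = eta0·t^(-theta) and lam_t = lam0·t^(-(1-theta)) are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def orpl_least_squares(X, y, theta=0.5, eta0=0.1, lam0=0.1):
    """Illustrative linear sketch of online regularized pairwise learning
    (ORPL) with least squares loss. The step-size and regularization
    schedules below are assumptions, not the paper's exact choices."""
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n):
        eta_t = eta0 * t ** (-theta)        # polynomially decaying step size
        lam_t = lam0 * t ** (-(1 - theta))  # varying regularization parameter
        # Pairwise least squares loss against all previous observations:
        #   (1/t) * sum_i ((y_t - y_i) - w . (x_t - x_i))^2
        diffs_x = X[t] - X[:t]              # shape (t, d)
        diffs_y = y[t] - y[:t]              # shape (t,)
        residuals = diffs_y - diffs_x @ w
        grad = -2.0 * (diffs_x * residuals[:, None]).mean(axis=0)
        # Regularized stochastic gradient step on the current iterate.
        w = (1.0 - eta_t * lam_t) * w - eta_t * grad
    return w  # last iterate, the quantity whose rate is analyzed
```

The convergence rates described above concern this last iterate; in the non-i.i.d. settings considered, X and y would be generated by a mixing sequence or an ergodic Markov chain rather than drawn independently.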
Incremental learning is an effective method for learning from accumulated training samples and large-scale datasets. Its main advantages are that it makes full use of historical information, greatly reduces the training scale, and saves space and time. Despite extensive research on incremental support vector machine (SVM) learning algorithms, most of this work assumes independent and identically distributed (i.i.d.) samples. Moreover, there has been no theoretical analysis of incremental SVM learning algorithms. In this paper, we mainly study the generalization bounds of an incremental SVM learning algorithm whose samples are drawn from uniformly geometrically ergodic Markov chains and exponentially strongly mixing sequences. As a special case, we also obtain generalization bounds for i.i.d. samples.
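As a rough illustration of the incremental idea described above, the sketch below retrains only on the retained support vectors plus each newly arriving batch. This is a common heuristic approximation of incremental SVM learning, not the paper's exact algorithm; the function `incremental_svm` and its parameters are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

def incremental_svm(batches, C=1.0):
    """Heuristic incremental SVM sketch: keep only the support vectors as
    the compressed history and retrain on them plus each new batch.
    Assumes every batch contains examples of both classes."""
    d = batches[0][0].shape[1]
    X_keep = np.empty((0, d))
    y_keep = np.empty(0)
    clf = None
    for X_new, y_new in batches:
        X_train = np.vstack([X_keep, X_new])
        y_train = np.concatenate([y_keep, y_new])
        clf = SVC(kernel="linear", C=C).fit(X_train, y_train)
        # Retain only the support vectors: they summarize the history
        # and greatly reduce the training scale of later updates.
        X_keep = X_train[clf.support_]
        y_keep = y_train[clf.support_]
    return clf
```

In the non-i.i.d. setting of the paper, the batches would be successive segments of a uniformly geometrically ergodic Markov chain or an exponentially strongly mixing sequence, and the generalization bounds quantify how well the final classifier performs despite the dependence between samples.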