Sparse additive machines (SAMs) have attracted increasing attention in high-dimensional classification due to their representation flexibility and interpretability. However, most existing methods are formulated under the Tikhonov regularization scheme with the hinge loss, which is susceptible to outliers. To circumvent this problem, we propose a sparse additive machine with the ramp loss (called ramp-SAM) to tackle classification and variable selection simultaneously. A misclassification error bound is established for ramp-SAM with the help of a detailed error decomposition and a constructive hypothesis error analysis. To solve the nonsmooth and nonconvex ramp-SAM, a proximal block coordinate descent method is presented with convergence guarantees. The empirical effectiveness of our model is confirmed on simulated and benchmark datasets.
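As a hedged illustration (not taken from the paper itself), the minimal sketch below contrasts the unbounded hinge loss with one common parameterization of the ramp loss, i.e. a hinge loss truncated at level 1; the truncation parameter s = 0 and the NumPy implementation are assumptions for exposition only, and the paper's own formulation may differ.

```python
import numpy as np

def hinge_loss(u):
    """Standard hinge loss max(0, 1 - u), where u = y * f(x) is the margin."""
    return np.maximum(0.0, 1.0 - u)

def ramp_loss(u, s=0.0):
    """Ramp (truncated hinge) loss: max(0, 1 - u) - max(0, s - u).

    Large negative margins contribute a bounded penalty (here capped at 1 - s),
    which limits the influence of outliers compared with the unbounded hinge.
    """
    return np.maximum(0.0, 1.0 - u) - np.maximum(0.0, s - u)

margins = np.array([-3.0, -1.0, 0.0, 0.5, 1.0, 2.0])
print(hinge_loss(margins))  # [4.  2.  1.  0.5 0.  0. ]  -- grows without bound
print(ramp_loss(margins))   # [1.  1.  1.  0.5 0.  0. ]  -- capped at 1
```

The boundedness is what makes the resulting objective robust to outliers, at the price of nonconvexity, which is why a proximal block coordinate descent scheme (rather than a standard convex solver) is used in the paper.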
This paper studies the binary classification problem associated with a family of Lipschitz convex loss functions called large-margin unified machines (LUMs), which offers a natural bridge between distribution-based likelihood approaches and margin-based approaches. LUMs can overcome the so-called data piling issue of the support vector machine in the high-dimension, low-sample-size setting, while their theoretical analysis from the perspective of learning theory is still lacking. In this paper, we establish some new comparison theorems for all LUM loss functions, which play a key role in the error analysis of large-margin learning algorithms. Based on the obtained comparison theorems, we further derive learning rates for regularized LUM schemes associated with varying Gaussian kernels, which may be of independent interest.
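As a hedged illustration (not drawn from the paper), the sketch below evaluates one commonly used parameterization of the LUM loss family, indexed by a > 0 and c >= 0: a linear (hinge-like) part for small margins and a smoothly decaying tail for large margins. The explicit formula, the parameter defaults, and the NumPy implementation are assumptions for exposition, since the abstract does not state the form of the loss.

```python
import numpy as np

def lum_loss(u, a=1.0, c=1.0):
    """A commonly used LUM loss parameterization (assumed here, not from the paper).

    V(u) = 1 - u                                       if u <  c / (1 + c)
    V(u) = (1/(1+c)) * (a / ((1+c)*u - c + a))**a      if u >= c / (1 + c)

    The two pieces meet continuously at u = c/(1+c); the loss is convex and
    Lipschitz in the margin u = y * f(x). Roughly, large c gives SVM-like
    behavior while small c behaves more like a likelihood-based loss.
    """
    u = np.asarray(u, dtype=float)
    threshold = c / (1.0 + c)
    out = 1.0 - u                      # linear part, used where u < threshold
    mask = u >= threshold
    out[mask] = (1.0 / (1.0 + c)) * (a / ((1.0 + c) * u[mask] - c + a)) ** a
    return out

margins = np.array([-1.0, 0.0, 0.5, 1.0, 2.0])
print(lum_loss(margins, a=1.0, c=1.0))  # [2.     1.     0.5    0.25   0.1667]
```

Comparison theorems of the kind established in the paper relate the excess risk under such a surrogate loss to the excess misclassification risk, which is what allows learning rates for the regularized schemes to be translated into classification error bounds.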