  • Article (No Access)

    Sparse additive machine with ramp loss

    Sparse additive machines (SAMs) have attracted increasing attention in high-dimensional classification due to their representational flexibility and interpretability. However, most existing methods are formulated under the Tikhonov regularization scheme with the hinge loss, which makes them susceptible to outliers. To circumvent this problem, we propose a sparse additive machine with ramp loss (called ramp-SAM) that tackles classification and variable selection simultaneously (a sketch of the ramp loss appears after this listing). A misclassification error bound is established for ramp-SAM with the help of a detailed error decomposition and a constructive hypothesis error analysis. To solve the nonsmooth and nonconvex ramp-SAM problem, a proximal block coordinate descent method is presented with convergence guarantees. The empirical effectiveness of our model is confirmed on simulated and benchmark datasets.

  • Article (No Access)

    Comparison theorems on large-margin learning

    This paper studies the binary classification problem associated with a family of Lipschitz convex loss functions called large-margin unified machines (LUMs), which offer a natural bridge between distribution-based likelihood approaches and margin-based approaches. LUMs can overcome the so-called data-piling issue of the support vector machine in the high-dimension, low-sample-size setting, yet a theoretical analysis of LUMs from the learning-theory perspective is still lacking. In this paper, we establish some new comparison theorems for all LUM loss functions (a sketch of the LUM loss family appears after this listing), which play a key role in the error analysis of large-margin learning algorithms. Based on the obtained comparison theorems, we further derive learning rates for regularized LUM schemes associated with varying Gaussian kernels, which may be of independent interest.
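
The ramp loss in the first abstract is, in one common parameterization, a hinge loss truncated at level 1 - s for some s <= 0; writing it as a difference of two hinge-type terms makes the nonconvexity explicit, which is what motivates a proximal block coordinate descent solver. The sketch below is a minimal illustration of that parameterization in Python (NumPy only); the exact truncation level and the full ramp-SAM objective from the paper are not reproduced here.

import numpy as np

def hinge(margin):
    # Standard hinge loss: max(0, 1 - y*f(x)), with margin = y*f(x).
    return np.maximum(0.0, 1.0 - margin)

def ramp(margin, s=-1.0):
    # Ramp loss as a truncated hinge: min(1 - s, hinge(margin)) with s <= 0.
    # Equivalently hinge(margin) - max(0, s - margin), a difference of two
    # convex functions, hence nonsmooth and nonconvex overall.
    return np.minimum(1.0 - s, hinge(margin))

# Unlike the hinge loss, the ramp loss is bounded, so a badly misclassified
# outlier (very negative margin) contributes at most 1 - s to the objective.
margins = np.array([-5.0, -1.0, 0.0, 1.0, 2.0])
print(hinge(margins))  # [6. 2. 1. 0. 0.]
print(ramp(margins))   # [2. 2. 1. 0. 0.]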
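
The LUM family in the second abstract is usually written as a two-piece loss of the classification margin u = y*f(x): linear for small u, with a smooth polynomial tail for u >= c/(1+c), indexed by parameters a > 0 and c >= 0. Below is a minimal sketch of that standard parameterization (NumPy only); the regularized schemes with varying Gaussian kernels analyzed in the paper are not reproduced here.

import numpy as np

def lum_loss(u, a=1.0, c=0.0):
    # Large-margin unified machine (LUM) loss in its usual two-piece form:
    #   V(u) = 1 - u                                   if u <  c/(1+c)
    #   V(u) = (1/(1+c)) * (a / ((1+c)*u - c + a))**a  if u >= c/(1+c)
    # The two pieces meet smoothly at u = c/(1+c); large c pushes the loss
    # toward an SVM-style hinge, while small c gives a DWD-style loss.
    u = np.asarray(u, dtype=float)
    out = np.empty_like(u)
    left = u < c / (1.0 + c)
    out[left] = 1.0 - u[left]
    out[~left] = (1.0 / (1.0 + c)) * (a / ((1.0 + c) * u[~left] - c + a)) ** a
    return out

# Example: with a = 1, c = 0 the loss is 1 - u for u < 0 and 1/(1 + u) for u >= 0.
print(lum_loss([-1.0, 0.0, 1.0]))  # -> [2.  1.  0.5]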