Asymptotic analysis of quantile regression learning based on coefficient dependent regularization

    https://doi.org/10.1142/S0219691315500186 (Cited by: 7, source: Crossref)

    In this paper, we consider conditional quantile regression learning algorithms based on the pinball loss with a data dependent hypothesis space and an ℓ2-regularizer. Functions in this hypothesis space are linear combinations of basis functions generated by a kernel function and the sample data. The only conditions imposed on the kernel function are continuity and boundedness, which are rather weak. Our main goal is to study the consistency of this regularized quantile regression learning algorithm. By means of a concentration inequality with ℓ2-empirical covering numbers and operator decomposition techniques, satisfactory error bounds and convergence rates are derived explicitly.
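    For concreteness, the following is a minimal sketch of a coefficient-based regularized quantile regression scheme of the kind described above, assuming the standard pinball-loss formulation with a kernel expansion over the sample; the symbols $\tau$, $\lambda$, $K$, $\alpha$ and the exact normalization are illustrative and may differ from those used in the paper. For a quantile level $\tau \in (0,1)$ and a sample $\mathbf{z} = \{(x_i, y_i)\}_{i=1}^{m}$, the pinball loss is
    $$
    \rho_\tau(u) = \begin{cases} \tau u, & u \ge 0, \\ (\tau - 1)u, & u < 0, \end{cases}
    $$
    and the estimator is a linear combination of kernel sections with ℓ2-regularized coefficients,
    $$
    f_{\mathbf{z}}(x) = \sum_{i=1}^{m} \alpha_i^{\mathbf{z}} K(x, x_i), \qquad
    \boldsymbol{\alpha}^{\mathbf{z}} = \operatorname*{arg\,min}_{\boldsymbol{\alpha} \in \mathbb{R}^{m}}
    \Bigg\{ \frac{1}{m} \sum_{j=1}^{m} \rho_\tau\Big(y_j - \sum_{i=1}^{m} \alpha_i K(x_j, x_i)\Big)
    + \lambda \sum_{i=1}^{m} \alpha_i^{2} \Bigg\},
    $$
    where $K$ is the continuous, bounded kernel and $\lambda > 0$ is the regularization parameter.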

    AMSC: 22E46, 53C35, 57S20