We present the use of mapping functions to automatically generate levels of detail with known error bounds for polygonal models. We develop a piecewise linear mapping function for each simplification operation and use this function to measure deviation of the new surface from both the previous level of detail and from the original surface. In addition, we use the mapping function to compute appropriate texture coordinates if the original model has texture coordinates at its vertices. Our overall algorithm uses edge collapse operations. We present rigorous procedures for the generation of local orthogonal projections to the plane as well as for the selection of a new vertex position resulting from the edge collapse operation. The algorithm computes guaranteed error bounds on surface deviation and produces an entire continuum of levels of detail with mappings between them. We demonstrate the effectiveness of our algorithm on several models: a Ford Bronco consisting of over 300 parts and 70,000 triangles, a textured lion model consisting of 49 parts and 86,000 triangles, a textured, wrinkled torus consisting of 79,000 triangles, a dragon model consisting of 871,000 triangles, a Buddha model consisting of 1,000,000 triangles, and an armadillo model consisting of 2,000,000 triangles.
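As a rough illustration of the edge collapse primitive that drives this kind of level-of-detail generation, the sketch below collapses one edge to its midpoint and tracks a crude accumulated deviation bound per vertex. This is a minimal demo, not the paper's mapping-based algorithm; the midpoint placement, data layout, and the simple error accumulation are assumptions made for the example.

```python
# Minimal, illustrative edge-collapse sketch (NOT the paper's mapping-based
# algorithm): collapse an edge to its midpoint and accumulate a crude
# per-vertex bound on how far the surface has moved.
import numpy as np

def collapse_edge(vertices, faces, deviation, u, v):
    """Collapse edge (u, v) into vertex u placed at the edge midpoint.

    vertices : (n, 3) float array of positions
    faces    : list of (i, j, k) vertex-index triples
    deviation: (n,) accumulated upper bound on vertex movement
    """
    midpoint = 0.5 * (vertices[u] + vertices[v])
    moved = max(np.linalg.norm(vertices[u] - midpoint),
                np.linalg.norm(vertices[v] - midpoint))
    vertices[u] = midpoint
    # Crude bound: worse of the two previous deviations, plus this move.
    deviation[u] = max(deviation[u], deviation[v]) + moved

    new_faces = []
    for f in faces:
        f = tuple(u if idx == v else idx for idx in f)   # redirect v -> u
        if len(set(f)) == 3:                             # drop degenerate faces
            new_faces.append(f)
    return vertices, new_faces, deviation

# Tiny example: two triangles sharing edge (1, 2).
V = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
F = [(0, 1, 2), (0, 2, 3)]
dev = np.zeros(len(V))
V, F, dev = collapse_edge(V, F, dev, 1, 2)
print(F, dev.round(3))
```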
The bisector of two plane curve segments (other than lines and circles) has, in general, no simple (i.e., rational) parameterization, and must therefore be approximated by the interpolation of discrete data. A procedure for computing ordered sequences of point/tangent/curvature data along the bisectors of polynomial or rational plane curves is described, with special emphasis on (i) the identification of singularities (tangent discontinuities) of the bisector; (ii) capturing the exact rational form of those portions of the bisector with a terminal footpoint on one curve; and (iii) geometrical criteria that characterize extrema of the distance error for interpolants to the discretely sampled data. G^1 piecewise-parabolic and G^2 piecewise-cubic approximations (with O(h^4) and O(h^6) convergence) are described which, used in adaptive schemes governed by the exact error measure, can be made to satisfy any prescribed geometrical tolerance.
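The sketch below samples approximate bisector points of two parametric plane curves via the equal-distance condition, purely as an illustration. The example curves, the dense-sampling distance query, and the assumption that A(t) itself is the footpoint on the first curve are all choices for the demo, not part of the paper's procedure.

```python
# Illustrative sketch only: approximate bisector points of two plane curves
# by marching along the normal of curve A until the distance to curve B
# matches the distance back to A(t).
import numpy as np

def curve_a(t):
    """Example curve A: the parabola (t, t^2)."""
    return np.array([t, t**2])

def curve_b(s):
    """Example curve B: the horizontal line y = 2."""
    return np.array([s, 2.0])

def unit_normal(curve, t, h=1e-5):
    """Unit normal of a parametric curve via a central-difference tangent."""
    d = (curve(t + h) - curve(t - h)) / (2 * h)
    n = np.array([-d[1], d[0]])
    return n / np.linalg.norm(n)

def dist_to_curve(p, curve, s_range=(-5.0, 5.0), n=601):
    """Approximate distance from point p to a curve by dense sampling."""
    s = np.linspace(*s_range, n)
    pts = np.stack([curve(si) for si in s])
    return np.min(np.linalg.norm(pts - p, axis=1))

def bisector_point(t, r_max=4.0, n_scan=60, tol=1e-6):
    """Return a point equidistant (approximately) from both curves along the
    normal of curve A at A(t); None if no sign change is bracketed."""
    a, nrm = curve_a(t), unit_normal(curve_a, t)
    g = lambda r: dist_to_curve(a + r * nrm, curve_b) - r
    rs = np.linspace(0.0, r_max, n_scan)
    gs = [g(r) for r in rs]
    for lo, hi, glo, ghi in zip(rs[:-1], rs[1:], gs[:-1], gs[1:]):
        if glo * ghi <= 0:                 # sign change: refine by bisection
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if g(lo) * g(mid) > 0 else (lo, mid)
            return a + 0.5 * (lo + hi) * nrm
    return None

samples = [bisector_point(t) for t in np.linspace(-1.0, 1.0, 9)]
print([p.round(3).tolist() for p in samples if p is not None])
```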
We consider learning algorithms induced by regularization methods in the regression setting. We show that previously obtained error bounds for these algorithms, using a priori choices of the regularization parameter, can be attained using a suitable a posteriori choice based on cross-validation. In particular, these results prove adaptation of the rate of convergence of the estimators to the minimax rate induced by the "effective dimension" of the problem. We also show universal consistency for this broad class of methods, which includes regularized least squares, truncated SVD, Landweber iteration, and the ν-method.
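A small sketch of the a posteriori idea in its simplest form: hold-out selection of the regularization parameter for kernel ridge regression. The Gaussian kernel, the candidate grid, the split, and the synthetic data are assumptions for the demo; the paper's cross-validation analysis is considerably more refined.

```python
# Hold-out (validation) choice of the regularization parameter for kernel
# ridge regression -- a toy stand-in for the a posteriori choice analysed above.
import numpy as np

def gaussian_kernel(X, Z, sigma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def krr_fit(X, y, lam, sigma=0.5):
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    return lambda Xnew: gaussian_kernel(Xnew, X, sigma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)

# Split into training and validation halves.
Xtr, ytr, Xval, yval = X[:100], y[:100], X[100:], y[100:]

lambdas = np.logspace(-6, 0, 13)           # candidate grid (a priori choices)
errs = []
for lam in lambdas:
    f = krr_fit(Xtr, ytr, lam)
    errs.append(np.mean((f(Xval) - yval) ** 2))

best = lambdas[int(np.argmin(errs))]       # a posteriori (data-driven) choice
print(f"selected lambda = {best:.2e}, validation MSE = {min(errs):.4f}")
```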
We consider a wide class of error bounds developed in the context of statistical learning theory which are expressed in terms of functionals of the regression function, for instance, its norm in a reproducing kernel Hilbert space or other functional space. These bounds are unstable in the sense that a small perturbation of the regression function can induce an arbitrarily large increase of the relevant functional and make the error bound useless. Using a known result involving the Fano inequality, we show how stability can be recovered.
A new generalization of the Ostrowski–Grüss inequality is introduced in three different cases for functions in L1[a, b] and L∞[a, b] spaces, and its application to deriving error bounds for some quadrature rules is given.
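For orientation (the paper's generalization itself is not reproduced here), the classical Ostrowski inequality underlying such quadrature error bounds states that, for f differentiable on [a, b] with bounded derivative,

$$\left| f(x) - \frac{1}{b-a}\int_a^b f(t)\,dt \right| \;\le\; \left[\frac{1}{4} + \frac{\bigl(x - \tfrac{a+b}{2}\bigr)^{2}}{(b-a)^{2}}\right](b-a)\,\|f'\|_{\infty}, \qquad x\in[a,b].$$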
I trace the main steps of the first fifty-five years of my career as an applied mathematician, pausing from time to time to describe problems that arose in asymptotics and numerical analysis and had far-reaching effects on this career.
Lecture delivered at Asymptotics and Applied Analysis, Conference in Honor of Frank W. J. Olver's 75th Birthday, January 10–14, 2000, San Diego State University, San Diego, California.
Editors' Note: Frank W. J. Olver died on April 23, 2013. The following text was typed by his son, Peter J. Olver, from handwritten notes found among his papers. At times the writing is unpolished, including incomplete sentences, but the editors have decided to leave it essentially the way it was written. However, for clarity, some abbreviations have been written out in full. A couple of handwritten words could not be deciphered, and a guess for what was intended is enclosed in brackets: […]. Endnotes have been made into footnotes within the body of the article. References were mostly not included in the handwritten text, but rather listed in order at the end. Citations to references have been included at the appropriate point in the text.
The aim of this paper is to derive new representations for the Hankel and Bessel functions, exploiting the reformulation of the method of steepest descents by Berry and Howls [Hyperasymptotics for integrals with saddles, Proc. R. Soc. Lond. A434 (1991) 657–675]. Using these representations, we obtain a number of properties of the large-order asymptotic expansions of the Hankel and Bessel functions due to Debye, including explicit and numerically computable error bounds, asymptotics for the late coefficients, exponentially improved asymptotic expansions, and the smooth transition of the Stokes discontinuities.
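For reference, the leading terms of Debye's large-order expansions in the oscillatory regime (fixed β with 0 < β < π/2) are the standard results below; the paper concerns the full expansions, their error bounds, and their Stokes behaviour.

$$J_\nu(\nu\sec\beta) \sim \left(\frac{2}{\pi\nu\tan\beta}\right)^{1/2}\cos\xi, \qquad Y_\nu(\nu\sec\beta) \sim \left(\frac{2}{\pi\nu\tan\beta}\right)^{1/2}\sin\xi, \qquad \xi = \nu(\tan\beta-\beta)-\tfrac{\pi}{4}, \quad \nu\to\infty.$$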
In this paper, we derive new representations for the incomplete gamma function, exploiting the reformulation of the method of steepest descents by C. J. Howls [Hyperasymptotics for integrals with finite endpoints, Proc. Roy. Soc. London Ser. A439 (1992) 373–396]. Using these representations, we obtain a number of properties of the asymptotic expansions of the incomplete gamma function with large arguments, including explicit and realistic error bounds, asymptotics for the late coefficients, exponentially improved asymptotic expansions, and the smooth transition of the Stokes discontinuities.
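For reference, the classical large-argument expansion whose error bounds and Stokes behaviour are studied here is

$$\Gamma(a,z) \sim z^{a-1}e^{-z}\left(1 + \frac{a-1}{z} + \frac{(a-1)(a-2)}{z^{2}} + \cdots\right), \qquad z\to\infty,\ |\arg z|\le \tfrac{3\pi}{2}-\delta.$$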
In this paper, the regression learning algorithm with a vector-valued RKHS is studied. We motivate the need for extending the learning theory of scalar-valued functions and analyze the learning performance. In this setting, the output data lie in a Hilbert space Y, and the associated RKHS consists of functions whose values lie in Y. By developing mathematical properties of the vector-valued integral operator L_K, capacity-independent error bounds and learning rates are derived by means of the integral operator technique.
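For concreteness, a standard form of the integral operator in this setting (with K(x, t) an operator-valued kernel acting on Y and ρ_X the marginal distribution on the input space X; this is the usual textbook definition rather than a statement taken from the paper) is

$$(L_K f)(x) \;=\; \int_X K(x,t)\, f(t)\, d\rho_X(t), \qquad f \in L^2(\rho_X;\, Y).$$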
We consider the coefficient-based least squares regularized regression learning algorithm for strongly mixing and uniformly mixing samples. We obtain capacity-independent error bounds for the algorithm by means of integral operator techniques. A standard assumption in the theoretical study of learning algorithms for regression is the uniform boundedness of the output sample values. We abandon this boundedness assumption and carry out the error analysis with output sample values satisfying a generalized moment hypothesis.
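One common formulation of such a coefficient-based regularization scheme (normalization conventions vary across papers, so the scaling below is only indicative) is

$$f_{\mathbf z} = \sum_{i=1}^{m}\alpha_{{\mathbf z},i}\,K(x_i,\cdot), \qquad \boldsymbol{\alpha}_{\mathbf z} = \operatorname*{arg\,min}_{\alpha\in\mathbb R^{m}} \frac{1}{m}\sum_{i=1}^{m}\Bigl(\sum_{j=1}^{m}\alpha_j K(x_j,x_i)-y_i\Bigr)^{2} + \lambda\sum_{i=1}^{m}\alpha_i^{2},$$

where the penalty acts on the expansion coefficients rather than on the RKHS norm of the estimator.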
In this paper, we study the performance of kernel-based regression learning with non-i.i.d. sampling. The non-i.i.d. samples are drawn from different probability distributions that share the same conditional distribution. A more general marginal distribution assumption is proposed. Under this assumption, the consistency of the regularization kernel network (RKN) and of the coefficient regularization kernel network (CRKN) is proved. Satisfactory capacity-independent error bounds and learning rates are derived by means of the integral operator technique.
We study distributed learning with a partial coefficients regularization scheme in a reproducing kernel Hilbert space (RKHS). The algorithm randomly partitions the sample set {z_i}_{i=1}^N into m disjoint sample subsets of equal size. In order to reduce the complexity of the algorithm, we apply a partial coefficients regularization scheme to each sample subset to produce an output function, and average the individual output functions to get the final global estimator. The error bound in the L^2-metric is deduced, and the asymptotic convergence of this distributed learning with partial coefficients regularization is proved by the integral operator technique. Satisfactory learning rates are then derived under a standard regularity condition on the regression function, which reveals an interesting phenomenon: when m ≤ N^s and s is small enough, this distributed learning achieves the same convergence rate as the algorithm processing the whole data set on a single machine.
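The divide-and-conquer structure (partition, solve locally, average) can be illustrated with a plain kernel ridge regressor on each block; the paper's partial-coefficients regularization is not reproduced here, and the kernel, data, and machine count m below are assumptions for the demo.

```python
# Illustrative divide-and-conquer sketch: partition the sample into m equal
# subsets, fit a local kernel ridge regressor on each, and average the
# resulting output functions into a global estimator.
import numpy as np

def gaussian_kernel(X, Z, sigma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def local_fit(X, y, lam=1e-3, sigma=0.5):
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    return lambda Xnew: gaussian_kernel(Xnew, X, sigma) @ alpha

rng = np.random.default_rng(1)
N, m = 600, 6                                   # sample size, machine count
X = rng.uniform(-1, 1, size=(N, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(N)

# Random equal-size partition into m disjoint subsets, one local estimator each.
perm = rng.permutation(N).reshape(m, N // m)
local_estimators = [local_fit(X[idx], y[idx]) for idx in perm]

# Global estimator: average of the local output functions.
f_bar = lambda Xnew: np.mean([f(Xnew) for f in local_estimators], axis=0)

Xtest = np.linspace(-1, 1, 5).reshape(-1, 1)
print(f_bar(Xtest).round(3), np.sin(3 * Xtest[:, 0]).round(3))
```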