In this paper, a method for deriving computable estimates of the approximation error in eigenvalues or eigenfrequencies of three-dimensional linear elasticity or shell problems is presented. The analysis for the error estimator follows the general approach of goal-oriented error estimation for which the error is estimated in so-called quantities of interest, here the eigenfrequencies, rather than global norms. A general theory is developed and is then applied to the linear elasticity equations. For the shell analysis, it is assumed that the shell model is not completely known and additional errors are introduced due to modeling approximations. The approach is then based on recovering three-dimensional approximations from the shell eigensolution and employing the error estimator developed for linear elasticity. The performance of the error estimator is demonstrated on several test problems.
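For orientation, a classical identity that underlies eigenvalue error estimates of this kind (stated here for a symmetric model problem and not taken from the paper itself) is the following: if the exact eigenpair satisfies a(u, v) = \lambda\, b(u, v) with b(u, u) = 1, and (\lambda_h, u_h) is a Galerkin approximation normalized by b(u_h, u_h) = 1, then with e = u - u_h,

\[
\lambda_h - \lambda \;=\; a(e, e) - \lambda\, b(e, e),
\]

so the eigenvalue error is governed by the energy norm of the eigenvector error; goal-oriented estimators turn relations of this type into computable quantities built from residuals.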
In this paper we present reduced basis (RB) approximations and associated rigorous a posteriori error bounds for the parametrized unsteady Boussinesq equations. The essential ingredients are Galerkin projection onto a low-dimensional space associated with a smooth parametric manifold — to provide dimension reduction; an efficient proper orthogonal decomposition–Greedy sampling method for identification of optimal and numerically stable approximations — to yield rapid convergence; accurate (online) calculation of the solution-dependent stability factor by the successive constraint method — to quantify the growth of perturbations/residuals in time; rigorous a posteriori bounds for the errors in the RB approximation and associated outputs — to provide certainty in our predictions; and an offline–online computational decomposition strategy for our RB approximation and associated error bound — to minimize marginal cost and hence achieve high performance in the real-time and many-query contexts. The method is applied to a transient natural convection problem in a two-dimensional "complex" enclosure — a square with a small rectangle cutout — parametrized by Grashof number and orientation with respect to gravity. Numerical results indicate that the RB approximation converges rapidly and that furthermore the (inexpensive) rigorous a posteriori error bounds remain practicable for parameter domains and final times of physical interest.
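Because the abstract enumerates the greedy sampling ingredient without detail, a minimal sketch may help. The loop below is a generic weak-greedy parameter selection driven by an a posteriori error bound, written with hypothetical callables solve_truth and error_bound; it omits the POD compression of time trajectories that the actual POD-Greedy method performs.

    # Minimal sketch of a weak-greedy sampling loop for reduced basis construction;
    # solve_truth and error_bound are hypothetical placeholders, not the paper's code.
    import numpy as np

    def greedy_rb(train_params, solve_truth, error_bound, tol=1e-6, max_basis=50):
        mu = train_params[0]                 # arbitrary starting parameter
        basis = []                           # orthonormalized snapshots
        for _ in range(max_basis):
            snapshot = solve_truth(mu)       # expensive "truth" solve at mu
            basis = orthonormalize(basis + [snapshot])
            # pick the parameter where the a posteriori error bound is largest
            bounds = [error_bound(basis, p) for p in train_params]
            worst = int(np.argmax(bounds))
            if bounds[worst] < tol:
                break
            mu = train_params[worst]
        return basis

    def orthonormalize(vectors):
        # Gram-Schmidt on a list of equal-length numpy arrays
        out = []
        for v in vectors:
            w = v.astype(float).copy()
            for q in out:
                w -= np.dot(q, w) * q
            n = np.linalg.norm(w)
            if n > 1e-12:
                out.append(w / n)
        return out

The essential point, reflected in the loop, is that snapshots are added only where the (inexpensive) error bound reports the largest error, which is what yields rapid convergence over the training set of parameters.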
We present reduced basis approximations and associated rigorous a posteriori error bounds for the Stokes equations in parametrized domains. The method, built upon the penalty formulation for saddle point problems, provides error bounds not only for the velocity but also for the pressure approximation, while simultaneously admitting affine geometric variations with relative ease. The essential ingredients are: (i) dimension reduction through Galerkin projection onto a low-dimensional reduced basis space; (ii) stable, good approximation of the pressure through supremizer-enrichment of the velocity reduced basis space; (iii) optimal and numerically stable approximations identified through an efficient greedy sampling method; (iv) certainty, through rigorous a posteriori bounds for the errors in the reduced basis approximation; and (v) efficiency, through an offline-online computational strategy. The method is applied to a flow problem in a two-dimensional channel with a (parametrized) rectangular obstacle. Numerical results show that the reduced basis approximation converges rapidly, the effectivities associated with the (inexpensive) rigorous a posteriori error bounds remain good even for reasonably small values of the penalty parameter, and that the effects of the penalty parameter are relatively benign.
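As background (a standard construction in the reduced basis literature, stated here for orientation rather than quoted from the paper), supremizer enrichment associates with each pressure basis function q a velocity field T^{\mu} q defined by

\[
(T^{\mu} q,\, v)_{V} \;=\; b(v, q; \mu) \qquad \text{for all } v \in V,
\]

i.e. the Riesz representative of the form v \mapsto b(v, q; \mu); appending these supremizers to the velocity reduced basis space is what restores a stable inf-sup pairing between the reduced velocity and pressure spaces.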
We consider learning algorithms induced by regularization methods in the regression setting. We show that previously obtained error bounds for these algorithms, using a priori choices of the regularization parameter, can be attained using a suitable a posteriori choice based on cross-validation. In particular, these results prove adaptation of the rate of convergence of the estimators to the minimax rate induced by the "effective dimension" of the problem. We also show universal consistency for this broad class of methods, which includes regularized least squares, truncated SVD, Landweber iteration, and the ν-method.
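As a simplified illustration of an a posteriori parameter choice (a hold-out stand-in for the paper's cross-validation rule, with a hypothetical learner fit_regularized), one might proceed as follows.

    # Simplified hold-out choice of the regularization parameter;
    # fit_regularized is a hypothetical learner (e.g. regularized least squares)
    # returning a prediction function.
    import numpy as np

    def choose_lambda(x, y, fit_regularized, lambdas, holdout_fraction=0.2, seed=0):
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(x))
        n_val = int(holdout_fraction * len(x))
        val, train = idx[:n_val], idx[n_val:]
        best_lam, best_err = None, np.inf
        for lam in lambdas:
            f = fit_regularized(x[train], y[train], lam)
            err = np.mean((f(x[val]) - y[val]) ** 2)   # empirical validation error
            if err < best_err:
                best_lam, best_err = lam, err
        return best_lam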
We consider a wide class of error bounds developed in the context of statistical learning theory which are expressed in terms of functionals of the regression function, for instance, its norm in a reproducing kernel Hilbert space or another functional space. These bounds are unstable in the sense that a small perturbation of the regression function can induce an arbitrarily large increase of the relevant functional and render the error bound useless. Using a known result involving the Fano inequality, we show how stability can be recovered.
A new generalization of the Ostrowski–Grüss inequality is introduced in three different cases, for functions in the L1[a, b] and L∞[a, b] spaces, and its application to deriving error bounds for some quadrature rules is given.
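For reference (and not as the new result of the paper), one classical form of the Ostrowski–Grüss inequality, often attributed to Dragomir and Wang, reads: if f is differentiable on [a, b] with \gamma \le f'(t) \le \Gamma for all t, then for every x \in [a, b],

\[
\left| f(x) - \frac{1}{b-a}\int_a^b f(t)\,dt - \frac{f(b)-f(a)}{b-a}\Bigl(x - \frac{a+b}{2}\Bigr) \right|
\;\le\; \frac{1}{4}(b-a)(\Gamma - \gamma).
\]

Generalizations of this type are what yield error bounds for quadrature rules such as the midpoint and trapezoid rules.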
I trace the main steps of the first fifty-five years of my career as an applied mathematician, pausing from time to time to describe problems that arose in asymptotics and numerical analysis and had far-reaching effects on this career.
Lecture delivered at Asymptotics and Applied Analysis, Conference in Honor of Frank W. J. Olver's 75th Birthday, January 10–14, 2000, San Diego State University, San Diego, California.
Editors' Note: Frank W. J. Olver died on April 23, 2013. The following text was typed by his son, Peter J. Olver, from handwritten notes found among his papers. At times the writing is unpolished, including incomplete sentences, but the editors have decided to leave it essentially the way it was written. However, for clarity, some abbreviations have been written out in full. A couple of handwritten words could not be deciphered, and a guess for what was intended is enclosed in brackets: […]. Endnotes have been made into footnotes within the body of the article. References were mostly not included in the handwritten text, but rather listed in order at the end. Citations to references have been included at the appropriate point in the text.
The aim of this paper is to derive new representations for the Hankel and Bessel functions, exploiting the reformulation of the method of steepest descents by Berry and Howls [Hyperasymptotics for integrals with saddles, Proc. R. Soc. Lond. A434 (1991) 657–675]. Using these representations, we obtain a number of properties of the large-order asymptotic expansions of the Hankel and Bessel functions due to Debye, including explicit and numerically computable error bounds, asymptotics for the late coefficients, exponentially improved asymptotic expansions, and the smooth transition of the Stokes discontinuities.
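To fix notation (quoted in a standard form rather than from the paper itself), one of the Debye expansions in question reads, for fixed \alpha > 0,

\[
J_{\nu}(\nu \operatorname{sech}\alpha) \;\sim\; \frac{e^{\nu(\tanh\alpha - \alpha)}}{(2\pi\nu\tanh\alpha)^{1/2}} \sum_{k=0}^{\infty} \frac{U_k(\coth\alpha)}{\nu^{k}}, \qquad \nu \to \infty,
\]

where the U_k are polynomials; the paper's contribution is to supply explicit, computable bounds for the remainder after truncating such expansions, together with the behaviour of the late coefficients and the smoothing of the Stokes discontinuities.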
In this paper, we derive new representations for the incomplete gamma function, exploiting the reformulation of the method of steepest descents by C. J. Howls [Hyperasymptotics for integrals with finite endpoints, Proc. Roy. Soc. London Ser. A439 (1992) 373–396]. Using these representations, we obtain a number of properties of the asymptotic expansions of the incomplete gamma function with large arguments, including explicit and realistic error bounds, asymptotics for the late coefficients, exponentially improved asymptotic expansions, and the smooth transition of the Stokes discontinuities.
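For concreteness (quoted here in its standard form, not reproduced from the paper), the large-argument expansion in question is

\[
\Gamma(a, z) \;\sim\; z^{a-1} e^{-z} \sum_{k=0}^{\infty} \frac{(a-1)(a-2)\cdots(a-k)}{z^{k}}, \qquad z \to \infty,
\]

with the k = 0 term equal to 1; the paper provides computable bounds for the error committed by truncating this series, along with its exponentially improved (hyperasymptotic) refinements.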
In this paper, the regression learning algorithm with a vector-valued RKHS is studied. We motivate the need for extending the learning theory of scalar-valued functions and analyze the learning performance. In this setting, the output data are from a Hilbert space Y, and the associated RKHS consists of functions whose values lie in Y. By developing mathematical aspects of the vector-valued integral operator L_K, capacity-independent error bounds and learning rates are derived by means of the integral operator technique.
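For reference, a common definition of the integral operator in this setting (stated in a standard form and under the usual assumptions, not quoted verbatim from the paper) is

\[
(L_K f)(x) \;=\; \int_X K(x, t)\, f(t)\, d\rho_X(t), \qquad f \in L^2(\rho_X; Y),
\]

where K(x, t) is a bounded operator on the output space Y and \rho_X is the marginal distribution of the inputs; capacity-independent error bounds are typically phrased in terms of powers of this operator applied to the regression function.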
We consider the coefficient-based least squares regularized regression learning algorithm for strongly and uniformly mixing samples. We obtain capacity-independent error bounds for the algorithm by means of integral operator techniques. A standard assumption in the theoretical study of learning algorithms for regression is the uniform boundedness of the output sample values. We abandon this boundedness assumption and carry out the error analysis with output sample values satisfying a generalized moment hypothesis.
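As background, one common form of such a coefficient-based scheme (stated here for orientation rather than taken verbatim from the paper) builds the estimator from samples (x_1, y_1), ..., (x_m, y_m) as

\[
f_{\mathbf z}(x) = \sum_{i=1}^{m} \alpha_i\, K(x, x_i), \qquad
\boldsymbol\alpha = \arg\min_{\alpha \in \mathbb{R}^{m}} \frac{1}{m}\sum_{j=1}^{m}\Bigl(\sum_{i=1}^{m}\alpha_i K(x_j, x_i) - y_j\Bigr)^{2} + \lambda \sum_{i=1}^{m}\alpha_i^{2},
\]

where the kernel K need not be positive semi-definite; the analysis then concerns how fast f_z approaches the regression function when the samples are mixing rather than i.i.d.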
In this paper, we study the performance of kernel-based regression learning with non-i.i.d. sampling. The non-i.i.d. samples are drawn from different probability distributions that share the same conditional distribution. A more general marginal distribution assumption is proposed. Under this assumption, the consistency of the regularization kernel network (RKN) and of the coefficient regularization kernel network (CRKN) is proved. Satisfactory capacity-independent error bounds and learning rates are derived by integral operator techniques.
We study distributed learning with a partial coefficients regularization scheme in a reproducing kernel Hilbert space (RKHS). The algorithm randomly partitions the sample set {z_i}_{i=1}^N into m disjoint sample subsets of equal size. In order to reduce the complexity of the algorithm, we apply a partial coefficients regularization scheme to each sample subset to produce an output function, and average the individual output functions to obtain the final global estimator. The error bound in the L^2-metric is deduced and the asymptotic convergence of this distributed learning with partial coefficients regularization is proved by the integral operator technique. Satisfactory learning rates are then derived under a standard regularity condition on the regression function, which reveals an interesting phenomenon: when m ≤ N^s and s is small enough, this distributed learning achieves the same convergence rate as the algorithm processing the whole data set on a single machine.
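The partition-and-average structure described above can be sketched in a few lines; the sketch below uses plain kernel ridge regression as the local solver and hypothetical data arrays, so it illustrates the divide-and-average pattern rather than the paper's exact partial coefficients scheme.

    # Sketch of distributed learning by partitioning and averaging; kernel ridge
    # regression stands in for the paper's partial coefficients regularization.
    import numpy as np

    def gaussian_kernel(a, b, sigma=1.0):
        # pairwise Gaussian kernel between rows of a and rows of b
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def local_fit(x, y, lam=1e-2):
        # solve (K + lam * n * I) alpha = y on one sample subset
        n = len(x)
        K = gaussian_kernel(x, x)
        alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
        return lambda t: gaussian_kernel(t, x) @ alpha

    def distributed_fit(x, y, m, lam=1e-2):
        # randomly partition the N samples into m equal-size subsets,
        # fit a local estimator on each, and average the outputs
        idx = np.random.permutation(len(x))
        parts = np.array_split(idx, m)
        local_estimators = [local_fit(x[p], y[p], lam) for p in parts]
        return lambda t: np.mean([f(t) for f in local_estimators], axis=0)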