Starting from some modifications of Chebyshev's method, we consider a uniparametric family of iterative methods that are more efficient than Newton's method, and we then construct two iterative methods in a way analogous to how the Secant method is obtained from Newton's method. These iterative methods do not use derivatives in their algorithms, and one of them is more efficient than the Secant method, the classical method with this feature.
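For context, a minimal sketch of the classical Secant method, the derivative-free benchmark the abstract refers to. The test function and starting points are illustrative assumptions, not taken from the paper.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n) * (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})).

    Derivative-free: only evaluations of f are used.
    """
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:  # guard against division by zero
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

# Illustrative example: the root of x^2 - 2 near sqrt(2)
root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
```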
We present an extension of a well-known result of Traub to increase the R-order of convergence of one-point iterative methods by a simple modification of this type of method. We consider the extension to one-point iterative methods with memory and present a particular case where Kurchatov's method is used. Moreover, we analyze the efficiency and the semilocal convergence of this method. Finally, two applications are presented, involving differentiable and nondifferentiable equations, that illustrate the above-mentioned results.
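Kurchatov's method is a derivative-free one-point method with memory: it replaces the derivative in Newton's iteration with the symmetric divided difference f[2x_n - x_{n-1}, x_{n-1}]. A minimal scalar sketch follows; the test equation and starting points are illustrative assumptions.

```python
def kurchatov(f, x0, x1, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n) / f[2*x_n - x_{n-1}, x_{n-1}],
    where f[u, v] = (f(u) - f(v)) / (u - v) is a first divided difference.

    Uses memory (two previous iterates) but no derivatives.
    """
    for _ in range(max_iter):
        u, v = 2 * x1 - x0, x0
        dd = (f(u) - f(v)) / (u - v)
        if dd == 0:  # guard against a degenerate divided difference
            break
        x2 = x1 - f(x1) / dd
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Illustrative example: the root of x^2 - 2 near sqrt(2)
root = kurchatov(lambda x: x * x - 2.0, 1.0, 1.5)
```

Because the divided difference only requires function values, the iteration also applies to nondifferentiable equations, which is the setting of one of the paper's applications.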
By applying a Gröbner-Shirshov basis of the symmetric group Sn, we give two formulas for Schubert polynomials, either of which involves only nonnegative monomials. We also prove some combinatorial properties of Schubert polynomials. As applications, we give two algorithms to calculate the structure constants for Schubert polynomials, one of which depends on Monk’s formula.
The main purpose of this paper is to generalize and improve the result given in [2] on the inequality for the difference of two integral means, which can also be represented as the difference of two divided differences.
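The link between integral means and divided differences rests on the standard identity (1/(b-a)) ∫_a^b f = (F(b) - F(a))/(b-a) = F[a, b], where F is an antiderivative of f; hence a difference of two integral means is a difference of two first divided differences of F. A numeric check of this identity, with the illustrative choice f(x) = x^2, F(x) = x^3/3 (not from the paper):

```python
def divided_difference(F, a, b):
    """First divided difference F[a, b] = (F(b) - F(a)) / (b - a),
    which equals the integral mean of f = F' over [a, b]."""
    return (F(b) - F(a)) / (b - a)

F = lambda x: x ** 3 / 3.0  # antiderivative of f(x) = x^2

# Compare against a midpoint-rule estimate of the integral mean of f
a, b, n = 0.0, 2.0, 100000
h = (b - a) / n
quad_mean = sum((a + (i + 0.5) * h) ** 2 for i in range(n)) / n
dd_mean = divided_difference(F, a, b)
```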
Diminishing rates of change, or downward slopes, of a function known only through a set of measured values are often assumed in science and economics. Quite frequently, however, the data have lost concavity due to errors in the measuring process, and occasionally a few of the data are outliers. We present the main idea of a method that makes the least sum of absolute changes to the measured values of a concave function contaminated with random errors in order to restore concavity; we illustrate the optimality conditions by a numerical example and provide an interpretation of its results. The method uses a descent-direction algorithm that employs Karush-Kuhn-Tucker-like parameters, which are both important for characterizing the solution and useful for practical analyses. The example is an application to modeling human life expectancy data against Gross National Income per capita for 159 countries in 2007, quantities which in theory exhibit diminishing rates of change. Concavity is expressed in terms of non-positive second divided differences of the smoothed data, which gives a linear programming calculation that is subsequently solved by this algorithm. The results are analyzed in some detail, intended as a guide to potentially similar applications that may arise in several fields.
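The concavity constraints stated in the abstract, non-positive second divided differences y[x_{i-1}, x_i, x_{i+1}] <= 0 at every interior point, can be checked directly from the data. A small helper sketch (the sample data are illustrative, not the life-expectancy data of the paper):

```python
def second_divided_differences(x, y):
    """Return y[x_{i-1}, x_i, x_{i+1}] for each interior index i."""
    dds = []
    for i in range(1, len(x) - 1):
        d_left = (y[i] - y[i - 1]) / (x[i] - x[i - 1])
        d_right = (y[i + 1] - y[i]) / (x[i + 1] - x[i])
        dds.append((d_right - d_left) / (x[i + 1] - x[i - 1]))
    return dds

def is_concave(x, y, tol=1e-12):
    """Data are concave iff every second divided difference is <= 0."""
    return all(d <= tol for d in second_divided_differences(x, y))

# y = sqrt(x) is concave; measurement noise can violate the constraints
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
concave = is_concave(xs, [v ** 0.5 for v in xs])
```

In the paper's setting, these inequalities become the constraints of a linear program whose objective is the sum of absolute changes to the data; the helper above only expresses the feasibility side of that formulation.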