  • Article

    CONSTRUCTION OF DERIVATIVE-FREE ITERATIVE METHODS FROM CHEBYSHEV'S METHOD

    From some modifications of Chebyshev's method, we consider a uniparametric family of iterative methods that are more efficient than Newton's method. We then construct two iterative methods from this family in the same way that the Secant method is obtained from Newton's method. These iterative methods do not use derivatives in their algorithms, and one of them is more efficient than the Secant method, the classical method with this feature.
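    As a point of reference for the derivative-free construction described above, the classical Secant method can be sketched as follows (the function names and the sample equation are illustrative, not taken from the paper):

    ```python
    def secant(f, x0, x1, tol=1e-12, max_iter=100):
        """Classical Secant method: replaces f'(x_n) in Newton's step
        with the divided difference (f(x_n) - f(x_{n-1}))/(x_n - x_{n-1}),
        so no derivative is needed."""
        for _ in range(max_iter):
            f0, f1 = f(x0), f(x1)
            if f1 - f0 == 0:
                raise ZeroDivisionError("zero divided difference")
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
            if abs(x2 - x1) < tol:
                return x2
            x0, x1 = x1, x2
        return x1

    # Example: root of x^2 - 2 near sqrt(2)
    root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
    ```

    The Secant method needs one new function evaluation per step and converges with order (1+√5)/2 ≈ 1.618, which is the efficiency benchmark the derivative-free methods in the abstract are compared against.
    
    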

  • Article

    A Traub type result for one-point iterative methods with memory

    We present an extension of a well-known result of Traub that increases the R-order of convergence of one-point iterative methods through a simple modification of such methods. We extend the result to one-point iterative methods with memory and present a particular case in which Kurchatov's method is used. Moreover, we analyze the efficiency and the semilocal convergence of this method. Finally, two applications involving differentiable and nondifferentiable equations illustrate the above results.
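    Kurchatov's method, the particular case mentioned above, is a one-point method with memory that reuses the previous iterate in a symmetric divided difference. A minimal sketch (the test equation is illustrative, not one of the paper's applications):

    ```python
    def kurchatov(f, x0, x1, tol=1e-12, max_iter=100):
        """Kurchatov's method: x_{n+1} = x_n - f(x_n)/f[2x_n - x_{n-1}, x_{n-1}],
        where f[a, b] = (f(a) - f(b))/(a - b). Derivative-free, with memory:
        each step reuses the previous iterate x_{n-1}."""
        for _ in range(max_iter):
            a, b = 2.0 * x1 - x0, x0
            dd = (f(a) - f(b)) / (a - b)  # symmetric divided difference
            x2 = x1 - f(x1) / dd
            if abs(x2 - x1) < tol:
                return x2
            x0, x1 = x1, x2
        return x1

    # Example: the real root of x^3 - x - 2 (approximately 1.5214)
    root = kurchatov(lambda x: x ** 3 - x - 2.0, 1.0, 2.0)
    ```

    Because the divided difference spans the point 2x_n − x_{n−1}, the method achieves second-order convergence without derivatives, which is why it pairs naturally with nondifferentiable equations.
    
    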

  • Article

    An Optimal Eighth-Order Scheme for Multiple Zeros of Univariate Functions

    We construct, for the first time, an optimal eighth-order scheme that works for multiple zeros with multiplicity m ≥ 1. Previously, the maximum convergence order of multi-point iterative schemes for multiple zeros reported in the literature was six. The main contribution of this study is therefore a new higher-order, optimal scheme for multiple zeros. In addition, we present an extensive convergence analysis whose main theorem confirms the theoretical eighth-order convergence of the proposed scheme. Moreover, we consider several real-life problems containing simple as well as multiple zeros in order to compare with existing robust iterative schemes. Finally, on the basis of the numerical results obtained, we conclude that our iterative methods perform far better than existing methods in terms of residual error, computational order of convergence, and the difference between two consecutive iterations.
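    The computational order of convergence used as a comparison metric above can be estimated from successive iterates. A minimal sketch, using the classical multiplicity-aware modification of Newton's method as the iteration (the test function and multiplicity are illustrative, not the paper's eighth-order scheme):

    ```python
    import math

    def coc(xs):
        """Estimate the computational order of convergence from the
        last four iterates (standard three-difference estimate)."""
        d1 = abs(xs[-1] - xs[-2])
        d2 = abs(xs[-2] - xs[-3])
        d3 = abs(xs[-3] - xs[-4])
        return math.log(d1 / d2) / math.log(d2 / d3)

    def modified_newton(f, df, m, x, n=5):
        """Modified Newton x_{k+1} = x_k - m*f(x_k)/f'(x_k): restores
        second-order convergence at a zero of known multiplicity m
        (plain Newton degrades to first order there)."""
        xs = [x]
        for _ in range(n):
            x = x - m * f(x) / df(x)
            xs.append(x)
        return xs

    # f(x) = (x - 1)^2 * e^x has a zero of multiplicity m = 2 at x = 1.
    f = lambda x: (x - 1.0) ** 2 * math.exp(x)
    df = lambda x: math.exp(x) * ((x - 1.0) ** 2 + 2.0 * (x - 1.0))
    xs = modified_newton(f, df, 2, 2.0)
    ```

    For the eighth-order scheme of the abstract, the same estimator should return a value close to 8 near a multiple zero; here it returns a value close to 2, confirming the second-order behavior of the modified Newton iteration.
    
    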

  • Article

    On a Newton-type method under weak conditions with dynamics

    In this paper, we present new cubically convergent Newton-type iterative methods with dynamics for solving nonlinear algebraic equations under weak conditions. The proposed methods are free from second-order derivatives and work well when f′(x) = 0. Numerical results show that the proposed methods perform better when Newton's method fails or diverges, and compete well with existing methods of the same order. Fractal patterns of the different methods also support the numerical results and illustrate the convergence, divergence, and stability of the methods with respect to different roots.
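    The kind of Newton failure that motivates such modifications is easy to reproduce. A standard illustration (this cycling example is a textbook case, not one of the paper's test problems):

    ```python
    def newton(f, df, x, n=20):
        """Plain Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k)."""
        for _ in range(n):
            x = x - f(x) / df(x)
        return x

    # Classic failure case: for f(x) = x^3 - 2x + 2 with x0 = 0, Newton
    # cycles forever between 0 and 1 instead of finding the real root
    # near x = -1.769.
    f = lambda x: x ** 3 - 2.0 * x + 2.0
    df = lambda x: 3.0 * x * x - 2.0
    x = newton(f, df, 0.0)
    ```

    Plotting which starting points converge to which root (and which never converge) produces exactly the fractal basin-of-attraction patterns the abstract uses to compare the stability of the methods.
    
    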

  • Chapter

    Two Modified Newton-Type Iterative Methods for Solving Nonlinear Equations

    With the rapid development of information science and engineering technology, nonlinear problems have become an important research topic in numerical analysis. In this paper, iterative methods for solving nonlinear equations are studied. Two modified Newton-type algorithms for solving nonlinear equations are proposed and analyzed, whose orders of convergence are six and seven, respectively. Both methods are free from second derivatives. The efficiency indices of the presented methods are 1.431 and 1.476, respectively, both better than the 1.414 of the classical Newton's method. Numerical experiments demonstrate the performance of the presented algorithms.
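    The efficiency indices quoted above follow from the standard Ostrowski formula p^(1/d), where p is the order of convergence and d the number of function evaluations per iteration. A quick check (assuming five evaluations per step for the sixth- and seventh-order methods, which is consistent with the stated figures):

    ```python
    def efficiency_index(order, evals):
        """Ostrowski efficiency index p**(1/d): convergence order p
        per iteration, with d functional evaluations per iteration."""
        return order ** (1.0 / evals)

    sixth = efficiency_index(6, 5)    # sixth-order method, ~1.431
    seventh = efficiency_index(7, 5)  # seventh-order method, ~1.476
    newton = efficiency_index(2, 2)   # Newton: f and f' per step, ~1.414
    ```

    The index rewards squeezing higher convergence order out of the same number of evaluations, which is why both proposed methods beat Newton's method despite needing more work per step.
    
    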