From some modifications of Chebyshev's method, we consider a uniparametric family of iterative methods that are more efficient than Newton's method, and we then construct two iterative methods from it in the same way that the Secant method is obtained from Newton's method. These iterative methods do not use derivatives in their algorithms, and one of them is more efficient than the Secant method, the classical method with this feature.
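For reference, the classical Secant method alluded to above replaces the derivative in Newton's method by a divided difference of the two most recent iterates; a minimal statement of that classical scheme (not of the new derivative-free methods proposed here) is

$$x_{n+1} = x_n - \frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}\, f(x_n), \qquad n \ge 1,$$

which requires only one new function evaluation per step and has order of convergence $(1+\sqrt{5})/2 \approx 1.618$.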
We present an extension of a well-known result of Traub that increases the R-order of convergence of one-point iterative methods by a simple modification of this type of method. We consider the extension to one-point iterative methods with memory and present a particular case in which Kurchatov's method is used. Moreover, we analyze the efficiency and the semilocal convergence of this method. Finally, two applications involving differentiable and nondifferentiable equations are presented that illustrate the preceding results.
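For context, the classical scalar form of Kurchatov's method, a derivative-free scheme with memory built on the divided difference $f[2x_n - x_{n-1},\, x_{n-1}]$, reads

$$x_{n+1} = x_n - \frac{2(x_n - x_{n-1})}{f(2x_n - x_{n-1}) - f(x_{n-1})}\, f(x_n), \qquad n \ge 1.$$

This is only the standard formulation of Kurchatov's scheme, not the modified method with increased R-order analyzed in the paper.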
We construct, for the first time, an optimal eighth-order scheme that works for multiple zeros of multiplicity m≥1. Previously, the maximum order of convergence of multi-point iterative schemes for multiple zeros reported in the literature was six. The main contribution of this study is therefore a new higher-order, optimal scheme for multiple zeros. In addition, we present an extensive convergence analysis with a main theorem that theoretically confirms the eighth-order convergence of the proposed scheme. Moreover, we consider several real-life problems containing both simple and multiple zeros in order to compare with existing robust iterative schemes. Finally, on the basis of the obtained numerical results, we conclude that our iterative methods perform far better than the existing methods in terms of residual error, computational order of convergence, and the difference between two consecutive iterations.
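One common way to estimate the computational order of convergence mentioned above is the standard approximation (a generic formula, not specific to this paper)

$$\rho \approx \frac{\ln\bigl(|x_{k+1} - x_k| / |x_k - x_{k-1}|\bigr)}{\ln\bigl(|x_k - x_{k-1}| / |x_{k-1} - x_{k-2}|\bigr)},$$

computed from the differences between consecutive iterates.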
In this paper, we present new cubically convergent Newton-type iterative methods with dynamics for solving nonlinear algebraic equations under weak conditions. The proposed methods are free from second-order derivatives and work well when f′(x)=0. Numerical results show that the proposed method performs better when Newton’s method fails or diverges and competes well with existing methods of the same order. Fractal patterns of the different methods also support the numerical results and illustrate the convergence, divergence, and stability of the methods with respect to different roots.
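The fractal patterns referred to above are typically basin-of-attraction plots: each starting point in a region of the complex plane is colored by the root to which the iteration converges. Below is a minimal sketch of how such a picture can be generated for the classical Newton iteration on f(z) = z³ − 1; it is a generic illustration, not the authors' proposed methods or test problems.

```python
import numpy as np
import matplotlib.pyplot as plt

# Basin-of-attraction sketch for Newton's method applied to f(z) = z^3 - 1.
# Each grid point is iterated a fixed number of times and colored by the
# root it ends up closest to; the boundaries between basins are fractal.
f = lambda z: z**3 - 1
df = lambda z: 3 * z**2
roots = np.array([1.0, -0.5 + 0.5j * np.sqrt(3), -0.5 - 0.5j * np.sqrt(3)])

x = np.linspace(-2.0, 2.0, 600)
X, Y = np.meshgrid(x, x)
Z = X + 1j * Y

with np.errstate(divide="ignore", invalid="ignore"):
    for _ in range(40):          # fixed number of Newton steps
        Z = Z - f(Z) / df(Z)

# Index of the nearest root for every starting point.
basin = np.argmin(np.abs(Z[..., None] - roots), axis=-1)

plt.imshow(basin, extent=(-2, 2, -2, 2), origin="lower")
plt.title("Basins of attraction: Newton's method for z^3 - 1")
plt.show()
```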
With the rapid development of information science and engineering technology, nonlinear problems have become an important research topic in the field of numerical analysis. In this paper, iterative methods for solving nonlinear equations are studied. Two modified Newton-type algorithms for solving nonlinear equations are proposed and analyzed, whose orders of convergence are six and seven, respectively. Both methods are free from second derivatives. The efficiency indices of the presented methods are 1.431 and 1.476, respectively, both better than that of the classical Newton’s method, 1.414. Some numerical experiments demonstrate the performance of the presented algorithms.
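The efficiency indices quoted above follow the classical Ostrowski definition p^{1/d}, where p is the order of convergence and d is the number of function (or derivative) evaluations per iteration. Assuming d = 5 evaluations per iteration for both proposed methods, an assumption consistent with the stated values, one gets

$$6^{1/5} \approx 1.431, \qquad 7^{1/5} \approx 1.476, \qquad 2^{1/2} \approx 1.414 \ \text{(Newton)}.$$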