This paper discusses studies on the performance of a parallel iterative algorithm implemented on an array of transputers connected in a mesh configuration. The iterative algorithm under consideration is the finite difference method for the solution of partial differential equations. Analytical expressions for the execution times of the various steps of the algorithm are derived by studying its computation and communication characteristics. These expressions are validated by comparing the theoretical performance results with the experimental values obtained on a transputer array. The analytical model is then used to estimate the performance of the algorithm for a varying number of transputers in the array and for varying grid sizes. An important objective of this paper is to study the influence of the convergence detection overhead on the performance of the algorithm, and we present an approach to minimize this overhead. Convergence detection is one of the dominant factors affecting the performance of the algorithm, since it involves a substantial amount of computation and communication. To reduce this overhead, the proposed algorithm checks for convergence only once every kc iterations. A method for determining an optimal value of kc is given. Further, the time taken for convergence detection is estimated for the best-case, worst-case, and average-case situations.
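The convergence-check amortization described above can be sketched in serial form; the Jacobi sweep, grid size, tolerance, and kc value below are illustrative stand-ins for the transputer implementation:

```python
import numpy as np

def jacobi_solve(u, tol=1e-6, kc=10, max_iter=10000):
    """Jacobi iteration for Laplace's equation on a 2-D grid.

    Convergence is tested only once every `kc` sweeps, so the
    relatively expensive global norm computation (a reduction over
    all grid points, and a global communication step on a transputer
    array) is amortized over kc iterations.
    """
    for it in range(1, max_iter + 1):
        new = u.copy()
        new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:])
        if it % kc == 0:  # convergence check only every kc iterations
            if np.max(np.abs(new - u)) < tol:
                return new, it
        u = new
    return u, max_iter

# usage: unit square, boundary value 1 on one edge, 0 elsewhere
grid = np.zeros((32, 32))
grid[0, :] = 1.0
sol, iters = jacobi_solve(grid)
```

The check-every-kc trade-off is between the cost of the skipped reductions and the (at most kc − 1) extra sweeps performed after convergence has actually occurred.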
This study concerns the solution of a class of nonlinear second order partial differential equations on the Connection Machine CM-2 massively parallel computer. To solve the nonlinear problem, an “outer” successive approximation technique is first applied in order to iteratively linearize the problem. This results in a sequence of systems of linear equations to be solved for successive nonlinear iterates. A preconditioned conjugate gradient (CG) method is used to solve each “inner” linear system. Either Jacobi preconditioning or a fast direct solver is employed as preconditioner for the conjugate gradient method. The implementation of these methods on the Connection Machine is discussed, and the results of numerical experiments with nested iteration are presented.
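The inner solver can be sketched as follows; this is a generic serial Jacobi-preconditioned CG on a stand-in SPD model problem, not the CM-2 data-parallel implementation:

```python
import numpy as np

def pcg_jacobi(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradients with Jacobi (diagonal) preconditioning.

    M = diag(A) is inverted elementwise, which is why this
    preconditioner maps so naturally onto a massively data-parallel
    machine: each grid point scales its own residual component.
    """
    Minv = 1.0 / np.diag(A)          # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# usage: SPD model problem (1-D discrete Laplacian)
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg_jacobi(A, b)
```

In the paper's setting, each outer nonlinear iterate supplies a fresh matrix A and right-hand side b to this inner loop.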
In this paper, we use the decomposition technique of Noor and Noor [M. A. Noor and K. I. Noor, Three-step iterative methods for nonlinear equations, preprint (2006)] to suggest and analyze a new iterative method for solving integral equations. The development of this method is very simple compared with that of other methods. Several numerical examples are given to illustrate the efficiency and performance of the new method. The results reveal that the proposed method is very effective and simple, and it can be considered an improvement of existing methods.
P. Jarratt has developed a family of fourth-order optimal methods. He suggested two members of the family. The dynamics of one of those was discussed previously. Here we show that the family can be written using a weight function and analyze all members of the family to find the best performer.
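One common way to write Jarratt's fourth-order member (one f and two f′ evaluations per step, hence optimal) is sketched below; rewriting the bracketed ratio as a function of f′(y)/f′(x) gives the weight-function form used to analyze the whole family. The test equation is illustrative:

```python
def jarratt(f, fp, x, tol=1e-12, max_iter=50):
    """Jarratt's fourth-order method for f(x) = 0."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d = fp(x)
        y = x - (2.0 / 3.0) * fx / d           # two-thirds Newton step
        w = (3*fp(y) + d) / (6*fp(y) - 2*d)    # Jarratt weight factor
        x = x - w * fx / d
    return x

# usage: sqrt(2) as the positive root of x^2 - 2
root = jarratt(lambda t: t*t - 2, lambda t: 2*t, 1.0)
```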
In this work, we construct a new family of inverse iterative numerical techniques for extracting all roots of a nonlinear equation simultaneously. Convergence analysis verifies that the proposed family of methods has local 10th-order convergence. Among the test models investigated are blood rheology, a fractional nonlinear equation model, fluid permeability in biogels, and beam localization models. In comparison to other methods, the family of inverse simultaneous iterative techniques refines initial estimates to the exact roots within a given tolerance while using fewer function evaluations in each iterative step. Numerical results, basins of attraction for the fractional nonlinear equation, and residual graphs are presented in detail for the simultaneous iterative techniques. The newly developed simultaneous iterative techniques were thoroughly investigated and proven to be efficient, robust, and reliable in their domain.
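For orientation, the classical Weierstrass/Durand–Kerner scheme below approximates all roots of a polynomial simultaneously; it is only second-order and serves as a baseline sketch of the simultaneous-iteration idea, not the tenth-order inverse family of the paper:

```python
def durand_kerner(coeffs, tol=1e-12, max_iter=200):
    """Weierstrass/Durand-Kerner simultaneous iteration for all roots
    of a monic polynomial (`coeffs` lists coefficients from the
    highest degree down, leading coefficient 1)."""
    n = len(coeffs) - 1
    def p(z):                        # Horner evaluation
        v = 0j
        for c in coeffs:
            v = v * z + c
        return v
    # standard complex starting values spread around the origin
    z = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(max_iter):
        new = []
        for i in range(n):
            denom = 1.0 + 0j
            for j in range(n):
                if j != i:
                    denom *= z[i] - z[j]
            new.append(z[i] - p(z[i]) / denom)  # Weierstrass correction
        if max(abs(a - b) for a, b in zip(new, z)) < tol:
            return new
        z = new
    return z

# usage: roots of z^3 - 6z^2 + 11z - 6 = (z-1)(z-2)(z-3)
roots = durand_kerner([1, -6, 11, -6])
```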
We present an extension of a well-known result of Traub that increases the R-order of convergence of one-point iterative methods by a simple modification of this type of method. We consider the extension to one-point iterative methods with memory and present a particular case in which Kurchatov's method is used. Moreover, we analyze the efficiency and the semilocal convergence of this method. Finally, two applications involving differentiable and nondifferentiable equations are presented to illustrate the above-mentioned results.
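Kurchatov's derivative-free method with memory, the particular case mentioned above, replaces f′ by the divided difference f[2xₙ − xₙ₋₁, xₙ₋₁]; a minimal sketch on an illustrative nondifferentiable equation:

```python
def kurchatov(f, x0, x1, tol=1e-12, max_iter=100):
    """Kurchatov's method with memory:
        x_{n+1} = x_n - f(x_n) / f[2*x_n - x_{n-1}, x_{n-1}].
    Derivative-free, hence applicable to nondifferentiable f."""
    x_prev, x = x0, x1
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        s, t = 2*x - x_prev, x_prev
        dd = (f(s) - f(t)) / (s - t)   # Kurchatov divided difference
        x_prev, x = x, x - fx / dd
    return x

# usage: nondifferentiable equation |x| + x - 1 = 0 (root x = 1/2)
root = kurchatov(lambda t: abs(t) + t - 1, 0.0, 1.0)
```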
A model for H2 inside single-walled carbon nanotubes is outlined. ARPACK (the Arnoldi package), a robust iterative matrix-vector eigenvalue software library, is used to determine the allowed quantum states of H2 inside various carbon nanotubes. This information is used to construct the equilibrium constants for H2 adsorption as a function of temperature for a variety of CNTs.
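In Python, ARPACK's iterative Lanczos/Arnoldi machinery is exposed through scipy.sparse.linalg.eigsh; the sketch below uses a stand-in one-dimensional harmonic-well Hamiltonian, not the actual multidimensional H2@nanotube Hamiltonian:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh  # SciPy's interface to ARPACK

# Stand-in Hamiltonian: 1-D particle in a harmonic well, second-order
# finite differences, atomic-style units (hbar = m = omega = 1).
n, L = 500, 20.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]
kinetic = diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) * (-0.5 / h**2)
H = kinetic + diags(0.5 * x**2)

# lowest six eigenvalues via the iterative Arnoldi/Lanczos process;
# exact values for this model are 0.5, 1.5, 2.5, ...
evals = np.sort(eigsh(H, k=6, which='SA', return_eigenvectors=False))
```

Only matrix-vector products with H are needed, which is what makes ARPACK practical for the very large basis sets the adsorption calculations require.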
Hypochlorous acid, HOCl, is an important intermediate in the O(1D) + HCl reactive system. Due in part to a large number of vibrational bound states (over 800), extremely large direct product basis sets (around 300,000 functions) are required to compute the energy levels just below the dissociation threshold. This situation, combined with a very high density of states, results in difficult convergence for iterative methods; e.g., Lanczos requires 50,000 iterations, and filter diagonalization uses 60,000 iterations. In contrast, using new methodologies, we are able to compute the highest-lying bound states with only 271 iterations, although the CPU cost per iteration is substantially greater. Lower-lying states are also computed, for a fraction of the CPU cost of the highest-energy calculation.
A steady-state rolling problem with a rigid-plastic, strain-rate sensitive, slightly compressible material model and a nonlinear Coulomb friction law is considered. For the corresponding variational problem, existence, uniqueness, and convergence results, as the compressibility parameter approaches zero, are obtained. A regularized variational problem is stated and studied, and its finite element approximation is analyzed. Two computational algorithms are proposed and applied to solve an illustrative example.
In this study, we design a new efficient family of sixth-order iterative methods for solving scalar nonlinear equations as well as systems of nonlinear equations. The main beauty of the proposed family is that only one inverse of the Jacobian matrix has to be computed in the case of nonlinear systems, which reduces the computational cost. The convergence properties are fully investigated, along with two main theorems describing the order of convergence. Using tools from complex dynamics, the stability of the family is analyzed and its stable members are identified; from this study, we obtain more information about these methods in order to detect those with the best stability properties. In addition, we present numerical work confirming that the deduced order of convergence of the proposed family holds for scalar equations as well as for systems of nonlinear equations. Further, we show the implementation of the proposed techniques on real-world problems such as the Van der Pol equation and the Hammerstein integral equation.
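The single-Jacobian-inverse saving can be illustrated by a generic frozen-Jacobian multi-step Newton sketch (not the authors' sixth-order family): the Jacobian is factorized once per outer iteration and that factorization is reused for every substep, so no second inverse is ever formed. The test system is illustrative:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def frozen_jacobian_newton(F, J, x, substeps=3, tol=1e-10, max_iter=50):
    """Multi-step Newton: one LU factorization of the Jacobian per
    outer iteration, reused for all inner substeps -- the same device
    that keeps higher-order families cheap for systems."""
    for _ in range(max_iter):
        lu = lu_factor(J(x))           # single Jacobian factorization
        for _ in range(substeps):      # reuse it for every substep
            x = x - lu_solve(lu, F(x))
        if np.linalg.norm(F(x)) < tol:
            return x
    return x

# usage: solve x^2 + y^2 = 4, x*y = 1 near (2, 0.5)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4, v[0]*v[1] - 1])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
sol = frozen_jacobian_newton(F, J, np.array([2.0, 0.5]))
```

Each substep then costs only one residual evaluation and one triangular solve instead of a fresh factorization.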
In this paper, we present a new and interesting optimal scheme of order eight, in a general way, for solving nonlinear equations numerically. The beauty of our scheme is that it is capable of producing further new and interesting optimal schemes of order eight from every existing optimal fourth-order scheme whose first substep employs Newton’s method. The construction of this scheme is based on a rational function approach. The theoretical and computational properties of the proposed scheme are fully investigated, along with a main theorem which establishes the order of convergence and the asymptotic error constant. Several numerical examples are given and analyzed in detail to demonstrate the faster convergence and higher computational efficiency of our methods.
This paper aims at providing new modifications of Steffensen’s scheme for solving nonlinear equations. The new general approach is discussed in detail theoretically. It is shown that a very high R-order of convergence, along with a competitive computational efficiency index, is achieved. Dynamical behaviors of the schemes are given to show their enlarged radii of convergence. Some experiments are also brought forward to support the given theory.
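The classical Steffensen scheme underlying these modifications replaces f′ in Newton's method by the divided difference f[x, x + f(x)], remaining derivative-free at second order; a minimal sketch on an illustrative equation:

```python
import math

def steffensen(f, x, tol=1e-12, max_iter=100):
    """Steffensen's method: Newton's iteration with f'(x) replaced
    by the divided difference (f(x + f(x)) - f(x)) / f(x)."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        g = (f(x + fx) - fx) / fx      # derivative-free slope estimate
        x = x - fx / g
    return x

# usage: fixed point of cos, i.e. root of cos(x) - x (~0.739085)
root = steffensen(lambda t: math.cos(t) - t, 0.5)
```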
In this paper, we propose fast iterative methods based on the Newton–Raphson–Kantorovich approximation in function space [Bellman and Kalaba (1965)] to solve three kinds of Lane–Emden type problems. First, a reformulation of the problem is performed using a quasilinearization technique, which leads to an iterative scheme. Each iteration of this scheme consists of an ordinary differential equation that uses the approximate solution from the previous iteration to yield the unknown solution of the current iteration. At every iteration, a further discretization of the problem provides the numerical solution with low computational cost. Numerical simulations show the accuracy as well as the efficiency of the method.
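Quasilinearization of this kind can be sketched on a simple model two-point BVP (y″ = 1.5y², y(0) = 4, y(1) = 1, with exact solution y = 4/(1+x)², standing in for the singular Lane–Emden problems of the paper): each outer iteration linearizes the right-hand side about the current iterate and solves a linear finite-difference system.

```python
import numpy as np

# Quasilinearization (Bellman-Kalaba) on the model BVP
#   y'' = 1.5*y^2,  y(0) = 4,  y(1) = 1,   exact: y = 4/(1+x)^2
n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
y = 4 + (1 - 4) * x                   # initial guess: straight line

for _ in range(20):                   # outer quasilinearization loop
    # linearize 1.5*y^2 about the current iterate Y:
    #   1.5*y^2 ~ 3*Y*y - 1.5*Y^2  =>  a LINEAR BVP per iteration
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0         # Dirichlet boundary conditions
    b[0], b[-1] = 4.0, 1.0
    for i in range(1, n - 1):
        A[i, i-1] = A[i, i+1] = 1.0 / h**2
        A[i, i] = -2.0 / h**2 - 3.0 * y[i]
        b[i] = -1.5 * y[i]**2
    y_new = np.linalg.solve(A, b)
    if np.max(np.abs(y_new - y)) < 1e-12:
        y = y_new
        break
    y = y_new
```

Each pass costs one linear (here, tridiagonal-structured) solve, which is what keeps the per-iteration computational cost low.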
In this paper, we study the local convergence of a family of iterative methods with sixth- and seventh-order convergence for nonlinear equations, which was established by Cordero et al. [“A family of iterative methods with sixth and seventh order convergence for nonlinear equations,” Math. Comput. Model. 52 (2010) 1490–1496]. Earlier studies have shown convergence using Taylor expansions and hypotheses reaching up to the sixth derivative. In our work, we establish a local convergence theorem using hypotheses only on the first derivative of the function and Lipschitz constants. We also obtain error bounds and radii of convergence from our results. Hence, the applicability of the methods is expanded. Moreover, we consider several numerical examples and obtain the radii of convergence centered at the solution for different values of the parameter 𝜃 of the family. Furthermore, the basins of attraction of the family for different parameter values are also studied, which allows us to distinguish between the good and bad members of the family in terms of convergence and stability properties, and helps us find the members with the best stability behavior.
We develop some new iterative methods, using a decomposition technique, for solving problems involving nonlinear equations. Importantly, these methods generalize some well-known existing methods. We prove the convergence criteria of our newly proposed methods. Various test examples are considered to validate the efficiency of the new methods, and we give numerical as well as graphical analyses of two mathematical models to endorse their performance.
In this paper, we derive new multi-parametric families of iterative methods whose orders range from six to eight, for solving nonlinear systems. Based on a generating function method known in the literature, we construct these families in the most general way possible in order to include some well-known methods as special cases. Several applied problems are solved to check the performance of our methods and other existing ones and to verify the theoretical results. It is found that our methods are competitive in performance compared to the other methods. Moreover, the basin of attraction method is introduced for nonlinear systems to confirm our findings and to choose the best performers.
In this paper, we present some efficient iterative methods for solving nonlinear equations (respectively, systems of nonlinear equations) by using modified homotopy perturbation methods. We also discuss the convergence criteria of the present methods. Some numerical examples are given to illustrate the performance and efficiency of the proposed methods.
In this work, we develop a simple yet robust and highly practical algorithm for constructing iterative methods of higher convergence orders. The algorithm can be easily implemented in software packages to achieve desired convergence orders. Convergence analysis shows that the algorithm can produce methods of various convergence orders, which is also supported by the numerical work. The algorithm is shown to converge even if the derivative of the function vanishes during the iterative process. Computational results ascertain that the developed algorithm is efficient and demonstrates equal or better performance compared with other well-known methods and the classical Newton method.
Mathematical models that utilize network representations have proven to be valuable tools for investigating biological systems. Often, dynamic models are not feasible due to their complex functional forms, which rely on unknown rate parameters. Network propagation has been shown to accurately capture the sensitivity of nodes to changes in other nodes, without the need for dynamic systems and parameter estimation. Node sensitivity measures rely solely on network structure and encode a sensitivity matrix that serves as a good approximation to the Jacobian matrix. The use of a propagation-based sensitivity matrix as a Jacobian has important implications for network optimization. This work develops Integrated Graph Propagation and OptimizatioN (IGPON), which aims to identify optimal perturbation patterns that can drive networks to desired target states. IGPON embeds propagation into an objective function that aims to minimize the distance between the current observed state and a target state. Optimization is performed using Broyden’s method with the propagation-based sensitivity matrix as the Jacobian. IGPON is applied to simulated random networks, DREAM4 in silico networks, and over-represented pathways from STAT6 knockout data and YBX1 knockdown data. Results demonstrate that IGPON is an effective way to optimize directed and undirected networks that are robust to uncertainty in the network structure.
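Broyden's method itself can be sketched generically; here the initial Jacobian is the identity and the toy "network map" is purely illustrative (IGPON would instead supply the propagation-based sensitivity matrix as the initial Jacobian):

```python
import numpy as np

def broyden(F, x, J0, tol=1e-10, max_iter=100):
    """Broyden's 'good' method: rank-one secant updates of an initial
    Jacobian estimate J0. In IGPON, J0 would be the propagation-based
    sensitivity matrix rather than an identity or finite-difference
    Jacobian."""
    J = J0.astype(float).copy()
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            return x
        s = np.linalg.solve(J, -Fx)              # quasi-Newton step
        x = x + s
        F_new = F(x)
        yv = F_new - Fx
        J += np.outer(yv - J @ s, s) / (s @ s)   # Broyden rank-one update
        Fx = F_new
    return x

# usage: drive a toy "network state" map tanh(W x) to a target state
rng = np.random.default_rng(0)
W = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
target = np.array([0.2, -0.1, 0.3])
F = lambda v: np.tanh(W @ v) - target
sol = broyden(F, np.zeros(3), np.eye(3))
```

The appeal for networks is that Broyden never recomputes a full Jacobian: each iteration refines the current estimate with a cheap rank-one correction.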
A standard optimal control problem is considered: one with a general functional, a nonlinear phase system, and a convex compact bounding set on the control function. On the basis of an optimized variation procedure and the constructive use of a nonclassical approximation of the cost functional, new variants of gradient methods are constructed. The quality of the proposed modifications is determined by a nonlocal descent property for bilinear problems and by the possibility of improving stationary controls in nonconvex problems.