Rate of convergence
{{Short description|Speed of convergence of a mathematical sequence}}
{{Differential equations}}
In mathematical analysis, particularly numerical analysis, the rate of convergence and order of convergence of a sequence that converges to a limit are any of several characterizations of how quickly that sequence approaches its limit. These are broadly divided into rates and orders of convergence that describe how quickly a sequence further approaches its limit once it is already close to it, called asymptotic rates and orders of convergence, and those that describe how quickly sequences approach their limits from starting points that are not necessarily close to their limits, called non-asymptotic rates and orders of convergence.
Asymptotic behavior is particularly useful for deciding when to stop a sequence of numerical computations, for instance once a target precision has been reached with an iterative root-finding algorithm, but pre-asymptotic behavior is often crucial for determining whether to begin a sequence of computations at all, since it may be impossible or impractical to ever reach a target precision with a poorly chosen approach. Asymptotic rates and orders of convergence are the focus of this article.
In practical numerical computations, asymptotic rates and orders of convergence follow two common conventions for two types of sequences: the first for sequences of iterations of an iterative numerical method and the second for sequences of successively more accurate numerical discretizations of a target. In formal mathematics, rates of convergence and orders of convergence are often described comparatively using asymptotic notation commonly called "big O notation," which can be used to encompass both of the prior conventions; this is an application of asymptotic analysis.
For iterative methods, a sequence <math>(x_k)</math> that converges to <math>L</math> is said to have asymptotic order of convergence <math>q \geq 1</math> and asymptotic rate of convergence <math>\mu</math> if

:<math>\lim_{k \to \infty} \frac{\left|x_{k+1} - L\right|}{\left|x_k - L\right|^q} = \mu.</math>
Where methodological precision is required, these rates and orders of convergence are known specifically as the rates and orders of Q-convergence, short for quotient-convergence, since the limit in question is a quotient of error terms. The rate of convergence may also be called the asymptotic error constant, and some authors will use rate where this article uses order.{{cite web |last=Senning |first=Jonathan R. |title=Computing and Estimating the Rate of Convergence |url=http://www.math-cs.gordon.edu/courses/ma342/handouts/rate.pdf |access-date=2020-08-07 |website=gordon.edu}} Series acceleration methods are techniques for improving the rate of convergence of the sequence of partial sums of a series and possibly its order of convergence, also.
Similar concepts are used for sequences of discretizations. For instance, ideally the solution of a differential equation discretized via a regular grid will converge to the solution of the continuous equation as the grid spacing goes to zero, and if so the asymptotic rate and order of that convergence are important properties of the gridding method. A sequence of approximate grid solutions <math>(y_k)</math> of some problem that converges to a true solution <math>S</math> with a corresponding sequence of regular grid spacings <math>(h_k)</math> that converge to 0 is said to have asymptotic order of convergence <math>q</math> and asymptotic rate of convergence <math>\mu</math> if

:<math>\lim_{k \to \infty} \frac{\left|y_k - S\right|}{h_k^q} = \mu,</math>
where the absolute value symbols stand for a metric for the space of solutions such as the uniform norm. Similar definitions also apply for non-grid discretization schemes such as the polygon meshes of a finite element method or the basis sets in computational chemistry: in general, the appropriate definition of the asymptotic rate will involve the asymptotic limit of the ratio of an approximation error term above to an asymptotic order power of a discretization scale parameter below.
In general, comparatively, one sequence <math>(a_k)</math> that converges to a limit <math>L_a</math> is said to asymptotically converge more quickly than another sequence <math>(b_k)</math> that converges to a limit <math>L_b</math> if

:<math>\lim_{k \to \infty} \frac{\left|a_k - L_a\right|}{\left|b_k - L_b\right|} = 0,</math>
and the two are said to asymptotically converge with the same order of convergence if the limit is any positive finite value. The two are said to be asymptotically equivalent if the limit is equal to one. These comparative definitions of rate and order of asymptotic convergence are fundamental in asymptotic analysis and find wide application in mathematical analysis as a whole, including numerical analysis, real analysis, complex analysis, and functional analysis.
Asymptotic rates of convergence for iterative methods
= Definitions =
Suppose that the sequence <math>(x_k)</math> of iterates of an iterative method converges to the limit number <math>L</math> as <math>k \to \infty</math>. The sequence is said to converge with order <math>q</math> to <math>L</math> and with a rate of convergence <math>\mu</math> if the limit of quotients of absolute differences of sequential iterates <math>x_k, x_{k+1}</math> from their limit <math>L</math> satisfies

:<math>\lim_{k \to \infty} \frac{\left|x_{k+1} - L\right|}{\left|x_k - L\right|^q} = \mu</math>

for some positive constant <math>\mu \in (0, 1)</math> if <math>q = 1</math> and <math>\mu \in (0, \infty)</math> if <math>q > 1</math>. Other, more technical definitions are needed if the sequence converges but the limit of sequential error quotients equals one{{cite journal |last=Van Tuyl |first=Andrew H. |year=1994 |title=Acceleration of convergence of a family of logarithmically convergent sequences |url=https://www.ams.org/journals/mcom/1994-63-207/S0025-5718-1994-1234428-2/S0025-5718-1994-1234428-2.pdf |journal=Mathematics of Computation |volume=63 |issue=207 |pages=229–246 |doi=10.2307/2153571 |jstor=2153571 |access-date=2020-08-02}} or the limit does not exist. This definition is technically called Q-convergence, short for quotient-convergence, and the rates and orders are called rates and orders of Q-convergence when that technical specificity is needed. {{slink||R-convergence}}, below, is an appropriate alternative when this limit does not exist.
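As an illustrative sketch of this definition (the iteration and variable names are chosen only for demonstration), the following Python snippet tabulates the quotients <math>|x_{k+1} - L| / |x_k - L|^q</math> for the Newton iteration <math>x_{k+1} = \tfrac{1}{2}(x_k + 2/x_k)</math>, which converges to <math>\sqrt{2}</math> with order <math>q = 2</math>; the quotients settle near the rate <math>\mu = 1/(2\sqrt{2}) \approx 0.354</math>.

<syntaxhighlight lang="python">
import math

# Newton iteration for the root sqrt(2) of f(x) = x**2 - 2:
#   x_{k+1} = (x_k + 2 / x_k) / 2.
# This sequence is known to converge with order q = 2 (quadratically).
L = math.sqrt(2)
x = 1.0
errors = []
for _ in range(6):
    errors.append(abs(x - L))
    x = (x + 2.0 / x) / 2.0

# The quotients |x_{k+1} - L| / |x_k - L|**q should approach the rate mu.
q = 2
for e_prev, e_next in zip(errors, errors[1:]):
    if e_prev > 0 and e_next > 0:  # stop once machine precision is reached
        print(e_next / e_prev**q)  # settles near 1 / (2 * sqrt(2)) ~ 0.354
</syntaxhighlight>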
Sequences with larger orders <math>q</math> converge more quickly than those with smaller order, and for a given order, those with smaller rates <math>\mu</math> converge more quickly than those with larger rates. This "smaller rates converge more quickly" behavior among sequences of the same order is standard, but it can be counterintuitive; therefore it is also common to define <math>-\log_{10} \mu</math> as the rate, which for sequences that converge with order 1 is the number of extra decimals of precision gained per iterate.
Integer powers of the error have common names: convergence with order <math>q = 1</math> and <math>\mu \in (0, 1)</math> is called linear convergence and the sequence is said to converge linearly to <math>L</math>; convergence with <math>q = 2</math> is called quadratic convergence; and convergence with <math>q = 3</math> is called cubic convergence. The order <math>q</math> need not be an integer, however; for example, the secant method converging to a regular, simple root has order equal to the golden ratio <math>\varphi \approx 1.618</math>.
The common names for integer orders of convergence connect to asymptotic big O notation, where the convergence of the quotient implies

:<math>\left|x_{k+1} - L\right| = O\left(\left|x_k - L\right|^q\right).</math>

These are linear, quadratic, and cubic polynomial expressions of the prior error when <math>q</math> is 1, 2, and 3, respectively.
In general, when <math display="inline">\lim_{k \to \infty} \frac{\left|x_{k+1} - L\right|}{\left|x_k - L\right|} = 0</math>, the sequence is said to converge Q-superlinearly to <math>L</math>, that is, faster than linearly. A sequence is said to converge Q-sublinearly to <math>L</math>, that is, slower than linearly, if <math display="inline">\lim_{k \to \infty} \frac{\left|x_{k+1} - L\right|}{\left|x_k - L\right|} = 1</math>. If, in addition, <math display="inline">\lim_{k \to \infty} \frac{\left|x_{k+1} - x_k\right|}{\left|x_k - x_{k-1}\right|} = 1</math>, then the sequence is said to converge logarithmically to <math>L</math>.
= R-convergence =
The definitions of Q-convergence rates have the shortcoming that they do not naturally capture the convergence behavior of sequences that do converge, but do not converge with an asymptotically constant rate with every step, so that the Q-convergence limit does not exist. One class of examples is the staggered geometric progressions that get closer to their limits only every other step or every several steps, for instance the example <math>(b_k) = \left(1, 1, \tfrac{1}{4}, \tfrac{1}{4}, \tfrac{1}{16}, \tfrac{1}{16}, \ldots, 4^{-\left\lfloor k/2 \right\rfloor}, \ldots\right)</math> detailed in the examples below.
In cases like these, a closely related but more technical definition of rate of convergence called R-convergence is more appropriate. The "R-" prefix stands for "root."{{Cite book |last1=Nocedal |first1=Jorge |title=Numerical Optimization |last2=Wright |first2=Stephen J. |publisher=Springer-Verlag |year=2006 |isbn=978-0-387-30303-1 |edition=2nd |location=Berlin, New York}}{{rp|620}} A sequence <math>(x_k)</math> that converges to <math>L</math> is said to converge at least R-linearly if there exists an error-bounding sequence <math>(\varepsilon_k)</math> such that

:<math>|x_k - L|\le\varepsilon_k\quad\text{for all }k</math>

and <math>(\varepsilon_k)</math> converges Q-linearly to zero; analogous definitions hold for R-superlinear convergence, R-sublinear convergence, R-quadratic convergence, and so on.

Any error-bounding sequence <math>(\varepsilon_k)</math> provides a lower bound on the rate and order of R-convergence, and the greatest lower bound gives the exact rate and order of R-convergence.

For the example staggered geometric progression <math>(b_k)</math> above, the sequence <math>(\varepsilon_k) = \left(2 \cdot (1/2)^k\right)</math> bounds the errors from above and converges Q-linearly to zero with rate <math>1/2</math>, so <math>(b_k)</math> converges R-linearly to zero with rate <math>1/2</math>.
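A short numerical check of this example, as a sketch (the bounding sequence is the one given above), verifies in Python that the error quotients of the staggered progression oscillate while the quotients of the bounding sequence are constant:

<syntaxhighlight lang="python">
# Staggered geometric progression x_k = 4**-(k // 2), converging to L = 0.
x = [4.0 ** -(k // 2) for k in range(12)]
# Q-linear quotients |x_{k+1}| / |x_k| oscillate between 1 and 1/4,
# so no Q-linear rate exists.
print([x[k + 1] / x[k] for k in range(11)])

# The error bound eps_k = 2 * (1/2)**k dominates |x_k - 0| for all k
# and converges Q-linearly with rate 1/2, giving R-linear convergence.
eps = [2.0 * 0.5 ** k for k in range(12)]
print(all(xk <= ek for xk, ek in zip(x, eps)))
print([eps[k + 1] / eps[k] for k in range(11)])
</syntaxhighlight>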
= Examples =
The geometric progression <math>(a_k) = \left(1, \tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{8}, \ldots, 2^{-k}, \ldots\right)</math> converges to <math>L = 0</math>. Plugging the sequence into the definition of Q-linear convergence shows that

:<math>\lim_{k \to \infty} \frac{\left|2^{-(k+1)} - 0\right|}{\left|2^{-k} - 0\right|} = \frac{1}{2}.</math>

Thus <math>(a_k)</math> converges Q-linearly with convergence rate <math>\mu = 1/2</math>; it is shown in the first plot of the figure below.
More generally, for any initial value <math>a_0</math> in the real numbers and a real common ratio <math>r</math> between −1 and 1, a geometric progression <math>(a_0 r^k)</math> converges linearly with rate <math>|r|</math>, and the sequence of partial sums of a geometric series <math>\left(\sum_{n=0}^k a_0 r^n\right)</math> also converges linearly with rate <math>|r|</math>. The same holds for geometric progressions and geometric series parameterized by any complex numbers <math>a_0 \in \mathbb{C}</math> and <math>r \in \mathbb{C}</math> with <math>|r| < 1</math>.
The staggered geometric progression <math>(b_k) = \left(1, 1, \tfrac{1}{4}, \tfrac{1}{4}, \tfrac{1}{16}, \tfrac{1}{16}, \ldots, 4^{-\left\lfloor k/2 \right\rfloor}, \ldots\right)</math>, using the floor function <math>\lfloor \cdot \rfloor</math> that gives the largest integer less than or equal to its argument, converges R-linearly to 0 with rate <math>1/2</math>, but it does not converge Q-linearly; see the second plot of the figure below. The defining Q-linear convergence limits do not exist for this sequence because one subsequence of error quotients starting from odd steps converges to 1 and another subsequence of quotients starting from even steps converges to 1/4; when two subsequences of a sequence converge to different limits, the sequence does not itself converge to a limit.
The sequence <math>(c_k) = \left(\tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{16}, \tfrac{1}{256}, \ldots, 2^{-2^k}, \ldots\right)</math> converges to zero Q-superlinearly. In fact, it is quadratically convergent with a quadratic convergence rate of 1. It is shown in the third plot of the figure below.
Finally, the sequence <math>(d_k) = \left(1, \tfrac{1}{2}, \tfrac{1}{3}, \tfrac{1}{4}, \ldots, \tfrac{1}{k+1}, \ldots\right)</math> converges to zero Q-sublinearly and logarithmically, and its convergence is shown as the fourth plot of the figure below.
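A quick numerical sketch (illustrative only) exhibits the logarithmic-convergence signature of this last example: both the error quotients and the quotients of successive differences approach 1.

<syntaxhighlight lang="python">
# d_k = 1 / (k + 1) converges to zero Q-sublinearly and logarithmically.
d = [1.0 / (k + 1) for k in range(2000)]

# Error quotients |d_{k+1} - 0| / |d_k - 0| tend to 1 (sublinear convergence)...
print(d[-1] / d[-2])
# ...and difference quotients |d_{k+1} - d_k| / |d_k - d_{k-1}| also tend to 1
# (logarithmic convergence).
print((d[-1] - d[-2]) / (d[-2] - d[-3]))
</syntaxhighlight>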
= Convergence rates to fixed points of recurrent sequences =
Recurrent sequences <math>x_{k+1} := f(x_k)</math>, called fixed point iterations, define discrete time autonomous dynamical systems and have important general applications in mathematics through various fixed-point theorems about their convergence behavior. When <math>f</math> is continuously differentiable and has a fixed point <math>p</math> with <math>f(p) = p</math> and <math>|f'(p)| < 1</math>, the fixed point is an attractive fixed point and the recurrent sequence will converge at least linearly to <math>p</math> for any starting value <math>x_0</math> sufficiently close to <math>p</math>. If in addition <math>f'(p) = 0</math> and <math>|f''(p)| < \infty</math>, then the recurrent sequence will converge at least quadratically, and so on. If instead <math>|f'(p)| > 1</math>, then the fixed point is repulsive and sequences cannot converge to <math>p</math> from its immediate neighborhoods, though they may still reach <math>p</math> directly from outside of those neighborhoods.
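For instance (a minimal sketch, with the fixed-point value hard-coded for checking), the iteration <math>x_{k+1} = \cos x_k</math> converges to the Dottie number <math>p \approx 0.739085</math>; since <math>|f'(p)| = |\sin p| \approx 0.6736 < 1</math>, the error quotients approach that value, indicating linear convergence with rate <math>\mu \approx 0.6736</math>.

<syntaxhighlight lang="python">
import math

# Fixed-point iteration x_{k+1} = cos(x_k). Its attractive fixed point is
# the Dottie number p = cos(p) ~ 0.739085, where |f'(p)| = |sin(p)| ~ 0.6736 < 1.
p = 0.7390851332151607
x = 1.0
for _ in range(30):
    x_next = math.cos(x)
    if x != p:
        # Error quotients approach |sin(p)| ~ 0.6736: Q-linear convergence.
        print(abs(x_next - p) / abs(x - p))
    x = x_next
</syntaxhighlight>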
= Order estimation =
A practical method to calculate the order of convergence for a sequence generated by a fixed point iteration is to calculate the following sequence, which converges to the order <math>q</math>:

:<math>q \approx \frac{\log \left|\dfrac{x_{k+1} - x_k}{x_k - x_{k-1}}\right|}{\log \left|\dfrac{x_k - x_{k-1}}{x_{k-1} - x_{k-2}}\right|}.</math>
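A direct transcription of this estimate into Python, as a sketch (the helper name and the Newton test sequence are illustrative only):

<syntaxhighlight lang="python">
import math

def estimate_order(xs):
    """Estimate the order of convergence q from the last four iterates,
    using differences of successive iterates in place of the unknown limit."""
    x0, x1, x2, x3 = xs[-4:]
    return (math.log(abs((x3 - x2) / (x2 - x1)))
            / math.log(abs((x2 - x1) / (x1 - x0))))

# Newton iteration for sqrt(2), which converges with order q = 2.
xs = [1.0]
for _ in range(5):
    xs.append((xs[-1] + 2.0 / xs[-1]) / 2.0)
print(estimate_order(xs))  # prints a value close to 2
</syntaxhighlight>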
For numerical approximation of an exact value <math>L</math> through a numerical method of order <math>q</math>, where the errors <math>x_k - L</math> are known, the order can be estimated analogously from quotients of successive errors:

:<math>q \approx \frac{\log \left|\dfrac{x_{k+1} - L}{x_k - L}\right|}{\log \left|\dfrac{x_k - L}{x_{k-1} - L}\right|}.</math>
= Accelerating convergence rates =
{{Main|Series acceleration}}
Many methods exist to accelerate the convergence of a given sequence, i.e., to transform one sequence into a second sequence that converges more quickly to the same limit. Such techniques are in general known as "series acceleration" methods. These may reduce the computational costs of approximating the limits of the original sequences. One example of series acceleration by sequence transformation is Aitken's delta-squared process. These methods in general, and in particular Aitken's method, do not typically increase the order of convergence and thus they are useful only if initially the convergence is not faster than linear: if <math>(x_k)</math> converges linearly to <math>L</math>, Aitken's method transforms it into a sequence <math>(a_k)</math> that still converges linearly to <math>L</math> (except for pathologically designed special cases), but faster, in the sense that <math>\lim_{k \to \infty} (a_k - L)/(x_k - L) = 0</math>.
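For instance, a minimal sketch of Aitken's delta-squared process (the function name is illustrative) applied to the linearly converging partial sums of the geometric series <math>\textstyle\sum_n 2^{-n}</math>:

<syntaxhighlight lang="python">
def aitken(xs):
    """Aitken's delta-squared transform, in the shifted but equivalent form
    a_k = x_{k+2} - (x_{k+2} - x_{k+1})**2 / (x_{k+2} - 2*x_{k+1} + x_k)."""
    out = []
    for x0, x1, x2 in zip(xs, xs[1:], xs[2:]):
        denom = x2 - 2.0 * x1 + x0
        if denom != 0.0:
            out.append(x2 - (x2 - x1) ** 2 / denom)
    return out

# Partial sums of the geometric series sum_n 2**-n converge linearly to 2.
partial_sums = [sum(0.5 ** n for n in range(k + 1)) for k in range(10)]
print(partial_sums[-1] - 2.0)          # original error: -2**-9
print(aitken(partial_sums)[-1] - 2.0)  # transformed error: 0.0
</syntaxhighlight>

Because the error of a geometric series' partial sums decays exactly geometrically, the transform recovers the limit exactly in this case; for errors that are only asymptotically geometric it instead yields a faster-converging sequence.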
Asymptotic rates of convergence for discretization methods
{{more citations needed section|date=August 2020}}
= Definitions =
A sequence of discretized approximations <math>(y_k)</math> of some continuous-domain function <math>S</math> that converges to this target, together with a corresponding sequence of discretization scale parameters <math>(h_k)</math> that converge to 0, is said to have asymptotic order of convergence <math>q</math> and asymptotic rate of convergence <math>\mu</math> if

:<math>\lim_{k \to \infty} \frac{\left|y_k - S\right|}{h_k^q} = \mu</math>

for some positive constants <math>\mu</math> and <math>q</math>, using <math>|\cdot|</math> to stand for an appropriate distance metric on the space of solutions, such as the uniform norm.
When all the discretizations are generated using a single common method, it is common to discuss the asymptotic rate and order of convergence for the method itself rather than any particular discrete sequences of discretized solutions. In these cases one considers a single abstract discretized solution <math>y_h</math> generated using the method with a scale parameter <math>h</math>, and then the method is said to have asymptotic order of convergence <math>q</math> and asymptotic rate of convergence <math>\mu</math> if

:<math>\lim_{h \to 0} \frac{\left|y_h - S\right|}{h^q} = \mu,</math>

again for some positive constants <math>\mu</math> and <math>q</math> and an appropriate metric <math>|\cdot|</math>. This implies that the error of a discretization with scale parameter <math>h</math> is <math>O(h^q)</math> as <math>h \to 0</math>.
In some cases multiple rates and orders for the same method but with different choices of scale parameter may be important, for instance for finite difference methods based on multidimensional grids where the different dimensions have different grid spacings or for finite element methods based on polygon meshes where choosing either average distance between mesh points or maximum distance between mesh points as scale parameters may imply different orders of convergence. In some especially technical contexts, discretization methods' asymptotic rates and orders of convergence will be characterized by several scale parameters at once with the value of each scale parameter possibly affecting the asymptotic rate and order of convergence of the method with respect to the other scale parameters.
= Example =
Consider the ordinary differential equation

:<math>\frac{dy}{dx} = -\kappa y</math>

with initial condition <math>y(0) = y_0</math>. We can approximate a solution to this one-dimensional equation using the Euler method with a regular grid spacing <math>h</math> and grid points indexed by <math>n</math> as follows:

:<math>\frac{y_{n+1} - y_n}{h} = -\kappa y_n,</math>

which implies the first-order linear recurrence with constant coefficients

:<math>y_{n+1} = y_n(1 - h\kappa).</math>

Given <math>y(0) = y_0</math>, the sequence satisfying that recurrence is the geometric progression

:<math>y_n = y_0 (1 - h\kappa)^n = y_0\left(1 - nh\kappa + \frac{n(n-1)}{2} h^2\kappa^2 + O\left(h^3\right)\right).</math>
The exact analytical solution to the differential equation is <math>y = f(x) = y_0 \exp(-\kappa x)</math>, corresponding to the following Taylor expansion at the grid points <math>x = nh</math>:

:<math>f(nh) = y_0 e^{-\kappa n h} = y_0\left(1 - nh\kappa + \frac{n^2 h^2\kappa^2}{2} + O\left(h^3\right)\right).</math>
Therefore the error of the discrete approximation at each discrete point is

:<math>y_n - f(nh) = -\frac{n h^2 \kappa^2}{2} y_0 + O\left(h^3\right).</math>
For any specific point <math>x = p</math> in the domain, given a sequence of Euler method approximations on grids with spacings <math>(h_k)</math> and corresponding grid index counts <math>n_{p,k} = p / h_k</math> reaching that point, the approximations <math>y_k(p) = y_{k, n_{p,k}}</math> satisfy

:<math>\lim_{h_k \to 0} \frac{\left|y_k(p) - f(p)\right|}{h_k} = \lim_{h_k \to 0} \frac{\left|y_{k, n_{p,k}} - f(h_k n_{p,k})\right|}{h_k} = \lim_{h_k \to 0} \frac{h_k n_{p,k} \kappa^2 |y_0|}{2} = \frac{p \kappa^2 |y_0|}{2}</math>

for any sequence of grids with successively smaller grid spacings <math>h_k</math>. Thus the Euler method applied to this problem converges with asymptotic order of convergence <math>q = 1</math> and asymptotic rate of convergence <math>\mu = p \kappa^2 |y_0| / 2</math> at each point <math>p</math>; it is a first-order method.
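The first-order behavior derived above can be checked numerically; in this sketch (parameter values chosen arbitrarily), halving the grid spacing approximately halves the error at the target point <math>p</math>:

<syntaxhighlight lang="python">
import math

# Euler's method for y' = -kappa * y with y(0) = y0, integrated to x = p.
kappa, y0, p = 2.0, 1.0, 1.0
exact = y0 * math.exp(-kappa * p)

errors = []
for n in (10, 20, 40, 80, 160):  # number of grid steps; h = p / n
    h = p / n
    y = y0
    for _ in range(n):
        y = y * (1.0 - h * kappa)
    errors.append(abs(y - exact))

# Error ratios between successive halvings approach 2: error = O(h), order q = 1.
print([e_coarse / e_fine for e_coarse, e_fine in zip(errors, errors[1:])])
</syntaxhighlight>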
Comparing asymptotic rates of convergence
= Definitions =
In asymptotic analysis in general, one sequence <math>(a_k)</math> that converges to a limit <math>L</math> is said to asymptotically converge to <math>L</math> more quickly than another sequence <math>(b_k)</math> that converges to <math>L</math> in a shared metric space with distance metric <math>|\cdot|</math>, such as the real numbers or complex numbers with the ordinary absolute difference metrics, if

:<math>\lim_{k \to \infty} \frac{\left|a_k - L\right|}{\left|b_k - L\right|} = 0,</math>

the two are said to asymptotically converge to <math>L</math> with the same order of convergence if

:<math>\lim_{k \to \infty} \frac{\left|a_k - L\right|}{\left|b_k - L\right|} = \mu</math>

for some positive finite constant <math>\mu</math>, and the two are said to be asymptotically equivalent if

:<math>\lim_{k \to \infty} \frac{\left|a_k - L\right|}{\left|b_k - L\right|} = 1.</math>
These comparative definitions of rate and order of asymptotic convergence are fundamental in asymptotic analysis.{{cite journal |last1=Balcázar |first1=José L. |last2=Gabarró |first2=Joaquim |title=Nonuniform complexity classes specified by lower and upper bounds |url=http://archive.numdam.org/article/ITA_1989__23_2_177_0.pdf |url-status=live |journal=RAIRO – Theoretical Informatics and Applications – Informatique Théorique et Applications |language=en |volume=23 |issue=2 |page=180 |issn=0988-3754 |archive-url=https://web.archive.org/web/20170314153158/http://archive.numdam.org/article/ITA_1989__23_2_177_0.pdf |archive-date=14 March 2017 |access-date=14 March 2017 |via=Numdam}}{{cite book |last1=Cucker |first1=Felipe |title=Condition: The Geometry of Numerical Algorithms |last2=Bürgisser |first2=Peter |publisher=Springer |year=2013 |isbn=978-3-642-38896-5 |location=Berlin, Heidelberg |pages=467–468 |chapter=A.1 Big Oh, Little Oh, and Other Comparisons |doi=10.1007/978-3-642-38896-5 |chapter-url=https://books.google.com/books?id=SNu4BAAAQBAJ&pg=PA467}} For the first two of these there are associated expressions in asymptotic O notation: the first is that <math>a_k - L = o(b_k - L)</math> in small-<math>o</math> notation, and the second is that <math>a_k - L = \Theta(b_k - L)</math> in Knuth notation.
= Examples =
For any two geometric progressions <math>(a r^k)</math> and <math>(b s^k)</math> with shared limit zero, the two sequences are asymptotically equivalent if and only if both <math>a = b</math> and <math>r = s</math>. They converge with the same order if and only if <math>|r| = |s|</math>, and <math>(a r^k)</math> converges more quickly than <math>(b s^k)</math> if and only if <math>|r| < |s|</math>.
For any two sequences of elements proportional to an inverse power of <math>k</math>, <math>(a k^{-n})</math> and <math>(b k^{-m})</math>, with shared limit zero, the two sequences are asymptotically equivalent if and only if both <math>a = b</math> and <math>n = m</math>. They converge with the same order if and only if <math>n = m</math>, and <math>(a k^{-n})</math> converges more quickly than <math>(b k^{-m})</math> if and only if <math>n > m</math>.
For any sequence <math>(a_k)</math> with limit zero, its convergence can be compared to the convergence of the shifted sequence <math>(a_{k-1})</math>, rescalings of the shifted sequence by a constant <math>\mu</math>, <math>(\mu a_{k-1})</math>, and scaled powers of the shifted sequence, <math>\left(\mu a_{k-1}^q\right)</math>; these comparisons correspond to the Q-convergence classifications for iterative numerical methods described above.
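As a small numerical illustration of these comparisons (a sketch with arbitrarily chosen parameters), any geometric progression eventually converges to zero more quickly than any inverse power of <math>k</math>, since the quotient of their errors tends to zero:

<syntaxhighlight lang="python">
# Compare a geometric progression r**k with an inverse power k**-n, both -> 0.
r, n = 0.5, 2.0
for k in (10, 20, 40, 80):
    # Quotient of errors |r**k - 0| / |k**-n - 0| tends to zero, so the
    # geometric progression converges asymptotically more quickly.
    print((r ** k) / (k ** -n))
</syntaxhighlight>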
Non-asymptotic rates of convergence
{{more citations needed section|date=October 2024}}
Non-asymptotic rates of convergence do not have the common, standard definitions that asymptotic rates of convergence have. Among formal techniques, Lyapunov theory is one of the most powerful and widely applied frameworks for characterizing and analyzing non-asymptotic convergence behavior.
For iterative methods, one common practical approach is to discuss these rates in terms of the number of iterates or the computer time required to reach close neighborhoods of a limit from starting points far from the limit. The non-asymptotic rate is then an inverse of that number of iterates or computer time. In practical applications, an iterative method that requires fewer steps or less computer time than another to reach target accuracy is said to have converged faster than the other, even if its asymptotic convergence is slower. These rates will generally be different for different starting points and different error thresholds for defining the neighborhoods. It is most common to discuss summaries of statistical distributions of these single-point rates corresponding to distributions of possible starting points, such as the "average non-asymptotic rate," the "median non-asymptotic rate," or the "worst-case non-asymptotic rate" for some method applied to some problem with some fixed error threshold. These ensembles of starting points can be chosen according to parameters like initial distance from the eventual limit in order to define quantities like "average non-asymptotic rate of convergence from a given distance."
For discretized approximation methods, similar approaches can be used with a discretization scale parameter such as an inverse of a number of grid or mesh points or a Fourier series cutoff frequency playing the role of inverse iterate number, though it is not especially common. For any problem, there is a greatest discretization scale parameter compatible with a desired accuracy of approximation, and it may not be as small as required for the asymptotic rate and order of convergence to provide accurate estimates of the error. In practical applications, when one discretization method gives a desired accuracy with a larger discretization scale parameter than another it will often be said to converge faster than the other, even if its eventual asymptotic convergence is slower.