Anderson acceleration

In mathematics, Anderson acceleration, also called Anderson mixing, is a method for the acceleration of the convergence rate of fixed-point iterations. Introduced by Donald G. Anderson,{{cite journal |last1=Anderson |first1=Donald G. |title=Iterative Procedures for Nonlinear Integral Equations |journal=Journal of the ACM |date=October 1965 |volume=12 |issue=4 |pages=547–560 |doi=10.1145/321296.321305|doi-access=free }} this technique can be used to find the solution of fixed-point equations f(x) = x, which often arise in computational science.

Definition

Given a function f:\mathbb{R}^n \to \mathbb{R}^n, consider the problem of finding a fixed point of f, which is a solution to the equation f(x) = x. A classical approach to the problem is to employ a fixed-point iteration scheme;{{cite book |author-link1= Alfio Quarteroni |last1=Quarteroni |first1=Alfio |last2=Sacco |first2=Riccardo |last3=Saleri |first3=Fausto |title=Numerical mathematics |date=30 November 2010 |publisher=Springer |isbn=978-3-540-49809-4|edition=2nd}} that is, given an initial guess x_0 for the solution, to compute the sequence x_{i+1} = f(x_i) until some convergence criterion is met. However, the convergence of such a scheme is not guaranteed in general; moreover, the rate of convergence is usually linear, which can become too slow if the evaluation of the function f is computationally expensive. Anderson acceleration is a method to accelerate the convergence of the fixed-point sequence.
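
For reference, a plain fixed-point iteration can be written in a few lines of MATLAB. The following is a minimal sketch: the map \cos(x), the initial guess, the iteration cap and the tolerance are arbitrary choices made for illustration only.

% Minimal sketch of a plain (unaccelerated) fixed-point iteration.
f = @(x) cos(x);        % example map, with fixed point approximately 0.739085
x = 1;                  % initial guess x_0
for k = 1:100           % arbitrary maximum number of iterations
    x_new = f(x);
    if abs(x_new - x) < 1e-10, x = x_new; break, end   % arbitrary stopping criterion
    x = x_new;
end
fprintf("Fixed point approximately %f after %d iterations\n", x, k);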

Define the residual g(x) = f(x) - x, and denote f_k = f(x_k) and g_k = g(x_k) (where x_k corresponds to the sequence of iterates from the previous paragraph). Given an initial guess x_0 and an integer parameter m \geq 1, the method can be formulated as follows:{{cite journal |last1=Walker |first1=Homer F. |last2=Ni |first2=Peng |title=Anderson Acceleration for Fixed-Point Iterations |journal=SIAM Journal on Numerical Analysis |date=January 2011 |volume=49 |issue=4 |pages=1715–1735 |doi=10.1137/10078356X|citeseerx=10.1.1.722.2636 }}{{refn|group="note"|This formulation is not the same as given by the original author; it is an equivalent, more explicit formulation given by Walker and Ni.}}

:x_1 = f(x_0)

:\forall k = 1, 2, \dots

::m_k = \min\{m, k\}

::G_k = \begin{bmatrix} g_{k-m_k} & \dots & g_k \end{bmatrix}

::\alpha_k = \operatorname{argmin}_{\alpha\in A_k} \|G_k \alpha\|_2, \quad \text{where}\;A_k = \{\alpha = (\alpha_0, \dots, \alpha_{m_k}) \in \mathbb{R}^{m_k+1} : \sum_{i=0}^{m_k}\alpha_i = 1\}

::x_{k+1} = \sum_{i=0}^{m_k}(\alpha_k)_i f_{k-m_k+i}

where the matrix–vector product is G_k \alpha = \sum_{i=0}^{m_k}(\alpha)_i g_{k-m_k+i}, and (\alpha)_i is the ith element of \alpha. Conventional stopping criteria can be used to end the iterations of the method. For example, iterations can be stopped when \|x_{k+1} - x_k\| falls below a prescribed tolerance, or when the norm of the residual \|g(x_k)\| falls below a prescribed tolerance.
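
As an illustration of how a single step of this scheme can be organized in code, the following MATLAB sketch computes x_{k+1} from stored function values and residuals by solving the constrained least-squares problem through its KKT system. The helper name anderson_step and the arrays F_hist and G_hist (whose columns are assumed to hold f_{k-m_k}, \dots, f_k and g_{k-m_k}, \dots, g_k) are hypothetical, and the KKT system can itself be ill-conditioned; the unconstrained reformulations discussed below are usually preferred in practice.

% Minimal sketch of one Anderson acceleration step in the constrained form above.
% Columns of F_hist and G_hist are assumed to be f_{k-m_k},...,f_k and g_{k-m_k},...,g_k.
function x_next = anderson_step(F_hist, G_hist)
    p = size(G_hist, 2);                        % p = m_k + 1 stored columns
    KKT = [2 * (G_hist' * G_hist), ones(p, 1);  % stationarity of the Lagrangian
           ones(1, p),             0];          % constraint: sum(alpha) = 1
    sol = KKT \ [zeros(p, 1); 1];               % solve for [alpha; lambda]
    alpha = sol(1:p);
    x_next = F_hist * alpha;                    % x_{k+1} = sum_i (alpha)_i f_{k-m_k+i}
end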

Compared with the standard fixed-point iteration, the method has been found to converge faster and to be more robust, and in some cases to avoid divergence of the fixed-point sequence.

= Derivation =

For the solution x^*, we know that f(x^*) = x^*, which is equivalent to saying that g(x^*) = \vec{0}. We can therefore rephrase the problem as an optimization problem where we want to minimize \|g(x)\|_2.

Instead of going directly from x_k to x_{k+1} by choosing x_{k+1} = f(x_k) as in fixed-point iteration, consider an intermediate point x'_{k+1} = X_k \alpha_k, a linear combination of the previous iterates, where the coefficient vector \alpha_k \in A_k and X_k = \begin{bmatrix} x_{k-m_k} & \dots & x_k \end{bmatrix} is the matrix containing the last m_k+1 points; the coefficients are chosen so that x'_{k+1} minimizes \|g(x'_{k+1})\|_2. Since the elements of \alpha_k sum to one, we can make the first-order approximation g(X_k\alpha_k) = g\left(\sum_{i=0}^{m_k} (\alpha_k)_i x_{k-m_k+i}\right) \approx \sum_{i=0}^{m_k} (\alpha_k)_i g(x_{k-m_k+i}) = G_k\alpha_k, and the problem becomes that of finding the \alpha that minimizes \|G_k\alpha\|_2. After having found \alpha_k, we could in principle compute x'_{k+1}.
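
The normalization constraint on \alpha_k is what makes this approximation sensible: the approximation is exact whenever g is affine. Indeed, if g(x) = Ax + b, then

:g\left(\sum_{i=0}^{m_k} (\alpha_k)_i x_{k-m_k+i}\right) = A\sum_{i=0}^{m_k} (\alpha_k)_i x_{k-m_k+i} + b = \sum_{i=0}^{m_k} (\alpha_k)_i \left(A x_{k-m_k+i} + b\right) = \sum_{i=0}^{m_k} (\alpha_k)_i\, g(x_{k-m_k+i}),

where the second equality uses \sum_{i=0}^{m_k} (\alpha_k)_i = 1 to distribute the constant term b over the sum.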

However, since f is designed to bring a point closer to x^*, f(x'_{k+1}) is likely to be closer to x^* than x'_{k+1} itself, so it makes sense to choose x_{k+1}=f(x'_{k+1}) rather than x_{k+1}=x'_{k+1}. Furthermore, since the elements of \alpha_k sum to one, we can make the first-order approximation f(x'_{k+1}) = f\left(\sum_{i=0}^{m_k}(\alpha_k)_i x_{k-m_k+i}\right) \approx \sum_{i=0}^{m_k}(\alpha_k)_i f(x_{k-m_k+i}) = \sum_{i=0}^{m_k}(\alpha_k)_i f_{k-m_k+i}. We therefore choose

x_{k+1} = \sum_{i=0}^{m_k}(\alpha_k)_i f_{k-m_k+i}.

= Solution of the minimization problem =

At each iteration of the algorithm, the constrained optimization problem \operatorname{argmin}\|G_k \alpha\|_2, subject to \alpha\in A_k needs to be solved. The problem can be recast in several equivalent formulations, yielding different solution methods which may result in a more convenient implementation:

  • defining the matrices \mathcal{G}_k = \begin{bmatrix} g_{k-m_k+1} - g_{k-m_k} & \dots & g_{k} - g_{k-1}\end{bmatrix} and \mathcal{X}_k = \begin{bmatrix} x_{k-m_k+1} - x_{k-m_k} & \dots & x_{k} - x_{k-1} \end{bmatrix}, solve \gamma_k = \operatorname{argmin}_{\gamma\in\mathbb{R}^{m_k}}\|g_k - \mathcal{G}_k\gamma\|_2, and set x_{k+1} = x_k + g_k - (\mathcal{X}_k + \mathcal{G}_k)\gamma_k;{{cite journal |last1=Fang |first1=Haw-ren |last2=Saad |first2=Yousef |title=Two classes of multisecant methods for nonlinear acceleration |journal=Numerical Linear Algebra with Applications |date=March 2009 |volume=16 |issue=3 |pages=197–221 |doi=10.1002/nla.617}}
  • solve \theta_k = \{(\theta_k)_i\}_{i = 1}^{m_k} = \operatorname{argmin}_{\theta\in\mathbb{R}^{m_k}}\left\|g_k + \sum_{i=1}^{m_k}\theta_i(g_{k-i} - g_k)\right\|_2, then set x_{k+1} = x_k + g_k + \sum_{j = 1}^{m_k}(\theta_k)_j(x_{k-j} - x_k + g_{k-j} - g_k); a minimal sketch of this variant is given below.
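
The sketch below illustrates this second reformulation in MATLAB. The helper name anderson_step_theta and the arrays X_hist and G_hist are hypothetical; their columns are assumed to hold x_{k-m_k}, \dots, x_k and g_{k-m_k}, \dots, g_k in that order, and the least-squares problem is solved with MATLAB's backslash operator.

% Minimal sketch of the second reformulation listed above (hypothetical helper).
% Columns of X_hist and G_hist are x_{k-m_k},...,x_k and g_{k-m_k},...,g_k.
function x_next = anderson_step_theta(X_hist, G_hist)
    xk = X_hist(:, end);
    gk = G_hist(:, end);
    D = G_hist(:, end-1:-1:1) - gk;         % columns: g_{k-i} - g_k for i = 1,...,m_k
    theta = D \ (-gk);                      % least-squares solution of min ||g_k + D*theta||
    E = (X_hist(:, end-1:-1:1) - xk) + D;   % columns: (x_{k-j} - x_k) + (g_{k-j} - g_k)
    x_next = xk + gk + E * theta;           % new iterate x_{k+1} as in the formula above
end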

For both choices, the optimization problem is an unconstrained linear least-squares problem, which can be solved by standard methods such as QR decomposition or singular value decomposition, possibly combined with regularization techniques to deal with rank deficiency and ill-conditioning. Solving the least-squares problem via the normal equations is generally not advisable, due to potential numerical instabilities and the comparatively high computational cost.

Stagnation of the method (i.e. subsequent iterations with the same value, x_{k+1} = x_k) causes it to break down, because the least-squares problem becomes singular. Similarly, near-stagnation (x_{k+1}\approx x_k) results in ill-conditioning of the least-squares problem. Moreover, the choice of the parameter m can affect the conditioning of the least-squares problem, as discussed below.
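
As an illustration of how regularization can be incorporated, the following MATLAB sketch solves the unconstrained least-squares problem of the first reformulation with a truncated singular value decomposition, discarding directions associated with very small singular values. The helper name, argument names and relative threshold are hypothetical; this is one possible safeguard against rank deficiency, not the only one.

% Minimal sketch: solve gamma = argmin ||g_k - GG*gamma||_2 by a truncated SVD.
% GG is the matrix of residual increments, gk the current residual g_k.
function gamma = ls_solve_tsvd(GG, gk, rel_tol)
    [U, S, V] = svd(GG, 'econ');      % thin SVD of the increment matrix
    s = diag(S);
    keep = s > rel_tol * max(s);      % discard nearly rank-deficient directions
    gamma = V(:, keep) * ((U(:, keep)' * gk) ./ s(keep));
end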

= Relaxation =

The algorithm can be modified by introducing a variable relaxation parameter (or mixing parameter) \beta_k > 0. At each step, the new iterate is computed as

:x_{k+1} = (1 - \beta_k)\sum_{i=0}^{m_k}(\alpha_k)_i x_{k-m_k+i} + \beta_k \sum_{i=0}^{m_k}(\alpha_k)_i f(x_{k-m_k+i})\;.

The choice of \beta_k is crucial to the convergence properties of the method; in principle, \beta_k might vary at each iteration, although it is often chosen to be constant.
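
In code, the relaxed update is a weighted combination of the two averages appearing above. A minimal sketch follows, assuming the columns of hypothetical arrays X_hist and F_hist hold x_{k-m_k}, \dots, x_k and f(x_{k-m_k}), \dots, f(x_k), and that alpha solves the constrained least-squares problem.

% Minimal sketch (hypothetical helper) of the relaxed Anderson update.
% Columns of X_hist and F_hist hold x_{k-m_k},...,x_k and f(x_{k-m_k}),...,f(x_k);
% alpha solves the constrained least-squares problem, beta is the mixing parameter.
function x_next = relaxed_update(X_hist, F_hist, alpha, beta)
    x_next = (1 - beta) * (X_hist * alpha) + beta * (F_hist * alpha);
end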

= Choice of {{mvar|m}} =

The parameter m determines how much information from previous iterations is used to compute the new iterate x_{k+1}. On the one hand, if m is chosen too small, too little information is used and convergence may be undesirably slow. On the other hand, if m is too large, information from old iterations may be retained for too many subsequent iterations, so that again convergence may be slow. Moreover, the choice of m affects the size of the optimization problem: too large a value of m may worsen the conditioning of the least-squares problem and increase the cost of its solution. In general, the particular problem to be solved determines the best choice of m.

= Choice of {{mvar|m}}{{sub|{{mvar|k}}}} =

With respect to the algorithm described above, the choice of m_k at each iteration can be modified. One possibility is to choose m_k = k for each iteration k (sometimes referred to as Anderson acceleration without truncation). This way, every new iteration x_{k+1} is computed using all the previously computed iterations. A more sophisticated technique is based on choosing m_k so as to maintain a small enough conditioning for the least-squares problem.
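
The following MATLAB fragment sketches one such strategy; it is an illustrative heuristic with a hypothetical threshold, not the specific algorithm of any of the cited references. The oldest stored increments are discarded while the condition number of the residual-increment matrix exceeds the threshold.

% Illustrative heuristic (hypothetical helper): shrink the stored history while
% the matrix GG of residual increments is too ill-conditioned; XX holds the
% corresponding increments in x and is truncated consistently.
function [GG, XX] = truncate_history(GG, XX, cond_max)
    while size(GG, 2) > 1 && cond(GG) > cond_max
        GG = GG(:, 2:end);    % drop the oldest residual increment
        XX = XX(:, 2:end);    % drop the matching x increment
    end
end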

Relations to other classes of methods

Newton's method can be applied to the solution of f(x) - x = 0 to compute a fixed point of f(x) with quadratic convergence. However, such a method requires the evaluation of the exact derivative of f(x), which can be very costly. Approximating the derivative by means of finite differences is a possible alternative, but it requires multiple evaluations of f(x) at each iteration, which again can become very costly. Anderson acceleration requires only one evaluation of the function f(x) per iteration, and no evaluation of its derivative. On the other hand, the convergence of an Anderson-accelerated fixed-point sequence is still linear in general.{{cite journal |last1=Evans |first1=Claire |last2=Pollock |first2=Sara |last3=Rebholz |first3=Leo G. |last4=Xiao |first4=Mengying |title=A Proof That Anderson Acceleration Improves the Convergence Rate in Linearly Converging Fixed-Point Methods (But Not in Those Converging Quadratically) |journal=SIAM Journal on Numerical Analysis |date=20 February 2020 |volume=58 |issue=1 |pages=788–810 |doi=10.1137/19M1245384|arxiv=1810.08455 }}

Several authors have pointed out similarities between the Anderson acceleration scheme and other methods for the solution of non-linear equations. In particular:

  • Eyert{{cite journal |last1=Eyert |first1=V. |title=A Comparative Study on Methods for Convergence Acceleration of Iterative Vector Sequences |journal=Journal of Computational Physics |date=March 1996 |volume=124 |issue=2 |pages=271–285 |doi=10.1006/jcph.1996.0059}} and Fang and Saad interpreted the algorithm within the class of quasi-Newton and multisecant methods, which generalize the well-known secant method, for the solution of the non-linear equation g(x) = 0; they also showed how the scheme can be seen as a method in the Broyden class;{{cite journal |last1=Broyden |first1=C. G. |title=A class of methods for solving nonlinear simultaneous equations |journal=Mathematics of Computation |date=1965 |volume=19 |issue=92 |pages=577–593 |doi=10.1090/S0025-5718-1965-0198670-6|doi-access=free }}
  • Walker and Ni{{cite thesis |last=Ni |first=Peng |date=November 2009 |title=Anderson Acceleration of Fixed-point Iteration with Applications to Electronic Structure Computations |type=PhD}} showed that the Anderson acceleration scheme is equivalent to the GMRES method in the case of linear problems (i.e. when f(\mathbf{x}) = A\mathbf{x} + \mathbf{b} for some square matrix A and vector \mathbf{b}, so that the fixed-point equation is the linear system (I - A)\mathbf{x} = \mathbf{b}), and can thus be seen as a generalization of GMRES to the non-linear case; a similar result was found by Washio and Oosterlee.{{cite journal |last1=Oosterlee |first1=C. W. |last2=Washio |first2=T. |title=Krylov Subspace Acceleration of Nonlinear Multigrid with Application to Recirculating Flows |journal=SIAM Journal on Scientific Computing |date=January 2000 |volume=21 |issue=5 |pages=1670–1690 |doi=10.1137/S1064827598338093}}

Moreover, several equivalent or nearly equivalent methods have been independently developed by other authors,{{cite journal |last1=Pulay |first1=Péter |title=Convergence acceleration of iterative sequences. the case of scf iteration |journal=Chemical Physics Letters |date=July 1980 |volume=73 |issue=2 |pages=393–398 |doi=10.1016/0009-2614(80)80396-4}}{{cite journal |last1=Pulay |first1=P. |title=Improved SCF convergence acceleration |journal=Journal of Computational Chemistry |date=1982 |volume=3 |issue=4 |pages=556–560 |doi=10.1002/jcc.540030413}}{{cite journal |last1=Carlson |first1=Neil N. |last2=Miller |first2=Keith |title=Design and Application of a Gradient-Weighted Moving Finite Element Code I: in One Dimension |journal=SIAM Journal on Scientific Computing |date=May 1998 |volume=19 |issue=3 |pages=728–765 |doi=10.1137/S106482759426955X}}{{cite journal |last1=Miller |first1=Keith |title=Nonlinear Krylov and moving nodes in the method of lines |journal=Journal of Computational and Applied Mathematics |date=November 2005 |volume=183 |issue=2 |pages=275–287 |doi=10.1016/j.cam.2004.12.032|doi-access= }} although most often in the context of some specific application of interest rather than as a general method for fixed-point equations.

Example MATLAB implementation

The following is an example implementation, in the MATLAB language, of the Anderson acceleration scheme for finding the fixed point of the function f(x) = \sin(x) + \arctan(x). Notice that:

  • the optimization problem was solved in the form \gamma_k = \operatorname{argmin}_{\gamma\in\mathbb{R}^{m_k}}\|g_k - \mathcal{G}_k\gamma\|_2 using QR decomposition;
  • the computation of the QR decomposition is sub-optimal: indeed, at each iteration a single column is added to the matrix \mathcal{G}_k, and possibly a single column is removed; this fact can be exploited to efficiently update the QR decomposition with less computational effort;{{cite journal |last1=Daniel |first1=J. W. |last2=Gragg |first2=W. B. |last3=Kaufman |first3=L. |last4=Stewart |first4=G. W. |title=Reorthogonalization and stable algorithms for updating the Gram-Schmidt QR factorization |journal=Mathematics of Computation |date=October 1976 |volume=30 |issue=136 |pages=772 |doi=10.1090/S0025-5718-1976-0431641-8|doi-access=free }}
  • the algorithm can be made more memory-efficient by storing only the latest few iterations and residuals, if the whole vector of iterations x_k is not needed;
  • the code is straightforwardly generalized to the case of a vector-valued f(x).

f = @(x) sin(x) + atan(x); % Function whose fixed point is to be computed.
x0 = 1; % Initial guess.
k_max = 100; % Maximum number of iterations.
tol_res = 1e-6; % Tolerance on the residual.
m = 3; % Parameter m.

x = [x0, f(x0)]; % Vector of iterates x.
g = f(x) - x; % Vector of residuals.

G_k = g(2) - g(1); % Matrix of increments in residuals.
X_k = x(2) - x(1); % Matrix of increments in x.

k = 2;
while k < k_max && abs(g(k)) > tol_res
    m_k = min(k, m);

    % Solve the optimization problem by QR decomposition.
    [Q, R] = qr(G_k);
    gamma_k = R \ (Q' * g(k));

    % Compute new iterate and new residual.
    x(k + 1) = x(k) + g(k) - (X_k + G_k) * gamma_k;
    g(k + 1) = f(x(k + 1)) - x(k + 1);

    % Update increment matrices with new elements.
    X_k = [X_k, x(k + 1) - x(k)];
    G_k = [G_k, g(k + 1) - g(k)];
    n = size(X_k, 2);
    if n > m_k
        X_k = X_k(:, n - m_k + 1:end);
        G_k = G_k(:, n - m_k + 1:end);
    end

    k = k + 1;
end

% Prints result: Computed fixed point 2.013444 after 9 iterations
fprintf("Computed fixed point %f after %d iterations\n", x(end), k);

See also

Notes

{{reflist|group=note}}

References

{{reflist}}