Broyden–Fletcher–Goldfarb–Shanno algorithm

{{short description|Optimization method}}

{{Redirect|BFGS|the Canadian hardcore punk band|Bunchofuckingoofs}}

{{More citations needed|date=March 2016}}

In numerical optimization, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems.{{Citation | last1=Fletcher | first1=Roger | title=Practical Methods of Optimization | publisher=John Wiley & Sons | location=New York | edition=2nd | isbn=978-0-471-91547-8 | year=1987 | url-access=registration | url=https://archive.org/details/practicalmethods0000flet }} Like the related Davidon–Fletcher–Powell method, BFGS determines the descent direction by preconditioning the gradient with curvature information. It does so by gradually improving an approximation to the Hessian matrix of the loss function, obtained only from gradient evaluations (or approximate gradient evaluations) via a generalized secant method.{{citation |first1=J. E. Jr. |last1=Dennis |author-link=John E. Dennis |first2=Robert B. |last2=Schnabel |author-link2=Robert B. Schnabel |title=Numerical Methods for Unconstrained Optimization and Nonlinear Equations |chapter=Secant Methods for Unconstrained Minimization |location=Englewood Cliffs, NJ |publisher=Prentice-Hall |year=1983 |isbn=0-13-627216-9 |pages=194–215 |chapter-url=https://books.google.com/books?id=ksvJTtJCx9cC&pg=PA194 }}

Since the updates of the BFGS curvature matrix do not require matrix inversion, its computational complexity is only \mathcal{O}(n^{2}) per iteration, compared to \mathcal{O}(n^{3}) in Newton's method. Also in common use is L-BFGS, which is a limited-memory version of BFGS that is particularly suited to problems with very large numbers of variables (e.g., >1000). The L-BFGS-B variant handles simple box constraints.{{Citation|url=http://www.ece.northwestern.edu/~nocedal/PSfiles/limited.ps.gz |pages= 1190–1208|doi=10.1137/0916069|title= A Limited Memory Algorithm for Bound Constrained Optimization|journal= SIAM Journal on Scientific Computing|volume= 16|issue= 5|year= 1995|last1= Byrd|first1= Richard H.|last2= Lu|first2= Peihuang|last3= Nocedal|first3= Jorge|last4= Zhu|first4= Ciyou |citeseerx= 10.1.1.645.5814}} The BFGS matrix also admits a compact representation, which makes it better suited for large constrained problems.

The algorithm is named after Charles George Broyden, Roger Fletcher, Donald Goldfarb and David Shanno.{{Citation| last=Broyden | first=C. G. | author-link=Charles George Broyden | year=1970 | title=The convergence of a class of double-rank minimization algorithms | journal=Journal of the Institute of Mathematics and Its Applications | volume=6 | pages=76–90 | doi=10.1093/imamat/6.1.76}}{{Citation | last1=Fletcher|first1= R.|title= A New Approach to Variable Metric Algorithms|journal=Computer Journal |year=1970|volume=13|pages=317–322 | doi=10.1093/comjnl/13.3.317 | issue=3|doi-access=free}}{{Citation|author-link=Donald Goldfarb|last=Goldfarb|first= D.|title=A Family of Variable Metric Updates Derived by Variational Means|journal=Mathematics of Computation|year=1970|volume=24|pages=23–26|doi=10.1090/S0025-5718-1970-0258249-6|issue=109|doi-access=free}}{{Citation | last1=Shanno|first1= David F.|title=Conditioning of quasi-Newton methods for function minimization|date=July 1970|journal=Mathematics of Computation|volume=24|pages= 647–656|mr=274029|doi=10.1090/S0025-5718-1970-0274029-X | issue=111 |doi-access=free}}

Rationale

The optimization problem is to minimize f(\mathbf{x}), where \mathbf{x} is a vector in \mathbb{R}^n, and f is a differentiable scalar function. There are no constraints on the values that \mathbf{x} can take.

The algorithm begins at an initial estimate \mathbf{x}_0 for the optimal value and proceeds iteratively to get a better estimate at each stage.

The search direction \mathbf{p}_k at stage k is given by the solution of the analogue of the Newton equation:

:B_k \mathbf{p}_k = -\nabla f(\mathbf{x}_k),

where B_k is an approximation to the Hessian matrix at \mathbf{x}_k, which is updated iteratively at each stage, and \nabla f(\mathbf{x}_k) is the gradient of the function evaluated at \mathbf{x}_k. A line search in the direction \mathbf{p}_k is then used to find the next point \mathbf{x}_{k+1} by minimizing f(\mathbf{x}_k + \gamma\mathbf{p}_k) over the scalar \gamma > 0.
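
The Newton-type equation above can be read as the optimality condition of a local quadratic model of f around \mathbf{x}_k (a standard motivation of the quasi-Newton framework, sketched here for completeness):

:m_k(\mathbf{p}) = f(\mathbf{x}_k) + \nabla f(\mathbf{x}_k)^{\mathrm T} \mathbf{p} + \tfrac{1}{2} \mathbf{p}^{\mathrm T} B_k \mathbf{p}.

Setting the gradient of this model to zero, \nabla m_k(\mathbf{p}) = \nabla f(\mathbf{x}_k) + B_k \mathbf{p} = 0, recovers exactly B_k \mathbf{p}_k = -\nabla f(\mathbf{x}_k); when B_k is positive definite, \mathbf{p}_k is the unique minimizer of the model and a descent direction.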

The quasi-Newton condition imposed on the update of B_k is

:B_{k+1} (\mathbf{x}_{k+1} - \mathbf{x}_k) = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k).

Let \mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k) and \mathbf{s}_k = \mathbf{x}_{k+1} - \mathbf{x}_k, then B_{k+1} satisfies

:B_{k+1} \mathbf{s}_k = \mathbf{y}_k,

which is the secant equation.
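
The secant equation itself can be motivated by a first-order Taylor expansion of the gradient (a standard argument, sketched here for completeness):

:\nabla f(\mathbf{x}_k) \approx \nabla f(\mathbf{x}_{k+1}) + \nabla^2 f(\mathbf{x}_{k+1})\,(\mathbf{x}_k - \mathbf{x}_{k+1}),

so requiring the approximation B_{k+1} to reproduce this relation exactly, \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k) = B_{k+1}(\mathbf{x}_{k+1} - \mathbf{x}_k), gives B_{k+1}\mathbf{s}_k = \mathbf{y}_k.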

The curvature condition \mathbf{s}_k^\top \mathbf{y}_k > 0 must be satisfied for B_{k+1} to be positive definite, as can be seen by pre-multiplying the secant equation with \mathbf{s}_k^{\mathrm T}, which gives \mathbf{s}_k^{\mathrm T} B_{k+1} \mathbf{s}_k = \mathbf{s}_k^{\mathrm T} \mathbf{y}_k. If the function is not strongly convex, the condition has to be enforced explicitly, e.g., by using a line search to find a point \mathbf{x}_{k+1} satisfying the Wolfe conditions, which entail the curvature condition.

Instead of requiring the full Hessian matrix at the point \mathbf{x}_{k+1} to be computed as B_{k+1}, the approximate Hessian at stage k is updated by the addition of two matrices:

:B_{k+1} = B_k + U_k + V_k.

Both U_k and V_k are symmetric rank-one matrices, but their sum is a rank-two update matrix. The BFGS and DFP updating matrices both differ from their predecessors by a rank-two matrix. A simpler rank-one alternative is the symmetric rank-one (SR1) method, which does not guarantee positive definiteness. In order to maintain the symmetry and positive definiteness of B_{k+1}, the update can be chosen in the form B_{k+1} = B_k + \alpha\mathbf{u}\mathbf{u}^\top + \beta\mathbf{v}\mathbf{v}^\top. Imposing the secant condition B_{k+1} \mathbf{s}_k = \mathbf{y}_k and choosing \mathbf{u} = \mathbf{y}_k and \mathbf{v} = B_k \mathbf{s}_k, we obtain:{{Citation | last1=Fletcher | first1=Roger | title=Practical methods of optimization | publisher=John Wiley & Sons | location=New York | edition=2nd | isbn=978-0-471-91547-8 | year=1987 | url-access=registration | url=https://archive.org/details/practicalmethods0000flet }}

:\alpha = \frac{1}{\mathbf{y}_k^T \mathbf{s}_k},

:\beta = -\frac{1}{\mathbf{s}_k^T B_k \mathbf{s}_k}.

Finally, we substitute \alpha and \beta into B_{k+1} = B_k + \alpha\mathbf{u}\mathbf{u}^\top + \beta\mathbf{v}\mathbf{v}^\top and get the update equation of B_{k+1}:

:B_{k+1} = B_k + \frac{\mathbf{y}_k \mathbf{y}_k^{\mathrm{T}}}{\mathbf{y}_k^{\mathrm{T}} \mathbf{s}_k} - \frac{B_k \mathbf{s}_k \mathbf{s}_k^{\mathrm{T}} B_k^{\mathrm{T}} }{\mathbf{s}_k^{\mathrm{T}} B_k \mathbf{s}_k}.
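
In practice the algorithm below works with the inverse H_k = B_k^{-1} rather than with B_k itself. Applying the Sherman–Morrison formula to the rank-two update above yields the equivalent update for the inverse, quoted here in its compact standard form (it expands to the formula used in step 5 of the algorithm below):

:H_{k+1} = \left(I - \frac{\mathbf{s}_k \mathbf{y}_k^{\mathrm T}}{\mathbf{y}_k^{\mathrm T} \mathbf{s}_k}\right) H_k \left(I - \frac{\mathbf{y}_k \mathbf{s}_k^{\mathrm T}}{\mathbf{y}_k^{\mathrm T} \mathbf{s}_k}\right) + \frac{\mathbf{s}_k \mathbf{s}_k^{\mathrm T}}{\mathbf{y}_k^{\mathrm T} \mathbf{s}_k}.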

Algorithm

Consider the following unconstrained optimization problem:

:\underset{\mathbf{x}\in\mathbb{R}^n}{\operatorname{minimize}} \quad f(\mathbf{x}),

where f\colon \mathbb{R}^n \to \mathbb{R} is a nonlinear objective function.{{Citation | last1=Nocedal | first1=Jorge | last2=Wright | first2=Stephen J. | title=Numerical Optimization | publisher=Springer-Verlag | location=Berlin, New York | edition=2nd | isbn=978-0-387-30303-1 | year=2006}}

From an initial guess \mathbf{x}_0 and an initial approximation of the inverse Hessian matrix H_0, the following steps are repeated until \mathbf{x}_k converges to the solution:

  1. Obtain a direction \mathbf{p}_k by solving \mathbf{p}_k = -H_k \nabla f(\mathbf{x}_k).
  2. Perform a one-dimensional optimization (line search) to find an acceptable stepsize \alpha_k in the direction found in the first step. If an exact line search is performed, then \alpha_k = \arg\min_{\alpha > 0} f(\mathbf{x}_k + \alpha\mathbf{p}_k). In practice, an inexact line search usually suffices, with an acceptable \alpha_k satisfying the Wolfe conditions.
  3. Set \mathbf{s}_k = \alpha_k \mathbf{p}_k and update \mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{s}_k.
  4. Set \mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k).
  5. H_{k+1} = H_k + \frac{(\mathbf{s}_k^{\mathrm{T}}\mathbf{y}_k+\mathbf{y}_k^{\mathrm{T}} H_k \mathbf{y}_k)(\mathbf{s}_k \mathbf{s}_k^{\mathrm{T}})}{(\mathbf{s}_k^{\mathrm{T}} \mathbf{y}_k)^2} - \frac{H_k \mathbf{y}_k \mathbf{s}_k^{\mathrm{T}} + \mathbf{s}_k \mathbf{y}_k^{\mathrm{T}}H_k}{\mathbf{s}_k^{\mathrm{T}} \mathbf{y}_k}.
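
The steps above translate directly into code. The following is a minimal illustrative sketch in Python, not taken from any particular implementation: it uses a simple backtracking line search that enforces only the sufficient-decrease condition rather than the full Wolfe conditions, skips the update whenever the curvature \mathbf{s}_k^{\mathrm T}\mathbf{y}_k is not safely positive, and uses the compact form of the inverse update, which is algebraically equal to step 5.

<syntaxhighlight lang="python">
import numpy as np

def bfgs(f, grad, x0, tol=1e-6, max_iter=200):
    """Illustrative BFGS loop: inverse-Hessian updates with a backtracking line search."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                           # initial inverse Hessian approximation H_0
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:         # stop when the gradient is small enough
            break
        p = -H @ g                          # step 1: search direction p_k = -H_k grad f(x_k)
        alpha, c1, shrink = 1.0, 1e-4, 0.5  # step 2: backtracking (sufficient-decrease) line search
        fx = f(x)
        while f(x + alpha * p) > fx + c1 * alpha * (g @ p):
            alpha *= shrink
        s = alpha * p                       # step 3: s_k and the new iterate x_{k+1}
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                       # step 4: gradient difference y_k
        sy = s @ y
        if sy > 1e-10:                      # curvature safeguard: update only if s_k^T y_k > 0
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)   # step 5: inverse BFGS update (compact form)
        x, g = x_new, g_new
    return x

# usage on a simple convex quadratic with minimum at (1, -2)
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)])
print(bfgs(f, grad, np.zeros(2)))           # approximately [1, -2]
</syntaxhighlight>

Updating H_k directly avoids solving a linear system at every iteration; updating B_k instead and solving B_k \mathbf{p}_k = -\nabla f(\mathbf{x}_k) is equally valid but requires an additional factorization step unless a factorization of B_k is maintained and updated.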

In statistical estimation problems (such as maximum likelihood or Bayesian inference), credible intervals or confidence intervals for the solution can be estimated from the inverse of the final Hessian matrix {{Citation needed|reason=there is a need to see in which works this was applied, esp. given the next sentence|date=May 2021}}. However, these quantities are technically defined by the true Hessian matrix, and the BFGS approximation may not converge to the true Hessian matrix.{{Cite journal | last1=Ge | last2=Powell| first1=Ren-pu | first2=M. J. D. | title=The Convergence of Variable Metric Matrices in Unconstrained Optimization | journal=Mathematical Programming |volume=27| year=1983 | issue=2|at=123 |doi=10.1007/BF02591941 | s2cid=8113073}}

Further developments

The BFGS update formula relies heavily on the curvature \mathbf{s}_k^\top \mathbf{y}_k being strictly positive and bounded away from zero. This condition is satisfied when the line search obeys the Wolfe conditions and the target is convex. However, some real-life applications (like sequential quadratic programming methods) routinely produce negative or nearly zero curvatures. This can occur when optimizing a nonconvex target or when employing a trust-region approach instead of a line search, and spurious values can also arise from noise in the target. In such cases, one of the so-called damped BFGS updates can be used; these modify \mathbf{s}_k and/or \mathbf{y}_k in order to obtain a more robust update (see {{citation | title = Numerical Optimization | author = Jorge Nocedal | author2 = Stephen J. Wright | year = 2006}}).
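
As an illustration, the damping described in the Nocedal and Wright reference (often attributed to Powell) replaces \mathbf{y}_k by an interpolation \mathbf{r}_k = \theta_k \mathbf{y}_k + (1-\theta_k) B_k \mathbf{s}_k whenever \mathbf{s}_k^{\mathrm T}\mathbf{y}_k is too small relative to \mathbf{s}_k^{\mathrm T} B_k \mathbf{s}_k, and then performs the usual update with \mathbf{r}_k in place of \mathbf{y}_k. A minimal sketch follows; the function name and the default threshold of 0.2 (the value used in that reference) are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def powell_damped_y(B, s, y, threshold=0.2):
    """Damped replacement for y_k, following the damped BFGS update in Nocedal & Wright.

    Returns r_k = theta*y_k + (1 - theta)*B_k s_k with theta chosen so that
    s_k^T r_k >= threshold * s_k^T B_k s_k, keeping the update positive definite."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy >= threshold * sBs:
        theta = 1.0                               # curvature is acceptable: keep y_k unchanged
    else:
        theta = (1.0 - threshold) * sBs / (sBs - sy)
    return theta * y + (1.0 - theta) * Bs         # use r_k in place of y_k in the B_k update
</syntaxhighlight>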

Notable implementations

Notable open source implementations are:

  • ALGLIB implements BFGS and its limited-memory version in C++ and C#
  • GNU Octave uses a form of BFGS in its fsolve function, with trust region extensions.
  • The GSL implements BFGS as gsl_multimin_fdfminimizer_vector_bfgs2.{{Cite web|title=GNU Scientific Library — GSL 2.6 documentation|url=https://www.gnu.org/software/gsl/doc/html/index.html|access-date=2020-11-22|website=www.gnu.org}}
  • In R, the BFGS algorithm (and the L-BFGS-B version that allows box constraints) is implemented as an option of the base function optim().{{Cite web|title=R: General-purpose Optimization|url=https://stat.ethz.ch/R-manual/R-devel/library/stats/html/optim.html|access-date=2020-11-22|website=stat.ethz.ch}}
  • In SciPy, the scipy.optimize.fmin_bfgs function implements BFGS.{{Cite web|title=scipy.optimize.fmin_bfgs — SciPy v1.5.4 Reference Guide|url=https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin_bfgs.html|access-date=2020-11-22|website=docs.scipy.org}} BFGS-like behaviour can also be approximated with the L-BFGS routines by making the memory parameter (the number of stored correction pairs) very large. BFGS is also one of the default methods used when running scipy.optimize.minimize with no constraints (see the usage sketch after this list).{{Cite web|title=scipy.optimize.minimize — SciPy v1.5.4 Reference Guide|url=https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html|access-date=2025-01-22|website=docs.scipy.org}}
  • In Julia, the [https://julianlsolvers.github.io/Optim.jl/stable/ Optim.jl] package implements BFGS and L-BFGS as a solver option to the optimize() function (among other options).{{cite web |title=Optim.jl Configurable options |url=https://julianlsolvers.github.io/Optim.jl/stable/#user/config/#solver-options |website=julianlsolvers}}
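
As a usage illustration, a minimal SciPy call (a sketch assuming a recent SciPy version; the Rosenbrock helpers rosen and rosen_der ship with scipy.optimize):

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
# BFGS with an analytic gradient; omitting jac= makes SciPy fall back on finite differences
res = minimize(rosen, x0, method="BFGS", jac=rosen_der, options={"gtol": 1e-8})
print(res.x)    # approximate minimizer (all ones for the Rosenbrock function)
print(res.nit)  # number of BFGS iterations taken
</syntaxhighlight>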

Notable proprietary implementations include:

  • The large scale nonlinear optimization software Artelys Knitro implements, among others, both BFGS and L-BFGS algorithms.
  • In the MATLAB Optimization Toolbox, the fminunc function uses BFGS with cubic line search when the problem size is set to "medium scale."
  • Mathematica includes BFGS.
  • LS-DYNA also uses BFGS to solve implicit problems.

See also

References

{{reflist}}

Further reading

  • {{Citation | last1=Avriel |first1=Mordecai |year=2003|title=Nonlinear Programming: Analysis and Methods|publisher= Dover Publishing|isbn= 978-0-486-43227-4}}
  • {{Citation|last1=Bonnans |first1=J. Frédéric |last2=Gilbert |first2=J. Charles |last3=Lemaréchal |first3=Claude |author-link3=Claude Lemaréchal |last4=Sagastizábal |first4=Claudia A. |author4-link=Claudia Sagastizábal |title=Numerical Optimization: Theoretical and Practical Aspects |edition=Second |publisher=Springer |location=Berlin|year=2006 |isbn=3-540-35445-X |chapter=Newtonian Methods |pages=51–66 }}
  • {{Citation | last1=Fletcher | first1=Roger | title=Practical Methods of Optimization | publisher=John Wiley & Sons | location=New York | edition=2nd | isbn=978-0-471-91547-8 | year=1987 | url-access=registration | url=https://archive.org/details/practicalmethods0000flet }}
  • {{Citation|last1=Luenberger|first1=David G.|author-link1=David G. Luenberger|last2=Ye|first2=Yinyu|author-link2=Yinyu Ye|title=Linear and nonlinear programming|edition=Third|series=International Series in Operations Research & Management Science|volume=116|publisher=Springer|location=New York|year=2008|pages=xiv+546|isbn=978-0-387-74502-2| mr = 2423726}}
  • {{citation |last=Kelley |first=C. T. |title=Iterative Methods for Optimization |location=Philadelphia |publisher=Society for Industrial and Applied Mathematics |year=1999 |isbn=0-89871-433-8 |pages=71–86 }}

{{Optimization algorithms|unconstrained}}

{{DEFAULTSORT:Broyden-Fletcher-Goldfarb-Shanno algorithm}}

Category:Optimization algorithms and methods