Sequential linear-quadratic programming
{{refimprove|date=November 2017}}
'''Sequential linear-quadratic programming''' ('''SLQP''') is an iterative method for nonlinear optimization problems in which the objective function and the constraints are twice continuously differentiable. Similarly to sequential quadratic programming (SQP), SLQP proceeds by solving a sequence of optimization subproblems. The difference between the two approaches is that:
- in SQP, each subproblem is a quadratic program, with a quadratic model of the objective subject to a linearization of the constraints
- in SLQP, two subproblems are solved at each step: a linear program (LP) used to determine an active set, followed by an equality-constrained quadratic program (EQP) used to compute the total step
This decomposition makes SLQP well suited to large-scale optimization problems, for which efficient LP and EQP solvers are available; these subproblems are easier to scale than full-fledged quadratic programs.
It may be considered related to, but distinct from, quasi-Newton methods.
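The two-phase structure can be outlined in code. The following Python sketch shows a single iteration at a high level; the routines <code>lp_phase</code> and <code>eqp_phase</code> are sketched in the corresponding sections below, and the <code>oracles</code> object bundling function, constraint, and derivative evaluations is a hypothetical interface introduced here purely for illustration.

<syntaxhighlight lang="python">
import numpy as np

def slqp_iteration(x, oracles, delta):
    """One simplified SLQP iteration; all interfaces are illustrative only."""
    g = oracles.grad_f(x)                    # gradient of the objective
    b_val, Jb = oracles.b(x), oracles.Jb(x)  # inequality constraints and Jacobian
    c_val, Jc = oracles.c(x), oracles.Jc(x)  # equality constraints and Jacobian
    H = oracles.hess_lagrangian(x)           # Hessian of the Lagrangian

    # Phase 1: a linear program predicts the optimal active set.
    d_lp, active = lp_phase(g, Jb, b_val, Jc, c_val, delta)

    # Phase 2: an equality-constrained QP on the predicted active set
    # computes the total step.
    A = np.vstack([Jb[active], Jc])
    r = -np.concatenate([b_val[active], c_val])
    d, multipliers = eqp_phase(g, H, A, r)

    # Practical methods accept or reject the step with a merit function
    # and update the trust-region radius delta; both are omitted here.
    return x + d
</syntaxhighlight>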
== Algorithm basics ==
Consider a nonlinear programming problem of the form:
:<math>\begin{array}{rl}
\min\limits_{x} & f(x) \\
\mbox{s.t.} & b(x) \ge 0 \\
 & c(x) = 0.
\end{array}</math>
The Lagrangian for this problem is<ref>{{cite book |author=Jorge Nocedal and Stephen J. Wright |title=Numerical Optimization |publisher=Springer |year=2006 |isbn=0-387-30303-0 |url=http://www.ece.northwestern.edu/~nocedal/book/num-opt.html}}</ref>
:<math>\mathcal{L}(x,\lambda,\sigma) = f(x) - \lambda^T b(x) - \sigma^T c(x),</math>
where <math>\lambda \ge 0</math> and <math>\sigma</math> are Lagrange multipliers.
=== LP phase ===
In the LP phase of SLQP, the following linear program is solved:
:<math>\begin{array}{rl}
\min\limits_{d} & f(x_k) + \nabla f(x_k)^Td \\
\mathrm{s.t.} & b(x_k) + \nabla b(x_k)^Td \ge 0 \\
 & c(x_k) + \nabla c(x_k)^T d = 0.
\end{array}</math>
Let <math>{\cal A}_k</math> denote the active set at the optimum <math>d^*_{LP}</math> of this problem, that is to say, the set of constraints that are equal to zero at <math>d^*_{LP}</math>. Denote by <math>b_{{\cal A}_k}</math> and <math>c_{{\cal A}_k}</math> the sub-vectors of <math>b</math> and <math>c</math> corresponding to elements of <math>{\cal A}_k</math>.
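Using SciPy's <code>linprog</code>, the LP phase can be sketched as follows. The infinity-norm trust-region radius <code>delta</code>, which practical SLQP methods add so that the linear program has a bounded solution, and the tolerance <code>tol</code> used to detect active constraints are assumptions of this sketch, not part of the formulation above.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import linprog

def lp_phase(g, Jb, b_val, Jc, c_val, delta, tol=1e-8):
    """Solve  min g@d  s.t.  b + Jb@d >= 0,  c + Jc@d = 0,  |d_i| <= delta.

    Returns the LP step and the indices of the inequality constraints that
    hold with equality (are active) at the LP optimum.
    """
    n = g.size
    res = linprog(
        c=g,                           # constant term f(x_k) is dropped
        A_ub=-Jb, b_ub=b_val,          # Jb@d >= -b rewritten as -Jb@d <= b
        A_eq=Jc, b_eq=-c_val,          # Jc@d = -c
        bounds=[(-delta, delta)] * n,  # trust region keeps the LP bounded
        method="highs",
    )
    if not res.success:                  # practical codes relax infeasible LPs
        raise RuntimeError(res.message)  # (e.g. via an l1 penalty); omitted here
    active = np.where(np.abs(b_val + Jb @ res.x) <= tol)[0]
    return res.x, active
</syntaxhighlight>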
=== EQP phase ===
In the EQP phase of SLQP, the search direction of the step is obtained by solving the following equality-constrained quadratic program:
:<math>\begin{array}{rl}
\min\limits_{d} & f(x_k) + \nabla f(x_k)^Td + \tfrac{1}{2} d^T \nabla_{xx}^2 \mathcal{L}(x_k,\lambda_k,\sigma_k) d \\
\mathrm{s.t.} & b_{{\cal A}_k}(x_k) + \nabla b_{{\cal A}_k}(x_k)^Td = 0 \\
 & c_{{\cal A}_k}(x_k) + \nabla c_{{\cal A}_k}(x_k)^T d = 0.
\end{array}</math>
Note that the term <math>f(x_k)</math> in the objective functions above may be left out of the minimization problems, since it is constant and does not affect the minimizer.
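Since all constraints in this subproblem are equalities, it can be solved directly through its Karush–Kuhn–Tucker (KKT) system. The sketch below assumes that the matrix of active constraint gradients has full row rank and that the KKT matrix is nonsingular; like the other sketches on this page, it is illustrative rather than a production implementation.

<syntaxhighlight lang="python">
import numpy as np

def eqp_phase(g, H, A, r):
    """Solve  min g@d + 0.5*d@H@d  s.t.  A@d = r  via the KKT system

        [ H  A^T ] [ d   ]   [ -g ]
        [ A  0   ] [ lam ] = [  r ],

    where A stacks the gradients of the constraints active in the LP phase
    and r = -[b_A; c_A], so the linearized active constraints are satisfied.
    """
    n, m = g.size, r.size
    K = np.block([[H, A.T],
                  [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-g, r]))
    return sol[:n], sol[n:]  # step d and Lagrange multiplier estimates
</syntaxhighlight>

In a full method, the multiplier estimates returned by the EQP phase are typically carried into the next iteration to form the Hessian of the Lagrangian <math>\nabla_{xx}^2 \mathcal{L}(x_{k+1},\lambda_{k+1},\sigma_{k+1})</math>.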
== Notes ==
{{Reflist}}
== References ==
- {{cite book |author=Jorge Nocedal and Stephen J. Wright |title=Numerical Optimization |publisher=Springer |year=2006 |isbn=0-387-30303-0 |url=http://www.ece.northwestern.edu/~nocedal/book/num-opt.html}}
{{optimization algorithms}}
Category:Optimization algorithms and methods
{{Mathapplied-stub}}