Wolfe duality

{{Refimprove|date=May 2012}}

In mathematical optimization, Wolfe duality, named after Philip Wolfe, is a type of dual problem in which the objective function and constraints are all differentiable functions. Using this concept, a lower bound on the optimal value of a minimization problem can be obtained via the weak duality principle.{{cite journal|author=Philip Wolfe|title=A duality theorem for non-linear programming|journal=Quarterly of Applied Mathematics|volume=19|year=1961|issue=3 |pages=239–244|doi=10.1090/qam/135625 |doi-access=free}}

Mathematical formulation

For a minimization problem with inequality constraints,

: \begin{align}
&\underset{x}{\operatorname{minimize}}& & f(x) \\
&\operatorname{subject\;to} & &g_i(x) \leq 0, \quad i = 1,\dots,m
\end{align}

the Lagrangian dual problem is

: \begin{align}
&\underset{u}{\operatorname{maximize}}& & \inf_x \left(f(x) + \sum_{j=1}^m u_j g_j(x)\right) \\
&\operatorname{subject\;to} & &u_i \geq 0, \quad i = 1,\dots,m
\end{align}

where the objective function is the Lagrange dual function. Provided that the functions f and g_1, \ldots, g_m are convex and continuously differentiable, the infimum occurs where the gradient is equal to zero. The problem

: \begin{align}
&\underset{x, u}{\operatorname{maximize}}& & f(x) + \sum_{j=1}^m u_j g_j(x) \\
&\operatorname{subject\;to} & & \nabla f(x) + \sum_{j=1}^m u_j \nabla g_j(x) = 0 \\
&&&u_i \geq 0, \quad i = 1,\dots,m
\end{align}

is called the Wolfe dual problem.{{Cite book |last=Eiselt |first=Horst A. |url=https://books.google.com/books?id=SlW9DwAAQBAJ |title=Nonlinear Optimization: Methods and Applications |date=2019 |publisher=Springer International Publishing AG |others=Carl-Louis Sandblom |isbn=978-3-030-19462-8 |series=International Series in Operations Research and Management Science Ser |location=Cham |pages=147}}{{Clarify|The current reference only deals with the case where the objective function is quadratic, even though the remainder of the article does not make this assumption.|date=February 2025}} This problem employs the Karush–Kuhn–Tucker (KKT) conditions as constraints. Moreover, the equality constraint \nabla f(x) + \sum_{j=1}^m u_j \nabla g_j(x) = 0 is nonlinear in general, so the Wolfe dual problem may be a nonconvex optimization problem. In any case, weak duality holds.{{cite journal |last1=Geoffrion |first1=Arthur M. |title=Duality in Nonlinear Programming: A Simplified Applications-Oriented Development | jstor=2028848 |journal=SIAM Review |volume=13 |year=1971 |pages=1–37 |issue=1 |doi=10.1137/1013001}}
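As a simple illustration (a one-variable example constructed here, not taken from the cited sources), consider minimizing f(x) = x^2 subject to g_1(x) = 1 - x \leq 0; the primal optimum is x = 1 with value 1. The Wolfe dual problem is

: \begin{align}
&\underset{x, u}{\operatorname{maximize}}& & x^2 + u_1 (1 - x) \\
&\operatorname{subject\;to} & & 2x - u_1 = 0 \\
&&&u_1 \geq 0
\end{align}

Eliminating u_1 = 2x (which forces x \geq 0) reduces the objective to x^2 + 2x(1-x) = 2x - x^2, which is maximized at x = 1, u_1 = 2 with value 1, matching the primal optimal value. Any other dual-feasible point, such as x = 0, u_1 = 0 with value 0, gives a lower bound on the primal optimum, as guaranteed by weak duality.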

References

{{Reflist}}

Category:Convex optimization

{{applied-math-stub}}