Convex optimization

{{short description|Subfield of mathematical optimization}}

Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets). Many classes of convex optimization problems admit polynomial-time algorithms,{{harvnb|Nesterov|Nemirovskii|1994}} whereas mathematical optimization is in general NP-hard.{{cite journal | last1 = Murty | first1 = Katta | last2 = Kabadi | first2 = Santosh | title = Some NP-complete problems in quadratic and nonlinear programming | journal = Mathematical Programming | volume = 39 | issue = 2 | pages = 117–129 | year = 1987 | doi = 10.1007/BF02592948 | hdl = 2027.42/6740 | s2cid = 30500771 | hdl-access = free}}{{cite journal | last = Sahni | first = S. | title = Computationally related problems | journal = SIAM Journal on Computing | volume = 3 | pages = 262–279 | year = 1974}}{{cite journal | url = https://link.springer.com/article/10.1007/BF00120662 | doi = 10.1007/BF00120662 | title = Quadratic programming with one negative eigenvalue is NP-hard | date = 1991 | last1 = Pardalos | first1 = Panos M. | last2 = Vavasis | first2 = Stephen A. | journal = Journal of Global Optimization | volume = 1 | pages = 15–22 | url-access = subscription}}

== Definition ==

=== Abstract form ===

A convex optimization problem is defined by two ingredients:{{cite book |last1=Hiriart-Urruty |first1=Jean-Baptiste |url=https://books.google.com/books?id=Gdl4Jc3RVjcC&q=lemarechal+convex+analysis+and+minimization |title=Convex analysis and minimization algorithms: Fundamentals |last2=Lemaréchal |first2=Claude |year=1996 |isbn=9783540568506 |page=291|publisher=Springer }}{{cite book |last1=Ben-Tal |first1=Aharon |url=https://books.google.com/books?id=M3MqpEJ3jzQC&q=Lectures+on+Modern+Convex+Optimization:+Analysis,+Algorithms, |title=Lectures on modern convex optimization: analysis, algorithms, and engineering applications |last2=Nemirovskiĭ |first2=Arkadiĭ Semenovich |year=2001 |isbn=9780898714913 |pages=335–336}}

  • The objective function, which is a real-valued convex function of n variables, f: \mathcal{D} \subseteq \mathbb{R}^n \to \mathbb{R};
  • The feasible set, which is a convex subset C \subseteq \mathbb{R}^n.

The goal of the problem is to find some \mathbf{x^\ast} \in C attaining

:\inf \{ f(\mathbf{x}) : \mathbf{x} \in C \}.

In general, there are three options regarding the existence of a solution:{{Cite book |last1=Boyd |first1=Stephen |url=https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf |title=Convex Optimization |last2=Vandenberghe |first2=Lieven |publisher=Cambridge University Press |year=2004 |isbn=978-0-521-83378-3 |access-date=12 Apr 2021}}{{Rp|location=chpt.4}}

  • If such a point \mathbf{x^\ast} exists, it is referred to as an optimal point or solution; the set of all optimal points is called the optimal set; and the problem is called solvable.
  • If f is unbounded below over C, or the infimum is not attained, then the optimization problem is said to be unbounded.
  • Otherwise, if C is the empty set, the problem is said to be infeasible.
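For example, f(x) = x^2 over C = \mathbb{R} is solvable, with optimal point \mathbf{x^\ast} = 0; f(x) = e^{-x} over C = \mathbb{R} has infimum 0, which is not attained, so the problem is unbounded in the above sense; and any problem with C = \emptyset is infeasible.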

=== Standard form ===

A convex optimization problem is in standard form if it is written as

:\begin{align}
&\underset{\mathbf{x}}{\operatorname{minimize}}& & f(\mathbf{x}) \\
&\operatorname{subject\ to}
& &g_i(\mathbf{x}) \leq 0, \quad i = 1, \dots, m \\
&&&h_i(\mathbf{x}) = 0, \quad i = 1, \dots, p,
\end{align}

where:{{Rp|location=chpt.4}}

  • \mathbf{x} \in \mathbb{R}^n is the vector of optimization variables;
  • The objective function f: \mathcal D \subseteq \mathbb{R}^n \to \mathbb{R} is a convex function;
  • The inequality constraint functions g_i : \mathbb{R}^n \to \mathbb{R}, i=1, \ldots, m, are convex functions;
  • The equality constraint functions h_i : \mathbb{R}^n \to \mathbb{R}, i=1, \ldots, p, are affine functions, that is, of the form h_i(\mathbf{x}) = \mathbf{a_i}\cdot \mathbf{x} - b_i, where \mathbf{a_i} is a vector and b_i is a scalar.

The feasible set C of the optimization problem consists of all points \mathbf{x} \in \mathcal{D} satisfying the inequality and the equality constraints. This set is convex because \mathcal{D} is convex, the sublevel sets of convex functions are convex, affine sets are convex, and the intersection of convex sets is convex.{{Rp|location=chpt.2}}

Many optimization problems can be equivalently formulated in this standard form. For example, the problem of maximizing a concave function f can be re-formulated equivalently as the problem of minimizing the convex function -f. The problem of maximizing a concave function over a convex set is commonly called a convex optimization problem.{{cite web | url=https://www.solver.com/convex-optimization | title=Optimization Problem Types - Convex Optimization | date=9 January 2011 }}
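To make the standard form concrete, here is a minimal sketch of a standard-form problem expressed with the CVXPY modeling tool (discussed under Software below); the data matrix A, the vector b, and the box constraints are arbitrary placeholders:

<syntaxhighlight lang="python">
import cvxpy as cp
import numpy as np

# Arbitrary placeholder data for a small box-constrained least-squares problem.
np.random.seed(0)
A = np.random.randn(20, 5)
b = np.random.randn(20)

x = cp.Variable(5)                                   # optimization variable x in R^5
objective = cp.Minimize(cp.sum_squares(A @ x - b))   # convex objective f(x)
constraints = [x >= -1, x <= 1]                      # convex inequality constraints g_i(x) <= 0
problem = cp.Problem(objective, constraints)
problem.solve()                                      # hands the problem to a conic solver
print(problem.status, problem.value)                 # "optimal" and the attained minimum
</syntaxhighlight>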

=== Epigraph form (standard form with linear objective){{Anchor|linear}} ===

In the standard form it is possible to assume, without loss of generality, that the objective function f is a linear function. This is because any program with a general objective can be transformed into a program with a linear objective by adding a single variable t and a single constraint, as follows:{{Cite book |last=Nemirovsky |first=Arkadi |url=https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=8c3cb6395a35cb504019f87f447d65cb6cf1cdf0 |title=Interior point polynomial-time methods in convex programming |year=2004}}{{Rp|location=1.4}}

:\begin{align}
&\underset{\mathbf{x},t}{\operatorname{minimize}}& & t \\
&\operatorname{subject\ to}
& &f(\mathbf{x}) - t \leq 0 \\
& &g_i(\mathbf{x}) \leq 0, \quad i = 1, \dots, m \\
&&&h_i(\mathbf{x}) = 0, \quad i = 1, \dots, p,
\end{align}
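The same transformation can be written down directly in CVXPY notation; this is a minimal sketch in which \|\mathbf{x}\|_2^2 stands in for a general convex objective f and a single linear constraint stands in for the g_i:

<syntaxhighlight lang="python">
import cvxpy as cp

x = cp.Variable(3)
t = cp.Variable()

f = cp.sum_squares(x)            # stand-in for a general convex objective f(x)
g = [cp.sum(x) - 1 <= 0]         # stand-in inequality constraints g_i(x) <= 0

# Original program: minimize f(x) subject to the constraints g.
original = cp.Problem(cp.Minimize(f), g)

# Epigraph form: linear objective t with the extra constraint f(x) - t <= 0.
epigraph = cp.Problem(cp.Minimize(t), [f - t <= 0] + g)

original.solve()
epigraph.solve()
print(original.value, epigraph.value)  # the two optimal values coincide
</syntaxhighlight>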

=== Conic form ===

Every convex program can be presented in a conic form, which means minimizing a linear objective over the intersection of an affine plane and a convex cone:{{Rp|location=5.1}}

:\begin{align}
&\underset{\mathbf{x}}{\operatorname{minimize}}& & c^T x \\
&\operatorname{subject\ to}
& &x \in (b+L)\cap K
\end{align}

where K is a closed pointed convex cone, L is a linear subspace of \mathbb{R}^n, and b is a vector in \mathbb{R}^n. A linear program in standard form is the special case in which K is the nonnegative orthant of \mathbb{R}^n.
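For example, the linear program \min\{c^T x : Ax = d,\ x \geq 0\} takes this form with K = \mathbb{R}^n_+ and with b + L equal to the affine set \{x : Ax = d\}, where b is any particular solution of Ax = d and L is the nullspace of A.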

=== Eliminating linear equality constraints ===

It is possible to convert a convex program in standard form to a convex program with no equality constraints.{{Rp|page=132}} Denote the equality constraints h_i(\mathbf{x})=0 collectively as A\mathbf{x}=\mathbf{b}, where A has n columns. If A\mathbf{x}=\mathbf{b} is infeasible, then of course the original problem is infeasible. Otherwise, it has some solution \mathbf{x}_0, and the set of all solutions can be presented as F\mathbf{z}+\mathbf{x}_0, where \mathbf{z} \in \mathbb{R}^k, k=n-\operatorname{rank}(A), and F is an n-by-k matrix whose columns span the nullspace of A. Substituting \mathbf{x} = F\mathbf{z}+\mathbf{x}_0 in the original problem gives:

:\begin{align}
&\underset{\mathbf{z}}{\operatorname{minimize}}& & f(F\mathbf{z} + \mathbf{x}_0) \\
&\operatorname{subject\ to}
& &g_i(F\mathbf{z} + \mathbf{x}_0) \leq 0, \quad i = 1, \dots, m
\end{align}

where the variables are \mathbf{z}. Note that there are \operatorname{rank}(A) fewer variables. This means that, in principle, one can restrict attention to convex optimization problems without equality constraints. In practice, however, it is often preferable to retain the equality constraints, since they might make some algorithms more efficient, and also make the problem easier to understand and analyze.
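A minimal numerical sketch of this elimination, using scipy.linalg.null_space to build the matrix F and a least-squares solve for a particular solution \mathbf{x}_0 (the constraint data are arbitrary placeholders):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import null_space

# Placeholder equality constraints A x = b, with A of full row rank.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])

x0, *_ = np.linalg.lstsq(A, b, rcond=None)  # a particular solution of A x = b
F = null_space(A)                           # columns span the nullspace of A

# Every solution of A x = b has the form x = F z + x0, so a convex objective
# f(x) becomes the equality-constraint-free convex objective f(F z + x0).
z = np.zeros(F.shape[1])                    # k = n - rank(A) free variables
x = F @ z + x0
print(np.allclose(A @ x, b))                # True for any choice of z
</syntaxhighlight>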

== Special cases ==

The following problem classes are all convex optimization problems, or can be reduced to convex optimization problems via simple transformations:{{Rp|location=chpt.4}}{{cite journal |last1=Agrawal |first1=Akshay |last2=Verschueren |first2=Robin |last3=Diamond |first3=Steven |last4=Boyd |first4=Stephen |year=2018 |title=A rewriting system for convex optimization problems |url=https://web.stanford.edu/~boyd/papers/pdf/cvxpy_rewriting.pdf |journal=Control and Decision |volume=5 |issue=1 |pages=42–60 |arxiv=1709.04494 |doi=10.1080/23307706.2017.1397554 |s2cid=67856259}} linear programming (LP), quadratic programming (QP), second-order cone programming (SOCP), semidefinite programming (SDP), and conic optimization (CP).

[[File:Hierarchy compact convex.png|thumb|A hierarchy of convex optimization problems (LP: linear programming, QP: quadratic programming, SOCP: second-order cone program, SDP: semidefinite programming, CP: conic optimization).]]

Other special cases include geometric programming, which can be transformed into a convex optimization problem by a change of variables.

== Properties ==

The following are useful properties of convex optimization problems:{{cite journal | author = Rockafellar, R. Tyrrell | title = Lagrange multipliers and optimality | journal = SIAM Review | volume = 35 | issue = 2 | year = 1993 | pages = 183–238 |url = http://web.williams.edu/Mathematics/sjmiller/public_html/105Sp10/handouts/Rockafellar_LagrangeMultAndOptimality.pdf | doi=10.1137/1035044| citeseerx = 10.1.1.161.7209}}{{Rp|location=chpt.4}}

  • every local minimum is a global minimum;
  • the optimal set is convex;
  • if the objective function is strictly convex, then the problem has at most one optimal point.

These results are used by the theory of convex minimization along with geometric notions from functional analysis (in Hilbert spaces) such as the Hilbert projection theorem, the separating hyperplane theorem, and Farkas' lemma.{{citation needed|date=July 2022}}

== Algorithms ==

=== Unconstrained and equality-constrained problems ===

The easiest convex programs to solve are the unconstrained ones, or those with only equality constraints. Since the equality constraints are all linear, they can be eliminated with linear algebra and integrated into the objective, thus converting an equality-constrained problem into an unconstrained one.

In the class of unconstrained (or equality-constrained) problems, the simplest ones are those in which the objective is quadratic. For these problems, the KKT conditions (which are necessary for optimality) are all linear, so they can be solved analytically.{{Rp|location=chpt.11}}
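For instance, for the equality-constrained quadratic program of minimizing \tfrac{1}{2}x^T P x + q^T x subject to Ax = b (with P positive semidefinite), the KKT conditions reduce to a single linear system in the primal variable x and the multiplier \nu, which can be solved directly; the data below are arbitrary placeholders:

<syntaxhighlight lang="python">
import numpy as np

# Placeholder data: minimize (1/2) x^T P x + q^T x  subject to  A x = b.
P = np.array([[2.0, 0.5], [0.5, 1.0]])    # positive definite quadratic term
q = np.array([1.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# KKT conditions: P x + q + A^T nu = 0 and A x = b -- one linear system.
n, p = P.shape[0], A.shape[0]
KKT = np.block([[P, A.T], [A, np.zeros((p, p))]])
rhs = np.concatenate([-q, b])
sol = np.linalg.solve(KKT, rhs)
x, nu = sol[:n], sol[n:]
print(x, nu)                               # optimal point and Lagrange multiplier
</syntaxhighlight>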

For unconstrained (or equality-constrained) problems with a general convex objective that is twice differentiable, Newton's method can be used. It can be seen as reducing a general unconstrained convex problem to a sequence of quadratic problems.{{Rp|location=chpt.11}} Newton's method can be combined with a line search for an appropriate step size, and it can be mathematically proven to converge quickly.
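A minimal sketch of damped Newton iterations with backtracking line search; the twice-differentiable convex test function below (a log-sum-exp term plus a quadratic) is an arbitrary placeholder:

<syntaxhighlight lang="python">
import numpy as np

# Placeholder smooth convex objective: f(x) = log(e^{x_1} + e^{x_2}) + ||x||^2.
def f(x):
    return np.log(np.exp(x).sum()) + x @ x

def grad(x):
    s = np.exp(x) / np.exp(x).sum()         # softmax: gradient of log-sum-exp
    return s + 2 * x

def hess(x):
    s = np.exp(x) / np.exp(x).sum()
    return np.diag(s) - np.outer(s, s) + 2 * np.eye(x.size)

x = np.array([1.0, -2.0])
for _ in range(50):
    g, H = grad(x), hess(x)
    step = np.linalg.solve(H, -g)           # Newton direction: each iteration
    if -g @ step / 2 <= 1e-10:              # minimizes a quadratic model of f;
        break                               # stop when the decrement is small
    t = 1.0
    while f(x + t * step) > f(x) + 0.25 * t * (g @ step):
        t *= 0.5                            # backtracking line search
    x = x + t * step
print(x, f(x))
</syntaxhighlight>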

Another efficient algorithm for unconstrained minimization is gradient descent (a special case of steepest descent).

=== General problems ===

The more challenging problems are those with inequality constraints. A common way to solve them is to reduce them to unconstrained problems by adding a barrier function, enforcing the inequality constraints, to the objective function. Such methods are called interior point methods.{{Rp|location=chpt.11}} They have to be initialized by finding a feasible interior point using so-called phase I methods, which either find a feasible point or show that none exists. Phase I methods generally consist of reducing the search in question to a simpler convex optimization problem.{{Rp|location=chpt.11}}
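A toy sketch of a log-barrier method for a small linear program; a practical interior-point method would use Newton steps in the inner loop and a phase I computation for the starting point, and all data here are placeholders:

<syntaxhighlight lang="python">
import numpy as np

# Toy LP: minimize c^T x subject to A x <= b (feasible set: x >= 0, x_1 + x_2 <= 4).
c = np.array([1.0, 2.0])
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 4.0])

def val(x, t):
    slack = b - A @ x
    if np.any(slack <= 0):
        return np.inf                         # outside the barrier's domain
    return t * (c @ x) - np.log(slack).sum()  # objective plus log barrier

def grad(x, t):
    return t * c + A.T @ (1.0 / (b - A @ x))

x = np.array([1.0, 1.0])                      # strictly feasible start (phase I skipped)
t = 1.0
for _ in range(10):                           # outer loop: sharpen the barrier
    for _ in range(200):                      # inner loop: minimize val(., t)
        g = grad(x, t)
        step = 1.0
        while val(x - step * g, t) > val(x, t) - 0.5 * step * (g @ g) and step > 1e-12:
            step *= 0.5                       # backtrack: stay feasible, decrease
        x = x - step * g
    t *= 5.0
print(x)                                      # approaches the LP optimum (0, 0)
</syntaxhighlight>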

Convex optimization problems can also be solved by other contemporary methods. For methods for convex minimization, see the volumes by Hiriart-Urruty and Lemaréchal (bundle methods) and the textbooks by Ruszczyński, Bertsekas, and Boyd and Vandenberghe (interior-point methods).

Subgradient methods can be implemented simply and so are widely used.{{Cite journal |date=2006 |title=Numerical Optimization |url=https://link.springer.com/book/10.1007/978-0-387-40065-5 |journal=Springer Series in Operations Research and Financial Engineering |language=en |doi=10.1007/978-0-387-40065-5|isbn=978-0-387-30303-1 |url-access=subscription }} Dual subgradient methods are subgradient methods applied to a dual problem. The drift-plus-penalty method is similar to the dual subgradient method, but takes a time average of the primal variables.{{Citation needed|date=April 2021}}
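A minimal sketch of a plain (primal) subgradient method on the nondifferentiable convex objective \|Ax - b\|_1, with placeholder data and a diminishing step size; because the iterates need not decrease monotonically, the best point seen so far is retained:

<syntaxhighlight lang="python">
import numpy as np

# Placeholder data for the nondifferentiable convex objective f(x) = ||A x - b||_1.
np.random.seed(0)
A = np.random.randn(30, 5)
b = np.random.randn(30)

def f(x):
    return np.abs(A @ x - b).sum()

def subgrad(x):
    return A.T @ np.sign(A @ x - b)           # a valid subgradient of the l1 objective

x = np.zeros(5)
best_x, best_f = x, f(x)
for k in range(1, 5001):
    x = x - (0.1 / np.sqrt(k)) * subgrad(x)   # diminishing step ~ 1/sqrt(k)
    if f(x) < best_f:
        best_x, best_f = x, f(x)              # keep the best iterate seen
print(best_f)
</syntaxhighlight>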

== Lagrange multipliers ==

{{Main|Lagrange multiplier}}

{{Unreferenced section|date=April 2021}}

Consider a convex minimization problem given in standard form by a cost function f(x) and inequality constraints g_i(x)\leq 0 for 1 \leq i \leq m. Then the domain \mathcal{X} is:

:\mathcal{X} = \left\{x\in X \mid g_1(x)\leq 0, \ldots, g_m(x)\leq 0\right\}.

The Lagrangian function for the problem is{{cite book |first1=Brian |last1=Beavis |first2=Ian M. |last2=Dobbs |chapter=Static Optimization |title=Optimization and Stability Theory for Economic Analysis |location=New York |publisher=Cambridge University Press |year=1990 |isbn=0-521-33605-8 |page=40 |chapter-url=https://books.google.com/books?id=L7HMACFgnXMC&pg=PA40 }}

:L(x,\lambda_{0},\lambda_1, \ldots ,\lambda_{m})=\lambda_{0} f(x) + \lambda_{1} g_{1} (x)+\cdots + \lambda_{m} g_{m} (x).

For each point x in X that minimizes f over X, there exist real numbers \lambda_{0},\lambda_1, \ldots, \lambda_{m}, called Lagrange multipliers, that satisfy these conditions simultaneously:

  1. x minimizes L(y,\lambda_{0},\lambda_{1},\ldots ,\lambda_{m}) over all y \in X,
  2. \lambda_{0},\lambda_{1},\ldots ,\lambda_{m} \geq 0, with at least one \lambda_{k} > 0,
  3. \lambda_{1}g_{1}(x)=\cdots = \lambda_{m}g_{m}(x) = 0 (complementary slackness).

If there exists a "strictly feasible point", that is, a point z satisfying

:g_{1}(z), \ldots, g_{m}(z)<0,

then the statement above can be strengthened to require that \lambda_{0}=1.

Conversely, if some x in X satisfies (1)–(3) for scalars \lambda_{0},\ldots,\lambda_{m} with \lambda_{0}=1 then x is certain to minimize f over X.
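For example, take X = \mathbb{R}, f(x) = x^{2} and the single constraint g_{1}(x) = 1 - x, so that \mathcal{X} = \{x : x \geq 1\}. The point z = 2 is strictly feasible, so \lambda_{0} = 1 may be assumed. At x = 1 with \lambda_{1} = 2, the Lagrangian L(y, 1, 2) = y^{2} + 2(1 - y) is minimized over all y \in \mathbb{R} at y = 1, and \lambda_{1} g_{1}(1) = 0; conditions (1)–(3) hold with \lambda_{0} = 1, so by the converse statement x = 1 indeed minimizes f over \mathcal{X}.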

== Software ==

There is a large software ecosystem for convex optimization. This ecosystem has two main categories: solvers on the one hand and modeling tools (or interfaces) on the other hand.

Solvers implement the algorithms themselves and are usually written in C. They require users to specify optimization problems in very specific formats, which may not be natural from a modeling perspective. Modeling tools are separate pieces of software that let the user specify an optimization problem in higher-level syntax. They manage all transformations between the user's high-level model and the solver's input/output format.

Below are two tables. The first shows modeling tools (such as CVXPY and JuMP.jl) and the second shows solvers (such as SCS and MOSEK). They are by no means exhaustive.

class="wikitable sortable"

|+

!Program

!Language

!Description

!FOSS?

!Ref

CVX

|MATLAB

|Interfaces with SeDuMi and SDPT3 solvers; designed to only express convex optimization problems.

!{{Yes}}

|{{Cite web|last=Borchers|first=Brian|title=An Overview Of Software For Convex Optimization|url=http://infohost.nmt.edu/~borchers/presentation.pdf|url-status=dead|archive-url=https://web.archive.org/web/20170918180026/http://infohost.nmt.edu/~borchers/presentation.pdf|archive-date=2017-09-18|access-date=12 Apr 2021}}

CVXPY

|Python

|

!{{Yes}}

|{{Cite web|title=Welcome to CVXPY 1.1 — CVXPY 1.1.11 documentation|url=https://www.cvxpy.org/|access-date=2021-04-12|website=www.cvxpy.org}}

Convex.jl

|Julia

|Disciplined convex programming, supports many solvers.

!{{Yes}}

|{{cite arXiv |last1=Udell |first1=Madeleine |last2=Mohan |first2=Karanveer |last3=Zeng |first3=David |last4=Hong |first4=Jenny |last5=Diamond |first5=Steven |last6=Boyd |first6=Stephen |date=2014-10-17 |title=Convex Optimization in Julia |class=math.OC |eprint=1410.4821 }}

CVXR

|R

|

!{{Yes}}

|{{Cite web|title=Disciplined Convex Optimiation - CVXR|url=https://www.cvxgrp.org/CVXR/|access-date=2021-06-17|website=www.cvxgrp.org}}

GAMS

|

|Modeling system for linear, nonlinear, mixed integer linear/nonlinear, and second-order cone programming problems.

!{{No}}

|

GloptiPoly

|MATLAB,

Octave

|Modeling system for polynomial optimization.

!{{Yes}}

|

JuMP.jl

|Julia

|Supports many solvers. Also supports integer and nonlinear optimization, and some nonconvex optimization.

!{{Yes}}

|{{cite journal |last1=Lubin |first1=Miles |last2=Dowson |first2=Oscar |last3=Dias Garcia |first3=Joaquim |last4=Huchette |first4=Joey |last5=Legat |first5=Benoît |last6=Vielma |first6=Juan Pablo |date=2023 |title=JuMP 1.0: Recent improvements to a modeling language for mathematical optimization | journal = Mathematical Programming Computation | doi = 10.1007/s12532-023-00239-3 |eprint=2206.03866 }}

ROME

|

|Modeling system for robust optimization. Supports distributionally robust optimization and uncertainty sets.

!{{Yes}}

|

SOSTOOLS

|

|Modeling system for polynomial optimization. Uses SDPT3 and SeDuMi. Requires Symbolic Computation Toolbox.

!{{Yes}}

|

SparsePOP

|

|Modeling system for polynomial optimization. Uses the SDPA or SeDuMi solvers.

!{{Yes}}

|

YALMIP

|MATLAB, Octave

|Interfaces with CPLEX, GUROBI, MOSEK, SDPT3, SEDUMI, CSDP, SDPA, PENNON solvers; also supports integer and nonlinear optimization, and some nonconvex optimization. Can perform robust optimization with uncertainty in LP/SOCP/SDP constraints.

!{{Yes}}

|

class="wikitable sortable"

|+

!Program

!Language

!Description

!FOSS?

!Ref

AIMMS

|

|Can do robust optimization on linear programming (with MOSEK to solve second-order cone programming) and mixed integer linear programming. Modeling package for LP + SDP and robust versions.

!{{No}}

|

CPLEX

|

|Supports primal-dual methods for LP + SOCP. Can solve LP, QP, SOCP, and mixed integer linear programming problems.

!{{No}}

|

CSDP

|C

|Supports primal-dual methods for LP + SDP. Interfaces available for MATLAB, R, and Python. Parallel version available. SDP solver.

!{{Yes}}

|

[https://cvxopt.org/ CVXOPT]

|Python

|Supports primal-dual methods for LP + SOCP + SDP. Uses Nesterov-Todd scaling. Interfaces to MOSEK and DSDP.

!{{Yes}}

|

MOSEK

|

|Supports primal-dual methods for LP + SOCP.

!{{No}}

|

SeDuMi

|MATLAB, Octave, MEX

|Solves LP + SOCP + SDP. Supports primal-dual methods for LP + SOCP + SDP.

!{{Yes}}

|

SDPA

|C++

|Solves LP + SDP. Supports primal-dual methods for LP + SDP. Parallelized and extended precision versions are available.

!{{Yes}}

|

SDPT3

|MATLAB, Octave, MEX

|Solves LP + SOCP + SDP. Supports primal-dual methods for LP + SOCP + SDP.

!{{Yes}}

|

ConicBundle

|

|Supports general-purpose codes for LP + SOCP + SDP. Uses a bundle method. Special support for SDP and SOCP constraints.

!{{Yes}}

|

DSDP

|

|Supports general-purpose codes for LP + SDP. Uses a dual interior point method.

!{{Yes}}

|

LOQO

|

|Supports general-purpose codes for SOCP, which it treats as a nonlinear programming problem.

!{{No}}

|

PENNON

|

|Supports general-purpose codes. Uses an augmented Lagrangian method, especially for problems with SDP constraints.

!{{No}}

|

SDPLR

|

|Supports general-purpose codes. Uses low-rank factorization with an augmented Lagrangian method.

!{{Yes}}

|

== Applications ==

Convex optimization can be used to model problems in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design,{{Rp|page=17}} data analysis and modeling, finance, statistics (optimal experimental design) (Christensen/Klarbring, chpt. 4), and structural optimization, where the approximation concept has proven to be efficient (Schmit and Fleury 1980). Specific applications include:

  • Portfolio optimization.{{Cite web |last1=Boyd |first1=Stephen |last2=Diamond |first2=Stephen |last3=Zhang |first3=Junzi |last4=Agrawal |first4=Akshay |title=Convex Optimization Applications |url=https://web.stanford.edu/~boyd/papers/pdf/cvx_applications.pdf |url-status=live |archive-url=https://web.archive.org/web/20151001185038/http://web.stanford.edu/~boyd/papers/pdf/cvx_applications.pdf |archive-date=2015-10-01 |access-date=12 Apr 2021}}
  • Worst-case risk analysis.
  • Optimal advertising.
  • Variations of statistical regression (including regularization and quantile regression).
  • Model fitting (particularly multiclass classification{{Cite web |last=Malick |first=Jérôme |date=2011-09-28 |title=Convex optimization: applications, formulations, relaxations |url=https://www-ljk.imag.fr//membres/Jerome.Malick/Talks/11-INRIA.pdf |url-status=live |archive-url=https://web.archive.org/web/20210412044738/https://www-ljk.imag.fr//membres/Jerome.Malick/Talks/11-INRIA.pdf |archive-date=2021-04-12 |access-date=12 Apr 2021}}).
  • Electricity generation optimization.
  • Combinatorial optimization.
  • Non-probabilistic modeling of uncertainty.Ben Haim Y. and Elishakoff I., Convex Models of Uncertainty in Applied Mechanics, Elsevier Science Publishers, Amsterdam, 1990.
  • Localization using wireless signals.Ahmad Bazzi, Dirk TM Slock, and Lisa Meilhac. "Online angle of arrival estimation in the presence of mutual coupling." 2016 IEEE Statistical Signal Processing Workshop (SSP). IEEE, 2016.

== Extensions ==

Extensions of convex optimization include the optimization of biconvex, pseudo-convex, and quasiconvex functions. Extensions of the theory of convex analysis and iterative methods for approximately solving non-convex minimization problems occur in the field of generalized convexity, also known as abstract convex analysis.{{Citation needed|date=April 2021}}

== See also ==

== Notes ==

== References ==

  • {{cite book |last1=Bertsekas |first1=Dimitri P. |last2=Nedic |first2=Angelia |last3=Ozdaglar |first3=Asuman |title=Convex Analysis and Optimization |publisher=Athena Scientific |year=2003 |location=Belmont, MA. |isbn=978-1-886529-45-8}}

  • {{cite book |last=Bertsekas |first=Dimitri P. |author-link=Dimitri P. Bertsekas |title=Convex Optimization Theory |publisher=Athena Scientific |year=2009 |location=Belmont, MA. |isbn=978-1-886529-31-1}}

  • {{cite book |last=Bertsekas |first=Dimitri P. |author-link=Dimitri P. Bertsekas |title=Convex Optimization Algorithms |publisher=Athena Scientific |year=2015 |location=Belmont, MA. |isbn=978-1-886529-28-1}}

  • {{Cite book|last1=Borwein|first1=Jonathan|url=https://carma.newcastle.edu.au/resources/jon/Preprints/Books/CaNo2/cano2f.pdf|title=Convex Analysis and Nonlinear Optimization: Theory and Examples, Second Edition|last2=Lewis|first2=Adrian|publisher=Springer|year=2000|access-date=12 Apr 2021}}
  • {{cite book |author1=Christensen, Peter W. |author2=Anders Klarbring |title=An introduction to structural optimization |volume=153 |publisher=Springer Science & Business Media |year=2008 |url=https://books.google.com/books?id=80IeN__MYI8C |isbn=9781402086663}}

  • Hiriart-Urruty, Jean-Baptiste, and Lemaréchal, Claude. (2004). Fundamentals of Convex Analysis. Berlin: Springer.
  • {{cite book|last1=Hiriart-Urruty|first1=Jean-Baptiste|last2=Lemaréchal|first2=Claude|title=Convex analysis and minimization algorithms, Volume I: Fundamentals|series=Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]|volume=305|publisher=Springer-Verlag|location=Berlin|year=1993|pages=xviii+417|isbn=978-3-540-56850-6|mr=1261420|author-link2=Claude Lemaréchal}}
  • {{cite book|last1=Hiriart-Urruty|first1=Jean-Baptiste|last2=Lemaréchal|first2=Claude|title=Convex analysis and minimization algorithms, Volume II: Advanced theory and bundle methods|series=Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]|volume=306|publisher=Springer-Verlag|location=Berlin|year=1993|pages=xviii+346|isbn=978-3-540-56852-0|mr=1295240|author-link2=Claude Lemaréchal}}
  • {{cite book|first=Krzysztof C.|last=Kiwiel|title=Methods of Descent for Nondifferentiable Optimization|url=https://archive.org/details/methodsofdescent0000kiwi|url-access=registration|year=1985|publisher=Springer-Verlag|location= New York|series=Lecture Notes in Mathematics|isbn=978-3-540-15642-0 }}
  • {{cite book|last=Lemaréchal|first=Claude|s2cid=9048698|chapter=Lagrangian relaxation|pages=112–156|title=Computational combinatorial optimization: Papers from the Spring School held in Schloß Dagstuhl, May 15–19, 2000|editor=Michael Jünger and Denis Naddef|series=Lecture Notes in Computer Science|volume=2241|publisher=Springer-Verlag|location=Berlin|year=2001|isbn=978-3-540-42877-0|mr=1900016|doi=10.1007/3-540-45586-8_4|author-link=Claude Lemaréchal}}
  • {{cite book |last1=Nesterov |first1=Yurii |last2=Nemirovskii |first2=Arkadii |year=1994 |title=Interior Point Polynomial Methods in Convex Programming |publisher=SIAM}}

  • Nesterov, Yurii. (2004). [https://books.google.com/books?id=2-ElBQAAQBAJ&dq=%22Introductory+Lectures+on+Convex+Optimization%22&pg=PA1 Introductory Lectures on Convex Optimization], Kluwer Academic Publishers
  • {{cite book |last=Rockafellar |first=R. T. |author-link=R. Tyrrell Rockafellar |title=Convex analysis |publisher=Princeton University Press |year=1970 |location=Princeton}}

  • {{cite book |last=Ruszczyński |first=Andrzej |author-link=Andrzej Piotr Ruszczyński |title=Nonlinear Optimization |publisher=Princeton University Press |year=2006}}

  • Schmit, L.A.; Fleury, C. (1980). "Structural synthesis by combining approximation concepts and dual methods". ''J. Amer. Inst. Aeronaut. Astronaut''. 18: 1252–1260.