Basis pursuit

{{Short description|Optimization problem}}

{{no footnotes|date=April 2020}}

Basis pursuit is the mathematical optimization problem of the form

: <math>\min_x \|x\|_1 \quad \text{subject to} \quad y = Ax,</math>

where x is an N-dimensional solution vector (signal), y is an M-dimensional vector of observations (measurements), A is an M × N transform matrix (usually a measurement matrix) and M < N. The version of basis pursuit that seeks to minimize the L0 norm instead is NP-hard.<ref>{{Cite journal |last=Natarajan |first=B. K. |date=April 1995 |title=Sparse Approximate Solutions to Linear Systems |url=https://epubs.siam.org/doi/10.1137/S0097539792240406 |journal=SIAM Journal on Computing |volume=24 |issue=2 |pages=227–234 |doi=10.1137/S0097539792240406 |issn=0097-5397}}</ref>

It is usually applied in cases where there is an underdetermined system of linear equations y = Ax that must be exactly satisfied, and the solution of minimal L1 norm, which serves as a convex proxy for sparsity, is desired.

When it is desirable to trade off exact equality of Ax and y in exchange for a sparser x, basis pursuit denoising is preferred.

Basis pursuit problems can be converted to linear programming problems in polynomial time and vice versa, making the two classes of problems polynomially equivalent.<ref>A. M. Tillmann, [https://onlinelibrary.wiley.com/doi/pdf/10.1002/pamm.201510351 Equivalence of Linear Programming and Basis Pursuit], PAMM (Proceedings in Applied Mathematics and Mechanics), Volume 15, 2015, pp. 735–738, {{doi|10.1002/pamm.201510351}}</ref>

== Equivalence to linear programming ==

A basis pursuit problem can be converted to a linear programming problem by first noting that

: <math>\begin{align} \|x\|_1 &= |x_1| + |x_2| + \ldots + |x_n| \\ &= (x_1^+ + x_1^-) + (x_2^+ + x_2^-) + \ldots + (x_n^+ + x_n^-)\end{align}</math>

where x_i^+, x_i^- \geq 0. This construction comes from splitting each component via the constraint x_i = x_i^+ - x_i^-, with x_i^+ intended to hold the positive part of x_i and x_i^- its negative part. Although many pairs (x_i^+, x_i^-) can satisfy this constraint, solvers based on the simplex algorithm return vertex solutions, at which at least one of x_i^+ and x_i^- is zero, so that |x_i| = x_i^+ + x_i^-.
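As a minimal numeric check of this split (illustrative only; the sample vector is an arbitrary assumption), the positive and negative parts recover both the original vector and its L1 norm:

```python
import numpy as np

x = np.array([1.5, -2.0, 0.0, 0.25])
x_pos = np.maximum(x, 0.0)   # x_i^+ : positive parts of x
x_neg = np.maximum(-x, 0.0)  # x_i^- : negative parts of x

print(x_pos - x_neg)          # reconstructs x: [ 1.5  -2.    0.    0.25]
print((x_pos + x_neg).sum())  # equals ||x||_1 = 3.75
```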

From this expansion, the problem can be recast in canonical form as:

: <math>\begin{align}
& \text{Find a vector} && (\mathbf{x}^+, \mathbf{x}^-) \\
& \text{that minimizes} && \mathbf{1}^T \mathbf{x}^+ + \mathbf{1}^T \mathbf{x}^- \\
& \text{subject to} && A \mathbf{x}^+ - A \mathbf{x}^- = \mathbf{y} \\
& \text{and} && \mathbf{x}^+, \mathbf{x}^- \geq \mathbf{0}.
\end{align}</math>
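This reformulation can be sketched with a general-purpose LP solver; the following uses SciPy's `linprog` on a small random instance (the measurement matrix, the 2-sparse test signal, and the problem sizes are assumptions made for the demonstration, not part of the article):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
M, N = 5, 12
A = rng.standard_normal((M, N))

# Build y from a known sparse signal so the system y = Ax is feasible.
x_true = np.zeros(N)
x_true[[2, 7]] = [1.5, -2.0]
y = A @ x_true

# LP over z = (x^+, x^-): minimize 1^T z  subject to  [A, -A] z = y,  z >= 0.
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))

x = res.x[:N] - res.x[N:]          # recover x from the split variables
print(np.round(x, 4))
```

The recovered x satisfies Ax = y exactly and has L1 norm no larger than that of the signal used to generate y; whether it coincides with that signal depends on the usual sparse-recovery conditions on A.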


== Notes ==

{{Reflist}}

== References & further reading ==

* Stephen Boyd, Lieven Vandenberghe: ''Convex Optimization''. Cambridge University Press, 2004, {{ISBN|9780521833783}}, p. 337.
* Simon Foucart, Holger Rauhut: ''A Mathematical Introduction to Compressive Sensing''. Springer, 2013, {{ISBN|9780817649487}}, pp. 77–110.