Descent direction

In optimization, a descent direction is a vector \mathbf{p}\in\mathbb R^n along which an objective function f:\mathbb R^n\to\mathbb R decreases locally; moving along such directions is the basic step of iterative methods for finding a local minimum \mathbf{x}^* of f.

When computing \mathbf{x}^* by an iterative method, such as line search, a descent direction \mathbf{p}_k\in\mathbb R^n at the kth iterate is defined to be any \mathbf{p}_k such that \langle\mathbf{p}_k,\nabla f(\mathbf{x}_k)\rangle < 0, where \langle \cdot , \cdot \rangle denotes the inner product. The motivation for such an approach is that, by Taylor's theorem, small steps along \mathbf{p}_k are guaranteed to reduce f.
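To make this explicit (a brief sketch of the standard first-order argument, with \alpha > 0 a step length), Taylor's theorem gives

f(\mathbf{x}_k + \alpha\mathbf{p}_k) = f(\mathbf{x}_k) + \alpha\langle\mathbf{p}_k, \nabla f(\mathbf{x}_k)\rangle + o(\alpha),

so the condition \langle\mathbf{p}_k,\nabla f(\mathbf{x}_k)\rangle < 0 ensures that f(\mathbf{x}_k + \alpha\mathbf{p}_k) < f(\mathbf{x}_k) for all sufficiently small \alpha > 0.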

Using this definition, the negative of a non-zero gradient is always a descent direction, as \langle -\nabla f(\mathbf{x}_k), \nabla f(\mathbf{x}_k) \rangle = -\langle \nabla f(\mathbf{x}_k), \nabla f(\mathbf{x}_k) \rangle < 0.

Numerous methods exist for computing descent directions, each with differing merits; examples include gradient descent and the conjugate gradient method.
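As an illustration (a minimal sketch, not taken from the cited reference; the function names and parameter values are chosen here for exposition), a steepest-descent loop that pairs the negative-gradient direction with a backtracking (Armijo) line search might look like the following:

<syntaxhighlight lang="python">
import numpy as np

def backtracking_line_search(f, grad_f, x, p, alpha=1.0, rho=0.5, c=1e-4):
    """Shrink the step length until the Armijo sufficient-decrease condition holds."""
    fx, gx = f(x), grad_f(x)
    while f(x + alpha * p) > fx + c * alpha * gx.dot(p):
        alpha *= rho
    return alpha

def gradient_descent(f, grad_f, x0, tol=1e-8, max_iter=1000):
    """Minimize f by stepping along the descent direction p_k = -grad f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        p = -g  # <p, grad f(x)> = -||grad f(x)||^2 < 0, so p is a descent direction
        alpha = backtracking_line_search(f, grad_f, x, p)
        x = x + alpha * p
    return x

# Example: minimize the convex quadratic f(x) = x^T A x / 2 - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x.dot(A).dot(x) - b.dot(x)
grad_f = lambda x: A.dot(x) - b
print(gradient_descent(f, grad_f, np.zeros(2)))  # approaches the solution of A x = b
</syntaxhighlight>

The Armijo condition used here accepts a step only when it achieves a decrease proportional to \alpha\langle\mathbf{p}_k,\nabla f(\mathbf{x}_k)\rangle, which is one standard way of turning the descent property into an actual reduction of f.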

More generally, if P is a positive definite matrix, then \mathbf{p}_k = -P \nabla f(\mathbf{x}_k) is a descent direction at \mathbf{x}_k.{{cite book | author = J. M. Ortega and W. C. Rheinboldt | title = Iterative Solution of Nonlinear Equations in Several Variables | pages = 243 | year = 1970 | doi = 10.1137/1.9780898719468 | isbn = 978-0-89871-461-6 }} This generality is used in preconditioned gradient descent methods.
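Indeed, if \nabla f(\mathbf{x}_k) \neq 0, positive definiteness of P gives

\langle -P\nabla f(\mathbf{x}_k), \nabla f(\mathbf{x}_k)\rangle = -\nabla f(\mathbf{x}_k)^{\mathsf T} P\,\nabla f(\mathbf{x}_k) < 0,

so \mathbf{p}_k = -P\nabla f(\mathbf{x}_k) satisfies the definition above; the choice P = I recovers the negative-gradient direction as a special case.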


References

{{Reflist}}

{{DEFAULTSORT:Descent Direction}}

Category:Mathematical optimization