method of steepest descent
{{Short description|Extension of Laplace's method for approximating integrals}}
{{for|the optimization algorithm|Gradient descent}}
In mathematics, the method of steepest descent or saddle-point method is an extension of Laplace's method for approximating an integral, where one deforms a contour integral in the complex plane to pass near a stationary point (saddle point), in roughly the direction of steepest descent or stationary phase. The saddle-point approximation is used with integrals in the complex plane, whereas Laplace’s method is used with real integrals.
The integral to be estimated is often of the form
:<math>\int_C f(z) e^{\lambda g(z)}\, dz,</math>
where C is a contour, and λ is large. One version of the method of steepest descent deforms the contour of integration C into a new path of integration C′ so that the following conditions hold:
- C′ passes through one or more zeros of the derivative g′(z),
- the imaginary part of g(z) is constant on C′.
The method of steepest descent was first published by {{harvtxt|Debye|1909}}, who used it to estimate Bessel functions and pointed out that it occurred in the unpublished note by {{harvtxt|Riemann|1863}} about hypergeometric functions. The contour of steepest descent has a minimax property; see {{harvtxt|Fedoryuk|2001}}. {{harvtxt|Siegel|1932}} described some other unpublished notes of Riemann, where he used this method to derive the Riemann–Siegel formula.
==Basic idea==
The method of steepest descent is a method to approximate a complex integral of the form
:<math>I(\lambda) = \int_C f(z) e^{\lambda S(z)}\, dz</math>
for large <math>\lambda \to \infty</math>, where <math>f(z)</math> and <math>S(z)</math> are analytic functions of <math>z</math>. Because the integrand is analytic, the contour <math>C</math> can be deformed into a new contour <math>C'</math> without changing the integral. In particular, one seeks a new contour on which the imaginary part, denoted <math>\operatorname{Im}[S(z)]</math>, of <math>S(z) = \operatorname{Re}[S(z)] + i\,\operatorname{Im}[S(z)]</math> is constant (<math>\operatorname{Re}</math> denotes the real part). Then
:<math>I(\lambda) = e^{i\lambda \operatorname{Im}[S]} \int_{C'} f(z) e^{\lambda \operatorname{Re}[S(z)]}\, dz,</math>
and the remaining integral can be approximated with other methods like Laplace's method.<ref>{{Cite book|last1=Bender|first1=Carl M.|url=http://link.springer.com/10.1007/978-1-4757-3069-2|title=Advanced Mathematical Methods for Scientists and Engineers I|last2=Orszag|first2=Steven A.|date=1999|publisher=Springer New York|isbn=978-1-4419-3187-0|location=New York, NY|language=en|doi=10.1007/978-1-4757-3069-2}}</ref>
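A minimal numerical sanity check of this idea (a sketch with an illustrative integrand chosen for this example; it is not taken from the cited literature and assumes only NumPy) compares brute-force quadrature along the real axis with the leading-order saddle-point estimate:

<syntaxhighlight lang="python">
# Minimal numerical check of the leading-order saddle-point estimate.
# Illustrative choices: f(z) = cos(z), S(z) = i z - z^2/2, lambda = 40.
# The saddle point, where S'(z0) = 0, lies at z0 = i, off the real axis.
import numpy as np

lam = 40.0
x = np.linspace(-10.0, 10.0, 20001)   # fine grid on the real axis
dx = x[1] - x[0]

f = np.cos(x)
S = 1j * x - x**2 / 2

# Brute-force evaluation of I(lambda) = integral of f(x) exp(lambda S(x)) dx.
I_num = np.sum(f * np.exp(lam * S)) * dx

# Leading saddle-point term: z0 = i, S(z0) = -1/2, S''(z0) = -1, f(z0) = cosh(1).
z0, S0, S2 = 1j, -0.5, -1.0
I_saddle = np.sqrt(2 * np.pi / (lam * (-S2))) * np.exp(lam * S0) * np.cos(z0)

print(I_num)     # ~ 1.245e-09 (essentially real)
print(I_saddle)  # ~ 1.261e-09; the relative error is O(1/lambda)
</syntaxhighlight>

At {{math|λ {{=}} 40}} the two values agree to roughly one per cent, and the agreement improves as {{math|λ}} grows.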
==Etymology==
The method is called the method of steepest descent because, for analytic <math>S(z)</math>, constant-phase contours are equivalent to steepest-descent contours.
If <math>S(z) = X(z) + iY(z)</math> is an analytic function of <math>z = x + iy</math>, it satisfies the Cauchy–Riemann equations
:<math>\frac{\partial X}{\partial x}
=
\frac{\partial Y}{\partial y}
\qquad \text{and} \qquad
\frac{\partial X}{\partial y}
=
- \frac{\partial Y}{\partial x}.</math>
Then
:<math>\nabla X \cdot \nabla Y = \frac{\partial X}{\partial x} \frac{\partial Y}{\partial x} + \frac{\partial X}{\partial y} \frac{\partial Y}{\partial y} = 0,</math>
so contours of constant phase are also contours of steepest descent.
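For instance (an illustrative example, not in the original text), take <math>S(z) = -z^2</math> with <math>z = x + iy</math>, so that
:<math>S(z) = \underbrace{\left( y^2 - x^2 \right)}_{X(x,y)} + i\underbrace{\left( -2xy \right)}_{Y(x,y)}.</math>
Along the real axis {{math|y {{=}} 0}} the phase {{math|Y ≡ 0}} is constant, while <math>X = -x^2</math> falls off from its value at the saddle point {{math|z {{=}} 0}} as rapidly as possible, so the real axis is simultaneously a constant-phase contour and a steepest-descent contour.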
==A simple estimate==
Let {{math| f, S : Cn → C}} and {{math|C ⊂ Cn}}. If
:<math>M = \sup_{x \in C} \operatorname{Re}(S(x)) < \infty,</math>
where <math>\operatorname{Re}(\cdot)</math> denotes the real part, and there exists a positive real number {{math|λ0}} such that
:<math>\int_C \left| f(x) e^{\lambda_0 S(x)} \right| dx < \infty,</math>
then the following estimate holds:<ref>A modified version of Lemma 2.1.1 on page 56 in {{harvtxt|Fedoryuk|1987}}.</ref>
:<math>\left| \int_C f(x) e^{\lambda S(x)} dx \right| \leqslant \text{const} \cdot e^{\lambda M}, \qquad \forall \lambda \in \mathbb{R}, \quad \lambda \geqslant \lambda_0.</math>
Proof of the simple estimate:
:<math>\begin{align}
\left| \int_{C} f(x) e^{\lambda S(x)} dx \right| &\leqslant \int_C |f(x)| \left|e^{\lambda S(x)} \right| dx \\
&\equiv \int_{C} |f(x)| e^{\lambda M} \left | e^{\lambda_0 (S(x)-M)} e^{(\lambda-\lambda_0)(S(x)-M)} \right| dx \\
&\leqslant \int_C |f(x)| e^{\lambda M} \left| e^{\lambda_0 (S(x)-M)} \right| dx && \left| e^{(\lambda-\lambda_0)(S(x) - M)} \right| \leqslant 1 \\
&= \underbrace{e^{-\lambda_0 M} \int_{C} \left| f(x) e^{\lambda_0 S(x)} \right| dx}_{\text{const}} \cdot e^{\lambda M}.
\end{align}</math>
==The case of a single non-degenerate saddle point==
===Basic notions and notation===
Let {{mvar|x}} be a complex {{mvar|n}}-dimensional vector, and
:<math>S''_{xx}(x) = \left( \frac{\partial^2 S(x)}{\partial x_i \, \partial x_j} \right), \qquad 1 \leqslant i,\, j \leqslant n,</math>
denote the Hessian matrix for a function {{math|S(x)}}. If
:<math>\boldsymbol{\varphi}(x) = \left( \varphi_1(x), \varphi_2(x), \ldots, \varphi_k(x) \right)</math>
is a vector function, then its Jacobian matrix is defined as
:<math>\boldsymbol{\varphi}_x'(x) = \left( \frac{\partial \varphi_i(x)}{\partial x_j} \right), \qquad 1 \leqslant i \leqslant k, \quad 1 \leqslant j \leqslant n.</math>
A non-degenerate saddle point, {{math|z0 ∈ Cn}}, of a holomorphic function {{math|S(z)}} is a critical point of the function (i.e., {{math|∇S(z0) {{=}} 0}}) where the function's Hessian matrix has a non-vanishing determinant (i.e., <math>\det S''_{zz}(z^0) \neq 0</math>).
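For example (an illustrative case, not from the original text), for {{math|n {{=}} 2}} the function <math>S(z) = z_1 z_2</math> has a critical point at the origin with Hessian
:<math>S''_{zz}(0) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \det S''_{zz}(0) = -1 \neq 0,</math>
so the origin is a non-degenerate saddle point. By contrast, <math>S(z) = z_1^3 + z_2^2</math> also has a critical point at the origin, but its Hessian there is <math>\operatorname{diag}(0, 2)</math>, whose determinant vanishes, so that saddle point is degenerate.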
The following is the main tool for constructing the asymptotics of integrals in the case of a non-degenerate saddle point:
===Complex Morse lemma===
The Morse lemma for real-valued functions generalizes as follows<ref>Lemma 3.3.2 on page 113 in {{harvtxt|Fedoryuk|1987}}.</ref> for holomorphic functions: near a non-degenerate saddle point {{math|z0}} of a holomorphic function {{math|S(z)}}, there exist coordinates in terms of which {{math|S(z) − S(z0)}} is exactly quadratic. To make this precise, let {{mvar|S}} be a holomorphic function with domain {{math|W ⊂ Cn}}, and let {{math|z0}} in {{mvar|W}} be a non-degenerate saddle point of {{mvar|S}}, that is, {{math|∇S(z0) {{=}} 0}} and <math>\det S''_{zz}(z^0) \neq 0</math>. Then there exist neighborhoods {{math|U ⊂ W}} of {{math|z0}} and {{math|V ⊂ Cn}} of {{math|w {{=}} 0}}, and a bijective holomorphic function {{math|φ : V → U}} with {{math|φ(0) {{=}} z0}} such that
:<math>S(\boldsymbol{\varphi}(w)) = S(z^0) + \frac{1}{2} \sum_{j=1}^n \mu_j w_j^2 \quad \text{for all } w \in V, \qquad \det \boldsymbol{\varphi}_w'(0) = 1.</math>
Here, the {{math|μj}} are the eigenvalues of the matrix <math>S''_{zz}(z^0)</math>.
[[File:Complex Morse Lemma Illustration.pdf|thumb|An illustration of the complex Morse lemma]]
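As a one-dimensional illustration (not part of the original text), take <math>S(z) = \cos z</math> near its saddle point {{math|z0 {{=}} 0}}, where <math>\mu_1 = S''(0) = -1</math>. The substitution <math>w = 2 \sin\tfrac{z}{2}</math>, i.e. <math>z = \varphi(w) = 2 \arcsin\tfrac{w}{2}</math>, is holomorphic near the origin with <math>\varphi'(0) = 1</math> and gives exactly
:<math>S(\varphi(w)) - S(0) = \cos z - 1 = -2 \sin^2\tfrac{z}{2} = \tfrac{1}{2} \mu_1 w^2,</math>
in agreement with the lemma.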
{{math proof|title=Proof of complex Morse lemma|proof=
The following proof is a straightforward generalization of the proof of the real Morse Lemma, which can be found in {{harvtxt|Poston|Stewart|1978}}, page 54; see also the comment on page 479 in {{harvtxt|Wong|1989}}.
We begin by demonstrating
:Auxiliary statement. Let {{math| f : Cn → C}} be holomorphic in a neighborhood of the origin and {{math| f (0) {{=}} 0}}. Then in some neighborhood, there exist functions {{math|gi : Cn → C}} such that <math>f(z) = \sum_{i=1}^n z_i g_i(z)</math>, where each {{math|gi}} is holomorphic and <math>g_i(0) = \tfrac{\partial f(0)}{\partial z_i}</math>.
From the identity
:<math>f(z) = \int_0^1 \frac{d}{dt} f\left( t z_1, \ldots, t z_n \right) dt = \sum_{i=1}^n z_i \int_0^1 \frac{\partial f(tz)}{\partial z_i}\, dt,</math>
we conclude that
:<math>g_i(z) = \int_0^1 \frac{\partial f(tz)}{\partial z_i}\, dt</math>
and
:<math>g_i(0) = \frac{\partial f(0)}{\partial z_i}.</math>
Without loss of generality, we translate the origin to {{math|z0}}, such that {{math|z0 {{=}} 0}} and {{math|S(0) {{=}} 0}}. Using the Auxiliary Statement, we have
:<math>S(z) = \sum_{i=1}^n z_i g_i(z).</math>
Since the origin is a saddle point,
:<math>g_i(0) = \frac{\partial S(0)}{\partial z_i} = 0,</math>
we can also apply the Auxiliary Statement to the functions {{math|gi(z)}} and obtain
{{NumBlk|:|<math>S(z) = \sum_{i,j=1}^n z_i z_j h_{ij}(z),</math>|{{EquationRef|1}}}}
Recall that an arbitrary matrix {{mvar|A}} can be represented as a sum of symmetric {{math|A(s)}} and anti-symmetric {{math|A(a)}} matrices,
:<math>A_{ij} = A_{ij}^{(s)} + A_{ij}^{(a)}, \qquad A_{ij}^{(s)} = \tfrac{1}{2} \left( A_{ij} + A_{ji} \right), \qquad A_{ij}^{(a)} = \tfrac{1}{2} \left( A_{ij} - A_{ji} \right).</math>
The contraction of any symmetric matrix B with an arbitrary matrix {{mvar|A}} is
{{NumBlk|:|<math>\sum_{i,j=1}^n B_{ij} A_{ij} = \sum_{i,j=1}^n B_{ij} A_{ij}^{(s)},</math>|{{EquationRef|2}}}}
i.e., the anti-symmetric component of {{mvar|A}} does not contribute because
:<math>\sum_{i,j=1}^n B_{ij} A_{ij}^{(a)} = \sum_{i,j=1}^n B_{ji} A_{ji}^{(a)} = - \sum_{i,j=1}^n B_{ij} A_{ij}^{(a)} = 0.</math>
Thus, {{math|hij(z)}} in equation (1) can be assumed to be symmetric with respect to the interchange of the indices {{mvar|i}} and {{mvar|j}}. Note that
:<math>h_{ij}(0) = \frac{1}{2} \frac{\partial^2 S(0)}{\partial z_i \, \partial z_j};</math>
hence, {{math|det(hij(0)) ≠ 0}} because the origin is a non-degenerate saddle point.
Let us show by induction that there are local coordinates {{math|u {{=}} (u1, ... un), z {{=}} ψ(u), 0 {{=}} ψ(0)}}, such that
{{NumBlk|:|<math>S(\boldsymbol{\psi}(u)) = \sum_{i=1}^n u_i^2.</math>|{{EquationRef|3}}}}
First, assume that there exist local coordinates {{math|y {{=}} (y1, ... yn), z {{=}} φ(y), 0 {{=}} φ(0)}}, such that
{{NumBlk|:|<math>S(\boldsymbol{\varphi}(y)) = y_1^2 + \cdots + y_{r-1}^2 + \sum_{i,j=r}^n y_i y_j H_{ij}(y),</math>|{{EquationRef|4}}}}
where {{math|Hij}} is symmetric due to equation (2). By a linear change of the variables {{math|(yr, ... yn)}}, we can assure that {{math|Hrr(0) ≠ 0}}. From the chain rule, we have
:<math>\frac{\partial S(\boldsymbol{\varphi}(y))}{\partial y_i} = \sum_{j=1}^n \frac{\partial S(z)}{\partial z_j} \bigg|_{z = \boldsymbol{\varphi}(y)} \frac{\partial \varphi_j(y)}{\partial y_i},</math>
Therefore:
:<math>S''_{\boldsymbol{\varphi}(y) \boldsymbol{\varphi}(y)}(0) = \left( \boldsymbol{\varphi}_y'(0) \right)^T S''_{zz}(0)\, \boldsymbol{\varphi}_y'(0);</math>
whence,
:<math>\det S''_{\boldsymbol{\varphi}(y) \boldsymbol{\varphi}(y)}(0) = \det S''_{zz}(0) \left( \det \boldsymbol{\varphi}_y'(0) \right)^2 \neq 0.</math>
The matrix {{math|(Hij(0))}} can be recast in the Jordan normal form: {{math|(Hij(0)) {{=}} LJL−1}}, where {{mvar|L}} gives the desired non-singular linear transformation and the diagonal of {{mvar|J}} contains non-zero eigenvalues of {{math|(Hij(0))}}. If {{math|Hrr(0) ≠ 0}} then, due to the continuity of {{math|Hrr(y)}}, it must be also non-vanishing in some neighborhood of the origin. Having introduced <math>\tilde{H}_{ij}(y) = H_{ij}(y) / H_{rr}(y)</math>, we write
:<math>\begin{align}
S(\boldsymbol{\varphi}(y)) =& y_1^2 + \cdots + y_{r-1}^2 + H_{rr}(y) \sum_{i,j = r}^n y_i y_j \tilde{H}_{ij} (y) \\
=& y_1^2 + \cdots + y_{r-1}^2 + H_{rr}(y)\left[ y_r^2 + 2y_r \sum_{j=r+1}^n y_j \tilde{H}_{rj} (y) + \sum_{i,j = r+1}^n y_i y_j \tilde{H}_{ij} (y) \right] \\
=& y_1^2 + \cdots + y_{r-1}^2 + H_{rr}(y)\left[ \left( y_r + \sum_{j=r+1}^n y_j \tilde{H}_{rj} (y)\right)^2 - \left( \sum_{j=r+1}^n y_j \tilde{H}_{rj} (y)\right)^2 \right] + H_{rr}(y) \sum_{i,j = r+1}^n y_i y_j \tilde{H}_{ij}(y)
\end{align}</math>
Motivated by the last expression, we introduce new coordinates {{math|z {{=}} η(x), 0 {{=}} η(0),}}
:<math>x_r = \sqrt{H_{rr}(y)} \left( y_r + \sum_{j=r+1}^n y_j \tilde{H}_{rj}(y) \right), \qquad x_j = y_j, \quad \forall j \neq r.</math>
The change of the variables {{math|y ↔ x}} is locally invertible since the corresponding Jacobian is non-zero,
:<math>\left. \det\left( \frac{\partial x_i}{\partial y_j} \right) \right|_{y=0} = \sqrt{H_{rr}(0)} \neq 0.</math>
Therefore,
{{NumBlk|:|<math>S(\boldsymbol{\eta}(x)) = x_1^2 + \cdots + x_{r-1}^2 + x_r^2 + \sum_{i,j=r+1}^n x_i x_j H_{ij}(x),</math>|{{EquationRef|5}}}}
Comparing equations (4) and (5), we conclude that equation (3) is verified. Denoting the eigenvalues of <math>S''_{zz}(0)</math> by {{math|μj}}, equation (3) can be rewritten as
{{NumBlk|:|<math>S(\boldsymbol{\varphi}(w)) = \frac{1}{2} \sum_{j=1}^n \mu_j w_j^2, \qquad w_j = u_j \sqrt{\frac{2}{\mu_j}}, \qquad \boldsymbol{\varphi}(w) \equiv \boldsymbol{\psi}(u(w)).</math>|{{EquationRef|6}}}}
Therefore,
{{NumBlk|:|<math>\det S''_{\boldsymbol{\varphi}(w) \boldsymbol{\varphi}(w)}(0) = \left( \det \boldsymbol{\varphi}_w'(0) \right)^2 \det S''_{zz}(0).</math>|{{EquationRef|7}}}}
From equation (6), it follows that <math>\det S''_{\boldsymbol{\varphi}(w) \boldsymbol{\varphi}(w)}(0) = \mu_1 \cdots \mu_n</math>. The Jordan normal form of <math>S''_{zz}(0)</math> reads <math>S''_{zz}(0) = P J_z P^{-1}</math>, where {{math|Jz}} is an upper triangular matrix containing the eigenvalues and {{math|det P ≠ 0}}; hence, <math>\det S''_{zz}(0) = \mu_1 \cdots \mu_n</math>. We obtain from equation (7)
:<math>\left( \det \boldsymbol{\varphi}_w'(0) \right)^2 = 1, \qquad \text{i.e.,} \quad \det \boldsymbol{\varphi}_w'(0) = \pm 1.</math>
If <math>\det \boldsymbol{\varphi}_w'(0) = -1</math>, then interchanging two variables assures that <math>\det \boldsymbol{\varphi}_w'(0) = +1</math>.
}}
===The asymptotic expansion in the case of a single non-degenerate saddle point===
Assume
- {{math| f (z)}} and {{math|S(z)}} are holomorphic functions in an open, bounded, and simply connected set {{math|Ωx ⊂ Cn}} such that {{math|Ix {{=}} Ωx ∩ Rn}} is connected;
- <math>\operatorname{Re}(S(x))</math> has a single maximum: <math>\max_{x \in I_x} \operatorname{Re}(S(x)) = \operatorname{Re}(S(x^0))</math> for exactly one point {{math|x0 ∈ Ix}};
- {{math|x0}} is a non-degenerate saddle point (i.e., {{math|∇S(x0) {{=}} 0}} and <math>\det S''_{xx}(x^0) \neq 0</math>).
Then, the following asymptotic holds
{{NumBlk|:|<math>I(\lambda) \equiv \int_{I_x} f(x) e^{\lambda S(x)}\, dx = \left( \frac{2\pi}{\lambda} \right)^{\frac{n}{2}} e^{\lambda S(x^0)} \prod_{j=1}^n \left( -\mu_j \right)^{-\frac{1}{2}} \left( f(x^0) + O\left( \lambda^{-1} \right) \right), \qquad \lambda \to \infty,</math>|{{EquationRef|8}}}}
where {{math|μj}} are eigenvalues of the Hessian <math>S''_{xx}(x^0)</math> and <math>\left( -\mu_j \right)^{-\frac{1}{2}}</math> are defined with arguments
{{NumBlk|:|<math>\left| \arg \sqrt{-\mu_j} \right| < \tfrac{\pi}{4}.</math>|{{EquationRef|9}}}}
This statement is a special case of more general results presented in Fedoryuk (1987).<ref>{{harvtxt|Fedoryuk|1987}}, pages 417–420.</ref>
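For {{math|n {{=}} 1}}, equation (8) reduces to the familiar one-dimensional saddle-point formula
:<math>\int_{I_x} f(x) e^{\lambda S(x)}\, dx = \sqrt{\frac{2\pi}{-\lambda S''(x^0)}}\, e^{\lambda S(x^0)} \left( f(x^0) + O\left( \lambda^{-1} \right) \right), \qquad \lambda \to \infty,</math>
with the square root taken on the branch fixed by condition (9).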
{{math proof|title=Derivation of equation (8)|proof=
[[File:Illustration To Derivation Of Asymptotic For Saddle Point Integration.pdf|thumb|An illustration to the derivation of the asymptotic for saddle point integration]]
First, we deform the contour {{math|Ix}} into a new contour <math>I_x' \subset \Omega_x</math> passing through the saddle point {{math|x0}} and sharing the boundary with {{math|Ix}}. This deformation does not change the value of the integral {{math|I(λ)}}. We employ the Complex Morse Lemma to change the variables of integration. According to the lemma, the function {{math|φ(w)}} maps a neighborhood {{math|Ωw}} containing the origin onto a neighborhood {{math|U ⊂ Ωx}} containing the saddle point {{math|x0}}. The integral {{math|I(λ)}} can be split into two: {{math|I(λ) {{=}} I0(λ) + I1(λ)}}, where {{math|I0(λ)}} is the integral over <math>U \cap I_x'</math>, while {{math|I1(λ)}} is over <math>I_x' \setminus \left( U \cap I_x' \right)</math> (i.e., the remaining part of the contour {{math|I′x}}). Since the latter region does not contain the saddle point {{math|x0}}, the value of {{math|I1(λ)}} is exponentially smaller than {{math|I0(λ)}} as {{math|λ → ∞}};<ref>This conclusion follows from a comparison between the final asymptotic for {{math|I0(λ)}}, given by equation (8), and a simple estimate for the discarded integral {{math|I1(λ)}}.</ref> thus, {{math|I1(λ)}} is ignored. Introducing the contour {{math|Iw}} such that <math>U \cap I_x' = \boldsymbol{\varphi}(I_w)</math>, we have
{{NumBlk|:|<math>I_0(\lambda) = \int_{I_w} f(\boldsymbol{\varphi}(w)) e^{\lambda \left[ S(x^0) + \frac{1}{2} \sum_{j=1}^n \mu_j w_j^2 \right]} \det \boldsymbol{\varphi}_w'(w)\, dw.</math>|{{EquationRef|10}}}}
Recalling that {{math|x0 {{=}} φ(0)}} as well as <math>\det \boldsymbol{\varphi}_w'(0) = 1</math>, we expand the pre-exponential function into a Taylor series and keep just the leading zero-order term
{{NumBlk|:|<math>I_0(\lambda) \approx f(x^0) e^{\lambda S(x^0)} \int_{\mathbb{R}^n} e^{\frac{\lambda}{2} \sum_{j=1}^n \mu_j w_j^2}\, dw = f(x^0) e^{\lambda S(x^0)} \prod_{j=1}^n \int_{-\infty}^{\infty} e^{\frac{\lambda}{2} \mu_j y^2}\, dy.</math>|{{EquationRef|11}}}}
Here, we have substituted the integration region {{math|Iw}} by {{math|Rn}} because both contain the origin, which is a saddle point, hence they are equal up to an exponentially small term.<ref>This is justified by comparing the integral asymptotic over {{math|Rn}} [see equation (8)] with a simple estimate for the altered part.</ref> The integrals in the r.h.s. of equation (11) can be expressed as
{{NumBlk|:|<math>\mathcal{I}_j \equiv \int_{-\infty}^{\infty} e^{\frac{\lambda}{2} \mu_j y^2}\, dy = \frac{1}{\sqrt{-\mu_j}} \int_{-\infty}^{\infty} e^{-\frac{\lambda}{2} \xi^2}\, d\xi, \qquad \xi = y \sqrt{-\mu_j}.</math>|{{EquationRef|12}}}}
From this representation, we conclude that condition (9) must be satisfied in order for the r.h.s. and l.h.s. of equation (12) to coincide. According to assumption 2, <math>\operatorname{Re}\left( S''_{xx}(x^0) \right)</math> is a negatively defined quadratic form (viz., <math>\operatorname{Re}(\mu_j) < 0</math>) implying the existence of the integral <math>\mathcal{I}_j</math>, which is readily calculated
:<math>\mathcal{I}_j = \sqrt{\frac{2\pi}{\lambda}} \frac{1}{\sqrt{-\mu_j}}.</math>
}}
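The leading term of equation (8) can also be checked numerically. The sketch below (an illustrative two-dimensional example, with a phase function and pre-exponential factor chosen for this purpose rather than taken from the references, and assuming only NumPy) compares brute-force quadrature over {{math|R2}} with the saddle-point prediction:

<syntaxhighlight lang="python">
# Two-dimensional check of the leading-order saddle-point formula (8).
# Illustrative choices: S(x) = -x1^2 - x2^2 + i x1 x2, f(x) = cos(x1), lambda = 30.
# The saddle point is x0 = 0 (real), the Hessian is S'' = [[-2, i], [i, -2]],
# so det(-S''(x0)) = 5 and the eigenvalues of -S''(x0) are 2 - i and 2 + i.
import numpy as np

lam = 30.0
x = np.linspace(-3.0, 3.0, 601)
dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing="ij")

S = -X1**2 - X2**2 + 1j * X1 * X2
f = np.cos(X1)

# Brute-force quadrature of I(lambda) over the real plane.
I_num = np.sum(f * np.exp(lam * S)) * dx**2

# Leading term of (8): (2*pi/lambda)^(n/2) * exp(lambda*S(x0)) * det(-S''(x0))^(-1/2) * f(x0),
# with n = 2, S(x0) = 0 and f(x0) = 1.
I_saddle = (2 * np.pi / lam) * 5.0**-0.5

print(I_num)     # ~ 0.0930 (essentially real)
print(I_saddle)  # ~ 0.0937; the relative error is O(1/lambda)
</syntaxhighlight>

Both eigenvalues of {{math|−S′′(x0)}} lie well inside the sector required by condition (9), so the principal branch of the square root of the determinant is the correct one here.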
Equation (8) can also be written as
{{NumBlk|:|<math>I(\lambda) \equiv \int_{I_x} f(x) e^{\lambda S(x)}\, dx = \left( \frac{2\pi}{\lambda} \right)^{\frac{n}{2}} e^{\lambda S(x^0)} \left( \det\left( -S''_{xx}(x^0) \right) \right)^{-\frac{1}{2}} \left( f(x^0) + O\left( \lambda^{-1} \right) \right), \qquad \lambda \to \infty,</math>|{{EquationRef|13}}}}
where the branch of
:<math>\sqrt{\det\left( -S''_{xx}(x^0) \right)}</math>
is selected as follows
:<math>\begin{align}
\left (\det \left (-S_{xx}''(x^0) \right ) \right)^{-\frac{1}{2}} &= \exp\left( -i \text{ Ind} \left (- S_{xx}''(x^0) \right ) \right) \prod_{j=1}^n \left| \mu_j \right|^{-\frac{1}{2}}, \\
\text{Ind} \left (-S_{xx}''(x^0) \right) &= \tfrac{1}{2} \sum_{j=1}^n \arg (-\mu_j), && |\arg(-\mu_j)| < \tfrac{\pi}{2}.
\end{align}</math>
Consider important special cases:
- If {{math|S(x)}} is real valued for real {{mvar|x}} and {{math|x0}} in {{math|Rn}} (aka, the multidimensional Laplace method), then<ref>See equation (4.4.9) on page 125 in {{harvtxt|Fedoryuk|1987}}.</ref> <math>\text{Ind}\left( -S''_{xx}(x^0) \right) = 0.</math>
- If {{math|S(x)}} is purely imaginary for real {{mvar|x}} (i.e., <math>\operatorname{Re}(S(x)) = 0</math> for all {{mvar|x}} in {{math|Rn}}) and {{math|x0}} in {{math|Rn}} (aka, the multidimensional stationary phase method),<ref>Rigorously speaking, this case cannot be inferred from equation (8) because the second assumption, utilized in the derivation, is violated. To include the discussed case of a purely imaginary phase function, condition (9) should be replaced by <math>\left| \arg \sqrt{-\mu_j} \right| \leqslant \tfrac{\pi}{4}</math>.</ref> then<ref>See equation (2.2.6') on page 186 in {{harvtxt|Fedoryuk|1987}}.</ref> <math>\text{Ind}\left( -S''_{xx}(x^0) \right) = \tfrac{\pi}{4} \operatorname{sign}\left( \tfrac{1}{i} S''_{xx}(x^0) \right),</math> where <math>\operatorname{sign}</math> denotes the signature of the real symmetric matrix <math>\tfrac{1}{i} S''_{xx}(x^0)</math>, which equals the number of negative eigenvalues minus the number of positive ones. It is noteworthy that in applications of the stationary phase method to the multidimensional WKB approximation in quantum mechanics (as well as in optics), {{math|Ind}} is related to the Maslov index; see, e.g., {{harvtxt|Chaichian|Demichev|2001}} and {{harvtxt|Schulman|2005}}. A one-dimensional example of this case is worked out below.
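As a one-dimensional illustration of the purely imaginary case (an example chosen for illustration, not taken from the original text), let <math>S(x) = i x^2</math>, <math>f \equiv 1</math> and {{math|x0 {{=}} 0}}. Then <math>-S''_{xx}(x^0) = -2i</math>, so <math>\left| \mu_1 \right| = 2</math>, <math>\arg(-\mu_1) = -\tfrac{\pi}{2}</math> and <math>\text{Ind}\left( -S''_{xx}(x^0) \right) = -\tfrac{\pi}{4}</math>, and the leading term of equation (13) reproduces the exact Fresnel integral:
:<math>\int_{-\infty}^{\infty} e^{i \lambda x^2}\, dx = \left( \frac{2\pi}{\lambda} \right)^{\frac{1}{2}} e^{-i\, \text{Ind}\left( -S''_{xx}(x^0) \right)} \left| \mu_1 \right|^{-\frac{1}{2}} = \sqrt{\frac{\pi}{\lambda}}\, e^{\frac{i\pi}{4}}.</math>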
==The case of multiple non-degenerate saddle points==
If the function {{math|S(x)}} has multiple isolated non-degenerate saddle points, i.e.,
:<math>\nabla S\left( x^{(k)} \right) = 0, \qquad \det S''_{xx}\left( x^{(k)} \right) \neq 0, \qquad x^{(k)} \in \Omega_x^{(k)}, \qquad k = 1, \ldots, K,</math>
where
:<math>\left\{ \Omega_x^{(k)} \right\}_{k=1}^{K}</math>
is an open cover of {{math|Ωx}}, then the calculation of the integral asymptotic is reduced to the case of a single saddle point by employing the partition of unity. The partition of unity allows us to construct a set of continuous functions {{math|ρk(x) : Ωx → [0, 1], 1 ≤ k ≤ K,}} such that
:<math>\begin{align}
\sum_{k=1}^K \rho_k(x) &= 1, && \forall x \in \Omega_x, \\
\rho_k(x) &= 0 && \forall x \in \Omega_x\setminus \Omega_x^{(k)}.
\end{align}</math>
Whence,
:<math>\int_{I_x} f(x) e^{\lambda S(x)}\, dx \equiv \sum_{k=1}^K \int_{I_x} \rho_k(x) f(x) e^{\lambda S(x)}\, dx.</math>
Therefore as {{math|λ → ∞}} we have:
:<math>\int_{I_x} f(x) e^{\lambda S(x)}\, dx = \left( \frac{2\pi}{\lambda} \right)^{\frac{n}{2}} \sum_{k=1}^K e^{\lambda S\left( x^{(k)} \right)} \left( \det\left( -S''_{xx}\left( x^{(k)} \right) \right) \right)^{-\frac{1}{2}} \left( f\left( x^{(k)} \right) + O\left( \lambda^{-1} \right) \right),</math>
where equation (13) was utilized at the last stage, and the pre-exponential function {{math| f (x)}} must at least be continuous.
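As a simple illustration with two saddle points (an example chosen for illustration, not taken from the original text), take {{math|n {{=}} 1}}, {{math| f ≡ 1}} and the real-valued phase <math>S(x) = \tfrac{x^2}{2} - \tfrac{x^4}{4}</math>. On the real line, {{math|S}} attains its maximum at the two non-degenerate saddle points <math>x^{(1,2)} = \pm 1</math>, where <math>S(\pm 1) = \tfrac{1}{4}</math> and <math>S''(\pm 1) = -2</math>, so the two contributions of the form (13) add up to
:<math>\int_{-\infty}^{\infty} e^{\lambda \left( \frac{x^2}{2} - \frac{x^4}{4} \right)}\, dx = 2 \sqrt{\frac{\pi}{\lambda}}\, e^{\frac{\lambda}{4}} \left( 1 + O\left( \lambda^{-1} \right) \right), \qquad \lambda \to \infty.</math>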
==The other cases==
When {{math|∇S(z0) {{=}} 0}} and <math>\det S''_{zz}(z^0) = 0</math>, the point {{math|z0 ∈ Cn}} is called a degenerate saddle point of a function {{math|S(z)}}.
Calculating the asymptotic of
:<math>\int_{I_x} f(x) e^{\lambda S(x)}\, dx</math>
when {{math|λ → ∞}}, {{math| f (x)}} is continuous, and {{math|S(z)}} has a degenerate saddle point, is a very rich problem, whose solution heavily relies on catastrophe theory. Here, catastrophe theory replaces the Morse lemma, valid only in the non-degenerate case, to transform the function {{math|S(z)}} into one of a multitude of canonical representations. For further details see, e.g., {{harvtxt|Poston|Stewart|1978}} and {{harvtxt|Fedoryuk|1987}}.
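The simplest canonical case is a cubic (fold) saddle, for which the Airy function is the prototype. For instance (an illustrative example, not in the original text), <math>S(t) = \tfrac{i t^3}{3}</math> has a degenerate saddle point at {{math|t {{=}} 0}}, since both {{math|S′(0)}} and {{math|S′′(0)}} vanish, and the contribution of such a point scales like <math>\lambda^{-1/3}</math> rather than <math>\lambda^{-1/2}</math>:
:<math>\int_{-\infty}^{\infty} e^{\frac{i \lambda t^3}{3}}\, dt = \frac{1}{\lambda^{1/3}} \int_{-\infty}^{\infty} e^{\frac{i u^3}{3}}\, du = \frac{2\pi \operatorname{Ai}(0)}{\lambda^{1/3}} = \frac{2\pi}{3^{2/3}\, \Gamma\left( \tfrac{2}{3} \right) \lambda^{1/3}}.</math>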
Integrals with degenerate saddle points naturally appear in many applications including optical caustics and the multidimensional WKB approximation in quantum mechanics.
Other cases, such as when {{math| f (x)}} and/or {{math|S(x)}} are discontinuous or when an extremum of {{math|S(x)}} lies at the integration region's boundary, require special care (see, e.g., {{harvtxt|Fedoryuk|1987}} and {{harvtxt|Wong|1989}}).
==Extensions and generalizations==
An extension of the steepest descent method is the so-called nonlinear stationary phase/steepest descent method. Here, instead of integrals, one needs to evaluate asymptotically solutions of Riemann–Hilbert factorization problems.
Given a contour C in the complex sphere, a function f defined on that contour and a special point, say infinity, one seeks a function M holomorphic away from the contour C, with prescribed jump across C, and with a given normalization at infinity. If f and hence M are matrices rather than scalars, this is a problem that in general does not admit an explicit solution.
An asymptotic evaluation is then possible along the lines of the linear stationary phase/steepest descent method. The idea is to reduce asymptotically the solution of the given Riemann–Hilbert problem to that of a simpler, explicitly solvable, Riemann–Hilbert problem. Cauchy's theorem is used to justify deformations of the jump contour.
The nonlinear stationary phase was introduced by Deift and Zhou in 1993, based on earlier work of the Russian mathematician Alexander Its. A (properly speaking) nonlinear steepest descent method was introduced by Kamvissis, K. McLaughlin and P. Miller in 2003, based on previous work of Lax, Levermore, Deift, Venakides and Zhou. As in the linear case, steepest descent contours solve a min-max problem. In the nonlinear case they turn out to be "S-curves" (defined in a different context in the 1980s by Stahl, Gonchar and Rakhmanov).
The nonlinear stationary phase/steepest descent method has applications to the theory of soliton equations and integrable models, random matrices and combinatorics.
Another extension is the Chester–Friedman–Ursell method for coalescing saddle points, which yields uniform asymptotic expansions.
==See also==
- Laplace's method
- Stationary phase approximation
==Notes==
{{reflist}}
==References==
{{sfn whitelist|CITEREFFedoryuk2001}}
- {{Citation
| last1=Chaichian
| first1=M.
| last2=Demichev
| first2=A.
| title=Path Integrals in Physics Volume 1: Stochastic Process and Quantum Mechanics
| publisher=Taylor & Francis
| year=2001
| page=174
| isbn=075030801X
}}
- {{Citation
| last1=Debye
| first1=P.
| author1-link=Peter Debye
| title=Näherungsformeln für die Zylinderfunktionen für große Werte des Arguments und unbeschränkt veränderliche Werte des Index
| doi=10.1007/BF01450097
| year=1909
| journal=Mathematische Annalen
| volume=67
| issue=4
| pages=535–558| s2cid=122219667
| url=https://zenodo.org/record/2397260
}} English translation in {{Citation | last1=Debye | first1=Peter J. W. | title=The collected papers of Peter J. W. Debye | publisher=Interscience Publishers, Inc., New York | isbn=978-0-918024-58-9 |mr=0063975 | year=1954}}
- {{Citation
| last1=Deift
| first1=P.
| last2=Zhou
| first2=X.
| year=1993
| title=A steepest descent method for oscillatory Riemann-Hilbert problems. Asymptotics for the MKdV equation
| periodical=Ann. of Math.
| volume=137
| issue=2
| pages=295–368
| doi=10.2307/2946540
| publisher=The Annals of Mathematics, Vol. 137, No. 2
| jstor=2946540
| arxiv=math/9201261
| s2cid=12699956
}}.
- {{Citation
| last=Erdelyi
| first=A.
| year=1956
| title=Asymptotic Expansions
| publisher=Dover
}}.
- {{eom|title=Saddle point method|first=M. V.|last= Fedoryuk}}.
- {{Citation
| last1=Fedoryuk
| first1=M. V.
| title=Asymptotics: Integrals and Series
| publisher =Nauka, Moscow
| year=1987}} [in Russian].
- {{Citation
| last1=Kamvissis
| first1=S.
| last2=McLaughlin
| first2=K. T.-R.
| last3=Miller
| first3=P.
| year=2003
| title=Semiclassical Soliton Ensembles for the Focusing Nonlinear Schrödinger Equation
| periodical=Annals of Mathematics Studies
| volume=154
| publisher=Princeton University Press
}}.
- {{Citation
|title=Sullo svolgimento del quoziente di due serie ipergeometriche in frazione continua infinita
|first=B.
|last=Riemann
|year=1863}} (Unpublished note, reproduced in Riemann's collected papers.)
- {{Citation
|authorlink=Carl Ludwig Siegel
|last=Siegel
|first= C. L.
|title=Über Riemanns Nachlaß zur analytischen Zahlentheorie
|journal=Quellen und Studien zur Geschichte der Mathematik, Astronomie und Physik
|volume=2 |pages= 45–80
|year= 1932}} Reprinted in Gesammelte Abhandlungen, Vol. 1. Berlin: Springer-Verlag, 1966.
- Translated in {{cite arXiv |eprint=1810.05198 |mode=cs2|last1=Barkan|first1=Eric|title=On Riemanns Nachlass for Analytic Number Theory: A translation of Siegel's Uber|last2=Sklar|first2=David|class=math.HO|year=2018}}.
- {{Citation
|last1=Poston
|first1=T.
|last2=Stewart
|first2=I.
|title=Catastrophe Theory and Its Applications
|publisher=Pitman|year=1978
}}.
- {{Citation
|last1=Schulman
|first1=L. S.
|title=Techniques and Applications of Path Integration
|publisher=Dover
|year=2005
|isbn=0486445283
|chapter=Ch. 17: The Phase of the Semiclassical Amplitude
}}
- {{Citation
|last1=Wong
|first1=R.
|title=Asymptotic approximations of integrals
|publisher=Academic Press
|year=1989
}}.