Method of steepest descent
In mathematics, the method of steepest descent or stationary-phase method or saddle-point method is an extension of Laplace's method for approximating an integral, where one deforms a contour integral in the complex plane to pass near a stationary point, in roughly the direction of steepest descent or stationary phase. The saddle-point approximation is used with integrals in the complex plane, whereas Laplace’s method is used with real integrals.
The integral to be estimated is often of the form
$$\int_C f(z)\, e^{\lambda g(z)}\, dz,$$
where C is a contour in the complex plane, and λ is large. One version of the method of steepest descent deforms the contour of integration C into a new path of integration C′ so that the following conditions hold:
- C′ passes through one or more zeros of the derivative g′,
- the imaginary part of g is constant on C′.
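As a minimal numerical sketch of the leading-order approximation this procedure produces (here in the purely real setting, i.e., Laplace's method; NumPy and SciPy are assumed, and the Gamma-function integral is an illustrative choice), consider $\Gamma(\lambda+1)=\int_0^\infty e^{\lambda\ln t - t}\,dt$: after the rescaling $t=\lambda s$ the exponent $\lambda(\ln s - s)$ has a single saddle at $s=1$, and the leading term reproduces Stirling's formula.

```python
# Leading-order saddle-point (Laplace) estimate of Gamma(lam + 1) versus the
# exact value.  After the rescaling t = lam*s the integral becomes
#   Gamma(lam + 1) = lam**(lam + 1) * Integral_0^inf exp(lam*(log(s) - s)) ds,
# with a single saddle at s = 1 where g(s) = log(s) - s has g''(1) = -1,
# giving Stirling's formula sqrt(2*pi*lam) * lam**lam * exp(-lam).
import numpy as np
from scipy.special import gamma

for lam in (5.0, 10.0, 50.0):
    exact = gamma(lam + 1.0)
    approx = np.sqrt(2.0 * np.pi * lam) * lam**lam * np.exp(-lam)
    print(f"lam={lam:5.1f}  exact={exact:.6e}  saddle point={approx:.6e}  "
          f"rel. err={abs(approx - exact) / exact:.2e}")
```

The relative error decreases like $1/(12\lambda)$, consistent with the $O(\lambda^{-1})$ correction of the expansion.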
A simple estimate
(A modified version of Lemma 2.1.1 on page 56 in .)
Let $f, S : \mathbb{C}^n \to \mathbb{C}$ and $C \subset \mathbb{C}^n$. If
$$M = \sup_{x \in C} \Re\big(S(x)\big) < \infty,$$
where $\Re(\cdot)$ denotes the real part, and there exists a positive real number $\lambda_0$ such that
$$\int_C \left| f(x)\, e^{\lambda_0 S(x)} \right| dx < \infty,$$
then the following estimate holds:
$$\left| \int_C f(x)\, e^{\lambda S(x)}\, dx \right| \le \mathrm{const} \cdot e^{\lambda M}, \qquad \forall \lambda \in \mathbb{R}, \quad \lambda \ge \lambda_0.$$
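A minimal numerical sanity check of this bound (NumPy and SciPy assumed; the choice $C = [0, 1]$, $f \equiv 1$, $S(x) = x(1-x)$, for which $M = 1/4$, is purely illustrative): the ratio $|I(\lambda)|\, e^{-\lambda M}$ should stay bounded as $\lambda$ grows.

```python
# Numerical check of the simple estimate |I(lam)| <= const * exp(lam*M)
# for the illustrative choice C = [0, 1], f(x) = 1, S(x) = x*(1 - x),
# where M = sup_{x in C} Re S(x) = 1/4 (attained at x = 1/2).
import numpy as np
from scipy.integrate import quad

M = 0.25
for lam in (1.0, 10.0, 100.0, 1000.0):
    I, _ = quad(lambda x: np.exp(lam * x * (1.0 - x)), 0.0, 1.0)
    print(f"lam={lam:7.1f}  |I(lam)|*exp(-lam*M) = {I * np.exp(-lam * M):.6f}")
```

Here the ratio in fact decays like $\sqrt{\pi/\lambda}$, consistent with Laplace's method applied at the interior maximum $x = 1/2$.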
Proof of the simple estimate
The case of a single non-degenerate saddle point
Basic notions and notation
Let $z$ be a complex $n$-dimensional vector, and
$$S''_{zz}(z) \equiv \left( \frac{\partial^2 S}{\partial z_i \, \partial z_j} \right), \qquad 1 \le i, j \le n,$$
denote the Hessian matrix for a function $S(z)$. If
$$\boldsymbol{\varphi}(z) = \big(\varphi_1(z), \varphi_2(z), \ldots, \varphi_k(z)\big)$$
is a vector function, then its Jacobian matrix is defined as
$$\boldsymbol{\varphi}_z'(z) \equiv \left( \frac{\partial \varphi_i}{\partial z_j} \right), \qquad 1 \le i \le k, \quad 1 \le j \le n.$$
A non-degenerate saddle point, $z^0 \in \mathbb{C}^n$, of a holomorphic function $S(z)$ is a critical point of the function (i.e., $\nabla S(z^0) = 0$) where the function's Hessian matrix has a non-vanishing determinant (i.e., $\det S''_{zz}(z^0) \neq 0$).
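The non-degeneracy condition is easy to test symbolically. The following sketch (SymPy assumed; the two sample polynomials are arbitrary illustrative choices) locates the critical points of $S(z_1, z_2)$ and checks whether the Hessian determinant vanishes there.

```python
# Classifying critical points of a holomorphic function as non-degenerate
# or degenerate saddle points via the determinant of the Hessian matrix.
import sympy as sp

z1, z2 = sp.symbols("z1 z2")

for S in (z1**3 - 3*z1 + z2**2,   # critical points at (+1, 0) and (-1, 0)
          z1**3 + z2**2):         # critical point at (0, 0)
    grad = [sp.diff(S, v) for v in (z1, z2)]
    hess = sp.hessian(S, (z1, z2))
    for point in sp.solve(grad, (z1, z2), dict=True):
        det = hess.det().subs(point)
        kind = "non-degenerate" if det != 0 else "degenerate"
        print(f"S = {S},  critical point {point}:  det H = {det}  ({kind})")
```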
The following is the main tool for constructing the asymptotics of integrals in the case of a non-degenerate saddle point:
Complex Morse lemma
The Morse lemma for real-valued functions generalizes as follows for holomorphic functions: near a non-degenerate saddle point $z^0$ of a holomorphic function $S(z)$, there exist coordinates in terms of which $S(z) - S(z^0)$ is exactly quadratic. To make this precise, let $S$ be a holomorphic function with domain $W \subset \mathbb{C}^n$, and let $z^0$ in $W$ be a non-degenerate saddle point of $S$, that is, $\nabla S(z^0) = 0$ and $\det S''_{zz}(z^0) \neq 0$. Then there exist neighborhoods $U \subset W$ of $z^0$ and $V \subset \mathbb{C}^n$ of $w = 0$, and a bijective holomorphic function $\boldsymbol{\varphi} : V \to U$ with $\boldsymbol{\varphi}(0) = z^0$ such that
$$\forall w \in V: \quad S(\boldsymbol{\varphi}(w)) = S(z^0) + \frac{1}{2} \sum_{j=1}^n \mu_j w_j^2.$$
Here, the $\mu_j$ are the eigenvalues of the matrix $S''_{zz}(z^0)$.
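A one-dimensional illustration of the lemma (an illustrative aside, not part of the proof below; SymPy assumed): for $S(z) = \cos z$, the origin is a non-degenerate saddle point with $\mu = S''(0) = -1$, and the explicit holomorphic coordinate $w = 2\sin(z/2)$ brings $S$ exactly into the quadratic normal form.

```python
# One-dimensional check of the complex Morse lemma for S(z) = cos(z):
# z0 = 0 is a non-degenerate saddle (S'(0) = 0, mu = S''(0) = -1), and the
# holomorphic coordinate w(z) = 2*sin(z/2) satisfies w(0) = 0, w'(0) = 1 != 0
# and S(z) = S(z0) + mu*w**2/2 exactly.
import sympy as sp

z = sp.symbols("z")
S = sp.cos(z)
z0 = 0
mu = sp.diff(S, z, 2).subs(z, z0)           # mu = -1

w = 2 * sp.sin(z / 2)                       # candidate Morse coordinate
normal_form = S.subs(z, z0) + mu * w**2 / 2

print(sp.simplify(S - normal_form))         # 0 -> exact quadratic normal form
print(sp.diff(w, z).subs(z, z0))            # 1 -> the map z -> w is locally invertible
```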
Proof of complex Morse lemma
The following proof is a straightforward generalization of the proof of the real Morse lemma, which can be found in. We begin by demonstrating the following auxiliary statement: if a function $f : \mathbb{C}^n \to \mathbb{C}$ is holomorphic in a neighborhood of the origin and $f(0) = 0$, then in some (possibly smaller) neighborhood of the origin there exist holomorphic functions $g_i(z)$ such that
$$f(z) = \sum_{i=1}^n z_i\, g_i(z), \qquad g_i(0) = \frac{\partial f}{\partial z_i}(0).$$
Proof of auxiliary statement
From the identity
$$f(z) = \int_0^1 \frac{d}{dt} f(t z_1, \ldots, t z_n)\, dt = \sum_{i=1}^n z_i \int_0^1 \frac{\partial f}{\partial z_i}(t z_1, \ldots, t z_n)\, dt,$$
we conclude that
$$g_i(z) = \int_0^1 \frac{\partial f}{\partial z_i}(t z_1, \ldots, t z_n)\, dt$$
and
$$g_i(0) = \frac{\partial f}{\partial z_i}(0).$$
Without loss of generality, we translate the origin to $z^0$, such that $z^0 = 0$ and $S(z^0) = 0$. Using the Auxiliary Statement, we have
Since the origin is a saddle point,
we can also apply the Auxiliary Statement to the functions and obtain
Recall that an arbitrary matrix can be represented as a sum of symmetric and anti-symmetric matrices,
The contraction of any symmetric matrix B with an arbitrary matrix is
i.e., the anti-symmetric component of does not contribute because
Thus, in equation can be assumed to be symmetric with respect to the interchange of the indices and. Note that
hence, because the origin is a non-degenerate saddle point.
Let us show by induction that there are local coordinates, such that
First, assume that there exist local coordinates, such that
where is symmetric due to equation. By a linear change of the variables, we can ensure that. From the chain rule, we have
Therefore:
whence,
The matrix can be recast in the Jordan normal form:, where gives the desired non-singular linear transformation and the diagonal of contains non-zero eigenvalues of. If then, due to continuity of, it must also be non-vanishing in some neighborhood of the origin. Having introduced, we write
Motivated by the last expression, we introduce new coordinates
The change of the variables is locally invertible since the corresponding Jacobian is non-zero,
Therefore,
Comparing equations and, we conclude that equation is verified. Denoting the eigenvalues of by, equation can be rewritten as
Therefore,
From equation, it follows that. The Jordan normal form of reads, where is an upper triangular matrix containing the eigenvalues and ; hence,. We obtain from equation
If, then interchanging two variables assures that.
The asymptotic expansion in the case of a single non-degenerate saddle point
Assume:
- $f(x)$ and $S(x)$ are holomorphic functions in an open, bounded, and simply connected set $\Omega_x \subset \mathbb{C}^n$ such that the set $I_x = \Omega_x \cap \mathbb{R}^n$ is connected;
- $\Re(S(x))$ has a single maximum: $\max_{x \in I_x} \Re(S(x)) = \Re(S(x^0))$ for exactly one point $x^0 \in I_x$;
- $x^0$ is a non-degenerate saddle point (i.e., $\nabla S(x^0) = 0$ and $\det S''_{xx}(x^0) \neq 0$).
Then, the following asymptotic holds:
$$I(\lambda) \equiv \int_{I_x} f(x)\, e^{\lambda S(x)}\, dx = \left(\frac{2\pi}{\lambda}\right)^{n/2} e^{\lambda S(x^0)} \left( f(x^0) + O(\lambda^{-1}) \right) \prod_{j=1}^n (-\mu_j)^{-1/2}, \qquad \lambda \to \infty,$$
where $\mu_j$ are eigenvalues of the Hessian $S''_{xx}(x^0)$ and $(-\mu_j)^{-1/2}$ are defined with arguments
$$\left| \arg \sqrt{-\mu_j} \right| < \frac{\pi}{4}.$$
This statement is a special case of more general results presented in Fedoryuk.
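A minimal one-dimensional check of this formula (NumPy and SciPy assumed; the integrand is an illustrative choice): take $f(x) = 1/(1+x^2)$ and $S(x) = (i-1)x^2/2$, so that $\Re S$ has its single maximum on the real axis at the non-degenerate saddle point $x^0 = 0$, with $\mu = S''(0) = i - 1$ and $f(x^0) = 1$; the predicted leading term is $\sqrt{2\pi/\lambda}\,(1-i)^{-1/2}$.

```python
# One-dimensional check of the leading-order formula:
#   I(lam) = Integral_R exp(lam*S(x))/(1 + x**2) dx,   S(x) = (1j - 1)*x**2/2.
# Re S attains its single maximum on the real axis at the non-degenerate
# saddle point x0 = 0, where mu = S''(0) = 1j - 1 and f(x0) = 1, so
#   I(lam) ~ sqrt(2*pi/lam) * (1 - 1j)**(-1/2),
# with the branch fixed by |arg(-mu)| < pi/2.
import numpy as np
from scipy.integrate import quad

def integrand(x, lam, part):
    value = np.exp(lam * (1j - 1.0) * x**2 / 2.0) / (1.0 + x**2)
    return value.real if part == "re" else value.imag

for lam in (5.0, 20.0, 80.0):
    re, _ = quad(integrand, -np.inf, np.inf, args=(lam, "re"))
    im, _ = quad(integrand, -np.inf, np.inf, args=(lam, "im"))
    numeric = complex(re, im)
    prediction = np.sqrt(2.0 * np.pi / lam) * (1.0 - 1.0j)**(-0.5)
    print(f"lam={lam:5.1f}  numeric={numeric:.6f}  prediction={prediction:.6f}")
```

The discrepancy shrinks like $O(\lambda^{-1})$, as the error term in the formula suggests.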
Derivation of the asymptotic expansion
First, we deform the contour into a new contour passing through the saddle point and sharing the boundary with. This deformation does not change the value of the integral. We employ the Complex Morse Lemma to change the variables of integration. According to the lemma, the function maps a neighborhood onto a neighborhood containing the origin. The integral can be split into two:, where is the integral over, while is over . Since the latter region does not contain the saddle point, the value of is exponentially smaller than as ; thus, is ignored. Introducing the contour such that, we have
Recalling that as well as, we expand the pre-exponential function into a Taylor series and keep just the leading zero-order term
Here, we have substituted the integration region by because both contain the origin, which is a saddle point, hence they are equal up to an exponentially small term. The integrals in the r.h.s. of equation can be expressed as
From this representation, we conclude that condition must be satisfied in order for the r.h.s. and l.h.s. of equation to coincide. According to assumption 2, is a negative-definite quadratic form, implying the existence of the integral, which is readily calculated
The asymptotic formula can also be written as
$$I(\lambda) = \left(\frac{2\pi}{\lambda}\right)^{n/2} e^{\lambda S(x^0)} \left(\det\left(-S''_{xx}(x^0)\right)\right)^{-1/2} \left( f(x^0) + O(\lambda^{-1}) \right),$$
where the branch of
$$\sqrt{\det\left(-S''_{xx}(x^0)\right)}$$
is selected as follows
$$\left(\det\left(-S''_{xx}(x^0)\right)\right)^{-1/2} = \exp\!\left(-i\,\mathrm{Ind}\left(-S''_{xx}(x^0)\right)\right) \prod_{j=1}^n \left| \mu_j \right|^{-1/2}, \qquad \mathrm{Ind}\left(-S''_{xx}(x^0)\right) = \frac{1}{2} \sum_{j=1}^n \arg(-\mu_j), \qquad \left| \arg(-\mu_j) \right| < \frac{\pi}{2}.$$
Consider important special cases:
- If $S(x)$ is real valued for real $x$ and $\lambda$ in $\mathbb{R}_+$ (the multidimensional Laplace method), then $\mathrm{Ind}\left(-S''_{xx}(x^0)\right) = 0$, and the prefactor is real and positive.
- If $S(x)$ is purely imaginary for real $x$, say $S(x) = i\psi(x)$ with $\psi(x)$ real, and $\lambda$ in $\mathbb{R}_+$ (the multidimensional stationary phase method), then $\mathrm{Ind}\left(-S''_{xx}(x^0)\right) = -\frac{\pi}{4} \operatorname{sign} \psi''_{xx}(x^0)$, where $\operatorname{sign} \psi''_{xx}(x^0)$ denotes the signature of the matrix $\psi''_{xx}(x^0)$ (the number of its positive eigenvalues minus the number of its negative ones), so that the prefactor carries the familiar stationary-phase factor $e^{\frac{i\pi}{4} \operatorname{sign} \psi''_{xx}(x^0)}$; a one-dimensional illustration is given below.
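A minimal sketch of the stationary-phase case (NumPy assumed; the Gaussian pre-exponential is an illustrative choice that also makes the integral available in closed form): for $I(\lambda) = \int_{\mathbb{R}} e^{-x^2} e^{i\lambda x^2/2}\, dx$ the phase $\psi(x) = x^2/2$ has a single stationary point at $x^0 = 0$ with $\psi''(0) = 1$ and $f(0) = 1$, so the leading term is $\sqrt{2\pi/\lambda}\, e^{i\pi/4}$.

```python
# Stationary-phase illustration in one dimension:
#   I(lam) = Integral_R exp(-x**2) * exp(1j*lam*x**2/2) dx
#          = sqrt(pi/(1 - 1j*lam/2))           (Gaussian integral, Re a > 0).
# The phase psi(x) = x**2/2 has a single stationary point x0 = 0 with
# psi''(0) = 1 and f(0) = 1, so the leading term is sqrt(2*pi/lam)*exp(1j*pi/4).
import numpy as np

for lam in (10.0, 100.0, 1000.0):
    exact = np.sqrt(np.pi / (1.0 - 0.5j * lam))
    prediction = np.sqrt(2.0 * np.pi / lam) * np.exp(0.25j * np.pi)
    rel_err = abs(prediction - exact) / abs(exact)
    print(f"lam={lam:7.1f}  exact={exact:.6f}  stationary phase={prediction:.6f}  "
          f"rel. err={rel_err:.2e}")
```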
The case of multiple non-degenerate saddle points
where
is an open cover of, then the calculation of the integral asymptotic is reduced to the case of a single saddle point by employing the partition of unity. The partition of unity allows us to construct a set of continuous functions such that
Whence,
Therefore, as $\lambda \to \infty$, we have:
where equation was utilized at the last stage, and the pre-exponential function $f(x)$ must be at least continuous.
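A concrete one-dimensional example of summing the contributions of several non-degenerate stationary points (an illustrative case; NumPy and SciPy assumed, and the exact value is supplied by the known identity $\int_{\mathbb{R}} e^{i(t^3/3 + xt)}\,dt = 2\pi \operatorname{Ai}(x)$): for $I(\lambda) = \int_{\mathbb{R}} e^{i\lambda(x^3/3 - x)}\, dx$ the phase has stationary points at $x = \pm 1$, and adding the two leading-order contributions gives $2\sqrt{\pi/\lambda}\, \cos(2\lambda/3 - \pi/4)$.

```python
# Two-saddle example: I(lam) = Integral_R exp(1j*lam*(x**3/3 - x)) dx.
# The phase x**3/3 - x has non-degenerate stationary points at x = +1 and
# x = -1; summing the two leading-order stationary-phase contributions gives
#   I(lam) ~ 2*sqrt(pi/lam)*cos(2*lam/3 - pi/4).
# The exact value follows from the Airy-function identity
#   Integral_R exp(1j*(t**3/3 + x*t)) dt = 2*pi*Ai(x),
# namely I(lam) = 2*pi*lam**(-1/3)*Ai(-lam**(2/3)).
import numpy as np
from scipy.special import airy

for lam in (10.0, 50.0, 200.0):
    exact = 2.0 * np.pi * lam**(-1.0 / 3.0) * airy(-lam**(2.0 / 3.0))[0]
    two_saddles = 2.0 * np.sqrt(np.pi / lam) * np.cos(2.0 * lam / 3.0 - np.pi / 4.0)
    print(f"lam={lam:6.1f}  exact={exact:+.6f}  two-saddle sum={two_saddles:+.6f}")
```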
The other cases
When $\nabla S(z^0) = 0$ and $\det S''_{zz}(z^0) = 0$, the point $z^0$ is called a degenerate saddle point of a function $S(z)$. Calculating the asymptotic of
$$\int f(x)\, e^{\lambda S(x)}\, dx,$$
when $\lambda \to \infty$, $f(x)$ is continuous, and $S(z)$ has a degenerate saddle point, is a very rich problem, whose solution heavily relies on catastrophe theory. Here, catastrophe theory replaces the Morse lemma, valid only in the non-degenerate case, to transform the function $S(z)$ into one of a multitude of canonical representations. For further details see, e.g., and.
Integrals with degenerate saddle points naturally appear in many applications including optical caustics and the multidimensional WKB approximation in quantum mechanics.
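The simplest degenerate example is already instructive (again using the Airy identity above for the exact value, as an illustration rather than a general method; NumPy and SciPy assumed): in $I(\lambda) = \int_{\mathbb{R}} e^{i\lambda x^3/3}\, dx$ the only stationary point $x = 0$ is degenerate (the second derivative of the phase vanishes, the third does not), and the integral decays like $\lambda^{-1/3}$ instead of the $\lambda^{-1/2}$ typical of a non-degenerate saddle.

```python
# Degenerate saddle: I(lam) = Integral_R exp(1j*lam*x**3/3) dx has a single,
# degenerate stationary point at x = 0.  Using the Airy identity,
# I(lam) = 2*pi*lam**(-1/3)*Ai(0), which decays like lam**(-1/3) rather than
# the lam**(-1/2) of a non-degenerate saddle point.
import numpy as np
from scipy.special import airy

Ai0 = airy(0.0)[0]                 # Ai(0) = 3**(-2/3)/Gamma(2/3) ~ 0.3550
for lam in (10.0, 100.0, 1000.0):
    I = 2.0 * np.pi * lam**(-1.0 / 3.0) * Ai0
    print(f"lam={lam:7.1f}  I(lam)={I:.6f}  lam**(1/3)*I(lam)={lam**(1.0/3.0) * I:.6f}")
```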
The other cases, such as when $f(x)$ and/or $S(x)$ are discontinuous or when an extremum of $\Re S(x)$ lies at the integration region's boundary, require special care.
Extensions and generalizations
An extension of the steepest descent method is the so-called nonlinear stationary phase/steepest descent method. Here, instead of integrals, one needs to evaluate asymptotically solutions of Riemann-Hilbert factorization problems. Given a contour C in the complex sphere, a function f defined on that contour, and a special point, say infinity, one seeks a function M holomorphic away from the contour C, with prescribed jump across C, and with a given normalization at infinity. If f, and hence M, are matrices rather than scalars, this is a problem that in general does not admit an explicit solution.
An asymptotic evaluation is then possible along the lines of the linear stationary phase/steepest descent method. The idea is to reduce asymptotically the solution of the given Riemann-Hilbert problem to that of a simpler, explicitly solvable, Riemann-Hilbert problem. Cauchy's theorem is used to justify deformations of the jump contour.
The nonlinear stationary phase was introduced by Deift and Zhou in 1993, based on earlier work of the Russian mathematician Alexander Its. A nonlinear steepest descent method was introduced by Kamvissis, K. McLaughlin and P. Miller in 2003, based on previous work of Lax, Levermore, Deift, Venakides and Zhou. As in the linear case, steepest descent contours solve a min-max problem. In the nonlinear case they turn out to be "S-curves".
The nonlinear stationary phase/steepest descent method has applications to the theory of soliton equations and integrable models, random matrices and combinatorics.