Gaussian quadrature
In numerical analysis, a quadrature rule is an approximation of the definite integral of a function, usually stated as a weighted sum of function values at specified points within the domain of integration. An $n$-point Gaussian quadrature rule, named after Carl Friedrich Gauss, is a quadrature rule constructed to yield an exact result for polynomials of degree $2n-1$ or less by a suitable choice of the nodes $x_i$ and weights $w_i$ for $i = 1, \dots, n$. The modern formulation using orthogonal polynomials was developed by Carl Gustav Jacobi in 1826. The most common domain of integration for such a rule is taken as $[-1,1]$, so the rule is stated as
$$\int_{-1}^{1} f(x)\,dx \approx \sum_{i=1}^{n} w_i f(x_i),$$
which is exact for polynomials of degree $2n-1$ or less. This exact rule is known as the Gauss–Legendre quadrature rule. The quadrature rule will only be an accurate approximation to the integral above if $f(x)$ is well approximated by a polynomial of degree $2n-1$ or less on $[-1,1]$.
The Gauss–Legendre quadrature rule is not typically used for integrable functions with endpoint singularities. Instead, if the integrand can be written as
$$f(x) = (1-x)^\alpha (1+x)^\beta g(x), \qquad \alpha, \beta > -1,$$
where $g(x)$ is well approximated by a low-degree polynomial, then alternative nodes $x_i'$ and weights $w_i'$ will usually give more accurate quadrature rules. These are known as Gauss–Jacobi quadrature rules, i.e.,
$$\int_{-1}^{1} f(x)\,dx = \int_{-1}^{1} (1-x)^\alpha (1+x)^\beta g(x)\,dx \approx \sum_{i=1}^{n} w_i' \, g(x_i').$$
Common weights include $\frac{1}{\sqrt{1-x^2}}$ and $\sqrt{1-x^2}$ (Chebyshev–Gauss quadrature). One may also want to integrate over semi-infinite and infinite intervals.
It can be shown that the quadrature nodes are the roots of a polynomial belonging to a class of orthogonal polynomials. This is a key observation for computing Gauss quadrature nodes and weights.
Gauss–Legendre quadrature
For the simplest integration problem stated above, i.e., $f(x)$ is well approximated by polynomials on $[-1,1]$, the associated orthogonal polynomials are Legendre polynomials, denoted by $P_n(x)$. With the $n$-th polynomial normalized to give $P_n(1) = 1$, the $i$-th Gauss node, $x_i$, is the $i$-th root of $P_n$, and the weights are given by the formula
$$w_i = \frac{2}{\left(1 - x_i^2\right)\left[P_n'(x_i)\right]^2}.$$
Some low-order quadrature rules over $[-1,1]$ are tabulated below.

Number of points, $n$ | Points, $x_i$ | Weights, $w_i$
--- | --- | ---
$1$ | $0$ | $2$
$2$ | $\pm\frac{1}{\sqrt{3}}$ | $1$
$3$ | $0$ | $\frac{8}{9}$
 | $\pm\sqrt{\frac{3}{5}}$ | $\frac{5}{9}$
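As an illustration, the following minimal sketch (assuming NumPy is available; the order $n = 5$ is an arbitrary example) computes the nodes as the roots of $P_n$ and the weights from the formula above, and compares them against NumPy's built-in `leggauss` rule.

```python
# Sketch: Gauss-Legendre nodes as roots of P_n, weights from the stated formula.
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

n = 5
Pn = Legendre.basis(n)             # P_n, normalized so that P_n(1) = 1
nodes = np.sort(Pn.roots())        # the n Gauss nodes x_i
dPn = Pn.deriv()
weights = 2.0 / ((1.0 - nodes**2) * dPn(nodes)**2)

# Cross-check against NumPy's built-in Gauss-Legendre rule.
x_ref, w_ref = leggauss(n)
print(np.max(np.abs(nodes - x_ref)), np.max(np.abs(weights - w_ref)))  # both ~1e-16
```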
Change of interval
An integral over $[a,b]$ must be changed into an integral over $[-1,1]$ before applying the Gaussian quadrature rule. This change of interval can be done in the following way:
$$\int_a^b f(x)\,dx = \frac{b-a}{2} \int_{-1}^{1} f\!\left(\frac{b-a}{2}\,\xi + \frac{a+b}{2}\right) d\xi.$$
Applying the $n$-point Gaussian quadrature rule then results in the following approximation:
$$\int_a^b f(x)\,dx \approx \frac{b-a}{2} \sum_{i=1}^{n} w_i\, f\!\left(\frac{b-a}{2}\,\xi_i + \frac{a+b}{2}\right).$$
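A minimal sketch of this change of interval, assuming NumPy; the integrand and interval below are arbitrary examples.

```python
# Sketch: map an n-point Gauss-Legendre rule from [-1, 1] to [a, b].
import numpy as np
from numpy.polynomial.legendre import leggauss

def gauss_legendre(f, a, b, n):
    """Approximate the integral of f over [a, b] with n Gauss-Legendre points."""
    xi, wi = leggauss(n)                       # nodes and weights on [-1, 1]
    x = 0.5 * (b - a) * xi + 0.5 * (a + b)     # affine change of variables
    return 0.5 * (b - a) * np.sum(wi * f(x))

# Example: the integral of e^x over [0, 3] is e^3 - 1.
approx = gauss_legendre(np.exp, 0.0, 3.0, 5)
print(approx, np.exp(3.0) - 1.0)
```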
Other forms
The integration problem can be expressed in a slightly more general way by introducing a positive weight function $\omega$ into the integrand, and allowing an interval other than $[-1,1]$. That is, the problem is to calculate
$$\int_a^b \omega(x)\, f(x)\,dx$$
for some choices of $a$, $b$, and $\omega$. For $a = -1$, $b = 1$, and $\omega(x) = 1$, the problem is the same as that considered above. Other choices lead to other integration rules. Some of these are tabulated below. Equation numbers are given for Abramowitz and Stegun (A & S).
Interval | $\omega(x)$ | Orthogonal polynomials | A & S | For more information, see …
--- | --- | --- | --- | ---
$[-1,1]$ | $1$ | Legendre polynomials | 25.4.29 | Gauss–Legendre quadrature (above)
$(-1,1)$ | $(1-x)^\alpha (1+x)^\beta,\ \alpha,\beta > -1$ | Jacobi polynomials | 25.4.33 | Gauss–Jacobi quadrature
$(-1,1)$ | $\frac{1}{\sqrt{1-x^2}}$ | Chebyshev polynomials (first kind) | 25.4.38 | Chebyshev–Gauss quadrature
$[-1,1]$ | $\sqrt{1-x^2}$ | Chebyshev polynomials (second kind) | 25.4.40 | Chebyshev–Gauss quadrature
$[0,\infty)$ | $e^{-x}$ | Laguerre polynomials | 25.4.45 | Gauss–Laguerre quadrature
$[0,\infty)$ | $x^\alpha e^{-x},\ \alpha > -1$ | Generalized Laguerre polynomials | | Gauss–Laguerre quadrature
$(-\infty,\infty)$ | $e^{-x^2}$ | Hermite polynomials | 25.4.46 | Gauss–Hermite quadrature
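For the semi-infinite and infinite intervals in the table, NumPy exposes ready-made rules; the sketch below (orders and test integrands chosen arbitrarily) checks them against integrals with known closed forms.

```python
# Sketch: Gaussian rules for other weight functions from the table, via NumPy.
import numpy as np
from numpy.polynomial.hermite import hermgauss
from numpy.polynomial.laguerre import laggauss

# Gauss-Hermite: integral over (-inf, inf) of exp(-x^2) * x^2 dx = sqrt(pi)/2.
x, w = hermgauss(10)
print(np.sum(w * x**2), np.sqrt(np.pi) / 2)

# Gauss-Laguerre: integral over [0, inf) of exp(-x) * x^3 dx = 3! = 6.
x, w = laggauss(10)
print(np.sum(w * x**3), 6.0)
```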
Fundamental theorem
Let $p_n$ be a nontrivial polynomial of degree $n$ such that
$$\int_a^b \omega(x)\, x^k\, p_n(x)\,dx = 0, \qquad \text{for all } k = 0, 1, \ldots, n-1.$$
If we pick the $n$ nodes $x_i$ to be the zeros of $p_n$, then there exist $n$ weights $w_i$ which make the Gaussian-quadrature computed integral exact for all polynomials $h(x)$ of degree $2n-1$ or less. Furthermore, all these nodes $x_i$ will lie in the open interval $(a,b)$.
The polynomial $p_n$ is said to be an orthogonal polynomial of degree $n$ associated to the weight function $\omega(x)$. It is unique up to a constant normalization factor. The idea underlying the proof is that, because of its sufficiently low degree, $h(x)$ can be divided by $p_n(x)$ to produce a quotient $q(x)$ of degree strictly lower than $n$, and a remainder $r(x)$ of still lower degree, so that both will be orthogonal to $p_n(x)$ with respect to $\omega$, by the defining property of $p_n(x)$. Thus
$$\int_a^b \omega(x)\, h(x)\,dx = \int_a^b \omega(x)\, r(x)\,dx.$$
Because of the choice of nodes $x_i$ (the zeros of $p_n$), the corresponding relation
$$\sum_{i=1}^{n} w_i\, h(x_i) = \sum_{i=1}^{n} w_i\, r(x_i)$$
holds also. The exactness of the computed integral for $h(x)$ then follows from the corresponding exactness for polynomials of degree only $n-1$ or less (as is $r(x)$).
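A quick numerical check of this exactness claim for the Legendre case ($\omega = 1$ on $[-1,1]$), assuming NumPy; the order $n = 4$ is an arbitrary example.

```python
# Sketch: an n-point rule integrates every monomial x^k, k = 0..2n-1, exactly.
import numpy as np
from numpy.polynomial.legendre import leggauss

n = 4
x, w = leggauss(n)
for k in range(2 * n):
    exact = (1 - (-1)**(k + 1)) / (k + 1)   # integral of x^k over [-1, 1]
    approx = np.sum(w * x**k)
    assert abs(approx - exact) < 1e-12, (k, approx, exact)
print("exact for all degrees up to", 2 * n - 1)
```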
General formula for the weights
The weights can be expressed as
$$w_i = \frac{a_n}{a_{n-1}} \, \frac{\int_a^b \omega(x)\, p_{n-1}(x)^2\,dx}{p_n'(x_i)\, p_{n-1}(x_i)} \tag{1}$$
where $a_k$ is the coefficient of $x^k$ in $p_k(x)$. To prove this, note that using Lagrange interpolation one can express $r(x)$ in terms of $r(x_i)$ as
$$r(x) = \sum_{i=1}^{n} r(x_i) \prod_{\substack{1 \le j \le n \\ j \ne i}} \frac{x - x_j}{x_i - x_j},$$
because $r(x)$ has degree less than $n$ and is thus fixed by the values it attains at $n$ different points. Multiplying both sides by $\omega(x)$ and integrating from $a$ to $b$ yields
$$\int_a^b \omega(x)\, r(x)\,dx = \sum_{i=1}^{n} r(x_i) \int_a^b \omega(x) \prod_{\substack{1 \le j \le n \\ j \ne i}} \frac{x - x_j}{x_i - x_j}\,dx.$$
The weights $w_i$ are thus given by
$$w_i = \int_a^b \omega(x) \prod_{\substack{1 \le j \le n \\ j \ne i}} \frac{x - x_j}{x_i - x_j}\,dx. \tag{2}$$
This integral expression for $w_i$ can be expressed in terms of the orthogonal polynomials $p_n(x)$ and $p_{n-1}(x)$ as follows.
We can write
$$\prod_{\substack{1 \le j \le n \\ j \ne i}} (x - x_j) = \frac{\prod_{1 \le j \le n} (x - x_j)}{x - x_i} = \frac{p_n(x)}{a_n\,(x - x_i)},$$
where $a_n$ is the coefficient of $x^n$ in $p_n(x)$. Taking the limit of $x$ to $x_i$ yields, using L'Hôpital's rule,
$$\prod_{\substack{1 \le j \le n \\ j \ne i}} (x_i - x_j) = \frac{p_n'(x_i)}{a_n}.$$
We can thus write the integral expression for the weights as
$$w_i = \frac{1}{p_n'(x_i)} \int_a^b \omega(x)\, \frac{p_n(x)}{x - x_i}\,dx. \tag{3}$$
In the integrand, writing
$$\frac{1}{x - x_i} = \frac{1 - \left(\frac{x}{x_i}\right)^k}{x - x_i} + \left(\frac{x}{x_i}\right)^k \frac{1}{x - x_i}$$
yields
$$\int_a^b \omega(x)\, \frac{x^k\, p_n(x)}{x - x_i}\,dx = x_i^k \int_a^b \omega(x)\, \frac{p_n(x)}{x - x_i}\,dx,$$
provided $k \le n$, because
$$\frac{1 - \left(\frac{x}{x_i}\right)^k}{x - x_i}$$
is a polynomial of degree $k-1$, which is then orthogonal to $p_n(x)$. So, if $q(x)$ is a polynomial of at most $n$-th degree we have
$$\int_a^b \omega(x)\, \frac{p_n(x)}{x - x_i}\, q(x)\,dx = q(x_i) \int_a^b \omega(x)\, \frac{p_n(x)}{x - x_i}\,dx.$$
We can evaluate the integral on the right-hand side for $q(x) = p_{n-1}(x)$ as follows. Because $\frac{p_n(x)}{x - x_i}$ is a polynomial of degree $n-1$, we have
$$\frac{p_n(x)}{x - x_i} = a_n x^{n-1} + s(x),$$
where $s(x)$ is a polynomial of degree $n-2$. Since $s(x)$ is orthogonal to $p_{n-1}(x)$ we have
$$\int_a^b \omega(x)\, \frac{p_n(x)}{x - x_i}\, p_{n-1}(x)\,dx = a_n \int_a^b \omega(x)\, x^{n-1}\, p_{n-1}(x)\,dx.$$
We can then write
$$x^{n-1} = \left(x^{n-1} - \frac{p_{n-1}(x)}{a_{n-1}}\right) + \frac{p_{n-1}(x)}{a_{n-1}}.$$
The term in the brackets is a polynomial of degree $n-2$, which is therefore orthogonal to $p_{n-1}(x)$. The integral can thus be written as
$$\int_a^b \omega(x)\, \frac{p_n(x)}{x - x_i}\, p_{n-1}(x)\,dx = \frac{a_n}{a_{n-1}} \int_a^b \omega(x)\, p_{n-1}(x)^2\,dx.$$
According to equation (3), the weights are obtained by dividing this by $p_n'(x_i)$, and that yields the expression in equation (1).
$w_i$ can also be expressed in terms of the orthogonal polynomials $p_n(x)$ and now $p_{n+1}(x)$. In the 3-term recurrence relation $p_{n+1}(x_i) = (a)\, p_n(x_i) + (b)\, p_{n-1}(x_i)$ the term with $p_n(x_i)$ vanishes, so $p_{n-1}(x_i)$ in Eq. (1) can be replaced by $\frac{1}{b}\, p_{n+1}(x_i)$.
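The weight formula (1) can be checked numerically in the Legendre case, where the leading coefficient of $P_k$ is $\binom{2k}{k}/2^k$ and $\int_{-1}^{1} P_{n-1}(x)^2\,dx = 2/(2n-1)$; a sketch assuming NumPy, with the order $n = 6$ chosen arbitrarily.

```python
# Sketch: check weight formula (1) for the Legendre case (omega = 1 on [-1, 1]).
import numpy as np
from math import comb
from numpy.polynomial.legendre import Legendre, leggauss

n = 6
x, w_ref = leggauss(n)

Pn, Pnm1 = Legendre.basis(n), Legendre.basis(n - 1)
a_n = comb(2 * n, n) / 2**n                     # leading coefficient of P_n
a_nm1 = comb(2 * (n - 1), n - 1) / 2**(n - 1)   # leading coefficient of P_{n-1}
norm = 2.0 / (2 * n - 1)                        # integral of P_{n-1}^2 over [-1, 1]

w = (a_n / a_nm1) * norm / (Pn.deriv()(x) * Pnm1(x))
print(np.max(np.abs(w - w_ref)))    # agrees to machine precision
```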
Proof that the weights are positive
Consider the following polynomial of degree $2n-2$:
$$f(x) = \prod_{\substack{1 \le j \le n \\ j \ne i}} \frac{(x - x_j)^2}{(x_i - x_j)^2},$$
where, as above, the $x_j$ are the roots of the polynomial $p_n(x)$.
Clearly $f(x_j) = \delta_{ij}$. Since the degree of $f(x)$ is less than $2n-1$, the Gaussian quadrature formula involving the weights and nodes obtained from $p_n(x)$ applies. Since $f(x_j) = 0$ for $j$ not equal to $i$, we have
$$\int_a^b \omega(x)\, f(x)\,dx = \sum_{j=1}^{n} w_j\, f(x_j) = \sum_{j=1}^{n} \delta_{ij}\, w_j = w_i.$$
Since both $\omega(x)$ and $f(x)$ are non-negative functions, it follows that $w_i > 0$.
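The construction in this proof can also be carried out numerically; the sketch below (assuming NumPy, with an arbitrary rule size and node index) integrates the squared Lagrange-basis polynomial exactly via its antiderivative and recovers the corresponding weight.

```python
# Sketch: the polynomial f(x) = prod_{j != i} (x - x_j)^2 / (x_i - x_j)^2 from the
# proof, integrated exactly over [-1, 1]; its integral reproduces the weight w_i.
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial.legendre import leggauss

n, i = 4, 1                       # example rule size and node index
x, w = leggauss(n)
others = np.delete(x, i)

f = Polynomial.fromroots(others) ** 2 * (1.0 / np.prod((x[i] - others) ** 2))
F = f.integ()                     # antiderivative of the polynomial
integral = F(1.0) - F(-1.0)       # exact integral of f over [-1, 1]
print(integral, w[i])             # both equal w_i, which is manifestly positive
```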
Computation of Gaussian quadrature rules
There are many algorithms for computing the nodes $x_i$ and weights $w_i$ of Gaussian quadrature rules. The most popular are the Golub–Welsch algorithm requiring $O(n^2)$ operations, Newton's method for solving $p_n(x) = 0$ using the three-term recurrence for evaluation requiring $O(n^2)$ operations, and asymptotic formulas for large $n$ requiring $O(n)$ operations.

Recurrence relation
Orthogonal polynomials $p_r$ with $(p_r, p_s) = 0$ for $r \ne s$, degree $(p_r) = r$, and leading coefficient one (i.e. monic orthogonal polynomials) satisfy the recurrence relation
$$p_{r+1}(x) = (x - a_{r,r})\, p_r(x) - a_{r,r-1}\, p_{r-1}(x) - \cdots - a_{r,0}\, p_0(x),$$
with the scalar product defined by
$$(f(x), g(x)) = \int_a^b \omega(x)\, f(x)\, g(x)\,dx,$$
for $r = 0, 1, \ldots, n-1$, where $n$ is the maximal degree, which can be taken to be infinity, and where $a_{r,s} = \frac{(x p_r,\, p_s)}{(p_s,\, p_s)}$. First of all, the polynomials defined by the recurrence relation starting with $p_0(x) = 1$ have leading coefficient one and correct degree. Given the starting point $p_0$, the orthogonality of $p_r$ can be shown by induction. For $r = s = 0$ one has
$$(p_1, p_0) = ((x - a_{0,0})\, p_0, p_0) = (x p_0, p_0) - a_{0,0}(p_0, p_0) = (x p_0, p_0) - (x p_0, p_0) = 0.$$
Now if $p_0, p_1, \ldots, p_r$ are orthogonal, then so is $p_{r+1}$, because in
$$(p_{r+1}, p_s) = (x p_r, p_s) - a_{r,r}(p_r, p_s) - a_{r,r-1}(p_{r-1}, p_s) - \cdots - a_{r,0}(p_0, p_s)$$
all scalar products vanish except for the first one and the one where $p_s$ meets the same orthogonal polynomial. Therefore,
$$(p_{r+1}, p_s) = (x p_r, p_s) - a_{r,s}(p_s, p_s) = 0.$$
However, if the scalar product satisfies $(xf, g) = (f, xg)$ (which is the case for Gaussian quadrature), the recurrence relation reduces to a three-term recurrence relation: for $s < r-1$, $x p_s$ is a polynomial of degree less than or equal to $r-1$. On the other hand, $p_r$ is orthogonal to every polynomial of degree less than or equal to $r-1$. Therefore, one has $(x p_r, p_s) = (p_r, x p_s) = 0$ and $a_{r,s} = 0$ for $s < r-1$. The recurrence relation then simplifies to
$$p_{r+1}(x) = (x - a_{r,r})\, p_r(x) - a_{r,r-1}\, p_{r-1}(x),$$
or
$$p_{r+1}(x) = (x - a_r)\, p_r(x) - b_r\, p_{r-1}(x)$$
(with the convention $p_{-1}(x) \equiv 0$), where
$$a_r := \frac{(x p_r,\, p_r)}{(p_r,\, p_r)}, \qquad b_r := \frac{(x p_r,\, p_{r-1})}{(p_{r-1},\, p_{r-1})} = \frac{(p_r,\, p_r)}{(p_{r-1},\, p_{r-1})}$$
(the last equality holds because $(x p_r, p_{r-1}) = (p_r, x p_{r-1}) = (p_r, p_r)$, since $x p_{r-1}$ differs from $p_r$ by a polynomial of degree less than $r$).
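This three-term recurrence is exactly what Newton's method (mentioned above as one of the standard computation strategies) needs in order to evaluate $p_n$ and $p_n'$. A sketch for the Legendre case, where $a_r = 0$ and $b_r = r^2/(4r^2-1)$; the Chebyshev-like initial guesses and the order $n = 8$ are arbitrary choices, assuming NumPy.

```python
# Sketch: Newton's method for the Gauss-Legendre nodes, evaluating the monic
# Legendre polynomial p_n and its derivative with the three-term recurrence.
import numpy as np

def monic_legendre(n, x):
    """Return p_n(x) and p_n'(x) via the three-term recurrence (n >= 1)."""
    p_prev, p = np.ones_like(x), x.copy()            # p_0, p_1
    d_prev, d = np.zeros_like(x), np.ones_like(x)    # p_0', p_1'
    for r in range(1, n):
        b = r**2 / (4.0 * r**2 - 1.0)
        p, p_prev = x * p - b * p_prev, p            # p_{r+1} = x p_r - b_r p_{r-1}
        d, d_prev = p_prev + x * d - b * d_prev, d   # p_{r+1}' = p_r + x p_r' - b_r p_{r-1}'
    return p, d

n = 8
x = np.cos(np.pi * (np.arange(1, n + 1) - 0.25) / (n + 0.5))   # initial guesses
for _ in range(20):
    p, dp = monic_legendre(n, x)
    x -= p / dp                                      # Newton update

print(np.sort(x))   # agrees with numpy.polynomial.legendre.leggauss(n)[0]
```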
The Golub–Welsch algorithm
The three-term recurrence relation can be written in matrix form $J \tilde{P} = x \tilde{P} - p_n(x)\, \mathbf{e}_n$, where $\tilde{P} = \begin{bmatrix} p_0(x) & p_1(x) & \cdots & p_{n-1}(x) \end{bmatrix}^{\mathsf{T}}$, $\mathbf{e}_n$ is the $n$-th standard basis vector, i.e., $\mathbf{e}_n = \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix}^{\mathsf{T}}$, and $J$ is the so-called Jacobi matrix:
$$J = \begin{bmatrix}
a_0 & 1 & & & \\
b_1 & a_1 & 1 & & \\
& b_2 & a_2 & \ddots & \\
& & \ddots & \ddots & 1 \\
& & & b_{n-1} & a_{n-1}
\end{bmatrix}.$$
The zeros $x_j$ of the degree-$n$ polynomial, which are used as nodes for the Gaussian quadrature, can be found by computing the eigenvalues of this tridiagonal matrix. This procedure is known as the Golub–Welsch algorithm.
For computing the weights and nodes, it is preferable to consider the symmetric tridiagonal matrix $\mathcal{J}$ with elements
$$\mathcal{J}_{k,k} = J_{k,k} = a_{k-1}, \quad k = 1, \ldots, n, \qquad
\mathcal{J}_{k-1,k} = \mathcal{J}_{k,k-1} = \sqrt{J_{k,k-1}\, J_{k-1,k}} = \sqrt{b_{k-1}}, \quad k = 2, \ldots, n.$$
$J$ and $\mathcal{J}$ are similar matrices and therefore have the same eigenvalues (the nodes). The weights can be computed from the corresponding eigenvectors: if $\phi^{(j)}$ is a normalized eigenvector (i.e., with Euclidean norm equal to one) associated with the eigenvalue $x_j$, the corresponding weight can be computed from the first component of this eigenvector, namely:
$$w_j = \mu_0 \left(\phi_1^{(j)}\right)^2,$$
where $\mu_0$ is the integral of the weight function
$$\mu_0 = \int_a^b \omega(x)\,dx.$$
Further details can be found in the references.
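A compact sketch of the Golub–Welsch procedure for the Legendre weight ($\omega = 1$ on $[-1,1]$), assuming NumPy; here $a_k = 0$, $b_k = k^2/(4k^2-1)$, and $\mu_0 = 2$.

```python
# Sketch: Golub-Welsch for Gauss-Legendre via the symmetric Jacobi matrix.
import numpy as np

def golub_welsch_legendre(n):
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)          # off-diagonal entries sqrt(b_k)
    J = np.diag(beta, 1) + np.diag(beta, -1)      # symmetric tridiagonal Jacobi matrix
    eigval, eigvec = np.linalg.eigh(J)            # eigenvalues are the nodes
    weights = 2.0 * eigvec[0, :]**2               # mu_0 * (first eigenvector component)^2
    return eigval, weights

nodes, weights = golub_welsch_legendre(5)
print(nodes)     # matches numpy.polynomial.legendre.leggauss(5)[0]
print(weights)   # matches numpy.polynomial.legendre.leggauss(5)[1]
```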
Error estimates
The error of a Gaussian quadrature rule can be stated as follows. For an integrand which has $2n$ continuous derivatives,
$$\int_a^b \omega(x)\, f(x)\,dx - \sum_{i=1}^{n} w_i\, f(x_i) = \frac{f^{(2n)}(\xi)}{(2n)!}\, (p_n, p_n)$$
for some $\xi$ in $(a,b)$, where $p_n$ is the monic (i.e. with leading coefficient $1$) orthogonal polynomial of degree $n$ and where
$$(p_n, p_n) = \int_a^b \omega(x)\, \left[p_n(x)\right]^2\,dx.$$
In the important special case of $\omega(x) = 1$, we have the error estimate
$$\frac{(b-a)^{2n+1}\, (n!)^4}{(2n+1)\left[(2n)!\right]^3}\, f^{(2n)}(\xi), \qquad a < \xi < b.$$
Stoer and Bulirsch remark that this error estimate is inconvenient in practice, since it may be difficult to estimate the order-$2n$ derivative, and furthermore the actual error may be much less than a bound established by the derivative. Another approach is to use two Gaussian quadrature rules of different orders and to estimate the error as the difference between the two results. For this purpose, Gauss–Kronrod quadrature rules can be useful.
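As a concrete illustration (assuming NumPy; the choices $f(x) = e^x$, $[a,b] = [-1,1]$, and $n = 3$ are arbitrary), the sketch below compares the actual quadrature error with the bound from the $\omega = 1$ formula above, and with the practical estimate obtained from two rules of different order.

```python
# Sketch: error-term bound vs. actual error, and the two-rule error estimate.
import numpy as np
from math import factorial, exp
from numpy.polynomial.legendre import leggauss

f, exact = np.exp, exp(1.0) - exp(-1.0)   # integral of e^x over [-1, 1]

n = 3
x, w = leggauss(n)
err = abs(np.sum(w * f(x)) - exact)

# Bound: (b-a)^(2n+1) (n!)^4 / ((2n+1) ((2n)!)^3) * max|f^(2n)|, with max|f^(2n)| <= e.
bound = 2.0**(2*n + 1) * factorial(n)**4 / ((2*n + 1) * factorial(2*n)**3) * exp(1.0)
print(err, bound)          # the actual error stays below the bound

# Practical estimate: difference between rules with n and n + 1 points.
x2, w2 = leggauss(n + 1)
print(abs(np.sum(w * f(x)) - np.sum(w2 * f(x2))))
```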
Gauss–Kronrod rules
If the interval $[a,b]$ is subdivided, the Gauss evaluation points of the new subintervals never coincide with the previous evaluation points, and thus the integrand must be evaluated at every point. Gauss–Kronrod rules are extensions of Gauss quadrature rules generated by adding $n+1$ points to an $n$-point rule in such a way that the resulting rule is of order $2n+1$. This allows for computing higher-order estimates while reusing the function values of a lower-order estimate. The difference between a Gauss quadrature rule and its Kronrod extension is often used as an estimate of the approximation error.

Gauss–Lobatto rules
Also known as Lobatto quadrature, named after Dutch mathematician Rehuel Lobatto. It is similar to Gaussian quadrature with the following differences:
- The integration points include the end points of the integration interval.
- It is accurate for polynomials up to degree $2n-3$, where $n$ is the number of integration points.
Abscissas: $x_i$ is the $(i-1)$-st zero of $P_{n-1}'(x)$, where $P_{n-1}$ denotes the standard Legendre polynomial of degree $n-1$; the endpoints $x_1 = -1$ and $x_n = 1$ are also included.
Weights:
$$w_i = \frac{2}{n(n-1)\left[P_{n-1}(x_i)\right]^2} \quad (x_i \ne \pm 1), \qquad w_1 = w_n = \frac{2}{n(n-1)}.$$
Remainder:
$$R_n = \frac{-n(n-1)^3\, 2^{2n-1}\left[(n-2)!\right]^4}{(2n-1)\left[(2n-2)!\right]^3}\, f^{(2n-2)}(\xi), \qquad -1 < \xi < 1.$$
Some of the weights are:
Number of points, $n$ | Points, $x_i$ | Weights, $w_i$
--- | --- | ---
$3$ | $0$ | $\frac{4}{3}$
 | $\pm 1$ | $\frac{1}{3}$
$4$ | $\pm\sqrt{\frac{1}{5}}$ | $\frac{5}{6}$
 | $\pm 1$ | $\frac{1}{6}$
$5$ | $0$ | $\frac{32}{45}$
 | $\pm\sqrt{\frac{3}{7}}$ | $\frac{49}{90}$
 | $\pm 1$ | $\frac{1}{10}$
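The abscissa and weight formulas above can be evaluated directly; a sketch assuming NumPy, for $n = 5$ points, reproducing the tabulated values.

```python
# Sketch: Gauss-Lobatto nodes and weights on [-1, 1] for n = 5 points.
import numpy as np
from numpy.polynomial.legendre import Legendre

n = 5
Pm = Legendre.basis(n - 1)                       # P_{n-1}
interior = np.sort(Pm.deriv().roots())           # zeros of P'_{n-1}
nodes = np.concatenate(([-1.0], interior, [1.0]))

weights = 2.0 / (n * (n - 1) * Pm(nodes)**2)     # endpoint values reduce to 2/(n(n-1))
print(nodes)     # [-1, -sqrt(3/7), 0, sqrt(3/7), 1]
print(weights)   # [1/10, 49/90, 32/45, 49/90, 1/10]
```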
An adaptive variant of this algorithm with 2 interior nodes is found in GNU Octave and MATLAB as `quadl` and `integrate`.