Gaussian quadrature#General formula for the weights

{{Short description|Approximation of the definite integral of a function}}

{{Redirect|Gaussian integration|the integral of a Gaussian function|Gaussian integral}}

{{more footnotes|date=September 2018}}

[[File:Comparison Gaussquad trapezoidal.svg|thumb|upright=1.5|alt=Comparison between 2-point Gaussian and trapezoidal quadrature.|Comparison between 2-point Gaussian and trapezoidal quadrature.

The blue curve shows the function whose definite integral on the interval {{math|[−1, 1]}} is to be calculated (the integrand). The trapezoidal rule approximates the function with a linear function that coincides with the integrand at the endpoints of the interval and is represented by an orange dashed line. The approximation is apparently not good, so the error is large (the trapezoidal rule gives an approximation of the integral equal to {{math|1=y(−1) + y(1) = −10}}, while the correct value is {{math|{{frac|2|3}}}}). To obtain a more accurate result, the interval must be partitioned into many subintervals and then the composite trapezoidal rule must be used, which requires many more calculations.

The Gaussian quadrature chooses more suitable points instead, so even a linear function approximates the function better (the black dashed line). As the integrand is the third-degree polynomial {{math|1=y(x) = 7x{{sup|3}} − 8x{{sup|2}} − 3x + 3}}, the 2-point Gaussian quadrature rule even returns an exact result.]]

In numerical analysis, an {{mvar|n}}-point Gaussian quadrature rule, named after Carl Friedrich Gauss,{{harvnb|Gauss|1815}} is a quadrature rule constructed to yield an exact result for polynomials of degree {{math|2n − 1}} or less by a suitable choice of the nodes {{mvar|x{{sub|i}}}} and weights {{mvar|w{{sub|i}}}} for {{math|1=i = 1, ..., n}}.

The modern formulation using orthogonal polynomials was developed by Carl Gustav Jacobi in 1826.{{harvnb|Jacobi|1826}} The most common domain of integration for such a rule is taken as {{math|[−1, 1]}}, so the rule is stated as

\int_{-1}^1 f(x)\,dx \approx \sum_{i=1}^n w_i f(x_i),

which is exact for polynomials of degree {{math|2n − 1}} or less. This exact rule is known as the Gauss–Legendre quadrature rule. The quadrature rule will only be an accurate approximation to the integral above if {{math|f (x)}} is well-approximated by a polynomial of degree {{math|2n − 1}} or less on {{math|[−1, 1]}}.

The Gauss–Legendre quadrature rule is not typically used for integrable functions with endpoint singularities. Instead, if the integrand can be written as

f(x) = \left(1 - x\right)^\alpha \left(1 + x\right)^\beta g(x),\quad \alpha,\beta > -1,

where {{math|g(x)}} is well-approximated by a low-degree polynomial, then alternative nodes {{mvar|x{{sub|i}}'}} and weights {{mvar|w{{sub|i}}'}} will usually give more accurate quadrature rules. These are known as Gauss–Jacobi quadrature rules, i.e.,

\int_{-1}^1 f(x)\,dx = \int_{-1}^1 \left(1 - x\right)^\alpha \left(1 + x\right)^\beta g(x)\,dx \approx \sum_{i=1}^n w_i' g\left(x_i'\right).

Common weights include \frac{1}{\sqrt{1 - x^2}} (Chebyshev–Gauss) and \sqrt{1 - x^2}. One may also want to integrate over semi-infinite (Gauss–Laguerre quadrature) and infinite intervals (Gauss–Hermite quadrature).

It can be shown (see Press et al., or Stoer and Bulirsch) that the quadrature nodes {{mvar|x{{sub|i}}}} are the roots of a polynomial belonging to a class of orthogonal polynomials (the class orthogonal with respect to a weighted inner-product). This is a key observation for computing Gauss quadrature nodes and weights.

== Gauss–Legendre quadrature ==

{{Further|Gauss–Legendre quadrature}}

[[File:Legendrepolynomials6.svg|thumb|upright=1.5|Graphs of Legendre polynomials]]

For the simplest integration problem stated above, i.e., {{math|f(x)}} is well-approximated by polynomials on {{math|[−1, 1]}}, the associated orthogonal polynomials are Legendre polynomials, denoted by {{math|P{{sub|n}}(x)}}. With the {{mvar|n}}-th polynomial normalized so that {{math|1=P{{sub|n}}(1) = 1}}, the {{mvar|i}}-th Gauss node, {{mvar|x{{sub|i}}}}, is the {{mvar|i}}-th root of {{math|P{{sub|n}}}}, and the weights are given by the formula{{harvnb|Abramowitz|Stegun|1983|p=887}}

w_i = \frac{2}{\left( 1 - x_i^2 \right) \left[P'_n(x_i)\right]^2}.
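The weight formula above is easy to check numerically. The following is a minimal illustrative sketch (assuming NumPy and its numpy.polynomial.legendre module): it verifies that NumPy's tabulated nodes are roots of {{math|P{{sub|n}}}}, recomputes the weights from the formula, and applies the 2-point rule to the cubic from the figure above, whose exact integral is {{math|{{frac|2|3}}}}.

<syntaxhighlight lang="python">
import numpy as np
from numpy.polynomial import legendre as L

n = 5
x, w_ref = L.leggauss(n)                      # NumPy's Gauss–Legendre nodes and weights
Pn = L.Legendre.basis(n)                      # P_n, normalized so that P_n(1) = 1
print(np.allclose(Pn(x), 0.0))                # the nodes are the roots of P_n -> True
w = 2.0 / ((1.0 - x**2) * Pn.deriv()(x)**2)   # the weight formula above
print(np.allclose(w, w_ref))                  # -> True

# 2-point rule applied to the cubic from the figure; the exact integral is 2/3.
x2, w2 = L.leggauss(2)
f = lambda t: 7*t**3 - 8*t**2 - 3*t + 3
print(np.sum(w2 * f(x2)))                     # 0.666666...
</syntaxhighlight>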

Some low-order quadrature rules are tabulated below (over interval {{math|[−1, 1]}}, see the section below for other intervals).

class="wikitable" style="margin:auto; background:white; text-align:center;"

! Number of points, {{mvar|n}}

! colspan="2" | Points, {{mvar|xi}}

! colspan="2" | Weights, {{mvar|wi}}

1

| colspan="2" | 0

| colspan="2" | 2

2

| \pm\frac{1}{\sqrt{3}}

±0.57735...

| colspan="2" | 1

rowspan="2" | 3

| colspan="2" | 0

| \frac{8}{9}

0.888889...
\pm\sqrt{\frac{3}{5}}±0.774597...

| \frac{5}{9}

0.555556...
rowspan="2" | 4

| \pm\sqrt{\frac{3}{7} - \frac{2}{7}\sqrt{\frac{6}{5}}}

±0.339981...

| \frac{18 + \sqrt{30}}{36}

0.652145...
\pm\sqrt{\frac{3}{7} + \frac{2}{7}\sqrt{\frac{6}{5}}}±0.861136...

| \frac{18 - \sqrt{30}}{36}

0.347855...
rowspan="3" | 5

| colspan="2" | 0

| \frac{128}{225}

0.568889...
\pm\frac{1}{3}\sqrt{5 - 2\sqrt{\frac{10}{7}}}±0.538469...

| \frac{322 + 13\sqrt{70}}{900}

0.478629...
\pm\frac{1}{3}\sqrt{5 + 2\sqrt{\frac{10}{7}}}±0.90618...

| \frac{322 - 13\sqrt{70}}{900}

0.236927...

== Change of interval ==

An integral over {{math|[a, b]}} must be changed into an integral over {{math|[−1, 1]}} before applying the Gaussian quadrature rule. This change of interval can be done in the following way:

\int_a^b f(x)\,dx = \int_{-1}^1 f\left(\frac{b-a}{2}\xi + \frac{a+b}{2}\right)\,\frac{dx}{d\xi}\,d\xi

with \frac{dx}{d\xi} = \frac{b-a}{2}.

Applying the {{mvar|n}}-point Gaussian quadrature rule (\xi_i, w_i) then results in the following approximation:

\int_a^b f(x)\,dx \approx \frac{b-a}{2} \sum_{i=1}^n w_i f\left(\frac{b-a}{2}\xi_i + \frac{a+b}{2}\right).
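As a concrete illustration, the following short sketch (assuming NumPy; the integrand \exp(x) and the interval {{math|[0, 2]}} are arbitrary examples) applies the mapped rule above.

<syntaxhighlight lang="python">
import numpy as np
from numpy.polynomial.legendre import leggauss

def gauss_legendre(f, a, b, n):
    """Approximate the integral of f over [a, b] with an n-point Gauss–Legendre rule."""
    xi, w = leggauss(n)                        # nodes and weights on [-1, 1]
    x = 0.5 * (b - a) * xi + 0.5 * (a + b)     # affine map of the nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(x))    # the Jacobian factor (b - a)/2

# Example: the integral of exp over [0, 2] is e^2 - 1 = 6.389056...
print(gauss_legendre(np.exp, 0.0, 2.0, 5))
</syntaxhighlight>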

= Example of two-point Gauss quadrature rule =

Use the two-point Gauss quadrature rule to approximate the distance in meters covered by a rocket from t = 8\,\mathrm{s} to t = 30\,\mathrm{s}, as given by

s = \int_{8}^{30}{\left( 2000\ln\left[ \frac{140000}{140000 - 2100t} \right] - 9.8t \right){dt}}

Change the limits so that one can use the weights and abscissae tabulated above. Also, find the absolute relative true error; the true value is given as 11061.34 m.

Solution

First, changing the limits of integration from \left[ 8,30 \right] to \left[ - 1,1 \right] gives

\begin{align}

\int_{8}^{30} {f(t) dt} &= \frac{30 - 8}{2} \int_{- 1}^{1}{f\left( \frac{30 - 8}{2}x + \frac{30 + 8}{2} \right){dx}} \\

&= 11\int_{- 1}^{1}{f\left( 11x + 19 \right){dx}}

\end{align}

Next, get the weighting factors and function argument values from the table above for the two-point rule,

  • c_1 = 1.000000000
  • x_1 = - 0.577350269
  • c_2 = 1.000000000
  • x_2 = 0.577350269

Now we can use the Gauss quadrature formula

\begin{align}

11\int_{-1}^{1}{f\left( 11x + 19 \right){dx}} & \approx 11\left[ c_1 f\left( 11 x_1 + 19 \right) + c_2 f\left( 11 x_2 + 19 \right) \right] \\

&= 11\left[ f\left( 11( - 0.5773503) + 19 \right) + f\left( 11(0.5773503) + 19 \right) \right] \\

&= 11\left[ f(12.64915) + f(25.35085) \right] \\

&= 11\left[ (296.8317) + (708.4811) \right] \\

&= 11058.44

\end{align}

since

\begin{align}

f(12.64915) & = 2000\ln\left[ \frac{140000}{140000 - 2100(12.64915)} \right] - 9.8(12.64915) \\

&= 296.8317

\end{align}

\begin{align}

f(25.35085) & = 2000\ln\left[ \frac{140000}{140000 - 2100(25.35085)} \right] - 9.8(25.35085) \\

&= 708.4811

\end{align}

Given that the true value is 11061.34 m, the absolute relative true error, \left| \varepsilon_{t} \right| is

\left| \varepsilon_{t} \right| = \left| \frac{11061.34 - 11058.44}{11061.34} \right| \times 100\% = 0.0262\%
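The hand computation above can be reproduced in a few lines; this is only a numerical check of the worked example (a sketch assuming NumPy), with the true value 11061.34 m taken from the problem statement.

<syntaxhighlight lang="python">
import numpy as np
from numpy.polynomial.legendre import leggauss

f = lambda t: 2000*np.log(140000/(140000 - 2100*t)) - 9.8*t   # the integrand above

xi, w = leggauss(2)                                # xi = ±0.57735..., w = 1, 1
approx = 11.0 * np.sum(w * f(11.0*xi + 19.0))      # the mapped two-point rule
print(approx)                                      # ≈ 11058.44
print(abs((11061.34 - approx) / 11061.34) * 100)   # ≈ 0.0262 (percent)
</syntaxhighlight>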

== Other forms ==

The integration problem can be expressed in a slightly more general way by introducing a positive weight function {{mvar|ω}} into the integrand, and allowing an interval other than {{math|[−1, 1]}}. That is, the problem is to calculate

\int_a^b \omega(x)\,f(x)\,dx

for some choices of {{mvar|a}}, {{mvar|b}}, and {{mvar|ω}}. For {{math|1=a = −1}}, {{math|1=b = 1}}, and {{math|1=ω(x) = 1}}, the problem is the same as that considered above. Other choices lead to other integration rules. Some of these are tabulated below. Equation numbers are given for Abramowitz and Stegun (A & S).

class="wikitable" style="margin:auto; background:white; text-align:center;"

! Interval

! {{math|ω(x)}}

! Orthogonal polynomials

! A & S

! For more information, see ...

{{closed-closed|−1, 1}}{{math|1}}Legendre polynomials25.4.29{{section linkGauss–Legendre quadrature}}
{{open-open|−1, 1}}\left(1 - x\right)^\alpha \left(1 + x\right)^\beta,\quad \alpha, \beta > -1Jacobi polynomials25.4.33 ({{math|1=β = 0}})Gauss–Jacobi quadrature
{{open-open|−1, 1}}\frac{1}{\sqrt{1 - x^2}}Chebyshev polynomials (first kind)25.4.38Chebyshev–Gauss quadrature
{{closed-closed|−1, 1}}\sqrt{1 - x^2}Chebyshev polynomials (second kind)25.4.40Chebyshev–Gauss quadrature
{{closed-open|0, ∞}} e^{-x}\, Laguerre polynomials25.4.45Gauss–Laguerre quadrature
{{closed-open|0, ∞}} x^\alpha e^{-x},\quad \alpha>-1 Generalized Laguerre polynomialsGauss–Laguerre quadrature
{{open-open|−∞, ∞}} e^{-x^2} Hermite polynomials25.4.46Gauss–Hermite quadrature
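NumPy ships ready-made node and weight tables for several of the rules in this table. The sketch below (an illustration only, assuming NumPy) uses its Chebyshev–Gauss, Gauss–Hermite and Gauss–Laguerre routines, and checks the Hermite rule against the known value \int_{-\infty}^{\infty} e^{-x^2}\cos x\,dx = \sqrt{\pi}\,e^{-1/4}.

<syntaxhighlight lang="python">
import numpy as np
from numpy.polynomial import chebyshev, hermite, laguerre

xc, wc = chebyshev.chebgauss(10)   # ω(x) = 1/sqrt(1 - x^2) on (-1, 1)
xh, wh = hermite.hermgauss(10)     # ω(x) = exp(-x^2)       on (-inf, inf)
xl, wl = laguerre.laggauss(10)     # ω(x) = exp(-x)         on [0, inf)

# Gauss–Hermite check: the integral of exp(-x^2) cos(x) over the real line
# equals sqrt(pi) * exp(-1/4).
print(np.sum(wh * np.cos(xh)), np.sqrt(np.pi) * np.exp(-0.25))
</syntaxhighlight>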

= Fundamental theorem =

Let {{mvar|pn}} be a nontrivial polynomial of degree {{mvar|n}} such that

\int_a^b \omega(x) \, x^k p_n(x) \, dx = 0, \quad \text{for all } k = 0, 1, \ldots, n - 1.

Note that this will be true for all the orthogonal polynomials above, because each {{mvar|pn}} is constructed to be orthogonal to the other polynomials {{mvar|pj}} for {{math|j<n}}, and {{math|xk}} is in the span of that set.

If we pick the {{mvar|n}} nodes {{mvar|xi}} to be the zeros of {{mvar|pn}}, then there exist {{mvar|n}} weights {{mvar|wi}} which make the Gaussian quadrature computed integral exact for all polynomials {{math|h(x)}} of degree {{math|2n − 1}} or less. Furthermore, all these nodes {{mvar|xi}} will lie in the open interval {{math|(a, b)}}.{{harvnb|Stoer|Bulirsch|2002|pp=172–175}}

To prove the first part of this claim, let {{math|h(x)}} be any polynomial of degree {{math|2n − 1}} or less. Divide it by the orthogonal polynomial {{mvar|pn}} to get

h(x) = p_n(x) \, q(x) + r(x),

where {{math|q(x)}} is the quotient, of degree {{math|n − 1}} or less (because the sum of its degree and that of the divisor {{mvar|pn}} must equal that of the dividend), and {{math|r(x)}} is the remainder, also of degree {{math|n − 1}} or less (because the degree of the remainder is always less than that of the divisor). Since {{mvar|pn}} is by assumption orthogonal to all monomials of degree less than {{mvar|n}}, it must be orthogonal to the quotient {{math|q(x)}}. Therefore

\int_a^b \omega(x)\,h(x)\,dx = \int_a^b \omega(x)\,\big( \, p_n(x) q(x) + r(x) \, \big)\,dx = \int_a^b \omega(x)\,r(x)\,dx.

Since the remainder {{math|r(x)}} is of degree {{math|n − 1}} or less, we can interpolate it exactly using {{mvar|n}} interpolation points with Lagrange polynomials {{math|li(x)}}, where

l_i(x) = \prod _{j \ne i} \frac{x-x_j}{x_i-x_j}.

We have

r(x) = \sum_{i=1}^n l_i(x) \, r(x_i).

Then its integral will equal

\int_a^b \omega(x)\,r(x)\,dx = \int_a^b \omega(x) \, \sum_{i=1}^n l_i(x) \, r(x_i) \, dx = \sum_{i=1}^n \, r(x_i) \, \int_a^b \omega(x) \, l_i(x) \, dx = \sum_{i=1}^n \, r(x_i) \, w_i,

where {{math|wi}}, the weight associated with the node {{math|xi}}, is defined to equal the weighted integral of {{math|li(x)}} (see below for other formulas for the weights). But all the {{mvar|xi}} are roots of {{mvar|pn}}, so the division formula above tells us that

h(x_i) = p_n(x_i) \, q(x_i) + r(x_i) = r(x_i),

for all {{mvar|i}}. Thus we finally have

\int_a^b \omega(x)\,h(x)\,dx = \int_a^b \omega(x) \, r(x) \, dx = \sum_{i=1}^n w_i \, r(x_i) = \sum_{i=1}^n w_i \, h(x_i).

This proves that for any polynomial {{math|h(x)}} of degree {{math|2n − 1}} or less, its integral is given exactly by the Gaussian quadrature sum.

To prove the second part of the claim, consider the factored form of the polynomial {{math|pn}}. Any complex conjugate roots will yield a quadratic factor that is either strictly positive or strictly negative over the entire real line. Any factors for roots outside the interval from {{mvar|a}} to {{mvar|b}} will not change sign over that interval. Finally, for factors corresponding to roots {{mvar|xi}} inside the interval from {{mvar|a}} to {{mvar|b}} that are of odd multiplicity, multiply {{math|pn}} by one more factor to make a new polynomial

p_n(x) \, \prod_i (x - x_i).

This polynomial cannot change sign over the interval from {{mvar|a}} to {{mvar|b}} because all its roots there are now of even multiplicity. So the integral

\int_a^b p_n(x) \, \left( \prod_i (x - x_i) \right) \, \omega(x) \, dx \ne 0,

since the weight function {{math|ω(x)}} is always non-negative. But {{math|pn}} is orthogonal to all polynomials of degree {{math|n − 1}} or less, so the degree of the product

\prod_i (x - x_i)

must be at least {{mvar|n}}. Therefore {{math|pn}} has {{mvar|n}} distinct roots, all real, in the interval from {{mvar|a}} to {{mvar|b}}.

== General formula for the weights ==

The weights can be expressed as

{{NumBlk|:|w_{i} = \frac{a_{n}}{a_{n-1}} \frac{\int_{a}^{b} \omega(x) p_{n-1}(x)^2 dx}{p'_{n}(x_{i}) p_{n-1}(x_{i})}|{{EquationRef|1}}}}

where a_{k} is the coefficient of x^{k} in p_{k}(x). To prove this, note that using Lagrange interpolation one can express {{math|r(x)}} in terms of r(x_{i}) as

r(x) = \sum_{i=1}^{n} r(x_{i}) \prod_{\begin{smallmatrix} 1 \leq j \leq n \\ j \neq i \end{smallmatrix}}\frac{x-x_{j}}{x_{i}-x_{j}}

because {{math|r(x)}} has degree less than {{mvar|n}} and is thus fixed by the values it attains at {{mvar|n}} different points. Multiplying both sides by {{math|ω(x)}} and integrating from {{mvar|a}} to {{mvar|b}} yields

\int_{a}^{b}\omega(x)r(x)dx = \sum_{i=1}^{n} r(x_{i}) \int_{a}^{b}\omega(x)\prod_{\begin{smallmatrix} 1 \leq j \leq n \\ j \neq i \end{smallmatrix}} \frac{x-x_{j}}{x_{i}-x_{j}}dx

The weights {{mvar|wi}} are thus given by

w_{i} = \int_{a}^{b}\omega(x)\prod_{\begin{smallmatrix}1\leq j\leq n\\j\neq i\end{smallmatrix}}\frac{x-x_{j}}{x_{i}-x_{j}}dx

This integral expression for w_{i} can be expressed in terms of the orthogonal polynomials p_{n}(x) and p_{n-1}(x) as follows.

We can write

\prod_{\begin{smallmatrix} 1 \leq j \leq n \\ j \neq i \end{smallmatrix}} \left(x-x_{j}\right) = \frac{\prod_{1\leq j\leq n} \left(x - x_{j}\right)}{x-x_{i}} = \frac{p_{n}(x)}{a_{n}\left(x-x_{i}\right)}

where a_{n} is the coefficient of x^n in p_{n}(x). Taking the limit of {{mvar|x}} to x_{i} and applying L'Hôpital's rule yields

\prod_{\begin{smallmatrix} 1 \leq j \leq n \\ j \neq i \end{smallmatrix}} \left(x_{i}-x_{j}\right) = \frac{p'_{n}(x_{i})}{a_{n}}

We can thus write the integral expression for the weights as

{{NumBlk|:|w_{i} = \frac{1}{p'_{n}(x_{i})}\int_{a}^{b}\omega(x)\frac{p_{n}(x)}{x-x_{i}}dx|{{EquationRef|2}}}}

In the integrand, writing

\frac{1}{x-x_i} = \frac{1 - \left(\frac{x}{x_i}\right)^{k}}{x - x_i} + \left(\frac{x}{x_i}\right)^{k} \frac{1}{x - x_i}

yields

\int_a^b\omega(x)\frac{x^kp_n(x)}{x-x_i}dx = x_i^k \int_{a}^{b}\omega(x)\frac{p_n(x)}{x-x_i}dx

provided k \leq n, because

\frac{1-\left(\frac{x}{x_{i}}\right)^{k}}{x-x_{i}}

is a polynomial of degree {{math|k − 1}} which is then orthogonal to p_{n}(x). So, if {{math|q(x)}} is a polynomial of at most nth degree we have

\int_{a}^{b}\omega(x)\frac{p_{n}(x)}{x-x_{i}} dx = \frac{1}{q(x_{i})} \int_{a}^{b} \omega(x)\frac{q(x) p_n(x)}{x-x_{i}}dx

We can evaluate the integral on the right hand side for q(x) = p_{n-1}(x) as follows. Because \frac{p_{n}(x)}{x-x_{i}} is a polynomial of degree {{math|n − 1}}, we have

\frac{p_{n}(x)}{x-x_{i}} = a_{n}x^{n-1} + s(x)

where {{math|s(x)}} is a polynomial of degree n - 2. Since {{math|s(x)}} is orthogonal to p_{n-1}(x) we have

\int_{a}^{b}\omega(x)\frac{p_{n}(x)}{x-x_{i}}dx=\frac{a_{n}}{p_{n-1}(x_{i})} \int_{a}^{b}\omega(x)p_{n-1}(x)x^{n-1}dx

We can then write

x^{n-1} = \left(x^{n-1} - \frac{p_{n-1}(x)}{a_{n-1}}\right) + \frac{p_{n-1}(x)}{a_{n-1}}

The term in the brackets is a polynomial of degree n - 2, which is therefore orthogonal to p_{n-1}(x). The integral can thus be written as

\int_{a}^{b}\omega(x)\frac{p_{n}(x)}{x-x_{i}}dx = \frac{a_{n}}{a_{n-1} p_{n-1}(x_{i})} \int_{a}^{b}\omega(x) p_{n-1}(x)^{2} dx

According to equation ({{EquationNote|2}}), the weights are obtained by dividing this by p'_{n}(x_{i}) and that yields the expression in equation ({{EquationNote|1}}).
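Equation ({{EquationNote|1}}) can be checked numerically. The sketch below (assuming NumPy) specializes it to the Legendre case {{math|1=ω(x) = 1}} on {{math|[−1, 1]}}, where \int_{-1}^{1} P_{n-1}(x)^2 dx = \frac{2}{2n-1}, and compares the result with NumPy's Gauss–Legendre weights. Note that the formula is invariant under rescaling of p_n and p_{n-1}, so the standard (non-monic) Legendre polynomials can be used.

<syntaxhighlight lang="python">
import numpy as np
from numpy.polynomial import legendre as L

n = 6
x, w_ref = L.leggauss(n)

Pn = L.Legendre.basis(n)           # p_n   (here the standard Legendre polynomial)
Pm = L.Legendre.basis(n - 1)       # p_{n-1}
a_n = L.leg2poly(Pn.coef)[-1]      # leading coefficient of p_n in the power basis
a_m = L.leg2poly(Pm.coef)[-1]      # leading coefficient of p_{n-1}
norm = 2.0 / (2*(n - 1) + 1)       # ∫ P_{n-1}(x)^2 dx over [-1, 1]

w = (a_n / a_m) * norm / (Pn.deriv()(x) * Pm(x))   # equation (1)
print(np.allclose(w, w_ref))       # -> True
</syntaxhighlight>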

w_{i} can also be expressed in terms of the orthogonal polynomials p_{n}(x) and now p_{n+1}(x). In the three-term recurrence relation p_{n+1}(x_{i}) = (a) p_{n}(x_{i}) + (b) p_{n-1}(x_{i}), the term with p_{n}(x_{i}) vanishes because x_{i} is a root of p_{n}, so p_{n-1}(x_{i}) in Eq. (1) can be replaced by \frac{1}{b} p_{n+1} \left(x_i\right).

==Proof that the weights are positive==

Consider the following polynomial of degree 2n - 2

f(x) = \prod_{\begin{smallmatrix} 1 \leq j \leq n \\ j \neq i \end{smallmatrix}}\frac{\left(x - x_j\right)^2}{\left(x_i - x_j\right)^2}

where, as above, the {{mvar|xj}} are the roots of the polynomial p_{n}(x).

Clearly f(x_j) = \delta_{ij}. Since the degree of f(x) is less than 2n - 1, the Gaussian quadrature formula involving the weights and nodes obtained from p_{n}(x) applies. Since f(x_{j}) = 0 for {{mvar|j}} not equal to {{mvar|i}}, we have

\int_{a}^{b}\omega(x)f(x)dx=\sum_{j=1}^{n}w_{j}f(x_{j}) = \sum_{j=1}^{n} \delta_{ij} w_j = w_{i}.

Since both \omega(x) and f(x) are non-negative functions, it follows that w_{i} > 0.

= Computation of Gaussian quadrature rules =

There are many algorithms for computing the nodes {{mvar|xi}} and weights {{mvar|wi}} of Gaussian quadrature rules. The most popular are the Golub–Welsch algorithm requiring {{math|O(n{{sup|2}})}} operations, Newton's method for solving p_n(x) = 0 using the three-term recurrence for evaluation requiring {{math|O(n{{sup|2}})}} operations, and asymptotic formulas for large {{mvar|n}} requiring {{math|O(n)}} operations.
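As an illustration of the Newton-iteration approach, here is a minimal sketch (assuming NumPy; the classical starting guess \cos\left(\pi\frac{i - 1/4}{n + 1/2}\right) for the {{mvar|i}}-th root is an assumption of this sketch, not taken from the text above): P_n and P'_n are evaluated with the three-term recurrence and the roots are polished by Newton steps.

<syntaxhighlight lang="python">
import numpy as np
from numpy.polynomial.legendre import leggauss

def legendre_pair(n, x):
    """Evaluate P_n(x) and P_n'(x) via the three-term recurrence."""
    p_prev, p = np.ones_like(x), x                     # P_0 and P_1
    for k in range(1, n):
        p_prev, p = p, ((2*k + 1)*x*p - k*p_prev) / (k + 1)
    dp = n * (x*p - p_prev) / (x**2 - 1.0)             # derivative identity
    return p, dp

def gauss_legendre_newton(n, iterations=10):
    i = np.arange(1, n + 1)
    x = np.cos(np.pi * (i - 0.25) / (n + 0.5))         # classical starting guesses
    for _ in range(iterations):
        p, dp = legendre_pair(n, x)
        x = x - p / dp                                  # Newton step
    _, dp = legendre_pair(n, x)
    w = 2.0 / ((1.0 - x**2) * dp**2)                    # Gauss–Legendre weights
    return x[::-1], w[::-1]                             # ascending order

x, w = gauss_legendre_newton(10)
x_ref, w_ref = leggauss(10)
print(np.allclose(x, x_ref), np.allclose(w, w_ref))     # True True
</syntaxhighlight>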

== Recurrence relation ==

Orthogonal polynomials p_r with (p_r, p_s) = 0 for r \ne s for a scalar product (\cdot , \cdot), degree (p_r) = r and leading coefficient one (i.e. monic orthogonal polynomials) satisfy the recurrence relation

p_{r+1}(x) = (x - a_{r,r}) p_r(x) - a_{r,r-1} p_{r-1}(x) - \cdots - a_{r,0}p_0(x)

where the scalar product is defined by

(f(x),g(x))=\int_a^b\omega(x)f(x)g(x)dx

for r = 0, 1, \ldots, n - 1 where {{mvar|n}} is the maximal degree which can be taken to be infinity, and where a_{r,s} = \frac{\left(xp_r, p_s\right)}{\left(p_s, p_s\right)}. First of all, the polynomials defined by the recurrence relation starting with p_0(x) = 1 have leading coefficient one and correct degree. Given the starting point by p_0, the orthogonality of p_r can be shown by induction. For r = s = 0 one has

(p_1,p_0) = (x-a_{0,0}) (p_0,p_0) = (xp_0,p_0) - a_{0,0}(p_0,p_0) = (xp_0,p_0) - (xp_0,p_0) = 0.

Now if p_0, p_1, \ldots, p_r are orthogonal, then also p_{r+1}, because in

(p_{r+1}, p_s) = (xp_r, p_s) - a_{r,r}(p_r, p_s) - a_{r,r-1}(p_{r-1}, p_s) - \cdots - a_{r,0}(p_0, p_s)

all scalar products vanish except for the first one and the one where p_s meets the same orthogonal polynomial. Therefore,

(p_{r+1},p_s) = (xp_r,p_s) - a_{r,s}(p_s,p_s) = (xp_r,p_s)-(xp_r,p_s) = 0.

However, if the scalar product satisfies (xf, g) = (f,xg) (which is the case for Gaussian quadrature), the recurrence relation reduces to a three-term recurrence relation: For s < r - 1, xp_s is a polynomial of degree less than or equal to {{math|r − 1}}. On the other hand, p_r is orthogonal to every polynomial of degree less than or equal to {{math|r − 1}}. Therefore, one has (xp_r, p_s) = (p_r, xp_s) = 0 and a_{r,s} = 0 for {{math|s < r − 1}}. The recurrence relation then simplifies to

p_{r+1}(x) = (x-a_{r,r}) p_r(x) - a_{r,r-1} p_{r-1}(x)

or

p_{r+1}(x) = (x-a_r) p_r(x) - b_r p_{r-1}(x)

(with the convention p_{-1}(x) \equiv 0) where

a_r := \frac{(xp_r,p_r)}{(p_r,p_r)}, \qquad b_r := \frac{(xp_r,p_{r-1})}{(p_{r-1},p_{r-1})} = \frac{(p_r,p_r)}{(p_{r-1},p_{r-1})}

(the last because of (xp_r, p_{r-1}) = (p_r, xp_{r-1}) = (p_r, p_r), since xp_{r-1} differs from p_r by a degree less than {{mvar|r}}).
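The recurrence can be used constructively: starting from p_0 = 1, one computes a_r and b_r from the inner products and then builds p_{r+1}. The sketch below (assuming NumPy; the inner products are evaluated with a high-order Gauss–Legendre rule, an assumption made purely to carry out the integrals, which are exact here because the integrands are low-degree polynomials) does this for {{math|1=ω(x) = 1}} on {{math|[−1, 1]}} and checks the result against the known coefficients for this case, a_r = 0 and b_r = \frac{r^2}{4r^2 - 1}.

<syntaxhighlight lang="python">
import numpy as np
from numpy.polynomial.legendre import leggauss

# Quadrature grid used only to evaluate the inner products.
xq, wq = leggauss(60)
ip = lambda f, g: np.sum(wq * f * g)       # (f, g) = ∫ ω f g dx with ω(x) = 1

n = 5
p_prev = np.zeros_like(xq)                 # values of p_{-1} ≡ 0 on the grid
p = np.ones_like(xq)                       # values of p_0 ≡ 1
a, b = np.zeros(n), np.zeros(n)
for r in range(n):
    a[r] = ip(xq * p, p) / ip(p, p)
    b[r] = 0.0 if r == 0 else ip(p, p) / ip(p_prev, p_prev)
    p_prev, p = p, (xq - a[r]) * p - b[r] * p_prev     # three-term recurrence

# Known values for this weight function: a_r = 0 and b_r = r^2 / (4 r^2 - 1).
r = np.arange(1, n)
print(np.allclose(a, 0.0), np.allclose(b[1:], r**2 / (4.0*r**2 - 1.0)))   # True True
</syntaxhighlight>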

== The Golub–Welsch algorithm ==

The three-term recurrence relation can be written in matrix form J\tilde{P} = x\tilde{P} - p_n(x) \mathbf{e}_n where \tilde{P} = \begin{bmatrix} p_0(x) & p_1(x) & \cdots & p_{n-1}(x) \end{bmatrix}^\mathsf{T}, \mathbf{e}_n is the nth standard basis vector, i.e., \mathbf{e}_n = \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix}^\mathsf{T}, and {{mvar|J}} is the following tridiagonal matrix, called the Jacobi matrix:

\mathbf{J} = \begin{bmatrix}

a_0 & 1 & 0 & \cdots & 0 \\

b_1 & a_1 & 1 & \ddots & \vdots \\

0 & b_2 & \ddots & \ddots & 0 \\

\vdots & \ddots & \ddots & a_{n-2} & 1 \\

0 & \cdots & 0 & b_{n-1} & a_{n-1}

\end{bmatrix}.

The zeros x_j of the polynomial p_n(x), which are used as nodes for the Gaussian quadrature, can be found by computing the eigenvalues of this matrix. This procedure is known as the Golub–Welsch algorithm.

For computing the weights and nodes, it is preferable to consider the symmetric tridiagonal matrix \mathcal{J} with elements

\begin{align}

\mathcal{J}_{k,k} = J_{k,k} &= a_{k-1} & k &= 1,2,\ldots,n \\[2.1ex]

\mathcal{J}_{k-1,k} = \mathcal{J}_{k,k-1} = \sqrt{J_{k,k-1}J_{k-1,k}} &= \sqrt{b_{k-1}} & k &= \hphantom{1,\,}2,\ldots,n.

\end{align}

That is,

\mathcal{J} = \begin{bmatrix}

a_0 & \sqrt{b_1} & 0 & \cdots & 0 \\

\sqrt{b_1} & a_1 & \sqrt{b_2} & \ddots & \vdots \\

0 & \sqrt{b_2} & \ddots & \ddots & 0 \\

\vdots & \ddots & \ddots & a_{n-2} & \sqrt{b_{n-1}} \\

0 & \cdots & 0 & \sqrt{b_{n-1}} & a_{n-1}

\end{bmatrix}.

{{math|J}} and \mathcal{J} are similar matrices and therefore have the same eigenvalues (the nodes). The weights can be computed from the corresponding eigenvectors: if \phi^{(j)} is a normalized eigenvector (i.e., an eigenvector with Euclidean norm equal to one) associated with the eigenvalue {{mvar|xj}}, the corresponding weight can be computed from the first component of this eigenvector, namely:

w_j = \mu_0 \left(\phi_1^{(j)}\right)^2

where \mu_0 is the integral of the weight function

\mu_0 = \int_a^b \omega(x) dx.

See, for instance, {{harv|Gil|Segura|Temme|2007}} for further details.
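A compact sketch of this procedure for the Legendre case (assuming NumPy; the monic recurrence coefficients a_r = 0, b_r = \frac{r^2}{4r^2 - 1} and \mu_0 = 2 are the values for {{math|1=ω(x) = 1}} on {{math|[−1, 1]}}, as in the recurrence sketch above):

<syntaxhighlight lang="python">
import numpy as np
from numpy.polynomial.legendre import leggauss

def golub_welsch_legendre(n):
    k = np.arange(1, n)
    beta = np.sqrt(k**2 / (4.0*k**2 - 1.0))       # sqrt(b_k) for monic Legendre
    J = np.diag(beta, 1) + np.diag(beta, -1)      # a_k = 0 on the main diagonal
    nodes, vectors = np.linalg.eigh(J)            # eigenvalues = quadrature nodes
    mu0 = 2.0                                     # ∫ ω(x) dx over [-1, 1]
    weights = mu0 * vectors[0, :]**2              # first component of each eigenvector
    return nodes, weights

x, w = golub_welsch_legendre(7)
x_ref, w_ref = leggauss(7)
print(np.allclose(x, x_ref), np.allclose(w, w_ref))   # True True
</syntaxhighlight>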

= Error estimates =

The error of a Gaussian quadrature rule can be stated as follows.{{harvnb|Stoer|Bulirsch|2002|loc=Thm 3.6.24}} For an integrand which has {{math|2n}} continuous derivatives,

\int_a^b \omega(x)\,f(x)\,dx - \sum_{i=1}^n w_i\,f(x_i) = \frac{f^{(2n)}(\xi)}{(2n)!} \, (p_n, p_n)

for some {{mvar|ξ}} in {{math|(a, b)}}, where {{mvar|pn}} is the monic (i.e. the leading coefficient is {{math|1}}) orthogonal polynomial of degree {{mvar|n}} and where

(f,g) = \int_a^b \omega(x) f(x) g(x) \, dx.

In the important special case of {{math|1=ω(x) = 1}}, we have the error estimate{{harvnb|Kahaner|Moler|Nash|1989|loc=§5.2}}

\frac{\left(b - a\right)^{2n+1} \left(n!\right)^4}{(2n + 1)\left[\left(2n\right)!\right]^3} f^{(2n)} (\xi), \qquad a < \xi < b.

Stoer and Bulirsch remark that this error estimate is inconvenient in practice, since it may be difficult to estimate the order {{math|2n}} derivative, and furthermore the actual error may be much less than a bound established by the derivative. Another approach is to use two Gaussian quadrature rules of different orders, and to estimate the error as the difference between the two results. For this purpose, Gauss–Kronrod quadrature rules can be useful.
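A minimal illustration of that difference-based error estimate (a sketch assuming NumPy; the integrand \cos x on {{math|[0, 2]}} is an arbitrary example):

<syntaxhighlight lang="python">
import numpy as np
from numpy.polynomial.legendre import leggauss

def gauss(f, a, b, n):
    xi, w = leggauss(n)
    x = 0.5*(b - a)*xi + 0.5*(a + b)
    return 0.5*(b - a)*np.sum(w * f(x))

f, a, b = np.cos, 0.0, 2.0
low, high = gauss(f, a, b, 5), gauss(f, a, b, 10)
print(abs(high - low))               # difference-based error indicator
print(abs(high - np.sin(2.0)))       # actual error of the higher-order rule
</syntaxhighlight>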

= Gauss–Kronrod rules =

{{main|Gauss–Kronrod quadrature formula}}

If the interval {{math|[a, b]}} is subdivided, the Gauss evaluation points of the new subintervals never coincide with the previous evaluation points (except at zero for odd numbers), and thus the integrand must be evaluated at every point. Gauss–Kronrod rules are extensions of Gauss quadrature rules generated by adding {{math|n + 1}} points to an {{mvar|n}}-point rule in such a way that the resulting rule is of order {{math|2n + 1}}. This allows for computing higher-order estimates while re-using the function values of a lower-order estimate. The difference between a Gauss quadrature rule and its Kronrod extension is often used as an estimate of the approximation error.
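In practice such pairs are usually accessed through a library. For instance, SciPy's general-purpose integrator is built on the QUADPACK routines, which use Gauss–Kronrod pairs internally, and its second return value is an error estimate of this kind (a brief usage sketch, assuming SciPy is available):

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad

value, error_estimate = quad(np.cos, 0.0, 2.0)
print(value, error_estimate)   # ≈ sin(2), together with a small estimated error
</syntaxhighlight>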

= Gauss–Lobatto rules =

Gauss–Lobatto quadrature, also known as Lobatto quadrature,{{harvnb|Abramowitz|Stegun|1983|p=888}} is named after the Dutch mathematician Rehuel Lobatto. It is similar to Gaussian quadrature with the following differences:

  1. The integration points include the end points of the integration interval.
  2. It is accurate for polynomials up to degree {{math|2n − 3}}, where {{mvar|n}} is the number of integration points.{{harvnb|Quarteroni|Sacco|Saleri|2000}}

Lobatto quadrature of function {{math|f(x)}} on interval {{math|[−1, 1]}}:

\int_{-1}^1 {f(x) \, dx} = \frac {2} {n(n-1)}[f(1) + f(-1)] + \sum_{i = 2}^{n-1} {w_i f(x_i)} + R_n.

Abscissas: {{mvar|xi}} is the (i - 1)-st zero of P'_{n-1}(x); here P_m(x) denotes the standard Legendre polynomial of degree {{mvar|m}}, and the prime denotes the derivative.

Weights:

w_i = \frac{2}{n(n - 1)\left[P_{n-1}\left(x_i\right)\right]^2}, \qquad x_i \ne \pm 1.

Remainder:

R_n = \frac{-n\left(n - 1\right)^3 2^{2n-1} \left[\left(n - 2\right)!\right]^4}{(2n-1) \left[\left(2n - 2\right)!\right]^3} f^{(2n-2)}(\xi), \qquad -1 < \xi < 1.
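A short sketch of this rule (assuming NumPy): the interior abscissas are computed as the roots of P'_{n-1} and the weights from the formulas above; the printed values for {{math|1=n = 5}} can be compared with the table below.

<syntaxhighlight lang="python">
import numpy as np
from numpy.polynomial import legendre as L

def gauss_lobatto(n):
    Pm = L.Legendre.basis(n - 1)                 # P_{n-1}
    interior = np.sort(Pm.deriv().roots())       # zeros of P'_{n-1}
    x = np.concatenate(([-1.0], interior, [1.0]))
    w_end = 2.0 / (n*(n - 1))                    # weight at the endpoints ±1
    w_int = 2.0 / (n*(n - 1) * Pm(interior)**2)  # interior weights
    return x, np.concatenate(([w_end], w_int, [w_end]))

x, w = gauss_lobatto(5)
print(x)                       # -1, -sqrt(3/7), 0, sqrt(3/7), 1
print(w)                       # 1/10, 49/90, 32/45, 49/90, 1/10
print(np.sum(w * x**6), 2/7)   # exact for degrees up to 2n - 3 = 7
</syntaxhighlight>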

Some of the weights are:

class="wikitable" style="margin:auto; background:white; text-align:center;"

! Number of points, n

! Points, {{mvar|xi}}

! Weights, {{mvar|wi}}

rowspan="2" | 3

| 0

\frac{4}{3}
\pm 1\frac{1}{3}
rowspan="2" | 4

| \pm \sqrt{\frac{1}{5}}

\frac{5}{6}
\pm 1\frac{1}{6}
rowspan="3" | 5

| 0

\frac{32}{45}
\pm\sqrt{\frac{3}{7}}\frac{49}{90}
\pm 1\frac{1}{10}
rowspan="3" | 6

| \pm\sqrt{\frac{1}{3}-\frac{2\sqrt{7}}{21}}

\frac{14+\sqrt{7}}{30}
\pm\sqrt{\frac{1}{3} + \frac{2\sqrt{7}}{21}}\frac{14 - \sqrt{7}}{30}
\pm 1\frac{1}{15}
rowspan="4" | 7

| 0

\frac{256}{525}
\pm\sqrt{\frac{5}{11}-\frac{2}{11}\sqrt{\frac{5}{3}}}\frac{124 + 7\sqrt{15}}{350}
\pm\sqrt{\frac{5}{11} + \frac{2}{11}\sqrt{\frac{5}{3}}}\frac{124 - 7\sqrt{15}}{350}
\pm 1\frac{1}{21}

An adaptive variant of this algorithm with 2 interior nodes{{harvnb|Gander|Gautschi|2000}} is found in GNU Octave and MATLAB as quadl and integral.{{harvnb|MathWorks|2012}}{{harvnb|Eaton|Bateman|Hauberg|Wehbring|2018}}

== References ==

= Citations =

= Bibliography =

{{refbegin}}

{{sfn whitelist | CITEREFAbramowitzStegun1983}}

  • {{AS ref | 25.4, Integration }}
  • {{cite journal | first1 = Donald G. | last1 = Anderson | title = Gaussian quadrature formulae for \int_0^1 -\ln(x)f(x) dx | year = 1965 | volume = 19 | number = 91 | pages = 477–481 | journal = Math. Comp. | doi = 10.1090/s0025-5718-1965-0178569-1 | doi-access = free }}
  • {{cite journal | first1 = Bernard | last1 = Danloy | title = Numerical construction of Gaussian quadrature formulas for \int_0^1 (-\log x) x^\alpha f(x) dx and \int_0^\infty E_m(x) f(x) dx | journal = Math. Comp. | year = 1973 | volume = 27 | number = 124 | pages = 861–869 | doi = 10.1090/S0025-5718-1973-0331730-X | mr = 0331730}}
  • {{cite web | last1 = Eaton | first1 = John W. | last2 = Bateman | first2 = David | last3 = Hauberg | first3 = Søren | last4 = Wehbring | first4 = Rik | title = Functions of One Variable (GNU Octave) | url = https://octave.org/doc/v4.2.2/Functions-of-One-Variable.html#XREFquadl | access-date = 28 September 2018 | year = 2018}}
  • {{cite journal | title = Adaptive Quadrature - Revisited | last1 = Gander | first1 = Walter | last2 = Gautschi | first2 = Walter | journal = BIT Numerical Mathematics | date = 2000 | volume = 40 | issue = 1 | pages = 84–101 | doi = 10.1023/A:1022318402393 | url = https://www.inf.ethz.ch/personal/gander/}}
  • {{cite book | last = Gauss | first = Carl Friedrich | author-link = Carl Friedrich Gauss | title = Methodus nova integralium valores per approximationem inveniendi | series = Comm. Soc. Sci. Göttingen Math | volume = 3 | year = 1815 | pages = 29–76 | url = http://gallica.bnf.fr/ark:/12148/bpt6k2412190.r=Gauss.langEN }} Dated 1814; also published in Werke, Band 3, 1876, pp. 163–196. An English translation is available at Wikisource.
  • {{cite journal | first1 = Walter | last1 = Gautschi | title = Construction of Gauss–Christoffel Quadrature Formulas | journal= Math. Comp. | year = 1968 | volume = 22 | issue= 102 | pages = 251–270 | doi = 10.1090/S0025-5718-1968-0228171-0 | mr = 0228171}}
  • {{cite journal | first1 = Walter | last1 = Gautschi | title = On the construction of Gaussian quadrature rules from modified moments | journal= Math. Comp. | year = 1970 | volume = 24 | issue = 110 | pages = 245–260 | doi = 10.1090/S0025-5718-1970-0285117-6 | mr = 0285177}}
  • {{cite book | first = Walter | last = Gautschi | title = A Software Repository for Gaussian Quadratures and Christoffel Functions | publisher = SIAM | isbn = 978-1-611976-34-2 | year = 2020}}
  • {{citation | last1 = Gil | first1 = Amparo | last2 = Segura | first2 = Javier | last3 = Temme | first3 = Nico M. | chapter=§5.3: Gauss quadrature | title = Numerical Methods for Special Functions | year = 2007 | publisher = SIAM | isbn = 978-0-89871-634-4 }}
  • {{cite journal | first1 = Gene H. | last1 = Golub | title = Calculation of Gauss Quadrature Rules | journal = Mathematics of Computation | author-link = Gene Golub | first2 = John H. | last2 = Welsch | volume = 23 | issue = 106 | year = 1969 | pages = 221–230 | jstor = 2004418 | doi = 10.1090/S0025-5718-69-99647-1 | doi-access= free}}
  • {{cite journal | first = C. G. J. | last = Jacobi | author-link = Carl Gustav Jacob Jacobi | title = Ueber Gauß' neue Methode, die Werthe der Integrale näherungsweise zu finden | journal = Journal für die Reine und Angewandte Mathematik | volume = 1 | year = 1826 | pages = 301–308 | url = http://gdz.sub.uni-goettingen.de/dms/load/img/?PPN=PPN243919689_0001&DMDID=DMDLOG_0035 | postscript = ; also in Werke, Band 6.}}
  • {{Cite journal | last1 = Kabir | first1 = Hossein | last2 = Matikolaei | first2 = Sayed Amir Hossein Hassanpour | date = 2017 | title = Implementing an Accurate Generalized Gaussian Quadrature Solution to Find the Elastic Field in a Homogeneous Anisotropic Media | journal = Journal of the Serbian Society for Computational Mechanics | volume = 11 | issue = 1 | pages = 11–19 | doi = 10.24874/jsscm.2017.11.01.02}}
  • {{cite book | last1 = Kahaner | first1 = David | last2 = Moler | first2 = Cleve | author2-link = Cleve Moler | last3 = Nash | first3 = Stephen | title = Numerical Methods and Software | year = 1989 | publisher = Prentice-Hall | isbn = 978-0-13-627258-8 | url-access = registration | url = https://archive.org/details/numericalmethods0000kaha }}
  • {{cite journal | first1 = Teresa | last1 = Laudadio | first2 = Nicola | last2 = Mastronardi | first3 = Paul | last3 = Van Dooren | title = Computing Gaussian quadrature rules with high relative accuracy | journal = Numerical Algorithms | volume = 92 | year = 2023 | pages = 767–793|doi=10.1007/s11075-022-01297-9| doi-access = free }}
  • {{citation | first1 = Dirk P. | last1 = Laurie | title = Accurate recovery of recursion coefficients from Gaussian quadrature formulas | year = 1999 | volume = 112 | number = 1–2 | pages = 165–180 | journal = J. Comput. Appl. Math. | doi = 10.1016/S0377-0427(99)00228-9 | doi-access = free }}
  • {{ cite journal | first1 = Dirk P. | last1 = Laurie | title = Computation of Gauss-type quadrature formulas | year = 2001 | pages = 201–217 | volume = 127 | number = 1–2 | journal = J. Comput. Appl. Math. | doi = 10.1016/S0377-0427(00)00506-9 | bibcode = 2001JCoAM.127..201L }}
  • {{cite web | title = Numerical integration - MATLAB integral | url = https://www.mathworks.com/help/matlab/ref/integral.html | author = MathWorks | date = 2012}}
  • {{cite journal | first1 = R. | last1 = Piessens | title = Gaussian quadrature formulas for the numerical integration of Bromwich's integral and the inversion of the laplace transform | year = 1971 | volume = 5 | number = 1 | journal= J. Eng. Math. | pages = 1–9 | doi = 10.1007/BF01535429 | bibcode = 1971JEnMa...5....1P }}
  • {{Citation | last1 = Press | first1 = WH | author1-link = William H. Press | last2 = Teukolsky | first2 = SA | last3 = Vetterling | first3 = WT | last4 = Flannery | first4 = BP | year = 2007 | title = Numerical Recipes: The Art of Scientific Computing | edition = 3rd | publisher = Cambridge University Press | location = New York | isbn = 978-0-521-88068-8 | chapter = Section 4.6. Gaussian Quadratures and Orthogonal Polynomials | chapter-url = http://apps.nrbook.com/empanel/index.html?pg=179}}
  • {{cite book | author-link1= Alfio Quarteroni | last1 = Quarteroni | first1 = Alfio | first2 = Riccardo | last2 = Sacco | first3 = Fausto | last3 = Saleri | title = Numerical Mathematics | location = New York | publisher = Springer-Verlag | pages = 425–478 | date = 2000 | isbn = 0-387-98959-5 | title-link = Numerical Mathematics |doi=10.1007/978-3-540-49809-4_10}}
  • {{cite journal | first1 = Cordian | last1 = Riener | last2 = Schweighofer | first2 = Markus | title = Optimization approaches to quadrature: New characterizations of Gaussian quadrature on the line and quadrature with few nodes on plane algebraic curves, on the plane and in higher dimensions | year = 2018 | journal = Journal of Complexity | volume = 45 | pages = 22–54 | doi = 10.1016/j.jco.2017.10.002 | arxiv = 1607.08404 }}
  • {{ cite journal | first1 = Robin P. | last1 = Sagar | title = A Gaussian quadrature for the calculation of generalized Fermi-Dirac integrals | year = 1991 | journal = Comput. Phys. Commun. | volume = 66 | pages = 271–275 | number = 2–3 | doi = 10.1016/0010-4655(91)90076-W | bibcode = 1991CoPhC..66..271S }}
  • {{citation | last1 = Stoer | first1 = Josef | last2 = Bulirsch | first2 = Roland | year = 2002 | title = Introduction to Numerical Analysis | edition = 3rd | publisher = Springer | isbn = 978-0-387-95452-3 }}
  • {{dlmf | title=§3.5(v): Gauss Quadrature | id = 3.5.v | last = Temme | first = Nico M.}}
  • {{cite journal | first1 = E. | last1 = Yakimiw | title = Accurate computation of weights in classical Gauss–Christoffel quadrature rules | year = 1996 | journal = J. Comput. Phys. | volume = 129 | issue = 2 | pages = 406–430 | bibcode = 1996JCoPh.129..406Y | doi = 10.1006/jcph.1996.0258}}

{{refend}}