
Big Idea

$\int_{-1}^{1} f\,dx$ is a linear functional on $f$. Restricted to $\mathbb{P}_n$ it is just a row vector of weights $\mathbf{w}$ acting on the nodal values: $\int p_n = \mathbf{w}^\top \mathbf{f}$. There are two natural representations. Value-space quadrature sets $w_j = \int \ell_j$ and sums against the nodal values $f_j$. Coefficient-space integration expands $p_n$ in Chebyshev polynomials and integrates each $T_k$ using a closed-form identity. On Chebyshev nodes both reduce to the same rule, Clenshaw-Curtis quadrature, with spectral accuracy for smooth integrands.

Quadrature in Value Space

The quadrature framework is simple. Sample $f$ at nodes $x_0, \ldots, x_n$, build the Lagrange interpolant $p_n = \sum_j f_j\, \ell_j$, and integrate:

$$
\int_{-1}^{1} p_n(x)\, dx \;=\; \int_{-1}^{1} \sum_{j=0}^n f_j\, \ell_j(x)\, dx \;=\; \sum_{j=0}^n \underbrace{\Big(\int_{-1}^{1} \ell_j(x)\, dx\Big)}_{w_j} f_j \;=\; \mathbf{w}^\top \mathbf{f}.
$$

The continuous integral has become a finite-dimensional inner product: the value vector $\mathbf{f}$ tested against the weight vector $\mathbf{w}$. Once the nodes are fixed, the quadrature weights $w_j = \int_{-1}^1 \ell_j(x)\, dx$ are purely geometric and can be computed once and reused for any $f$.
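As a sanity check on this definition, the weights for any small node set can be computed by interpolating each unit vector (which gives the cardinal polynomial $\ell_j$) and integrating the result. A minimal sketch, assuming NumPy's polynomial module; the helper name `lagrange_weights` is ours, not from the text:

```python
import numpy as np

def lagrange_weights(x):
    """w_j = int_{-1}^1 ell_j(t) dt for the Lagrange basis on nodes x."""
    n = len(x) - 1
    w = np.empty(n + 1)
    for j in range(n + 1):
        e = np.zeros(n + 1)
        e[j] = 1.0                           # ell_j(x_i) = delta_ij
        # degree-n fit through n+1 points is exact interpolation
        coeffs = np.polynomial.polynomial.polyfit(x, e, n)
        P = np.polynomial.Polynomial(coeffs).integ()
        w[j] = P(1.0) - P(-1.0)
    return w

# Three equispaced nodes reproduce Simpson's rule on [-1, 1]:
print(lagrange_weights(np.array([-1.0, 0.0, 1.0])))   # ~ [1/3, 4/3, 1/3]
```

The weights depend only on the nodes, so this loop runs once per grid, never per integrand.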

Trapezoidal quadrature uses the hat basis on an equispaced grid and recovers the familiar weights $w_j = h$ (with $h/2$ at the endpoints). Newton-Cotes generalises this to higher-degree polynomial interpolation on equispaced nodes, but as in §2 it suffers from the Runge phenomenon at large $n$. The weights start to alternate in sign and grow exponentially, turning the sum into catastrophic cancellation. Don’t go that route.
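The pathology is easy to exhibit. Here is a sketch (our own helper, not from the text) that builds Newton-Cotes weights by moment matching, $V^\top \mathbf{w} = \mathbf{m}$ with $m_k = \int_{-1}^1 x^k\, dx$, and shows negative weights appearing:

```python
import numpy as np

def newton_cotes_weights(n):
    """Weights of the (n+1)-point Newton-Cotes rule on [-1, 1]."""
    x = np.linspace(-1.0, 1.0, n + 1)
    V = np.vander(x, n + 1, increasing=True)       # V[j, k] = x_j ** k
    # moments of the monomials: int x^k dx = 2/(k+1) for even k, else 0
    m = np.array([2.0 / (k + 1) if k % 2 == 0 else 0.0
                  for k in range(n + 1)])
    return np.linalg.solve(V.T, m)                 # exactness on P_n

print(newton_cotes_weights(2))          # Simpson: [1/3, 4/3, 1/3]
print(newton_cotes_weights(10).min())   # already negative at n = 10
```

Every rule still sums to 2 (it integrates constants exactly), but past $n \approx 8$ the growing positive and negative weights cancel catastrophically in floating point.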

Chebyshev nodes fix it, as always. On them the integrals $w_j = \int_{-1}^{1} \ell_j(x)\, dx$ are bounded and positive. Computing them directly from the barycentric form of $\ell_j$ is cumbersome, but we already have the right machinery. In the next section we expand $p_n$ in the Chebyshev basis and integrate there; afterwards we translate back to read off the explicit weights.

Integration in Coefficient Space

Expand $p_n$ in the Chebyshev basis

$$
p_n(x) \;=\; \sum_{k=0}^n c_k\, T_k(x),
$$

and integrate term by term using linearity:

$$
\int_{-1}^{1} p_n(x)\, dx \;=\; \sum_{k=0}^n c_k \int_{-1}^{1} T_k(x)\, dx.
$$

Everything reduces to the closed-form values of $\int T_k$, which come cleanly out of the $x = \cos\theta$ substitution.

Proposition 1 (Integrals of Chebyshev polynomials)

$$
I_k \;:=\; \int_{-1}^{1} T_k(x)\, dx \;=\; \begin{cases} \dfrac{2}{1 - k^2}, & k \text{ even}, \\[1ex] 0, & k \text{ odd}. \end{cases}
$$

Proof 1

Let $x = \cos\theta$, so $dx = -\sin\theta\, d\theta$ and $T_k(\cos\theta) = \cos(k\theta)$; the minus sign is absorbed by swapping the limits from $\theta: \pi \to 0$ back to $0 \to \pi$. The integral becomes

$$
\int_{-1}^{1} T_k(x)\, dx \;=\; \int_0^{\pi} \cos(k\theta)\, \sin\theta\, d\theta.
$$

Apply the product-to-sum identity $\cos(k\theta)\sin\theta = \tfrac{1}{2}\left[\sin((k+1)\theta) - \sin((k-1)\theta)\right]$ to split the integral into

$$
\tfrac{1}{2}\int_0^\pi \sin((k+1)\theta)\, d\theta \;-\; \tfrac{1}{2}\int_0^\pi \sin((k-1)\theta)\, d\theta.
$$

Each piece uses $\int_0^\pi \sin(m\theta)\, d\theta = (1 - (-1)^m)/m$ for $m \neq 0$, which equals $2/m$ for $m$ odd and $0$ for $m$ even; for $m = 0$ the integrand is identically zero, so the integral vanishes too.

  • If $k$ is even, both $k+1$ and $k-1$ are odd, giving $\tfrac{1}{2}\left[2/(k+1) - 2/(k-1)\right] = -2/(k^2 - 1) = 2/(1-k^2)$.

  • If $k$ is odd with $k \ge 1$, both $k+1$ and $k-1$ are even, and both integrals vanish.

At $k = 0$ the formula reduces directly to $\int_0^\pi \sin\theta\, d\theta = 2$, matching $2/(1-0) = 2$.
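Proposition 1 is easy to spot-check numerically, e.g. with NumPy's Chebyshev class (a verification sketch, not part of the derivation):

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

for k in range(12):
    Tk = Chebyshev.basis(k)       # the polynomial T_k
    P = Tk.integ()                # an antiderivative of T_k
    Ik = P(1.0) - P(-1.0)         # I_k = int_{-1}^1 T_k dx
    expected = 2.0 / (1.0 - k**2) if k % 2 == 0 else 0.0
    assert abs(Ik - expected) < 1e-12
```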

Plugging the proposition into the term-by-term sum collapses the integral of pnp_n to a clean one-liner:

$$
\int_{-1}^{1} p_n(x)\, dx \;=\; 2 c_0 \;+\; \sum_{\substack{k = 2 \\ k \text{ even}}}^n \frac{2 c_k}{1 - k^2}.
$$

Only the even-index Chebyshev coefficients contribute. Operationally: compute $\mathbf{c} = \mathrm{DCT}(\mathbf{f})$ in $O(N \log N)$, then dot it against the integral vector $\mathbf{I} = (I_0, I_1, \ldots, I_N) = (2, 0, -\tfrac{2}{3}, 0, -\tfrac{2}{15}, 0, -\tfrac{2}{35}, \ldots)$ from Proposition 1.
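That recipe fits in a few lines, sketched here with SciPy's type-I DCT, whose unnormalized definition matches the $\sum''$-weighted cosine sum for coefficients on the Chebyshev extreme points (the halving of the first and last coefficients is the same $''$ convention, applied on the $k$ side):

```python
import numpy as np
from scipy.fft import dct

def cc_integrate(f, N):
    """Integrate f over [-1, 1]: Chebyshev coefficients + Proposition 1."""
    x = np.cos(np.pi * np.arange(N + 1) / N)  # Chebyshev extreme points
    c = dct(f(x), type=1) / N                 # c_k = (2/N) sum'' f_j cos(jk pi/N)
    c[0] /= 2                                 # '' halving on the k-sum:
    c[-1] /= 2                                # first and last coefficient
    k = np.arange(0, N + 1, 2)                # only even k contribute
    return c[::2] @ (2.0 / (1.0 - k**2))      # dot against I_k

print(cc_integrate(np.exp, 16))   # e - 1/e = 2.3504..., machine precision
```

For $f = e^x$ the coefficients decay factorially, so $N = 16$ already exhausts double precision.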

Clenshaw-Curtis: Back to Value-Space Weights

Chaining the two views gives the value-space weights we promised. Writing $I_k = \int_{-1}^1 T_k$ for the integrals from Proposition 1, the coefficient-space integral is

$$
\int_{-1}^{1} p_n(x)\, dx \;=\; \int_{-1}^1 \sum_{k=0}^N c_k\, T_k(x)\, dx \;=\; \sum_{k=0}^N c_k\, I_k.
$$

From Where to Place the Nodes, the discrete Chebyshev coefficients are

$$
c_k \;=\; \frac{2}{N} \sum_{j=0}^N{}^{\prime\prime} f(x_j) \cos\!\left(\frac{j k \pi}{N}\right),
$$

where $\sum{}^{\prime\prime}$ halves the $j = 0$ and $j = N$ terms. Substituting this into the coefficient-space integral,

$$
\int_{-1}^{1} p_n(x)\, dx \;=\; \sum_{k=0}^N I_k \cdot \frac{2}{N} \sum_{j=0}^N{}^{\prime\prime} f(x_j) \cos\!\left(\frac{j k \pi}{N}\right),
$$

and swapping the order of summation,

$$
\int_{-1}^{1} p_n(x)\, dx \;=\; \sum_{j=0}^N{}^{\prime\prime} f(x_j) \cdot \underbrace{\frac{2}{N} \sum_{k=0}^N \cos\!\left(\frac{j k \pi}{N}\right) I_k}_{\textstyle w_j\text{ (up to the $''$ on }j = 0, N\text{)}}.
$$

The inner sum in brackets defines the Clenshaw-Curtis weights. Only the even $k$ contribute, since $I_k = 0$ for $k$ odd. So

$$
w_j \;=\; \frac{2}{N} \sum_{\substack{k = 0 \\ k \text{ even}}}^N \cos\!\left(\frac{j k \pi}{N}\right) \cdot \frac{2}{1 - k^2},
$$

with an extra factor of $1/2$ at the endpoints $j = 0, N$ coming from the outer $\sum''$, and a matching $1/2$ on the $k = 0$ and $k = N$ terms of the inner sum, since the interpolant is really $\sum'' c_k T_k$. The closed form is a finite cosine series in $j$; evaluating it at $j = 0, 1, \ldots, N$ produces the whole weight vector in $O(N^2)$ work, or $O(N \log N)$ with an FFT.
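A direct $O(N^2)$ implementation of the cosine-series formula, as a sketch (the two halvings implement the two $\sum''$s; `cc_weights` is our name for it):

```python
import numpy as np

def cc_weights(N):
    """Clenshaw-Curtis weights on the N+1 Chebyshev extreme points."""
    j = np.arange(N + 1)
    k = np.arange(0, N + 1, 2)                # even k only: I_k = 0 otherwise
    Ik = 2.0 / (1.0 - k**2)
    Ik[0] /= 2                                # '' on the inner (k) sum
    if N % 2 == 0:
        Ik[-1] /= 2                           # k = N term present only for even N
    C = np.cos(np.outer(j, k) * np.pi / N)    # cos(j k pi / N)
    w = (2.0 / N) * (C @ Ik)
    w[0] /= 2                                 # '' on the outer (j) sum
    w[-1] /= 2
    return w

w = cc_weights(32)
print(w.min() > 0, abs(w.sum() - 2.0))   # all positive; weights sum to 2
```

For $N = 2$ this returns $(1/3, 4/3, 1/3)$, i.e. Simpson's rule, and for $N = 3$ the weights $(1/9, 8/9, 8/9, 1/9)$.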

(Figure: the Clenshaw-Curtis weights $w_j$ plotted against the nodes.)

All weights are positive, so the quadrature never relies on delicate sign cancellation. The endpoint weights are small to compensate for the endpoint clustering of Chebyshev nodes: each node sits in a region of size $\propto 1/N$ in the interior and $\propto 1/N^2$ near $\pm 1$, and the weights mirror that. Contrast with Newton-Cotes at large $N$, where weights alternate sign and grow exponentially.

Convergence

The quadrature error inherits the truncation error of the Chebyshev expansion, so the rates from Regularity and Coefficient Decay translate directly.

Theorem 1 (Clenshaw-Curtis convergence rates)

Let $I = \int_{-1}^1 f(x)\, dx$ and $I_N = \mathbf{w}^\top \mathbf{f}$ be the Clenshaw-Curtis approximation on $N+1$ Chebyshev nodes.

  1. If $f$ is analytic on a Bernstein ellipse $\mathcal{E}_\rho$, then

     $$|I - I_N| \;=\; O(\rho^{-N}).$$

  2. If $f^{(r-1)}$ is absolutely continuous and $f^{(r)}$ has bounded variation with $r \ge 1$, then

     $$|I - I_N| \;=\; O(N^{-r}).$$

  3. If $f$ has a jump, the rate is no better than $O(N^{-1})$ (although $I_N \to I$ still holds, unlike $p_n$, which fails to converge uniformly).

Proof 2

By construction, $I_N = \int_{-1}^1 p_n(x)\, dx$ for the Chebyshev interpolant $p_n$ of $f$. So

$$
|I - I_N| \;=\; \left|\int_{-1}^1 (f - p_n)(x)\, dx\right| \;\le\; 2\,\|f - p_n\|_\infty.
$$

Cases (1) and (2) follow by applying the interpolant rates from Corollary 1 combined with the $O(\log N)$ Lebesgue-constant bound from the same chapter; the log factor is absorbed into the $O(\cdot)$. For case (3) the $L^\infty$ bound on $f - p_n$ fails, but the coefficient tail $|c_k| = O(1/k)$ summed against the quadrature integral vector $I_k$ gives $|I - I_N| = O(1/N)$.

(Figure: quadrature error vs $N$ for Clenshaw-Curtis and trapezoid, on an analytic integrand and a $C^2$ integrand.)

The analytic integrand drops to machine precision around $N \approx 30$ under Clenshaw-Curtis; trapezoid plods along at $O(N^{-2})$. For the $C^2$ integrand both rules converge algebraically, but Clenshaw-Curtis benefits from each extra derivative the integrand has, while trapezoid is permanently stuck at second order.
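A quick experiment backs up the comparison. As an assumption on our part, $f = e^x$ stands in for the analytic integrand; the Clenshaw-Curtis rule is the DCT-based formulation and the trapezoid rule is the standard composite version:

```python
import numpy as np
from scipy.fft import dct

def cc_integrate(f, N):
    """Clenshaw-Curtis on N+1 Chebyshev nodes, via coefficients."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = dct(f(x), type=1) / N
    c[0] /= 2; c[-1] /= 2
    k = np.arange(0, N + 1, 2)
    return c[::2] @ (2.0 / (1.0 - k**2))

def trap_integrate(f, N):
    """Composite trapezoid on N equal panels over [-1, 1]."""
    x = np.linspace(-1.0, 1.0, N + 1)
    fx = f(x)
    h = 2.0 / N
    return h * (fx.sum() - 0.5 * (fx[0] + fx[-1]))

exact = np.e - 1.0 / np.e
for N in (4, 8, 16, 32):
    print(N, abs(cc_integrate(np.exp, N) - exact),
             abs(trap_integrate(np.exp, N) - exact))
```

The printed Clenshaw-Curtis errors collapse geometrically while the trapezoid errors shrink by roughly a factor of four per doubling, exactly the $O(N^{-2})$ signature.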

A Demo

Try the rule on something genuinely awful. The integrand

$$
g(x) \;=\; e^x\, \big[\,\operatorname{sech}\!\big(4 \sin(40 x)\big)\big]^{e^x}
$$

oscillates 40 times across $[-1, 1]$, has an envelope that itself rides on $e^x$, and admits no closed-form antiderivative. It is, however, real-analytic on $[-1, 1]$, so the regularity dictionary predicts geometric convergence once the resolution is high enough to see the oscillation.
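Without a closed form, the rule can still be checked against itself at successive resolutions. A self-consistency sketch with a DCT-based implementation (our own formulation; since $0 < g(x) \le e^x$, the true integral must lie in $(0, e - 1/e)$):

```python
import numpy as np
from scipy.fft import dct

def clenshaw_curtis(f, N):
    """Clenshaw-Curtis on N+1 Chebyshev nodes, via coefficients."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = dct(f(x), type=1) / N
    c[0] /= 2; c[-1] /= 2
    k = np.arange(0, N + 1, 2)
    return c[::2] @ (2.0 / (1.0 - k**2))

def g(x):
    # sech(t) = 1 / cosh(t); the exponent e^x modulates the envelope
    return np.exp(x) * (1.0 / np.cosh(4.0 * np.sin(40.0 * x))) ** np.exp(x)

# Past the knee, successive doublings agree to many digits.
estimates = {N: clenshaw_curtis(g, N) for N in (512, 1024, 2048, 4096)}
for N, val in estimates.items():
    print(N, val)
```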

(Figure: $g(x)$ on $[-1, 1]$, and the convergence of Clenshaw-Curtis vs trapezoid on it.)

Two regimes, separated by a knee where Clenshaw-Curtis finally resolves the $\sin(40x)$ oscillation. Below the knee the integrand looks rougher than $\mathbb{P}_N$ can capture and both rules flounder. Above it, Clenshaw-Curtis switches to its asymptotic geometric rate and reaches machine precision within a handful of doublings. Trapezoid, blind to the analyticity, plods on at $O(N^{-2})$ throughout.

The take-away: spectral convergence is a promise about the asymptotic regime, contingent on resolving whatever oscillations the integrand already has. Once you cross that threshold, the difference between the two rules is no longer a constant factor. It is the difference between “correct to all digits” and “still wrong in the second.”