
Big Idea

Polynomial interpolation at Chebyshev nodes is the standard tool for one-dimensional approximation. The textbook warning that “high-order interpolation is unstable” is a statement about equispaced nodes, not about polynomial interpolation as such. With Chebyshev nodes and the barycentric evaluation formula, the interpolant converges to the underlying function at a rate determined by that function's smoothness.
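The claim can be made concrete in a few lines. The sketch below (NumPy; the helper names `chebpts` and `baryeval`, and the test function $e^x \sin 5x$, are illustrative choices, not the chapter's code) uses the simplified barycentric weights $w_j = (-1)^j \delta_j$ valid for Chebyshev points of the second kind:

```python
import numpy as np

def chebpts(N):
    """Chebyshev points of the second kind on [-1, 1]."""
    return np.cos(np.pi * np.arange(N + 1) / N)

def baryeval(x, xj, fj):
    """Evaluate the barycentric interpolant through (xj, fj) at points x.

    Uses the weights w_j = (-1)**j * delta_j (delta = 1/2 at the endpoints),
    which are exact for Chebyshev points of the second kind.
    """
    w = (-1.0) ** np.arange(len(xj))
    w[0] *= 0.5
    w[-1] *= 0.5
    x = np.atleast_1d(np.asarray(x, dtype=float))
    p = np.empty_like(x)
    for k, xk in enumerate(x):
        diff = xk - xj
        hit = np.isclose(diff, 0.0)
        if hit.any():
            p[k] = fj[hit][0]          # x coincides with a node
        else:
            t = w / diff
            p[k] = t @ fj / t.sum()
    return p

xj = chebpts(20)
fj = np.exp(xj) * np.sin(5 * xj)
xx = np.linspace(-1, 1, 1000)
err = np.max(np.abs(baryeval(xx, xj, fj) - np.exp(xx) * np.sin(5 * xx)))
# for this analytic function, 21 nodes already resolve it to high accuracy
```

Note the formula is evaluated directly; no Vandermonde system is solved, which is what makes high degrees numerically usable.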

The conceptual tool behind everything that follows is linear algebra on function spaces. Polynomials of degree at most $n$ form an $(n+1)$-dimensional vector space; a function is a vector, sampling is a choice of basis, and differentiation and integration are a linear operator and a linear functional on that space. Approximation theory studies which basis to pick, where to put the nodes, and how the smoothness of the underlying function controls the error. The remainder of the chapter formalises this picture and develops the differentiation, integration, root-finding, and boundary-value-problem solvers it supports.
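As a minimal instance of this picture, take the monomial coefficient basis for degree at most $n$ (an illustrative choice; the chapter itself will argue for Chebyshev): differentiation becomes a nilpotent matrix and integration over $[-1,1]$ a row vector of moments.

```python
import numpy as np

# Polynomials of degree <= n as coefficient vectors (c_0, ..., c_n) in the
# monomial basis, so p(x) = sum c_k x**k.
n = 4

# Differentiation maps c_k x**k to k c_k x**(k-1): a superdiagonal matrix.
D = np.diag(np.arange(1, n + 1), k=1)        # (n+1) x (n+1), nilpotent

# Integration over [-1, 1] is a linear functional: c -> sum c_k * m_k,
# with moments m_k = 2/(k+1) for even k and 0 for odd k.
moments = np.array([(1 - (-1.0) ** (k + 1)) / (k + 1) for k in range(n + 1)])

p = np.array([1.0, 0.0, 3.0, 0.0, 0.0])      # p(x) = 1 + 3 x**2
dp = D @ p                                    # p'(x) = 6x  ->  [0, 6, 0, 0, 0]
integral = moments @ p                        # integral of p over [-1, 1] is 4
```

Everything in Part A is a better-conditioned version of exactly this: a change of basis, a matrix, and a functional.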

Why Polynomials?

The starting point is a 19th-century theorem that justifies everything that follows.

Theorem 1 (Weierstrass Approximation Theorem)

Let $f \in C([a,b])$ and $\varepsilon > 0$. There exists a polynomial $p$ such that

$$\|f - p\|_\infty \;<\; \varepsilon.$$

So polynomials are dense in the space of continuous functions on a bounded interval. Every continuous $f$ can be approximated arbitrarily well, in the supremum norm, by some polynomial of some degree. This is the existence statement that licenses the rest of the chapter. The harder question, and the one we actually answer, is constructive: given $f$, how do we compute a near-best polynomial approximation, and at what cost?
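One classical proof of the theorem is constructive: the Bernstein polynomials $B_n f$ converge uniformly to any continuous $f$ on $[0,1]$. The sketch below (the kink function $|x - 1/2|$ is an illustrative worst case, not from the chapter) shows density without a fast rate:

```python
import numpy as np
from math import comb

def bernstein(f, n, x):
    """Evaluate the degree-n Bernstein polynomial of f on [0, 1] at points x."""
    k = np.arange(n + 1)
    binom = np.array([comb(n, j) for j in k], dtype=float)
    basis = binom * x[:, None] ** k * (1.0 - x[:, None]) ** (n - k)
    return basis @ f(k / n)

f = lambda x: np.abs(x - 0.5)          # continuous but not differentiable at 1/2
x = np.linspace(0.0, 1.0, 501)
errs = [np.max(np.abs(bernstein(f, n, x) - f(x))) for n in (10, 40, 160)]
# the error shrinks only like ~1/sqrt(n) at the kink: density holds,
# but the rate depends on the regularity of f
```

This is precisely the gap the chapter addresses: existence is cheap; rates require smoothness.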

A subtlety from the same era warns us not to be naive about it. Faber and Bernstein showed that there is no fixed sequence of nodes $\{x_j^{(n)}\}$ for which the polynomial interpolant converges to $f$ for every continuous $f$. Convergence depends on both the nodes and the regularity of $f$. The two themes of this chapter follow from those two ingredients: where to put the nodes, and how the smoothness of $f$ controls how fast a truncated Chebyshev series converges. The good news is that for the functions we actually meet in applications (analytic, $C^r$ with bounded $r$-th derivative, or just Lipschitz), the rates are excellent.
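The node-dependence is easy to see numerically. The classic Runge example $1/(1+25x^2)$ (an illustrative choice; `bary` below uses generic barycentric weights $w_j = 1/\prod_{k \ne j}(x_j - x_k)$, which work for any node set) diverges on equispaced nodes while converging on Chebyshev nodes:

```python
import numpy as np

runge = lambda x: 1.0 / (1.0 + 25.0 * x**2)
xx = np.linspace(-1, 1, 1001)

def bary(x, xj, fj):
    """Barycentric interpolation with generic weights w_j = 1/prod(x_j - x_k)."""
    w = np.array([1.0 / np.prod(xj[j] - np.delete(xj, j)) for j in range(len(xj))])
    p = np.empty_like(x)
    for k, xk in enumerate(x):
        d = xk - xj
        hit = np.isclose(d, 0.0)
        if hit.any():
            p[k] = fj[hit][0]          # evaluation point coincides with a node
        else:
            t = w / d
            p[k] = t @ fj / t.sum()
    return p

N = 20
equi = np.linspace(-1, 1, N + 1)
cheb_nodes = np.cos(np.pi * np.arange(N + 1) / N)
e_equi = np.max(np.abs(bary(xx, equi, runge(equi)) - runge(xx)))
e_cheb = np.max(np.abs(bary(xx, cheb_nodes, runge(cheb_nodes)) - runge(xx)))
# the equispaced error is huge and grows with N (the Runge phenomenon);
# the Chebyshev error is small and shrinks geometrically
```

Same function, same degree, same evaluation formula; only the nodes differ.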

What This Chapter Does

The chapter is in two parts.

Part A: 1D, deterministic. We start from polynomial interpolation as a change of basis between values and coefficients, and identify which bases are numerically usable. We then ask where to place the nodes, with the answer being Chebyshev points. Coefficient decay reveals the smoothness of the underlying function, and that same dictionary controls the accuracy of every downstream operation: differentiation by the matrix $D_N$, integration by Clenshaw–Curtis quadrature, root-finding via the colleague-matrix eigenvalue problem, and the solution of linear boundary value problems by spectral collocation. Part A ends with adaptivity, where the coefficient tail itself tells us when we have used enough basis functions.
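As a preview of the differentiation matrix, the standard construction of $D_N$ on Chebyshev points (following the well-known recipe in Trefethen's *Spectral Methods in MATLAB*; the test function $e^x$ is an illustrative choice) can be sketched as:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D_N and grid (after Trefethen's cheb.m)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                        # diagonal: negative row sums
    return D, x

D, x = cheb(16)
u = np.exp(x)                  # u' = u, so D @ u should reproduce u
err = np.max(np.abs(D @ u - u))
# spectral accuracy: for this analytic u the error is tiny even at N = 16
```

Multiplying sampled values by $D_N$ differentiates the interpolant exactly, which is why the same matrix reappears in the boundary-value-problem solver.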

Part B: high-dimensional, Monte Carlo. Tensor-product Chebyshev in $d$ dimensions inherits the 1D rate but pays an $n^d$ cost for the grid. The way out is to pick a parameterised basis (ridge functions $\sigma(w \cdot x + b)$) and a different regularity notion (the Barron norm, an $L^1$ moment of the Fourier transform). The universal-approximation theorem is the modern Weierstrass for this basis, and Barron's theorem is the analogous regularity-rate dictionary, with rate $O(C_f/\sqrt{n})$ in any dimension. Part B closes with PyTorch examples that read both the density theorem and the dimension-free rate off training curves.
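The chapter's Part B examples use PyTorch; as a dependency-free stand-in, the sketch below fits a random-feature expansion of $n$ ridge functions $\sigma(w \cdot x + b)$ by least squares in $d = 8$ dimensions. The target $\cos(\sum_i x_i)$, the sample sizes, and the Gaussian weight draw are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                    # input dimension
f = lambda X: np.cos(X.sum(axis=1))      # smooth target with moderate Barron norm

Xtr = rng.uniform(-1, 1, (2000, d))
Xte = rng.uniform(-1, 1, (500, d))
ytr, yte = f(Xtr), f(Xte)

def random_features_fit(n):
    """Least-squares fit of n random ridge functions tanh(w.x + b); test RMSE."""
    W = rng.normal(size=(d, n))
    b = rng.normal(size=n)
    Phi = np.tanh(Xtr @ W + b)                        # feature matrix on training set
    coef, *_ = np.linalg.lstsq(Phi, ytr, rcond=None)  # fit only the outer weights
    pred = np.tanh(Xte @ W + b) @ coef
    return np.sqrt(np.mean((pred - yte) ** 2))

errs = [random_features_fit(n) for n in (10, 100, 1000)]
# the error falls as the number of ridge functions grows,
# with no n**d tensor-product grid in sight
```

Only the outer coefficients are trained here; Barron's theorem is about the best choice of the inner $(w, b)$ as well, which is what gradient training in the PyTorch examples approximates.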

Learning Outcomes

After completing this chapter you should be able to: