Total Variation¶
The total variation of $f$ on $[-1,1]$ is $V = \int_{-1}^{1} |f'(x)|\,dx$. For discontinuous $f$, interpret this as the sum of the integrals of $|f'|$ over the smooth pieces plus the magnitudes of the jumps.
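For instance, the two piecewise-smooth model functions used later in this section both have total variation $2$ on $[-1,1]$:

$$V(\operatorname{sign}) = |1 - (-1)| = 2, \qquad V(|x|) = \int_{-1}^{1} \Big|\tfrac{d}{dx}|x|\Big|\,dx = \int_{-1}^{1} 1\,dx = 2.$$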
The Fundamental Theorem¶
If $f, f', \ldots, f^{(\nu-1)}$ are absolutely continuous on $[-1,1]$ and $f^{(\nu)}$ has bounded variation $V$, then the Chebyshev coefficients satisfy

$$|c_k| \le \frac{2V}{\pi\, k(k-1)\cdots(k-\nu)} = O(k^{-\nu-1}) \qquad \text{for } k \ge \nu + 1.$$

Interpretation:

- $f$ continuous and of bounded variation ($\nu = 0$): $|c_k| = O(k^{-1})$
- $f$ has one derivative of bounded variation ($\nu = 1$): $|c_k| = O(k^{-2})$
- $f$ has $\nu$ derivatives of bounded variation: $|c_k| = O(k^{-\nu-1})$
- $f$ analytic: $|c_k| = O(\rho^{-k})$ for some $\rho > 1$
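As a numerical illustration (a sketch assuming the `chebpts` and `vals2coeffs` helpers used in the examples below), the coefficients of $|x|$ can be checked against the explicit bound with $\nu = 1$ and $V = V(\operatorname{sign}) = 2$, i.e. $|c_k| \le 4/(\pi k(k-1))$ for $k \ge 2$:

```python
import numpy as np

# Numerical check of the bound for f(x) = |x|: nu = 1 and V = V(f') = V(sign) = 2,
# so the theorem gives |c_k| <= 4 / (pi k (k-1)) for k >= 2.
# Assumes the chebpts/vals2coeffs helpers used throughout this section.
n = 2000
x = chebpts(n)
c = vals2coeffs(np.abs(x))

k = np.arange(2, 101)
bound = 4.0 / (np.pi * k * (k - 1))
print(np.max(np.abs(c[k]) / bound))   # just under 1: the bound holds
```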
Analytic Functions: The Bernstein Ellipse¶
For analytic functions, we can give a precise geometric characterization of the convergence rate.
The Bernstein ellipse $E_\rho$ is the ellipse in the complex plane with foci $\pm 1$ whose semi-minor and semi-major axis lengths sum to $\rho > 1$. If $f$ is analytic inside $E_\rho$ and bounded there by $M$, then $|c_k| \le 2M\rho^{-k}$. The largest $\rho$ for which $f$ is analytic inside $E_\rho$ determines the convergence rate. Singularities in the complex plane, even those far from $[-1,1]$, limit $\rho$ and slow convergence.
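A standard illustration is the Runge function $1/(1+25x^2)$, with poles at $\pm i/5$. The largest Bernstein ellipse avoiding a singularity at $z_0$ has parameter $\rho = |z_0 + \sqrt{z_0^2 - 1}|$, about $1.22$ here, and the computed coefficients decay at roughly that geometric rate (a sketch assuming the `chebpts`/`vals2coeffs` helpers used in the listings below):

```python
import numpy as np

# Runge function: poles at +-i/5 limit the Bernstein ellipse.
z0 = 1j / 5
rho = abs(z0 + np.sqrt(z0**2 - 1))            # about 1.22
print(rho)

n = 300
x = chebpts(n)                                 # helper from the listings below
c = vals2coeffs(1.0 / (1 + 25 * x**2))

# Fit the geometric decay rate over the even coefficients
# (odd ones vanish because the function is even).
k = np.arange(10, 141, 2)
slope = np.polyfit(k, np.log(np.abs(c[k])), 1)[0]
print(np.exp(-slope))                          # close to rho
```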
Approximation Error¶
The error in truncating the Chebyshev series after the degree-$n$ term is controlled by the neglected coefficients: since $|T_k(x)| \le 1$ on $[-1,1]$,

$$\|f - f_n\|_\infty \le \sum_{k=n+1}^{\infty} |c_k|.$$

Using the coefficient decay:

- $f^{(\nu)}$ of bounded variation: $\|f - f_n\|_\infty = O(n^{-\nu})$ (algebraic)
- $f$ analytic in $E_\rho$: $\|f - f_n\|_\infty = O(\rho^{-n})$ (exponential)
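The algebraic rate comes from summing the tail of the coefficient bound; a short sketch of the estimate:

$$\sum_{k=n+1}^{\infty} |c_k| \;\lesssim\; \sum_{k=n+1}^{\infty} \frac{C}{k^{\nu+1}} \;\approx\; \int_{n}^{\infty} \frac{C}{t^{\nu+1}}\,dt \;=\; \frac{C}{\nu\, n^{\nu}} \;=\; O(n^{-\nu}).$$

In the analytic case the same tail sum with $|c_k| \le 2M\rho^{-k}$ is geometric and gives $\|f - f_n\|_\infty \le \dfrac{2M\rho^{-n}}{\rho - 1}$.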
| Smoothness | Coefficient Decay | Approximation Error |
|---|---|---|
| Discontinuous (bounded variation) | $O(k^{-1})$ | $O(1)$: no convergence in the max norm |
| Continuous, $f^{(\nu)}$ of bounded variation | $O(k^{-\nu-1})$ | $O(n^{-\nu})$ |
| Analytic in $E_\rho$ | $O(\rho^{-k})$ | $O(\rho^{-n})$ |
Example: The Sign Function¶
$f(x) = \operatorname{sign}(x)$ is discontinuous at $x = 0$.
n = 2000
x = chebpts(n)
f = np.sign(x)
c = vals2coeffs(f)
# Coefficients decay as O(k^{-1})
plt.semilogy(np.abs(c))
# Only odd coefficients are nonzero (antisymmetric function)

Approximation error: $O(1)$ in the max norm. The interpolants do not converge uniformly, because Chebyshev interpolation does not avoid the Gibbs phenomenon (see below).
Example: $|x|$¶
$f(x) = |x|$ is continuous but not differentiable at $x = 0$.
n = 1000
x = chebpts(n)
f = np.abs(x)
c = vals2coeffs(f)
# Coefficients decay as O(k^{-2})
# Approximation error: O(n^{-1})The error bound:
Example: $|\sin(5x)|^3$¶
This function is twice continuously differentiable; its third derivative has jumps at the roots of $\sin(5x)$ but is of bounded variation, so $\nu = 3$.
f = lambda x: np.abs(np.sin(5*x))**3
n = 10000
x = chebpts(n)
c = vals2coeffs(f(x))
# Coefficients decay as O(k^{-4})
# Approximation error: O(n^{-3})

Example: Analytic Function¶
$f(x) = \sin(6x) + \sin(60 e^x)$ is entire (analytic everywhere).
f = lambda x: np.sin(6*x) + np.sin(60*np.exp(x))
n = 250
x = chebpts(n)
c = vals2coeffs(f(x))
# Coefficients decay EXPONENTIALLY
# Around n=150, coefficients hit machine precision

The coefficients plateau at about $10^{-15}$ (machine precision), indicating the series has converged.
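One way to read the plateau off programmatically (a small sketch reusing the coefficient array `c` from the listing above):

```python
# First coefficient that falls below a fixed relative tolerance.
k0 = np.argmax(np.abs(c) < 1e-14 * np.max(np.abs(c)))
print(k0)   # roughly 150 for this function
```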
Visualizing Coefficient Decay¶
fig, axes = plt.subplots(3, 2, figsize=(10, 8))
functions = [
    (np.sign, r"$\mathrm{sign}(x)$", "Discontinuous"),
    (np.abs, r"$|x|$", "Continuous, not $C^1$"),
    (lambda x: np.abs(np.sin(5*x))**3, r"$|\sin(5x)|^3$", "$C^2$, not $C^3$"),
]
for i, (f, label, desc) in enumerate(functions):
    x = chebpts(1000)
    vals = f(x)
    coeffs = vals2coeffs(vals)
    axes[i, 0].plot(x, vals, 'k')
    axes[i, 0].set_title(f"{label} — {desc}")
    axes[i, 1].semilogy(np.abs(coeffs), 'k.')
    axes[i, 1].set_ylabel("$|c_k|$")
    axes[i, 1].set_ylim([1e-16, 1e1])

The Gibbs Phenomenon¶
For discontinuous functions, polynomial approximations exhibit overshoot near discontinuities:
- The interpolant overshoots the function near the jump by an essentially fixed fraction of the jump size (the classical Gibbs figure is about 9% of the jump for Fourier projections; Chebyshev interpolants overshoot somewhat more)
- The overshoot does not vanish as $n \to \infty$
- It is a fundamental property of polynomial approximation, not a numerical artifact
xs = np.linspace(-1, 1, 10000)
for n in [10, 50, 100]:
    x = chebpts(n)
    f = np.sign(x)
    p = bary(xs, f, x)
    print(f"n={n}: max overshoot = {np.max(p):.4f}")
# The overshoot stays essentially the same regardless of n

Adaptive Resolution¶
The coefficient decay principle enables adaptive algorithms:
def adaptive_approx(f, tol=1e-12):
    """Find n such that coefficients decay below tol."""
    for n in [16, 32, 64, 128, 256, 512, 1024]:
        x = chebpts(n)
        c = vals2coeffs(f(x))
        # Check if last few coefficients are small
        if np.max(np.abs(c[-5:])) < tol * np.max(np.abs(c)):
            # Trim to significant coefficients
            k = np.where(np.abs(c) > tol * np.max(np.abs(c)))[0][-1]
            return c[:k+1], n
    raise ValueError("Function requires more than 1024 points")

This is the core idea behind Chebfun’s automatic degree selection.
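For example, a usage sketch with an arbitrary smooth test function:

```python
# Use the adaptive routine just defined on a smooth test function.
c, n = adaptive_approx(lambda x: np.exp(x) * np.sin(10 * x))
print(n, len(c))   # points sampled, coefficients kept
```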
Locating Non-Smoothness¶
The coefficient decay pattern reveals where smoothness breaks down:
- Smooth everywhere: exponential decay
- Singularity at one point: algebraic decay
- Multiple singularities: decay rate limited by the worst singularity
For functions with boundary layers or internal layers, the coefficients reveal the layer location and sharpness.
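As a rough illustration (a sketch assuming the `chebpts`/`vals2coeffs` helpers, with `tanh` layers as stand-ins for boundary or internal layers), sharper layers require more coefficients before the decay reaches a given tolerance:

```python
import numpy as np

# Sharper layer -> the coefficients take longer to decay below tolerance.
n = 4000
x = chebpts(n)
for k in [5, 20, 80]:
    c = vals2coeffs(np.tanh(k * x))
    needed = np.where(np.abs(c) > 1e-13 * np.max(np.abs(c)))[0][-1] + 1
    print(f"tanh({k}x): about {needed} significant coefficients")
# The count grows roughly in proportion to the layer sharpness k.
```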
Lebesgue Constants¶
The Lebesgue constant $\Lambda_n$ quantifies how close interpolation comes to best approximation. With $\ell_j$ the Lagrange basis polynomials for a given set of nodes, $\Lambda_n = \max_{x \in [-1,1]} \sum_j |\ell_j(x)|$, and

$$\|f - p_n\|_\infty \le (1 + \Lambda_n)\, \|f - p_n^*\|_\infty,$$

where $p_n$ is the interpolant and $p_n^*$ is the best polynomial approximation of the same degree. In other words, the Lebesgue constant tells us how much worse interpolation can be compared to best approximation.
Growth Rates by Node Type¶
| Nodes | $\Lambda_n$ growth |
|---|---|
| Chebyshev | $\frac{2}{\pi}\log n + O(1)$ (logarithmic) |
| Equispaced | $\sim \frac{2^{n+1}}{e\, n \log n}$ (exponential!) |
| Lower bound (any nodes) | $\frac{2}{\pi}\log n + O(1)$ |
Chebyshev nodes are nearly optimal—their Lebesgue constant grows only logarithmically.
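A brute-force numerical check of these growth rates (a self-contained sketch: the Lebesgue function is evaluated directly from the Lagrange basis on a fine grid, so the values are approximate):

```python
import numpy as np

def lebesgue_constant(nodes, m=5000):
    """Estimate max_x sum_j |l_j(x)| by sampling on a fine grid."""
    x = np.linspace(-1, 1, m)
    L = np.ones((len(nodes), m))
    for j, xj in enumerate(nodes):
        for k, xk in enumerate(nodes):
            if k != j:
                L[j] *= (x - xk) / (xj - xk)   # Lagrange basis polynomial l_j
    return np.sum(np.abs(L), axis=0).max()

for n in [10, 20, 40]:
    cheb = np.cos(np.pi * np.arange(n) / (n - 1))   # Chebyshev points (2nd kind)
    equi = np.linspace(-1, 1, n)
    print(f"n={n}:  Chebyshev {lebesgue_constant(cheb):8.2f}   "
          f"equispaced {lebesgue_constant(equi):.2e}")
# Chebyshev values grow slowly (logarithmically); equispaced values explode.
```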
The Weierstrass Approximation Theorem¶
A foundational result guaranteeing that polynomials can approximate any continuous function: if $f$ is continuous on $[a, b]$, then for every $\varepsilon > 0$ there exists a polynomial $p$ with $\|f - p\|_\infty < \varepsilon$.
This theorem applies even to pathological continuous functions (like those that are continuous but nowhere differentiable).
The Faber-Bernstein Result¶
However, Weierstrass does not guarantee uniform convergence of interpolants: for every fixed scheme of interpolation nodes, there exists a continuous function $f$ whose interpolants $p_n$ fail to converge uniformly to $f$ as $n \to \infty$.
This means: for any node choice, there exists some continuous function whose interpolants diverge. However, for the vast majority of functions encountered in practice (those with some smoothness), Chebyshev interpolation works phenomenally well.
Practical Guidelines¶
- Check coefficient decay to verify your approximation is resolved
- Exponential decay down to machine precision indicates an analytic function
- Algebraic decay suggests limited smoothness: estimate the rate $|c_k| \sim k^{-\nu-1}$ to find $\nu$ (see the sketch after this list)
- Coefficients that plateau at a level well above machine precision indicate numerical noise or rounding errors
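One way to carry out the third guideline in practice (a sketch assuming the `chebpts`/`vals2coeffs` helpers; the fitting window and tolerance are arbitrary choices): fit $\log|c_k|$ against $\log k$ and read $\nu + 1$ off the slope.

```python
import numpy as np

def estimate_decay_rate(c, k_min=10):
    """Fit log|c_k| ~ -p log k and return p (approximately nu + 1)."""
    k = np.arange(k_min, len(c))
    ck = np.abs(c[k])
    keep = ck > 1e-14 * np.abs(c).max()      # ignore zero/rounding-level coefficients
    slope = np.polyfit(np.log(k[keep]), np.log(ck[keep]), 1)[0]
    return -slope

x = chebpts(2000)
print(estimate_decay_rate(vals2coeffs(np.abs(x))))               # about 2 (nu = 1)
print(estimate_decay_rate(vals2coeffs(np.abs(np.sin(5*x))**3)))  # about 4 (nu = 3)
```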
Summary¶
| What You See | What It Means |
|---|---|
| $|c_k|$ decays exponentially | Analytic function |
| $|c_k| \sim k^{-\nu-1}$ | Function has $\nu$ derivatives of bounded variation |
| $|c_k| \sim k^{-1}$ | Discontinuous (but bounded-variation) function |
| Plateau at $\sim 10^{-15}$ | Machine precision reached |
| No decay | Function not resolved, need more points |
The key insight: Chebyshev coefficients encode smoothness. Spectral methods achieve their remarkable accuracy because smooth functions have rapidly decaying coefficients.