This page serves as a quick reference summarizing the key theorems and concepts that appear throughout the course. Use it for review or to see how different topics connect.
These are the fundamental concepts that unify the entire course: approximation theorems justify what we’re doing, while error analysis tells us how well we’re doing it.
Foundational Theorems¶
Theorem 1 (Taylor’s Theorem)
If $f$ is $(n+1)$-times differentiable on an interval containing $a$ and $x$, then:

$$f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(x - a)^k + \frac{f^{(n+1)}(\xi)}{(n+1)!}(x - a)^{n+1}$$

for some $\xi$ between $a$ and $x$.
Why it matters: This is THE foundation of numerical analysis.
Truncation error: The remainder term $\frac{f^{(n+1)}(\xi)}{(n+1)!}(x-a)^{n+1}$ tells us the error when we truncate the series after $n$ terms
Finite differences: Derived by Taylor expanding $f(x \pm h)$
Newton’s method: Comes from linearizing $f$ (first-order Taylor)
Convergence order: Higher-order methods use more Taylor terms
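The first of these points can be seen numerically. A minimal NumPy sketch (the test function $\sin$ and the step sizes are illustrative choices): Taylor's theorem predicts the forward difference has $O(h)$ truncation error, so shrinking $h$ by a factor of 10 should shrink the error by about 10.

```python
import numpy as np

# Forward difference (f(x+h) - f(x)) / h. Taylor's theorem gives
#   f(x+h) = f(x) + h f'(x) + (h^2 / 2) f''(xi),
# so the truncation error is (h/2) f''(xi) = O(h).
f, df, x = np.sin, np.cos, 1.0          # illustrative test function
errors = []
for h in [1e-1, 1e-2, 1e-3]:
    fd = (f(x + h) - f(x)) / h          # forward difference approximation
    errors.append(abs(fd - df(x)))      # compare to the exact derivative
    print(f"h = {h:.0e}   error = {errors[-1]:.2e}")
```

Each refinement of $h$ reduces the error by roughly one decimal digit, exactly the first-order behavior the remainder term predicts.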
Theorem 2 (Weierstrass Approximation Theorem)
Let $f$ be continuous on $[a, b]$. For any $\varepsilon > 0$, there exists a polynomial $p$ such that:

$$\max_{x \in [a, b]} \lvert f(x) - p(x) \rvert < \varepsilon.$$
Why it matters: Justifies the entire polynomial approximation enterprise. Continuous functions CAN be approximated by polynomials. The question is how efficiently.
Weierstrass guarantees existence but says nothing about:
Which polynomial degree is needed
Which nodes to use for interpolation
How to construct the polynomial efficiently
These are the questions numerical analysis answers.
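A quick numerical illustration of the theorem in action. Interpolation at Chebyshev nodes is one practical construction (an illustrative choice on our part; Weierstrass only promises that *some* polynomial works), and the uniform error shrinks rapidly as the degree grows:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Approximate exp on [-1, 1] by polynomials of increasing degree and
# estimate the uniform (sup-norm) error on a dense grid.
f = np.exp
xs = np.linspace(-1.0, 1.0, 2001)       # dense grid to estimate the sup-norm
max_errs = {}
for n in [2, 5, 10]:
    k = np.arange(n + 1)
    nodes = np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))   # Chebyshev nodes
    p = Polynomial.fit(nodes, f(nodes), deg=n)            # degree-n interpolant
    max_errs[n] = np.max(np.abs(f(xs) - p(xs)))
    print(f"degree {n:2d}:  max error ≈ {max_errs[n]:.1e}")
```

For smooth functions like $e^x$ the error decays extremely fast with the degree, which is precisely the "how efficiently" question that Weierstrass leaves open.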
Theorem 3 (Best Approximation Theorem)
Let $V$ be an inner product space and $W \subseteq V$ a finite-dimensional subspace. For any $v \in V$, there exists a unique $w^* \in W$ minimizing $\lVert v - w \rVert$ over $w \in W$.
The best approximation is characterized by orthogonality:

$$\langle v - w^*, w \rangle = 0 \quad \text{for all } w \in W.$$
Why it matters:
Least squares: the normal equations $A^\top A x = A^\top b$ are exactly this orthogonality condition
Fourier/Chebyshev: Coefficients come from projecting onto orthogonal basis functions
Why QR works: Orthogonal projection onto $\operatorname{range}(A)$
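All three bullets can be checked in a few lines of NumPy (the overdetermined system below is a random illustrative instance): the least-squares residual is orthogonal to every column of $A$, and the normal-equations and QR routes give the same minimizer.

```python
import numpy as np

# The least-squares solution is the orthogonal projection of b onto range(A):
# the residual b - A x* is orthogonal to every column of A, i.e. A^T (b - A x*) = 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))          # illustrative overdetermined system
b = rng.standard_normal(20)

x_ne = np.linalg.solve(A.T @ A, A.T @ b)  # normal equations (orthogonality condition)
Q, R = np.linalg.qr(A)                    # reduced QR factorization
x_qr = np.linalg.solve(R, Q.T @ b)        # projection via orthonormal columns of Q

print("A^T r =", A.T @ (b - A @ x_qr))    # ~ 0: residual orthogonal to range(A)
print("normal eqs vs QR agree:", np.allclose(x_ne, x_qr))
```

In practice QR is preferred because it avoids forming $A^\top A$, whose condition number is the square of that of $A$.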
Forward and Backward Error¶
Definition 1 (Forward and Backward Error)
Let $\hat{y}$ be a computed approximation to $y = f(x)$.
Forward error: $\lVert \hat{y} - y \rVert$, the error in the output.
Backward error: the smallest $\lVert \Delta x \rVert$ such that $\hat{y} = f(x + \Delta x)$, i.e., the smallest input perturbation that makes the computed answer exact.
Definition 2 (Condition Number)
The relative condition number of $f$ at $x$ measures how much relative output error a relative input perturbation can cause:

$$\kappa(x) = \lim_{\delta \to 0} \sup_{\lVert \Delta x \rVert \le \delta \lVert x \rVert} \frac{\lVert f(x + \Delta x) - f(x) \rVert / \lVert f(x) \rVert}{\lVert \Delta x \rVert / \lVert x \rVert}.$$

Theorem 4 (The Fundamental Error Relationship)

$$\text{forward error} \ \lesssim \ \kappa \times \text{backward error}.$$
Remark 1 (The Error Analysis Philosophy)
This decomposition separates concerns:
Algorithm designers aim for backward stability (small backward error)
Problem formulators aim for well-conditioned problems (small $\kappa$)
Users should check conditioning before trusting results
A backward stable algorithm gives the exact answer to a slightly wrong problem. If the problem is well-conditioned, “slightly wrong problem” means “nearly correct answer.”
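A sketch of this philosophy in action, using the famously ill-conditioned Hilbert matrix as an illustrative problem: the relative residual (a computable proxy for backward error) stays near machine precision, while the forward error is inflated by roughly the condition number, just as the theorem predicts.

```python
import numpy as np

# Solve H x = b where H is the n x n Hilbert matrix and the exact
# solution is the vector of ones. Backward-stable solve: tiny residual.
# Ill-conditioned problem: large forward error nonetheless.
n = 10
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = H @ x_true                      # right-hand side with known exact solution
x_hat = np.linalg.solve(H, b)

backward = np.linalg.norm(b - H @ x_hat) / np.linalg.norm(b)       # tiny
forward = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)  # much larger
kappa = np.linalg.cond(H)
print(f"relative residual ≈ {backward:.1e}")
print(f"forward error     ≈ {forward:.1e}")
print(f"kappa(H)          ≈ {kappa:.1e}")
```

The algorithm did its job (small backward error); the problem itself is what amplifies the error.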
The Contraction Mapping Principle¶
Theorem 5 (Banach Fixed Point Theorem)
Let $(X, d)$ be a complete metric space and $T : X \to X$ a contraction mapping, i.e., there exists $q \in [0, 1)$ such that

$$d(T(x), T(y)) \le q \, d(x, y) \quad \text{for all } x, y \in X.$$

Then:
$T$ has a unique fixed point $x^* \in X$, i.e., $T(x^*) = x^*$
For any starting point $x_0 \in X$, the iteration $x_{n+1} = T(x_n)$ converges to $x^*$
The convergence is geometric: $d(x_n, x^*) \le q^n \, d(x_0, x^*)$
A priori bound: $d(x_n, x^*) \le \dfrac{q^n}{1 - q} \, d(x_1, x_0)$
A posteriori bound: $d(x_n, x^*) \le \dfrac{q}{1 - q} \, d(x_n, x_{n-1})$
Proof 1
Existence and convergence: The sequence $x_{n+1} = T(x_n)$ is Cauchy. For $m > n$:

$$d(x_m, x_n) \le \sum_{k=n}^{m-1} d(x_{k+1}, x_k) \le \sum_{k=n}^{m-1} q^k \, d(x_1, x_0) \le \frac{q^n}{1 - q} \, d(x_1, x_0).$$

Since $q < 1$, this tends to $0$ as $n \to \infty$. By completeness, $x^* = \lim_{n \to \infty} x_n$ exists.
Taking limits in $x_{n+1} = T(x_n)$ and using continuity of $T$ (contractions are Lipschitz, hence continuous): $x^* = T(x^*)$.
Uniqueness: If $T(x) = x$ and $T(y) = y$, then

$$d(x, y) = d(T(x), T(y)) \le q \, d(x, y).$$

Since $q < 1$, this forces $d(x, y) = 0$, so $x = y$.
Convergence rate: $d(x_n, x^*) = d(T(x_{n-1}), T(x^*)) \le q \, d(x_{n-1}, x^*) \le \cdots \le q^n \, d(x_0, x^*)$.
A posteriori bound: Let $m \to \infty$ in the Cauchy estimate with $x_0$ replaced by $x_{n-1}$.
Remark 2 (Where We Use the Banach Fixed Point Theorem)
This theorem is THE foundation for iterative methods throughout scientific computing:
| Application | Space | Contraction |
|---|---|---|
| Fixed-point iteration | $[a, b]$ or $\mathbb{R}^n$ | $g$ with $\lvert g' \rvert \le q < 1$ |
| Newton’s method (local) | Neighborhood of root | Newton map near simple root |
| Picard iteration for ODEs | $C([t_0, t_0 + h])$ | Integral operator |
| Iterative linear solvers | $\mathbb{R}^n$ | $x \mapsto Mx + c$ with $\lVert M \rVert < 1$ |
| Implicit function theorem | Banach space | Contraction from IFT proof |
The a posteriori bound is particularly useful: it gives a computable error estimate from consecutive iterates.
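A minimal sketch of the a posteriori bound as a stopping criterion, using the classic contraction $x \mapsto \cos x$ on $[0, 1]$ (an illustrative choice; there $\lvert \sin x \rvert \le \sin 1 =: q < 1$):

```python
import numpy as np

# Fixed-point iteration x_{n+1} = cos(x_n). On [0, 1] the derivative
# satisfies |sin x| <= sin(1) < 1, so cos is a contraction there. We stop
# as soon as the computable bound q/(1-q) * |x_n - x_{n-1}| drops below tol,
# which guarantees d(x_n, x*) < tol by the a posteriori estimate.
q = np.sin(1.0)                          # contraction constant on [0, 1]
tol = 1e-10
x_prev, x = 1.0, np.cos(1.0)             # x_0 = 1, x_1 = cos(1)
for _ in range(200):
    x_prev, x = x, np.cos(x)
    if q / (1 - q) * abs(x - x_prev) < tol:
        break
print(f"fixed point x* ≈ {x:.12f}")
print(f"residual cos(x*) - x* = {np.cos(x) - x:.1e}")
```

The point is that the stopping test uses only consecutive iterates, yet certifies the distance to the (unknown) fixed point.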
The Neumann Series¶
Theorem 6 (Neumann Series)
Let $\mathcal{A}$ be a unital Banach algebra with identity $1$ and norm $\lVert \cdot \rVert$. If $a \in \mathcal{A}$ satisfies $\lVert a \rVert < 1$, then $1 - a$ is invertible and

$$(1 - a)^{-1} = \sum_{k=0}^{\infty} a^k, \qquad \lVert (1 - a)^{-1} \rVert \le \frac{1}{1 - \lVert a \rVert}.$$

More generally, if $\mathcal{B}(X)$ denotes the bounded linear operators on a Banach space $X$, and $A \in \mathcal{B}(X)$ has spectral radius $\rho(A) < 1$, then $(I - A)^{-1} = \sum_{k=0}^{\infty} A^k$.
Proof 2
Convergence: Since $\lVert a^k \rVert \le \lVert a \rVert^k$ (submultiplicativity of the norm in a Banach algebra) and $\lVert a \rVert < 1$, the series $\sum_{k=0}^{\infty} a^k$ converges absolutely, hence converges in the complete space $\mathcal{A}$.
Verification: The partial sums $s_n = \sum_{k=0}^{n} a^k$ satisfy

$$(1 - a) s_n = s_n (1 - a) = 1 - a^{n+1}.$$

As $n \to \infty$, $\lVert a^{n+1} \rVert \le \lVert a \rVert^{n+1} \to 0$, so $(1 - a) \sum_{k=0}^{\infty} a^k = 1$.
Spectral radius version: Gelfand’s formula gives $\rho(A) = \lim_{k \to \infty} \lVert A^k \rVert^{1/k}$. If $\rho(A) < 1$, then for any $\theta$ with $\rho(A) < \theta < 1$, we have $\lVert A^k \rVert \le \theta^k$ for large $k$, ensuring convergence.
Remark 3 (Where We Use the Neumann Series)
The Neumann series is the operator-theoretic generalization of the geometric series $\sum_{k=0}^{\infty} x^k = \frac{1}{1 - x}$ for $\lvert x \rvert < 1$.
| Application | Banach Algebra | Element |
|---|---|---|
| Iterative linear solvers | $n \times n$ matrices | Iteration matrix $M$ |
| Perturbation of inverses | $n \times n$ matrices | $A^{-1} E$ for perturbed $A + E$ |
| Integral equations | $\mathcal{B}(C[a, b])$ | Integral operator $K$ |
| ODE stability | $n \times n$ matrices | $hA$ in implicit methods |
| Resolvent | $\mathcal{B}(X)$ | $A / \lambda$ for $\lvert \lambda \rvert > \rho(A)$ |
Key insight: The condition $\rho(A) < 1$ (not $\lVert A \rVert < 1$) is necessary and sufficient for convergence of $\sum_k A^k$. This is why the spectral radius, not the norm, controls iterative methods.
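A small numerical illustration of exactly this point: a matrix whose norm is much larger than 1 but whose spectral radius is $1/2$, for which the Neumann series still converges (the specific matrix is an arbitrary example):

```python
import numpy as np

# This matrix has 2-norm ~10 (> 1) but both eigenvalues equal to 0.5,
# so rho(A) = 0.5 < 1. The Neumann series sum_k A^k nevertheless
# converges to (I - A)^{-1}.
A = np.array([[0.5, 10.0],
              [0.0,  0.5]])
print("||A||_2 =", np.linalg.norm(A, 2))             # >> 1
print("rho(A)  =", max(abs(np.linalg.eigvals(A))))   # 0.5

S = np.zeros((2, 2))
term = np.eye(2)
for _ in range(200):                 # partial sums of the Neumann series
    S += term
    term = term @ A                  # term is now the next power of A
exact = np.linalg.inv(np.eye(2) - A)
print("max deviation from (I - A)^{-1}:", np.max(np.abs(S - exact)))
```

A norm-based test would wrongly predict divergence here; the spectral radius gives the right answer.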
Corollary 1 (Perturbation of Inverses)
If $A$ is invertible and $\lVert A^{-1} E \rVert < 1$, then $A + E$ is invertible and

$$(A + E)^{-1} = \sum_{k=0}^{\infty} (-A^{-1} E)^k \, A^{-1}, \qquad \lVert (A + E)^{-1} - A^{-1} \rVert \le \frac{\lVert A^{-1} \rVert \, \lVert A^{-1} E \rVert}{1 - \lVert A^{-1} E \rVert}.$$
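A quick numerical sanity check of the bound stated in this corollary, on a small random example (the matrices below are illustrative, not canonical):

```python
import numpy as np

# With t = ||A^{-1} E|| < 1, the corollary predicts
#   ||(A+E)^{-1} - A^{-1}|| <= ||A^{-1}|| * t / (1 - t).
rng = np.random.default_rng(1)
A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))   # comfortably invertible
E = 1e-3 * rng.standard_normal((4, 4))              # small perturbation
Ainv = np.linalg.inv(A)

t = np.linalg.norm(Ainv @ E, 2)                     # must be < 1
lhs = np.linalg.norm(np.linalg.inv(A + E) - Ainv, 2)
rhs = np.linalg.norm(Ainv, 2) * t / (1 - t)
print(f"t = {t:.2e},  lhs = {lhs:.2e},  rhs = {rhs:.2e},  bound holds: {lhs <= rhs}")
```

Note the bound depends on $\lVert A^{-1} E \rVert$, not $\lVert E \rVert$ alone, which is why conditioning of $A$ enters the picture.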