This chapter is about iterative methods: algorithms that generate sequences of approximations converging to a solution. Newton’s method—which you know from calculus—is the workhorse, but understanding when and why it works requires careful analysis.
Motivation: The Quake Fast Inverse Square Root
Before diving into theory, consider this legendary piece of code from Quake III Arena (1999):
```c
float Q_rsqrt(float x) {
    long i;
    float x2, y;

    x2 = x * 0.5F;
    y  = x;
    i  = *(long*)&y;                 // interpret float bits as integer
    i  = 0x5f3759df - (i >> 1);      // magic!
    y  = *(float*)&i;
    y  = y * (1.5F - (x2 * y * y));  // Newton refinement
    return y;
}
```

This computes $1/\sqrt{x}$—a function that cannot be evaluated directly using just $+$, $-$, $\times$, and $\div$. How do we turn this into something computable?
Translating to a Root-Finding Problem
Remark 1 (Key Insight)
Finding $y = \frac{1}{\sqrt{x}}$ is equivalent to solving $f(y) = 0$ where $f(y) = \frac{1}{y^2} - x$.
If $f(y) = 0$, then $\frac{1}{y^2} = x$, so $y^2 = \frac{1}{x}$, which means $y = \frac{1}{\sqrt{x}}$ (taking the positive root, since $y > 0$).
Why translate to root finding? Because you already know an algorithm for finding roots: Newton’s method from calculus.
Newton’s Method Refresher
Recall: to solve $f(y) = 0$, approximate $f$ by its tangent line at the current iterate and find where it crosses zero:

$$y_{k+1} = y_k - \frac{f(y_k)}{f'(y_k)}$$

For $f(y) = \frac{1}{y^2} - x$, we have $f'(y) = -\frac{2}{y^3}$. Substituting:

$$y_{k+1} = y_k - \frac{y_k^{-2} - x}{-2y_k^{-3}} = y_k\left(\frac{3}{2} - \frac{x}{2}\,y_k^2\right)$$
This is exactly the Newton refinement line of the Quake code!
The floating-point bit tricks provide a clever initial guess, and one Newton iteration refines it to sufficient accuracy.
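To see the refinement step in isolation, here is a minimal Python sketch; the function name, starting guess, and step count are illustrative assumptions, not part of the Quake source:

```python
import math

def rsqrt_newton(x, y0, steps=4):
    """Refine an initial guess y0 for 1/sqrt(x) by Newton's method."""
    y = y0
    for _ in range(steps):
        y = y * (1.5 - 0.5 * x * y * y)   # same update as the Quake refinement line
    return y

# Illustrative values: a crude guess (the bit trick supplies a much better one)
print(rsqrt_newton(2.0, y0=0.5))   # ≈ 0.707106...
print(1.0 / math.sqrt(2.0))        # ≈ 0.707106...
```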
The Root-Finding Problem
Given a nonlinear function $f : \mathbb{R} \to \mathbb{R}$, find $x^*$ such that $f(x^*) = 0$.
Examples:
$f(x) = e^x - 2$ has root $x = \ln 2$
$f(x) = \sin x$ has roots $x = n\pi$, $n \in \mathbb{Z}$
$f(x) = \cos x - x$ has a root near $x \approx 0.739$ (checked numerically below)
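As a quick sanity check on the last example: $f(x) = \cos x - x$ changes sign on $[0, 1]$, so a root must lie in that interval, an observation the bisection method later in this chapter turns into an algorithm. A minimal Python check, for illustration only:

```python
import math

def f(x):
    return math.cos(x) - x   # illustrative example function

print(f(0.0))    #  1.0        (positive)
print(f(1.0))    # -0.4596...  (negative)
print(f(0.739))  #  1.4e-4     (nearly zero: a root is close by)
```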
For most nonlinear equations, we cannot find closed-form solutions. We need a different approach.
The Big Picture: Iterative Methods
The Quake example illustrates a fundamental pattern in scientific computing:
Definition 1 (Iterative Methods)
Many problems cannot be solved in a finite number of arithmetic operations. Instead, we construct a sequence $x_0, x_1, x_2, \ldots$ that converges to the solution:

$$\lim_{k \to \infty} x_k = x^*$$

In practice, we stop when $|x_{k+1} - x_k|$ or some other criterion is sufficiently small.
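For concreteness, a generic driver for such an iteration might look like the following Python sketch; the interface, tolerance, and example update are illustrative choices, not a prescribed API:

```python
def iterate(g, x0, tol=1e-12, max_iter=100):
    """Generic iteration x_{k+1} = g(x_k), stopping when steps get small."""
    x = x0
    for k in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:   # stopping criterion: small step
            return x_new, k + 1
        x = x_new
    return x, max_iter             # give up after max_iter steps

# Example: the Newton update for f(y) = 1/y**2 - x with x = 4
root, iters = iterate(lambda y: y * (1.5 - 0.5 * 4 * y * y), 0.4)
print(root, iters)                 # ≈ 0.5 = 1/sqrt(4)
```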
The key questions for any iterative method are:
Does it converge? Under what conditions?
How fast? How many iterations do we need?
How sensitive is the answer? (This is where condition numbers come in.)
A Unifying Principle
Newton’s method, bisection, and fixed-point iteration may seem like three unrelated techniques. But they share a common structure: each generates a sequence that (hopefully) converges to the root.
The key insight is that convergence happens when the iteration shrinks errors:
If each step of an iteration reduces the error by a factor $L < 1$:

$$|x_{k+1} - x^*| \le L\,|x_k - x^*|,$$

then the sequence converges geometrically to $x^*$.
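To see this numerically, the following sketch (an illustrative experiment, not one from the text) applies $x_{k+1} = \cos x_k$, whose fixed point $x^* \approx 0.739$ is the root of $\cos x - x$ seen earlier; the error ratio settles near $|g'(x^*)| = \sin x^* \approx 0.674$:

```python
import math

x_star = 0.7390851332151607   # fixed point of cos, to high accuracy
x = 1.0
prev_err = abs(x - x_star)
for k in range(8):
    x = math.cos(x)                # one step of x_{k+1} = g(x_k)
    err = abs(x - x_star)
    print(k, err, err / prev_err)  # ratio settles near sin(x*) ≈ 0.674
    prev_err = err
```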
This principle—formalized as the Banach Fixed Point Theorem in Fixed Point Iteration—is one of the most important ideas in numerical analysis. It explains:
Why Newton’s method converges (when it does)
Why bisection always converges (it halves the interval each step)
Why iterative linear solvers like Jacobi and Gauss-Seidel work
The same idea recurs throughout this course: in ODE integrators, optimization algorithms, and beyond.
Learning Outcomes
After completing this chapter, you should be able to:
L4.1: Explain how iterative methods generate sequences of approximations converging to a solution.
L4.2: Define order of convergence (linear, quadratic, cubic) and estimate the number of iterations required to achieve a desired accuracy as a function of initial error and convergence order.
L4.3: State the Intermediate Value Theorem and derive the bisection algorithm from it.
L4.4: Calculate the number of bisection iterations required for a given tolerance.
L4.5: Define fixed points, restate root finding as a fixed-point problem $x = g(x)$, and explain why the choice of $g$ matters.
L4.6: State the Banach Fixed Point Theorem (1D contraction mapping) and verify its hypotheses for a given iteration.
L4.7: Use cobweb diagrams to visualize fixed-point iterations and distinguish convergent from divergent cases.
L4.8: Derive Newton’s method from linear approximation and identify it as a fixed-point iteration.
L4.9: Explain why Newton’s method is a local method: the linear approximation is only accurate near the root, so convergence requires a sufficiently close initial guess.
L4.10: Identify when Newton’s method fails or degrades (e.g., $f'(x^*) = 0$, bad initial guess) and explain how the modified Newton iteration for multiple roots is another fixed-point method.
L4.11: Compute the condition number of a root and explain why multiple roots are ill-conditioned.
L4.12: Define forward error ($|\hat{x} - x^*|$) and backward error ($|f(\hat{x})|$) for root finding, and relate them via the condition number: forward error $\approx$ condition number $\times$ backward error.
L4.13: Identify the two natural stopping criteria ($|f(x_k)|$ small and $|x_{k+1} - x_k|$ small) and explain when each can be misleading.