Motivation: The Quake Fast Inverse Square Root
Before diving into theory, consider this legendary piece of code from Quake III Arena (1999):
```c
float Q_rsqrt(float x) {
    long i;
    float x2, y;

    x2 = x * 0.5F;
    y  = x;
    i  = *(long *)&y;               // interpret float bits as integer
    i  = 0x5f3759df - (i >> 1);     // magic!
    y  = *(float *)&i;
    y  = y * (1.5F - (x2 * y * y)); // Newton refinement
    return y;
}
```

This computes $y = 1/\sqrt{x}$, a function that cannot be evaluated directly using only the basic arithmetic operations $+$, $-$, $\times$, $\div$. How do we turn this into something computable?
Translating to a Root-Finding Problem
Why translate to root finding? Because you already know an algorithm for finding roots: Newton’s method from calculus.
Newton’s Method Refresher
Recall: to solve $f(x) = 0$, approximate $f$ by its tangent line at the current iterate $x_k$ and find where it crosses zero:

$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}$$

For $f(y) = \frac{1}{y^2} - x$ (whose positive root is $y = 1/\sqrt{x}$), we have $f'(y) = -\frac{2}{y^3}$. Substituting:

$$y_{k+1} = y_k - \frac{1/y_k^2 - x}{-2/y_k^3} = y_k\left(1.5 - 0.5\,x\,y_k^2\right)$$
This is exactly the Newton refinement line in the Quake code!
The floating-point bit tricks provide a clever initial guess, and one Newton iteration refines it to sufficient accuracy.
The Root-Finding Problem
Given a nonlinear function $f$, find $x^*$ such that $f(x^*) = 0$.
Examples:
$x^2 - 2 = 0$ has root $x^* = \sqrt{2}$
$\sin x = 0$ has roots $x^* = n\pi$, $n \in \mathbb{Z}$
$\cos x - x = 0$ has a root near $x^* \approx 0.739$ (no closed form)
For most nonlinear equations, we cannot find closed-form solutions. We need a different approach.
The Big Picture: Iterative Methods
The Quake example illustrates a fundamental pattern in scientific computing: start from an initial guess $x_0$ and repeatedly apply an update rule

$$x_{k+1} = g(x_k), \qquad k = 0, 1, 2, \ldots$$

until the iterates are accurate enough.
The key questions for any iterative method are:
Does it converge? Under what conditions?
How fast? How many iterations do we need?
How sensitive is the answer? (This is where condition numbers come in.)
A Unifying Principle
Newton’s method, bisection, and fixed-point iteration may seem like three unrelated techniques. But they share a common structure: each generates a sequence $x_0, x_1, x_2, \ldots$ that (hopefully) converges to the root.
The key insight is that convergence happens when the iteration shrinks errors:

$$|x_{k+1} - x^*| \le L\,|x_k - x^*| \quad \text{for some constant } L < 1$$
This principle—formalized as the Banach Fixed Point Theorem in Fixed Point Iteration—is one of the most important ideas in numerical analysis. It explains:
Why Newton’s method converges (when it does)
Why bisection always converges (it halves the interval each step)
Why iterative linear solvers like Jacobi and Gauss-Seidel work
The same idea recurs throughout this course: in ODE integrators, optimization algorithms, and beyond.
Learning Outcomes
After completing this chapter, you should be able to:
L4.1: Explain how iterative methods generate sequences of approximations converging to a solution, and state a stopping criterion.
L4.2: Define order of convergence (linear, superlinear, quadratic) and determine how many iterations are needed to achieve a desired accuracy.
L4.3: State the Intermediate Value Theorem and derive the bisection algorithm from it.
L4.4: Calculate the number of bisection iterations required for a given tolerance.
L4.5: Define fixed points, restate root finding as a fixed-point problem, and explain why the choice of $g$ matters.
L4.6: State the Banach Fixed Point Theorem and verify its hypotheses for a given iteration.
L4.7: Derive Newton’s method from Taylor series and identify it as a fixed-point iteration.
L4.8: Explain why Newton’s method converges quadratically for simple roots and identify when it fails (e.g., $f'(x^*) = 0$, bad initial guess).
L4.9: Compute the condition number of a root and explain why multiple roots are ill-conditioned.
L4.10: Define forward error ($|\hat{x} - x^*|$) and backward error ($|f(\hat{x})|$) for root finding, and relate them via the condition number: forward error $\approx$ condition number $\times$ backward error.