
Big Idea

This chapter is about iterative methods: algorithms that generate sequences of approximations converging to a solution. Newton’s method—which you know from calculus—is the workhorse, but understanding when and why it works requires careful analysis.

Motivation: The Quake Fast Inverse Square Root

Before diving into theory, consider this legendary piece of code from Quake III Arena (1999):

```c
float Q_rsqrt(float x) {
    long i;
    float x2, y;
    x2 = x * 0.5F;
    y  = x;
    i  = *(long*)&y;                    // interpret float bits as integer
    i  = 0x5f3759df - (i >> 1);         // magic!
    y  = *(float*)&i;
    y  = y * (1.5F - (x2 * y * y));     // Newton refinement
    return y;
}
```

This computes $y = 1/\sqrt{x}$, a function that cannot be evaluated directly using just $+, -, \times, \div$. How do we turn this into something computable?

Translating to a Root-Finding Problem

Remark 1 (Key Insight)

Finding $y = 1/\sqrt{x}$ is equivalent to solving $g(y) = 0$ where $g(y) = 1/y^2 - x$.

If $y = 1/\sqrt{x}$, then $y^2 = 1/x$, so $1/y^2 = x$, which means $1/y^2 - x = 0$.

Why translate to root finding? Because you already know an algorithm for finding roots: Newton’s method from calculus.

Newton’s Method Refresher

Recall: to solve $g(y) = 0$, approximate $g$ by its tangent line and find where it crosses zero:

$$
y_{n+1} = y_n - \frac{g(y_n)}{g'(y_n)}
$$

For $g(y) = 1/y^2 - x$, we have $g'(y) = -2/y^3$. Substituting:

$$
y_{n+1} = y_n - \frac{1/y_n^2 - x}{-2/y_n^3} = y_n + \frac{y_n}{2}\left(1 - x y_n^2\right) = y_n\left(\frac{3}{2} - \frac{x}{2} y_n^2\right)
$$

This is exactly the last line of the Quake code!

The floating-point bit tricks provide a clever initial guess, and one Newton iteration refines it to sufficient accuracy.

The Root-Finding Problem

Given a nonlinear function $f(x)$, find $x^*$ such that $f(x^*) = 0$.

Examples:

For most nonlinear equations, we cannot find closed-form solutions. We need a different approach.

The Big Picture: Iterative Methods

The Quake example illustrates a fundamental pattern in scientific computing:

Definition 1 (Iterative Methods)

Many problems cannot be solved in a finite number of arithmetic operations. Instead, we construct a sequence $x_0, x_1, x_2, \ldots$ that converges to the solution:

$$
\lim_{n \to \infty} x_n = x^*
$$

In practice, we stop when $|x_{n+1} - x_n|$ or some other criterion is sufficiently small.

The key questions for any iterative method are:

  1. Does it converge? Under what conditions?

  2. How fast? How many iterations do we need?

  3. How sensitive is the answer? (This is where condition numbers come in.)

A Unifying Principle

Newton’s method, bisection, and fixed-point iteration may seem like three unrelated techniques. But they share a common structure: each generates a sequence that (hopefully) converges to the root.

The key insight is that convergence happens when the iteration shrinks errors:

Contraction Implies Convergence

If each step of an iteration reduces the error by a factor $L < 1$:

$$
|x_{n+1} - x^*| \leq L \, |x_n - x^*|
$$

then the sequence converges geometrically to $x^*$.

This principle—formalized as the Banach Fixed Point Theorem in Fixed Point Iteration—is one of the most important ideas in numerical analysis. It explains:

The same idea recurs throughout this course: in ODE integrators, optimization algorithms, and beyond.

Learning Outcomes

After completing this chapter, you should be able to: