
Motivation: The Quake Fast Inverse Square Root

Before diving into theory, consider this legendary piece of code from Quake III Arena (1999):

float Q_rsqrt(float x) {
    long i;
    float x2, y;
    x2 = x * 0.5F;
    y  = x;
    i  = *(long*)&y;                    // interpret float bits as integer
    i  = 0x5f3759df - (i >> 1);         // magic!
    y  = *(float*)&i;
    y  = y * (1.5F - (x2 * y * y));     // Newton refinement
    return y;
}

This computes $y = 1/\sqrt{x}$, a function that cannot be evaluated directly using just $+$, $-$, $\times$, $\div$. How do we turn this into something computable?

Translating to a Root-Finding Problem

Why translate to root finding? Because you already know an algorithm for finding roots: Newton’s method from calculus.

Newton’s Method Refresher

Recall: to solve $g(y) = 0$, approximate $g$ by its tangent line and find where it crosses zero:

$$
y_{n+1} = y_n - \frac{g(y_n)}{g'(y_n)}
$$
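
To make this update rule concrete, here is a minimal generic sketch in C (the names `newton` and `g_prime` and the stopping rule are illustrative assumptions, not part of the original notes):

#include <math.h>

/* Generic Newton iteration: solve g(y) = 0 from initial guess y0.
   Stops when the Newton step is smaller than tol or after max_iter steps. */
double newton(double (*g)(double), double (*g_prime)(double),
              double y0, double tol, int max_iter) {
    double y = y0;
    for (int n = 0; n < max_iter; n++) {
        double step = g(y) / g_prime(y);  /* tangent-line correction */
        y = y - step;
        if (fabs(step) < tol) break;      /* step is tiny: accept y */
    }
    return y;
}
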
Applying to $g(y) = 1/y^2 - x$

For $g(y) = 1/y^2 - x$, we have $g'(y) = -2/y^3$. Substituting:

$$
y_{n+1} = y_n - \frac{1/y_n^2 - x}{-2/y_n^3} = y_n + \frac{y_n}{2}\left(1 - x y_n^2\right) = y_n\left(\frac{3}{2} - \frac{x}{2} y_n^2\right)
$$

This is exactly the last line of the Quake code!
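
As a quick sanity check (a worked example we add here, not in the original): for $x = 4$ the true value is $1/\sqrt{4} = 0.5$. Starting from $y_0 = 0.4$, one step gives $y_1 = 0.4\left(\tfrac{3}{2} - 2 \cdot 0.16\right) = 0.472$, and a second gives $y_2 = 0.472\left(\tfrac{3}{2} - 2 \cdot 0.472^2\right) \approx 0.4977$. The error drops from $0.1$ to about $0.002$ in two steps.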

The floating-point bit tricks provide a clever initial guess, and one Newton iteration refines it to sufficient accuracy.
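
If you want to experiment with this today, note that the original relies on `long` being 32 bits and on type-punning through pointer casts, neither of which modern C compilers are required to honor. Here is one strict-aliasing-safe sketch of the same idea (same constant, same Newton step; the function name is ours):

#include <stdint.h>
#include <string.h>

float rsqrt_portable(float x) {
    float x2 = 0.5f * x;
    float y  = x;
    uint32_t i;
    memcpy(&i, &y, sizeof i);        /* reinterpret the float's bits as a 32-bit integer */
    i = 0x5f3759df - (i >> 1);       /* magic constant: cheap initial guess */
    memcpy(&y, &i, sizeof i);
    y = y * (1.5f - x2 * y * y);     /* one Newton refinement */
    return y;
}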

The Root-Finding Problem

Given a nonlinear function $f(x)$, find $x^*$ such that $f(x^*) = 0$.

Examples:

  1. $x^5 - x - 1 = 0$: by the Abel–Ruffini theorem, general polynomials of degree five and up have no solution in radicals.

  2. $\cos(x) = x$: no amount of algebra isolates $x$.

For most nonlinear equations, we cannot find closed-form solutions. We need a different approach.

The Big Picture: Iterative Methods

The Quake example illustrates a fundamental pattern in scientific computing: start from a cheap initial guess, apply an update rule that improves it, and repeat until the answer is accurate enough. (A generic code sketch of this loop follows the list below.)

The key questions for any iterative method are:

  1. Does it converge? Under what conditions?

  2. How fast? How many iterations do we need?

  3. How sensitive is the answer? (This is where condition numbers come in.)
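
A generic sketch of that loop in C (every name here is an illustrative placeholder; bisection tracks an interval rather than a single point, but it follows the same pattern):

#include <math.h>

/* Common shape of an iterative method: improve a guess until successive
   iterates stop changing, or the iteration budget runs out. */
double iterate(double (*update)(double), double x0,
               double tol, int max_iter) {
    double x = x0;
    for (int n = 0; n < max_iter; n++) {
        double x_next = update(x);   /* method-specific improvement rule */
        if (fabs(x_next - x) < tol)  /* successive iterates agree: stop */
            return x_next;
        x = x_next;
    }
    return x;  /* budget exhausted; may not have converged */
}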

A Unifying Principle

Newton’s method, bisection, and fixed-point iteration may seem like three unrelated techniques. But they share a common structure: each generates a sequence that (hopefully) converges to the root.

The key insight is that convergence happens when the iteration shrinks errors: if each step satisfies

$$
|x_{n+1} - x^*| \le L\,|x_n - x^*| \qquad \text{for some fixed } L < 1,
$$

then the error decays at least geometrically and the iterates converge to $x^*$.

This principle, formalized as the Banach Fixed Point Theorem in Fixed Point Iteration, is one of the most important ideas in numerical analysis. It explains:

  1. when fixed-point iteration converges,

  2. why Newton's method is so fast near a root (it shrinks errors quadratically), and

  3. why bisection always converges (it halves the error bound at every step).
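
As a quick numerical illustration (an example we add here, not from the original): iterating $x_{n+1} = \cos(x_n)$ converges because $|\cos'(x)| = |\sin(x)| < 1$ near the fixed point $x^* \approx 0.739$, so each step shrinks the error by a factor of roughly $0.67$.

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Fixed-point iteration x_{n+1} = cos(x_n): a contraction near
       x* ≈ 0.7391, since |sin(x)| ≈ 0.67 < 1 there. */
    double x = 1.0;
    for (int n = 1; n <= 10; n++) {
        x = cos(x);
        printf("x_%2d = %.10f\n", n, x);
    }
    return 0;
}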

The same idea recurs throughout this course: in ODE integrators, optimization algorithms, and beyond.

Learning Outcomes

After completing this chapter, you should be able to: