Motivation

Computing unit normal vectors is essential for lighting, shading, and reflection calculations in real-time 3D graphics.

Given a vector $\mathbf{v} = (v_1, v_2, v_3)$, its unit vector is:

$$\hat{\mathbf{v}} = \frac{\mathbf{v}}{\sqrt{v_1^2 + v_2^2 + v_3^2}}$$

This requires evaluating the inverse square root: $f(x) = \frac{1}{\sqrt{x}}$.
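As a minimal sketch in C (normalize3 is just an illustrative name, not taken from any particular engine), normalization boils down to one inverse square root and three multiplications:

#include <math.h>

/* Normalize a 3-component vector in place: one inverse square root,
   then three multiplications. */
static void normalize3(float v[3])
{
    float inv_len = 1.0f / sqrtf(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    v[0] *= inv_len;
    v[1] *= inv_len;
    v[2] *= inv_len;
}

The 1.0f / sqrtf(...) expression is exactly the operation a fast inverse square root routine replaces.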

In the 1990s, when computational power was limited, a fast implementation was crucial for real-time graphics.

The Famous Code

From the Quake III Arena source code:

float Q_rsqrt( float x )
{
    long i;
    float x2, y;
    const float threehalfs = 1.5F;

    x2 = x * 0.5F;
    y  = x;
    i  = * ( long * ) &y;               // evil floating point bit level hacking
    i  = 0x5f3759df - ( i >> 1 );       // what the fuck?
    y  = * ( float * ) &i;
    y  = y * ( threehalfs - ( x2 * y * y ) );   // 1st iteration
    return y;
}
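As a quick sanity check, here is a minimal test harness (a sketch, assuming the Q_rsqrt definition above is pasted into the same file) comparing it against the library result:

#include <stdio.h>
#include <math.h>

int main(void)
{
    float xs[] = { 0.25f, 1.0f, 2.0f, 10.0f, 1000.0f };
    for (int k = 0; k < 5; ++k) {
        float x      = xs[k];
        float approx = Q_rsqrt(x);          /* bit trick + one Newton step */
        float exact  = 1.0f / sqrtf(x);     /* reference value */
        printf("x = %8.2f   Q_rsqrt = %.6f   1/sqrt = %.6f   rel. err = %.4f%%\n",
               x, approx, exact, 100.0f * fabsf(approx - exact) / exact);
    }
    return 0;
}

One portability caveat: the original code assumes long is a 32-bit type; on LP64 platforms, where long is 64 bits wide, the pointer cast reads the wrong width, so a faithful modern port would use int32_t.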

Let’s understand what this does.

The Key Insight: Logarithms from Bit Patterns

We want to compute $y = \frac{1}{\sqrt{x}}$. Taking logarithms:

$$\log_2(y) = -\frac{1}{2}\log_2(x)$$

From the IEEE 754 representation of a positive float $x$, where $E_x$ is the stored exponent and $m_x \in [0, 1)$ is the fractional part of the mantissa:

$$\text{fl}(x) = 2^{E_x - 127}(1 + m_x)$$

Taking $\log_2$:

$$\log_2(\text{fl}(x)) = E_x - 127 + \log_2(1 + m_x)$$

Since $m_x \in [0, 1)$, we can approximate:

$$\log_2(1 + m_x) \approx m_x + \sigma$$

where $\sigma \approx 0.0430$ is a correction constant.
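A quick numeric check of this approximation (a throwaway C sketch, not part of the original code) shows that the gap $\log_2(1+m) - m$ ranges from 0 up to about 0.086 on $[0, 1)$, so a mid-range correction near 0.043 is a reasonable compromise:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* How far does log2(1+m) drift from the straight line m on [0,1)? */
    for (double m = 0.0; m < 1.0; m += 0.125)
        printf("m = %.3f   log2(1+m) - m = %.4f\n", m, log2(1.0 + m) - m);
    return 0;
}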

The Magic Formula

Substituting $m_x = M_x / 2^{23}$ (where $M_x$ is the integer mantissa):

$$\log_2(\text{fl}(x)) \approx \frac{1}{2^{23}}(M_x + 2^{23} E_x) + \sigma - 127$$

The key observation: the integer interpretation of float bits approximates the logarithm!

$$\log_2(\text{fl}(x)) \approx \frac{1}{2^{23}} \cdot \text{Int}(x) + \sigma - 127$$
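To see this numerically, here is a standalone C sketch; the memcpy is simply a well-defined way to read the bit pattern, equivalent in spirit to the pointer cast in the Quake code:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

int main(void)
{
    const double sigma = 0.0430;
    float xs[] = { 0.01f, 0.5f, 1.0f, 3.7f, 100.0f, 12345.0f };
    for (int k = 0; k < 6; ++k) {
        float    x = xs[k];
        uint32_t bits;
        memcpy(&bits, &x, sizeof bits);                    /* Int(x): the raw bit pattern */
        double approx = bits / 8388608.0 + sigma - 127.0;  /* Int(x)/2^23 + sigma - 127 */
        printf("x = %10.4f   bit-pattern log2 = %8.4f   true log2 = %8.4f\n",
               x, approx, log2(x));
    }
    return 0;
}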

Applying this approximation to both sides of $\log_2(y) = -\frac{1}{2}\log_2(x)$ and solving for $\text{Int}(y)$:

$$\text{Int}(y) = \underbrace{\frac{3}{2} \cdot 2^{23}(127 - \sigma)}_{\text{Magic Number}} - \frac{1}{2}\,\text{Int}(x)$$

The magic number 0x5f3759df comes from this formula!
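Plugging numbers in (a quick check, not part of the original source): with $\sigma = 0.0430$ the formula gives roughly 0x5F37BE76, close to but not exactly the shipped constant. The constant 0x5F3759DF itself corresponds to $\sigma \approx 0.0450$, which suggests it was tuned beyond this first-order derivation.

#include <stdio.h>

int main(void)
{
    double sigma = 0.0430;                              /* correction constant from the text */
    double magic = 1.5 * 8388608.0 * (127.0 - sigma);   /* (3/2) * 2^23 * (127 - sigma) */
    printf("formula with sigma = %.4f:  0x%08lX\n", sigma, (unsigned long)magic);
    printf("constant used in Quake III: 0x5F3759DF\n");
    return 0;
}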

The line i = 0x5f3759df - (i >> 1) computes exactly this: the right shift i >> 1 halves $\text{Int}(x)$ as an integer, and the constant 0x5f3759df plays the role of $\frac{3}{2} \cdot 2^{23}(127 - \sigma)$.

Newton’s Method Refinement

The bit manipulation gives a rough approximation. To improve accuracy, we apply Newton’s method to:

$$g(y) = \frac{1}{y^2} - x$$

The root of $g$ is $y = \frac{1}{\sqrt{x}}$.

With $g'(y) = -\frac{2}{y^3}$, Newton’s iteration becomes:

$$y_{n+1} = y_n - \frac{g(y_n)}{g'(y_n)} = y_n\left(\frac{3}{2} - \frac{x}{2}y_n^2\right)$$

This is exactly the line marked // 1st iteration in the code, with x2 = x/2 and threehalfs = 3/2:

y = y * (threehalfs - (x2 * y * y));
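To watch the refinement converge, here is a small standalone sketch; the starting guess is deliberately crude rather than taken from the bit trick:

#include <stdio.h>
#include <math.h>

int main(void)
{
    float x     = 3.7f;               /* arbitrary test input */
    float x2    = 0.5f * x;
    float y     = 0.5f;               /* deliberately rough initial guess */
    float exact = 1.0f / sqrtf(x);

    for (int n = 1; n <= 4; ++n) {
        y = y * (1.5f - x2 * y * y);  /* the Newton step from the code */
        printf("iteration %d:  y = %.7f   rel. err = %.2e\n",
               n, y, fabsf(y - exact) / exact);
    }
    return 0;
}

Each iteration roughly squares the relative error (quadratic convergence), which is why a single step on top of a good bit-level guess is sufficient for graphics work.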

Why It Works

  1. Bit manipulation exploits the logarithmic relationship between a float’s value and its bit pattern to get a rough initial guess

  2. Newton’s method rapidly refines this guess (one iteration often suffices for graphics applications)

The algorithm is remarkably accurate while using only an integer shift, an integer subtraction, and a few floating-point multiplications and subtractions.

No expensive division or square root operations are needed.

Modern Relevance

Modern CPUs have dedicated instructions for the approximate inverse square root (e.g., rsqrtss on x86), making this trick less necessary today.
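For comparison, the hardware route looks like this on x86 (a minimal sketch using SSE intrinsics; rsqrtss is itself an approximation with roughly 12 bits of precision, so code needing full accuracy still follows it with a Newton step):

#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics (rsqrtss) */

int main(void)
{
    float x = 3.7f;
    /* Hardware approximate reciprocal square root. */
    float y = _mm_cvtss_f32(_mm_rsqrt_ss(_mm_set_ss(x)));
    printf("rsqrtss(%g) ~= %f\n", x, y);
    return 0;
}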