
Overview

Every floating-point computation introduces error. In the standard model, each basic operation returns the exact result perturbed by a small relative error:

$$\mathrm{fl}(x \circ y) = (x \circ y)(1 + \delta), \qquad |\delta| \leq \epsilon_{\text{mach}}.$$

Understanding these errors—and designing algorithms that control them—is essential for reliable scientific computing. This chapter builds up the error analysis framework in four steps:

  1. Floating-point representation — Computers can’t store most real numbers exactly. Every number has a small representation error bounded by machine epsilon.

  2. The Quake fast inverse square root — A famous algorithm showing how deep understanding of floating-point enables creative numerical tricks. It bridges to Newton’s method in the next chapter.

  3. Condition numbers — Some mathematical problems amplify errors. The condition number $\kappa$ measures this sensitivity. This is a property of the problem, not the algorithm.

  4. Forward and backward error — Algorithms introduce additional errors. Backward error measures algorithm quality; forward error is what we actually care about.
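Step 1 can be seen directly at the Python prompt. The sketch below (a minimal illustration, not code from this chapter) shows that 0.1 has no exact binary64 representation, and that machine epsilon bounds the relative representation error:

```python
import sys

# 0.1 cannot be stored exactly in binary floating point:
print(f"{0.1:.20f}")           # the stored value differs from 1/10
print(0.1 + 0.2 == 0.3)        # False -- small rounding errors accumulate

# Machine epsilon bounds the relative representation error.
eps = sys.float_info.epsilon   # 2**-52 for IEEE 754 double precision
print(eps)
```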

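Step 2's bit trick can be sketched in Python using `struct` to reinterpret a 32-bit float's bits as an integer (a hedged port of the well-known Quake III routine; the function name is our own):

```python
import struct

def fast_inv_sqrt(x: float) -> float:
    """Approximate 1/sqrt(x) via the Quake III bit trick (32-bit floats)."""
    i = struct.unpack("<I", struct.pack("<f", x))[0]  # reinterpret float bits as int
    i = 0x5F3759DF - (i >> 1)                         # magic constant minus half the bits
    y = struct.unpack("<f", struct.pack("<I", i))[0]  # reinterpret bits back as float
    return y * (1.5 - 0.5 * x * y * y)                # one Newton refinement step
```

The single Newton step at the end is the bridge to the next chapter: the bit manipulation supplies a good initial guess, and Newton's method sharpens it.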
The Golden Rule

$$\boxed{\text{Forward error} \leq \text{Condition number} \times \text{Backward error}}$$

This inequality cleanly separates the problem's sensitivity from the algorithm's quality:

A stable algorithm produces answers with small backward error. An ill-conditioned problem amplifies any error, no matter how good the algorithm.
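The golden rule can be checked numerically. The sketch below (our own illustration, assuming NumPy; the Hilbert matrix and variable names are not from this chapter) solves an ill-conditioned linear system with a backward-stable solver: the backward error is tiny, yet the forward error is much larger because the condition number amplifies it:

```python
import numpy as np

# An ill-conditioned test problem: the 8x8 Hilbert matrix.
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true

x_hat = np.linalg.solve(A, b)  # LU with partial pivoting: backward stable

# Normwise relative forward error: distance from the true answer.
forward = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)

# Normwise backward error (Rigal-Gaches style): size of the smallest
# data perturbation that x_hat solves exactly.
r = b - A @ x_hat
backward = np.linalg.norm(r) / (
    np.linalg.norm(A) * np.linalg.norm(x_hat) + np.linalg.norm(b)
)

kappa = np.linalg.cond(A)
print(f"forward  = {forward:.2e}")
print(f"backward = {backward:.2e}")
print(f"kappa    = {kappa:.2e}")
```

The solver is not at fault here: the backward error sits near machine epsilon, but $\kappa \sim 10^{10}$ means ten digits of accuracy are lost to the problem itself.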

Learning Outcomes

After completing this chapter, you should be able to: