A Motivating Example¶
Consider computing $f(x) = \sqrt{x+1} - \sqrt{x}$ for large $x$.
The condition number is $\kappa(x) = \left| \frac{x f'(x)}{f(x)} \right| = \frac{x}{2\sqrt{x(x+1)}} \approx \frac{1}{2}$ — this is a well-conditioned problem!
Yet using 4-digit decimal arithmetic at $x = 1984$: $\sqrt{1985} \to 44.55$ and $\sqrt{1984} \to 44.54$, so the computed difference is $44.55 - 44.54 = 0.01$.
The true value is 0.01122. We lost two significant digits — 10% relative error from a well-conditioned problem!
What went wrong? The algorithm subtracts two nearly-equal numbers, amplifying their individual roundoff errors. This is called subtractive cancellation.
A stable alternative: Rationalize the expression: $\sqrt{x+1} - \sqrt{x} = \dfrac{1}{\sqrt{x+1} + \sqrt{x}}$. In the same 4-digit arithmetic: $\dfrac{1}{44.55 + 44.54} = \dfrac{1}{89.09} \to 0.01122$.
Same problem, different algorithm, full accuracy.
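The same effect is easy to reproduce in double precision by pushing $x$ much larger (a quick NumPy sketch with an illustrative value, not the 4-digit example above):

```python
import numpy as np

x = 1e12

# Naive: subtract two nearly equal numbers -> catastrophic cancellation
naive = np.sqrt(x + 1) - np.sqrt(x)

# Stable: the rationalized form avoids the subtraction entirely
stable = 1.0 / (np.sqrt(x + 1) + np.sqrt(x))

print(naive)   # ~5.000038e-07 -- only ~5 of 16 digits correct
print(stable)  # ~4.99999999999875e-07 -- essentially full precision
```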
This example reveals something fundamental: not all errors are the problem’s fault. We need a framework to separate algorithm quality from problem sensitivity.
Forward Error¶
Consider computing $y = f(x)$. Due to roundoff or approximation, we actually compute $\hat{y} \neq y$. The (absolute) forward error is $|\hat{y} - y|$: the distance between the computed answer and the true one.
Forward error is the most intuitive measure—it directly answers “how wrong is my answer?”
Absolute vs. Relative Error¶
For meaningful comparisons, we often use relative error: $\dfrac{|\hat{y} - y|}{|y|}$ (for $y \neq 0$).
Example: An absolute error of 0.001 means different things for:
$y = 1$: relative error is 0.1% (excellent)
$y = 0.001$: relative error is 100% (useless)
Relative error captures this distinction. When we say “accurate to 6 digits,” we mean relative error $\approx 10^{-6}$.
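As a minimal sketch (the helper names are ours):

```python
def abs_err(y_hat, y):
    """Absolute forward error |y_hat - y|."""
    return abs(y_hat - y)

def rel_err(y_hat, y):
    """Relative forward error |y_hat - y| / |y| (y must be nonzero)."""
    return abs(y_hat - y) / abs(y)

# The same absolute error of 0.001 at two different scales:
print(rel_err(1.001, 1.0))    # ~0.001 -> 0.1%: excellent
print(rel_err(0.002, 0.001))  # 1.0    -> 100%: useless
```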
The Catch¶
Forward error seems like the natural measure—we want to know how wrong we are! But there’s a problem:
Forward error requires knowing the true answer $y = f(x)$.
If we knew the true answer, we wouldn’t need to compute it. This is where backward error comes in.
Backward Error¶
Instead of asking “how wrong is my answer?”, backward error asks a different question: for what nearby input would my computed answer be exactly correct? Formally, the backward error is the smallest $|\Delta x|$ such that $\hat{y} = f(x + \Delta x)$.
In other words: our computed $\hat{y}$ might not equal $f(x)$, but it does equal $f(\hat{x})$ for some nearby $\hat{x} = x + \Delta x$. The backward error measures how far $\hat{x}$ is from $x$.
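A tiny illustration with $f(x) = \sqrt{x}$ (our own example): any approximation $\hat{y}$ is the exact square root of $\hat{y}^2$, so its backward error is directly computable.

```python
x = 2.0
y_hat = 1.41421  # crude approximation to sqrt(2)

# y_hat is the EXACT square root of x_hat = y_hat**2, so the
# backward error needs no knowledge of the true value of sqrt(2).
x_hat = y_hat**2
backward_error = abs(x_hat - x)
print(backward_error)  # ~1.0e-05
```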
Why Backward Error?¶
At first, backward error seems less useful than forward error. But it turns out to be more fundamental for algorithm analysis. Here’s why:
1. Backward error is computable
Forward error requires knowing the true answer—but if we knew that, we wouldn’t need to compute it!
Backward error, on the other hand, is often directly computable:
Linear systems: Given computed $\hat{x}$, the residual $r = b - A\hat{x}$ tells us backward error immediately — we solved the nearby system $Ax = b - r$ exactly (see the sketch below).
Root finding: Given approximate root $\hat{x}$, the residual $|f(\hat{x})|$ measures backward error: $\hat{x}$ is an exact root of the nearby function $f(x) - f(\hat{x})$.
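Here is a sketch for the linear-system case, using the standard normwise relative backward error $\eta = \lVert r \rVert / (\lVert A \rVert\, \lVert \hat{x} \rVert + \lVert b \rVert)$ (the Rigal–Gaches formula, here with Frobenius/2-norms; the matrix is just random test data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)

x_hat = np.linalg.solve(A, b)

# The residual is directly computable -- no true solution required.
r = b - A @ x_hat

# Normwise relative backward error:
eta = np.linalg.norm(r) / (np.linalg.norm(A) * np.linalg.norm(x_hat) + np.linalg.norm(b))
print(eta)  # tiny (~1e-16): solve() found the exact answer to a nearby system
```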
2. Backward error separates algorithm from problem
There are two sources of error:
Algorithm quality: Does the algorithm solve some nearby problem accurately?
Problem sensitivity: How much does the answer change when the problem changes slightly?
Backward error isolates the first question. If an algorithm has small backward error, it’s doing its job well—any remaining forward error is the problem’s fault (ill-conditioning), not the algorithm’s.
3. Backward error accounts for input uncertainty
Real-world inputs have measurement error. If your input $x$ has uncertainty $\epsilon$, then:
You don’t actually know you’re solving for $x$ — you might be solving for some $\tilde{x}$ with $|\tilde{x} - x| \le \epsilon$
An algorithm with backward error $\le \epsilon$ is as good as exact for your purposes
Principle: Don’t ask for more accuracy than your inputs justify.
Residual as Backward Error¶
For many problems, backward error equals the residual—a directly computable quantity.
| Problem | Residual | Interpretation |
|---|---|---|
| Linear system $Ax = b$ | $r = b - A\hat{x}$ | $\hat{x}$ solves $Ax = b - r$ exactly |
| Root finding $f(x) = 0$ | $f(\hat{x})$ | $\hat{x}$ is a root of $f(x) - f(\hat{x})$ |
| Least squares $\min_x \lVert Ax - b \rVert$ | $r = b - A\hat{x}$ | How well $\hat{x}$ fits the data |
Key insight: Backward stability implies forward stability (for well-conditioned problems). This is why we focus on backward stability—it’s the stronger, more useful guarantee.
The Relationship: Forward Error ≤ κ × Backward Error¶
Now we connect backward error to forward error using the condition number from the previous section.
Derivation¶
Suppose we compute $\hat{y}$ instead of the true $y = f(x)$. If the algorithm is backward stable: $\hat{y} = f(x + \Delta x)$ for some small perturbation $\Delta x$.
The forward error is the difference $\hat{y} - y = f(x + \Delta x) - f(x)$.
By Taylor expansion: $f(x + \Delta x) - f(x) \approx f'(x)\, \Delta x$.
Converting to relative errors: $\dfrac{|\hat{y} - y|}{|y|} \approx \left| \dfrac{x f'(x)}{f(x)} \right| \cdot \dfrac{|\Delta x|}{|x|} = \kappa(x) \cdot (\text{relative backward error})$
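A quick numeric check of this relation, using $f(x) = \sqrt{x}$, for which $\kappa(x) = |x f'(x)/f(x)| = 1/2$ everywhere (our own example):

```python
import numpy as np

x = 2.0
y = np.sqrt(x)           # treat the correctly rounded value as "true"

x_pert = x * (1 + 1e-8)  # simulate a relative backward error of 1e-8
y_hat = np.sqrt(x_pert)

backward = abs(x_pert - x) / abs(x)  # 1e-8
forward = abs(y_hat - y) / abs(y)    # ~5e-9
print(forward / backward)            # ~0.5 = kappa
```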
What This Means¶
| Component | Controlled by | Can we improve it? |
|---|---|---|
| Forward error | Both | Indirectly |
| Condition number | The problem | No (it’s math) |
| Backward error | The algorithm | Yes! |
Practical implications:
Well-conditioned + backward stable → accurate: Small $\kappa$ and small backward error guarantee small forward error
Ill-conditioned → trouble: Even perfect algorithms give poor forward error when $\kappa$ is large
Focus on backward stability: Write algorithms with small backward error; that’s all you can control
Unstable Algorithms: Common Pitfalls¶
Even well-conditioned problems can produce terrible results with unstable algorithms. Three operations are particularly dangerous:
| Operation | Problem | Stable Alternative |
|---|---|---|
| Subtracting close numbers | Relative error explodes | Reformulate algebraically |
| Dividing by small numbers | Amplifies numerator error | Use logarithms or rescale |
| Adding numbers of vastly different magnitude | Small values lost | Sort and sum smallest first |
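The third pitfall is easy to see directly in double precision (a quick illustration; the exact numbers depend on the float format):

```python
# Doubles near 1e16 are spaced 2.0 apart, so adding 1.0 is a no-op:
print((1e16 + 1.0) - 1e16)  # 0.0

# Naive order loses every small term; summing smallest-first keeps them:
values = [1e16] + [1.0] * 1000
print(sum(values) - 1e16)          # 0.0
print(sum(sorted(values)) - 1e16)  # 1000.0
```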
Example: The Quadratic Formula
Solve $x^2 - 10^6 x + 1 = 0$. The roots are $x_1 \approx 10^6$ and $x_2 \approx 10^{-6}$.
Standard formula for the small root: $x_2 = \dfrac{10^6 - \sqrt{10^{12} - 4}}{2}$
Catastrophic cancellation! With fewer than about 12 significant digits, $\sqrt{10^{12} - 4}$ rounds to exactly $10^6$ and the formula returns 0. The true value is $10^{-6}$, so we have 100% relative error.
Stable alternative: Use the root product $x_1 x_2 = c/a = 1$, so $x_2 = 1/x_1$. Compute $x_1 = \dfrac{10^6 + \sqrt{10^{12} - 4}}{2}$ (an addition, no cancellation), then $x_2 = 1/x_1$ with full precision.
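A sketch of this recipe as code (the function name is ours; assumes $a \neq 0$ and real roots):

```python
import math

def stable_quadratic_roots(a, b, c):
    """Both roots of a*x**2 + b*x + c = 0 without cancellation.

    Picks the sign of the square root that ADDS to -b, then recovers
    the other root from the product x1 * x2 = c / a.
    """
    d = math.sqrt(b * b - 4.0 * a * c)
    q = -0.5 * (b + math.copysign(d, b))  # same signs: no cancellation
    x1 = q / a   # large-magnitude root, computed safely
    x2 = c / q   # small root via the product of roots
    return x1, x2

print(stable_quadratic_roots(1.0, -1e6, 1.0))
# ~(999999.999999, 1.000000000001e-06): small root at full precision
```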
Example: Computing $e^x - 1$
For small , direct computation fails:
```python
import numpy as np

x = 1e-15
result = np.exp(x) - 1  # Returns ~1.1e-15 (only 1 correct digit!)
```

Why? $e^{10^{-15}} = 1 + 10^{-15} + \cdots$, but the spacing of doubles near 1.0 is $2^{-52} \approx 2.2 \times 10^{-16}$, so `np.exp(x)` is stored as $1 + 1.11 \times 10^{-15}$. Subtracting 1 then returns $1.11 \times 10^{-15}$: everything about $x$ below that spacing is already gone.
Stable alternative: Use the dedicated function:
```python
result = np.expm1(1e-15)  # Returns 1.0e-15 (full precision!)
```

Example: Kahan Summation
When summing many numbers, roundoff errors accumulate. Kahan summation tracks and compensates for these errors:
```python
def kahan_sum(x):
    """Compensated summation — much more accurate than naive sum."""
    s = 0.0  # Running sum
    c = 0.0  # Compensation for lost low-order bits
    for xi in x:
        y = xi - c       # Compensated value to add
        t = s + y        # New sum (may lose precision)
        c = (t - s) - y  # Recover what was lost
        s = t
    return s
```

This achieves $O(\varepsilon)$ error instead of $O(n\varepsilon)$ for naive summation.
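A quick way to see the difference, using `math.fsum` (Python's correctly rounded sum) as the reference (our own test values):

```python
import math

values = [0.1] * 1_000_000

exact = math.fsum(values)        # correctly rounded reference: ~1e5
naive = sum(values)
compensated = kahan_sum(values)

print(abs(naive - exact))        # ~1e-6: drift grows with n
print(abs(compensated - exact))  # ~0.0: the lost bits were recovered
```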
Summary¶
| Concept | Measures | Controlled by | Computable? |
|---|---|---|---|
| Forward error | Distance to true answer | Problem + algorithm | Usually not |
| Backward error | Perturbation to input | Algorithm | Often yes (residual) |
| Condition number | Sensitivity of problem | Problem (math) | Sometimes |
The workflow:
Check if the problem is well-conditioned (small $\kappa$)
Use a backward-stable algorithm (small backward error)
Then forward error will be small automatically
If forward error is large despite backward stability, the problem is ill-conditioned—reformulate if possible.