
In the previous section, we saw that the condition number measures how sensitive a problem is to input perturbations. But sensitivity of the problem is only half the story — the algorithm we choose matters just as much.

Definitions

Example: Two Algorithms for the Same Function

Example: The Finite Difference Revisited

The same pattern — well-conditioned problem, unstable algorithm — appears in a computation we already know well.

Recognizing this pattern — is the problem sensitive, or is the algorithm choosing a sensitive path? — is one of the central skills in numerical analysis.
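The trade-off can be seen numerically. Below is a minimal sketch (my own illustration, not reproduced from this article) using the forward difference of $\exp$ at $x = 1$: shrinking $h$ reduces truncation error at first, but once $h$ is very small the subtraction in the numerator cancels most significant digits and the error grows again.

```python
# Assumed example: forward-difference approximation of f'(x) for f = exp
# at x = 1, where the exact derivative is exp(1). The derivative itself is
# well conditioned, but the formula (f(x + h) - f(x)) / h subtracts nearly
# equal numbers as h shrinks, so rounding error eventually dominates.
import math

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

exact = math.exp(1.0)
for h in (1e-1, 1e-4, 1e-8, 1e-12):
    err = abs(forward_diff(math.exp, 1.0, h) - exact)
    print(f"h = {h:.0e}   error = {err:.2e}")
```

The error is smallest near $h \approx \sqrt{\varepsilon_{\text{mach}}}$, the usual balance point between truncation and rounding error for this formula.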

Floating-Point Operations to Watch For

Not every source of error is an ill-conditioning issue. The three operations below can each cause trouble, but for different reasons:

  1. Subtracting nearly equal numbers ($x - y$ when $x \approx y$): Subtraction of nearly equal numbers is genuinely ill-conditioned: the condition number of subtraction is $\kappa = (|a| + |b|)/|a - b| \to \infty$ as $a \to b$. The key insight is that an algorithm may choose to subtract nearly equal numbers as an intermediate step, even when the overall problem doesn’t require it. If the subtraction can be avoided by algebraic reformulation, the algorithm becomes stable.

  2. Dividing by a small number ($x / y$ when $|y| \ll 1$): This is not ill-conditioned: the relative condition number of $f(y) = x/y$ with respect to $y$ is $\kappa = 1$. The issue is that small absolute errors in $y$ become large absolute errors in the result. If $y$ itself was computed with some absolute error $\delta$, then $x/(y + \delta) - x/y \approx -x\delta/y^2$, which is large when $y$ is small. Remedy: rewrite using logarithms or rescale the computation.

  3. Adding numbers of vastly different magnitude ($x + y$ when $|x| \gg |y|$): This is also not an ill-conditioning issue. Subtraction is not involved, so there is no cancellation. The problem is that floating-point numbers have finite precision: if $y$ is smaller than the spacing between consecutive floats near $x$, then $\mathrm{fl}(x + y) = x$ and $y$ is simply lost. Remedy: sort and sum smallest first, or use compensated summation (Kahan summation).
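A classic illustration of the first pitfall (an assumed example, not taken from this article): evaluating $f(x) = \sqrt{x+1} - \sqrt{x}$ for large $x$. The problem is well conditioned, but the obvious formula subtracts two nearly equal square roots; rationalizing removes the subtraction entirely.

```python
# Assumed example: two algorithms for the same well-conditioned function.
import math

def f_naive(x):
    # cancellation: sqrt(x + 1) and sqrt(x) agree in most of their digits
    return math.sqrt(x + 1) - math.sqrt(x)

def f_stable(x):
    # algebraically equivalent, but no subtraction of close numbers
    return 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

x = 1e12
print(f_naive(x))   # loses several digits to cancellation
print(f_stable(x))  # accurate to nearly full double precision
```

The two functions are equal in exact arithmetic; only the order of floating-point operations differs, which is exactly the distinction between an unstable and a stable algorithm.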
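For the third pitfall, here is a minimal sketch of compensated summation (the values and the `kahan_sum` helper are my own illustration, not from this article). Near $10^{16}$ the spacing between consecutive doubles is $2$, so naively adding $1.0$ has no effect; Kahan summation tracks the lost low-order bits in a separate compensation variable.

```python
# Assumed example: absorption in naive summation vs. Kahan summation.
def kahan_sum(values):
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - comp              # apply the correction from the previous step
        t = total + y
        comp = (t - total) - y    # what was lost when adding y to total
        total = t
    return total

data = [1e16] + [1.0] * 1000
print(sum(data))        # the thousand 1.0s are absorbed and lost
print(kahan_sum(data))  # compensation recovers them
```

Sorting the inputs so the small terms are added first would also help here, at the cost of an $O(n \log n)$ sort; Kahan summation stays $O(n)$.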

Looking Ahead

We will formalize these ideas using forward and backward error when we study linear systems. There, we will see that backward stability — the precise version of “solves a nearby problem exactly” — is the gold standard for numerical algorithms. For now, the key takeaway is: distinguish between the problem (conditioning) and the algorithm (stability).