In the previous section, we saw that the condition number measures how sensitive a problem is to input perturbations. But sensitivity of the problem is only half the story — the algorithm we choose matters just as much.
Definitions
Definition 1 (Stable Algorithm)
A stable algorithm for a given problem yields a numerical approximation that is the exact solution of an only slightly perturbed problem. Equivalently, a stable algorithm does not amplify rounding errors beyond what the problem’s conditioning requires.
An unstable algorithm magnifies rounding errors, producing results far worse than the problem’s conditioning would suggest.
The two key questions in any numerical computation are:
Is the problem well-conditioned? (condition number — intrinsic to the mathematics)
Is the algorithm stable? (algorithm design — what we can control)
If both answers are yes, the computed result will be accurate. If the problem is ill-conditioned, no algorithm can help. But if the problem is well-conditioned and the result is poor, the algorithm is to blame.
Example: Two Algorithms for the Same Function
Example 1 (Stable vs. Unstable Evaluation of $\sqrt{x+1} - \sqrt{x}$)
Consider $f(x) = \sqrt{x+1} - \sqrt{x}$ for $x > 0$. The condition number satisfies

$$\kappa_f(x) = \left| \frac{x f'(x)}{f(x)} \right| = \frac{x}{2\sqrt{x(x+1)}} \le \frac{1}{2}$$

— this is a well-conditioned problem.
Algorithm 1 (unstable): Evaluate $\sqrt{x+1} - \sqrt{x}$ directly at $x = 500$ using 4-digit decimal arithmetic:

$$\sqrt{501} \doteq 22.38, \qquad \sqrt{500} \doteq 22.36, \qquad 22.38 - 22.36 = 0.02000.$$

The true value is $0.022349\ldots$, so only the leading digit of the computed result is correct — 10% relative error from a well-conditioned problem! The subtraction of nearly equal numbers is ill-conditioned, with $\kappa_{\mathrm{sub}} = (22.38 + 22.36)/0.02 \approx 2 \times 10^{3}$, causing catastrophic cancellation.
Algorithm 2 (stable): Rationalize the expression:

$$f(x) = \sqrt{x+1} - \sqrt{x} = \frac{1}{\sqrt{x+1} + \sqrt{x}}.$$

Now:

$$\frac{1}{22.38 + 22.36} = \frac{1}{44.74} \doteq 0.02235,$$

which is correct to all four digits.
Same problem, different algorithm, full accuracy. Algorithm 1 is unstable because it introduces an ill-conditioned subtraction step; Algorithm 2 avoids the subtraction and is stable.
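This contrast is easy to reproduce with Python's decimal module, which can mimic 4-digit decimal arithmetic. A minimal sketch, evaluating both algorithms for $f(x) = \sqrt{x+1} - \sqrt{x}$ at the illustrative point $x = 500$:

```python
from decimal import Decimal, getcontext

getcontext().prec = 4  # every operation rounds to 4 significant digits

x = Decimal(500)       # illustrative input; any large x shows the effect
a = (x + 1).sqrt()     # sqrt(501) -> 22.38
b = x.sqrt()           # sqrt(500) -> 22.36

unstable = a - b       # Algorithm 1: catastrophic cancellation
stable = 1 / (a + b)   # Algorithm 2: rationalized form

print(unstable)        # 0.02     (only the leading digit is correct)
print(stable)          # 0.02235  (correct to all four digits)
```

The two algorithms consume the same rounded square roots; only the stable one preserves their information.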
Example: The Finite Difference Revisited
The same pattern — well-conditioned problem, unstable algorithm — appears in a computation we already know well.
Example 2 (Computing $f'(x)$ — Well-Conditioned Problem, Unstable Algorithm)
Consider computing $f'(x)$ for a smooth function $f$ at a fixed point $x$. The problem's condition number is:

$$\kappa(x) = \left| \frac{x\, f''(x)}{f'(x)} \right|,$$

which is moderate for typical $f$ and $x$. The problem is well-conditioned. But the forward difference algorithm

$$f'(x) \approx \frac{f(x+h) - f(x)}{h}$$

computes $f'(x)$ via the subtraction $f(x+h) - f(x)$, whose condition number is:

$$\frac{|f(x+h)| + |f(x)|}{|f(x+h) - f(x)|} \approx \frac{2\,|f(x)|}{h\,|f'(x)|} \longrightarrow \infty \quad \text{as } h \to 0.$$
The algorithm introduces an ill-conditioned subtraction step that the problem itself does not require. A different method (e.g., automatic differentiation) avoids this subtraction entirely and is stable.
Recognizing this pattern — is the problem sensitive, or is the algorithm choosing a sensitive path? — is one of the central skills in numerical analysis.
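A short numerical sketch makes the instability visible. The choice $f(x) = e^x$ at $x = 1$ below is purely illustrative: the forward-difference error first shrinks with $h$, then grows again once cancellation in $f(x+h) - f(x)$ dominates, and for $h$ below machine epsilon the computed difference collapses entirely.

```python
import math

f = math.exp          # illustrative choice; the effect is generic
x = 1.0
exact = math.exp(x)   # f'(x) = exp(x) for this particular f

for h in [1e-4, 1e-8, 1e-12, 1e-16]:
    fd = (f(x + h) - f(x)) / h        # forward difference
    rel_err = abs(fd - exact) / exact
    print(f"h = {h:.0e}   relative error = {rel_err:.1e}")
```

At $h = 10^{-16}$, the rounded value of $x + h$ equals $x$ in double precision, so the difference quotient carries no information about the derivative at all.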
Floating-Point Operations to Watch For
Not every source of error is an ill-conditioning issue. The three operations below can each cause trouble, but for different reasons:
Subtracting nearly equal numbers ($x - y$ when $x \approx y$): Subtraction of nearly equal numbers is genuinely ill-conditioned — the condition number of subtraction, $(|x| + |y|)/|x - y|$, tends to $\infty$ as $x \to y$. The key insight is that an algorithm may choose to subtract nearly equal numbers as an intermediate step, even when the overall problem doesn’t require it. If the subtraction can be avoided by algebraic reformulation, the algorithm becomes stable.
Dividing by a small number ($x/y$ when $|y| \ll 1$): This is not ill-conditioned — the relative condition number of $x/y$ with respect to $y$ is $1$. The issue is that small absolute errors in $y$ become large absolute errors in the result. If $y$ itself was computed with some absolute error $\delta$, then $\left| \frac{x}{y+\delta} - \frac{x}{y} \right| \approx \frac{|x|\,\delta}{y^2}$, which is large when $|y|$ is small. Remedy: rewrite using logarithms or rescale the computation.
Adding numbers of vastly different magnitude ($x + y$ when $|x| \gg |y|$): This is also not an ill-conditioning issue. Subtraction is not involved, so there is no cancellation. The problem is that floating-point numbers have finite precision: if $|y|$ is smaller than half the spacing between consecutive floats near $x$, then $\mathrm{fl}(x + y) = x$ and $y$ is simply lost. Remedy: sort and sum smallest first, or use compensated summation (Kahan summation).
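The third pitfall and both of its remedies fit in a few lines of Python. The data below is hypothetical, chosen so that each small term rounds away entirely when added to the large one:

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: carry the low-order bits each add loses."""
    total = 0.0
    c = 0.0  # running compensation
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y  # the part of y that did not make it into total
        total = t
    return total

# Hypothetical data: one huge term followed by many tiny ones.
values = [1e16] + [1.0] * 1000   # exact sum is 1e16 + 1000

naive = 0.0
for v in values:
    naive += v                   # each 1.0 rounds away against 1e16

print(naive == 1e16)                       # True: the small terms vanished
print(sum(sorted(values)) == 1e16 + 1000)  # True: smallest-first accumulates them
print(kahan_sum(values) == 1e16 + 1000)    # True: compensation recovers them
```

Python's standard library also ships an accurate accumulator, math.fsum, which handles such sums without manual compensation.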
Looking Ahead
We will formalize these ideas using forward and backward error when we study linear systems. There, we will see that backward stability, the precise version of “solves a nearby problem exactly”, is the gold standard for numerical algorithms. For now, the key takeaway is: distinguish between the problem (conditioning) and the algorithm (stability).