
In the previous section, we saw that floating-point inputs carry small relative errors bounded by machine epsilon, and that subtracting nearly equal numbers amplifies those errors catastrophically. The condition number makes this precise — and applies to any computation, not just subtraction.

Condition of $f$ at $x$

When we evaluate $f(x)$ but only have an approximation $x^*$ to the input, the output $f(x^*)$ may be far from $f(x)$. The condition number is the worst-case ratio of relative error in the output to relative error in the input.

Simplified Formula via Taylor’s Theorem

Condition Number of a Differentiable Function

For a differentiable function $f$, we can derive the condition number formula directly from the definition of relative error, without appealing to Taylor’s theorem.

Absolute Condition Number

The absolute condition number measures the ratio of absolute error in the output to absolute error in the input:

$$\hat{\kappa} = |f'(x)|.$$

If $|f'(x)|$ is large, a small absolute change in $x$ produces a large absolute change in $f(x)$.

Derivation from Relative Errors

Starting from Definition 1, suppose $x$ is perturbed by a small amount $\Delta x$. The relative change in $x$ is $\frac{|\Delta x|}{|x|}$, while the relative change in the output is $\frac{|f(x + \Delta x) - f(x)|}{|f(x)|}$. Taking the ratio:

$$\kappa = \frac{|f(x + \Delta x) - f(x)| / |f(x)|}{|\Delta x| / |x|} = \frac{|x|}{|f(x)|} \cdot \frac{|f(x + \Delta x) - f(x)|}{|\Delta x|}.$$

The last factor is a difference quotient. In the limit $\Delta x \to 0$ it becomes $|f'(x)|$, recovering the formula from Proposition 1.
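As a quick numerical check (a sketch using only the standard library; the test function and evaluation point are arbitrary), we can watch this error-amplification ratio approach $|x f'(x)/f(x)|$. For $f(x) = e^x$ the formula gives $\kappa = |x|$:

```python
import math

# For f(x) = exp(x): kappa = |x f'(x) / f(x)| = |x|.
x = 3.0
dx = 1e-8  # small input perturbation

rel_in = abs(dx) / abs(x)  # relative input error
rel_out = abs(math.exp(x + dx) - math.exp(x)) / abs(math.exp(x))  # relative output error
ratio = rel_out / rel_in

print(ratio)  # close to kappa = |x| = 3
```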

The formula $\kappa = |x f'(x) / f(x)|$ can be read as the ratio of the logarithmic derivative of $f$ to the logarithmic derivative of $x$:

$$\kappa = \left| \frac{(\ln f)'}{(\ln x)'} \right|, \qquad \text{since } (\ln f)' = \frac{f'}{f} \text{ and } (\ln x)' = \frac{1}{x}.$$

This makes intuitive sense: the logarithmic derivative measures the infinitesimal rate of relative change, so the condition number is the ratio of the relative rate of change of the output to that of the input.

Note that if $f$ has a zero at $x$, then $\kappa \to \infty$. This is not necessarily because the computation is genuinely sensitive. The absolute error may be perfectly well-behaved, but relative error is undefined at a zero of $f$.

Several Variables

For a differentiable map $f : \mathbb{R}^m \to \mathbb{R}^n$, the relative condition number generalises to

$$\kappa = \frac{\|J(x)\|}{\|f(x)\| / \|x\|},$$

where $J(x)$ is the Jacobian matrix of $f$ at $x$ and $\|\cdot\|$ denotes the induced matrix norm. The one-variable formula is the special case $m = n = 1$.
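A small sketch of the multivariable formula (NumPy assumed; the map $f(x_1, x_2) = (x_1 + x_2,\ x_1 x_2)$ and the evaluation point are chosen just for illustration):

```python
import numpy as np

def f(x):
    # f(x1, x2) = (x1 + x2, x1 * x2)
    return np.array([x[0] + x[1], x[0] * x[1]])

def jacobian(x):
    # J = [[df1/dx1, df1/dx2],
    #      [df2/dx1, df2/dx2]]
    return np.array([[1.0, 1.0],
                     [x[1], x[0]]])

x = np.array([1.0, 2.0])

# kappa = ||J(x)|| / (||f(x)|| / ||x||), using the induced 2-norm
kappa = np.linalg.norm(jacobian(x), 2) * np.linalg.norm(x) / np.linalg.norm(f(x))
print(kappa)  # about 1.62 at this point
```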

Examples

Example 1 (Square Root (Well-Conditioned))

Consider $f(x) = \sqrt{x}$ with $f'(x) = \frac{1}{2\sqrt{x}}$.

Near $\bar{x} = 1$, we have $\bar{y} = f(\bar{x}) = 1$. Using a Taylor approximation:

$$y - \bar{y} = \sqrt{x} - 1 \approx \frac{1}{2}(x - 1) = \frac{1}{2}(x - \bar{x})$$

Variations in $y$ are only about half as large as variations in $x$. Computing the condition number:

$$\kappa = \left| \frac{x f'(x)}{f(x)} \right| = \left| \frac{x \cdot \frac{1}{2\sqrt{x}}}{\sqrt{x}} \right| = \frac{1}{2}$$

Result: $\kappa = 1/2$ — evaluating $\sqrt{x}$ is well-conditioned.
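A quick check of this value (standard library only; the perturbation sizes are arbitrary):

```python
import math

x = 1.0
for eps in [1e-4, 1e-6, 1e-8]:
    rel_in = eps  # relative input error
    rel_out = abs(math.sqrt(x * (1 + eps)) - math.sqrt(x)) / math.sqrt(x)
    print(rel_out / rel_in)  # approaches kappa = 1/2
```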

Example 2 (Tangent Near π/2 (Ill-Conditioned))

Consider $f(x) = \tan(x)$ near $x = \frac{\pi}{2}$.

Take two points:

$$x_1 = \frac{\pi}{2} - 0.001, \quad x_2 = \frac{\pi}{2} - 0.002$$

Then $|x_1 - x_2| = 0.001$, but $|f(x_1) - f(x_2)| \approx 500$. A tiny input change causes a huge output change!

Computing the condition number:

$$\kappa = \left| \frac{x f'(x)}{f(x)} \right| = \left| \frac{x}{\cos(x)\sin(x)} \right| = |2x \csc(2x)| \to \infty \text{ as } x \to \pi/2$$

Result: $\kappa \to \infty$ — evaluating $\tan(x)$ near $\frac{\pi}{2}$ is ill-conditioned.
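The two evaluation points above can be checked directly (standard library only):

```python
import math

x1 = math.pi / 2 - 0.001
x2 = math.pi / 2 - 0.002

print(abs(x1 - x2))                      # about 0.001
print(abs(math.tan(x1) - math.tan(x2)))  # roughly 500
```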

Example 3 (Logarithm Near 1 (Ill-Conditioned))

Consider $f(x) = \ln(x)$ near $x = 1$.

$$\kappa = \left| \frac{x f'(x)}{f(x)} \right| = \left| \frac{x \cdot (1/x)}{\ln(x)} \right| = \frac{1}{|\ln(x)|}$$

As $x \to 1$, we have $\ln(x) \to 0$, so $\kappa \to \infty$.

Result: $\kappa \to \infty$ — evaluating $\ln(x)$ near $x = 1$ is ill-conditioned.

But note why this happens: the condition number blows up because $f(x) = \ln(x) \to 0$, so the relative error $|f(x) - f(x^*)|/|f(x)|$ has a denominator going to zero. The absolute error is perfectly well-behaved — $|f'(1)| = 1$, so a small input perturbation produces a small absolute output perturbation. The “ill-conditioning” here is really that relative error is undefined at a zero of $f$. This happens for any function near one of its roots, and is an artifact of measuring error in relative terms rather than a genuine sensitivity of the computation.
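To see the distinction numerically (standard library only; the base point and perturbation size are arbitrary):

```python
import math

x = 1.0 + 1e-8            # very close to the root of ln
x_pert = x * (1 + 1e-10)  # relative input error of about 1e-10

abs_err = abs(math.log(x_pert) - math.log(x))
rel_err = abs_err / abs(math.log(x))

print(abs_err)  # about 1e-10: the absolute error stays tiny
print(rel_err)  # about 1e-2: amplified from 1e-10 by kappa = 1/|ln x| ~ 1e8
```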

Condition Number of Subtraction

The condition number formula $\kappa = |xf'(x)/f(x)|$ works for functions of one variable. For a function of two variables $g(a, b) = a - b$, we generalize: the condition number measures how the relative error in $g$ relates to the relative errors in $a$ and $b$.

Example 4 (Subtraction is Ill-Conditioned When $a \approx b$)

Consider $g(a, b) = a - b$ with small perturbations $\tilde{a} = a(1 + \varepsilon_1)$ and $\tilde{b} = b(1 + \varepsilon_2)$:

$$\tilde{g} - g = a\varepsilon_1 - b\varepsilon_2$$

The relative error in the result is:

$$\frac{|\tilde{g} - g|}{|g|} \leq \frac{|a| + |b|}{|a - b|} \cdot \max(|\varepsilon_1|, |\varepsilon_2|)$$

The condition number of subtraction is therefore:

$$\kappa = \frac{|a| + |b|}{|a - b|}$$

  • When $a$ and $b$ are well-separated: $\kappa = \mathcal{O}(1)$ — subtraction is well-conditioned.

  • When $a \approx b$: $\kappa \to \infty$ — subtraction is ill-conditioned.

This is precisely the catastrophic cancellation we saw in the floating-point chapter, now explained through the lens of condition numbers.
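A sketch comparing the two regimes (standard library only; the inputs are arbitrary):

```python
# Well-separated inputs: subtraction is well-conditioned.
a, b = 1.0, 0.5
print((abs(a) + abs(b)) / abs(a - b))  # 3.0

# Nearly equal inputs: kappa blows up.
a, b = 1.0, 0.9999999
print((abs(a) + abs(b)) / abs(a - b))  # about 2e7 -- roughly 7 digits lost
```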

Condition Number of the Finite Difference

We can now give a condition number explanation of the finite difference trade-off.

Example 5 (Finite Difference Condition Number)

The forward difference $\frac{f(x_0 + h) - f(x_0)}{h}$ requires computing $a - b$ where $a = f(x_0 + h)$ and $b = f(x_0)$.

Applying the subtraction condition number with $a \approx b \approx f(x_0)$:

$$\kappa \approx \frac{|f(x_0 + h)| + |f(x_0)|}{|f(x_0 + h) - f(x_0)|} \approx \frac{2|f(x_0)|}{h|f'(x_0)|}$$

As $h \to 0$, this condition number grows like $1/h$. Each input carries relative error $\mu$ (machine epsilon), so the round-off contribution to the finite difference is:

$$\text{round-off error} \approx \kappa \cdot \mu \cdot |f'(x_0)| \approx \frac{2\mu|f(x_0)|}{h}$$

This matches exactly the round-off term we derived from the floating-point analysis — but now we see it as a conditioning problem: the subtraction step is ill-conditioned for small hh.
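The $1/h$ growth is easy to observe (a sketch with $f = \sin$ at $x_0 = 1$, chosen for illustration; standard library only):

```python
import math

x0 = 1.0
exact = math.cos(x0)  # true derivative of sin at x0

for h in [1e-2, 1e-5, 1e-8, 1e-11]:
    fd = (math.sin(x0 + h) - math.sin(x0)) / h  # forward difference
    print(h, abs(fd - exact))
# The error first shrinks with h (truncation), then grows again as h shrinks
# further, since the round-off term behaves like 2*mu*|f(x0)|/h.
```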

Summary

The condition number $\kappa$ measures how much a problem amplifies input errors. It is a property of the mathematical problem, not the algorithm — if $\kappa$ is large, no algorithm can avoid accuracy loss. But what if $\kappa$ is small and we still get poor results? Then the problem isn’t to blame — the algorithm is. In the next section, we make this precise with the concept of stable and unstable algorithms.