Practice analyzing errors, stability, and conditioning.
Self-Assessment Questions
Test your understanding with these conceptual questions:
Machine Epsilon: What happens when you add a number smaller than machine epsilon to 1.0?
Conditioning: What makes evaluating a function $f$ near certain points ill-conditioned? Can any algorithm fix this?
Stability: Why can evaluating an expression directly be unstable for large arguments? What's a stable alternative?
Machine Epsilon: What is the approximate machine epsilon for single and double precision? What does it tell us about the accuracy of floating point computations?
Binary Representation: How do you convert a positive integer to its negative two’s complement representation?
Subtractive Cancellation: When subtracting two numbers $a$ and $b$, under what conditions is the result reliable?
Forward vs. Backward: If an algorithm has small backward error but large forward error, what does this tell you about the problem?
Trade-offs: In finite differences, why can't we just use an extremely small step size $h$ to get arbitrary accuracy?
Floating-Point Arithmetic
Q2.1: Machine Epsilon
(a) Write a program to experimentally determine machine epsilon for single (32-bit) and double (64-bit) precision.
(b) Verify your answers against the theoretical values $2^{-24}$ and $2^{-53}$.
(c) Compute $(1 + 2^{-53}) - 1$ in double precision. What do you get? Why?
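One way to approach part (a): halve a trial value until adding it to 1.0 no longer changes the result. A sketch using NumPy dtypes (the function name is my own):

```python
import numpy as np

def machine_eps(dtype):
    """Smallest power of two that still changes 1.0 when added to it."""
    one = dtype(1.0)
    eps = dtype(1.0)
    while one + dtype(eps / 2) != one:
        eps = dtype(eps / 2)
    return eps

print(machine_eps(np.float32))  # 2**-23 ~ 1.19e-07
print(machine_eps(np.float64))  # 2**-52 ~ 2.22e-16
```

Note that this experiment finds the spacing between 1.0 and the next float ($2^{-23}$, $2^{-52}$); the unit roundoff quoted in part (b) is half of that ($2^{-24}$, $2^{-53}$), because results are rounded to the nearest float.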
Q2.2: Representation Errors
(a) Show that 0.1 cannot be represented exactly in binary floating-point.
(b) Compute `0.1 + 0.1 + 0.1 - 0.3` in Python. Explain the result.

(c) Why might `sum([0.1] * 10)` not equal `1.0`?
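For parts (b) and (c), a quick interactive check; the standard library's `decimal` module displays the exact binary value actually stored for `0.1`:

```python
from decimal import Decimal

# 0.1 is stored as the nearest binary64 number, which is slightly larger:
print(Decimal(0.1))            # 0.1000000000000000055511151231257827...

print(0.1 + 0.1 + 0.1 - 0.3)   # tiny nonzero residue, not 0.0
print(sum([0.1] * 10) == 1.0)  # False: ten rounding errors accumulate
```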
Q2.3: Summation Order
Consider summing terms $x_n$ for $n = 1, \dots, N$ that decrease in magnitude (for instance $x_n = 1/n^2$).
(a) Compute the sum from $n = 1$ to $N$ (forward).
(b) Compute the sum from $n = N$ to 1 (backward).
(c) Which is more accurate? Why? (Hint: think about the relative sizes of terms being added.)
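The exact series in the exercise did not survive this copy; as an illustration, the sketch below sums $1/n^2$ in float32 in both orders and compares against a float64 reference:

```python
import numpy as np

N = 100_000
terms = (1.0 / np.arange(1, N + 1, dtype=np.float64) ** 2).astype(np.float32)
reference = float(np.sum(terms.astype(np.float64)))  # accurate float64 partial sum

def sum32(ts):
    """Accumulate in float32, one term at a time, in the given order."""
    s = np.float32(0.0)
    for t in ts:
        s = np.float32(s + t)
    return float(s)

forward = sum32(terms)         # n = 1 .. N: large terms first
backward = sum32(terms[::-1])  # n = N .. 1: small terms first

print(abs(forward - reference) / reference)   # noticeably worse
print(abs(backward - reference) / reference)  # near float32 precision
```

Once the running sum is large, terms below half an ulp of it are dropped entirely; summing smallest-first lets the tiny terms accumulate before they meet the large ones.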
Q2.4: Kahan Summation
Kahan (1965) proposed a compensated summation algorithm that tracks roundoff error:
```python
def kahan_sum(xs):
    s = 0.0  # Running sum
    c = 0.0  # Compensation for lost low-order bits
    for x in xs:
        y = x - c        # Compensate for previous error
        t = s + y        # Add to sum
        c = (t - s) - y  # Recover the roundoff error
        s = t
    return s
```

(a) Implement this algorithm.
(b) Test with a short sum using 6-digit decimal arithmetic. Trace through by hand.
(c) What is the role of the variable `c`? Why does it improve accuracy?

(d) Compare Kahan summation to naive summation for the integral, using the trapezoidal rule with $N$ points, in single precision (`np.float32`).
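For part (d)-style comparisons, the compensated and naive sums can be run side by side with every intermediate forced to `np.float32`. The repeated value 0.1 below is my illustration, not the exercise's integrand:

```python
import numpy as np

def naive_sum32(xs):
    s = np.float32(0.0)
    for x in xs:
        s = np.float32(s + x)
    return float(s)

def kahan_sum32(xs):
    s = np.float32(0.0)
    c = np.float32(0.0)  # running compensation
    for x in xs:
        y = np.float32(x - c)
        t = np.float32(s + y)
        c = np.float32(np.float32(t - s) - y)
        s = t
    return float(s)

xs = np.full(100_000, 0.1, dtype=np.float32)
print(naive_sum32(xs))  # drifts visibly away from 10000
print(kahan_sum32(xs))  # ~10000, accurate to float32 limits
```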
Forward and Backward Error
Q2.5: Quadratic Formula
For $ax^2 + bx + c = 0$ with $b^2 \gg 4ac$ (e.g. $a = c = 1$, $b = 10^8$):
(a) Compute both roots using the standard quadratic formula.
(b) Identify which root suffers from cancellation.
(c) Use the identity $x_1 x_2 = c/a$ to compute the small root accurately.
(d) Compute the backward error (residual) for both methods.
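A sketch of the reformulation in (c): choose the sign so the quadratic formula never subtracts nearly equal numbers, then recover the small root from $x_1 x_2 = c/a$. The coefficient values here are my illustration:

```python
import math

def roots_stable(a, b, c):
    """Both roots of ax^2 + bx + c = 0 (real, distinct), without cancellation."""
    sq = math.sqrt(b * b - 4.0 * a * c)
    # q = -(b + sign(b)*sqrt(disc))/2 adds two same-signed numbers: no cancellation
    q = -0.5 * (b + math.copysign(sq, b))
    return q / a, c / q  # root via q, and its partner via x1 * x2 = c/a

# b^2 >> 4ac: the naive formula loses the small root's digits
naive_small = (-1e8 + math.sqrt(1e16 - 4.0)) / 2.0
x_big, x_small = roots_stable(1.0, 1e8, 1.0)
print(naive_small, x_small)  # true small root is ~ -1e-8
```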
Q2.6: Exponential Minus One
(a) For small $x$ (say $x = 10^{-1}, 10^{-2}, \ldots, 10^{-15}$), compute $e^x - 1$ using direct subtraction.
(b) Compare to `numpy.expm1(x)`.

(c) Plot the relative error of the direct method vs. $x$.
(d) At what $x$ does the direct method lose all significant digits?
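Parts (a)-(c) can be scripted directly; the specific $x$ grid below is my choice:

```python
import numpy as np

xs = 10.0 ** -np.arange(1, 16)     # x = 1e-1 down to 1e-15
direct = np.exp(xs) - 1.0          # subtraction cancels for small x
library = np.expm1(xs)             # computes e^x - 1 without forming e^x
rel_err = np.abs(direct - library) / library

for x, e in zip(xs, rel_err):
    print(f"x = {x:.0e}   relative error = {e:.2e}")

print(np.exp(1e-16) - 1.0)         # 0.0: all significant digits lost
```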
Q2.7: Variance Computation
Consider computing the sample variance: $s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2$.
An alternative “one-pass” formula is: $s^2 = \frac{1}{n-1}\left(\sum_{i=1}^{n} x_i^2 - n\bar{x}^2\right)$.
(a) For a dataset whose mean is much larger than its spread (say $x_i = 10^8 + u_i$ with $u_i \sim \mathcal{N}(0,1)$), compute the variance both ways.
(b) Which method is more accurate? Why?
(c) Research Welford’s algorithm. Why is it preferred?
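All three approaches can be compared side by side; the dataset below (large mean, unit spread) is my illustration:

```python
import numpy as np

def var_two_pass(x):
    m = x.mean()
    return float(((x - m) ** 2).sum() / (len(x) - 1))

def var_one_pass(x):
    # sum(x^2) - n*mean^2: subtracts two huge, nearly equal numbers
    n = len(x)
    return float(((x ** 2).sum() - n * x.mean() ** 2) / (n - 1))

def var_welford(x):
    # Welford's algorithm: single pass, updates mean and squared deviations
    mean, m2 = 0.0, 0.0
    for k, xi in enumerate(x, 1):
        d = xi - mean
        mean += d / k
        m2 += d * (xi - mean)
    return m2 / (len(x) - 1)

rng = np.random.default_rng(0)
x = 1e8 + rng.standard_normal(10_000)   # large mean, unit spread (illustrative)
print(var_two_pass(x), var_one_pass(x), var_welford(x))
```

The one-pass formula squares values near $10^8$, so the subtraction happens at the $10^{20}$ scale where an ulp is already in the thousands; Welford gets one-pass convenience without that cancellation.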
Condition Numbers
Q2.8: Condition Number Computation
Compute the condition number $\kappa(x) = \left|\dfrac{x\,f'(x)}{f(x)}\right|$ for:
(a) at
(b) at and
(c) at and
(d) at for integer
Which evaluations are well-conditioned? Which are ill-conditioned?
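The specific functions for (a)-(d) did not survive this copy; as an illustration, here is the relative condition number $\kappa(x) = |x f'(x)/f(x)|$ for $f(x) = \ln x$, which is ill-conditioned near $x = 1$:

```python
import math

def cond(f, fprime, x):
    """Relative condition number |x f'(x) / f(x)|."""
    return abs(x * fprime(x) / f(x))

# f(x) = ln x  =>  kappa(x) = 1 / |ln x|: blows up as x -> 1
print(cond(math.log, lambda t: 1.0 / t, 2.0))     # ~1.44, well-conditioned
print(cond(math.log, lambda t: 1.0 / t, 1.0001))  # ~1e4, ill-conditioned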
Solution for (a)
Q2.9: Subtractive Cancellation
The expression $\sqrt{x+1} - \sqrt{x}$ suffers from cancellation for large $x$.
(a) Compute this directly for large $x$ (say $x = 10^8$).
(b) Rationalize to get $\dfrac{1}{\sqrt{x+1} + \sqrt{x}}$ and compute again.
(c) What is the relative error of the direct method?
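A quick check of the rationalization, in single precision where the cancellation is total (assuming the expression in question is $\sqrt{x+1} - \sqrt{x}$):

```python
import numpy as np

x = np.float32(1.0e8)
one = np.float32(1.0)

direct = np.sqrt(x + one) - np.sqrt(x)          # x + 1 rounds to x in float32 -> 0.0
stable = one / (np.sqrt(x + one) + np.sqrt(x))  # rationalized form, no subtraction

print(direct, stable)   # 0.0 vs ~5e-05
```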
Q2.10: Trigonometric Reformulation
For small $x$, the expression $1 - \cos x$ loses accuracy.
(a) Compute it directly.
(b) Use the identity $1 - \cos x = 2\sin^2(x/2)$ and compute again.
(c) Compare to the Taylor approximation $1 - \cos x \approx x^2/2$.
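The three computations fit in a few lines; the sample point $x = 10^{-8}$ is my choice:

```python
import numpy as np

x = 1e-8
direct = 1.0 - np.cos(x)    # cos(1e-8) is 1.0 to within an ulp: total cancellation
identity = 2.0 * np.sin(x / 2.0) ** 2   # 1 - cos x = 2 sin^2(x/2)
taylor = 0.5 * x * x                    # leading Taylor term x^2/2

print(direct, identity, taylor)   # identity and taylor agree at ~5e-17
```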
Stability Analysis
Q2.11: Stable Reformulation
For each of the following expressions, identify the potential numerical issue and propose a stable reformulation:
(a) for large positive $x$
(b) for small $x$
(c) for small $x$
(d) when
Solution for (a)
Q2.12: Residuals and Forward Error
Solve the system:
(a) Solve using Gaussian elimination.
(b) Perturb the right-hand side $b$ by 0.0001 and solve again.
(c) Compute the condition number of $A$.
(d) Verify that the relative forward error is bounded by $\kappa(A)$ times the relative perturbation.
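The exercise's system did not survive this copy; as a stand-in, the sketch below uses an ill-conditioned $2\times 2$ system with known solution $x = (1, 1)$:

```python
import numpy as np

# Illustrative ill-conditioned system (not the one from the exercise):
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])

x = np.linalg.solve(A, b)
kappa = np.linalg.cond(A)             # ~4e4

b_pert = b + np.array([0.0, 0.0001])  # tiny change in the data...
x_pert = np.linalg.solve(A, b_pert)   # ...moves the solution to ~(0, 2)

fwd_err = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
rel_pert = np.linalg.norm(b_pert - b) / np.linalg.norm(b)
print(fwd_err, kappa * rel_pert)      # forward error <= kappa * perturbation
```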
Computational Exercises
Q2.13: Unstable Recursion
Consider evaluating the integrals

$$y_n = \int_0^1 x^n e^{x-1}\,dx$$

for $n = 0, 1, 2, \ldots$

(a) Show that $y_0 = 1 - e^{-1}$.
(b) Show that $y_n = 1 - n\,y_{n-1}$, then use the recursion in this form to numerically generate $y_1$ through $y_{20}$. Include a table showing $n$ and $y_n$. Briefly explain what you observe.
(c) Show that for $n \ge 0$ we have $\dfrac{1}{e(n+1)} \le y_n \le \dfrac{1}{n+1}$. Discuss the results from (b) in light of this bound.
(d) Derive a formula for computing $y_{n-1}$ given $y_n$ (the backward recursion).
(e) Show that $y_n$ equals an infinite series.
Hint for (e)
Use the backward recursion, assuming $y_N \approx 0$ for some large $N$. Repeatedly substitute to find the pattern.
(f) Show that for any $\epsilon > 0$ and positive integer $n_0$, there exists $N$ such that taking $y_N = 0$ as a starting value produces integral evaluations with absolute error smaller than $\epsilon$ for all $n \le n_0$.
(g) Explain why the backward algorithm is stable.
Hint for (g)
Include a small round-off error at each step. What happens to this error as you iterate backward?
(h) Write a computer program that computes $y_{20}$ within an absolute error of at most $10^{-5}$. Explain how you chose the starting index $N$.
Solution for (a)
Using integration by parts or direct manipulation: $y_0 = \int_0^1 e^{x-1}\,dx = \left[e^{x-1}\right]_0^1 = 1 - e^{-1} \approx 0.632$.
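A sketch of part (h), taking the integrals to be $y_n = \int_0^1 x^n e^{x-1}\,dx$ with backward recursion $y_{k-1} = (1 - y_k)/k$ (reconstructed from the surrounding parts; check against your own derivation). Starting from the crude guess $y_N = 0$, each backward step divides the initial error by $k$, so a modest $N$ already drives it far below $10^{-5}$:

```python
import math

def y_backward(n, N):
    """Approximate y_n by running the backward recursion from y_N = 0."""
    y = 0.0  # starting guess; its error is at most 1/(N+1)
    for k in range(N, n, -1):
        y = (1.0 - y) / k  # the inherited error shrinks by a factor k
    return y

# Starting error <= 1/(N+1) is divided by N*(N-1)*...*(n+1) on the way down:
# with N = 30 and n = 20, that factor is 30!/20! ~ 1e14, far below 1e-5.
print(y_backward(0, 30))   # ~0.63212, i.e. 1 - 1/e
print(y_backward(20, 30))
```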
Q2.14: Trapezoidal Rule with Round-off
Consider computing the integral:

$$I = \int_a^b f(x)\,dx$$

using the trapezoidal rule with $N$ subintervals. The approximation is:

$$T_N = h\left[\tfrac{1}{2}f(x_0) + \sum_{j=1}^{N-1} f(x_j) + \tfrac{1}{2}f(x_N)\right]$$

where $h = (b-a)/N$ and $x_j = a + jh$.
(a) Implement this using naive summation with single precision (32-bit) floats. Plot the relative error vs. $N$ over a wide range of $N$. Explain what you observe.
(b) Implement using Kahan summation. Plot the relative error vs. $N$. Explain what you observe.
Hint
To force single precision in NumPy:

```python
s = np.asarray(0.0, dtype=np.float32)
```

Q2.15: Fast Inverse Square Root Accuracy
Refer to the fast inverse square root code.
(a) Compute the relative error of the fast inverse square root without Newton iteration (just the bit manipulation). Plot the relative error over a range of positive inputs $x$.
(b) Compute the relative error with Newton iterations added. How many iterations would you use for a game engine? Support your answer with error plots for 1, 2, and 3 iterations.