Overview
Suppose you want to write a program to compute a function value such as $e^x$, or solve a differential equation, or evaluate an integral. How would you do it?
The challenge is that computers are simple machines. They can only perform basic arithmetic: $+$, $-$, $\times$, $\div$. But scientific computing demands much more:
How do you compute $e^x$? You can't, not exactly, using only arithmetic.
How do you compute $\sin x$, $\cos x$, $\ln x$? Same problem.
How do you compute a derivative $f'(x)$? You'd need a limit, not a finite operation.
How do you compute an integral $\int_a^b f(x)\,dx$? Again, not directly computable.
The solution: Approximate these objects using only arithmetic. Taylor polynomials let us write

$$f(x) \approx P_n(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n.$$

Now we can compute the right-hand side: it is just additions and multiplications. The cost is an approximation error. Taylor's theorem tells us exactly how large that error is:

$$f(x) - P_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}(x-a)^{n+1} \quad \text{for some } \xi \text{ between } a \text{ and } x.$$
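To see this in action, here is a minimal Python sketch using $e^x$ expanded about $0$ as an assumed running example (the function name `exp_taylor` is ours, not from the chapter). It evaluates the degree-$n$ Taylor polynomial with nothing but additions, multiplications, and divisions, then compares the actual error to the Lagrange remainder bound:

```python
import math

def exp_taylor(x, n):
    """Degree-n Taylor polynomial of e^x about 0, built from arithmetic only."""
    term = 1.0    # current term x^k / k!, starting with k = 0
    total = 1.0   # running sum of the terms
    for k in range(1, n + 1):
        term *= x / k     # update x^k / k! incrementally
        total += term
    return total

x, n = 1.0, 8
approx = exp_taylor(x, n)
exact = math.exp(x)
# Lagrange remainder for e^x about 0: e^xi * x^(n+1) / (n+1)! for some xi in (0, x);
# since e^xi <= e for 0 < xi < 1, e / (n+1)! bounds the error at x = 1.
bound = math.e * x ** (n + 1) / math.factorial(n + 1)
print(f"P_{n}(1) = {approx:.10f},  e = {exact:.10f}")
print(f"actual error = {abs(exact - approx):.2e},  remainder bound = {bound:.2e}")
```

Increasing `n` shrinks both the actual error and the bound, which is exactly the knob Taylor's theorem gives us.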
This is the starting point for all of numerical analysis.
Learning Outcomes
After completing this chapter, you should be able to:
L1.1: Write Taylor expansions with Lagrange remainder.
L1.2: Derive finite difference formulas from Taylor series.
L1.3: Analyze truncation error order ($O(h)$, $O(h^2)$, etc.).
L1.4: Explain the trade-off between truncation and roundoff error.
L1.5: Choose appropriate step sizes for numerical differentiation (see the sketch after this list).
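As a preview of L1.2, L1.4, and L1.5, here is a small Python sketch of our own (the helpers `forward_diff` and `central_diff` are illustrative names, not from the chapter) that differentiates $\sin x$ at $x = 1$ over a range of step sizes. The printed errors first decrease like $O(h)$ or $O(h^2)$ (truncation error) and then increase again as roundoff takes over, which is the trade-off L1.4 refers to:

```python
import math

def forward_diff(f, x, h):
    """First-order forward difference: f'(x) ~ (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    """Second-order central difference: f'(x) ~ (f(x + h) - f(x - h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.0
exact = math.cos(x)   # exact derivative of sin at x = 1
print(f"{'h':>8} {'forward error':>15} {'central error':>15}")
for k in range(1, 13):
    h = 10.0 ** (-k)
    err_f = abs(forward_diff(math.sin, x, h) - exact)
    err_c = abs(central_diff(math.sin, x, h) - exact)
    print(f"{h:8.0e} {err_f:15.2e} {err_c:15.2e}")
# The error shrinks with h (truncation) until cancellation in f(x + h) - f(x)
# amplifies roundoff; past that point, smaller h makes the result worse.
# For the forward difference the sweet spot is near sqrt(machine epsilon).
```

Running it shows the forward-difference error bottoming out near $h \approx 10^{-8}$ and the central-difference error near $h \approx 10^{-5}$ to $10^{-6}$, a first taste of choosing step sizes (L1.5).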