
# Discretization

We consider the initial value problem:

$$\frac{du}{dt} = f(t, u), \quad u(t_0) = u_0$$

where $u$ may be a vector. We assume $f$ is Lipschitz continuous, guaranteeing a unique solution.

The solution $u(t)$ is a continuous function, but computers work with discrete data. We discretize the time interval $[t_0, t_e]$ into a lattice:

$$t_0 < t_1 < t_2 < \cdots < t_N = t_e$$

with step sizes $h_n = t_{n+1} - t_n$. For simplicity, we often use uniform spacing $h_n = h$.

A discretization method associates to each lattice a lattice function:

$$u_n \approx u(t_n), \quad n = 0, 1, \ldots, N$$

We store only the values $\{u_0, u_1, \ldots, u_N\}$: a finite amount of data representing the continuous solution.

The simplest methods replace derivatives with finite differences:

$$\frac{du}{dt} \approx \frac{u_{n+1} - u_n}{h}$$

This immediately gives forward Euler: $u_{n+1} = u_n + h f(t_n, u_n)$.
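The update rule above can be sketched directly in code. This is a minimal, hypothetical implementation (the function name and test problem are not from the text): it builds a uniform lattice and fills in the lattice values $u_n$ one step at a time.

```python
import numpy as np

def forward_euler(f, t0, u0, h, n_steps):
    """Integrate du/dt = f(t, u) with uniform step h.

    Returns the lattice points t_n and the lattice function u_n ~ u(t_n).
    """
    t = t0 + h * np.arange(n_steps + 1)
    u = np.empty((n_steps + 1,) + np.shape(u0))
    u[0] = u0
    for n in range(n_steps):
        # Forward Euler step: u_{n+1} = u_n + h * f(t_n, u_n)
        u[n + 1] = u[n] + h * f(t[n], u[n])
    return t, u

# Example: du/dt = -u, u(0) = 1, whose exact solution is e^{-t}
t, u = forward_euler(lambda t, u: -u, 0.0, 1.0, 0.1, 10)
```

After ten steps of size $h = 0.1$, `u[-1]` approximates $u(1) = e^{-1} \approx 0.368$; the gap between the two is the global error discussed below.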

## The Error Framework for ODEs

The concepts from our error analysis carry over directly:

| Concept | General Setting | ODE Numerics |
| --- | --- | --- |
| Backward error | Residual $\lvert f(\tilde{x}) - b \rvert$ | Local truncation error $\tau_n$ |
| Forward error | $\lvert \tilde{x} - x \rvert$ | Global error $\lvert u_n - u(t_n) \rvert$ |
| Condition number | Sensitivity to perturbations | Lipschitz constant $L$ of $f$ |
| Stability | Backward stable algorithm | Absolutely stable method |

Every numerical method introduces local truncation error at each step—this is the backward error. The central question: does this error accumulate controllably, or does it explode?

$$\text{Global error} \lesssim (\text{Amplification factor})^n \times \text{Local truncation error}$$

The amplification factor depends on both the method (its stability properties) and the problem (the Lipschitz constant). A method is useful when errors remain bounded—this is the stability requirement, the ODE analog of backward stability.
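When the amplification is under control, the global error inherits the order of the local truncation error minus one power of $h$; for forward Euler that means first-order convergence. A quick numerical check (a hypothetical test problem, not from the text) makes this visible: halving $h$ should roughly halve the global error at a fixed time.

```python
import numpy as np

def euler_error(h):
    """Global error of forward Euler at t = 1 for du/dt = -u, u(0) = 1."""
    n = int(round(1.0 / h))
    u = 1.0
    for _ in range(n):
        u += h * (-u)              # forward Euler step
    return abs(u - np.exp(-1.0))   # compare against the exact solution e^{-1}

errors = [euler_error(h) for h in (0.1, 0.05, 0.025)]
ratios = [e0 / e1 for e0, e1 in zip(errors, errors[1:])]
# Each ratio is close to 2, consistent with O(h) global error
```

The observed ratios near 2 confirm that the accumulated error shrinks in proportion to $h$, i.e. the amplification stayed bounded for this well-conditioned problem.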

For stiff problems, where eigenvalues span many orders of magnitude, explicit methods require impractically small step sizes for stability even when accuracy would permit larger steps. Implicit methods—the “backward stable” algorithms of ODE numerics—handle such problems gracefully.
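The contrast can be seen on the scalar stiff test equation $du/dt = \lambda u$ with $\lambda = -100$ (a hypothetical example chosen for illustration). With $h = 0.05$, forward Euler's amplification factor is $1 + h\lambda = -4$, so the numerical solution explodes, while backward Euler's factor is $1/(1 - h\lambda) = 1/6$, which decays as the true solution does.

```python
# Forward vs backward Euler on du/dt = -100*u, u(0) = 1, with h = 0.05.
lam, h, n = -100.0, 0.05, 20

u_fe = u_be = 1.0
for _ in range(n):
    u_fe = u_fe + h * lam * u_fe   # explicit: u_{n+1} = (1 + h*lam) * u_n
    u_be = u_be / (1 - h * lam)    # implicit: u_{n+1} = u_n / (1 - h*lam)

# |u_fe| has blown up to 4**20, while u_be has decayed toward zero
```

The implicit step costs more per step (for the scalar linear case it is just a division; in general it requires solving a nonlinear system), but it remains stable at step sizes where the explicit method is useless.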

The stiffness ratio $R = \max|\lambda_i| / \min|\lambda_i|$ can be understood as a condition number for method choice: it measures how ill-conditioned the “use an explicit method” approach is. When $R$ is large, the ratio of work required for stability versus work required for accuracy becomes enormous—this is the computational signature of stiffness.
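For a linear system $du/dt = Au$, the $\lambda_i$ are the eigenvalues of $A$, and $R$ is easy to compute. A sketch with a hypothetical matrix (chosen so the ratio is obvious):

```python
import numpy as np

# Two decay modes with widely separated time scales: a stiff system.
A = np.array([[-1000.0,    0.0],
              [    0.0,   -1.0]])

eigvals = np.linalg.eigvals(A)
R = np.max(np.abs(eigvals)) / np.min(np.abs(eigvals))
# R = 1000: an explicit method's step size is limited by the fast
# eigenvalue -1000, even though accuracy only needs to resolve the
# slow mode with eigenvalue -1.
```

Here the fast mode dies out almost immediately, yet it continues to dictate the stability limit of an explicit method for the entire integration; that mismatch between stability-limited and accuracy-limited step sizes is exactly what $R$ quantifies.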

## Learning Outcomes

After completing this chapter, you should be able to (THIS NEEDS UPDATES):