```python
import numpy as np

def forward_euler(f, u0, t0, T, h):
    """
    Solve u' = f(t, u) from t0 to T with step size h using forward Euler.

    Returns arrays of the time points and the approximate solution values.
    """
    t = t0
    u = u0
    times = [t]
    values = [u]
    while t < T:
        u = u + h * f(t, u)   # one explicit Euler step
        t = t + h
        times.append(t)
        values.append(u)
    return np.array(times), np.array(values)
```
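As a quick sanity check (a minimal sketch, not part of the original text; the test equation, step size, and printout are chosen purely for illustration), we can run the integrator on $u' = -2u$, whose exact solution $e^{-2t}$ makes the first-order error visible:

```python
import numpy as np

# Test equation u' = -2u with exact solution exp(-2t)
f = lambda t, u: -2.0 * u

times, values = forward_euler(f, u0=1.0, t0=0.0, T=1.0, h=0.25)
for t, u in zip(times, values):
    print(f"t = {t:.2f}   euler = {u:.4f}   exact = {np.exp(-2.0 * t):.4f}")
```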
This is the discrete Duhamel principle: the global error at step n equals the initial error propagated forward, plus all local truncation errors propagated by the appropriate powers of the amplification factor.
This structure is identical to the Neumann series from iterative methods! Compare:
| Context | Recurrence | Solution |
| --- | --- | --- |
| Iterative methods | $x_{k+1} = A x_k + b$ | $x_n = A^n x_0 + \sum_{j=0}^{n-1} A^{n-1-j} b$ |
| ODE error | $E_{n+1} = R E_n - h \tau_n$ | $E_n = R^n E_0 - h \sum_{j=0}^{n-1} R^{n-1-j} \tau_j$ |

where $R = 1 + h\lambda$ is the amplification factor.
Both are discrete convolutions with a geometric kernel. The convergence/stability condition is the same: $|R| < 1$ (ODE stability) parallels $\|A\| < 1$ (Neumann series convergence).
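To make the parallel concrete, here is a small numerical check (an illustration added here, not from the original; the local truncation errors `tau` are model values for $u' = \lambda u$, chosen only so the sum has something to act on) that unrolling the error recurrence reproduces its closed-form convolution solution:

```python
import numpy as np

lam, h, n_steps = -2.0, 0.1, 20
R = 1 + h * lam                                   # amplification factor R = 1 + h*lambda

# Model local truncation errors for u' = lam*u: tau_j ~ (h/2) u''(t_j) (illustrative values)
tau = 0.5 * h * lam**2 * np.exp(lam * h * np.arange(n_steps))

# Unroll the recurrence E_{n+1} = R*E_n - h*tau_n starting from E_0 = 0
E = 0.0
for j in range(n_steps):
    E = R * E - h * tau[j]

# Closed-form discrete convolution: E_n = R^n E_0 - h * sum_j R^(n-1-j) tau_j
E_closed = -h * sum(R**(n_steps - 1 - j) * tau[j] for j in range(n_steps))

print(E, E_closed)   # agree up to roundoff
```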
This also discretizes the continuous variation of constants formula: for $u' = \lambda u + g(t)$,

$$u(t) = e^{\lambda (t - t_0)} u(t_0) + \int_{t_0}^{t} e^{\lambda (t - s)}\, g(s)\, ds,$$

with the powers $R^{n-1-j}$ playing the role of the continuous propagator $e^{\lambda (t - s)}$.
The key point: the constant $\frac{C}{L}\left(e^{LT} - 1\right)$ depends on the problem (through $L$ and $T$) but not on $h$. Thus consistency of order $p$ implies convergence of order $p$.
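A short experiment (illustrative only; it reuses `forward_euler` from above with an arbitrary test equation) shows this in practice: halving $h$ roughly halves the global error, i.e. first-order convergence with an $h$-independent constant.

```python
import numpy as np

lam = -2.0
f = lambda t, u: lam * u

for h in [0.1, 0.05, 0.025, 0.0125]:
    times, values = forward_euler(f, u0=1.0, t0=0.0, T=1.0, h=h)
    err = np.max(np.abs(values - np.exp(lam * times)))
    print(f"h = {h:7.4f}   max error = {err:.2e}   error/h = {err / h:.3f}")
```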
Remark 3 (The Forward/Backward Error Framework for ODEs)
The convergence theorem is the ODE version of our fundamental error relationship: the forward error (global error) is bounded by the condition number times the backward error (local truncation error), where

Condition number $= e^{L(t_n - t_0)}$ (Lipschitz amplification over time)

The Lipschitz constant $L$ measures how sensitive the ODE is to perturbations. The factor $e^{LT}$ is the condition number for time evolution, a property of the problem, not the algorithm.

A consistent method (small backward error per step) achieves convergence (controlled forward error) as long as the problem isn't too ill-conditioned (moderate $L$).
Convergence (as $h \to 0$) doesn't guarantee useful results for fixed $h > 0$. We need absolute stability.
The convergence theorem tells us that errors are bounded as $h \to 0$. But in practice, we use a fixed step size $h > 0$. The question becomes: does the error recurrence

$$E_{n+1} = R E_n - h \tau_n$$

produce bounded errors for our chosen $h$? This depends on the amplification factor $R = 1 + h\lambda$. If $|R| > 1$, errors grow exponentially with each step: the method is unstable. Stability is the condition that ensures our backward errors (local truncation errors) don't get amplified catastrophically as we march forward in time.
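A minimal sketch of this effect (the values of $\lambda$, $h$, the step count, and the seed error of $10^{-8}$ are all illustrative choices, not from the original): propagating a small error through the recurrence $E_{n+1} = R E_n$ either damps it or amplifies it geometrically depending on whether $|R| < 1$.

```python
lam = -4.0                        # test equation u' = lam * u; exact solution decays
for h in [0.1, 0.4, 0.6]:
    R = 1 + h * lam               # forward Euler amplification factor
    E_final = 1e-8 * abs(R)**40   # an initial error of 1e-8 after 40 steps of E_{n+1} = R*E_n
    status = "stable" if abs(R) < 1 else "UNSTABLE"
    print(f"h = {h:.2f}   R = {R:+.2f}   error after 40 steps ~ {E_final:.2e}   ({status})")
```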
The factor $e^{L(t_n - t_0)}$ in the convergence bound is unavoidable; it reflects the problem's sensitivity (condition number), not the algorithm. The Lipschitz constant $L$ measures how quickly nearby solution curves diverge or converge (see the sketch after this list):

- If $L = 0$ (e.g., $u' = g(t)$): solution curves are parallel, errors don't grow
- If $L > 0$: solution curves diverge, errors can grow exponentially
- If $L < 0$: solution curves converge, errors are damped
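A small illustration (added here, not from the original; it reuses `forward_euler` and compares two runs whose initial conditions differ by $10^{-6}$) of how the sign of the growth rate governs error propagation:

```python
# Compare how a 1e-6 perturbation in the initial condition propagates for L > 0 vs L < 0
def gap_growth(lam, label):
    f = lambda t, u: lam * u
    _, u_a = forward_euler(f, u0=1.0,        t0=0.0, T=2.0, h=0.01)
    _, u_b = forward_euler(f, u0=1.0 + 1e-6, t0=0.0, T=2.0, h=0.01)
    print(f"{label}: initial gap 1e-6 -> gap at T = 2 is {abs(u_b[-1] - u_a[-1]):.2e}")

gap_growth(+1.0, "L > 0 (u' = +u)")   # gap grows by roughly e^2
gap_growth(-1.0, "L < 0 (u' = -u)")   # gap shrinks by roughly e^-2
```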
Forward Euler’s conditional stability becomes problematic for stiff problems—where large eigenvalues force tiny step sizes even when the solution varies slowly.
For the test equation with $\lambda = -1000$:

- Stability requires $h < 2/|\lambda| = 0.002$
- But the solution $e^{-1000t}$ decays to negligible levels by $t = 0.01$
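As a concrete sketch (the step sizes 0.0019 and 0.0021 are illustrative choices bracketing the stability limit; the code reuses `forward_euler` from above), the restriction translates directly into wasted work and, if violated, blow-up:

```python
lam = -1000.0
f = lambda t, u: lam * u

# Just inside the stability limit: bounded, but hundreds of steps to reach T = 1
t_ok, u_ok = forward_euler(f, u0=1.0, t0=0.0, T=1.0, h=0.0019)
print(f"h = 0.0019: {len(t_ok) - 1} steps, |u(T)| = {abs(u_ok[-1]):.2e}")

# Just outside the stability limit: the numerical solution explodes
t_bad, u_bad = forward_euler(f, u0=1.0, t0=0.0, T=1.0, h=0.0021)
print(f"h = 0.0021: {len(t_bad) - 1} steps, |u(T)| = {abs(u_bad[-1]):.2e}")
```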
We’re forced to take thousands of tiny steps tracking a transient we don’t care about. This motivates implicit methods like backward Euler, which we develop next.