
The Initial Value Problem

We seek to approximate the solution of

$$\frac{du}{dt} = f(t, u), \quad u(t_0) = u_0$$

on a grid of times $t_0 < t_1 < t_2 < \cdots < t_N = T$ with step size $h = t_{n+1} - t_n$.

Let $u_n$ denote our approximation to $u(t_n)$.

Derivation

Replace $du/dt$ with a forward difference:

$$\frac{u(t_{n+1}) - u(t_n)}{h} \approx f(t_n, u(t_n))$$

This gives the forward Euler (or explicit Euler) method:

$$u_{n+1} = u_n + h f(t_n, u_n)$$

Geometric Interpretation

Forward Euler follows the tangent line:

  1. At $(t_n, u_n)$, the slope is $f(t_n, u_n)$

  2. Follow this slope for time $h$

  3. Arrive at $u_{n+1} = u_n + h \cdot \text{slope}$

Each step follows a tangent to a (possibly different) solution curve.
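The three steps above can be sketched directly in code; a minimal example, assuming the test problem $u' = -u$ with $u(0) = 1$:

```python
import math

# One forward Euler step for the assumed test problem u' = f(t, u) = -u.
def f(t, u):
    return -u

t_n, u_n, h = 0.0, 1.0, 0.1
slope = f(t_n, u_n)            # step 1: slope at (t_n, u_n) is f(t_n, u_n) = -1.0
u_next = u_n + h * slope       # steps 2-3: follow the tangent for time h
print(u_next, math.exp(-h))    # 0.9 vs. the exact value e^{-0.1} ≈ 0.9048
```

The small gap between 0.9 and $e^{-0.1}$ is exactly the local error the tangent-line approximation commits in one step.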

Implementation

import numpy as np

def forward_euler(f, u0, t0, T, h):
    """
    Solve u' = f(t, u) from t0 to T with step size h.
    """
    t = t0
    u = u0
    times = [t]
    values = [u]

    while t < T - 1e-12:          # tolerance guards against roundoff in t
        h_step = min(h, T - t)    # shrink the final step so we land exactly on T
        u = u + h_step * f(t, u)
        t = t + h_step
        times.append(t)
        values.append(u)

    return np.array(times), np.array(values)
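As a quick usage check, we solve the assumed test problem $u' = -u$, $u(0) = 1$, whose exact solution is $e^{-t}$ (the solver is re-stated compactly so the snippet runs on its own):

```python
import numpy as np

# Compact restatement of the forward Euler solver above.
def forward_euler(f, u0, t0, T, h):
    t, u = t0, u0
    times, values = [t], [u]
    while t < T - 1e-12:                       # tolerance guards against roundoff in t
        u, t = u + h * f(t, u), t + h
        times.append(t)
        values.append(u)
    return np.array(times), np.array(values)

# Solve u' = -u, u(0) = 1 on [0, 1]; exact solution is e^{-t}.
times, values = forward_euler(lambda t, u: -u, 1.0, 0.0, 1.0, 0.1)
print(len(times) - 1, "steps; u(1) ≈", values[-1])   # 10 steps; ≈ 0.3487 vs e^{-1} ≈ 0.3679
```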

Local Truncation Error

The local truncation error measures how well the exact solution satisfies the numerical scheme.

For forward Euler, Taylor expand the exact solution:

$$u(t_{n+1}) = u(t_n) + h u'(t_n) + \frac{h^2}{2}u''(t_n) + O(h^3)$$

Since $u'(t_n) = f(t_n, u(t_n))$, the local truncation error $\tau_n = \frac{u(t_{n+1}) - u(t_n)}{h} - f(t_n, u(t_n))$ is:

$$\tau_n = \frac{h}{2}u''(t_n) + O(h^2) = O(h)$$
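We can confirm this numerically: take one Euler step starting from the exact solution of the assumed problem $u' = -u$, $u(0) = 1$, and divide the defect by $h$. Since $u''(0) = 1$ here, the quotient $\tau_n / h$ should approach $\frac{1}{2}$:

```python
import math

# Local truncation error of one forward Euler step on u' = -u, u(0) = 1 (exact: e^{-t}).
taus = []
for h in [0.1, 0.05, 0.025]:
    euler_step = 1.0 + h * (-1.0)          # u_0 + h*f(t_0, u_0)
    tau = (math.exp(-h) - euler_step) / h  # defect of the exact solution, per unit step
    taus.append(tau)
    print(f"h={h:<6} tau={tau:.6f}  tau/h={tau / h:.4f}")  # tau/h -> 1/2
```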

Global Error and Convergence

The global error $E_n = u_n - u(t_n)$ accumulates local errors over all steps.

Error Propagation: Building Intuition

To understand how local errors accumulate, consider the linear test problem $u' = \lambda u + g(t)$. The exact solution satisfies:

$$u(t_{n+1}) = (1 + h\lambda)u(t_n) + hg(t_n) + h\tau_n$$

The numerical solution satisfies:

$$u_{n+1} = (1 + h\lambda)u_n + hg(t_n)$$

Subtracting gives the error recurrence:

$$E_{n+1} = (1 + h\lambda)E_n - h\tau_n$$

Unrolling this recurrence:

$$E_n = (1 + h\lambda)^n E_0 - h\sum_{j=0}^{n-1}(1 + h\lambda)^{n-1-j}\tau_j$$

This is the discrete Duhamel principle: the global error at step nn equals the initial error propagated forward, plus all local truncation errors propagated by the appropriate powers of the amplification factor.
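The unrolled formula can be sanity-checked against direct iteration of the recurrence, using arbitrary assumed values for $h$, $\lambda$, $E_0$, and the $\tau_j$:

```python
import numpy as np

# Verify the closed form of E_{n+1} = R*E_n - h*tau_n by comparing to direct iteration.
h, lam, E0 = 0.1, -2.0, 0.5
R = 1 + h * lam                      # amplification factor
tau = np.linspace(0.3, 1.2, 20)      # stand-in local truncation errors

# Direct iteration
E = E0
for tau_j in tau:
    E = R * E - h * tau_j

# Closed form: E_n = R^n E_0 - h * sum_j R^{n-1-j} tau_j
n = len(tau)
E_closed = R**n * E0 - h * sum(R**(n - 1 - j) * tau[j] for j in range(n))
print(abs(E - E_closed))             # agrees up to roundoff
```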

Remark 2 (Connection to Neumann Series)

This structure is identical to the Neumann series from iterative methods! Compare:

| Context | Recurrence | Solution |
|---|---|---|
| Iterative methods | $x_{k+1} = Ax_k + b$ | $x_n = A^n x_0 + \sum_{j=0}^{n-1} A^{n-1-j}b$ |
| ODE error | $E_{n+1} = RE_n - h\tau_n$ | $E_n = R^n E_0 - h\sum_{j=0}^{n-1} R^{n-1-j}\tau_j$ |

where $R = 1 + h\lambda$ is the amplification factor.

Both are discrete convolutions with a geometric kernel. The convergence/stability condition is the same: $|R| < 1$ (ODE stability) parallels $\|A\| < 1$ (Neumann series convergence).

This also discretizes the continuous variation of constants formula:

$$u(t) = e^{\lambda t}u_0 + \int_0^t e^{\lambda(t-s)}g(s)\,ds$$

The exponential $e^{\lambda t}$ becomes the power $R^n = (1+h\lambda)^n$, and the integral becomes a sum.
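This correspondence is easy to see numerically; a sketch with an assumed $\lambda = -1$ and fixed $t = nh = 1$:

```python
import math

# As h -> 0 with n*h = t fixed, the power R^n = (1 + h*lam)^n approaches e^(lam*t).
lam, t = -1.0, 1.0
exact = math.exp(lam * t)
for n in [10, 100, 1000]:
    h = t / n
    power = (1 + h * lam) ** n
    print(f"n={n:<5} R^n={power:.6f}  e^(lam*t)={exact:.6f}  gap={abs(power - exact):.2e}")
```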

Convergence Theorem

Proof 1

We prove this for forward Euler on a nonlinear problem; the general case is similar.

Step 1: Error recurrence. The numerical method gives $u_{n+1} = u_n + hf(t_n, u_n)$. The exact solution satisfies $u(t_{n+1}) = u(t_n) + hf(t_n, u(t_n)) + h\tau_n$.

Subtracting:

$$E_{n+1} = E_n + h[f(t_n, u_n) - f(t_n, u(t_n))] - h\tau_n$$

Step 2: Lipschitz bound. Since $f$ is Lipschitz in $u$ with constant $L$:

$$|f(t_n, u_n) - f(t_n, u(t_n))| \leq L|E_n|$$

Thus:

$$|E_{n+1}| \leq |E_n| + hL|E_n| + h|\tau_n| = (1 + hL)|E_n| + h|\tau_n|$$

Step 3: Unroll the recurrence. Let $\tau_{\max} = \max_j |\tau_j|$. Then:

$$|E_n| \leq (1 + hL)^n|E_0| + h\tau_{\max}\sum_{j=0}^{n-1}(1 + hL)^j$$

Step 4: Bound the geometric sum. Using $(1 + hL)^k \leq e^{khL}$ and $\sum_{j=0}^{n-1}(1+hL)^j \leq \frac{(1+hL)^n - 1}{hL}$:

$$|E_n| \leq e^{nhL}|E_0| + \frac{e^{nhL} - 1}{L}\tau_{\max}$$

Step 5: Substitute $nh = t_n - t_0$ and $\tau_{\max} = O(h^p)$. With $E_0 = 0$ (exact initial condition):

$$|E_n| \leq \frac{e^{L(t_n - t_0)} - 1}{L} \cdot Ch^p = O(h^p)$$
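The $O(h)$ rate ($p = 1$ for forward Euler) can be observed directly; a sketch on the assumed problem $u' = -u$, $u(0) = 1$, integrated to $t = 1$:

```python
import math

# Halving h should roughly halve the global error (order p = 1).
def euler_final(h, T=1.0):
    u = 1.0
    for _ in range(round(T / h)):
        u = u + h * (-u)             # forward Euler on u' = -u
    return u

errors = [abs(euler_final(h) - math.exp(-1.0)) for h in (0.1, 0.05, 0.025)]
ratios = [errors[i] / errors[i + 1] for i in range(2)]
print(ratios)                        # each ratio is close to 2
```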
Proof 2

This follows directly from Theorem 1. If the method has consistency order $p$, then $\|\tau\| \leq Ch^p$. The theorem gives:

$$\|E_n\| \leq \frac{C}{L_\Psi}\left(e^{L_\Psi T} - 1\right) h^p = O(h^p)$$

The key point: the constant $\frac{C}{L_\Psi}(e^{L_\Psi T} - 1)$ depends on the problem (through $L_\Psi$ and $T$) but not on $h$. Thus consistency of order $p$ implies convergence of order $p$.

Remark 3 (The Forward/Backward Error Framework for ODEs)

The convergence theorem is the ODE version of our fundamental error relationship:

$$\text{Forward error} \leq \text{Condition number} \times \text{Backward error}$$

In the ODE setting:

  • Backward error = local truncation error $\tau_n = O(h^p)$

  • Forward error = global error $\|E_n\| = O(h^p)$

  • Condition number = $e^{L(t_n - t_0)}$ (Lipschitz amplification over time)

The Lipschitz constant $L$ measures how sensitive the ODE is to perturbations. The factor $e^{LT}$ is the condition number for time evolution: a property of the problem, not the algorithm.

A consistent method (small backward error per step) achieves convergence (controlled forward error) as long as the problem isn’t too ill-conditioned (moderate LL).

Stability Analysis

Convergence (as $h \to 0$) doesn’t guarantee useful results for fixed $h > 0$. We need absolute stability.

The convergence theorem tells us that errors are bounded as $h \to 0$. But in practice, we use a fixed step size $h > 0$. The question becomes: does the error recurrence

$$E_{n+1} = (1 + h\lambda)E_n - h\tau_n$$

produce bounded errors for our chosen $h$? This depends on the amplification factor $R = 1 + h\lambda$. If $|R| > 1$, errors grow exponentially with each step; the method is unstable. Stability is the condition that ensures our backward errors (local truncation errors) aren’t amplified catastrophically as we march forward in time.

The Test Equation

Apply forward Euler to $u' = \lambda u$ with $\text{Re}(\lambda) < 0$ (decaying solutions):

$$u_{n+1} = (1 + h\lambda)u_n = R(z)u_n, \quad z = h\lambda$$

where $R(z) = 1 + z$ is the stability function.

Stability Region

For the numerical solution to remain bounded, we need $|R(z)| = |1 + z| \leq 1$.

For real $\lambda < 0$: stability requires $-2 \leq h\lambda \leq 0$, so:

$$h \leq \frac{2}{|\lambda|}$$

Forward Euler is conditionally stable: the step size is restricted by the problem’s eigenvalues.
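The restriction is easy to demonstrate; a sketch with an assumed $\lambda = -10$, so the stability limit is $h \leq 2/|\lambda| = 0.2$:

```python
# Stability demo for u' = lambda*u with lambda = -10, so 2/|lambda| = 0.2.
lam = -10.0

def euler_magnitude(h, steps=50):
    """|u_n| after `steps` forward Euler steps starting from u_0 = 1."""
    u = 1.0
    for _ in range(steps):
        u = u + h * lam * u          # u_{n+1} = (1 + h*lambda) * u_n
    return abs(u)

print(euler_magnitude(0.15))   # |R| = |1 - 1.5| = 0.5 < 1: decays
print(euler_magnitude(0.25))   # |R| = |1 - 2.5| = 1.5 > 1: blows up
```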

Remark 4 (The Exponential Factor)

The factor $e^{L(t_n - t_0)}$ in the convergence bound is unavoidable; it reflects the problem’s sensitivity (condition number), not the algorithm. The Lipschitz constant $L$ measures how quickly nearby solution curves diverge or converge:

  • If $L = 0$ (e.g., $u' = g(t)$): solution curves are parallel, errors don’t grow

  • If $L > 0$: solution curves diverge, errors can grow exponentially

  • If $L < 0$ (meaningful for the one-sided Lipschitz constant, as in dissipative problems): solution curves converge, errors are damped

Limitations

Forward Euler’s conditional stability becomes problematic for stiff problems, where large eigenvalues force tiny step sizes even when the solution varies slowly.

For the test equation with $\lambda = -1000$, stability requires $h \leq 2/1000 = 0.002$.

We’re forced to take thousands of tiny steps tracking a transient we don’t care about. This motivates implicit methods like backward Euler, which we develop next.
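A sketch of the stiffness constraint, on the assumed problem $u' = -1000u$, $u(0) = 1$, over a short interval:

```python
# Forward Euler on the stiff problem u' = -1000*u, where 2/|lambda| = 0.002.
lam = -1000.0

def euler_final(h, T=0.05):
    u = 1.0
    for _ in range(round(T / h)):
        u = u + h * lam * u
    return u

print(abs(euler_final(0.0015)))   # h < 0.002: stable, decays toward 0
print(abs(euler_final(0.004)))    # h > 0.002: |1 + h*lam| = 3, the iterates explode
```

Even in the stable case, integrating to $T = 1$ at $h = 0.002$ costs 500 steps, nearly all spent after the transient has already died out.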