
Big Idea

Runge-Kutta methods achieve higher order accuracy by evaluating $f$ at multiple points within each step—no derivatives of $f$ required. Embedded pairs compute two approximations of different orders using (nearly) the same work, enabling automatic error estimation. This powers adaptive time-stepping: adjust $h$ to maintain accuracy while minimizing computational cost.

Beyond Euler: The Need for Higher Order

Euler’s method has local truncation error $O(h)$—first order. Achieving $10^{-6}$ accuracy might require millions of steps.

The challenge: Higher-order Taylor methods require derivatives of $f$:

$$u(t+h) = u(t) + hf + \frac{h^2}{2}(f_t + f_u f) + \cdots$$

Computing $f_t$, $f_u$, $f_{uu}$, etc. is tedious and error-prone.

The Runge-Kutta idea: Evaluate $f$ at cleverly chosen intermediate points and combine the evaluations to cancel error terms—no derivatives needed.

Second-Order Runge-Kutta (RK2)

The Method

Given the initial value problem $u'(t) = f(t, u)$, we proceed in two stages:

$$\begin{aligned} r_1 &= f(t_n, u_n) \\ r_2 &= f(t_n + \alpha h,\; u_n + \beta h r_1) \end{aligned}$$

Here $\alpha, \beta \in [0, 1]$ are fractional values to be determined. The update is:

$$u_{n+1} = u_n + h(a r_1 + b r_2)$$

When $a = 1$ and $b = 0$, this reduces to forward Euler. Our goal is to find $a, b, \alpha, \beta$ that give second-order accuracy.

Derivation: Matching Taylor Series

Goal: Choose parameters so the local truncation error is $O(h^2)$ instead of $O(h)$.

Step 1: Taylor expand the exact solution.

$$u(t_{n+1}) = u(t_n) + h u'(t_n) + \frac{h^2}{2} u''(t_n) + O(h^3)$$

Using $u' = f$ and the chain rule $u'' = f_t + f_u f$:

$$u(t_{n+1}) = u_n + hf + \frac{h^2}{2}(f_t + f_u f) + O(h^3)$$

Step 2: Taylor expand $r_2$.

$$r_2 = f(t_n + \alpha h,\; u_n + \beta h r_1) = f + \alpha h f_t + \beta h f_u r_1 + O(h^2)$$

Since $r_1 = f$:

$$r_2 = f + h(\alpha f_t + \beta f_u f) + O(h^2)$$

Step 3: Expand the numerical method.

Substituting into the update formula:

$$u_{n+1} = u_n + h(a r_1 + b r_2) = u_n + h(a+b)f + h^2 b(\alpha f_t + \beta f_u f) + O(h^3)$$

Step 4: Match coefficients.

Equating terms of equal order in $h$ on both sides:

| Order | Exact solution | Numerical method | Condition |
| --- | --- | --- | --- |
| $O(h)$ | $f$ | $(a+b)f$ | $a + b = 1$ |
| $O(h^2)$, $f_t$ term | $\frac{1}{2}f_t$ | $b\alpha f_t$ | $b\alpha = \frac{1}{2}$ |
| $O(h^2)$, $f_u f$ term | $\frac{1}{2}f_u f$ | $b\beta f_u f$ | $b\beta = \frac{1}{2}$ |

Three equations, four unknowns — a one-parameter family of solutions!

Common RK2 Methods

Definition 1 (RK2 Midpoint Method)

With $a = 0$, $b = 1$, and $\alpha = \beta = 1/2$:

$$\begin{aligned} r_1 &= f(t_n, u_n) \\ r_2 &= f\!\left(t_n + \frac{h}{2},\; u_n + \frac{h}{2} r_1\right) \\ u_{n+1} &= u_n + h r_2 \end{aligned}$$

Interpretation: Take half an Euler step, evaluate the slope there, use that slope for the full step.

*Figure: Midpoint method ($a=0$, $b=1$, $\alpha=\beta=1/2$) solving $u'=-2u$ with $h=0.2$. Step 1: Compute slope $r_1$ at the current point. Step 2: Euler step with step size $h/2$. Step 3: Compute slope $r_2$ at the intermediate point. Step 4: Full step uses slope $r_2$. The diamond shows Euler’s result for comparison.*
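One midpoint step is easy to write out directly. The following sketch (function and variable names are illustrative, not from any library) reproduces the figure's example problem:

```python
import math

def rk2_midpoint_step(f, t, u, h):
    """One step of the RK2 midpoint method (a=0, b=1, alpha=beta=1/2)."""
    r1 = f(t, u)                    # slope at the current point
    r2 = f(t + h/2, u + h/2 * r1)   # slope after a half Euler step
    return u + h * r2               # full step using the midpoint slope

# The figure's setup: u' = -2u, u(0) = 1, h = 0.2; five steps reach t = 1
f = lambda t, u: -2.0 * u
u = 1.0
for n in range(5):
    u = rk2_midpoint_step(f, 0.2 * n, u, 0.2)
print(u, math.exp(-2.0))   # numerical u(1) vs exact e^{-2}
```

For this linear problem each step multiplies $u$ by $1 + z + z^2/2$ with $z = -2h = -0.4$, i.e. by $0.68$, so the numerical answer is $0.68^5 \approx 0.145$ versus the exact $e^{-2} \approx 0.135$.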

Definition 2 (Heun’s Method (Averaging))

With $a = b = 1/2$ and $\alpha = \beta = 1$:

$$\begin{aligned} r_1 &= f(t_n, u_n) \\ r_2 &= f(t_n + h,\; u_n + h r_1) \\ u_{n+1} &= u_n + \frac{h}{2}(r_1 + r_2) \end{aligned}$$

Interpretation: Take a full Euler step, then average the initial and final slopes.

*Figure: Heun’s method ($a=b=1/2$, $\alpha=\beta=1$) solving $u'=-5u$ with $h=0.2$. Step 1: Compute slope at the initial point. Step 2: Full Euler step. Step 3: Compute slope at the new point. Step 4: Final step uses the average of the two slopes. The diamond shows Euler’s result for comparison.*
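A minimal sketch of one Heun step, applied to the figure's example (names are illustrative):

```python
def heun_step(f, t, u, h):
    """One step of Heun's method (a=b=1/2, alpha=beta=1)."""
    r1 = f(t, u)                 # slope at the initial point
    r2 = f(t + h, u + h * r1)    # slope after a full Euler step
    return u + h/2 * (r1 + r2)   # average the two slopes

# The figure's setup: u' = -5u, u(0) = 1, h = 0.2
u1 = heun_step(lambda t, u: -5.0 * u, 0.0, 1.0, 0.2)
print(u1)  # 0.5: the full Euler step lands at u = 0, so the averaged slope is -2.5
```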

Both methods have local truncation error $O(h^2)$, i.e. they are second order.
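The order can be checked numerically: if the error behaves like $O(h^2)$, halving $h$ should roughly quarter the error. A sketch using Heun's method on $u'=-2u$ (helper name is illustrative):

```python
import math

def heun_solve(f, u0, t0, t1, n):
    """Integrate u' = f(t, u) from t0 to t1 with n steps of Heun's method."""
    h = (t1 - t0) / n
    t, u = t0, u0
    for _ in range(n):
        r1 = f(t, u)
        r2 = f(t + h, u + h * r1)
        u, t = u + h/2 * (r1 + r2), t + h
    return u

f = lambda t, u: -2.0 * u
exact = math.exp(-2.0)
e10 = abs(heun_solve(f, 1.0, 0.0, 1.0, 10) - exact)
e20 = abs(heun_solve(f, 1.0, 0.0, 1.0, 20) - exact)
print(e10 / e20)   # ≈ 4: halving h quarters the error, confirming second order
```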

Embedded Pairs and Error Estimation

Fixed step sizes are inefficient: too small wastes work, too large loses accuracy. To adapt $h$ locally, we need an estimate of the local truncation error at each step. But $\tau_n$ depends on the unknown exact solution, so we cannot compute it directly. The key idea is to obtain a cheap error estimate by comparing two methods of different order.

The Embedded Pair Idea

Compute two approximations of different orders from the same $f$ evaluations: a lower-order result $\hat{u}_{n+1}$ (order $p$) and a higher-order result $u_{n+1}$ (order $p+1$).

The difference estimates the local error of the lower-order method:

$$\text{err} \approx |u_{n+1} - \hat{u}_{n+1}| = O(h^{p+1})$$

RK12: Euler-Midpoint Pair

Embed Euler (order 1) with RK2 midpoint (order 2):

Definition 3 (RK12 Embedded Pair)

$$\begin{aligned} r_1 &= f(t_n, u_n) \\ r_2 &= f\!\left(t_n + \frac{h}{2},\; u_n + \frac{h}{2} r_1\right) \\ \hat{u}_{n+1} &= u_n + h r_1 && \text{(Euler, order 1)} \\ u_{n+1} &= u_n + h r_2 && \text{(Midpoint, order 2)} \end{aligned}$$

Error estimate: $\text{err} = |u_{n+1} - \hat{u}_{n+1}| = h\,|r_2 - r_1|$

Cost: 2 function evaluations per step (same as plain RK2), plus we get an error estimate!
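A sketch of one RK12 step that returns both the order-2 result and the error estimate (names are illustrative):

```python
def rk12_step(f, t, u, h):
    """One RK12 step: midpoint result plus an estimate of Euler's local error."""
    r1 = f(t, u)
    r2 = f(t + h/2, u + h/2 * r1)
    u_hat = u + h * r1              # Euler, order 1
    u_new = u + h * r2              # midpoint, order 2
    err = abs(u_new - u_hat)        # = h|r2 - r1|
    return u_new, err

# For u' = -2u at u = 1 with h = 0.2, err matches (h^2/2)|u''| = 0.02 * 4
u_new, err = rk12_step(lambda t, u: -2.0 * u, 0.0, 1.0, 0.2)
print(u_new, err)   # ≈ 0.68, 0.08
```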

Why This Works

The difference between the methods is approximately:

$$\hat{u}_{n+1} - u_{n+1} = h(r_1 - r_2) \approx -\frac{h^2}{2}(f_t + f_u f) = -\frac{h^2}{2}u''(t_n)$$

This is (up to sign) the leading error term in Euler’s method.

Adaptive Time-Stepping

With an error estimate, we can adjust $h$ automatically.

Algorithm 1 (Adaptive Step-Size Control)

Input: Tolerance parameters atol, rtol; current step size $h$; current state $(t_n, u_n)$

Output: New state $(t_{n+1}, u_{n+1})$ and updated step size $h_{\text{new}}$

  1. Compute the step using RK12:

    • $r_1 = f(t_n, u_n)$

    • $r_2 = f(t_n + h/2,\; u_n + h r_1/2)$

    • $\hat{u}_{n+1} = u_n + h r_1$ (Euler)

    • $u_{n+1} = u_n + h r_2$ (Midpoint)

  2. Estimate scaled error:

    $s = \text{atol} + \max(|u_n|, |u_{n+1}|) \cdot \text{rtol}$

    $\text{err} = \left\|\dfrac{u_{n+1} - \hat{u}_{n+1}}{s}\right\|$
  3. Accept or reject:

    • If $\text{err} \leq 1$: accept the step, set $t_{n+1} \leftarrow t_n + h$

    • If $\text{err} > 1$: reject the step, keep $(t_n, u_n)$

  4. Update step size:

    $h_{\text{new}} = h \cdot \min\left(\alpha_{\max},\; \max\left(\alpha_{\min},\; \alpha \cdot \text{err}^{-1/(p+1)}\right)\right)$

    where $\alpha \approx 0.9$ (safety factor), $\alpha_{\min} \approx 0.2$, and $\alpha_{\max} \approx 5$.

  5. Repeat from step 1 with the reduced step size (if rejected) or continue to the next time step (if accepted).
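The steps above can be sketched for a scalar problem as follows. This is a simplified illustration, not production code; the endpoint clipping and the guard against $\text{err} = 0$ are added assumptions:

```python
import math

def rk12_adaptive(f, u0, t0, t1, h=0.1, atol=1e-6, rtol=1e-6):
    """Adaptive RK12 integration of a scalar IVP u' = f(t, u), per Algorithm 1."""
    alpha, a_min, a_max, p = 0.9, 0.2, 5.0, 1   # safety factor, clip bounds, lower order
    t, u = t0, u0
    while t < t1 - 1e-12:
        h = min(h, t1 - t)                       # don't step past the endpoint
        # Step 1: compute the RK12 pair
        r1 = f(t, u)
        r2 = f(t + h/2, u + h/2 * r1)
        u_low, u_high = u + h * r1, u + h * r2
        # Step 2: scaled error estimate
        s = atol + max(abs(u), abs(u_high)) * rtol
        err = abs(u_high - u_low) / s
        # Step 3: accept or reject
        if err <= 1.0:
            t, u = t + h, u_high                 # advance with the order-2 result
        # Step 4: update the step size (guarding against err = 0)
        fac = alpha * err ** (-1.0 / (p + 1)) if err > 0 else a_max
        h *= min(a_max, max(a_min, fac))
    return u

u_end = rk12_adaptive(lambda t, u: -2.0 * u, 1.0, 0.0, 1.0)
print(u_end, math.exp(-2.0))
```

Note that accepted steps advance with the higher-order midpoint value even though the error estimate controls Euler's error; this common choice is known as local extrapolation.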

Generalizations to Higher Order Methods

The RK2 derivation extends to higher orders:

Remark 1 (Higher-Order Runge-Kutta Methods)

RK4 (Classical 4th-order): Uses 4 stages, local error $O(h^4)$. The most popular “hand-coded” method.

RK45 (Dormand-Prince): A 5th-order method with embedded 4th-order for error estimation. Used in scipy.integrate.solve_ivp with method='RK45'.

The derivation follows the same pattern: Taylor expand, match coefficients. But the algebra becomes increasingly complex; these methods are derived once and tabulated.
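For example, the Dormand-Prince pair mentioned above can be invoked through SciPy (the test problem $u'=-2u$ is just for illustration):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Adaptive Dormand-Prince (RK45): the solver chooses its own step sizes
# to meet the requested tolerances.
sol = solve_ivp(lambda t, u: -2.0 * u, (0.0, 1.0), [1.0],
                method='RK45', rtol=1e-8, atol=1e-10)
print(sol.y[0, -1], np.exp(-2.0))   # adaptive result vs exact e^{-2}
```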