Many systems we want to model are not purely deterministic:
Finance: Stock prices fluctuate due to unpredictable market forces. The
Black–Scholes model for asset prices is an SDE, specifically geometric Brownian motion.
Biology: Populations are subject to environmental noise. Deterministic
logistic growth $dX = rX(1 - X/K)\,dt$ becomes stochastic when we add
$\sigma X\,dW$ to model random environmental fluctuations.
Physics: A pollen grain suspended in water gets buffeted by water
molecules, the original Brownian motion observed by Robert Brown in 1827.
An ODE for the grain’s position would predict it stays still (no net force).
The actual path is a random, jittery curve.
Imagine a person standing at the origin on the real line. At each tick of a
clock (spaced $\delta t$ apart), they step right by $+\delta$ or left by
$-\delta$, each with probability $1/2$. Let $\xi_j \in \{-\delta, +\delta\}$
denote the $j$-th step. The steps are independent, and by symmetry each has
mean zero:
$$\mathbb{E}[\xi_j] = \tfrac{1}{2}(+\delta) + \tfrac{1}{2}(-\delta) = 0.$$
Here $\mathbb{E}[X]$ denotes the expected value (or mean) of a random
variable $X$: the average over all possible outcomes, weighted by their
probabilities. Since $\mathbb{E}[\xi_j] = 0$, the variance is
$\mathrm{Var}(\xi_j) = \mathbb{E}[\xi_j^2] = \delta^2$.
After $n$ steps the position is $S_n = \sum_{j=1}^{n} \xi_j$. Since
$S_{n+1} = S_n + \xi_{n+1}$, we can ask: if the walker is currently at
position $s$, what is the expected position after one more step? In
general, the conditional expectation of a random variable $Y$ given
that another random variable $X$ takes the value $x$ is
the average of $Y$ restricted to only those outcomes where $X = x$. In
our case $Y = S_{n+1}$, $X = S_n$, and $x = s$. Given $S_n = s$, the
only two possible values of $S_{n+1}$ are $s + \delta$ and $s - \delta$,
each with conditional probability $1/2$. So:
$$\mathbb{E}[S_{n+1} \mid S_n = s] = \tfrac{1}{2}(s + \delta) + \tfrac{1}{2}(s - \delta) = s.$$
This says: no matter where the walker currently stands, the expected
position after one more step is unchanged. To get the unconditional
expectation, we average over all possible values of $S_n$, weighting each
by its probability. Write $p_k = P(S_n = k\delta)$ for the
probability that the walker is at position $k\delta$ after $n$ steps. Then
$$\mathbb{E}[S_{n+1}] = \sum_k \mathbb{E}[S_{n+1} \mid S_n = k\delta]\, p_k = \sum_k (k\delta)\, p_k = \mathbb{E}[S_n].$$
In the second equality we used the conditional result above:
$\mathbb{E}[S_{n+1} \mid S_n = k\delta] = k\delta$. The final sum is
the definition of $\mathbb{E}[S_n]$: recall that $S_n$ takes the values
$k\delta$ with probabilities $p_k$, so
$\mathbb{E}[S_n] = \sum_k (k\delta)\, p_k$. (This argument is an instance
of the law of total expectation.) Since
$S_0 = 0$, unrolling gives $\mathbb{E}[S_n] = \mathbb{E}[S_{n-1}] = \dots = \mathbb{E}[S_0] = 0$.
For the mean-square displacement $\mathbb{E}[S_n^2]$ (the average
squared distance from the origin, taken over many realizations), square
$S_{n+1} = S_n + \xi_{n+1}$ and average over the two cases:
$$\mathbb{E}[S_{n+1}^2] = \mathbb{E}[S_n^2] + 2\,\mathbb{E}[S_n \xi_{n+1}] + \mathbb{E}[\xi_{n+1}^2] = \mathbb{E}[S_n^2] + \delta^2,$$
since $\xi_{n+1}$ is independent of $S_n$ and has mean zero. Unrolling
gives $\mathbb{E}[S_n^2] = n\delta^2$, and after time $t = n\,\delta t$
this is $\mathbb{E}[S_n^2] = (t/\delta t)\,\delta^2$.
For this to have a well-defined limit as the discretization is refined, we
need $\delta^2/\delta t$ to remain constant, i.e. $\delta \propto \sqrt{\delta t}$.
The natural choice is $\delta = \sqrt{\delta t}$, giving
$$\mathbb{E}[S_n^2] = t.$$
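This scaling is easy to check numerically. Below is a minimal sketch (the tick spacing and sample count are arbitrary choices) that simulates many independent coin-flip walks with $\delta = \sqrt{\delta t}$ and estimates the mean and mean-square displacement:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1.0               # total time
n = 400               # number of ticks
dt = T / n            # tick spacing
delta = np.sqrt(dt)   # step size: delta = sqrt(dt)
M = 10_000            # number of independent walkers

# Each step is +delta or -delta with probability 1/2.
steps = delta * rng.choice([-1.0, 1.0], size=(M, n))
S = steps.sum(axis=1)     # position after n steps, one per walker

print(S.mean())           # close to 0:  E[S_n] = 0
print((S ** 2).mean())    # close to T:  E[S_n^2] = n * delta^2 = t
```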
This is precisely what is observed experimentally: if you track a pollen
grain suspended in water under a microscope, the mean-square displacement
grows linearly with time. Einstein (1905) explained this by the argument
above, connecting the microscopic randomness of molecular collisions to the
macroscopic diffusion rate.
Now imagine a person standing at the origin who, at regular time intervals
$\delta t$, takes a step of random size. Let $\xi_j$ denote the $j$-th
step, drawn from a probability distribution with density $p(\xi)$
(so that $P(a \le \xi_j \le b) = \int_a^b p(\xi)\,d\xi$).
Suppose the steps are independent with mean zero and variance $\sigma^2$:
$$\mathbb{E}[\xi_j] = 0, \qquad \mathrm{Var}(\xi_j) = \mathbb{E}[\xi_j^2] = \sigma^2.$$
After $n$ steps the position is $S_n = \sum_{j=1}^{n} \xi_j$. What are the
mean and variance of $S_n$? Writing out the expected value against the
joint density $p(\xi_1, \dots, \xi_n)$:
$$\mathbb{E}[S_n] = \int \cdots \int \Big( \sum_{j=1}^{n} \xi_j \Big)\, p(\xi_1, \dots, \xi_n)\, d\xi_1 \cdots d\xi_n.$$
The integral is linear in the sum. Since the steps are independent, the
joint density factors as
$p(\xi_1, \dots, \xi_n) = p(\xi_1) \cdots p(\xi_n)$, and the
cross-integrals collapse: $\int p(\xi_k)\, d\xi_k = 1$ for $k \neq j$.
This gives
$$\mathbb{E}[S_n] = \sum_{j=1}^{n} \int \xi_j\, p(\xi_j)\, d\xi_j = \sum_{j=1}^{n} \mathbb{E}[\xi_j] = 0.$$
For the variance, independence is essential. Recall that
$\mathrm{Var}(X) = \mathbb{E}[X^2] - (\mathbb{E}[X])^2$. Since
$\mathbb{E}[S_n] = 0$, the variance reduces to
$\mathrm{Var}(S_n) = \mathbb{E}[S_n^2]$, the mean-square
displacement: the average of the squared distance from the origin,
taken over many realizations of the walk. Expanding $S_n^2$:
$$\mathbb{E}[S_n^2] = \sum_{j=1}^{n} \sum_{k=1}^{n} \mathbb{E}[\xi_j \xi_k] = \sum_{j=1}^{n} \mathbb{E}[\xi_j^2] = n\sigma^2,$$
since for $j \neq k$ independence gives $\mathbb{E}[\xi_j \xi_k] = \mathbb{E}[\xi_j]\,\mathbb{E}[\xi_k] = 0$.
After time $t = n\,\delta t$ the mean-square displacement is
$\mathbb{E}[S_n^2] = (t/\delta t)\,\sigma^2$. For this to depend
only on $t$ and not on the discretization $\delta t$, we need
$\sigma^2 \propto \delta t$, i.e. each step has standard deviation
proportional to $\sqrt{\delta t}$.
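The same numerical check works for any step law. A sketch with uniform steps scaled so each has variance $\delta t$ (the choice of uniform distribution is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

T = 1.0
n = 400
dt = T / n
M = 10_000

# Uniform steps on [-a, a] have variance a^2 / 3; choose a so that
# Var(xi_j) = dt, i.e. standard deviation sqrt(dt).
a = np.sqrt(3 * dt)
S = rng.uniform(-a, a, size=(M, n)).sum(axis=1)

# Mean-square displacement grows like t, regardless of the step shape.
print((S ** 2).mean())   # close to T = 1
```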
Nothing in the drunkard’s walk argument used the shape of the step
distribution, only that the steps are independent with mean zero and
finite variance. The $\pm\delta$ coin flip, a uniform distribution, or
any other zero-mean, finite-variance law all give the same linear growth
of mean-square displacement. So why does Brownian motion have Gaussian
increments?
The answer is the Central Limit Theorem (CLT): if
$\xi_1, \xi_2, \dots$ are i.i.d. with mean $0$ and variance $\sigma^2$,
then the normalized sum converges in distribution to a Gaussian,
$$\frac{S_n}{\sigma \sqrt{n}} \xrightarrow{d} \mathcal{N}(0, 1).$$
Equivalently, $S_n \xrightarrow{d} \mathcal{N}(0, n\sigma^2)$.
In our setting, $n = t/\delta t$ steps of variance
$\sigma^2 = \delta t$ give $S_n \xrightarrow{d} \mathcal{N}(0, t)$,
regardless of the step distribution. The Gaussian emerges as the
universal limit of many small independent kicks: a coin-flip walk,
a uniform walk, or molecular collisions all produce the same Gaussian
in the continuum limit.
This universality is made precise by Donsker’s theorem (the functional
CLT): the rescaled random walk converges to Brownian motion as a
continuous process, not just at a single time. Brownian motion is the
unique scaling limit of every finite-variance random walk, in the same way
the Gaussian is the universal limit for sums of independent random
variables. Using Gaussian increments from the start simply gives exact
Brownian increments at every scale, rather than only in the limit.
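This universality is easy to see numerically (step laws and sample sizes below are arbitrary): the endpoints of a coin-flip walk and a uniform-step walk, each scaled to variance $\delta t$ per step, both match the $\mathcal{N}(0, t)$ prediction.

```python
import numpy as np

rng = np.random.default_rng(2)

T = 1.0
n, M = 250, 10_000
dt = T / n

# Two very different step laws, both with mean 0 and variance dt.
coin = (np.sqrt(dt) * rng.choice([-1.0, 1.0], size=(M, n))).sum(axis=1)
unif = rng.uniform(-np.sqrt(3 * dt), np.sqrt(3 * dt), size=(M, n)).sum(axis=1)

# If S_n ~ N(0, T) with T = 1, then P(|S_n| <= 1) is about 0.6827.
for S in (coin, unif):
    print(np.mean(np.abs(S) <= 1.0))
```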
The $\sqrt{\delta t}$ scaling is precisely what Einstein’s argument
requires. When $\delta t \to 0$, this
random walk converges to a continuous random function $W(t)$ called
Brownian motion (or a Wiener process).
A stochastic process like $W(t)$ gives a different function on every
realization. The best we can do is describe statistics of the process:
expected values, variances, and correlations across realizations. When we
solve an SDE numerically, the solution $X_n$ at each time step is itself a
random variable that depends on all the random increments
$\Delta W_0, \Delta W_1, \dots, \Delta W_{n-1}$.
In practice, we rarely know the density of $X_n$ explicitly, so we cannot
evaluate $\mathbb{E}[X_n]$ as an integral. Instead, we approximate it by
Monte Carlo sampling: run $M$ independent simulations and average the
results,
$$\mathbb{E}[X_n] \approx \frac{1}{M} \sum_{i=1}^{M} X_n^{(i)}.$$
This is a quadrature rule for the integral $\int x\, p(x)\, dx$, where the
sample points $X^{(i)}$ are drawn from the distribution $p$ rather than
placed on a deterministic grid. See the companion notebook
Monte Carlo Methods for Monte Carlo integration examples,
importance sampling, and the Metropolis–Hastings algorithm.
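As a standalone illustration of the quadrature view (the distribution here is a stand-in, not the density of any particular $X_n$): estimate the mean of a lognormal random variable, whose exact value $e^{1/2}$ is known for comparison.

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo as quadrature: approximate E[X] = int x p(x) dx by an
# average over M samples drawn from p.  Here X = e^Z with Z ~ N(0, 1),
# so the exact mean e^{1/2} is available for comparison.
M = 100_000
X = np.exp(rng.standard_normal(M))

estimate = X.mean()
exact = float(np.exp(0.5))
print(estimate, exact)   # agree to roughly 1/sqrt(M)
```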
The SDE
$$dX = \lambda X\, dt + \mu X\, dW,$$
where $\lambda, \mu$ are constants, is called geometric Brownian motion
(GBM). Both drift and diffusion are proportional to $X$. This is the SDE
underlying the Black–Scholes model for asset prices.
Modelling interpretation. The relative change in price over a short
interval has two components:
A deterministic trend $\lambda\, dt$ (expected rate of return).
A random fluctuation $\mu\, dW$ (volatility).
So $dX/X = \lambda\, dt + \mu\, dW$, i.e. $dX = \lambda X\, dt + \mu X\, dW$.
The multiplicative noise ensures prices stay positive and fluctuations scale
with price level.
Exact solution. This is one of the rare SDEs with a closed-form
solution, making it an ideal test problem: we can compare the
Euler–Maruyama approximation against the true solution on the same
Brownian path. The exact solution is
$$X(t) = X_0 \exp\!\Big( \big(\lambda - \tfrac{1}{2}\mu^2\big)\, t + \mu\, W(t) \Big).$$
The $-\tfrac{1}{2}\mu^2$ correction is a mathematical consequence of the Itô
correction term (see Theorem 3 below), not a modelling choice.
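Because the exact solution is available, the comparison can be done on a single shared Brownian path. A minimal sketch (the parameter values $\lambda = 1$, $\mu = 0.5$, $X_0 = 1$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

lam, mu, X0 = 1.0, 0.5, 1.0   # drift, volatility, initial value
T, N = 1.0, 1024
dt = T / N

# One Brownian path, shared by the exact solution and Euler-Maruyama.
dW = np.sqrt(dt) * rng.standard_normal(N)
W = np.concatenate(([0.0], np.cumsum(dW)))
t = np.linspace(0.0, T, N + 1)

# Exact GBM solution evaluated on this path.
X_exact = X0 * np.exp((lam - 0.5 * mu ** 2) * t + mu * W)

# Euler-Maruyama on the same increments.
X = np.empty(N + 1)
X[0] = X0
for k in range(N):
    X[k + 1] = X[k] + lam * X[k] * dt + mu * X[k] * dW[k]

print(abs(X[-1] - X_exact[-1]))   # pathwise error at T, small for small dt
```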
The derivation uses Itô’s formula, the stochastic chain rule derived in
the next section: set $Y = \ln X$ and apply Itô’s formula with
$\varphi(x) = \ln x$ (so $\varphi' = 1/x$, $\varphi'' = -1/x^2$). Here
$f(X) = \lambda X$ and $g(X) = \mu X$, so
$$dY = \Big( \frac{f(X)}{X} - \frac{g(X)^2}{2X^2} \Big)\, dt + \frac{g(X)}{X}\, dW = \big(\lambda - \tfrac{1}{2}\mu^2\big)\, dt + \mu\, dW,$$
and integrating then exponentiating recovers the solution above.
Both $X_n$ (numerical) and $X(t_n)$ (exact) are random variables. On any
particular run, the error $|X_n - X(t_n)|$ depends on the Brownian path.
We need convergence concepts that account for randomness. Now that we have
the expected value at our disposal, there are two natural ways to measure
the error.
Strong convergence asks: how close is each numerical path to the true
path, on average? We compute the pathwise error $|X_n - X(t_n)|$ on
each realization, then take the expected value across all realizations.
This matters when individual trajectories are important, for example
when simulating a specific stock price path or a particular particle
trajectory.
Strong convergence is demanding: it requires every individual path to be
accurate. Often we only care about statistics of the solution, for
instance the expected payoff of a financial derivative or the mean
concentration of a chemical species. Weak convergence asks a different
question: how well does the method reproduce the expected value?
Rather than averaging the error, we look at the error of the average:
$|\mathbb{E}[X_n] - \mathbb{E}[X(t_n)]|$.
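Both error measures can be estimated empirically for Euler–Maruyama applied to GBM, using the exact solution as ground truth. A sketch (all parameters are illustrative; the truth is evaluated from the same fine-grid Brownian increments that the coarser runs use):

```python
import numpy as np

rng = np.random.default_rng(5)

lam, mu, X0, T = 1.0, 0.5, 1.0, 1.0
M = 2000                      # number of sample paths
N_fine = 512
dt_fine = T / N_fine

# Brownian increments on the fine grid, one row per path.
dW = np.sqrt(dt_fine) * rng.standard_normal((M, N_fine))
X_true = X0 * np.exp((lam - 0.5 * mu ** 2) * T + mu * dW.sum(axis=1))

def em_endpoint(step):
    """Euler-Maruyama endpoint using the fine increments coarsened by `step`."""
    dWc = dW.reshape(M, N_fine // step, step).sum(axis=2)
    dt = step * dt_fine
    X = np.full(M, X0)
    for k in range(N_fine // step):
        X = X + lam * X * dt + mu * X * dWc[:, k]
    return X

strong = {}
for step in (1, 4, 16):
    X = em_endpoint(step)
    strong[step] = np.mean(np.abs(X - X_true))     # E|X_N - X(T)|
    weak = abs(X.mean() - X0 * np.exp(lam * T))    # |E[X_N] - E[X(T)]|
    print(step * dt_fine, strong[step], weak)
```

Quadrupling the step size should roughly double the strong error, consistent with strong order $1/2$; the weak-error column is noisier because it also carries the $O(1/\sqrt{M})$ sampling error.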
To understand where the exact GBM solution comes from, and in particular
the $-\tfrac{1}{2}\mu^2$ correction, we need a stochastic version of
the chain rule. This requires first making sense of integration with
respect to Brownian motion.
We have already seen that the integral form of an SDE involves an expression
$\int_0^T H(s)\, dW(s)$, where $H$ is some process. The Itô integral
defines this as the limit of left-endpoint Riemann sums:
$$\int_0^T H(s)\, dW(s) = \lim_{N \to \infty} \sum_{k=0}^{N-1} H(t_k)\, \big( W(t_{k+1}) - W(t_k) \big), \qquad t_k = kT/N.$$
The left-endpoint choice (evaluating $H$ at $t_k$ rather than at a later
point in the interval) is what makes this the Itô integral (as opposed
to a midpoint or right-endpoint convention). It is also the choice that
Euler–Maruyama naturally implements.
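The left-endpoint sum is easy to evaluate for a concrete integrand. For $H = W$ the integral has the closed form $\int_0^T W\,dW = \tfrac{1}{2}\big(W(T)^2 - T\big)$ (a consequence of Itô's formula, derived below), which gives a direct check; the discretization and seed here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(6)

T, N = 1.0, 10_000
dt = T / N
dW = np.sqrt(dt) * rng.standard_normal(N)
W = np.concatenate(([0.0], np.cumsum(dW)))

# Left-endpoint (Ito) Riemann sum for  int_0^T W dW.
ito = np.sum(W[:-1] * dW)

# Closed form from Ito's formula: (W(T)^2 - T) / 2.
exact = 0.5 * (W[-1] ** 2 - T)
print(ito, exact)   # agree up to discretization error
```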
Two properties of the Itô integral are essential for everything that
follows: it has zero mean,
$\mathbb{E}\big[ \int_0^T H(s)\, dW(s) \big] = 0$ (the martingale
property), and its second moment is given by the Itô isometry,
$\mathbb{E}\big[ \big( \int_0^T H(s)\, dW(s) \big)^2 \big] = \int_0^T \mathbb{E}[H(s)^2]\, ds$.
Now we can derive the stochastic chain rule. Suppose $X(t)$ satisfies the
SDE $dX = f(X)\, dt + g(X)\, dW$, and let $\varphi$ be a twice continuously
differentiable function. We want the SDE satisfied by $Y(t) = \varphi(X(t))$.
Numerical solutions are random variables; each simulation gives one
sample path.
The $\sqrt{h}$ scaling of Brownian increments is why $(dW)^2 = dt$, why
the chain rule gains a correction (Itô’s formula), and why strong
convergence is order $1/2$ instead of $1$.
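The scaling itself can be verified by sampling (sizes below are arbitrary): increments $\Delta W \sim \mathcal{N}(0, h)$ satisfy $\mathbb{E}\,|\Delta W| = \sqrt{2h/\pi}$ and $\mathbb{E}[(\Delta W)^2] = h$, so the squared increment contributes at first order in $h$ rather than vanishing as $h^2$.

```python
import numpy as np

rng = np.random.default_rng(7)

M = 100_000
moments = {}
for h in (0.1, 0.01):
    dW = np.sqrt(h) * rng.standard_normal(M)
    # |dW| scales like sqrt(h); (dW)^2 scales like h, not h^2,
    # which is the origin of the Ito correction term.
    moments[h] = (np.abs(dW).mean(), (dW ** 2).mean())
    print(h, *moments[h])
```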
Higham, D. J. (2001). An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations. SIAM Review, 43(3), 525–546. doi:10.1137/S0036144500378302