Module 2.3
Deterministic differential equations assume that the future is completely determined by the present state. Reality is rarely so obliging. Financial markets fluctuate under the influence of countless unpredictable agents; molecules in a fluid are buffeted by thermal noise; gene expression in a single cell is driven by random encounters between transcription factors and DNA. Stochastic differential equations (SDEs) extend the framework of ordinary and partial differential equations to incorporate randomness as a first-class modelling ingredient. The Ito calculus that underpins SDEs differs from ordinary calculus in a fundamental way: the chain rule acquires an extra correction term (Ito's lemma) because the Wiener process has non-zero quadratic variation. Mastering this calculus opens the door to mathematical finance (the Black-Scholes formula is a direct consequence), computational biology (Langevin dynamics), and statistical physics. Beyond SDEs, reaction-diffusion systems couple local chemical kinetics with spatial diffusion and can spontaneously generate spatial patterns from homogeneity, a mechanism proposed by Alan Turing in 1952 and now understood to drive pigmentation patterns in animals, vegetation stripes in arid ecosystems, and morphogen gradients in embryonic development. This module equips you with the mathematical foundations and computational tools for both stochastic and pattern-forming models.
After completing this module you will be able to:

- define Brownian motion and explain why its paths defeat ordinary calculus;
- apply Ito's lemma to transform SDEs and derive exact solutions such as geometric Brownian motion;
- implement the Euler-Maruyama method and verify its strong and weak convergence orders empirically;
- estimate expectations by Monte Carlo simulation and quantify the \(1/\sqrt{M}\) standard error;
- perform a linear (Turing) stability analysis of a reaction-diffusion system and simulate the resulting patterns.
A standard Brownian motion (or Wiener process) is a continuous-time stochastic process \(\{W(t)\}_{t \geq 0}\) satisfying:

- \(W(0) = 0\) almost surely;
- independent increments: for \(0 \leq s < t\), the increment \(W(t) - W(s)\) is independent of the path up to time \(s\);
- Gaussian increments: \(W(t) - W(s) \sim \mathcal{N}(0,\, t - s)\);
- continuous sample paths almost surely.
A crucial property is that Brownian paths are nowhere differentiable almost surely, and the quadratic variation satisfies \([W,W](t) = t\). This prevents the use of ordinary calculus and necessitates the Ito calculus.
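The identity \([W,W](t) = t\) can be seen numerically: summing squared increments of a sampled Brownian path over ever finer partitions converges to \(t\). A quick sketch (the seeds and resolutions below are arbitrary choices):

```python
import numpy as np

t = 2.0
for n_steps in [10**3, 10**4, 10**5]:
    rng = np.random.default_rng(n_steps)             # fresh seed per resolution
    dt = t / n_steps
    dW = np.sqrt(dt) * rng.standard_normal(n_steps)  # N(0, dt) increments
    qv = np.sum(dW**2)                               # estimate of [W, W](t)
    print(f"n = {n_steps:6d}: sum of (dW)^2 = {qv:.4f}  (target t = {t})")
```

The fluctuation of the estimate around \(t\) shrinks like \(1/\sqrt{n}\), so the finest partition lands closest to the target.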
An Ito SDE is a stochastic differential equation of the form
$$ dX_t = a(X_t, t)\,dt + b(X_t, t)\,dW_t, $$where \(a(x,t)\) is the drift coefficient and \(b(x,t)\) is the diffusion coefficient. The rigorous meaning is the integral equation
$$ X_t = X_0 + \int_0^t a(X_s, s)\,ds + \int_0^t b(X_s, s)\,dW_s, $$where the second integral is the Ito stochastic integral, defined as the \(L^2\)-limit of sums \(\sum b(X_{t_i}, t_i)(W_{t_{i+1}} - W_{t_i})\) with the integrand evaluated at the left endpoint of each subinterval.
Geometric Brownian motion (GBM) is the process satisfying the SDE
$$ dS_t = \mu\,S_t\,dt + \sigma\,S_t\,dW_t, $$where \(\mu \in \mathbb{R}\) is the drift rate and \(\sigma > 0\) is the volatility. Applying Ito's lemma to \(\ln S_t\) yields the exact solution
$$ S_t = S_0 \exp\!\left[\left(\mu - \tfrac{1}{2}\sigma^2\right)t + \sigma\,W_t\right]. $$GBM is the foundational model of the Black-Scholes option pricing theory because it keeps \(S_t > 0\) and produces log-normal returns.
The Euler-Maruyama method discretises the Ito SDE \(dX = a\,dt + b\,dW\) on a uniform time grid \(t_n = n\Delta t\) as
$$ X_{n+1} = X_n + a(X_n, t_n)\,\Delta t + b(X_n, t_n)\,\Delta W_n, $$where \(\Delta W_n = W(t_{n+1}) - W(t_n) \sim \mathcal{N}(0, \Delta t)\). In practice, each \(\Delta W_n\) is sampled as \(\sqrt{\Delta t}\;\xi_n\) with \(\xi_n \sim \mathcal{N}(0,1)\).
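As a sketch, the update rule can be packaged as a generic one-path simulator; the function name and the GBM example parameters below are illustrative, not part of the module's reference code:

```python
import numpy as np

def euler_maruyama(a, b, x0, T, n_steps, rng):
    """Simulate one path of dX = a(X, t) dt + b(X, t) dW on [0, T]."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal()   # Delta W_n ~ N(0, dt)
        x[n + 1] = x[n] + a(x[n], n * dt) * dt + b(x[n], n * dt) * dW
    return x

# Example: one GBM path with mu = 0.05, sigma = 0.2
rng = np.random.default_rng(42)
path = euler_maruyama(lambda x, t: 0.05 * x, lambda x, t: 0.2 * x,
                      x0=100.0, T=1.0, n_steps=1000, rng=rng)
print(f"S(0) = {path[0]:.1f}, S(T) = {path[-1]:.2f}")
```

Passing the drift and diffusion as callables keeps the stepper reusable for any scalar SDE in this module.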
A numerical scheme \(\hat{X}\) for an SDE has strong order of convergence \(\gamma\) if
$$ \mathbb{E}\!\left[\left|X_T - \hat{X}_T\right|\right] \leq C\,(\Delta t)^{\gamma}. $$It has weak order of convergence \(\beta\) if, for every sufficiently smooth test function \(g\),
$$ \left|\mathbb{E}\!\left[g(X_T)\right] - \mathbb{E}\!\left[g(\hat{X}_T)\right]\right| \leq C\,(\Delta t)^{\beta}. $$Strong convergence demands pathwise accuracy; weak convergence demands only that expectations (and hence probability distributions) are approximated well. The Euler-Maruyama method has strong order \(\gamma = \frac{1}{2}\) and weak order \(\beta = 1\).
A reaction-diffusion system for two species \(u(x,t)\) and \(v(x,t)\) takes the form
$$ \frac{\partial u}{\partial t} = D_u\,\nabla^2 u + f(u,v), \qquad \frac{\partial v}{\partial t} = D_v\,\nabla^2 v + g(u,v), $$where \(D_u, D_v > 0\) are diffusion coefficients and \(f,g\) describe the local reaction kinetics. The homogeneous steady state \((u^*,v^*)\) satisfying \(f(u^*,v^*) = g(u^*,v^*) = 0\) may be stable to spatially uniform perturbations yet unstable to spatially varying perturbations when the diffusion coefficients are sufficiently different. This is a Turing instability.
Let \(X_t\) satisfy the Ito SDE \(dX_t = a\,dt + b\,dW_t\), and let \(f(x,t) \in C^{2,1}(\mathbb{R}\times[0,\infty))\). Then the process \(Y_t = f(X_t, t)\) satisfies
$$ df = \left(\frac{\partial f}{\partial t} + a\,\frac{\partial f}{\partial x} + \frac{1}{2}\,b^2\,\frac{\partial^2 f}{\partial x^2}\right)dt + b\,\frac{\partial f}{\partial x}\,dW_t. $$Proof sketch. Expand \(f(X_{t+dt}, t+dt)\) in a Taylor series to second order. In ordinary calculus, the \((dX)^2\) term is negligible. For an Ito process, however, \((dX)^2 = b^2\,(dW)^2 + \text{higher order}\), and the key identity \((dW)^2 = dt\) (in the mean-square sense) yields a non-vanishing second-order contribution. Specifically:
$$ df = \frac{\partial f}{\partial t}\,dt + \frac{\partial f}{\partial x}\,dX + \frac{1}{2}\frac{\partial^2 f}{\partial x^2}(dX)^2. $$Substituting \(dX = a\,dt + b\,dW\) and applying the Ito multiplication rules \((dt)^2 = 0\), \(dt\,dW = 0\), \((dW)^2 = dt\), we obtain the result. The term \(\frac{1}{2}b^2 f_{xx}\,dt\) is the Ito correction, absent in ordinary calculus.
Consider the Ito SDE \(dX_t = a(X_t,t)\,dt + b(X_t,t)\,dW_t\) on \([0,T]\) with Lipschitz-continuous coefficients \(a\) and \(b\). Let \(\hat{X}\) denote the Euler-Maruyama approximation with step size \(\Delta t\). Then the scheme converges with strong order \(\gamma = \frac{1}{2}\) and weak order \(\beta = 1\).
Proof sketch (strong convergence). Define the local error \(e_n = X_{t_n} - \hat{X}_{t_n}\). Using the Lipschitz conditions on \(a\) and \(b\), one shows that the expected squared error satisfies a discrete Gronwall inequality:
$$ \mathbb{E}[|e_{n+1}|^2] \leq (1 + C\Delta t)\,\mathbb{E}[|e_n|^2] + C\,(\Delta t)^2. $$Iterating and applying the discrete Gronwall lemma yields \(\mathbb{E}[|e_N|^2] \leq C\,\Delta t\), so \(\left(\mathbb{E}[|e_N|^2]\right)^{1/2} \leq C\,(\Delta t)^{1/2}\). The weak convergence proof uses a backward Kolmogorov PDE argument: define \(u(x,t) = \mathbb{E}[g(X_T) \mid X_t = x]\), show that \(u\) satisfies a PDE, then apply a Taylor expansion of the scheme within this PDE framework to extract the additional half-order of accuracy.
The classical chain rule gives \(d[f(X)] = f'(X)\,dX\). Ito's lemma adds the correction term \(\frac{1}{2}b^2 f''(X)\,dt\), which arises because Brownian motion has non-zero quadratic variation. Forgetting this term leads to incorrect drift computations. For example, applying the ordinary chain rule to \(\ln S_t\) under GBM gives drift \(\mu\), whereas Ito's lemma gives the correct drift \(\mu - \frac{1}{2}\sigma^2\).
The Euler-Maruyama method has strong order \(\frac{1}{2}\), not 1. Halving \(\Delta t\) reduces the strong error by a factor of \(\sqrt{2} \approx 1.41\), not 2. To reduce the strong error by half, you must quarter the step size. For weak convergence the order is 1, so halving the step size does halve the error in expectations.
The Monte Carlo standard error scales as \(\sigma / \sqrt{M}\) where \(M\) is the number of paths. Doubling \(M\) reduces the standard error by a factor of \(\sqrt{2}\), not 2. To halve the error, you need four times as many paths. Variance reduction techniques (antithetic variates, control variates, importance sampling) can dramatically improve efficiency beyond brute-force sampling.
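The benefit of antithetic variates is easy to demonstrate for \(\mathbb{E}[S_T]\) under GBM, where the terminal value can be sampled exactly from a standard normal (a sketch; the parameter values and seed are illustrative):

```python
import numpy as np

S0, mu, sigma, T = 100.0, 0.05, 0.2, 1.0
M = 100_000
rng = np.random.default_rng(7)
Z = rng.standard_normal(M)

def terminal(z):
    # Exact GBM terminal value driven by a standard normal draw z
    return S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

plain = terminal(Z)                       # plain Monte Carlo samples
anti = 0.5 * (plain + terminal(-Z))       # antithetic pair averages

se_plain = plain.std(ddof=1) / np.sqrt(M)
se_anti = anti.std(ddof=1) / np.sqrt(M)
print(f"exact E[S_T]    = {S0 * np.exp(mu * T):.4f}")
print(f"plain estimate  = {plain.mean():.4f} (SE {se_plain:.4f})")
print(f"antithetic      = {anti.mean():.4f} (SE {se_anti:.4f})")
```

Because \(S_T\) is monotone in \(Z\), the pair \((Z, -Z)\) produces negatively correlated samples, so each averaged pair has substantially less than half the variance of a single draw.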
In reaction-diffusion systems, diffusion can destabilise a spatially homogeneous equilibrium. Turing's insight was precisely that an activator-inhibitor system stable without diffusion can become unstable when the inhibitor diffuses faster than the activator. The resulting patterns emerge from the interplay between local reaction and non-local diffusion, not from any external template.
Black-Scholes assumes constant volatility and log-normal returns. Real markets exhibit heavy tails, volatility smiles, jumps, and stochastic volatility. The model remains foundational as a benchmark and pedagogical tool, but practitioners use extensions (Heston model, SABR, local volatility) to capture observed market behaviour.
Problem. Given the GBM SDE \(dS = \mu S\,dt + \sigma S\,dW\), find the SDE for \(Y_t = \ln S_t\).
Step 1 — Identify the ingredients. Here \(a(S,t) = \mu S\), \(b(S,t) = \sigma S\), and \(f(S) = \ln S\). We compute
$$ f'(S) = \frac{1}{S}, \qquad f''(S) = -\frac{1}{S^2}. $$Step 2 — Apply Ito's lemma.
$$ d(\ln S) = \frac{1}{S}\,dS + \frac{1}{2}\left(-\frac{1}{S^2}\right)(\sigma S)^2\,dt = \frac{1}{S}\,dS - \frac{1}{2}\sigma^2\,dt. $$Step 3 — Substitute \(dS\):
$$ d(\ln S) = \frac{1}{S}\bigl(\mu S\,dt + \sigma S\,dW\bigr) - \frac{1}{2}\sigma^2\,dt = \left(\mu - \frac{1}{2}\sigma^2\right)dt + \sigma\,dW. $$Step 4 — Integrate:
$$ \ln S_t - \ln S_0 = \left(\mu - \frac{1}{2}\sigma^2\right)t + \sigma\,W_t. $$Final solution:
$$ S_t = S_0\,\exp\!\left[\left(\mu - \frac{1}{2}\sigma^2\right)t + \sigma\,W_t\right]. $$Note the Ito correction \(-\frac{1}{2}\sigma^2\) in the drift of \(\ln S\). Without it, we would incorrectly predict \(\mathbb{E}[S_t] = S_0 e^{(\mu + \frac{1}{2}\sigma^2)t}\) instead of the correct \(\mathbb{E}[S_t] = S_0 e^{\mu t}\).
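Both claims are easy to confirm by sampling \(W_t \sim \mathcal{N}(0, t)\) directly (a sketch; the parameter values, chosen so that \(\mu - \sigma^2/2 = 0\), and the seed are arbitrary):

```python
import numpy as np

S0, mu, sigma, t = 1.0, 0.08, 0.4, 2.0   # note mu - sigma^2/2 = 0 here
rng = np.random.default_rng(1)
W_t = np.sqrt(t) * rng.standard_normal(1_000_000)   # W_t ~ N(0, t)
S_t = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W_t)

drift_of_log = np.mean(np.log(S_t)) / t   # ~ mu - sigma^2/2 = 0.0
mean_S = np.mean(S_t)                     # ~ S0 e^{mu t}
print(f"drift of ln S: {drift_of_log:.4f} (exact {mu - 0.5 * sigma**2})")
print(f"E[S_t]: {mean_S:.4f} (exact {S0 * np.exp(mu * t):.4f})")
```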
Problem. Simulate the Ornstein-Uhlenbeck (OU) process
$$ dX_t = \theta(\mu - X_t)\,dt + \sigma\,dW_t, \qquad X_0 = x_0, $$with mean-reversion speed \(\theta = 1\), long-run mean \(\mu = 0\), volatility \(\sigma = 0.3\), and \(x_0 = 1\), on \([0,5]\).
Step 1 — Identify drift and diffusion: \(a(x,t) = \theta(\mu - x) = -x\), \(b(x,t) = \sigma = 0.3\).
Step 2 — Discretise. With step size \(\Delta t\):
$$ X_{n+1} = X_n + (-X_n)\,\Delta t + 0.3\,\sqrt{\Delta t}\;\xi_n, \qquad \xi_n \sim \mathcal{N}(0,1). $$ $$ X_{n+1} = X_n(1 - \Delta t) + 0.3\,\sqrt{\Delta t}\;\xi_n. $$Step 3 — Verify against the exact distribution. The OU process has the exact solution
$$ X_t = x_0 e^{-\theta t} + \sigma\int_0^t e^{-\theta(t-s)}\,dW_s, $$which is normally distributed with mean \(x_0 e^{-t}\) and variance \(\frac{\sigma^2}{2\theta}(1 - e^{-2\theta t}) = \frac{0.09}{2}(1-e^{-2t})\). At \(t = 5\), the mean is \(e^{-5} \approx 0.0067\) and the variance is \(\approx 0.045\). Running 10000 Euler-Maruyama paths with \(\Delta t = 0.01\) and computing the sample mean and variance at \(t=5\) should closely match these values.
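This verification can be run directly (a sketch; the seed is an arbitrary choice):

```python
import numpy as np

theta, mu, sigma, x0 = 1.0, 0.0, 0.3, 1.0
T, dt, M = 5.0, 0.01, 10_000
N = int(T / dt)
rng = np.random.default_rng(2)

X = np.full(M, x0)
for n in range(N):
    dW = np.sqrt(dt) * rng.standard_normal(M)
    X = X + theta * (mu - X) * dt + sigma * dW   # Euler-Maruyama update

exact_mean = x0 * np.exp(-theta * T)                               # ~ 0.0067
exact_var = sigma**2 / (2 * theta) * (1 - np.exp(-2 * theta * T))  # ~ 0.045
print(f"sample mean {X.mean():.4f} vs exact {exact_mean:.4f}")
print(f"sample var  {X.var(ddof=1):.4f} vs exact {exact_var:.4f}")
```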
Problem. Consider the Schnakenberg model:
$$ \frac{\partial u}{\partial t} = D_u\,u_{xx} + a - u + u^2 v, \qquad \frac{\partial v}{\partial t} = D_v\,v_{xx} + b - u^2 v, $$with \(a = 0.1\), \(b = 0.9\). Find the homogeneous steady state and determine when Turing instability occurs.
Step 1 — Homogeneous steady state. Set the reaction terms to zero:
$$ a - u^* + (u^*)^2 v^* = 0, \qquad b - (u^*)^2 v^* = 0. $$From the second equation, \((u^*)^2 v^* = b\). Substituting into the first: \(a - u^* + b = 0\), so \(u^* = a + b = 1\) and \(v^* = b/(u^*)^2 = 0.9\).
Step 2 — Linearise. The Jacobian of the reaction terms at \((u^*, v^*)\) is
$$ J = \begin{pmatrix} f_u & f_v \\ g_u & g_v \end{pmatrix} = \begin{pmatrix} -1 + 2u^*v^* & (u^*)^2 \\ -2u^*v^* & -(u^*)^2 \end{pmatrix} = \begin{pmatrix} 0.8 & 1 \\ -1.8 & -1 \end{pmatrix}. $$Step 3 — Check stability without diffusion. \(\text{tr}(J) = 0.8 - 1 = -0.2 < 0\) and \(\det(J) = (0.8)(-1) - (1)(-1.8) = -0.8 + 1.8 = 1.0 > 0\). Both eigenvalues have negative real part, so the steady state is stable without diffusion.
Step 4 — Turing condition. For a perturbation with wavenumber \(k\), the dispersion relation yields diffusion-driven instability precisely when
$$ D_v f_u + D_u g_v > 0 \quad \text{and} \quad (D_v f_u + D_u g_v)^2 > 4 D_u D_v \det(J). $$Since \(f_u = 0.8 > 0\) (\(u\) is the activator) and \(g_v = -1 < 0\) (\(v\) is the inhibitor), the first condition forces the inhibitor to diffuse sufficiently faster than the activator. Writing \(r = D_v/D_u\) and using \(\det(J) = 1\), the two conditions combine into \(0.8r - 1 > 2\sqrt{r}\), which holds for \(r \gtrsim 8.6\). Taking \(D_u = 0.01\) and \(D_v = 1\) (ratio 100):
$$ D_v f_u + D_u g_v = 1(0.8) + 0.01(-1) = 0.79 > 0, $$and the second condition reads
$$ 0.79 > 2\sqrt{D_u D_v \det(J)} = 2\sqrt{0.01 \cdot 1 \cdot 1.0} = 0.2. $$Both conditions are satisfied. Hence there exists a band of wavenumbers \(k\) for which the homogeneous steady state is linearly unstable. The most unstable wavenumber determines the spatial wavelength of the emerging Turing pattern.
We simulate the GBM SDE \(dS = \mu S\,dt + \sigma S\,dW\) using the Euler-Maruyama method, plot multiple sample paths, and compare the Monte Carlo estimate of \(\mathbb{E}[S_T]\) against the exact value \(S_0 e^{\mu T}\).
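A minimal implementation of this demonstration might look as follows (the figure layout, grid, and seed are one reasonable choice, not prescribed):

```python
import numpy as np
import matplotlib.pyplot as plt

S0, mu, sigma, T = 100.0, 0.05, 0.2, 1.0
dt, M_plot, M = 0.01, 20, 10_000
N = int(T / dt)
rng = np.random.default_rng(0)

# Euler-Maruyama, vectorised over all M paths
t_grid = np.linspace(0.0, T, N + 1)
S = np.full((M, N + 1), S0)
for n in range(N):
    dW = np.sqrt(dt) * rng.standard_normal(M)
    S[:, n + 1] = S[:, n] + mu * S[:, n] * dt + sigma * S[:, n] * dW

exact_mean = S0 * np.exp(mu * T)
mc_mean = S[:, -1].mean()
print(f"Monte Carlo E[S_T] = {mc_mean:.3f}, exact = {exact_mean:.3f}")

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(11, 4))
ax1.plot(t_grid, S[:M_plot].T, lw=0.8)
ax1.axhline(exact_mean, color='k', ls='--', label=r'$S_0 e^{\mu T}$')
ax1.set(xlabel='t', ylabel='S(t)', title='GBM sample paths')
ax1.legend()
ax2.hist(S[:, -1], bins=60, density=True, alpha=0.7)
ax2.axvline(exact_mean, color='k', ls='--', label='exact mean')
ax2.axvline(mc_mean, color='r', ls=':', label='MC mean')
ax2.set(xlabel='S(T)', title='Terminal distribution')
ax2.legend()
plt.tight_layout()
plt.show()
```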
Output will display: (left) 20 GBM sample paths with the exact expected value marked, (right) histogram of terminal values S(T) compared with the exact and Monte Carlo means.
We frame Monte Carlo simulation as a sequential decision problem. An autonomous agent observes the running estimator variance and dynamically adapts the number of sample paths to minimise estimation error within a fixed computational budget.
The agent's state at decision step \(k\) is a tuple containing the running Monte Carlo mean, the running estimator variance, the current step size \(\Delta t\), and the remaining computational budget.
At each decision step the agent chooses from a discrete action space: simulate a further batch of paths at the current \(\Delta t\), refine \(\Delta t\) and simulate a smaller batch, switch on a variance reduction technique such as antithetic variates, or stop and report the estimate.
Over many episodes with different SDEs and parameter regimes, the agent learns non-trivial policies for trading off path count, step-size refinement, and variance reduction against the remaining budget.
The converged policy outperforms both the naive strategy (fixed \(\Delta t\), all budget on paths) and the overly cautious strategy (very small \(\Delta t\), too few paths) by a significant margin across a variety of SDE models.
Exercise 2.3.1. Let \(W_t\) be a standard Brownian motion. Compute \(\mathbb{E}[W_t^2]\) and \(\mathbb{E}[W_t^4]\) using the moment-generating function of the normal distribution.
Since \(W_t \sim \mathcal{N}(0,t)\), the even moments of a centred normal with variance \(t\) are:
$$ \mathbb{E}[W_t^2] = t, \qquad \mathbb{E}[W_t^4] = 3t^2. $$The fourth moment uses the fact that for \(Z \sim \mathcal{N}(0,1)\), \(\mathbb{E}[Z^4] = 3\). Since \(W_t = \sqrt{t}\,Z\), \(\mathbb{E}[W_t^4] = t^2 \cdot 3\).
Exercise 2.3.2. Apply Ito's lemma to find the SDE satisfied by \(Y_t = e^{X_t}\) where \(dX_t = -\alpha X_t\,dt + \sigma\,dW_t\) (the Ornstein-Uhlenbeck process).
With \(f(x) = e^x\), we have \(f'(x) = e^x\), \(f''(x) = e^x\). Here \(a = -\alpha x\), \(b = \sigma\). By Ito's lemma:
$$ dY = e^{X}\!\left(-\alpha X\,dt + \sigma\,dW\right) + \frac{1}{2}\sigma^2 e^{X}\,dt = Y\!\left(-\alpha X + \frac{1}{2}\sigma^2\right)dt + \sigma Y\,dW. $$Note that this SDE for \(Y\) involves \(X_t = \ln Y_t\), so the system is not closed in \(Y\) alone (unlike GBM). One must track both \(X\) and \(Y\) simultaneously.
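One can check this numerically: drive both \(X\) and the derived SDE for \(Y\) with the same Brownian increments and confirm that \(Y_t\) tracks \(e^{X_t}\) (a sketch; the parameter values and seed are arbitrary):

```python
import numpy as np

alpha, sigma = 0.5, 0.3
dt, N = 1e-4, 20_000     # fine step so the two discretisations stay close
rng = np.random.default_rng(4)

x, y = 0.0, 1.0          # Y_0 = e^{X_0}
for _ in range(N):
    dW = np.sqrt(dt) * rng.standard_normal()
    x_new = x + (-alpha * x) * dt + sigma * dW
    # EM step for dY = Y(-alpha X + sigma^2/2) dt + sigma Y dW, same increment
    y = y + y * (-alpha * x + 0.5 * sigma**2) * dt + sigma * y * dW
    x = x_new
print(f"e^X = {np.exp(x):.4f}, Y = {y:.4f}")   # the two should agree closely
```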
Exercise 2.3.3. Derive the Black-Scholes PDE. Let \(V(S,t)\) be the price of a European option on an asset following GBM \(dS = \mu S\,dt + \sigma S\,dW\). By constructing a delta-hedged portfolio \(\Pi = V - \frac{\partial V}{\partial S}S\) and requiring it to earn the risk-free rate \(r\), show that
$$ \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0. $$Apply Ito's lemma to \(V(S,t)\):
$$ dV = \left(\frac{\partial V}{\partial t} + \mu S\frac{\partial V}{\partial S} + \frac{1}{2}\sigma^2 S^2\frac{\partial^2 V}{\partial S^2}\right)dt + \sigma S\frac{\partial V}{\partial S}\,dW. $$Form the portfolio \(\Pi = V - \Delta S\) with \(\Delta = \frac{\partial V}{\partial S}\):
$$ d\Pi = dV - \Delta\,dS = \left(\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2\frac{\partial^2 V}{\partial S^2}\right)dt. $$The \(dW\) term cancels, so \(\Pi\) is locally riskless. No-arbitrage requires \(d\Pi = r\Pi\,dt\):
$$ \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2\frac{\partial^2 V}{\partial S^2} = r\!\left(V - S\frac{\partial V}{\partial S}\right). $$Rearranging gives the Black-Scholes PDE.
Exercise 2.3.4. For the Schnakenberg reaction-diffusion system with \(a = 0.1\), \(b = 0.9\), \(D_u = 0.01\), \(D_v = 1\), find the most unstable wavenumber \(k^*\) by maximising the largest eigenvalue of the matrix \(J - Dk^2\) where \(D = \text{diag}(D_u, D_v)\).
The dispersion matrix is
$$ A(k) = J - k^2 D = \begin{pmatrix} 0.8 - 0.01k^2 & 1 \\ -1.8 & -1 - k^2 \end{pmatrix}. $$The eigenvalues \(\lambda(k)\) satisfy \(\lambda^2 - \text{tr}(A)\lambda + \det(A) = 0\). Instability occurs when \(\det(A(k)) < 0\):
$$ \det(A) = (0.8 - 0.01k^2)(-1 - k^2) - (1)(-1.8) = -0.8 - 0.8k^2 + 0.01k^2 + 0.01k^4 + 1.8 = 0.01k^4 - 0.79k^2 + 1. $$Setting \(\det(A) = 0\): \(0.01\kappa^2 - 0.79\kappa + 1 = 0\) where \(\kappa = k^2\). By the quadratic formula:
$$ \kappa = \frac{0.79 \pm \sqrt{0.79^2 - 0.04}}{0.02} = \frac{0.79 \pm \sqrt{0.5841}}{0.02} = \frac{0.79 \pm 0.7643}{0.02}. $$So \(\kappa_1 \approx 1.285\) and \(\kappa_2 \approx 77.72\): unstable modes exist for \(k^2 \in (1.285, 77.72)\), i.e. \(k \in (1.13, 8.82)\). Note that \(\det(A)\) is minimised at \(\kappa = 0.79/(2 \times 0.01) = 39.5\), the midpoint of the unstable \(\kappa\)-band, but this does not maximise the growth rate: \(\operatorname{tr}(A) = -0.2 - 1.01k^2\) also becomes more negative as \(k\) grows, so the largest eigenvalue \(\lambda_+(k) = \tfrac{1}{2}\bigl(\operatorname{tr}A + \sqrt{(\operatorname{tr}A)^2 - 4\det A}\bigr)\) must be maximised numerically, which gives \(k^* \approx 3.4\).
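The eigenvalue maximisation requested in the problem can be done numerically; note that maximising \(\lambda_+(k)\) directly locates the fastest-growing mode at a smaller wavenumber than the \(\det\)-minimising value, because \(\operatorname{tr}(A)\) also varies with \(k\). A sketch (the grid resolution is an arbitrary choice):

```python
import numpy as np

# Reaction Jacobian at (u*, v*) = (1, 0.9) and diffusion matrix
J = np.array([[0.8, 1.0], [-1.8, -1.0]])
D = np.diag([0.01, 1.0])

k = np.linspace(0.01, 12.0, 2000)
# Largest real part among the eigenvalues of A(k) = J - k^2 D
growth = np.array([np.linalg.eigvals(J - kk**2 * D).real.max() for kk in k])

k_star = k[np.argmax(growth)]
unstable = k[growth > 0]
print(f"unstable band: k in ({unstable.min():.2f}, {unstable.max():.2f})")
print(f"most unstable wavenumber k* = {k_star:.2f}")
```

The band edges reproduce \(\sqrt{\kappa_1} \approx 1.13\) and \(\sqrt{\kappa_2} \approx 8.82\) from the quadratic above.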
Exercise 2.3.5. Implement the Euler-Maruyama method for the OU process \(dX = -(X-\mu)\,dt + \sigma\,dW\). Run 10000 paths, compute the sample mean and variance at \(t = 3\), and compare with the exact values.
Exercise 2.3.6. Verify the strong convergence order of Euler-Maruyama for GBM. For step sizes \(\Delta t \in \{0.1, 0.05, 0.025, 0.0125\}\), compute the mean absolute pathwise error \(\mathbb{E}[|S_T^{\text{exact}} - S_T^{\text{EM}}|]\) using 5000 paths. Plot the error vs. \(\Delta t\) on a log-log scale and fit a line. Confirm the slope is approximately \(0.5\).
```python
import numpy as np
import matplotlib.pyplot as plt

S0, mu, sigma, T = 100.0, 0.05, 0.2, 1.0
M = 5000                                # Monte Carlo paths per step size
dt_values = [0.1, 0.05, 0.025, 0.0125]
errors = []
np.random.seed(123)

for dt in dt_values:
    N = int(T / dt)
    # Brownian increments for this step size (fresh paths for each dt)
    dW = np.sqrt(dt) * np.random.randn(M, N)
    W = np.cumsum(dW, axis=1)
    W_T = W[:, -1]
    # Exact solution driven by the same Brownian paths
    S_exact = S0 * np.exp((mu - 0.5*sigma**2)*T + sigma*W_T)
    # Euler-Maruyama with the same increments
    S_em = np.full(M, S0)
    for n in range(N):
        S_em = S_em + mu*S_em*dt + sigma*S_em*dW[:, n]
    errors.append(np.mean(np.abs(S_exact - S_em)))

# Fit a straight line on the log-log scale; the slope estimates the strong order
log_dt = np.log(dt_values)
log_err = np.log(errors)
slope, intercept = np.polyfit(log_dt, log_err, 1)

plt.figure(figsize=(7, 5))
plt.loglog(dt_values, errors, 'bo-', markersize=8, label='Strong error')
dt_fit = np.array([0.01, 0.15])
plt.loglog(dt_fit, np.exp(intercept)*dt_fit**slope, 'r--',
           label=f'Fit: slope = {slope:.3f}')
plt.xlabel(r'$\Delta t$')
plt.ylabel('Mean absolute error')
plt.title('Strong Convergence of Euler-Maruyama for GBM')
plt.legend()
plt.grid(True, alpha=0.3, which='both')
plt.tight_layout()
plt.savefig('em_strong_convergence.png', dpi=120)
plt.show()

print(f"Fitted strong order: {slope:.3f} (expected ~0.5)")
```
Exercise 2.3.7. Simulate the Schnakenberg reaction-diffusion system in 1D using the method of lines: discretise space with finite differences and advance in time with explicit Euler. Use \(a = 0.1\), \(b = 0.9\), \(D_u = 0.01\), \(D_v = 1.0\), domain \([0, 10]\) with periodic boundary conditions, and random initial perturbations around the steady state \((1, 0.9)\). Produce a space-time plot showing the emergence of Turing patterns.
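A compact sketch of such a solver (the grid size, time step, end time, and seed are one reasonable choice; the full exercise additionally asks for a space-time plot, which can be built by storing snapshots of \(u\)):

```python
import numpy as np

a, b = 0.1, 0.9
Du, Dv = 0.01, 1.0
L_dom, Nx = 10.0, 128
dx = L_dom / Nx
dt = 0.2 * dx**2 / Dv          # well below the explicit stability limit dx^2/(2 Dv)
steps = int(50.0 / dt)

rng = np.random.default_rng(3)
u = 1.0 + 0.01 * rng.standard_normal(Nx)   # perturbed steady state u* = 1
v = 0.9 + 0.01 * rng.standard_normal(Nx)   # v* = 0.9

def lap(w):
    # Second-difference Laplacian with periodic boundaries
    return (np.roll(w, -1) - 2 * w + np.roll(w, 1)) / dx**2

for _ in range(steps):
    f = a - u + u**2 * v
    g = b - u**2 * v
    u = u + dt * (Du * lap(u) + f)
    v = v + dt * (Dv * lap(v) + g)

# A spatial standard deviation far above the initial 0.01 noise level signals a pattern
print(f"spatial std of u at t = 50: {u.std():.3f}")
```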
Exercise 2.3.8. Implement a Monte Carlo pricer for a European call option under GBM. Use parameters \(S_0 = 100\), \(K = 105\) (strike), \(r = 0.03\), \(\sigma = 0.2\), \(T = 1\). Compare your Monte Carlo price with the exact Black-Scholes formula. How many paths are needed to achieve a 95% confidence interval of width less than 0.10?
Exercise 2.3.9 (Agentic). Implement the adaptive Monte Carlo agent described in the Agent Lens section. The agent maintains a budget of \(B = 10{,}000\) path-simulations. At each step it observes the running mean, variance, and remaining budget, then chooses one of: (a) simulate 100 more paths at the current \(\Delta t\), (b) halve \(\Delta t\) and simulate 50 paths, (c) activate antithetic variates for all subsequent paths, or (d) stop. Use a simple epsilon-greedy Q-learning strategy. Test on GBM with known \(\mathbb{E}[S_T]\) over 500 episodes and plot the final estimation error over time. Compare with a static baseline that uses all 10000 paths at a single fixed \(\Delta t = 0.01\).
Exercise 2.3.10 (Agentic). Build a parameter-exploration agent for the Schnakenberg reaction-diffusion system. The agent's action space is the pair \((D_u, D_v)\) drawn from a grid. Its reward is \(+1\) if the simulation produces spatially non-uniform patterns (detected by thresholding the spatial variance of \(u\) at the final time) and \(0\) otherwise. Over 200 episodes, the agent should learn the region of \((D_u, D_v)\) space that produces Turing patterns. Visualise the learned reward map as a heatmap and overlay the analytically computed Turing boundary from Exercise 2.3.4.
Q1. The quadratic variation of a standard Brownian motion \(W_t\) over \([0,t]\) is:
(a) \(0\) (b) \(\sqrt{t}\) (c) \(t\) (d) \(t^2\)
Q2. In the Ito SDE \(dX = a\,dt + b\,dW\), the Ito correction term in the formula for \(d[f(X)]\) is:
(a) \(f'(X)\,b\,dW\) (b) \(\frac{1}{2}b^2 f''(X)\,dt\) (c) \(a\,f'(X)\,dt\) (d) \(b^2 f'(X)\,dt\)
Q3. The exact solution of geometric Brownian motion \(dS = \mu S\,dt + \sigma S\,dW\) is:
(a) \(S_0 e^{\mu t + \sigma W_t}\) (b) \(S_0 e^{(\mu - \sigma^2/2)t + \sigma W_t}\) (c) \(S_0(1 + \mu t + \sigma W_t)\) (d) \(S_0 e^{\mu t}\)
Q4. The strong order of convergence of the Euler-Maruyama method is:
(a) \(1\) (b) \(2\) (c) \(\frac{1}{2}\) (d) \(\frac{3}{2}\)
Q5. The weak order of convergence of the Euler-Maruyama method is:
(a) \(\frac{1}{2}\) (b) \(1\) (c) \(2\) (d) \(\frac{1}{4}\)
Q6. In a Monte Carlo simulation with \(M\) independent paths, the standard error of the mean estimator scales as:
(a) \(1/M\) (b) \(1/M^2\) (c) \(1/\sqrt{M}\) (d) \(\sqrt{M}\)
Q7. A Turing instability in a reaction-diffusion system occurs when:
(a) Both species have equal diffusion coefficients
(b) The homogeneous steady state is unstable without diffusion
(c) The homogeneous steady state is stable without diffusion but unstable to spatial perturbations with diffusion
(d) The diffusion coefficients are both zero
Q8. In the Black-Scholes PDE, the drift rate \(\mu\) of the underlying asset:
(a) Appears explicitly in the PDE (b) Is replaced by the risk-free rate \(r\) (c) Equals zero (d) Equals \(\sigma^2\)
Q9. An Ornstein-Uhlenbeck process is characterised by:
(a) Exponential growth (b) Mean reversion (c) Constant paths (d) Jump discontinuities
Q10. Which variance reduction technique pairs each random path with a path using the negated random variates?
(a) Importance sampling (b) Control variates (c) Antithetic variates (d) Stratified sampling
Q1: (c) Q2: (b) Q3: (b) Q4: (c) Q5: (b)
Q6: (c) Q7: (c) Q8: (b) Q9: (b) Q10: (c)
Objective: Build a Python program with two modes: (1) SDE mode, which simulates a user-chosen SDE (GBM, OU, or custom) via Euler-Maruyama, runs Monte Carlo estimation, and reports confidence intervals; (2) PDE mode, which simulates a 1D or 2D reaction-diffusion system and visualises pattern formation.
| Criterion | Excellent (90--100%) | Good (70--89%) | Needs Improvement (<70%) |
|---|---|---|---|
| Correctness | GBM Monte Carlo matches exact \(\mathbb{E}[S_T]\) within 2 standard errors; convergence slopes match theoretical orders \(\gamma = 0.5\), \(\beta = 1.0\) to within 0.1; Turing patterns emerge for analytically predicted parameters. | Monte Carlo estimates reasonable; convergence slopes within 0.2 of theoretical; patterns visible but parameters not fully justified. | Incorrect SDE discretisation or no visible pattern formation. |
| Variance Reduction | Antithetic variates implemented and shown to reduce standard error by at least 30% for GBM; control variates explored as a bonus. | One variance reduction technique attempted with measurable improvement. | No variance reduction implemented. |
| Visualisation | Publication-quality figures: sample paths, histograms with CI shading, log-log convergence plots with fitted lines, space-time heatmaps with colorbars and axis labels. | All required plots present; minor formatting issues. | Missing plots or misleading axis scales. |
| Reproducibility | Includes requirements.txt, docstrings, fixed random seeds, and a README with sample outputs. Results reproducible on a fresh environment. | Code runs but lacks documentation or seeds. | Missing dependencies or non-reproducible results. |