Module 2.2
Analytic solutions to PDEs exist only for idealised geometries and simple boundary conditions. In practice, every engineering simulation, weather forecast, and computational-physics experiment depends on numerical PDE solvers. Finite difference methods are the most transparent entry point: they replace derivatives with algebraic differences on a grid, transforming a continuous problem into a large but finite linear system. However, a naive discretisation can produce results that diverge, oscillate, or bear no resemblance to the true solution. The CFL condition and von Neumann stability analysis give you precise criteria for when a scheme will work and when it will fail. The Lax equivalence theorem elevates this to a fundamental principle: for a consistent scheme, stability is both necessary and sufficient for convergence. Beyond finite differences, the finite element method (FEM) and spectral methods offer superior accuracy on complex geometries and smooth problems, respectively. This module equips you to choose, implement, and validate numerical PDE solvers with confidence.
After completing this module you will be able to:
- discretise time-dependent PDEs with explicit (FTCS) and implicit (BTCS, Crank-Nicolson) finite difference schemes;
- carry out von Neumann stability analysis and apply the CFL condition;
- state the Lax equivalence theorem and use it to relate consistency, stability, and convergence;
- derive weak formulations as the starting point for the finite element method;
- weigh finite difference, finite element, and spectral methods against the demands of a given problem.
Given a uniform spatial grid \(x_j = j\Delta x\) and time levels \(t^n = n\Delta t\), the centred second difference approximates the second spatial derivative:
$$ u_{xx}(x_j,t^n) \approx \frac{U_{j+1}^n - 2U_j^n + U_{j-1}^n}{(\Delta x)^2} + \mathcal{O}((\Delta x)^2), $$where \(U_j^n \approx u(x_j,t^n)\) denotes the discrete approximation.
The explicit finite difference scheme for \(u_t = k\,u_{xx}\) uses a forward difference in time and centred difference in space:
$$ U_j^{n+1} = U_j^n + r\bigl(U_{j+1}^n - 2U_j^n + U_{j-1}^n\bigr), \qquad r = \frac{k\,\Delta t}{(\Delta x)^2}. $$The parameter \(r\) is called the mesh ratio or Fourier number.
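In code, one explicit update is a single vectorised line. A minimal sketch (the function name is illustrative), assuming homogeneous Dirichlet boundary conditions:

```python
import numpy as np

def ftcs_step(U, r):
    """One FTCS update U^n -> U^{n+1} for u_t = k u_xx.

    U holds the grid values U_j^n including the two boundary entries,
    which are left untouched (homogeneous Dirichlet conditions);
    r = k*dt/dx**2 is the mesh ratio.
    """
    U_new = U.copy()
    U_new[1:-1] = U[1:-1] + r * (U[2:] - 2 * U[1:-1] + U[:-2])
    return U_new

# One step on u(x,0) = sin(pi x): the sine is an exact eigenmode of the
# discrete Laplacian, so each interior value is scaled by the factor
# g = 1 - 4 r sin^2(pi dx / 2).
x = np.linspace(0.0, 1.0, 11)
U1 = ftcs_step(np.sin(np.pi * x), r=0.4)
```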
The Crank-Nicolson scheme averages the spatial operator at time levels \(n\) and \(n+1\):
$$ U_j^{n+1} - U_j^n = \frac{r}{2}\bigl[\bigl(U_{j+1}^{n+1}-2U_j^{n+1}+U_{j-1}^{n+1}\bigr) + \bigl(U_{j+1}^n-2U_j^n+U_{j-1}^n\bigr)\bigr]. $$This is an implicit scheme (requiring a tridiagonal solve at each time step) with truncation error \(\mathcal{O}((\Delta t)^2 + (\Delta x)^2)\) and is unconditionally stable.
A finite difference scheme is consistent with the PDE if the local truncation error \(\tau_j^n \to 0\) as \(\Delta x, \Delta t \to 0\). Formally, substituting the exact solution into the difference equation and Taylor-expanding must recover the PDE plus vanishing remainder.
A scheme is stable if there exists a constant \(C\) (independent of \(\Delta x, \Delta t\)) such that the discrete solution remains bounded: \(\|U^n\| \leq C\,\|U^0\|\) for all \(n\Delta t \leq T\). In the von Neumann analysis, one substitutes the Fourier mode \(U_j^n = g^n e^{i\xi j\Delta x}\) and requires \(|g(\xi)|\leq 1\) for all wave numbers \(\xi\).
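The von Neumann criterion can also be checked empirically: apply one step of the scheme to a discrete Fourier mode on a periodic grid and measure the amplification. A minimal sketch for the FTCS scheme analysed later in this module (the function name is illustrative):

```python
import numpy as np

def ftcs_gain(r, m, N=64):
    """Empirical von Neumann amplification factor for FTCS.

    Applies one FTCS step to the Fourier mode U_j^0 = exp(i*xi*j*dx) on
    a periodic grid of N points and returns g = U_j^1 / U_j^0. Choosing
    xi*dx = 2*pi*m/N makes the mode fit the periodic grid exactly.
    """
    idx = np.arange(N)
    U0 = np.exp(1j * 2 * np.pi * m * idx / N)
    U1 = U0 + r * (np.roll(U0, -1) - 2 * U0 + np.roll(U0, 1))
    return (U1[0] / U0[0]).real   # g is real for FTCS

# worst mode xi*dx = pi (m = N/2): g = 1 - 4r
print(ftcs_gain(0.4, 32))   # -0.6: |g| < 1, stable
print(ftcs_gain(0.6, 32))   # -1.4: |g| > 1, unstable
```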
Given the boundary-value problem \(-u''=f\) on \((0,1)\) with \(u(0)=u(1)=0\), the weak form seeks \(u\in H_0^1(0,1)\) such that
$$ \int_0^1 u'(x)\,v'(x)\,dx = \int_0^1 f(x)\,v(x)\,dx \qquad \forall\, v\in H_0^1(0,1). $$This is obtained by multiplying the PDE by a test function \(v\) and integrating by parts.
For a well-posed linear initial-value problem and a consistent finite difference scheme, stability is necessary and sufficient for convergence.
That is, if the scheme is consistent with the PDE and stable, then
$$ \max_j |U_j^n - u(x_j,t^n)| \to 0 \quad\text{as}\quad \Delta x, \Delta t \to 0 $$for any fixed final time \(T = n\Delta t\). Conversely, if the scheme converges, it must be stable.
Proof sketch. Write the global error \(e^n = U^n - \hat{U}^n\) (where \(\hat{U}^n\) is the exact solution restricted to the grid). The error satisfies the same difference operator, driven by the truncation error: \(e^{n+1}=G\,e^n + \Delta t\,\tau^n\). Iterating, \(e^n = G^n e^0 + \Delta t\sum_{m=0}^{n-1}G^{n-1-m}\tau^m\), and \(e^0 = 0\) since the scheme starts from exact initial data. Stability gives \(\|G^m\|\leq C\) uniformly; consistency gives \(\max_m\|\tau^m\|\to 0\). Hence \(\|e^n\| \leq C\,n\Delta t\,\max_m\|\tau^m\| \leq CT\,\max_m\|\tau^m\| \to 0\).
The Courant-Friedrichs-Lewy (CFL) condition states that a necessary condition for convergence of a finite difference scheme for a hyperbolic PDE is that the numerical domain of dependence must contain the analytical domain of dependence.
For the 1D wave equation \(u_{tt}=c^2 u_{xx}\) with the standard leap-frog scheme, this requires
$$ \nu = \frac{c\,\Delta t}{\Delta x} \leq 1, $$where \(\nu\) is the Courant number. For the explicit heat equation scheme (FTCS), the analogous stability condition is \(r = k\Delta t/(\Delta x)^2 \leq 1/2\).
For stable explicit schemes, refining the grid improves accuracy, but \(\Delta x\) cannot be reduced in isolation: since \(r = k\Delta t/(\Delta x)^2\), halving \(\Delta x\) with \(\Delta t\) fixed quadruples \(r\) and can violate the stability condition \(r\leq 1/2\). (Reducing \(\Delta t\) alone always moves the heat-equation scheme deeper into the stable region; it is merely expensive.) For the wave equation, the condition \(\nu = c\Delta t/\Delta x \leq 1\) likewise means that refining \(\Delta x\) without proportionally refining \(\Delta t\) can cause instability. The key lesson: \(\Delta t\) and \(\Delta x\) must be refined together according to the stability constraint.
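To make the parabolic constraint concrete, take the illustrative values \(k=1\) and \(\Delta x = 10^{-2}\):
$$ \Delta t \;\le\; \frac{(\Delta x)^2}{2k} \;=\; 5\times 10^{-5}, $$so reaching \(T=1\) takes at least \(20{,}000\) time steps, and each halving of \(\Delta x\) multiplies the required number of steps by four.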
Implicit schemes (BTCS, Crank-Nicolson) are unconditionally stable, but each time step requires solving a linear system. For problems in multiple spatial dimensions, this system can be large and expensive. Explicit schemes, despite their time-step restriction, can be faster per step and easier to parallelise. The optimal choice depends on the problem, hardware, and accuracy requirements.
The Lax equivalence theorem makes stability indispensable: a consistent but unstable scheme cannot converge. It also warns that consistency must be verified in the specific refinement limit under consideration. The DuFort-Frankel scheme is a classic cautionary example: it is unconditionally stable, but it is consistent with the heat equation only if \(\Delta t/\Delta x \to 0\); if \(\Delta t/\Delta x\) tends to a nonzero constant as both go to zero, the scheme instead approximates a different PDE (the telegraph equation).
While both produce algebraic systems, the derivation and properties differ fundamentally. Finite elements start from the weak formulation and provide a variational framework guaranteeing best-approximation properties (Cea's lemma). Finite differences start from pointwise Taylor expansions. FEM naturally handles complex geometries and non-uniform meshes; FD methods are simplest on regular grids.
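For the 1D model problem \(-u''=f\) with hat functions on a uniform mesh, the whole FEM pipeline fits in a few lines. A minimal sketch (the function name is illustrative; the load vector uses trapezoidal quadrature rather than exact integration):

```python
import numpy as np

def fem_p1_dirichlet(f, n):
    """P1 (hat-function) FEM for -u'' = f on (0,1), u(0) = u(1) = 0.

    Uniform mesh with n elements. Integrating products of hat-function
    derivatives gives the stiffness matrix (1/h) * tridiag(-1, 2, -1);
    the load vector uses the trapezoidal rule, b_j ~= h * f(x_j).
    """
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    K = (2.0 * np.eye(n - 1)
         - np.eye(n - 1, k=1)
         - np.eye(n - 1, k=-1)) / h
    b = h * f(x[1:-1])
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K, b)
    return x, u

# compare with the exact solution u(x) = sin(pi x) / pi^2
x, u = fem_p1_dirichlet(lambda s: np.sin(np.pi * s), n=64)
err = np.max(np.abs(u - np.sin(np.pi * x) / np.pi ** 2))
```

A dense solve is used here for brevity; a production code would store `K` in sparse or banded form.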
Spectral methods achieve exponential (spectral) convergence only for infinitely smooth solutions. If the solution has limited regularity (e.g., due to non-smooth boundary data or corners in the domain), the convergence rate degrades to algebraic, similar to or even worse than high-order finite elements. Spectral methods also struggle with complex geometries unless combined with domain decomposition.
Problem. Determine the stability condition for the FTCS scheme applied to \(u_t = k\,u_{xx}\).
Step 1 — Substitute a Fourier mode. Let \(U_j^n = g^n e^{i\xi j\Delta x}\). The FTCS scheme gives
$$ g^{n+1}e^{i\xi j\Delta x} = g^n e^{i\xi j\Delta x} + r\bigl(g^n e^{i\xi(j+1)\Delta x}-2g^n e^{i\xi j\Delta x}+g^n e^{i\xi(j-1)\Delta x}\bigr). $$Step 2 — Divide by \(g^n e^{i\xi j\Delta x}\):
$$ g = 1 + r\bigl(e^{i\xi\Delta x}-2+e^{-i\xi\Delta x}\bigr) = 1 + 2r\bigl(\cos(\xi\Delta x)-1\bigr). $$Step 3 — Simplify. Using \(\cos\theta - 1 = -2\sin^2(\theta/2)\):
$$ g = 1 - 4r\sin^2\!\left(\frac{\xi\Delta x}{2}\right). $$Step 4 — Stability requires \(|g|\leq 1\). The worst case is when \(\sin^2(\xi\Delta x/2)=1\), giving \(g=1-4r\). We need \(-1\leq 1-4r\leq 1\), which yields \(0\leq r\leq 1/2\).
Conclusion: The FTCS scheme is stable if and only if
$$ r = \frac{k\,\Delta t}{(\Delta x)^2} \leq \frac{1}{2}. $$Problem. Write the Crank-Nicolson scheme for \(u_t=k\,u_{xx}\) as a tridiagonal linear system.
Step 1 — The scheme. At interior grid points \(j=1,\ldots,M-1\):
$$ -\frac{r}{2}U_{j-1}^{n+1} + (1+r)U_j^{n+1} - \frac{r}{2}U_{j+1}^{n+1} = \frac{r}{2}U_{j-1}^n + (1-r)U_j^n + \frac{r}{2}U_{j+1}^n. $$Step 2 — Matrix form. Let \(\mathbf{U}^n = (U_1^n, U_2^n, \ldots, U_{M-1}^n)^T\). Define the tridiagonal matrices
$$ A = \begin{pmatrix} 1+r & -r/2 & & \\ -r/2 & 1+r & -r/2 & \\ & \ddots & \ddots & \ddots \\ & & -r/2 & 1+r \end{pmatrix}, \quad B = \begin{pmatrix} 1-r & r/2 & & \\ r/2 & 1-r & r/2 & \\ & \ddots & \ddots & \ddots \\ & & r/2 & 1-r \end{pmatrix}. $$Step 3 — The system at each time step is:
$$ A\,\mathbf{U}^{n+1} = B\,\mathbf{U}^n + \mathbf{b}^n, $$where \(\mathbf{b}^n\) incorporates the boundary conditions. Since \(A\) is tridiagonal and diagonally dominant (for \(r>0\)), the Thomas algorithm solves this in \(\mathcal{O}(M)\) operations.
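A sketch of that step in code, pairing a hand-rolled Thomas solve with the Crank-Nicolson right-hand side (function names are illustrative; homogeneous Dirichlet boundaries are assumed, so \(\mathbf{b}^n = 0\)):

```python
import numpy as np

def thomas_solve(sub, diag, sup, d):
    """Solve a tridiagonal system in O(M) operations.

    sub: sub-diagonal (length m-1), diag: diagonal (length m),
    sup: super-diagonal (length m-1), d: right-hand side (length m).
    Forward elimination + back substitution; assumes the matrix is
    diagonally dominant, as A is for r > 0.
    """
    m = len(diag)
    cp, dp = np.zeros(m - 1), np.zeros(m)
    cp[0] = sup[0] / diag[0]
    dp[0] = d[0] / diag[0]
    for i in range(1, m):
        denom = diag[i] - sub[i - 1] * cp[i - 1]
        if i < m - 1:
            cp[i] = sup[i] / denom
        dp[i] = (d[i] - sub[i - 1] * dp[i - 1]) / denom
    x = np.zeros(m)
    x[-1] = dp[-1]
    for i in range(m - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_step(U, r):
    """One step A U^{n+1} = B U^n for the vector of interior values U."""
    m = len(U)
    rhs = (1 - r) * U                  # B U^n, assembled slice by slice
    rhs[1:] = rhs[1:] + (r / 2) * U[:-1]
    rhs[:-1] = rhs[:-1] + (r / 2) * U[1:]
    return thomas_solve(np.full(m - 1, -r / 2),
                        np.full(m, 1 + r),
                        np.full(m - 1, -r / 2), rhs)
```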
Truncation error: \(\mathcal{O}((\Delta t)^2+(\Delta x)^2)\) since Crank-Nicolson averages the forward and backward Euler methods.
This lab implements the FTCS scheme for the heat equation \(u_t = k\,u_{xx}\) on \([0,1]\) with \(u(x,0)=\sin(\pi x)\) and homogeneous Dirichlet BCs. The exact solution is \(u(x,t)=e^{-k\pi^2 t}\sin(\pi x)\). We animate the numerical vs. exact solution and demonstrate instability when the CFL condition is violated.
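One possible implementation of this lab (a sketch; grid size, step count, and frame stride are illustrative choices):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

k = 1.0
M = 50
dx = 1.0 / M
r = 0.4                      # mesh ratio; set r = 0.6 to watch the blow-up
dt = r * dx**2 / k
x = np.linspace(0.0, 1.0, M + 1)

U = np.sin(np.pi * x)        # initial condition; Dirichlet boundaries stay 0
frames = [U.copy()]
for n in range(1, 401):
    U[1:-1] = U[1:-1] + r * (U[2:] - 2 * U[1:-1] + U[:-2])   # FTCS update
    if n % 5 == 0:
        frames.append(U.copy())

fig, ax = plt.subplots()
num, = ax.plot(x, frames[0], 'bo', markersize=3, label='FTCS')
exact_line, = ax.plot(x, frames[0], 'r--', label='exact')
ax.set_ylim(-0.1, 1.1)
ax.set_xlabel('x'); ax.set_ylabel('u'); ax.legend()

def animate(i):
    t = 5 * i * dt           # frame i shows the solution at time 5*i*dt
    num.set_ydata(frames[i])
    exact_line.set_ydata(np.exp(-k * np.pi**2 * t) * np.sin(np.pi * x))
    return num, exact_line

ani = FuncAnimation(fig, animate, frames=len(frames), interval=40)
plt.show()
```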
Output: An animated plot comparing the FTCS numerical solution (blue dots) to the exact solution (red dashed). Change r = 0.6 to observe the explosive instability when the CFL condition is violated.
This lab implements the Crank-Nicolson scheme and compares it side-by-side with the explicit FTCS scheme. Note that Crank-Nicolson is unconditionally stable and second-order in both time and space.
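A compact sketch of the underlying comparison, printing errors rather than drawing the figure (assumes SciPy's banded solver is available; \(r=0.8\) deliberately violates the FTCS limit):

```python
import numpy as np
from scipy.linalg import solve_banded

k, M, r = 1.0, 50, 0.8          # r = 0.8 > 1/2: FTCS is unstable here
dx = 1.0 / M
dt = r * dx**2 / k
x = np.linspace(0.0, 1.0, M + 1)

U_ftcs = np.sin(np.pi * x)
U_cn = np.sin(np.pi * x)

# banded storage of A = tridiag(-r/2, 1+r, -r/2) for the interior unknowns
m = M - 1
ab = np.zeros((3, m))
ab[0, 1:] = -r / 2              # super-diagonal
ab[1, :] = 1 + r                # diagonal
ab[2, :-1] = -r / 2             # sub-diagonal

n_steps = 200
for _ in range(n_steps):
    # explicit FTCS step
    U_ftcs[1:-1] = U_ftcs[1:-1] + r * (U_ftcs[2:] - 2 * U_ftcs[1:-1] + U_ftcs[:-2])
    # Crank-Nicolson: assemble rhs = B U^n, then solve A U^{n+1} = rhs
    rhs = (1 - r) * U_cn[1:-1] + (r / 2) * (U_cn[2:] + U_cn[:-2])
    U_cn[1:-1] = solve_banded((1, 1), ab, rhs)

t = n_steps * dt
exact = np.exp(-k * np.pi**2 * t) * np.sin(np.pi * x)
print('FTCS max error:', np.max(np.abs(U_ftcs - exact)))   # blows up
print('CN   max error:', np.max(np.abs(U_cn - exact)))     # stays small
```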
Output: A 2x2 grid of plots. Top row (r=0.4): both schemes agree with the exact solution. Bottom row (r=0.8): FTCS is unstable while Crank-Nicolson remains accurate.
We frame the choice of numerical scheme parameters as a reinforcement-learning problem. The agent must balance accuracy against computational cost by tuning the discretisation.
The agent observes: the current grid resolution \(\Delta x\), the time step \(\Delta t\), an estimate of the local error, and the remaining computational budget.
The agent chooses from: refining or coarsening the spatial grid, increasing or decreasing \(\Delta t\), and switching between explicit (FTCS) and implicit (BTCS, Crank-Nicolson) schemes.
Through Q-learning or policy gradient, the agent discovers that explicit schemes are efficient when the CFL condition allows large time steps (small diffusivity, coarse spatial grid), while implicit schemes are preferable when accuracy demands fine spatial grids that would force prohibitively small \(\Delta t\) for explicit methods. The agent learns to start coarse for a quick estimate, then adaptively refine only in regions where the error indicator is large, mimicking the logic of adaptive mesh refinement (AMR) strategies used in production PDE solvers.
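The tabular Q-learning loop behind such an agent is short. A heavily simplified skeleton under stated assumptions: the state bins, action set, and the mocked error/cost numbers in `step_env` are all illustrative stand-ins, not the full formulation described above:

```python
import numpy as np

# Hypothetical encodings for illustration only:
#   states  : 4 coarse bins of a local error indicator
#   actions : 0 = FTCS, 1 = BTCS, 2 = Crank-Nicolson
n_states, n_actions = 4, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.2
rng = np.random.default_rng(0)

def step_env(state, action):
    """Toy stand-in for the solver environment (mocked error and cost).

    Returns (reward, next_state) with reward = -(error + lambda*cost);
    a real implementation would advance the PDE solver one step here.
    """
    cost = (1.0, 3.0, 3.0)[action]                 # implicit steps pay for a solve
    error = (0.5, 0.2, 0.05)[action] * (state + 1)
    return -(error + 0.1 * cost), int(rng.integers(n_states))

for episode in range(500):
    state = int(rng.integers(n_states))            # random start each episode
    for _ in range(20):
        # epsilon-greedy action selection
        if rng.random() < eps:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(Q[state]))
        reward, next_state = step_env(state, action)
        # tabular Q-learning update
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                     - Q[state, action])
        state = next_state

policy = Q.argmax(axis=1)                          # learned scheme choice per state
```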
Exercise 2.2.1. Perform von Neumann stability analysis on the backward-Euler (BTCS) scheme:
$$ U_j^{n+1} = U_j^n + r\bigl(U_{j+1}^{n+1}-2U_j^{n+1}+U_{j-1}^{n+1}\bigr). $$Show that it is unconditionally stable (\(|g|\leq 1\) for all \(r>0\)).
Substitute \(U_j^n = g^n e^{i\xi j\Delta x}\). The scheme gives
$$ g = 1 + r\,g\bigl(e^{i\xi\Delta x}-2+e^{-i\xi\Delta x}\bigr) = 1 - 4rg\sin^2(\xi\Delta x/2). $$Solving for \(g\):
$$ g = \frac{1}{1+4r\sin^2(\xi\Delta x/2)}. $$Since \(r>0\) and \(\sin^2\geq 0\), the denominator is \(\geq 1\), so \(0 < g \leq 1\). The scheme is unconditionally stable.
Exercise 2.2.2. Derive the local truncation error of the Crank-Nicolson scheme. Show that it is \(\mathcal{O}((\Delta t)^2+(\Delta x)^2)\).
Taylor-expand \(u(x,t+\Delta t)\) and \(u(x\pm\Delta x,t)\), \(u(x\pm\Delta x,t+\Delta t)\) about \((x,t+\Delta t/2)\). The time discretisation is the trapezoidal rule (midpoint average of forward and backward Euler), giving \(\mathcal{O}((\Delta t)^2)\). The spatial discretisation is centred differences at both time levels, each \(\mathcal{O}((\Delta x)^2)\). Thus the combined truncation error is \(\tau = \mathcal{O}((\Delta t)^2+(\Delta x)^2)\).
Exercise 2.2.3. For the explicit leap-frog scheme for the wave equation \(u_{tt}=c^2 u_{xx}\):
$$ U_j^{n+1} = 2U_j^n - U_j^{n-1} + \nu^2\bigl(U_{j+1}^n - 2U_j^n + U_{j-1}^n\bigr), $$show via von Neumann analysis that stability requires \(\nu = c\Delta t/\Delta x \leq 1\).
Substitute \(U_j^n = g^n e^{i\xi j\Delta x}\):
$$ g = 2 - 4\nu^2\sin^2(\xi\Delta x/2) - g^{-1}. $$Multiply by \(g\): \(g^2 - [2-4\nu^2\sin^2(\xi\Delta x/2)]g + 1 = 0\).
Let \(\alpha = 2-4\nu^2\sin^2(\xi\Delta x/2)\). The roots satisfy \(g_1 g_2 = 1\). For stability we need \(|g_1|=|g_2|=1\), which requires the discriminant \(\alpha^2-4\leq 0\), i.e., \(|\alpha|\leq 2\). The binding constraint is \(\alpha\geq -2\):
$$ 2-4\nu^2\sin^2(\xi\Delta x/2) \geq -2 \implies \nu^2\sin^2(\xi\Delta x/2)\leq 1. $$The worst case is \(\sin^2=1\), giving \(\nu\leq 1\).
Exercise 2.2.4. Write down the weak form of the boundary-value problem \(-u''+u=f\) on \((0,1)\) with \(u(0)=u(1)=0\). Identify the bilinear form \(a(u,v)\) and the linear form \(\ell(v)\).
Multiply by \(v\in H_0^1\) and integrate by parts:
$$ \int_0^1 u'v'\,dx + \int_0^1 uv\,dx = \int_0^1 fv\,dx. $$So \(a(u,v) = \int_0^1(u'v'+uv)\,dx\) and \(\ell(v)=\int_0^1 fv\,dx\). The bilinear form is symmetric, continuous, and coercive on \(H_0^1(0,1)\), so the Lax-Milgram theorem guarantees a unique weak solution.
Exercise 2.2.5. Implement the FTCS scheme for the heat equation. Run convergence tests: fix \(r=0.4\) and halve \(\Delta x\) repeatedly. Plot the max error at \(t=0.1\) vs. \(\Delta x\) on a log-log scale and verify second-order spatial convergence.
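A sketch of the convergence test this exercise asks for (the helper name is illustrative):

```python
import numpy as np

def ftcs_error(M, k=1.0, r=0.4, T=0.1):
    """Max-norm FTCS error at time ~T on a grid with M intervals.

    The mesh ratio r is held fixed, so dt = r*dx^2/k shrinks
    quadratically each time dx is halved.
    """
    dx = 1.0 / M
    dt = r * dx**2 / k
    n_steps = int(round(T / dt))
    x = np.linspace(0.0, 1.0, M + 1)
    U = np.sin(np.pi * x)
    for _ in range(n_steps):
        U[1:-1] = U[1:-1] + r * (U[2:] - 2 * U[1:-1] + U[:-2])
    exact = np.exp(-k * np.pi**2 * n_steps * dt) * np.sin(np.pi * x)
    return np.max(np.abs(U - exact))

errors = [ftcs_error(M) for M in (10, 20, 40, 80)]
rates = [np.log2(e1 / e2) for e1, e2 in zip(errors, errors[1:])]
print(rates)   # observed orders; expected close to 2
```

Plotting `errors` against `1/M` on a log-log scale should then give a line of slope 2.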
Exercise 2.2.6. Implement the explicit leap-frog scheme for the wave equation \(u_{tt}=c^2u_{xx}\) on \([0,1]\) with \(c=1\), \(u(x,0)=\sin(2\pi x)\), \(u_t(x,0)=0\), and Dirichlet BCs. Animate the standing wave. Verify that the scheme blows up when \(\nu>1\).
```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

c, L = 1.0, 1.0
M = 100
dx = L / M
x = np.linspace(0, L, M + 1)
nu = 0.9  # Courant number (try 1.1 for instability)
dt = nu * dx / c
N_steps = 500

# Initial conditions: u(x,0) = sin(2 pi x), u_t(x,0) = 0
U_prev = np.sin(2 * np.pi * x)
U_curr = U_prev.copy()

# First time step (u_t(x,0) = 0 requires a special formula):
# U^1_j = U^0_j + 0.5 * nu^2 * (U^0_{j+1} - 2 U^0_j + U^0_{j-1})
U_first = np.zeros_like(x)
for j in range(1, M):
    U_first[j] = U_curr[j] + 0.5 * nu**2 * (U_curr[j+1] - 2*U_curr[j] + U_curr[j-1])
U_prev = U_curr.copy()
U_curr = U_first.copy()

frames = [U_curr.copy()]
for n in range(2, N_steps):
    U_next = np.zeros_like(x)
    for j in range(1, M):
        U_next[j] = 2*U_curr[j] - U_prev[j] + nu**2*(U_curr[j+1] - 2*U_curr[j] + U_curr[j-1])
    U_prev = U_curr.copy()
    U_curr = U_next.copy()
    if n % 3 == 0:
        frames.append(U_curr.copy())

fig, ax = plt.subplots()
line, = ax.plot(x, frames[0])
ax.set_ylim(-1.5, 1.5)
ax.set_xlabel('x'); ax.set_ylabel('u')
ax.set_title(f'Wave equation (nu={nu})')

def animate(i):
    line.set_ydata(frames[min(i, len(frames) - 1)])
    return line,

ani = FuncAnimation(fig, animate, frames=len(frames), interval=30)
plt.show()
```
Exercise 2.2.7. Implement a 1D finite element solver using piecewise-linear hat functions for \(-u''=\sin(\pi x)\) on \([0,1]\) with \(u(0)=u(1)=0\). Compare to the exact solution \(u(x)=\sin(\pi x)/\pi^2\) and measure convergence as the number of elements increases.
Exercise 2.2.8. Compare the explicit FTCS, BTCS, and Crank-Nicolson schemes for the heat equation with initial condition \(u(x,0) = \chi_{[0.25,0.75]}(x)\) (indicator function). Plot each solution at \(t=0.01\) and comment on oscillations near the discontinuity.
Exercise 2.2.9 (Agentic). Build an adaptive time-stepping agent for the explicit heat equation solver. At each time step, the agent estimates the local error by comparing solutions at \(\Delta t\) and \(\Delta t/2\), then adjusts \(\Delta t\) (staying within the CFL limit). Track the total number of time steps vs. a fixed-step simulation achieving the same final accuracy.
Exercise 2.2.10 (Agentic). Implement a reinforcement-learning agent (Q-learning with discretised state/action spaces) that selects among FTCS, BTCS, and Crank-Nicolson at each time step based on the current solution smoothness and remaining budget. The reward is \(-(\text{error}+\lambda\cdot\text{cost})\). Train on 500 episodes with random initial conditions and report the learned policy.
Q1. The FTCS scheme for the heat equation is stable if the mesh ratio \(r = k\Delta t/(\Delta x)^2\) satisfies:
(a) \(r \leq 1\) (b) \(r \leq 1/2\) (c) \(r \leq 1/4\) (d) No restriction
Q2. The Crank-Nicolson scheme is:
(a) Explicit, first-order (b) Implicit, first-order (c) Explicit, second-order (d) Implicit, second-order
Q3. The Lax equivalence theorem states that for a consistent scheme applied to a well-posed linear problem, convergence is equivalent to:
(a) Accuracy (b) Stability (c) Efficiency (d) Robustness
Q4. In von Neumann stability analysis, the amplification factor \(g\) for the FTCS scheme is:
(a) \(g = 1+4r\sin^2(\xi\Delta x/2)\) (b) \(g = 1-4r\sin^2(\xi\Delta x/2)\) (c) \(g = e^{-r}\) (d) \(g = 1/(1+4r)\)
Q5. The CFL condition for the wave equation \(u_{tt}=c^2 u_{xx}\) with the leap-frog scheme requires:
(a) \(c\Delta t/\Delta x \leq 1/2\) (b) \(c\Delta t/\Delta x \leq 1\) (c) \(c^2\Delta t/(\Delta x)^2 \leq 1\) (d) No restriction
Q6. The Thomas algorithm solves a tridiagonal system of size \(n\) in:
(a) \(\mathcal{O}(n^3)\) (b) \(\mathcal{O}(n^2)\) (c) \(\mathcal{O}(n\log n)\) (d) \(\mathcal{O}(n)\)
Q7. In the finite element method, the weak form is obtained by:
(a) Discretising derivatives with finite differences (b) Multiplying the PDE by a test function and integrating by parts (c) Expanding the solution in Fourier modes (d) Applying the Laplace transform
Q8. Spectral methods achieve exponential convergence when the solution is:
(a) Piecewise constant (b) Continuous but not differentiable (c) Infinitely smooth (d) Discontinuous
Q9. A finite difference scheme is consistent if the local truncation error:
(a) Is exactly zero (b) Tends to zero as the mesh is refined (c) Is bounded by 1 (d) Equals the global error
Q10. For the backward-Euler (BTCS) scheme, the amplification factor is:
(a) \(g=1+4r\sin^2(\xi\Delta x/2)\) (b) \(g=1/(1+4r\sin^2(\xi\Delta x/2))\) (c) \(g=1-4r\sin^2(\xi\Delta x/2)\) (d) \(g=e^{-4r}\)
Q1: (b) Q2: (d) Q3: (b) Q4: (b) Q5: (b)
Q6: (d) Q7: (b) Q8: (c) Q9: (b) Q10: (b)
Objective: Implement three numerical schemes (FTCS, BTCS, Crank-Nicolson) for the 1D heat equation and build a benchmarking harness that compares them across a suite of test problems with varying smoothness, diffusivity, and grid resolution.
| Criterion | Excellent (90--100%) | Good (70--89%) | Needs Improvement (<70%) |
|---|---|---|---|
| Correctness | All three schemes converge at the theoretical rate; stability boundaries match the analysis; adaptive stepping maintains accuracy. | Two schemes correct; minor issues with convergence rates or stability. | Incorrect implementations or wrong convergence orders. |
| Efficiency | Vectorised inner loops; Thomas algorithm used for implicit solves; benchmark runs in under 60 seconds on a laptop for all test cases. | Runs correctly but uses slow Python loops; benchmark takes several minutes. | Prohibitively slow or memory-intensive. |
| Robustness | Handles edge cases (zero diffusivity, very fine/coarse grids, discontinuous initial conditions) with informative warnings. | Works on standard inputs; may crash on edge cases. | Crashes on valid inputs. |
| Reproducibility | Complete requirements.txt, docstrings for every function, a Jupyter notebook with all figures, and a summary table. |
Code runs but documentation is sparse. | Cannot reproduce results without significant effort. |