MODULE 0.2
| | |
|---|---|
| Difficulty | Beginner |
| Estimated Time | 3–4 hours |
| Prerequisites | Module 0.1 — Review of Calculus and Linear Algebra |
Nearly every law of nature is written as a differential equation. Newton's second law \(F = ma\) becomes \(m\ddot{x} = F(x, \dot{x}, t)\) the moment you express acceleration as the second derivative of position. The heat equation, Maxwell's equations, the Black-Scholes model, population dynamics, neural network training — all are differential equations in disguise.
Before you can solve any of these, you need a precise vocabulary: What is a differential equation? What does "order" mean? What distinguishes a linear equation from a nonlinear one? What does it mean for a function to be a "solution," and when can you be sure a solution exists and is unique?
This module provides that vocabulary. It also introduces the direction field — a visual tool that lets you see the qualitative behaviour of solutions without solving a single equation — and the Euler method, the simplest numerical algorithm for generating approximate solutions. Together, these analytical, visual, and computational perspectives form the lens through which the entire course will be viewed.
After completing this module you will be able to:

- define a differential equation and classify it by order, degree, type (ODE vs. PDE), and linearity;
- verify that a given function is a solution of an ODE, and state initial value problems correctly;
- apply the Picard-Lindelöf theorem to decide when an IVP has a unique local solution;
- sketch and interpret direction fields for first-order ODEs;
- implement Euler's method and explain why its global error is first-order in the step size.
A differential equation is an equation that relates an unknown function to one or more of its derivatives. Formally, an equation of the form
$$ F\!\bigl(x,\, y,\, y',\, y'',\, \ldots,\, y^{(n)}\bigr) = 0, $$

where \(y = y(x)\) is the unknown function, is a differential equation. The independent variable \(x\) typically represents time or a spatial coordinate.
The order of a differential equation is the order of the highest derivative that appears. For example, \(y'' + 3y' + 2y = 0\) is second-order because \(y''\) is the highest derivative.
The degree of a differential equation (when it can be defined) is the exponent of the highest-order derivative after the equation has been cleared of radicals and fractions involving derivatives. For example, \((y'')^3 + y' = x\) has order 2 and degree 3.
An ordinary differential equation (ODE) involves derivatives with respect to a single independent variable:
$$ \frac{dy}{dx} + y = e^x. $$

A partial differential equation (PDE) involves partial derivatives with respect to two or more independent variables:
$$ \frac{\partial u}{\partial t} = k\,\frac{\partial^2 u}{\partial x^2} \quad\text{(heat equation)}. $$

An ODE is linear if it can be written in the form
$$ a_n(x)\,y^{(n)} + a_{n-1}(x)\,y^{(n-1)} + \cdots + a_1(x)\,y' + a_0(x)\,y = g(x), $$

where the coefficients \(a_i(x)\) and the forcing function \(g(x)\) depend only on the independent variable \(x\), and the unknown \(y\) and its derivatives appear only to the first power, with no products like \(y \cdot y'\).
Any ODE that cannot be put in this form is nonlinear. Examples of nonlinearities: \(y\,y'\), \((y')^2\), \(\sin(y)\), \(e^y\).
A solution of a differential equation on an interval \(I\) is a function \(y = \phi(x)\) that, when substituted into the equation, produces an identity for all \(x \in I\).
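This substitution check is easy to automate. A minimal sketch (the candidate \(y = e^{-2x}\) and the ODE \(y' = -2y\) are our illustrative choices): approximate \(y'\) with a central difference and confirm the residual \(y' - f(x, y)\) is numerically zero on the interval.

```python
import numpy as np

# Candidate solution and ODE residual r(x) = y'(x) - f(x, y(x)).
# A true solution makes the residual (numerically) zero for all x in I.
phi = lambda x: np.exp(-2 * x)   # candidate: y = e^{-2x}
f = lambda x, y: -2 * y          # ODE: y' = -2y

x = np.linspace(0.0, 3.0, 301)
eps = 1e-6
dphi = (phi(x + eps) - phi(x - eps)) / (2 * eps)   # central-difference y'
residual = np.max(np.abs(dphi - f(x, phi(x))))
print(f"max |y' - f(x, y)| = {residual:.2e}")      # tiny: phi is a solution
```

Replacing `phi` with a non-solution (say `np.exp(-x)`) makes the residual order one, which is how such a checker distinguishes solutions from impostors.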
An initial value problem consists of a differential equation together with initial conditions that specify the value of the unknown function (and possibly its derivatives) at a particular point. For a first-order ODE:
$$ y' = f(x,y), \qquad y(x_0) = y_0. $$

For a second-order ODE:
$$ y'' = f(x,y,y'), \qquad y(x_0) = y_0, \quad y'(x_0) = y_1. $$

The number of initial conditions equals the order of the equation.
Given a first-order ODE \(y' = f(x,y)\), the direction field is the set of all short line segments drawn at points \((x,y)\) in the plane with slope \(f(x,y)\). Each segment indicates the tangent direction a solution curve must follow as it passes through that point. By plotting many such segments on a grid, one obtains a visual map of all possible solution trajectories.
Consider the initial value problem
$$ y' = f(x,y), \qquad y(x_0) = y_0. $$

If both of the following conditions hold in a rectangle \(R = \{(x,y) : |x-x_0| \le a,\; |y-y_0| \le b\}\):

1. \(f(x,y)\) is continuous on \(R\), and
2. \(\partial f/\partial y\) is continuous on \(R\),

then there exists an interval \(|x-x_0| < h\) (for some \(h > 0\)) on which the IVP has a unique solution \(y = \phi(x)\).
What this means in practice: If \(f\) and \(\partial f/\partial y\) are both continuous near the initial point, you are guaranteed exactly one solution curve through that point. Failure of either condition can lead to non-existence or non-uniqueness.
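These hypotheses can be probed numerically. The sketch below (the helper `dfdy_estimate` is our own construction) estimates \(\partial f/\partial y\) by central differences for a well-behaved right-hand side and for one whose derivative blows up as \(y \to 0\):

```python
import numpy as np

def dfdy_estimate(f, x, y, eps):
    """Central-difference estimate of the partial derivative of f w.r.t. y."""
    return (f(x, y + eps) - f(x, y - eps)) / (2 * eps)

# Well-behaved case: f = x^2 + y^2 has df/dy = 2y, bounded near (0, 0).
f_good = lambda x, y: x**2 + y**2
# Problem case: f = 3*y^(2/3) has df/dy = 2*y^(-1/3), unbounded as y -> 0.
# (abs() guards against fractional powers of negative floats.)
f_bad = lambda x, y: 3 * np.abs(y)**(2.0 / 3.0)

for y0 in [1e-1, 1e-3, 1e-5]:
    g = dfdy_estimate(f_good, 0.0, y0, eps=y0 / 10)
    b = dfdy_estimate(f_bad, 0.0, y0, eps=y0 / 10)
    print(f"y = {y0:.0e}:  df/dy for x^2+y^2 = {g:.4f},  for 3y^(2/3) = {b:.1f}")
```

As \(y \to 0\), the first estimate shrinks toward zero while the second grows without bound, flagging the point where uniqueness may fail.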
If \(y_1(x)\) and \(y_2(x)\) are solutions of the linear homogeneous equation
$$ a_n(x)\,y^{(n)} + \cdots + a_1(x)\,y' + a_0(x)\,y = 0, $$

then any linear combination \(y = c_1\,y_1(x) + c_2\,y_2(x)\) is also a solution for arbitrary constants \(c_1, c_2\).
This principle does not hold for nonlinear equations. For example, \(y_1 = 1\) and \(y_2 = x\) might each satisfy a nonlinear ODE, but \(y_1 + y_2 = 1 + x\) generally will not.
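A numerical check makes the contrast concrete. The sketch below verifies superposition for \(y'' + 4y = 0\) (solutions \(\cos 2x\) and \(\sin 2x\)) and shows that doubling the solution \(y = 1/(1-x)\) of the nonlinear equation \(y' = y^2\) destroys the solution property:

```python
import numpy as np

eps = 1e-4

# Linear homogeneous case: y'' + 4y = 0. y1, y2, and y1 + y2 all satisfy it.
d2 = lambda y, x: (y(x + eps) - 2 * y(x) + y(x - eps)) / eps**2  # numerical y''
ysum = lambda x: np.cos(2 * x) + np.sin(2 * x)
x = np.linspace(0.0, 2.0, 201)
res_lin = np.max(np.abs(d2(ysum, x) + 4 * ysum(x)))
print(f"linear:    max residual of y1 + y2 = {res_lin:.2e}")   # tiny

# Nonlinear case: y' = y^2 has solution y = 1/(1-x) on [0, 1).
# Doubling it gives w = 2/(1-x), which fails to satisfy the equation.
d1 = lambda y, x: (y(x + eps) - y(x - eps)) / (2 * eps)        # numerical y'
w = lambda x: 2.0 / (1.0 - x)
xs = np.linspace(0.0, 0.5, 51)
res_nonlin = np.max(np.abs(d1(w, xs) - w(xs)**2))
print(f"nonlinear: max residual of 2*y     = {res_nonlin:.2e}")  # large, not zero
```

The linear residual is at the level of finite-difference noise, while the scaled nonlinear "solution" misses the equation by a large margin.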
For the IVP \(y' = f(x,y)\), \(y(x_0) = y_0\), Euler's method generates approximations
$$ y_{n+1} = y_n + h\,f(x_n, y_n), \qquad x_{n+1} = x_n + h. $$

If \(f\) satisfies a Lipschitz condition in \(y\) and is continuous, the global error at any fixed point \(x^*\) satisfies
$$ |y(x^*) - y_N| \le C\,h $$

for a constant \(C\) depending on \(f\), the interval length, and the Lipschitz constant. That is, Euler's method is a first-order method: halving the step size halves the error (approximately).
Problem. Classify the equation \(y'' + 4y = \sin(x)\) and verify that \(y_p(x) = \frac{1}{3}\sin(x)\) is a particular solution.
Compute derivatives of \(y_p = \frac{1}{3}\sin(x)\):
$$ y_p' = \frac{1}{3}\cos(x), \qquad y_p'' = -\frac{1}{3}\sin(x). $$

Substitute into the left side:
$$ y_p'' + 4y_p = -\frac{1}{3}\sin(x) + 4\cdot\frac{1}{3}\sin(x) = \frac{3}{3}\sin(x) = \sin(x). $$

This equals the right side, so \(y_p = \frac{1}{3}\sin(x)\) is indeed a particular solution.
The associated homogeneous equation \(y'' + 4y = 0\) has characteristic equation \(r^2 + 4 = 0\), giving \(r = \pm 2i\). The homogeneous solution is \(y_h = c_1\cos(2x) + c_2\sin(2x)\). Therefore the general solution is:
$$ y = c_1\cos(2x) + c_2\sin(2x) + \frac{1}{3}\sin(x). $$

The general solution contains two arbitrary constants \(c_1, c_2\) because the equation is second-order. The homogeneous part represents free oscillations at the natural frequency \(\omega_n = 2\), while the particular solution represents forced oscillation at the driving frequency \(\omega = 1\). When the driving frequency equals the natural frequency, resonance occurs — a phenomenon explored in Level 1.
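Because the derivatives of this general solution are known in closed form, the claim is easy to check numerically for sample constants. A quick sketch (the constants chosen below are arbitrary):

```python
import numpy as np

# Verify that y = c1*cos(2x) + c2*sin(2x) + (1/3)*sin(x) satisfies
# y'' + 4y = sin(x), using the exact second derivative of the candidate.
def check(c1, c2):
    x = np.linspace(0.0, 2 * np.pi, 200)
    y = c1 * np.cos(2 * x) + c2 * np.sin(2 * x) + np.sin(x) / 3
    ypp = -4 * c1 * np.cos(2 * x) - 4 * c2 * np.sin(2 * x) - np.sin(x) / 3
    return np.max(np.abs(ypp + 4 * y - np.sin(x)))  # residual of the ODE

for c1, c2 in [(0, 0), (1, -2), (3.5, 0.25)]:
    print(f"c1={c1}, c2={c2}: max residual = {check(c1, c2):.2e}")
```

The residual is zero up to floating-point roundoff for every choice of constants, exactly as the superposition structure predicts.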
Problem. Consider the IVP \(y' = x^2 + y^2,\quad y(0) = 0\). (a) Does the Picard-Lindelöf theorem guarantee a unique local solution? (b) Describe the direction field qualitatively.
Here \(f(x,y) = x^2 + y^2\). This function is a polynomial in \(x\) and \(y\), so it is continuous everywhere. Its partial derivative with respect to \(y\) is
$$ \frac{\partial f}{\partial y} = 2y, $$

which is also continuous everywhere. Both hypotheses of the Picard-Lindelöf theorem are satisfied in any rectangle containing \((0,0)\).
Conclusion: There exists a unique solution in some neighbourhood of \(x=0\).
The slope at any point \((x,y)\) is \(f(x,y) = x^2 + y^2 \ge 0\). This means:

- every solution curve is non-decreasing, since slopes are never negative;
- the slope vanishes only at the origin, so solution curves are nearly flat near \((0,0)\);
- slopes grow rapidly with distance from the origin, so curves steepen as they move away from it.

This equation is a Riccati equation and cannot be solved in elementary terms. Its solution is related to Bessel functions and exhibits finite-time blowup: \(y(x)\) tends to \(+\infty\) at a finite value of \(x\). The direction field makes this visible: solution curves bend upward with ever-increasing steepness.
Using step size \(h = 0.1\) starting from \((x_0, y_0) = (0, 0)\):
| \(n\) | \(x_n\) | \(y_n\) | Slope \(f(x_n,y_n)\) |
|---|---|---|---|
| 0 | 0.0 | 0.0000 | 0.0000 |
| 1 | 0.1 | 0.0000 | 0.0100 |
| 2 | 0.2 | 0.0010 | 0.0400 |
| 3 | 0.3 | 0.0050 | 0.0900 |
| 4 | 0.4 | 0.0140 | 0.1602 |
| 5 | 0.5 | 0.0300 | 0.2509 |
Even this crude approximation shows the solution beginning to curve upward, consistent with the direction field analysis.
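The table above can be reproduced in a few lines; running this loop is a convenient way to check a hand computation:

```python
import numpy as np

# Reproduce the Euler table for y' = x^2 + y^2, y(0) = 0, h = 0.1.
f = lambda x, y: x**2 + y**2
h, x, y = 0.1, 0.0, 0.0
print(f"{'n':>2} {'x_n':>5} {'y_n':>8} {'slope':>8}")
for n in range(6):
    print(f"{n:>2} {x:>5.1f} {y:>8.4f} {f(x, y):>8.4f}")
    y += h * f(x, y)   # Euler update: y_{n+1} = y_n + h*f(x_n, y_n)
    x += h
```

The printed rows match the table to four decimal places.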
Problem. Show that the IVP \(y' = 3y^{2/3},\quad y(0) = 0\) has at least two solutions.
Here \(f(x,y) = 3y^{2/3}\) is continuous for all \(y\), but
$$ \frac{\partial f}{\partial y} = 2y^{-1/3}, $$

which is undefined (and unbounded) at \(y=0\). The Lipschitz condition fails at the initial point. The theorem does not guarantee uniqueness.
Solution 1: \(y(x) = 0\) for all \(x\). Check: \(y' = 0\) and \(3(0)^{2/3} = 0\). Valid.
Solution 2: Try \(y = (x - c)^3\) for \(x \ge c\), \(y=0\) for \(x < c\). With \(c=0\): let \(y = x^3\). Then \(y' = 3x^2\) and \(3y^{2/3} = 3(x^3)^{2/3} = 3x^2\). Valid for \(x \ge 0\).
In fact, for any \(c \ge 0\), the function
$$ y_c(x) = \begin{cases} 0 & \text{if } x \le c, \\ (x-c)^3 & \text{if } x > c \end{cases} $$

satisfies the IVP. This gives an infinite family of solutions, all passing through the origin with initial value 0.
When the Picard-Lindelöf hypotheses fail, uniqueness is not guaranteed, and physically this means the model is incomplete — additional information is needed to determine which trajectory the system actually follows. This is a critical consideration when modelling real systems.
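A quick numerical check confirms that every member of the family \(y_c\) satisfies the ODE, kink and all (a sketch; the sample values of \(c\) are arbitrary):

```python
import numpy as np

# Check that y_c(x) = 0 for x <= c and (x-c)^3 for x > c satisfies
# y' = 3*y^(2/3) for several values of c >= 0.
def y_c(x, c):
    return np.where(x <= c, 0.0, (x - c)**3)

x = np.linspace(-1.0, 3.0, 4001)
eps = 1e-6
for c in [0.0, 0.5, 2.0]:
    dy = (y_c(x + eps, c) - y_c(x - eps, c)) / (2 * eps)  # numerical y'
    rhs = 3 * np.abs(y_c(x, c))**(2.0 / 3.0)
    print(f"c = {c}: max |y' - 3y^(2/3)| = {np.max(np.abs(dy - rhs)):.2e}")
```

All residuals sit at finite-difference noise level: each curve obeys the ODE everywhere, including across the junction at \(x = c\), which is why the IVP cannot single one out.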
The following code blocks run in any standard Python 3 environment with NumPy and Matplotlib, or in a Pyodide (browser-based) environment. Copy each block and experiment by changing the ODE, the grid range, or the initial conditions.
This lab plots the direction field for a first-order ODE \(y' = f(x,y)\). Modify the function f to visualise different equations.
```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # For non-interactive environments
import matplotlib.pyplot as plt

def plot_direction_field(f, x_range, y_range, nx=20, ny=20, title="Direction Field"):
    """
    Plot the direction field of y' = f(x, y).

    Parameters
    ----------
    f : callable
        Function f(x, y) returning the slope y'.
    x_range : tuple
        (x_min, x_max) for the plot domain.
    y_range : tuple
        (y_min, y_max) for the plot range.
    nx, ny : int
        Number of grid points in each direction.
    """
    x = np.linspace(x_range[0], x_range[1], nx)
    y = np.linspace(y_range[0], y_range[1], ny)
    X, Y = np.meshgrid(x, y)

    # Compute slopes
    DY = f(X, Y)
    DX = np.ones_like(DY)

    # Normalise arrow lengths for uniform appearance
    N = np.sqrt(DX**2 + DY**2)
    N[N == 0] = 1  # Avoid division by zero
    DX_norm = DX / N
    DY_norm = DY / N

    fig, ax = plt.subplots(figsize=(8, 6))
    ax.quiver(X, Y, DX_norm, DY_norm, N,
              cmap='coolwarm', angles='xy', scale=30, width=0.003)
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    ax.set_title(title)
    ax.set_xlim(x_range)
    ax.set_ylim(y_range)
    ax.set_aspect('equal')
    ax.grid(True, alpha=0.3)
    plt.tight_layout()
    plt.savefig('direction_field.png', dpi=150)
    plt.show()
    print("Direction field saved to direction_field.png")

# === Example: y' = x^2 + y^2 ===
f = lambda x, y: x**2 + y**2
plot_direction_field(f, (-2, 2), (-2, 2), title=r"Direction field: $y' = x^2 + y^2$")

# === Try another equation: y' = -y + sin(x) ===
g = lambda x, y: -y + np.sin(x)
plot_direction_field(g, (-4, 4), (-3, 3), title=r"Direction field: $y' = -y + \sin(x)$")
```
Expected output: Two direction field plots. The first (\(y' = x^2 + y^2\)) shows all arrows pointing upward with slopes growing rapidly away from the origin. The second (\(y' = -y + \sin(x)\)) shows arrows converging toward a sinusoidal attractor, illustrating stable equilibrium behaviour.
We implement Euler's method, apply it to the IVP \(y' = -2y,\; y(0)=1\) (whose exact solution is \(y = e^{-2x}\)), and measure how the error depends on the step size \(h\).
```python
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

def euler_method(f, x0, y0, x_end, h):
    """
    Solve y' = f(x, y), y(x0) = y0 using Euler's method.
    Returns arrays of x-values and y-values.
    """
    n_steps = int(round((x_end - x0) / h))  # round to avoid float truncation
    x = np.zeros(n_steps + 1)
    y = np.zeros(n_steps + 1)
    x[0], y[0] = x0, y0
    for i in range(n_steps):
        y[i+1] = y[i] + h * f(x[i], y[i])
        x[i+1] = x[i] + h
    return x, y

# Define the ODE and exact solution
f = lambda x, y: -2 * y
exact = lambda x: np.exp(-2 * x)
x0, y0, x_end = 0.0, 1.0, 3.0

# --- Plot solutions for different step sizes ---
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 5))
x_exact = np.linspace(x0, x_end, 200)
ax1.plot(x_exact, exact(x_exact), 'k-', linewidth=2, label='Exact: $y = e^{-2x}$')

step_sizes = [0.5, 0.2, 0.1, 0.05]
errors_at_end = []
for h in step_sizes:
    x_euler, y_euler = euler_method(f, x0, y0, x_end, h)
    ax1.plot(x_euler, y_euler, 'o--', markersize=3, label=f'Euler h={h}')
    error = abs(y_euler[-1] - exact(x_euler[-1]))
    errors_at_end.append(error)
    print(f"h = {h:.3f}: y({x_end}) = {y_euler[-1]:.6f}, "
          f"exact = {exact(x_end):.6f}, error = {error:.6e}")

ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_title("Euler's Method: $y' = -2y$, $y(0) = 1$")
ax1.legend()
ax1.grid(True, alpha=0.3)

# --- Convergence plot ---
ax2.loglog(step_sizes, errors_at_end, 'bo-', linewidth=2, label='Measured error')
ax2.loglog(step_sizes, [s * errors_at_end[0] / step_sizes[0] for s in step_sizes],
           'r--', label='Slope 1 reference')
ax2.set_xlabel('Step size h')
ax2.set_ylabel('|Error at x=3|')
ax2.set_title('Error vs. Step Size (log-log)')
ax2.legend()
ax2.grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig('euler_convergence.png', dpi=150)
plt.show()
print("\nPlot saved to euler_convergence.png")
print("The log-log slope is approximately 1, confirming first-order convergence.")
```
Expected output: The left panel shows the exact exponential decay alongside Euler approximations that improve as \(h\) decreases. The right panel shows that error vs. step size follows a straight line with slope 1 on a log-log plot, confirming that Euler's method is first-order accurate.
A first-order ODE \(y' = f(x,y)\) can be viewed as a deterministic policy for an agent navigating a two-dimensional state space. The mapping onto reinforcement-learning concepts is remarkably direct:
| RL Concept | Differential-Equation Analogue |
|---|---|
| State | A point \((x, y)\) in the plane. Here \(x\) plays the role of time and \(y\) is the system's configuration. The full state is the pair \((x,y)\) because the ODE is first-order — no memory of past values is needed. |
| Action (Policy) | The slope \(f(x,y)\) assigned at each state. This is a deterministic policy \(\pi(x,y) = f(x,y)\): given any state, the policy prescribes exactly one action (the direction of the next infinitesimal step). The direction field is a visual rendering of the policy, with each arrow showing the action at that state. |
| Reward / Utility | In an IVP, the "reward" is implicitly defined by the initial condition and the physics encoded in \(f\). A solution curve that satisfies both the ODE (obeys the policy at every point) and the initial condition (starts at the prescribed state) is the optimal trajectory. Deviations from the exact policy (numerical error in Euler's method) incur a "penalty" measured as the global error. |
| Learning / Policy Update | The Picard iteration (the constructive proof behind the Picard-Lindelöf theorem) is a learning algorithm: starting from an initial guess \(y_0(x) = y_0\), the agent iteratively refines its trajectory via \( y_{n+1}(x) = y_0 + \int_{x_0}^{x} f\!\bigl(t, y_n(t)\bigr)\,dt \), converging to the true solution. Each iteration reduces the "error" (the Banach fixed-point contraction), exactly like a policy-improvement step. Euler's method is a cruder but computationally cheaper version of this iteration. |
Key insight: Non-uniqueness of solutions (when Picard-Lindelöf fails) corresponds to a state where the policy is ambiguous — multiple actions are equally valid. In RL terms, this is a state with multiple optimal actions, requiring additional information (a tie-breaking rule or richer state representation) to select one.
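The Picard iteration in the table above is directly runnable. A minimal numerical sketch for \(y' = y\), \(y(0) = 1\) (exact solution \(e^x\)), using a cumulative trapezoid rule for the integral; the grid size and sweep counts are illustrative choices:

```python
import numpy as np

def picard(f, x, y0, n_iter):
    """Run n_iter Picard sweeps: y_{k+1}(x) = y0 + integral of f(t, y_k(t)) from x[0] to x."""
    y = np.full_like(x, float(y0))
    for _ in range(n_iter):
        g = f(x, y)
        # Cumulative trapezoid rule: integral from x[0] up to each grid point
        cum = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(x))))
        y = y0 + cum
    return y

x = np.linspace(0.0, 1.0, 1001)
f = lambda x, y: y   # y' = y: the k-th iterate approximates the degree-k Taylor polynomial of e^x
errs = []
for k in [1, 3, 6, 12]:
    err = np.max(np.abs(picard(f, x, 1.0, k) - np.exp(x)))
    errs.append(err)
    print(f"after {k:2d} sweeps: max error = {err:.2e}")
```

Each sweep shrinks the error, mirroring the contraction in the Banach fixed-point argument; the residual floor is set by the trapezoid quadrature, not by the iteration itself.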
Higher-order ODEs: A second-order ODE \(y'' = g(x, y, y')\) requires the state to include both \(y\) and \(y'\). This is analogous to expanding the state representation to include velocity as well as position, so that the Markov property holds.
Exercise A1. Classify each equation by order, degree, type (ODE/PDE), and linearity:
(a) Order 3, degree 1, ODE, linear (coefficients are constants, \(y\) and derivatives appear to first power).
(b) Order 1, degree 2, ODE, nonlinear (\(y'\) is squared).
(c) Order 2, degree 1, PDE (two independent variables \(x,y\)), linear. This is Laplace's equation.
(d) Order 2, degree 1, ODE, nonlinear (contains the products \(y \cdot y''\) and \((y')^2\)).
Exercise A2. Verify that \(y(x) = c_1 e^x + c_2 e^{-x}\) is the general solution of \(y'' - y = 0\) for arbitrary constants \(c_1, c_2\). Then find the particular solution satisfying \(y(0) = 2\), \(y'(0) = 0\).
Compute \(y'' = c_1 e^x + c_2 e^{-x}\). Then \(y'' - y = c_1 e^x + c_2 e^{-x} - c_1 e^x - c_2 e^{-x} = 0\). Verified.
Apply initial conditions: \(y(0) = c_1 + c_2 = 2\) and \(y'(0) = c_1 - c_2 = 0\). Solving: \(c_1 = 1\), \(c_2 = 1\). The particular solution is \(y = e^x + e^{-x} = 2\cosh(x)\).
Exercise A3. For the IVP \(y' = y^2,\; y(0) = 1\), find the exact solution by separation of variables. At what value of \(x\) does the solution blow up (become infinite)?
Separate: \(\frac{dy}{y^2} = dx\). Integrate: \(-\frac{1}{y} = x + C\). Apply \(y(0) = 1\): \(-1 = C\). So \(y = \frac{1}{1-x}\).
The solution blows up at \(x = 1\). This finite-time blowup is a fundamental phenomenon in nonlinear ODEs.
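Numerics make the blowup vivid. A small sketch running Euler's method toward \(x = 1\) alongside the exact solution \(y = 1/(1-x)\) (step size chosen for illustration):

```python
import numpy as np

# Euler's method on y' = y^2, y(0) = 1. The exact solution y = 1/(1-x)
# blows up at x = 1; the numerical values explode as x approaches 1.
f = lambda x, y: y**2
h, x, y = 0.01, 0.0, 1.0
for _ in range(99):          # stop at x = 0.99, just short of the blowup
    y += h * f(x, y)
    x += h
print(f"x = {x:.2f}: Euler y = {y:.2f}, exact y = {1 / (1 - x):.2f}")
```

Note that Euler systematically lags the exact solution near the singularity (the curve is convex, so each tangent step undershoots); shrinking \(h\) sharpens the numerical spike toward the true blowup.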
Exercise A4. Does the Picard-Lindelöf theorem guarantee a unique solution for the IVP \(y' = \frac{x}{y},\; y(0) = 0\)? Justify your answer carefully.
Here \(f(x,y) = x/y\), which is undefined (and discontinuous) at \(y=0\). Since the initial condition is \(y(0) = 0\), the function \(f\) is not even continuous at the initial point. The first hypothesis of the Picard-Lindelöf theorem fails, so the theorem makes no guarantee — neither existence nor uniqueness is assured by this theorem. (In fact, if we interpret the ODE as \(y\,dy = x\,dx\), we get \(y^2 = x^2 + C\), and with \(y(0)=0\) we get \(y = \pm x\), so there are two solutions, confirming non-uniqueness.)
Exercise C1. Implement a direction-field plotter for \(y' = y - x^2 + 1\) on the domain \([-1, 4] \times [-1, 4]\). Overlay three Euler-method solution curves starting from \(y(0) = 0\), \(y(0) = 0.5\), and \(y(0) = 2\), all with step size \(h = 0.05\). Do the trajectories converge or diverge?
Exercise C2. For the IVP \(y' = -2y + e^{-x},\; y(0) = 1\), the exact solution is \(y = e^{-x} + 0\cdot e^{-2x} = e^{-x}\) (verify this). Run Euler's method with \(h = 0.1, 0.01, 0.001\) on \([0, 5]\). For each \(h\), compute the maximum absolute error over the interval and verify that the error scales linearly with \(h\).
```python
import numpy as np

def euler(f, x0, y0, x_end, h):
    n = int(round((x_end - x0) / h))  # round to avoid float truncation
    x = np.linspace(x0, x_end, n+1)
    y = np.zeros(n+1)
    y[0] = y0
    for i in range(n):
        y[i+1] = y[i] + h * f(x[i], y[i])
    return x, y

f = lambda x, y: -2*y + np.exp(-x)
exact = lambda x: np.exp(-x)

for h in [0.1, 0.01, 0.001]:
    x, y = euler(f, 0, 1, 5, h)
    max_err = np.max(np.abs(y - exact(x)))
    print(f"h = {h:.4f}: max error = {max_err:.6e}")
# Expected: errors decrease by a factor of ~10 each time (first-order method).
```
Exercise C3. Write a Python function that, given \(f(x,y)\) and a point \((x_0, y_0)\), checks the Picard-Lindelöf conditions numerically. Specifically, your function should: (a) evaluate \(f\) at and near the point to check continuity (using finite differences), (b) approximate \(\partial f / \partial y\) using a central difference, and (c) check whether the partial derivative appears bounded near the point. Test on \(f(x,y) = 3y^{2/3}\) at \((0,0)\) and on \(f(x,y) = x^2 + y^2\) at \((0,0)\).
Exercise C4. Implement the improved Euler method (Heun's method):
$$ k_1 = h\,f(x_n, y_n), \qquad k_2 = h\,f(x_n + h,\; y_n + k_1), \qquad y_{n+1} = y_n + \tfrac{1}{2}(k_1 + k_2). $$

Apply it to \(y' = -2y,\; y(0) = 1\) on \([0, 3]\). Compare the error with standard Euler for step sizes \(h = 0.5, 0.2, 0.1, 0.05\). Verify that the improved Euler method is second-order (error scales as \(h^2\)).
```python
import numpy as np

def improved_euler(f, x0, y0, x_end, h):
    n = int(round((x_end - x0) / h))  # round to avoid float truncation
    x = np.linspace(x0, x_end, n+1)
    y = np.zeros(n+1)
    y[0] = y0
    for i in range(n):
        k1 = h * f(x[i], y[i])
        k2 = h * f(x[i] + h, y[i] + k1)
        y[i+1] = y[i] + 0.5 * (k1 + k2)
    return x, y

f = lambda x, y: -2 * y
exact_end = np.exp(-6)  # y(3)

for h in [0.5, 0.2, 0.1, 0.05]:
    _, ye = improved_euler(f, 0, 1, 3, h)
    err = abs(ye[-1] - exact_end)
    print(f"h = {h:.2f}: error = {err:.6e}")
# Ratio of errors when h halves should be ~4 (second-order).
```
Exercise G1. (Policy comparison.) Consider the ODE \(y' = \sin(x) - y\). Treat the direction field as a policy map. Use Python to generate the direction field and overlay two solution trajectories: one starting at \((0, -2)\) and one at \((0, 3)\). Both converge to the same long-term behaviour. In RL terms, this corresponds to a policy with a single attracting fixed point (or cycle). Determine the asymptotic (long-time) behaviour analytically by guessing a particular solution of the form \(y_p = A\sin(x) + B\cos(x)\) and finding \(A\) and \(B\). Interpret: why does every initial condition lead to the same eventual trajectory?
Substitute \(y_p = A\sin(x) + B\cos(x)\) into \(y' + y = \sin(x)\):
$$ A\cos(x) - B\sin(x) + A\sin(x) + B\cos(x) = \sin(x). $$

Equating coefficients: \(\sin(x)\colon -B + A = 1\), \(\cos(x)\colon A + B = 0\). Solving: \(A = 1/2\), \(B = -1/2\).
So \(y_p = \frac{1}{2}\sin(x) - \frac{1}{2}\cos(x)\). The general solution is \(y = Ce^{-x} + \frac{1}{2}\sin(x) - \frac{1}{2}\cos(x)\).
As \(x \to \infty\), \(Ce^{-x} \to 0\) regardless of \(C\), so every trajectory converges to \(y_p\). In RL terms, the exponential decay is the agent "forgetting" its initial state; the attractor \(y_p\) is the unique long-run equilibrium policy.
Exercise G2. (Exploration and finite-time blowup.) An agent following the policy \(y' = 1 + y^2\) (with \(y(0)=0\)) will reach \(y = +\infty\) in finite time. (a) Solve the ODE exactly. (b) What is the blowup time? (c) Run Euler's method with \(h=0.01\). At what step does the numerical solution first exceed \(10^6\)? (d) In RL language, this is a policy that leads the agent "off a cliff." Design a modified policy \(\tilde{f}(x,y) = \min(1+y^2,\; M)\) with a reward-clipping parameter \(M\). How does the clipped trajectory differ? Plot both.
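A hedged starting sketch for parts (c) and (d) — the clip level \(M = 50\), the step size, and the stopping point are our illustrative choices, not part of the problem:

```python
import numpy as np

def euler(f, y0, x_end, h):
    """Forward Euler from x = 0; returns the array of y-values."""
    y, ys = float(y0), [float(y0)]
    for n in range(int(round(x_end / h))):
        y += h * f(n * h, y)
        ys.append(y)
    return np.array(ys)

f_raw = lambda x, y: 1 + y**2                 # original policy: blows up (y = tan x)
f_clip = lambda x, y: min(1 + y**2, 50.0)     # clipped policy with M = 50 (illustrative)

h = 0.001
raw = euler(f_raw, 0.0, 1.57, h)      # integrate to just short of the blowup time pi/2
clipped = euler(f_clip, 0.0, 1.57, h)
print(f"raw policy at x = 1.57:     y = {raw[-1]:.1f}")
print(f"clipped policy at x = 1.57: y = {clipped[-1]:.1f}")
# Once 1 + y^2 > M, the clipped trajectory grows at most linearly with slope M,
# so it stays finite where the raw policy runs off to infinity.
```

The two trajectories coincide until the clip activates; plotting both (and varying \(M\)) shows how the clip trades fidelity near the singularity for bounded behaviour.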
1. (b) — \(y^2\) makes it nonlinear
2. (c)
3. (c)
4. (b)
5. (b) — \(\partial f/\partial y = \frac{1}{3}y^{-2/3}\) is undefined at \(y=0\)
6. (b)
7. (b)
8. (b)
9. (b)
10. (d)
Build a Python script (or Jupyter notebook) that serves as an interactive exploration tool for first-order ODEs. Your tool should:

- plot the direction field of a user-supplied \(f(x,y)\);
- solve the ODE with your own Euler-method implementation and overlay solution curves for several initial conditions;
- when an exact solution is provided, compute and display the error and verify the convergence order for multiple step sizes;
- check the Picard-Lindelöf conditions numerically at a given initial point, warning when \(\partial f/\partial y\) appears unbounded or undefined.
Test your tool on the following three ODEs:
| Criterion | Points | Expectations |
|---|---|---|
| Direction field plotting | 6 | Clear, correctly oriented arrows; appropriate grid density; axes labelled. |
| Euler method implementation | 5 | Correct forward-Euler iteration; handles variable step counts; no off-by-one errors. |
| Multiple initial conditions overlay | 4 | At least 5 curves per ODE, each with a distinct colour and legend entry. |
| Error analysis (when exact solution known) | 5 | Error computed and displayed; convergence order verified for at least 3 step sizes. |
| Picard-Lindelöf condition checker | 5 | Computes \(\partial f/\partial y\) numerically; identifies and warns when the partial derivative is unbounded or undefined; tested on at least one passing and one failing case. |
| Code quality and documentation | 3 | Functions have docstrings; clear variable names; well-structured code. |
| Testing on all three specified ODEs | 2 | All three test cases run with output shown. |
You now have the vocabulary and conceptual framework for differential equations. You know what an ODE is, how to classify it, what constitutes a solution, and when solutions are guaranteed to exist and be unique. You have seen direction fields as visual policy maps and Euler's method as a first computational tool.
In the next module you will begin solving first-order ODEs systematically — separable equations, integrating factors, exact equations, and more. The tools from Module 0.1 (calculus and linear algebra) and the concepts from this module (classification, IVPs, existence-uniqueness) will be used throughout.
Next Module: 1.1 — First-Order Ordinary Differential Equations →