MODULE 0.2

Introduction to Differential Equations

Difficulty: Beginner
Estimated Time: 3–4 hours
Prerequisites: Module 0.1 — Review of Calculus and Linear Algebra

Why This Matters

Nearly every law of nature is written as a differential equation. Newton's second law \(F = ma\) becomes \(m\ddot{x} = F(x, \dot{x}, t)\) the moment you express acceleration as the second derivative of position. The heat equation, Maxwell's equations, the Black-Scholes model, population dynamics, neural network training — all are differential equations in disguise.

Before you can solve any of these, you need a precise vocabulary: What is a differential equation? What does "order" mean? What distinguishes a linear equation from a nonlinear one? What does it mean for a function to be a "solution," and when can you be sure a solution exists and is unique?

This module provides that vocabulary. It also introduces the direction field — a visual tool that lets you see the qualitative behaviour of solutions without solving a single equation — and the Euler method, the simplest numerical algorithm for generating approximate solutions. Together, these analytical, visual, and computational perspectives form the lens through which the entire course will be viewed.

Learning Objectives

After completing this module you will be able to:

  1. Define what a differential equation is and distinguish it from an algebraic equation by identifying the presence of derivatives of an unknown function.
  2. Classify a given differential equation by its order, degree, and type (ODE vs. PDE, linear vs. nonlinear).
  3. Distinguish among general solutions, particular solutions, and singular solutions, and explain the role of arbitrary constants.
  4. Formulate an initial value problem (IVP) by pairing a differential equation with appropriate initial conditions.
  5. Verify that a given function is a solution to a differential equation by substituting it into the equation and confirming the identity.
  6. Sketch a direction field for a first-order ODE \(y' = f(x,y)\) by computing slopes at grid points, and use it to predict qualitative solution behaviour.
  7. State the Picard-Lindelöf (existence and uniqueness) theorem and identify its hypotheses on the function \(f(x,y)\) and its partial derivative \(\partial f / \partial y\).
  8. Determine whether a given IVP satisfies the hypotheses of the Picard-Lindelöf theorem and predict whether solutions exist and are unique in a neighbourhood of the initial point.
  9. Implement a direction-field plotter and Euler's method in Python using NumPy, and interpret the numerical output.
  10. Explain the connection between direction fields and agent policies, where the DE prescribes an action (slope) at each state \((x,y)\).

Core Concepts

Definitions

Definition 0.2.1 — Differential Equation

A differential equation is an equation that relates an unknown function to one or more of its derivatives. Formally, an equation of the form

$$ F\!\bigl(x,\, y,\, y',\, y'',\, \ldots,\, y^{(n)}\bigr) = 0 $$

where \(y = y(x)\) is the unknown function, is a differential equation. The independent variable \(x\) typically represents time or a spatial coordinate.

Definition 0.2.2 — Order and Degree

The order of a differential equation is the order of the highest derivative that appears. For example, \(y'' + 3y' + 2y = 0\) is second-order because \(y''\) is the highest derivative.

The degree of a differential equation (when it can be defined) is the exponent of the highest-order derivative after the equation has been cleared of radicals and fractions involving derivatives. For example, \((y'')^3 + y' = x\) has order 2 and degree 3.

Definition 0.2.3 — ODE vs. PDE

An ordinary differential equation (ODE) involves derivatives with respect to a single independent variable:

$$ \frac{dy}{dx} + y = e^x. $$

A partial differential equation (PDE) involves partial derivatives with respect to two or more independent variables:

$$ \frac{\partial u}{\partial t} = k\,\frac{\partial^2 u}{\partial x^2} \quad\text{(heat equation)}. $$

Definition 0.2.4 — Linear vs. Nonlinear

An ODE is linear if it can be written in the form

$$ a_n(x)\,y^{(n)} + a_{n-1}(x)\,y^{(n-1)} + \cdots + a_1(x)\,y' + a_0(x)\,y = g(x), $$

where the coefficients \(a_i(x)\) and the forcing function \(g(x)\) depend only on the independent variable \(x\), and the unknown \(y\) and its derivatives appear only to the first power, with no products like \(y \cdot y'\).

Any ODE that cannot be put in this form is nonlinear. Examples of nonlinearities: \(y\,y'\), \((y')^2\), \(\sin(y)\), \(e^y\).

Definition 0.2.5 — General, Particular, and Singular Solutions

A solution of a differential equation on an interval \(I\) is a function \(y = \phi(x)\) that, when substituted into the equation, produces an identity for all \(x \in I\).

  • General solution: A family of solutions containing \(n\) arbitrary constants (where \(n\) is the order of the ODE). For example, \(y = Ce^{-x}\) is the general solution of \(y' + y = 0\).
  • Particular solution: A specific member of the general solution family obtained by fixing the constants using initial or boundary conditions. For example, if \(y(0)=3\), then \(C=3\) and \(y=3e^{-x}\).
  • Singular solution: A solution that cannot be obtained from the general solution for any choice of the constants. For example, \(y=0\) can be a singular solution of \((y')^2 = 4y\).
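Verification of a candidate solution is mechanical enough to automate. As a small numerical sketch (not one of this module's labs), the following checks that the particular solution \(y = 3e^{-x}\) of \(y' + y = 0\) really produces an identity, using a finite-difference estimate of \(y'\):

```python
import numpy as np

# Numerical spot-check that y = 3 e^{-x} solves y' + y = 0:
# estimate y' by finite differences on a grid and confirm the
# residual y' + y is near zero everywhere.
x = np.linspace(0.0, 2.0, 201)
y = 3.0 * np.exp(-x)

dy = np.gradient(y, x, edge_order=2)   # second-order finite differences
residual = dy + y
max_residual = np.max(np.abs(residual))
print(max_residual)   # small; shrinks as the grid is refined
```

The residual is bounded only by the finite-difference error, so it shrinks quadratically as the grid is refined; substituting the derivatives by hand, as in the worked examples below, gives an exact check.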

Definition 0.2.6 — Initial Value Problem (IVP)

An initial value problem consists of a differential equation together with initial conditions that specify the value of the unknown function (and possibly its derivatives) at a particular point. For a first-order ODE:

$$ y' = f(x,y), \qquad y(x_0) = y_0. $$

For a second-order ODE:

$$ y'' = f(x,y,y'), \qquad y(x_0) = y_0, \quad y'(x_0) = y_1. $$

The number of initial conditions equals the order of the equation.

Definition 0.2.7 — Direction Field (Slope Field)

Given a first-order ODE \(y' = f(x,y)\), the direction field is the set of all short line segments drawn at points \((x,y)\) in the plane with slope \(f(x,y)\). Each segment indicates the tangent direction a solution curve must follow as it passes through that point. By plotting many such segments on a grid, one obtains a visual map of all possible solution trajectories.

Theorems and Key Results

Theorem 0.2.1 — Picard-Lindelöf (Existence and Uniqueness)

Consider the initial value problem

$$ y' = f(x,y), \qquad y(x_0) = y_0. $$

If both of the following conditions hold in a rectangle \(R = \{(x,y) : |x-x_0| \le a,\; |y-y_0| \le b\}\):

  1. \(f(x,y)\) is continuous on \(R\), and
  2. \(\dfrac{\partial f}{\partial y}(x,y)\) exists and is continuous on \(R\) (i.e., \(f\) satisfies a Lipschitz condition in \(y\)),

then there exists an interval \(|x-x_0| < h\) (for some \(h > 0\)) on which the IVP has a unique solution \(y = \phi(x)\).

What this means in practice: If \(f\) and \(\partial f/\partial y\) are both continuous near the initial point, you are guaranteed exactly one solution curve through that point. Failure of either condition can lead to non-existence or non-uniqueness.
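The proof of the theorem is constructive: it builds the solution by Picard iteration, \(y_{n+1}(x) = y_0 + \int_{x_0}^{x} f(t, y_n(t))\,dt\). A minimal numerical sketch of that iteration for \(y' = y\), \(y(0) = 1\) (exact solution \(e^x\)), with the integral approximated by the trapezoidal rule:

```python
import numpy as np

# Picard iteration for y' = y, y(0) = 1 on [0, 1]; each iterate is
# y_{n+1}(x) = 1 + integral_0^x y_n(t) dt, approximated with a
# cumulative trapezoidal rule on a uniform grid.
x = np.linspace(0.0, 1.0, 401)
h = x[1] - x[0]

def picard_step(y):
    # cumulative trapezoid: integral from 0 up to each grid point
    integral = np.concatenate(([0.0], np.cumsum(0.5 * h * (y[1:] + y[:-1]))))
    return 1.0 + integral

y = np.ones_like(x)   # initial guess: the constant function y_0(x) = 1
for n in range(8):
    y = picard_step(y)
    err = np.max(np.abs(y - np.exp(x)))
    print(f"iteration {n + 1}: max error vs e^x = {err:.2e}")
```

The first few iterates are the Taylor partial sums \(1\), \(1 + x\), \(1 + x + x^2/2\), and so on; the printed error contracts at each step, down to the quadrature error of the grid.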

Theorem 0.2.2 — Superposition Principle for Linear Homogeneous ODEs

If \(y_1(x)\) and \(y_2(x)\) are solutions of the linear homogeneous equation

$$ a_n(x)\,y^{(n)} + \cdots + a_1(x)\,y' + a_0(x)\,y = 0, $$

then any linear combination \(y = c_1\,y_1(x) + c_2\,y_2(x)\) is also a solution for arbitrary constants \(c_1, c_2\).

This principle does not hold for nonlinear equations. For example, \(y_1 = 1\) and \(y_2 = x\) might each satisfy a nonlinear ODE, but \(y_1 + y_2 = 1 + x\) generally will not.

Theorem 0.2.3 — Convergence of Euler's Method

For the IVP \(y' = f(x,y)\), \(y(x_0) = y_0\), Euler's method generates approximations

$$ y_{n+1} = y_n + h\,f(x_n, y_n), \qquad x_{n+1} = x_n + h. $$

If \(f\) satisfies a Lipschitz condition in \(y\) and is continuous, the global error at any fixed point \(x^*\) satisfies

$$ |y(x^*) - y_N| \le C\,h $$

for a constant \(C\) depending on \(f\), the interval length, and the Lipschitz constant. That is, Euler's method is a first-order method: halving the step size halves the error (approximately).

Common Failure Modes and Misconceptions

Confusing "solving an equation" with "checking a solution." To verify that \(y = e^{2x}\) solves \(y' - 2y = 0\), compute \(y' = 2e^{2x}\) and substitute: \(2e^{2x} - 2e^{2x} = 0\). This is verification, not derivation. Many students skip this critical sanity check.
Equating "order" with "degree." The equation \((y'')^3 + y = 0\) is second-order (because the highest derivative is \(y''\)) but third-degree (because \(y''\) is raised to the power 3). These are distinct classification axes.
Assuming every DE has a closed-form solution. Most differential equations cannot be solved in terms of elementary functions. For example, \(y' = e^{y^2}\) separates into \(\int e^{-y^2}\,dy = x + C\), and \(e^{-y^2}\) has no elementary antiderivative, so the solution has no elementary closed form. Numerical and qualitative methods (direction fields, phase portraits) are not "backup plans" — they are often the primary tools.
Forgetting the constant of integration yields a family. The general solution of \(y' = 2x\) is \(y = x^2 + C\), not \(y = x^2\). The single function \(y=x^2\) is only the particular solution with \(C=0\). Omitting the constant means you have discarded infinitely many valid solutions.
Misapplying the existence-uniqueness theorem. The IVP \(y' = \sqrt{y},\; y(0)=0\) has \(f(x,y) = \sqrt{y}\) and \(\partial f / \partial y = \frac{1}{2\sqrt{y}}\), which is undefined at \(y=0\). The Picard-Lindelöf theorem does not guarantee uniqueness here — and indeed both \(y=0\) and \(y = \frac{x^2}{4}\) (for \(x \ge 0\)) satisfy the IVP. Failing to check the hypotheses leads students to assume uniqueness when it fails.
Treating direction field arrows as solution curves. Each small segment in a direction field shows the local slope at one point. A solution curve is a smooth curve that is tangent to these segments everywhere. Students sometimes connect segments end-to-end without maintaining tangency, producing jagged non-solutions.
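The non-uniqueness warning above can be spot-checked numerically. A small sketch confirming that both \(y = 0\) and \(y = x^2/4\) satisfy \(y' = \sqrt{y}\) with \(y(0) = 0\) on \(x \ge 0\):

```python
import numpy as np

# Both candidates pass through (0, 0) and satisfy y' = sqrt(y):
# compare a finite-difference derivative against sqrt(y) on a grid.
x = np.linspace(0.0, 2.0, 201)

y1 = np.zeros_like(x)   # candidate 1: y = 0
y2 = x**2 / 4           # candidate 2: y = x^2/4

res1 = np.max(np.abs(np.gradient(y1, x, edge_order=2) - np.sqrt(y1)))
res2 = np.max(np.abs(np.gradient(y2, x, edge_order=2) - np.sqrt(y2)))
print(res1, res2)   # both residuals vanish (up to rounding)
```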

Worked Examples

Example 1: Classification and Verification

Problem. Classify the equation \(y'' + 4y = \sin(x)\) and verify that \(y_p(x) = \frac{1}{3}\sin(x)\) is a particular solution.

Step 1 — Classification
  • Type: ODE (one independent variable, \(x\)).
  • Order: 2 (highest derivative is \(y''\)).
  • Degree: 1 (\(y''\) appears to the first power).
  • Linearity: Linear. The equation has the form \(a_2 y'' + a_0 y = g(x)\) with \(a_2=1\), \(a_0=4\), \(g(x)=\sin(x)\), and the unknown \(y\) and its derivatives appear only to the first power with no mutual products.
Step 2 — Verification

Compute derivatives of \(y_p = \frac{1}{3}\sin(x)\):

$$ y_p' = \frac{1}{3}\cos(x), \qquad y_p'' = -\frac{1}{3}\sin(x). $$

Substitute into the left side:

$$ y_p'' + 4y_p = -\frac{1}{3}\sin(x) + 4\cdot\frac{1}{3}\sin(x) = \frac{3}{3}\sin(x) = \sin(x). $$

This equals the right side, so \(y_p = \frac{1}{3}\sin(x)\) is indeed a particular solution.

Step 3 — General Solution

The associated homogeneous equation \(y'' + 4y = 0\) has characteristic equation \(r^2 + 4 = 0\), giving \(r = \pm 2i\). The homogeneous solution is \(y_h = c_1\cos(2x) + c_2\sin(2x)\). Therefore the general solution is:

$$ y = c_1\cos(2x) + c_2\sin(2x) + \frac{1}{3}\sin(x). $$

Interpretation

The general solution contains two arbitrary constants \(c_1, c_2\) because the equation is second-order. The homogeneous part represents free oscillations at the natural frequency \(\omega_n = 2\), while the particular solution represents forced oscillation at the driving frequency \(\omega = 1\). When the driving frequency equals the natural frequency, resonance occurs — a phenomenon explored in Level 1.
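The verification in Step 2 extends to the full general solution: the residual must vanish for every choice of \(c_1, c_2\). A quick numerical sketch with arbitrarily chosen constants:

```python
import numpy as np

# Substitute y = c1 cos(2x) + c2 sin(2x) + sin(x)/3 into y'' + 4y,
# with the derivatives written out by hand, and confirm the result
# equals sin(x). The identity holds for any constants c1, c2.
c1, c2 = 1.7, -0.4   # arbitrary choices
x = np.linspace(0.0, 10.0, 500)

y   = c1 * np.cos(2 * x) + c2 * np.sin(2 * x) + np.sin(x) / 3
ypp = -4 * c1 * np.cos(2 * x) - 4 * c2 * np.sin(2 * x) - np.sin(x) / 3

max_residual = np.max(np.abs(ypp + 4 * y - np.sin(x)))
print(max_residual)   # zero up to floating-point rounding
```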

Example 2: Existence, Uniqueness, and Direction Field Analysis

Problem. Consider the IVP \(y' = x^2 + y^2,\quad y(0) = 0\). (a) Does the Picard-Lindelöf theorem guarantee a unique local solution? (b) Describe the direction field qualitatively.

Step 1 — Check Picard-Lindelöf Hypotheses

Here \(f(x,y) = x^2 + y^2\). This function is a polynomial in \(x\) and \(y\), so it is continuous everywhere. Its partial derivative with respect to \(y\) is

$$ \frac{\partial f}{\partial y} = 2y, $$

which is also continuous everywhere. Both hypotheses of the Picard-Lindelöf theorem are satisfied in any rectangle containing \((0,0)\).

Conclusion: There exists a unique solution in some neighbourhood of \(x=0\).

Step 2 — Direction Field Analysis

The slope at any point \((x,y)\) is \(f(x,y) = x^2 + y^2 \ge 0\). This means:

  • All slopes are non-negative. Solution curves never decrease.
  • At the origin \((0,0)\), the slope is 0 (horizontal tangent).
  • Along the \(y\)-axis (\(x=0\)), the slope is \(y^2\), which is zero at the origin and grows quadratically.
  • As \(|x|\) or \(|y|\) increases, slopes become very steep, suggesting rapid growth.

This equation is a Riccati equation and cannot be solved in elementary terms. Its solution is related to Bessel functions and exhibits finite-time blowup: the solution \(y(x)\) goes to \(+\infty\) at a finite value of \(x\). The direction field makes this visible — solution curves curve upward with accelerating steepness.

Step 3 — Euler's Method Approximation

Using step size \(h = 0.1\) starting from \((x_0, y_0) = (0, 0)\):

  \(n\)    \(x_n\)    \(y_n\)      Slope \(f(x_n, y_n)\)
   0       0.0       0.0000       0.0000
   1       0.1       0.0000       0.0100
   2       0.2       0.0010       0.0400
   3       0.3       0.0050       0.0900
   4       0.4       0.0140       0.1602
   5       0.5       0.0300       0.2509

Even this crude approximation shows the solution beginning to curve upward, consistent with the direction field analysis.
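The table above can be reproduced in a few lines; increasing the number of steps lets you watch the upward curving continue:

```python
# Reproduce the Euler table for y' = x^2 + y^2, y(0) = 0 with h = 0.1.
f = lambda x, y: x**2 + y**2
h = 0.1
x, y = 0.0, 0.0
rows = [(0, x, y, f(x, y))]
for n in range(1, 6):
    y = y + h * f(x, y)   # the step uses the slope at the previous point
    x = x + h
    rows.append((n, x, y, f(x, y)))

for n, xn, yn, slope in rows:
    print(f"{n}  {xn:.1f}  {yn:.4f}  {slope:.4f}")
```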

Example 3: Non-Uniqueness When Picard-Lindelöf Fails

Problem. Show that the IVP \(y' = 3y^{2/3},\quad y(0) = 0\) has at least two solutions.

Step 1 — Check the Hypotheses

Here \(f(x,y) = 3y^{2/3}\) is continuous for all \(y\), but

$$ \frac{\partial f}{\partial y} = 2y^{-1/3}, $$

which is undefined (and unbounded) at \(y=0\). The Lipschitz condition fails at the initial point. The theorem does not guarantee uniqueness.

Step 2 — Find Two Solutions

Solution 1: \(y(x) = 0\) for all \(x\). Check: \(y' = 0\) and \(3(0)^{2/3} = 0\). Valid.

Solution 2: Try \(y = (x - c)^3\) for \(x \ge c\), \(y=0\) for \(x < c\). With \(c=0\): let \(y = x^3\). Then \(y' = 3x^2\) and \(3y^{2/3} = 3(x^3)^{2/3} = 3x^2\). Valid for \(x \ge 0\).

Step 3 — Infinitely Many Solutions

In fact, for any \(c \ge 0\), the function

$$ y_c(x) = \begin{cases} 0 & \text{if } x \le c, \\ (x-c)^3 & \text{if } x > c \end{cases} $$

satisfies the IVP. This gives an infinite family of solutions, all passing through the origin with initial value 0.

Interpretation

When the Picard-Lindelöf hypotheses fail, uniqueness is not guaranteed, and physically this means the model is incomplete — additional information is needed to determine which trajectory the system actually follows. This is a critical consideration when modelling real systems.
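As a numerical sketch, each member of the family \(y_c\) can be checked against the ODE on a grid, comparing a finite-difference derivative with \(3y^{2/3}\), including across the junction at \(x = c\):

```python
import numpy as np

# Verify several members of the family y_c against y' = 3 y^(2/3):
# for each c, the finite-difference derivative of y_c should match
# 3 y_c^(2/3) everywhere, including where the two pieces join.
x = np.linspace(-1.0, 3.0, 801)

for c in [0.0, 0.5, 1.0, 2.0]:
    y = np.where(x > c, (x - c)**3, 0.0)
    residual = np.max(np.abs(np.gradient(y, x, edge_order=2) - 3.0 * np.cbrt(y)**2))
    print(f"c = {c}: max residual = {residual:.2e}")
```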

Interactive Code Lab

The following code blocks run in any standard Python 3 environment with NumPy and Matplotlib, or in a Pyodide (browser-based) environment. Copy each block and experiment by changing the ODE, the grid range, or the initial conditions.

Lab 1: Direction Field Plotter

This lab plots the direction field for a first-order ODE \(y' = f(x,y)\). Modify the function f to visualise different equations.

import numpy as np
import matplotlib
matplotlib.use('Agg')  # For non-interactive environments
import matplotlib.pyplot as plt

def plot_direction_field(f, x_range, y_range, nx=20, ny=20, title="Direction Field"):
    """
    Plot the direction field of y' = f(x, y).

    Parameters
    ----------
    f : callable
        Function f(x, y) returning the slope y'.
    x_range : tuple
        (x_min, x_max) for the plot domain.
    y_range : tuple
        (y_min, y_max) for the plot range.
    nx, ny : int
        Number of grid points in each direction.
    """
    x = np.linspace(x_range[0], x_range[1], nx)
    y = np.linspace(y_range[0], y_range[1], ny)
    X, Y = np.meshgrid(x, y)

    # Compute slopes
    DY = f(X, Y)
    DX = np.ones_like(DY)

    # Normalise arrow lengths for uniform appearance
    N = np.sqrt(DX**2 + DY**2)
    N[N == 0] = 1  # Avoid division by zero
    DX_norm = DX / N
    DY_norm = DY / N

    fig, ax = plt.subplots(figsize=(8, 6))
    ax.quiver(X, Y, DX_norm, DY_norm, N,
              cmap='coolwarm', angles='xy', scale=30, width=0.003)
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    ax.set_title(title)
    ax.set_xlim(x_range)
    ax.set_ylim(y_range)
    ax.set_aspect('equal')
    ax.grid(True, alpha=0.3)
    plt.tight_layout()
    plt.savefig('direction_field.png', dpi=150)
    plt.show()
    print("Direction field saved to direction_field.png")

# === Example: y' = x^2 + y^2 ===
f = lambda x, y: x**2 + y**2
plot_direction_field(f, (-2, 2), (-2, 2), title=r"Direction field: $y' = x^2 + y^2$")

# === Try another equation: y' = -y + sin(x) ===
g = lambda x, y: -y + np.sin(x)
plot_direction_field(g, (-4, 4), (-3, 3), title=r"Direction field: $y' = -y + \sin(x)$")

Expected output: Two direction field plots. The first (\(y' = x^2 + y^2\)) shows all arrows pointing upward with slopes growing rapidly away from the origin. The second (\(y' = -y + \sin(x)\)) shows arrows converging toward a sinusoidal attractor, illustrating stable equilibrium behaviour.

Lab 2: Euler's Method with Error Analysis

We implement Euler's method, apply it to the IVP \(y' = -2y,\; y(0)=1\) (whose exact solution is \(y = e^{-2x}\)), and measure how the error depends on the step size \(h\).

import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

def euler_method(f, x0, y0, x_end, h):
    """
    Solve y' = f(x, y), y(x0) = y0 using Euler's method.

    Returns arrays of x-values and y-values.
    """
    n_steps = round((x_end - x0) / h)  # round, not int: int truncates float quotients like 3.0/0.1 -> 29.999...
    x = np.zeros(n_steps + 1)
    y = np.zeros(n_steps + 1)
    x[0], y[0] = x0, y0

    for i in range(n_steps):
        y[i+1] = y[i] + h * f(x[i], y[i])
        x[i+1] = x[i] + h

    return x, y

# Define the ODE and exact solution
f = lambda x, y: -2 * y
exact = lambda x: np.exp(-2 * x)

x0, y0, x_end = 0.0, 1.0, 3.0

# --- Plot solutions for different step sizes ---
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 5))

x_exact = np.linspace(x0, x_end, 200)
ax1.plot(x_exact, exact(x_exact), 'k-', linewidth=2, label='Exact: $y = e^{-2x}$')

step_sizes = [0.5, 0.2, 0.1, 0.05]
errors_at_end = []

for h in step_sizes:
    x_euler, y_euler = euler_method(f, x0, y0, x_end, h)
    ax1.plot(x_euler, y_euler, 'o--', markersize=3, label=f'Euler h={h}')
    error = abs(y_euler[-1] - exact(x_euler[-1]))
    errors_at_end.append(error)
    print(f"h = {h:.3f}: y({x_end}) = {y_euler[-1]:.6f}, "
          f"exact = {exact(x_end):.6f}, error = {error:.6e}")

ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_title("Euler's Method: $y' = -2y$, $y(0) = 1$")
ax1.legend()
ax1.grid(True, alpha=0.3)

# --- Convergence plot ---
ax2.loglog(step_sizes, errors_at_end, 'bo-', linewidth=2, label='Measured error')
ax2.loglog(step_sizes, [s * errors_at_end[0] / step_sizes[0] for s in step_sizes],
           'r--', label='Slope 1 reference')
ax2.set_xlabel('Step size h')
ax2.set_ylabel('|Error at x=3|')
ax2.set_title('Error vs. Step Size (log-log)')
ax2.legend()
ax2.grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig('euler_convergence.png', dpi=150)
plt.show()
print("\nPlot saved to euler_convergence.png")
print("The log-log slope is approximately 1, confirming first-order convergence.")

Expected output: The left panel shows the exact exponential decay alongside Euler approximations that improve as \(h\) decreases. The right panel shows that error vs. step size follows a straight line with slope 1 on a log-log plot, confirming that Euler's method is first-order accurate.

Agent Lens

The Direction Field as a Policy Map

A first-order ODE \(y' = f(x,y)\) can be viewed as a deterministic policy for an agent navigating a two-dimensional state space. The mapping from reinforcement-learning concepts is exact:

  • State: A point \((x, y)\) in the plane. Here \(x\) plays the role of time and \(y\) is the system's configuration. The full state is the pair \((x,y)\) because the ODE is first-order — no memory of past values is needed.
  • Action (Policy): The slope \(f(x,y)\) assigned at each state. This is a deterministic policy \(\pi(x,y) = f(x,y)\): given any state, the policy prescribes exactly one action (the direction of the next infinitesimal step). The direction field is a visual rendering of the policy, with each arrow showing the action at that state.
  • Reward / Utility: In an IVP, the "reward" is implicitly defined by the initial condition and the physics encoded in \(f\). A solution curve that satisfies both the ODE (obeys the policy at every point) and the initial condition (starts at the prescribed state) is the optimal trajectory. Deviations from the exact policy (numerical error in Euler's method) incur a "penalty" measured as the global error.
  • Learning / Policy Update: The Picard iteration (the constructive proof behind the Picard-Lindelöf theorem) is a learning algorithm: starting from an initial guess \(y_0(x) = y_0\), the agent iteratively refines its trajectory via \( y_{n+1}(x) = y_0 + \int_{x_0}^{x} f\!\bigl(t, y_n(t)\bigr)\,dt \), converging to the true solution. Each iteration reduces the "error" (the Banach fixed-point contraction), exactly like a policy-improvement step. Euler's method is a cruder but computationally cheaper version of this iteration.

Key insight: Non-uniqueness of solutions (when Picard-Lindelöf fails) corresponds to a state where the policy is ambiguous — multiple actions are equally valid. In RL terms, this is a state with multiple optimal actions, requiring additional information (a tie-breaking rule or richer state representation) to select one.

Higher-order ODEs: A second-order ODE \(y'' = g(x, y, y')\) requires the state to include both \(y\) and \(y'\). This is analogous to expanding the state representation to include velocity as well as position — a Markov property requirement.
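This state expansion can be sketched in code. Using \(y'' = -y\) (a harmonic oscillator, chosen here purely as an illustration), the second-order equation becomes a first-order system in the state \((y, y')\), which Euler's method integrates directly:

```python
import numpy as np

# Rewrite y'' = -y as the first-order system s' = (v, -y) for the
# state s = (y, v) with v = y', then apply Euler's method to the
# system. After one full period (t = 2*pi) the state should return
# near its starting value (1, 0), up to Euler's O(h) drift.
def euler_system(rhs, s0, t_end, h):
    n = round(t_end / h)
    s = np.zeros((n + 1, len(s0)))
    s[0] = s0
    for i in range(n):
        s[i + 1] = s[i] + h * rhs(s[i])
    return s

rhs = lambda s: np.array([s[1], -s[0]])   # (y, v)' = (v, -y)
s = euler_system(rhs, np.array([1.0, 0.0]), 2 * np.pi, 0.001)
print(s[-1])   # close to the exact state (1, 0)
```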

Exercises

Analytical Exercises

Exercise A1. Classify each equation by order, degree, type (ODE/PDE), and linearity:

  1. \(y''' - 2y' + y = e^x\)
  2. \((y')^2 + y = x\)
  3. \(\dfrac{\partial^2 u}{\partial x^2} + \dfrac{\partial^2 u}{\partial y^2} = 0\)
  4. \(y\,y'' + (y')^2 = 0\)

Solution

(a) Order 3, degree 1, ODE, linear (coefficients are constants, \(y\) and derivatives appear to first power).

(b) Order 1, degree 2, ODE, nonlinear (\(y'\) is squared).

(c) Order 2, degree 1, PDE (two independent variables \(x,y\)), linear. This is Laplace's equation.

(d) Order 2, degree 1, ODE, nonlinear (contains the products \(y \cdot y''\) and \((y')^2\)).

Exercise A2. Verify that \(y(x) = c_1 e^x + c_2 e^{-x}\) is the general solution of \(y'' - y = 0\) for arbitrary constants \(c_1, c_2\). Then find the particular solution satisfying \(y(0) = 2\), \(y'(0) = 0\).

Solution

Compute \(y'' = c_1 e^x + c_2 e^{-x}\). Then \(y'' - y = c_1 e^x + c_2 e^{-x} - c_1 e^x - c_2 e^{-x} = 0\). Verified.

Apply initial conditions: \(y(0) = c_1 + c_2 = 2\) and \(y'(0) = c_1 - c_2 = 0\). Solving: \(c_1 = 1\), \(c_2 = 1\). The particular solution is \(y = e^x + e^{-x} = 2\cosh(x)\).

Exercise A3. For the IVP \(y' = y^2,\; y(0) = 1\), find the exact solution by separation of variables. At what value of \(x\) does the solution blow up (become infinite)?

Solution

Separate: \(\frac{dy}{y^2} = dx\). Integrate: \(-\frac{1}{y} = x + C\). Apply \(y(0) = 1\): \(-1 = C\). So \(y = \frac{1}{1-x}\).

The solution blows up at \(x = 1\). This finite-time blowup is a fundamental phenomenon in nonlinear ODEs.
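Euler's method makes the blowup tangible. As a small sketch: with a fine step, the iterates for \(y' = y^2,\; y(0) = 1\) track \(1/(1-x)\) and explode through any threshold near \(x = 1\):

```python
# Euler's method on y' = y^2, y(0) = 1: the numerical solution
# follows 1/(1 - x) and crosses any large threshold near x = 1.
h = 1e-4
x, y = 0.0, 1.0
while y < 1e6 and x < 2.0:   # stop once y exceeds 10^6 (x < 2 is a safety bound)
    y += h * y * y
    x += h
print(f"y exceeded 1e6 near x = {x:.4f} (exact blowup at x = 1)")
```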

Exercise A4. Does the Picard-Lindelöf theorem guarantee a unique solution for the IVP \(y' = \frac{x}{y},\; y(0) = 0\)? Justify your answer carefully.

Solution

Here \(f(x,y) = x/y\), which is undefined (and discontinuous) at \(y=0\). Since the initial condition is \(y(0) = 0\), the function \(f\) is not even continuous at the initial point. The first hypothesis of the Picard-Lindelöf theorem fails, so the theorem makes no guarantee — neither existence nor uniqueness is assured by this theorem. (In fact, if we interpret the ODE as \(y\,dy = x\,dx\), we get \(y^2 = x^2 + C\), and with \(y(0)=0\) we get \(y = \pm x\), so there are two solutions, confirming non-uniqueness.)

Computational Exercises

Exercise C1. Implement a direction-field plotter for \(y' = y - x^2 + 1\) on the domain \([-1, 4] \times [-1, 4]\). Overlay three Euler-method solution curves starting from \(y(0) = 0\), \(y(0) = 0.5\), and \(y(0) = 2\), all with step size \(h = 0.05\). Do the trajectories converge or diverge?

Exercise C2. For the IVP \(y' = -2y + e^{-x},\; y(0) = 1\), the general solution is \(y = Ce^{-2x} + e^{-x}\); the initial condition forces \(C = 0\), so the exact solution is \(y = e^{-x}\) (verify this). Run Euler's method with \(h = 0.1, 0.01, 0.001\) on \([0, 5]\). For each \(h\), compute the maximum absolute error over the interval and verify that the error scales linearly with \(h\).

Solution
import numpy as np

def euler(f, x0, y0, x_end, h):
    n = round((x_end - x0) / h)  # round avoids float-truncation off-by-one (e.g. 5/0.1 -> 49.999...)
    x = np.linspace(x0, x_end, n+1)
    y = np.zeros(n+1)
    y[0] = y0
    for i in range(n):
        y[i+1] = y[i] + h * f(x[i], y[i])
    return x, y

f = lambda x, y: -2*y + np.exp(-x)
exact = lambda x: np.exp(-x)

for h in [0.1, 0.01, 0.001]:
    x, y = euler(f, 0, 1, 5, h)
    max_err = np.max(np.abs(y - exact(x)))
    print(f"h = {h:.4f}: max error = {max_err:.6e}")
# Expected: errors decrease by factor ~10 each time (first-order method).

Exercise C3. Write a Python function that, given \(f(x,y)\) and a point \((x_0, y_0)\), checks the Picard-Lindelöf conditions numerically. Specifically, your function should: (a) evaluate \(f\) at and near the point to check continuity (using finite differences), (b) approximate \(\partial f / \partial y\) using a central difference, and (c) check whether the partial derivative appears bounded near the point. Test on \(f(x,y) = 3y^{2/3}\) at \((0,0)\) and on \(f(x,y) = x^2 + y^2\) at \((0,0)\).

Exercise C4. Implement the improved Euler method (Heun's method):

$$ k_1 = h\,f(x_n, y_n), \qquad k_2 = h\,f(x_n + h,\; y_n + k_1), \qquad y_{n+1} = y_n + \tfrac{1}{2}(k_1 + k_2). $$

Apply it to \(y' = -2y,\; y(0) = 1\) on \([0, 3]\). Compare the error with standard Euler for step sizes \(h = 0.5, 0.2, 0.1, 0.05\). Verify that the improved Euler method is second-order (error scales as \(h^2\)).

Solution
import numpy as np

def improved_euler(f, x0, y0, x_end, h):
    n = round((x_end - x0) / h)  # round avoids float-truncation off-by-one
    x = np.linspace(x0, x_end, n+1)
    y = np.zeros(n+1)
    y[0] = y0
    for i in range(n):
        k1 = h * f(x[i], y[i])
        k2 = h * f(x[i] + h, y[i] + k1)
        y[i+1] = y[i] + 0.5 * (k1 + k2)
    return x, y

f = lambda x, y: -2 * y
exact_end = np.exp(-6)  # y(3)

for h in [0.5, 0.2, 0.1, 0.05]:
    _, ye = improved_euler(f, 0, 1, 3, h)
    err = abs(ye[-1] - exact_end)
    print(f"h = {h:.2f}: error = {err:.6e}")
# Ratio of errors when h halves should be ~4 (second-order).

Agentic Exercises

Exercise G1. (Policy comparison.) Consider the ODE \(y' = \sin(x) - y\). Treat the direction field as a policy map. Use Python to generate the direction field and overlay two solution trajectories: one starting at \((0, -2)\) and one at \((0, 3)\). Both converge to the same long-term behaviour. In RL terms, this corresponds to a policy with a single attracting fixed point (or cycle). Determine the asymptotic (long-time) behaviour analytically by guessing a particular solution of the form \(y_p = A\sin(x) + B\cos(x)\) and finding \(A\) and \(B\). Interpret: why does every initial condition lead to the same eventual trajectory?

Solution

Substitute \(y_p = A\sin(x) + B\cos(x)\) into \(y' + y = \sin(x)\):

$$ A\cos(x) - B\sin(x) + A\sin(x) + B\cos(x) = \sin(x). $$

Equating coefficients: \(\sin(x)\colon -B + A = 1\), \(\cos(x)\colon A + B = 0\). Solving: \(A = 1/2\), \(B = -1/2\).

So \(y_p = \frac{1}{2}\sin(x) - \frac{1}{2}\cos(x)\). The general solution is \(y = Ce^{-x} + \frac{1}{2}\sin(x) - \frac{1}{2}\cos(x)\).

As \(x \to \infty\), \(Ce^{-x} \to 0\) regardless of \(C\), so every trajectory converges to \(y_p\). In RL terms, the exponential decay is the agent "forgetting" its initial state; the attractor \(y_p\) is the unique long-run equilibrium policy.
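This "forgetting" is easy to confirm numerically. A minimal sketch (using a bare-bones Euler integrator like the one in the labs): two trajectories started far apart land on the same attractor:

```python
import numpy as np

# Two Euler trajectories of y' = sin(x) - y from very different
# initial conditions converge to the same attractor
# y_p = (1/2) sin(x) - (1/2) cos(x).
def euler(f, y0, x_end, h):
    n = round(x_end / h)
    x = np.linspace(0.0, x_end, n + 1)
    y = np.zeros(n + 1)
    y[0] = y0
    for i in range(n):
        y[i + 1] = y[i] + h * f(x[i], y[i])
    return x, y

f = lambda x, y: np.sin(x) - y
x, ya = euler(f, -2.0, 20.0, 0.01)
_, yb = euler(f, 3.0, 20.0, 0.01)

gap = abs(ya[-1] - yb[-1])     # the two trajectories have merged
dist = abs(ya[-1] - (0.5 * np.sin(20.0) - 0.5 * np.cos(20.0)))
print(gap, dist)   # gap is tiny; dist is limited only by Euler's O(h) error
```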

Exercise G2. (Exploration and finite-time blowup.) An agent following the policy \(y' = 1 + y^2\) (with \(y(0)=0\)) will reach \(y = +\infty\) in finite time. (a) Solve the ODE exactly. (b) What is the blowup time? (c) Run Euler's method with \(h=0.01\). At what step does the numerical solution first exceed \(10^6\)? (d) In RL language, this is a policy that leads the agent "off a cliff." Design a modified policy \(\tilde{f}(x,y) = \min(1+y^2,\; M)\) with a reward-clipping parameter \(M\). How does the clipped trajectory differ? Plot both.

Assessment

Quiz: 10 Questions

  1. The equation \(y''' + xy' - y^2 = \ln(x)\) is:
    (a) 3rd order, linear   (b) 3rd order, nonlinear   (c) 2nd order, nonlinear   (d) 3rd order, degree 2
  2. Which of the following is a PDE?
    (a) \(y'' + y = 0\)   (b) \(\frac{du}{dt} = -ku\)   (c) \(u_{xx} + u_{yy} = 0\)   (d) \(\frac{dy}{dx} = xy\)
  3. The general solution of a 3rd-order ODE contains how many arbitrary constants?
    (a) 1   (b) 2   (c) 3   (d) It depends on whether the equation is linear
  4. To verify that \(y = e^{3x}\) is a solution of \(y' - 3y = 0\), you should:
    (a) Integrate \(y' - 3y\)   (b) Compute \(y' = 3e^{3x}\) and check that \(3e^{3x} - 3e^{3x} = 0\)   (c) Find the general solution first   (d) Plot the direction field
  5. The IVP \(y' = y^{1/3},\; y(0) = 0\) fails which hypothesis of Picard-Lindelöf?
    (a) \(f\) is not continuous   (b) \(\partial f/\partial y\) is not continuous at the initial point   (c) Both hypotheses fail   (d) Neither hypothesis fails
  6. A direction field for \(y' = -y\) shows:
    (a) Horizontal lines everywhere   (b) Slopes that are negative when \(y > 0\) and positive when \(y < 0\)   (c) Vertical lines everywhere   (d) Slopes independent of \(y\)
  7. Euler's method with step size \(h\) has global error of order:
    (a) \(h^2\)   (b) \(h\)   (c) \(h^{1/2}\)   (d) \(h^3\)
  8. A singular solution of a DE is one that:
    (a) Blows up in finite time   (b) Cannot be obtained from the general solution for any value of the constants   (c) Satisfies the homogeneous equation   (d) Has no derivatives
  9. The equation \(y' = x + y\) with \(y(0) = 1\) satisfies the Picard-Lindelöf conditions because:
    (a) The equation is linear   (b) \(f(x,y) = x+y\) and \(\partial f/\partial y = 1\) are both continuous everywhere   (c) The solution can be found by separation of variables   (d) The direction field has no vertical tangents
  10. If \(y_1\) and \(y_2\) are solutions of a nonlinear ODE, then \(y_1 + y_2\) is:
    (a) Always a solution   (b) Never a solution   (c) A solution only if \(y_1 = -y_2\)   (d) Not guaranteed to be a solution

Answer Key

1: (b) — \(y^2\) makes it nonlinear   2: (c)   3: (c)   4: (b)   5: (b) — \(\partial f/\partial y = \frac{1}{3}y^{-2/3}\) is undefined at \(y=0\)   6: (b)   7: (b)   8: (b)   9: (b)   10: (d)

Mini-Project: Direction Field and Solution Explorer

Project Description

Build a Python script (or Jupyter notebook) that serves as an interactive exploration tool for first-order ODEs. Your tool should:

  1. Accept a user-specified function \(f(x,y)\) (as a Python lambda or string expression) and domain bounds.
  2. Plot the direction field.
  3. Overlay Euler-method solution curves for user-specified initial conditions (at least 5 different initial conditions on one plot).
  4. Compute and display the Euler-method approximation error at \(x=x_{\text{end}}\) for a user-specified exact solution (if known).
  5. Automatically check the Picard-Lindelöf conditions at each initial point: evaluate \(\partial f/\partial y\) numerically and warn if it appears unbounded.

Test your tool on the following three ODEs:

  • \(y' = -y + \sin(x)\) with initial conditions \(y(0) \in \{-2, -1, 0, 1, 2\}\).
  • \(y' = y^2 - x\) with initial conditions \(y(0) \in \{-1, -0.5, 0, 0.5, 1\}\).
  • \(y' = \sqrt{|y|}\) with initial condition \(y(0) = 0\) (the Picard-Lindelöf warning should trigger).

Rubric (30 points)

  • Direction field plotting (6 pts): Clear, correctly oriented arrows; appropriate grid density; axes labelled.
  • Euler method implementation (5 pts): Correct forward-Euler iteration; handles variable step counts; no off-by-one errors.
  • Multiple initial conditions overlay (4 pts): At least 5 curves per ODE, each with a distinct colour and legend entry.
  • Error analysis when exact solution known (5 pts): Error computed and displayed; convergence order verified for at least 3 step sizes.
  • Picard-Lindelöf condition checker (5 pts): Computes \(\partial f/\partial y\) numerically; identifies and warns when the partial derivative is unbounded or undefined; tested on at least one passing and one failing case.
  • Code quality and documentation (3 pts): Functions have docstrings; clear variable names; well-structured code.
  • Testing on all three specified ODEs (2 pts): All three test cases run with output shown.

References & Next Steps

References

  1. Boyce, W.E. and DiPrima, R.C. Elementary Differential Equations and Boundary Value Problems, 11th ed. Wiley, 2017. — Standard introductory text; Chapters 1–2 cover classification, direction fields, and existence-uniqueness.
  2. Tenenbaum, M. and Pollard, H. Ordinary Differential Equations. Dover, 1985. — Affordable, example-rich reference with thorough treatment of solution verification and classification.
  3. Coddington, E.A. and Levinson, N. Theory of Ordinary Differential Equations. McGraw-Hill, 1955. — Rigorous proof of the Picard-Lindelöf theorem and its extensions.
  4. Butcher, J.C. Numerical Methods for Ordinary Differential Equations, 3rd ed. Wiley, 2016. — Definitive reference on Euler's method, Runge-Kutta methods, and convergence theory.
  5. Strogatz, S.H. Nonlinear Dynamics and Chaos, 2nd ed. Westview Press, 2015. — Excellent geometric and intuitive treatment of direction fields, phase portraits, and qualitative analysis.

Next Steps

You now have the vocabulary and conceptual framework for differential equations. You know what an ODE is, how to classify it, what constitutes a solution, and when solutions are guaranteed to exist and be unique. You have seen direction fields as visual policy maps and Euler's method as a first computational tool.

In the next module you will begin solving first-order ODEs systematically — separable equations, integrating factors, exact equations, and more. The tools from Module 0.1 (calculus and linear algebra) and the concepts from this module (classification, IVPs, existence-uniqueness) will be used throughout.

Next Module: 1.1 — First-Order Ordinary Differential Equations →
