Section 5.2 Linear Constant Coefficient ODE I
In this section, we will look at and solve our first non-trivial ordinary differential equation. First though, we recall what it is that we are trying to solve. Quite generally, let \(V\) be a finite dimensional inner product space and \(\mb{F}_t : V \to V\) be a collection of functions, one for every \(t\) in some interval \(I\text{.}\) This collection should be thought of as a time varying vector field, which we will study in later sections. The general ordinary differential equation can be written
\begin{equation}
\dot{\mb{x}} = \mb{F}_t ( \mb{x}).\tag{5.2.1}
\end{equation}
For the moment, we consider the meaning of this equation when
\begin{equation*}
\mb{F}_t = \mb{F}
\end{equation*}
is independent of time (the so-called autonomous differential equation). For any point
\(\mb{x}\text{,}\) we can then picture
\(\mb{F} (\mb{x} )\) as a vector with initial point at
\(\mb{x}\text{.}\) Indeed, we write
\(T_\mb{x} V\) for a copy of the vector space
\(V\) at
\(\mb{x}\) and call this the
tangent space of
\(V\) at
\(\mb{x}\) (we do this for every vector in
\(V\)). We then illustrate
\(\mb{F}\) as a collection of tangent vectors, one for each point in the domain of
\(\mb{F}\text{.}\) This is called a
vector field. We can use Sage to illustrate this perspective and graph a vector field. Evaluate the Sage cell to see a two dimensional example.
Or evaluate this cell to see a three dimensional example.
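If Sage is not at hand, the same picture can be approximated in plain Python. The sketch below samples a vector field on a grid, attaching one tangent vector to each point; the field \(F(x, y) = (-y, x)\) is a hypothetical example chosen for illustration, not necessarily the one in the cells above.

```python
# A hypothetical linear vector field F(x, y) = (-y, x): a rotation field.
# (The contents of the Sage cells are not shown; this is an illustrative stand-in.)
def F(x, y):
    return (-y, x)

# Sample the field on a small grid, as a quiver plot would,
# recording one tangent vector F(p) for each grid point p.
grid = [(x, y) for x in range(-2, 3) for y in range(-2, 3)]
field = {p: F(*p) for p in grid}

# Each tangent vector F(x, y) is perpendicular to the position vector (x, y),
# so their dot product vanishes: the field circulates around the origin.
for (x, y), (u, v) in field.items():
    assert x * u + y * v == 0
print(field[(1, 0)])  # → (0, 1)
```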
Now, consider a solution to equation
(5.2.1). Such a path
\(\mb{x} (t)\) will have its tangent vector at time
\(t\) precisely equal to the vector field at the position
\(\mb{x}(t)\text{.}\) In simple terms, the path follows the flow of the field: at each point, its direction and speed are determined by
\(\mb{F}\text{.}\) A
particular solution to this equation is simply some path
\(\mb{x} (t)\) that satisfies this equation. However, just as is the case with solving a linear system of equations, we can ask for a
general solution which includes parameters and gives all solutions to our equation. Sadly, for most ODEs, finding any closed-form solution is out of the question. Nonetheless, we will see a theorem assuring us that a solution exists (under some mild conditions on
\(\mb{F}_t\)) and that, once the initial condition
\(\mb{x} (0)\) is fixed, this solution is unique. Let us state this last fact as a ‘theorem’ that will be proved later.
Theorem 5.2.1. Uniqueness of Solutions to ODEs.
Suppose
\(\mb{x}\) and
\(\tilde{\mb{x}}\) both solve equation
(5.2.1) under suitable conditions. If
\(\mb{x} (0) = \tilde{\mb{x}} (0)\text{,}\) then
\(\mb{x} = \tilde{\mb{x}}\text{.}\)
The suitable conditions mentioned in this theorem concern differentiability of \(\mb{F}\text{,}\) which we will study in the coming sections. As mentioned above, finding closed-form solutions to a general ODE ranges from difficult to impossible. However, the situation changes if we restrict our attention to certain types of functions \(\mb{F}_t\text{.}\)
Definition 5.2.2.
Given a finite dimensional inner product space \(V\) over \(K\text{,}\) a homogeneous linear constant coefficient ordinary differential equation is an equation of the form
\begin{equation*}
\dot{\mb{x}} = T ( \mb{x} )
\end{equation*}
for some linear transformation \(T : V \to V\text{.}\)
Note that the linear transformation does not change with the independent variable \(t\) of \(\mb{x}\text{.}\) The advantage of having developed sufficient linear algebra before this point now becomes clear. We can solve this equation immediately by first understanding its solutions when \(T\) is represented as a Jordan block; for the general case, we then simply add such solutions together.
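As a minimal sanity check of the definition, the following plain-Python sketch (with eigenvalues and an initial condition chosen arbitrarily for illustration) verifies numerically that, for a diagonal \(T\text{,}\) the exponential paths satisfy \(\dot{\mb{x}} = T(\mb{x})\text{.}\)

```python
import math

# Diagonal case T = Diag(l1, l2): the path x(t) = (e^{l1 t} c1, e^{l2 t} c2)
# should satisfy x'(t) = T(x(t)). All numbers here are arbitrary choices.
l1, l2 = -1.0, 2.0     # assumed eigenvalues
c1, c2 = 3.0, -1.0     # initial condition x(0) = (c1, c2)

def x(t):
    return (c1 * math.exp(l1 * t), c2 * math.exp(l2 * t))

def T(v):
    return (l1 * v[0], l2 * v[1])

# Compare a centered finite-difference derivative of x with T(x) at several times.
h = 1e-6
for t in [0.0, 0.5, 1.0]:
    xp = tuple((a - b) / (2 * h) for a, b in zip(x(t + h), x(t - h)))
    for d, e in zip(xp, T(x(t))):
        assert abs(d - e) < 1e-4
```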
Lemma 5.2.3.
Suppose \(T\) is represented as a Jordan block \(J_{\lambda, m}\) with respect to the basis \(\mathcal{B} = \{\mb{v}_0, \mb{v}_1, \ldots, \mb{v}_{m - 1}\}\text{.}\) For any \(0 \leq k \leq m - 1\) let \(\mb{y}_k\) be the path defined by
\begin{equation*}
\mb{y}_k (t) = \frac{t^k}{k!} \mb{v}_0 + \frac{t^{k - 1}}{(k - 1)!} \mb{v}_1 + \cdots + \frac{t^1}{1!} \mb{v}_{k - 1} + \mb{v}_k.
\end{equation*}
Letting
\begin{equation*}
\mb{x}_k (t) = e^{\lambda t} \mb{y}_k (t)
\end{equation*}
the general solution to
\begin{equation*}
\dot{\mb{x}} = T ( \mb{x} )
\end{equation*}
is
\begin{equation}
\mb{x} (t) = C_0 \mb{x}_0 (t) + C_1 \mb{x}_1 (t) + \cdots + C_{m - 1} \mb{x}_{m - 1} (t) \tag{5.2.2}
\end{equation}
for scalars \(C_0, C_1 , \ldots, C_{m - 1}\text{.}\) This is the unique solution with initial condition
\begin{equation*}
\mb{x} (0) = \threevec{C_0}{\vdots}{C_{m - 1}}_{\mathcal{B}}.
\end{equation*}
Proof.
To prove this, we need only show that
\(\mb{x} (t)\) solves the equation with the stated initial condition. Indeed, since the initial condition
\(\mb{x} (0)\) can be any vector in
\(V\text{,}\) any other solution
\(\mb{y} (t)\) must then agree with a path of this form by
Theorem 5.2.1. We begin by computing the tangent vectors
\begin{align*}
\mb{y}^\prime_k (t) \amp = \frac{d}{dt} \left( \frac{t^k}{k!} \mb{v}_0 + \frac{t^{k - 1}}{(k - 1)!} \mb{v}_1 + \cdots + \frac{t^1}{1!} \mb{v}_{k - 1} + \mb{v}_k \right), \\
\amp = \frac{t^{k - 1}}{(k - 1)!} \mb{v}_0 + \frac{t^{k - 2}}{(k - 2)!} \mb{v}_1 + \cdots + \frac{t^1}{1!} \mb{v}_{k - 2} + \mb{v}_{k - 1}, \\ \amp = \mb{y}_{k - 1} (t).
\end{align*}
\begin{align*}
\mb{x}_k^\prime (t) \amp = \frac{d}{dt} \left( e^{\lambda t} \mb{y}_k (t) \right), \\
\amp = \lambda e^{\lambda t} \mb{y}_k (t) + e^{\lambda t} \mb{y}_{k - 1} (t).
\end{align*}
On the other hand, as \(\cob{T}{\mathcal{B}}{\mathcal{B}} = J_{\lambda, m}\text{,}\) we have that \(T(\mb{v}_0) = \lambda \mb{v}_0\) and, for \(1 \leq j \leq m - 1\text{,}\)
\begin{equation*}
T ( \mb{v}_j ) = \lambda \mb{v}_j + \mb{v}_{j - 1} .
\end{equation*}
This with linearity gives
\begin{align*}
T ( \mb{y}_k (t) ) \amp = T \left( \frac{t^k}{k!} \mb{v}_0 \right) + T \left( \frac{t^{k - 1}}{(k - 1)!} \mb{v}_1 \right) + \cdots + T \left( \frac{t^1}{1!} \mb{v}_{k - 1}\right) + T \left( \mb{v}_k \right), \\
\amp = \frac{t^k}{k!} T \left( \mb{v}_0 \right) + \frac{t^{k - 1}}{(k - 1)!} T \left( \mb{v}_1 \right) + \cdots + \frac{t^1}{1!} T \left( \mb{v}_{k - 1}\right) + T \left( \mb{v}_k \right), \\
\amp = \frac{t^k}{k!} \left( \lambda \mb{v}_0 \right) + \frac{t^{k - 1}}{(k - 1)!} \left( \lambda \mb{v}_1 + \mb{v}_{0} \right) + \cdots + \frac{t^1}{1!} \left( \lambda \mb{v}_{k - 1} + \mb{v}_{k - 2} \right) + \left( \lambda \mb{v}_k + \mb{v}_{k - 1} \right), \\
\amp = \lambda \left( \frac{t^k}{k!} \mb{v}_0 + \frac{t^{k - 1}}{(k - 1)!} \mb{v}_1 + \cdots + \frac{t^1}{1!} \mb{v}_{k - 1} + \mb{v}_k \right) + \cdots \\
\amp \cdots + \left( \frac{t^{k - 1}}{(k - 1)!} \mb{v}_0 + \cdots + \frac{t^1}{1!} \mb{v}_{k - 2} + \mb{v}_{k - 1} \right), \\
\amp = \lambda \mb{y}_k (t) + \mb{y}_{k - 1} (t).
\end{align*}
Combining these computations with linearity, we have
\begin{align*}
\frac{d}{dt} \left( e^{\lambda t} \mb{y}_k (t) \right) \amp = \lambda e^{\lambda t} \mb{y}_k (t) + e^{\lambda t} \mb{y}_{k - 1} (t), \\
\amp = e^{\lambda t} \left( \lambda \mb{y}_k (t) + \mb{y}_{k - 1} (t) \right), \\
\amp = e^{\lambda t} T ( \mb{y}_k (t)) , \\
\amp = T \left( e^{\lambda t} \mb{y}_k (t) \right).
\end{align*}
This shows that
\(\mb{x}_k (t) = e^{\lambda t} \mb{y}_k (t)\) solves the ODE for each
\(0 \leq k \leq m - 1\text{.}\) As both sides of the equation are linear in paths, any linear combination of these solutions is again a solution (note that this fails for a general ODE; it relies on the equation being linear and homogeneous). Thus the path in equation
(5.2.2) is a solution.
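The lemma can also be spot-checked numerically. The following plain-Python sketch (with an arbitrarily chosen \(\lambda\)) verifies in coordinates that \(\mb{x}_1(t) = e^{\lambda t}(t \mb{v}_0 + \mb{v}_1)\) satisfies \(\dot{\mb{x}} = J_{\lambda, 2}\, \mb{x}\) for a \(2 \times 2\) Jordan block.

```python
import math

# Spot-check of Lemma 5.2.3 for a 2x2 Jordan block J_{lam,2} in coordinates,
# taking v_0 = (1, 0) and v_1 = (0, 1). The eigenvalue is an arbitrary choice.
lam = 0.5

def x1(t):
    # x_1(t) = e^{lam t} (t v_0 + v_1) = e^{lam t} (t, 1)
    return (math.exp(lam * t) * t, math.exp(lam * t))

def J(v):
    # J_{lam,2} acts by v_0 -> lam v_0 and v_1 -> lam v_1 + v_0
    return (lam * v[0] + v[1], lam * v[1])

def x1_prime(t):
    # exact derivative: lam e^{lam t} (t, 1) + e^{lam t} (1, 0)
    e = math.exp(lam * t)
    return (lam * e * t + e, lam * e)

# The two sides agree (here exactly, up to floating-point rounding).
for t in [0.0, 0.3, 1.7]:
    for d, e in zip(x1_prime(t), J(x1(t))):
        assert abs(d - e) < 1e-12
```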
Of course, if we can decompose \(T\) into a block diagonal matrix of Jordan blocks of the form above, we need only add the resulting solutions.
Lemma 5.2.4.
If \(V\) is an inner product space and \(T : V \to V\) is a linear transformation with characteristic polynomial \(p_T ( t) = (t - \lambda)^m\text{,}\) then there are sets \(\mathcal{B}_1, \ldots, \mathcal{B}_r\) of vectors whose union is a basis for \(V\) and for which the restriction of \(T\) to each \(V_i = \text{span} (\mathcal{B}_i)\) is represented as a Jordan block. Any solution to
\begin{equation*}
\dot{\mb{x}} = T ( \mb{x} )
\end{equation*}
can be obtained by solving it as in
Lemma 5.2.3 for each
\(V_i\) and adding the resulting paths together. Such a solution will be called a
\(\lambda\)-eigenspace solution.
The following theorem is an immediate corollary of these lemmas.
Theorem 5.2.5.
Let \(V\) be a finite dimensional inner product space and \(T: V \to V\) a linear transformation with characteristic polynomial
\begin{equation*}
p_T (t) = (t - \lambda_1)^{k_1} \cdots (t - \lambda_m)^{k_m}.
\end{equation*}
Let
\begin{equation*}
V = V_{\lambda_1} \oplus \cdots \oplus V_{\lambda_m}
\end{equation*}
be the decomposition of \(V\) into generalized eigenspaces so that \(T = T_1 \oplus \cdots \oplus T_m\text{.}\) Then every solution \(\mb{x}\) to
\begin{equation*}
\dot{\mb{x}} = T ( \mb{x} )
\end{equation*}
can be uniquely written as a linear combination
\begin{equation*}
\mb{x} = \mb{y}_1 + \cdots + \mb{y}_m
\end{equation*}
where \(\mb{y}_i\) is a \(\lambda_i\)-eigenspace solution to
\begin{equation}
\dot{\mb{y}}_i = T_i (\mb{y}_i ) . \tag{5.2.3}
\end{equation}
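To illustrate the theorem concretely, consider the (hypothetical, chosen for illustration) transformation with matrix \(\left[ \begin{smallmatrix} 0 \amp 1 \\ 1 \amp 0 \end{smallmatrix} \right]\text{,}\) whose eigenvalues are \(1\) and \(-1\) with eigenvectors \((1,1)\) and \((1,-1)\text{.}\) The plain-Python sketch below checks numerically that the sum of the two eigenspace solutions solves the ODE.

```python
import math

# T swaps coordinates; eigenvalues 1 and -1, eigenvectors (1,1) and (1,-1).
# The coefficients of the two eigenspace solutions are arbitrary choices.
a, b = 2.0, 3.0

def y1(t):  # 1-eigenspace solution: a e^t (1, 1)
    e = math.exp(t)
    return (a * e, a * e)

def y2(t):  # (-1)-eigenspace solution: b e^{-t} (1, -1)
    e = math.exp(-t)
    return (b * e, -b * e)

def x(t):   # the full solution x = y_1 + y_2
    return tuple(p + q for p, q in zip(y1(t), y2(t)))

def T(v):
    return (v[1], v[0])

# Finite-difference check that x'(t) = T(x(t)).
h = 1e-6
for t in [0.0, 0.5, 1.2]:
    xp = tuple((p - q) / (2 * h) for p, q in zip(x(t + h), x(t - h)))
    for d, e in zip(xp, T(x(t))):
        assert abs(d - e) < 1e-3
```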
In the next section, we will give a tremendous number of detailed examples illustrating the power of this result and approach.
Exercises
1.
Suppose the \(n \times n\) matrix \(A\) is diagonalizable and
\begin{equation*}
P^{-1} A P = \text{Diag} (\lambda_1, \ldots, \lambda_n ) .
\end{equation*}
If
\begin{equation*}
\mb{x} (t) = \threevec{x_1 (t)}{\vdots}{x_n (t)}
\end{equation*}
denotes a path in \(\mathbb{R}^n\text{,}\) what is the general solution to the differential equation
\begin{equation*}
\dot{\mb{x}} = A \mb{x} ?
\end{equation*}
You may write your answer using \(P\text{.}\)
2.
Let
\begin{equation*}
A = \left[ \begin{matrix} 0 \amp -2 \amp 1 \\ -1 \amp 0 \amp 0 \\ -5 \amp 7 \amp -3 \end{matrix} \right]
\end{equation*}
and
\begin{equation*}
\mb{r} (t) = \threevec{x (t)}{y (t)}{z (t)}
\end{equation*}
be a path in \(\mathbb{R}^3\text{.}\)
(a)
Write out the differential equation
\begin{equation*}
\mb{r}^\prime (t) = A \mb{r} (t)
\end{equation*}
as three differential equations in \(x(t), y(t)\) and \(z(t)\text{.}\)
(b)
Give the particular solution to the differential equation with initial conditions
\begin{equation*}
\mb{r} (0 ) = \threevec{1}{-1}{1} .
\end{equation*}