
Section 5.4 Linear Constant Coefficient ODE III

In this section we will complete the study of this type of ODE by removing the homogeneous condition. The form of such an equation in \(K^n\) is
\begin{equation} \dot{\mb{x}} = A \mb{x} + \mb{f}\tag{5.4.1} \end{equation}
where
\begin{equation*} \mb{f} = \threevec{f_1 (t)}{\vdots}{f_n (t)}. \end{equation*}
As always, if \(A\) is diagonalizable, we can change to our eigenbasis and get an immediate solution. This will be the case we focus on.

Theorem 5.4.1.

Suppose \(A\) is diagonalizable with \(A = P \, \text{Diag} (\lambda_1, \ldots, \lambda_n) \, P^{-1}\) and set \(\mb{g} (t) = P^{-1} \mb{f} (t)\text{.}\) Then a solution to equation (5.4.1) is given by \(\mb{x} (t) = P \, \mb{y} (t)\) where
\begin{equation*} y_j (t) = e^{\lambda_j t} \int e^{-\lambda_j t} g_j (t) \diff{t}. \end{equation*}

Proof.

Since we are asserting only that we have found a solution, we need only check that equation (5.4.1) holds. First we observe that
\begin{align*} y_j^\prime (t) \amp = \frac{d}{dt} \left( e^{\lambda_j t} \int e^{-\lambda_j t} g_j (t) \diff{t} \right), \\ \amp = \left( \frac{d}{dt} e^{\lambda_j t} \right) \int e^{-\lambda_j t} g_j (t) \diff{t} + e^{\lambda_j t} \left(\frac{d}{dt} \int e^{-\lambda_j t} g_j (t) \diff{t} \right), \\ \amp = \lambda_j e^{\lambda_j t} \int e^{-\lambda_j t} g_j (t) \diff{t} + e^{\lambda_j t} e^{-\lambda_j t} g_j (t) , \\ \amp = \lambda_j y_j (t) + g_j (t). \end{align*}
Using this, we then compute
\begin{align*} \mb{x}^\prime (t) \amp = \frac{d}{dt} \left( P \,\mb{y} (t) \right), \\ \amp = P \, \mb{y}^\prime (t), \\ \amp = P \, \threevec{y_1^\prime (t)}{\vdots}{y_n^\prime (t)}, \\ \amp = P \left( \threevec{\lambda_1 y_1 (t) + g_1 (t)}{\vdots}{\lambda_n y_n (t) + g_n (t)} \right), \\ \amp = P\, \left( \threevec{\lambda_1 y_1 (t)}{\vdots}{\lambda_n y_n (t)} + \threevec{ g_1 (t)}{\vdots}{g_n (t)} \right), \\ \amp = P\, \left( \text{Diag} (\lambda_1 , \ldots, \lambda_n)\, \mb{y} (t) + \mb{g} (t) \right), \\ \amp = P\, \text{Diag} (\lambda_1 , \ldots, \lambda_n)\,\mb{y} (t) + P\, \mb{g} (t), \\ \amp = P\, \text{Diag} (\lambda_1 , \ldots, \lambda_n)\, P^{-1}\, \mb{x} (t) + P\, \mb{g} (t), \\ \amp = A\, \mb{x} (t) + \mb{f} (t). \end{align*}
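The recipe in this proof can also be checked symbolically. The sketch below is not from the text: the matrix \(A\) and forcing term \(\mb{f}\) are hypothetical choices, used only to exercise the steps (diagonalize, change the forcing term to the eigenbasis, integrate componentwise, and change back).

```python
# Hypothetical 2x2 example (not from the text) checking the solution recipe:
# diagonalize A, set g = P^{-1} f, build y_j = e^{l_j t} * Int(e^{-l_j t} g_j),
# and verify that x = P y solves x' = A x + f.
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 2], [2, 1]])      # diagonalizable: eigenvalues 3 and -1
f = sp.Matrix([t, 1])                # a simple forcing term

P, D = A.diagonalize()               # A = P D P^{-1}
g = P.inv() * f                      # forcing term in the eigenbasis

y = sp.Matrix([sp.exp(D[j, j] * t) * sp.integrate(sp.exp(-D[j, j] * t) * g[j], t)
               for j in range(A.rows)])
x = P * y                            # candidate particular solution

residual = sp.simplify(x.diff(t) - (A * x + f))
print(residual.T)                    # the zero vector if the recipe works
```

Swapping in any other diagonalizable matrix and integrable forcing term leaves the check unchanged; only the `integrate` step can become hard.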
As it turns out, once we obtain a particular solution to the non-homogeneous equation, we can use our previous homogeneous solutions to get the general solution.

Corollary 5.4.2.

Suppose \(\mb{x}_p\) is a particular solution to equation (5.4.1) and \(\mb{x}_1, \ldots, \mb{x}_n\) are linearly independent solutions to the homogeneous equation. Then every solution to equation (5.4.1) has the form
\begin{equation} \mb{x} (t) = C_1 \mb{x}_1 (t) + \cdots + C_n \mb{x}_n (t) + \mb{x}_p (t).\tag{5.4.2} \end{equation}

Proof.

It is an exercise to see that \(\mb{x}\) is a solution to the non-homogeneous equation. If \(\mb{y}\) is any solution, then from the uniqueness theorem, any solution \(\mb{x}\) with \(\mb{x} (0) = \mb{y} (0)\) must equal \(\mb{y}\text{.}\) But since \(\mb{x}_1, \ldots, \mb{x}_n\) are linearly independent, so are \(\mb{x}_1 (0), \ldots, \mb{x}_n (0)\) which implies they form a basis of \(\mathbb{R}^n\text{.}\) Thus there are constants \(C_1, \ldots, C_n\) so that
\begin{equation*} \mb{y}(0) - \mb{x}_p (0) = C_1 \mb{x}_1 (0) + \cdots + C_n \mb{x}_n (0). \end{equation*}
This then means that \(\mb{y} (0) = \mb{x} (0)\) for the solution in equation (5.4.2) and we are done.
Combining Theorem 5.4.1, Theorem 5.2.5, and Corollary 5.4.2 gives us the methods to obtain general solutions to any constant coefficient linear ODE. That said, these methods can contain obstacles that are difficult to overcome, such as finding roots of polynomials or integrating difficult functions. Moreover, many linear ordinary differential equations of interest do not have constant coefficients, and of course there are non-linear ordinary differential equations that are also of great interest. In coming sections, we will build some new tools to address some of these cases. For now, though, let us look at a couple of examples of the non-homogeneous case.

Example 5.4.3. A non-homogeneous two dimensional equation.

In Example 5.3.2 we considered the linear system
\begin{align*} x^\prime (t) \amp = -4 x(t) - 3 y(t), \\ y^\prime (t) \amp = 6 x (t) + 5 y(t). \end{align*}
Let us consider a non-homogeneous version of this system such as
\begin{align*} x^\prime (t) \amp = -4 x(t) - 3 y(t) + t, \\ y^\prime (t) \amp = 6 x (t) + 5 y(t) - 3t. \end{align*}
We can write this as the matrix equation
\begin{equation*} \dot{\mb{x}} = \left[ \begin{matrix} -4 \amp -3 \\ 6 \amp 5 \end{matrix} \right] \mb{x} + \twovec{t}{-3t}. \end{equation*}
In this case we have
\begin{equation*} \mb{f} (t) = \twovec{t}{-3t}. \end{equation*}
Looking back to the example, we see that we found the \((-1)\)-eigenvector \(\mb{v}_1\) and \(2\)-eigenvector \(\mb{v}_2\) which form an eigenbasis
\begin{equation*} \mathcal{B} = \left\{ \mb{v}_1 , \mb{v}_2 \right\} = \left\{ \twovec{1}{-1}, \twovec{-1}{2} \right\}. \end{equation*}
From these we had the change of basis matrices
\begin{equation*} P = \left[ \begin{matrix} 1 \amp -1 \\ -1 \amp 2 \end{matrix} \right] , \hspace{.3in} P^{-1} = \left[ \begin{matrix} 2 \amp 1 \\ 1 \amp 1 \end{matrix} \right]. \end{equation*}
Using this, we apply Theorem 5.4.1 to find
\begin{equation*} \mb{g} (t) = \twovec{g_1 (t)}{g_2 (t)} = P^{-1} \mb{f} (t) = \twovec{2 t - 3t}{t - 3t} = \twovec{-t}{-2t}. \end{equation*}
We can then calculate the integrals using integration by parts:
\begin{align*} y_1 (t) \amp = e^{-t} \int e^t (-t) \diff{t}, \\ \amp = - e^{-t} (t e^t - e^t), \\ \amp = 1 - t. \\ y_2 (t) \amp = e^{2t} \int e^{-2t} (-2t) \diff{t}, \\ \amp = e^{2t} \left( t e^{-2t} + \frac{1}{2}e^{-2t}\right), \\ \amp = t + \frac{1}{2}. \end{align*}
Finally, we then obtain a particular solution
\begin{equation*} \mb{x}_p (t) = P \mb{y} (t) = \left[ \begin{matrix} 1 \amp -1 \\ -1 \amp 2 \end{matrix} \right] \twovec{1 - t}{t + 1/2} = \twovec{1/2 - 2t}{3t}. \end{equation*}
Using the homogeneous solutions (5.3.3) gives
\begin{align*} \mb{x} (t) \amp = C_1 e^{-t} \mb{v}_1 + C_2 e^{2t} \mb{v}_2 + \mb{x}_p (t), \\ \amp = \twovec{C_1 e^{-t} - C_2 e^{2t} - 2t + 1/2 }{ - C_1 e^{-t} + 2 C_2 e^{2t} + 3t }. \end{align*}
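One can verify this general solution directly. The following SymPy check is not part of the text; it substitutes the solution above, with symbolic constants, back into the system.

```python
# Check that the general solution found above solves the non-homogeneous
# system x' = A x + f for arbitrary constants C1, C2.
import sympy as sp

t, C1, C2 = sp.symbols('t C1 C2')
A = sp.Matrix([[-4, -3], [6, 5]])
f = sp.Matrix([t, -3*t])

x = sp.Matrix([C1*sp.exp(-t) - C2*sp.exp(2*t) - 2*t + sp.Rational(1, 2),
               -C1*sp.exp(-t) + 2*C2*sp.exp(2*t) + 3*t])

residual = sp.simplify(x.diff(t) - (A*x + f))
print(residual.T)   # expect the zero vector
```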
We now consider a second order scalar ODE.

Example 5.4.4. Resonance.

One of the most fascinating applications of linear ODE's is the spring-mass system explored in Example 5.3.6. Recall the homogeneous equation was
\begin{equation*} x^{\prime \prime} + \tilde{b} x^\prime + \tilde{k} x = 0. \end{equation*}
Adding an inhomogeneous term to this equation gives
\begin{equation*} x^{\prime \prime} + \tilde{b} x^\prime + \tilde{k} x = f(t). \end{equation*}
To adapt this to the first order framework, we may rewrite the equation as
\begin{equation*} \dot{\mb{x}} = \left[ \begin{matrix} 0 \amp 1 \\ -\tilde{k} \amp - \tilde{b} \end{matrix} \right] \mb{x} + \twovec{0}{f(t)} \hspace{.3in} \text{where} \hspace{.2in} \mb{x} = \twovec{x_0 (t)}{x_1 (t)}. \end{equation*}
We now focus our attention on the case of \(\tilde{b} \lt 2 \sqrt{\tilde{k}}\) where there is some oscillation. By equation (5.3.11) we have the eigenvalues
\begin{equation*} \lambda = - \alpha + i \beta, \hspace{.5in} \bar{\lambda} = - \alpha - i \beta \end{equation*}
which are roots of the characteristic polynomial. Here \(\alpha \geq 0\) and we have the eigenbasis
\begin{equation} \mathcal{B} = \left\{ \mb{w}_1, \bar{\mb{w}}_1 \right\} = \left\{ \twovec{1}{\lambda}, \twovec{1}{\bar{\lambda}} \right\}.\tag{5.4.3} \end{equation}
This gives the matrix
\begin{equation*} P = \left[ \begin{matrix} 1 \amp 1 \\ \lambda \amp \bar{\lambda} \end{matrix} \right] \end{equation*}
which has inverse
\begin{equation*} P^{-1} = \frac{i}{2\beta} \left[ \begin{matrix} \bar{\lambda} \amp -1 \\ -\lambda \amp 1 \end{matrix} \right]. \end{equation*}
Now
\begin{equation*} \mb{g} (t) = \twovec{g_1 (t)}{g_2 (t)} = P^{-1} \twovec{0}{f (t)} = \frac{i}{2\beta} \twovec{- f (t)}{f(t)} \end{equation*}
can be used with Theorem 5.4.1 to obtain a particular solution which is
\begin{equation} x_p (t) = \frac{i}{2 \beta} \left( - e^{\lambda t} \int e^{- \lambda t} f(t) \diff{t} + e^{\bar{\lambda} t} \int e^{- \bar{\lambda} t} f(t) \diff{t} \right).\tag{5.4.4} \end{equation}
Thus to obtain the solution, we must start by calculating
\begin{equation*} e^{\zeta t} \int e^{- \zeta t} f(t) \diff{t}. \end{equation*}
Now, given any \(f(t)\) this problem could become quite a difficult (even impossible) integral problem. However, one practical case to consider is when \(f(t)\) is an exponential \(e^{\omega t}\) itself for some complex number \(\omega\text{.}\) The reason this is practical is that such \(f(t)\) can be thought of as an external force acting on the spring-mass system and may involve some periodicity. So we will consider the case of
\begin{equation*} f(t) = e^{\omega t} \end{equation*}
and note we can obtain \(\sin\) and \(\cos\) functions by taking real and imaginary parts of this one. Writing this out, we obtain
\begin{equation*} e^{\zeta t} \int e^{- \zeta t} e^{\omega t} \diff{t} = e^{\zeta t} \int e^{(\omega - \zeta ) t} \diff{t}. \end{equation*}
It is here that we notice a very special situation can occur, namely the case when \(\omega = \zeta\text{.}\) First let us assume this is not the case and compute
\begin{equation*} e^{\zeta t} \int e^{- \zeta t} e^{\omega t} \diff{t} = \frac{e^{\omega t}}{\omega - \zeta} . \end{equation*}
Then the particular solution in equation (5.4.4) is
\begin{align} x_p (t) \amp = \frac{i}{2 \beta} \left( - \frac{e^{\omega t}}{\omega - \lambda} + \frac{e^{\omega t}}{\omega - \bar{\lambda}} \right), \tag{5.4.5}\\ \amp = \frac{i e^{\omega t}}{2 \beta} \left( \frac{\bar{\lambda} - \lambda}{(\omega - \lambda) (\omega - \bar{\lambda})} \right),\tag{5.4.6}\\ \amp = \frac{e^{\omega t}}{(\omega - \lambda) (\omega - \bar{\lambda})}. \tag{5.4.7} \end{align}
Adding a homogeneous term gives the general solution.
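Since \(\lambda\) and \(\bar{\lambda}\) are the roots of \(s^2 + \tilde{b} s + \tilde{k}\text{,}\) we have \((\omega - \lambda)(\omega - \bar{\lambda}) = \omega^2 + \tilde{b} \omega + \tilde{k}\text{,}\) and the particular solution in equation (5.4.7) can be checked symbolically. This SymPy sketch is not part of the text; `b`, `k`, and `omega` are generic symbols.

```python
# Check the non-resonant particular solution: since lam, lambar are the roots
# of s^2 + b*s + k, we have (w - lam)(w - lambar) = w^2 + b*w + k, so
# x_p = e^{w t} / (w^2 + b*w + k) should solve x'' + b x' + k x = e^{w t}.
import sympy as sp

t, w, b, k = sp.symbols('t omega b k')
xp = sp.exp(w*t) / (w**2 + b*w + k)
residual = sp.simplify(xp.diff(t, 2) + b*xp.diff(t) + k*xp - sp.exp(w*t))
print(residual)   # expect 0
```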
The case when \(\omega = \lambda\) or \(\omega = \bar{\lambda}\) (usually when no damping is present and \(\omega\) is purely imaginary) is called pure resonance. Let's assume \(\omega = \lambda\) so that equation (5.4.4) gives us the solution
\begin{align*} x_p (t) \amp = \frac{i}{2\beta} \left( - e^{\lambda t} t + \frac{e^{\lambda t}}{\lambda - \bar{\lambda}} \right), \\ \amp = \frac{e^{\lambda t}\left( 1 - 2\beta i t \right)}{4\beta^2}. \end{align*}
To understand this solution, let's consider when there is no damping so that \(\lambda = i \beta\text{,}\) and take the real part of the forcing term, \(\operatorname{Re} (f(t)) = \cos (\beta t)\text{.}\) Taking the real part of our solution then produces something shocking:
\begin{equation*} \operatorname{Re} (x_p (t)) = \frac{1}{4\beta^2} \cos (\beta t) + \frac{t \sin (\beta t )}{2 \beta }. \end{equation*}
Why is this so shocking? Well, observe that there is a \(t\) in the amplitude of the \(\sin\) function, which means that this solution oscillates but is unbounded. This resonance occurs when the intrinsic frequency of the spring matches the forcing frequency, and it explains many tragic disasters (like suspension bridges being destroyed by wind blowing at certain frequencies). Examining the prior solutions shows that all other solutions (with bounded forcing term) are in fact bounded, and most experience exponential decay (which is certainly good for bridges and other things!).
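As a sanity check (not part of the text), SymPy confirms that this unbounded function really does solve the undamped, forced equation \(x^{\prime\prime} + \beta^2 x = \cos (\beta t)\text{,}\) which is the real form of the system when \(\tilde{b} = 0\) and \(\tilde{k} = \beta^2\text{:}\)

```python
# With no damping, lambda = i*beta and the real equation is
# x'' + beta^2 x = cos(beta t).  Verify that the resonant solution
# cos(beta t)/(4 beta^2) + t sin(beta t)/(2 beta) satisfies it.
import sympy as sp

t = sp.symbols('t')
beta = sp.symbols('beta', positive=True)
xp = sp.cos(beta*t)/(4*beta**2) + t*sp.sin(beta*t)/(2*beta)
residual = sp.simplify(xp.diff(t, 2) + beta**2*xp - sp.cos(beta*t))
print(residual)   # expect 0
```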

Subsection 5.4.1 Higher order linear ODE’s

While we solved the spring-mass system as a first order system of differential equations, it is in fact easier to find these solutions if we consider the whole equation as a single differential operator equation. This generalizes to all higher order linear differential equations with constant coefficients and we take a moment to explain this more elementary approach.
First, we make a definition.

Definition 5.4.5.

For an inner product space \(V\text{,}\) a path \(\mb{x} : I \to V\) is called smooth if tangent vectors exist to all orders. The vector space of smooth paths is denoted \(C^\infty (I, V)\text{.}\)
As infinite dimensional vector spaces go, \(C^\infty (I, V)\) is not optimal and is often replaced with a Hilbert space. Nevertheless, for our purpose, it will do just fine. We will consider the case where \(V = \mathbb{C}\text{.}\) In this case, we have the linear transformation
\begin{equation*} \frac{d}{dt} : C^\infty (I , \mathbb{C} ) \to C^\infty (I, \mathbb{C} ) . \end{equation*}
This is the most basic of differential operators, but perhaps it is a new experience to consider this as a linear transformation. If we do, then we note that the eigenvalues are in fact all complex numbers! Indeed, taking \(\lambda \in \mathbb{C}\) we have the eigenfunction
\begin{equation*} e^{\lambda t} \end{equation*}
which spans the \(\lambda\)-eigenspace of the derivative. How can this be used? Well, let’s look at our homogeneous higher order differential equation again
\begin{equation*} x^{(n)} (t) + a_{n - 1}x^{(n - 1)} (t) + \cdots + a_1 x^\prime (t) + a_0 x(t) = 0 \end{equation*}
and see that if the characteristic equation
\begin{equation*} s^n + a_{n - 1} s^{n - 1} + \cdots + a_1 s + a_0 = 0 \end{equation*}
has solutions \(\lambda_1, \ldots, \lambda_n\) then our equation becomes
\begin{equation*} \left( \frac{d}{dt} - \lambda_1 I \right) \cdots \left( \frac{d}{dt} - \lambda_n I \right) x (t) = 0. \end{equation*}
The expression
\begin{equation*} \left( \frac{d}{dt} - \lambda_1 I \right) \cdots \left( \frac{d}{dt} - \lambda_n I \right) \end{equation*}
is a linear transformation from \(C^{\infty} (I, \mathbb{C})\) to itself, and thus solutions to the homogeneous equation are just elements of the kernel of this transformation. If the \(\lambda_i\) are distinct, then it is not hard to see that these are just the eigenfunctions of the derivative
\begin{equation*} e^{\lambda_1 t}, \ldots, e^{\lambda_n t}. \end{equation*}
In the case when there is multiplicity, we have a factor of the differential operator of the form
\begin{equation*} \left( \frac{d}{dt} - \lambda I \right)^k . \end{equation*}
Finding vectors in the kernel of this operator is precisely the same as finding generalized \(\lambda\)-eigenvectors of \(\frac{d}{dt}\text{.}\) In fact, the space of generalized \(\lambda\)-eigenvectors is spanned by
\begin{equation*} e^{\lambda t}, t e^{\lambda t}, \frac{t^2}{2!} e^{\lambda t}, \ldots \end{equation*}
so this derivative operator not only has every complex number as an eigenvalue, but every eigenvalue has an infinite dimensional generalized eigenspace! Of course, only the first \(k\) generalized eigenvectors solve the equation
\begin{equation*} \left( \frac{d}{dt} - \lambda I \right)^k x (t) = 0. \end{equation*}
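This claim is easy to test symbolically. The sketch below (with a hypothetical multiplicity \(k = 3\text{,}\) not from the text) applies \(\left( \frac{d}{dt} - \lambda I \right)^k\) to the generalized eigenfunctions and shows that exactly the first \(k\) are annihilated:

```python
# Apply (d/dt - lam)^k to t^j e^{lam t}/j! and check which are annihilated.
import sympy as sp

t, lam = sp.symbols('t lam')
k = 3   # hypothetical multiplicity

def apply_op(expr, times):
    """Apply (d/dt - lam*I) the given number of times."""
    for _ in range(times):
        expr = sp.diff(expr, t) - lam*expr
    return sp.simplify(expr)

# generalized eigenfunctions t^j e^{lam t} / j!
funcs = [t**j * sp.exp(lam*t) / sp.factorial(j) for j in range(5)]
results = [apply_op(f, k) for f in funcs]
print(results)   # the first k entries are 0, the rest are not
```

Each application of the operator lowers the power of \(t\) by one, so after \(k\) applications only the functions with \(j \lt k\) have been sent to zero.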
Thus if the characteristic polynomial is
\begin{equation*} (s - \lambda_1)^{k_1} \cdots (s - \lambda_r)^{k_r} \end{equation*}
then the general solution to the homogeneous differential equation (as a complex valued function) is
\begin{align*} x (t) \amp = e^{\lambda_1 t} \left( C_{1,0} + C_{1,1} t + \cdots + C_{1, k_1 - 1} \frac{t^{k_1 - 1}}{(k_1 - 1)!} \right) + \cdots\\ \amp \cdots + e^{\lambda_r t} \left( C_{r,0} + C_{r,1} t + \cdots + C_{r, k_r - 1} \frac{t^{k_r - 1}}{(k_r - 1)!} \right). \end{align*}
Now, the non-homogeneous case with eigenfunction \(f(t) = e^{\omega t}\) looks like
\begin{equation} \left( \frac{d}{dt} - \lambda_1 I \right)^{k_1} \cdots \left( \frac{d}{dt} - \lambda_r I \right)^{k_r} x (t) = e^{\omega t} .\tag{5.4.8} \end{equation}
If \(\omega\) is not one of the \(\lambda_i\text{,}\) one sees that
\begin{equation} x_p (t) = \frac{e^{\omega t}}{(\omega - \lambda_1)^{k_1} \cdots (\omega - \lambda_r)^{k_r} }\tag{5.4.9} \end{equation}
gives a solution. Using Corollary 5.4.2 gives the general solution in this case.
When \(\omega\) does occur as one of the \(\lambda_i\) (which is the case of resonance), we must use the appropriate generalized \(\omega\)-eigenfunction. Suppose, for example, that \(\omega = \lambda_1\text{.}\) Then
\begin{equation} x_p (t) = \frac{t^{k_1} e^{\omega t}}{k_1! \, (\omega - \lambda_2)^{k_2} \cdots (\omega - \lambda_r)^{k_r}}\tag{5.4.10} \end{equation}
gives a solution.
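A symbolic spot check of this formula (with hypothetical multiplicities \(k_1 = 2\) and \(k_2 = 1\text{,}\) not from the text):

```python
# Verify (5.4.10) for the operator (d/dt - lam1)^2 (d/dt - lam2) with w = lam1:
# x_p = t^2 e^{w t} / (2! (w - lam2)) should be mapped to e^{w t}.
import sympy as sp

t, lam1, lam2 = sp.symbols('t lam1 lam2')
w = lam1             # resonance: forcing exponent equals a root
k1, k2 = 2, 1        # hypothetical multiplicities

def apply_factor(expr, lam, k):
    """Apply (d/dt - lam*I) k times."""
    for _ in range(k):
        expr = sp.diff(expr, t) - lam*expr
    return expr

xp = t**k1 * sp.exp(w*t) / (sp.factorial(k1) * (w - lam2)**k2)
lhs = apply_factor(apply_factor(xp, lam1, k1), lam2, k2)
residual = sp.simplify(lhs - sp.exp(w*t))
print(residual)   # expect 0
```

Because the factors of the operator commute, the order in which they are applied does not matter.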
These general solutions then lead to what is called the method of undetermined coefficients. In truth, this is simply writing the general solution and solving the linear algebra problem presented by the initial conditions. Let us work through an example.

Example 5.4.6. Method of undetermined coefficients.

Consider the differential equation
\begin{equation*} x^{\prime \prime \prime} (t) - x^{\prime \prime} (t) + x^\prime (t) - x (t) = \sin (t) \end{equation*}
with initial conditions \(x (0) = 0\text{,}\) \(x^\prime (0) = 1/4\) and \(x^{\prime \prime} (0) = 1/2\text{.}\) We calculate and factor the characteristic polynomial to see
\begin{equation*} s^3 - s^2 + s - 1 = (s - 1) (s^2 + 1) = (s - 1) (s - i ) (s + i ). \end{equation*}
So the homogeneous solutions can be written in the form
\begin{equation*} x_h (t) = C_1 e^{i t} + C_2 e^{ - i t} + C_3 e^{t}. \end{equation*}
Now, we take a look at the inhomogeneous term \(\sin (t)\) and use our impressive knowledge to suggestively rewrite it
\begin{equation*} \sin (t) = \frac{i }{2} \left( e^{-it} - e^{it} \right) = \frac{i }{2} e^{-it} - \frac{i }{2} e^{it} . \end{equation*}
We observe that the functions \(e^{it}\) and \(e^{-it}\) are \(i\) and \((-i)\)-eigenfunctions for the derivative, and since \(i\) and \(-i\) are roots of the characteristic polynomial, we are in the resonant case. Applying equation (5.4.10) and linearity gives the particular solution,
\begin{align*} x_p (t) \amp = \frac{i}{2} \frac{t e^{-it}}{(-i - 1)(-i - i)} - \frac{i}{2} \frac{t e^{it}}{(i - 1)(i + i)}, \\ \amp = \frac{t}{4} \left( \frac{e^{-it}}{1 + i} + \frac{e^{it}}{1 - i} \right),\\ \amp = \frac{t}{2} \operatorname{Re} \left( \frac{e^{-it}}{1 + i} \right), \\ \amp = \frac{t}{4} \operatorname{Re} \left( (1 - i ) e^{-it} \right), \\ \amp = \frac{t}{4} \operatorname{Re} \left( \cos (t) - i \sin (t) - i \cos (t) - \sin (t) \right), \\ \amp = \frac{t \cos (t) - t \sin (t)}{4}. \end{align*}
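A quick SymPy verification (not part of the text) that this particular solution satisfies the differential equation:

```python
# Check that x_p = (t cos t - t sin t)/4 solves x''' - x'' + x' - x = sin t.
import sympy as sp

t = sp.symbols('t')
xp = (t*sp.cos(t) - t*sp.sin(t)) / 4
residual = sp.simplify(xp.diff(t, 3) - xp.diff(t, 2) + xp.diff(t) - xp - sp.sin(t))
print(residual)   # expect 0
```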
Thus the general solution is
\begin{equation*} x (t) = x_h (t) + x_p (t) = C_1 e^{i t} + C_2 e^{ - i t} + C_3 e^{t} + \frac{t \cos (t) - t \sin (t)}{4}. \end{equation*}
To find our actual solution we simply apply initial conditions and see
\begin{align*} C_1 + C_2 + C_3 \amp = x(0) = 0,\\ iC_1 - iC_2 + C_3 + 1/4 \amp = x^\prime (0) = 1/4,\\ -C_1 - C_2 + C_3 - 1/2 \amp = x^{\prime \prime} (0) = 1/2. \end{align*}
This is a linear system which can be written as
\begin{equation*} \left[ \begin{matrix} 1 \amp 1 \amp 1 \\ i \amp -i \amp 1 \\ -1 \amp -1 \amp 1 \end{matrix} \right] \threevec{C_1}{C_2}{C_3} = \threevec{0}{0}{1}. \end{equation*}
Solving this system gives
\begin{equation*} \threevec{C_1}{C_2}{C_3} = \threevec{\frac{-1 + i}{4}}{\frac{-1 - i}{4}}{\frac{1}{2}} . \end{equation*}
Putting this into our general solution and simplifying gives the real function
\begin{equation*} x (t) = \frac{(t - 2)\cos (t) - (t + 2) \sin (t) + 2e^t}{4} . \end{equation*}

Exercises 5.4.2 Exercises

1.

Find the solution to the non-homogeneous linear system
\begin{align*} x_1^\prime (t) \amp = -3 x_1(t) - 6 x_2(t) + 2t, \\ x_2^\prime (t) \amp = 2 x_1 (t) + 4 x_2 (t) - t \end{align*}
with initial conditions \(x_1(0) = 1 = x_2(0)\text{.}\)

2.

Show that when \(\omega\) is distinct from \(\lambda_1, \ldots, \lambda_r\text{,}\) \(x_p (t)\) in equation (5.4.9) solves the non-homogeneous equation (5.4.8).

3.

Show that when \(\omega = \lambda_1\text{,}\) \(x_p (t)\) in equation (5.4.10) solves the non-homogeneous equation (5.4.8).

4.

Solve the differential equation
\begin{equation*} x^{\prime \prime} (t) + x^\prime (t) + x(t) = \cos \left(t \right). \end{equation*}
if \(x (0) = x^\prime (0) = 0\text{.}\)