
Section 5.3 Linear Constant Coefficient ODE II

Having solved homogeneous constant coefficient linear ODE's in principle, we now consider them in practice. It will be helpful for a student to consult online resources and programs, which are plentiful, to visualize the examples in this section. We will approach these equations both computationally and geometrically. Some example applications will be mentioned toward the end of this section and in the exercises. For each solution, we will want to accomplish the following steps in order to feel good about our final assessment.
  1. Translate the differential equation or equations into a single matrix differential equation
    \begin{equation*} \dot{\mb{x}} = A \mb{x} . \end{equation*}
  2. Draw the vector field or graph it with a computer (or obtain it by understanding the Jordan Normal Form of \(A\)).
  3. Find a basis for which \(A\) is in Jordan Normal Form.
  4. Use Theorem 5.2.5 to write the general solution to the equation.
  5. Given an initial condition
    \begin{equation*} \mb{x} (0) = \threevec{C_1}{\vdots}{C_n} \end{equation*}
    find the particular solution to the equation.
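For a diagonalizable matrix these steps reduce to an eigen-decomposition; the sketch below, in Python with numpy, carries them out for the matrix of Example 5.3.2 (the function name is our own):

```python
import numpy as np

# Step 1: encode the system as a matrix differential equation x' = A x.
A = np.array([[-4.0, -3.0],
              [6.0, 5.0]])

# Steps 2-3: find an eigenbasis; the columns of P are eigenvectors.
eigenvalues, P = np.linalg.eig(A)

# Steps 4-5: the general solution is x(t) = P diag(e^{lambda_i t}) P^{-1} x(0),
# so a particular solution only requires the initial condition x(0).
def particular_solution(x0, t):
    return P @ np.diag(np.exp(eigenvalues * t)) @ np.linalg.inv(P) @ x0

x0 = np.array([1.0, 0.0])
xt = particular_solution(x0, 0.5)
```

This sketch assumes \(A\) is diagonalizable with real eigenvalues; the later examples show what to do when it is not.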
Now let us go over several examples.

Example 5.3.1. One real dimensional equation.

We first saw this case back in Section 1.3. Consider the equation
\begin{equation*} f^\prime (t) = 3 f(t). \end{equation*}
Here, our vector space is just \(\mathbb{R}\) itself. The vector field is given by
\begin{equation*} \mb{F} (x) = 3x. \end{equation*}
Geometrically, this means at \(x\text{,}\) there is a vector pointing away from the origin with three times the magnitude of \(x\text{.}\) Also, our matrix \(A\) is the \(1 \times 1\) matrix \([3]\text{.}\) Of course, we learn in calculus (or in Section 1.3) that
\begin{equation*} f(t) = C_0 e^{3 t} \end{equation*}
solves this equation generally with initial condition \(f(0) = C_0\text{.}\) Were we to think of this in terms of Jordan blocks, we would note that \([3]\) is a Jordan block \(J_{3, 1}\) itself and the solution above is just \(C_0 \mb{x}_0 (t) = C_0 e^{\lambda t}\) with \(\lambda = 3\text{.}\) One thing to note about our solution is that it experiences exponential growth as \(t \to \infty\text{.}\) Were we to start with the equation \(f^\prime (t) = -3 f(t)\) instead, we would see exponential decay. Understanding the long term behavior of a solution to a differential equation gives valuable qualitative information about the system you are studying.
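A quick numerical check of this solution (with an arbitrary choice of \(C_0\)) confirms that \(f^\prime (t) = 3 f(t)\) via a central difference approximation:

```python
import math

# f(t) = C0 * e^{3t} should satisfy f'(t) = 3 f(t) for any constant C0.
C0 = 2.0
def f(t):
    return C0 * math.exp(3 * t)

t, h = 0.7, 1e-6
derivative = (f(t + h) - f(t - h)) / (2 * h)  # central difference for f'(t)
assert abs(derivative - 3 * f(t)) < 1e-3
```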

Example 5.3.2. Two real dimensional equation.

Now let us consider an authentically higher dimensional example.
\begin{align*} x^\prime (t) \amp = -4 x(t) - 3 y(t), \\ y^\prime (t) \amp = 6 x (t) + 5 y(t). \end{align*}
Our first task in this linear ODE is to write it as a single matrix ODE. We can do this by looking at the coefficients on the right and making a matrix out of them as in
\begin{equation*} A = \left[ \begin{matrix} -4 \amp -3 \\ 6 \amp 5 \end{matrix} \right]. \end{equation*}
Then if we take
\begin{equation*} \mb{x} (t) = \twovec{x (t)}{y (t)} \end{equation*}
we obtain the equation
\begin{equation*} \mb{x}^\prime (t) = A \mb{x} (t). \end{equation*}
The right hand side is indeed our vector field
\begin{equation*} \mb{F} (\mb{x} ) = A \mb{x} \end{equation*}
and to obtain a computational advantage and a geometric picture, it will help to diagonalize \(A\) (if possible). For this, we start by taking the characteristic polynomial of \(A\) (which we will write now with variable \(s\) instead of \(t\) to avoid confusion with \(\mb{x} (t)\)) and find
\begin{equation*} p_A (s) = \det \left[ \begin{matrix} s +4 \amp 3 \\ -6 \amp s - 5 \end{matrix} \right] = s^2 - s - 2 = (s - 2) (s + 1). \end{equation*}
Thus we see two distinct real eigenvalues and can confidently diagonalize \(A\text{.}\) Indeed, working through this gives the \((-1)\)-eigenvector \(\mb{v}_1\) and \(2\)-eigenvector \(\mb{v}_2\) which form an eigenbasis
\begin{equation*} \mathcal{B} = \left\{ \mb{v}_1 , \mb{v}_2 \right\} = \left\{ \twovec{1}{-1}, \twovec{-1}{2} \right\}. \end{equation*}
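One can verify these eigenpairs directly; here is a quick check in Python with numpy:

```python
import numpy as np

# Verify the claimed eigenpairs of A from Example 5.3.2.
A = np.array([[-4, -3],
              [6, 5]])
v1 = np.array([1, -1])   # claimed (-1)-eigenvector
v2 = np.array([-1, 2])   # claimed 2-eigenvector

assert np.allclose(A @ v1, -1 * v1)
assert np.allclose(A @ v2, 2 * v2)
```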
Why is all this so helpful? Well, now we can picture the vector field very clearly. It will take vectors on the \((-1)\) eigenspace (multiples of \(\mb{v}_1\)) and reverse their direction. On the \(2\)-eigenspace though (multiples of \(\mb{v}_2\)), the vectors will be doubled in scale and point in the same direction. The remaining vectors can be seen to be a linear combination of these two. This geometry will be sketched in class and gives a fantastically simple picture of what our solutions ought to look like.
But what about computing solutions? Well, now that we have an eigenbasis we can write \(\mb{x}\) in terms of the eigenbasis
\begin{equation*} \mb{x} = \twovec{\tilde{x} (t)}{\tilde{y} (t)}_\mathcal{B} = \tilde{x} (t) \mb{v}_1 + \tilde{y} (t) \mb{v}_2 = \twovec{\tilde{x} (t) - \tilde{y} (t)}{2 \tilde{y} (t) - \tilde{x} (t)}. \end{equation*}
But, of course, in this basis the differential equation becomes much simpler
\begin{align*} \twovec{\tilde{x}^\prime (t)}{\tilde{y}^\prime (t)}_\mathcal{B} \amp = \dot{\mb{x}} (t), \\ \amp = A \mb{x}, \\ \amp = A \left( \tilde{x} (t) \mb{v}_1 + \tilde{y} (t) \mb{v}_2 \right), \\ \amp = \tilde{x} (t) A \mb{v}_1 + \tilde{y} (t) A \mb{v}_2, \\ \amp = - \tilde{x} (t) \mb{v}_1 + 2 \tilde{y} (t) \mb{v}_2, \\ \amp = \twovec{ - \tilde{x} (t)}{2 \tilde{y} (t)}_{\mathcal{B}} . \end{align*}
So in particular, our linear system breaks into two independent equations
\begin{align*} \tilde{x}^\prime (t) \amp = - \tilde{x} (t), \\ \tilde{y}^\prime (t) \amp = 2 \tilde{y} (t). \end{align*}
The solutions to these equations are similar to Example 5.3.1 and we obtain
\begin{align*} \tilde{x} (t) \amp = C_1 e^{-t},\\ \tilde{y} (t) \amp = C_2 e^{2t} \end{align*}
with initial condition
\begin{equation*} \mb{x} (0) = \twovec{C_1}{C_2}_\mathcal{B}. \end{equation*}
Of course, while the solution is especially pleasant in the eigenbasis, the original problem came to us in terms of the standard basis, so we must translate our solution into the usual set of coordinates in order to please our engineering colleagues. If the initial condition were given in the standard basis as
\begin{equation*} \mb{x} (0) = \twovec{A}{B} \end{equation*}
we would need the change of basis matrix \(\cob{1}{\mathcal{C}}{\mathcal{B}}\) to find \(C_1\) and \(C_2\) as above. This is just the inverse of the matrix with columns equal to \(\mb{v}_1\) and \(\mb{v}_2\) so it is
\begin{equation*} Q = \left[ \begin{matrix} 2 \amp 1 \\ 1 \amp 1 \end{matrix} \right] \end{equation*}
and we have
\begin{equation*} \twovec{C_1}{C_2} = Q \twovec{A}{B} = \twovec{2A + B}{A + B} . \end{equation*}
Putting all of this together, we obtain the general solution with respect to the standard basis.
\begin{align} \mb{x} (t) \amp = \twovec{\tilde{x} (t) - \tilde{y} (t)}{ - \tilde{x} (t) + 2 \tilde{y} (t)}, \tag{5.3.1}\\ \amp = \twovec{C_1 e^{-t} - C_2 e^{2t} }{ - C_1 e^{-t} + 2 C_2 e^{2t} }, \tag{5.3.2}\\ \amp = \twovec{( 2A + B) e^{-t} - ( A + B ) e^{2t} }{ - (2A + B) e^{-t} + 2 (A + B) e^{2t} }. \tag{5.3.3} \end{align}
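As a sanity check, formula (5.3.3) can be tested numerically: it should satisfy the initial condition and the differential equation for any choice of \(A\) and \(B\text{.}\) The sketch below (the names are our own) does exactly this:

```python
import numpy as np

# Check the general solution (5.3.3): for any initial condition (A, B), it
# should satisfy x(0) = (A, B) and x'(t) = M x(t), where M is the
# coefficient matrix of Example 5.3.2.
M = np.array([[-4.0, -3.0],
              [6.0, 5.0]])

def x(t, A, B):
    c1, c2 = 2*A + B, A + B            # C1, C2 from the change of basis
    return np.array([c1*np.exp(-t) - c2*np.exp(2*t),
                     -c1*np.exp(-t) + 2*c2*np.exp(2*t)])

A0, B0 = 1.3, -0.4
assert np.allclose(x(0.0, A0, B0), [A0, B0])

t, h = 0.9, 1e-6
deriv = (x(t + h, A0, B0) - x(t - h, A0, B0)) / (2*h)  # central difference
assert np.allclose(deriv, M @ x(t, A0, B0), atol=1e-4)
```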
To see Example 5.3.2 geometrically, you may evaluate the sage cell to see an illustration.
The \((-1)\)-eigenspace is drawn as the red line and the \(2\)-eigenspace is the purple line. Along the red line the solution follows a path to the origin because, on this axis in the eigenbasis coordinates, the solution looks like
\begin{equation*} \mb{x} (t) = \twovec{C e^{-t}}{0}_{\mathcal{B}} . \end{equation*}
On the purple line however, the solution follows the path away from the origin with
\begin{equation*} \mb{x} (t) = \twovec{0}{D e^{2t}}_{\mathcal{B}} . \end{equation*}
In between these axes a solution near the red line will flow toward the origin and eventually curve to follow the purple line out to infinity.
This last example shows how easy things become if the matrix is diagonalizable with real eigenvalues. In fact, it is an illustration of Theorem 5.2.5 for the case when there are two \(1 \times 1\) Jordan blocks. However, since we are also acquainted with complex numbers, we note that differential equations involving real paths can sometimes be helped along by thinking of them as landing in a complex vector space instead. The next example illustrates this point.

Example 5.3.3. Complex eigenvalues in the equation.

Suppose we take the linear differential equation
\begin{align*} x^\prime (t) \amp = 2x (t) + y(t), \\ y^\prime (t) \amp = -2 x (t) + 4 y(t). \end{align*}
Writing this out we find the matrix
\begin{equation*} A = \left[ \begin{matrix} 2 \amp 1 \\ -2 \amp 4 \end{matrix} \right] \end{equation*}
and obtain the equation
\begin{equation*} \mb{x}^\prime (t) = A \mb{x} (t). \end{equation*}
Again, the right hand side is the vector field
\begin{equation*} \mb{F} (\mb{x} ) = A \mb{x} \end{equation*}
and it would be nice to diagonalize \(A\) to get a picture of this field. However, computing the characteristic polynomial gives
\begin{equation*} p_A (s) = \det \left[ \begin{matrix} s -2 \amp -1 \\ 2 \amp s - 4 \end{matrix} \right] = s^2 - 6 s + 10. \end{equation*}
One can use the quadratic formula here and see that
\begin{equation*} p_A (s) = (s - (3 + i)) (s - (3 - i)) \end{equation*}
so that the roots of \(p_A (s)\) are complex numbers and \(A\) cannot be diagonalized as a real matrix. Now, one of the benefits of having real eigenvalues and eigenvectors was that we obtained a nice picture of the vector field. There is something to be said in the case of a complex eigenvector as well, which is related to several of the exercises that you have worked through. In particular, if there is an eigenvalue of the form \(\lambda = re^{i\theta}\text{,}\) then one will see a \(\theta\) rotation in the vectors of the vector field (in some coordinate system). For now, we will leave the illustration of this field to a computer and come back to the geometry once we’ve found the solution.
To compute the general solution, there are several ways to proceed, but we will take a principled approach and simply say that our path \(\mb{x}\) was a function to \(\mathbb{C}^2\) all along so that
\begin{equation*} \mb{x} : I \to \mathbb{C}^2. \end{equation*}
Now we can diagonalize because the eigenvalues are distinct. A bit of computation gives the eigenbasis
\begin{equation*} \mathcal{B} = \left\{ \mb{v}_1 , \mb{v}_2 \right\} = \left\{ \twovec{i}{-1 + i}, \twovec{-i}{-1 - i} \right\}. \end{equation*}
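Again these eigenpairs can be verified directly with a short computation in Python with numpy:

```python
import numpy as np

# Verify the complex eigenpairs of A from Example 5.3.3.
A = np.array([[2, 1], [-2, 4]], dtype=complex)
v1 = np.array([1j, -1 + 1j])
v2 = np.array([-1j, -1 - 1j])

assert np.allclose(A @ v1, (3 + 1j) * v1)
assert np.allclose(A @ v2, (3 - 1j) * v2)
# The eigenvectors come in a conjugate pair.
assert np.allclose(np.conj(v1), v2)
```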
Again, we can write our solution in terms of this basis
\begin{align} \mb{x} \amp = \twovec{\tilde{x} (t)}{\tilde{y} (t)}_\mathcal{B} \tag{5.3.4}\\ \amp = \tilde{x} (t) \mb{v}_1 + \tilde{y} (t) \mb{v}_2 \tag{5.3.5}\\ \amp = \twovec{i \tilde{x} (t) - i \tilde{y} (t)}{(-1 + i) \tilde{x} (t) + (-1 - i) \tilde{y} (t)} \tag{5.3.6} \end{align}
and again, in this basis the differential equation becomes much simpler
\begin{equation*} \twovec{\tilde{x}^\prime (t)}{\tilde{y}^\prime (t)} = \twovec{ (3 + i) \tilde{x} (t)}{(3 - i) \tilde{y} (t)}_{\mathcal{B}} . \end{equation*}
This time, the two independent equations are
\begin{align*} \tilde{x}^\prime (t) \amp = (3 + i) \tilde{x} (t), \\ \tilde{y}^\prime (t) \amp = (3 - i) \tilde{y} (t). \end{align*}
It is here that Exercise 5.1.4.3 comes into view and we realize that, in fact, we’ve already solved these equations and obtained
\begin{align*} \tilde{x}(t) \amp = C_1 e^{(3 + i)t}, \\ \tilde{y} (t) \amp = C_2 e^{(3 - i)t}. \end{align*}
So, were we inclined to write our solutions in terms of the basis \(\mathcal{B}\) in \(\mathbb{C}^2\text{,}\) we would have solved the differential equation with
\begin{gather*} \mb{x} (t) = \twovec{ C_1 e^{(3 + i)t}}{C_2 e^{(3 - i)t}}_{\mathcal{B}} \text{ with initial condition } \mb{x} (0) = \twovec{C_1}{C_2}_{\mathcal{B}}. \end{gather*}
I can hear the overwhelming chorus of objections to this solution from engineer and mathematician alike. After all, the constants \(C_1\) and \(C_2\) are possibly complex numbers and the function is too. We started in the real plane and have ended in what appears to be a terrifying \(4\)-dimensional mess (since \(\mathbb{C}^2\) is \(4\) real dimensions). Well, I contend that appearances can be deceiving! Let’s unwind this a bit with a few simple observations.
The first thing to recognize about our solution is that the basis \(\mathcal{B}\) which we chose had a hidden symmetry. Namely, if we take the complex conjugate of \(\mb{v}_1\) we obtain \(\mb{v}_2\) so that
\begin{equation*} \bar{\mb{v}}_1 = \mb{v}_2. \end{equation*}
Now, the solution we obtain for \(\mb{x}\) will be real, which means it equals its conjugate. Thus
\begin{align*} \tilde{x} (t) \mb{v}_1 + \tilde{y} (t) \mb{v}_2 \amp = \overline{\tilde{x} (t) \mb{v}_1 + \tilde{y} (t) \mb{v}_2}, \\ \amp = \overline{\tilde{x} (t)} \bar{\mb{v}}_1 + \overline{\tilde{y} (t)} \bar{\mb{v}}_2,\\ \amp = \overline{\tilde{y} (t)} \mb{v}_1 + \overline{\tilde{x} (t)} \mb{v}_2 . \end{align*}
But since our coefficients are unique, we have that
\begin{equation} \overline{\tilde{x} (t)} = \tilde{y} (t).\tag{5.3.7} \end{equation}
Putting our solution into this equation gives
\begin{equation*} \bar{C}_1 = C_2. \end{equation*}
Even more helpful, we note that for any complex number \(z\) we can get the real and imaginary parts of \(z\) by simply checking that
\begin{align*} \operatorname{Re} (z) \amp = \frac{1}{2} \left( z + \bar{z} \right), \\ \operatorname{Im} (z) \amp = - \frac{i}{2} \left( z - \bar{z} \right). \end{align*}
Using this, equations (5.3.6) and (5.3.7) give
\begin{equation*} \mb{x} (t) = \twovec{ -2 \operatorname{Im} (\tilde{x} (t))}{-2 \operatorname{Re} (\tilde{x} (t)) - 2\operatorname{Im} (\tilde{x} (t)) } = \twovec{ \operatorname{Im} ( -2\tilde{x} (t))}{ \operatorname{Re} (-2\tilde{x} (t)) + \operatorname{Im} (-2\tilde{x} (t)) }. \end{equation*}
Now, note that \(C_1\) is zero only for the zero solution, so assuming it is not zero, we can find \(a\) and \(b\) so that, writing \(-2 C_1\) in polar form,
\begin{equation*} -2C_1 = e^{a + bi}. \end{equation*}
This gives us a nice way to rewrite our solution as
\begin{equation*} -2\tilde{x} (t) = -2 C_1 e^{(3 + i)t} = e^{(3t + a) + (t + b)i} = e^{3t + a} \cos (t + b) + i e^{3t + a} \sin (t + b). \end{equation*}
Pulling out the real and imaginary parts, we obtain a very real looking solution
\begin{equation*} \mb{x} (t) = \twovec{e^{3t + a} \sin (t + b)}{ e^{3t + a} \cos (t + b) + e^{3t + a} \sin (t + b)}. \end{equation*}
To finish the solution, one would have to solve for \(a\) and \(b\) given an initial condition
\begin{equation*} \mb{x} (0) = \twovec{A}{B} \end{equation*}
but this will be left as a linear algebra exercise for the student (which can be done in multiple ways).
However, it is interesting to note here that the solution \(\mb{x} (t)\) can be written
\begin{equation*} \mb{x} (t) = e^{3t + a} \twovec{ \sin (t + b)}{ \cos (t + b) + \sin (t + b)}. \end{equation*}
The scaling factor increases exponentially which means that the solution will head off to infinity. What about the vector portion? Well, it is not hard to see that this is a parameterization of the conic section
\begin{equation*} 2x^2 - 2xy + y^2 = 1. \end{equation*}
A meticulous student will check and see that this equation is that of a (rotated) ellipse. So the solution is simply following a parameterization of an ellipse, but scaling it simultaneously and spiraling away from the origin.
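To see why, note that \(y - x = \cos (t + b)\) for this parameterization, so \(x^2 + (y - x)^2 = 1\text{,}\) which expands to the conic above. A numerical spot-check in Python:

```python
import math

# The path (sin s, cos s + sin s) should lie on the conic 2x^2 - 2xy + y^2 = 1,
# since y - x = cos s gives x^2 + (y - x)^2 = sin^2 s + cos^2 s = 1.
for s in [0.0, 0.3, 1.7, 2.9, 5.5]:
    x, y = math.sin(s), math.cos(s) + math.sin(s)
    assert abs(2*x*x - 2*x*y + y*y - 1.0) < 1e-12
```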
To see Example 5.3.3 geometrically, you may evaluate the sage cell.
Because the paths spiral outward exponentially, it may not be clear to a casual observer that there is a rotation.
We will generalize and summarize the previous result as a theorem.

Example 5.3.5. Another two dimensional equation.

Now let us consider a Jordan normal form example.
\begin{align*} x^\prime (t) \amp = -3 x(t) + y(t), \\ y^\prime (t) \amp = - x (t) - y(t). \end{align*}
Here our matrix is
\begin{equation*} A = \left[ \begin{matrix} -3 \amp 1 \\ -1 \amp -1 \end{matrix} \right]. \end{equation*}
And we obtain the equation
\begin{equation*} \mb{x}^\prime (t) = A \mb{x} (t). \end{equation*}
We check that
\begin{equation*} p_A (s) = \det \left[ \begin{matrix} s + 3 \amp -1 \\ 1 \amp s +1 \end{matrix} \right] = s^2 + 4s + 4 = (s + 2)^2 . \end{equation*}
Thus we see \(A\) has only \(-2\) as an eigenvalue. Since \(A\) is not a diagonal matrix, we may conclude that it is not diagonalizable (this conclusion is valid only because \(A\) has a single eigenvalue: a diagonalizable matrix with one eigenvalue would already be diagonal). We consider the matrix
\begin{equation*} N = A + 2I = \left[ \begin{matrix} -1 \amp 1 \\ -1 \amp 1 \end{matrix} \right] \end{equation*}
and generate a basis of the form \(\{ N \mb{v} , \mb{v} \}\) by taking \(\mb{v} = \mb{e}_1\text{.}\) Here we obtain
\begin{equation*} \mathcal{B} = \left\{ \mb{v}_0 , \mb{v}_1 \right\} = \left\{ \twovec{-1}{-1}, \twovec{1}{0} \right\}. \end{equation*}
Now we have a basis for which we can directly apply Theorem 5.2.5. Here we get
\begin{align*} \mb{x}_0 (t) \amp = e^{-2 t} \mb{v}_0 = \twovec{- e^{-2t}}{-e^{-2t}}, \\ \mb{x}_1 (t) \amp = e^{-2 t} t \mb{v}_0 + e^{-2t} \mb{v}_1 = \twovec{- e^{-2t}(t - 1) }{-e^{-2t} t }. \end{align*}
And the general solution with initial conditions in terms of the basis \(\mathcal{B}\) is
\begin{align*} \mb{x} (t) \amp = C_0 \mb{x}_0 (t) + C_1 \mb{x}_1 (t), \\ \amp = -e^{-2t} \twovec{C_0 + C_1 (t - 1)}{C_0 + C_1 t }, \end{align*}
with initial condition
\begin{equation*} \mb{x} (0) = \twovec{C_0}{C_1}_\mathcal{B}. \end{equation*}
If the initial condition were given in the standard basis as
\begin{equation*} \mb{x} (0) = \twovec{A}{B} \end{equation*}
we would need the change of basis matrix \(\cob{1}{\mathcal{C}}{\mathcal{B}}\) which is just the inverse of the matrix with columns equal to \(\mb{v}_0\) and \(\mb{v}_1\) so it is
\begin{equation*} Q = \left[ \begin{matrix} 0 \amp -1 \\ 1 \amp -1 \end{matrix} \right] \end{equation*}
and we have
\begin{equation*} \twovec{C_0}{C_1} = Q \twovec{A}{B} = \twovec{-B}{A - B} . \end{equation*}
Putting all of this together, we obtain the general solution with respect to the standard basis.
\begin{equation*} \mb{x} (t) = -e^{-2t} \twovec{ - B + (A - B) (t - 1)}{-B + (A - B) t } = -e^{-2t} \twovec{ (A - B) t - A }{(A - B) t -B} \end{equation*}
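As in the diagonalizable case, this formula can be checked numerically against the original system (the names below are our own):

```python
import numpy as np

# Verify the standard-basis solution for Example 5.3.5:
# x(t) = -exp(-2t) * ((A - B)t - A, (A - B)t - B)
# should satisfy x(0) = (A, B) and x'(t) = M x(t).
M = np.array([[-3.0, 1.0],
              [-1.0, -1.0]])

def x(t, A, B):
    return -np.exp(-2*t) * np.array([(A - B)*t - A, (A - B)*t - B])

A0, B0 = 0.8, -1.1
assert np.allclose(x(0.0, A0, B0), [A0, B0])

t, h = 0.6, 1e-6
deriv = (x(t + h, A0, B0) - x(t - h, A0, B0)) / (2*h)  # central difference
assert np.allclose(deriv, M @ x(t, A0, B0), atol=1e-4)
```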
As an important application of our general solution, we consider higher order homogeneous constant coefficient ODE’s. Suppose we are confronted with a scalar valued function \(x(t)\) of a variable \(t\text{,}\) but now with an equation
\begin{equation} x^{(n)} (t) + a_{n - 1}x^{(n - 1)} (t) + \cdots + a_1 x^\prime (t) + a_0 x(t) = 0 .\tag{5.3.8} \end{equation}
Here we recall that \(x^{(n)} (t)\) is the \(n\)-th derivative of \(x\) with respect to \(t\) (so we will assume our functions are differentiable enough). There are two equivalent ways of approaching the solution to this equation. The first and most common way is to guess the solution \(x (t) = e^{\lambda t}\) and solve the resulting polynomial equation
\begin{equation*} \lambda^n + a_{n - 1} \lambda^{n - 1} + \cdots + a_1 \lambda + a_0 = 0. \end{equation*}
This polynomial is called the characteristic polynomial of the ODE. Indeed, any root \(\lambda\) gives a corresponding solution \(e^{\lambda t}\) to the equation. If \(\lambda\) is a multiple root of the characteristic polynomial, then one guesses again with
\begin{equation*} x (t) = \frac{t^k e^{\lambda t}}{k!} \end{equation*}
and checks (so long as \(k\) is less than the multiplicity of the root) that this gives another solution. Cobbling these solutions together, one can solve equations where the initial conditions are not just \(x (0) = C_0\text{,}\) but also \(x^\prime (0) = C_1, \ldots, x^{(n - 1)} (0) = C_{n - 1}\text{.}\)
The solution can also be found using our tools, which has the added benefit of reducing the higher order equation to a first order system (a technique used frequently in classical mechanics). To force equation (5.3.8) into a first order equation, rewrite it as a collection of equations
\begin{align*} x^\prime_0 (t) \amp = x_1 (t), \\ x^\prime_1 (t) \amp = x_2 (t), \\ \vdots \amp \\ x^\prime_{n - 2} \amp = x_{n - 1} (t), \\ x_{n - 1}^\prime (t) \amp = -a_{n - 1} x_{n - 1} (t) - a_{n - 2} x_{n - 2} (t) - \cdots - a_1 x_1 (t) - a_0 x_0 (t). \end{align*}
Notice that if \(x (t) = x_0 (t)\text{,}\) then the first \((n - 1)\) equations are just saying that \(x_i (t) = x^{(i)} (t)\) for each \(1 \leq i \leq n - 1\text{.}\) The last equation is then just equation (5.3.8). Letting
\begin{equation*} \mb{x} (t) = \threevec{x_0 (t)}{\vdots}{x_{n - 1} (t)} \end{equation*}
and writing this as the single matrix equation
\begin{equation*} \dot{\mb{x}} = A \mb{x} \end{equation*}
forces us to use the matrix
\begin{equation*} A = \left[ \begin{matrix} 0 \amp 1 \amp 0 \amp \cdots \amp 0 \\ 0 \amp 0 \amp 1 \amp \cdots \amp 0 \\ \vdots \amp \ddots \amp \ddots \amp \ddots \amp \vdots \\ 0 \amp \cdots \amp 0 \amp 0 \amp 1 \\ - a_0 \amp - a_1 \amp -a_2 \amp \cdots \amp - a_{n - 1} \end{matrix} \right]. \end{equation*}
In Exercise 4.1.4.4, you showed that the characteristic polynomial of \(A\) is simply
\begin{equation*} p_A (s) = s^n + a_{n - 1} s^{n - 1} + \cdots + a_1 s + a_0. \end{equation*}
Thus the characteristic polynomial of the ODE is the same as the characteristic polynomial of our matrix. If \(\lambda_i\) is a root of \(p_A (s)\text{,}\) it is easy to check that
\begin{equation*} \mb{v}_i = \left[ \begin{matrix} 1 \\ \lambda_i \\ \vdots \\ \lambda_i^{n - 1} \end{matrix} \right] \end{equation*}
is a \(\lambda_i\)-eigenvector of \(A\text{.}\) In the event that \(\lambda_i\) has multiplicity \(k\text{,}\) one can also check that
\begin{equation} \mathcal{B}_i = \left\{ \left[ \begin{matrix} 1 \\ \lambda_i \\ \lambda_i^2 \\ \vdots \\ \lambda_i^{n - 2} \\ \lambda_i^{n - 1} \end{matrix} \right], \left[ \begin{matrix} 0 \\ 1 \\ 2 \lambda_i \\ \vdots \\ (n - 2) \lambda_i^{n - 3} \\ (n - 1) \lambda_i^{n - 2} \end{matrix} \right], \cdots , \left[ \begin{matrix} 0 \\ \vdots \\ 0 \\ \binom{k - 1}{k - 1} \\ \binom{k}{k - 1} \lambda_i \\ \vdots \\ \binom{n - 1}{k - 1} \lambda_i^{n - k} \end{matrix} \right] \right\}\tag{5.3.9} \end{equation}
is a basis for the generalized \(\lambda_i\)-eigenspace \(V_{\lambda_i}\text{.}\) A less complicated expression for \(\mathcal{B}_i\) (and more useful) is
\begin{equation*} \mathcal{B}_i = \left\{ \mb{v}_i , \frac{d}{d \lambda_i} \mb{v}_i, \frac{1}{2!} \left( \frac{d}{d \lambda_i} \right)^2 \mb{v}_i, \cdots , \frac{1}{(k - 1)!} \left( \frac{d}{d \lambda_i} \right)^{k - 1} \mb{v}_i \right\} \end{equation*}
Moreover, it is a basis of the form
\begin{equation} \mathcal{B}_i = \left\{ (A - \lambda_i I )^{k - 1} \mb{v}, \ldots, ( A - \lambda_i I) \mb{v}, \mb{v} \right\}\tag{5.3.10} \end{equation}
so that it may be used to give the Jordan normal form of \(A\) and via Theorem 5.2.5 to find the general solution.
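Both claims above, that the companion matrix has the stated characteristic polynomial and that \((1, \lambda_i, \ldots, \lambda_i^{n-1})\) is a \(\lambda_i\)-eigenvector, can be spot-checked numerically; here for a cubic with coefficients of our own choosing:

```python
import numpy as np

# Companion matrix of s^3 + a2 s^2 + a1 s + a0.
a0, a1, a2 = 2.0, -3.0, 5.0
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-a0, -a1, -a2]])

# np.poly returns characteristic polynomial coefficients, leading term first.
assert np.allclose(np.poly(A), [1.0, a2, a1, a0])

# A root lambda of p_A gives the eigenvector (1, lambda, lambda^2).
lam = np.roots([1.0, a2, a1, a0])[0]
v = np.array([1.0, lam, lam**2])
assert np.allclose(A @ v, lam * v)
```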
Let us give an example of this which we will study in more depth next section.

Example 5.3.6.

Consider the case of
\begin{equation*} m x^{\prime \prime} + b x^\prime + k x = 0 \end{equation*}
where \(k > 0\) and \(b > 0\text{.}\)
This differential equation describes the motion of a mass on the end of a spring. The mass is \(m\text{,}\) the spring constant is \(k\) (which describes how stiff the spring is), and the damping constant is \(b\) (which describes the amount of kinetic friction). Dividing by \(m\) gives
\begin{equation*} x^{\prime \prime} + \tilde{b} x^\prime + \tilde{k} x = 0 \end{equation*}
where \(\tilde{b} = \frac{b}{m}\) and \(\tilde{k} = \frac{k}{m}\text{.}\) Converting to a matrix equation \(\dot{\mb{x}} = A \mb{x}\) gives
\begin{equation*} A = \left[ \begin{matrix} 0 \amp 1 \\ - \tilde{k} \amp - \tilde{b} \end{matrix} \right] \end{equation*}
with characteristic polynomial
\begin{equation*} p_A (s) = s^2 + \tilde{b} s + \tilde{k}. \end{equation*}
Using the quadratic formula, we obtain the roots
\begin{equation*} \lambda = - \frac{\tilde{b}}{2} \pm \frac{\sqrt{\tilde{b}^2 - 4 \tilde{k}}}{2} \end{equation*}
There are two main cases (and one case we will ignore). Let
\begin{align*} \alpha \amp = \frac{\tilde{b}}{2}, \\ \beta \amp = \frac{\sqrt{ | \tilde{b}^2 - 4 \tilde{k} |}}{2} \end{align*}
and notice that \(\alpha \geq 0\) (with equality when there is no damping) and \(\beta \gt 0\) (except in the ignored borderline case \(\tilde{b} = 2 \sqrt{\tilde{k}}\)); when the roots are real we also have \(\alpha \gt \beta\text{.}\) In the first case we have \(\tilde{b} \lt 2 \sqrt{\tilde{k}}\text{,}\) in which case the roots of the characteristic polynomial are
\begin{equation*} \lambda = - \alpha + i \beta, \hspace{.5in} \bar{\lambda} = - \alpha - i \beta. \end{equation*}
with eigenbasis
\begin{equation} \mathcal{B} = \left\{ \mb{w}_1, \bar{\mb{w}}_1 \right\} = \left\{ \twovec{1}{\lambda} , \twovec{1}{\bar{\lambda}} \right\}.\tag{5.3.11} \end{equation}
Using Theorem 5.3.4, we see that solutions then are linear combinations of the solutions
\begin{align*} \mb{x}_R (t) = \operatorname{Re} \left( e^{-\alpha t + i \beta t} \mb{w}_1 \right) \amp = \twovec{e^{-\alpha t} \cos (\beta t)}{ - \alpha e^{-\alpha t} \cos (\beta t) - \beta e^{-\alpha t} \sin (\beta t)}, \\ \mb{x}_I (t) = \operatorname{Im} \left( e^{-\alpha t + i \beta t} \mb{w}_1 \right) \amp = \twovec{e^{-\alpha t} \sin (\beta t)}{ - \alpha e^{-\alpha t} \sin (\beta t) + \beta e^{-\alpha t} \cos (\beta t)}. \end{align*}
Recalling that the first coordinate is the scalar solution to the original second order differential equation, we obtain
\begin{equation*} x (t) = A e^{-\alpha t} \cos (\beta t) + B e^{-\alpha t} \sin (\beta t) \end{equation*}
with initial conditions
\begin{equation*} x (0) = A, \hspace{.5in}x^\prime (0) = - \alpha A + \beta B. \end{equation*}
Of course, if the point \((A, B)\) has polar coordinate \((r \cos \theta, r \sin \theta)\text{,}\) then the trigonometric sum formulas will give the simpler formula
\begin{equation*} x (t) = r e^{-\alpha t} \cos (\beta t - \theta ). \end{equation*}
Thus when \(\tilde{b} \lt 2 \sqrt{\tilde{k}}\) we see some oscillation in the spring-mass system. Notice that if there is no damping and \(\alpha = 0\text{,}\) then the spring-mass system simply oscillates with amplitude \(r\) and angular frequency \(\beta\) (often called \(\omega\) in applications). On the other hand, if there is damping, this oscillation’s amplitude experiences exponential decay.
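The amplitude-phase identity used here is just the cosine sum formula, which a short computation confirms numerically:

```python
import math

# With A = r cos(theta) and B = r sin(theta), the cosine sum formula gives
# A cos(beta t) + B sin(beta t) = r cos(beta t - theta).
r, theta, beta = 2.0, 0.7, 3.0
A, B = r * math.cos(theta), r * math.sin(theta)
for t in [0.0, 0.4, 1.3, 2.8]:
    lhs = A * math.cos(beta * t) + B * math.sin(beta * t)
    assert abs(lhs - r * math.cos(beta * t - theta)) < 1e-12
```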
The case where \(\tilde{b} \gt 2 \sqrt{\tilde{k}}\) gives the real roots
\begin{equation*} \lambda_1 = - \alpha + \beta, \hspace{.5in} \lambda_2 = - \alpha - \beta. \end{equation*}
One notes that both \(\lambda_1\) and \(\lambda_2\) are negative numbers (since \(\alpha > \beta\)). Thus here our solution is
\begin{equation*} x (t) = A e^{\lambda_1 t} + B e^{\lambda_2 t}. \end{equation*}
This solution experiences exponential decay and the spring-mass system is called overdamped. The reason is that the kinetic friction force overcomes the spring force, slowing the mass to a stop before oscillation can occur. The initial conditions here are
\begin{equation*} x (0) = A + B, \hspace{.5in} x^\prime (0) = \lambda_1 A + \lambda_2 B. \end{equation*}
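A quick numerical check of the overdamped case, with constants of our own choosing (\(\tilde{b} = 5\text{,}\) \(\tilde{k} = 4\)):

```python
import math

# Overdamped sanity check with b~ = 5, k~ = 4: the roots are
# lambda1 = -1 and lambda2 = -4, and
# x(t) = A e^{lambda1 t} + B e^{lambda2 t} solves x'' + 5 x' + 4 x = 0.
bt, kt = 5.0, 4.0
disc = math.sqrt(bt * bt - 4 * kt)
l1, l2 = -bt/2 + disc/2, -bt/2 - disc/2
assert abs(l1 + 1.0) < 1e-12 and abs(l2 + 4.0) < 1e-12

A, B, t = 1.5, -0.5, 0.8
x   = A * math.exp(l1 * t) + B * math.exp(l2 * t)
xp  = A * l1 * math.exp(l1 * t) + B * l2 * math.exp(l2 * t)
xpp = A * l1**2 * math.exp(l1 * t) + B * l2**2 * math.exp(l2 * t)
assert abs(xpp + bt * xp + kt * x) < 1e-10
```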

Exercises

1.

Go through the five steps for the system of differential equations
\begin{align*} x^\prime (t) \amp = -3 x(t) - 6 y(t), \\ y^\prime (t) \amp = 2 x (t) + 4 y(t). \end{align*}

2.

Go through the five steps for the system of differential equations
\begin{align*} x^\prime (t) \amp = 3 x(t) + 4 y(t), \\ y^\prime (t) \amp = - x (t) - y(t). \end{align*}

3.

Go through the five steps for the system of differential equations
\begin{align*} x^\prime (t) \amp = 5 x(t) + 2 y(t), \\ y^\prime (t) \amp = -4 x (t) + y(t). \end{align*}

4.

Give three linearly independent solutions to the differential equation
\begin{equation*} x^{\prime \prime \prime} (t) = x (t). \end{equation*}

5.

Verify the claim that the basis \(\mathcal{B}_i\) in equation (5.3.9) gives a Jordan block (i.e. is of the form in equation (5.3.10)).

6.

Find the motion \(x(t)\) of a spring-mass system with mass \(1\text{,}\) spring constant \(4\) and damping constant \(10\) assuming \(x(0) = 4\) and \(x^\prime (0) = 2\text{.}\)