Let \(A\) be a square matrix of dimension \(n \times n,\) \(\overrightarrow{v}\) be a non-zero column vector of dimension \(n,\) and \(\lambda\) be a scalar such that \[A\overrightarrow{v} = \lambda\overrightarrow{v}.\] We say \(\overrightarrow{v}\) is an eigenvector of \(A\) with eigenvalue \(\lambda.\)
In other words, an eigenvector \(\overrightarrow{v}\) of the matrix \(A\) is a non-zero vector for which multiplying by \(A\) has the same effect as scalar multiplication by some scalar \(\lambda.\)
If \(\overrightarrow{v}\) is an eigenvector, so is \(x\overrightarrow{v}\) for any non-zero scalar \(x,\) since multiplying both sides of an equation by a scalar preserves equality: \[A\overrightarrow{v} = \lambda\overrightarrow{v} \Rightarrow A(x\overrightarrow{v}) = \lambda(x\overrightarrow{v})\] Eigenvectors are considered distinct only if they are not scalar multiples of each other. For example, we don't consider \(\overrightarrow{v}\) and \(2\overrightarrow{v}\) to be different eigenvectors.
Let \[ A = \begin{bmatrix} 0 & 2 \\ 1 & 1 \end{bmatrix} \]
The column vector \(\overrightarrow{v} = < 1, 1 >\) is an eigenvector of \(A\) since multiplying by \(A\) is the same as multiplying by \(2.\) \[ A\overrightarrow{v} = \begin{bmatrix} 0 & 2 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 2 \end{bmatrix} = 2\overrightarrow{v} \]
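As a quick numerical check, we can confirm this with a short script (a sketch using NumPy; any linear-algebra library would work the same way):

```python
import numpy as np

# The matrix and candidate eigenvector from the example above.
A = np.array([[0, 2],
              [1, 1]])
v = np.array([1, 1])

# Multiplying by A should act like multiplying by the scalar 2.
Av = A @ v
print(Av)      # [2 2]
print(2 * v)   # [2 2]
```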
As mentioned above, any non-zero scalar multiple of \(\overrightarrow{v} = < 1, 1 >\) is an eigenvector of \(A\) with eigenvalue \(2,\) but we don't consider those to be distinct solutions. In particular, \(< 2, 2 >, < 0.5, 0.5 >,\) and \(< -8, -8 >\) are all eigenvectors of \(A\) with eigenvalue \(2,\) but they follow from \(< 1, 1 >\) being an eigenvector since they are all scalar multiples of \(< 1, 1 >.\)
Let \(\overrightarrow{v}\) be an eigenvector of a matrix \(A\) with eigenvalue \(\lambda.\) Then for any positive integer \(n,\) \[A^n \overrightarrow{v} = \lambda^n \overrightarrow{v}\]
In the above example, \[A^2 \begin{bmatrix} 1 \\ 1 \end{bmatrix} = A(A\begin{bmatrix} 1 \\ 1 \end{bmatrix}) = A(2\begin{bmatrix} 1 \\ 1 \end{bmatrix}) = 2A\begin{bmatrix} 1 \\ 1 \end{bmatrix} = 2^2 \begin{bmatrix} 1 \\ 1 \end{bmatrix}\]
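The same calculation can be checked numerically (again a sketch using NumPy, assuming the matrix and eigenvector from the running example):

```python
import numpy as np

A = np.array([[0, 2],
              [1, 1]])
v = np.array([1, 1])

# A^2 v computed directly by repeated matrix multiplication...
direct = np.linalg.matrix_power(A, 2) @ v
# ...equals lambda^2 v with lambda = 2.
scaled = 2**2 * v
print(direct)   # [4 4]
print(scaled)   # [4 4]
```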
In this lesson we will show how you generally go about finding eigenvalues and eigenvectors. Special cases will be handled in a later lesson.
Given a matrix \(A,\) we want to find the vectors such that \(A\overrightarrow{v} = \lambda\overrightarrow{v}\) for some scalar \(\lambda.\)
First, subtract \(\lambda \overrightarrow{v}\) from both sides to get \[A\overrightarrow{v} - \lambda\overrightarrow{v} = \overrightarrow{0}\]
We could factor out the \(\overrightarrow{v}\) vector if \(A - \lambda\) made sense, but there is no definition for subtracting a scalar from a matrix. This can be fixed by multiplying \(\overrightarrow{v}\) by the \(n \times n\) identity matrix, since \(I\overrightarrow{v} = \overrightarrow{v}.\) \[A\overrightarrow{v} - \lambda I\overrightarrow{v} = \overrightarrow{0} \Rightarrow (A - \lambda I)\overrightarrow{v} = \overrightarrow{0}\]
By definition of eigenvectors, we are only interested in non-zero vectors \(\overrightarrow{v}.\) Therefore, the eigenvectors of \(A\) are the non-zero vectors \(\overrightarrow{v}\) in the null space of \(A - \lambda I,\) where \(\lambda\) can be any scalar.
That brings us to the question: When does \(A - \lambda I\) have a null space that isn't \(\{\overrightarrow{0}\}\)? We know a square matrix has a non-zero null space exactly when its determinant is \(0.\) Therefore, we can solve for the eigenvalues by solving \[\mbox{det}(A - \lambda I) = 0\]
Once we find the eigenvalues, we can plug them in to \(A - \lambda I\) and find the non-zero null space vectors. Those vectors will be the eigenvectors.
We will find all the eigenvalues and eigenvectors of \[ A = \begin{bmatrix} 0 & 2 \\ 1 & 1 \end{bmatrix} \] using the method described above.
First we need to solve \(\mbox{det}(A - \lambda I) = 0.\) \[ A - \lambda I = \begin{bmatrix} 0 & 2 \\ 1 & 1 \end{bmatrix} - \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} = \begin{bmatrix} -\lambda & 2 \\ 1 & 1-\lambda \end{bmatrix} \] So, \[\mbox{det}(A - \lambda I) = -\lambda(1-\lambda) - 2 = \lambda^2 - \lambda - 2\] Now we can solve \(\mbox{det}(A-\lambda I) = 0:\) \[\lambda^2 - \lambda - 2 = 0 \Rightarrow (\lambda - 2)(\lambda + 1) = 0 \Rightarrow \lambda = 2, \lambda = -1\]
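Solving the quadratic can also be delegated to a root finder. A minimal sketch, assuming NumPy and feeding in the coefficients of \(\lambda^2 - \lambda - 2\) we derived above:

```python
import numpy as np

# Coefficients of the characteristic polynomial lambda^2 - lambda - 2,
# in decreasing order of degree.
coeffs = [1, -1, -2]
eigenvalues = np.roots(coeffs)
print(np.sort(eigenvalues))   # the roots 2 and -1
```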
We can find an eigenvector for each solution.
Case 1: \(\lambda = 2\)
Plugging in \(2\) for \(\lambda\) in \(A - \lambda I\) gives
\[A - 2I = \begin{bmatrix} -2 & 2 \\ 1 & -1 \end{bmatrix}\]
The null space is spanned by \(\overrightarrow{v} = < 1, 1 >.\)
Case 2: \(\lambda = -1\)
Plugging in \(-1\) for \(\lambda\) in \(A - \lambda I\) gives
\[A + I = \begin{bmatrix} 1 & 2 \\ 1 & 2 \end{bmatrix}\]
The null space is spanned by \(\overrightarrow{v} = < -2, 1 >.\)
The eigenvectors of \(A\) are \(< 1, 1 >\) and \(< -2, 1 >,\) with eigenvalues \(2\) and \(-1,\) respectively.
As a sanity check, we can multiply \(A\) by \(< -2, 1 >\) and make sure it is the same as multiplying \(< -2, 1 >\) by \(-1.\) \[ \begin{bmatrix} 0 & 2 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} -2 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ -1 \end{bmatrix} = -1\begin{bmatrix} -2 \\ 1 \end{bmatrix} \]
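In practice, the whole procedure is packaged in standard libraries. A sketch using NumPy's `numpy.linalg.eig` (note that it returns eigenvectors normalized to unit length, so the columns below are scalar multiples of \(< 1, 1 >\) and \(< -2, 1 >,\) which we said count as the same eigenvectors):

```python
import numpy as np

A = np.array([[0, 2],
              [1, 1]])

# eig returns the eigenvalues (in no guaranteed order) and a matrix
# whose columns are the corresponding unit-length eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # the values 2 and -1, in some order
```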
Compute: \[A^5 \begin{bmatrix} -5 \\ 4 \end{bmatrix}\]
This computation could be tedious (who wants to multiply \(A\) by itself \(5\) times?), but since we have the eigenvectors of \(A\) we can simplify the work. First, notice that the eigenvectors of \(A\) span all of \(\mathbb{R}^2.\) Therefore, we can write \(< -5, 4 >\) as a linear combination of the eigenvectors: \[\begin{bmatrix} -5 \\ 4 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} + 3 \begin{bmatrix} -2 \\ 1 \end{bmatrix}\]
Now we can use the properties of eigenvalues and eigenvectors to simplify the computation: \[ \begin{align} A^5 \begin{bmatrix} -5 \\ 4 \end{bmatrix} & = A^5 \begin{bmatrix} 1 \\ 1 \end{bmatrix} + 3 A^5 \begin{bmatrix} -2 \\ 1 \end{bmatrix} \\ & = 2^5 \begin{bmatrix} 1 \\ 1 \end{bmatrix} + 3 (-1)^5 \begin{bmatrix} -2 \\ 1 \end{bmatrix} \\ & = \begin{bmatrix} 32 \\ 32 \end{bmatrix} + \begin{bmatrix} 6 \\ -3 \end{bmatrix} \\ & = \begin{bmatrix} 38 \\ 29 \end{bmatrix} \end{align} \]
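We can confirm the answer against the brute-force computation (a sketch using NumPy's `matrix_power`, which does the repeated multiplication we avoided by hand):

```python
import numpy as np

A = np.array([[0, 2],
              [1, 1]])
v = np.array([-5, 4])

# Brute-force A^5 v, to compare against the eigenvector shortcut.
result = np.linalg.matrix_power(A, 5) @ v
print(result)   # [38 29]
```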