
Basics of Eigenvalues and Eigenvectors

Definition. Let \(A\) be an \(n\times n\) matrix. If \(A\overrightarrow{x}=\lambda \overrightarrow{x}\) for some nonzero vector \(\overrightarrow{x}\) and some scalar \(\lambda\), then \(\lambda\) is an eigenvalue of \(A\) and \(\overrightarrow{x}\) is an eigenvector of \(A\) corresponding to \(\lambda\).

Example. Consider \(A=\left[\begin{array}{rr}1&2\\0&3\end{array} \right],\;\lambda=3,\; \overrightarrow{v}=\left[\begin{array}{r}1\\1\end{array} \right],\; \overrightarrow{u}=\left[\begin{array}{r}-2\\1\end{array} \right]\).
Since \(A\overrightarrow{v} =\left[\begin{array}{rr}1&2\\0&3\end{array} \right]\left[\begin{array}{r}1\\1\end{array} \right] =\left[\begin{array}{r}3\\3\end{array} \right] =3\left[\begin{array}{r}1\\1\end{array} \right] =\lambda\overrightarrow{v}\), \(3\) is an eigenvalue of \(A\) and \(\overrightarrow{v}\) is an eigenvector of \(A\) corresponding to the eigenvalue \(3\).
Since \(A\overrightarrow{u} =\left[\begin{array}{rr}1&2\\0&3\end{array} \right]\left[\begin{array}{r}-2\\1\end{array} \right] =\left[\begin{array}{r}0\\3\end{array} \right]\) is not a scalar multiple of \(\overrightarrow{u}=\left[\begin{array}{r}-2\\1\end{array} \right]\), i.e., \(A\overrightarrow{u}\neq\lambda\overrightarrow{u}\) for every scalar \(\lambda\), \(\overrightarrow{u}\) is not an eigenvector of \(A\).
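
To double-check this example numerically, here is a minimal sketch using NumPy (an addition, not part of the original notes):

```python
import numpy as np

A = np.array([[1, 2],
              [0, 3]])
v = np.array([1, 1])
u = np.array([-2, 1])

# A @ v = [3, 3] = 3 * v, so v is an eigenvector for the eigenvalue 3.
print(np.allclose(A @ v, 3 * v))  # True

# A @ u = [0, 3] is not a scalar multiple of u = [-2, 1],
# so u is not an eigenvector of A.
print(A @ u)  # [0 3]
```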

Remark. For a real matrix, an eigenvalue can be a complex number and an eigenvector can be a complex vector.

Example. Consider \(A=\left[\begin{array}{rr}0&1\\-1&0\end{array} \right]\). Since \(\left[\begin{array}{rr}0&1\\-1&0\end{array} \right]\left[\begin{array}{r}1\\i\end{array} \right] =\left[\begin{array}{r}i\\-1\end{array} \right] =i\left[\begin{array}{r}1\\i\end{array} \right]\), \(i\) is an eigenvalue of \(A\) and \(\left[\begin{array}{r}1\\i\end{array} \right]\) is an eigenvector of \(A\) corresponding to the eigenvalue \(i\).
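
The same kind of check works over the complex numbers; the following NumPy sketch (not part of the original notes) verifies the example:

```python
import numpy as np

A = np.array([[0, 1],
              [-1, 0]])
x = np.array([1, 1j])  # the complex eigenvector [1, i]

# A @ x = [i, -1] = i * x, so i is an eigenvalue of the real matrix A.
print(np.allclose(A @ x, 1j * x))  # True

# np.linalg.eigvals reports both complex eigenvalues i and -i.
print(np.linalg.eigvals(A))  # [0.+1.j 0.-1.j]
```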

Remark. An eigenvector must be a nonzero vector by definition. So the following are equivalent:

  1. \(\lambda\) is an eigenvalue of \(A\).

  2. \(A\overrightarrow{x}=\lambda \overrightarrow{x}\) for some nonzero vector \(\overrightarrow{x}\).

  3. \((A-\lambda I)\overrightarrow{x}=\overrightarrow{0}\) for some nonzero vector \(\overrightarrow{x}\).

  4. \((A-\lambda I)\overrightarrow{x}=\overrightarrow{0}\) has a nontrivial solution \(\overrightarrow{x}\).

  5. \(A-\lambda I\) is not invertible (by the Invertible Matrix Theorem, IMT).

  6. \(\det(A-\lambda I)=0\).

Definition. \(\det(\lambda I-A)\) is a polynomial in \(\lambda\), called the characteristic polynomial of \(A\), and \(\det(\lambda I-A)=0\) is the characteristic equation of \(A\).

Remark. Since the eigenvalues of the \(n\times n\) matrix \(A\) are exactly the roots of its characteristic polynomial, which has degree \(n\), \(A\) has \(n\) eigenvalues counted with multiplicity, not necessarily distinct (and possibly complex).
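
As a numerical illustration (a NumPy sketch, not part of the original notes): np.poly applied to a square matrix returns the coefficients of \(\det(\lambda I-A)\), and the roots of that polynomial agree with the eigenvalues.

```python
import numpy as np

A = np.array([[1, 2],
              [0, 3]])

# Coefficients of det(lambda*I - A) = lambda^2 - 4*lambda + 3.
coeffs = np.poly(A)
print(coeffs)  # [ 1. -4.  3.]

# The roots of the characteristic polynomial are the eigenvalues.
print(np.roots(coeffs))      # [3. 1.]
print(np.linalg.eigvals(A))  # [1. 3.] (same values, up to ordering)
```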

Definition. The multiplicity of \(\lambda\) as a root of the characteristic polynomial \(\det(\lambda I-A)\) is the algebraic multiplicity of the eigenvalue \(\lambda\) of \(A\).

Remark. If \(\lambda\) is an eigenvalue of \(A\), then \(\operatorname{NS}(A-\lambda I)\) is the union of \(\{\overrightarrow{0}\}\) and the set of all eigenvectors of \(A\) corresponding to the eigenvalue \(\lambda\).

Definition. Suppose \(\lambda\) is an eigenvalue of the matrix \(A\). Then \[\operatorname{NS}(A-\lambda I)=\{\overrightarrow{x}\;|\;(A-\lambda I)\overrightarrow{x}=\overrightarrow{0}\}\] is the eigenspace of \(A\) corresponding to the eigenvalue \(\lambda\) and \(\operatorname{dim}(\operatorname{NS}(A-\lambda I))\) is the geometric multiplicity of the eigenvalue \(\lambda\).

Example. Let \(A=\left[\begin{array}{rrr}3&0&0\\0&4&1\\0&-2&1\end{array} \right]\).

  1. Find the characteristic polynomial of \(A\).

  2. Find the eigenvalues of \(A\) with their algebraic multiplicities.

  3. Find the eigenspaces of \(A\) and geometric multiplicities of the eigenvalues of \(A\).

Solution. (1) The characteristic polynomial of \(A\) is \[\begin{eqnarray*} \det(\lambda I-A)&=&\left|\begin{array}{ccc}\lambda-3&0&0\\0&\lambda-4&-1\\0&2&\lambda-1\end{array} \right|\\ &=&(\lambda-3)\left|\begin{array}{cc}\lambda-4&-1\\2&\lambda-1\end{array} \right|-0+0\\ &=&(\lambda-3)(\lambda^2-5\lambda+6)\\ &=&(\lambda-3)(\lambda-3)(\lambda-2). \end{eqnarray*}\] (2) \(\det(\lambda I-A)=(\lambda-2)(\lambda-3)^2=0\implies \lambda=2,3,3\). So \(2\) and \(3\) are eigenvalues of \(A\) with algebraic multiplicities \(1\) and \(2\) respectively.

(3) The eigenspace of \(A\) corresponding to the eigenvalue \(3\) is \[\operatorname{NS}(A-3I)=\{\overrightarrow{x}\;|\;(A-3I)\overrightarrow{x}=\overrightarrow{0}\}.\] \[[A-3I\;|\;\overrightarrow{0}]=\left[\begin{array}{rrr|r}0&0&0&0\\0&1&1&0\\0&-2&-2&0\end{array} \right] \xrightarrow{2R_2+R_3} \left[\begin{array}{rrr|r}0&0&0&0\\0&1&1&0\\0&0&0&0\end{array} \right] \xrightarrow{R_1\leftrightarrow R_2} \left[\begin{array}{rrr|r}0&\boxed{1}&1&0\\0&0&0&0\\0&0&0&0\end{array} \right]\] So we get \(x_2+x_3=0\) where \(x_1\) and \(x_3\) are free variables. Thus \[\overrightarrow{x}=\left[\begin{array}{r}x_1\\x_2\\x_3 \end{array}\right] =\left[\begin{array}{r}x_1\\-x_3\\x_3 \end{array}\right] =x_1\left[\begin{array}{r}1\\0\\0\end{array}\right]+ x_3\left[\begin{array}{r}0\\-1\\1\end{array}\right] \in \operatorname{Span} \left\{\left[\begin{array}{r}1\\0\\0\end{array}\right],\; \left[\begin{array}{r}0\\-1\\1\end{array}\right]\right\}.\] Thus the eigenspace of \(A\) corresponding to the eigenvalue \(3\) is \[\operatorname{NS}(A-3I)=\operatorname{Span} \left\{\left[\begin{array}{r}1\\0\\0\end{array}\right],\; \left[\begin{array}{r}0\\-1\\1\end{array}\right]\right\},\] and the geometric multiplicity of the eigenvalue \(3\) is \(\operatorname{dim}(\operatorname{NS}(A-3I))=2\).

The eigenspace of \(A\) corresponding to the eigenvalue \(2\) is \[\operatorname{NS}(A-2I)=\{\overrightarrow{x}\;|\;(A-2I)\overrightarrow{x}=\overrightarrow{0}\}.\] \[[A-2I\;|\;\overrightarrow{0}]=\left[\begin{array}{rrr|r}1&0&0&0\\0&2&1&0\\0&-2&-1&0\end{array} \right] \xrightarrow{R_2+R_3} \left[\begin{array}{rrr|r}1&0&0&0\\0&2&1&0\\0&0&0&0\end{array} \right] \xrightarrow{\frac{R_2}{2}} \left[\begin{array}{rrr|r}\boxed{1}&0&0&0\\0&\boxed{1}&\frac{1}{2}&0\\0&0&0&0\end{array} \right]\] So we get \(x_1=0,\; x_2+\frac{x_3}{2}=0\) where \(x_3\) is a free variable. Thus \[\overrightarrow{x}=\left[\begin{array}{r}x_1\\x_2\\x_3 \end{array}\right] =\left[\begin{array}{r}0\\-\frac{x_3}{2}\\x_3 \end{array}\right] =\frac{x_3}{2}\left[\begin{array}{r}0\\-1\\2 \end{array}\right] \in \operatorname{Span} \left\{\left[\begin{array}{r}0\\-1\\2 \end{array}\right]\right\}.\] Thus the eigenspace of \(A\) corresponding to the eigenvalue \(2\) is \[\operatorname{NS}(A-2I)=\operatorname{Span} \left\{\left[\begin{array}{r}0\\-1\\2 \end{array}\right]\right\},\] and the geometric multiplicity of the eigenvalue \(2\) is \(\operatorname{dim}(\operatorname{NS}(A-2I))=1\).
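
The whole computation can be verified numerically; the following sketch uses NumPy together with SciPy's null_space (neither library is part of the original notes):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[3, 0, 0],
              [0, 4, 1],
              [0, -2, 1]])

# Eigenvalues: 3 (twice) and 2, matching the algebraic multiplicities.
print(np.linalg.eigvals(A))  # [3. 3. 2.] (up to ordering)

# Orthonormal bases for the eigenspaces; the number of columns of
# each basis is the geometric multiplicity of that eigenvalue.
E3 = null_space(A - 3 * np.eye(3))
E2 = null_space(A - 2 * np.eye(3))
print(E3.shape[1], E2.shape[1])  # 2 1
```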

Remark. Recall that \(\overrightarrow{x}\mapsto A\overrightarrow{x}\) is a linear transformation from \(\mathbb R^n\) to \(\mathbb R^n\). The eigenspaces of \(A\) are invariant under this linear transformation:
If \(\lambda\) is an eigenvalue of \(A\) and \(\overrightarrow{x}\in \operatorname{NS}(A-\lambda I)\), then \(A\overrightarrow{x}\in \operatorname{NS}(A-\lambda I)\).

Example. In the preceding example, \(\overrightarrow{x}=\left[\begin{array}{r}4\\-5\\5\end{array}\right] \in \operatorname{NS}(A-3I)=\operatorname{Span} \left\{\left[\begin{array}{r}1\\0\\0\end{array}\right],\; \left[\begin{array}{r}0\\-1\\1\end{array}\right]\right\},\) and also \(A\overrightarrow{x}=A\left[\begin{array}{r}4\\-5\\5\end{array}\right] =\left[\begin{array}{r}12\\-15\\15\end{array}\right] =3\left[\begin{array}{r}4\\-5\\5\end{array}\right] \in \operatorname{NS}(A-3I)=\operatorname{Span} \left\{\left[\begin{array}{r}1\\0\\0\end{array}\right],\; \left[\begin{array}{r}0\\-1\\1\end{array}\right]\right\}\).
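
A minimal numerical check of this invariance (again a NumPy sketch, not part of the notes):

```python
import numpy as np

A = np.array([[3, 0, 0],
              [0, 4, 1],
              [0, -2, 1]])
x = np.array([4, -5, 5])  # a vector in NS(A - 3I)

# Both x and A @ x = 3x lie in the eigenspace NS(A - 3I).
print(np.allclose((A - 3 * np.eye(3)) @ x, 0))        # True
print(np.allclose((A - 3 * np.eye(3)) @ (A @ x), 0))  # True
```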

Theorem. (IMT contd.) Let \(A\) be an \(n\times n\) matrix. Then \(A\) is invertible if and only if \(0\) is not an eigenvalue of \(A\).

\(0\) is an eigenvalue of \(A\) iff \(A\overrightarrow{x}=0\overrightarrow{x}=\overrightarrow{0}\) has a nontrivial solution. By the IMT, \(A\overrightarrow{x}=\overrightarrow{0}\) has a nontrivial solution iff \(A\) is not invertible.
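
For instance (a NumPy sketch, not part of the notes), a singular matrix always has \(0\) among its eigenvalues:

```python
import numpy as np

# A singular matrix: the second row is twice the first.
A = np.array([[1, 2],
              [2, 4]])

print(np.isclose(np.linalg.det(A), 0))  # True: A is not invertible
print(np.linalg.eigvals(A))             # [0. 5.] (up to rounding): 0 is an eigenvalue
```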


Some useful results:

Theorem. Let \(A\) be an \(n\times n\) matrix with eigenvalues \(\lambda_1,\lambda_2,\ldots,\lambda_n\). Then \(\det A=\lambda_1\lambda_2\cdots\lambda_n.\)

Note that \(\det(\lambda I-A)=(\lambda-\lambda_1)(\lambda-\lambda_2)\cdots(\lambda-\lambda_n)\). Plugging \(\lambda=0\), we get \((-1)^n\det A=(-1)^n\lambda_1\lambda_2\cdots\lambda_n \implies \det A=\lambda_1\lambda_2\cdots\lambda_n\).
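
A quick numerical check with the matrix from the earlier example (a NumPy sketch, not part of the notes):

```python
import numpy as np

A = np.array([[3, 0, 0],
              [0, 4, 1],
              [0, -2, 1]])

# det A should equal the product of the eigenvalues 3 * 3 * 2 = 18.
print(np.prod(np.linalg.eigvals(A)))  # 18.0 (up to rounding)
print(np.linalg.det(A))               # 18.0 (up to rounding)
```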

Theorem. The eigenvalues of a triangular matrix (e.g., diagonal matrix) are the entries on its main diagonal.

Consider an upper-triangular matrix \(A=\left[\begin{array}{rrrrr}d_1&&&&\\0&d_2&&*&\\0&0&d_3&&\\ \vdots&\vdots&&\ddots&\\0&0&0&\cdots&d_n\end{array} \right].\) Since \(\lambda I-A\) is also upper-triangular and the determinant of a triangular matrix is the product of its diagonal entries, the characteristic polynomial is \(\det(\lambda I-A)=(\lambda-d_1)(\lambda-d_2)\cdots(\lambda-d_n)\). So \(\det(\lambda I-A)=0\implies \lambda=d_1,\ldots,d_n\). The lower-triangular case is similar.
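
A small numerical illustration (NumPy, not part of the notes):

```python
import numpy as np

# An upper-triangular matrix: its eigenvalues are its diagonal entries.
A = np.array([[5, 7, -1],
              [0, 2, 4],
              [0, 0, -3]])

print(np.linalg.eigvals(A))  # [ 5.  2. -3.] (up to ordering)
print(np.diag(A))            # [ 5  2 -3]
```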

Theorem. Let \(A\) be a square matrix and let \(k\) be a positive integer. If \(\lambda\) is an eigenvalue of \(A\), then \(\lambda^k\) is an eigenvalue of \(A^k\).

Suppose \(A\overrightarrow{v}=\lambda \overrightarrow{v}\), \(\overrightarrow{v}\neq \overrightarrow{0}\). Then \(A(A\overrightarrow{v})=A(\lambda \overrightarrow{v})\). So \[A^2\overrightarrow{v}=\lambda (A\overrightarrow{v})=\lambda (\lambda \overrightarrow{v})=\lambda^2\overrightarrow{v}.\] Continuing this process, we get \(A^k\overrightarrow{v}=\lambda^k \overrightarrow{v}\).
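
Checking the case \(k=4\) numerically with the earlier \(2\times 2\) example (a NumPy sketch, not part of the notes):

```python
import numpy as np

A = np.array([[1, 2],
              [0, 3]])
v = np.array([1, 1])  # eigenvector of A for the eigenvalue 3

# A^4 v should equal 3^4 v = 81 v.
A4 = np.linalg.matrix_power(A, 4)
print(np.allclose(A4 @ v, 3**4 * v))  # True
```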

Theorem. Let \(A\) be an invertible matrix. Then \(\lambda\) is an eigenvalue of \(A\) if and only if \(\frac{1}{\lambda}\) is an eigenvalue of \(A^{-1}\).

Suppose \(A\overrightarrow{v}=\lambda \overrightarrow{v}\), \(\overrightarrow{v}\neq \overrightarrow{0}\). Since \(A\) is invertible, \(\lambda\neq 0\). \[\begin{eqnarray*} A^{-1}(A\overrightarrow{v})&=&A^{-1}(\lambda \overrightarrow{v})\\ I\overrightarrow{v}=\overrightarrow{v}&=&\lambda(A^{-1} \overrightarrow{v})\\ \frac{1}{\lambda}\overrightarrow{v}&=&A^{-1} \overrightarrow{v} \end{eqnarray*}\] So \(\frac{1}{\lambda}\) is an eigenvalue of \(A^{-1}\). The converse follows by a similar argument.
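
The same example verifies the statement for \(A^{-1}\) (a NumPy sketch, not part of the notes):

```python
import numpy as np

A = np.array([[1, 2],
              [0, 3]])
v = np.array([1, 1])  # eigenvector of A for the eigenvalue 3

# A^{-1} v should equal (1/3) v.
print(np.allclose(np.linalg.inv(A) @ v, v / 3))  # True
```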

Theorem. Let \(A\) be a square matrix. If \(\overrightarrow{v_1},\overrightarrow{v_2},\ldots,\overrightarrow{v_k}\) are eigenvectors of \(A\) corresponding to distinct eigenvalues \(\lambda_1,\lambda_2,\ldots,\lambda_k\) of \(A\) respectively, then \(\{\overrightarrow{v_1},\overrightarrow{v_2},\ldots,\overrightarrow{v_k}\}\) is linearly independent.

Let \(\lambda_1,\lambda_2,\ldots,\lambda_k\) be distinct and \(A\overrightarrow{v_i}=\lambda_i\overrightarrow{v_i},\;\overrightarrow{v_i}\neq\overrightarrow{0}\) for \(i=1,\ldots,k\). Suppose \(\{\overrightarrow{v_1},\overrightarrow{v_2},\ldots,\overrightarrow{v_k}\}\) is linearly dependent. WLOG let \(\{\overrightarrow{v_1},\overrightarrow{v_2},\ldots,\overrightarrow{v_p}\}\) be a maximal linearly independent subset of \(\{\overrightarrow{v_1},\overrightarrow{v_2},\ldots,\overrightarrow{v_k}\}\) for some \(p < k\). Then \(\{\overrightarrow{v_1},\overrightarrow{v_2},\ldots,\overrightarrow{v_p},\overrightarrow{v_{p+1}}\}\) is linearly dependent and consequently \[\begin{equation} \overrightarrow{v_{p+1}}=c_1\overrightarrow{v_1}+\cdots+c_p\overrightarrow{v_p}, \tag{1} \end{equation}\] for some scalars \(c_1,\ldots,c_p\), not all zero (since \(\overrightarrow{v_{p+1}}\neq \overrightarrow{0}\)). \[\begin{eqnarray} A\overrightarrow{v_{p+1}}&=&A(c_1\overrightarrow{v_1}+\cdots+c_p\overrightarrow{v_p})\nonumber\\ \lambda_{p+1}\overrightarrow{v_{p+1}}&=&c_1A\overrightarrow{v_1}+\cdots+c_pA\overrightarrow{v_p}\nonumber\\ \lambda_{p+1}\overrightarrow{v_{p+1}}&=&c_1\lambda_1\overrightarrow{v_1}+\cdots+c_p\lambda_p\overrightarrow{v_p} \tag{2} \end{eqnarray}\] Multiplying (1) by \(\lambda_{p+1}\) and subtracting (2) gives \[\begin{equation} \overrightarrow{0}=c_1(\lambda_{p+1}-\lambda_1)\overrightarrow{v_1}+\cdots+c_p(\lambda_{p+1}-\lambda_p)\overrightarrow{v_p} \tag{3} \end{equation}\] Since \(\lambda_{p+1}-\lambda_i\neq 0\) for \(i=1,\ldots,p\) and \(c_1,\ldots,c_p\) are not all zero, \(c_1(\lambda_{p+1}-\lambda_1),\ldots,c_p(\lambda_{p+1}-\lambda_p)\) are not all zero. So (3) implies \(\{\overrightarrow{v_1},\overrightarrow{v_2},\ldots,\overrightarrow{v_p}\}\) is linearly dependent, a contradiction.
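
A numerical illustration with the earlier \(2\times 2\) example (a NumPy sketch, not part of the notes): stacking eigenvectors for distinct eigenvalues as columns produces a full-rank matrix.

```python
import numpy as np

A = np.array([[1, 2],
              [0, 3]])

# Eigenvectors for the distinct eigenvalues 1 and 3.
v1 = np.array([1, 0])  # A @ v1 = 1 * v1
v2 = np.array([1, 1])  # A @ v2 = 3 * v2

# Full rank 2 means {v1, v2} is linearly independent.
print(np.linalg.matrix_rank(np.column_stack([v1, v2])))  # 2
```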

Remark. The converse of the preceding theorem is not true. Consider \(A=\left[\begin{array}{rrr}3&0&0\\0&4&1\\0&-2&1\end{array} \right]\) from the earlier example: \(\left[\begin{array}{r}1\\0\\0\end{array}\right]\) and \(\left[\begin{array}{r}0\\-1\\1\end{array}\right]\) are linearly independent eigenvectors of \(A\), yet they correspond to the same eigenvalue \(3\) of \(A\).

