Properties of Determinants

Theorem. For an \(n\times n\) matrix \(A\) and \(n\times n\) elementary matrices \(E_{ij}\), \(E_i(c)\), \(E_{ij}(c)\), we have \(\det E_{ij}=-1\), \(\det E_i(c)=c\), \(\det E_{ij}(c)=1\), and \[\begin{align*} \det (E_{ij}A)& =-\det A=(\det E_{ij}) (\det A),\\ \det (E_i(c)A)& =c\det A=(\det E_i(c)) (\det A),\\ \det(E_{ij}(c)A)& =\det A=(\det E_{ij}(c)) (\det A). \end{align*}\]

Use cofactor expansion and induction on \(n\).
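
For instance, take \(n=2\) and \(A=\left[\begin{array}{rr} 1&2\\3&4 \end{array}\right]\), so that \(\det(A)=-2\); here \(E_{12}(2)\) is taken to be the elementary matrix that adds \(2\) times row \(2\) to row \(1\). Then \[\begin{align*} \det(E_{12}A)&=\left|\begin{array}{rr} 3&4\\1&2 \end{array}\right|=2=(-1)(-2)=(\det E_{12})(\det A),\\ \det(E_1(5)A)&=\left|\begin{array}{rr} 5&10\\3&4 \end{array}\right|=-10=5(-2)=(\det E_1(5))(\det A),\\ \det(E_{12}(2)A)&=\left|\begin{array}{rr} 7&10\\3&4 \end{array}\right|=-2=1\cdot(-2)=(\det E_{12}(2))(\det A). \end{align*}\]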

Theorem. Let \(A\) be an \(n\times n\) matrix. Then \(A\) is invertible if and only if \(\det(A)\neq 0\).

Suppose \(A\) is invertible. Then \(A^{-1}\) is invertible and there are elementary matrices \(E_1,E_2,\ldots,E_k\) such that \(E_kE_{k-1}\cdots E_1A^{-1}=I_n\). Postmultiplying by \(A\), we get \[E_kE_{k-1}\cdots E_1 =A \implies \det(E_kE_{k-1}\cdots E_1)=\det(A).\] By successively applying the preceding theorem, we get \[\det(A)=\det(E_kE_{k-1}\cdots E_1)=\det(E_k)\det(E_{k-1})\cdots\det(E_1)\neq 0.\] For the converse, suppose that \(A\) is not invertible. Then the RREF \(R\) of \(A\) is not \(I_n\). So \(R\) is an upper-triangular matrix with the last row being a zero row and consequently \(\det(R)=0\). Suppose \(E_1',E_2',\ldots,E_t'\) are elementary matrices for which \(E_t'E_{t-1}'\cdots E_1'A=R\). Then \[\det(E_t'E_{t-1}'\cdots E_1'A)=\det(R)=0 \implies \det(E_t')\det(E_{t-1}')\cdots\det(E_1')\det(A)=0,\] by the preceding theorem. Since \(\det(E_i')\neq 0\) for \(i=1,2,\ldots,t\), \(\det(A)= 0\).
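
For instance, \(A=\left[\begin{array}{rr} 1&2\\2&4 \end{array}\right]\) has \(\det(A)=1\cdot 4-2\cdot 2=0\), and indeed its RREF is \(\left[\begin{array}{rr} 1&2\\0&0 \end{array}\right]\neq I_2\), so \(A\) is not invertible.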

Remark. We extend the IMT (Invertible Matrix Theorem) by adding one more equivalent condition: \(\det(A)\neq 0\).

Theorem. Let \(A\) and \(B\) be two \(n\times n\) matrices. Then \(\det(AB)=\det(A)\det(B)\).

Case 1. \(A\) is not invertible.
By the IMT, \(\operatorname{rank}(A) < n\). Since \(\operatorname{CS}\left(AB\right)\subseteq \operatorname{CS}\left(A\right)\), \(\operatorname{rank}(AB)\leq \operatorname{rank}(A) < n\) and consequently \(AB\) is also not invertible. By the IMT, \(\det(A)=0\) and \(\det(AB)=0\). Thus \[\det(AB)=0=\det(A)\det(B).\]
Case 2. \(A\) is invertible.
There are elementary matrices \(E_1,E_2,\ldots,E_k\) such that \(E_kE_{k-1}\cdots E_1=A\). Postmultiplying by \(B\), we get \(AB=E_kE_{k-1}\cdots E_1B\). By successively applying the first theorem of this section, we get \[\begin{align*} \det(AB) &=\det(E_kE_{k-1}\cdots E_1B)\\ &=\det(E_k)\det(E_{k-1})\cdots\det(E_1)\det(B)\\ &=\det(E_kE_{k-1}\cdots E_1)\det(B)\\ &=\det(A)\det(B). \end{align*}\]
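
As a quick numerical check, take \(A=\left[\begin{array}{rr} 1&2\\3&4 \end{array}\right]\) and \(B=\left[\begin{array}{rr} 0&1\\1&1 \end{array}\right]\). Then \[\det(AB)=\left|\begin{array}{rr} 2&3\\4&7 \end{array}\right|=2=(-2)(-1)=\det(A)\det(B).\]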

Corollary. Let \(A\) be an \(n\times n\) matrix.

  1. For all scalars \(c\), \(\det(cA)=\det(cI_nA)=\det(cI_n)\det(A)=c^n\det(A)\).

  2. If \(A\) is invertible, then \(\det(A) \det(A^{-1})=\det(AA^{-1})=\det(I_n)=1\).

Example. \(A=\left[\begin{array}{rrr} 1&2&3\\ 3&5&1\\ 0&0&2 \end{array} \right].\) Is \(A\) invertible? Compute \(\det(A^T)\), \(\det(4A^5)\), and \(\det(A^{-1})\).

Solution. Since \(\det(A)=-2\neq 0\), \(A\) is invertible and we have \(\det(A^T)=\det(A)=-2\), \(\det(4A^5)=4^3(\det A)^5=-2048\), and \(\det(A^{-1})=(\det A)^{-1}=(-2)^{-1}=-1/2\).
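
Here \(\det(A)\) is easiest to compute by cofactor expansion along the third row: \[\det(A)=2\left|\begin{array}{rr} 1&2\\3&5 \end{array}\right|=2(5-6)=-2.\]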

Theorem (Cramer's Rule). Let \(A\) be an \(n\times n\) invertible matrix and \(\overrightarrow{b}\in \mathbb R^n\). The unique solution \(\overrightarrow{x}=[x_1,x_2,\ldots,x_n]^T\) of \(A\overrightarrow{x}=\overrightarrow{b}\) is given by \[x_i=\frac{\det(A_i(\overrightarrow{b}))}{\det(A)}, \;i=1,2,\ldots,n,\] where \(A_i(\overrightarrow{b})\) is the matrix obtained from \(A\) by replacing its \(i\)th column by \(\overrightarrow{b}\).

Let \(i\in \{1,2,\ldots,n\}\). Note that \[\begin{align*} A[\overrightarrow{e_1}\cdots \overrightarrow{e_{i-1}} \overrightarrow{x} \overrightarrow{e_{i+1}}\cdots \overrightarrow{e_n}] &= [A\overrightarrow{e_1}\cdots A\overrightarrow{e_{i-1}} A\overrightarrow{x} A\overrightarrow{e_{i+1}}\cdots A\overrightarrow{e_n}]\\ &=[\overrightarrow{A_1}\cdots \overrightarrow{A_{i-1}}\; A\overrightarrow{x}\; \overrightarrow{A_{i+1}}\cdots \overrightarrow{A_n}]\\ &=[\overrightarrow{A_1}\cdots \overrightarrow{A_{i-1}}\; \overrightarrow{b}\; \overrightarrow{A_{i+1}}\cdots \overrightarrow{A_n}]\\ &=A_i(\overrightarrow{b}). \end{align*}\] Since \(\det([\overrightarrow{e_1}\cdots \overrightarrow{e_{i-1}} \overrightarrow{x} \overrightarrow{e_{i+1}}\cdots \overrightarrow{e_n}])=x_i\), \[\begin{align*} \det(A_i(\overrightarrow{b})) &=\det(A[\overrightarrow{e_1}\cdots \overrightarrow{e_{i-1}} \overrightarrow{x} \overrightarrow{e_{i+1}}\cdots \overrightarrow{e_n}])\\ &=\det(A)\det([\overrightarrow{e_1}\cdots \overrightarrow{e_{i-1}} \overrightarrow{x} \overrightarrow{e_{i+1}}\cdots \overrightarrow{e_n}])\\ &=\det(A)x_i. \end{align*}\] Thus \(x_i=\frac{\det(A_i(\overrightarrow{b}))}{\det(A)}\).
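
To see why \(\det([\overrightarrow{e_1}\cdots \overrightarrow{e_{i-1}}\; \overrightarrow{x}\; \overrightarrow{e_{i+1}}\cdots \overrightarrow{e_n}])=x_i\), expand along the \(i\)th row, whose entries are all \(0\) except possibly \(x_i\) in column \(i\); for instance, with \(n=3\) and \(i=2\), \[\det([\overrightarrow{e_1}\;\overrightarrow{x}\;\overrightarrow{e_3}])=\left|\begin{array}{ccc} 1&x_1&0\\0&x_2&0\\0&x_3&1 \end{array}\right|=x_2.\]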

Example. We solve \(A\overrightarrow{x}=\overrightarrow{b}\) by Cramer's Rule, where \[A=\left[\begin{array}{rrr} 1&0&2\\3&2&5\\1&1&4 \end{array}\right], \overrightarrow{x}=\left[\begin{array}{c}x_1\\x_2\\ x_3 \end{array} \right], \mbox{ and } \overrightarrow{b}=\left[\begin{array}{c} 1\\8\\1 \end{array} \right].\] Since \(\det(A)=5\neq 0\), there is a unique solution \([x_1,x_2,x_3]^T\) and by Cramer's Rule, \[\begin{align*} x_1 &=\frac{\det(A_1(\overrightarrow{b}))}{\det(A)}=\frac{ \left|\begin{array}{rrr} 1&0&2\\8&2&5\\1&1&4 \end{array}\right| }{5}=\frac{15}{5}=3\\ x_2 &=\frac{\det(A_2(\overrightarrow{b}))}{\det(A)}=\frac{ \left|\begin{array}{rrr} 1&1&2\\3&8&5\\1&1&4 \end{array}\right| }{5}=\frac{10}{5}=2\\ x_3 &=\frac{\det(A_3(\overrightarrow{b}))}{\det(A)}=\frac{ \left|\begin{array}{rrr} 1&0&1\\3&2&8\\1&1&1 \end{array}\right| }{5}=\frac{-5}{5}=-1. \end{align*}\] Thus the unique solution is \([x_1,x_2,x_3]^T=[3,2,-1]^T\).
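
As a check, \[A\left[\begin{array}{r} 3\\2\\-1 \end{array}\right]=\left[\begin{array}{c} 3+0-2\\9+4-5\\3+2-4 \end{array}\right]=\left[\begin{array}{c} 1\\8\\1 \end{array}\right]=\overrightarrow{b}.\]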

Definition. Let \(A\) be an \(n\times n\) matrix. The cofactor matrix, denoted by \(C=[c_{ij}]\), is an \(n\times n\) matrix where \(c_{ij}\) is the \((i,j)\) cofactor of \(A\). The adjoint or adjugate of \(A\), denoted by \(\operatorname{adj} A\) or \(\operatorname{adj}(A)\), is the transpose of the cofactor matrix of \(A\), i.e., \(\operatorname{adj} A=C^T\).

Theorem. Let \(A\) be an \(n\times n\) invertible matrix. Then \[A^{-1}=\frac{1}{\det(A)}\operatorname{adj} A.\]

Since \(AA^{-1}=I_n\), column \(j\) of \(A^{-1}\) is the unique solution of \(A\overrightarrow{x}=\overrightarrow{e_j}\). By Cramer's Rule, the \((i,j)\)-entry of \(A^{-1}\), i.e., the \(i\)th entry of column \(j\) of \(A^{-1}\), is \[\frac{\det(A_i(\overrightarrow{e_j}))}{\det(A)}=\frac{(-1)^{i+j}\det(A(j,i))}{\det(A)}=\frac{c_{ji}}{\det(A)}=\frac{(C^T)_{ij}}{\det(A)}=\frac{(\operatorname{adj} A)_{ij}}{\det(A)},\] where the first equality follows by cofactor expansion of \(\det(A_i(\overrightarrow{e_j}))\) along its \(i\)th column, whose only nonzero entry is the \(1\) in row \(j\), and \(A(j,i)\) is the submatrix of \(A\) obtained by deleting row \(j\) and column \(i\).

Example.

  1. For invertible \(A=\left[\begin{array}{cc}a&b\\c&d\end{array}\right]\), \[A^{-1}=\frac{1}{\det(A)}\operatorname{adj} A =\frac{1}{ad-bc}\left[\begin{array}{cc} c_{11}&c_{12}\\c_{21}&c_{22}\end{array}\right]^T =\frac{1}{ad-bc} \left[\begin{array}{rr}d&-c\\-b&a\end{array}\right]^T =\frac{1}{ad-bc} \left[\begin{array}{rr}d&-b\\-c&a\end{array}\right].\]

  2. For invertible \(A=\left[\begin{array}{rrr} 1&0&2\\3&2&5\\1&1&4\end{array}\right]\), \[A^{-1}=\frac{1}{\det(A)}\operatorname{adj} A =\frac{1}{5} \left[\begin{array}{ccc} c_{11}&c_{12}&c_{13}\\c_{21}&c_{22}&c_{23}\\c_{31}&c_{32}&c_{33}\end{array}\right]^T =\frac{1}{5} \left[\begin{array}{rrr} 3&-7&1\\2&2&-1\\-4&1&2\end{array}\right]^T =\frac{1}{5} \left[\begin{array}{rrr} 3&2&-4\\-7&2&1\\1&-1&2\end{array}\right].\]
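
As a check of the second example, note that for invertible \(A\) the theorem is equivalent to \(A\operatorname{adj} A=\det(A)I_n\), and indeed \[\left[\begin{array}{rrr} 1&0&2\\3&2&5\\1&1&4 \end{array}\right] \left[\begin{array}{rrr} 3&2&-4\\-7&2&1\\1&-1&2 \end{array}\right]=\left[\begin{array}{rrr} 5&0&0\\0&5&0\\0&0&5 \end{array}\right]=5I_3.\]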


We end with the following useful multilinearity property of the determinant:
Theorem. Let \(A=[\overrightarrow{a_1}\:\overrightarrow{a_2}\:\cdots\overrightarrow{a_n}]\) be an \(n\times n\) matrix. Then for each \(i\in\{1,2,\ldots,n\}\), for all \(\overrightarrow{x},\overrightarrow{y}\in \mathbb R^n\), and for all scalars \(c,d\), \[\begin{align*} \det [\overrightarrow{a_1}\:\cdots \overrightarrow{a_{i-1}}\: (c\overrightarrow{x}+d\overrightarrow{y}) \:\overrightarrow{a_{i+1}} \cdots \overrightarrow{a_n}] &=c\det [\overrightarrow{a_1}\:\cdots \overrightarrow{a_{i-1}}\: \overrightarrow{x} \:\overrightarrow{a_{i+1}} \cdots \overrightarrow{a_n}]\\ &\quad+d\det [\overrightarrow{a_1}\:\cdots \overrightarrow{a_{i-1}}\: \overrightarrow{y} \:\overrightarrow{a_{i+1}} \cdots \overrightarrow{a_n}]. \end{align*}\]

Compute each determinant by cofactor expansion along the \(i\)th column; since the three matrices agree in every column except the \(i\)th, the cofactors of the \(i\)th column are the same in each case.

Example. \[\begin{align*} \left|\begin{array}{cc}3a+4s&3b+4t\\c&d\end{array}\right| &=\left|\begin{array}{cc}3a+4s&c\\3b+4t&d\end{array}\right| (\text{by transposing})\\ &=\left|\begin{array}{cc}3a&c\\3b&d\end{array}\right| +\left|\begin{array}{cc}4s&c\\4t&d\end{array}\right| (\text{by multilinearity of determinant})\\ &=3\left|\begin{array}{cc}a&c\\b&d\end{array}\right| +4\left|\begin{array}{cc}s&c\\t&d\end{array}\right| (\text{by multilinearity of determinant})\\ &=3(ad-cb)+4(sd-ct). \end{align*}\]
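
The same answer follows by expanding the original determinant directly: \((3a+4s)d-(3b+4t)c=3(ad-cb)+4(sd-ct)\).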

