In this section we study the determinant of an \(n\times n\) matrix \(A=[a_{ij}]\), denoted by \(\det(A)\) or \(\det A\) or \(|A|\) or
\[\left| \begin{array}{cccc}
a_{11}&a_{12}&\cdots &a_{1n}\\
a_{21}&a_{22}&\cdots &a_{2n}\\
\vdots&\vdots& \ddots &\vdots\\
a_{n1}&a_{n2}&\cdots &a_{nn}
\end{array} \right|.\]
To define \(\det(A)\) recursively, we write \(A(i,j)\) for the matrix obtained from \(A\) by deleting row \(i\) and
column \(j\) of \(A\).
Definition.
If \(A=[a_{11}]\), then \(\det(A)=a_{11}\). If \(A=\left[\begin{array}{cc}a_{11}&a_{12}\\a_{21}&a_{22}\end{array}\right]\),
then \(\det(A)=a_{11}a_{22}-a_{12}a_{21}\). For an \(n\times n\) matrix \(A=[a_{ij}]\) where \(n\geq 3\),
\[\det(A)=\sum_{i=1}^n (-1)^{1+i} a_{1i} \det A(1,i)=a_{11} \det A(1,1)-a_{12}\det A(1,2)+\cdots+(-1)^{n+1} a_{1n} \det A(1,n).\]
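For example, for a particular \(3\times 3\) matrix this definition gives
\[\left|\begin{array}{ccc}1&2&3\\4&5&6\\7&8&10\end{array}\right|
=1\left|\begin{array}{cc}5&6\\8&10\end{array}\right|
-2\left|\begin{array}{cc}4&6\\7&10\end{array}\right|
+3\left|\begin{array}{cc}4&5\\7&8\end{array}\right|
=1(50-48)-2(40-42)+3(32-35)=2+4-9=-3.\]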
Definition.
For an \(n\times n\) matrix \(A=[a_{ij}]\) where \(n\geq 2\), the \((i,j)\) minor, denoted by \(m_{ij}\),
is \(m_{ij}=\det A(i,j)\) and the \((i,j)\) cofactor, denoted by \(c_{ij}\), is
\[c_{ij}=(-1)^{i+j} m_{ij} =(-1)^{i+j}\det A(i,j).\]
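For instance, if \(A=\left[\begin{array}{ccc}1&2&3\\4&5&6\\7&8&10\end{array}\right]\) (the matrix of the example above), then
\[m_{23}=\det A(2,3)=\left|\begin{array}{cc}1&2\\7&8\end{array}\right|=8-14=-6
\quad\text{and}\quad c_{23}=(-1)^{2+3}m_{23}=6.\]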
Remark.
We defined \(\det(A)\) as the cofactor expansion along the first row of \(A\):
\[\det(A)=\sum_{i=1}^n (-1)^{1+i}a_{1i} \det A(1,i)= \sum_{i=1}^n a_{1i} c_{1i}.\]
But it can be proved that \(\det(A)\) is the cofactor expansion along any row or column of \(A\).
Theorem.
Let \(A\) be an \(n\times n\) matrix. Then for each fixed \(i\) and each fixed \(j\) in \(\{1,2,\ldots,n\}\),
\[\det(A)= \sum_{j=1}^n a_{ij} c_{ij} = \sum_{i=1}^n a_{ij} c_{ij},\]
i.e., the cofactor expansion along any row \(i\) and the cofactor expansion along any column \(j\) both equal \(\det(A)\).
The preceding theorem can be proved using the following equivalent definition of the determinant:
\[\det(A)=\sum_{\sigma \in S_n} \left( \operatorname{sign}(\sigma) \prod_{i=1}^n a_{i\sigma(i)} \right),\]
where \(\sigma\) runs over all \(n!\) permutations of \(\{1,2,\ldots,n\}\) and \(\operatorname{sign}(\sigma)\) denotes
the sign of the permutation \(\sigma\). (This requires a study of permutations.)
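To see the two formulas agree on a concrete example, here is a minimal Python sketch (plain Python, no external libraries); the helper names `minor`, `cofactor_det`, `sign`, and `perm_det` are ad hoc choices, not standard functions.

```python
# A minimal sketch: cofactor expansion along any row versus the permutation-sum formula.
from itertools import permutations
from math import prod

def minor(A, i, j):
    """Matrix obtained from A by deleting row i and column j (0-indexed)."""
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def cofactor_det(A, row=0):
    """det(A) by cofactor expansion along the given row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** (row + j) * A[row][j] * cofactor_det(minor(A, row, j))
               for j in range(len(A)))

def sign(p):
    """Sign of a permutation p (a tuple), computed by counting inversions."""
    inversions = sum(1 for i in range(len(p))
                       for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

def perm_det(A):
    """det(A) as the signed sum over all n! permutations."""
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print([cofactor_det(A, row=r) for r in range(3)])  # [-3, -3, -3]: same value along every row
print(perm_det(A))                                 # -3
```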
Corollary.
Let \(A=[a_{ij}]\) be an \(n\times n\) matrix.
(a) \(\det(A^T)=\det(A)\).
(b) If \(A\) is a triangular matrix, then \(\det(A)=a_{11}a_{22}\cdots a_{nn}\).
(a) Note that the \((i,j)\) cofactor of \(A\) is the \((j,i)\) cofactor of \(A^T\). Hence the cofactor expansion of
\(\det(A)\) along the first row produces the same terms as the cofactor expansion of \(\det(A^T)\) along the first
column, so \(\det(A^T)=\det(A)\).
(b) If \(A\) is an upper-triangular matrix, then repeated cofactor expansion along the first column gives
\(\det(A)=a_{11}a_{22}\cdots a_{nn}\). Similarly, if \(A\) is a lower-triangular matrix, then repeated cofactor
expansion along the first row gives \(\det(A)=a_{11}a_{22}\cdots a_{nn}\).
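For example,
\[\left|\begin{array}{ccc}2&7&-1\\0&3&5\\0&0&4\end{array}\right|=2\cdot 3\cdot 4=24
\quad\text{and}\quad
\left|\begin{array}{ccc}2&0&0\\7&3&0\\-1&5&4\end{array}\right|=2\cdot 3\cdot 4=24,\]
which illustrates both that a triangular determinant is the product of the diagonal entries and that
\(\det(A^T)=\det(A)\) (the second matrix is the transpose of the first).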
Example.
\(A=\left[\begin{array}{rrrrr}
1&2&3&4&5\\
3&0&1&3&2\\
0&0&4&3&0\\
0&0&0&2&1\\
2&0&0&0&3 \end{array} \right].\)
We compute \(\det(A)\) by expanding, at each step, along a row or column with the maximum number of zeros. First we
choose column 2 and do the cofactor expansion along it:
\[\det(A)=-2 \left|\begin{array}{rrrr}
3&1&3&2\\
0&4&3&0\\
0&0&2&1\\
2&0&0&3 \end{array} \right|\]
Now there are five choices with two zeros each: rows 2, 3, 4 and columns 1, 2. We do the cofactor expansion along row 4:
\[\det(A)=-2 \left( -2
\left|\begin{array}{rrr}
1&3&2\\
4&3&0\\
0&2&1 \end{array} \right| +3
\left|\begin{array}{rrr}
3&1&3\\
0&4&3\\
0&0&2 \end{array} \right|
\right)
\]
The second determinant is that of an upper-triangular matrix, so it equals \(3\cdot 4 \cdot 2=24\).
For the first determinant we do the cofactor expansion along column 3.
\[\begin{align*}
\det(A) &=-2 \left( -2
\left(
2 \left|\begin{array}{rr}
4&3\\
0&2\end{array} \right|
+1 \left|\begin{array}{rr}
1&3\\
4&3 \end{array} \right|
\right)+3\cdot 24
\right)\\
&= -2 \left( -2 \left(2(4\cdot 2-0) +1 (1\cdot 3-3\cdot 4)\right)+72 \right)\\
&=-116
\end{align*}\]
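The value above can be cross-checked numerically; the following is a minimal sketch assuming NumPy is available.

```python
# Numerical cross-check of det(A) for the 5x5 example above (assumes NumPy).
import numpy as np

A = np.array([[1, 2, 3, 4, 5],
              [3, 0, 1, 3, 2],
              [0, 0, 4, 3, 0],
              [0, 0, 0, 2, 1],
              [2, 0, 0, 0, 3]], dtype=float)

print(round(np.linalg.det(A)))  # -116
```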
Some applications of determinants:
Determinant as volume: Suppose a hypersolid \(S\) in \(\mathbb R^n\) (a parallelepiped) has \(n\) concurrent edges
given by the column vectors of an \(n\times n\) matrix \(A\). Then the volume of \(S\) is \(|\det(A)|\).
Example. Let \(\overrightarrow{r_1}=[a_1,b_1,c_1]^T\), \(\overrightarrow{r_2}=[a_2,b_2,c_2]^T\),
\(\overrightarrow{r_3}=[a_3,b_3,c_3]^T\).
Then \(A=[\overrightarrow{r_1}\;\overrightarrow{r_2}\;\overrightarrow{r_3}]=\left[\begin{array}{ccc}a_1&a_2&a_3\\b_1&b_2&b_3\\c_1&c_2&c_3\end{array}\right]\)
and the volume of the parallelepiped with concurrent edges given by \(\overrightarrow{r_1},\overrightarrow{r_2},\overrightarrow{r_3}\) is
\[|\det(A)|=|a_1(b_2c_3-b_3c_2)-a_2(b_1c_3-b_3c_1)+a_3(b_1c_2-b_2c_1)|.\]
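For instance, with \(\overrightarrow{r_1}=[1,0,0]^T\), \(\overrightarrow{r_2}=[1,2,0]^T\), and \(\overrightarrow{r_3}=[1,1,3]^T\),
\[|\det(A)|=\left|1(2\cdot 3-1\cdot 0)-1(0\cdot 3-1\cdot 0)+1(0\cdot 0-2\cdot 0)\right|=6,\]
so the parallelepiped spanned by \(\overrightarrow{r_1},\overrightarrow{r_2},\overrightarrow{r_3}\) has volume \(6\).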
Equation of a plane: Consider the plane passing through three non-collinear points \(P_1(x_1,y_1,z_1)\),
\(P_2(x_2,y_2,z_2)\), and \(P_3(x_3,y_3,z_3)\). A point \(P(x,y,z)\) lies on this plane if and only if the vectors
\(\overrightarrow{P_1P}\), \(\overrightarrow{P_2P}\), and \(\overrightarrow{P_3P}\) are coplanar, i.e., the
parallelepiped with these concurrent edges has volume zero:
\[\left|\begin{array}{ccc}x-x_1&x-x_2&x-x_3\\y-y_1&y-y_2&y-y_3\\z-z_1&z-z_2&z-z_3\end{array}\right|=0.\]
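For example, for \(P_1(1,0,0)\), \(P_2(0,1,0)\), \(P_3(0,0,1)\) the determinant equation becomes
\[\left|\begin{array}{ccc}x-1&x&x\\y&y-1&y\\z&z&z-1\end{array}\right|
=(x-1)(1-y-z)+xy+xz=x+y+z-1=0,\]
i.e., the plane \(x+y+z=1\).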
Volume after transformation: Let \(T:\mathbb R^n\to \mathbb R^n\) be a linear transformation with the standard
matrix \(A\). Let \(S\) be a bounded hypersolid in \(\mathbb R^n\). Then the volume of \(T(S)\) is \(|\det(A)|\) times
the volume of \(S\).
Example. Let \(A=\left[\begin{array}{cc}a&0\\0&b\end{array}\right]\) and
\(D=\{(x,y)\;|\;x^2+y^2\leq 1\}\). Consider \(T:\mathbb R^2\to \mathbb R^2\) defined by \(T([x, y]^T)=A[x, y]^T\).
Note \(T(D)=\{(x,y)\;|\;\frac{x^2}{a^2}+\frac{y^2}{b^2}\leq 1\}\) (assuming \(a,b>0\)). So the area of the ellipse is the area of \(T(D)\), which is \(|\det(A)|\cdot(\text{area of } D)=ab\cdot \pi\, 1^2=\pi ab\).
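The area scaling can also be checked numerically with a rough Monte Carlo estimate; the sketch below assumes NumPy is available, and the values \(a=3\), \(b=2\) are arbitrary illustrative choices.

```python
# Rough Monte Carlo check that area(T(D)) = |det(A)| * area(D) (assumes NumPy).
import numpy as np

rng = np.random.default_rng(0)
a, b = 3.0, 2.0
A = np.array([[a, 0.0], [0.0, b]])

# T(D) is the ellipse x^2/a^2 + y^2/b^2 <= 1, contained in [-a, a] x [-b, b].
xs = rng.uniform(-a, a, 1_000_000)
ys = rng.uniform(-b, b, 1_000_000)
inside = (xs / a) ** 2 + (ys / b) ** 2 <= 1.0
area_TD = (2 * a) * (2 * b) * inside.mean()

print(area_TD)                                 # ~ 18.85
print(abs(np.linalg.det(A)) * np.pi * 1 ** 2)  # |det(A)| * (area of D) = pi*a*b
```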
Change of variables: Suppose variables \(x_1,\ldots,x_n\) are changed to \(v_1,\ldots,v_n\) by \(n\) differentiable
functions \(f_1,\ldots,f_n\) so that
\[\begin{eqnarray*}
v_1&=&f_1(x_1,\ldots,x_n)\\
v_2&=&f_2(x_1,\ldots,x_n)\\
&\vdots &\\
v_n&=&f_n(x_1,\ldots,x_n).
\end{eqnarray*}\]
So we have a function \(F:\mathbb R^n\to \mathbb R^n\) defined by
\[F(x_1,\ldots,x_n)=(f_1(x_1,\ldots,x_n),\ldots,f_n(x_1,\ldots,x_n)).\]
The Jacobian matrix of \(F:\mathbb R^n\to \mathbb R^n\) is the following
\[\frac{\partial(f_1,\ldots,f_n)}{\partial(x_1,\ldots,x_n)}=
\left[\begin{array}{ccc}\frac{\partial f_1}{\partial x_1}&\cdots&\frac{\partial f_1}{\partial x_n}\\
\vdots&\ddots&\vdots\\
\frac{\partial f_n}{\partial x_1}&\cdots&\frac{\partial f_n}{\partial x_n}
\end{array} \right].\]
The change of variables formula for integrals (with the Jacobian determinant taken in absolute value) is
\[\int_{F(U)}G(\overrightarrow{v})\,d\overrightarrow{v}=
\int_{U}G(F(\overrightarrow{x}))\left|\frac{\partial(f_1,\ldots,f_n)}{\partial(x_1,\ldots,x_n)}\right| d\overrightarrow{x}.\]
Example. Let \((x,y)=F(r,\theta)=(ar\cos\theta,br\sin\theta)\). Then \(F([0,1]\times[0,2\pi])\) is the region enclosed
by the ellipse \(\frac{x^2}{a^2}+\frac{y^2}{b^2}=1\). The Jacobian matrix is
\[\frac{\partial(x,y)}{\partial(r,\theta)}=
\left[\begin{array}{cc}
\frac{\partial x}{\partial r}&\frac{\partial x}{\partial \theta}\\
\frac{\partial y}{\partial r}&\frac{\partial y}{\partial \theta}
\end{array} \right]=
\left[\begin{array}{cc}
a\cos\theta&-ar\sin\theta\\
b\sin\theta&br\cos\theta
\end{array} \right]\text{ and } \left|\frac{\partial(x,y)}{\partial(r,\theta)}\right|=abr.\]
By the change of variables formula,
\[\int_{F([0,1]\times[0,2\pi])}1\;d\overrightarrow{v}=
\int_{\theta=0}^{2\pi}\int_{r=0}^11\; \left|\frac{\partial(x,y)}{\partial(r,\theta)}\right| drd\theta=ab\cdot \pi.\]
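The Jacobian determinant and the integral above can also be verified symbolically; the sketch below assumes SymPy is available.

```python
# Symbolic check of the Jacobian determinant and the ellipse area (assumes SymPy).
import sympy as sp

a, b, r, theta = sp.symbols('a b r theta', positive=True)
x = a * r * sp.cos(theta)
y = b * r * sp.sin(theta)

# Jacobian matrix d(x, y)/d(r, theta) and its determinant.
J = sp.Matrix([[sp.diff(x, r), sp.diff(x, theta)],
               [sp.diff(y, r), sp.diff(y, theta)]])
print(sp.simplify(J.det()))  # a*b*r

# Area of the region enclosed by the ellipse, via the change of variables formula.
area = sp.integrate(sp.simplify(J.det()), (r, 0, 1), (theta, 0, 2 * sp.pi))
print(area)  # pi*a*b
```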
Wronskian: The Wronskian of \(n\) real-valued, \((n-1)\)-times differentiable functions \(f_1,\ldots,f_n\) is
\[W(f_1,\ldots,f_n)(x)=
\left|\begin{array}{ccc} f_1(x)&\cdots&f_n(x)\\
f^{'}_1(x)&\cdots&f^{'}_n(x)\\
\vdots&\ddots&\vdots\\
f^{(n-1)}_1(x)&\cdots&f^{(n-1)}_n(x)\\
\end{array} \right|.\]
If \(W(f_1,\ldots,f_n)\) is not identically zero, then \(f_1,\ldots,f_n\) are linearly independent functions.
(The converse holds for solutions of a linear homogeneous differential equation, but is not true for arbitrary functions.)
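For example, for \(f_1(x)=\cos x\) and \(f_2(x)=\sin x\),
\[W(\cos x,\sin x)(x)=\left|\begin{array}{cc}\cos x&\sin x\\-\sin x&\cos x\end{array}\right|=\cos^2 x+\sin^2 x=1,\]
which is not identically zero, so \(\cos x\) and \(\sin x\) are linearly independent.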