Definition.
The inner product (or dot product) of two vectors \(\overrightarrow{u}=[u_1,u_2,\ldots,u_n]^T\) and \(\overrightarrow{v}=[v_1,v_2,\ldots,v_n]^T\)
in \(\mathbb R^n\), denoted by \(\overrightarrow{u} \cdot \overrightarrow{v}\), is defined by
\(\overrightarrow{u} \cdot \overrightarrow{v}=\overrightarrow{u}^T \overrightarrow{v}=u_1v_1+u_2v_2+\cdots+u_nv_n\).
Example.
For \(\overrightarrow{u}=\left[\begin{array}{r}1\\-2\\3\end{array} \right]\) and \(\overrightarrow{v}=\left[\begin{array}{r}2\\1\\-1\end{array} \right]\),
\(\overrightarrow{u} \cdot \overrightarrow{v}=\overrightarrow{u}^T \overrightarrow{v}=1\cdot 2-2\cdot 1+3\cdot(-1)=-3\).
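As a quick sanity check, here is a minimal NumPy sketch of this computation (the names u and v are illustrative, not part of the text):

```python
# Minimal check of the dot product example above.
import numpy as np

u = np.array([1, -2, 3])
v = np.array([2, 1, -1])

# u . v = u^T v = 1*2 + (-2)*1 + 3*(-1)
print(u @ v)  # -3
```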
Theorem.
The following are true for all \(\overrightarrow{u}\), \(\overrightarrow{v}\), \(\overrightarrow{w}\) in \(\mathbb R^n\)
and for all scalars \(c\), \(d\) in \(\mathbb R\).
\(\overrightarrow{u} \cdot \overrightarrow{v}=\overrightarrow{v} \cdot \overrightarrow{u}\). (symmetry)
\((c\overrightarrow{u}+d\overrightarrow{v}) \cdot \overrightarrow{w}=c(\overrightarrow{u} \cdot \overrightarrow{w})+d(\overrightarrow{v} \cdot \overrightarrow{w})\). (linearity)
\(\overrightarrow{u} \cdot \overrightarrow{u}\geq 0\), where \(\overrightarrow{u} \cdot \overrightarrow{u}=0\) if and only if
\(\overrightarrow{u}=\overrightarrow{0}\). (nonnegativity)
Definition.
The length or norm of \(\overrightarrow{v}=[v_1,v_2,\ldots,v_n]^T\) in \(\mathbb R^n\), denoted by
\(\left\lVert\overrightarrow{v}\right\rVert\), is defined by \(\left\lVert\overrightarrow{v}\right\rVert
=\sqrt{v_1^2+v_2^2+\cdots+v_n^2}\).
\(\overrightarrow{v}\in \mathbb R^n\) is a unit vector if \(\left\lVert\overrightarrow{v}\right\rVert=1\).
Remark.
The following are true for all \(\overrightarrow{v}\) in \(\mathbb R^n\) and for all scalars \(c\) in \(\mathbb R\).
\(\left\lVert\overrightarrow{v}\right\rVert\geq 0\), where \(\left\lVert\overrightarrow{v}\right\rVert=0\) if and only if \(\overrightarrow{v}=\overrightarrow{0}\).
\(\left\lVert c\overrightarrow{v}\right\rVert=|c|\left\lVert\overrightarrow{v}\right\rVert\).
The unit vector in the direction of \(\overrightarrow{v}\neq \overrightarrow{0}\) is
\(\frac{1}{\left\lVert\overrightarrow{v}\right\rVert}\overrightarrow{v}\).
Example.
The unit vector in the opposite direction of \(\overrightarrow{v}=\left[\begin{array}{r}1\\-2\\3\end{array} \right]\)
is \(\frac{-1}{\left\lVert\overrightarrow{v}\right\rVert}\overrightarrow{v}
=\frac{1}{\sqrt{14}} \left[\begin{array}{r}-1\\2\\-3\end{array} \right]\).
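A short NumPy sketch of the norm and unit-vector computations above (variable names are illustrative):

```python
# Norm of v and the unit vectors in the same and opposite directions.
import numpy as np

v = np.array([1, -2, 3])

norm_v = np.linalg.norm(v)     # sqrt(1 + 4 + 9) = sqrt(14)
u_same = v / norm_v            # unit vector in the direction of v
u_opp = -v / norm_v            # unit vector in the opposite direction

print(norm_v)                  # 3.7416573867739413
print(np.linalg.norm(u_opp))   # 1.0
```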
Definition.
The distance between \(\overrightarrow{u}\) and \(\overrightarrow{v}\) in \(\mathbb R^n\), denoted by
\(\operatorname{d} (\overrightarrow{u},\overrightarrow{v})\), is defined by
\[\operatorname{d}(\overrightarrow{u},\overrightarrow{v})=\left\lVert\overrightarrow{u}-\overrightarrow{v}\right\rVert.\]
Note that \(\operatorname{d}(\overrightarrow{u},\overrightarrow{v})^2=\left\lVert\overrightarrow{u}-\overrightarrow{v}\right\rVert^2
=\left\lVert\overrightarrow{u}\right\rVert^2+\left\lVert\overrightarrow{v}\right\rVert^2-2 \overrightarrow{u} \cdot \overrightarrow{v}\)
and
\(\operatorname{d}(\overrightarrow{u},-\overrightarrow{v})^2=\left\lVert\overrightarrow{u}+\overrightarrow{v}\right\rVert^2
=\left\lVert\overrightarrow{u}\right\rVert^2+\left\lVert\overrightarrow{v}\right\rVert^2+2 \overrightarrow{u} \cdot \overrightarrow{v}\).
So \(\overrightarrow{u}\) and \(\overrightarrow{v}\) are perpendicular if and only if \(\operatorname{d}(\overrightarrow{u},\overrightarrow{v})
=\operatorname{d}(\overrightarrow{u},-\overrightarrow{v})\) if and only if \(\overrightarrow{u} \cdot \overrightarrow{v}=0\).
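These two identities are easy to confirm numerically; here is a sketch reusing the vectors from the earlier example:

```python
# Check d(u, v)^2 = ||u||^2 + ||v||^2 - 2 u.v and the companion
# identity for d(u, -v)^2.
import numpy as np

u = np.array([1.0, -2.0, 3.0])
v = np.array([2.0, 1.0, -1.0])

base = np.linalg.norm(u)**2 + np.linalg.norm(v)**2
print(np.isclose(np.linalg.norm(u - v)**2, base - 2 * (u @ v)))  # True
print(np.isclose(np.linalg.norm(u + v)**2, base + 2 * (u @ v)))  # True
```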
Definition.
Two vectors \(\overrightarrow{u}\) and \(\overrightarrow{v}\) in \(\mathbb R^n\) are orthogonal
if \(\overrightarrow{u} \cdot \overrightarrow{v}=0\).
Example.
Let \(\overrightarrow{u}=[3,2,-5,0]^T\) and \(\overrightarrow{v}=[-4,1,-2,1]^T\).
Determine if \(\overrightarrow{u}\) and \(\overrightarrow{v}\) are orthogonal.
Solution. Since \(\overrightarrow{u} \cdot \overrightarrow{v}=3\cdot(-4)+2\cdot1-5\cdot(-2)+0\cdot 1=0\),
\(\overrightarrow{u}\) and \(\overrightarrow{v}\) are orthogonal.
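The same check in NumPy (a sketch; names are illustrative):

```python
# Orthogonality test for the vectors in the example.
import numpy as np

u = np.array([3, 2, -5, 0])
v = np.array([-4, 1, -2, 1])

print(u @ v)  # 0, so u and v are orthogonal
```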
Theorem. (Pythagorean Theorem)
Two vectors \(\overrightarrow{u}\) and \(\overrightarrow{v}\) in \(\mathbb R^n\) are orthogonal if and only if
\(\left\lVert\overrightarrow{u}+\overrightarrow{v}\right\rVert^2=\left\lVert\overrightarrow{u}\right\rVert^2
+\left\lVert\overrightarrow{v}\right\rVert^2\).
Proof. Note that
\[\left\lVert\overrightarrow{u}+\overrightarrow{v}\right\rVert^2=(\overrightarrow{u}+\overrightarrow{v})\cdot (\overrightarrow{u}+\overrightarrow{v})=\overrightarrow{u}\cdot \overrightarrow{u}+\overrightarrow{v}\cdot \overrightarrow{v}+2\overrightarrow{u}\cdot \overrightarrow{v}
=\left\lVert\overrightarrow{u}\right\rVert^2+\left\lVert\overrightarrow{v}\right\rVert^2+2\overrightarrow{u}\cdot \overrightarrow{v}.\]
Then \(\left\lVert\overrightarrow{u}+\overrightarrow{v}\right\rVert^2=\left\lVert\overrightarrow{u}\right\rVert^2
+\left\lVert\overrightarrow{v}\right\rVert^2\)
if and only if \(\overrightarrow{u}\cdot \overrightarrow{v}=0\).
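A numeric sketch of this equivalence, reusing the orthogonal pair from the previous example:

```python
# ||u + v||^2 should equal ||u||^2 + ||v||^2 exactly when u . v = 0.
import numpy as np

u = np.array([3, 2, -5, 0])
v = np.array([-4, 1, -2, 1])

lhs = np.linalg.norm(u + v)**2
rhs = np.linalg.norm(u)**2 + np.linalg.norm(v)**2
print(np.isclose(lhs, rhs))  # True, consistent with u . v = 0
```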
Definition.
The angle \(\theta\) between two nonzero vectors \(\overrightarrow{u}\) and \(\overrightarrow{v}\) in \(\mathbb R^n\)
is the angle in \([0,\pi]\) satisfying
\[\overrightarrow{u} \cdot \overrightarrow{v}=\left\lVert\overrightarrow{u}\right\rVert \left\lVert\overrightarrow{v}\right\rVert \cos \theta.\]
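In code, the angle is recovered by solving this identity for \(\theta\); a small sketch with illustrative vectors (the clip guards against floating-point values slightly outside \([-1,1]\)):

```python
# Angle between two sample vectors via the defining identity.
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])

cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
print(theta)  # pi/4, approximately 0.7853981633974484
```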
Definition.
Let \(W\) be a subspace of \(\mathbb R^n\). A vector \(\overrightarrow{v}\in \mathbb R^n\) is
orthogonal to \(W\) if \(\overrightarrow{v}\cdot \overrightarrow{w} =0\) for all \(\overrightarrow{w}\in W\).
The orthogonal complement of \(W\), denoted by \(W^{\perp}\), is the set of all vectors in \(\mathbb R^n\)
that are orthogonal to \(W\), i.e.,
\[W^{\perp}=\{\overrightarrow{v}\in \mathbb R^n \;|\; \overrightarrow{v}\cdot \overrightarrow{w} =0 \text{ for all } \overrightarrow{w}\in W\}.\]
Example.
If \(L\) is a line in \(\mathbb R^2\) through the origin, then \(L^{\perp}\) is the line through the origin that is
perpendicular to \(L\).
If \(L\) is a line in \(\mathbb R^3\) through the origin, then \(L^{\perp}\) is the plane through the origin that
is perpendicular to \(L\). Note that \((L^{\perp})^{\perp}=L\).
Theorem.
Let \(W=\operatorname{Span} \{\overrightarrow{w_1},\overrightarrow{w_2},\ldots,\overrightarrow{w_k}\}\) be a subspace of \(\mathbb R^n\).
Then the following hold.
(1) \(\overrightarrow{v} \in W^{\perp}\) if and only if \(\overrightarrow{v}\cdot \overrightarrow{w_i}=0\)
for \(i=1,2,\ldots,k\).
(2) \(W^{\perp}\) is a subspace of \(\mathbb R^n\).
(3) \((W^{\perp})^{\perp}=W\).
(4) \(W\cap W^{\perp}=\{\overrightarrow{0}\}\).
Proof. (1) Let \(\overrightarrow{v} \in W^{\perp}\). Then \(\overrightarrow{v}\cdot \overrightarrow{w} =0\) for all
\(\overrightarrow{w}\in W\). Since \(\overrightarrow{w_i}\in W\) for \(i=1,2,\ldots,k\), \(\overrightarrow{v}\cdot \overrightarrow{w_i} =0\)
for \(i=1,2,\ldots,k\).
Conversely, suppose that \(\overrightarrow{v}\cdot \overrightarrow{w_i}=0\) for \(i=1,2,\ldots,k\).
Let \(\overrightarrow{w}\in W=\operatorname{Span} \{\overrightarrow{w_1},\overrightarrow{w_2},\ldots,\overrightarrow{w_k}\}\).
Then \(\overrightarrow{w}=c_1\overrightarrow{w_1} +c_2\overrightarrow{w_2}+\cdots+c_k\overrightarrow{w_k}\) for some scalars
\(c_1,c_2,\ldots,c_k\).
Then
\[\overrightarrow{v}\cdot \overrightarrow{w}= \overrightarrow{v}\cdot (c_1\overrightarrow{w_1} +c_2\overrightarrow{w_2}+\cdots+c_k\overrightarrow{w_k})
=c_1(\overrightarrow{v}\cdot\overrightarrow{w_1}) +c_2(\overrightarrow{v}\cdot\overrightarrow{w_2})+\cdots+c_k(\overrightarrow{v}\cdot\overrightarrow{w_k})=0.\]
Thus \(\overrightarrow{v}\cdot \overrightarrow{w} =0\) for all \(\overrightarrow{w}\in W\) and consequently
\(\overrightarrow{v} \in W^{\perp}\).
(2) Since \(\overrightarrow{0}\cdot \overrightarrow{w} =0\) for all \(\overrightarrow{w}\in W\), we have \(\overrightarrow{0} \in W^{\perp}\),
so \(W^{\perp}\neq \varnothing\). Let \(\overrightarrow{u},\overrightarrow{v}\in W^{\perp}\) and \(c,d\in \mathbb R\).
Then for all \(\overrightarrow{w}\in W\),
\[(c\overrightarrow{u}+d\overrightarrow{v})\cdot \overrightarrow{w}=
c(\overrightarrow{u}\cdot \overrightarrow{w})+d(\overrightarrow{v}\cdot \overrightarrow{w})=c\cdot 0+d\cdot 0=0.\]
Thus \(c\overrightarrow{u}+d\overrightarrow{v} \in W^{\perp}\). Therefore \(W^{\perp}\) is a subspace of \(\mathbb R^n\).
The proof of (3) is left as an exercise.
(4) First note that \(\{\overrightarrow{0}\}\subseteq W\cap W^{\perp}\). Let \(\overrightarrow{v}\in W\cap W^{\perp}\).
Then \(\overrightarrow{v}\in W\) and \(\overrightarrow{v}\in W^{\perp}\).
Thus \(\left\lVert\overrightarrow{v}\right\rVert^2=\overrightarrow{v}\cdot \overrightarrow{v}=0\)
which implies \(\overrightarrow{v}=\overrightarrow{0}\). Therefore \(W\cap W^{\perp}=\{\overrightarrow{0}\}\).
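Part (1) gives a finite test for membership in \(W^{\perp}\); a small NumPy sketch with illustrative spanning vectors:

```python
# v is in W^perp iff v is orthogonal to every spanning vector of W.
import numpy as np

w1 = np.array([1.0, 0.0, 1.0])
w2 = np.array([0.0, 1.0, 1.0])     # W = Span{w1, w2}
v = np.array([1.0, 1.0, -1.0])

W_rows = np.vstack([w1, w2])       # rows span W
print(np.allclose(W_rows @ v, 0))  # True, so v is in W^perp
```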
Theorem.
Let \(A\) be an \(m\times n\) real matrix. Then \(\operatorname{RS}(A)^{\perp}=\operatorname{NS}(A)\) and
\(\operatorname{CS}\left(A\right)^{\perp}=\operatorname{NS}(A^T)\).
Proof. To show \(\operatorname{NS}(A)\subseteq \operatorname{RS}(A)^{\perp}\), let
\(\overrightarrow{x}\in \operatorname{NS}(A)=\{ \overrightarrow{x}\in \mathbb R^n \;|\;
A\overrightarrow{x}=\overrightarrow{0}\}\). Then each row of \(A\) is orthogonal to \(\overrightarrow{x}\).
Since \(\operatorname{RS}(A)\) is the span of the rows of \(A\), \(\overrightarrow{x}\) is orthogonal to every vector in
\(\operatorname{RS}(A)\). Then \(\overrightarrow{x}\in \operatorname{RS}(A)^{\perp}\). Thus
\(\operatorname{NS}(A)\subseteq \operatorname{RS}(A)^{\perp}\). For the reverse inclusion, let
\(\overrightarrow{x}\in \operatorname{RS}(A)^{\perp}\). Since the rows of \(A\) are in \(\operatorname{RS}(A)\),
\(\overrightarrow{x}\) is orthogonal to each row of \(A\). Then \(A\overrightarrow{x}=\overrightarrow{0}\)
and \(\overrightarrow{x}\in \operatorname{NS}(A)\). Thus \(\operatorname{RS}(A)^{\perp}\subseteq \operatorname{NS}(A)\).
Finally \(\operatorname{NS}(A^T)=\operatorname{RS}(A^T)^{\perp}=\operatorname{CS}\left(A\right)^{\perp}\) because
\(\operatorname{RS}(A^T)=\operatorname{CS}\left(A\right)\).
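This relationship is easy to check numerically; a sketch using SciPy's null_space (the matrix A is an arbitrary illustrative choice):

```python
# Every null space vector should be orthogonal to every row of A,
# reflecting RS(A)^perp = NS(A).
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

N = null_space(A)             # columns form an orthonormal basis of NS(A)
print(np.allclose(A @ N, 0))  # True: rows of A are orthogonal to NS(A)
```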