Before recording some basic properties of inner products, observe that
\begin{equation*}
\langle \mathbf{0}, \mathbf{v} \rangle = \langle \mathbf{0} + \mathbf{0}, \mathbf{v} \rangle =
\langle \mathbf{0}, \mathbf{v} \rangle + \langle \mathbf{0}, \mathbf{v} \rangle
\end{equation*}
and it follows that the number
\(\langle\mathbf{0}, \mathbf{v}\rangle\) must be zero. This observation is recorded for reference in the following theorem, along with several other properties of inner products. The other proofs are left as
Exercise 9.4.2.24.
If \(\langle\ , \rangle\) is an inner product on a space \(V\text{,}\) then, given \(\mathbf{u}\text{,}\) \(\mathbf{v}\text{,}\) and \(\mathbf{w}\) in \(V\text{,}\)
\begin{equation*}
\langle r\mathbf{u} + s\mathbf{v}, \mathbf{w} \rangle = \langle r\mathbf{u}, \mathbf{w} \rangle + \langle s\mathbf{v}, \mathbf{w} \rangle = r\langle \mathbf{u}, \mathbf{w} \rangle + s\langle \mathbf{v}, \mathbf{w} \rangle
\end{equation*}
for all
\(r\) and
\(s\) in
\(\R\) by
Item 3 and
Item 4 of
Definition 9.4.1. Moreover, there is nothing special about the fact that there are two terms in the linear combination or that it is in the first component:
\begin{equation*}
\langle r_1\mathbf{v}_1 + r_2\mathbf{v}_2 + \dots + r_n\mathbf{v}_n, \mathbf{w} \rangle =
r_1\langle \mathbf{v}_1, \mathbf{w} \rangle +
r_2\langle \mathbf{v}_2, \mathbf{w} \rangle + \dots +
r_n\langle \mathbf{v}_n, \mathbf{w} \rangle
\end{equation*}
and
\begin{equation*}
\langle \mathbf{v}, s_1\mathbf{w}_1 + s_2\mathbf{w}_2 + \dots + s_m\mathbf{w}_m \rangle =
s_1\langle \mathbf{v}, \mathbf{w}_1 \rangle +
s_2\langle \mathbf{v}, \mathbf{w}_2 \rangle + \dots +
s_m\langle \mathbf{v}, \mathbf{w}_m \rangle
\end{equation*}
hold for all \(r_{i}\) and \(s_{i}\) in \(\R\) and all \(\mathbf{v}\text{,}\) \(\mathbf{w}\text{,}\) \(\mathbf{v}_{i}\text{,}\) and \(\mathbf{w}_{j}\) in \(V\text{.}\) These results are described by saying that inner products ``preserve'' linear combinations. For example,
\begin{align*}
\langle 2\mathbf{u} - \mathbf{v}, 3\mathbf{u} + 2\mathbf{v} \rangle \amp =
\langle 2\mathbf{u}, 3\mathbf{u} \rangle + \langle 2\mathbf{u}, 2\mathbf{v} \rangle + \langle -\mathbf{v}, 3\mathbf{u} \rangle + \langle -\mathbf{v}, 2\mathbf{v} \rangle \\
\amp = 6 \langle \mathbf{u}, \mathbf{u} \rangle + 4 \langle \mathbf{u}, \mathbf{v} \rangle -3 \langle \mathbf{v}, \mathbf{u} \rangle - 2 \langle \mathbf{v}, \mathbf{v} \rangle \\
\amp = 6 \langle \mathbf{u}, \mathbf{u} \rangle + \langle \mathbf{u}, \mathbf{v} \rangle - 2 \langle \mathbf{v}, \mathbf{v} \rangle
\end{align*}
If \(A\) is a symmetric \(n \times n\) matrix and \(\mathbf{x}\) and \(\mathbf{y}\) are columns in \(\R^n\text{,}\) we regard the \(1 \times 1\) matrix \(\mathbf{x}^{T}A\mathbf{y}\) as a number. If we write
\begin{equation*}
\langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x}^TA\mathbf{y} \quad \mbox{ for all columns } \mathbf{x}, \mathbf{y} \mbox{ in } \R^n,
\end{equation*}
then the first four conditions of Definition 9.4.1 follow from matrix arithmetic. The remaining condition, Item 5, reads
\begin{equation*}
\mathbf{x}^TA \mathbf{x} \gt 0 \quad \mbox{ for all columns } \mathbf{x} \neq \mathbf{0} \mbox{ in } \R^n
\end{equation*}
and this condition characterizes the positive definite matrices (see
Theorem 10.7.3). This proves the first assertion in the next theorem.
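As a quick check, taking \(A = I_n\) gives back the familiar dot product on \(\R^n\text{:}\)
\begin{equation*}
\langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x}^T I_n \mathbf{y} = x_1y_1 + x_2y_2 + \dots + x_ny_n = \mathbf{x} \cdot \mathbf{y},
\end{equation*}
and \(I_n\) is positive definite because \(\mathbf{x}^T I_n \mathbf{x} = x_1^2 + \dots + x_n^2 \gt 0\) whenever \(\mathbf{x} \neq \mathbf{0}\text{.}\)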
The theorem and its proof may suggest that finding such a form is difficult. To dispel this impression, we provide an example with details.
Subsection 9.4.1 Norm and Distance
Definition 9.4.9.
As in \(\R^n\text{,}\) if \(\langle\ , \rangle\) is an inner product on a space \(V\text{,}\) the norm \(\norm{\mathbf{v}}\) of a vector \(\mathbf{v}\) in \(V\) is defined by
\begin{equation*}
\norm{ \mathbf{v} } = \sqrt{\langle \mathbf{v}, \mathbf{v} \rangle}.
\end{equation*}
We define the distance between vectors \(\mathbf{v}\) and \(\mathbf{w}\) in an inner product space \(V\) to be
\begin{equation*}
\mbox{d}(\mathbf{v}, \mathbf{w}) = \norm{ \mathbf{v} - \mathbf{w} }.
\end{equation*}
Remark 9.4.10.
If the dot product is used in \(\R^n\text{,}\) the norm \(\norm{\mathbf{x}}\) of a vector \(\mathbf{x}\) is usually called the length of \(\mathbf{x}\text{.}\)
Note that
Item 5 of
Definition 9.4.1 guarantees that
\(\langle\mathbf{v}, \mathbf{v}\rangle \geq 0\text{,}\) so
\(\norm{\mathbf{v}}\) is a real number.
Example 9.4.11.
The norm of a continuous function
\(f = f(x)\) in
\(\mathcal{C}[a, b]\) (with the inner product from
Example 9.4.4) is given by
\begin{equation*}
\norm{ f } = \sqrt{\int_{a}^{b} f(x)^2dx}.
\end{equation*}
Hence \(\norm{ f}^{2}\) is the area beneath the graph of \(y = f(x)^{2}\) between \(x = a\) and \(x = b\text{.}\)
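For instance, taking \(f(x) = x\) in \(\mathcal{C}[0, 1]\) gives
\begin{equation*}
\norm{ f } = \sqrt{\int_{0}^{1} x^2dx} = \frac{1}{\sqrt{3}}.
\end{equation*}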
Example 9.4.12.
Show that \(\langle\mathbf{u} + \mathbf{v}, \mathbf{u} - \mathbf{v}\rangle = \norm{\mathbf{u}}^{2} - \norm{\mathbf{v}}^{2}\) in any inner product space.
Answer.
\begin{align*}
\langle \mathbf{u} + \mathbf{v}, \mathbf{u} - \mathbf{v} \rangle \amp = \langle \mathbf{u}, \mathbf{u} \rangle - \langle \mathbf{u}, \mathbf{v} \rangle + \langle \mathbf{v}, \mathbf{u} \rangle - \langle \mathbf{v}, \mathbf{v} \rangle \\
\amp = \norm{ \mathbf{u} }^2 - \langle \mathbf{u}, \mathbf{v} \rangle + \langle \mathbf{u}, \mathbf{v} \rangle - \norm{ \mathbf{v} }^2 \\
\amp = \norm{ \mathbf{u} }^2 - \norm{ \mathbf{v} }^2.
\end{align*}
A vector \(\mathbf{v}\) in an inner product space \(V\) is called a unit vector if \(\norm{\mathbf{v}} = 1\text{.}\) The set of all unit vectors in \(V\) is called the unit ball in \(V\text{.}\) For example, if \(V = \R^2\) (with the dot product) and \(\mathbf{v} = (x, y)\text{,}\) then
\begin{equation*}
\norm{ \mathbf{v} }^2 = 1 \quad \mbox{ if and only if } \quad x^2 + y^2 = 1
\end{equation*}
Hence the unit ball in \(\R^2\) is the unit circle \(x^{2} + y^{2} = 1\) with centre at the origin and radius \(1\text{.}\) However, the shape of the unit ball varies with the choice of inner product.
Unit balls need not be ``balls'' in the usual sense: their shape depends on the norm in play, and therefore on the inner product. Let us see an example.
Example 9.4.13.
Let \(a \gt 0\) and \(b \gt 0\text{.}\) If \(\mathbf{v} = (x, y)\) and \(\mathbf{w} = (x_{1}, y_{1})\text{,}\) define an inner product on \(\R^2\) by
\begin{equation*}
\langle \mathbf{v}, \mathbf{w} \rangle = \frac{xx_1}{a^2} + \frac{yy_1}{b^2}.
\end{equation*}
The reader can verify (see
Exercise 9.4.2.5) that this is indeed an inner product. In this case
\begin{equation*}
\norm{ \mathbf{v} }^2 = 1 \quad \mbox{ if and only if } \quad \frac{x^2}{a^2} + \frac{y^2}{b^2} = 1,
\end{equation*}
so the unit ball is the ellipse shown in the diagram.
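For instance, the vector \((a, 0)\) lies on this unit ball, since
\begin{equation*}
\norm{ (a, 0) }^2 = \frac{a \cdot a}{a^2} + \frac{0 \cdot 0}{b^2} = 1.
\end{equation*}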
Theorem 9.4.14.
If \(\mathbf{v} \neq \mathbf{0}\) is any vector in an inner product space \(V\text{,}\) then \(\frac{1}{\norm{ \mathbf{v} }} \mathbf{v}\) is the unique unit vector that is a positive multiple of \(\mathbf{v}\text{.}\)
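For example, in \(\R^2\) with the dot product, \(\mathbf{v} = (3, 4)\) has \(\norm{ \mathbf{v} } = 5\text{,}\) so the unit vector of Theorem 9.4.14 is
\begin{equation*}
\frac{1}{\norm{ \mathbf{v} }} \mathbf{v} = \frac{1}{5}(3, 4) = \left(\frac{3}{5}, \frac{4}{5}\right).
\end{equation*}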
The next theorem reveals an important and useful fact about the relationship between norms and inner products.
Theorem 9.4.15. Cauchy-Schwarz Inequality.
If \(\mathbf{v}\) and \(\mathbf{w}\) are two vectors in an inner product space \(V\text{,}\) then
\begin{equation*}
\langle \mathbf{v}, \mathbf{w} \rangle^2 \leq \norm{ \mathbf{v} }^2 \norm{ \mathbf{w} }^2.
\end{equation*}
Moreover, equality occurs if and only if one of \(\mathbf{v}\) and \(\mathbf{w}\) is a scalar multiple of the other.
Proof.
Write
\(\norm{\mathbf{v}} = a\) and
\(\norm{\mathbf{w}} = b\text{.}\) Using
Theorem 9.4.5 we compute:
\begin{align}
\norm{ b\mathbf{v} - a \mathbf{w} }^2 \amp = b^2 \norm{ \mathbf{v} }^2 - 2ab \langle \mathbf{v}, \mathbf{w} \rangle + a^2\norm{ \mathbf{w} }^2 \tag{9.4.1}\\
\amp = 2ab(ab - \langle \mathbf{v}, \mathbf{w} \rangle), \tag{9.4.2}\\
\norm{ b\mathbf{v} + a \mathbf{w} }^2 \amp = b^2 \norm{ \mathbf{v} }^2 + 2ab \langle \mathbf{v}, \mathbf{w} \rangle + a^2\norm{ \mathbf{w} }^2 \tag{9.4.3}\\
\amp = 2ab(ab + \langle \mathbf{v}, \mathbf{w} \rangle). \tag{9.4.4}
\end{align}
If \(a = 0\) or \(b = 0\text{,}\) then \(\mathbf{v} = \mathbf{0}\) or \(\mathbf{w} = \mathbf{0}\) and the inequality holds trivially, so assume \(ab \gt 0\text{.}\) Since \(\norm{ b\mathbf{v} - a\mathbf{w} }^2 \geq 0\) and \(\norm{ b\mathbf{v} + a\mathbf{w} }^2 \geq 0\text{,}\) (9.4.2) and (9.4.4) show that \(ab - \langle\mathbf{v}, \mathbf{w}\rangle \geq 0\) and \(ab + \langle\mathbf{v}, \mathbf{w}\rangle \geq 0\text{,}\) and hence that \(-ab \leq \langle\mathbf{v}, \mathbf{w}\rangle \leq ab\text{.}\) But then \(| \langle\mathbf{v}, \mathbf{w}\rangle | \leq ab = \norm{\mathbf{v}} \norm{ \mathbf{w} }\text{,}\) as desired. Conversely, if
\begin{equation*}
|\langle \mathbf{v}, \mathbf{w}\rangle | =
\norm{\mathbf{v}} \norm{ \mathbf{w} } = ab,
\end{equation*}
then \(\langle\mathbf{v}, \mathbf{w}\rangle = \pm ab\text{.}\) This shows that \(b\mathbf{v} - a\mathbf{w} = \mathbf{0}\) or \(b\mathbf{v} + a\mathbf{w} = \mathbf{0}\text{.}\) It follows that one of \(\mathbf{v}\) and \(\mathbf{w}\) is a scalar multiple of the other, even if \(a = 0\) or \(b = 0\text{.}\)
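As a quick numerical check of the inequality, take \(\mathbf{v} = (1, 2)\) and \(\mathbf{w} = (3, 1)\) in \(\R^2\) with the dot product:
\begin{equation*}
\langle \mathbf{v}, \mathbf{w} \rangle^2 = 5^2 = 25 \leq 50 = 5 \cdot 10 = \norm{ \mathbf{v} }^2 \norm{ \mathbf{w} }^2.
\end{equation*}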
The following special case may seem more familiar to students with a keen eye on calculus.
Example 9.4.16.
If
\(f\) and
\(g\) are continuous functions on the interval
\([a, b]\text{,}\) then (see
Example 9.4.4)
\begin{equation*}
\left(\int_{a}^{b} f(x)g(x)dx \right) ^2 \leq \int_{a}^{b} f(x)^2 dx \int_{a}^{b} g(x)^2 dx.
\end{equation*}
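For instance, with \(f(x) = x\) and \(g(x) = 1\) on \([0, 1]\) the inequality reads
\begin{equation*}
\left(\int_{0}^{1} x\,dx \right)^2 = \frac{1}{4} \leq \frac{1}{3} = \int_{0}^{1} x^2 dx \int_{0}^{1} 1^2 dx.
\end{equation*}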
Another famous inequality, the so-called triangle inequality, also stems from the Cauchy-Schwarz inequality. It is included in the following list of basic properties of the norm of a vector.
Theorem 9.4.17.
If \(V\) is an inner product space, the norm \(\norm{ \cdot }\) has the following properties.
\(\norm{\mathbf{v}} \geq 0\) for every vector \(\mathbf{v}\) in \(V\text{.}\)
\(\norm{\mathbf{v}} = 0\) if and only if \(\mathbf{v} = \mathbf{0}\text{.}\)
\(\norm{ r \mathbf{v}} = |r|\norm{\mathbf{v}}\) for every \(\mathbf{v}\) in \(V\) and every \(r\) in \(\R\text{.}\)
\(\norm{\mathbf{v} + \mathbf{w}} \leq \norm{\mathbf{v}} + \norm{\mathbf{w}}\) for all \(\mathbf{v}\) and \(\mathbf{w}\) in \(V\) (triangle inequality).
Proof.
Item 1 and Item 2 follow directly from the corresponding properties of \(\langle \mathbf{v}, \mathbf{v} \rangle\text{,}\) since \(\norm{ \mathbf{v} } = \sqrt{\langle \mathbf{v}, \mathbf{v} \rangle}\text{.}\) For Item 3, compute
\begin{equation*}
\norm{ r\mathbf{v} } ^2 = \langle r\mathbf{v}, r\mathbf{v} \rangle = r^2\langle \mathbf{v}, \mathbf{v} \rangle = r^2\norm{ \mathbf{v} }^2
\end{equation*}
Hence
Item 3 follows by taking positive square roots. Finally, the fact that
\(\langle\mathbf{v}, \mathbf{w}\rangle \leq \norm{\mathbf{v}}\norm{\mathbf{w}}\) by the Cauchy-Schwarz inequality gives
\begin{align*}
\norm{ \mathbf{v} + \mathbf{w} } ^2 =
\langle \mathbf{v} + \mathbf{w}, \mathbf{v} + \mathbf{w} \rangle \amp =
\norm{ \mathbf{v} } ^2 + 2 \langle \mathbf{v}, \mathbf{w} \rangle +
\norm{ \mathbf{w} } ^2 \\
\amp \leq \norm{ \mathbf{v} } ^2 +
2 \norm{ \mathbf{v} } \norm{ \mathbf{w} } +
\norm{ \mathbf{w} } ^2 \\
\amp = (\norm{ \mathbf{v} } + \norm{ \mathbf{w} })^2.
\end{align*}
Hence
Item 4 follows by taking positive square roots.
It is worth noting that the usual triangle inequality for absolute values,
\begin{equation*}
| r + s | \leq |r| + |s| \mbox{ for all real numbers } r \mbox{ and } s
\end{equation*}
is a special case of
Item 4 where
\(V = \R = \R^1\) and the dot product
\(\langle r, s \rangle = rs\) is used.
In many calculations in an inner product space, it is required to show that some vector \(\mathbf{v}\) is zero. This is often accomplished most easily by showing that its norm \(\norm{\mathbf{v}}\) is zero. Here is an example.
Example 9.4.18.
Let \(\{\mathbf{v}_{1}, \dots, \mathbf{v}_{n}\}\) be a spanning set for an inner product space \(V\text{.}\) If \(\mathbf{v}\) in \(V\) satisfies \(\langle\mathbf{v}, \mathbf{v}_{i}\rangle = 0\) for each \(i = 1, 2, \dots, n\text{,}\) show that \(\mathbf{v} = \mathbf{0}\text{.}\)
Answer.
Write \(\mathbf{v} = r_{1}\mathbf{v}_{1} + \dots + r_{n}\mathbf{v}_{n}\text{,}\) \(r_{i}\) in \(\R\text{.}\) To show that \(\mathbf{v} = \mathbf{0}\text{,}\) we show that \(\norm{\mathbf{v}}^{2} = \langle\mathbf{v}, \mathbf{v}\rangle = 0\text{.}\) Compute:
\begin{equation*}
\langle \mathbf{v}, \mathbf{v} \rangle
= \langle \mathbf{v}, r_1\mathbf{v}_1 + \dots + r_n\mathbf{v}_n \rangle
= r_1\langle \mathbf{v}, \mathbf{v}_1 \rangle + \dots + r_n \langle \mathbf{v}, \mathbf{v}_n \rangle
= 0
\end{equation*}
by hypothesis, and the result follows.
The norm properties in
Theorem 9.4.17 translate to the following properties of distance familiar from geometry.
Theorem 9.4.19.
Let \(V\) be an inner product space.
\(\mbox{d}(\mathbf{v}, \mathbf{w}) \geq 0\) for all \(\mathbf{v}\text{,}\) \(\mathbf{w}\) in \(V\text{.}\)
\(\mbox{d}(\mathbf{v}, \mathbf{w}) = 0\) if and only if \(\mathbf{v} = \mathbf{w}\text{.}\)
\(\mbox{d}(\mathbf{v}, \mathbf{w}) = \mbox{d}(\mathbf{w}, \mathbf{v})\) for all \(\mathbf{v}\) and \(\mathbf{w}\) in \(V\text{.}\)
\(\mbox{d}(\mathbf{v}, \mathbf{w}) \leq \mbox{d}(\mathbf{v}, \mathbf{u}) + \mbox{d}(\mathbf{u}, \mathbf{w})\) for all \(\mathbf{v}\text{,}\) \(\mathbf{u}\text{,}\) and \(\mathbf{w}\) in \(V\text{.}\)
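For instance, the last property is the triangle inequality of Theorem 9.4.17 in disguise: writing \(\mathbf{v} - \mathbf{w} = (\mathbf{v} - \mathbf{u}) + (\mathbf{u} - \mathbf{w})\) gives
\begin{equation*}
\mbox{d}(\mathbf{v}, \mathbf{w}) = \norm{ (\mathbf{v} - \mathbf{u}) + (\mathbf{u} - \mathbf{w}) } \leq \norm{ \mathbf{v} - \mathbf{u} } + \norm{ \mathbf{u} - \mathbf{w} } = \mbox{d}(\mathbf{v}, \mathbf{u}) + \mbox{d}(\mathbf{u}, \mathbf{w}).
\end{equation*}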
Exercises 9.4.2
1.
In each case, determine whether \(\langle\ , \rangle\) defines an inner product on \(V\text{.}\) If not, indicate which conditions of Definition 9.4.1 fail.
\(V = \R^2\text{,}\) \(\left\langle \begin{bmatrix}x_1\\ y_1\end{bmatrix}, \begin{bmatrix}x_2\\ y_2\end{bmatrix} \right\rangle = x_1y_1x_2y_2\text{.}\)
\(V = \R^3\text{,}\) \(\left\langle \begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix}, \begin{bmatrix}y_1\\ y_2\\ y_3\end{bmatrix} \right\rangle = x_1y_1 - x_2y_2 + x_3y_3\text{.}\)
\(V = \mathbb{C}\text{,}\) \(\langle z, w \rangle = z\overline{w}\text{,}\) where \(\overline{w}\) denotes the complex conjugate of \(w\text{.}\)
\(V = \mathbb{P}^3\text{,}\) \(\langle p(x), q(x) \rangle = p(1)q(1)\text{.}\)
\(V = \mathbb{M}_{22}\text{,}\) \(\langle A, B \rangle = \mbox{det}(AB)\)
\(V = \mathcal{F}[0, 1]\text{,}\) \(\langle f, g \rangle = f(1)g(0) + f(0)g(1).\)
Answer.
(c): Here
Item 1 fails, since \(z\overline{w}\) need not be a real number.
2.
Let \(V\) be an inner product space. If \(U \subseteq V\) is a subspace, show that \(U\) is an inner product space using the same inner product.
Hint.
Item 1--
Item 5 hold in
\(U\) because they hold in
\(V\text{.}\)
3.
In each case, find a scalar multiple of \(\mathbf{v}\) that is a unit vector.
\(\mathbf{v} = f\) in \(\mathcal{C}[0, 1]\) where \(f(x) = x^2\) and
\begin{equation*}
\langle f, g \rangle = \int_{0}^{1} f(x)g(x)dx.
\end{equation*}
\(\mathbf{v} = f\) in \(\mathcal{C}[-\pi, \pi]\) where \(f(x) = \cos x\) and
\begin{equation*}
\langle f, g \rangle = \int_{-\pi}^{\pi} f(x)g(x)dx.
\end{equation*}
\(\mathbf{v} = [1,3]\) in \(\R^2\text{,}\) where
\begin{equation*}
\langle \mathbf{v}, \mathbf{w} \rangle = \mathbf{v}^T
\left[ \begin{array}{rr}
1 \amp 1 \\
1 \amp 2
\end{array} \right]
\mathbf{w}.
\end{equation*}
\(\mathbf{v} = [3,-1]\) in \(\R^2\text{,}\) where
\begin{equation*}
\langle \mathbf{v}, \mathbf{w} \rangle = \mathbf{v}^T
\left[ \begin{array}{rr}
1 \amp -1 \\
-1 \amp 2
\end{array} \right]
\mathbf{w}.
\end{equation*}
Answer.
For (b):
\begin{equation*}
\frac{1}{\sqrt{\pi}}f.
\end{equation*}
For (d):
\begin{equation*}
\frac{1}{\sqrt{17}}
\left[ \begin{array}{r}
3 \\
-1
\end{array} \right].
\end{equation*}
4.
In each case, find the distance between \(\mathbf{u}\) and \(\mathbf{v}\text{.}\)
\begin{equation*}
\mathbf{u} = \begin{bmatrix}3\\ -1\\ 2\\ 0\end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix}1\\ 1\\ 1\\ 3\end{bmatrix};
\langle \mathbf{u}, \mathbf{v} \rangle = \mathbf{u} \cdot \mathbf{v}.
\end{equation*}
\begin{equation*}
\mathbf{u} = \begin{bmatrix}1\\ 2\\ -1\\ 2\end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix}2\\ 1\\ -1\\ 3\end{bmatrix};
\langle \mathbf{u}, \mathbf{v} \rangle = \mathbf{u} \cdot \mathbf{v}.
\end{equation*}
\(\mathbf{u} = f\text{,}\) \(\mathbf{v} = g \) in \(\mathcal{C}[0, 1]\) where \(f(x) = x^2 \) and \(g(x) = 1 - x\text{;}\)
\begin{equation*}
\langle f, g \rangle = \int_{0}^{1} f(x)g(x)dx
\end{equation*}
\(\mathbf{u} = f\text{,}\) \(\mathbf{v} = g \) in \(\mathcal{C}[-\pi, \pi]\) where \(f(x) = 1\) and \(g(x) = \cos x\text{;}\)
\begin{equation*}
\langle f, g \rangle = \int_{-\pi}^{\pi} f(x)g(x)dx.
\end{equation*}
Answer.
For (b):
\begin{equation*}
\sqrt{3}
\end{equation*}
For (d):
\begin{equation*}
\sqrt{3\pi}.
\end{equation*}
5.
Let \(a_{1}, a_{2}, \dots, a_{n}\) be positive numbers. Given \(\mathbf{v} = [v_1, v_2, \ldots , v_n]\) and \(\mathbf{w} = [w_1, w_2, \ldots , w_n]\text{,}\) define \(\langle\mathbf{v}, \mathbf{w}\rangle = a_{1}v_{1}w_{1} + \dots + a_{n}v_{n}w_{n}\text{.}\) Show that this is an inner product on \(\R^n\text{.}\)
6.
If \(\{\mathbf{b}_{1}, \dots, \mathbf{b}_{n}\}\) is a basis of \(V\) and if \(\mathbf{v} = v_1\mathbf{b}_1 + \dots + v_n\mathbf{b}_n\) and \(\mathbf{w} = w_1\mathbf{b}_1 + \dots + w_n\mathbf{b}_n\) are vectors in \(V\text{,}\) define
\begin{equation*}
\langle \mathbf{v}, \mathbf{w} \rangle = v_1w_1 + \dots + v_nw_n .
\end{equation*}
Show that this is an inner product on \(V\text{.}\)
7.
Let \(\mbox{re}(z)\) denote the real part of the complex number \(z\text{.}\) Show that \(\langle\ , \rangle\) is an inner product on \(\mathbb{C}\) if \(\langle z, w \rangle = \mbox{re}(z\overline{w})\text{.}\)
8.
If \(T : V \to V\) is an isomorphism of the inner product space \(V\text{,}\) show that
\begin{equation*}
\langle \mathbf{v}, \mathbf{w} \rangle_1 = \langle T(\mathbf{v}), T(\mathbf{w}) \rangle
\end{equation*}
defines a new inner product \(\langle\ , \rangle_{1}\) on \(V\text{.}\)
9.
Show that every inner product \(\langle\ , \rangle\) on \(\R^n\) has the form
\begin{equation*}
\langle\mathbf{x}, \mathbf{y}\rangle = (U\mathbf{x}) \cdot (U\mathbf{y})
\end{equation*}
for some upper triangular matrix \(U\) with positive diagonal entries.
Exercise Group.
In each case, show that \(\langle\mathbf{v}, \mathbf{w}\rangle = \mathbf{v}^{T}A\mathbf{w}\) defines an inner product on \(\R^2\) and hence show that \(A\) is positive definite.
10.
\begin{equation*}
A =
\left[ \begin{array}{rr}
2 \amp 1 \\
1 \amp 1
\end{array} \right].
\end{equation*}
11.
\begin{equation*}
A =
\left[ \begin{array}{rr}
5 \amp -3 \\
-3 \amp 2
\end{array} \right].
\end{equation*}
Answer.
\begin{equation*}
\langle \mathbf{v}, \mathbf{v} \rangle = 5v_1^2 - 6v_1v_2 + 2v_2^2 =
\frac{1}{5}[(5v_1 - 3v_2)^2 + v_2^2].
\end{equation*}
12.
\begin{equation*}
A =
\left[ \begin{array}{rr}
3 \amp 2 \\
2 \amp 3
\end{array} \right].
\end{equation*}
13.
\begin{equation*}
A =
\left[ \begin{array}{rr}
3 \amp 4 \\
4 \amp 6
\end{array} \right].
\end{equation*}
Answer.
\begin{equation*}
\langle \mathbf{v}, \mathbf{v} \rangle = 3v_1^2 + 8v_1v_2 + 6v_2^2 =
\frac{1}{3}[(3v_1 + 4v_2)^2 + 2v_2^2].
\end{equation*}
Exercise Group.
In each case, find a symmetric matrix \(A\) such that \(\langle\mathbf{v}, \mathbf{w}\rangle = \mathbf{v}^{T}A\mathbf{w}\text{.}\)
14.
\begin{equation*}
\left\langle
\left[ \begin{array}{r}
v_1 \\
v_2
\end{array} \right], \left[ \begin{array}{r}
w_1 \\
w_2
\end{array} \right]
\right\rangle
= v_1w_1 + 2v_1w_2 + 2v_2w_1 + 5v_2w_2.
\end{equation*}
15.
\begin{equation*}
\left\langle
\left[ \begin{array}{r}
v_1 \\
v_2
\end{array} \right], \left[ \begin{array}{r}
w_1 \\
w_2
\end{array} \right]
\right\rangle
= v_1w_1 - v_1w_2 - v_2w_1 + 2v_2w_2.
\end{equation*}
Answer.
\begin{equation*}
\left[ \begin{array}{rr}
1 \amp -1 \\
-1 \amp 2
\end{array} \right].
\end{equation*}
16.
\begin{equation*}
\left\langle
\left[ \begin{array}{r}
v_1 \\
v_2 \\
v_3
\end{array} \right], \left[ \begin{array}{r}
w_1 \\
w_2 \\
w_3
\end{array} \right]
\right\rangle
= 2v_1w_1 + v_2w_2 + v_3w_3 - v_1w_2 - v_2w_1 + v_2w_3 + v_3w_2.
\end{equation*}
17.
\begin{equation*}
\left\langle
\left[ \begin{array}{r}
v_1 \\
v_2 \\
v_3
\end{array} \right], \left[ \begin{array}{r}
w_1 \\
w_2 \\
w_3
\end{array} \right]
\right\rangle
= v_1w_1 + 2v_2w_2 + 5v_3w_3 - 2v_1w_3 - 2v_3w_1.
\end{equation*}
Answer.
\begin{equation*}
\left[ \begin{array}{rrr}
1 \amp 0 \amp -2 \\
0 \amp 2 \amp 0 \\
-2 \amp 0 \amp 5
\end{array} \right].
\end{equation*}
18.
If \(A\) is symmetric and \(\mathbf{x}^{T}A\mathbf{x} = 0\) for all columns \(\mathbf{x}\) in \(\R^n\text{,}\) show that \(A = 0\text{.}\)
Hint.
Consider \(\langle \mathbf{x} + \mathbf{y}, \mathbf{x} + \mathbf{y} \rangle\) where \(\langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x}^TA\mathbf{y}\text{.}\)
Answer.
By the condition, \(\langle \mathbf{x}, \mathbf{x} \rangle = 0\) for every column \(\mathbf{x}\text{,}\) so \(\langle \mathbf{x}, \mathbf{y} \rangle = \frac{1}{2} \langle \mathbf{x} + \mathbf{y}, \mathbf{x} + \mathbf{y} \rangle = 0\) for all \(\mathbf{x}\text{,}\) \(\mathbf{y}\text{.}\) Let \(\mathbf{e}_{i}\) denote column \(i\) of \(I\text{.}\) If \(A = \left[ a_{ij} \right]\text{,}\) then \(a_{ij} = \mathbf{e}_{i}^{T}A\mathbf{e}_{j} = \langle \mathbf{e}_{i}, \mathbf{e}_{j} \rangle = 0\) for all \(i\) and \(j\text{.}\)
19.
Show that the sum of two inner products on \(V\) is again an inner product.
20.
Let \(\norm{ \mathbf{u} } = 1\text{,}\) \(\norm{ \mathbf{v} } = 2\text{,}\) \(\norm{ \mathbf{w} } = \sqrt{3} \text{,}\) \(\langle \mathbf{u}, \mathbf{v} \rangle = -1\text{,}\) \(\langle\mathbf{u}, \mathbf{w}\rangle = 0\) and \(\langle\mathbf{v}, \mathbf{w}\rangle = 3\text{.}\) Compute:
\(\displaystyle \langle \mathbf{v} + \mathbf{w}, 2\mathbf{u} - \mathbf{v} \rangle\)
\(\displaystyle \langle \mathbf{u} - 2 \mathbf{v} - \mathbf{w}, 3\mathbf{w} - \mathbf{v} \rangle\)
Answer.
For (b): \(-15\text{.}\)
21.
Given the data in
Exercise 9.4.2.20, show that
\(\mathbf{u} + \mathbf{v} = \mathbf{w}\text{.}\)
22.
Show that no vectors exist such that \(\norm{\mathbf{u}} = 1\text{,}\) \(\norm{\mathbf{v}} = 2\text{,}\) and \(\langle\mathbf{u}, \mathbf{v}\rangle = -3\text{.}\)
23.
24.
Prove the remaining items of Theorem 9.4.5.
Answer.
\begin{equation*}
\langle \mathbf{u}, \mathbf{v} + \mathbf{w} \rangle =
\langle \mathbf{v} + \mathbf{w}, \mathbf{u} \rangle =
\langle \mathbf{v}, \mathbf{u} \rangle + \langle \mathbf{w}, \mathbf{u} \rangle =
\langle \mathbf{u}, \mathbf{v} \rangle + \langle \mathbf{u}, \mathbf{w} \rangle.
\end{equation*}
\begin{equation*}
\langle \mathbf{v}, r\mathbf{w} \rangle =
\langle r\mathbf{w}, \mathbf{v} \rangle =
r \langle \mathbf{w}, \mathbf{v} \rangle =
r \langle \mathbf{v}, \mathbf{w} \rangle.
\end{equation*}
\begin{equation*}
\langle \mathbf{0}, \mathbf{v} \rangle =
\langle \mathbf{0} + \mathbf{0}, \mathbf{v} \rangle =
\langle \mathbf{0}, \mathbf{v} \rangle + \langle \mathbf{0}, \mathbf{v} \rangle,
\end{equation*}
so
\(\langle \mathbf{0}, \mathbf{v} \rangle = 0.\) The rest is
Item 2.
Assume that
\(\langle \mathbf{v}, \mathbf{v} \rangle = 0\text{.}\) If
\(\mathbf{v} \neq \mathbf{0}\) this contradicts
Item 5, so
\(\mathbf{v} = \mathbf{0}\text{.}\) Conversely, if
\(\mathbf{v} = \mathbf{0}\text{,}\) then
\(\langle \mathbf{v}, \mathbf{v} \rangle = 0\) by Part 3 of this theorem.
25.
Let \(\mathbf{u}\) and \(\mathbf{v}\) be vectors in an inner product space \(V\text{.}\)
Expand \(\langle2\mathbf{u} - 7\mathbf{v}, 3\mathbf{u} + 5\mathbf{v} \rangle\text{.}\)
Expand \(\langle3\mathbf{u} - 4\mathbf{v}, 5\mathbf{u} + \mathbf{v} \rangle\text{.}\)
Show that \(\norm{ \mathbf{u} + \mathbf{v} } ^2 = \norm{ \mathbf{u} } ^2 + 2 \langle \mathbf{u}, \mathbf{v} \rangle + \norm{ \mathbf{v} } ^2 \text{.}\)
Show that \(\norm{ \mathbf{u} - \mathbf{v} } ^2 = \norm{ \mathbf{u} } ^2 - 2 \langle \mathbf{u}, \mathbf{v} \rangle + \norm{ \mathbf{v} } ^2\)
Answer.
For (b):
\begin{equation*}
15\norm{\mathbf{u}}^{2} - 17 \langle \mathbf{u}, \mathbf{v} \rangle - 4\norm{\mathbf{v}}^{2}.
\end{equation*}
For (d):
\begin{equation*}
\norm{\mathbf{u} - \mathbf{v}}^{2} = \langle \mathbf{u} - \mathbf{v}, \mathbf{u} - \mathbf{v} \rangle = \norm{\mathbf{u}}^{2} - 2\langle \mathbf{u}, \mathbf{v}\rangle + \norm{\mathbf{v}}^{2}.
\end{equation*}
26.
Show that
\begin{equation*}
\norm{ \mathbf{v} } ^2 +
\norm{ \mathbf{w} } ^2 = \frac{1}{2} \{
\norm{ \mathbf{v} + \mathbf{w} } ^2 +
\norm{ \mathbf{v} - \mathbf{w} } ^2\}
\end{equation*}
for any \(\mathbf{v}\) and \(\mathbf{w}\) in an inner product space.
27.
Let \(\langle\ , \rangle\) be an inner product on a vector space \(V\text{.}\) Show that the corresponding distance function is translation invariant. That is, show that
\begin{equation*}
\mbox{d}(\mathbf{v}, \mathbf{w}) = \mbox{d}(\mathbf{v} + \mathbf{u}, \mathbf{w} + \mathbf{u})
\end{equation*}
for all \(\mathbf{v}\text{,}\) \(\mathbf{w}\text{,}\) and \(\mathbf{u}\) in \(V\text{.}\)
28.
Show that \(\langle \mathbf{u}, \mathbf{v} \rangle = \frac{1}{4}[\norm{ \mathbf{u} + \mathbf{v} } ^2 - \norm{ \mathbf{u} - \mathbf{v} } ^2]\) for all \(\mathbf{u}\text{,}\) \(\mathbf{v}\) in an inner product space \(V\text{.}\)
If \(\langle\ , \rangle\) and \(\langle\ , \rangle^\prime\) are two inner products on \(V\) that have equal associated norm functions, show that \(\langle\mathbf{u}, \mathbf{v}\rangle = \langle\mathbf{u}, \mathbf{v}\rangle^\prime\) holds for all \(\mathbf{u}\) and \(\mathbf{v}\text{.}\)
29.
Let \(\mathbf{v}\) denote a vector in an inner product space \(V\text{.}\)
Show that \(W = \{\mathbf{w} \mid \mathbf{w} \mbox{ in } V, \langle\mathbf{v}, \mathbf{w}\rangle = 0\}\) is a subspace of \(V\text{.}\)
Let \(W\) be as in (a). If \(V = \R^3\) with the dot product, and if \(\mathbf{v} = \begin{bmatrix}1\\ -1\\ 2\end{bmatrix}\text{,}\) find a basis for \(W\text{.}\)
Answer.
The basis is
\begin{equation*}
\left\{\begin{bmatrix}1\\ 1\\ 0\end{bmatrix}, \begin{bmatrix}0\\ 2\\ 1\end{bmatrix}\right\}.
\end{equation*}
30.
Given vectors \(\mathbf{w}_{1}, \mathbf{w}_{2}, \dots, \mathbf{w}_{n}\) and \(\mathbf{v}\text{,}\) assume that \(\langle\mathbf{v}, \mathbf{w}_{i}\rangle = 0\) for each \(i\text{.}\) Show that \(\langle\mathbf{v}, \mathbf{w}\rangle = 0\) for all \(\mathbf{w}\) in \(\mbox{span}\{\mathbf{w}_{1}, \mathbf{w}_{2}, \dots, \mathbf{w}_{n}\}\text{.}\)
31.
If \(V = \mbox{span}\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{n}\}\) and \(\langle\mathbf{v}, \mathbf{v}_{i}\rangle = \langle\mathbf{w}, \mathbf{v}_i\rangle\) holds for each \(i\text{,}\) show that \(\mathbf{v} = \mathbf{w}\text{.}\)
Hint.
\(\langle \mathbf{v} - \mathbf{w}, \mathbf{v}_{i} \rangle = \langle \mathbf{v}, \mathbf{v}_{i} \rangle - \langle \mathbf{w}, \mathbf{v}_{i} \rangle = 0\) for each
\(i\text{,}\) so
\(\mathbf{v} = \mathbf{w}\) by
Exercise 9.4.2.30.
32.
Use the Cauchy-Schwarz inequality in an inner product space to show that:
If \(\norm{\mathbf{u}} \leq 1\text{,}\) then \(\langle\mathbf{u}, \mathbf{v}\rangle^{2} \leq \norm{\mathbf{v}}^{2}\) for all \(\mathbf{v}\) in \(V\text{.}\)
\((x \cos \theta + y \sin \theta)^{2} \leq x^{2} + y^{2}\) for all real \(x\text{,}\) \(y\text{,}\) and \(\theta\text{.}\)
\(\norm{ r_1\mathbf{v}_1 + \dots + r_n\mathbf{v}_n } ^2 \leq [r_1 \norm{ \mathbf{v}_1 } + \dots + r_n \norm{ \mathbf{v}_n } ]^2\) for all vectors \(\mathbf{v}_{i}\text{,}\) and all \(r_{i} \gt 0\) in \(\R\text{.}\)
Answer.
For (b): If \(\mathbf{u} = (\cos \theta, \sin \theta)\) in \(\R^2\) (with the dot product) then \(\norm{\mathbf{u}} = 1\text{.}\) Use (a) with \(\mathbf{v} = \begin{bmatrix}x\\ y\end{bmatrix}\text{.}\)
33.
If \(A\) is a \(2 \times n\) matrix, let \(\mathbf{u}\) and \(\mathbf{v}\) denote the rows of \(A\text{.}\)
Show that
\begin{equation*}
AA^T = \left[ \begin{array}{rr}
\norm{ \mathbf{u} } ^2 \amp \mathbf{u} \cdot \mathbf{v} \\
\mathbf{u} \cdot \mathbf{v} \amp \norm{ \mathbf{v} } ^2
\end{array} \right].
\end{equation*}
Show that \(\mbox{det}(AA^{T}) \geq 0\text{.}\)
34.
If \(\mathbf{v}\) and \(\mathbf{w}\) are nonzero vectors in an inner product space \(V\text{,}\) show that \(-1 \leq \frac{\langle \mathbf{v}, \mathbf{w} \rangle}{\norm{ \mathbf{v} } \norm{ \mathbf{w} }} \leq 1,\) and hence that a unique angle \(\theta\) exists such that
\begin{equation*}
\frac{\langle \mathbf{v}, \mathbf{w} \rangle}{\norm{ \mathbf{v} } \norm{ \mathbf{w} }} = \cos \theta
\text{ and } 0 \leq \theta \leq \pi.
\end{equation*}
This angle \(\theta\) is called the angle between \(\mathbf{v}\) and \(\mathbf{w}\text{.}\)
Find the angle between \(\mathbf{v} = [1,2,-1,1,3]\) and \(\mathbf{w} = [2,1,0,2,0]\) in \(\R^5\) with the dot product.
If \(\theta\) is the angle between \(\mathbf{v}\) and \(\mathbf{w}\text{,}\) show that the law of cosines is valid:
\begin{equation*}
\norm{ \mathbf{v} - \mathbf{w} }^2 = \norm{ \mathbf{v} } ^2 + \norm{ \mathbf{w} } ^2 - 2\norm{ \mathbf{v} } \norm{ \mathbf{w} } \cos \theta.
\end{equation*}
35.
If \(V = \R^2\text{,}\) define \(\norm{\begin{bmatrix}x\\ y\end{bmatrix}} = |x| + |y|\text{.}\)
Show that
\(\norm{\cdot}\) satisfies the conditions in
Theorem 9.4.17.
Show that \(\norm{\cdot}\) does not arise from an inner product on \(\R^2\) given by a matrix \(A\text{.}\)
Hint.
If it did, use
Theorem 9.4.6 to find numbers
\(a\text{,}\) \(b\text{,}\) and
\(c\) such that
\begin{equation*}
\norm{\begin{bmatrix}x\\ y\end{bmatrix}}^{2} = ax^{2} + bxy + cy^{2}
\end{equation*}
for all \(x\) and \(y\text{.}\)