Coordinated Linear Algebra

Section 9.4 Extra Topic: Inner Product Spaces

We have used the dot product in \(\R^n\) to compute the length of vectors (Corollary 1.2.9). In this section we define an inner product on an arbitrary vector space \(V\) over the real numbers. It generalizes the dot product.

Definition 9.4.1.

An inner product on a real vector space \(V\) is a function that assigns a real number \(\langle\mathbf{v}, \mathbf{w}\rangle\) to every pair \(\mathbf{v}\text{,}\) \(\mathbf{w}\) of vectors in \(V\) in such a way that the following properties are satisfied.
  1. \(\langle\mathbf{v}, \mathbf{w}\rangle\) is a real number for all \(\mathbf{v}\) and \(\mathbf{w}\) in \(V\text{.}\)
  2. \(\langle\mathbf{v}, \mathbf{w}\rangle = \langle\mathbf{w}, \mathbf{v}\rangle\) for all \(\mathbf{v}\) and \(\mathbf{w}\) in \(V\text{.}\)
  3. \(\langle\mathbf{v} + \mathbf{w}, \mathbf{u}\rangle = \langle\mathbf{v}, \mathbf{u}\rangle + \langle\mathbf{w}, \mathbf{u}\rangle\) for all \(\mathbf{u}\text{,}\) \(\mathbf{v}\text{,}\) and \(\mathbf{w}\) in \(V\text{.}\)
  4. \(\langle r\mathbf{v}, \mathbf{w}\rangle = r\langle\mathbf{v}, \mathbf{w}\rangle\) for all \(\mathbf{v}\) and \(\mathbf{w}\) in \(V\) and all \(r\) in \(\R\text{.}\)
  5. \(\langle\mathbf{v}, \mathbf{v}\rangle \gt 0\) for all \(\mathbf{v} \neq \mathbf{0}\) in \(V\text{.}\)
A real vector space \(V\) with an inner product \(\langle , \rangle\) will be called an inner product space. Note that every subspace of an inner product space is again an inner product space using the same inner product.
We first present \(\R^n\) as "the" example and then proceed to a fancier one.

Example 9.4.2.

\(\R^n\) is an inner product space with the dot product as inner product:
\begin{equation*} \langle \mathbf{v}, \mathbf{w} \rangle = \mathbf{v} \cdot \mathbf{w} \quad \mbox{ for all } \mathbf{v}, \mathbf{w} \in \R^n \end{equation*}
See Theorem 1.2.7. This is also called the Euclidean inner product, and \(\R^n\text{,}\) equipped with the dot product, is called Euclidean \(n\)-space.

Example 9.4.3.

If \(A\) and \(B\) are \(m \times n\) matrices, define \(\langle A, B\rangle = \mbox{tr}(AB^{T})\text{,}\) where \(\mbox{tr}(X)\) is the trace of the square matrix \(X\text{.}\) Show that \(\langle\ , \rangle\) is an inner product on \(\mathbb{M}_{mn}\text{.}\)
Answer.
Item 1 is clear. Since \(\mbox{tr}(P) = \mbox{tr}(P^{T})\) for every square matrix \(P\text{,}\) we have Item 2:
\begin{equation*} \langle A, B \rangle = \mbox{tr}(AB^T) = \mbox{tr}[(AB^T)^T] = \mbox{tr}(BA^T) = \langle B, A \rangle \end{equation*}
Next, Property Item 3 and Property Item 4 follow because trace is a linear transformation \(\mathbb{M}_{mn} \to \R\) (see Exercise 9.4.2.23). Turning to Item 5, let \(\mathbf{r}_{1}, \mathbf{r}_{2}, \dots, \mathbf{r}_{m}\) denote the rows of the matrix \(A\text{.}\) Then the \((i, j)\)-entry of \(AA^{T}\) is \(\mathbf{r}_{i} \cdot \mathbf{r}_{j}\text{,}\) so
\begin{equation*} \langle A, A \rangle = \mbox{tr}(AA^T) = \mathbf{r}_1 \cdot \mathbf{r}_1 + \mathbf{r}_2 \cdot \mathbf{r}_2 + \dots + \mathbf{r}_m \cdot \mathbf{r}_m. \end{equation*}
But \(\mathbf{r}_{j} \cdot \mathbf{r}_{j}\) is the sum of the squares of the entries of \(\mathbf{r}_{j}\text{,}\) so this shows that \(\langle A, A\rangle\) is the sum of the squares of all \(nm\) entries of \(A\text{.}\) Therefore, Item 5 follows.
The next example is important in analysis.

Example 9.4.4.

Let \(\mathcal{F}[a,b]\) be the set of all functions \(f:[a,b]\rightarrow\R\text{.}\) Observe that \(\mathcal{F}[a,b]\) is a vector space. Let \(\mathcal{C}[a,b]\) be the subset of \(\mathcal{F}[a,b]\) consisting of all continuous functions. Why is \(\mathcal{C}[a,b]\) a subspace of \(\mathcal{F}[a,b]\text{?}\) Show that
\begin{equation*} \langle f, g \rangle = \int_{a}^{b} f(x)g(x)dx \end{equation*}
defines an inner product on \(\mathcal{C}[a, b]\text{.}\)
Answer.
Both Item 1 and Item 2 are clear. As to Item 4,
\begin{equation*} \langle rf, g \rangle = \int_{a}^{b} rf(x)g(x)dx = r\int_{a}^{b} f(x)g(x)dx = r\langle f, g \rangle. \end{equation*}
Item 3 is similar.
Finally, theorems of calculus show that \(\langle f, f \rangle = \int_{a}^{b} f(x)^2dx \geq 0\) and, if \(f\) is continuous, that this is zero if and only if \(f\) is the zero function. This gives Item 5.
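The axioms can also be explored numerically. The sketch below assumes Python with SciPy; the interval \([0, 1]\) and the functions \(f(x) = x^2\) and \(g(x) = 1 - x\) are arbitrary choices.

import numpy as np
from scipy.integrate import quad

a, b = 0.0, 1.0

def ip(f, g):
    # <f, g> = integral from a to b of f(x) g(x) dx, computed numerically
    value, _ = quad(lambda x: f(x) * g(x), a, b)
    return value

f = lambda x: x**2
g = lambda x: 1.0 - x

# Item 2 (symmetry) and Item 4 (homogeneity in the first argument)
assert np.isclose(ip(f, g), ip(g, f))
assert np.isclose(ip(lambda x: 3.0 * f(x), g), 3.0 * ip(f, g))
# Item 5: <f, f> > 0 for this nonzero f
assert ip(f, f) > 0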
If \(\mathbf{v}\) is any vector, then, using Item 3 of Definition 9.4.1, we get
\begin{equation*} \langle \mathbf{0}, \mathbf{v} \rangle = \langle \mathbf{0} + \mathbf{0}, \mathbf{v} \rangle = \langle \mathbf{0}, \mathbf{v} \rangle + \langle \mathbf{0}, \mathbf{v} \rangle \end{equation*}
and it follows that the number \(\langle\mathbf{0}, \mathbf{v}\rangle\) must be zero. This observation is recorded for reference in the following theorem, along with several other properties of inner products. The other proofs are left as Exercise 9.4.2.24.

Theorem 9.4.5.

Let \(\langle\ , \rangle\) be an inner product on a space \(V\text{,}\) let \(\mathbf{u}\text{,}\) \(\mathbf{v}\text{,}\) and \(\mathbf{w}\) denote vectors in \(V\text{,}\) and let \(r\) denote a real number.
  1. \(\langle \mathbf{u}, \mathbf{v} + \mathbf{w} \rangle = \langle \mathbf{u}, \mathbf{v} \rangle + \langle \mathbf{u}, \mathbf{w} \rangle\text{.}\)
  2. \(\langle \mathbf{v}, r\mathbf{w} \rangle = r \langle \mathbf{v}, \mathbf{w} \rangle = \langle r\mathbf{v}, \mathbf{w} \rangle\text{.}\)
  3. \(\langle \mathbf{v}, \mathbf{0} \rangle = 0 = \langle \mathbf{0}, \mathbf{v} \rangle\text{.}\)
  4. \(\langle \mathbf{v}, \mathbf{v} \rangle = 0\) if and only if \(\mathbf{v} = \mathbf{0}\text{.}\)
If \(\langle\ , \rangle\) is an inner product on a space \(V\text{,}\) then, given \(\mathbf{u}\text{,}\) \(\mathbf{v}\text{,}\) and \(\mathbf{w}\) in \(V\text{,}\)
\begin{equation*} \langle r\mathbf{u} + s\mathbf{v}, \mathbf{w} \rangle = \langle r\mathbf{u}, \mathbf{w} \rangle + \langle s\mathbf{v}, \mathbf{w} \rangle = r\langle \mathbf{u}, \mathbf{w} \rangle + s\langle \mathbf{v}, \mathbf{w} \rangle \end{equation*}
for all \(r\) and \(s\) in \(\R\) by Item 3 and Item 4 of Definition 9.4.1. Moreover, there is nothing special about the fact that there are two terms in the linear combination or that it is in the first component:
\begin{equation*} \langle r_1\mathbf{v}_1 + r_2\mathbf{v}_2 + \dots + r_n\mathbf{v}_n, \mathbf{w} \rangle = r_1\langle \mathbf{v}_1, \mathbf{w} \rangle + r_2\langle \mathbf{v}_2, \mathbf{w} \rangle + \dots + r_n\langle \mathbf{v}_n, \mathbf{w} \rangle \end{equation*}
and
\begin{equation*} \langle \mathbf{v}, s_1\mathbf{w}_1 + s_2\mathbf{w}_2 + \dots + s_m\mathbf{w}_m \rangle = s_1\langle \mathbf{v}, \mathbf{w}_1 \rangle + s_2\langle \mathbf{v}, \mathbf{w}_2 \rangle + \dots + s_m\langle \mathbf{v}, \mathbf{w}_m \rangle \end{equation*}
hold for all \(r_{i}\) and \(s_{i}\) in \(\R\) and all \(\mathbf{v}\text{,}\) \(\mathbf{w}\text{,}\) \(\mathbf{v}_{i}\text{,}\) and \(\mathbf{w}_{j}\) in \(V\text{.}\) These results are described by saying that inner products "preserve" linear combinations. For example,
\begin{align*} \langle 2\mathbf{u} - \mathbf{v}, 3\mathbf{u} + 2\mathbf{v} \rangle \amp = \langle 2\mathbf{u}, 3\mathbf{u} \rangle + \langle 2\mathbf{u}, 2\mathbf{v} \rangle + \langle -\mathbf{v}, 3\mathbf{u} \rangle + \langle -\mathbf{v}, 2\mathbf{v} \rangle \\ \amp = 6 \langle \mathbf{u}, \mathbf{u} \rangle + 4 \langle \mathbf{u}, \mathbf{v} \rangle -3 \langle \mathbf{v}, \mathbf{u} \rangle - 2 \langle \mathbf{v}, \mathbf{v} \rangle \\ \amp = 6 \langle \mathbf{u}, \mathbf{u} \rangle + \langle \mathbf{u}, \mathbf{v} \rangle - 2 \langle \mathbf{v}, \mathbf{v} \rangle \end{align*}
If \(A\) is a symmetric \(n \times n\) matrix and \(\mathbf{x}\) and \(\mathbf{y}\) are columns in \(\R^n\text{,}\) we regard the \(1 \times 1\) matrix \(\mathbf{x}^{T}A\mathbf{y}\) as a number. If we write
\begin{equation*} \langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x}^TA\mathbf{y} \quad \mbox{ for all columns } \mathbf{x}, \mathbf{y} \mbox{ in } \R^n, \end{equation*}
then Item 1--Item 4 of Definition 9.4.1 follow from matrix arithmetic (only Item 2 of Definition 9.4.1 requires that \(A\) is symmetric). Item 5 of Definition 9.4.1 reads
\begin{equation*} \mathbf{x}^TA \mathbf{x} \gt 0 \quad \mbox{ for all columns } \mathbf{x} \neq \mathbf{0} \mbox{ in } \R^n \end{equation*}
and this condition characterizes the positive definite matrices (see Theorem 10.7.3). This proves the first assertion in the next theorem.

Theorem 9.4.6.

If \(A\) is any \(n \times n\) positive definite matrix, then
\begin{equation*} \langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x}^TA\mathbf{y} \quad \mbox{ for all columns } \mathbf{x}, \mathbf{y} \mbox{ in } \R^n \end{equation*}
defines an inner product on \(\R^n\text{,}\) and every inner product on \(\R^n\) arises in this way.

Proof.

Given an inner product \(\langle\ , \rangle\) on \(\R^n\text{,}\) let \(\{\mathbf{e}_{1}, \mathbf{e}_{2}, \dots, \mathbf{e}_{n}\}\) be the standard basis of \(\R^n\text{.}\) If
\begin{equation*} \mathbf{x} = \displaystyle \sum_{i = 1}^{n} x_i\mathbf{e}_i \text{ and } \mathbf{y} = \displaystyle \sum_{j = 1}^{n} y_j\mathbf{e}_j \end{equation*}
are two vectors in \(\R^n\text{,}\) compute \(\langle\mathbf{x}, \mathbf{y}\rangle\) by pairing each term \(x_{i}\mathbf{e}_{i}\) with each term \(y_{j}\mathbf{e}_{j}\) and adding the resulting inner products. This gives a double sum, namely
\begin{equation*} \langle \mathbf{x}, \mathbf{y} \rangle = \displaystyle \sum_{i = 1}^{n} \sum_{j = 1}^{n} \langle x_i \mathbf{e}_i, y_j\mathbf{e}_j \rangle = \displaystyle \sum_{i = 1}^{n} \sum_{j = 1}^{n} x_i \langle \mathbf{e}_i, \mathbf{e}_j \rangle y_j. \end{equation*}
As the reader can verify, this is a matrix product:
\begin{equation*} \langle \mathbf{x}, \mathbf{y} \rangle = \left[ \begin{array}{cccc} x_1 \amp x_2 \amp \cdots \amp x_n \\ \end{array} \right] \left[ \begin{array}{cccc} \langle \mathbf{e}_1, \mathbf{e}_1 \rangle \amp \langle \mathbf{e}_1, \mathbf{e}_2 \rangle \amp \cdots \amp \langle \mathbf{e}_1, \mathbf{e}_n \rangle \\ \langle \mathbf{e}_2, \mathbf{e}_1 \rangle \amp \langle \mathbf{e}_2, \mathbf{e}_2 \rangle \amp \cdots \amp \langle \mathbf{e}_2, \mathbf{e}_n \rangle \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ \langle \mathbf{e}_n, \mathbf{e}_1 \rangle \amp \langle \mathbf{e}_n, \mathbf{e}_2 \rangle \amp \cdots \amp \langle \mathbf{e}_n, \mathbf{e}_n \rangle \\ \end{array} \right] \left[ \begin{array}{c} y_1 \\ y_2 \\ \vdots \\ y_n \end{array} \right] \end{equation*}
Hence \(\langle\mathbf{x}, \mathbf{y}\rangle = \mathbf{x}^{T}A\mathbf{y}\text{,}\) where \(A\) is the \(n \times n\) matrix whose \((i, j)\)-entry is \(\langle\mathbf{e}_{i}, \mathbf{e}_{j} \rangle\text{.}\) The fact that
\begin{equation*} \langle\mathbf{e}_{i}, \mathbf{e}_{j}\rangle = \langle\mathbf{e}_{j}, \mathbf{e}_{i}\rangle \end{equation*}
shows that \(A\) is symmetric. Finally, \(A\) is positive definite by Theorem 10.7.3.
Thus, just as every linear operator \(\R^n \to \R^n\) corresponds to an \(n \times n\) matrix, every inner product on \(\R^n\) corresponds to a positive definite \(n \times n\) matrix. In particular, the dot product corresponds to the identity matrix \(I_{n}\text{.}\)
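Computationally, the correspondence is easy to use. The sketch below assumes Python with NumPy; the matrix is the one from Exercise 9.4.2.10, chosen purely for illustration. We confirm that \(A\) is positive definite by checking that its eigenvalues are positive, then evaluate \(\langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x}^TA\mathbf{y}\text{.}\)

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
# A is symmetric, so eigvalsh applies; positive eigenvalues mean positive definite
assert np.all(np.linalg.eigvalsh(A) > 0)

def ip(x, y):
    # <x, y> = x^T A y
    return x @ A @ y

x = np.array([1.0, 3.0])
y = np.array([-2.0, 5.0])
print(ip(x, y))       # the inner product determined by A
assert ip(x, x) > 0   # Item 5 for this x != 0

# the identity matrix recovers the dot product
assert np.isclose(x @ np.eye(2) @ y, x @ y)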

Remark 9.4.7.

If we refer to the inner product space \(\R^n\) without specifying the inner product, we mean that the dot product is to be used.
The theorem and its proof may suggest that finding the matrix of an inner product is difficult. To dispel this impression, we work through an example in detail.

Example 9.4.8.

Let the inner product \(\langle\ , \rangle\) be defined on \(\R^2\) by
\begin{equation*} \left \langle \left[ \begin{array}{c} v_1 \\ v_2 \end{array} \right], \left[ \begin{array}{c} w_1 \\ w_2 \end{array} \right] \right \rangle = 2v_1w_1 - v_1w_2 - v_2w_1 + v_2w_2 \end{equation*}
Find a symmetric \(2 \times 2\) matrix \(A\) such that \(\langle\mathbf{x}, \mathbf{y}\rangle = \mathbf{x}^{T}A\mathbf{y}\) for all \(\mathbf{x}\text{,}\) \(\mathbf{y}\) in \(\R^2\text{.}\)
Answer.
The \((i, j)\)-entry of the matrix \(A\) is the coefficient of \(v_{i}w_{j}\) in the expression, so
\begin{equation*} A = \left[ \begin{array}{rr} 2 \amp -1 \\ -1 \amp 1 \end{array} \right]. \end{equation*}
Incidentally, if \(\mathbf{x} = [x,y]\text{,}\) then
\begin{equation*} \langle \mathbf{x}, \mathbf{x} \rangle = 2x^2 - 2xy + y^2 = x^2 +(x - y)^2 \geq 0 \end{equation*}
for all \(\mathbf{x}\text{,}\) so \(\langle\mathbf{x}, \mathbf{x}\rangle = 0\) implies \(\mathbf{x} = \mathbf{0}\text{.}\) Hence \(\langle\ , \rangle\) is indeed an inner product, so \(A\) is positive definite.
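The coefficient-reading recipe is easy to test numerically. A minimal check, assuming Python with NumPy (the random trial vectors are arbitrary):

import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 1.0]])

def ip_formula(v, w):
    # the inner product as given in Example 9.4.8
    return 2*v[0]*w[0] - v[0]*w[1] - v[1]*w[0] + v[1]*w[1]

rng = np.random.default_rng(1)
for _ in range(5):
    v, w = rng.standard_normal(2), rng.standard_normal(2)
    assert np.isclose(ip_formula(v, w), v @ A @ w)  # v^T A w matches

# both eigenvalues positive, so A is positive definite
assert np.all(np.linalg.eigvalsh(A) > 0)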
Let \(\langle\ , \rangle\) be an inner product on \(\R^n\) given as in Theorem 9.4.6 by a positive definite matrix \(A\text{.}\) If \(\mathbf{x} = \left[ \begin{array}{cccc} x_1 \amp x_2 \amp \cdots \amp x_n \end{array} \right]^T \text{,}\) then \(\langle\mathbf{x}, \mathbf{x}\rangle = \mathbf{x}^{T}A\mathbf{x}\) is an expression in the variables \(x_{1}, x_{2}, \dots, x_{n}\) called a quadratic form.

Subsection 9.4.1 Norm and Distance

Definition 9.4.9.

As in \(\R^n\text{,}\) if \(\langle\ , \rangle\) is an inner product on a space \(V\text{,}\) the norm \(\norm{\mathbf{v}}\) of a vector \(\mathbf{v}\) in \(V\) is defined by
\begin{equation*} \norm{ \mathbf{v} } = \sqrt{\langle \mathbf{v}, \mathbf{v} \rangle}. \end{equation*}
We define the distance between vectors \(\mathbf{v}\) and \(\mathbf{w}\) in an inner product space \(V\) to be
\begin{equation*} \mbox{d}(\mathbf{v}, \mathbf{w}) = \norm{ \mathbf{v} - \mathbf{w} }. \end{equation*}
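Since norm and distance are built entirely from the inner product, any implementation of \(\langle\ , \rangle\) yields them for free. A sketch, assuming Python with NumPy (the sample vectors are arbitrary choices):

import numpy as np

def norm(v, ip):
    # ||v|| = sqrt(<v, v>)
    return np.sqrt(ip(v, v))

def dist(v, w, ip):
    # d(v, w) = ||v - w||
    return norm(v - w, ip)

dot = lambda v, w: float(np.dot(v, w))  # the dot product on R^n
v = np.array([3.0, -1.0, 2.0, 0.0])
w = np.array([1.0, 1.0, 1.0, 3.0])
print(norm(v, dot))      # sqrt(14)
print(dist(v, w, dot))   # sqrt(18)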

Remark 9.4.10.

If the dot product is used in \(\R^n\text{,}\) the norm \(\norm{\mathbf{x}}\) of a vector \(\mathbf{x}\) is usually called the length of \(\mathbf{x}\text{.}\)
Note that Property Item 5 of Definition 9.4.1 guarantees that \(\langle\mathbf{v}, \mathbf{v}\rangle \geq 0\text{,}\) so \(\norm{\mathbf{v}}\) is a real number.

Example 9.4.11.

The norm of a continuous function \(f = f(x)\) in \(\mathcal{C}[a, b]\) (with the inner product from Example 9.4.4) is given by
\begin{equation*} \norm{ f } = \sqrt{\int_{a}^{b} f(x)^2dx}. \end{equation*}
Hence \(\norm{ f}^{2}\) is the area beneath the graph of \(y = f(x)^{2}\) between \(x = a\) and \(x = b\text{.}\)
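For instance, this norm can be approximated numerically. A sketch assuming Python with SciPy; the choice \(f(x) = x^2\) on \([0, 1]\text{,}\) with \(\norm{f} = 1/\sqrt{5}\text{,}\) is arbitrary.

import numpy as np
from scipy.integrate import quad

f = lambda x: x**2
area, _ = quad(lambda x: f(x)**2, 0.0, 1.0)  # area beneath y = f(x)^2
print(np.sqrt(area))                         # ||f||, approximately 0.4472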

Example 9.4.12.

Show that \(\langle\mathbf{u} + \mathbf{v}, \mathbf{u} - \mathbf{v}\rangle = \norm{\mathbf{u}}^{2} - \norm{\mathbf{v}}^{2}\) in any inner product space.
Answer.
\begin{align*} \langle \mathbf{u} + \mathbf{v}, \mathbf{u} - \mathbf{v} \rangle \amp = \langle \mathbf{u}, \mathbf{u} \rangle - \langle \mathbf{u}, \mathbf{v} \rangle + \langle \mathbf{v}, \mathbf{u} \rangle - \langle \mathbf{v}, \mathbf{v} \rangle \\ \amp = \norm{ \mathbf{u} }^2 - \langle \mathbf{u}, \mathbf{v} \rangle + \langle \mathbf{u}, \mathbf{v} \rangle - \norm{ \mathbf{v} }^2 \\ \amp = \norm{ \mathbf{u} }^2 - \norm{ \mathbf{v} }^2. \end{align*}
A vector \(\mathbf{v}\) in an inner product space \(V\) is called a unit vector if \(\norm{\mathbf{v}} = 1\text{.}\) The set of all unit vectors in \(V\) is called the unit ball in \(V\text{.}\) For example, if \(V = \R^2\) (with the dot product) and \(\mathbf{v} = (x, y)\text{,}\) then
\begin{equation*} \norm{ \mathbf{v} }^2 = 1 \quad \mbox{ if and only if } \quad x^2 + y^2 = 1 \end{equation*}
Hence the unit ball in \(\R^2\) is the unit circle \(x^{2} + y^{2} = 1\) with centre at the origin and radius \(1\text{.}\) However, the shape of the unit ball varies with the choice of inner product.
Unit balls do not have to be "balls": their shape depends on the norm in play, and therefore on the inner product. Let us see an example.

Example 9.4.13.

Let \(a \gt 0\) and \(b \gt 0\text{.}\) If \(\mathbf{v} = (x, y)\) and \(\mathbf{w} = (x_{1}, y_{1})\text{,}\) define an inner product on \(\R^2\) by
\begin{equation*} \langle \mathbf{v}, \mathbf{w} \rangle = \frac{xx_1}{a^2} + \frac{yy_1}{b^2}. \end{equation*}
The reader can verify (Exercise 9.4.2.5) that this is indeed an inner product. In this case
\begin{equation*} \norm{ \mathbf{v} }^2 = 1 \quad \mbox{ if and only if } \quad \frac{x^2}{a^2} + \frac{y^2}{b^2} = 1, \end{equation*}
so the unit ball is the ellipse \(\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1\text{,}\) with semi-axes of lengths \(a\) and \(b\text{.}\)
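A quick numerical check, assuming Python with NumPy (\(a = 3\) and \(b = 2\) are arbitrary choices), confirms that points of this ellipse are unit vectors for this inner product:

import numpy as np

a, b = 3.0, 2.0

def ip(v, w):
    # <v, w> = x x1 / a^2 + y y1 / b^2
    return v[0]*w[0]/a**2 + v[1]*w[1]/b**2

# parametrize the ellipse x^2/a^2 + y^2/b^2 = 1
for t in np.linspace(0.0, 2*np.pi, 9):
    v = np.array([a*np.cos(t), b*np.sin(t)])
    assert np.isclose(ip(v, v), 1.0)  # each such point has norm 1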
The next theorem reveals an important and useful fact about the relationship between norms and inner products.

Theorem 9.4.15. Cauchy-Schwarz Inequality.

If \(\mathbf{v}\) and \(\mathbf{w}\) are vectors in an inner product space \(V\text{,}\) then
\begin{equation*} | \langle \mathbf{v}, \mathbf{w} \rangle | \leq \norm{ \mathbf{v} } \norm{ \mathbf{w} }, \end{equation*}
and equality holds if and only if one of \(\mathbf{v}\) and \(\mathbf{w}\) is a scalar multiple of the other.

Proof.

Write \(\norm{\mathbf{v}} = a\) and \(\norm{\mathbf{w}} = b\text{.}\) Using Theorem 9.4.5 we compute:
\begin{align} \norm{ b\mathbf{v} - a \mathbf{w} }^2 \amp = b^2 \norm{ \mathbf{v} }^2 - 2ab \langle \mathbf{v}, \mathbf{w} \rangle + a^2\norm{ \mathbf{w} }^2 \tag{9.4.1}\\ \amp = 2ab(ab - \langle \mathbf{v}, \mathbf{w} \rangle), \tag{9.4.2}\\ \norm{ b\mathbf{v} + a \mathbf{w} }^2 \amp = b^2 \norm{ \mathbf{v} }^2 + 2ab \langle \mathbf{v}, \mathbf{w} \rangle + a^2\norm{ \mathbf{w} }^2 \tag{9.4.3}\\ \amp = 2ab(ab + \langle \mathbf{v}, \mathbf{w} \rangle). \tag{9.4.4} \end{align}
It follows that \(ab - \langle\mathbf{v}, \mathbf{w}\rangle \geq 0\) and \(ab + \langle\mathbf{v}, \mathbf{w}\rangle \geq 0\text{,}\) and hence that \(-ab \leq \langle\mathbf{v}, \mathbf{w}\rangle \leq ab\text{.}\) But then \(| \langle\mathbf{v}, \mathbf{w}\rangle | \leq ab = \norm{\mathbf{v}} \norm{ \mathbf{w}}\text{,}\) as desired. Conversely, if
\begin{equation*} |\langle \mathbf{v}, \mathbf{w}\rangle | = \norm{\mathbf{v}} \norm{ \mathbf{w} } = ab, \end{equation*}
then \(\langle\mathbf{v}, \mathbf{w}\rangle = \pm ab\text{.}\) This shows that \(b\mathbf{v} - a\mathbf{w} = \mathbf{0}\) or \(b\mathbf{v} + a\mathbf{w} = \mathbf{0}\text{.}\) It follows that one of \(\mathbf{v}\) and \(\mathbf{w}\) is a scalar multiple of the other, even if \(a = 0\) or \(b = 0\text{.}\)
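Random trials illustrate (though of course do not prove) the inequality. A sketch, assuming Python with NumPy and using the dot product on \(\R^5\text{:}\)

import numpy as np

rng = np.random.default_rng(2)
for _ in range(1000):
    v = rng.standard_normal(5)
    w = rng.standard_normal(5)
    # |<v, w>| <= ||v|| ||w||, with a tiny tolerance for rounding
    assert abs(np.dot(v, w)) <= np.linalg.norm(v) * np.linalg.norm(w) + 1e-12

# equality when one vector is a scalar multiple of the other
v = rng.standard_normal(5)
assert np.isclose(abs(np.dot(v, 3*v)), np.linalg.norm(v) * np.linalg.norm(3*v))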
Perhaps the following special case seems more familiar to students who kept a keen eye on calculus.

Example 9.4.16.

If \(f\) and \(g\) are continuous functions on the interval \([a, b]\text{,}\) then (see Example 9.4.4)
\begin{equation*} \left(\int_{a}^{b} f(x)g(x)dx \right) ^2 \leq \int_{a}^{b} f(x)^2 dx \int_{a}^{b} g(x)^2 dx. \end{equation*}
Another famous inequality, the so-called triangle inequality, also stems from the Cauchy-Schwarz inequality. It is included in the following list of basic properties of the norm of a vector.

Theorem 9.4.17.

If \(V\) is an inner product space, the norm \(\norm{\cdot}\) has the following properties.
  1. \(\norm{ \mathbf{v} } \geq 0\) for every vector \(\mathbf{v}\) in \(V\text{.}\)
  2. \(\norm{ \mathbf{v} } = 0\) if and only if \(\mathbf{v} = \mathbf{0}\text{.}\)
  3. \(\norm{ r\mathbf{v} } = |r| \norm{ \mathbf{v} }\) for every \(\mathbf{v}\) in \(V\) and every \(r\) in \(\R\text{.}\)
  4. \(\norm{ \mathbf{v} + \mathbf{w} } \leq \norm{ \mathbf{v} } + \norm{ \mathbf{w} }\) for all \(\mathbf{v}\) and \(\mathbf{w}\) in \(V\) (triangle inequality).

Proof.

Because \(\norm{ \mathbf{v} } = \sqrt{\langle \mathbf{v}, \mathbf{v} \rangle}\text{,}\) properties Item 1 and Item 2 follow immediately from Item 3 and Item 4 of Theorem 9.4.5. As to Item 3, compute
\begin{equation*} \norm{ r\mathbf{v} } ^2 = \langle r\mathbf{v}, r\mathbf{v} \rangle = r^2\langle \mathbf{v}, \mathbf{v} \rangle = r^2\norm{ \mathbf{v} }^2 \end{equation*}
Hence Item 3 follows by taking positive square roots. Finally, the fact that \(\langle\mathbf{v}, \mathbf{w}\rangle \leq \norm{\mathbf{v}}\norm{\mathbf{w}}\) by the Cauchy-Schwarz inequality gives
\begin{align*} \norm{ \mathbf{v} + \mathbf{w} } ^2 = \langle \mathbf{v} + \mathbf{w}, \mathbf{v} + \mathbf{w} \rangle \amp = \norm{ \mathbf{v} } ^2 + 2 \langle \mathbf{v}, \mathbf{w} \rangle + \norm{ \mathbf{w} } ^2 \\ \amp \leq \norm{ \mathbf{v} } ^2 + 2 \norm{ \mathbf{v} } \norm{ \mathbf{w} } + \norm{ \mathbf{w} } ^2 \\ \amp = (\norm{ \mathbf{v} } + \norm{ \mathbf{w} })^2. \end{align*}
Hence Item 4 follows by taking positive square roots.
It is worth noting that the usual triangle inequality for absolute values,
\begin{equation*} | r + s | \leq |r| + |s| \mbox{ for all real numbers } r \mbox{ and } s \end{equation*}
is a special case of Item 4 where \(V = \R = \R^1\) and the dot product \(\langle r, s \rangle = rs\) is used.
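Item 3 and Item 4 can be spot-checked in the same way. A sketch, assuming Python with NumPy and the dot product on \(\R^4\text{:}\)

import numpy as np

rng = np.random.default_rng(3)
for _ in range(1000):
    v, w = rng.standard_normal(4), rng.standard_normal(4)
    r = rng.standard_normal()
    # Item 3: ||r v|| = |r| ||v||
    assert np.isclose(np.linalg.norm(r * v), abs(r) * np.linalg.norm(v))
    # Item 4: ||v + w|| <= ||v|| + ||w||, with a tolerance for rounding
    assert np.linalg.norm(v + w) <= np.linalg.norm(v) + np.linalg.norm(w) + 1e-12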
In many calculations in an inner product space, it is required to show that some vector \(\mathbf{v}\) is zero. This is often accomplished most easily by showing that its norm \(\norm{\mathbf{v}}\) is zero. Here is an example.

Example 9.4.18.

Let \(\{\mathbf{v}_{1}, \dots, \mathbf{v}_{n}\}\) be a spanning set for an inner product space \(V\text{.}\) If \(\mathbf{v}\) in \(V\) satisfies \(\langle\mathbf{v}, \mathbf{v}_{i}\rangle = 0\) for each \(i = 1, 2, \dots, n\text{,}\) show that \(\mathbf{v} = \mathbf{0}\text{.}\)
Answer.
Write \(\mathbf{v} = r_{1}\mathbf{v}_{1} + \dots + r_{n}\mathbf{v}_{n}\text{,}\) \(r_{i}\) in \(\R\text{.}\) To show that \(\mathbf{v} = \mathbf{0}\text{,}\) we show that \(\norm{\mathbf{v}}^{2} = \langle\mathbf{v}, \mathbf{v}\rangle = 0\text{.}\) Compute:
\begin{equation*} \langle \mathbf{v}, \mathbf{v} \rangle = \langle \mathbf{v}, r_1\mathbf{v}_1 + \dots + r_n\mathbf{v}_n \rangle = r_1\langle \mathbf{v}, \mathbf{v}_1 \rangle + \dots + r_n \langle \mathbf{v}, \mathbf{v}_n \rangle = 0 \end{equation*}
by hypothesis, and the result follows.
The norm properties in Theorem 9.4.17 translate to the following properties of distance familiar from geometry.

Theorem 9.4.19.

Let \(V\) be an inner product space, and let \(\mathbf{u}\text{,}\) \(\mathbf{v}\text{,}\) and \(\mathbf{w}\) denote vectors in \(V\text{.}\)
  1. \(\mbox{d}(\mathbf{v}, \mathbf{w}) \geq 0\text{.}\)
  2. \(\mbox{d}(\mathbf{v}, \mathbf{w}) = 0\) if and only if \(\mathbf{v} = \mathbf{w}\text{.}\)
  3. \(\mbox{d}(\mathbf{v}, \mathbf{w}) = \mbox{d}(\mathbf{w}, \mathbf{v})\text{.}\)
  4. \(\mbox{d}(\mathbf{v}, \mathbf{w}) \leq \mbox{d}(\mathbf{v}, \mathbf{u}) + \mbox{d}(\mathbf{u}, \mathbf{w})\) (triangle inequality).

Exercises 9.4.2 Exercises

1.

In each case, determine which of Item 1--Item 5 in Definition 9.4.1 fail to hold.
  1. \(V = \R^2\text{,}\) \(\left\langle \begin{bmatrix}x_1\\ y_1\end{bmatrix}, \begin{bmatrix}x_2\\ y_2\end{bmatrix} \right\rangle = x_1y_1x_2y_2\text{.}\)
  2. \(V = \R^3\text{,}\) \(\left\langle \begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix}, \begin{bmatrix}y_1\\ y_2\\ y_3\end{bmatrix} \right\rangle = x_1y_1 - x_2y_2 + x_3y_3\text{.}\)
  3. \(V = \mathbb{C}\text{,}\) \(\langle z, w \rangle = z\overline{w}\text{,}\) where \(\overline{w}\) denotes the complex conjugate of \(w\text{.}\)
  4. \(V = \mathbb{P}^3\text{,}\) \(\langle p(x), q(x) \rangle = p(1)q(1)\text{.}\)
  5. \(V = \mathbb{M}_{22}\text{,}\) \(\langle A, B \rangle = \mbox{det}(AB)\)
  6. \(V = \mathcal{F}[0, 1]\text{,}\) \(\langle f, g \rangle = f(1)g(0) + f(0)g(1).\)
Answer.
(b): Here Item 5 fails.
(c): Here Item 1 fails, as sometimes we get a complex number.
(e): Here Item 5 fails.

2.

Let \(V\) be an inner product space. If \(U \subseteq V\) is a subspace, show that \(U\) is an inner product space using the same inner product.
Hint.
Item 1--Item 5 hold in \(U\) because they hold in \(V\text{.}\)

3.

In each case, find a scalar multiple of \(\mathbf{v}\) that is a unit vector.
  1. \(\mathbf{v} = f\) in \(\mathcal{C}[0, 1]\) where \(f(x) = x^2\) and
    \begin{equation*} \langle f, g \rangle = \int_{0}^{1} f(x)g(x)dx. \end{equation*}
  2. \(\mathbf{v} = f\) in \(\mathcal{C}[-\pi, \pi]\) where \(f(x) = \cos x\) and
    \begin{equation*} \langle f, g \rangle = \int_{-\pi}^{\pi} f(x)g(x)dx. \end{equation*}
  3. \(\mathbf{v} = [1,3]\) in \(\R^2\text{,}\) where
    \begin{equation*} \langle \mathbf{v}, \mathbf{w} \rangle = \mathbf{v}^T \left[ \begin{array}{rr} 1 \amp 1 \\ 1 \amp 2 \end{array} \right] \mathbf{w}. \end{equation*}
  4. \(\mathbf{v} = [3,-1]\) in \(\R^2\text{,}\) where
    \begin{equation*} \langle \mathbf{v}, \mathbf{w} \rangle = \mathbf{v}^T \left[ \begin{array}{rr} 1 \amp -1 \\ -1 \amp 2 \end{array} \right] \mathbf{w}. \end{equation*}
Answer.
For (b):
\begin{equation*} \frac{1}{\sqrt{\pi}}f. \end{equation*}
For (d):
\begin{equation*} \frac{1}{\sqrt{17}} \left[ \begin{array}{r} 3 \\ -1 \end{array} \right]. \end{equation*}

4.

In each case, find the distance between \(\mathbf{u}\) and \(\mathbf{v}\text{.}\)
  1. \begin{equation*} \mathbf{u} = \begin{bmatrix}3\\ -1\\ 2\\ 0\end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix}1\\ 1\\ 1\\ 3\end{bmatrix}; \langle \mathbf{u}, \mathbf{v} \rangle = \mathbf{u} \cdot \mathbf{v}. \end{equation*}
  2. \begin{equation*} \mathbf{u} = \begin{bmatrix}1\\ 2\\ -1\\ 2\end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix}2\\ 1\\ -1\\ 3\end{bmatrix}; \langle \mathbf{u}, \mathbf{v} \rangle = \mathbf{u} \cdot \mathbf{v}. \end{equation*}
  3. \(\mathbf{u} = f\text{,}\) \(\mathbf{v} = g \) in \(\mathcal{C}[0, 1]\) where \(f(x) = x^2 \) and \(g(x) = 1 - x\text{;}\)
    \begin{equation*} \langle f, g \rangle = \int_{0}^{1} f(x)g(x)dx \end{equation*}
  4. \(\mathbf{u} = f\text{,}\) \(\mathbf{v} = g \) in \(\mathcal{C}[-\pi, \pi]\) where \(f(x) = 1\) and \(g(x) = \cos x\text{;}\)
    \begin{equation*} \langle f, g \rangle = \int_{-\pi}^{\pi} f(x)g(x)dx. \end{equation*}
Answer.
For (b):
\begin{equation*} \sqrt{3} \end{equation*}
For (d):
\begin{equation*} \sqrt{3\pi}. \end{equation*}

5.

Let \(a_{1}, a_{2}, \dots, a_{n}\) be positive numbers. Given \(\mathbf{v} = [v_1, v_2, \ldots , v_n]\) and \(\mathbf{w} = [w_1, w_2, \ldots , w_n]\text{,}\) define \(\langle\mathbf{v}, \mathbf{w}\rangle = a_{1}v_{1}w_{1} + \dots + a_{n}v_{n}w_{n}\text{.}\) Show that this is an inner product on \(\R^n\text{.}\)

6.

If \(\{\mathbf{b}_{1}, \dots, \mathbf{b}_{n}\}\) is a basis of \(V\) and if \(\mathbf{v} = v_1\mathbf{b}_1 + \dots + v_n\mathbf{b}_n\) and \(\mathbf{w} = w_1\mathbf{b}_1 + \dots + w_n\mathbf{b}_n\) are vectors in \(V\text{,}\) define
\begin{equation*} \langle \mathbf{v}, \mathbf{w} \rangle = v_1w_1 + \dots + v_nw_n . \end{equation*}
Show that this is an inner product on \(V\text{.}\)

7.

Let \(\mbox{re}(z)\) denote the real part of the complex number \(z\text{.}\) Show that \(\langle\ , \rangle\) is an inner product on \(\mathbb{C}\) if \(\langle\mathbf{z}, \mathbf{w}\rangle = \mbox{re}(z\overline{w})\text{.}\)

8.

If \(T : V \to V\) is an isomorphism of the inner product space \(V\text{,}\) show that
\begin{equation*} \langle \mathbf{v}, \mathbf{w} \rangle_1 = \langle T(\mathbf{v}), T(\mathbf{w}) \rangle \end{equation*}
defines a new inner product \(\langle\ , \rangle_{1}\) on \(V\text{.}\)

9.

Show that every inner product \(\langle\ , \rangle\) on \(\R^n\) has the form
\begin{equation*} \langle\mathbf{x}, \mathbf{y}\rangle = (U\mathbf{x}) \cdot (U\mathbf{y}) \end{equation*}
for some upper triangular matrix \(U\) with positive diagonal entries.
Hint.
By Theorem 9.4.6, \(\langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x}^TA\mathbf{y}\) for some positive definite matrix \(A\text{.}\) Factor \(A = U^TU\text{,}\) where \(U\) is upper triangular with positive diagonal entries (the Cholesky factorization); then \(\mathbf{x}^TA\mathbf{y} = (U\mathbf{x}) \cdot (U\mathbf{y})\text{.}\)

Exercise Group.

In each case, show that \(\langle\mathbf{v}, \mathbf{w}\rangle = \mathbf{v}^{T}A\mathbf{w}\) defines an inner product on \(\R^2\) and hence show that \(A\) is positive definite.
10.
\begin{equation*} A = \left[ \begin{array}{rr} 2 \amp 1 \\ 1 \amp 1 \end{array} \right]. \end{equation*}
11.
\begin{equation*} A = \left[ \begin{array}{rr} 5 \amp -3 \\ -3 \amp 2 \end{array} \right]. \end{equation*}
Answer.
\begin{equation*} \langle \mathbf{v}, \mathbf{v} \rangle = 5v_1^2 - 6v_1v_2 + 2v_2^2 = \frac{1}{5}[(5v_1 - 3v_2)^2 + v_2^2]. \end{equation*}
12.
\begin{equation*} A = \left[ \begin{array}{rr} 3 \amp 2 \\ 2 \amp 3 \end{array} \right]. \end{equation*}
13.
\begin{equation*} A = \left[ \begin{array}{rr} 3 \amp 4 \\ 4 \amp 6 \end{array} \right]. \end{equation*}
Answer.
\begin{equation*} \langle \mathbf{v}, \mathbf{v} \rangle = 3v_1^2 + 8v_1v_2 + 6v_2^2 = \frac{1}{3}[(3v_1 + 4v_2)^2 + 2v_2^2]. \end{equation*}

Exercise Group.

In each case, find a symmetric matrix \(A\) such that \(\langle\mathbf{v}, \mathbf{w}\rangle = \mathbf{v}^{T}A\mathbf{w}\text{.}\)
14.
\begin{equation*} \left\langle \left[ \begin{array}{r} v_1 \\ v_2 \end{array} \right], \left[ \begin{array}{r} w_1 \\ w_2 \end{array} \right] \right\rangle = v_1w_1 + 2v_1w_2 + 2v_2w_1 + 5v_2w_2. \end{equation*}
15.
\begin{equation*} \left\langle \left[ \begin{array}{r} v_1 \\ v_2 \end{array} \right], \left[ \begin{array}{r} w_1 \\ w_2 \end{array} \right] \right\rangle = v_1w_1 - v_1w_2 - v_2w_1 + 2v_2w_2. \end{equation*}
Answer.
\begin{equation*} \left[ \begin{array}{rr} 1 \amp -1 \\ -1 \amp 2 \end{array} \right]. \end{equation*}
16.
\begin{equation*} \left\langle \left[ \begin{array}{r} v_1 \\ v_2 \\ v_3 \end{array} \right], \left[ \begin{array}{r} w_1 \\ w_2 \\ w_3 \end{array} \right] \right\rangle = 2v_1w_1 + v_2w_2 + v_3w_3 - v_1w_2 - v_2w_1 + v_2w_3 + v_3w_2. \end{equation*}
17.
\begin{equation*} \left\langle \left[ \begin{array}{r} v_1 \\ v_2 \\ v_3 \end{array} \right], \left[ \begin{array}{r} w_1 \\ w_2 \\ w_3 \end{array} \right] \right\rangle = v_1w_1 + 2v_2w_2 + 5v_3w_3 - 2v_1w_3 - 2v_3w_1. \end{equation*}
Answer.
\begin{equation*} \left[ \begin{array}{rrr} 1 \amp 0 \amp -2 \\ 0 \amp 2 \amp 0 \\ -2 \amp 0 \amp 5 \end{array} \right]. \end{equation*}

18.

If \(A\) is symmetric and \(\mathbf{x}^{T}A\mathbf{x} = 0\) for all columns \(\mathbf{x}\) in \(\R^n\text{,}\) show that \(A = 0\text{.}\)
Hint.
Consider \(\langle \mathbf{x} + \mathbf{y}, \mathbf{x} + \mathbf{y} \rangle\) where \(\langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x}^TA\mathbf{y}\text{.}\)
Answer.
By the condition, \(\langle \mathbf{x}, \mathbf{y} \rangle = \frac{1}{2} \langle \mathbf{x} + \mathbf{y}, \mathbf{x} + \mathbf{y} \rangle = 0\) for all \(\mathbf{x}\text{,}\) \(\mathbf{y}\text{.}\) Let \(\mathbf{e}_{i}\) denote column \(i\) of \(I\text{.}\) If \(A = \left[ a_{ij} \right]\text{,}\) then \(a_{ij} = \mathbf{e}_{i}^{T}A\mathbf{e}_{j} = \langle \mathbf{e}_{i}, \mathbf{e}_{j} \rangle = 0\) for all \(i\) and \(j\text{.}\)

19.

Show that the sum of two inner products on \(V\) is again an inner product.

20.

Let \(\norm{ \mathbf{u} } = 1\text{,}\) \(\norm{ \mathbf{v} } = 2\text{,}\) \(\norm{ \mathbf{w} } = \sqrt{3} \text{,}\) \(\langle \mathbf{u}, \mathbf{v} \rangle = -1\text{,}\) \(\langle\mathbf{u}, \mathbf{w}\rangle = 0\) and \(\langle\mathbf{v}, \mathbf{w}\rangle = 3\text{.}\) Compute:
  1. \(\displaystyle \langle \mathbf{v} + \mathbf{w}, 2\mathbf{u} - \mathbf{v} \rangle\)
  2. \(\langle \mathbf{u} - 2 \mathbf{v} - \mathbf{w}, 3\mathbf{w} - \mathbf{v} \rangle\)
Answer.
For (b): \(-15\text{.}\)

21.

Given the data in Exercise 9.4.2.20, show that \(\mathbf{u} + \mathbf{v} = \mathbf{w}\text{.}\)

22.

Show that no vectors exist such that \(\norm{\mathbf{u}} = 1\text{,}\) \(\norm{\mathbf{v}} = 2\text{,}\) and \(\langle\mathbf{u}, \mathbf{v}\rangle = -3\text{.}\)

23.

Show that the trace of a square matrix defines a linear transformation; that is, \(\mbox{tr}(P + Q) = \mbox{tr}(P) + \mbox{tr}(Q)\) and \(\mbox{tr}(rP) = r \, \mbox{tr}(P)\) for all square matrices \(P\) and \(Q\) of the same size and all \(r\) in \(\R\text{.}\)

24.

Prove the remaining parts of Theorem 9.4.5.

Answer.
Using Item 2:
\begin{equation*} \langle \mathbf{u}, \mathbf{v} + \mathbf{w} \rangle = \langle \mathbf{v} + \mathbf{w}, \mathbf{u} \rangle = \langle \mathbf{v}, \mathbf{u} \rangle + \langle \mathbf{w}, \mathbf{u} \rangle = \langle \mathbf{u}, \mathbf{v} \rangle + \langle \mathbf{u}, \mathbf{w} \rangle. \end{equation*}
Using Item 2 and Item 4:
\begin{equation*} \langle \mathbf{v}, r\mathbf{w} \rangle = \langle r\mathbf{w}, \mathbf{v} \rangle = r \langle \mathbf{w}, \mathbf{v} \rangle = r \langle \mathbf{v}, \mathbf{w} \rangle. \end{equation*}
Using Item 3:
\begin{equation*} \langle \mathbf{0}, \mathbf{v} \rangle = \langle \mathbf{0} + \mathbf{0}, \mathbf{v} \rangle = \langle \mathbf{0}, \mathbf{v} \rangle + \langle \mathbf{0}, \mathbf{v} \rangle, \end{equation*}
so \(\langle \mathbf{0}, \mathbf{v} \rangle = 0.\) The rest is Item 2.
Assume that \(\langle \mathbf{v}, \mathbf{v} \rangle = 0\text{.}\) If \(\mathbf{v} \neq \mathbf{0}\) this contradicts Item 5, so \(\mathbf{v} = \mathbf{0}\text{.}\) Conversely, if \(\mathbf{v} = \mathbf{0}\text{,}\) then \(\langle \mathbf{v}, \mathbf{v} \rangle = 0\) by Part 3 of this theorem.

25.

Let \(\mathbf{u}\) and \(\mathbf{v}\) be vectors in an inner product space \(V\text{.}\)
  1. Expand \(\langle2\mathbf{u} - 7\mathbf{v}, 3\mathbf{u} + 5\mathbf{v} \rangle\text{.}\)
  2. Expand \(\langle3\mathbf{u} - 4\mathbf{v}, 5\mathbf{u} + \mathbf{v} \rangle\text{.}\)
  3. Show that \(\norm{ \mathbf{u} + \mathbf{v} } ^2 = \norm{ \mathbf{u} } ^2 + 2 \langle \mathbf{u}, \mathbf{v} \rangle + \norm{ \mathbf{v} } ^2 \text{.}\)
  4. Show that \(\norm{ \mathbf{u} - \mathbf{v} } ^2 = \norm{ \mathbf{u} } ^2 - 2 \langle \mathbf{u}, \mathbf{v} \rangle + \norm{ \mathbf{v} } ^2\)
Answer.
For (b):
\begin{equation*} 15\norm{\mathbf{u}}^{2} - 17 \langle \mathbf{u}, \mathbf{v} \rangle - 4\norm{\mathbf{v}}^{2}. \end{equation*}
For (d):
\begin{equation*} \norm{\mathbf{u} - \mathbf{v}}^{2} = \langle \mathbf{u} - \mathbf{v}, \mathbf{u} - \mathbf{v} \rangle = \norm{\mathbf{u}}^{2} - 2\langle \mathbf{u}, \mathbf{v}\rangle + \norm{\mathbf{v}}^{2}. \end{equation*}

26.

Show that
\begin{equation*} \norm{ \mathbf{v} } ^2 + \norm{ \mathbf{w} } ^2 = \frac{1}{2} \{ \norm{ \mathbf{v} + \mathbf{w} } ^2 + \norm{ \mathbf{v} - \mathbf{w} } ^2\} \end{equation*}
for any \(\mathbf{v}\) and \(\mathbf{w}\) in an inner product space.

27.

Let \(\langle\ , \rangle\) be an inner product on a vector space \(V\text{.}\) Show that the corresponding distance function is translation invariant. That is, show that
\begin{equation*} \mbox{d}(\mathbf{v}, \mathbf{w}) = \mbox{d}(\mathbf{v} + \mathbf{u}, \mathbf{w} + \mathbf{u}) \end{equation*}
for all \(\mathbf{v}\text{,}\) \(\mathbf{w}\text{,}\) and \(\mathbf{u}\) in \(V\text{.}\)

28.

  1. Show that \(\langle \mathbf{u}, \mathbf{v} \rangle = \frac{1}{4}[\norm{ \mathbf{u} + \mathbf{v} } ^2 - \norm{ \mathbf{u} - \mathbf{v} } ^2]\) for all \(\mathbf{u}\text{,}\) \(\mathbf{v}\) in an inner product space \(V\text{.}\)
  2. If \(\langle\ , \rangle\) and \(\langle\ , \rangle^\prime\) are two inner products on \(V\) that have equal associated norm functions, show that \(\langle\mathbf{u}, \mathbf{v}\rangle = \langle\mathbf{u}, \mathbf{v}\rangle^\prime\) holds for all \(\mathbf{u}\) and \(\mathbf{v}\text{.}\)

29.

Let \(\mathbf{v}\) denote a vector in an inner product space \(V\text{.}\)
  1. Show that \(W = \{\mathbf{w} \mid \mathbf{w} \mbox{ in } V, \langle\mathbf{v}, \mathbf{w}\rangle = 0\}\) is a subspace of \(V\text{.}\)
  2. Let \(W\) be as in (a). If \(V = \R^3\) with the dot product, and if \(\mathbf{v} = \begin{bmatrix}1\\ -1\\ 2\end{bmatrix}\text{,}\) find a basis for \(W\text{.}\)
Answer.
The basis is
\begin{equation*} \left\{\begin{bmatrix}1\\ 1\\ 0\end{bmatrix}, \begin{bmatrix}0\\ 2\\ 1\end{bmatrix}\right\}. \end{equation*}

30.

Given vectors \(\mathbf{w}_{1}, \mathbf{w}_{2}, \dots, \mathbf{w}_{n}\) and \(\mathbf{v}\text{,}\) assume that \(\langle\mathbf{v}, \mathbf{w}_{i}\rangle = 0\) for each \(i\text{.}\) Show that \(\langle\mathbf{v}, \mathbf{w}\rangle = 0\) for all \(\mathbf{w}\) in \(\mbox{span}\{\mathbf{w}_{1}, \mathbf{w}_{2}, \dots, \mathbf{w}_{n}\}\text{.}\)

31.

If \(V = \mbox{span}\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{n}\}\) and \(\langle\mathbf{v}, \mathbf{v}_{i}\rangle = \langle\mathbf{w}, \mathbf{v}_i\rangle\) holds for each \(i\text{,}\) show that \(\mathbf{v} = \mathbf{w}\text{.}\)
Hint.
\(\langle \mathbf{v} - \mathbf{w}, \mathbf{v}_{i} \rangle = \langle \mathbf{v}, \mathbf{v}_{i} \rangle - \langle \mathbf{w}, \mathbf{v}_{i} \rangle = 0\) for each \(i\text{,}\) so \(\mathbf{v} = \mathbf{w}\) by Exercise 9.4.2.30.

32.

Use the Cauchy-Schwarz inequality in an inner product space to show that:
  1. If \(\norm{\mathbf{u}} \leq 1\text{,}\) then \(\langle\mathbf{u}, \mathbf{v}\rangle^{2} \leq \norm{\mathbf{v}}^{2}\) for all \(\mathbf{v}\) in \(V\text{.}\)
  2. \((x \cos \theta + y \sin \theta)^{2} \leq x^{2} + y^{2}\) for all real \(x\text{,}\) \(y\text{,}\) and \(\theta\text{.}\)
  3. \(\norm{ r_1\mathbf{v}_1 + \dots + r_n\mathbf{v}_n } ^2 \leq [r_1 \norm{ \mathbf{v}_1 } + \dots + r_n \norm{ \mathbf{v}_n } ]^2\) for all vectors \(\mathbf{v}_{i}\text{,}\) and all \(r_{i} \gt 0\) in \(\R\text{.}\)
Answer.
For (b): If \(\mathbf{u} = (\cos \theta, \sin \theta)\) in \(\R^2\) (with the dot product) then \(\norm{\mathbf{u}} = 1\text{.}\) Use (a) with \(\mathbf{v} = \begin{bmatrix}x\\ y\end{bmatrix}\text{.}\)

33.

If \(A\) is a \(2 \times n\) matrix, let \(\mathbf{u}\) and \(\mathbf{v}\) denote the rows of \(A\text{.}\)
  1. Show that
    \begin{equation*} AA^T = \left[ \begin{array}{rr} \norm{ \mathbf{u} } ^2 \amp \mathbf{u} \cdot \mathbf{v} \\ \mathbf{u} \cdot \mathbf{v} \amp \norm{ \mathbf{v} } ^2 \end{array} \right]. \end{equation*}
  2. Show that \(\mbox{det}(AA^{T}) \geq 0\text{.}\)

34.

  1. If \(\mathbf{v}\) and \(\mathbf{w}\) are nonzero vectors in an inner product space \(V\text{,}\) show that \(-1 \leq \frac{\langle \mathbf{v}, \mathbf{w} \rangle}{\norm{ \mathbf{v} } \norm{ \mathbf{w} }} \leq 1,\) and hence that a unique angle \(\theta\) exists such that
    \begin{equation*} \frac{\langle \mathbf{v}, \mathbf{w} \rangle}{\norm{ \mathbf{v} } \norm{ \mathbf{w} }} = \cos \theta \text{ and } 0 \leq \theta \leq \pi. \end{equation*}
    This angle \(\theta\) is called the angle between \(\mathbf{v}\) and \(\mathbf{w}\text{.}\)
  2. Find the angle between \(\mathbf{v} = [1,2,-1,1,3]\) and \(\mathbf{w} = [2,1,0,2,0]\) in \(\R^5\) with the dot product.
  3. If \(\theta\) is the angle between \(\mathbf{v}\) and \(\mathbf{w}\text{,}\) show that the law of cosines is valid:
    \begin{equation*} \norm{ \mathbf{v} - \mathbf{w} }^2 = \norm{ \mathbf{v} } ^2 + \norm{ \mathbf{w} } ^2 - 2\norm{ \mathbf{v} } \norm{ \mathbf{w} } \cos \theta. \end{equation*}

35.

If \(V = \R^2\text{,}\) define \(\norm{\begin{bmatrix}x\\ y\end{bmatrix}} = |x| + |y|\text{.}\)
  1. Show that \(\norm{\cdot}\) satisfies the conditions in Theorem 9.4.17.
  2. Show that \(\norm{\cdot}\) does not arise from an inner product on \(\R^2\) given by a matrix \(A\text{.}\)
Hint.
If it did, use Theorem 9.4.6 to find numbers \(a\text{,}\) \(b\text{,}\) and \(c\) such that
\begin{equation*} \norm{\begin{bmatrix}x\\ y\end{bmatrix}}^{2} = ax^{2} + bxy + cy^{2} \end{equation*}
for all \(x\) and \(y\text{.}\)