Coordinated Linear Algebra

Section 10.3 Orthogonal Complements and Decompositions

Subsection 10.3.1 Orthogonal Complements

We will now consider the set of vectors that are orthogonal to every vector in a given subspace. As a quick example, consider the \(xy\)-plane in \(\R^3\text{.}\) Clearly, every scalar multiple of the standard unit vector \(\mathbf{k}\) in \(\R^3\) is orthogonal to every vector in the \(xy\)-plane. We say that the set \(\{c\mathbf{k} \mid c \in\R\}\) is the orthogonal complement of \(\{ a\mathbf{i}+b\mathbf{j} \mid a, b \in\R \}\text{.}\)

Definition 10.3.1. Orthogonal Complement of a Subspace of \(\R^n\).

If \(W\) is a subspace of \(\R^n\text{,}\) define the orthogonal complement \(W^\perp\) of \(W\) (pronounced "\(W\)-perp") by
\begin{equation*} W^\perp = \{\mathbf{x} \in\R^n \mid \mathbf{x} \cdot \mathbf{y} = 0 \mbox{ for all } \mathbf{y} \in W\} . \end{equation*}
Figure: Complement of a line.
The following theorem collects some useful properties of the orthogonal complement; the proofs of Item 1 and Item 2 are left as Exercise 10.3.3.8.

Proof.

[Item 3:] Suppose \(W = \mbox{span}\left(\mathbf{x}_{1}, \mathbf{x}_{2}, \dots, \mathbf{x}_{k}\right)\text{.}\) We must show that \(W^\perp = \{\mathbf{x} \mid \mathbf{x} \cdot \mathbf{x}_{i} = 0 \mbox{ for each } i\}\text{.}\) To show that the two sets are equal, we show that each is contained in the other.
If \(\mathbf{x}\) is in \(W^\perp\text{,}\) then \(\mathbf{x} \cdot \mathbf{x}_{i} = 0\) for all \(i\) because each \(\mathbf{x}_{i}\) is in \(W\text{.}\) This shows \(W^\perp \subseteq \{\mathbf{x} \mid \mathbf{x} \cdot \mathbf{x}_{i} = 0 \mbox{ for each } i\}\text{.}\) For the reverse inclusion, suppose that \(\mathbf{x} \cdot \mathbf{x}_{i} = 0\) for all \(i\text{;}\) we must show that \(\mathbf{x}\) is in \(W^\perp\text{,}\) that is, that \(\mathbf{x} \cdot \mathbf{y} = 0\) for each \(\mathbf{y}\) in \(W\text{.}\) Since the vectors \(\mathbf{x}_{i}\) span \(W\text{,}\) we can write
\begin{gather*} \mathbf{y} = c_{1}\mathbf{x}_{1} + c_{2}\mathbf{x}_{2} + \dots + c_{k}\mathbf{x}_{k} \end{gather*}
where each \(c_{i}\) is in \(\R\text{.}\) Then
\begin{align*} \mathbf{x} \cdot \mathbf{y} \amp = c_{1}(\mathbf{x} \cdot \mathbf{x}_{1}) + c_{2}(\mathbf{x} \cdot \mathbf{x}_{2})+ \dots +c_{k}(\mathbf{x} \cdot \mathbf{x}_{k}) \\ \amp = c_{1}0 + c_{2}0 + \dots + c_{k}0 = 0 \end{align*}
as required, and the proof of equality is complete.
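For computations, Item 3 is the key point: membership in \(W^\perp\) can be tested against a finite spanning set of \(W\) rather than against every vector of \(W\text{.}\) Here is a minimal sketch of this test in Python with NumPy (the helper name in_perp is our own, not part of the text):

    import numpy as np

    def in_perp(x, spanning_set, tol=1e-10):
        # By Theorem 10.3.2, x lies in W-perp exactly when x is
        # orthogonal to every vector in a spanning set of W.
        return all(abs(np.dot(x, w)) < tol for w in spanning_set)

    # The xy-plane in R^3 is spanned by i and j; multiples of k are in its complement.
    i, j, k = np.eye(3)
    print(in_perp(3 * k, [i, j]))   # True
    print(in_perp(i + k, [i, j]))   # False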
Let us put these new concepts into a concrete setting.

Example 10.3.3.

Find a basis for \(W^\perp\) if
\begin{equation*} W = \mbox{span}\left(\begin{bmatrix} 1 \\ -1 \\ 2 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ -2 \\ 3 \end{bmatrix}\right) \end{equation*}
in \(\R^4\text{.}\)
Answer.
By Theorem 10.3.2, \(\mathbf{x} = [x,y,z,w]\) is in \(W^\perp\) if and only if \(\mathbf{x}\) is orthogonal to both spanning vectors \(\mathbf{v}_1 = [1,-1,2,0]\) and \(\mathbf{v}_2 = [1,0,-2,3]\text{;}\) that is, \(\mathbf{x} \cdot \mathbf{v}_1 = 0\) and \(\mathbf{x} \cdot \mathbf{v}_2 = 0\text{,}\) or
\begin{equation*} \begin{array}{rrrrrrrr} x \amp - \amp y \amp + \amp 2z \amp \amp \amp =0\\ x \amp \amp \amp - \amp 2z \amp +\amp 3w \amp =0 \end{array} \end{equation*}
Using Gaussian elimination on this system gives
\begin{equation*} W^\perp = \mbox{span}\left(\begin{bmatrix} 2 \\ 4 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 3 \\ 3 \\ 0 \\ -1 \end{bmatrix}\right). \end{equation*}
You are asked to confirm this in Exercise 10.3.3.1 (which serves as a wonderful review of concepts we covered earlier in the course!).
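The orthogonality claims behind this answer are also easy to check numerically. A short NumPy sketch (a check only, not a replacement for the hand computation requested in Exercise 10.3.3.1):

    import numpy as np

    v1 = np.array([1, -1, 2, 0])   # spanning vectors of W
    v2 = np.array([1, 0, -2, 3])
    u1 = np.array([2, 4, 1, 0])    # claimed basis vectors of W-perp
    u2 = np.array([3, 3, 0, -1])

    # Every dot product of a claimed W-perp vector with a spanning vector of W is zero.
    for u in (u1, u2):
        for v in (v1, v2):
            assert np.dot(u, v) == 0
    print("both vectors lie in W-perp")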
Some of the important subspaces we studied earlier are orthogonal complements of each other. Recall the following definitions associated with an \(m \times n\) matrix \(A\text{.}\)
  1. The null space of \(A\text{,}\) \(\mbox{null}(A) = \{\mathbf{x}\in \R^n \mid A\mathbf{x} = \mathbf{0}\}\text{,}\) is a subspace of \(\R^n\text{.}\)
  2. The row space of \(A\text{,}\) \(\mbox{row}(A) = \mbox{span} ( \mbox{the rows of } A)\text{,}\) is a subspace of \(\R^n\text{.}\)
  3. The column space of \(A\text{,}\) \(\mbox{col}(A) = \mbox{span} ( \mbox{the columns of } A)\text{,}\) is a subspace of \(\R^m\text{.}\)
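All three subspaces can be computed exactly for a concrete matrix. Here is a small SymPy sketch (our own illustration; the matrix is arbitrary):

    from sympy import Matrix

    A = Matrix([[1, 2, 1],
                [2, 4, 0]])      # m = 2, n = 3

    print(A.nullspace())         # basis of null(A), a subspace of R^3
    print(A.rowspace())          # basis of row(A), a subspace of R^3
    print(A.columnspace())       # basis of col(A), a subspace of R^2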

Exploration 10.3.1.

In the following GeoGebra interactive, you can change the coordinates of the vectors \(\mathbf{v}\) and \(\mathbf{w}\) using the sliders (at this stage make sure that \(\mathbf{v}\) and \(\mathbf{w}\) are not collinear). Let
\begin{equation*} A=\begin{bmatrix}-\amp \mathbf{v}\amp -\\-\amp \mathbf{w}\amp -\end{bmatrix} . \end{equation*}
Then \(\mbox{row}(A) = \mbox{span}(\mathbf{v},\mathbf{w})\text{.}\) RIGHT-CLICK and DRAG to rotate the coordinate system for a better view.
Figure 10.3.4.
  1. Follow the prompts in the interactive to visualize \(\mbox{row}(A)\) and \(\mbox{null}(A)\text{.}\) What relationships do you observe between \(\mbox{row}(A)\) and \(\mbox{null}(A)\text{?}\)
  2. It is possible to "break" this interactive (for certain choices of \(\mathbf{v}\) and \(\mathbf{w}\)). If \(\mathbf{v}\) and \(\mathbf{w}\) are scalar multiples of each other, then \(\mbox{row}(A)\) is a line and \(\mbox{null}(A)\) has dimension 2. The interactive does not accommodate this situation. To see what happens when \(\mathbf{v}\) and \(\mathbf{w}\) are scalar multiples of each other, see Exercise 10.3.3.2.

Proof.

Let \(\mathbf{x}\in\R^n\text{.}\) Then \(\mathbf{x}\in\left(\mbox{row}(A)\right)^\perp\) if and only if \(\mathbf{x}\) is orthogonal to every row of \(A\text{.}\) But this is true if and only if \(A\mathbf{x}=\mathbf{0}\text{,}\) which is equivalent to saying \(\mathbf{x}\in\mbox{null}(A)\text{,}\) proving Item 1. To prove Item 2, we replace \(A\) with \(A^T\) and apply Item 1, since \(\mbox{col}(A) = \mbox{row}(A^T)\text{.}\)
Let us examine what Theorem 10.3.5 says about our earlier examples. In Example 10.3.3, we solved for the unknown vectors \(\mathbf{x} = [x,y,z,w]\text{.}\) Notice that this is equivalent to creating a \(2 \times 4\) matrix \(A\) whose rows are \(\mathbf{v}_1\) and \(\mathbf{v}_2\text{,}\) and then finding the null space of \(A\text{.}\) You can check that a basis for
\begin{equation*} \mbox{null}\left(\begin{bmatrix} 1 \amp -1 \amp 2 \amp 0 \\ 1 \amp 0 \amp -2 \amp 3 \end{bmatrix}\right) \end{equation*}
is given by
\begin{equation*} \left\{\begin{bmatrix} 2 \\ 4 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 3 \\ 3 \\ 0 \\ -1 \end{bmatrix}\right\}\text{.} \end{equation*}
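In other words, the hand computation of Example 10.3.3 is exactly a null space computation, and a computer algebra system will reproduce it. A SymPy sketch (the basis it returns may differ from the one above by scalar multiples):

    from sympy import Matrix

    A = Matrix([[1, -1, 2, 0],
                [1, 0, -2, 3]])

    for v in A.nullspace():      # a basis of null(A) = (row(A))^perp
        print(v.T)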
It is often useful to verify abstract statements in a concrete setting first. Let us give this a try:

Example 10.3.6.

Let
\begin{equation*} A=\begin{bmatrix}2\amp -1\amp 1\amp -4\amp 1\\1\amp 0\amp 3\amp 3\amp 0\\-2\amp 1\amp -1\amp 5\amp 2\\4\amp -1\amp 7\amp 2\amp 1\end{bmatrix}. \end{equation*}
Verify each of the statements in Theorem 10.3.5.
Answer.
We compute \(\mbox{rref}(A)\) to find a basis for \(\mbox{null}(A)\text{,}\) \(\mbox{row}(A)\text{,}\) and \(\mbox{col}(A)\text{.}\) After some work we arrive at:
\begin{equation*} \mbox{null}(A) = \mbox{span}\left(\begin{bmatrix}-3\\-5\\1\\0\\0\end{bmatrix}, \begin{bmatrix}9\\31\\0\\-3\\1\end{bmatrix}\right) \end{equation*}
and the row space is spanned by
\begin{equation*} \begin{bmatrix}1\amp 0\amp 3\amp 0\amp -9\end{bmatrix}, \begin{bmatrix}0\amp 1\amp 5\amp 0\amp -31\end{bmatrix}, \begin{bmatrix}0\amp 0\amp 0\amp 1\amp 3\end{bmatrix}. \end{equation*}
It is easy to check that each of the basis vectors of \(\mbox{null}(A)\) is orthogonal to each of the basis vectors of \(\mbox{row}(A)\text{,}\) demonstrating the first part of Theorem 10.3.5. You will be asked to demonstrate the second part of Theorem 10.3.5 for this example in Exercise 10.3.3.3.
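The orthogonality check in the first part is mechanical, so it lends itself to a short NumPy sketch (the second part is left to Exercise 10.3.3.3):

    import numpy as np

    null_basis = [np.array([-3, -5, 1, 0, 0]),
                  np.array([9, 31, 0, -3, 1])]
    row_basis = [np.array([1, 0, 3, 0, -9]),
                 np.array([0, 1, 5, 0, -31]),
                 np.array([0, 0, 0, 1, 3])]

    # Each basis vector of null(A) is orthogonal to each basis vector of row(A).
    for n in null_basis:
        for r in row_basis:
            assert np.dot(n, r) == 0
    print("null(A) is orthogonal to row(A)")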

Subsection 10.3.2 Orthogonal Decomposition Theorem

Now that we have defined the orthogonal complement of a subspace, we are ready to state the main theorem of this section. If you have studied physics or multivariable calculus, you are familiar with the idea of expressing a vector as the sum of its tangential and normal components. (If you haven’t yet taken those courses, this section will help to prepare you for them!) The following theorem is a generalization of this idea.

Proof.

This is an example of an "existence and uniqueness" theorem, so there are two things to prove. Existence is straightforward once we have an orthogonal basis \(\{\mathbf{f}_{1}, \mathbf{f}_{2}, \dots, \mathbf{f}_{m}\}\) for \(W\text{.}\) We let \(\mathbf{w}=\mbox{proj}_W(\mathbf{x})\text{,}\) which is clearly in \(W\text{,}\) and set \(\mathbf{w}^\perp = \mathbf{x} - \mathbf{w}\text{.}\) We have
\begin{equation*} \mathbf{w} + \mathbf{w}^\perp = \mathbf{w} + (\mathbf{x} - \mathbf{w}) = \mathbf{x}, \end{equation*}
so we need to see that \(\mathbf{w}^\perp \in W^\perp\text{.}\)
By Theorem 10.3.2, Item 3, it suffices to show that \(\mathbf{w}^\perp\) is orthogonal to each of the basis vectors \(\mathbf{f}_i\text{,}\) \(i=1,\ldots,m\text{.}\) For each such \(i\) we compute
\begin{align*} \mathbf{f}_i \cdot \mathbf{w}^\perp \amp = \mathbf{f}_i \cdot (\mathbf{x} - \mathbf{w}) \\ \amp = \mathbf{f}_i \cdot \mathbf{x} - \mathbf{f}_i \cdot \left(\frac{\mathbf{x} \cdot \mathbf{f}_{1}}{\norm{\mathbf{f}_{1}}^2}\mathbf{f}_{1} + \frac{\mathbf{x} \cdot \mathbf{f}_{2}}{\norm{\mathbf{f}_{2}}^2}\mathbf{f}_{2}+ \dots +\frac{\mathbf{x} \cdot \mathbf{f}_{m}}{\norm{\mathbf{f}_{m}}^2}\mathbf{f}_{m}\right) \\ \amp = \mathbf{f}_i \cdot \mathbf{x} - \frac{\mathbf{x} \cdot \mathbf{f}_{i}}{\norm{\mathbf{f}_{i}}^2}\left(\mathbf{f}_i \cdot\mathbf{f}_{i}\right) = \mathbf{f}_i \cdot \mathbf{x} - \mathbf{x} \cdot \mathbf{f}_i = 0, \end{align*}
since \(\mathbf{f}_i \cdot \mathbf{f}_j = 0\) whenever \(j \neq i\) (the basis is orthogonal) and \(\mathbf{f}_i \cdot \mathbf{f}_i = \norm{\mathbf{f}_i}^2\text{.}\)
This proves that \(\mathbf{w}^\perp \in W^\perp\text{.}\)
The reason we must also prove uniqueness is that the construction above started from a particular orthogonal basis \(\{\mathbf{f}_{1}, \mathbf{f}_{2}, \dots, \mathbf{f}_{m}\}\) of \(W\text{;}\) what would happen if we chose a different orthogonal basis? Suppose that \(\{\mathbf{f}_1^\prime, \mathbf{f}_2^\prime, \dots, \mathbf{f}_m^\prime \}\) is another orthogonal basis of \(W\text{,}\) and let
\begin{equation*} \mathbf{w}^{\prime} = \left(\frac{\mathbf{x} \cdot \mathbf{f}^{\prime}_{1}}{\norm{\mathbf{f}^{\prime}_{1}}^2}\right)\mathbf{f}^{\prime}_{1} + \left(\frac{\mathbf{x} \cdot \mathbf{f}^{\prime}_{2}}{\norm{\mathbf{f}^{\prime}_{2}}^2}\right)\mathbf{f}^{\prime}_{2} + \dots +\left(\frac{\mathbf{x} \cdot \mathbf{f}^{\prime}_{m}}{\norm{\mathbf{f}^{\prime}_{m}}^2}\right)\mathbf{f}^{\prime}_{m}. \end{equation*}
As before, \(\mathbf{w}^{\prime} \in W\) and \(\mathbf{x} - \mathbf{w}^{\prime} \in W^\perp\text{,}\) and we must show that \(\mathbf{w}^{\prime} = \mathbf{w}\text{.}\) To see this, write the vector \(\mathbf{w} - \mathbf{w}^\prime\) as follows:
\begin{equation*} \mathbf{w} - \mathbf{w}^{\prime} = (\mathbf{x} - \mathbf{w}^{\prime}) - (\mathbf{x} - \mathbf{w}). \end{equation*}
This vector is in \(W\) (because \(\mathbf{w}\) and \(\mathbf{w}^\prime\) are in \(W\)) and it is in \(W^\perp\) (because \(\mathbf{x} - \mathbf{w}^\prime\) and \(\mathbf{x} - \mathbf{w}\) are in \(W^\perp\)), and so it must be the zero vector (it is orthogonal to itself!). This means \(\mathbf{w}^\prime = \mathbf{w}\) as desired.
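The basis independence just proved can also be observed numerically: projecting with two different orthogonal bases of the same subspace gives the same vector. A minimal NumPy sketch (the helper proj is our own, implementing the formula from the proof):

    import numpy as np

    def proj(x, orthogonal_basis):
        # proj_W(x) computed from an orthogonal basis of W.
        return sum((np.dot(x, f) / np.dot(f, f)) * f for f in orthogonal_basis)

    x = np.array([3.0, -1.0, 4.0])

    # Two different orthogonal bases of the xy-plane in R^3:
    basis1 = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
    basis2 = [np.array([1.0, 1.0, 0.0]), np.array([1.0, -1.0, 0.0])]

    print(proj(x, basis1))                                  # [ 3. -1.  0.]
    print(np.allclose(proj(x, basis1), proj(x, basis2)))    # True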
This decomposition is extremely important: it splits a vector into two manageable pieces, one in \(W\) and one in \(W^\perp\text{.}\) Moreover, it is completely computable, as the next example highlights.

Example 10.3.8.

Let \(W\) be a subspace given by
\begin{equation*} W = \mbox{span}\left(\begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \\ 2 \end{bmatrix}\right), \end{equation*}
and let \(\mathbf{x}=[1,2,3,4]\text{.}\) Write \(\mathbf{x}\) as the sum of a vector in \(W\) and a vector in \(W^\perp\text{.}\)
Answer.
Following the notation of Theorem 10.3.7, we will write \(\mathbf{x} = \mathbf{w} + \mathbf{w}^\perp\text{,}\) where \(\mathbf{w}=\mbox{proj}_W(\mathbf{x})\) and \(\mathbf{w}^\perp = \mathbf{x} - \mathbf{w}\text{.}\) Let
\begin{equation*} \mathbf{f}_1=\begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \end{bmatrix} \quad \text{and} \quad\mathbf{f}_2=\begin{bmatrix} 0 \\ 1 \\ 0 \\ 2 \end{bmatrix}. \end{equation*}
We observe that we have the good fortune that \(\{\mathbf{f}_1,\mathbf{f}_2\}\) is an orthogonal basis of \(W\) (otherwise, our first step would be to use the Gram-Schmidt procedure to create an orthogonal basis for \(W\)). We compute:
\begin{align*} \mathbf{w} =\mbox{proj}_W(\mathbf{x}) \amp =\mbox{proj}_{\mathbf{f}_1}(\mathbf{x})+\mbox{proj}_{\mathbf{f}_2}(\mathbf{x}) \\ \amp = \frac{4}{2}\begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \end{bmatrix} + \frac{10}{5}\begin{bmatrix} 0 \\ 1 \\ 0 \\ 2 \end{bmatrix} = \begin{bmatrix} 2 \\ 2 \\ 2 \\ 4 \end{bmatrix}, \end{align*}
and then
\begin{equation*} \mathbf{w}^\perp=\begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} - \begin{bmatrix} 2 \\ 2 \\ 2 \\ 4 \end{bmatrix} = \begin{bmatrix} -1 \\ 0 \\ 1 \\ 0 \end{bmatrix}. \end{equation*}
This gives us
\begin{equation*} \mathbf{x}=\mathbf{w}+\mathbf{w}^\perp=\begin{bmatrix}2\\2\\2\\4\end{bmatrix}+\begin{bmatrix}-1\\0\\1\\0\end{bmatrix}. \end{equation*}
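As a sanity check, both defining properties of the decomposition, \(\mathbf{x}=\mathbf{w}+\mathbf{w}^\perp\) and \(\mathbf{w}^\perp \in W^\perp\text{,}\) can be confirmed with a few lines of NumPy (a sketch using the numbers of this example):

    import numpy as np

    x = np.array([1, 2, 3, 4])
    f1 = np.array([1, 0, 1, 0])
    f2 = np.array([0, 1, 0, 2])

    w = (np.dot(x, f1) / np.dot(f1, f1)) * f1 + (np.dot(x, f2) / np.dot(f2, f2)) * f2
    w_perp = x - w

    print(w, w_perp)                                 # [2. 2. 2. 4.] [-1.  0.  1.  0.]
    print(np.dot(w_perp, f1), np.dot(w_perp, f2))    # 0.0 0.0
    print(np.allclose(w + w_perp, x))                # True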
The final theorem of this section shows that projection onto a subspace of \(\R^n\) is actually a linear transformation from \(\R^n\) to \(\R^n\text{.}\)

Proof.

If \(W = \{\mathbf{0}\}\text{,}\) then \(W^\perp = \R^n\text{,}\) and so \(T(\mathbf{x}) = \mathbf{0}\) for all \(\mathbf{x}\text{.}\) Thus \(T = 0\) is the zero (linear) operator, so Item 1, Item 2, and Item 3 hold. Hence assume that \(W \neq \{\mathbf{0}\}\text{.}\)
  1. If \(\{\mathbf{q}_{1}, \mathbf{q}_{2}, \dots, \mathbf{q}_{m}\}\) is an orthonormal basis of \(W\text{,}\) then
    \begin{equation} T(\mathbf{x}) = (\mathbf{x} \cdot \mathbf{q}_{1})\mathbf{q}_{1} + (\mathbf{x} \cdot \mathbf{q}_{2})\mathbf{q}_{2} + \dots + (\mathbf{x} \cdot \mathbf{q}_{m})\mathbf{q}_{m}\tag{10.3.1} \end{equation}
    for all \(\mathbf{x}\) in \(\R^n\text{,}\) by the definition of the projection. Thus \(T\) is a linear transformation because
    \begin{equation*} (\mathbf{x} + \mathbf{y}) \cdot \mathbf{q}_{i} = \mathbf{x} \cdot \mathbf{q}_{i} + \mathbf{y} \cdot \mathbf{q}_{i} \ \mbox{ and } \ (r\mathbf{x}) \cdot \mathbf{q}_{i} = r(\mathbf{x} \cdot \mathbf{q}_{i}) \quad \mbox{ for all } i. \end{equation*}
  2. By (10.3.1), \(\mbox{im}(T)\) is contained in \(W\text{,}\) because each \(\mathbf{q}_{i}\) is in \(W\text{.}\) Conversely, if \(\mathbf{x}\) is in \(W\text{,}\) then \(\mathbf{x} = T(\mathbf{x})\) by (10.3.1) and Theorem 10.1.5 applied to the space \(W\text{.}\) This shows that \(W\) is contained in \(\mbox{im}(T)\text{,}\) so \(\mbox{im}(T) = W\text{.}\)
    Now suppose that \(\mathbf{x}\) is in \(W^\perp\text{.}\) Then \(\mathbf{x} \cdot \mathbf{q}_{i} = 0\) for each \(i\) (again because each \(\mathbf{q}_{i}\) is in \(W\)), so \(T(\mathbf{x}) = \mathbf{0}\) by (10.3.1); that is, \(\mathbf{x}\) is in \(\mbox{ker}(T)\text{.}\) Hence \(W^\perp \subseteq \mbox{ker}(T)\text{.}\) On the other hand, Theorem 10.3.7 shows that \(\mathbf{x} - T(\mathbf{x})\) is in \(W^\perp\) for all \(\mathbf{x}\) in \(\R^n\text{,}\) so if \(T(\mathbf{x}) = \mathbf{0}\text{,}\) then \(\mathbf{x}\) is in \(W^\perp\text{;}\) it follows that \(\mbox{ker}(T) \subseteq W^\perp\text{.}\) Hence \(\mbox{ker}(T) = W^\perp\text{,}\) proving Item 2.
  3. This follows from Item 1, Item 2, and the Rank-Nullity theorem (see Theorem 6.2.41).
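Because \(T\) is linear, it has a standard matrix, and (10.3.1) shows what it is: if the orthonormal basis vectors are placed as the columns of a matrix \(Q\text{,}\) then \(T(\mathbf{x}) = QQ^T\mathbf{x}\text{.}\) A NumPy sketch (the orthonormal basis below comes from normalizing the orthogonal basis of Example 10.3.8):

    import numpy as np

    # Orthonormal basis of W, obtained by normalizing [1,0,1,0] and [0,1,0,2].
    q1 = np.array([1, 0, 1, 0]) / np.sqrt(2)
    q2 = np.array([0, 1, 0, 2]) / np.sqrt(5)
    Q = np.column_stack([q1, q2])        # 4 x 2

    P = Q @ Q.T                          # standard matrix of T = proj_W

    x = np.array([1, 2, 3, 4])
    print(P @ x)                         # [2. 2. 2. 4.], matching Example 10.3.8
    print(np.linalg.matrix_rank(P))      # 2 = dim(im(T)) = dim(W), so dim(ker(T)) = 4 - 2 = 2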

Exercises 10.3.3 Exercises

1.

Solve the linear system in Example 10.3.3 and use your result to find a basis for \(W^\perp\) if
\begin{equation*} W = \mbox{span}\left(\begin{bmatrix}1\\ -1\\ 2\\ 0\end{bmatrix}, \begin{bmatrix}1\\ 0\\ -2\\ 3\end{bmatrix}\right) \end{equation*}
in \(\R^4\text{.}\)

2.

In this problem we return to the GeoGebra interactive in Exploration 10.3.1, and we consider the case where the matrix \(A\) has rank 1 (which Exploration 10.3.1 could not handle). This time, the sliders define row 1 of matrix \(A\text{,}\) and row 2 will be 2 times row 1.
Follow the prompts in the interactive to visualize \(\mbox{row}(A)\) and \(\mbox{null}(A)\text{.}\) What relationships do you observe between \(\mbox{row}(A)\) and \(\mbox{null}(A)\text{?}\)
Figure 10.3.10.

3.

In this problem you are asked to finish Example 10.3.6. More specifically, for the matrix
\begin{equation*} A=\begin{bmatrix}2\amp -1\amp 1\amp -4\amp 1\\1\amp 0\amp 3\amp 3\amp 0\\-2\amp 1\amp -1\amp 5\amp 2\\4\amp -1\amp 7\amp 2\amp 1\end{bmatrix}, \end{equation*}
show that \(\mbox{null}(A^T) = (\mbox{col}(A))^\perp\text{.}\)

Exercise Group.

In each case, write \(\mathbf{x}\) as \(\mathbf{x} = \mathbf{w} + \mathbf{w}^\perp\text{,}\) where \(\mathbf{w}=\mbox{proj}_W(\mathbf{x})\) and \(\mathbf{w}^\perp = \mathbf{x} - \mathbf{w}\text{.}\)
4.
\begin{equation*} \mathbf{x} = \begin{bmatrix}2\\ 1\\ 6\end{bmatrix}\quad \text{and} \quad W = \mbox{span}\left(\begin{bmatrix}3\\ -1\\ 2\end{bmatrix}, \begin{bmatrix}2\\ 0\\ -3\end{bmatrix}\right). \end{equation*}
5.
\begin{equation*} \mathbf{x} = \begin{bmatrix}2\\ 0\\ 1\\ 6\end{bmatrix}\quad \text{and} \quad W = \mbox{span}\left(\begin{bmatrix}1\\ 1\\ 1\\ 1\end{bmatrix}, \begin{bmatrix}1\\ 1\\ -1\\ -1\end{bmatrix}, \begin{bmatrix}1\\ -1\\ 1\\ -1\end{bmatrix}\right). \end{equation*}
6.
\begin{equation*} \mathbf{x} = \begin{bmatrix}a\\ b\\ c\\ d\end{bmatrix}\quad \text{and} \quad W = \mbox{span}\left(\begin{bmatrix}1\\ -1\\ 2\\ 0\end{bmatrix}, \begin{bmatrix}-1\\ 1\\ 1\\ 1\end{bmatrix}\right). \end{equation*}

7.

Let \(W = \mbox{span}\left(\mathbf{w}_{1}, \mathbf{w}_{2}, \dots, \mathbf{w}_{k}\right)\text{,}\) \(\mathbf{w}_{i}\in \R^n\text{,}\) and let \(A\) be the \(k \times n\) matrix with the \(\mathbf{w}_{i}\) as rows.
  1. Show that \(W^\perp = \{\mathbf{x} \mid \mathbf{x}\in \R^n, A\mathbf{x}^{T} = \mathbf{0}\}\text{.}\)
  2. Use part (a) to find \(W^\perp\) if
    \begin{equation*} W = \mbox{span}\left(\begin{bmatrix}1\\ -1\\ 2\\ 1\end{bmatrix}, \begin{bmatrix}1\\ 0\\ -1\\ 1\end{bmatrix}\right). \end{equation*}
Answer.
\(W^\perp = \mbox{span}\left((1, 3, 1, 0), (-1, 0, 0, 1)\right)\text{.}\)

9.

Let \(W\) be a subspace of \(\R^n\text{.}\) If \(\mathbf{x}\) in \(\R^n\) can be written in any way at all as \(\mathbf{x} = \mathbf{p} + \mathbf{q}\) with \(\mathbf{p}\) in \(W\) and \(\mathbf{q}\) in \(W^\perp\text{,}\) show that necessarily \(\mathbf{p} = \mbox{proj}_W(\mathbf{x})\text{.}\)

10.

Let \(W\) be a subspace of \(\R^n\) and let \(\mathbf{x}\) be a vector in \(\R^n\text{.}\) Using Exercise 10.3.3.9, or otherwise, show that \(\mathbf{x}\) is in \(W\) if and only if \(\mathbf{x} = \mbox{proj}_W(\mathbf{x})\text{.}\)
Hint.
Write \(\mathbf{w} = \mbox{proj}_W(\mathbf{x})\text{.}\) Then \(\mathbf{w}\) is in \(W\) by definition. If \(\mathbf{x}\) is in \(W\text{,}\) then \(\mathbf{x} - \mathbf{w}\) is in \(W\text{.}\) But \(\mathbf{x} - \mathbf{w}\) is also in \(W^\perp\text{,}\) so \(\mathbf{x} - \mathbf{w}\) is in \(W \cap W^\perp = \{\mathbf{0}\}\text{.}\) Thus \(\mathbf{x} = \mathbf{w}\text{.}\)

11.

If \(W\) is a subspace of \(\R^n\text{,}\) show that \(\mbox{proj}_W(\mathbf{x}) = \mathbf{x}\) for all \(\mathbf{x}\) in \(W\text{.}\)
Hint.
Let \(\{\mathbf{q}_{1}, \mathbf{q}_{2}, \dots , \mathbf{q}_{m}\}\) be an orthonormal basis of \(W\text{.}\) If \(\mathbf{x}\) is in \(W\) the expansion theorem gives
\begin{equation*} \mathbf{x} = (\mathbf{x} \cdot \mathbf{q}_{1})\mathbf{q}_{1} + (\mathbf{x} \cdot \mathbf{q}_{2})\mathbf{q}_{2} + \dots + (\mathbf{x} \cdot \mathbf{q}_{m})\mathbf{q}_{m} = \mbox{proj}_W(\mathbf{x}). \end{equation*}

12.

If \(W\) is a subspace of \(\R^n\text{,}\) show that \(\mathbf{x} = \mbox{proj}_W(\mathbf{x}) + \mbox{proj}_{W^\perp}(\mathbf{x})\) for all \(\mathbf{x}\) in \(\R^n\text{.}\)

13.

If \(\{\mathbf{v}_{1}, \dots, \mathbf{v}_{n}\}\) is an orthogonal basis of \(\R^n\) and \(W = \mbox{span}\left(\mathbf{v}_{1}, \dots, \mathbf{v}_{m}\right)\text{,}\) \(m\lt n\text{,}\) show that \(W^\perp = \mbox{span}\left(\mathbf{v}_{m + 1}, \dots, \mathbf{v}_{n}\right)\text{.}\)