
Coordinated Linear Algebra

Section 5.2 Bases and Dimension

Subsection 5.2.1 Coordinate Vectors

When we first introduced vectors we learned to represent them using component notation. If we consider \(\mathbf{u}=\begin{bmatrix} 2 \\ 5 \end{bmatrix}\text{,}\) then we know that the head of \(\mathbf{u}\) is located at the point \((2, 5)\text{.}\)
But there is another way to look at the component form. Observe that \(\mathbf{u}\) can be expressed as a linear combination of the standard unit vectors \(\mathbf{i}\) and \(\mathbf{j}\text{:}\)
\begin{equation*} \mathbf{u}=2\begin{bmatrix}1\\0\end{bmatrix}+5\begin{bmatrix}0\\1\end{bmatrix}=2\mathbf{i}+5\mathbf{j}. \end{equation*}
In fact, any vector \(\mathbf{v}=\begin{bmatrix} a \\ b \end{bmatrix}\) of \(\R^2\) can be written as a linear combination of \(\mathbf{i}\) and \(\mathbf{j}\text{:}\)
\begin{equation*} \mathbf{v}=\begin{bmatrix}a\\b\end{bmatrix}=a\mathbf{i}+b\mathbf{j}. \end{equation*}
This gives us an alternative way of interpreting the component notation:
\begin{equation*} \left[\begin{array}{c} a\\b \end{array}\right] \begin{array}{c} \longleftarrow\\ \longleftarrow \end{array} \begin{array}{c} \mbox{coefficient in front of } \mathbf{i}\\ \mbox{coefficient in front of } \mathbf{j} \end{array} \end{equation*}
We say that \(a\) and \(b\) are the coordinates of \(\mathbf{v}\) with respect to \(\{\mathbf{i}, \mathbf{j}\}\text{,}\) and \(\begin{bmatrix} a \\ b \end{bmatrix}\) is said to be the coordinate vector for \(\mathbf{v}\) with respect to \(\{\mathbf{i}, \mathbf{j}\}\text{.}\) Every vector \(\mathbf{v}\) of \(\R^2\) can thus be represented using \(\mathbf{i}\) and \(\mathbf{j}\text{.}\) Moreover, this representation in terms of \(\mathbf{i}\) and \(\mathbf{j}\) is unique for each vector, meaning that two different coordinate vectors can never represent the same vector. We will refer to \(\{\mathbf{i}, \mathbf{j}\}\) as a basis of \(\R^2\text{.}\)
The order in which the basis elements are written matters. For example, \(\mathbf{u}\) is represented by the coordinate vector \(\begin{bmatrix} 2 \\ 5 \end{bmatrix}\) with respect to \(\{\mathbf{i}, \mathbf{j}\}\text{,}\) but changing the basis to \(\{\mathbf{j}, \mathbf{i}\}\) would change the coordinate vector to \(\begin{bmatrix} 5 \\ 2 \end{bmatrix}\text{.}\) In our notation:
\begin{equation*} \left[\begin{array}{c} 5\\2 \end{array}\right] \begin{array}{c} \longleftarrow\\ \longleftarrow \end{array} \begin{array}{c} \mbox{coefficient in front of the first basis element }\\\mbox{coefficient in front of the second basis element} \end{array} \end{equation*}
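For readers who like to verify such computations by machine, the short sketch below recovers the coordinate vectors of \(\mathbf{u}\) with respect to \(\{\mathbf{i}, \mathbf{j}\}\) and with respect to \(\{\mathbf{j}, \mathbf{i}\}\) by solving the corresponding linear systems. This is an optional computational aside, assuming the SymPy Python library is available.

    from sympy import Matrix, linsolve, symbols

    a, b = symbols('a b')
    u = Matrix([2, 5])
    i, j = Matrix([1, 0]), Matrix([0, 1])

    # Coordinates with respect to {i, j}: solve a*i + b*j = u
    print(linsolve((Matrix.hstack(i, j), u), a, b))   # {(2, 5)}

    # Coordinates with respect to {j, i}: same vectors, different order of columns
    print(linsolve((Matrix.hstack(j, i), u), a, b))   # {(5, 2)}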
Clearly, standard unit vectors \(\mathbf{i}\) and \(\mathbf{j}\) are very convenient, but other vectors can also be used in place of \(\mathbf{i}\) and \(\mathbf{j}\) to represent \(\mathbf{u}\text{.}\)

Exploration 5.2.1.

The diagram below shows \(\mathbf{u}\) together with vectors \(\mathbf{w}_1\) and \(\mathbf{w}_2\text{.}\)
Figure: the vectors \(\mathbf{u}\text{,}\) \(\mathbf{w}_1\text{,}\) and \(\mathbf{w}_2\text{.}\)
It is easy to see that
\begin{equation*} \mathbf{u}=2\mathbf{w}_1+\mathbf{w}_2 \end{equation*}
as shown below.
Figure: \(\mathbf{u}\) expressed as \(2\mathbf{w}_1+\mathbf{w}_2\text{.}\)
If we declare \(\{\mathbf{w}_1, \mathbf{w}_2\}\) to be a basis of \(\R^2\text{,}\) then we can say that the coordinate vector for \(\mathbf{u}\) with respect to \(\{\mathbf{w}_1, \mathbf{w}_2\}\) is \(\begin{bmatrix} 2\\ 1 \end{bmatrix}\text{.}\)
\begin{equation*} \left[\begin{array}{c} 2\\1 \end{array}\right] \begin{array}{c} \longleftarrow\\ \longleftarrow \end{array} \begin{array}{c} \mbox{coefficient in front of the first basis element }\\\mbox{coefficient in front of the second basis element} \end{array} \end{equation*}

Subsection 5.2.2 What Constitutes a Basis?

In the previous subsection we used the term basis without defining it. Now is the time to pause and think about what we want a basis to do. Let’s focus on \(\R^n\) and subspaces of \(\R^n\text{;}\) what we establish here will generalize easily to other vector spaces. Based on our previous discussion, given any vector \(\mathbf{v}\) of \(\R^n\) (or of a subspace \(V\) of \(\R^n\)), we want to be able to write a coordinate vector for \(\mathbf{v}\) with respect to the given basis of \(\R^n\) (or \(V\)).
This requirement means that the basis vectors must span \(\R^n\) (or \(V\)). For example, consider \(\mathbf{w}_1\) and \(\mathbf{w}_2\) shown below.
Figure: the plane spanned by \(\mathbf{w}_1\) and \(\mathbf{w}_2\text{.}\)
The set \(\{\mathbf{w}_1, \mathbf{w}_2\}\) cannot be a basis for \(\R^3\) because \(\mathbf{w}_1\) and \(\mathbf{w}_2\) span a plane in \(\R^3\text{,}\) and any vector not in that plane cannot be written as a linear combination of \(\mathbf{w}_1\) and \(\mathbf{w}_2\text{.}\) On the other hand, the plane spanned by \(\mathbf{w}_1\) and \(\mathbf{w}_2\) is a subspace of \(\R^3\text{.}\) Because every vector in that plane can be written as a linear combination of \(\mathbf{w}_1\) and \(\mathbf{w}_2\text{,}\) the set \(\{\mathbf{w}_1, \mathbf{w}_2\}\) can potentially be a basis for the plane, provided that the set satisfies our second requirement.
Our second requirement is that for a fixed basis of \(\R^n\) (or \(V\)), the coordinate vector for each \(\mathbf{v}\) in \(\R^n\) (or \(V\)) should be unique. Uniqueness of representation in terms of the basis elements will play an important role in our future study of functions that map vector spaces to vector spaces. The following theorem shows that the uniqueness requirement is equivalent to the requirement that the basis vectors be linearly independent. In other words, a basis may not contain redundant vectors.

Theorem 5.2.1.

Let \(V\) be a subspace of \(\R^n\) and suppose that the vectors \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\) span \(V\text{.}\) Then every vector \(\mathbf{v}\) of \(V\) can be expressed as a unique linear combination of \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\) if and only if \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\) are linearly independent.

Proof.

Suppose that every \(\mathbf{v}\) in \(V\) can be expressed as a unique linear combination of \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\text{.}\) This means that \(\mathbf{0}\) has a unique representation as a linear combination of \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\text{.}\) But
\begin{equation*} \mathbf{0}=0\mathbf{w}_1+0\mathbf{w}_2+\ldots+0\mathbf{w}_p \end{equation*}
is a representation of \(\mathbf{0}\) in terms of \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\text{.}\) Since we are assuming that such a representation is unique, we conclude that there is no other. This means that the vectors \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\) are linearly independent. Conversely, suppose that vectors \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\) are linearly independent. An arbitrary element \(\mathbf{v}\) of \(V\) can be expressed as a linear combination of \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\text{:}\)
\begin{equation*} \mathbf{v}=a_1\mathbf{w}_1+a_2\mathbf{w}_2+\ldots+a_p\mathbf{w}_p. \end{equation*}
To see that this representation is unique, suppose that \(\mathbf{v}\) can also be written as another linear combination:
\begin{equation*} \mathbf{v}=b_1\mathbf{w}_1+b_2\mathbf{w}_2+\ldots+b_p\mathbf{w}_p. \end{equation*}
But then
\begin{equation*} a_1\mathbf{w}_1+a_2\mathbf{w}_2+\ldots+a_p\mathbf{w}_p=b_1\mathbf{w}_1+b_2\mathbf{w}_2+\ldots+b_p\mathbf{w}_p. \end{equation*}
This gives us
\begin{equation*} (a_1-b_1)\mathbf{w}_1+(a_2-b_2)\mathbf{w}_2+\ldots+(a_p-b_p)\mathbf{w}_p=\mathbf{0}. \end{equation*}
Because we assumed that \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\) are linearly independent, we must have
\begin{equation*} a_1-b_1=0,\, a_2-b_2=0,\,\ldots ,\,a_p-b_p=0, \end{equation*}
so that
\begin{equation*} a_1=b_1,\, a_2=b_2,\,\ldots ,\,a_p=b_p. \end{equation*}
This proves the representation of \(\mathbf{v}\) in terms of \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\) is unique.
Here is a concrete example of a basis and some practice.

Example 5.2.2.

Use \(V=\mbox{span}(\mathcal{S})\text{,}\) where
\begin{equation*} \mathcal{S}=\left\{\begin{bmatrix}5\\2\\4\end{bmatrix},\begin{bmatrix}4\\1\\1\end{bmatrix},\begin{bmatrix}-3\\0\\2\end{bmatrix}\right\} \end{equation*}
to illustrate why a linearly dependent set of vectors cannot be used as a basis for a subspace, by showing that linear dependence destroys the uniqueness of coordinate vectors for vectors in \(V\text{.}\)
Answer.
We will first show that the elements of \(\mathcal{S}\) are linearly dependent. Let \(A\) be a matrix whose columns are the vectors in \(\mathcal{S}\text{.}\)
\begin{equation*} A=\begin{bmatrix}5\amp 4\amp -3\\2\amp 1\amp 0\\4\amp 1\amp 2\end{bmatrix}. \end{equation*}
We find that
\begin{equation*} \mbox{rref}(A) = \begin{bmatrix} 1\amp 0\amp 1\\0\amp 1\amp -2\\0\amp 0\amp 0 \end{bmatrix}. \end{equation*}
Therefore the matrix equation \(A\mathbf{x}=\mathbf{0}\) has infinitely many solutions:
\begin{equation*} \mathbf{x}=\begin{bmatrix}-1\\2\\1\end{bmatrix}t, \end{equation*}
where \(t\) is any scalar. This tells us that there are infinitely many nontrivial linear relations among the elements of \(\mathcal{S}\text{.}\) Letting \(t=1\) gives us one such nontrivial relation:
\begin{equation*} -\begin{bmatrix}5\\2\\4\end{bmatrix}+2\begin{bmatrix}4\\1\\1\end{bmatrix}+\begin{bmatrix}-3\\0\\2\end{bmatrix}=\mathbf{0} \end{equation*}
Now let’s pick an arbitrary vector \(\mathbf{v}\) in \(V\text{.}\) Any vector will do, so let
\begin{equation*} \mathbf{v}=\begin{bmatrix}5\\2\\4\end{bmatrix}+ (-1)\begin{bmatrix}4\\1\\1\end{bmatrix}+0\begin{bmatrix}-3\\0\\2\end{bmatrix}. \end{equation*}
Based on this representation of \(\mathbf{v}\text{,}\) the coordinate vector for \(\mathbf{v}\) with respect to \(\mathcal{S}\) is
\begin{equation*} \begin{bmatrix}1\\-1\\0\end{bmatrix}. \end{equation*}
But
\begin{equation*} \begin{bmatrix}5\\2\\4\end{bmatrix}=2\begin{bmatrix}4\\1\\1\end{bmatrix}+\begin{bmatrix}-3\\0\\2\end{bmatrix}. \end{equation*}
So, by substitution, we have:
\begin{align*} \mathbf{v} \amp =\left(2\begin{bmatrix}4\\1\\1\end{bmatrix}+\begin{bmatrix}-3\\0\\2\end{bmatrix}\right)+ (-1)\begin{bmatrix}4\\1\\1\end{bmatrix}+0\begin{bmatrix}-3\\0\\2\end{bmatrix} \\ \amp =0\begin{bmatrix}5\\2\\4\end{bmatrix}+ 1\begin{bmatrix}4\\1\\1\end{bmatrix}+1\begin{bmatrix}-3\\0\\2\end{bmatrix}. \end{align*}
Problem 5.2.3.
Based on this representation, the coordinate vector for \(\mathbf{v}\) with respect to \(\mathcal{S}\) is what?
Answer.
\begin{equation*} \begin{bmatrix}0\\1\\1\end{bmatrix}. \end{equation*}
The set \(\mathcal{S}\) is linearly dependent. As a result, coordinate vectors for elements of \(V\) are not unique and we do not want to use \(\mathcal{S}\) as a basis for \(V\text{.}\)
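The computations in this example are easy to double-check by machine. The following sketch is an optional aside, assuming SymPy is available; it reproduces the reduced row-echelon form, a basis for the null space, and the fact that the two different coordinate vectors above represent the same \(\mathbf{v}\text{.}\)

    from sympy import Matrix

    A = Matrix([[5, 4, -3],
                [2, 1,  0],
                [4, 1,  2]])

    print(A.rref())        # rref is [[1, 0, 1], [0, 1, -2], [0, 0, 0]], pivots in columns 0 and 1
    print(A.nullspace())   # one basis vector for the null space: (-1, 2, 1)

    # Two different coordinate vectors with respect to S produce the same v
    v1 = A * Matrix([1, -1, 0])
    v2 = A * Matrix([0,  1, 1])
    print(v1 == v2, v1.T)  # True Matrix([[1, 1, 3]])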

Subsection 5.2.3 Definition of a Basis

Definition 5.2.4.

A set \(\mathcal{S}\) of vectors is called a basis of \(\R^n\) (or a basis of a subspace \(V\) of \(\R^n\)) provided that
  1. \(\mbox{span}(\mathcal{S})=\R^n\) (or \(V\))
  2. \(\mathcal{S}\) is linearly independent.
The prototypical example is the standard basis, which we showcase in the next example. It is the basis we have implicitly worked with so far.

Example 5.2.5.

The standard unit vectors \(\mathbf{e}_1, \ldots ,\mathbf{e}_n\) are linearly independent and span \(\R^n\text{.}\) Thus \(\{\mathbf{e}_1, \ldots ,\mathbf{e}_n\}\) is a basis of \(\R^n\text{.}\)

Definition 5.2.6.

The set \(\{\mathbf{e}_1, \ldots ,\mathbf{e}_n\}\) is called the standard basis of \(\R^n\text{.}\)
Bases are not unique. For example, we know that the vectors \(\mathbf{i}\) and \(\mathbf{j}\) form the standard basis of \(\R^2\text{.}\) But, as we discussed in Example 3.1.19, the vectors
\begin{equation*} \begin{bmatrix}2\\2\end{bmatrix}, \begin{bmatrix}-1\\0\end{bmatrix} \end{equation*}
are linearly independent vectors that span \(\R^2\text{.}\) Therefore
\begin{equation*} \left\{\begin{bmatrix}2\\2\end{bmatrix}, \begin{bmatrix}-1\\0\end{bmatrix}\right\} \end{equation*}
is also a basis for \(\R^2\text{.}\)
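A quick computational way to confirm this (an optional sketch, assuming SymPy): two vectors form a basis of \(\R^2\) exactly when the matrix with those vectors as columns row reduces to the identity.

    from sympy import Matrix, eye

    B = Matrix([[2, -1],
                [2,  0]])

    # The columns are linearly independent and span R^2 exactly when rref(B) = I
    print(B.rref()[0] == eye(2))   # True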
Any linearly independent spanning set in \(\R^n\) (or in a subspace of \(\R^n\)) is a basis of \(\R^n\) (or of the subspace). The plural form of the word basis is bases. It is easy to see that \(\R^n\) and each of its nonzero subspaces have infinitely many bases.

Example 5.2.7.

Let \(V=\mbox{span} ( \begin{bmatrix} -2\\ 1\\ 3 \end{bmatrix}, \begin{bmatrix} 2\\ -4\\ 1 \end{bmatrix} )\text{.}\) The set
\begin{equation*} \mathcal{B}=\left\{\begin{bmatrix}-2\\1\\3\end{bmatrix},\begin{bmatrix}2\\-4\\1\end{bmatrix}\right\} \end{equation*}
is a basis for \(V\) because the two vectors in \(\mathcal{B}\) are linearly independent and span \(V\text{.}\) Find the coordinate vector for \(\mathbf{v}=\begin{bmatrix} 2\\ -10\\ 9 \end{bmatrix}\) with respect to \(\mathcal{B}\text{.}\)
Explanation.
We need to express \(\begin{bmatrix} 2\\ -10\\ 9 \end{bmatrix}\) as a linear combination of the elements of \(\mathcal{B}\text{.}\) To this end, we need to solve the vector equation:
\begin{equation*} a_1\begin{bmatrix}-2\\1\\3\end{bmatrix}+a_2\begin{bmatrix}2\\-4\\1\end{bmatrix}=\begin{bmatrix}2\\-10\\9\end{bmatrix}. \end{equation*}
The augmented matrix and the reduced row-echelon form are:
\begin{equation*} \left[\begin{array}{cc|c} -2\amp 2\amp 2\\1\amp -4\amp -10\\3\amp 1\amp 9 \end{array}\right]\rightsquigarrow\left[\begin{array}{cc|c} 1\amp 0\amp 2\\0\amp 1\amp 3\\0\amp 0\amp 0 \end{array}\right]. \end{equation*}
We conclude that \(a_1=2\text{,}\) \(a_2=3\text{.}\) This gives us
\begin{equation*} 2\begin{bmatrix}-2\\1\\3\end{bmatrix}+3\begin{bmatrix}2\\-4\\1\end{bmatrix}=\begin{bmatrix}2\\-10\\9\end{bmatrix}. \end{equation*}
The coefficient in front of the first basis vector is \(2\text{,}\) the coefficient in front of the second basis vector is \(3\text{.}\) This means that the coordinate vector for \(\begin{bmatrix} 2\\ -10\\ 9 \end{bmatrix}\) with respect to \(\mathcal{B}\) is \(\begin{bmatrix} 2\\ 3\end{bmatrix}\text{.}\)
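As a check, the same coefficients can be obtained by solving the system by machine. This is an optional sketch, assuming SymPy; the symbol names are our own.

    from sympy import Matrix, linsolve, symbols

    a1, a2 = symbols('a1 a2')
    B = Matrix([[-2,  2],
                [ 1, -4],
                [ 3,  1]])
    v = Matrix([2, -10, 9])

    # Solve a1*B[:, 0] + a2*B[:, 1] = v
    print(linsolve((B, v), a1, a2))   # {(2, 3)}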

Remark 5.2.8.

It may seem strange to you that the coordinate vector for a vector in \(\R^3\) has only two components. But remember that the subspace \(V\) is a plane. When \(\begin{bmatrix} 2\\ -10\\ 9 \end{bmatrix}\) is viewed as a vector in that plane, it makes sense that its coordinate vector requires only two components. This issue is related to the question of dimension, which is addressed later in this section.

Remark 5.2.9.

To construct the coordinate vector for \(\begin{bmatrix} 2\\ -10\\ 9 \end{bmatrix}\) with respect to \(\mathcal{B}\text{,}\) we had to be mindful of the order of the elements in \(\mathcal{B}\text{.}\) Ordinarily, the order of elements in a set is irrelevant, and the basis
\begin{equation*} \left\{\begin{bmatrix}-2\\1\\3\end{bmatrix},\begin{bmatrix}2\\-4\\1\end{bmatrix}\right\} \end{equation*}
is considered to be the same as
\begin{equation*} \left\{\begin{bmatrix}2\\-4\\1\end{bmatrix},\begin{bmatrix}-2\\1\\3\end{bmatrix}\right\}. \end{equation*}
When dealing with coordinate vectors, however, the order of the basis elements dictates the order of the components of the coordinate vector. If we switch the order of the elements in \(\mathcal{B}\text{,}\) the coordinate vector becomes \(\begin{bmatrix} 3\\ 2\end{bmatrix}\text{.}\) For this reason, when we come back to studying coordinate vectors in more detail, we will use the term ordered basis to avoid confusion.

Subsection 5.2.4 Exploring Dimension

A basis of a subspace \(V\) of \(\R^n\) is a subset of \(V\) that is linearly independent and spans \(V\text{.}\) A basis allows us to uniquely express every element of \(V\) as a linear combination of the elements of the basis. Several questions may come to mind at this time. Does every subspace of \(\R^n\) have a basis? We know that bases are not unique. If there is more than one basis, what, if anything, do they have in common?

Exploration 5.2.2.

How would you describe
\begin{equation*} V=\mbox{span}\left(\begin{bmatrix}1\\-2\\3\end{bmatrix}, \begin{bmatrix}-2\\4\\-6\end{bmatrix}\right)? \end{equation*}
If you answered that \(V\) is a line in \(\R^3\text{,}\) you are correct. While the two vectors span the line, it is not necessary to have both of them in the spanning set to describe the line.
Problem 5.2.10.
What is the minimum number of vectors needed to span a line?
Answer.
\(1\text{.}\)
Observe also that the vectors in the given spanning set are not linearly independent, so they do not form a basis for \(V\text{.}\)
Problem 5.2.11.
How many vectors would a basis for \(V\) have?
Answer.
\(1\text{.}\)
Now consider another subspace of \(\R^3\text{:}\)
\begin{equation*} W=\mbox{span}\left(\begin{bmatrix}1\\0\\2\end{bmatrix}, \begin{bmatrix}0\\-3\\0\end{bmatrix}\right) \end{equation*}
Geometrically, \(W\) is a plane in \(\R^3\text{.}\) Note that the vectors in the spanning set are linearly independent. Can we remove one of the vectors and have the remaining vector span the plane?
Problem 5.2.12.
What is the minimum number of vectors needed to span a plane? How many vectors would a basis for a plane have?
Answer.
\(2\)
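The answers above can also be checked computationally. Since the rank of a matrix counts its linearly independent columns, the rank of the matrix whose columns are the spanning vectors gives the number of vectors in a basis of the span. The sketch below is an optional aside, assuming SymPy.

    from sympy import Matrix

    # Columns of V_mat span the line V; columns of W_mat span the plane W
    V_mat = Matrix([[ 1, -2],
                    [-2,  4],
                    [ 3, -6]])
    W_mat = Matrix([[1,  0],
                    [0, -3],
                    [2,  0]])

    print(V_mat.rank())   # 1, so a basis of V has one vector
    print(W_mat.rank())   # 2, so a basis of W has two vectors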
Our observations in Exploration 5.2.2 hint at the idea of dimension. We know that a line is a one-dimensional object, a plane is a two-dimensional object, and the space we reside in is three-dimensional.
Based on our observations in Exploration 5.2.2, it makes sense for us to define dimension of a vector space (or a subspace) as the minimum number of vectors required to span the space (subspace). We can accomplish this by defining dimension as the number of elements in a basis. We have to proceed carefully because we don’t want the dimension to depend on our choice of a basis. So, before we state our definition, we need to make sure that every basis for a given vector space (or subspace) has the same number of elements.

Theorem 5.2.13.

Let \(V\) be a subspace of \(\R^n\text{,}\) and suppose that \(\mathcal{B}=\{\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_t\}\) and \(\mathcal{C}=\{\mathbf{w}_1, \mathbf{w}_2,\ldots ,\mathbf{w}_s\}\) are both bases of \(V\text{.}\) Then \(s=t\text{.}\)

Proof.

Suppose \(s\neq t\text{.}\) Without loss of generality, assume that \(s\gt t\text{.}\) Because \(\mathcal{B}\) spans \(V\text{,}\) every \(\mathbf{w}_i\) of \(\mathcal{C}\) can be written as a linear combination of elements of \(\mathcal{B}\text{:}\)
\begin{equation*} \mathbf{w}_i=a_{1i}\mathbf{v}_1+a_{2i}\mathbf{v}_{2}+\ldots +a_{ti}\mathbf{v}_t. \end{equation*}
Consider the vector equation
\begin{equation} b_1\mathbf{w}_1+b_2\mathbf{w}_2+\ldots +b_s\mathbf{w}_s=\mathbf{0}.\tag{5.2.1} \end{equation}
By substitution, we have:
\begin{align*} \amp b_1\mathbf{w}_1+b_2\mathbf{w}_2+\ldots +b_s\mathbf{w}_s \\ =\amp b_1(a_{11}\mathbf{v}_1+a_{21}\mathbf{v}_{2}+\ldots +a_{t1}\mathbf{v}_t)+b_2(a_{12}\mathbf{v}_1+a_{22}\mathbf{v}_{2}+\ldots +a_{t2}\mathbf{v}_t)+\ldots \\ \amp +b_s(a_{1s}\mathbf{v}_1+a_{2s}\mathbf{v}_{2}+\ldots +a_{ts}\mathbf{v}_t) \\ =\amp (b_1a_{11}+b_2a_{12}+\ldots +b_sa_{1s})\mathbf{v}_1 +(b_1a_{21}+b_2a_{22}+\ldots +b_sa_{2s})\mathbf{v}_2+ \ldots \\ \amp +(b_1a_{t1}+b_2a_{t2}+\ldots +b_sa_{ts})\mathbf{v}_t \\ =\amp \mathbf{0}. \end{align*}
Because \(\mathbf{v}_j\)’s are linearly independent, we must have
\begin{equation*} b_1a_{j1}+b_2a_{j2}+\ldots +b_sa_{js}=0 \end{equation*}
for all \(1\leq j\leq t\text{.}\) This gives us a homogeneous system of \(t\) equations in \(s\) unknowns. We can write the system as a matrix equation.
\begin{equation*} \begin{bmatrix}a_{11}\amp a_{12}\amp \ldots \amp a_{1s}\\a_{21}\amp a_{22}\amp \ldots \amp a_{2s}\\\vdots\amp \vdots\amp \ddots\amp \vdots\\a_{t1}\amp a_{t2}\amp \ldots\amp a_{ts}\end{bmatrix}\begin{bmatrix}b_1\\b_2\\\vdots\\b_s\end{bmatrix}=\mathbf{0}. \end{equation*}
Recall our assumption that \(s\gt t\text{.}\) By Theorem 2.3.11, we know that the system has infinitely many solutions. In particular, (5.2.1) has a nontrivial solution, so \(\{\mathbf{w}_1, \mathbf{w}_2,\ldots ,\mathbf{w}_s\}\) is linearly dependent. This contradicts our assumption that \(\mathcal{C}\) is a basis of \(V\text{.}\) We conclude that \(s=t\text{.}\)
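The key step in this proof is that a homogeneous system with more unknowns than equations always has a nontrivial solution. A small illustration with \(t=2\) and \(s=3\) follows; this is an optional sketch assuming SymPy, and the matrix entries are arbitrary.

    from sympy import Matrix

    # A 2 x 3 coefficient matrix: t = 2 equations, s = 3 unknowns
    A = Matrix([[1, 2, 3],
                [4, 5, 6]])

    # The homogeneous system A*b = 0 has a nontrivial solution
    print(A.nullspace())   # one basis vector for the null space: (1, -2, 1)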

Definition 5.2.14.

Let \(V\) be a subspace of \(\R^n\text{.}\) The dimension of \(V\) is the number, \(m\text{,}\) of elements in any basis of \(V\text{.}\) We write
\begin{equation*} \mbox{dim}(V)=m \end{equation*}

Example 5.2.15.

We know that vectors \(\mathbf{e}_1, \ldots ,\mathbf{e}_n\) form a basis of \(\R^n\text{.}\) Therefore \(\mbox{dim}(\R^n)=n\text{.}\)
The following results guarantee that dimension is defined for every nonzero subspace of \(\R^n\text{.}\)

Lemma 5.2.16.

Any set of more than \(n\) vectors in \(\R^n\) is linearly dependent; equivalently, a linearly independent set of vectors in \(\R^n\) contains at most \(n\) vectors.

Proof.

If \(\mathbf{w}_1,\ldots ,\mathbf{w}_s\) are vectors of \(\R^n\) with \(s\gt n\text{,}\) let \(A\) be the \(n\times s\) matrix whose columns are these vectors. The homogeneous system \(A\mathbf{x}=\mathbf{0}\) has more unknowns than equations, so by Theorem 2.3.11 it has a nontrivial solution. Such a solution is a nontrivial linear relation among \(\mathbf{w}_1,\ldots ,\mathbf{w}_s\text{,}\) which means these vectors are linearly dependent.

Lemma 5.2.17.

Suppose \(\mathbf{v}_1,\ldots ,\mathbf{v}_k\) are linearly independent vectors of \(\R^n\text{,}\) and let \(\mathbf{u}\) be a vector of \(\R^n\) that is not in \(\mbox{span}(\mathbf{v}_1,\ldots ,\mathbf{v}_k)\text{.}\) Then \(\{\mathbf{u}, \mathbf{v}_1,\ldots ,\mathbf{v}_k\}\) is linearly independent.

Proof.

Consider the equation
\begin{equation} a\mathbf{u}+a_1\mathbf{v}_1+\ldots +a_k\mathbf{v}_k=\mathbf{0}.\tag{5.2.2} \end{equation}
We need to show that \(a=a_1=\ldots =a_k=0\text{.}\) Suppose \(a\neq 0\text{.}\) Then
\begin{equation*} \mathbf{u}=\frac{-a_1}{a}\mathbf{v}_1+\ldots +\frac{-a_k}{a}\mathbf{v}_k, \end{equation*}
which contradicts the assumption that \(\mathbf{u}\) is not in the span of \(\mathbf{v}_1,\ldots ,\mathbf{v}_k\text{.}\) So \(a=0\text{,}\) and (5.2.2) reduces to \(a_1\mathbf{v}_1+\ldots +a_k\mathbf{v}_k=\mathbf{0}\text{.}\) Because \(\mathbf{v}_1,\ldots ,\mathbf{v}_k\) are linearly independent, it follows that \(a_1=\ldots =a_k=0\text{.}\) This means that (5.2.2) has only the trivial solution, so \(\{\mathbf{u},\mathbf{v}_1,\ldots ,\mathbf{v}_k\}\) is linearly independent.

Theorem 5.2.18.

Let \(V\) be a nonzero subspace of \(\R^n\text{.}\) Then any linearly independent subset of \(V\) can be enlarged to a basis of \(V\text{.}\) In particular, \(V\) has a basis.

Proof.

Suppose that \(X=\{\mathbf{v}_1,\ldots ,\mathbf{v}_k\}\) is a linearly independent subset of \(V\text{.}\) If \(\mbox{span}(X) = V\) then \(X\) is already a basis of \(V\text{.}\)
If \(\mbox{span}(X) \neq V\text{,}\) choose \(\mathbf{u}_1\) in \(V\) such that \(\mathbf{u}_1\) is not in \(\mbox{span}(X)\text{.}\) The set \(\{\mathbf{u}_1, \mathbf{v}_1,\ldots ,\mathbf{v}_k\}\) is linearly independent by Lemma 5.2.17.
If \(\mbox{span}(\mathbf{u}_1, \mathbf{v}_1,\ldots ,\mathbf{v}_k) = V\) we are done; otherwise choose \(\mathbf{u}_{2} \in V\) such that \(\mathbf{u}_{2}\) is not in \(\mbox{span}(\mathbf{u}_1, \mathbf{v}_1,\ldots ,\mathbf{v}_k)\text{.}\) Then \(\{\mathbf{u}_1,\mathbf{u}_2, \mathbf{v}_1,\ldots ,\mathbf{v}_k\}\) is linearly independent, and the process continues.
We claim that a basis of \(V\) will be reached eventually. If no basis of \(V\) were ever reached, the process would create arbitrarily large linearly independent sets of vectors in \(\R^n\text{,}\) which is impossible by Lemma 5.2.16.
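The proof is constructive, and the procedure it describes is easy to carry out by machine. The sketch below is an optional aside, assuming SymPy, written for the special case \(V=\R^n\text{:}\) it extends a linearly independent list to a basis of \(\R^n\) by repeatedly appending a standard basis vector that is not yet in the span. Searching over the standard basis vectors suffices here because they span \(\R^n\text{.}\)

    from sympy import Matrix, eye

    def extend_to_basis(vectors, n):
        """Extend a linearly independent list of vectors in R^n to a basis of R^n,
        following the procedure in the proof: keep appending a vector that lies
        outside the span of the current list."""
        basis = list(vectors)
        for k in range(n):
            e_k = eye(n).col(k)
            # e_k lies outside the current span exactly when appending it raises the rank
            if Matrix.hstack(*basis, e_k).rank() > Matrix.hstack(*basis).rank():
                basis.append(e_k)
        return basis

    # Example: start with one vector in R^3 and extend to a basis of R^3
    for v in extend_to_basis([Matrix([1, -2, 3])], 3):
        print(v.T)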

Exercises 5.2.5 Exercises

Exercise Group.

Let \(\mathcal{B}=\left\{\begin{bmatrix}1\\1\end{bmatrix},\begin{bmatrix}-1\\2\end{bmatrix}\right\}\) be a basis for \(\R^2\text{.}\) (Do a mental verification that \(\mathcal{B}\) is a basis.) For each \(\mathbf{v}\) given below, find the coordinate vector for \(\mathbf{v}\) with respect to \(\mathcal{B}\text{.}\)
1.
Vector \(\mathbf{v}\) as drawn below.
First case drawn
Answer.
\begin{equation*} \begin{bmatrix}-2\\1\end{bmatrix} \end{equation*}
2.
Vector \(\mathbf{v}\) as drawn below.
Second case drawn
Answer.
\begin{equation*} \begin{bmatrix}3\\2\end{bmatrix} \end{equation*}

3.

Let
\begin{equation*} \mathcal{B}=\left\{\begin{bmatrix}1\\-1\\3\end{bmatrix},\begin{bmatrix}2\\1\\-1\end{bmatrix}\right\} \quad \text{be a basis for} \quad \mbox{span}\left(\begin{bmatrix}1\\-1\\3\end{bmatrix},\begin{bmatrix}2\\1\\-1\end{bmatrix}\right) \end{equation*}
Find the coordinate vector for \([-4,-2,2]\) with respect to \(\mathcal{B}\text{.}\)
Answer.
\begin{equation*} \begin{bmatrix}0\\-2\end{bmatrix} \end{equation*}

4.

Suppose
\begin{equation*} \mathcal{B}=\left\{\begin{bmatrix}1\\1\\1\end{bmatrix},\begin{bmatrix}1\\0\\1\end{bmatrix}, \mathbf{w}\right\} \end{equation*}
is a basis for \(\R^3\text{.}\) Find \(\mathbf{w}\) if the coordinate vector for \([-2,-7,4]\) is \([-1,2,-3]\text{.}\)
Answer.
\begin{equation*} \begin{bmatrix}1\\2\\-1\end{bmatrix} \end{equation*}

5.

    Which of the following is a basis for \(\R^2\text{?}\)
  • \(\left\{\begin{bmatrix}1\\1\end{bmatrix},\begin{bmatrix}-1\\-1\end{bmatrix}, \begin{bmatrix}1\\2\end{bmatrix}\right\} \)
  • \(\left\{\begin{bmatrix}1\\1\end{bmatrix}, \begin{bmatrix}1\\2\end{bmatrix}\right\} \)
  • \(\left\{\begin{bmatrix} 3\\-1\end{bmatrix},\begin{bmatrix}1\\2\end{bmatrix}, \begin{bmatrix}-4\\3\end{bmatrix}\right\}\)
  • \(\left\{\begin{bmatrix}1\\-3\end{bmatrix}, \begin{bmatrix}-2\\6\end{bmatrix}\right\}\)

6.

    Which of the following is a basis for \(V\) given below?
    \begin{equation*} V=\mbox{span}\left(\begin{bmatrix}1\\1\\1\end{bmatrix}, \begin{bmatrix}1\\-2\\1\end{bmatrix}\right) \end{equation*}
  • \(\left\{\begin{bmatrix} 2\\-1\\2\end{bmatrix},\begin{bmatrix}1\\-2\\1\end{bmatrix}\right\}\)
  • \(\left\{\begin{bmatrix}0\\3\\0\end{bmatrix}, \begin{bmatrix}3\\-3\\3\end{bmatrix}\right\} \)
  • \(\left\{\begin{bmatrix} 1\\0\\0\end{bmatrix},\begin{bmatrix}0\\0\\1\end{bmatrix}\right\}\)
  • \(\left\{\begin{bmatrix} 1\\1\\1\end{bmatrix},\begin{bmatrix}2\\-1\\2\end{bmatrix}, \begin{bmatrix}1\\-2\\1\end{bmatrix}\right\}\)

Exercise Group.

For each given set \(S\) of vectors, find \(\mbox{dim}(\mbox{span}(S))\text{.}\)
7.
\begin{equation*} S=\left\{\begin{bmatrix}1\\1\\0\\1\end{bmatrix}, \begin{bmatrix}0\\1\\1\\1\end{bmatrix}, \begin{bmatrix}1\\0\\1\\1\end{bmatrix}, \begin{bmatrix}1\\1\\0\\1\end{bmatrix} \right\} \end{equation*}
Answer.
\(\mbox{dim}(\mbox{span}(S))=3\)
8.
\begin{equation*} S=\left\{\begin{bmatrix}3\\-2\\1\\1\end{bmatrix}, \begin{bmatrix}2\\3\\3\\-2\end{bmatrix}, \begin{bmatrix}1\\-5\\-2\\3\end{bmatrix}\right\} \end{equation*}
Answer.
\(\mbox{dim}(\mbox{span}(S))=2\)
9.
\begin{equation*} S=\left\{\begin{bmatrix}1\\1\\-3\end{bmatrix}, \begin{bmatrix}-3\\2\\1\end{bmatrix}, \begin{bmatrix}5\\-2\\4\end{bmatrix}\right\} \end{equation*}
Answer.
\(\mbox{dim}(\mbox{span}(S))=3\)

11.

Let \(\mathcal{B}=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}\) be a basis of \(\R^3\text{.}\) Suppose \(A\) is a nonsingular \(3\times 3 \) matrix. Show that \(\mathcal{C}=\{A\mathbf{v}_1, A\mathbf{v}_2, A\mathbf{v}_3\}\) is also a basis of \(\R^3\text{.}\)
Hint.
To show that \(\mathcal{C}\) spans \(\R^3\text{,}\) express \(A^{-1}\mathbf{v}\) as a linear combination of \(\mathbf{v}_1\text{,}\) \(\mathbf{v}_2\) and \(\mathbf{v}_3\text{.}\)