
Coordinated Linear Algebra

Section 4.2 Bases and Dimension

Subsection 4.2.1 Coordinate Vectors for Vector Spaces

When we first introduced vectors we learned to represent them using component notation. If we consider \(\mathbf{u}=\begin{bmatrix} 2 \\ 5 \end{bmatrix}\text{,}\) then we know that the head of \(\mathbf{u}\) is located at the point \((2, 5)\text{.}\)
But there is another way to look at the component form. Observe that \(\mathbf{u}\) can be expressed as a linear combination of the standard unit vectors \(\mathbf{i}\) and \(\mathbf{j}\text{:}\)
\begin{equation*} \mathbf{u}=2\begin{bmatrix}1\\0\end{bmatrix}+5\begin{bmatrix}0\\1\end{bmatrix}=2\mathbf{i}+5\mathbf{j}. \end{equation*}
Further, for the fixed vector \(\mathbf{u}\text{,}\) the numbers 2 and 5 are the unique choice of coefficients that produces \(\mathbf{u}\text{.}\) Put another way, the only scalars \(x\) and \(y\) such that \(\mathbf{u}=x\mathbf{i}+y\mathbf{j}\) are \(x=2\) and \(y=5\text{.}\)
In fact, any vector \(\mathbf{v}=\begin{bmatrix} a \\ b \end{bmatrix}\) of \(\R^2\) can be written as a linear combination of \(\mathbf{i}\) and \(\mathbf{j}\text{:}\)
\begin{equation*} \mathbf{v}=\begin{bmatrix}a\\b\end{bmatrix}=a\mathbf{i}+b\mathbf{j}. \end{equation*}
This gives us an alternative way of interpreting the component notation:
\begin{equation*} \left[\begin{array}{c} a\\b \end{array}\right] \begin{array}{c} \longleftarrow\\ \longleftarrow \end{array} \begin{array}{c} \mbox{coefficient in front of }\mathbf{i}\\\mbox{coefficient in front of } \mathbf{j} \end{array} \end{equation*}
Again, to get the vector \(\mathbf{v}\text{,}\) the only choice of coefficients that works is \(a\) and \(b\text{.}\) We say that \(a\) and \(b\) are the coordinates of \(\mathbf{v}\) with respect to \((\mathbf{i}, \mathbf{j})\text{,}\) and \(\begin{bmatrix} a \\ b \end{bmatrix}\) is said to be the coordinate vector for \(\mathbf{v}\) with respect to \((\mathbf{i}, \mathbf{j})\text{.}\) Every vector \(\mathbf{v}\) of \(\R^2\) can thus be represented using \(\mathbf{i}\) and \(\mathbf{j}\text{.}\) Moreover, such a representation in terms of \(\mathbf{i}\) and \(\mathbf{j}\) is unique for each vector, meaning that we will never have two different coordinate vectors representing the same vector. We will refer to \((\mathbf{i}, \mathbf{j})\) as a basis of \(\R^2\text{.}\)
The order in which the basis elements are written matters. For example, \(\mathbf{u}\) is represented by the coordinate vector \(\begin{bmatrix} 2 \\ 5 \end{bmatrix}\) with respect to \((\mathbf{i}, \mathbf{j})\text{,}\) but changing the basis to \((\mathbf{j}, \mathbf{i})\) would change the coordinate vector to \(\begin{bmatrix} 5 \\ 2 \end{bmatrix}\text{.}\) In our notation:
\begin{equation*} \left[\begin{array}{c} 5\\2 \end{array}\right] \begin{array}{c} \longleftarrow\\ \longleftarrow \end{array} \begin{array}{c} \mbox{coefficient in front of the first basis element }\\\mbox{coefficient in front of the second basis element} \end{array} \end{equation*}
Clearly, the standard unit vectors \(\mathbf{i}\) and \(\mathbf{j}\) are very convenient, but other vectors can also be used in place of \(\mathbf{i}\) and \(\mathbf{j}\) to represent \(\mathbf{u}\text{.}\)
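Coordinates are also easy to compute with software: solve a linear system whose coefficient matrix has the basis vectors as its columns. Here is a minimal sketch using Python with the sympy library (the choice of tools is ours, not part of the text); it recovers the coordinate vectors of \(\mathbf{u}\) with respect to \((\mathbf{i}, \mathbf{j})\) and \((\mathbf{j}, \mathbf{i})\text{.}\)

```python
# A minimal sketch (Python + sympy, our choice of tools) of computing a
# coordinate vector: solve B x = u, where the columns of B are the basis
# vectors taken in order.
from sympy import Matrix

u = Matrix([2, 5])
i, j = Matrix([1, 0]), Matrix([0, 1])

B = Matrix.hstack(i, j)          # ordered basis (i, j)
print(B.solve(u))                # Matrix([[2], [5]])

B_swapped = Matrix.hstack(j, i)  # ordered basis (j, i)
print(B_swapped.solve(u))        # Matrix([[5], [2]])
```

Swapping the columns swaps the two coordinates, exactly as described above.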

Exploration 4.2.1.

The diagram below shows \(\mathbf{u}\) together with vectors \(\mathbf{w}_1\) and \(\mathbf{w}_2\text{.}\)
[Figure: the three vectors \(\mathbf{u}\text{,}\) \(\mathbf{w}_1\text{,}\) and \(\mathbf{w}_2\) drawn in the plane.]
It is easy to see that
\begin{equation*} \mathbf{u}=2\mathbf{w}_1+\mathbf{w}_2 \end{equation*}
as shown below.
[Figure: \(\mathbf{u}\) decomposed as \(2\mathbf{w}_1+\mathbf{w}_2\text{.}\)]
If we declare \((\mathbf{w}_1, \mathbf{w}_2)\) to be a basis of \(\R^2\text{,}\) then we can say that the coordinate vector for \(\mathbf{u}\) with respect to \((\mathbf{w}_1, \mathbf{w}_2)\) is \(\begin{bmatrix} 2\\ 1 \end{bmatrix}\text{:}\)
\begin{equation*} \left[\begin{array}{c} 2\\1 \end{array}\right] \begin{array}{c} \longleftarrow\\ \longleftarrow \end{array} \begin{array}{c} \mbox{coefficient in front of the first basis element }\\\mbox{coefficient in front of the second basis element} \end{array} \end{equation*}

Subsection 4.2.2 What Constitutes a Basis?

The goal of this section is to define a basis in a way that works in general, motivated by the exploration in the previous subsection. A basis must satisfy two conditions: there have to be enough vectors to span the space, but not so many that the set becomes linearly dependent. We focus on \(\R^n\) and subspaces of \(\R^n\text{,}\) but what we establish here will generalize to other vector spaces.
Based on our previous discussion, given any vector \(\mathbf{v}\) of \(\R^n\) (or a subspace \(V\) of \(\R^n\)), we want to be able to write \(\mathbf{v}\) as a linear combination of the given basis vectors. In other words, we must have that basis vectors span \(\R^n\) (or \(V\)). For example, consider \(\mathbf{w}_1\) and \(\mathbf{w}_2\) shown below.
[Figure: the plane in \(\R^3\) spanned by \(\mathbf{w}_1\) and \(\mathbf{w}_2\text{.}\)]
The set \(\{\mathbf{w}_1, \mathbf{w}_2\}\) cannot be a basis for \(\R^3\) because the vertical vector \(\mathbf{k}\) is not a linear combination of \(\mathbf{w}_1\) and \(\mathbf{w}_2\text{.}\) More generally, \(\mathbf{w}_1\) and \(\mathbf{w}_2\) span a plane in \(\R^3\text{,}\) and any vector not in that plane cannot be written as a linear combination of \(\mathbf{w}_1\) and \(\mathbf{w}_2\text{.}\) On the other hand, the plane spanned by \(\mathbf{w}_1\) and \(\mathbf{w}_2\) is a subspace of \(\R^3\text{.}\) Because every vector in that plane can be written as a linear combination of \(\mathbf{w}_1\) and \(\mathbf{w}_2\text{,}\) the set \(\{\mathbf{w}_1, \mathbf{w}_2\}\) could potentially be a basis for the plane, provided that the set satisfies our second requirement.
Our second requirement is that for a basis, there is only one way to write a fixed vector \(\mathbf{v}\) as a linear combination of the basis vectors. In other words, the coordinate vector for each \(\mathbf{v}\) in \(\R^n\) (or \(V\)) should be unique. Uniqueness of representation in terms of the basis elements will play an important role in our future study of functions that map vector spaces to vector spaces. The following theorem shows that the uniqueness requirement is equivalent to the requirement that the basis vectors be linearly independent. These equivalent ideas are ways of requiring that a basis does not contain redundant vectors.

Theorem 4.2.1.

Suppose the vectors \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\) span a vector space \(V\text{.}\) Then every vector \(\mathbf{v}\) of \(V\) can be expressed as a unique linear combination of \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\) if and only if \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\) are linearly independent.

Proof.

Suppose that every \(\mathbf{v}\) in \(V\) can be expressed as a unique linear combination of \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\text{.}\) This means that \(\mathbf{0}\) has a unique representation as a linear combination of \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\text{.}\) But
\begin{equation*} \mathbf{0}=0\mathbf{w}_1+0\mathbf{w}_2+\ldots+0\mathbf{w}_p \end{equation*}
is a representation of \(\mathbf{0}\) in terms of \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\text{.}\) Since we are assuming that such a representation is unique, we conclude that there is no other. This means that the vectors \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\) are linearly independent.
Conversely, suppose that vectors \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\) are linearly independent. An arbitrary element \(\mathbf{v}\) of \(V\) can be expressed as a linear combination of \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\text{:}\)
\begin{equation*} \mathbf{v}=a_1\mathbf{w}_1+a_2\mathbf{w}_2+\ldots+a_p\mathbf{w}_p. \end{equation*}
Suppose there is another linear combination that is also equal to \(\mathbf{v}\text{:}\)
\begin{equation*} \mathbf{v}=b_1\mathbf{w}_1+b_2\mathbf{w}_2+\ldots+b_p\mathbf{w}_p. \end{equation*}
But then
\begin{equation*} a_1\mathbf{w}_1+a_2\mathbf{w}_2+\ldots+a_p\mathbf{w}_p=b_1\mathbf{w}_1+b_2\mathbf{w}_2+\ldots+b_p\mathbf{w}_p. \end{equation*}
This gives us
\begin{equation*} (a_1-b_1)\mathbf{w}_1+(a_2-b_2)\mathbf{w}_2+\ldots+(a_p-b_p)\mathbf{w}_p=\mathbf{0}. \end{equation*}
Because we assumed that \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\) are linearly independent, we must have
\begin{equation*} a_1-b_1=0,\, a_2-b_2=0,\,\ldots ,\,a_p-b_p=0, \end{equation*}
so that
\begin{equation*} a_1=b_1,\, a_2=b_2,\,\ldots ,\,a_p=b_p. \end{equation*}
This proves the representation of \(\mathbf{v}\) in terms of \(\mathbf{w}_1, \mathbf{w}_2,\ldots,\mathbf{w}_p\) is unique.
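The theorem can also be seen at work numerically. The following sketch (Python with sympy again, with vectors of \(\R^3\) chosen by us purely for illustration) shows that for a linearly independent pair, the coefficients of a representation are forced, so the solver returns exactly one solution.

```python
# Illustrative check of Theorem 4.2.1 with vectors we chose ourselves:
# for linearly independent w1, w2, the representation of v is unique.
from sympy import Matrix, linsolve, symbols

a1, a2 = symbols('a1 a2')
w1, w2 = Matrix([1, 0, 1]), Matrix([0, 1, 1])  # linearly independent
v = 3*w1 - 2*w2

W = Matrix.hstack(w1, w2)
print(linsolve((W, v), [a1, a2]))  # {(3, -2)} -- the unique representation
```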
Here is a concrete example of a basis and some practice.

Example 4.2.2.

Use \(V=\mbox{span}(\mathcal{S})\text{,}\) where
\begin{equation*} \mathcal{S}=\left\{\begin{bmatrix}5\\2\\4\end{bmatrix},\begin{bmatrix}4\\1\\1\end{bmatrix},\begin{bmatrix}-3\\0\\2\end{bmatrix}\right\} \end{equation*}
to illustrate why a set of linearly dependent vectors cannot be used as a basis for a subspace by showing that linearly dependent vectors fail to ensure uniqueness of coordinate vectors for vectors in \(V\text{.}\)
Answer.
We will first show that the elements of \(\mathcal{S}\) are linearly dependent. Let \(A\) be a matrix whose columns are the vectors in \(\mathcal{S}\text{.}\)
\begin{equation*} A=\begin{bmatrix}5\amp 4\amp -3\\2\amp 1\amp 0\\4\amp 1\amp 2\end{bmatrix}. \end{equation*}
We find that
\begin{equation*} \mbox{rref}(A) = \begin{bmatrix} 1\amp 0\amp 1\\0\amp 1\amp -2\\0\amp 0\amp 0 \end{bmatrix}. \end{equation*}
Therefore the matrix equation \(A\mathbf{x}=\mathbf{0}\) has infinitely many solutions:
\begin{equation*} \mathbf{x}=\begin{bmatrix}-1\\2\\1\end{bmatrix}t. \end{equation*}
This tells us that there are infinitely many nontrivial linear relations among the elements of \(\mathcal{S}\text{.}\) Letting \(t=1\) gives us one such nontrivial relation.
\begin{equation*} -\begin{bmatrix}5\\2\\4\end{bmatrix}+2\begin{bmatrix}4\\1\\1\end{bmatrix}+\begin{bmatrix}-3\\0\\2\end{bmatrix}=\mathbf{0} \end{equation*}
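For readers following along with software, here is a sketch (Python with sympy, our choice) that reproduces the computations above: the reduced row-echelon form of \(A\) and a basis for its null space, which encodes the nontrivial relation.

```python
# Verifying the computation above with sympy: rref(A) and the null space
# of A, whose single basis vector is the relation with t = 1.
from sympy import Matrix

A = Matrix([[5, 4, -3],
            [2, 1,  0],
            [4, 1,  2]])
print(A.rref())       # (Matrix([[1, 0, 1], [0, 1, -2], [0, 0, 0]]), (0, 1))
print(A.nullspace())  # [Matrix([[-1], [2], [1]])]
```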
Now let’s pick an arbitrary vector \(\mathbf{v}\) in \(V\text{.}\) Any vector will do, so let
\begin{equation*} \mathbf{v}=\begin{bmatrix}5\\2\\4\end{bmatrix}+ (-1)\begin{bmatrix}4\\1\\1\end{bmatrix}+0\begin{bmatrix}-3\\0\\2\end{bmatrix}. \end{equation*}
Based on this representation of \(\mathbf{v}\text{,}\) the coordinate vector for \(\mathbf{v}\) with respect to \(\mathcal{S}\) is
\begin{equation*} \begin{bmatrix}1\\-1\\0\end{bmatrix}. \end{equation*}
But
\begin{equation*} \begin{bmatrix}5\\2\\4\end{bmatrix}=2\begin{bmatrix}4\\1\\1\end{bmatrix}+\begin{bmatrix}-3\\0\\2\end{bmatrix}. \end{equation*}
So, by substitution, we have:
\begin{align*} \mathbf{v} \amp =\left(2\begin{bmatrix}4\\1\\1\end{bmatrix}+\begin{bmatrix}-3\\0\\2\end{bmatrix}\right)+ (-1)\begin{bmatrix}4\\1\\1\end{bmatrix}+0\begin{bmatrix}-3\\0\\2\end{bmatrix} \\ \amp =0\begin{bmatrix}5\\2\\4\end{bmatrix}+ 1\begin{bmatrix}4\\1\\1\end{bmatrix}+1\begin{bmatrix}-3\\0\\2\end{bmatrix}. \end{align*}
Problem 4.2.3.
Based on this representation, the coordinate vector for \(\mathbf{v}\) with respect to \(\mathcal{S}\) is what?
Answer.
\begin{equation*} \begin{bmatrix}0\\1\\1\end{bmatrix}. \end{equation*}
The set \(\mathcal{S}\) is linearly dependent. As a result, coordinate vectors for elements of \(V\) are not unique and we do not want to use \(\mathcal{S}\) as a basis for \(V\text{.}\)

Subsection 4.2.3 Definition of a Basis

Before we can define a basis, we need to define linear independence in an abstract vector space.

Definition 4.2.4. Linear Independence.

Let \(V\) be a vector space. Let \(\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_p\) be vectors of \(V\text{.}\) We say that the set \(\{\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_p\}\) is linearly independent if the only solution to
\begin{equation*} a_1\mathbf{v}_1+a_2\mathbf{v}_2+\ldots +a_p\mathbf{v}_p=\mathbf{0} \end{equation*}
is the trivial solution \(a_1=a_2=\ldots =a_p=0\text{.}\)
If, in addition to the trivial solution, a non-trivial solution (not all \(a_1, a_2,\ldots ,a_p\) are zero) exists, then we say that the set \(\{\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_p\}\) is linearly dependent.
Let us examine this abstract version of linear independence in the context of polynomials, to get a feeling for the concept.

Example 4.2.5.

Show that \(P=\{1 + x, 3x + x^{2}, 2 + x - x^{2}\}\) is linearly independent in \(\mathbb{P}^{2}\text{.}\)
Answer.
Consider the linear combination equation
\begin{align*} a(1 + x) + b(3x + x^2) + c(2 + x - x^2) \amp = 0 \\ a+ax+3bx+bx^2+2c+cx-cx^2\amp =0 \\ (a+2c)+(a+3b+c)x+(b-c)x^2\amp =0 \end{align*}
The constant term, as well as the coefficients in front of \(x\) and \(x^2\text{,}\) must be equal to \(0\text{.}\) This gives us the following system of equations.
\begin{equation*} \begin{array}{rlrlrcr} a \amp \amp \amp + \amp 2c \amp = \amp 0 \\ a \amp + \amp 3b \amp + \amp c \amp = \amp 0 \\ \amp \amp b \amp - \amp c \amp = \amp 0 \\ \end{array} \end{equation*}
The only solution is \(a = b = c = 0\text{.}\) We conclude that \(P\) is linearly independent in \(\mathbb{P}^2\text{.}\)
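As a quick software cross-check (a sympy sketch, not part of the argument), we can expand the linear combination symbolically and require every coefficient to vanish:

```python
# Expand a(1+x) + b(3x+x^2) + c(2+x-x^2) and set all coefficients to zero;
# the only solution is the trivial one, confirming linear independence.
from sympy import symbols, Poly, solve

a, b, c, x = symbols('a b c x')
combo = a*(1 + x) + b*(3*x + x**2) + c*(2 + x - x**2)
print(solve(Poly(combo, x).all_coeffs(), [a, b, c]))  # {a: 0, b: 0, c: 0}
```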
Now we are ready to define a basis of an abstract vector space. We will see, in the theorem that immediately follows the definition, that bases (the plural of basis) provide us with a unique way to write vectors in a vector space.

Definition 4.2.6.

Let \(V\) be a vector space. A set \(\mathcal{B}\) of vectors of \(V\) is called a basis of \(V\) provided that
  1. \(\displaystyle \mbox{span}(\mathcal{B})=V\)
  2. \(\mathcal{B}\) is linearly independent.

Theorem 4.2.7.

Let \(\mathcal{B}\) be a basis of a vector space \(V\text{.}\) Then every vector of \(V\) can be written as a linear combination of the elements of \(\mathcal{B}\) in exactly one way.

Proof.

This result is a special case of Theorem 4.2.1. By the definition of basis, we know the vectors in the basis are linearly independent and so by Theorem 4.2.1, every vector has a unique representation as a linear combination of basis vectors.
The prototypical example of a basis is the standard one, which we showcase in the next example. It is the one we have implicitly worked with so far.

Example 4.2.8.

The standard unit vectors \(\mathbf{e}_1, \ldots ,\mathbf{e}_n\) are linearly independent and span \(\R^n\text{.}\) Thus \(\{\mathbf{e}_1, \ldots ,\mathbf{e}_n\}\) is a basis of \(\R^n\text{.}\)

Definition 4.2.9.

The set \(\{\mathbf{e}_1, \ldots ,\mathbf{e}_n\}\) is called the standard basis of \(\R^n\text{.}\)
Recall that \(\mathbb{P}^n\) is the set of all polynomials of degree \(n\) or less (see Example 4.1.14). It too has a standard basis.

Example 4.2.10.

Show that
\begin{equation*} \lbrace 1, x, x^{2}, \dots, x^{n} \rbrace \end{equation*}
is a basis of \(\mathbb{P}^{n}\text{.}\)
Answer.
Each polynomial
\begin{equation*} p(x) = a_{0} + a_{1}x + \ldots + a_{n}x^{n}, \quad \text{in } \mathbb{P}^{n}, \end{equation*}
is clearly a linear combination of \(1, x, \dots, x^{n}\text{,}\) so
\begin{equation*} \mathbb{P}^{n} = \mbox{span} \lbrace 1, x, \dots, x^{n} \rbrace\text{.} \end{equation*}
Suppose \(a_{0} + a_{1}x + \dots + a_{n}x^{n} = 0\) as a polynomial. A polynomial is zero precisely when all of its coefficients are zero, so \(a_{0} = a_{1} = \ldots = a_{n} = 0\text{.}\) Thus \(\{1, x, \dots, x^{n}\}\) is linearly independent and is therefore a basis containing \(n + 1\) vectors.
Bases are far from unique. For example, we could take a basis, multiply any vector in it by a nonzero scalar, and have a different basis; so \(5\mathbf{i}, -\mathbf{j}\) is a basis of \(\R^2\) that differs from the standard basis of \(\R^2\text{,}\) which is \(\mathbf{i}, \mathbf{j}\text{.}\) For a more interesting example, as we discussed in Example 2.2.16, the vectors
\begin{equation*} \begin{bmatrix}2\\2\end{bmatrix}, \begin{bmatrix}-1\\0\end{bmatrix} \end{equation*}
are linearly independent vectors that span \(\R^2\text{.}\) Therefore
\begin{equation*} \left\{\begin{bmatrix}2\\2\end{bmatrix}, \begin{bmatrix}-1\\0\end{bmatrix}\right\} \end{equation*}
is also a basis for \(\R^2\text{.}\)
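One way to confirm this claim computationally (a small sympy sketch): place the two vectors in the columns of a matrix and check that its determinant is nonzero, so the columns are linearly independent and span \(\R^2\text{.}\)

```python
# The matrix with the two candidate basis vectors as columns is invertible,
# so the vectors are linearly independent and span R^2.
from sympy import Matrix

B = Matrix([[2, -1],
            [2,  0]])
print(B.det())  # 2 (nonzero), so the columns form a basis of R^2
```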
Any linearly independent spanning set of a vector space \(V\) is a basis of \(V\text{.}\) It is easy to see that \(\R^n\) and its nonzero subspaces each have infinitely many bases.

Example 4.2.11.

Let \(V=\mbox{span} ( \begin{bmatrix} -2\\ 1\\ 3 \end{bmatrix}, \begin{bmatrix} 2\\ -4\\ 1 \end{bmatrix} )\text{.}\) The set
\begin{equation*} \mathcal{B}=\left\{\begin{bmatrix}-2\\1\\3\end{bmatrix},\begin{bmatrix}2\\-4\\1\end{bmatrix}\right\} \end{equation*}
is a basis for \(V\) because the two vectors in \(\mathcal{B}\) are linearly independent and span \(V\text{.}\) Find the coordinate vector for \(\mathbf{v}=\begin{bmatrix} 2\\ -10\\ 9 \end{bmatrix}\) with respect to \(\mathcal{B}\text{.}\)
Explanation.
We need to express \(\begin{bmatrix} 2\\ -10\\ 9 \end{bmatrix}\) as a linear combination of the elements of \(\mathcal{B}\text{.}\) To this end, we need to solve the vector equation:
\begin{equation*} a_1\begin{bmatrix}-2\\1\\3\end{bmatrix}+a_2\begin{bmatrix}2\\-4\\1\end{bmatrix}=\begin{bmatrix}2\\-10\\9\end{bmatrix}. \end{equation*}
The augmented matrix and the reduced row-echelon form are:
\begin{equation*} \left[\begin{array}{cc|c} -2\amp 2\amp 2\\1\amp -4\amp -10\\3\amp 1\amp 9 \end{array}\right]\rightsquigarrow\left[\begin{array}{cc|c} 1\amp 0\amp 2\\0\amp 1\amp 3\\0\amp 0\amp 0 \end{array}\right]. \end{equation*}
We conclude that \(a_1=2\text{,}\) \(a_2=3\text{.}\) This gives us
\begin{equation*} 2\begin{bmatrix}-2\\1\\3\end{bmatrix}+3\begin{bmatrix}2\\-4\\1\end{bmatrix}=\begin{bmatrix}2\\-10\\9\end{bmatrix}. \end{equation*}
The coefficient in front of the first basis vector is \(2\text{,}\) the coefficient in front of the second basis vector is \(3\text{.}\) This means that the coordinate vector for \(\begin{bmatrix} 2\\ -10\\ 9 \end{bmatrix}\) with respect to \(\mathcal{B}\) is \(\begin{bmatrix} 2\\ 3\end{bmatrix}\text{.}\)
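A brief sympy sketch of the same computation: solving the vector equation yields the coefficients \(a_1=2\) and \(a_2=3\text{.}\)

```python
# Solve a1*w1 + a2*w2 = v for the coordinates of v with respect to B.
from sympy import Matrix, linsolve, symbols

a1, a2 = symbols('a1 a2')
B = Matrix([[-2,  2],
            [ 1, -4],
            [ 3,  1]])             # columns are the basis vectors
v = Matrix([2, -10, 9])
print(linsolve((B, v), [a1, a2]))  # {(2, 3)}, so [v]_B = [2, 3]
```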

Remark 4.2.12.

It may seem strange to you that the coordinate vector for a vector in \(\R^3\) has only two components. But remember that the subspace \(V\) is a plane and its basis has two elements. As a vector in the plane, the coordinate vector for \(\begin{bmatrix} 2\\ -10\\ 9 \end{bmatrix}\) only requires two components, one for each basis element. This issue is related to the question of dimension, which will be addressed in the next sections.

Remark 4.2.13.

To construct the coordinate vector for \(\begin{bmatrix} 2\\ -10\\ 9 \end{bmatrix}\) with respect to \(\mathcal{B}\text{,}\) we had to be mindful of the order of the elements in \(\mathcal{B}\text{.}\) Ordinarily, the order of elements in a set is irrelevant, and the basis
\begin{equation*} \left\{\begin{bmatrix}-2\\1\\3\end{bmatrix},\begin{bmatrix}2\\-4\\1\end{bmatrix}\right\} \end{equation*}
is considered to be the same as
\begin{equation*} \left\{\begin{bmatrix}2\\-4\\1\end{bmatrix},\begin{bmatrix}-2\\1\\3\end{bmatrix}\right\}. \end{equation*}
When dealing with coordinate vectors, however, the order of the elements dictates the order of the components of the coordinate vector. If we switch the order of the elements in \(\mathcal{B}\text{,}\) the coordinate vector becomes \(\begin{bmatrix} 3\\ 2\end{bmatrix}\text{.}\) For this reason, when we come back to studying coordinate vectors in more detail, we will use the term ordered basis to avoid confusion.

Subsection 4.2.4 Exploring Dimension

A basis of a vector space \(V\) is a subset of \(V\) that is linearly independent and spans \(V\text{.}\) By Theorem 4.2.7, a basis allows us to uniquely express every element of \(V\) as a linear combination of the elements of the basis. Several questions may come to mind at this time. Does every vector space have a basis? We know that bases are not unique. If there is more than one basis, what, if anything, do they have in common?

Exploration 4.2.2.

How would you describe
\begin{equation*} V=\mbox{span}\left(\begin{bmatrix}1\\-2\\3\end{bmatrix}, \begin{bmatrix}-2\\4\\-6\end{bmatrix}\right)? \end{equation*}
If you answered that \(V\) is a line in \(\R^3\text{,}\) you are correct. While the two vectors span the line, it is not necessary to have both of them in the spanning set to describe the line.
Problem 4.2.14.
What is the minimum number of vectors needed to span a line?
Answer.
\(1\text{.}\)
The vectors in the given spanning set are not linearly independent, so they do not form a basis for \(V\text{.}\)
Problem 4.2.15.
How many vectors would a basis for \(V\) have?
Answer.
\(1\text{.}\)
Now consider another subspace of \(\R^3\text{:}\)
\begin{equation*} W=\mbox{span}\left(\begin{bmatrix}1\\0\\2\end{bmatrix}, \begin{bmatrix}0\\-3\\0\end{bmatrix}\right) \end{equation*}
Geometrically, \(W\) is a plane in \(\R^3\text{.}\) Note that the vectors in the spanning set are linearly independent. Can we remove one of the vectors and have the remaining vector span the plane?
Problem 4.2.16.
What is the minimum number of vectors needed to span a plane? How many vectors are needed for a basis of a plane?
Answer.
\(2\)
Our observations in Exploration 4.2.2 hint at the idea of dimension. We know that a line is a one-dimensional object, a plane is a two-dimensional object, and the space we reside in is three-dimensional.
Based on our observations in Exploration 4.2.2, it makes sense for us to define dimension of a vector space (or a subspace) as the minimum number of vectors required to span the space (subspace). We can accomplish this by defining dimension as the number of elements in a basis. We have to proceed carefully because we don’t want the dimension to change if we look at another basis. So, before we state our definition, we need to make sure that every basis for a given vector space (or subspace) has the same number of elements.

Theorem 4.2.17.

Suppose \(\mathcal{B}=\{\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_t\}\) and \(\mathcal{C}=\{\mathbf{w}_1, \mathbf{w}_2,\ldots ,\mathbf{w}_s\}\) are both bases of a vector space \(V\text{.}\) Then \(s=t\text{.}\)

Proof.

Suppose \(s\neq t\text{.}\) Without loss of generality, assume that \(s\gt t\text{.}\) Because \(\mathcal{B}\) spans \(V\text{,}\) every \(\mathbf{w}_i\) of \(\mathcal{C}\) can be written as a linear combination of elements of \(\mathcal{B}\text{:}\)
\begin{equation*} \mathbf{w}_i=a_{1i}\mathbf{v}_1+a_{2i}\mathbf{v}_{2}+\ldots +a_{ti}\mathbf{v}_t. \end{equation*}
Consider the vector equation
\begin{equation} b_1\mathbf{w}_1+b_2\mathbf{w}_2+\ldots +b_s\mathbf{w}_s=\mathbf{0}.\tag{4.2.1} \end{equation}
By substitution, we have:
\begin{align*} \mathbf{0}\amp =b_1\mathbf{w}_1+b_2\mathbf{w}_2+\ldots +b_s\mathbf{w}_s \\ \amp =b_1(a_{11}\mathbf{v}_1+a_{21}\mathbf{v}_{2}+\ldots +a_{t1}\mathbf{v}_t)+b_2(a_{12}\mathbf{v}_1+a_{22}\mathbf{v}_{2}+\ldots +a_{t2}\mathbf{v}_t)+\ldots \\ \amp \quad +b_s(a_{1s}\mathbf{v}_1+a_{2s}\mathbf{v}_{2}+\ldots +a_{ts}\mathbf{v}_t) \\ \amp =(b_1a_{11}+b_2a_{12}+\ldots +b_sa_{1s})\mathbf{v}_1 +(b_1a_{21}+b_2a_{22}+\ldots +b_sa_{2s})\mathbf{v}_2+ \ldots \\ \amp \quad +(b_1a_{t1}+b_2a_{t2}+\ldots +b_sa_{ts})\mathbf{v}_t. \end{align*}
Because \(\mathbf{v}_j\)’s are linearly independent, we must have
\begin{equation*} b_1a_{j1}+b_2a_{j2}+\ldots +b_sa_{js}=0 \end{equation*}
for all \(1\leq j\leq t\text{.}\) This gives us a system of \(t\) equations in \(s\) unknowns. We can write the system as a matrix equation.
\begin{equation*} \begin{bmatrix}a_{11}\amp a_{12}\amp \ldots \amp a_{1s}\\a_{21}\amp a_{22}\amp \ldots \amp a_{2s}\\\vdots\amp \vdots\amp \ddots\amp \vdots\\a_{t1}\amp a_{t2}\amp \ldots\amp a_{ts}\end{bmatrix}\begin{bmatrix}b_1\\b_2\\\vdots\\b_s\end{bmatrix}=\mathbf{0}. \end{equation*}
Recall our assumption that \(s\gt t\text{;}\) this means the number of leading ones must be less than the number of unknowns. By Observation 1.2.14, we know that the system has infinitely many solutions. This shows that (4.2.1) has a nontrivial solution, which means that \(\{\mathbf{w}_1, \mathbf{w}_2,\ldots ,\mathbf{w}_s\}\) is linearly dependent and contradicts our assumption that \(\mathcal{C}\) is a basis of \(V\text{.}\) So our assumption that \(s \gt t\) leads to a contradiction and cannot be true. We conclude that \(s=t\text{.}\)
As a very technical point, notice that in the theorem we assumed that the vector space \(V\) has a basis with some finite number \(t\) of elements. So this theorem does not apply to the vector space \(\mathbb{P}\) of all polynomials, which does not have a basis with finitely many elements. We will come back to this issue in Subsection 4.2.5.

Definition 4.2.18.

Let \(V\) be a subspace of a vector space \(W\) so that \(V\) has a basis with finitely many elements. The dimension of \(V\) is the number, \(m\text{,}\) of elements in any basis of \(V\text{.}\) We write
\begin{equation*} \mbox{dim}(V)=m. \end{equation*}

Example 4.2.19.

We know that vectors \(\mathbf{e}_1, \ldots ,\mathbf{e}_n\) form a basis of \(\R^n\text{.}\) Therefore \(\mbox{dim}(\R^n)=n\text{.}\)

Example 4.2.20.

In Example 4.2.10, we showed that the set \(\{1, x, x^{2}, \ldots , x^{n}\}\) is a basis of \(\mathbb{P}^{n}\text{.}\) Hence, \(\mbox{dim}(\mathbb{P}^{n})=n+1\text{.}\)
In our discussions up to this point, we have always assumed that a basis is nonempty and hence that the dimension of the space is at least \(1\text{.}\) However, the zero space \(\{\mathbf{0}\}\) has no basis. To accommodate this, we will say that the zero vector space \(\{\mathbf{0}\}\) is defined to have dimension \(0\text{:}\)
\begin{equation*} \mbox{dim }\{\mathbf{0}\} = 0. \end{equation*}
Our insistence that \(\mbox{dim}\{\mathbf{0}\} = 0\) amounts to saying that the empty set of vectors is a basis of \(\{\mathbf{0}\}\text{.}\) Thus the statement that “the dimension of a vector space is the number of vectors in any basis” holds even for the zero space.

Example 4.2.21.

Recall that the vector space \(\mathbb{M}_{m,n}\) consists of all \(m\times n\) matrices (see Example 4.1.8). Find a basis and the dimension of \(\mathbb{M}_{m,n}\text{.}\)
Answer.
Let \(\mathcal{B}\) consist of \(m\times n\) matrices with exactly one entry equal to \(1\) and all other entries equal to \(0\text{.}\) It is clear that every \(m\times n\) matrix can be written as a linear combination of elements of \(\mathcal{B}\text{.}\) It is also easy to see that the elements of \(\mathcal{B}\) are linearly independent. Thus \(\mathcal{B}\) is a basis for \(\mathbb{M}_{m,n}\text{.}\) The set \(\mathcal{B}\) contains \(mn\) elements, so \(\mbox{dim}(\mathbb{M}_{m,n})=mn\text{.}\)
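For concreteness, here is a sketch that builds this basis for the assumed case \(m=2\text{,}\) \(n=3\) (the specific sizes are our choice):

```python
# Build the mn matrices with a single entry 1; they form the standard
# basis of M_{m,n}, so its dimension is mn.
from sympy import zeros

m, n = 2, 3
basis = []
for i in range(m):
    for j in range(n):
        E = zeros(m, n)
        E[i, j] = 1
        basis.append(E)
print(len(basis))  # 6 = mn
```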

Example 4.2.22.

Consider the subset
\begin{equation*} C_A = \lbrace X \in\mathbb{M}_{2,2} : AX = XA \rbrace. \end{equation*}
of \(\mathbb{M}_{2,2}\text{.}\) It was shown in Example 4.1.19 that \(C_A\) is a subspace for any choice of the matrix \(A\text{.}\) Let
\begin{equation*} A = \begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix}. \end{equation*}
Show that \(\mbox{dim}(C_A) = 2\) and find a basis of \(C_A\text{.}\)
Answer.
Suppose
\begin{equation*} X = \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \end{equation*}
is in \(C_A\text{.}\) Then
\begin{equation*} \begin{bmatrix}1\amp 1\\0\amp 0\end{bmatrix}\begin{bmatrix}a\amp b\\c\amp d\end{bmatrix}=\begin{bmatrix}a\amp b\\c\amp d\end{bmatrix}\begin{bmatrix}1\amp 1\\0\amp 0\end{bmatrix}, \end{equation*}
so
\begin{equation*} \begin{bmatrix}a+c\amp b+d\\0\amp 0\end{bmatrix}=\begin{bmatrix}a\amp a\\c\amp c\end{bmatrix}. \end{equation*}
This gives us two relationships:
\begin{equation*} b+d=a\quad\text{and}\quad c=0. \end{equation*}
We can now express a generic element \(X\) of \(C_A\) as
\begin{align*} X=\begin{bmatrix}a\amp b\\c\amp d\end{bmatrix} \amp = \begin{bmatrix}b+d\amp b\\0\amp d\end{bmatrix} \\ \amp =\begin{bmatrix}b\amp b\\0\amp 0\end{bmatrix}+\begin{bmatrix}d\amp 0\\0\amp d\end{bmatrix} \\ \amp =b\begin{bmatrix}1\amp 1\\0\amp 0\end{bmatrix}+d\begin{bmatrix}1\amp 0\\0\amp 1\end{bmatrix}. \end{align*}
Let
\begin{equation*} \mathcal{B}=\left \lbrace \begin{bmatrix}1\amp 1\\0\amp 0\end{bmatrix},\begin{bmatrix}1\amp 0\\0\amp 1\end{bmatrix}\right \rbrace. \end{equation*}
The set \(\mathcal{B}\) is linearly independent (see Exercise 4.2.7.12). Every element \(X\) of \(C_A\) can be written as a linear combination of elements of \(\mathcal{B}\text{,}\) so \(C_A=\mbox{span}(\mathcal{B})\text{.}\) Therefore \(\mathcal{B}\) is a basis of \(C_A\text{,}\) and \(\mbox{dim}(C_A) = 2\text{.}\)
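The defining condition \(AX=XA\) can also be imposed symbolically. The sketch below (sympy) solves for the dependent entries of a generic \(X\) and recovers the two relations found above, leaving \(b\) and \(d\) as the two free parameters:

```python
# Impose AX = XA on a generic 2x2 matrix X and solve for a and c;
# the two free symbols b, d correspond to dim(C_A) = 2.
from sympy import Matrix, symbols, solve

a, b, c, d = symbols('a b c d')
A = Matrix([[1, 1], [0, 0]])
X = Matrix([[a, b], [c, d]])
print(solve(list(A*X - X*A), [a, c]))  # {a: b + d, c: 0}
```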

Example 4.2.23.

In Exercise 4.1.6.18 you demonstrated that the set of all symmetric \(n\times n\) matrices is a subspace of \(\mathbb{M}_{n,n}\text{.}\) Let \(V\) be the subspace of \(\mathbb{M}_{2,2}\) consisting of all \(2\times 2\) symmetric matrices. Find the dimension of \(V\text{.}\)
Answer.
A matrix \(A\) is symmetric if \(A^{T} = A\text{.}\) In other words, a matrix \(A\) is symmetric when entries directly across the main diagonal are equal, so each \(2 \times 2\) symmetric matrix has the form
\begin{equation*} \begin{bmatrix} a \amp c \\ c \amp b \end{bmatrix} = a\begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix} + b\begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix} + c\begin{bmatrix} 0 \amp 1 \\ 1 \amp 0 \end{bmatrix}. \end{equation*}
Hence the set
\begin{equation*} \mathcal{B} = \left \lbrace \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix}, \begin{bmatrix} 0 \amp 1 \\ 1 \amp 0 \end{bmatrix} \right \rbrace \end{equation*}
spans \(V\text{.}\) The reader can verify that \(\mathcal{B}\) is linearly independent. Thus \(\mathcal{B}\) is a basis of \(V\text{,}\) so \(\mbox{dim}(V) = 3\text{.}\)
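The independence claim left to the reader can also be checked symbolically; a brief sympy sketch:

```python
# A combination of the three symmetric basis matrices equal to the zero
# matrix forces a = b = c = 0, so B is linearly independent.
from sympy import Matrix, symbols, solve

a, b, c = symbols('a b c')
combo = (a*Matrix([[1, 0], [0, 0]])
         + b*Matrix([[0, 0], [0, 1]])
         + c*Matrix([[0, 1], [1, 0]]))
print(solve(list(combo), [a, b, c]))  # {a: 0, b: 0, c: 0}
```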

Subsection 4.2.5 Finite-Dimensional Vector Spaces

Our definition of dimension of a vector space depends on the vector space having a basis with finitely many elements. In this section we will establish that any vector space spanned by finitely many vectors has such a basis.

Definition 4.2.24.

A vector space is said to be finite-dimensional if it is spanned by finitely many vectors.
As we have suggested above, the vector space \(\mathbb{P}\) of all polynomials with real coefficients is not finite-dimensional. Most of the time, we only care about finite-dimensional vector spaces in this text.
Given a finite-dimensional vector space \(V\) we can find a basis for \(V\) by starting with any linearly independent subset of \(V\) and expanding it to a basis. The following results justify this claim.

Lemma 4.2.25.

Any linearly independent set of vectors in \(\R^n\) contains at most \(n\) vectors.

Lemma 4.2.26.

Suppose \(\{\mathbf{v}_1,\ldots ,\mathbf{v}_k\}\) is a linearly independent set of vectors in a vector space \(V\text{,}\) and suppose \(\mathbf{u}\) is a vector of \(V\) that is not in \(\mbox{span}(\mathbf{v}_1,\ldots ,\mathbf{v}_k)\text{.}\) Then \(\{\mathbf{u},\mathbf{v}_1,\ldots ,\mathbf{v}_k\}\) is also linearly independent.

Proof.

Consider the equation
\begin{equation} a\mathbf{u}+a_1\mathbf{v}_1+\ldots +a_k\mathbf{v}_k=\mathbf{0}.\tag{4.2.2} \end{equation}
We need to show that \(a=a_1=\ldots =a_k=0\text{.}\) Suppose \(a\neq 0\text{.}\) Then
\begin{equation*} \mathbf{u}=\frac{-a_1}{a}\mathbf{v}_1+\ldots +\frac{-a_k}{a}\mathbf{v}_k\text{.} \end{equation*}
But this contradicts the assumption that \(\mathbf{u}\) is not in the span of \(\mathbf{v}_1,\ldots ,\mathbf{v}_k\text{.}\) So, \(a=0\text{.}\) But \(a_1=\ldots =a_k=0\) because \(\mathbf{v}_1,\ldots ,\mathbf{v}_k\) are linearly independent. This means that (4.2.2) has only the trivial solution and \(\{\mathbf{u},\mathbf{v}_1,\ldots ,\mathbf{v}_k\}\) is linearly independent.

Theorem 4.2.27.

Let \(V\) be a finite-dimensional vector space. Any linearly independent set of vectors of \(V\) can be extended to a basis of \(V\text{.}\)

Proof.

Suppose that \(X=\{\mathbf{v}_1,\ldots ,\mathbf{v}_k\}\) is a linearly independent subset of \(V\text{.}\) If \(\mbox{span}(X) = V\) then \(X\) is already a basis of \(V\text{.}\)
If \(\mbox{span}(X) \neq V\text{,}\) choose \(\mathbf{u}_1\) in \(V\) such that \(\mathbf{u}_1\) is not in \(\mbox{span}(X)\text{.}\) The set \(\{\mathbf{u}_1, \mathbf{v}_1,\ldots ,\mathbf{v}_k\}\) is linearly independent by Lemma 4.2.26.
If \(\mbox{span}(\mathbf{u}_1, \mathbf{v}_1,\ldots ,\mathbf{v}_k) = V\) we are done; otherwise choose \(\mathbf{u}_{2} \in V\) such that \(\mathbf{u}_{2}\) is not in \(\mbox{span}(\mathbf{u}_1, \mathbf{v}_1,\ldots ,\mathbf{v}_k)\text{.}\) Then \(\{\mathbf{u}_1,\mathbf{u}_2, \mathbf{v}_1,\ldots ,\mathbf{v}_k\}\) is linearly independent, and the process continues.
We claim that a basis of \(V\) will be reached eventually. If no basis of \(V\) is ever reached, the process creates arbitrarily large independent sets in \(\R^n\text{.}\) But this is impossible by Lemma 4.2.25.
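The proof is constructive, and for \(V=\R^n\) it is easy to carry out with rank computations. Here is a hedged sketch (sympy; the helper name extend_to_basis is ours, not the text's) that appends standard basis vectors whenever they lie outside the current span:

```python
# A sketch of the extension procedure, specialized to V = R^n: append any
# standard basis vector that increases the rank (i.e., lies outside the
# current span), until the set spans R^n.
from sympy import Matrix, eye

def extend_to_basis(vectors, n):
    basis = list(vectors)
    for k in range(n):
        e_k = eye(n)[:, k]
        # appending e_k raises the rank exactly when e_k is not in the span
        if Matrix.hstack(*basis, e_k).rank() > len(basis):
            basis.append(e_k)
    return basis

v = Matrix([1, 1, 1])
print(extend_to_basis([v], 3))  # [v, e_1, e_2], a basis of R^3
```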

Subsection 4.2.6 Coordinate Vectors Revisited

Recall that, for any vector space \(V\text{,}\) we proved in Theorem 4.2.7 that every element of the vector space has a unique representation in terms of the elements of a basis.
Because of this uniqueness, we associate every element of \(V\) with a unique coordinate vector with respect to a given basis. Although we discussed coordinates in \(\R^2\) informally in Subsection 4.2.1, we now give a formal definition.

Definition 4.2.28.

Let \(V\) be a vector space, and let \(\mathcal{B}=\{\mathbf{v}_1, \ldots ,\mathbf{v}_n\}\) be a basis for \(V\text{.}\) If \(\mathbf{v}=a_1\mathbf{v}_1+\ldots +a_n\mathbf{v}_n\text{,}\) then the vector in \(\R^n\) whose components are the coefficients \(a_1, \ldots ,a_n\) is said to be the coordinate vector for \(\mathbf{v}\) with respect to \(\mathcal{B}\text{.}\) We denote the coordinate vector by \([\mathbf{v}]_{\mathcal{B}}\) and write:
\begin{equation*} [\mathbf{v}]_{\mathcal{B}}=\begin{bmatrix}a_1\\\vdots \\a_n\end{bmatrix}. \end{equation*}

Remark 4.2.29.

The order in which the vectors \(\mathbf{v}_1, \ldots ,\mathbf{v}_n\) appear in \(\mathcal{B}\) of Definition 4.2.28 is important. Switching the order of these vectors would switch the order of the coordinate vector components. For this reason, we will often use the term ordered basis for \(\mathcal{B}\) to emphasize the ordering.
Coordinate vectors may seem abstract as described above. In examples, however, one can nearly always pinpoint exactly what the coordinates are. Try looking again at Exploration 4.2.1 or the following examples:

Example 4.2.30.

The coordinate vector for \(p(x)=4-3x^2+5x^3\) in \(\mathbb{P}^4\) with respect to the ordered basis \(\mathcal{B}_1=\{1, x, x^2, x^3, x^4\}\) is
\begin{equation*} [p(x)]_{\mathcal{B}_1}=\begin{bmatrix}4\\0\\-3\\5\\0\end{bmatrix}. \end{equation*}
Now let’s change the order of the elements in \(\mathcal{B}_1\text{.}\) The coordinate vector for \(p(x)=4-3x^2+5x^3\) with respect to the ordered basis \(\mathcal{B}_2=\{x^4, x^3, x^2, x, 1\}\) is
\begin{equation*} [p(x)]_{\mathcal{B}_2}=\begin{bmatrix}0\\5\\-3\\0\\4\end{bmatrix}. \end{equation*}
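Reading off coefficients like this is mechanical, so it is easy to script; a sympy sketch:

```python
# The coordinate vector with respect to an ordered monomial basis is just
# the list of coefficients, taken in the order the basis prescribes.
from sympy import symbols, Poly

x = symbols('x')
p = Poly(4 - 3*x**2 + 5*x**3, x)
coeffs = [p.nth(k) for k in range(5)]  # coefficients of 1, x, ..., x^4
print(coeffs)        # [4, 0, -3, 5, 0] -- [p]_{B1}
print(coeffs[::-1])  # [0, 5, -3, 0, 4] -- [p]_{B2}
```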

Example 4.2.31.

Show that the set \(\mathcal{B}=\{x, 1+x, x+x^2\}\) is a basis for \(\mathbb{P}^2\text{.}\) Keep the order of elements in \(\mathcal{B}\) and find the coordinate vector for \(p(x)=4-x+3x^2\) with respect to the ordered basis \(\mathcal{B}\text{.}\)
Answer.
We will begin by showing that the elements of \(\mathcal{B}\) are linearly independent. Suppose
\begin{equation*} ax+b(1+x)+c(x+x^2)=0. \end{equation*}
Then
\begin{equation*} b+(a+b+c)x+cx^2=0. \end{equation*}
This gives us the following system of equations:
\begin{equation*} \begin{array}{ccccccc} \amp \amp b\amp \amp \amp =\amp 0 \\ a \amp + \amp b \amp +\amp c\amp = \amp 0 \\ \amp \amp \amp \amp c \amp = \amp 0 \end{array} \end{equation*}
The solution \(a=b=c=0\) is unique. We conclude that \(\mathcal{B}\) is linearly independent.
Next, we need to show that \(\mathcal{B}\) spans \(\mathbb{P}^2\text{.}\) To this end, we will consider a generic element \(p(x)=\alpha+\beta x+\gamma x^2\) of \(\mathbb{P}^2\) and attempt to express it as a linear combination of the elements of \(\mathcal{B}\text{.}\)
\begin{equation*} ax+b(1+x)+c(x+x^2)=\alpha+\beta x+\gamma x^2. \end{equation*}
Then
\begin{equation*} b+(a+b+c)x+cx^2=\alpha+\beta x+\gamma x^2. \end{equation*}
Setting the coefficients of like terms equal to each other gives us
\begin{equation*} \begin{array}{ccccccc} \amp \amp b\amp \amp \amp =\amp \alpha\\ a \amp +\amp b\amp +\amp c\amp = \amp \beta \\ \amp \amp \amp \amp c\amp =\amp \gamma \end{array} \end{equation*}
Solving this linear system for \(a\text{,}\) \(b\text{,}\) and \(c\) gives us
\begin{equation*} a=\beta-\alpha-\gamma,\quad b=\alpha,\quad c=\gamma . \end{equation*}
(You should verify this.) This shows that every element of \(\mathbb{P}^2\) can be written as a linear combination of elements of \(\mathcal{B}\text{.}\) Therefore \(\mathcal{B}\) is a basis for \(\mathbb{P}^2\text{.}\) To find the coordinate vector for \(p(x)=4-x+3x^2\) with respect to \(\mathcal{B}\) we need to express \(p(x)\) as a linear combination of the elements of \(\mathcal{B}\text{.}\) Fortunately, we have already done all the necessary work. For \(p(x)\text{,}\) \(\alpha=4\text{,}\) \(\beta=-1\) and \(\gamma=3\text{.}\) This gives us the coefficients of the linear combination: \(a=\beta-\alpha-\gamma=-8\text{,}\) \(b=\alpha=4\text{,}\) \(c=\gamma=3\text{.}\) We now write \(p(x)\) as a linear combination
\begin{equation*} p(x)=-8(x)+4(1+x)+3(x+x^2). \end{equation*}
The coordinate vector for \(p(x)\) with respect to \(\mathcal{B}\) is
\begin{equation*} [p(x)]_{\mathcal{B}}=\begin{bmatrix}-8\\4\\3\end{bmatrix}. \end{equation*}
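The coefficient computation in this example can be cross-checked with a short sympy sketch:

```python
# Solve a*x + b*(1+x) + c*(x+x^2) = 4 - x + 3x^2 by matching coefficients.
from sympy import symbols, Poly, solve

a, b, c, x = symbols('a b c x')
combo = a*x + b*(1 + x) + c*(x + x**2)
target = 4 - x + 3*x**2
print(solve(Poly(combo - target, x).all_coeffs(), [a, b, c]))
# {a: -8, b: 4, c: 3}, matching [p(x)]_B = [-8, 4, 3]
```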

Example 4.2.32.

Recall that the set \(V\) of all symmetric \(2\times 2\) matrices is a subspace of \(\mathbb{M}_{2,2}\text{.}\) In Example 4.2.23, we demonstrated that
\begin{equation*} \mathcal{B} = \left \lbrace \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix}, \begin{bmatrix} 0 \amp 1 \\ 1 \amp 0 \end{bmatrix} \right \rbrace \end{equation*}
is a basis for \(V\text{.}\) Let \(A=\begin{bmatrix}2\amp -3\\-3\amp 1\end{bmatrix}\text{.}\) Observe that \(A\) is an element of \(V\text{.}\)
  1. Find the coordinate vector with respect to the ordered basis \(\mathcal{B}\) for \(A\text{.}\)
  2. Let
    \begin{equation*} \mathcal{B}'=\left \lbrace \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix}, \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 1 \\ 1 \amp 0 \end{bmatrix} \right \rbrace \end{equation*}
    be another ordered basis for \(V\text{.}\) Find the coordinate vector for \(A\) with respect to \(\mathcal{B}'\text{.}\)
Answer.
We write \(A\) as a linear combination of the elements of \(\mathcal{B}\text{.}\)
\begin{equation*} A=\begin{bmatrix}2\amp -3\\-3\amp 1\end{bmatrix}=2\begin{bmatrix}1\amp 0\\0\amp 0\end{bmatrix}+\begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix}-3\begin{bmatrix} 0 \amp 1 \\ 1 \amp 0 \end{bmatrix} \end{equation*}
Thus, the coordinate vector with respect to \(\mathcal{B}\) is
\begin{equation*} [A]_{\mathcal{B}}=\begin{bmatrix}2\\1\\-3\end{bmatrix} \end{equation*}
Reordering the basis elements swaps the first two coefficients, so the coordinate vector with respect to \(\mathcal{B}'\) is
\begin{equation*} [A]_{\mathcal{B}'}=\begin{bmatrix}1\\2\\-3\end{bmatrix}. \end{equation*}
Coordinate vectors will play a vital role in establishing one of the most fundamental results in linear algebra, that all \(n\)-dimensional vector spaces have the same structure as \(\R^n\text{.}\) In Example 4.5.8, for instance, we will show that \(\mathbb{P}^2\) is essentially the same as \(\R^3\text{.}\)

Exercises 4.2.7 Exercises

Exercise Group.

Let \(\mathcal{B}=\left\{\begin{bmatrix}1\\1\end{bmatrix},\begin{bmatrix}-1\\2\end{bmatrix}\right\}\) be a basis for \(\R^2\text{.}\) (Do a mental verification that \(\mathcal{B}\) is a basis.) For each \(\mathbf{v}\) given below, find the coordinate vector for \(\mathbf{v}\) with respect to \(\mathcal{B}\text{.}\)
1.
Vector \(\mathbf{v}\) as drawn below.
[Figure: \(\mathbf{v}\) as drawn for part 1.]
Answer.
\begin{equation*} \begin{bmatrix}-2\\1\end{bmatrix} \end{equation*}
2.
Vector \(\mathbf{v}\) as drawn below.
[Figure: \(\mathbf{v}\) as drawn for part 2.]
Answer.
\begin{equation*} \begin{bmatrix}3\\2\end{bmatrix} \end{equation*}

3.

Let
\begin{equation*} \mathcal{B}=\left\{\begin{bmatrix}1\\-1\\3\end{bmatrix},\begin{bmatrix}2\\1\\-1\end{bmatrix}\right\} \quad \text{be a basis for} \quad \mbox{span}\left(\begin{bmatrix}1\\-1\\3\end{bmatrix},\begin{bmatrix}2\\1\\-1\end{bmatrix}\right) \end{equation*}
Find the coordinate vector for \([-4,-2,2]\) with respect to \(\mathcal{B}\text{.}\)
Answer.
\begin{equation*} \begin{bmatrix}0\\-2\end{bmatrix} \end{equation*}

4.

Suppose
\begin{equation*} \mathcal{B}=\left\{\begin{bmatrix}1\\1\\1\end{bmatrix},\begin{bmatrix}1\\0\\1\end{bmatrix}, \mathbf{w}\right\} \end{equation*}
is a basis for \(\R^3\text{.}\) Find \(\mathbf{w}\) if the coordinate vector for \([-2,-7,4]\) is \([-1,2,-3]\text{.}\)
Answer.
\begin{equation*} \begin{bmatrix}1\\2\\-1\end{bmatrix} \end{equation*}

5.

Which of the following is a basis for \(\R^2\text{?}\)
  • \(\left\{\begin{bmatrix}1\\1\end{bmatrix},\begin{bmatrix}-1\\-1\end{bmatrix}, \begin{bmatrix}1\\2\end{bmatrix}\right\} \)
  • \(\left\{\begin{bmatrix}1\\1\end{bmatrix}, \begin{bmatrix}1\\2\end{bmatrix}\right\} \)
  • \(\left\{\begin{bmatrix} 3\\-1\end{bmatrix},\begin{bmatrix}1\\2\end{bmatrix}, \begin{bmatrix}-4\\3\end{bmatrix}\right\}\)
  • \(\left\{\begin{bmatrix}1\\-3\end{bmatrix}, \begin{bmatrix}-2\\6\end{bmatrix}\right\}\)

6.

Which of the following is a basis for \(V\) given below?
\begin{equation*} V=\mbox{span}\left(\begin{bmatrix}1\\1\\1\end{bmatrix}, \begin{bmatrix}1\\-2\\1\end{bmatrix}\right) \end{equation*}
  • \(\left\{\begin{bmatrix} 2\\-1\\2\end{bmatrix},\begin{bmatrix}1\\-2\\1\end{bmatrix}\right\}\)
  • \(\left\{\begin{bmatrix}0\\3\\0\end{bmatrix}, \begin{bmatrix}3\\-3\\3\end{bmatrix}\right\} \)
  • \(\left\{\begin{bmatrix} 1\\0\\0\end{bmatrix},\begin{bmatrix}0\\0\\1\end{bmatrix}\right\}\)
  • \(\left\{\begin{bmatrix} 1\\1\\1\end{bmatrix},\begin{bmatrix}2\\-1\\2\end{bmatrix}, \begin{bmatrix}1\\-2\\1\end{bmatrix}\right\}\)

Exercise Group.

For each given set \(S\) of vectors, find \(\mbox{dim}(\mbox{span}(S))\text{.}\)
7.
\begin{equation*} S=\left\{\begin{bmatrix}1\\1\\0\\1\end{bmatrix}, \begin{bmatrix}0\\1\\1\\1\end{bmatrix}, \begin{bmatrix}1\\0\\1\\1\end{bmatrix}, \begin{bmatrix}1\\1\\0\\1\end{bmatrix} \right\} \end{equation*}
Answer.
\(\mbox{dim}(\mbox{span}(S))=3\)
8.
\begin{equation*} S=\left\{\begin{bmatrix}3\\-2\\1\\1\end{bmatrix}, \begin{bmatrix}2\\3\\3\\-2\end{bmatrix}, \begin{bmatrix}1\\-5\\-2\\3\end{bmatrix}\right\} \end{equation*}
Answer.
\(\mbox{dim}(\mbox{span}(S))=2\)
9.
\begin{equation*} S=\left\{\begin{bmatrix}1\\1\\-3\end{bmatrix}, \begin{bmatrix}-3\\2\\1\end{bmatrix}, \begin{bmatrix}5\\-2\\4\end{bmatrix}\right\} \end{equation*}
Answer.
\(\mbox{dim}(\mbox{span}(S))=3\)

11.

Let \(\mathcal{B}=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}\) be a basis of \(\R^3\text{.}\) Suppose \(A\) is a nonsingular \(3\times 3 \) matrix. Show that \(\mathcal{C}=\{A\mathbf{v}_1, A\mathbf{v}_2, A\mathbf{v}_3\}\) is also a basis of \(\R^3\text{.}\)
Hint.
To show that \(\mathcal{C}\) spans \(\R^3\text{,}\) take any \(\mathbf{v}\) in \(\R^3\) and write \(A^{-1}\mathbf{v}\) as a linear combination of \(\mathbf{v}_1\text{,}\) \(\mathbf{v}_2\text{,}\) and \(\mathbf{v}_3\text{;}\) applying \(A\) then expresses \(\mathbf{v}\) in terms of the elements of \(\mathcal{C}\text{.}\)

12.

Prove that set
\begin{equation*} \mathcal{B}=\left\{\begin{bmatrix}1\amp 1\\0\amp 0\end{bmatrix},\begin{bmatrix}1\amp 0\\0\amp 1\end{bmatrix}\right\} \end{equation*}
of Example 4.2.22 is linearly independent.

Exercise Group.

Show that each of the following sets of vectors is linearly independent.
13.
\begin{equation*} \lbrace 1 + x, 1 - x, x + x^{2} \rbrace \quad \text{in } \mathbb{P}^{2}. \end{equation*}
14.
\begin{equation*} \lbrace x^{2}, x + 1, 1 - x - x^{2} \rbrace \quad \text{in } \mathbb{P}^{2}. \end{equation*}
15.
\begin{equation*} \left \lbrace \begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix} , \begin{bmatrix} 1 \amp 0 \\ 1 \amp 0 \end{bmatrix} , \begin{bmatrix} 0 \amp 0 \\ 1 \amp -1 \end{bmatrix} ,\ \begin{bmatrix} 0 \amp 1 \\ 0 \amp 1 \end{bmatrix} \right \rbrace \quad \text{in } \mathbb{M}_{2,2}. \end{equation*}

Exercise Group.

Find the coordinate vector for \(p(x)=6-2x+4x^2\) with respect to the given ordered basis \(\mathcal{B}\) of \(\mathbb{P}^2\text{.}\)
17.
\begin{equation*} \mathcal{B}= \lbrace 1 + x, 1 - x, x + x^{2} \rbrace. \end{equation*}
Answer.
\begin{equation*} [p(x)]_{\mathcal{B}}=\begin{bmatrix}0\\6\\4\end{bmatrix}. \end{equation*}
18.
\begin{equation*} \mathcal{B}=\{x^{2}, x + 1, 1 - x - x^{2}\}. \end{equation*}
Answer.
\begin{equation*} [p(x)]_{\mathcal{B}}=\begin{bmatrix}8\\2\\4\end{bmatrix}. \end{equation*}

19.

Find the coordinate vector for
\begin{equation*} A=\begin{bmatrix}4\amp -3\\1\amp 2\end{bmatrix} \end{equation*}
with respect to the ordered basis
\begin{equation*} \mathcal{B}= \left \lbrace \begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix} , \begin{bmatrix} 1 \amp 0 \\ 1 \amp 0 \end{bmatrix} , \begin{bmatrix} 0 \amp 0 \\ 1 \amp -1 \end{bmatrix} ,\ \begin{bmatrix} 0 \amp 1 \\ 0 \amp 1 \end{bmatrix} \right \rbrace. \end{equation*}
Answer.
\begin{equation*} [A]_{\mathcal{B}}=\begin{bmatrix}-1\\5\\-4\\-2\end{bmatrix}. \end{equation*}

20.

Let \(V\) be a vector space of dimension \(3\text{.}\) Suppose \(S=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}\) is linearly independent in \(V\text{.}\) Show that \(S\) is a basis for \(V\text{.}\)