
Coordinated Linear Algebra

Section 4.3 Subspaces Associated with Matrices

In this section, we look at three subspaces connected to an \(m\) by \(n\) matrix, call it \(A\text{.}\) They are:
  1. the row space of \(A\text{,}\) that is, the span of the rows of \(A\) in \(\R^n\text{,}\) called \(\mbox{row}(A)\text{,}\)
  2. the column space of \(A\text{,}\) that is, the span of the columns of \(A\) in \(\R^m\text{,}\) called \(\mbox{col}(A)\text{,}\)
  3. the null space of \(A\text{,}\) that is, the set of all \(\mathbf{x}\) in \(\R^n\) so that \(A\mathbf{x}=\mathbf{0}\text{,}\) called \(\mbox{null}(A)\text{.}\)
After we look at each of them separately, we will show that the dimensions of these three subspaces are connected. These dimensions give information about the matrix \(A\) and about the matrix transformation that sends \(\mathbf{x}\) in \(\R^n\) to \(A\mathbf{x}\) in \(\R^m\text{.}\)

Subsection 4.3.1 Row Space of a Matrix

Recall that when learning Gaussian elimination, we observed that every row-echelon form of a given matrix has the same number of nonzero rows. This result suggests that certain characteristics of the rows of a matrix are not affected by elementary row operations. We are now in a position to examine this question and to supply the proof we omitted earlier.

Definition 4.3.1.

Let \(A\) be an \(m\times n\) matrix. The row space of \(A\text{,}\) denoted by \(\mbox{row}(A)\text{,}\) is the subspace of \(\R^n\) spanned by the rows of \(A\text{.}\)

Exploration 4.3.1.

Consider the matrix
\begin{equation*} A=\begin{bmatrix}-2\amp 2\amp 1\\4\amp -2\amp 1\end{bmatrix}. \end{equation*}
Let \(\mathbf{r}_1\) and \(\mathbf{r}_2\) be the rows of \(A\text{:}\)
\begin{equation*} \mathbf{r}_1=\begin{bmatrix}-2\amp 2\amp 1\end{bmatrix},\quad \mathbf{r}_2=\begin{bmatrix}4\amp -2\amp 1\end{bmatrix}. \end{equation*}
Then \(\mbox{row}(A)=\mbox{span}(\mathbf{r}_1, \mathbf{r}_2)\) is a plane through the origin containing \(\mathbf{r}_1\) and \(\mathbf{r}_2\text{.}\)
[Figure: the row space of \(A\) plotted as a plane through the origin.]
We will use elementary row operations to reduce \(A\) to \(\mbox{rref}(A)\text{,}\) namely
\begin{equation*} \begin{bmatrix}-2\amp 2\amp 1\\4\amp -2\amp 1\end{bmatrix}\rightsquigarrow\begin{bmatrix}1\amp 0\amp 1\\0\amp 1\amp 3/2\end{bmatrix}. \end{equation*}
Let \(\mathbf{\rho}_1\) and \(\mathbf{\rho}_2\) be the rows of \(\mbox{rref}(A)\text{:}\)
\begin{equation*} \mathbf{\rho}_1=\begin{bmatrix}1\amp 0\amp 1\end{bmatrix},\quad \mathbf{\rho}_2=\begin{bmatrix}0\amp 1\amp 3/2\end{bmatrix}. \end{equation*}
What do you think \(\mbox{span}(\mathbf{\rho}_1, \mathbf{\rho}_2)\) looks like? The following video will help us visualize \(\mbox{span}(\mathbf{\rho}_1, \mathbf{\rho}_2)\) and compare it to \(\mbox{span}(\mathbf{r}_1, \mathbf{r}_2)\text{.}\)
Based on what we observed in the video, we may conjecture that
\begin{equation*} \mbox{span}(\mathbf{\rho}_1, \mathbf{\rho}_2)=\mbox{span}(\mathbf{r}_1, \mathbf{r}_2) \end{equation*}
[Figure: the vectors \(\mathbf{\rho}_1\) and \(\mathbf{\rho}_2\) added to the earlier sketch; they lie in the same plane as \(\mathbf{r}_1\) and \(\mathbf{r}_2\text{.}\)]
But why does this make sense? Vectors \(\mathbf{\rho}_1\) and \(\mathbf{\rho}_2\) were obtained from \(\mathbf{r}_1\) and \(\mathbf{r}_2\) by repeated applications of elementary row operations. At every stage of the row reduction process, the rows of the matrix are linear combinations of \(\mathbf{r}_1\) and \(\mathbf{r}_2\text{.}\) Thus, at every stage of the row reduction process, the rows of the matrix lie in the span of \(\mathbf{r}_1\) and \(\mathbf{r}_2\text{.}\) Our next video shows a step-by-step row reduction process accompanied by sketches of vectors.
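For readers who want to check this computationally, here is a short sketch assuming the SymPy library is available. It verifies that the rows of \(A\) and the rows of \(\mbox{rref}(A)\) span the same subspace: stacking the two sets of rows together increases the rank of neither set.

```python
from sympy import Matrix

A = Matrix([[-2, 2, 1],
            [4, -2, 1]])
R, _ = A.rref()  # reduced row-echelon form of A

# Two sets of rows span the same subspace exactly when stacking them
# together does not increase the rank of either set.
stacked = A.col_join(R)
print(A.rank(), R.rank(), stacked.rank())  # prints: 2 2 2
```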
The observations made in Exploration 4.3.1 make a convincing case for the following theorem.

Theorem 4.3.2.

If matrix \(B\) is obtained from matrix \(A\) by performing elementary row operations, then \(\mbox{row}(B)=\mbox{row}(A)\text{.}\)

Proof.

Let \(\mathbf{r}_1,\ldots ,\mathbf{r}_m\) be the rows of \(A\text{.}\) There are three elementary row operations. Clearly, switching the order of vectors in \(\mbox{span}(\mathbf{r}_1,\ldots ,\mathbf{r}_m)\) will not affect the span. Suppose that \(B\) was obtained from \(A\) by multiplying the \(i^{th}\) row of \(A\) by a non-zero constant \(k\text{.}\) We need to show that
\begin{equation*} \mbox{span}(\mathbf{r}_1,\ldots ,k\mathbf{r}_i,\ldots ,\mathbf{r}_m)=\mbox{span}(\mathbf{r}_1,\ldots ,\mathbf{r}_i,\ldots ,\mathbf{r}_m) \end{equation*}
To do this we will assume that some vector \(\mathbf{v}\) is in
\begin{equation*} \mbox{span}(\mathbf{r}_1,\ldots ,k\mathbf{r}_i,\ldots ,\mathbf{r}_m) \end{equation*}
and show that \(\mathbf{v}\) is in \(\mbox{span}(\mathbf{r}_1,\ldots ,\mathbf{r}_i,\ldots ,\mathbf{r}_m)\text{.}\) We will then assume that some vector \(\mathbf{w}\) is in
\begin{equation*} \mbox{span}(\mathbf{r}_1,\ldots ,\mathbf{r}_i,\ldots ,\mathbf{r}_m) \end{equation*}
and show that \(\mathbf{w}\) must be in \(\mbox{span}(\mathbf{r}_1,\ldots ,k\mathbf{r}_i,\ldots ,\mathbf{r}_m)\text{.}\)
Suppose that \(\mathbf{v}\) is in \(\mbox{span}(\mathbf{r}_1,\ldots ,k\mathbf{r}_i,\ldots ,\mathbf{r}_m)\text{.}\) Then
\begin{equation*} \mathbf{v}=a_1\mathbf{r}_1+\ldots +a_i(k\mathbf{r}_i)+\ldots +a_m\mathbf{r}_m \end{equation*}
But this implies
\begin{equation*} \mathbf{v}=a_1\mathbf{r}_1+\ldots +(a_ik)\mathbf{r}_i+\ldots +a_m\mathbf{r}_m. \end{equation*}
So \(\mathbf{v}\) is in \(\mbox{span}(\mathbf{r}_1,\ldots ,\mathbf{r}_i,\ldots ,\mathbf{r}_m)\text{.}\)
Now suppose \(\mathbf{w}\) is in \(\mbox{span}(\mathbf{r}_1,\ldots ,\mathbf{r}_i,\ldots ,\mathbf{r}_m)\text{,}\) then
\begin{equation*} \mathbf{w}=b_1\mathbf{r}_1+\ldots +b_i\mathbf{r}_i+\ldots +b_m\mathbf{r}_m. \end{equation*}
But because \(k\neq 0\text{,}\) we can do the following:
\begin{equation*} \mathbf{w}=b_1\mathbf{r}_1+\ldots +\frac{b_i}{k}(k\mathbf{r}_i)+\ldots +b_m\mathbf{r}_m. \end{equation*}
Therefore, \(\mathbf{w}\) is in \(\mbox{span}(\mathbf{r}_1,\ldots ,k\mathbf{r}_i,\ldots ,\mathbf{r}_m)\text{.}\)
We leave it to the reader to verify that adding a scalar multiple of one row of \(A\) to another row does not change the row space. (See also Exercise 4.3.6.17.)

Theorem 4.3.2 has two useful consequences.

Corollary 4.3.3.

If \(B\) is any row-echelon form of \(A\text{,}\) then \(\mbox{row}(B)=\mbox{row}(A)\text{.}\)

Corollary 4.3.4.

For any matrix \(A\text{,}\) \(\mbox{row}\big(\mbox{rref}(A)\big)=\mbox{row}(A)\text{.}\)

Example 4.3.5.

Let
\begin{equation*} A=\begin{bmatrix}2\amp -1\amp 1\amp -4\amp 1\\1\amp 0\amp 3\amp 3\amp 0\\-2\amp 1\amp -1\amp 5\amp 2\\4\amp -1\amp 7\amp 2\amp 1\end{bmatrix}. \end{equation*}
Find two distinct bases for \(\mbox{row}(A)\text{.}\)
Answer.
By Corollary 4.3.4 a basis for \(\mbox{row}(\mbox{rref}(A))\) will also be a basis for \(\mbox{row}(A)\text{.}\) Row reduction gives us:
\begin{equation*} \begin{bmatrix}2\amp -1\amp 1\amp -4\amp 1\\1\amp 0\amp 3\amp 3\amp 0\\-2\amp 1\amp -1\amp 5\amp 2\\4\amp -1\amp 7\amp 2\amp 1\end{bmatrix}\rightsquigarrow\begin{bmatrix}1\amp 0\amp 3\amp 0\amp -9\\0\amp 1\amp 5\amp 0\amp -31\\0\amp 0\amp 0\amp 1\amp 3\\0\amp 0\amp 0\amp 0\amp 0\end{bmatrix}=\mbox{rref}(A). \end{equation*}
Since the zero row contributes nothing to the span, we conclude that the nonzero rows of \(\mbox{rref}(A)\) span \(\mbox{row}(\mbox{rref}(A))\text{.}\) Therefore, a basis for the row space consists of the vectors
\begin{equation*} [1, 0, 3, 0, -9], \ [0, 1, 5, 0, -31], \ [0, 0, 0, 1, 3]. \end{equation*}
Because of the positions of the leading \(1\)s, the nonzero rows of \(\mbox{rref}(A)\) are linearly independent, so they do indeed form a basis.
To find a second basis for \(\mbox{row}(A)\text{,}\) observe that by Corollary 4.3.3 the row space of any row-echelon form of \(A\) will be equal to \(\mbox{row}(A)\text{.}\) Matrix \(A\) has many row-echelon forms. Here is one of them:
\begin{equation*} B=\begin{bmatrix}1\amp 0\amp 3\amp 3\amp 0\\0\amp -1\amp -5\amp -10\amp 1\\0\amp 0\amp 0\amp 1\amp 3\\0\amp 0\amp 0\amp 0\amp 0 \end{bmatrix}. \end{equation*}
The nonzero rows of \(B\) span \(\mbox{row}(A)\text{.}\) Once again the nonzero rows of \(B\) are linearly independent. Thus the nonzero rows of \(B\) form a basis for \(\mbox{row}(A)\text{.}\)
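As a computational check (a sketch assuming the SymPy library), the code below reproduces \(\mbox{rref}(A)\) and also asks SymPy for its own basis of \(\mbox{row}(A)\text{,}\) which it reads off a row-echelon form.

```python
from sympy import Matrix

A = Matrix([[2, -1, 1, -4, 1],
            [1, 0, 3, 3, 0],
            [-2, 1, -1, 5, 2],
            [4, -1, 7, 2, 1]])

R, _ = A.rref()
print(R)             # the nonzero rows give one basis for row(A)
print(A.rowspace())  # a second basis, read off a row-echelon form
```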
Our observations in Example 4.3.5 can be generalized to all matrices. Given any matrix \(A\text{,}\)
  1. The nonzero rows of \(\mbox{rref}(A)\) are linearly independent (Why?) and span \(\mbox{row}(A)\) (Corollary 4.3.4).
  2. The nonzero rows of any row-echelon form of \(A\) are linearly independent (Why?) and span \(\mbox{row}(A)\) (Corollary 4.3.3).
Therefore the nonzero rows of \(\mbox{rref}(A)\text{,}\) or the nonzero rows of any row-echelon form of \(A\text{,}\) constitute a basis of \(\mbox{row}(A)\text{.}\) Since all bases for \(\mbox{row}(A)\) must have the same number of elements (Theorem 4.2.17), we have just proved the following theorem.

Theorem 4.3.6.

All row-echelon forms of a matrix \(A\) have the same number of nonzero rows, and this number is \(\mbox{dim}\Big(\mbox{row}(A)\Big)\text{.}\)

This result was first introduced without proof when we studied Gaussian elimination, where we used it to define the rank of a matrix as the number of nonzero rows in its row-echelon forms. We can now update the definition of rank as follows.

Definition 4.3.7.

For any matrix \(A\text{,}\)
\begin{equation*} \mbox{rank}(A)=\mbox{dim}\Big(\mbox{row}(A)\Big). \end{equation*}

Subsection 4.3.2 Column Space of a Matrix

Definition 4.3.8.

Let \(A\) be an \(m\times n\) matrix. The column space of \(A\text{,}\) denoted by \(\mbox{col}(A)\text{,}\) is the subspace of \(\R^m\) spanned by the columns of \(A\text{.}\)

Exploration 4.3.2.

Let
\begin{equation*} B=\begin{bmatrix}2\amp -1\amp 3\amp 1\\1\amp -1\amp 2\amp 2\\1\amp 3\amp -2\amp -3\end{bmatrix}. \end{equation*}
Our goal is to find a basis for \(\mbox{col}(B)\text{.}\) To do this we need to find a linearly independent subset of the columns of \(B\) that spans \(\mbox{col}(B)\text{.}\)
Consider the linear relation:
\begin{equation} a_1\begin{bmatrix}2\\1\\1\end{bmatrix}+a_2\begin{bmatrix}-1\\-1\\3\end{bmatrix}+a_3\begin{bmatrix}3\\2\\-2\end{bmatrix}+a_4\begin{bmatrix}1\\2\\-3\end{bmatrix}=\mathbf{0}.\tag{4.3.1} \end{equation}
Solving this homogeneous equation amounts to finding \(\mbox{rref}(B)\text{.}\) So,
\begin{equation*} \begin{bmatrix}2\amp -1\amp 3\amp 1\\1\amp -1\amp 2\amp 2\\1\amp 3\amp -2\amp -3\end{bmatrix}\rightsquigarrow\begin{bmatrix}1\amp 0\amp 1\amp 0\\0\amp 1\amp -1\amp 0\\0\amp 0\amp 0\amp 1\end{bmatrix}=\mbox{rref}(B). \end{equation*}
We now see that (4.3.1) has infinitely many solutions.
Observe that the homogeneous equation
\begin{equation} a_1\begin{bmatrix}1\\0\\0\end{bmatrix}+a_2\begin{bmatrix}0\\1\\0\end{bmatrix}+a_3\begin{bmatrix}1\\-1\\0\end{bmatrix}+a_4\begin{bmatrix}0\\0\\1\end{bmatrix}=\mathbf{0}.\tag{4.3.2} \end{equation}
has the same solution set as (4.3.1). In particular, \(a_1=1\text{,}\) \(a_2=-1\text{,}\) \(a_3=-1\text{,}\) \(a_4=0\) is a non-trivial solution of (4.3.1) and (4.3.2). This means that the third column of \(B\) and the third column of \(\mbox{rref}(B)\) can be expressed as the first column minus the second column of their respective matrices.
We conclude that the third column of \(B\) can be eliminated from the spanning set for \(\mbox{col}(B)\) and
\begin{align*} \mbox{col}(B) \amp =\mbox{span}\left(\begin{bmatrix}2\\1\\1\end{bmatrix},\begin{bmatrix}-1\\-1\\3\end{bmatrix}, \begin{bmatrix}3\\2\\-2\end{bmatrix}, \begin{bmatrix}1\\2\\-3\end{bmatrix}\right) \\ \amp =\mbox{span}\left(\begin{bmatrix}2\\1\\1\end{bmatrix},\begin{bmatrix}-1\\-1\\3\end{bmatrix}, \begin{bmatrix}1\\2\\-3\end{bmatrix}\right). \end{align*}
Having gotten rid of one of the vectors, we need to determine whether the remaining three vectors are linearly independent. To do this we need to find all solutions of
\begin{equation} b_1\begin{bmatrix}2\\1\\1\end{bmatrix}+b_2\begin{bmatrix}-1\\-1\\3\end{bmatrix}+b_3\begin{bmatrix}1\\2\\-3\end{bmatrix}=\mathbf{0}.\tag{4.3.3} \end{equation}
Fortunately, we do not have to start from scratch. Observe that crossing out the third column in the previous row reduction process yields the desired reduced row-echelon form.
[Figure: the row reduction of \(B\) with the third column crossed out.]
This time the reduced row-echelon form tells us that (4.3.3) has only the trivial solution. We conclude that the three vectors are linearly independent and
\begin{equation*} \left\{\begin{bmatrix}2\\1\\1\end{bmatrix},\begin{bmatrix}-1\\-1\\3\end{bmatrix}, \begin{bmatrix}1\\2\\-3\end{bmatrix}\right\} \end{equation*}
is a basis for \(\mbox{col}(B)\text{.}\)
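The steps of this exploration can be carried out computationally. The sketch below, assuming SymPy, extracts the pivot positions from \(\mbox{rref}(B)\) and uses them to pick out the corresponding columns of \(B\text{.}\)

```python
from sympy import Matrix

B = Matrix([[2, -1, 3, 1],
            [1, -1, 2, 2],
            [1, 3, -2, -3]])

R, pivots = B.rref()
print(pivots)  # prints: (0, 1, 3) -- the first, second and fourth columns

# The columns of B in the pivot positions form a basis for col(B).
basis = [B.col(j) for j in pivots]
for v in basis:
    print(v.T)  # transposed only to print each vector on one line
```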
The approach we took to find a basis for \(\mbox{col}(B)\) in Exploration 4.3.2 uses the reduced row-echelon form of \(B\text{.}\) It is true, however, that any row-echelon form of \(B\) could have been used in place of \(\mbox{rref}(B)\text{.}\) (Why?) We generalize the steps as follows:

Algorithm 4.3.9.

Let \(B\) be a matrix. To find a basis for \(\mbox{col}(B)\text{:}\)
  1. Find \(\mbox{rref}(B)\) (or any row-echelon form \(B'\) of \(B\)).
  2. Identify the pivot columns of \(\mbox{rref}(B)\) (or \(B'\)).
  3. The columns of \(B\) located in the same positions as the pivot columns form a basis for \(\mbox{col}(B)\text{.}\)

Proof.

Let \(\mathbf{b}_1,\ldots ,\mathbf{b}_n\) be the columns of \(B\text{,}\) and let \(\mathbf{b}'_1,\ldots ,\mathbf{b}'_n\) be the columns of \(\mbox{rref}(B)\) (or \(B'\)). Observe that the equations
\begin{equation} a_1\mathbf{b}_1+\ldots +a_n\mathbf{b}_n=\mathbf{0}\tag{4.3.4} \end{equation}
\begin{equation} a_1\mathbf{b}'_1+\ldots +a_n\mathbf{b}'_n=\mathbf{0}\tag{4.3.5} \end{equation}
have the same solution set. This means that any non-trivial relation among the columns of \(\mbox{rref}(B)\) (or \(B'\)) translates into a non-trivial relation among the columns of \(B\text{.}\) Likewise, any collection of linearly independent columns of \(\mbox{rref}(B)\) (or \(B'\)) corresponds to linearly independent columns of \(B\text{.}\)
Now, the pivot columns of \(\mbox{rref}(B)\) (or \(B'\)) are linearly independent, so the corresponding columns of \(B\) are linearly independent. The non-pivot columns, on the other hand, can be expressed as linear combinations of the pivot columns; they contribute nothing to the span and can be removed from the spanning set.
The proof of Algorithm 4.3.9 shows that the number of basis elements for the column space of a matrix is equal to the number of pivot columns. But the number of pivot columns is the same as the number of pivots in a row-echelon form, which in turn is equal to the number of nonzero rows, that is, to the rank of the matrix. This gives us the following important result.

Theorem 4.3.10.

For any matrix \(A\text{,}\)
\begin{equation*} \mbox{dim}\Big(\mbox{col}(A)\Big)=\mbox{dim}\Big(\mbox{row}(A)\Big)=\mbox{rank}(A). \end{equation*}

Example 4.3.11.

We will return to matrix \(A\) of Example 4.3.5 and find a basis for \(\mbox{col}(A)\text{.}\)
Answer.
From Example 4.3.5 we know that
\begin{equation*} \mbox{rref}(A)=\begin{bmatrix}1\amp 0\amp 3\amp 0\amp -9\\0\amp 1\amp 5\amp 0\amp -31\\0\amp 0\amp 0\amp 1\amp 3\\0\amp 0\amp 0\amp 0\amp 0\end{bmatrix}, \end{equation*}
whose pivot columns are the first, second and fourth. By Algorithm 4.3.9, the corresponding columns of \(A\) form a basis for \(\mbox{col}(A)\text{:}\)
\begin{equation*} \left\{\begin{bmatrix}2\\1\\-2\\4\end{bmatrix}, \begin{bmatrix}-1\\0\\1\\-1\end{bmatrix}, \begin{bmatrix}-4\\3\\5\\2\end{bmatrix}\right\}. \end{equation*}
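A quick check, assuming SymPy: the columnspace method returns exactly the pivot columns of \(A\text{.}\)

```python
from sympy import Matrix

A = Matrix([[2, -1, 1, -4, 1],
            [1, 0, 3, 3, 0],
            [-2, 1, -1, 5, 2],
            [4, -1, 7, 2, 1]])

# columnspace() returns the pivot columns of A, a basis for col(A).
for v in A.columnspace():
    print(v.T)  # expected: columns 1, 2 and 4 of A
```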

Subsection 4.3.3 The Null Space

Definition 4.3.12.

Let \(A\) be an \(m\times n\) matrix. The null space of \(A\text{,}\) denoted by \(\mbox{null}(A)\text{,}\) is the set of all vectors \(\mathbf{x}\) in \(\R^n\) such that \(A\mathbf{x}=\mathbf{0}\text{.}\)
Before digging further, let us examine the notion through an example.

Example 4.3.13.

Find \(\mbox{null}(A)\) if
\begin{equation*} A=\begin{bmatrix}3\amp -1\\-6\amp 2\end{bmatrix}. \end{equation*}
Answer.
We need to solve the equation \(A\mathbf{x}=\mathbf{0}\text{.}\) Row reduction gives us
\begin{equation*} \begin{bmatrix}3\amp -1\\-6\amp 2\end{bmatrix}\rightsquigarrow\begin{bmatrix}1\amp -1/3\\0\amp 0\end{bmatrix}=\mbox{rref}(A). \end{equation*}
From \(\mbox{rref}(A)\) we see that \(x_1=\frac{1}{3}x_2\text{.}\) Setting the free variable \(x_2=t\text{,}\) we conclude that \(\mathbf{x}=\begin{bmatrix}1/3\\1\end{bmatrix}t\text{.}\) Thus \(\mbox{null}(A)\) consists of all vectors of the form
\begin{equation*} t \begin{bmatrix}1/3\\1\end{bmatrix}. \end{equation*}
We might write
\begin{equation*} \mbox{null}(A)=\left\{\begin{bmatrix}1/3\\1\end{bmatrix}t\right\} \end{equation*}
or
\begin{equation*} \mbox{null}(A)=\mbox{span}\left(\begin{bmatrix}1/3\\1\end{bmatrix}\right). \end{equation*}
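As a sanity check, here is a sketch assuming SymPy; its nullspace method returns the same one-element basis.

```python
from sympy import Matrix

A = Matrix([[3, -1],
            [-6, 2]])

# nullspace() returns a basis for null(A) as a list of column vectors.
print(A.nullspace())  # prints: [Matrix([[1/3], [1]])]
```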
The approach in Example 4.3.13 allows us to make an important observation. Note that every scalar multiple of \([1/3, 1]\) is contained in \(\mbox{null}(A)\text{,}\) and the sum of any two such multiples is again a multiple of \([1/3, 1]\text{.}\) This means that \(\mbox{null}(A)\) is closed under vector addition and scalar multiplication.
Recall that this property makes \(\mbox{null}(A)\) a subspace of \(\R^n\text{.}\) This result was first presented as Exercise 4.1.6.15. We now formalize it as a theorem.

Theorem 4.3.14.

Let \(A\) be an \(m\times n\) matrix. Then \(\mbox{null}(A)\) is a subspace of \(\R^n\text{.}\)

Proof.

To see that \(\mbox{null}(A)\) is not empty, we notice that \(A\mathbf{0}=\mathbf{0}\) and so \(\mathbf{0}\) is in \(\mbox{null}(A)\text{.}\) To deduce that \(\mbox{null}(A)\) is closed under vector addition and scalar multiplication we will show that a linear combination of any two elements of \(\mbox{null}(A)\) is contained in \(\mbox{null}(A)\text{.}\)
Suppose \(\mathbf{x}_1\) and \(\mathbf{x}_2\) are in \(\mbox{null}(A)\text{.}\) Then \(A\mathbf{x}_1=\mathbf{0}\) and \(A\mathbf{x}_2=\mathbf{0}\text{.}\) But then
\begin{equation*} A(a_1\mathbf{x}_1+a_2\mathbf{x}_2)=a_1A\mathbf{x}_1+a_2A\mathbf{x}_2=\mathbf{0} \end{equation*}
We conclude that \(a_1\mathbf{x}_1+a_2\mathbf{x}_2\) is also in \(\mbox{null}(A)\text{.}\)

Example 4.3.15.

Find a basis for \(\mbox{null}(A)\text{,}\) where \(A\) is the matrix in Example 4.3.5.
Solution.
Elements in the null space of \(A\) are solutions to the equation
\begin{equation*} \begin{bmatrix}2\amp -1\amp 1\amp -4\amp 1\\1\amp 0\amp 3\amp 3\amp 0\\-2\amp 1\amp -1\amp 5\amp 2\\4\amp -1\amp 7\amp 2\amp 1\end{bmatrix}\mathbf{x}=\mathbf{0} \end{equation*}
Row reduction yields \(\mbox{rref}(A)\text{:}\)
\begin{equation*} \begin{bmatrix}2\amp -1\amp 1\amp -4\amp 1\\1\amp 0\amp 3\amp 3\amp 0\\-2\amp 1\amp -1\amp 5\amp 2\\4\amp -1\amp 7\amp 2\amp 1\end{bmatrix}\rightsquigarrow\begin{bmatrix}1\amp 0\amp 3\amp 0\amp -9\\0\amp 1\amp 5\amp 0\amp -31\\0\amp 0\amp 0\amp 1\amp 3\\0\amp 0\amp 0\amp 0\amp 0\end{bmatrix} \end{equation*}
Therefore, setting the free variables \(x_3=s\) and \(x_5=t\text{,}\) the elements of \(\mbox{null}(A)\) are of the form
\begin{equation*} \mathbf{x}=\begin{bmatrix}9t-3s\\31t-5s\\s\\-3t\\t\end{bmatrix}=\begin{bmatrix}-3\\-5\\1\\0\\0\end{bmatrix}s+\begin{bmatrix}9\\31\\0\\-3\\1\end{bmatrix}t \end{equation*}
Thus
\begin{equation*} \mbox{null}(A)=\mbox{span}\left( \begin{bmatrix}-3\\-5\\1\\0\\0\end{bmatrix}, \begin{bmatrix}9\\31\\0\\-3\\1\end{bmatrix}\right) \end{equation*}
To find a basis for \(\mbox{null}(A)\text{,}\) we need to find linearly independent vectors that span \(\mbox{null}(A)\text{.}\) Take a closer look at the vectors
\begin{equation*} \begin{bmatrix}-3\\-5\\{\color{red}1}\\0\\{\color{blue}0}\end{bmatrix}, \begin{bmatrix}9\\31\\{\color{red}0}\\-3\\{\color{blue}1}\end{bmatrix} \end{equation*}
Because of the locations of the \(1\)s and \(0\)s, it is clear that neither vector is a scalar multiple of the other. Therefore the two vectors are linearly independent. We conclude that
\begin{equation*} \left\{\begin{bmatrix}-3\\-5\\1\\0\\0\end{bmatrix}, \begin{bmatrix}9\\31\\0\\-3\\1\end{bmatrix}\right\} \end{equation*}
is a basis of \(\mbox{null}(A)\text{,}\) and \(\mbox{dim}\Big(\mbox{null}(A)\Big)=2\text{.}\)
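This computation is easy to verify with SymPy (an assumption; any computer algebra system would do). The nullspace method produces one basis vector per free variable, exactly as above.

```python
from sympy import Matrix

A = Matrix([[2, -1, 1, -4, 1],
            [1, 0, 3, 3, 0],
            [-2, 1, -1, 5, 2],
            [4, -1, 7, 2, 1]])

# One basis vector per free variable (x3 and x5 here).
for v in A.nullspace():
    print(v.T)  # expected: [-3, -5, 1, 0, 0] and [9, 31, 0, -3, 1]
```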
It is not a coincidence that the steps we used in Example 4.3.15 produced linearly independent vectors, and it is worthwhile to understand why this procedure will always produce linearly independent vectors. Take a closer look at the elements of the null space:
\begin{equation*} \mathbf{x}=\begin{bmatrix}9t-3s\\31t-5s\\{\color{red}s}\\-3t\\{\color{blue}t}\end{bmatrix}=\begin{bmatrix}-3\\-5\\{\color{red}1}\\0\\{\color{blue}0}\end{bmatrix}s+\begin{bmatrix}9\\31\\{\color{red}0}\\-3\\{\color{blue}1}\end{bmatrix}t \end{equation*}
The parameter \(s\) in the third component of \(\mathbf{x}\) produces a \(1\) in the third component of the first vector and a \(0\) in the third component of the second vector, while parameter \(t\) in the fifth component of \(\mathbf{x}\) produces a \(1\) in the fifth component of the second vector and a \(0\) in the fifth component of the first vector. This makes it clear that the two vectors are linearly independent.
This pattern will hold for any number of parameters, with each parameter producing a \(1\) in exactly one vector and \(0\)s in the corresponding components of the other vectors.
\begin{equation*} \begin{bmatrix}\vdots \\t_1\\\vdots\\t_2\\\vdots\\t_3\\\vdots\\t_n\\\vdots \end{bmatrix}=\begin{bmatrix}\vdots \\1\\\vdots\\0\\\vdots\\0\\\vdots\\0\\\vdots \end{bmatrix}t_1+\ldots +\begin{bmatrix}\vdots \\0\\\vdots\\1\\\vdots\\0\\\vdots\\0\\\vdots \end{bmatrix}t_2+\ldots+\begin{bmatrix}\vdots \\0\\\vdots\\0\\\vdots\\1\\\vdots\\0\\\vdots \end{bmatrix}t_3+\ldots+\begin{bmatrix}\vdots \\0\\\vdots\\0\\\vdots\\0\\\vdots\\1\\\vdots \end{bmatrix}t_n \end{equation*}
Therefore, vectors obtained in this way will always be linearly independent.

Subsection 4.3.4 Rank and Nullity Theorem

Definition 4.3.16.

Let \(A\) be a matrix. The dimension of the null space of \(A\) is called the nullity of \(A\text{.}\)
\begin{equation*} \mbox{dim}\Big(\mbox{null}(A)\Big)=\mbox{nullity}(A). \end{equation*}
We know that the dimension of the row space and the dimension of the column space of a matrix are the same and are equal to the rank of the matrix (or the number of nonzero rows in any row-echelon form of the matrix).
As we observed in Example 4.3.15, the dimension of the null space of a matrix is equal to the number of free variables in the solution vector of the homogeneous system associated with the matrix. Since the number of pivots and the number of free variables add up to the number of columns in a matrix, we have the following significant result.

Theorem 4.3.17. Rank-Nullity Theorem.

Let \(A\) be an \(m\times n\) matrix. Then
\begin{equation*} \mbox{rank}(A)+\mbox{nullity}(A)=n. \end{equation*}

We will see the geometric implications of this theorem when we study linear transformations.
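Here is a quick numerical check of the theorem, a sketch assuming SymPy, using the matrix of Example 4.3.15:

```python
from sympy import Matrix

A = Matrix([[2, -1, 1, -4, 1],
            [1, 0, 3, 3, 0],
            [-2, 1, -1, 5, 2],
            [4, -1, 7, 2, 1]])

rank = A.rank()
nullity = len(A.nullspace())  # dimension of null(A)
print(rank, nullity, rank + nullity, A.cols)  # prints: 3 2 5 5
```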

Subsection 4.3.5 Subspaces Associated with Matrix Transformations

Recall that an \(m \times n\) matrix \(A\) defines a linear transformation \(T:\R^n\to\R^m\) by the rule \(T(\mathbf{x})=A\mathbf{x}\text{.}\) The subspaces associated with the matrix \(A\) have interesting interpretations in terms of the linear transformation \(T\text{.}\)

Subsubsection 4.3.5.1 The Image of a Matrix Transformation

In this section we use \(U\text{,}\) \(V\) and \(W\) to denote finite-dimensional vector spaces, such as subspaces of \(\R^n\text{.}\)
Definition 4.3.18.
Let \(T:V\rightarrow W\) be a linear transformation. The image of \(T\text{,}\) denoted by \(\mbox{im}(T)\text{,}\) is the set
\begin{equation*} \mbox{im}(T)=\{T(\mathbf{v}):\mathbf{v}\in V\}. \end{equation*}
In other words, the image of \(T\) consists of individual images of all vectors of \(V\text{.}\)
Example 4.3.19.
Consider the linear transformation \(T:\R^3\rightarrow \R^2\) with standard matrix
\begin{equation*} A=\begin{bmatrix}1\amp 2\amp 3\\2\amp 4\amp 6\end{bmatrix}. \end{equation*}
  1. Find \(\mbox{im}(T)\text{.}\)
  2. Illustrate the action of \(T\) with a sketch.
Answer.
Item 1: Let \(\mathbf{v}=\begin{bmatrix}a\\b\\c\end{bmatrix}\text{;}\) then
\begin{equation*} T(\mathbf{v})=A\mathbf{v}=\begin{bmatrix}1\amp 2\amp 3\\2\amp 4\amp 6\end{bmatrix}\begin{bmatrix}a\\b\\c\end{bmatrix}=a\begin{bmatrix}1\\2\end{bmatrix}+b\begin{bmatrix}2\\4\end{bmatrix}+c\begin{bmatrix}3\\6\end{bmatrix}. \end{equation*}
Thus, every element of the image can be written as a linear combination of the columns of \(A\text{.}\) We conclude that
\begin{equation*} \mbox{im}(T)=\mbox{span}\left(\begin{bmatrix}1\\2\end{bmatrix}, \begin{bmatrix}2\\4\end{bmatrix}, \begin{bmatrix}3\\6\end{bmatrix}\right)=\mbox{col}(A). \end{equation*}
Every column of \(A\) is a scalar multiple of \([1,2]\text{.}\) Thus,
\begin{equation*} \mbox{im}(T)=\mbox{span}\left(\begin{bmatrix}1\\2\end{bmatrix}, \begin{bmatrix}2\\4\end{bmatrix}, \begin{bmatrix}3\\6\end{bmatrix}\right)=\mbox{span}\left(\begin{bmatrix}1\\2\end{bmatrix}\right). \end{equation*}
The image of \(T\) is a line in \(\R^2\) determined by the vector \([1,2]\text{.}\)
Item 2: The action of \(T\) can be illustrated with a sketch.
[Figure: the image of \(T\) graphed as the line in \(\R^2\) spanned by \([1,2]\text{.}\)]
In Example 4.3.19 we observed that the image of the linear transformation was equal to the column space of its standard matrix. In general, it is easy to see that if \(T:\R^n\rightarrow \R^m\) is a linear transformation with standard matrix \(A\) then the following relationship holds:
\begin{equation*} \mbox{im}(T)=\mbox{col}(A). \end{equation*}
In addition, by Theorem 4.3.10, we know that
\begin{equation*} \mbox{dim}(\mbox{im}(T))=\mbox{dim}(\mbox{col}(A))=\mbox{rank}(A). \end{equation*}
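This relationship is easy to check computationally. The sketch below, assuming SymPy, recovers the image of the transformation of Example 4.3.19 as the column space of its standard matrix.

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])

# im(T) = col(A); every column of A is a multiple of [1, 2].
print(A.columnspace())  # prints: [Matrix([[1], [2]])]
print(A.rank())         # prints: 1, so dim(im(T)) = 1
```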
Example 4.3.20.
Let \(T:\R^5\rightarrow \R^4\) be a linear transformation with standard matrix
\begin{equation*} A=\begin{bmatrix}1 \amp 2 \amp 2 \amp -1 \amp 0\\-1 \amp 3 \amp 1 \amp 0 \amp -1\\3 \amp 0 \amp 0 \amp 3 \amp 6\\ 1 \amp -1 \amp 1 \amp -2 \amp -1\end{bmatrix}. \end{equation*}
Find \(\mbox{im}(T)\) and \(\mbox{dim}(\mbox{im}(T))\text{.}\)
Answer.
As in Example 4.3.19, the image of \(T\) is given by
\begin{equation*} \mbox{im}(T)=\mbox{span}\left(\begin{bmatrix}1\\-1\\3\\1\end{bmatrix}, \begin{bmatrix}2\\3\\0\\-1\end{bmatrix}, \begin{bmatrix}2\\1\\0\\1\end{bmatrix}, \begin{bmatrix}-1\\0\\3\\-2\end{bmatrix}, \begin{bmatrix}0\\-1\\6\\-1\end{bmatrix}\right)=\mbox{col}(A). \end{equation*}
This time it is harder to detect the vectors that can be eliminated from the spanning set without affecting the span. We have to rely on the reduced row-echelon form of \(A\text{.}\)
\begin{equation*} \begin{bmatrix}1 \amp 2 \amp 2 \amp -1 \amp 0\\-1 \amp 3 \amp 1 \amp 0 \amp -1\\3 \amp 0 \amp 0 \amp 3 \amp 6\\ 1 \amp -1 \amp 1 \amp -2 \amp -1\end{bmatrix} \rightsquigarrow \begin{bmatrix} 1 \amp 0 \amp 0 \amp 1 \amp 2\\0 \amp 1 \amp 0 \amp 1 \amp 1\\0 \amp 0 \amp 1 \amp -2 \amp -2\\ 0 \amp 0 \amp 0 \amp 0 \amp 0 \end{bmatrix}. \end{equation*}
We can see that \(\mbox{rank}(A)=3\text{,}\) so \(\mbox{dim}(\mbox{im}(T))=3\text{.}\) To identify vectors that span \(\mbox{im}(T)\text{,}\) we turn to Algorithm 4.3.9. We identify the first three columns as pivot columns. These columns are linearly independent and span \(\mbox{col}(A)\text{.}\) Therefore,
\begin{equation*} \mbox{im}(T)=\mbox{col}(A)=\mbox{span}\left(\begin{bmatrix}1\\-1\\3\\1\end{bmatrix}, \begin{bmatrix}2\\3\\0\\-1\end{bmatrix}, \begin{bmatrix}2\\1\\0\\1\end{bmatrix}\right) \end{equation*}
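A computational check of this example, again a sketch assuming SymPy:

```python
from sympy import Matrix

A = Matrix([[1, 2, 2, -1, 0],
            [-1, 3, 1, 0, -1],
            [3, 0, 0, 3, 6],
            [1, -1, 1, -2, -1]])

_, pivots = A.rref()
print(pivots)    # prints: (0, 1, 2) -- the first three columns are pivots
print(A.rank())  # prints: 3 = dim(im(T)) = rank(T)
```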
By Theorem 4.1.27 and Definition 4.3.8, we know that for an \(m\times n\) matrix \(A\text{,}\) \(\mbox{col}(A)\) is a subspace of \(\R^m\text{.}\) However, when vector spaces other than \(\R^m\) are involved, it is not yet clear that \(\mbox{im}(T)\) is a subspace of the codomain. The following theorem resolves this issue.
Theorem 4.3.21.

Let \(T:V\rightarrow W\) be a linear transformation. Then \(\mbox{im}(T)\) is a subspace of \(W\text{.}\)

Proof.
To show that \(\mbox{im}(T)\) is a subspace, we need to show that \(\mbox{im}(T)\) is non-empty and is closed under addition and scalar multiplication. First, \(\mathbf{0}=T(\mathbf{0})\) is in \(\mbox{im}(T)\text{.}\) Suppose \(\mathbf{w}_1\) and \(\mathbf{w}_2\) are in \(\mbox{im}(T)\text{.}\) Then there are vectors \(\mathbf{v}_1\) and \(\mathbf{v}_2\) in \(V\) such that \(T(\mathbf{v}_1)=\mathbf{w}_1\) and \(T(\mathbf{v}_2)=\mathbf{w}_2\text{.}\) Then
\begin{equation*} \mathbf{w}_1+\mathbf{w}_2=T(\mathbf{v}_1)+T(\mathbf{v}_2)=T(\mathbf{v}_1+\mathbf{v}_2). \end{equation*}
This shows that \(\mathbf{w}_1+\mathbf{w}_2\) is in \(\mbox{im}(T)\text{.}\) For any scalar \(a\text{,}\) we have:
\begin{equation*} a\mathbf{w}_1=aT(\mathbf{v}_1)=T(a\mathbf{v}_1). \end{equation*}
This shows that \(a\mathbf{w}_1\) is in \(\mbox{im}(T)\text{.}\)
We can now define the rank of a linear transformation.
Definition 4.3.22.
The rank of a linear transformation \(T:V\rightarrow W\text{,}\) is the dimension of the image of \(T\text{.}\)
\begin{equation*} \mbox{rank}(T)=\mbox{dim}(\mbox{im}(T)). \end{equation*}
This definition gives us the following relationship between the rank of a linear transformation \(T:\R^n\rightarrow\R^m\) and the rank of the standard matrix \(A\) associated with it.

Corollary 4.3.23.

Let \(T:\R^n\rightarrow\R^m\) be a linear transformation with standard matrix \(A\text{.}\) Then
\begin{equation*} \mbox{rank}(T)=\mbox{rank}(A). \end{equation*}

Subsubsection 4.3.5.2 The Kernel of a Linear Transformation

Exactly as in the preceding section, we use \(U\text{,}\) \(V\) and \(W\) to denote finite-dimensional vector spaces, such as subspaces of \(\R^n\text{.}\)
Definition 4.3.24.
Let \(T:V\rightarrow W\) be a linear transformation. The kernel of \(T\text{,}\) denoted by \(\mbox{ker}(T)\text{,}\) is the set
\begin{equation*} \mbox{ker}(T)=\{\mathbf{v}:T(\mathbf{v})=\mathbf{0}\}. \end{equation*}
In other words, the kernel of \(T\) consists of all vectors of \(V\) that map to \(\mathbf{0}\) in \(W\text{.}\)
It is important to pay attention to the locations of the kernel and the image. We already proved that \(\mbox{im}(T)\) is a subspace of the codomain. In contrast, \(\mbox{ker}(T)\) is located in the domain. (We will prove shortly that it is a subspace of the domain.)
[Figure: diagram showing \(\mbox{ker}(T)\) inside the domain \(V\) and \(\mbox{im}(T)\) inside the codomain \(W\text{.}\)]
Example 4.3.25.
Let \(T:\R^5\rightarrow \R^4\) be a linear transformation with standard matrix
\begin{equation*} A=\begin{bmatrix}1 \amp 2 \amp 2 \amp -1 \amp 0\\-1 \amp 3 \amp 1 \amp 0 \amp -1\\3 \amp 0 \amp 0 \amp 3 \amp 6\\ 1 \amp -1 \amp 1 \amp -2 \amp -1\end{bmatrix}. \end{equation*}
  1. Find \(\mbox{ker}(T)\text{.}\)
  2. Is \(\mbox{ker}(T)\) a subspace of \(\R^5\text{?}\) If so, find \(\mbox{dim}(\mbox{ker}(T))\text{.}\)
Answer.
Item 1: To find the kernel of \(T\text{,}\) we need to find all vectors of \(\R^5\) that map to \(\mathbf{0}\) in \(\R^4\text{.}\) This amounts to solving the equation \(A\mathbf{x}=\mathbf{0}\text{.}\) Gauss-Jordan elimination yields:
\begin{equation*} \begin{bmatrix}1 \amp 2 \amp 2 \amp -1 \amp 0\\-1 \amp 3 \amp 1 \amp 0 \amp -1\\3 \amp 0 \amp 0 \amp 3 \amp 6\\ 1 \amp -1 \amp 1 \amp -2 \amp -1\end{bmatrix} \rightsquigarrow \begin{bmatrix} 1 \amp 0 \amp 0 \amp 1 \amp 2\\0 \amp 1 \amp 0 \amp 1 \amp 1\\0 \amp 0 \amp 1 \amp -2 \amp -2\\ 0 \amp 0 \amp 0 \amp 0 \amp 0 \end{bmatrix}. \end{equation*}
Thus, the kernel of \(T\) consists of all elements of the form:
\begin{equation*} \begin{bmatrix}-1\\-1\\2\\1\\0\end{bmatrix}s+\begin{bmatrix}-2\\-1\\2\\0\\1\end{bmatrix}t. \end{equation*}
We conclude that
\begin{equation*} \mbox{ker}(T)=\mbox{span}\left(\begin{bmatrix}-1\\-1\\2\\1\\0\end{bmatrix}, \begin{bmatrix}-2\\-1\\2\\0\\1\end{bmatrix}\right). \end{equation*}
Item 2: Since \(\mbox{ker}(T)\) is the span of two vectors of \(\R^5\text{,}\) we know that \(\mbox{ker}(T)\) is a subspace of \(\R^5\text{.}\) (See Theorem 4.1.27.) Observe that the two vectors in the spanning set are linearly independent. (How can we see this without performing computations?) Therefore \(\mbox{dim}(\mbox{ker}(T))=2\text{.}\)
Recall that the null space of an \(m \times n\) matrix \(A\) is defined to be set of all solutions to the homogeneous equation \(A\mathbf{x}=\mathbf{0}\text{.}\) This means that if \(T:\R^n\rightarrow \R^m\) is a linear transformation with standard matrix \(A\) then
\begin{equation*} \mbox{ker}(T)=\mbox{null}(A). \end{equation*}
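Here is a short sketch, assuming SymPy, that computes the kernel of the transformation of Example 4.3.25 as the null space of its standard matrix:

```python
from sympy import Matrix

A = Matrix([[1, 2, 2, -1, 0],
            [-1, 3, 1, 0, -1],
            [3, 0, 0, 3, 6],
            [1, -1, 1, -2, -1]])

# ker(T) = null(A): one basis vector per free variable.
for v in A.nullspace():
    print(v.T)  # expected: [-1, -1, 2, 1, 0] and [-2, -1, 2, 0, 1]
```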
We know that \(\mbox{null}(A)\) of an \(m\times n\) matrix is a subspace of \(\R^n\text{.}\) (See Theorem 4.3.14.) We conclude this section by showing that even when vector spaces other than \(\R^n\) are involved, the kernel of a linear transformation is a subspace of the domain of the transformation.
Theorem 4.3.26.

Let \(T:V\rightarrow W\) be a linear transformation. Then \(\mbox{ker}(T)\) is a subspace of \(V\text{.}\)

Proof.
To show that \(\mbox{ker}(T)\) is a subspace, we need to show that \(\mbox{ker}(T)\) is non-empty and is closed under addition and scalar multiplication. Since \(T(\mathbf{0})=\mathbf{0}\text{,}\) we have that \(\mathbf{0}\) is in \(\mbox{ker}(T)\text{.}\) Suppose that \(\mathbf{v}_1\) and \(\mathbf{v}_2\) are in \(\mbox{ker}(T)\text{.}\) Then,
\begin{equation*} T(\mathbf{v}_1+\mathbf{v}_2)=T(\mathbf{v}_1)+T(\mathbf{v}_2)=\mathbf{0}+\mathbf{0}=\mathbf{0}. \end{equation*}
This shows that \(\mathbf{v}_1+\mathbf{v}_2\) is in \(\mbox{ker}(T)\text{.}\) For any scalar \(a\) we have:
\begin{equation*} T(a\mathbf{v}_1)=aT(\mathbf{v}_1)=a\mathbf{0}=\mathbf{0}. \end{equation*}
This shows that \(a\mathbf{v}_1\) is in \(\mbox{ker}(T)\text{.}\)
Definition 4.3.27.
The nullity of a linear transformation \(T:V\rightarrow W\text{,}\) is the dimension of the kernel of \(T\text{.}\)
\begin{equation*} \mbox{nullity}(T)=\mbox{dim}(\mbox{ker}(T)). \end{equation*}
This definition gives us the following relationship between the nullity of a linear transformation \(T:\R^n\rightarrow\R^m\) and the nullity of the standard matrix \(A\) associated with it.

Corollary 4.3.28.

Let \(T:\R^n\rightarrow\R^m\) be a linear transformation with standard matrix \(A\text{.}\) Then
\begin{equation*} \mbox{nullity}(T)=\mbox{nullity}(A). \end{equation*}

Subsubsection 4.3.5.3 Rank-Nullity Theorem for Linear Transformations

In Example 4.3.20 and Example 4.3.25, we found the image and the kernel of the linear transformation \(T:\R^5\rightarrow \R^4\) with standard matrix
\begin{equation*} A=\begin{bmatrix}1 \amp 2 \amp 2 \amp -1 \amp 0\\-1 \amp 3 \amp 1 \amp 0 \amp -1\\3 \amp 0 \amp 0 \amp 3 \amp 6\\ 1 \amp -1 \amp 1 \amp -2 \amp -1\end{bmatrix}. \end{equation*}
We also found that
\begin{equation*} \mbox{rank}(T)=\mbox{dim}(\mbox{im}(T))=\mbox{dim}(\mbox{col}(A))=\mbox{rank}(A)=3 \end{equation*}
and
\begin{equation*} \mbox{nullity}(T)=\mbox{dim}(\mbox{ker}(T))=\mbox{dim}(\mbox{null}(A))=\mbox{nullity}(A)=2. \end{equation*}
Because of the Rank-Nullity Theorem for matrices (Theorem 4.3.17), it is not surprising that
\begin{equation*} \mbox{rank}(T)+\mbox{nullity}(T)=3+2=5=\mbox{dim}(\R^5). \end{equation*}
The following theorem is a generalization of this result.
Theorem 4.3.29. Rank-Nullity Theorem for Linear Transformations.

Let \(T:V\rightarrow W\) be a linear transformation, where \(V\) is finite-dimensional. Then
\begin{equation*} \mbox{rank}(T)+\mbox{nullity}(T)=\mbox{dim}(V). \end{equation*}

Proof.
By Theorem 4.3.21, \(\mbox{im}(T)\) is a subspace of \(W\text{.}\) There exists a basis for \(\mbox{im}(T)\) of the form \(\{T(\mathbf{v}_1), \ldots,T(\mathbf{v}_r)\}\text{.}\) By Theorem 4.3.26, \(\mbox{ker}(T)\) is a subspace of \(V\text{.}\) Let \(\{\mathbf{u}_1,\ldots,\mathbf{u}_s\}\) be a basis for \(\mbox{ker}(T)\text{.}\) We will show that \(\{\mathbf{u}_1,\ldots ,\mathbf{u}_s, \mathbf{v}_1,\ldots ,\mathbf{v}_r\}\) is a basis for \(V\text{.}\) For any vector \(\mathbf{v}\) in \(V\text{,}\) we have:
\begin{equation*} T(\mathbf{v})=c_1T(\mathbf{v}_1)+\ldots +c_rT(\mathbf{v}_r) \end{equation*}
for some scalars \(c_i\) \((1\leq i\leq r)\text{.}\) Thus,
\begin{equation*} T(\mathbf{v})-\big(c_1T(\mathbf{v}_1)+\ldots +c_rT(\mathbf{v}_r)\big)=\mathbf{0}. \end{equation*}
By linearity,
\begin{equation*} T\big(\mathbf{v}-(c_1\mathbf{v}_1+\ldots +c_r\mathbf{v}_r)\big)=\mathbf{0}. \end{equation*}
Therefore \(\mathbf{v}-(c_1\mathbf{v}_1+\ldots +c_r\mathbf{v}_r)\) is in \(\mbox{ker}(T)\text{.}\) Hence there are scalars \(a_i\) \((1\leq i\leq s)\) such that
\begin{equation*} \mathbf{v}-(c_1\mathbf{v}_1+\ldots +c_r\mathbf{v}_r)=a_1\mathbf{u}_1+\ldots +a_s\mathbf{u}_s. \end{equation*}
Thus,
\begin{equation*} \mathbf{v}=(c_1\mathbf{v}_1+\ldots +c_r\mathbf{v}_r)+(a_1\mathbf{u}_1+\ldots +a_s\mathbf{u}_s). \end{equation*}
We conclude that
\begin{equation*} V=\mbox{span}(\mathbf{u}_1,\ldots ,\mathbf{u}_s, \mathbf{v}_1,\ldots ,\mathbf{v}_r). \end{equation*}
Now we need to show that \(\{\mathbf{u}_1,\ldots ,\mathbf{u}_s, \mathbf{v}_1,\ldots ,\mathbf{v}_r\}\) is linearly independent. Suppose
\begin{equation} c_1\mathbf{v}_1+\ldots +c_r\mathbf{v}_r+a_1\mathbf{u}_1+\ldots +a_s\mathbf{u}_s=\mathbf{0}.\tag{4.3.6} \end{equation}
Applying \(T\) to both sides, we get
\begin{equation*} T(c_1\mathbf{v}_1+\ldots +c_r\mathbf{v}_r+a_1\mathbf{u}_1+\ldots +a_s\mathbf{u}_s)=T(\mathbf{0}), \end{equation*}
\begin{equation*} c_1T(\mathbf{v}_1)+\ldots +c_rT(\mathbf{v}_r)+a_1T(\mathbf{u}_1)+\ldots +a_sT(\mathbf{u}_s)=\mathbf{0}. \end{equation*}
But \(T(\mathbf{u}_i)=\mathbf{0}\) for \(1\leq i\leq s\text{,}\) thus
\begin{equation*} c_1T(\mathbf{v}_1)+\ldots +c_rT(\mathbf{v}_r)=\mathbf{0}. \end{equation*}
Since \(\{T(\mathbf{v}_1),\ldots ,T(\mathbf{v}_r)\}\) is linearly independent, it follows that each \(c_i=0\text{.}\) But then (4.3.6) implies that \(a_1\mathbf{u}_1+\ldots +a_s\mathbf{u}_s=\mathbf{0}\text{.}\) Because \(\{\mathbf{u}_1, \ldots ,\mathbf{u}_s\}\) is linearly independent, it follows that each \(a_i=0\text{.}\) We conclude that \(\{\mathbf{u}_1,\ldots ,\mathbf{u}_s,\mathbf{v}_1,\ldots ,\mathbf{v}_r\}\) is a basis for \(V\text{.}\) Thus,
\begin{equation*} \mbox{dim}(V)=r+s=\mbox{dim}(\mbox{im}(T))+\mbox{dim}(\mbox{ker}(T))=\mbox{rank}(T)+\mbox{nullity}(T). \end{equation*}

Exercises 4.3.6 Exercises

Exercise Group.

In the following four problems, the matrix \(A\) given below is studied.
\begin{equation} A=\begin{bmatrix}2\amp 0\amp 2\amp 4\\1\amp 3\amp -2\amp -1\\-1\amp -2\amp 1\amp 0\end{bmatrix}.\tag{4.3.7} \end{equation}
1.
Find \(\mbox{rref}(A)\text{.}\)
Answer.
\begin{equation*} \mbox{rref}(A)=\begin{bmatrix}1\amp 0\amp 1\amp 2\\0\amp 1\amp -1\amp -1\\0\amp 0\amp 0\amp 0\end{bmatrix}. \end{equation*}
2.
Compute \(\text{rank}(A)\text{,}\) \(\text{dim}(\text{row}(A))\text{,}\) and \(\text{dim}(\text{col}(A))\text{.}\)
Answer.
\begin{equation*} \mbox{rank}(A)=\mbox{dim}(\mbox{row}(A))=\mbox{dim}(\mbox{col}(A))=2. \end{equation*}
3.
Use \(\mbox{rref}(A)\) and the procedure outlined in Example 4.3.5 to find a basis for \(\mbox{row}(A)\text{.}\)
Answer.
A basis for \(\mbox{row}(A)\) is
\begin{equation*} \left\{\begin{bmatrix}1\amp 0\amp 1 \amp 2\end{bmatrix},\begin{bmatrix}0 \amp 1\amp -1 \amp -1\end{bmatrix} \right\}. \end{equation*}
4.
Use Algorithm 4.3.9 to find a basis for \(\mbox{col}(A)\text{.}\)
Answer.
A basis for \(\mbox{col}(A) \) is
\begin{equation*} \left\{ \begin{bmatrix}2\\1\\-1\end{bmatrix}, \begin{bmatrix}0\\3\\-2\end{bmatrix}\right\}. \end{equation*}

Exercise Group.

In the following four problems, the matrix in question is
\begin{equation} B=\begin{bmatrix}1\amp 2\amp 3\\-1\amp 1\amp 3\\2\amp 0\amp -2\\1\amp -2\amp -5\\0\amp 1\amp 2\end{bmatrix}.\tag{4.3.8} \end{equation}
5.
Find \(\mbox{rref}(B)\text{.}\)
Answer.
\begin{equation*} \mbox{rref}(B)=\begin{bmatrix}1\amp 0\amp -1\\0\amp 1\amp 2\\0\amp 0\amp 0\\0\amp 0\amp 0\\0\amp 0\amp 0\end{bmatrix}. \end{equation*}
6.
Find \(\mbox{rank}(B)\text{,}\) \(\mbox{dim}(\mbox{row}(B))\text{,}\) and \(\mbox{dim}(\mbox{col}(B))\text{.}\)
Answer.
The answer is \(\mbox{rank}(B)=\mbox{dim}(\mbox{row}(B))=\mbox{dim}(\mbox{col}(B)) = 2 \text{.}\)
7.
Use \(\mbox{rref}(B)\) and the procedure outlined in Example 4.3.5 to find a basis for \(\mbox{row}(B)\text{.}\)
Answer.
A basis for \(\mbox{row}(B)\) is
\begin{equation*} \left\{\begin{bmatrix}1\amp 0\amp -1\end{bmatrix},\begin{bmatrix}0 \amp 1\amp 2\end{bmatrix} \right\}. \end{equation*}
8.
Use Algorithm 4.3.9 to find a basis for \(\mbox{col}(B)\text{.}\)
Answer.
A basis for \(\mbox{col}(B)\) is
\begin{equation*} \left\{ \begin{bmatrix}1\\-1\\2\\1\\0\end{bmatrix}, \begin{bmatrix}2\\1\\0\\-2\\1\end{bmatrix}\right\}. \end{equation*}
9.
Prove that \(\mbox{rank}(A)=\mbox{rank}(A^T)\) for any matrix \(A\text{.}\)
10.
Find a basis for \(V\text{,}\) where
\begin{equation*} V=\mbox{span}\left( \begin{bmatrix}1\\0\\2\end{bmatrix}, \begin{bmatrix}-1\\2\\-1\end{bmatrix}, \begin{bmatrix}1\\2\\3\end{bmatrix}, \begin{bmatrix}3\\-1\\0\end{bmatrix}, \begin{bmatrix}3\\1\\1\end{bmatrix}\right). \end{equation*}
Hint.
Find a basis for the column space of a matrix whose columns are the given vectors.

Exercise Group.

In the next two problems, \(A\) and \(B\) refer to matrices of (4.3.7) and (4.3.8).
11.
Find a basis for \(\mbox{null}(A)\text{,}\) demonstrate that the Rank-Nullity Theorem (see Theorem 4.3.17) holds for \(A\text{,}\) and explain how you can quickly tell that the vectors you selected for your basis are linearly independent.
Answer.
Basis for \(\mbox{null}(A) \)
\begin{equation*} \left\{ \begin{bmatrix}-1\\1\\1\\0\end{bmatrix}, \begin{bmatrix}-2\\1\\0\\1\end{bmatrix}\right\}. \end{equation*}
12.
Find a basis for \(\mbox{null}(B)\) and demonstrate that the Rank-Nullity Theorem (see Theorem 4.3.17) holds for \(B\text{.}\)
Answer.
Basis for \(\mbox{null}(B) \text{:}\)
\begin{equation*} \left\{ \begin{bmatrix}1\\-2\\1\end{bmatrix}\right\} \end{equation*}

Exercise Group.

Suppose matrix \(M\) is such that
\begin{equation*} \mbox{rref}(M)=\begin{bmatrix}1\amp 0\amp 2\amp 0\amp 3\amp 1\\0\amp 1\amp -1\amp 0\amp 1\amp -2\\0\amp 0\amp 0\amp 1\amp -2\amp 1\\0\amp 0\amp 0\amp 0\amp 0\amp 0\end{bmatrix} \end{equation*}
13.
Follow the process used in Example 4.3.15 to find a basis for \(\mbox{null}(M)\text{.}\) Explain why the basis elements obtained in this way are linearly independent.
Answer.
Basis of \(\mbox{null}(M):\quad\left\{\begin{bmatrix}-2\\1\\1\\0\\0\\0\end{bmatrix}, \begin{bmatrix}-3\\-1\\0\\2\\1\\0\end{bmatrix}, \begin{bmatrix}-1\\2\\0\\-1\\0\\1\end{bmatrix} \right\}\)
14.
Let \(\mathbf{v}_1,\ldots,\mathbf{v}_6\) denote the columns of \(M\text{.}\) Express \(\mathbf{v}_3\) as a linear combination of \(\mathbf{v}_1\) and \(\mathbf{v}_2\text{.}\)
Answer.
\begin{equation*} \mathbf{v}_3=2\mathbf{v}_1-\mathbf{v}_2 \end{equation*}

15.

Suppose \(A\) is a \(3\times 5\) matrix. Which of the following statements could be true?
  • \(\mbox{dim}(\mbox{col}(A))=5\)
  • \(\mbox{dim}(\mbox{row}(A))=3\)
  • \(\mbox{dim}(\mbox{null}(A))=1\)
  • \(\mbox{dim}(\mbox{null}(A))=2\)
  • \(\mbox{dim}(\mbox{null}(A))=3\)

16.

Suppose \(A\) is a \(7\times 3\) matrix. Which of the following statements could be true?
  • \(\mbox{dim}(\mbox{col}(A))=3\)
  • \(\mbox{dim}(\mbox{row}(A))=3\)
  • \(\mbox{dim} (\mbox{row}(A))=7\)
  • \(\mbox{dim}(\mbox{null}(A))=0 \)
  • \(\mbox{dim}(\mbox{null}(A))=4\)

17.

Complete the proof of Theorem 4.3.2 by showing that adding a scalar multiple of one row of a matrix to another row does not change the row space.

Exercise Group.

For each matrix \(A\) below, find the domain and codomain of the linear transformation \(T:\R^n\rightarrow\R^m\) induced by \(A\text{;}\) then find and draw the image of \(T\text{.}\) (Hint: See Example 2.6.13.)
18.
\begin{equation*} A=\begin{bmatrix}0\amp 0\\1\amp 1\\2\amp 0\end{bmatrix}. \end{equation*}
Answer.
Domain: \(\R^n\text{,}\) where \(n=2\text{.}\)
Codomain: \(\R^m\text{,}\) where \(m=3\text{.}\)
The image of \(T\) is the plane in \(\R^3\) spanned by the columns \([0,1,2]\) and \([0,1,0]\text{.}\)
19.
\begin{equation*} A=\begin{bmatrix}3\amp -1\\-3\amp 1\end{bmatrix}. \end{equation*}

Exercise Group.

Describe the image and find the rank for each linear transformation \(T:\R^n\rightarrow \R^m\) with standard matrix \(A\) given below.
20.
\(T:\R^5\rightarrow \R^2\text{,}\)
\begin{equation*} A=\begin{bmatrix}3\amp 2\amp 4\amp 7\amp 1\\-1\amp -9\amp 7\amp 6\amp 8\end{bmatrix}. \end{equation*}
  • \(\mbox{im}(T)=\R^2.\)
  • \(\mbox{im}(T)\) is a line in \(\R^2\text{.}\)
  • \(\mbox{im} (T)=\{\mathbf{0}\}\text{.}\)
  • \(\mbox{im}(T)=\R^5\text{.}\)
  • \(\mbox{im} (T)\) is a plane in \(\R^5\text{.}\)
Answer.
\(\mbox{rank}(T)=2\)
21.
\(T:\R^2\rightarrow\R^3\text{,}\)
\begin{equation*} A=\begin{bmatrix}1\amp 1\\1\amp 1\\1\amp 1\end{bmatrix} \end{equation*}
  • \(\mbox{im}(T)=\R^3\text{.}\)
  • \(\mbox{im}(T)\) is a line in \(\R^2\text{.}\)
  • \(\mbox{im} (T)\) is a line in \(\R^3\text{.}\)
  • \(\mbox{im}(T)=\{\mathbf{0}\}\text{.}\)
  • \(\mbox{im}(T)\) is a plane in \(\R^3\text{.}\)
Answer.
\(\mbox{rank}(T)=1\text{.}\)

22.

Suppose linear transformations \(T:\R^2\rightarrow \R^2\) and \(S:\R^2\rightarrow \R^2\) are such that
\begin{equation*} \mbox{im}(T)=\mbox{im}(S)=\mbox{span}\left(\begin{bmatrix}1\\-3\end{bmatrix}\right). \end{equation*}
Does this mean that \(T\) and \(S\) are the same transformation? Justify your claim.

Exercise Group.

Describe the kernel and find the nullity for each linear transformation \(T:\R^n\rightarrow \R^m\) with standard matrix \(A\) given below.
23.
\(T:\R^3\rightarrow \R^2\text{,}\)
\begin{equation*} A=\begin{bmatrix}2\amp 1\amp 0\\-1\amp 1\amp -3\end{bmatrix}. \end{equation*}
  • \(\mbox{ker}(T)=\R^3\text{.}\)
  • \(\mbox{ker}(T)=\{\mathbf{0}\}\text{.}\)
  • \(\mbox{ker}(T)=\R^2\text{.}\)
  • \(\mbox{ker}(T)\) is a plane in \(\R^3\text{.}\)
  • \(\mbox{ker}(T)\) is a line in \(\R^3\text{.}\)
Answer.
\(\mbox{nullity}(T)=1\)
24.
\(T:\R^2\rightarrow \R^2\text{,}\)
\begin{equation*} A=\begin{bmatrix}2\amp -1\\3\amp 0\end{bmatrix}. \end{equation*}
  • \(\mbox{ker}(T)=\R^2\text{.}\)
  • \(\mbox{ker}(T)=\{\mathbf{0}\}\text{.}\)
  • \(\mbox{ker}(T)\) is a line in \(\R^2\text{.}\)
Answer.
\(\mbox{nullity}(T)=0\)
25.
\(T:\R^3\rightarrow \R^5\text{,}\)
\begin{equation*} A=\begin{bmatrix}1\amp 2\amp -1\\1\amp 2\amp -1\\1\amp 2\amp -1\\1\amp 2\amp -1\\1\amp 2\amp -1\end{bmatrix}. \end{equation*}
  • \(\mbox{ker}(T)\) is a plane in \(\R^3\text{.}\)
  • \(\mbox{ker}(T)\) is a line in \(\R^3\text{.}\)
  • \(\mbox{ker} (T)\) is a line in \(\R^5\text{.}\)
  • \(\mbox{ker}(T)=\R^3\text{.}\)
  • \(\mbox{ker}(T)=\{\mathbf{0}\}\text{.}\)
Answer.
\(\mbox{nullity}(T)=2\text{.}\)

26.

Suppose a linear transformation \(T:\R^3\rightarrow \R^3\) is such that \(\mbox{im}(T)\) is a plane in \(\R^3\text{.}\) What are the rank and nullity of \(T\text{?}\)
Answer.
\begin{equation*} \mbox{rank}(T)=2 \end{equation*}
\begin{equation*} \mbox{nullity}(T)=1 \end{equation*}

27.

Suppose a linear transformation \(T:\R^5\rightarrow \R^5\) is such that \(T(\mathbf{v})=\mathbf{0}\) for all \(\mathbf{v}\) in \(\R^5\text{.}\) What are the rank and nullity of \(T\text{?}\)
Answer.
\begin{equation*} \mbox{rank}(T)=0 \end{equation*}
\begin{equation*} \mbox{nullity}(T)=5 \end{equation*}

28.

Let \(T:\R^6\rightarrow \R^4\) be a linear transformation with standard matrix
\begin{equation*} A=\begin{bmatrix}2\amp -1\amp 1\amp -2\amp 1\amp 1\\1\amp 2\amp 3\amp 6\amp -4\amp 1\\0\amp 2\amp 2\amp 4\amp -2\amp -1\\1\amp 3\amp 2\amp 6\amp -3\amp 2\end{bmatrix} \end{equation*}
Find \(\mbox{im}(T)\) and \(\mbox{ker}(T)\) if the reduced row-echelon form of \(A\) is
\begin{equation*} \text{rref}(A)=\begin{bmatrix}1\amp 0\amp 0\amp 1\amp -1\amp 0\\0\amp 1\amp 0\amp 1\amp 0\amp 0\\0\amp 0\amp 1\amp 1\amp -1\amp 0\\0\amp 0\amp 0\amp 0\amp 0\amp 1\end{bmatrix} \end{equation*}

29.

Let
\begin{equation*} V=\mbox{span}\left(\begin{bmatrix}1\\1\end{bmatrix}\right) \end{equation*}
and let \(T:V\rightarrow \R^2\) be a linear transformation defined by \(T(\mathbf{v})=2\mathbf{v}\text{.}\) Find \(\mbox{im}(T)\) and \(\mbox{ker}(T)\text{.}\)

30.

Suppose a linear transformation \(T\) is induced by a \(4\times 6\) matrix \(A\text{.}\) Let \(S\) be a linear transformation induced by \(A^T\text{.}\) Find \(\mbox{nullity}(S)\text{,}\) if \(\mbox{nullity}(T)=3\text{.}\) Prove your claim.