
Coordinated Linear Algebra

Section 9.2 Linear Transformations of Abstract Vector Spaces

Recall that a transformation \(T:\mathbb{R}^n\rightarrow \mathbb{R}^m\) is called a linear transformation if the following are true for all vectors \(u\) and \(v\) in \(\mathbb{R}^n\text{,}\) and scalars \(k\text{.}\)
\begin{equation*} T(ku)= kT(u), \end{equation*}
\begin{equation*} T(u+v)= T(u)+T(v). \end{equation*}
We generalize this definition as follows.

Definition 9.2.1.

Let \(V\) and \(W\) be vector spaces. A transformation \(T:V\rightarrow W\) is called a linear transformation if the following are true for all vectors \(u\) and \(v\) in \(V\text{,}\) and scalars \(k\text{.}\)
\begin{equation*} T(k u) = kT(u), \end{equation*}
\begin{equation*} T(u+v) = T(u)+T(v). \end{equation*}
This generalization allows for more interesting examples to be studied. For example:

Example 9.2.2.

Recall that \(\mathbb{M}_{n,n}\) is the set of all \(n\times n\) matrices. In Example 9.1.2, we demonstrated that \(\mathbb{M}_{n,n}\) together with operations of matrix addition and scalar multiplication is a vector space. Let \(T_Q:\mathbb{M}_{n,n}\rightarrow \mathbb{M}_{n,n}\) be a transformation defined by
\begin{equation*} T_Q(A)=QA, \end{equation*}
where \(Q\) is a fixed \(n\times n\) matrix. Show that \(T_Q\) is a linear transformation.
Answer.
We verify the linearity properties using properties of matrix-matrix and matrix-scalar multiplication (see Theorem 4.1.20). For \(A\) and \(B\) in \(\mathbb{M}_{n,n}\) and a scalar \(k\) we have:
\begin{equation*} T_Q(kA)=Q(kA)=k(QA)=kT_Q(A) \end{equation*}
together with
\begin{equation*} T_Q(A+B)=Q(A+B)=QA+QB=T_Q(A)+T_Q(B). \end{equation*}
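As a quick numerical sanity check (a minimal sketch, not part of the argument above, assuming the NumPy library is available), we can test both properties on randomly chosen matrices:

import numpy as np

np.random.seed(0)
n = 3
Q = np.random.rand(n, n)          # the fixed matrix defining T_Q
A = np.random.rand(n, n)
B = np.random.rand(n, n)
k = 2.5

def T_Q(X):
    return Q @ X                  # T_Q(X) = QX

print(np.allclose(T_Q(k * A), k * T_Q(A)))        # True: T_Q(kA) = k T_Q(A)
print(np.allclose(T_Q(A + B), T_Q(A) + T_Q(B)))   # True: T_Q(A+B) = T_Q(A) + T_Q(B)

Of course, such a check only illustrates the properties for particular matrices; the computation above is what proves them in general.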

Example 9.2.3.

Recall that \(\mathbb{P}^3\) is the set of polynomials of degree \(3\) or less. In Example 9.1.9, we showed that \(\mathbb{P}^3\) together with operations of polynomial addition and scalar multiplication is a vector space. Suppose \(T:\R^3\rightarrow\mathbb{P}^3\) is a linear transformation such that
\begin{equation*} T(\mathbf{i})=1+x-2x^2+x^3, \end{equation*}
\begin{equation*} T(\mathbf{j})=x+2x^3, \end{equation*}
\begin{equation*} T(\mathbf{k})=3+x^3. \end{equation*}
Find the image of \(\begin{bmatrix}1\\-2\\1\end{bmatrix}\) under \(T\text{.}\)
Answer.
\begin{align*} T\left(\begin{bmatrix}1\\-2\\1\end{bmatrix}\right)\amp =T(\mathbf{i}-2\mathbf{j}+\mathbf{k})=T(\mathbf{i})-2T(\mathbf{j})+T(\mathbf{k}) \\ \amp =(1+x-2x^2+x^3)-2(x+2x^3)+(3+x^3) \\ \amp =4-x-2x^2-2x^3. \end{align*}
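The polynomial arithmetic above can be double-checked with a short sketch (offered only as an illustration, assuming the SymPy library is available):

import sympy as sp

x = sp.symbols('x')
Ti = 1 + x - 2*x**2 + x**3    # T(i)
Tj = x + 2*x**3               # T(j)
Tk = 3 + x**3                 # T(k)

# image of [1, -2, 1] = i - 2j + k under T
print(sp.expand(Ti - 2*Tj + Tk))   # -2*x**3 - 2*x**2 - x + 4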

Example 9.2.4.

Let \(T:\mathbb{M}_{3,3}\rightarrow \R\) be a transformation such that \(T(A)=\mbox{rank}(A)\text{.}\) Show that \(T\) is not linear.
Answer.
To show that \(T\) is not linear it suffices to find two matrices \(A\) and \(B\) such that \(T(A+B)\neq T(A)+T(B)\text{.}\)
Observe that if we pick \(A\) and \(B\) so that each has rank \(3\) we would have \(T(A)+T(B)=\mbox{rank}(A)+\mbox{rank}(B)=6\) while \(T(A+B)=\mbox{rank}(A+B)\leq 3\text{.}\) Clearly \(T(A+B)\neq T(A)+T(B)\text{.}\)
This argument is sufficient, but if we want to have a specific example, we can find one.
Let
\begin{equation*} A=\begin{bmatrix}1\amp 0\amp 0\\0\amp 1\amp 0\\0\amp 0\amp 1\end{bmatrix} \quad\text{and}\quad B=\begin{bmatrix}-1\amp 0\amp 0\\0\amp 1\amp 0\\0\amp 0\amp -1\end{bmatrix}. \end{equation*}
Then
\begin{equation*} T(A)=3\quad\text{and}\quad T(B)=3 \end{equation*}
and
\begin{equation*} T(A+B)=T\left(\begin{bmatrix}0\amp 0\amp 0\\0\amp 2\amp 0\\0\amp 0\amp 0\end{bmatrix}\right)=1. \end{equation*}
Thus, \(1=T(A+B)\neq T(A)+T(B)=6\text{.}\)
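The same counterexample is easy to verify numerically (a minimal sketch, assuming NumPy is available):

import numpy as np

A = np.eye(3)
B = np.diag([-1.0, 1.0, -1.0])

rank = np.linalg.matrix_rank
print(rank(A) + rank(B))   # 6
print(rank(A + B))         # 1, so T(A+B) is not equal to T(A) + T(B)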

Subsection 9.2.1 Linear Transformations and Bases

Exploration 9.2.1.

Suppose we want to define a linear transformation \(T:\R^2\rightarrow \R^2\) by
\begin{equation*} T(\mathbf{i})=\begin{bmatrix}3\\-2\end{bmatrix}\quad\text{and}\quad T(\mathbf{j})=\begin{bmatrix}-1\\2\end{bmatrix}. \end{equation*}
Is this information sufficient to define \(T\text{?}\) To answer this question we will try to determine what \(T\) does to an arbitrary vector of \(\R^2\text{.}\) If \(\mathbf{v}\) is a vector in \(\R^2\text{,}\) then \(\mathbf{v}\) can be uniquely expressed as a linear combination of \(\mathbf{i}\) and \(\mathbf{j}\)
\begin{equation*} \mathbf{v}=a\mathbf{i}+b\mathbf{j}. \end{equation*}
By linearity of \(T\) we have
\begin{equation*} T(\mathbf{v})=T(a\mathbf{i}+b\mathbf{j})=aT(\mathbf{i})+bT(\mathbf{j})=a\begin{bmatrix}3\\-2\end{bmatrix}+b\begin{bmatrix}-1\\2\end{bmatrix}. \end{equation*}
This shows that the image of every vector of \(\R^2\) under \(T\) is completely determined by the action of \(T\) on the standard unit vectors \(\mathbf{i}\) and \(\mathbf{j}\text{.}\) Vectors \(\mathbf{i}\) and \(\mathbf{j}\) form a standard basis of \(\R^2\text{.}\) What if we want to use a different basis? Let
\begin{equation*} \mathcal{B}=\left \lbrace \begin{bmatrix}1\\1\end{bmatrix},\begin{bmatrix}2\\-1\end{bmatrix}\right \rbrace \end{equation*}
be our basis of choice for \(\R^2\text{.}\) (How would you verify that \(\mathcal{B}\) is a basis of \(\R^2\text{?}\)) And suppose we want to define a linear transformation \(S:\R^2\rightarrow \R^2\) by
\begin{equation*} S\left(\begin{bmatrix}1\\1\end{bmatrix}\right)=\begin{bmatrix}0\\-1\end{bmatrix}\quad\text{and}\quad S\left(\begin{bmatrix}2\\-1\end{bmatrix}\right)=\begin{bmatrix}2\\0\end{bmatrix}. \end{equation*}
Is this enough information to define \(S\text{?}\) Because \([1,1],[2,-1]\) form a basis of \(\R^2\text{,}\) every element \(\mathbf{v}\) of \(\R^2\) can be written as a unique linear combination
\begin{equation*} \mathbf{v}=a\begin{bmatrix}1\\1\end{bmatrix}+b\begin{bmatrix}2\\-1\end{bmatrix}. \end{equation*}
We can find \(S(\mathbf{v})\) as follows:
\begin{equation*} S(\mathbf{v})=S\left(a\begin{bmatrix}1\\1\end{bmatrix}+b\begin{bmatrix}2\\-1\end{bmatrix}\right)=a\begin{bmatrix}0\\-1\end{bmatrix}+b\begin{bmatrix}2\\0\end{bmatrix}. \end{equation*}
Again, we see how a linear transformation is completely determined by its action on a basis. Theorem 9.1.23 assures us that given a basis, every vector has a unique representation as a linear combination of the basis vectors. Imagine what would happen if this were not the case.
In the first part of this exploration, for instance, we might have been able to represent \(\mathbf{v}\) as \(a\mathbf{i}+b\mathbf{j}\) and \(c\mathbf{i}+d\mathbf{j}\) (\(a\neq c\) or \(b\neq d\)). This would have resulted in \(\mathbf{v}\) mapping to two different elements: \(aT(\mathbf{i})+bT(\mathbf{j})\) and \(cT(\mathbf{i})+dT(\mathbf{j})\text{,}\) implying that \(T\) is not even a function.
Let \(\mathcal{B}=\{\mathbf{v}_1,\ldots,\mathbf{v}_p\}\) be a basis of a vector space \(V\text{.}\) To define a linear transformation \(T:V\rightarrow W\text{,}\) it is sufficient to state the image of each basis vector under \(T\text{.}\) Once the images of the basis vectors are established, we can determine the images of all vectors of \(V\) as follows: Given any vector \(\mathbf{v}\) of \(V\text{,}\) write \(\mathbf{v}\) as a linear combination of the elements of \(\mathcal{B}\)
\begin{equation*} \mathbf{v}=a_1\mathbf{v}_1+\ldots+a_p\mathbf{v}_p. \end{equation*}
Then
\begin{equation*} T(\mathbf{v})=T(a_1\mathbf{v}_1+\ldots+a_p\mathbf{v}_p)=a_1T(\mathbf{v}_1)+\ldots+a_pT(\mathbf{v}_p). \end{equation*}
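This recipe translates directly into a computation. The following sketch (an illustration, not part of the text, assuming NumPy is available; the names Bmat and images are our own bookkeeping labels) carries it out for the transformation \(S\) from Exploration 9.2.1: it finds the coordinates of an input vector with respect to \(\mathcal{B}\) and then combines the prescribed images with the same coefficients.

import numpy as np

# columns of Bmat are the basis vectors [1,1] and [2,-1];
# columns of images are S([1,1]) = [0,-1] and S([2,-1]) = [2,0]
Bmat = np.array([[1.0, 2.0],
                 [1.0, -1.0]])
images = np.array([[0.0, 2.0],
                   [-1.0, 0.0]])

def S(v):
    coeffs = np.linalg.solve(Bmat, v)   # write v in the basis B
    return images @ coeffs              # combine the images with the same coefficients

print(S(np.array([3.0, 0.0])))   # [ 2. -1.], since [3,0] = 1*[1,1] + 1*[2,-1]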

Subsection 9.2.2 Coordinate Vectors

Transformations that map vectors to their coordinate vectors will prove to be of great importance. In this section we will prove that such transformations are linear and give several examples. If \(V\) is a vector space and \(\mathcal{B}=\{\mathbf{v}_1, \ldots ,\mathbf{v}_n\}\) is an ordered basis for \(V\text{,}\) then any vector \(\mathbf{v}\) of \(V\) can be uniquely expressed as \(\mathbf{v}=a_1\mathbf{v}_1+\ldots +a_n\mathbf{v}_n\) for some scalars \(a_1, \ldots ,a_n\text{.}\) The vector \([\mathbf{v}]_{\mathcal{B}}\) in \(\R^n\) given by
\begin{equation*} [\mathbf{v}]_{\mathcal{B}}=\begin{bmatrix}a_1\\a_2\\\vdots\\a_n\end{bmatrix} \end{equation*}
is said to be the coordinate vector for \(\mathbf{v}\) with respect to the ordered basis \(\mathcal{B}\) (see Definition 9.1.34). It turns out that the transformation \(T:V\rightarrow \R^n\) defined by
\begin{equation*} T(\mathbf{v})=[\mathbf{v}]_{\mathcal{B}} \end{equation*}
is linear. Before we prove linearity of \(T\text{,}\) consider the following examples.

Example 9.2.5.

Consider \(\mathbb{M}_{2,2}\text{.}\) Let
\begin{equation*} \mathcal{B}=\left\{\begin{bmatrix}1\amp 0\\0\amp 0\end{bmatrix}, \begin{bmatrix}0\amp 1\\0\amp 0\end{bmatrix}, \begin{bmatrix}0\amp 0\\1\amp 0\end{bmatrix}, \begin{bmatrix}0\amp 0\\0\amp 1\end{bmatrix}\right\} \end{equation*}
be an ordered basis for \(\mathbb{M}_{2,2}\) (You should do a quick mental check that \(\mathcal{B}\) is a legitimate basis). Define \(T:\mathbb{M}_{2,2}\rightarrow \R^4\) by \(T(A)=[A]_{\mathcal{B}}\text{.}\) Find
\begin{equation*} T\left(\begin{bmatrix}-2\amp 3\\1\amp -5\end{bmatrix}\right). \end{equation*}
Answer.
We need to find the coordinate vector for \(\begin{bmatrix}-2\amp 3\\1\amp -5\end{bmatrix}\) with respect to \(\mathcal{B}\text{.}\) Firstly,
\begin{equation*} \begin{bmatrix}-2\amp 3\\1\amp -5\end{bmatrix}=-2\begin{bmatrix}1\amp 0\\0\amp 0\end{bmatrix}+ 3\begin{bmatrix}0\amp 1\\0\amp 0\end{bmatrix}+ \begin{bmatrix}0\amp 0\\1\amp 0\end{bmatrix}+ (-5)\begin{bmatrix}0\amp 0\\0\amp 1\end{bmatrix}. \end{equation*}
This gives us:
\begin{equation*} T\left(\begin{bmatrix}-2\amp 3\\1\amp -5\end{bmatrix}\right)=\left[\begin{bmatrix}-2\amp 3\\1\amp -5\end{bmatrix}\right]_{\mathcal{B}}=\begin{bmatrix}-2\\3\\1\\-5\end{bmatrix}. \end{equation*}

Example 9.2.6.

Recall that \(\mathbb{P}^2\) is the set of polynomials of degree \(2\) or less. In Example 9.1.8, we showed that \(\mathbb{P}^2\) is a vector space.
  1. Let \(\mathcal{B}_1=\{1, x, x^{2}\}\) be an ordered basis for \(\mathbb{P}^2\text{.}\) (It is easy to verify that \(\mathcal{B}_1\) is a basis.) If \(T:\mathbb{P}^2\rightarrow \R^3\) is given by \(T(p)=[p]_{\mathcal{B}_1}\text{,}\) find
    \begin{equation*} T(2x^2-3x). \end{equation*}
  2. Let \(\mathcal{B}_2=\{1 + x, 1 - x, x + x^{2}\}\) be an ordered basis for \(\mathbb{P}^2\text{.}\) (In Exercise 9.1.9.16, you demonstrated that \(\mathcal{B}_2\) is a basis.) If \(T:\mathbb{P}^2\rightarrow \R^3\) is given by \(T(p)=[p]_{\mathcal{B}_2}\text{,}\) find
    \begin{equation*} T(2x^2-3x). \end{equation*}
Answer.
For Item 1, we express \(2x^2-3x\) as a linear combination of elements of \(\mathcal{B}_1\text{.}\)
\begin{equation*} 2x^2-3x=0\cdot 1+ (-3)x+2x^2. \end{equation*}
Therefore
\begin{equation*} [2x^2-3x]_{\mathcal{B}_1}=\begin{bmatrix}0\\-3\\2\end{bmatrix}. \end{equation*}
Note that it is important to keep the basis elements in the same order in which they are listed, as the order of components of the coordinate vector depends on the order of the basis elements. We conclude that
\begin{equation*} T(2x^2-3x)=\begin{bmatrix}0\\-3\\2\end{bmatrix}. \end{equation*}
For Item 2, our goal is to express \(2x^2-3x\) as a linear combination of the elements of \(\mathcal{B}_2\text{.}\) Thus, we need to find coefficients \(a\text{,}\) \(b\) and \(c\) such that
\begin{align*} 2x^2-3x \amp =a(1+x)+b(1-x)+c(x+x^2) \\ \amp =(a+b)+(a-b+c)x+cx^2. \end{align*}
This gives us a system of linear equations:
\begin{equation*} \begin{array}{ccccccc} a \amp +\amp b\amp \amp \amp = \amp 0 \\ a\amp -\amp b\amp +\amp c\amp =\amp -3\\ \amp \amp \amp \amp c\amp =\amp 2 \end{array} \end{equation*}
Solving the system yields \(a=-\frac{5}{2}\text{,}\) \(b=\frac{5}{2}\) and \(c=2\text{.}\) Thus
\begin{equation*} T(2x^2-3x)=[2x^2-3x]_{\mathcal{B}_2}=\begin{bmatrix}-5/2\\5/2\\2\end{bmatrix}. \end{equation*}
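The small linear system above can also be solved symbolically. The following sketch (offered only as a check, assuming SymPy is available) recovers the same coefficients:

import sympy as sp

x, a, b, c = sp.symbols('x a b c')

# express 2x^2 - 3x in the ordered basis B2 = {1+x, 1-x, x+x^2}
combo = a*(1 + x) + b*(1 - x) + c*(x + x**2)
eqs = sp.Poly(combo - (2*x**2 - 3*x), x).all_coeffs()   # each coefficient must vanish
print(sp.solve(eqs, [a, b, c]))   # {a: -5/2, b: 5/2, c: 2}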

Theorem 9.2.7.

Let \(V\) be a vector space, and let \(\mathcal{B}=\{\mathbf{v}_1, \ldots ,\mathbf{v}_n\}\) be an ordered basis of \(V\text{.}\) Then the transformation \(T:V\rightarrow \R^n\) defined by \(T(\mathbf{v})=[\mathbf{v}]_{\mathcal{B}}\) is a linear transformation.

Proof.

First observe that Theorem 9.1.23 guarantees that there is only one way to represent each element of \(V\) as a linear combination of elements of \(\mathcal{B}\text{.}\) Thus each element of \(V\) maps to exactly one element of \(\R^n\text{,}\) as long as the order in which the elements of \(\mathcal{B}\) appear is taken into account. This proves that \(T\) is a function, or a transformation.
We will now prove that \(T\) is linear. Let \(\mathbf{v}\) be an element of \(V\text{.}\) We will first show that \(T(k\mathbf{v})=kT(\mathbf{v})\text{.}\) Suppose \(\mathcal{B}=\{\mathbf{v}_1, \ldots ,\mathbf{v}_n\}\text{,}\) then \(\mathbf{v}\) can be written as a unique linear combination:
\begin{equation*} \mathbf{v}=a_1\mathbf{v}_1+ \ldots +a_n\mathbf{v}_n \end{equation*}
We have:
\begin{align*} T(k\mathbf{v})\amp =T(k(a_1\mathbf{v}_1+ \ldots +a_n\mathbf{v}_n)) \\ \amp =T((ka_1)\mathbf{v}_1+ \ldots +(ka_n)\mathbf{v}_n) \\ \amp =\begin{bmatrix}ka_1\\\vdots\\ka_n\end{bmatrix}=k\begin{bmatrix}a_1\\\vdots\\a_n\end{bmatrix}=kT(\mathbf{v}). \end{align*}
We leave it to the reader to verify that \(T(\mathbf{v}+\mathbf{w})=T(\mathbf{v})+T(\mathbf{w})\) (see Exercise 9.2.6.15).
In the final example of this subsection, we will consider \(T\) in the context of a basis of the codomain, as well as a basis of the domain. This will later help us tackle the question of the matrix of \(T\) associated with bases other than the standard one.

Example 9.2.8.

Let
\begin{equation*} \mathbf{v}_1=\begin{bmatrix}1\\2\\0\end{bmatrix}\quad\text{and}\quad\mathbf{v}_2=\begin{bmatrix}0\\1\\1\end{bmatrix}, \end{equation*}
\begin{equation*} \mathbf{w}_1=\begin{bmatrix}1\\0\\1\end{bmatrix}\quad\text{and}\quad\mathbf{w}_2=\begin{bmatrix}1\\0\\0\end{bmatrix}, \end{equation*}
and
\begin{equation*} V=\text{span}(\mathbf{v}_1, \mathbf{v}_2)\quad\text{and}\quad W=\text{span}(\mathbf{w}_1, \mathbf{w}_2). \end{equation*}
Because each of \(\{\mathbf{v}_1, \mathbf{v}_2\}\) and \(\{\mathbf{w}_1, \mathbf{w}_2\}\) is linearly independent, let
\begin{equation*} \mathcal{B}=\{\mathbf{v}_1, \mathbf{v}_2\}\quad\text{and}\quad\mathcal{C}=\{\mathbf{w}_1, \mathbf{w}_2\} \end{equation*}
be ordered bases of \(V\) and \(W\text{,}\) respectively. Define a linear transformation \(T:V\rightarrow W\) by
\begin{equation*} T(\mathbf{v}_1)=2\mathbf{w}_1-3\mathbf{w}_2\quad\text{and} \quad T(\mathbf{v}_2)=-\mathbf{w}_1+4\mathbf{w}_2. \end{equation*}
  1. Verify that \(\mathbf{v}=[2,5,1]\) is in \(V\) and find the coordinate vector \([\mathbf{v}]_{\mathcal{B}}\text{.}\)
  2. Find \(T(\mathbf{v})\) and the coordinate vector \([T(\mathbf{v})]_{\mathcal{C}}\text{.}\)
Answer.
For Item 1, we need to express \(\mathbf{v}\) as a linear combination of \(\mathbf{v}_1\) and \(\mathbf{v}_2\text{.}\) This can be done by observation or by solving the equation
\begin{equation*} \begin{bmatrix}1\amp 0\\2\amp 1\\0\amp 1\end{bmatrix}\begin{bmatrix}a\\b\end{bmatrix}=\begin{bmatrix}2\\5\\1\end{bmatrix}. \end{equation*}
We find that \(a=2\) and \(b=1\text{,}\) so \(\mathbf{v}=2\mathbf{v}_1+\mathbf{v}_2\text{.}\) Thus \(\mathbf{v}\) is in \(V\text{.}\) The coordinate vector for \(\mathbf{v}\) with respect to the ordered basis \(\mathcal{B}\) is
\begin{equation*} [\mathbf{v}]_{\mathcal{B}}=\begin{bmatrix}2\\1\end{bmatrix}. \end{equation*}
For Item 2, by linearity of \(T\) we have
\begin{align*} T(\mathbf{v})=T(2\mathbf{v}_1+\mathbf{v}_2)\amp =2T(\mathbf{v}_1)+T(\mathbf{v}_2) \\ \amp =2(2\mathbf{w}_1-3\mathbf{w}_2)+(-\mathbf{w}_1+4\mathbf{w}_2) \\ \amp =3\mathbf{w}_1-2\mathbf{w}_2=\begin{bmatrix}1\\0\\3\end{bmatrix}. \end{align*}
The coordinate vector for \(T(\mathbf{v})\) with respect to the ordered basis \(\mathcal{C}\) is
\begin{equation*} [T(\mathbf{v})]_{\mathcal{C}}=\begin{bmatrix}3\\-2\end{bmatrix}. \end{equation*}
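For readers who like to verify such computations, here is a minimal NumPy sketch (an illustration, not part of the example; Bmat is our own label) that reproduces both coordinate vectors:

import numpy as np

v1, v2 = np.array([1., 2., 0.]), np.array([0., 1., 1.])
w1, w2 = np.array([1., 0., 1.]), np.array([1., 0., 0.])
v = np.array([2., 5., 1.])

# coordinates of v with respect to B = {v1, v2}; a zero residual confirms that v lies in V
Bmat = np.column_stack([v1, v2])
coords, residual, *_ = np.linalg.lstsq(Bmat, v, rcond=None)
print(coords, residual)    # [2. 1.] [0.]

# apply T using T(v1) = 2w1 - 3w2 and T(v2) = -w1 + 4w2
a, b = coords
print(a*(2*w1 - 3*w2) + b*(-w1 + 4*w2))   # [1. 0. 3.], i.e. 3w1 - 2w2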

Subsection 9.2.3 Inverses of Linear Transformations

In Exploration 6.3.1, we examined a linear transformation \(T:\R^2\rightarrow \R^2\) that doubles all input vectors, and its inverse \(S:\R^2\rightarrow \R^2\text{,}\) that halves all input vectors. We observed that the composite functions \(S\circ T\) and \(T\circ S\) are both identity transformations. Diagrammatically, we can represent \(T\) and \(S\) as follows:
[Diagram: the idea of an inverse, with \(S\) reversing the arrows of \(T\text{.}\)]
This gives us a way of thinking about an inverse of \(T\) as a transformation that ``undoes" the action of \(T\) by ``reversing" the mapping arrows. We will now use these intuitive ideas to understand which linear transformations are invertible and which are not.
Given an arbitrary linear transformation \(T:V\rightarrow W\text{,}\) ``reversing the arrows" may not always result in a transformation. Recall that transformations are functions. The figures below show two ways in which our attempt to ``reverse" \(T\) may fail to produce a function. First, if two distinct vectors \(\mathbf{v}_1\) and \(\mathbf{v}_2\) map to the same vector \(\mathbf{w}\) in \(W\text{,}\) then reversing the arrows gives us a mapping that is clearly not a function.
Second, observe that our definition of an inverse of \(T:V\rightarrow W\) requires that the domain of the inverse transformation be \(W\) (remember, the inverse intuitively reverses the direction of \(T\)). If there is a vector \(\mathbf{b}\) in \(W\) that is not an image of any vector in \(V\text{,}\) then \(\mathbf{b}\) cannot be in the domain of an inverse transformation.
We now illustrate these potential issues with specific examples.

Example 9.2.9.

Let \(T:\R^2\rightarrow \R^2\) be a linear transformation whose standard matrix is
\begin{equation*} \begin{bmatrix}1\amp 1\\2\amp 2\end{bmatrix}. \end{equation*}
Does \(T\) have an inverse? Show that multiple vectors of the domain map to \(\mathbf{0}\) in the codomain.
Answer.
The matrix
\begin{equation*} \begin{bmatrix}1\amp 1\\2\amp 2\end{bmatrix} \end{equation*}
is not invertible, so \(T\) does not have an inverse. We now dig a little deeper to get additional insights into why \(T\) does not have an inverse. Observe that all vectors of the form \([k,-k]\) map to \(\mathbf{0}\text{.}\) To verify this, use matrix multiplication:
\begin{equation*} \begin{bmatrix}1\amp 1\\2\amp 2\end{bmatrix}\begin{bmatrix}k\\-k\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}. \end{equation*}
This shows that there are infinitely many vectors that map to \(\mathbf{0}\text{.}\) So, ``reversing the arrows" would not result in a function. (See Figure 1)
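A one-line computation confirms this for several values of \(k\) (a sketch assuming NumPy is available):

import numpy as np

A = np.array([[1., 1.],
              [2., 2.]])
for k in (1., -2., 5.):
    print(A @ np.array([k, -k]))   # [0. 0.] for every k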

Example 9.2.10.

Let \(T:\R^2\rightarrow \R^3\) be a linear transformation whose standard matrix is
\begin{equation*} \begin{bmatrix}1\amp 0\\0\amp 1\\2\amp 0\end{bmatrix} \end{equation*}
Does \(T\) have an inverse? Show that there exists a vector \(\mathbf{b}\) in \(\R^3\) such that no vector of \(\R^2\) maps to \(\mathbf{b}\text{.}\)
Answer.
The matrix
\begin{equation*} \begin{bmatrix}1\amp 0\\0\amp 1\\2\amp 0\end{bmatrix} \end{equation*}
is not invertible (it’s not even a square matrix!), so \(T\) does not have an inverse. We now get another insight into why \(T\) is not invertible. To find a vector \(\mathbf{b}\) such that no vector of \(\R^2\) maps to \(\mathbf{b}\text{,}\) we need to find \(\mathbf{b}\) for which the matrix equation
\begin{equation} \begin{bmatrix}1\amp 0\\0\amp 1\\2\amp 0\end{bmatrix}\mathbf{x}=\mathbf{b}\tag{9.2.1} \end{equation}
has no solution.
Let \([b_1, b_2, b_3]\text{.}\) Gauss-Jordan elimination yields:
\begin{equation*} \left[\begin{array}{cc|c} 1 \amp 0 \amp b_1\\ 0 \amp 1 \amp b_2\\ 2 \amp 0 \amp b_3 \end{array}\right] \rightsquigarrow \left[\begin{array}{cc|c} 1 \amp 0 \amp b_1\\ 0 \amp 1 \amp b_2\\ 0 \amp 0 \amp b_3-2b_1 \end{array}\right] \end{equation*}
Now, (9.2.1) has a solution if and only if \(b_3-2b_1=0\text{.}\) Since we do not want (9.2.1) to have a solution, all we need to do is pick values \(b_1\text{,}\) \(b_2\) and \(b_3\) such that \(b_3-2b_1\neq 0\text{.}\) Let \(\mathbf{b}=[1,1,1]\text{.}\) Then no element of \(\R^2\) maps to \(\mathbf{b}\text{.}\) This shows that we cannot ``reverse the arrows" in an attempt to produce an inverse of \(T\text{.}\) (See Figure 2)
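We can confirm the inconsistency for \(\mathbf{b}=[1,1,1]\) by row reducing the augmented matrix (a minimal sketch, offered only as a check, assuming SymPy is available):

import sympy as sp

A = sp.Matrix([[1, 0], [0, 1], [2, 0]])
b = sp.Matrix([1, 1, 1])

aug, pivots = A.row_join(b).rref()   # row reduce [A | b]
print(aug)      # last row is [0, 0, 1]: the system Ax = b is inconsistent
print(pivots)   # (0, 1, 2): a pivot in the augmented column signals no solution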
Our next goal is to develop vocabulary that would allow us to discuss issues illustrated in Figures \(1\) and \(2\text{.}\)

Subsection 9.2.4 One-to-one and Onto Linear Transformations

Figure \(1\) gave us a diagrammatic representation of a transformation that maps two distinct elements, \(\mathbf{v}_1\) and \(\mathbf{v}_2\) to the same element \(\mathbf{w}\text{,}\) making it impossible for us to ``reverse the arrows" in an attempt to find the inverse transformation. Based on this example, it is reasonable to conjecture that for a transformation to be invertible, the transformation must be such that each output is the image of exactly one input. Such transformations are called one-to-one.

Definition 9.2.11. One-to-One.

A linear transformation \(T:V\rightarrow W\) is one-to-one if
\begin{equation*} T(\mathbf{v}_1)=T(\mathbf{v}_2)\quad \text{implies that}\quad \mathbf{v}_1=\mathbf{v}_2. \end{equation*}
The transformation in Figure \(1\) is not one-to-one because \(\mathbf{v}_1\) and \(\mathbf{v}_2\) map to the same vector \(\mathbf{w}\) (i.e. \(T(\mathbf{v}_1)=T(\mathbf{v}_2)\)), yet the diagram suggests that \(\mathbf{v}_1\neq\mathbf{v}_2\text{.}\)
Let us reexamine the previous examples with this new terminology.

Example 9.2.12.

Transformation \(T\) in Example 9.2.9 is not one-to-one.
Answer.
We can use any two vectors of the form \(\begin{bmatrix}k\\-k\end{bmatrix}\) to make our case.
\begin{equation*} T\left(\begin{bmatrix}1\\-1\end{bmatrix}\right)=\mathbf{0}=T\left(\begin{bmatrix}-2\\2\end{bmatrix}\right)\quad \text{but}\quad\begin{bmatrix}1\\-1\end{bmatrix}\neq \begin{bmatrix}-2\\2\end{bmatrix}. \end{equation*}
In other words, we have more than one vector that maps to the zero vector.

Example 9.2.13.

Prove that the transformation in Example 9.2.10 is one-to-one.
Answer.
Suppose
\begin{equation*} T\left(\begin{bmatrix}x_1\\x_2\end{bmatrix}\right)=T\left(\begin{bmatrix}y_1\\y_2\end{bmatrix}\right) \end{equation*}
Then
\begin{equation*} \begin{bmatrix}1\amp 0\\0\amp 1\\2\amp 0\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix}=\begin{bmatrix}1\amp 0\\0\amp 1\\2\amp 0\end{bmatrix}\begin{bmatrix}y_1\\y_2\end{bmatrix}. \end{equation*}
\begin{equation*} x_1\begin{bmatrix}1\\0\\2\end{bmatrix}+x_2\begin{bmatrix}0\\1\\0\end{bmatrix}=y_1\begin{bmatrix}1\\0\\2\end{bmatrix}+y_2\begin{bmatrix}0\\1\\0\end{bmatrix}. \end{equation*}
\begin{equation*} (x_1-y_1)\begin{bmatrix}1\\0\\2\end{bmatrix}+(x_2-y_2)\begin{bmatrix}0\\1\\0\end{bmatrix}=\mathbf{0}. \end{equation*}
It is clear that \(\begin{bmatrix}1\\0\\2\end{bmatrix}\) and \(\begin{bmatrix}0\\1\\0\end{bmatrix}\) are linearly independent. Therefore, we must have \(x_1-y_1=0\) and \(x_2-y_2=0\text{.}\) But then \(x_1=y_1\) and \(x_2=y_2\text{,}\) so
\begin{equation*} \begin{bmatrix}x_1\\x_2\end{bmatrix}=\begin{bmatrix}y_1\\y_2\end{bmatrix}. \end{equation*}
Since the transformation in Example 9.2.10 is one-to-one but not invertible, we can conjecture that being one-to-one is a necessary, but not a sufficient, condition for a linear transformation to have an inverse. We will consider the other necessary condition next.
Figure \(2\) makes a convincing case that for a transformation to be invertible every element of the codomain must have something mapping to it. Transformations such that every element of the codomain is an image of some element of the domain are called onto.

Definition 9.2.14. Onto.

A linear transformation \(T:V\rightarrow W\) is onto if for every element \(\mathbf{w}\) of \(W\text{,}\) there exists an element \(\mathbf{v}\) of \(V\) such that \(T(\mathbf{v})=\mathbf{w}\text{.}\)
Once again, we examine the preceding examples in light of ``onto".

Example 9.2.15.

The transformation in Example 9.2.10 is not onto.
Answer.
No element of \(\R^2\) maps to \(\begin{bmatrix}1\\1\\1\end{bmatrix}\text{.}\)

Example 9.2.16.

Prove that the linear transformation \(T:\R^2\rightarrow \R^2\) whose standard matrix is
\begin{equation*} A=\begin{bmatrix}1\amp 0\\2\amp 1\end{bmatrix} \end{equation*}
is onto.
Answer.
Let \(\mathbf{b}\) be an element of the codomain (\(\R^2\)). We need to find \(\mathbf{x}\) in the domain (\(\R^2\)) such that \(T(\mathbf{x})=\mathbf{b}\text{.}\) Observe that \(A\) is invertible, and
\begin{equation*} A^{-1}=\begin{bmatrix}1\amp 0\\-2\amp 1\end{bmatrix}. \end{equation*}
Let \(\mathbf{x}=\begin{bmatrix}1\amp 0\\-2\amp 1\end{bmatrix}\mathbf{b}\text{,}\) then
\begin{equation*} T(\mathbf{x})=\begin{bmatrix}1\amp 0\\2\amp 1\end{bmatrix}\left(\begin{bmatrix}1\amp 0\\-2\amp 1\end{bmatrix}\mathbf{b}\right)=I\mathbf{b}=\mathbf{b}. \end{equation*}
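The argument can be checked numerically (a sketch assuming NumPy is available; the vector b below is an arbitrary choice of ours):

import numpy as np

A = np.array([[1., 0.],
              [2., 1.]])
Ainv = np.linalg.inv(A)
print(Ainv)                    # [[ 1.  0.]  [-2.  1.]]

b = np.array([7., -3.])        # an arbitrary target vector in the codomain
x = Ainv @ b                   # the preimage constructed in the argument
print(np.allclose(A @ x, b))   # True: T(x) = b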

Example 9.2.17.

Prove that the linear transformation \(T:\R^3\rightarrow \R^2\) induced by
\begin{equation*} A=\begin{bmatrix}1\amp 1\amp -1\\2\amp 3\amp -1\end{bmatrix} \end{equation*}
is onto.
Answer.
Let \(\mathbf{b}\) be an element of \(\R^2\text{.}\) We need to show that there exists \(\mathbf{x}\) in \(\R^3\) such that \(T(\mathbf{x})=A\mathbf{x}=\mathbf{b}\text{.}\) Observe that
\begin{equation*} \mbox{rref}(A)=\begin{bmatrix}1 \amp 0 \amp -2\\0 \amp 1 \amp 1\end{bmatrix}. \end{equation*}
This means that \(A\mathbf{x}=\mathbf{b}\) has a solution (in fact, it has infinitely many solutions) for every \(\mathbf{b}\) in \(\R^2\text{.}\) Therefore every \(\mathbf{b}\) in \(\R^2\) is an image of some \(\mathbf{x}\) in \(\R^3\text{.}\) We conclude that \(T\) is onto.
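The row reduction is easy to reproduce (a minimal sketch, offered only as a check, assuming SymPy is available):

import sympy as sp

A = sp.Matrix([[1, 1, -1],
               [2, 3, -1]])
R, pivots = A.rref()
print(R)        # Matrix([[1, 0, -2], [0, 1, 1]])
print(pivots)   # (0, 1): a pivot in every row, so Ax = b is consistent for every b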

Example 9.2.18.

Let
\begin{equation*} V=\text{span}\left(\begin{bmatrix}1\\0\\0\end{bmatrix}, \begin{bmatrix}1\\1\\1\end{bmatrix}\right). \end{equation*}
Define a linear transformation
\begin{equation*} T:V\rightarrow \R^2 \end{equation*}
by
\begin{equation*} T\left(\begin{bmatrix}1\\0\\0\end{bmatrix}\right)=\begin{bmatrix}1\\1\end{bmatrix}\quad \text{and} \quad T\left(\begin{bmatrix}1\\1\\1\end{bmatrix}\right)=\begin{bmatrix}0\\1\end{bmatrix}. \end{equation*}
Show that \(T\) is one-to-one and onto.
Answer.
We will now show that \(T\) is one-to-one. Suppose
\begin{equation*} T(\mathbf{u})=T(\mathbf{v}) \end{equation*}
for some \(\mathbf{u}\) and \(\mathbf{v}\) in \(V\text{.}\) Vectors \(\mathbf{u}\) and \(\mathbf{v}\) are in the span of \(\begin{bmatrix}1\\0\\0\end{bmatrix}\) and \(\begin{bmatrix}1\\1\\1\end{bmatrix}\text{,}\) so
\begin{equation*} \mathbf{u}=a\begin{bmatrix}1\\0\\0\end{bmatrix}+b\begin{bmatrix}1\\1\\1\end{bmatrix}\quad\text{and}\quad \mathbf{v}=c\begin{bmatrix}1\\0\\0\end{bmatrix}+d\begin{bmatrix}1\\1\\1\end{bmatrix} \end{equation*}
for some scalars \(a, b, c, d\text{.}\)
\begin{align*} T(\mathbf{u}) \amp =aT\left(\begin{bmatrix}1\\0\\0\end{bmatrix}\right)+bT\left(\begin{bmatrix}1\\1\\1\end{bmatrix}\right) =a\begin{bmatrix}1\\1\end{bmatrix}+b\begin{bmatrix}0\\1\end{bmatrix} =\begin{bmatrix}a\\a+b\end{bmatrix}, \\ T(\mathbf{v}) \amp =cT\left(\begin{bmatrix}1\\0\\0\end{bmatrix}\right)+dT\left(\begin{bmatrix}1\\1\\1\end{bmatrix}\right) =c\begin{bmatrix}1\\1\end{bmatrix}+d\begin{bmatrix}0\\1\end{bmatrix} =\begin{bmatrix}c\\c+d\end{bmatrix}. \end{align*}
Thus,
\begin{equation*} \begin{bmatrix}a\\a+b\end{bmatrix}=\begin{bmatrix}c\\c+d\end{bmatrix}. \end{equation*}
This implies that \(a=c\) which, in turn, implies \(b=d\text{.}\) This gives us \(\mathbf{u}=\mathbf{v}\text{,}\) and we conclude that \(T\) is one-to-one.
Next we will show that \(T\) is onto. The key observation is that vectors \([1,1]\) and \([0,1]\) span \(\R^2\text{.}\) This means that given a vector \(\mathbf{v}\) in \(\R^2\text{,}\) we can write \(\mathbf{v}\) as
\begin{equation*} \mathbf{v}=a\begin{bmatrix}1\\1\end{bmatrix}+b\begin{bmatrix}0\\1\end{bmatrix}. \end{equation*}
But this means that \(\mathbf{v}=T\left(a\begin{bmatrix}1\\0\\0\end{bmatrix}+b\begin{bmatrix}1\\1\\1\end{bmatrix}\right).\) We conclude that \(T\) is onto.
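In coordinates, the argument boils down to the invertibility of a \(2\times 2\) matrix. The sketch below (an illustration assuming NumPy is available; the matrix \(M\) is our own bookkeeping device, not notation from the text) records the images of the two spanning vectors of \(V\) as columns:

import numpy as np

# columns are T([1,0,0]) = [1,1] and T([1,1,1]) = [0,1]
M = np.array([[1., 0.],
              [1., 1.]])

# M sends the coordinates (a, b) of an input to T(u) = [a, a+b]; since M is invertible,
# distinct coordinate pairs give distinct outputs (one-to-one) and every vector of R^2 is hit (onto)
print(np.linalg.det(M))               # 1.0
print(np.linalg.solve(M, [3., 5.]))   # [3. 2.]: the coordinates of the input mapping to [3, 5]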

Subsection 9.2.5 Existence and Uniqueness of Inverses

Theorem 9.2.19.

A linear transformation \(T:V\rightarrow W\) has an inverse if and only if \(T\) is one-to-one and onto.

Proof.

We will first assume that \(T\) is one-to-one and onto, and show that there exists a transformation \(S:W\rightarrow V\) such that \(S\circ T=\id_V\) and \(T\circ S=\id_W\text{.}\)
Because \(T\) is onto, for every \(\mathbf{w}\) in \(W\text{,}\) there exists \(\mathbf{v}\) in \(V\) such that \(T(\mathbf{v})=\mathbf{w}\text{.}\) Moreover, because \(T\) is one-to-one, vector \(\mathbf{v}\) is the only vector that maps to \(\mathbf{w}\text{.}\) To stress this, we will say that for every \(\mathbf{w}\text{,}\) there exists \(\mathbf{v}_{\mathbf{w}}\) such that \(T(\mathbf{v}_{\mathbf{w}})=\mathbf{w}\text{.}\) (Since every \(\mathbf{v}\) maps to exactly one \(\mathbf{w}\text{,}\) this notation makes sense for elements of \(V\) as well.) We can now define \(S:W\rightarrow V\) by \(S(\mathbf{w})=\mathbf{v}_{\mathbf{w}}\text{.}\) Then
\begin{align*} (S\circ T)(\mathbf{v}_{\mathbf{w}}) \amp= S(T(\mathbf{v}_{\mathbf{w}})) =S(\mathbf{w}) = \mathbf{v}_{\mathbf{w}}, \\ (T\circ S)(\mathbf{w}) \amp= T(S(\mathbf{w})) =T(\mathbf{v}_{\mathbf{w}}) = \mathbf{w}. \end{align*}
We conclude that \(S\circ T=\id_V\) and \(T\circ S=\id_W\text{.}\) Therefore \(S\) is an inverse of \(T\text{.}\) We will now assume that \(T\) has an inverse \(S\) and show that \(T\) must be one-to-one and onto. Suppose
\begin{equation*} T(\mathbf{v}_1)=T(\mathbf{v}_2). \end{equation*}
then
\begin{equation*} S(T(\mathbf{v}_1))=S(T(\mathbf{v}_2)), \end{equation*}
but then
\begin{equation*} \mathbf{v}_1=\mathbf{v}_2. \end{equation*}
We conclude that \(T\) is one-to-one. Now suppose that \(\mathbf{w}\) is in \(W\text{.}\) We need to show that some element of \(V\) maps to \(\mathbf{w}\text{.}\) Let \(\mathbf{v}=S(\mathbf{w})\text{.}\) Then
\begin{equation*} T(\mathbf{v})=T(S(\mathbf{w}))=(T\circ S)(\mathbf{w})=\id_W(\mathbf{w})=\mathbf{w}. \end{equation*}
We conclude that \(T\) is onto.
The theorem and its proof are rather formal. In practice, we can determine whether an inverse exists by checking that the transformation is one-to-one and onto, which is often easier than exhibiting an inverse explicitly. Here is a case in point:

Example 9.2.20.

Transformation \(T\) in Example 9.2.18 is invertible.
Answer.
We demonstrated that \(T\) is one-to-one and onto. By Theorem 9.2.19, \(T\) has an inverse. Recall that \(T\) was introduced to demonstrate that Theorem 6.3.11 is not always directly applicable. We now have additional tools. Theorem 9.2.19 assures us that \(T\) has an inverse, but does not help us find it. We will visit this problem again in later sections and find an inverse of \(T\text{.}\)
So far we have referred to \(S\) as an inverse of \(T\text{,}\) leaving open the possibility that there may be more than one such transformation \(S\text{.}\) We will now show that if such a transformation \(S\) exists, it is unique. This will allow us to refer to it as the inverse of \(T\) and to use \(T^{-1}\) to denote the unique inverse of \(T\text{.}\)

Proof.

Let \(T:V\rightarrow W\) be a linear transformation. If \(S\) is an inverse of \(T\text{,}\) then \(S\) satisfies
\begin{equation*} S\circ T=\id_V\quad \text{and}\quad T\circ S=\id_W. \end{equation*}
Suppose there is another transformation, \(S'\text{,}\) such that
\begin{equation*} S'\circ T=\id_V\quad \text{and}\quad T\circ S'=\id_W. \end{equation*}
We now show that \(S=S'\text{.}\)
\begin{equation*} S=S\circ \id_W=S\circ(T\circ S')=(S\circ T)\circ S'=\id_V\circ S'=S'. \end{equation*}

Exercises 9.2.6 Exercises

1.

Suppose \(T:\mathbb{P}^2\rightarrow\mathbb{M}_{2,2}\) is a linear transformation such that
\begin{equation*} T(1)=\begin{bmatrix}1\amp 0\\0\amp 1\end{bmatrix},\quad T(x)=\begin{bmatrix}1\amp 1\\0\amp 1\end{bmatrix},\quad T(x^2)=\begin{bmatrix}1\amp 1\\1\amp 1\end{bmatrix} \end{equation*}
Find
\begin{equation*} T(4-x+3x^2)\text{.} \end{equation*}
Answer.
\begin{equation*} T(4-x+3x^2)=\begin{bmatrix}6\amp 2\\3\amp 6\end{bmatrix}. \end{equation*}

2.

Define \(T:\mathbb{M}_{3,3}\rightarrow \R\) by \(T(A)=\mbox{tr}(A)\text{.}\) (Recall that \(\mbox{tr}(A)\) denotes the trace of \(A\text{,}\) which is the sum of the main diagonal entries of \(A\text{.}\)) Find
\begin{equation*} T\left(\begin{bmatrix}1\amp 2\amp 3\\4\amp 5\amp 6\\7\amp 8\amp 9\end{bmatrix}\right). \end{equation*}
Answer.
\begin{equation*} T\left(\begin{bmatrix}1\amp 2\amp 3\\4\amp 5\amp 6\\7\amp 8\amp 9\end{bmatrix}\right)=15. \end{equation*}

3.

Is \(T\) a linear transformation? If so, prove it. If not, give a counterexample.

4.

Define \(T:\R^2\rightarrow\mathbb{M}_{2,2}\) by
\begin{equation*} T\left(\begin{bmatrix}a\\b\end{bmatrix}\right)=\begin{bmatrix}a\amp 1\\1\amp b\end{bmatrix}. \end{equation*}
Find
\begin{equation*} T\left(\begin{bmatrix}2\\-1\end{bmatrix}\right). \end{equation*}
Answer.
\begin{equation*} T\left(\begin{bmatrix}2\\-1\end{bmatrix}\right)=\begin{bmatrix}2\amp 1\\1\amp -1\end{bmatrix}. \end{equation*}

5.

Is \(T\) a linear transformation? If so, prove it. If not, give a counterexample.

6.

This problem requires knowing how to compute a \(3\times 3\) determinant (for a quick reminder, see Chapter \(1\)). Define \(T:\mathbb{M}_{3,3}\rightarrow \R\) by \(T(A)=\det(A)\text{.}\) Find
\begin{equation*} T\left(\begin{bmatrix}1\amp 2\amp 3\\4\amp 5\amp 6\\7\amp 8\amp 9\end{bmatrix}\right). \end{equation*}
Answer.
\begin{equation*} T\left(\begin{bmatrix}1\amp 2\amp 3\\4\amp 5\amp 6\\7\amp 8\amp 9\end{bmatrix}\right)=0. \end{equation*}

7.

Is \(T\) a linear transformation? If so, prove it. If not, give a counterexample.

8.

Define \(T:\mathbb{P}^3\rightarrow\mathbb{P}^2\) by \(T(p(x))=p'(x)\) (in other words, \(T\) maps a polynomial to its derivative). Find
\begin{equation*} T(4x^3-2x^2+x+6). \end{equation*}
Answer.
\begin{equation*} T(4x^3-2x^2+x+6)=12x^2-4x+1. \end{equation*}

9.

Is \(T\) a linear transformation? If so, prove it. If not, give a counterexample.

10.

Recall that the set \(V\) of all symmetric \(2\times 2\) matrices is a subspace of \(\mathbb{M}_{2,2}\text{.}\) In Example 9.1.29, we demonstrated that
\begin{equation*} \mathcal{B} = \left\{ \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix}, \begin{bmatrix} 0 \amp 1 \\ 1 \amp 0 \end{bmatrix} \right\} \end{equation*}
is a basis for \(V\text{.}\) Define \(T:V\rightarrow \R^3\) by \(T(A)=[A]_{\mathcal{B}}\text{.}\) Find
\begin{equation*} T(I_2) \quad \text{and} \quad T\left(\begin{bmatrix}2\amp -3\\-3\amp 1\end{bmatrix}\right). \end{equation*}
Answer.
\begin{equation*} T(I_2)=\begin{bmatrix}1\\1\\0\end{bmatrix}, \end{equation*}
\begin{equation*} T\left(\begin{bmatrix}2\amp -3\\-3\amp 1\end{bmatrix}\right)=\begin{bmatrix}2\\1\\-3\end{bmatrix}. \end{equation*}

11.

Let \(V\) be a subspace of \(\R^3\) with a basis \(\mathcal{B}=\left\{\begin{bmatrix}2\\1\\-1\end{bmatrix}, \begin{bmatrix}0\\3\\2\end{bmatrix}\right\}\text{.}\) Find the coordinate vector, \([\mathbf{v}]_{\mathcal{B}}\text{,}\) for \(\mathbf{v}=[4,-1,-4]\text{.}\)
Answer.
\begin{equation*} [\mathbf{v}]_{\mathcal{B}}=\begin{bmatrix}2\\-1\end{bmatrix}. \end{equation*}

12.

If the order of the basis elements in Exercise 9.2.6.11 were switched to form the new basis
\begin{equation*} \mathcal{B}'=\left\{\begin{bmatrix}0\\3\\2\end{bmatrix}, \begin{bmatrix}2\\1\\-1\end{bmatrix} \right\}, \end{equation*}
how would this affect the coordinate vector?
Answer.
\begin{equation*} [\mathbf{v}]_{\mathcal{B}'}=\begin{bmatrix}-1\\2\end{bmatrix} \end{equation*}

13.

In Exercise 9.1.9.19, you demonstrated that
\begin{equation*} \mathcal{B}=\{x^{2}, x + 1, 1 - x - x^{2}\} \end{equation*}
is a basis for \(\mathbb{P}^2\text{.}\) Define \(T:\mathbb{P}^2\rightarrow \R^3\) by \(T(p(x))=[p(x)]_{\mathcal{B}}\text{.}\) Find
\begin{equation*} T(0), \quad T(x+1) \quad \text{and} \quad T(x^2-3x+1). \end{equation*}
Answer.
\begin{equation*} T(0)=\begin{bmatrix}0\\0\\0\end{bmatrix}, \end{equation*}
\begin{equation*} T(x+1)=\begin{bmatrix}0\\1\\0\end{bmatrix}, \end{equation*}
\begin{equation*} T(x^2-3x+1)=\begin{bmatrix}3\\-1\\2\end{bmatrix}. \end{equation*}

14.

Let \(V\) and \(W\) be vector spaces, and let \(\mathcal{B}_V=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4\}\) and \(\mathcal{B}_W=\{\mathbf{w}_1,\mathbf{w}_2, \mathbf{w}_3\}\) be ordered bases of \(V\) and \(W\text{,}\) respectively. Suppose \(T:V\rightarrow W\) is a linear transformation such that:
\begin{equation*} T(\mathbf{v}_1)=\mathbf{w}_2, \end{equation*}
\begin{equation*} T(\mathbf{v}_2)=2\mathbf{w}_1-3\mathbf{w}_2, \end{equation*}
\begin{equation*} T(\mathbf{v}_3)=\mathbf{w}_2+\mathbf{w}_3, \end{equation*}
\begin{equation*} T(\mathbf{v}_4)=-\mathbf{w}_1. \end{equation*}
If \(\mathbf{v}=-2\mathbf{v}_1+3\mathbf{v}_2-\mathbf{v}_4\text{,}\) express \(T(\mathbf{v})\) as a linear combination of vectors of \(\mathcal{B}_W\text{.}\) You should find that
\begin{equation*} T(\mathbf{v})=7\mathbf{w}_1-11\mathbf{w}_2+0\mathbf{w}_3. \end{equation*}
Find \([\mathbf{v}]_{\mathcal{B}_V}\) and \([T(\mathbf{v})]_{\mathcal{B}_{W}}\text{.}\)
Answer.
\begin{equation*} [\mathbf{v}]_{\mathcal{B}_V}=\begin{bmatrix}-2\\3\\0\\-1\end{bmatrix},\quad [T(\mathbf{v})]_{\mathcal{B}_{W}}=\begin{bmatrix}7\\-11\\0\end{bmatrix}. \end{equation*}

15.

Let \(\mathcal{B}=\{\mathbf{v}_1, \ldots ,\mathbf{v}_n\}\) be an ordered basis of a vector space \(V\text{,}\) and let \(T:V\rightarrow \R^n\) be given by \(T(\mathbf{v})=[\mathbf{v}]_{\mathcal{B}}\text{.}\) Prove that \(T(\mathbf{v}+\mathbf{w})=T(\mathbf{v})+T(\mathbf{w})\) for all \(\mathbf{v}\) and \(\mathbf{w}\) in \(V\text{,}\) completing the proof that \(T\) is linear.

16.

Show that a linear transformation \(T:\R^2\rightarrow \R^2\) with standard matrix
\begin{equation*} A=\begin{bmatrix}2\amp -4\\-3\amp 6\end{bmatrix} \end{equation*}
is not one-to-one.
Hint.
Show that multiple vectors map to \(\mathbf{0}\text{.}\)

17.

Show that a linear transformation \(T:\R^2\rightarrow \R^3\) with standard matrix
\begin{equation*} A=\begin{bmatrix}1\amp 2\\-1\amp 1\\0\amp 1\end{bmatrix} \end{equation*}
is not onto.
Hint.
Find \(\mathbf{b}\) such that \(A\mathbf{x}=\mathbf{b}\) has no solutions.

18.

Suppose that a linear transformation \(T:\R^3\rightarrow \R^3\) has a standard matrix \(A\) such that \(\text{rref}(A)=I\text{.}\) Prove that \(T\) is one-to-one and onto.
Hint 1.
For the one-to-one verification, how many solutions does \(A\mathbf{x}=\mathbf{b}\) have?
Hint 2.
For the onto verification, does \(A\mathbf{x}=\mathbf{b}\) have a solution for every \(\mathbf{b}\text{?}\)

19.

Define a transformation \(T:\R^2\rightarrow \R^2\) by
\begin{equation*} T\left(\begin{bmatrix}x\\y\end{bmatrix}\right)=\begin{bmatrix}x\\-2x+4y\end{bmatrix}. \end{equation*}
Show that \(T\) is a linear transformation that has an inverse.
Hint.
You will need to demonstrate that \(T\) is one-to-one and onto.

20.

Let
\begin{equation*} V=\text{span}\left(\begin{bmatrix}1\\0\\1\end{bmatrix}, \begin{bmatrix}0\\1\\0\end{bmatrix}\right). \end{equation*}
Define a linear transformation \(T:V\rightarrow \R^2\) by
\begin{equation*} T\left(\begin{bmatrix}1\\0\\1\end{bmatrix}\right)=\begin{bmatrix}0\\1\end{bmatrix}\quad \text{and}\quad T\left(\begin{bmatrix}0\\1\\0\end{bmatrix}\right)=\begin{bmatrix}-1\\1\end{bmatrix} \end{equation*}
Prove that \(T\) has an inverse.