Suppose we want to define a linear transformation \(T:\R^2\rightarrow \R^2\) by
\begin{equation*}
T(\mathbf{i})=\begin{bmatrix}3\\-2\end{bmatrix}\quad\text{and}\quad T(\mathbf{j})=\begin{bmatrix}-1\\2\end{bmatrix}.
\end{equation*}
Is this information sufficient to define \(T\text{?}\) To answer this question we will try to determine what \(T\) does to an arbitrary vector of \(\R^2\text{.}\) If \(\mathbf{v}\) is a vector in \(\R^2\text{,}\) then \(\mathbf{v}\) can be uniquely expressed as a linear combination of \(\mathbf{i}\) and \(\mathbf{j}\)
\begin{equation*}
\mathbf{v}=a\mathbf{i}+b\mathbf{j}.
\end{equation*}
By linearity of \(T\) we have
\begin{equation*}
T(\mathbf{v})=T(a\mathbf{i}+b\mathbf{j})=aT(\mathbf{i})+bT(\mathbf{j})=a\begin{bmatrix}3\\-2\end{bmatrix}+b\begin{bmatrix}-1\\2\end{bmatrix}.
\end{equation*}
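For example, with this definition the vector \(\begin{bmatrix}5\\4\end{bmatrix}=5\mathbf{i}+4\mathbf{j}\) must be sent to
\begin{equation*}
T\left(\begin{bmatrix}5\\4\end{bmatrix}\right)=5\begin{bmatrix}3\\-2\end{bmatrix}+4\begin{bmatrix}-1\\2\end{bmatrix}=\begin{bmatrix}11\\-2\end{bmatrix}.
\end{equation*}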
This shows that the image of every vector of \(\R^2\) under \(T\) is completely determined by the action of \(T\) on the standard unit vectors \(\mathbf{i}\) and \(\mathbf{j}\text{.}\) The vectors \(\mathbf{i}\) and \(\mathbf{j}\) form the standard basis of \(\R^2\text{.}\) What if we want to use a different basis? Let
\begin{equation*}
\mathcal{B}=\left \lbrace \begin{bmatrix}1\\1\end{bmatrix},\begin{bmatrix}2\\-1\end{bmatrix}\right \rbrace
\end{equation*}
be our basis of choice for \(\R^2\text{.}\) (How would you verify that \(\mathcal{B}\) is a basis of \(\R^2\text{?}\) One quick check is sketched after the definition below.) And suppose we want to define a linear transformation \(S:\R^2\rightarrow \R^2\) by
\begin{equation*}
S\left(\begin{bmatrix}1\\1\end{bmatrix}\right)=\begin{bmatrix}0\\-1\end{bmatrix}\quad\text{and}\quad S\left(\begin{bmatrix}2\\-1\end{bmatrix}\right)=\begin{bmatrix}2\\0\end{bmatrix}.
\end{equation*}
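Here is the check promised above, one way to verify that \(\mathcal{B}\) is indeed a basis: if a linear combination of its vectors equals the zero vector,
\begin{equation*}
a\begin{bmatrix}1\\1\end{bmatrix}+b\begin{bmatrix}2\\-1\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix},
\end{equation*}
then \(a+2b=0\) and \(a-b=0\text{,}\) so \(a=b\) and then \(3a=0\text{,}\) forcing \(a=b=0\text{.}\) The two vectors are therefore linearly independent, and any two linearly independent vectors in \(\R^2\) form a basis of \(\R^2\text{.}\) With this settled, we return to the proposed definition of \(S\text{.}\)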
Is this enough information to define \(S\text{?}\) Because the vectors \(\begin{bmatrix}1\\1\end{bmatrix}\) and \(\begin{bmatrix}2\\-1\end{bmatrix}\) form a basis of \(\R^2\text{,}\) every element \(\mathbf{v}\) of \(\R^2\) can be written as a unique linear combination
\begin{equation*}
\mathbf{v}=a\begin{bmatrix}1\\1\end{bmatrix}+b\begin{bmatrix}2\\-1\end{bmatrix}.
\end{equation*}
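For example, to write \(\mathbf{v}=\begin{bmatrix}0\\3\end{bmatrix}\) in this form we solve \(a+2b=0\) and \(a-b=3\text{,}\) obtaining \(a=2\) and \(b=-1\text{,}\) so that
\begin{equation*}
\begin{bmatrix}0\\3\end{bmatrix}=2\begin{bmatrix}1\\1\end{bmatrix}+(-1)\begin{bmatrix}2\\-1\end{bmatrix}.
\end{equation*}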
For a general \(\mathbf{v}\text{,}\) linearity of \(S\) gives
\begin{equation*}
S(\mathbf{v})=S\left(a\begin{bmatrix}1\\1\end{bmatrix}+b\begin{bmatrix}2\\-1\end{bmatrix}\right)=aS\left(\begin{bmatrix}1\\1\end{bmatrix}\right)+bS\left(\begin{bmatrix}2\\-1\end{bmatrix}\right)=a\begin{bmatrix}0\\-1\end{bmatrix}+b\begin{bmatrix}2\\0\end{bmatrix}.
\end{equation*}
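In particular, for \(\mathbf{v}=\begin{bmatrix}0\\3\end{bmatrix}\) we found \(a=2\) and \(b=-1\) above, so
\begin{equation*}
S\left(\begin{bmatrix}0\\3\end{bmatrix}\right)=2\begin{bmatrix}0\\-1\end{bmatrix}+(-1)\begin{bmatrix}2\\0\end{bmatrix}=\begin{bmatrix}-2\\-2\end{bmatrix}.
\end{equation*}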
Again, we see how a linear transformation is completely determined by its action on a basis.
Theorem 9.1.23 assures us that given a basis, every vector has a unique representation as a linear combination of the basis vectors. Imagine what would happen if this were not the case.
In the first part of this exploration, for instance, we might have been able to represent \(\mathbf{v}\) both as \(a\mathbf{i}+b\mathbf{j}\) and as \(c\mathbf{i}+d\mathbf{j}\) with \(a\neq c\) or \(b\neq d\text{.}\) Then \(\mathbf{v}\) would map to two different elements, \(aT(\mathbf{i})+bT(\mathbf{j})\) and \(cT(\mathbf{i})+dT(\mathbf{j})\text{,}\) so \(T\) would not even be a well-defined function.