
Coordinated Linear Algebra

Section 9.1 Abstract Vector Spaces

When we examined subspaces of \(\R^n\) we discussed \(\R^n\) as a vector space and introduced the notion of a subspace of \(\R^n\text{.}\) In this section we will consider sets other than \(\R^n\) that have two operations and satisfy the same properties as \(\R^n\text{.}\) Such sets, together with the operations of addition and scalar multiplication, will also be called vector spaces.

Subsection 9.1.1 Properties of Vector Spaces

Recall that \(\R^n\) is said to be a vector space because
  • \(\R^n\) is closed under vector addition,
  • \(\R^n\) is closed under scalar multiplication,
and satisfies the following properties:
  1. Commutative Property of Addition: \(\mathbf{u}+\mathbf{v}=\mathbf{v}+\mathbf{u}.\)
  2. Associative Property of Addition: \((\mathbf{u}+\mathbf{v})+\mathbf{w}=\mathbf{u}+(\mathbf{v}+\mathbf{w}).\)
  3. Existence of Additive Identity: \(\mathbf{u}+\mathbf{0}=\mathbf{u}.\)
  4. Existence of Additive Inverse: \(\mathbf{u}+(-\mathbf{u})=\mathbf{0}.\)
  5. Distributive Property over Vector Addition: \(k(\mathbf{u}+\mathbf{v})=k\mathbf{u}+k\mathbf{v}.\)
  6. Distributive Property over Scalar Addition: \((k+p)\mathbf{u}=k\mathbf{u}+p\mathbf{u}.\)
  7. Associative Property for Scalar Multiplication: \(k(p\mathbf{u})=(kp)\mathbf{u}.\)
  8. Multiplication by \(1\text{:}\) \(1\mathbf{u}=\mathbf{u}.\)
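These properties are abstract, but they are concrete enough to test by machine. The following Python sketch (which assumes the numpy library is available; the sample vectors and scalars are our own choices) checks all eight properties for a few vectors in \(\R^3\text{.}\) Passing the checks illustrates, but of course does not prove, the properties.

```python
# Informal numerical check of the eight vector space properties in R^3.
# The sample vectors u, v, w and scalars k, p are arbitrary choices.
import numpy as np

u = np.array([1.0, -2.0, 3.0])
v = np.array([0.5, 4.0, -1.0])
w = np.array([2.0, 0.0, 7.0])
k, p = 3.0, -2.0
zero = np.zeros(3)

assert np.allclose(u + v, v + u)                # 1. commutativity
assert np.allclose((u + v) + w, u + (v + w))    # 2. associativity
assert np.allclose(u + zero, u)                 # 3. additive identity
assert np.allclose(u + (-u), zero)              # 4. additive inverse
assert np.allclose(k * (u + v), k * u + k * v)  # 5. distributes over vector addition
assert np.allclose((k + p) * u, k * u + p * u)  # 6. distributes over scalar addition
assert np.allclose(k * (p * u), (k * p) * u)    # 7. scalar associativity
assert np.allclose(1 * u, u)                    # 8. multiplication by 1
print("all eight properties hold for this sample")
```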

Remark 9.1.1.

All scalars in this chapter are assumed to be real numbers. Complex scalars are considered later.
In the next two examples we will explore two sets other than \(\R^n\) endowed with addition and scalar multiplication and satisfying the same properties.

Example 9.1.2.

Let \(\mathbb{M}_{m,n}\) be the set of all \(m\times n\) matrices. Matrix addition and scalar multiplication were defined in chapter \(4\text{.}\) Observe that the sum of two \(m\times n\) matrices is also an \(m\times n\) matrix. Likewise, a scalar multiple of an \(m\times n\) matrix is an \(m\times n\) matrix. Thus
  • \(\mathbb{M}_{m,n}\) is closed under matrix addition;
  • \(\mathbb{M}_{m,n}\) is closed under scalar multiplication.
In addition, Theorem 4.1.5 and Theorem 4.1.8 give us the following properties of matrix addition and scalar multiplication. Note that these properties are analogous to the eight vector properties above.
  1. Commutative Property of Addition: \(\quad A+B=B+A\text{.}\)
  2. Associative Property of Addition: \(\quad (A+B)+C=A+(B+C)\text{.}\)
  3. Existence of Additive Identity: \(\quad A+O=A \ \) where \(O\) is the \(m \times n\) zero matrix.
  4. Existence of Additive Inverse: \(\quad A+(-A)=O\text{.}\)
  5. Distributive Property over Matrix Addition: \(\quad k(A+B)=kA+kB\text{.}\)
  6. Distributive Property over Scalar Addition: \(\quad (k+p)A=kA+pA\text{.}\)
  7. Associative Property for Scalar Multiplication: \(\quad k(pA)=(kp)A\text{.}\)
  8. Multiplication by \(1\text{:}\) \(\quad 1A=A\text{.}\)

Example 9.1.3.

Consider the set \(\mathbb{L}\) of all linear functions, that is, all polynomials of degree \(1\) or degree \(0\text{,}\) together with the zero polynomial. We will use addition and scalar multiplication of polynomials as the two operations, and show that \(\mathbb{L}\) is closed under those operations and satisfies eight properties analogous to those of vectors of \(\R^n\text{.}\)
Answer.
Elements of \(\mathbb{L}\) are functions \(f\) given by
\begin{equation*} f(x)=mx+b. \end{equation*}
(Note that \(m\) and \(b\) can be equal to zero.)
Given \(f_1(x)=m_1x+b_1\) and \(f_2(x)=m_2x+b_2\) in \(\mathbb{L}\text{,}\) we have \((f_1+f_2)(x)=(m_1+m_2)x+(b_1+b_2)\text{,}\) which is again in \(\mathbb{L}\text{.}\) This gives us closure under function addition. For any scalar \(k\text{,}\) we have
\begin{equation*} kf(x)=k(mx+b)=(km)x+(kb). \end{equation*}
Therefore \(kf\) is in \(\mathbb{L}\text{,}\) and \(\mathbb{L}\) is closed under scalar multiplication. We now proceed to formulate eight properties analogous to those of vectors of \(\R^n\text{.}\)
Let \(f_1\text{,}\) \(f_2\) and \(f_3\) be elements of \(\mathbb{L}\) given by \(f_1(x)=m_1 x + b_1\text{,}\) \(f_2(x)=m_2 x + b_2\text{,}\) and \(f_3(x)=m_3 x + b_3\text{.}\) Let \(k\) and \(p\) be scalars.
  1. Commutative Property of Addition: \(f_1+f_2=f_2+f_1.\)
    This property holds because
    \begin{align*} f_1(x) + f_2(x) \amp = (m_1 x + b_1) + (m_2 x + b_2) \\ \amp = (m_2 x + b_2) + (m_1 x + b_1) \\ \amp = f_2(x) + f_1(x). \end{align*}
  2. Associative property of Addition:
    \begin{equation*} (f_1 + f_2) + f_3 = f_1 + (f_2 + f_3). \end{equation*}
    This property is easy to verify and is left to the reader.
  3. Existence of additive identity:
    \begin{equation*} f_1 + f_0 = f_1 \end{equation*}
    The additive identity \(f_0\) is given by \(f_0(x)=0\text{.}\) Note that \(f_0\) is a vector in the space \(\mathbb{L}\text{.}\)
  4. Existence of additive inverse:
    \begin{equation*} f_1 + (-f_1) = f_0. \end{equation*}
    The additive inverse of \(f_1\) is the function \(-f_1\) given by \(-f_1(x)=-m_1x+(-b_1)\text{.}\) Note that \(-f_1\) is in \(\mathbb{L}\text{.}\)
  5. Distributive Property over Vector Addition:
    \begin{equation*} k(f_1+f_2)=kf_1+kf_2. \end{equation*}
    This property holds because
    \begin{align*} k(f_1(x) + f_2(x)) \amp = k((m_1 x + b_1) + (m_2 x + b_2)) \\ \amp = k(m_1 x + b_1) + k(m_2 x + b_2) \\ \amp = k f_1(x) + k f_2(x). \end{align*}
  6. Distributive property over scalar addition:
    \begin{equation*} (k+p)f_1=kf_1+pf_1. \end{equation*}
    This property holds because
    \begin{align*} (k+p)f_1(x)\amp = (k+p)(m_1 x + b_1) \\ \amp =k(m_1 x + b_1) + p(m_1 x + b_1) \\ \amp = k f_1(x) + p f_1(x). \end{align*}
  7. Associative property for scalar multiplication: \((k(pf_1))=(kp)f_1.\)
    This property holds because
    \begin{align*} k(p(f_1(x)))\amp =k(p(m_1 x + b_1)) \\ \amp =k(p m_1 x +p b_1) \\ \amp =kp m_1 x +kp b_1 \\ \amp = (kp) m_1 x + (kp) b_1 \\ \amp = (kp)(m_1 x + b_1) \\ \amp =(kp)f_1(x). \end{align*}
  8. Multiplication by \(1\text{:}\) \(1f_1=f_1.\)
    This property holds because
    \begin{equation*} 1f_1(x)=1(m_1 x + b_1)=m_1 x + b_1=f_1(x). \end{equation*}
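For readers who want to double-check the closure computations symbolically, here is a minimal sympy sketch (the symbol names m1, b1, and so on are our own labels for the coefficients above):

```python
# Symbolic check that L is closed under the two operations.
from sympy import symbols, expand, collect

x, m1, b1, m2, b2, k = symbols('x m1 b1 m2 b2 k')
f1 = m1 * x + b1
f2 = m2 * x + b2

# The sum is (m1 + m2) x + (b1 + b2), again of the form m x + b.
print(collect(expand(f1 + f2), x))

# A scalar multiple is (k m1) x + (k b1), again of the form m x + b.
print(collect(expand(k * f1), x))
```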

Subsection 9.1.2 Definition of a Vector Space

Example 9.1.2 and Example 9.1.3 show us that there are many times in mathematics when we encounter a set with two operations (that we call addition and scalar multiplication) such that the set is closed under the two operations and satisfies the same eight properties as \(\R^n\text{.}\) We will refer to such sets as vector spaces.

Definition 9.1.4.

Let \(V\) be a nonempty set. Suppose that elements of \(V\) can be added together and multiplied by scalars. The set \(V\text{,}\) together with operations of addition and scalar multiplication, is called a vector space provided that
  • \(V\) is closed under addition,
  • \(V\) is closed under scalar multiplication
and the following properties hold for \(\mathbf{u}\text{,}\) \(\mathbf{v}\) and \(\mathbf{w}\) in \(V\) and scalars \(k\) and \(p\text{:}\)
  1. Commutative Property of Addition: \(\mathbf{u}+\mathbf{v}=\mathbf{v}+\mathbf{u}.\)
  2. Associative Property of Addition: \((\mathbf{u}+\mathbf{v})+\mathbf{w}=\mathbf{u}+(\mathbf{v}+\mathbf{w}).\)
  3. Existence of Additive Identity: \(\mathbf{u}+\mathbf{0}=\mathbf{u}.\)
  4. Existence of Additive Inverse: \(\mathbf{u}+(-\mathbf{u})=\mathbf{0}.\)
  5. Distributive Property over Vector Addition: \(k(\mathbf{u}+\mathbf{v})=k\mathbf{u}+k\mathbf{v}.\)
  6. Distributive Property over Scalar Addition: \((k+p)\mathbf{u}=k\mathbf{u}+p\mathbf{u}.\)
  7. Associative Property for Scalar Multiplication: \(k(p\mathbf{u})=(kp)\mathbf{u}.\)
  8. Multiplication by \(1\text{:}\) \(1\mathbf{u}=\mathbf{u}.\)
We will refer to elements of \(V\) as vectors.
When scalars \(k\) and \(p\) in the above definition are restricted to real numbers, as they are in this chapter, vector space \(V\) may be referred to as a vector space over the real numbers.
We have already encountered two abstract vector spaces, namely \(\mathbb{M}_{m,n}\) and \(\mathbb{L}\text{.}\) The following examples supply several more.

Example 9.1.5.

Sets of polynomials provide an important source of examples, so we review some basic facts. A polynomial with real coefficients in \(x\) is an expression
\begin{equation*} p(x) = a_0 + a_1x + a_2x^2 + \ldots + a_nx^n \end{equation*}
where \(a_{0}, a_{1}, a_{2}, \ldots, a_{n}\) are real numbers called the coefficients of the polynomial.
If all the coefficients are zero, the polynomial is called the zero polynomial and is denoted simply as \(0\text{.}\)
If \(p(x) \neq 0\text{,}\) the highest power of \(x\) with a nonzero coefficient is called the degree of \(p(x)\) denoted as \(\mbox{deg}(p(x))\text{.}\) The degree of the zero polynomial is not defined.
This coefficient is called the leading coefficient of \(p(x)\text{.}\) Hence \(\mbox{deg}(3 + 5x) = 1\text{,}\) \(\mbox{deg}(1 + x + x^{2}) = 2\text{,}\) and \(\mbox{deg}(4) = 0\text{.}\)
Let \(\mathbb{P}\) denote the set of all polynomials and suppose that
\begin{align*} p(x) \amp = a_0 + a_1x + a_2x^2 + \ldots \\ q(x) \amp = b_0 + b_1x + b_2x^2 + \ldots \end{align*}
are two polynomials in \(\mathbb{P}\) (possibly of different degrees). Then \(p(x)\) and \(q(x)\) are called equal (written \(p(x) = q(x)\)) if and only if all the corresponding coefficients are equal---that is, \(a_{0} = b_{0}\text{,}\) \(a_{1} = b_{1}\text{,}\) \(a_{2} = b_{2}\text{,}\) and so on. In particular, \(a_{0} + a_{1}x + a_{2}x^{2} + \ldots = 0\) means \(a_{0} = 0\text{,}\) \(a_{1} = 0\text{,}\) \(a_{2} = 0\text{,}\) \(\ldots\text{.}\)
The set \(\mathbb{P}\) has an addition and scalar multiplication defined on it as follows: if \(p(x)\) and \(q(x)\) are as before and \(k\) is a real number,
\begin{align*} p(x) + q(x) \amp = (a_0 + b_0) + (a_1 + b_1)x + (a_2 + b_2)x^2 + \ldots \\ kp(x) \amp = ka_0 + (ka_1)x + (ka_2)x^2 + \ldots \end{align*}
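These coefficientwise operations are easy to experiment with. The following sympy sketch (the particular polynomials are our own choices) illustrates the addition and scalar multiplication just defined:

```python
# Coefficientwise addition and scalar multiplication in P, via sympy.
from sympy import Poly, symbols

x = symbols('x')
p = Poly(1 + 2*x + 3*x**2, x)
q = Poly(4 - x, x)

print((p + q).as_expr())   # 3*x**2 + x + 5: coefficients added in pairs
print((2 * p).as_expr())   # 6*x**2 + 4*x + 2: each coefficient doubled
```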
A good deal of terminology was just introduced; we put it to work in the examples below.

Example 9.1.6.

\(\mathbb{P}\) is a vector space.
Answer.
It is easy to see that the sum of two polynomials is again a polynomial, and that a scalar multiple of a polynomial is a polynomial. Thus, \(\mathbb{P}\) is closed under addition and scalar multiplication. The other eight vector space properties are easily verified, and we conclude that \(\mathbb{P}\) is a vector space.

Example 9.1.7.

Let \(Y\) be the set of all degree two polynomials in \(x\text{.}\) In other words,
\begin{equation*} Y=\left \lbrace ax^2+bx+c : a,b,c \in \mathbb{R},\ a \ne 0 \right \rbrace. \end{equation*}
We claim that \(Y\) is not a vector space.
Answer.
Observe that \(Y\) is not closed under addition. To see this, let \(y_1 = 2x^2+3x+4\) and let \(y_2=-2x^2\text{.}\) Then \(y_1\) and \(y_2\) are both elements of \(Y\text{.}\) However, \(y_1+y_2 = 3x+4\) is not an element of \(Y\text{,}\) as it is only a degree one polynomial. We require the coefficient \(a\) of \(x^2\) to be nonzero for a polynomial to be in \(Y\text{,}\) and this is not the case for \(y_1+y_2\text{.}\) As an exercise, check the remaining vector space properties one-by-one to see which properties hold and which do not.
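The degree drop in this counterexample can be confirmed with a quick sympy computation (a sketch, using the same \(y_1\) and \(y_2\) as above):

```python
# Degrees of y1, y2, and their sum, computed with sympy.
from sympy import symbols, degree

x = symbols('x')
y1 = 2*x**2 + 3*x + 4
y2 = -2*x**2

print(degree(y1, x), degree(y2, x))  # 2 2: both lie in Y
print(degree(y1 + y2, x))            # 1: the sum fails to lie in Y
```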
The set \(Y\) in Example 9.1.7 is not a vector space, but a slight modification makes it into one.

Example 9.1.8.

Let \(\mathbb{P}^2\) be the set of polynomials of degree two or less. In other words,
\begin{equation*} \mathbb{P}^2=\left \lbrace ax^2+bx+c : a,b,c \in \mathbb{R} \right \rbrace. \end{equation*}
Note that \(\mathbb{P}^2\) contains the zero polynomial (let \(a=b=c=0\)). Unlike set \(Y\) in Example 9.1.7, \(\mathbb{P}^2\) is closed under polynomial addition and scalar multiplication. It is easy to verify that all vector space properties hold, so \(\mathbb{P}^2\) is a vector space.

Example 9.1.9.

Let \(n\) be a natural number, and define \(\mathbb{P}^n\) to be the set of polynomials of degree \(n\) or less. By reasoning similar to Example 9.1.8, \(\mathbb{P}^n\) is a vector space.

Subsection 9.1.3 Subspaces

Definition 9.1.10.

A nonempty subset \(U\) of a vector space \(V\) is called a subspace of \(V\text{,}\) provided that \(U\) is itself a vector space when given the same addition and scalar multiplication as \(V\text{.}\)
An example to showcase this is in order.

Example 9.1.11.

In Example 9.1.8 we demonstrated that \(\mathbb{P}^2\) is a vector space. From Example 9.1.6 we know that \(\mathbb{P}\) is a vector space. But \(\mathbb{P}^2\) is a subset of \(\mathbb{P}\text{,}\) and uses the same operations of polynomial addition and scalar multiplication. Therefore \(\mathbb{P}^2\) is a subspace of \(\mathbb{P}\text{.}\)
Checking all ten properties to verify that a subset of a vector space is a subspace can be cumbersome. Fortunately we have the following theorem.

Theorem 9.1.12.

Let \(V\) be a vector space and let \(U\) be a nonempty subset of \(V\text{.}\) If \(U\) is closed under the addition and scalar multiplication of \(V\text{,}\) then \(U\) is a subspace of \(V\text{.}\)

Proof.

To prove that closure is a sufficient condition for \(U\) to be a subspace, we will need to show that closure under addition and scalar multiplication of \(V\) guarantees that the remaining eight properties are satisfied automatically.
Observe that Item 1, Item 2, Item 5, Item 6, Item 7 and Item 8 hold for all elements of \(V\text{.}\) Thus, these properties will hold for all elements of \(U\text{.}\) We say that these properties are inherited from \(V\text{.}\)
To prove Item 3 we need to show that \(\mathbf{0}\text{,}\) which we know to be an element of \(V\text{,}\) is contained in \(U\text{.}\) Let \(\mathbf{u}\) be an element of \(U\) (recall that \(U\) is nonempty). We will show that \(0\mathbf{u}=\mathbf{0}\) in \(V\text{.}\) Then, by closure under scalar multiplication, we will be able to conclude that \(0\mathbf{u}=\mathbf{0}\) must be in \(U\text{.}\)
\begin{equation*} 0\mathbf{u}=(0+0)\mathbf{u}=0\mathbf{u}+0\mathbf{u}. \end{equation*}
Adding the additive inverse of \(0\mathbf{u}\) to both sides gives us
\begin{equation*} 0\mathbf{u}+(-0\mathbf{u})=(0\mathbf{u}+0\mathbf{u})+(-0\mathbf{u}). \end{equation*}
By Item 2 and Item 4, this becomes
\begin{equation*} \mathbf{0}=0\mathbf{u}+(0\mathbf{u}+(-0\mathbf{u})). \end{equation*}
\begin{equation*} \mathbf{0}=0\mathbf{u}+\mathbf{0}=0\mathbf{u}. \end{equation*}
Because \(U\) is closed under scalar multiplication, \(0\mathbf{u}=\mathbf{0}\) is in \(U\text{.}\)
To prove Item 4, recall that every element of \(U\text{,}\) being an element of \(V\text{,}\) has an additive inverse in \(V\text{.}\) We need to show that the additive inverse of every element of \(U\) is contained in \(U\text{.}\) Let \(\mathbf{u}\) be any element of \(U\text{.}\) We will show that \((-1)\mathbf{u}\) is the additive inverse of \(\mathbf{u}\text{.}\) Then by closure, \((-1)\mathbf{u}\) will have to be contained in \(U\text{.}\) To show that \((-1)\mathbf{u}\) is the additive inverse of \(\mathbf{u}\text{,}\) we must show that \(\mathbf{u}+(-1)\mathbf{u}=\mathbf{0}\text{.}\) We compute:
\begin{equation*} \mathbf{u}+(-1)\mathbf{u}=1\mathbf{u}+(-1)\mathbf{u}=(1+(-1))\mathbf{u}=0\mathbf{u}=\mathbf{0}. \end{equation*}
Thus \((-1)\mathbf{u}\) is the additive inverse of \(\mathbf{u}\text{.}\) By closure, \((-1)\mathbf{u}\) is in \(U\text{.}\)
Let us put the theorem to work in an example.

Example 9.1.13.

Let \(A\) be a fixed matrix in \(\mathbb{M}_{n,n}\text{.}\) Show that the set \(C_A\) of all \(n\times n\) matrices that commute with \(A\) under matrix multiplication is a subspace of \(\mathbb{M}_{n,n}\text{.}\)
Answer.
The set \(C_A\) consists of all \(n\times n\) matrices \(X\) such that \(AX=XA\text{.}\) First, observe that \(C_A\) is not empty because \(I_n\) is an element. Now we need to show that \(C_A\) is closed under matrix addition and scalar multiplication. Suppose that \(X_1\) and \(X_{2}\) lie in \(C_A\text{,}\) so that \(AX_1 = X_1A\) and \(AX_{2} = X_{2}A\text{.}\) Then
\begin{equation*} A(X_1 + X_2) = AX_1 + AX_2 = X_1A + X_2A = (X_1 + X_2)A. \end{equation*}
Therefore \((X_1+X_2)\) commutes with \(A\text{.}\) Thus \((X_1+X_2)\) is in \(C_A\text{.}\) We conclude that \(C_A\) is closed under matrix addition. Now suppose \(X\) is in \(C_A\text{.}\) Let \(k\) be a scalar, then
\begin{equation*} A(kX)= k(AX) = k(XA) = (kX)A. \end{equation*}
Therefore \((kX)\) commutes with \(A\text{.}\) We conclude that \((kX)\) is in \(C_A\text{,}\) and \(C_A\) is closed under scalar multiplication. Hence \(C_A\) is a subspace of \(\mathbb{M}_{n,n}\text{.}\)
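As an informal check of this closure argument, the following numpy sketch builds two matrices that commute with a random \(A\) (powers and polynomials of \(A\) always commute with \(A\text{,}\) which gives us sample elements; the specific choices are our own) and verifies that their sum and scalar multiples still commute with \(A\text{:}\)

```python
# Spot-check of closure for C_A with random data.
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))

X1 = 2 * np.eye(n) + A      # a polynomial in A, hence in C_A
X2 = A @ A - A              # another polynomial in A, hence in C_A
k = 1.7

assert np.allclose(A @ (X1 + X2), (X1 + X2) @ A)  # closed under addition
assert np.allclose(A @ (k * X1), (k * X1) @ A)    # closed under scalar mult.
print("closure checks passed")
```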

Remark 9.1.14.

Suppose \(p(x)\) is a polynomial and \(a\) is a number. Then the number \(p(a)\) obtained by replacing \(x\) by \(a\) in the expression for \(p(x)\) is called the evaluation of \(p(x)\) at \(a\text{.}\) For example, if \(p(x) = 5 - 6x + 2x^{2}\text{,}\) then the evaluation of \(p(x)\) at \(a = 2\) is
\begin{equation*} p(2) = 5 - 12 + 8 = 1. \end{equation*}
If \(p(a) = 0\text{,}\) the number \(a\) is called a root of \(p(x)\text{.}\)
To get used to the new terminology, let us look at an example in the context of polynomials.

Example 9.1.15.

Consider the set \(U\) of all polynomials in \(\mathbb{P}\) that have \(3\) as a root:
\begin{equation*} U = \lbrace p(x) \in \mathbb{P} : p(3) = 0 \rbrace. \end{equation*}
Show that \(U\) is a subspace of \(\mathbb{P}\text{.}\)
Answer.
Observe that \(U\) is not empty because \(r(x)=x-3\) is an element of \(U\text{.}\) Suppose \(p(x)\) and \(q(x)\) lie in \(U\text{.}\) Then \(p(3) = 0\) and \(q(3) = 0\text{.}\) We have
\begin{equation*} (p + q)(x) = p(x) + q(x) \end{equation*}
for all \(x\text{,}\) so
\begin{equation*} (p + q)(3) = p(3) + q(3) = 0 + 0 = 0, \end{equation*}
and \(U\) is closed under addition. The verification that \(U\) is closed under scalar multiplication is similar.
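A quick sympy sketch (with polynomials of our own choosing) illustrates that the defining condition \(p(3)=0\) survives both operations:

```python
# The condition p(3) = 0 is preserved by addition and scalar multiplication;
# here we check it for two sample elements of U.
from sympy import symbols, expand

x = symbols('x')
p = (x - 3) * (x + 1)   # p(3) = 0, so p is in U
q = 5 * (x - 3)         # q(3) = 0, so q is in U

print(expand(p + q).subs(x, 3))   # 0: the sum has 3 as a root
print(expand(4 * p).subs(x, 3))   # 0: so does any scalar multiple
```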

Subsection 9.1.4 Linear Combinations and Span

Definition 9.1.16.

Let \(V\) be a vector space and let \(\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_n\) be vectors in \(V\text{.}\) A vector \(\mathbf{v}\) is said to be a linear combination of vectors \(\mathbf{v}_1, \mathbf{v}_2,\ldots, \mathbf{v}_n\) if
\begin{equation*} \mathbf{v}=a_1\mathbf{v}_1+ a_2\mathbf{v}_2+\ldots + a_n\mathbf{v}_n, \end{equation*}
for some scalars \(a_1, a_2, \ldots ,a_n\text{.}\)

Definition 9.1.17.

Let \(V\) be a vector space and let \(\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_p\) be vectors in \(V\text{.}\) The set \(S\) of all linear combinations of
\begin{equation*} \mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_p \end{equation*}
is called the span of \(\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_p\text{.}\) We write
\begin{equation*} S=\mbox{span}(\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_p) \end{equation*}
and we say that vectors \(\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_p\) span \(S\text{.}\) Any vector in \(S\) is said to be in the span of \(\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_p\text{.}\) The set
\begin{equation*} \{\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_p\} \end{equation*}
is called a spanning set for \(S\text{.}\)
We revisit the situation for some specific polynomials.

Example 9.1.18.

Consider \(p_{1} = 1 + x + 4x^{2}\) and \(p_{2} = 1 + 5x + x^{2}\) in \(\mathbb{P}^{2}\text{.}\) Determine whether \(p_{1}\) and \(p_{2}\) lie in
\begin{equation*} \mbox{span}\{1 + 2x - x^{2}, 3 + 5x + 2x^{2}\}. \end{equation*}
Answer.
For \(p_{1}\text{,}\) we want to determine if \(a\) and \(b\) exist such that
\begin{equation*} p_1 = a(1 + 2x - x^2) + b(3 + 5x + 2x^2). \end{equation*}
Expanding the right hand side gives us:
\begin{equation*} a+2ax-ax^2+3b+5bx+2bx^2. \end{equation*}
Combining like terms, we get:
\begin{equation*} (a+3b)+(2a+5b)x+(-a+2b)x^2. \end{equation*}
Setting this equal to \(p_{1} = 1 + x + 4x^{2}\) and equating coefficients of powers of \(x\) gives us a system of equations
\begin{equation*} 1 = a + 3b,\quad 1 = 2a + 5b, \quad \mbox{ and } \quad 4 = -a + 2b. \end{equation*}
This system has the solution \(a = -2\) and \(b = 1\text{,}\) so \(p_{1}\) is indeed in \(\mbox{span}\{1 + 2x - x^{2}, 3 + 5x + 2x^{2}\}\text{.}\) Turning to \(p_{2} = 1 + 5x + x^{2}\text{,}\) we are looking for \(a\) and \(b\) such that
\begin{equation*} p_{2} = a(1 + 2x - x^{2}) + b(3 + 5x + 2x^{2}). \end{equation*}
Again equating coefficients of powers of \(x\) gives equations \(1 = a + 3b\text{,}\) \(5 = 2a + 5b\text{,}\) and \(1 = -a + 2b\text{.}\) But in this case there is no solution, so \(p_{2}\) is not in \(\mbox{span}\{1 + 2x - x^{2}, 3 + 5x + 2x^{2}\}\text{.}\)
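The two coefficient systems above can be reproduced with sympy's linsolve; an empty solution set signals that the target polynomial lies outside the span. This is a sketch, with the equations transcribed from the example:

```python
# Coefficient systems from the example, solved with sympy's linsolve.
from sympy import symbols, linsolve

a, b = symbols('a b')

# For p1 = 1 + x + 4x^2: a + 3b = 1, 2a + 5b = 1, -a + 2b = 4.
print(linsolve([a + 3*b - 1, 2*a + 5*b - 1, -a + 2*b - 4], a, b))
# {(-2, 1)}: p1 is in the span

# For p2 = 1 + 5x + x^2: a + 3b = 1, 2a + 5b = 5, -a + 2b = 1.
print(linsolve([a + 3*b - 1, 2*a + 5*b - 5, -a + 2*b - 1], a, b))
# EmptySet: p2 is not in the span
```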

Theorem 9.1.19.

Let \(V\) be a vector space and let \(\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_p\) be vectors in \(V\text{.}\) Then \(S=\mbox{span}(\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_p)\) is a subspace of \(V\text{.}\)

Proof.

The set \(S\) is nonempty because it contains \(\mathbf{0}=0\mathbf{v}_1+0\mathbf{v}_2+\ldots +0\mathbf{v}_p\text{.}\) The sum of two linear combinations of \(\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_p\) is again a linear combination of \(\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_p\text{,}\) as is any scalar multiple of such a linear combination. Thus \(S\) is closed under addition and scalar multiplication, and by Theorem 9.1.12, \(S\) is a subspace of \(V\text{.}\)

Subsection 9.1.5 Bases and Dimension of Abstract Vector Spaces

When working with \(\R^n\) and subspaces of \(\R^n\) we developed several fundamental ideas including span, linear independence, bases and dimension. We will find that these concepts generalize easily to abstract vector spaces and that analogous results hold in these new settings.

Definition 9.1.20. Linear Independence.

Let \(V\) be a vector space. Let \(\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_p\) be vectors of \(V\text{.}\) We say that the set \(\{\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_p\}\) is linearly independent if the only solution to
\begin{equation*} a_1\mathbf{v}_1+a_2\mathbf{v}_2+\ldots +a_p\mathbf{v}_p=\mathbf{0} \end{equation*}
is the trivial solution \(a_1=a_2=\ldots =a_p=0\text{.}\)
If, in addition to the trivial solution, a non-trivial solution (not all \(a_1, a_2,\ldots ,a_p\) are zero) exists, then we say that the set \(\{\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_p\}\) is linearly dependent.
Let us examine this abstract version of linear independence in the context of polynomials, to get a feel for the concept.

Example 9.1.21.

Show that \(P=\{1 + x, 3x + x^{2}, 2 + x - x^{2}\}\) is linearly independent in \(\mathbb{P}^{2}\text{.}\)
Answer.
Consider the linear combination equation
\begin{align*} a(1 + x) + b(3x + x^2) + c(2 + x - x^2) \amp = 0 \\ a+ax+3bx+bx^2+2c+cx-cx^2\amp =0 \\ (a+2c)+(a+3b+c)x+(b-c)x^2\amp =0 \end{align*}
The constant term, as well as the coefficients in front of \(x\) and \(x^2\text{,}\) must be equal to \(0\text{.}\) This gives us the following system of equations.
\begin{equation*} \begin{array}{rcrcrcr} a \amp \amp \amp + \amp 2c \amp = \amp 0 \\ a \amp + \amp 3b \amp + \amp c \amp = \amp 0 \\ \amp \amp b \amp - \amp c \amp = \amp 0 \end{array} \end{equation*}
The only solution is \(a = b = c = 0\text{.}\) We conclude that \(P\) is linearly independent in \(\mathbb{P}^2\text{.}\)
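The same system can be handed to sympy's linsolve; obtaining only the trivial solution confirms the computation (a sketch, with the equations transcribed from above):

```python
# The homogeneous system from the example; only the trivial solution appears.
from sympy import symbols, linsolve

a, b, c = symbols('a b c')
system = [a + 2*c,       # constant term
          a + 3*b + c,   # coefficient of x
          b - c]         # coefficient of x^2
print(linsolve(system, a, b, c))   # {(0, 0, 0)}: P is linearly independent
```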

Subsection 9.1.6 Bases and Dimension

Recall that our motivation for defining a basis of a subspace of \(\R^n\) was to have a collection of vectors such that every vector of the subspace can be expressed as a unique linear combination of the vectors in that collection. The definition of a basis (Definition 5.2.4) generalizes to abstract vector spaces as follows.

Definition 9.1.22.

Let \(V\) be a vector space. A set \(\mathcal{B}\) of vectors of \(V\) is called a basis of \(V\) provided that
  1. \(\displaystyle \mbox{span}(\mathcal{B})=V\)
  2. \(\mathcal{B}\) is linearly independent.

Theorem 9.1.23.

Let \(V\) be a vector space and let \(\mathcal{B}=\{\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_n\}\) be a basis of \(V\text{.}\) Then every vector \(\mathbf{v}\) of \(V\) can be written as a linear combination of \(\mathbf{v}_1, \mathbf{v}_2,\ldots ,\mathbf{v}_n\) in a unique way.

Proof.

By the definition of a basis, we know that \(\mathbf{v}\) can be written as a linear combination of \(\mathbf{v}_1, \mathbf{v}_2,\ldots,\mathbf{v}_n\text{.}\) Suppose there are two such representations. Then,
\begin{equation*} \mathbf{v}=a_1\mathbf{v}_1+ a_2\mathbf{v}_2+\ldots+a_n\mathbf{v}_n \end{equation*}
\begin{equation*} \mathbf{v}=b_1\mathbf{v}_1+ b_2\mathbf{v}_2+\ldots+b_n\mathbf{v}_n \end{equation*}
But then we have:
\begin{align*} a_1\mathbf{v}_1+ a_2\mathbf{v}_2+\ldots+a_n\mathbf{v}_n \amp =b_1\mathbf{v}_1+ b_2\mathbf{v}_2+\ldots+b_n\mathbf{v}_n \\ a_1\mathbf{v}_1+ a_2\mathbf{v}_2+\ldots+a_n\mathbf{v}_n-(b_1\mathbf{v}_1+ b_2\mathbf{v}_2+\ldots+b_n\mathbf{v}_n)\amp =\mathbf{0} \\ (a_1-b_1)\mathbf{v}_1+ (a_2-b_2)\mathbf{v}_2+\ldots+(a_n-b_n)\mathbf{v}_n\amp =\mathbf{0} \end{align*}
Because \(\mathbf{v}_1, \mathbf{v}_2,\ldots,\mathbf{v}_n\) are linearly independent, we have \(a_i-b_i=0\) for \(1\leq i\leq n\text{.}\) Consequently \(a_i=b_i\) for \(1\leq i\leq n\text{.}\)
In chapter \(5\text{,}\) we defined the dimension of a subspace of \(\R^n\) to be the number of elements in a basis (Definition 5.2.14). We will adopt this definition for abstract vector spaces. As before, to ensure that dimension is well-defined we need to establish that this definition is independent of our choice of a basis. The proof of the following theorem is identical to the proof of its counterpart in \(\R^n\) (Theorem 5.2.13).

Theorem 9.1.24.

Any two bases of a vector space \(V\) contain the same number of vectors.

Now we can state the definition of dimension for abstract vector spaces.

Definition 9.1.25.

Let \(V\) be a subspace of a vector space \(W\text{.}\) The dimension of \(V\) is the number, \(m\text{,}\) of elements in any basis of \(V\text{.}\) We write
\begin{equation*} \mbox{dim}(V)=m. \end{equation*}
In our discussions up to this point, we have always assumed that a basis is nonempty and hence that the dimension of the space is at least \(1\text{.}\) However, the zero space \(\{\mathbf{0}\}\) has no basis. To accommodate this, we define the zero vector space \(\{\mathbf{0}\}\) to have dimension \(0\text{:}\)
\begin{equation*} \mbox{dim }\{\mathbf{0}\} = 0. \end{equation*}
Our insistence that \(\mbox{dim}\{\mathbf{0}\} = 0\) amounts to saying that the empty set of vectors is a basis of \(\{\mathbf{0}\}\text{.}\) Thus the statement that "the dimension of a vector space is the number of vectors in any basis" holds even for the zero space.

Example 9.1.26.

Recall that the vector space \(\mathbb{M}_{m,n}\) consists of all \(m\times n\) matrices (see Example 9.1.2). Find a basis and the dimension of \(\mathbb{M}_{m,n}\text{.}\)
Answer.
Let \(\mathcal{B}\) consist of \(m\times n\) matrices with exactly one entry equal to \(1\) and all other entries equal to \(0\text{.}\) It is clear that every \(m\times n\) matrix can be written as a linear combination of elements of \(\mathcal{B}\text{.}\) It is also easy to see that the elements of \(\mathcal{B}\) are linearly independent. Thus \(\mathcal{B}\) is a basis for \(\mathbb{M}_{m,n}\text{.}\) The set \(\mathcal{B}\) contains \(mn\) elements, so \(\mbox{dim}(\mathbb{M}_{m,n})=mn\text{.}\)

Example 9.1.27.

Recall that \(\mathbb{P}^n\) is the set of all polynomials of degree \(n\) or less (see Example 9.1.9). Show that \(\mbox{dim}( \mathbb{P}^{n}) = n + 1\) and that
\begin{equation*} \lbrace 1, x, x^{2}, \dots, x^{n} \rbrace \end{equation*}
is a basis of \(\mathbb{P}^{n}\text{.}\)
Answer.
Each polynomial
\begin{equation*} p(x) = a_{0} + a_{1}x + \ldots + a_{n}x^{n}, \quad \text{in } \mathbb{P}^{n}, \end{equation*}
is clearly a linear combination of \(1, x, \dots, x^{n}\text{,}\) so
\begin{equation*} \mathbb{P}^{n} = \mbox{span} \lbrace 1, x, \dots, x^{n} \rbrace. \end{equation*}
If \(a_{0}1 + a_{1}x + \dots + a_{n}x^{n} = 0\text{,}\) then \(a_{0} = a_{1} = \ldots = a_{n} = 0\) by equality of polynomials. So \(\{1, x, \dots, x^{n}\}\) is linearly independent and is therefore a basis containing \(n + 1\) vectors. Thus, \(\mbox{dim}(\mathbb{P}^{n}) = n + 1\text{.}\)

Example 9.1.28.

Consider the subset
\begin{equation*} C_A = \lbrace X \in\mathbb{M}_{2,2} : AX = XA \rbrace \end{equation*}
of \(\mathbb{M}_{2,2}\text{.}\) It was shown in Example 9.1.13 that \(C_A\) is a subspace for any choice of the matrix \(A\text{.}\) Let
\begin{equation*} A = \begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix}. \end{equation*}
Show that \(\mbox{dim}(C_A) = 2\) and find a basis of \(C_A\text{.}\)
Answer.
Suppose
\begin{equation*} X = \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \end{equation*}
is in \(C_A\text{.}\) Then
\begin{equation*} \begin{bmatrix}1\amp 1\\0\amp 0\end{bmatrix}\begin{bmatrix}a\amp b\\c\amp d\end{bmatrix}=\begin{bmatrix}a\amp b\\c\amp d\end{bmatrix}\begin{bmatrix}1\amp 1\\0\amp 0\end{bmatrix}. \end{equation*}
Multiplying out both sides gives us
\begin{equation*} \begin{bmatrix}a+c\amp b+d\\0\amp 0\end{bmatrix}=\begin{bmatrix}a\amp a\\c\amp c\end{bmatrix}. \end{equation*}
This gives us two relationships:
\begin{equation*} b+d=a\quad\text{and}\quad c=0. \end{equation*}
We can now express a generic element \(X\) of \(C_A\) as
\begin{align*} X=\begin{bmatrix}a\amp b\\c\amp d\end{bmatrix} \amp = \begin{bmatrix}b+d\amp b\\0\amp d\end{bmatrix} \\ \amp =\begin{bmatrix}b\amp b\\0\amp 0\end{bmatrix}+\begin{bmatrix}d\amp 0\\0\amp d\end{bmatrix} \\ \amp =b\begin{bmatrix}1\amp 1\\0\amp 0\end{bmatrix}+d\begin{bmatrix}1\amp 0\\0\amp 1\end{bmatrix}. \end{align*}
Let
\begin{equation*} \mathcal{B}=\left \lbrace \begin{bmatrix}1\amp 1\\0\amp 0\end{bmatrix},\begin{bmatrix}1\amp 0\\0\amp 1\end{bmatrix}\right \rbrace. \end{equation*}
The set \(\mathcal{B}\) is linearly independent (see Exercise 9.1.9.15). Every element \(X\) of \(C_A\) can be written as a linear combination of elements of \(\mathcal{B}\text{,}\) so \(C_A=\mbox{span}(\mathcal{B})\text{.}\) Therefore \(\mathcal{B}\) is a basis of \(C_A\text{,}\) and \(\mbox{dim}(C_A) = 2\text{.}\)
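The computation in this example can also be automated. The following sympy sketch solves \(AX=XA\) entrywise and exhibits the same two free parameters \(b\) and \(d\) found above:

```python
# Solving AX = XA entrywise for this particular A with sympy.
from sympy import symbols, Matrix, solve

a, b, c, d = symbols('a b c d')
A = Matrix([[1, 1], [0, 0]])
X = Matrix([[a, b], [c, d]])

eqs = list(A * X - X * A)          # the four entrywise equations
print(solve(eqs, [a, b, c, d]))    # {a: b + d, c: 0}: two free parameters
```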

Example 9.1.29.

In Exercise 9.1.9.10 you demonstrated that the set of all symmetric \(n\times n\) matrices is a subspace of \(\mathbb{M}_{n,n}\text{.}\) Let \(V\) be the subspace of \(\mathbb{M}_{2,2}\) consisting of all \(2\times 2\) symmetric matrices. Find the dimension of \(V\text{.}\)
Answer.
A matrix \(A\) is symmetric if \(A^{T} = A\text{.}\) In other words, a matrix \(A\) is symmetric when entries directly across the main diagonal are equal, so each \(2 \times 2\) symmetric matrix has the form
\begin{equation*} \begin{bmatrix} a \amp c \\ c \amp b \end{bmatrix} = a\begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix} + b\begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix} + c\begin{bmatrix} 0 \amp 1 \\ 1 \amp 0 \end{bmatrix}. \end{equation*}
Hence the set
\begin{equation*} \mathcal{B} = \left \lbrace \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix}, \begin{bmatrix} 0 \amp 1 \\ 1 \amp 0 \end{bmatrix} \right \rbrace \end{equation*}
spans \(V\text{.}\) The reader can verify that \(\mathcal{B}\) is linearly independent. Thus \(\mathcal{B}\) is a basis of \(V\text{,}\) so \(\mbox{dim}(V) = 3\text{.}\)
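One way to carry out that verification is to flatten each matrix into a vector in \(\R^4\) and compute a rank, as in the following numpy sketch:

```python
# Flatten each basis candidate into R^4 and check the rank.
import numpy as np

B1 = np.array([[1, 0], [0, 0]])
B2 = np.array([[0, 0], [0, 1]])
B3 = np.array([[0, 1], [1, 0]])

M = np.stack([B1.ravel(), B2.ravel(), B3.ravel()])
print(np.linalg.matrix_rank(M))   # 3: the three matrices are independent
```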

Subsection 9.1.7 Finite-Dimensional Vector Spaces

Our definition of dimension of a vector space depends on the vector space having a basis. In this section we will establish that any vector space spanned by finitely many vectors has a basis.

Definition 9.1.30.

A vector space is said to be finite-dimensional if it is spanned by finitely many vectors.
Given a finite-dimensional vector space \(V\) we will find a basis for \(V\) by starting with a linearly independent subset of \(V\) and expanding it to a basis. The following results are more general versions of Lemma 5.2.16, Lemma 5.2.17, and Theorem 5.2.13. The proofs are identical, and we omit them.

Subsection 9.1.8 Coordinate Vectors

Recall that in the context of \(\R^n\) (and subspaces of \(\R^n\)) the requirement that elements of a basis be linearly independent guarantees that every element of the vector space has a unique representation in terms of the elements of the basis. (See Theorem 5.2.1 of Introduction to Bases, ximera.osu.edu/oerlinalg/LinearAlgebra/VSP-0030/main.) We proved the same property for abstract vector spaces in Theorem 9.1.23.
Uniqueness of representation in terms of the elements of a basis allows us to associate every element of a vector space \(V\) with a unique coordinate vector with respect to a given basis. Coordinate vectors were first introduced in Introduction to Bases. We now give a formal definition.

Definition 9.1.34.

Let \(V\) be a vector space, and let \(\mathcal{B}=\{\mathbf{v}_1, \ldots ,\mathbf{v}_n\}\) be a basis for \(V\text{.}\) If \(\mathbf{v}=a_1\mathbf{v}_1+\ldots +a_n\mathbf{v}_n\text{,}\) then the vector in \(\R^n\) whose components are the coefficients \(a_1, \ldots ,a_n\) is said to be the coordinate vector for \(\mathbf{v}\) with respect to \(\mathcal{B}\text{.}\) We denote the coordinate vector by \([\mathbf{v}]_{\mathcal{B}}\) and write:
\begin{equation*} [\mathbf{v}]_{\mathcal{B}}=\begin{bmatrix}a_1\\\vdots \\a_n\end{bmatrix}. \end{equation*}

Remark 9.1.35.

The order in which the vectors \(\mathbf{v}_1, \ldots ,\mathbf{v}_n\) appear in \(\mathcal{B}\) of Definition 9.1.34 is important. Switching the order of these vectors would switch the order of the components of the coordinate vector. For this reason, we will often use the term ordered basis to describe \(\mathcal{B}\) in the context of coordinate vectors.
Coordinate vectors may seem abstract as described above. In examples, however, one can nearly always pinpoint exactly what the coordinates are. Some examples will emphasize this:

Example 9.1.36.

The coordinate vector for \(p(x)=4-3x^2+5x^3\) in \(\mathbb{P}^4\) with respect to the ordered basis \(\mathcal{B}_1=\{1, x, x^2, x^3, x^4\}\) is
\begin{equation*} [p(x)]_{\mathcal{B}_1}=\begin{bmatrix}4\\0\\-3\\5\\0\end{bmatrix}. \end{equation*}
Now let’s change the order of the elements in \(\mathcal{B}_1\text{.}\) The coordinate vector for \(p(x)=4-3x^2+5x^3\) with respect to the ordered basis \(\mathcal{B}_2=\{x^4, x^3, x^2, x, 1\}\) is
\begin{equation*} [p(x)]_{\mathcal{B}_2}=\begin{bmatrix}0\\5\\-3\\0\\4\end{bmatrix}. \end{equation*}

Example 9.1.37.

Show that the set \(\mathcal{B}=\{x, 1+x, x+x^2\}\) is a basis for \(\mathbb{P}^2\text{.}\) Keep the order of elements in \(\mathcal{B}\) and find the coordinate vector for \(p(x)=4-x+3x^2\) with respect to the ordered basis \(\mathcal{B}\text{.}\)
Answer.
We will begin by showing that the elements of \(\mathcal{B}\) are linearly independent. Suppose
\begin{equation*} ax+b(1+x)+c(x+x^2)=0. \end{equation*}
Then
\begin{equation*} b+(a+b+c)x+cx^2=0. \end{equation*}
This gives us the following system of equations:
\begin{equation*} \begin{array}{ccccccc} \amp \amp b\amp \amp \amp =\amp 0 \\ a \amp +\amp b\amp +\amp c\amp = \amp 0 \\ \amp \amp \amp \amp c\amp =\amp 0 \end{array} \end{equation*}
The solution \(a=b=c=0\) is unique. We conclude that \(\mathcal{B}\) is linearly independent.
Next, we need to show that \(\mathcal{B}\) spans \(\mathbb{P}^2\text{.}\) To this end, we will consider a generic element \(p(x)=\alpha+\beta x+\gamma x^2\) of \(\mathbb{P}^2\) and attempt to express it as a linear combination of the elements of \(\mathcal{B}\text{.}\)
\begin{equation*} ax+b(1+x)+c(x+x^2)=\alpha+\beta x+\gamma x^2. \end{equation*}
Then
\begin{equation*} b+(a+b+c)x+cx^2=\alpha+\beta x+\gamma x^2. \end{equation*}
Setting the coefficients of like terms equal to each other gives us
\begin{equation*} \begin{array}{ccccccc} \amp \amp b\amp \amp \amp =\amp \alpha\\ a \amp +\amp b\amp +\amp c\amp = \amp \beta \\ \amp \amp \amp \amp c\amp =\amp \gamma \end{array} \end{equation*}
Solving this linear system for \(a\text{,}\) \(b\) and \(c\) gives us
\begin{equation*} a=\beta-\alpha-\gamma,\quad b=\alpha,\quad c=\gamma . \end{equation*}
(You should verify this.) This shows that every element of \(\mathbb{P}^2\) can be written as a linear combination of elements of \(\mathcal{B}\text{.}\) Therefore \(\mathcal{B}\) is a basis for \(\mathbb{P}^2\text{.}\) To find the coordinate vector for \(p(x)=4-x+3x^2\) with respect to \(\mathcal{B}\) we need to express \(p(x)\) as a linear combination of the elements of \(\mathcal{B}\text{.}\) Fortunately, we have already done all the necessary work. For \(p(x)\text{,}\) \(\alpha=4\text{,}\) \(\beta=-1\) and \(\gamma=3\text{.}\) This gives us the coefficients of the linear combination: \(a=\beta-\alpha-\gamma=-8\text{,}\) \(b=\alpha=4\text{,}\) \(c=\gamma=3\text{.}\) We now write \(p(x)\) as a linear combination
\begin{equation*} p(x)=-8(x)+4(1+x)+3(x+x^2). \end{equation*}
The coordinate vector for \(p(x)\) with respect to \(\mathcal{B}\) is
\begin{equation*} [p(x)]_{\mathcal{B}}=\begin{bmatrix}-8\\4\\3\end{bmatrix}. \end{equation*}
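The coefficient system of this example can also be solved mechanically. The following sympy sketch (equations transcribed from above, with \(\alpha=4\text{,}\) \(\beta=-1\text{,}\) \(\gamma=3\)) recovers the same coordinate vector:

```python
# b + (a + b + c) x + c x^2 = 4 - x + 3 x^2, solved for a, b, c.
from sympy import symbols, linsolve

a, b, c = symbols('a b c')
print(linsolve([b - 4, a + b + c + 1, c - 3], a, b, c))
# {(-8, 4, 3)}: [p]_B = (-8, 4, 3)
```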

Example 9.1.38.

Recall that the set \(V\) of all symmetric \(2\times 2\) matrices is a subspace of \(\mathbb{M}_{2,2}\text{.}\) In Example 9.1.29, we demonstrated that
\begin{equation*} \mathcal{B} = \left \lbrace \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix}, \begin{bmatrix} 0 \amp 1 \\ 1 \amp 0 \end{bmatrix} \right \rbrace \end{equation*}
is a basis for \(V\text{.}\) Let \(A=\begin{bmatrix}2\amp -3\\-3\amp 1\end{bmatrix}\text{.}\) Observe that \(A\) is an element of \(V\text{.}\)
  1. Find the coordinate vector with respect to the ordered basis \(\mathcal{B}\) for \(A\text{.}\)
  2. Let
    \begin{equation*} \mathcal{B}'=\left \lbrace \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix}, \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 1 \\ 1 \amp 0 \end{bmatrix} \right \rbrace \end{equation*}
    be another ordered basis for \(V\text{.}\) Find the coordinate vector for \(A\) with respect to \(\mathcal{B}'\text{.}\)
Answer.
We write \(A\) as a linear combination of the elements of \(\mathcal{B}\text{.}\)
\begin{equation*} A=\begin{bmatrix}2\amp -3\\-3\amp 1\end{bmatrix}=2\begin{bmatrix}1\amp 0\\0\amp 0\end{bmatrix}+\begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix}-3\begin{bmatrix} 0 \amp 1 \\ 1 \amp 0 \end{bmatrix}. \end{equation*}
Thus, the coordinate vector with respect to \(\mathcal{B}\) is
\begin{equation*} [A]_{\mathcal{B}}=\begin{bmatrix}2\\1\\-3\end{bmatrix}. \end{equation*}
The coordinate vector with respect to \(\mathcal{B}'\) is
\begin{equation*} [A]_{\mathcal{B}'}=\begin{bmatrix}1\\2\\-3\end{bmatrix}. \end{equation*}
Coordinate vectors will play a vital role in establishing one of the most fundamental results in linear algebra, that all \(n\)-dimensional vector spaces have the same structure as \(\R^n\text{.}\) In Example 9.3.4, for instance, we will show that \(\mathbb{P}^2\) is essentially the same as \(\R^3\text{.}\)

Exercises 9.1.9 Exercises

Exercise Group.

Is the set of all points in \(\mathbb{R}^2\) a vector space under the given definitions of addition and scalar multiplication? In each case be specific about which vector space properties hold and which properties fail.
1.
Addition:
\begin{equation*} (a, b)+(c, d)=(a+d, b+c) \end{equation*}
and scalar multiplication:
\begin{equation*} k(a, b)=(ka, kb). \end{equation*}
2.
Addition:
\begin{equation*} (a, b)+(c, d)=(0, b+d) \end{equation*}
and scalar multiplication:
\begin{equation*} k(a, b)=(ka, kb). \end{equation*}
3.
Addition:
\begin{equation*} (a, b)+(c, d)=(a+c, b+d) \end{equation*}
and scalar multiplication:
\begin{equation*} k(a, b)=(a, kb). \end{equation*}
4.
Addition:
\begin{equation*} (a, b)+(c, d)=(a-c, b-d) \end{equation*}
and scalar multiplication:
\begin{equation*} k(a, b)=(ka, kb). \end{equation*}

5.

Let \(\mathcal{F}\) be the set of all real-valued functions whose domain is all real numbers. Define addition and scalar multiplication as follows:
\begin{equation*} (f+g)(x)=f(x)+g(x)\quad\text{and}\quad (cf)(x)=cf(x). \end{equation*}
Verify that \(\mathcal{F}\) is a vector space.

6.

A differential equation is an equation that contains derivatives. Consider the differential equation:
\begin{equation} f''+f=0.\tag{9.1.1} \end{equation}
A solution to such an equation is a function.
  1. Verify that \(f(x)=\sin x\) is a solution to (9.1.1).
  2. Is \(f(x)=2\sin x\) a solution?
  3. Is \(f(x)=\cos x\) a solution?
  4. Is \(f(x)=\sin x+\cos x\) a solution?
  5. Let \(S\) be the set of all solutions to (9.1.1). Prove that \(S\) is a vector space.

7.

In this problem we will check that the set \(\mathbb{C}\) of all complex numbers is in fact a vector space. Let \(z_1 = a_1 + b_1 i\) be a complex number. Similarly, let \(z_2 = a_2 + b_2 i\text{,}\) \(z_3 = a_3 + b_3 i\) be complex numbers, and let \(k\) and \(p\) be real scalars. Check that complex numbers are closed under addition and scalar multiplication, and that they satisfy each of the eight vector space properties.

8.

Refer to Example 9.1.13 and describe all elements of \(C_I\text{,}\) where \(I\) is a \(3\times 3\) identity matrix.

9.

Is the subset of all invertible \(n\times n\) matrices a subspace of \(\mathbb{M}_{n,n}\text{?}\) Prove your claim.

10.

Is the subset of all symmetric \(n\times n\) matrices a subspace of \(\mathbb{M}_{n,n}\text{?}\) (See Definition 4.1.24.) Prove your claim.

11.

Let \(Z\) be a subset of \(\mathbb{M}_{n,n}\) that consists of \(n\times n\) matrices that commute with every matrix in \(\mathbb{M}_{n,n}\) under matrix multiplication. In other words,
\begin{equation*} Z= \lbrace B : BY=YB \mbox{ for all } Y \in \mathbb{M}_{n,n} \rbrace. \end{equation*}
Is \(Z\) a subspace of \(\mathbb{M}_{n,n}\text{?}\)
Hint.
Don’t forget to check that \(Z\) is not empty!

12.

List several elements of
\begin{equation*} \mbox{span}\left(\begin{bmatrix}1\amp 0\\0\amp 1\end{bmatrix}, \begin{bmatrix}0\amp 1\\1\amp 0\end{bmatrix}\right). \end{equation*}
Suggest a spanning set for \(\mathbb{M}_{2,2}\text{.}\)

13.

Describe what a typical element of \(\mbox{span}(1, x, x^2, x^3)\) looks like.

15.

Prove that set
\begin{equation*} \mathcal{B}=\left\{\begin{bmatrix}1\amp 1\\0\amp 0\end{bmatrix},\begin{bmatrix}1\amp 0\\0\amp 1\end{bmatrix}\right\} \end{equation*}
of Example 9.1.28 is linearly independent.

Exercise Group.

Show that each of the following sets of vectors is linearly independent.
16.
\begin{equation*} \lbrace 1 + x, 1 - x, x + x^{2} \rbrace \quad \text{in } \mathbb{P}^{2}. \end{equation*}
17.
\begin{equation*} \lbrace x^{2}, x + 1, 1 - x - x^{2} \rbrace \quad \text{in } \mathbb{P}^{2}. \end{equation*}
18.
\begin{equation*} \left \lbrace \begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix} , \begin{bmatrix} 1 \amp 0 \\ 1 \amp 0 \end{bmatrix} , \begin{bmatrix} 0 \amp 0 \\ 1 \amp -1 \end{bmatrix} ,\ \begin{bmatrix} 0 \amp 1 \\ 0 \amp 1 \end{bmatrix} \right \rbrace \quad \text{in } \mathbb{M}_{2,2}. \end{equation*}

Exercise Group.

Find the coordinate vector for \(p(x)=6-2x+4x^2\) with respect to the given ordered basis \(\mathcal{B}\) of \(\mathbb{P}^2\text{.}\)
20.
\begin{equation*} \mathcal{B}= \lbrace 1 + x, 1 - x, x + x^{2} \rbrace. \end{equation*}
Answer.
\begin{equation*} [p(x)]_{\mathcal{B}}=\begin{bmatrix}0\\6\\4\end{bmatrix}. \end{equation*}
21.
\begin{equation*} \mathcal{B}=\{x^{2}, x + 1, 1 - x - x^{2}\}. \end{equation*}
Answer.
\begin{equation*} [p(x)]_{\mathcal{B}}=\begin{bmatrix}8\\2\\4\end{bmatrix}. \end{equation*}

22.

Find the coordinate vector for
\begin{equation*} A=\begin{bmatrix}4\amp -3\\1\amp 2\end{bmatrix} \end{equation*}
with respect to the ordered basis
\begin{equation*} \mathcal{B}= \left \lbrace \begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix} , \begin{bmatrix} 1 \amp 0 \\ 1 \amp 0 \end{bmatrix} , \begin{bmatrix} 0 \amp 0 \\ 1 \amp -1 \end{bmatrix} ,\ \begin{bmatrix} 0 \amp 1 \\ 0 \amp 1 \end{bmatrix} \right \rbrace. \end{equation*}
Answer.
\begin{equation*} [A]_{\mathcal{B}}=\begin{bmatrix}-1\\5\\-4\\-2\end{bmatrix}. \end{equation*}

23.

Let \(V\) be a vector space of dimension \(3\text{.}\) Suppose \(S=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}\) is linearly independent in \(V\text{.}\) Show that \(S\) is a basis for \(V\text{.}\)