Section 4.4 The Inverse of a Matrix
Consider the equation \(2x=6\text{.}\) It takes little time to recognize that the solution to this equation is \(x=3\text{.}\) In fact, the solution is so obvious that we do not think about the algebraic steps necessary to find it. Let’s take a look at these steps in detail.
\begin{equation*}
2x=6
\end{equation*}
\begin{equation*}
\frac{1}{2}\times (2x)=\frac{1}{2}\times 6
\end{equation*}
\begin{equation*}
(\frac{1}{2}\times 2)x=3
\end{equation*}
\begin{equation*}
1x=3
\end{equation*}
\begin{equation*}
x=3
\end{equation*}
This process utilizes many properties of real-number multiplication. In particular, we make use of the existence of multiplicative inverses. Every non-zero real number \(a\) has a multiplicative inverse \(a^{-1}=\frac{1}{a}\) with the property that \(\frac{1}{a}\times a=a\times \frac{1}{a}=1\text{.}\) We say that \(1\) is the multiplicative identity because \(a\times 1=1\times a=a\text{.}\)
Given a matrix equation \(A\mathbf{x}=\mathbf{b}\text{,}\) we would like to follow a process similar to the one above to solve this matrix equation for \(\mathbf{x}\text{.}\)
Observe that the role of the multiplicative identity for \(n\times n\) square matrices is filled by \(I_n\) because \(AI_n=I_nA=A\text{.}\) Given an \(n\times n\) matrix \(A\text{,}\) a multiplicative inverse of \(A\) would have to be some \(n\times n\) matrix \(B\) such that
\begin{equation*}
BA=AB=I_n
\end{equation*}
Assuming that such an inverse \(B\) exists, this is what the process of solving the equation \(A\mathbf{x}=\mathbf{b}\) would look like:
\begin{equation*}
A\mathbf{x}=\mathbf{b}
\end{equation*}
\begin{equation*}
B(A\mathbf{x})=B\mathbf{b}
\end{equation*}
\begin{equation*}
(BA)\mathbf{x}=B\mathbf{b}
\end{equation*}
\begin{equation*}
I\mathbf{x}=B\mathbf{b}
\end{equation*}
\begin{equation*}
\mathbf{x}=B\mathbf{b}
\end{equation*}
Definition 4.4.1.
Let \(A\) be an \(n\times n\) matrix. An \(n\times n\) matrix \(B\) is called an inverse of \(A\) if
\begin{equation*}
AB=BA=I
\end{equation*}
where \(I\) is the \(n\times n\) identity matrix. If such an inverse matrix exists, we say that \(A\) is invertible. If an inverse does not exist, we say that \(A\) is not invertible.
It follows directly from the way the definition is stated that if \(B\) is an inverse of \(A\text{,}\) then \(A\) is an inverse of \(B\text{.}\) We say that \(A\) and \(B\) are inverses of each other. The following theorem shows that matrix inverses are unique.
Theorem 4.4.2.
Suppose \(A\) is an invertible matrix, and \(B\) is an inverse of \(A\text{.}\) Then \(B\) is unique.
Proof.
Because \(B\) is an inverse of \(A\text{,}\) we have:
\begin{equation*}
AB=BA=I.
\end{equation*}
Suppose there exists another \(n\times n\) matrix \(C\) such that
\begin{equation*}
AC=CA=I.
\end{equation*}
Then
\begin{equation*}
C(AB)=CI
\end{equation*}
\begin{equation*}
(CA)B=C
\end{equation*}
\begin{equation*}
IB=C
\end{equation*}
\begin{equation*}
B=C.
\end{equation*}
Now that we know that a matrix \(A\) cannot have more than one inverse, we can safely refer to the inverse of \(A\) as \(A^{-1}\text{.}\)
Example 4.4.3.
Let
\begin{equation*}
A=\begin{bmatrix}1 \amp -1\\-5 \amp 2\end{bmatrix}\quad\text{and}\quad B=\begin{bmatrix}-2/3 \amp -1/3\\-5/3 \amp -1/3\end{bmatrix}.
\end{equation*}
Verify that \(A\) and \(B\) are inverses of each other.
Answer.
We will show that \(AB=BA=I\text{.}\)
\begin{align*}
AB \amp =\begin{bmatrix}1 \amp -1\\-5 \amp 2\end{bmatrix}\begin{bmatrix}-2/3 \amp -1/3\\-5/3 \amp -1/3\end{bmatrix} \\
\amp =\begin{bmatrix}-2/3+5/3 \amp -1/3+1/3\\10/3-10/3 \amp 5/3-2/3\end{bmatrix} \\
\amp =\begin{bmatrix}1 \amp 0\\0 \amp 1\end{bmatrix},
\end{align*}
\begin{align*}
BA \amp =\begin{bmatrix}-2/3 \amp -1/3\\-5/3 \amp -1/3\end{bmatrix}\begin{bmatrix}1 \amp -1\\-5 \amp 2\end{bmatrix} \\
\amp =\begin{bmatrix}-2/3+5/3 \amp 2/3-2/3\\-5/3+5/3 \amp 5/3-2/3\end{bmatrix} \\
\amp =\begin{bmatrix}1 \amp 0\\0 \amp 1\end{bmatrix}.
\end{align*}
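These computations can be double-checked numerically. The following sketch assumes Python with NumPy is available:

```python
import numpy as np

A = np.array([[1, -1], [-5, 2]], dtype=float)
B = np.array([[-2/3, -1/3], [-5/3, -1/3]])

# Both products should equal the 2x2 identity, up to floating-point rounding.
assert np.allclose(A @ B, np.eye(2))
assert np.allclose(B @ A, np.eye(2))
```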
Example 4.4.4.
Solve the matrix equation
\begin{equation*}
\begin{bmatrix}1 \amp -1\\-5 \amp 2\end{bmatrix}\mathbf{x}=\begin{bmatrix}-3\\1\end{bmatrix}.
\end{equation*}
Answer.
We multiply both sides of the equation by the inverse of \(\begin{bmatrix}1 \amp -1\\-5 \amp 2\end{bmatrix}\text{.}\)
\begin{equation*}
\begin{bmatrix}-2/3 \amp -1/3\\-5/3 \amp -1/3\end{bmatrix}\begin{bmatrix}1 \amp -1\\-5 \amp 2\end{bmatrix}\mathbf{x}=\begin{bmatrix}-2/3 \amp -1/3\\-5/3 \amp -1/3\end{bmatrix}\begin{bmatrix}-3\\1\end{bmatrix}
\end{equation*}
\begin{equation*}
\mathbf{x}=\begin{bmatrix}5/3\\14/3\end{bmatrix}.
\end{equation*}
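The same computation can be reproduced numerically; the following sketch assumes NumPy is available:

```python
import numpy as np

A = np.array([[1, -1], [-5, 2]], dtype=float)
b = np.array([-3, 1], dtype=float)

# Multiply b by the inverse of A, exactly as in the example.
x = np.linalg.inv(A) @ b
assert np.allclose(x, [5/3, 14/3])
```

In numerical practice one would call `np.linalg.solve(A, b)` rather than forming the inverse explicitly, but the inverse-based computation mirrors the algebra above.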
We now state several useful properties of matrix inverses.
Theorem 4.4.5.
The following properties are stated for square matrices of appropriate sizes.
1. \(\displaystyle I^{-1}=I\)
2. For an invertible matrix \(A\text{,}\) \((A^{-1})^{-1}=A\text{.}\)
3. If \(A\) and \(B\) are invertible matrices, then \(AB\) is invertible and \((AB)^{-1}=B^{-1}A^{-1}\) (the shoes-and-socks rule).
4. If \(A\) is an invertible matrix and \(k\) is a non-zero number, then \((kA)^{-1}=\frac{1}{k}A^{-1}\text{.}\)
5. If \(A\) is an invertible matrix, then \(A^T\) is also invertible, and \((A^T)^{-1}=(A^{-1})^T\text{.}\)
Proof.
We will prove Item 3; the remaining properties are left as exercises. We check that \(B^{-1}A^{-1}\) is the inverse of \(AB\text{.}\)
\begin{equation*}
(B^{-1}A^{-1})(AB)=B^{-1}(A^{-1}A)B=B^{-1}IB=B^{-1}B=I
\end{equation*}
\begin{equation*}
(AB)(B^{-1}A^{-1})=A(BB^{-1})A^{-1}=AIA^{-1}=AA^{-1}=I
\end{equation*}
Thus \((AB)\) is invertible and \((AB)^{-1}=B^{-1}A^{-1}\text{.}\)
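The shoes-and-socks rule can be illustrated numerically. The sketch below assumes NumPy; random matrices are invertible with probability \(1\), so this is evidence, not a proof:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# (AB)^{-1} should agree with B^{-1} A^{-1}, with the order reversed.
lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
assert np.allclose(lhs, rhs)
```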
Subsection 4.4.1 Computing the Inverse
We now turn to the question of how to find the inverse of a square matrix, or determine that the inverse does not exist. Given a square matrix \(A\text{,}\) we are looking for a square matrix \(B\) such that
\begin{equation*}
AB=I\quad\text{and}\quad BA=I
\end{equation*}
We will start by attempting to satisfy \(AB=I\text{.}\) Let \(\mathbf{v}_1,\ldots,\mathbf{v}_n\) be the columns of \(B\text{.}\) Then
\begin{equation*}
A\begin{bmatrix}
| \amp |\amp \amp |\\
\mathbf{v}_1 \amp \mathbf{v}_2 \amp \dots \amp \mathbf{v}_n\\
|\amp | \amp \amp |
\end{bmatrix}=\begin{bmatrix}
| \amp |\amp \amp |\\
\mathbf{e}_1 \amp \mathbf{e}_2 \amp \dots \amp \mathbf{e}_n\\
|\amp | \amp \amp |
\end{bmatrix}
\end{equation*}
where each \(\mathbf{e}_i\) is a standard unit vector of \(\R^n\text{.}\) This gives us \(n\) systems of equations, \(A\mathbf{v}_i=\mathbf{e}_i\) for \(1\leq i\leq n\text{.}\) If each system \(A\mathbf{v}_i=\mathbf{e}_i\) has a unique solution, then these solutions form the columns of the desired matrix \(B\text{.}\)
First, suppose that \(\mbox{rref}(A)=I\text{.}\) Then we can use elementary row operations to carry each \([A|\mathbf{e}_i]\) to its reduced row-echelon form:
\begin{equation*}
[A|\mathbf{e}_i]\rightsquigarrow [I|\mathbf{v}_i]
\end{equation*}
Observe that the row operations that carry \(A\) to \(I\) will be the same for each \(A\mathbf{v}_i=\mathbf{e}_i\text{.}\) We can, therefore, combine the process of solving \(n\) systems of equations into a single process
\begin{equation*}
[A|I]\rightsquigarrow [I|B]
\end{equation*}
Each \(\mathbf{v}_i\) is a unique solution of \(A\mathbf{v}_i=\mathbf{e}_i\text{,}\) and we conclude that
\begin{equation*}
B=\begin{bmatrix}
| \amp |\amp \amp |\\
\mathbf{v}_1 \amp \mathbf{v}_2 \amp \dots \amp \mathbf{v}_n\\
|\amp | \amp \amp |
\end{bmatrix}
\end{equation*}
is a solution to \(AB=I\text{.}\) By Exercise 2.1.5.11, we can reverse the elementary row operations to obtain
\begin{equation*}
[I|B]\rightsquigarrow [A|I]
\end{equation*}
But the same row operations would also give us
\begin{equation*}
[B|I]\rightsquigarrow [I|A]
\end{equation*}
We conclude that \(BA=I\text{,}\) and \(B=A^{-1}\text{.}\)
Next, suppose that \(\mbox{rref}(A)\neq I\text{.}\) Then \(\mbox{rref}(A)\) must contain a row of zeros. Because one of the rows of \(A\) was completely wiped out by elementary row operations, that row must be a linear combination of the other rows. Suppose row \(p\) is a linear combination of the other rows. Then row \(p\) can be carried to a row of zeros. But then the system \(A\mathbf{v}_p=\mathbf{e}_p\) is inconsistent. This is because \(\mathbf{e}_p\) has a \(1\) as its \(p^{th}\) entry and zeros everywhere else. The \(1\) in the \(p^{th}\) spot will not be affected by these elementary row operations, and the \(p^{th}\) row will eventually look like this:
\begin{equation*}
[0\ldots 0|1]
\end{equation*}
This shows that a matrix \(B\) such that \(AB=I\) does not exist, and \(A\) does not have an inverse. We have just proved the following theorem.
Theorem 4.4.6. Row-reduction Method for Computing the Inverse of a Matrix.
Let \(A\) be a square matrix. If it is possible to use elementary row operations to carry the augmented matrix \([A|I]\) to \([I|B]\text{,}\) then \(B=A^{-1}\text{.}\) If such a reduction is not possible, then \(A\) does not have an inverse.
Corollary 4.4.7.
A square matrix \(A\) has an inverse if and only if \(\mbox{rref}(A)=I\text{.}\)
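The row-reduction method of Theorem 4.4.6 can be sketched in code. The Python function below assumes NumPy; the function name and tolerance are our own choices, and partial pivoting is added for numerical stability, but the underlying procedure is exactly \([A|I]\rightsquigarrow [I|B]\text{:}\)

```python
import numpy as np

def inverse_by_row_reduction(A, tol=1e-12):
    """Carry [A|I] to [I|B] by Gauss-Jordan elimination and return B.

    Returns None when rref(A) != I, i.e. when A is not invertible.
    A sketch of Theorem 4.4.6, not production code.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])          # the augmented matrix [A|I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # partial pivoting
        if abs(M[pivot, col]) < tol:
            return None                    # no pivot here: rref(A) != I
        M[[col, pivot]] = M[[pivot, col]]  # swap the pivot row into place
        M[col] /= M[col, col]              # scale so the pivot entry is 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col] # clear the rest of the column
    return M[:, n:]                        # the right half is now A^{-1}
```

For example, applied to the matrix of Example 4.4.3 it recovers the inverse exhibited there, and applied to a matrix with a repeated row it returns `None`.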
We now illustrate the row-reduction method with two examples.
Example 4.4.8.
Find \(A^{-1}\) or demonstrate that \(A^{-1}\) does not exist.
\begin{equation*}
A=\begin{bmatrix}1\amp -1\amp 2\\1\amp 1\amp 1\\1\amp 3\amp -1\end{bmatrix}
\end{equation*}
Answer.
We start with the augmented matrix
\begin{equation*}
\left[\begin{array}{ccc|ccc}
1\amp -1\amp 2\amp 1\amp 0\amp 0\\1\amp 1\amp 1\amp 0\amp 1\amp 0\\1\amp 3\amp -1\amp 0\amp 0\amp 1
\end{array}\right]
\begin{array}{c}
\\
\xrightarrow{R_2-R_1}\\
\xrightarrow{R_3-R_1}
\end{array}
\end{equation*}
\begin{equation*}
\left[\begin{array}{ccc|ccc}
1\amp -1\amp 2\amp 1\amp 0\amp 0\\0\amp 2\amp -1\amp -1\amp 1\amp 0\\0\amp 4\amp -3\amp -1\amp 0\amp 1
\end{array}\right]
\begin{array}{c}
\\
\\
\xrightarrow{R_3-2R_2}
\end{array}
\end{equation*}
\begin{equation*}
\left[\begin{array}{ccc|ccc}
1\amp -1\amp 2\amp 1\amp 0\amp 0\\0\amp 2\amp -1\amp -1\amp 1\amp 0\\0\amp 0\amp -1\amp 1\amp -2\amp 1
\end{array}\right]
\begin{array}{c}
\\
\\
\xrightarrow{(-1)R_3}
\end{array}
\end{equation*}
\begin{equation*}
\left[\begin{array}{ccc|ccc}
1\amp -1\amp 2\amp 1\amp 0\amp 0\\0\amp 2\amp -1\amp -1\amp 1\amp 0\\0\amp 0\amp 1\amp -1\amp 2\amp -1
\end{array}\right]
\begin{array}{c}
\xrightarrow{R_1-2R_3}\\
\xrightarrow{R_2+R_3}\\
\\
\end{array}
\end{equation*}
\begin{equation*}
\left[\begin{array}{ccc|ccc}
1\amp -1\amp 0\amp 3\amp -4\amp 2\\0\amp 2\amp 0\amp -2\amp 3\amp -1\\0\amp 0\amp 1\amp -1\amp 2\amp -1
\end{array}\right]
\begin{array}{c}
\\
\xrightarrow{\frac{1}{2}R_2}\\
\\
\end{array}
\end{equation*}
\begin{equation*}
\left[\begin{array}{ccc|ccc}
1\amp -1\amp 0\amp 3\amp -4\amp 2\\0\amp 1\amp 0\amp -1\amp 3/2\amp -1/2\\0\amp 0\amp 1\amp -1\amp 2\amp -1
\end{array}\right]
\begin{array}{c}
\xrightarrow{R_1+R_2}\\
\\
\\
\end{array}
\end{equation*}
\begin{equation*}
\left[\begin{array}{ccc|ccc}
1\amp 0\amp 0\amp 2\amp -5/2\amp 3/2\\0\amp 1\amp 0\amp -1\amp 3/2\amp -1/2\\0\amp 0\amp 1\amp -1\amp 2\amp -1
\end{array}\right].
\end{equation*}
We conclude that
\begin{equation*}
A^{-1}=\begin{bmatrix}2\amp -5/2\amp 3/2\\-1\amp 3/2\amp -1/2\\-1\amp 2\amp -1\end{bmatrix}.
\end{equation*}
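As a check on the arithmetic, we can verify this answer numerically; the sketch below assumes NumPy is available:

```python
import numpy as np

A = np.array([[1, -1, 2], [1, 1, 1], [1, 3, -1]], dtype=float)
A_inv = np.array([[2, -5/2, 3/2], [-1, 3/2, -1/2], [-1, 2, -1]])

# The computed inverse must satisfy both defining products.
assert np.allclose(A @ A_inv, np.eye(3))
assert np.allclose(A_inv @ A, np.eye(3))
```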
Example 4.4.9.
Find \(A^{-1}\) or demonstrate that \(A\) is not invertible.
\begin{equation*}
A=\begin{bmatrix}2\amp 3\amp -1\\0\amp 2\amp 1\\4\amp 4\amp -3\end{bmatrix}
\end{equation*}
Answer.
We start with the augmented matrix
\begin{equation*}
\left[\begin{array}{ccc|ccc}
2\amp 3\amp -1\amp 1\amp 0\amp 0\\0\amp 2\amp 1\amp 0\amp 1\amp 0\\4\amp 4\amp -3\amp 0\amp 0\amp 1
\end{array}\right]
\begin{array}{c}
\xrightarrow{\frac{1}{2}R_1}\\
\xrightarrow{\frac{1}{2}R_2}\\
\\
\end{array}
\end{equation*}
\begin{equation*}
\left[\begin{array}{ccc|ccc}
1\amp 3/2\amp -1/2\amp 1/2\amp 0\amp 0\\0\amp 1\amp 1/2\amp 0\amp 1/2\amp 0\\4\amp 4\amp -3\amp 0\amp 0\amp 1
\end{array}\right]
\begin{array}{c}
\\
\\
\xrightarrow{R_3-4R_1}
\end{array}
\end{equation*}
\begin{equation*}
\left[\begin{array}{ccc|ccc}
1\amp 3/2\amp -1/2\amp 1/2\amp 0\amp 0\\0\amp 1\amp 1/2\amp 0\amp 1/2\amp 0\\0\amp -2\amp -1\amp -2\amp 0\amp 1
\end{array}\right]
\begin{array}{c}
\\
\\
\xrightarrow{R_3+2R_2}
\end{array}
\end{equation*}
\begin{equation*}
\left[\begin{array}{ccc|ccc}
1\amp 3/2\amp -1/2\amp 1/2\amp 0\amp 0\\0\amp 1\amp 1/2\amp 0\amp 1/2\amp 0\\0\amp 0\amp 0\amp -2\amp 1\amp 1
\end{array}\right].
\end{equation*}
At this point we see that the left-hand side cannot be turned into \(I\) through elementary row operations. We conclude that \(A^{-1}\) does not exist.
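The failure can also be confirmed numerically. Since \(\mbox{rref}(A)\) has a zero row, the rank of \(A\) is less than \(3\text{;}\) the sketch below (assuming NumPy) checks exactly that:

```python
import numpy as np

A = np.array([[2, 3, -1], [0, 2, 1], [4, 4, -3]], dtype=float)

# Rank 2 < 3 means rref(A) has a zero row, so no inverse exists.
# (Indeed, the third row equals twice the first row minus the second.)
assert np.linalg.matrix_rank(A) == 2
```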
Subsection 4.4.2 Inverse of a \(2\times 2\) Matrix
We will conclude this section by discussing the inverse of a nonsingular \(2\times 2\) matrix. Let \(A=\begin{bmatrix}a\amp b\\c\amp d\end{bmatrix}\) be a nonsingular matrix. We can find \(A^{-1}\) by using the row reduction method described above, that is, by computing the reduced row-echelon form of \([A|I]\text{.}\) Row reduction yields the following:
\begin{equation*}
\left[\begin{array}{cc|cc}
a\amp b\amp 1\amp 0\\c\amp d\amp 0\amp 1
\end{array}\right]\rightsquigarrow\left[\begin{array}{cc|cc}
1\amp 0\amp d/(ad-bc)\amp -b/(ad-bc)\\0\amp 1\amp -c/(ad-bc)\amp a/(ad-bc)
\end{array}\right]
\end{equation*}
Note that the denominator of each term in the inverse matrix is the same. Factoring it out gives us the following formula for \(A^{-1}\text{.}\)
Formula 4.4.11.
\begin{equation*}
A^{-1}=\frac{1}{ad-bc}\begin{bmatrix}d\amp -b\\-c\amp a\end{bmatrix}
\end{equation*}
Clearly, the expression for \(A^{-1}\) is defined if and only if \(ad-bc\neq 0\text{.}\) So, what happens when \(ad-bc=0\text{?}\) In Exercise 4.4.3.9 you will be asked to fill in the steps of the row reduction procedure that produces this formula, and to show that if \(ad-bc=0\) then \(A\) does not have an inverse.
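The \(2\times 2\) formula translates directly into a short function. The sketch below assumes NumPy, and the function name is our own; the exact test `det == 0` is appropriate here because the entries are given exactly:

```python
import numpy as np

def inverse_2x2(a, b, c, d):
    """Formula 4.4.11: inverse of [[a, b], [c, d]], or None when ad - bc = 0."""
    det = a * d - b * c
    if det == 0:
        return None                        # ad - bc = 0: no inverse exists
    # Swap a and d, negate b and c, divide by the determinant.
    return np.array([[d, -b], [-c, a]]) / det
```

Applied to the matrix of Example 4.4.3, `inverse_2x2(1, -1, -5, 2)` reproduces the inverse found there.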
Exercises 4.4.3 Exercises
1.
Verify that the matrix \(\begin{bmatrix} 2 \amp -1\\3 \amp -2\end{bmatrix}\) is its own inverse.
2.
Use the row-reduction method for computing matrix inverses to explain why the given matrix does not have an inverse.
\begin{equation*}
\begin{bmatrix}1\amp 1\amp 1\\2\amp 2\amp 2\\3\amp 3\amp 3\end{bmatrix}
\end{equation*}
3.
Let
\begin{equation*}
A=\begin{bmatrix}1\amp -1\amp 2\\2\amp 1\amp 1\\-3\amp 1\amp 0\end{bmatrix}\quad\text{and}\quad B=\begin{bmatrix}-1/12\amp 1/6\amp -1/4\\-1/4\amp 1/2\amp 1/4\\5/12\amp 1/6\amp 1/4\end{bmatrix}
\end{equation*}
Are \(A\) and \(B\) inverses of each other?
Answer.
Yes, \(A\) and \(B\) are inverses.
4.
5.
6.
\begin{equation*}
A=\begin{bmatrix}2\amp 0\\4\amp -3\end{bmatrix}\quad\text{and}\quad B=\begin{bmatrix}2\amp -1\\1\amp 5\end{bmatrix}
\end{equation*}
7.
\begin{equation*}
A=\begin{bmatrix}1\amp -1\amp 2\\1\amp 1\amp 1\\1\amp 3\amp -1\end{bmatrix}
\end{equation*}
Use \(A^{-1}\) to solve the equation
\begin{equation*}
A\mathbf{x}=\begin{bmatrix}3\\-2\\4\end{bmatrix}
\end{equation*}
Answer.
\begin{equation*}
\mathbf{x}=\begin{bmatrix}17\\-8\\-11\end{bmatrix}
\end{equation*}
8.
Find the inverse of each matrix by using the row-reduction procedure.
\begin{equation*}
A=\begin{bmatrix}1\amp -1\amp 2\\-1\amp 0\amp 0\\1\amp -2\amp 3\end{bmatrix}
\end{equation*}
Answer.
\begin{equation*}
A^{-1}=\begin{bmatrix}0 \amp -1 \amp 0 \\ 3 \amp 1 \amp -2\\ 2 \amp 1 \amp -1\end{bmatrix}
\end{equation*}
9.
Hint.
After going through the row reduction, try it again, considering the possibility that \(a=0\text{.}\)
10.
Exercise Group.
For each matrix below, refer to Formula 4.4.11 to find the value of \(x\) for which the matrix is not invertible.
11.
\begin{equation*}
\begin{bmatrix}1\amp 2\\4\amp x\end{bmatrix}
\end{equation*}
12.
\begin{equation*}
\begin{bmatrix}3\amp 2\\3\amp x\end{bmatrix}
\end{equation*}
13.
\begin{equation*}
\begin{bmatrix}5\amp 4\\-5\amp x\end{bmatrix}
\end{equation*}
14.
Suppose \(AB=AC\) and \(A\) is an invertible \(n\times n\) matrix. Does it follow that \(B=C?\) Explain why or why not.
15.
Suppose \(AB=AC\) and \(A\) is a non-invertible \(n\times n\) matrix. Does it follow that \(B=C?\) Explain why or why not.
16.
Give an example of a matrix \(A\) such that \(A^{2}=I\) and yet \(A\neq I\) and \(A\neq -I.\)
17.
Suppose \(A\) is a symmetric, invertible matrix. Does it follow that \(A^{-1}\) is symmetric? What if we change ``symmetric'' to ``skew symmetric''? (See Definition 4.1.24.)
18.
Suppose \(A\) and \(B\) are invertible \(n\times n\) matrices. Does it follow that \((A+B)\) is invertible?