
Coordinated Linear Algebra

Section 7.3 Extra Topic: Cramer’s Rule

Combining the results of Theorem 7.2.13 and Theorem 4.2.5 shows that the following statements about a square matrix \(A\) are equivalent:
  • \(A^{-1}\) exists
  • For any vector \(\mathbf{b}\text{,}\) the equation \(A\mathbf{x}=\mathbf{b}\) has a unique solution
  • \(\displaystyle \det{A}\neq 0\)
In this section, we take a closer look at the relationship between the determinants of nonsingular matrices \(A\text{,}\) solutions to \(A\mathbf{x}=\mathbf{b}\text{,}\) and \(A^{-1}\text{.}\)

Subsection 7.3.1 Cramer’s Rule

We begin by establishing a formula that allows us to express the unique solution to the system \(A\mathbf{x}=\mathbf{b}\) in terms of the determinant of \(A\text{,}\) for a nonsingular matrix \(A\text{.}\) This formula is called Cramer’s rule. Consider the system
\begin{equation*} \begin{array}{ccccc} ax\amp +\amp by\amp =\amp e\\ cx \amp +\amp dy\amp = \amp f \end{array} \end{equation*}
The system can be written as a matrix equation
\begin{equation*} \begin{bmatrix}a\amp b\\c\amp d\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}=\begin{bmatrix}e\\f\end{bmatrix}. \end{equation*}
Using one of our standard methods for solving systems, we find that
\begin{equation*} x=\frac{ed-bf}{ad-bc}\quad\text{and}\quad y=\frac{af-ec}{ad-bc}. \end{equation*}
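For instance, multiplying the first equation by \(d\text{,}\) the second by \(b\text{,}\) and subtracting eliminates \(y\text{:}\)
\begin{equation*} (ad-bc)x=ed-bf. \end{equation*}
Dividing by \(ad-bc\) (assumed non-zero) yields the expression for \(x\) above; eliminating \(x\) in the same way gives \((ad-bc)y=af-ec\text{.}\)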
Observe that the denominators in the expressions for \(x\) and \(y\) are the same and equal to \(\det{\begin{bmatrix}a\amp b\\c\amp d\end{bmatrix}}\text{.}\)
A close examination shows that the numerators of the expressions for \(x\) and \(y\) can also be interpreted as determinants. The numerator of the expression for \(x\) is the determinant of the matrix formed by replacing the first column of
\begin{equation*} \begin{bmatrix}a\amp b\\c\amp d\end{bmatrix} \text{ with } \begin{bmatrix}e\\f\end{bmatrix}. \end{equation*}
The numerator of the expression for \(y\) is the determinant of the matrix that is formed by replacing the second column of
\begin{equation*} \begin{bmatrix}a\amp b\\c\amp d\end{bmatrix} \text{ with } \begin{bmatrix}e\\f\end{bmatrix}. \end{equation*}
Thus, \(x\) and \(y\) can be written as
\begin{equation*} x=\frac{ed-bf}{ad-bc}=\frac{\begin{vmatrix}e\amp b\\f\amp d\end{vmatrix}}{\begin{vmatrix}a\amp b\\c\amp d\end{vmatrix}}\quad\text{and}\quad y=\frac{af-ec}{ad-bc}=\frac{\begin{vmatrix}a\amp e\\c\amp f\end{vmatrix}}{\begin{vmatrix}a\amp b\\c\amp d\end{vmatrix}}. \end{equation*}
It turns out that the solution to any square system \(A\mathbf{x}=\mathbf{b}\) can be expressed using ratios of determinants, provided that \(A\) is nonsingular. The general formula for the \(i^{th}\) component of the solution vector is
\begin{equation*} x_i=\frac{\det{(\text{matrix } A \text{ with column } i \text{ replaced by } \mathbf{b})}}{\det{A}}. \end{equation*}
To formalize this expression, we need to introduce some notation. Given a matrix
\begin{equation*} A=\begin{bmatrix} | \amp |\amp \amp |\\ \mathbf{a}_1 \amp \mathbf{a}_2\amp \dots \amp \mathbf{a}_n\\ | \amp |\amp \amp | \end{bmatrix} \end{equation*}
and a vector \(\mathbf{b}\text{,}\) we use
\begin{equation*} A_i(\mathbf{b}) \end{equation*}
to denote the matrix obtained from \(A\) by replacing the \(i^{th}\) column of \(A\) with \(\mathbf{b}\text{.}\) In other words,
\begin{equation} A_i(\mathbf{b})=\begin{bmatrix} | \amp |\amp \amp |\amp |\amp |\amp \amp |\\ \mathbf{a}_1 \amp \mathbf{a}_2\amp \dots \amp \mathbf{a}_{i-1}\amp \mathbf{b}\amp \mathbf{a}_{i+1}\amp \dots\amp \mathbf{a}_n\\ | \amp |\amp \amp |\amp |\amp |\amp \amp | \end{bmatrix}\tag{7.3.1} \end{equation}
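For example, if \(n=3\) and we write the entries of \(A\) as \(a_{ij}\) and the entries of \(\mathbf{b}\) as \(b_i\text{,}\) then
\begin{equation*} A_2(\mathbf{b})=\begin{bmatrix}a_{11}\amp b_1\amp a_{13}\\a_{21}\amp b_2\amp a_{23}\\a_{31}\amp b_3\amp a_{33}\end{bmatrix}. \end{equation*}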
Using our new notation, we can write the \(i^{th}\) component of the solution vector as
\begin{equation*} x_i=\frac{\det{A_i(\mathbf{b})}}{\det{A}} \end{equation*}
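The formula above also translates directly into a short computation. Below is a minimal sketch of Cramer’s rule in Python, assuming NumPy is available; the function name cramer_solve is ours, not part of any library.

    import numpy as np

    def cramer_solve(A, b):
        # Solve A x = b by Cramer's rule: x_i = det(A_i(b)) / det(A).
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        det_A = np.linalg.det(A)
        if np.isclose(det_A, 0.0):
            raise ValueError("Cramer's rule requires a nonsingular matrix")
        x = np.empty(len(b))
        for i in range(len(b)):
            A_i = A.copy()
            A_i[:, i] = b                      # form A_i(b): replace column i of A with b
            x[i] = np.linalg.det(A_i) / det_A  # x_i = det(A_i(b)) / det(A)
        return x

Applied to the matrix and vector of Example 7.3.1 below, this sketch should return approximately \((-6/17,\ 16/17)\text{.}\)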
We will work through a couple of examples before proving this result as a theorem.

Example 7.3.1.

Solve \(A\mathbf{x}=\mathbf{b}\) using Cramer’s rule if
\begin{equation*} A=\begin{bmatrix}3\amp -1\\2\amp 5\end{bmatrix}\quad\text{and}\quad \mathbf{b}=\begin{bmatrix}-2\\4\end{bmatrix}. \end{equation*}
Answer.
We start by computing the determinant of \(A\text{.}\)
\begin{equation*} \det{A}=\begin{vmatrix}3\amp -1\\2\amp 5\end{vmatrix}=(3)(5)-(-1)(2)=17. \end{equation*}
Next, we compute \(\det{A_1(\mathbf{b})}\) and \(\det{A_2(\mathbf{b})}\text{.}\)
\begin{equation*} \det{A_1(\mathbf{b})}=\begin{vmatrix}-2\amp -1\\4\amp 5\end{vmatrix}=(-2)(5)-(-1)(4)=-6, \end{equation*}
\begin{equation*} \det{A_2(\mathbf{b})}=\begin{vmatrix}3\amp -2\\2\amp 4\end{vmatrix}=(3)(4)-(-2)(2)=16. \end{equation*}
We now compute the components of the solution vector.
\begin{equation*} x_1=\frac{-6}{17}\quad\text{and}\quad x_2=\frac{16}{17}. \end{equation*}
Finally, it is a good idea to verify that what we found is a solution to the system.
\begin{equation*} \begin{bmatrix}3\amp -1\\2\amp 5\end{bmatrix}\begin{bmatrix}-6/17\\16/17\end{bmatrix}=\begin{bmatrix}-18/17-16/17\\-12/17+80/17\end{bmatrix}=\begin{bmatrix}-34/17\\68/17\end{bmatrix}=\begin{bmatrix}-2\\4\end{bmatrix}. \end{equation*}

Example 7.3.2.

Solve \(A\mathbf{x}=\mathbf{b}\) using Cramer’s rule if
\begin{equation*} A=\begin{bmatrix}1\amp 2\amp -1\\-1\amp 1\amp 1\\0\amp 3\amp 1\end{bmatrix}\quad\text{and}\quad \mathbf{b}=\begin{bmatrix}-2\\3\\1\end{bmatrix}. \end{equation*}
Answer.
We start by computing the determinant of \(A\text{.}\)
\begin{equation*} \det{A}=\begin{vmatrix}1\amp 2\amp -1\\-1\amp 1\amp 1\\0\amp 3\amp 1\end{vmatrix}=3. \end{equation*}
Next, we compute \(\det{A_i(\mathbf{b})}\) for \(i=1, 2, 3\text{.}\)
\begin{equation*} \det{A_1(\mathbf{b})}=\begin{vmatrix}-2\amp 2\amp -1\\3\amp 1\amp 1\\1\amp 3\amp 1\end{vmatrix}=-8, \end{equation*}
\begin{equation*} \det{A_2(\mathbf{b})}=\begin{vmatrix}1\amp -2\amp -1\\-1\amp 3\amp 1\\0\amp 1\amp 1\end{vmatrix}=1, \end{equation*}
\begin{equation*} \det{A_3(\mathbf{b})}=\begin{vmatrix}1\amp 2\amp -2\\-1\amp 1\amp 3\\0\amp 3\amp 1\end{vmatrix}=0. \end{equation*}
This gives us the solution vector
\begin{equation*} \mathbf{x}=\begin{bmatrix}-8/3\\1/3\\0\end{bmatrix}. \end{equation*}
You should verify that what you found really is a solution.
We are now ready to state and prove Cramer’s rule as a theorem.

Theorem 7.3.3. Cramer’s Rule.

Let \(A\) be a nonsingular \(n\times n\) matrix. Then for any vector \(\mathbf{b}\text{,}\) the unique solution \(\mathbf{x}\) of \(A\mathbf{x}=\mathbf{b}\) is given by
\begin{equation*} x_i=\frac{\det{A_i(\mathbf{b})}}{\det{A}}, \quad i=1,2,\dots,n. \end{equation*}

Proof.

For this proof we will need to think of matrices in terms of their columns. Thus,
\begin{equation*} A=\begin{bmatrix} | \amp |\amp \amp |\\ \mathbf{a}_1 \amp \mathbf{a}_2\amp \dots\amp \mathbf{a}_n\\ | \amp |\amp \amp | \end{bmatrix}. \end{equation*}
We will also need the identity matrix \(I\text{.}\) The columns of \(I\) are standard unit vectors.
\begin{equation*} I=\begin{bmatrix} | \amp |\amp \amp |\\ \mathbf{e}_1 \amp \mathbf{e}_2\amp \dots\amp \mathbf{e}_n\\ | \amp |\amp \amp | \end{bmatrix}. \end{equation*}
Recall that
\begin{equation*} A_i(\mathbf{b})=\begin{bmatrix} | \amp |\amp \amp |\amp |\amp |\amp \amp |\\ \mathbf{a}_1 \amp \mathbf{a}_2\amp \dots \amp \mathbf{a}_{i-1}\amp \mathbf{b}\amp \mathbf{a}_{i+1}\amp \dots\amp \mathbf{a}_n\\ | \amp |\amp \amp |\amp |\amp |\amp \amp | \end{bmatrix}. \end{equation*}
Similarly,
\begin{equation*} I_i(\mathbf{x})=\begin{bmatrix} | \amp |\amp \amp |\amp |\amp |\amp \amp |\\ \mathbf{e}_1 \amp \mathbf{e}_2\amp \dots \amp \mathbf{e}_{i-1}\amp \mathbf{x}\amp \mathbf{e}_{i+1}\amp \dots\amp \mathbf{e}_n\\ | \amp |\amp \amp |\amp |\amp |\amp \amp | \end{bmatrix}. \end{equation*}
Observe that \(x_i\) is the only entry in the \(i^{th}\) row of \(I_i(\mathbf{x})\) that can be non-zero. Cofactor expansion along the \(i^{th}\) row therefore gives us
\begin{equation} \det{I_i(\mathbf{x})}=x_i.\tag{7.3.2} \end{equation}
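For instance, if \(n=3\) and \(i=2\text{,}\) then
\begin{equation*} I_2(\mathbf{x})=\begin{bmatrix}1\amp x_1\amp 0\\0\amp x_2\amp 0\\0\amp x_3\amp 1\end{bmatrix}, \end{equation*}
and expanding along the second row gives \(\det{I_2(\mathbf{x})}=x_2\begin{vmatrix}1\amp 0\\0\amp 1\end{vmatrix}=x_2\text{.}\)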
Now, consider the product \(A\Big(I_i(\mathbf{x})\Big)\)
\begin{align*} A\Big(I_i(\mathbf{x})\Big)\amp =\begin{bmatrix} | \amp |\amp \amp |\\ \mathbf{a}_1 \amp \mathbf{a}_2\amp \dots\amp \mathbf{a}_n\\ | \amp |\amp \amp | \end{bmatrix}\begin{bmatrix} | \amp |\amp \amp |\amp |\amp |\amp \amp |\\ \mathbf{e}_1 \amp \mathbf{e}_2\amp \dots \amp \mathbf{e}_{i-1}\amp \mathbf{x}\amp \mathbf{e}_{i+1}\amp \dots\amp \mathbf{e}_n\\ | \amp |\amp \amp |\amp |\amp |\amp \amp | \end{bmatrix} \\ \amp =\begin{bmatrix} | \amp |\amp \amp |\amp |\amp |\amp \amp |\\ \mathbf{a}_1 \amp \mathbf{a}_2\amp \dots \amp \mathbf{a}_{i-1}\amp A\mathbf{x}\amp \mathbf{a}_{i+1}\amp \dots\amp \mathbf{a}_n\\ | \amp |\amp \amp |\amp |\amp |\amp \amp | \end{bmatrix} \\ \amp =\begin{bmatrix} | \amp |\amp \amp |\amp |\amp |\amp \amp |\\ \mathbf{a}_1 \amp \mathbf{a}_2\amp \dots \amp \mathbf{a}_{i-1}\amp \mathbf{b}\amp \mathbf{a}_{i+1}\amp \dots\amp \mathbf{a}_n\\ | \amp |\amp \amp |\amp |\amp |\amp \amp | \end{bmatrix}=A_i(\mathbf{b}). \end{align*}
This gives us
\begin{equation*} AI_i(\mathbf{x})=A_i(\mathbf{b}), \end{equation*}
and therefore
\begin{equation*} \det{\big(AI_i(\mathbf{x})\big)}=\det{A_i(\mathbf{b})}. \end{equation*}
Since the determinant of a product is the product of the determinants,
\begin{equation*} \det{A}\det{I_i(\mathbf{x})}=\det{A_i(\mathbf{b})}. \end{equation*}
By our earlier observation in (7.3.2), we have
\begin{equation*} \det{A}x_i=\det{A_i(\mathbf{b})}. \end{equation*}
Since \(A\) is nonsingular, \(\det{A}\neq 0\text{.}\) Dividing both sides by \(\det{A}\) gives
\begin{equation*} x_i=\frac{\det{A_i(\mathbf{b})}}{\det{A}}. \end{equation*}
Finding a determinant is computationally expensive. Because Cramer’s rule requires computing \(n+1\) determinants of \(n\times n\) matrices to solve an \(n\times n\) system, it is not a computationally efficient way of solving a system of equations; row reducing the augmented matrix is faster for all but the smallest systems. However, Cramer’s rule is often used for small systems in applications that arise in economics and the natural and social sciences, particularly when solving for only a subset of the variables.

Subsection 7.3.2 Adjugate Formula for the Inverse of a Matrix

In Exercise 4.4.3.9 we used the row reduction algorithm to show that if
\begin{equation*} A=\begin{bmatrix}a\amp b\\c\amp d\end{bmatrix} \end{equation*}
is nonsingular then
\begin{equation} A^{-1}=\frac{1}{\det{A}}\begin{bmatrix}d\amp -b\\-c\amp a\end{bmatrix}.\tag{7.3.3} \end{equation}
This formula is a special case of a more general formula for finding inverse matrices. Just like the formula for a \(2\times 2\) matrix, the general formula consists of the coefficient \(\frac{1}{\det{A}}\) times a matrix built from the cofactors of the original matrix. We will now derive the general formula using Cramer’s rule.
Let \(A\) be an \(n\times n\) nonsingular matrix. When looking for the inverse of \(A\text{,}\) we look for a matrix \(X\) such that \(AX=I\text{.}\) We will think of matrices in terms of their columns
\begin{equation*} I=\begin{bmatrix} | \amp |\amp \amp |\\ \mathbf{e}_1 \amp \mathbf{e}_2\amp \dots\amp \mathbf{e}_n\\ | \amp |\amp \amp | \end{bmatrix}\quad\text{and}\quad X=\begin{bmatrix} | \amp |\amp \amp |\\ \mathbf{x}_1 \amp \mathbf{x}_2\amp \dots\amp \mathbf{x}_n\\ | \amp |\amp \amp | \end{bmatrix}. \end{equation*}
If \(AX=I\) then we must have
\begin{equation*} A\mathbf{x}_1=\mathbf{e}_1, \quad A\mathbf{x}_2=\mathbf{e}_2, \quad \dots, \quad A\mathbf{x}_n=\mathbf{e}_n. \end{equation*}
This gives us \(n\) systems of equations. The solution vectors of these systems are the columns of \(X\text{.}\) Thus, the \(j^{th}\) column of \(X\) is
\begin{equation*} \mathbf{x}_j=\begin{bmatrix}x_{1j}\\x_{2j}\\\vdots\\x_{nj}\end{bmatrix}\quad\text{such that}\quad A\mathbf{x}_j=\mathbf{e}_j. \end{equation*}
By Cramer’s rule
\begin{equation*} x_{ij}=\frac{\det{A_i(\mathbf{e}_j)}}{\det{A}}. \end{equation*}
But
\begin{equation*} A_i(\mathbf{e}_j)=\begin{bmatrix} | \amp |\amp \amp |\amp |\amp |\amp \amp |\\ \mathbf{a}_1 \amp \mathbf{a}_2\amp \dots \amp \mathbf{a}_{i-1}\amp \mathbf{e}_j\amp \mathbf{a}_{i+1}\amp \dots\amp \mathbf{a}_n\\ | \amp |\amp \amp |\amp |\amp |\amp \amp | \end{bmatrix}. \end{equation*}
To find \(\det{A_i(\mathbf{e}_j)}\text{,}\) we can expand along the \(i^{th}\) column of \(A_i(\mathbf{e}_j)\text{.}\) But the \(i^{th}\) column of \(A_i(\mathbf{e}_j)\) is the vector \(\mathbf{e}_j\text{,}\) which has a \(1\) in the \(j^{th}\) position and zeros everywhere else. Thus
\begin{equation*} \det{A_i(\mathbf{e}_j)}=(-1)^{i+j}\det{A_{ji}}=C_{ji}, \end{equation*}
where \(A_{ji}\) is the matrix obtained from \(A\) by deleting its \(j^{th}\) row and \(i^{th}\) column, and \(C_{ji}\) is the corresponding cofactor of \(A\text{.}\)
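For instance, if \(n=3\text{,}\) \(i=1\) and \(j=2\text{,}\) then expanding along the first column gives
\begin{equation*} \det{A_1(\mathbf{e}_2)}=\begin{vmatrix}0\amp a_{12}\amp a_{13}\\1\amp a_{22}\amp a_{23}\\0\amp a_{32}\amp a_{33}\end{vmatrix}=(-1)^{2+1}\begin{vmatrix}a_{12}\amp a_{13}\\a_{32}\amp a_{33}\end{vmatrix}=C_{21}. \end{equation*}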
We now have
\begin{equation*} \mathbf{x}_j=\begin{bmatrix}C_{j1}/\det{A}\\C_{j2}/\det{A}\\\vdots\\C_{jn}/\det{A}\end{bmatrix}=\frac{1}{\det{A}}\begin{bmatrix}C_{j1}\\C_{j2}\\\vdots\\C_{jn}\end{bmatrix}. \end{equation*}
Thus,
\begin{equation*} A^{-1}=X=\frac{1}{\det{A}}\begin{bmatrix}C_{11}\amp C_{21}\amp \ldots\amp C_{n1}\\C_{12}\amp C_{22}\amp \ldots\amp C_{n2}\\\vdots\amp \vdots\amp \ddots\amp \vdots\\ C_{1n}\amp C_{2n}\amp \ldots\amp C_{nn}\end{bmatrix}. \end{equation*}
The transpose of the matrix of cofactors of \(A\) is called the adjugate of \(A\text{.}\) We write
\begin{equation*} \text{adj}(A)=\begin{bmatrix}C_{11}\amp C_{21}\amp \ldots\amp C_{n1}\\C_{12}\amp C_{22}\amp \ldots\amp C_{n2}\\\vdots\amp \vdots\amp \ddots\amp \vdots\\ C_{1n}\amp C_{2n}\amp \ldots\amp C_{nn}\end{bmatrix}. \end{equation*}
\(\textbf{Warning:}\) Note the order of subscripts of \(C\) in the adjugate matrix. The \((i,j)\)-entry of the adjugate matrix is \(C_{ji}\text{.}\)
We summarize our result as a theorem.

Theorem 7.3.4. Adjugate Formula for the Inverse.

If \(A\) is an \(n\times n\) nonsingular matrix, then
\begin{equation*} A^{-1}=\frac{1}{\det{A}}\mbox{adj}(A). \end{equation*}

Example 7.3.5.

Use Theorem 7.3.4 to find \(A^{-1}\) if
\begin{equation*} A=\begin{bmatrix}1\amp -1\amp 2\\1\amp 1\amp 1\\1\amp 3\amp -1\end{bmatrix}. \end{equation*}
Answer.
We begin by finding \(\det{A}\text{.}\) One checks that
\begin{equation*} \det{A}=-2. \end{equation*}
The first column of \(\mbox{adj}(A)\) has entries \(C_{11}\text{,}\) \(C_{12}\) and \(C_{13}\text{.}\)
\begin{equation*} C_{11}=(-1)^{1+1}\begin{vmatrix}1\amp 1\\3\amp -1\end{vmatrix}=-4, \end{equation*}
\begin{equation*} C_{12}=(-1)^{1+2}\begin{vmatrix}1\amp 1\\1\amp -1\end{vmatrix}=2, \end{equation*}
\begin{equation*} C_{13}=(-1)^{1+3}\begin{vmatrix}1\amp 1\\1\amp 3\end{vmatrix}=2. \end{equation*}
The second column of \(\mbox{adj}(A)\) has entries \(C_{21}\text{,}\) \(C_{22}\) and \(C_{23}\text{.}\) Now,
\begin{equation*} C_{21}=(-1)^{2+1}\begin{vmatrix}-1\amp 2\\3\amp -1\end{vmatrix}=5, \end{equation*}
\begin{equation*} C_{22}=(-1)^{2+2}\begin{vmatrix}1\amp 2\\1\amp -1\end{vmatrix}=-3, \end{equation*}
\begin{equation*} C_{23}=(-1)^{2+3}\begin{vmatrix}1\amp -1\\1\amp 3\end{vmatrix}=-4. \end{equation*}
Next, we compute the third column of \(\mbox{adj}(A)\text{:}\)
\begin{equation*} C_{31}=(-1)^{3+1}\begin{vmatrix}-1\amp 2\\1\amp 1\end{vmatrix}=-3, \end{equation*}
\begin{equation*} C_{32}=(-1)^{3+2}\begin{vmatrix}1\amp 2\\1\amp 1\end{vmatrix}=1, \end{equation*}
\begin{equation*} C_{33}=(-1)^{3+3}\begin{vmatrix}1\amp -1\\1\amp 1\end{vmatrix}=2. \end{equation*}
This gives us
\begin{equation*} \mbox{adj}(A)=\begin{bmatrix}-4\amp 5\amp -3\\2\amp -3\amp 1\\2\amp -4\amp 2\end{bmatrix}, \end{equation*}
and therefore
\begin{equation*} A^{-1}=\frac{1}{\det{A}}\mbox{adj}(A)=-\frac{1}{2}\begin{bmatrix}-4\amp 5\amp -3\\2\amp -3\amp 1\\2\amp -4\amp 2\end{bmatrix}=\begin{bmatrix}2\amp -5/2\amp 3/2\\-1\amp 3/2\amp -1/2\\-1\amp 2\amp -1\end{bmatrix}. \end{equation*}
Compare this result to the answer in Example 4.4.8.
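As an arithmetic check, the adjugate formula can also be carried out mechanically. Below is a minimal sketch in Python, assuming NumPy is available; the function name adjugate_inverse is ours, not a library routine.

    import numpy as np

    def adjugate_inverse(A):
        # Compute A^{-1} = (1/det A) * adj(A), where adj(A) is the transpose
        # of the matrix of cofactors of A.
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        det_A = np.linalg.det(A)
        if np.isclose(det_A, 0.0):
            raise ValueError("matrix is singular")
        C = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)  # cofactor C_{ij}
        return C.T / det_A  # transposing the cofactor matrix gives the adjugate

For the matrix \(A\) of Example 7.3.5, this sketch reproduces the inverse computed above.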

Exercises 7.3.3 Exercises

Exercise Group.

Use Cramer’s rule to solve each of the following systems.
1.
\begin{equation*} \begin{array}{ccccc} -3x\amp +\amp 2y\amp =\amp 1\\ 2x \amp -\amp y\amp = \amp 4 \end{array} \end{equation*}
Answer.
\begin{equation*} x=9, \quad y=14. \end{equation*}
2.
\begin{equation*} \begin{bmatrix}2\amp -5\amp 1\\6\amp 0\amp -2\\-3\amp 1\amp 1\end{bmatrix}\mathbf{x}=\begin{bmatrix}1\\4\\3\end{bmatrix}. \end{equation*}
Answer.
\begin{equation*} \mathbf{x}=\begin{bmatrix}28/5\\5\\74/5\end{bmatrix}. \end{equation*}

3.

Consider the equation
\begin{equation*} \begin{bmatrix}1\amp 1\amp 1\amp 1\\2\amp 3\amp 2\amp -1\\3\amp -2\amp 3\amp -2\\2\amp 1\amp 1\amp 1\end{bmatrix}\begin{bmatrix}x_1\\x_2\\x_3\\x_4\end{bmatrix}=\begin{bmatrix}10\\10\\0\\11\end{bmatrix}. \end{equation*}
  (a) Solve for \(x_2\) using Cramer’s rule.
  (b) If you had to solve for all four variables, which method would you use? Why?
Answer.
For part (a), \(x_2 = 2\text{.}\) For part (b), row reducing the augmented matrix is generally the better choice, since solving for all four variables by Cramer’s rule would require computing five \(4\times 4\) determinants.

Exercise Group.

Use Theorem 7.3.4 to find the inverse of each of the following matrices.
4.
\begin{equation*} A=\begin{bmatrix}2\amp 7\\1\amp 3\end{bmatrix}. \end{equation*}
Answer.
\begin{equation*} A^{-1}=\begin{bmatrix}-3\amp 7\\1\amp -2\end{bmatrix}. \end{equation*}
5.
\begin{equation*} A=\begin{bmatrix}2\amp 1\amp 4\\4\amp -2\amp 1\\0\amp 3\amp -1\end{bmatrix}. \end{equation*}
Answer.
\begin{equation*} A^{-1}=\frac{1}{50}\begin{bmatrix}-1\amp 13\amp 9\\4\amp -2\amp 14\\12\amp -6\amp -8\end{bmatrix}. \end{equation*}

6.

Show that the formula in (7.3.3) is a special case of the formula in Theorem 7.3.4 by showing that
\begin{equation*} \mbox{adj}\left(\begin{bmatrix}a\amp b\\c\amp d\end{bmatrix}\right)=\begin{bmatrix}d\amp -b\\-c\amp a\end{bmatrix}. \end{equation*}