We begin by addressing linear independence for
Item 1. To do so, we solve the linear combination equation
\begin{equation}
c_1 \begin{bmatrix} 2\\-3 \end{bmatrix}
+ c_2 \begin{bmatrix}0\\3\end{bmatrix}
+ c_3\begin{bmatrix}1\\-1\end{bmatrix}
+ c_4\begin{bmatrix}1\\-2\end{bmatrix}
= \mathbf{0}.\tag{2.4.2}
\end{equation}
Clearly \(c_1=c_2=c_3=c_4=0\) is a solution to the equation. The question is whether another solution exists. The linear combination equation translates into the following system:
\begin{equation*}
\begin{array}{ccccccccc}
2c_1 \amp \amp \amp +\amp c_3\amp +\amp c_4\amp = \amp 0 \\
-3c_1\amp +\amp 3c_2\amp -\amp c_3\amp -\amp 2c_4\amp = \amp 0 \\
\end{array}.
\end{equation*}
Writing the system in augmented matrix form and applying elementary row operations gives us the following reduced row echelon form:
\begin{equation*}
\left[\begin{array}{cccc|c}
2\amp 0\amp 1\amp 1\amp 0\\-3\amp 3\amp -1\amp -2\amp 0
\end{array}\right]
\xrightarrow{\text{RREF}}
\left[\begin{array}{cccc|c}
1\amp 0\amp 1/2\amp 1/2\amp 0\\0\amp 1\amp 1/6\amp -1/6\amp 0
\end{array}\right].
\end{equation*}
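As a sanity check, the row reduction above can be reproduced computationally. The small `rref` helper below is a sketch (not part of the text's development), using exact rational arithmetic so the fractions come out exactly as displayed:

```python
from fractions import Fraction

def rref(M):
    """Gauss-Jordan elimination to reduced row echelon form, in exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # find a pivot entry in column c at or below row r
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]          # scale pivot row to leading 1
        for i in range(rows):
            if i != r and M[i][c] != 0:             # clear the rest of the column
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return M

# augmented matrix of the homogeneous system above
R = rref([[2, 0, 1, 1, 0],
          [-3, 3, -1, -2, 0]])
print(R)  # rows [1, 0, 1/2, 1/2, 0] and [0, 1, 1/6, -1/6, 0]
```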
This shows that
(2.4.2) has infinitely many solutions:
\begin{equation*}
c_1=-\frac{1}{2}s-\frac{1}{2}t,\quad c_2=-\frac{1}{6}s+\frac{1}{6}t,\quad c_3=s,\quad c_4=t.
\end{equation*}
Since there are infinitely many solutions, we conclude that the vectors are linearly dependent.
We now address
Item 2 of the remark. We can use the solution to the homogeneous linear combination equation
(2.4.2) to write one of the vectors as a linear combination of the others. Letting
\(t=s=6\text{,}\) we obtain the following:
\begin{equation}
-6\begin{bmatrix}2\\-3\end{bmatrix}
+ 0 \begin{bmatrix}0\\3\end{bmatrix}
+ 6\begin{bmatrix}1\\-1\end{bmatrix}
+ 6\begin{bmatrix}1\\-2\end{bmatrix}
= \mathbf{0}.\tag{2.4.3}
\end{equation}
Now we solve
(2.4.3) for one of the vectors:
\begin{equation}
\begin{bmatrix}2\\-3\end{bmatrix}
= 0\begin{bmatrix}0\\3\end{bmatrix}
+ \begin{bmatrix}1\\-1\end{bmatrix}
+ \begin{bmatrix}1\\-2\end{bmatrix}.\tag{2.4.4}
\end{equation}
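Both the dependence relation (2.4.3) and its rearranged form (2.4.4) are quick to verify componentwise; the following sketch (supplementary to the text) does exactly that:

```python
# the four vectors from the text
v1, v2, v3, v4 = (2, -3), (0, 3), (1, -1), (1, -2)

# the dependence relation (2.4.3): -6 v1 + 0 v2 + 6 v3 + 6 v4 = 0
lhs = tuple(-6 * a + 0 * b + 6 * c + 6 * d
            for a, b, c, d in zip(v1, v2, v3, v4))
assert lhs == (0, 0)

# the rearranged form (2.4.4): v1 = 0 v2 + v3 + v4
assert tuple(0 * b + c + d for b, c, d in zip(v2, v3, v4)) == v1
print("both identities check out")
```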
This would not be possible if a nontrivial solution to the equation
\begin{equation*}
c_1\begin{bmatrix}2\\-3\end{bmatrix}
+ c_2 \begin{bmatrix}0\\3\end{bmatrix}
+ c_3\begin{bmatrix}1\\-1\end{bmatrix}
+ c_4\begin{bmatrix}1\\-2\end{bmatrix}
= \mathbf{0}
\end{equation*}
did not exist. Therefore, we conclude the answer to both questions in
Remark 2.4.10 is ``YES''.
For
Item 2, to address linear independence we need to solve the equation
\begin{equation*}
c_1\begin{bmatrix}2\\1\\4\end{bmatrix}+c_2\begin{bmatrix}-3\\1\\1\end{bmatrix}=\mathbf{0}.
\end{equation*}
Converting the equation to augmented matrix form and performing row reduction gives us
\begin{equation*}
\left[\begin{array}{cc|c}
2\amp -3\amp 0\\1\amp 1\amp 0\\4\amp 1\amp 0
\end{array}\right]
\xrightarrow{\text{RREF}}
\left[\begin{array}{cc|c}
1\amp 0\amp 0\\0\amp 1\amp 0\\0\amp 0\amp 0
\end{array}\right].
\end{equation*}
This shows that \(c_1=c_2=0\) is the only solution. Therefore the two vectors are linearly independent. Furthermore, we cannot write one of the vectors as a linear combination of the other. Do you see that, for a set of two vectors, this would be possible only if they were scalar multiples of each other?
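That last observation can be tested directly: if one vector were a scalar multiple of the other, the componentwise ratios would all agree. The check below is a supplementary sketch, not part of the text's argument:

```python
from fractions import Fraction

u, w = (2, 1, 4), (-3, 1, 1)

# if w were a scalar multiple of u, every ratio w[i]/u[i] would be the same
ratios = {Fraction(wi, ui) for ui, wi in zip(u, w)}
assert len(ratios) > 1  # the ratios disagree, so neither is a multiple of the other
print("linearly independent")
```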