Rank of a Matrix

In previous sections, we solved linear systems using the Gauss elimination method or the Gauss-Jordan method. In the examples considered, we encountered three possibilities, namely

  1. existence of a unique solution,
  2. existence of an infinite number of solutions, and
  3. no solution.

Based on the above possibilities, we have the following definition.

DEFINITION 2.4.1 (Consistent, Inconsistent)   A linear system is called CONSISTENT if it admits a solution and is called INCONSISTENT if it admits no solution.

The question arises as to whether there are conditions under which the linear system $ A {\mathbf x}= {\mathbf b}$ is consistent. The answer to this question is in the affirmative. To proceed further, we need a few definitions and remarks.

Recall that the row reduced echelon form of a matrix is unique and therefore the number of non-zero rows is a unique number. Also, note that the number of non-zero rows in the row reduced form and in the row reduced echelon form of a matrix is the same.

DEFINITION 2.4.2 (Row rank of a Matrix)   The number of non-zero rows in the row reduced form of a matrix is called the row-rank of the matrix.

By the very definition, it is clear that row-equivalent matrices have the same row-rank. For a matrix $ A,$ we write ` $ {\mbox{row-rank }}(A)$ ' to denote the row-rank of $ A.$

EXAMPLE 2.4.3  
  1. Determine the row-rank of $ A = \begin{bmatrix}1 & 2 & 1 \\ 2 & 3 & 1 \\ 1 & 1 & 2 \end{bmatrix}.$
    Solution: To determine the row-rank of $ A,$ we proceed as follows.
    1. $ \begin{bmatrix}1 & 2 & 1 \\ 2 & 3 & 1 \\ 1 & 1 & 2 \end{bmatrix} \xrightarrow{R_{21}(-2), \; R_{31}(-1)} \begin{bmatrix}1 & 2 & 1 \\ 0 & -1 & -1 \\ 0 & -1 & 1 \end{bmatrix}.$
    2. $ \begin{bmatrix}1 & 2 & 1 \\ 0 & -1 & -1 \\ 0 & -1 & 1 \end{bmatrix} \xrightarrow{R_{2}(-1), \; R_{32}(1)} \begin{bmatrix}1 & 2 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 2 \end{bmatrix}.$
    3. $ \begin{bmatrix}1 & 2 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 2 \end{bmatrix} \xrightarrow{R_{3}(1/2), \; R_{12}(-2)} \begin{bmatrix}1 & 0 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}.$
    4. $ \begin{bmatrix}1 & 0 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} \xrightarrow{R_{23}(-1), \; R_{13}(1)} \begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$
    The last matrix in Step 4 is the row reduced form of $ A,$ which has $ 3$ non-zero rows. Thus, $ {\mbox{row-rank}}(A) = 3.$ This result can also be easily deduced from the last matrix in Step 2.
  2. Determine the row-rank of $ A = \begin{bmatrix}1 & 2 & 1 \\ 2 & 3 & 1 \\ 1 & 1 & 0 \end{bmatrix}.$
    Solution: Here we have
    1. $ \begin{bmatrix}1 & 2 & 1 \\ 2 & 3 & 1 \\ 1 & 1 & 0 \end{bmatrix} \xrightarrow{R_{21}(-2), \; R_{31}(-1)} \begin{bmatrix}1 & 2 & 1 \\ 0 & -1 & -1 \\ 0 & -1 & -1 \end{bmatrix}.$
    2. $ \begin{bmatrix}1 & 2 & 1 \\ 0 & -1 & -1 \\ 0 & -1 & -1 \end{bmatrix} \xrightarrow{R_{2}(-1), \; R_{32}(1)} \begin{bmatrix}1 & 2 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix}.$
    From the last matrix in Step 2, we deduce $ {\mbox{row-rank}}(A)=2.$
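The two row-ranks computed above can be cross-checked numerically. The following is a minimal sketch, assuming NumPy is available; `numpy.linalg.matrix_rank` returns the rank, which for these matrices equals the number of non-zero rows in the row reduced form.

```python
import numpy as np

# Matrix from Part 1: its row reduced form is the identity, so row-rank 3.
A1 = np.array([[1, 2, 1],
               [2, 3, 1],
               [1, 1, 2]])

# Matrix from Part 2: row reduction produces one zero row, so row-rank 2.
A2 = np.array([[1, 2, 1],
               [2, 3, 1],
               [1, 1, 0]])

print(np.linalg.matrix_rank(A1))  # 3
print(np.linalg.matrix_rank(A2))  # 2
```

Note that `matrix_rank` works with floating-point tolerances rather than exact row reduction, but for small integer matrices like these the two notions agree.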

Remark 2.4.4   Let $ A {\mathbf x}= {\mathbf b}$ be a linear system with $ m$ equations and $ n$ unknowns. Then the row reduced echelon form of $ A$ agrees with the first $ n$ columns of the row reduced echelon form of $ [A \; \; {\mathbf b}],$ and hence

$\displaystyle {\mbox{row-rank}}(A) \leq {\mbox{row-rank}}([A \; \; {\mathbf b}]).$

The reader is advised to supply a proof.
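The inequality of Remark 2.4.4 can also be illustrated numerically. The sketch below (assuming NumPy, with a right-hand side $\mathbf{b}$ chosen by us for illustration) shows a case where the inequality is strict, which is exactly the inconsistent case.

```python
import numpy as np

# Appending a column can never decrease the row-rank: every non-zero row of
# the row reduced form of A stays non-zero in the row reduced form of [A  b].
A = np.array([[1, 2, 1],
              [2, 3, 1],
              [1, 1, 0]])            # row-rank 2
b = np.array([[1], [1], [1]])        # an illustrative right-hand side
Ab = np.hstack([A, b])               # the augmented matrix [A  b]

# Here the inequality is strict (2 < 3), so A x = b admits no solution.
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(Ab))  # prints "2 3"
```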

Remark 2.4.5   Consider a matrix $ A.$ After applying a finite number of elementary column operations (see Definition 2.3.16) to $ A,$ we obtain a matrix, say $ B,$ which has the following properties:
  1. The first nonzero entry in each column is $ 1.$
  2. A column containing only $ 0$'s comes after all columns with at least one non-zero entry.
  3. The first non-zero entry (the leading term) in each non-zero column moves down in successive columns.

Therefore, we can define column-rank of $ A$ as the number of non-zero columns in $ B.$ It will be proved later that

$\displaystyle {\mbox{row-rank}} (A) = {\mbox{column-rank}} (A).$

Thus we are led to the following definition.

DEFINITION 2.4.6   The number of non-zero rows in the row reduced form of a matrix $ A$ is called the rank of $ A,$ denoted $ {\mbox{rank }}
(A).$

THEOREM 2.4.7   Let $ A$ be a matrix of rank $ r.$ Then there exist elementary matrices $ E_{1},E_{2},\ldots ,E_{s}$ and $ F_{1},F_{2},\ldots
,F_{\ell }$ such that

$\displaystyle E_{1}E_{2}\ldots E_{s} \; A \;
F_{1}F_{2}\ldots F_{\ell } = \begin{bmatrix}I_{r}& {\mathbf 0}\\
{\mathbf 0}& {\mathbf 0}\end{bmatrix}. $

Proof. Let $ C$ be the row reduced echelon matrix obtained by applying elementary row operations to the given matrix $ A.$ As $ {\mbox{rank}}(A) = r,$ the matrix $ C$ will have the first $ r$ rows as the non-zero rows. So by Remark 2.3.5, $ C$ will have $ r$ leading columns, say $ i_1, i_2, \ldots, i_r.$ Note that, for $ 1 \leq s \leq r, $ the $ i_s^{\mbox{th}}$ column will have $ 1$ in the $ s^{\mbox{th}}$ row and zero elsewhere.

We now apply column operations to the matrix $ C.$ Let $ D$ be the matrix obtained from $ C$ by successively interchanging the $ s^{\mbox{th}}$ and $ i_s^{\mbox{th}}$ columns of $ C$ for $ 1 \leq s \leq r.$ Then the matrix $ D$ can be written in the form $ \begin{bmatrix}I_r & B \\ {\mathbf 0}& {\mathbf 0}\end{bmatrix},$ where $ B$ is a matrix of appropriate size. As the $ (1,1)$ block of $ D$ is an identity matrix, the $ (1,2)$ block can be made the zero matrix by applying column operations to $ D.$ This gives the required result. $ \blacksquare$
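The construction in the proof can be replayed numerically for the rank-$2$ matrix of Example 2.4.3. The sketch below assumes NumPy; the helper `E` is ours, not from the text, and builds the elementary matrices explicitly so that $PAQ = \begin{bmatrix} I_2 & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix}$.

```python
import numpy as np

def E(i, j, c, n=3):
    """Elementary matrix: on the left it adds c times row j to row i;
    on the right it adds c times column i to column j."""
    M = np.eye(n)
    M[i, j] = c
    return M

A = np.array([[1., 2., 1.],
              [2., 3., 1.],
              [1., 1., 0.]])   # rank 2, as computed in Example 2.4.3

# Row operations from the example (the one applied first is the rightmost
# factor): R_21(-2), R_31(-1), R_2(-1), R_32(1), then R_12(-2), reaching the
# row reduced echelon form C = [[1,0,-1],[0,1,1],[0,0,0]].
P = E(0, 1, -2) @ E(2, 1, 1) @ np.diag([1., -1., 1.]) @ E(2, 0, -1) @ E(1, 0, -2)

# Column operations clearing the B block: col3 += col1, then col3 -= col2.
Q = E(0, 2, 1) @ E(1, 2, -1)

assert np.allclose(P @ A @ Q, np.diag([1., 1., 0.]))   # [[I_2, 0], [0, 0]]
```

Here no column interchanges were needed because the leading columns of $C$ are already the first two columns.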

COROLLARY 2.4.8   Let $ A$ be an $ n \times n$ matrix of rank $ r<n.$ Then the system of equations $ A {\mathbf x}= {\mathbf 0}$ has an infinite number of solutions.

Proof. By Theorem 2.4.7, there exist elementary matrices $ E_{1},E_{2},\ldots ,E_{s}$ and $ F_{1},F_{2},\ldots ,F_{\ell }$ such that $ E_{1}E_{2}\ldots E_{s} \; A \; F_{1}F_{2}\ldots F_{\ell } = \begin{bmatrix}I_{r}& {\mathbf 0}\\ {\mathbf 0}& {\mathbf 0}\end{bmatrix}. $ Define $ Q= F_{1}F_{2}\ldots F_{\ell }$ . Then

$\displaystyle A Q = (E_{1}E_{2}\ldots E_{s})^{-1} \begin{bmatrix}I_{r}& {\mathbf 0}\\ {\mathbf 0}& {\mathbf 0}\end{bmatrix} = \begin{bmatrix}\star & {\mathbf 0}\end{bmatrix},$

since the elementary matrices $ E_i$ are invertible and are multiplied on the left of the matrix $ \begin{bmatrix}I_{r}& {\mathbf 0}\\ {\mathbf 0}& {\mathbf 0}\end{bmatrix},$ so the last $ n-r$ columns of $ AQ$ remain zero. Let $ Q_1, Q_2, \ldots, Q_n$ be the columns of the matrix $ Q$ . Then check that $ A Q_i = {\mathbf 0}$ for $ i = r+1, r+2, \ldots, n$ . Hence, we can use the $ Q_i$ 's, which are non-zero (use Exercise 1.2.17.2), to generate an infinite number of solutions. $ \blacksquare$
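A small numerical check of the corollary, assuming NumPy, on the rank-$2$ matrix from Example 2.4.3:

```python
import numpy as np

A = np.array([[1., 2., 1.],
              [2., 3., 1.],
              [1., 1., 0.]])    # 3 x 3 with rank 2 < 3

# From the row reduced form [[1,0,-1],[0,1,1],[0,0,0]]: x1 = x3 and
# x2 = -x3, so every scalar multiple of (1, -1, 1) solves A x = 0.
x = np.array([1., -1., 1.])

for t in [0.5, 2.0, -3.0]:      # three of the infinitely many solutions
    assert np.allclose(A @ (t * x), 0)
```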

EXERCISE 2.4.9  
  1. Determine the ranks of the coefficient and the augmented matrices that appear in Part 1 and Part 2 of Exercise 2.3.12.
  2. Let $ A$ be an $ n \times n$ matrix with $ {\mbox{rank}}(A) =n.$ Then prove that $ A$ is row-equivalent to $ I_n.$
  3. If $ P$ and $ Q$ are invertible matrices and $ P A Q$ is defined then show that $ {\mbox{rank }}(P A Q) = {\mbox{ rank }} (A).$
  4. Find matrices $ P$ and $ Q$ which are products of elementary matrices such that $ B = P A Q$ where $ A = \begin{bmatrix}2 & 4 & 8\\ 1 & 3 & 2 \end{bmatrix}$ and $ B = \begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0 \end{bmatrix}.$
  5. Let $ A$ and $ B$ be two matrices. Show that
    1. if $ A + B$ is defined, then $ {\mbox{rank}}(A+B) \leq {\mbox{rank}}(A) + {\mbox{rank}}(B),$
    2. if $ A B$ is defined, then $ {\mbox{rank}}(AB) \leq {\mbox{rank}}(A)$ and $ {\mbox{rank}}(AB) \leq {\mbox{rank}}(B).$
  6. Let $ A$ be any matrix of rank $ r.$ Then show that there exist invertible matrices $ B_i, C_i$ such that
    $ B_1 A = \begin{bmatrix}R_1 & R_2 \\ {\mathbf 0}& {\mathbf 0}\end{bmatrix}, \;\; A C_1 = \begin{bmatrix}S_1 & {\mathbf 0}\\ S_3 & {\mathbf 0}\end{bmatrix}, \;\; B_2 A C_2 = \begin{bmatrix}A_1 & {\mathbf 0}\\ {\mathbf 0}& {\mathbf 0}\end{bmatrix},$ and $ B_3 A C_3 = \begin{bmatrix}I_r & {\mathbf 0}\\ {\mathbf 0}& {\mathbf 0}\end{bmatrix}.$ Also, prove that the matrix $ A_1$ is an $ r \times r$ invertible matrix.
  7. Let $ A$ be an $ m \times n$ matrix of rank $ r.$ Then show that $ A$ can be written as $ A = B C,$ where both $ B$ and $ C$ have rank $ r,$ $ B$ is a matrix of size $ m \times r,$ and $ C$ is a matrix of size $ r \times n.$
  8. Let $ A$ and $ B$ be two matrices such that $ A B$ is defined and $ {\mbox{rank }}(A) = {\mbox{rank }}(A B).$ Then show that $ A = A B X$ for some matrix $ X.$ Similarly, if $ B A$ is defined and $ {\mbox{rank }}(A) = {\mbox{rank }}(B A),$ then $ A =
Y B A$ for some matrix $ Y.$ [Hint: Choose non-singular matrices $ P, Q $ and $ R$ such that $ P A Q = \begin{bmatrix}A_1 &
{\mathbf 0}\\ {\mathbf 0}& {\mathbf 0}
\end{bmatrix}$ and $ P (A B) R = \begin{bmatrix}C & {\mathbf 0}\\ {\mathbf 0}& {\mathbf 0}
\end{bmatrix}.$ Define $ X = R \begin{bmatrix}C^{-1} A_1 & {\mathbf 0}\\ {\mathbf 0}& {\mathbf 0}
\end{bmatrix} Q^{-1}.$ ]
  9. If matrices $ B$ and $ C$ are invertible and the involved partitioned products are defined, then show that

    $\displaystyle \begin{bmatrix}A&B \\ C&{\mathbf 0}\end{bmatrix}^{-1} =
\begin{bmatrix}{\mathbf 0}&C^{-1}\\
B^{-1}&-B^{-1}AC^{-1}\end{bmatrix}. $

  10. Suppose $ A$ is the inverse of a matrix $ B.$ Partition $ A$ and $ B$ as follows:

    $\displaystyle A =
\begin{bmatrix}A_{11} & A_{12} \\ A_{21} & A_{22}\end{bmatrix},
\;\; B = \begin{bmatrix}B_{11}& B_{12} \\ B_{21} & B_{22}
\end{bmatrix}. $

    If $ A_{11}$ is invertible and $ P = A_{22} - A_{21}(A^{-1}_{11} A_{12}),$ then show that

    $\displaystyle B_{11} = A^{-1}_{11} + (A^{-1}_{11}A_{12})P^{-1}(A_{21} A^{-1}_{11}), \;\; B_{21} = - P^{-1}(A_{21} A^{-1}_{11}), \;\; B_{12} = - (A^{-1}_{11} A_{12})P^{-1},
$

    and $ B_{22} = P^{-1}.$
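The block-inverse formulas of the last exercise can be sanity-checked numerically. The sketch below assumes NumPy and uses a randomly generated invertible matrix of our choosing; $P$ is the Schur complement of $A_{11}$, and the assembled $B$ should satisfy $AB = I$.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 4x4 matrix, shifted toward the diagonal so that A and A11 are invertible.
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]

A11inv = np.linalg.inv(A11)
P = A22 - A21 @ A11inv @ A12            # the Schur complement of A11
Pinv = np.linalg.inv(P)

# The four blocks of B = A^{-1}, as in the exercise:
B11 = A11inv + A11inv @ A12 @ Pinv @ A21 @ A11inv
B12 = -A11inv @ A12 @ Pinv
B21 = -Pinv @ A21 @ A11inv
B22 = Pinv

B = np.block([[B11, B12], [B21, B22]])
assert np.allclose(A @ B, np.eye(4))    # B is indeed the inverse of A
```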

A K Lal 2007-09-12