Properties of Determinant

THEOREM 15.3.1 (Properties of Determinant)   Let $ A=[a_{ij}]$ be an $ n \times n$ matrix. Then
  1. if $ B$ is obtained from $ A$ by interchanging two rows, then
    $ \det (B) = - \det (A).$
  2. if $ B$ is obtained from $ A$ by multiplying a row by $ c$ then
    $ \det (B) = c \det (A).$
  3. if all the elements of one row are $ 0$ then $ \det (A) = 0.$
  4. if two rows of $ A$ are equal then $ \det (A) = 0.$
  5. Let $ B= [b_{ij}]$ and $ C = [c_{ij}]$ be two matrices which differ from the matrix $ A=[a_{ij}]$ only in the $ m^{\mbox{th}}$ row for some $ m$ . If $ c_{mj} = a_{mj} + b_{mj}$ for $ 1 \le j \le n$ then $ \det(C) = \det(A) + \det(B)$ .
  6. if $ B$ is obtained from $ A$ by replacing the $ \ell^{\mbox{th}}$ row by itself plus $ k$ times the $ m^{\mbox{th}}$ row, for $ \ell \neq m$ , then $ \det (B) = \det (A).$
  7. if $ A$ is a triangular matrix then $ \det (A) = a_{11} a_{22} \cdots a_{nn},$ the product of the diagonal elements.
  8. If $ E$ is an elementary matrix of order $ n$ then $ \det(EA) = \det(E)\det(A)$ .
  9. $ A$ is invertible if and only if $ \det(A) \ne 0$ .
  10. If $ B$ is an $ n \times n$ matrix then $ \det(A B) = \det(A) \det(B)$ .
  11. $ \det(A) = \det(A^t)$ , where recall that $ A^t$ is the transpose of the matrix $ A$ .
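
As a quick illustration of Parts 1, 2 and 6, consider the $ 2 \times 2$ matrix $ A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$ , so that $ \det(A) = 1 \cdot 4 - 2 \cdot 3 = -2$ . Interchanging the two rows gives determinant $ 3 \cdot 2 - 4 \cdot 1 = 2 = -\det(A)$ ; multiplying the first row by $ 5$ gives determinant $ 5 \cdot 4 - 10 \cdot 3 = -10 = 5 \det(A)$ ; and replacing the second row by itself plus $ 2$ times the first row gives $ \det \begin{bmatrix} 1 & 2 \\ 5 & 8 \end{bmatrix} = 8 - 10 = -2 = \det(A)$ .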

Proof. Proof of Part 1. Suppose $ B= [b_{ij}]$ is obtained from $ A=[a_{ij}]$ by the interchange of the $ \ell^{\mbox{th}}$ and $ m^{\mbox{th}}$ row. Then $ b_{\ell j} = a_{m j}, \; b_{m j} = a_{\ell j} $ for $ 1 \le j \le n$ and $ b_{ij} = a_{ij}$ for $ 1 \le i \ne \ell, m \le n, \; 1 \le j \le n$ .

Let $ \tau = (\ell \; m)$ be a transposition. Then by Proposition 14.2.4, $ {\mathcal S}_n = \{ \sigma \circ \tau: \; \sigma \in {\mathcal S}_n \}$ . Hence by the definition of determinant and Example 14.2.14.2, we have

$\displaystyle \det(B)$ $\displaystyle =$ $\displaystyle \sum\limits_{\sigma \in {\mathcal S}_n} {\mbox{sgn}}(\sigma) \prod\limits_{i=1}^n b_{i \sigma(i)} = \sum\limits_{\sigma\circ\tau \in {\mathcal S}_n} {\mbox{sgn}}(\sigma\circ\tau) \prod\limits_{i=1}^n b_{i (\sigma\circ\tau)(i)}$
  $\displaystyle =$ $\displaystyle \sum\limits_{\sigma\circ\tau \in {\mathcal S}_n} {\mbox{sgn}}(\tau) \; {\mbox{sgn}}(\sigma) \; b_{1 (\sigma\circ\tau)(1)} \cdots b_{\ell (\sigma\circ\tau)(\ell)} \cdots b_{m (\sigma\circ\tau)(m)} \cdots b_{n (\sigma\circ\tau)(n)}$
  $\displaystyle =$ $\displaystyle {\mbox{sgn}}(\tau) \sum\limits_{\sigma \in {\mathcal S}_n} {\mbox{sgn}}(\sigma) \; b_{1 \sigma(1)} b_{2 \sigma(2)} \cdots b_{\ell \sigma(m)} \cdots b_{m \sigma(\ell)} \cdots b_{n \sigma(n)}$
  $\displaystyle =$ $\displaystyle - \left(\sum\limits_{\sigma \in {\mathcal S}_n} {\mbox{sgn}}(\sigma) \; a_{1 \sigma(1)} a_{2 \sigma(2)} \cdots a_{m \sigma(m)} \cdots a_{\ell \sigma(\ell)} \cdots a_{n \sigma(n)} \right) \;\;\; {\mbox{ as sgn}}(\tau)= -1$
  $\displaystyle =$ $\displaystyle - \det(A).$
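
For instance, when $ n = 2$ and $ \tau = (1\; 2)$ , the above computation reads $ \det(B) = b_{11} b_{22} - b_{12} b_{21} = a_{21} a_{12} - a_{22} a_{11} = -(a_{11} a_{22} - a_{12} a_{21}) = - \det(A)$ .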

Proof of Part 2. Suppose that $ B= [b_{ij}]$ is obtained from $ A$ by multiplying the $ m^{\mbox{th}}$ row of $ A$ by $ c$ . Then $ b_{mj} = c \;a_{mj}$ and $ b_{ij} = a_{ij}$ for $ 1 \le i \ne m \le n, \; 1 \le j \le n$ . Then

$\displaystyle \det(B)$ $\displaystyle =$ $\displaystyle \sum\limits_{\sigma \in {\mathcal S}_n} {\mbox{sgn}}(\sigma) b_{1 \sigma(1)} b_{2 \sigma(2)} \cdots b_{m \sigma(m)} \cdots b_{n \sigma(n)}$
  $\displaystyle =$ $\displaystyle \sum\limits_{\sigma \in {\mathcal S}_n} {\mbox{sgn}}(\sigma) a_{1 \sigma(1)} a_{2 \sigma(2)} \cdots c \; a_{m \sigma(m)} \cdots a_{n \sigma(n)}$
  $\displaystyle =$ $\displaystyle c \sum\limits_{\sigma \in {\mathcal S}_n} {\mbox{sgn}}(\sigma) a_{1 \sigma(1)} a_{2 \sigma(2)} \cdots a_{m \sigma(m)} \cdots a_{n \sigma(n)}$
  $\displaystyle =$ $\displaystyle c \det(A).$

Proof of Part 3. Note that $ \det(A) = \sum\limits_{\sigma\in {\mathcal S}_n} {\mbox{sgn}}(\sigma) a_{1 \sigma(1)} a_{2\sigma(2)} \cdots a_{n\sigma(n)}$ . So, each term in the expression for the determinant contains exactly one entry from each row of $ A$ . Since one row of $ A$ consists entirely of zeros, each term contains a zero factor and hence equals $ 0$ . Thus, $ \det (A) = 0$ .

Proof of Part 4. Suppose that the $ \ell^{\mbox{th}}$ and $ m^{\mbox{th}}$ rows of $ A$ are equal. Let $ B$ be the matrix obtained from $ A$ by interchanging the $ \ell^{\mbox{th}}$ and $ m^{\mbox{th}}$ rows. Then by the first part, $ \det (B) = - \det (A).$ But the assumption implies that $ B = A$ , and hence $ \det (B) = \det (A)$ . So, we have $ \det(A) = - \det(A)$ , and therefore $ \det (A) = 0$ .

Proof of Part 5. By definition and the given assumption, we have

$\displaystyle \det(C)$ $\displaystyle =$ $\displaystyle \sum\limits_{\sigma \in {\mathcal S}_n} {\mbox{sgn}}(\sigma) c_{1 \sigma(1)} c_{2 \sigma(2)} \cdots c_{m \sigma(m)} \cdots c_{n \sigma(n)}$
  $\displaystyle =$ $\displaystyle \sum\limits_{\sigma \in {\mathcal S}_n} {\mbox{sgn}}(\sigma) c_{1 \sigma(1)} c_{2 \sigma(2)} \cdots \bigl(b_{m \sigma(m)} + a_{m \sigma(m)}\bigr) \cdots c_{n \sigma(n)}$
  $\displaystyle =$ $\displaystyle \sum\limits_{\sigma \in {\mathcal S}_n} {\mbox{sgn}}(\sigma) b_{1 \sigma(1)} b_{2 \sigma(2)} \cdots b_{m \sigma(m)} \cdots b_{n \sigma(n)}$
    $\displaystyle \hspace{1in} + \sum\limits_{\sigma \in {\mathcal S}_n} {\mbox{sgn}}(\sigma) a_{1 \sigma(1)} a_{2 \sigma(2)} \cdots a_{m \sigma(m)} \cdots a_{n \sigma(n)}$
  $\displaystyle =$ $\displaystyle \det(B) + \det(A),$

where we have used $ c_{i \sigma(i)} = a_{i \sigma(i)} = b_{i \sigma(i)}$ for every $ i \ne m$ .
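
For instance, writing the second row of $ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$ as $ (1, 1) + (2, 3)$ , Part 5 asserts that $ \det \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \det \begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix} + \det \begin{bmatrix} 1 & 2 \\ 2 & 3 \end{bmatrix}$ , and indeed $ -2 = (1 - 2) + (3 - 4)$ .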

Proof of Part 6. Suppose that $ B= [b_{ij}]$ is obtained from $ A$ by replacing the $ \ell^{\mbox{th}}$ row by itself plus $ k$ times the $ m^{\mbox{th}}$ row, for $ \ell \neq m$ . Then $ b_{\ell j} = a_{\ell j} + k \;a_{mj}$ and $ b_{ij} = a_{ij}$ for $ 1 \le i \ne \ell \le n, \; 1 \le j \le n$ . Then

$\displaystyle \det(B)$ $\displaystyle =$ $\displaystyle \sum\limits_{\sigma \in {\mathcal S}_n} {\mbox{sgn}}(\sigma) b_{1 \sigma(1)} b_{2 \sigma(2)} \cdots b_{\ell \sigma(\ell)} \cdots b_{m \sigma(m)} \cdots b_{n \sigma(n)}$
  $\displaystyle =$ $\displaystyle \sum\limits_{\sigma \in {\mathcal S}_n} {\mbox{sgn}}(\sigma) a_{1 \sigma(1)} a_{2 \sigma(2)} \cdots \bigl(a_{\ell \sigma(\ell)} + k \; a_{m \sigma(\ell)}\bigr) \cdots a_{m \sigma(m)} \cdots a_{n \sigma(n)}$
  $\displaystyle =$ $\displaystyle \sum\limits_{\sigma \in {\mathcal S}_n} {\mbox{sgn}}(\sigma) a_{1 \sigma(1)} a_{2 \sigma(2)} \cdots a_{\ell \sigma(\ell)} \cdots a_{m \sigma(m)} \cdots a_{n \sigma(n)}$
    $\displaystyle \hspace{1in} + k \sum\limits_{\sigma \in {\mathcal S}_n} {\mbox{sgn}}(\sigma) a_{1 \sigma(1)} a_{2 \sigma(2)} \cdots a_{m \sigma(\ell)} \cdots a_{m \sigma(m)} \cdots a_{n \sigma(n)}$
  $\displaystyle =$ $\displaystyle \sum\limits_{\sigma \in {\mathcal S}_n} {\mbox{sgn}}(\sigma) a_{1 \sigma(1)} a_{2 \sigma(2)} \cdots a_{\ell \sigma(\ell)} \cdots a_{n \sigma(n)} \hspace{.25in} {\mbox{using Part 4}}$
  $\displaystyle =$ $\displaystyle \det(A).$

Here the second sum vanishes by Part 4, as it is the determinant of the matrix obtained from $ A$ by replacing the $ \ell^{\mbox{th}}$ row with the $ m^{\mbox{th}}$ row, a matrix with two equal rows.

Proof of Part 7. First let us assume that $ A$ is an upper triangular matrix. Observe that if $ \sigma \in {\mathcal S}_n$ is different from the identity permutation then $ n(\sigma) \ge 1$ . So, for every $ \sigma \ne Id_n \in {\mathcal S}_n$ , there exists a positive integer $ m$ , $ 1 \le m \le n$ (depending on $ \sigma$ ), such that $ m > \sigma(m)$ . As $ A$ is an upper triangular matrix, $ a_{m \sigma(m)} = 0$ for this choice of $ m$ . So, every term in the sum defining $ \det(A)$ vanishes except the one corresponding to $ Id_n$ , and hence $ \det(A) = a_{11} a_{22} \cdots a_{nn}$ .

A similar reasoning holds in case $ A$ is a lower triangular matrix.
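
For example, for the upper triangular matrix $ \begin{bmatrix} 2 & 5 & 7 \\ 0 & 3 & 1 \\ 0 & 0 & 4 \end{bmatrix}$ , every permutation other than the identity picks at least one entry from below the diagonal, so its determinant is $ 2 \cdot 3 \cdot 4 = 24$ .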

Proof of Part 8. Let $ I_n$ be the identity matrix of order $ n$ . Then using Part 7, $ \det(I_n) = 1$ . Also, recalling the notations for the elementary matrices given in Remark 2.3.14, we have $ \det(E_{ij}) = -1$ (using Part 1), $ \det(E_i(c)) = c$ (using Part 2) and $ \det(E_{ij}(k)) = 1$ (using Part 6). Again using Parts 1, 2 and 6, we get $ \det(EA) = \det(E)\det(A)$ .

Proof of Part 9. Suppose $ A$ is invertible. Then by Theorem 2.5.8, $ A$ is a product of elementary matrices. That is, there exist elementary matrices $ E_1, E_2, \ldots, E_k$ such that $ A = E_1 E_2 \cdots E_k$ . Now a repeated application of Part 8 implies that $ \det(A) = \det(E_1) \det(E_2) \cdots \det(E_k)$ . But $ \det(E_i) \ne 0$ for $ 1 \le i \le k$ . Hence, $ \det(A) \ne 0$ .

Now assume that $ \det(A) \ne 0$ . We show that $ A$ is invertible. On the contrary, assume that $ A$ is not invertible. Then by Theorem 2.5.8, the matrix $ A$ is not of full rank. That is, there exists a positive integer $ r < n$ such that $ {\mbox{rank }}(A) = r$ . So, there exist elementary matrices $ E_1, E_2, \ldots, E_k$ such that $ E_1 E_2 \cdots E_k A = \left[\begin{array}{c} B \\ {\mathbf 0}\end{array}\right]$ , where the rows below $ B$ are zero rows. Therefore, by Part 3 and a repeated application of Part 8,

$\displaystyle \det(E_1) \det(E_2) \cdots \det(E_k) \det(A) = \det(E_1 E_2 \cdots E_k A) = \det \left( \left[\begin{array}{c} B \\ {\mathbf 0}\end{array}\right] \right) = 0.$

But $ \det(E_i) \ne 0$ for $ 1 \le i \le k$ . Hence, $ \det (A) = 0$ . This contradicts our assumption that $ \det(A) \ne 0$ . Hence our assumption is false and therefore $ A$ is invertible.

Proof of Part 10. Suppose $ A$ is not invertible. Then by Part 9, $ \det (A) = 0$ . The product matrix $ A B$ is also not invertible, for if $ AB$ were invertible then $ A \bigl( B (AB)^{-1} \bigr) = I_n$ would make $ A$ invertible. So, again by Part 9, $ \det(A B) = 0$ . Thus, $ \det(A B) = 0 = \det(A) \det(B)$ .

Now suppose that $ A$ is invertible. Then by Theorem 2.5.8, $ A$ is a product of elementary matrices. That is, there exist elementary matrices $ E_1, E_2, \ldots, E_k$ such that $ A = E_1 E_2 \cdots E_k$ . Now a repeated application of Part 8 implies that

$\displaystyle \det(AB)$ $\displaystyle =$ $\displaystyle \det ( E_1 E_2 \cdots E_k B) = \det(E_1) \det(E_2) \cdots \det(E_k) \det(B)$  
  $\displaystyle =$ $\displaystyle \det ( E_1 E_2 \cdots E_k) \det( B) = \det(A) \det(B).$  
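
As a numerical check, take $ A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$ and $ B = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$ . Then $ AB = \begin{bmatrix} 2 & 1 \\ 4 & 3 \end{bmatrix}$ , and $ \det(AB) = 6 - 4 = 2 = (-2)(-1) = \det(A) \det(B)$ .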

Proof of Part 11. Let $ B = [b_{ij}]= A^t$ . Then $ b_{ij} = a_{ji}$ for $ 1 \le i, j \le n$ . By Proposition 14.2.4, we know that $ {\mathcal S}_n = \{\sigma^{-1} : \; \sigma \in {\mathcal S}_n\}$ . Also $ {\mbox{sgn}}(\sigma) = {\mbox{sgn}}(\sigma^{-1})$ . Hence,

$\displaystyle \det(B)$ $\displaystyle =$ $\displaystyle \sum\limits_{\sigma \in {\mathcal S}_n} {\mbox{sgn}}(\sigma) b_{1 \sigma(1)} b_{2 \sigma(2)} \cdots b_{n \sigma(n)}$
  $\displaystyle =$ $\displaystyle \sum\limits_{\sigma \in {\mathcal S}_n} {\mbox{sgn}}(\sigma^{-1}) \; b_{\sigma^{-1}(1) \; 1} \; b_{\sigma^{-1}(2) \; 2} \cdots b_{\sigma^{-1}(n) \; n}$
  $\displaystyle =$ $\displaystyle \sum\limits_{\sigma^{-1} \in {\mathcal S}_n} {\mbox{sgn}}(\sigma^{-1}) \; a_{1 \sigma^{-1}(1)} \; a_{2 \sigma^{-1}(2)} \cdots a_{n \sigma^{-1}(n)}$
  $\displaystyle =$ $\displaystyle \det(A).$

$ \blacksquare$

Remark 15.3.2  
  1. The result that $ \det(A) = \det(A^t)$ implies that in the statements made in Theorem 15.3.1, wherever the word ``row" appears it can be replaced by ``column".
  2. Let $ A=[a_{ij}]$ be a matrix satisfying $ a_{11} = 1$ and $ a_{1j} = 0$ for $ 2 \le j \le n$ . Let $ B$ be the submatrix of $ A$ obtained by removing the first row and the first column. Then it can be easily shown that $ \det(A) = \det(B)$ . The reason is as follows:
    since $ a_{1 \sigma(1)} = 0$ whenever $ \sigma(1) \ne 1$ , only those $ \sigma \in {\mathcal S}_n$ with $ \sigma(1) = 1$ contribute to $ \det(A)$ ; and $ \sigma(1) = 1$ is equivalent to saying that $ \sigma$ is a permutation of the elements $ \{2, 3, \ldots, n\}$ . That is, such a $ \sigma$ may be viewed as an element of $ {\mathcal S}_{n-1}$ . Hence,
    $\displaystyle \det(A)$ $\displaystyle =$ $\displaystyle \sum\limits_{\sigma \in {\mathcal S}_n} {\mbox{sgn}}(\sigma) a_{1 \sigma(1)} a_{2 \sigma(2)} \cdots a_{n \sigma(n)} = \sum\limits_{\sigma \in {\mathcal S}_n, \; \sigma(1) = 1} {\mbox{sgn}}(\sigma) a_{2 \sigma(2)} \cdots a_{n \sigma(n)}$
      $\displaystyle =$ $\displaystyle \sum\limits_{\sigma \in {\mathcal S}_{n-1}} {\mbox{sgn}}(\sigma) b_{1 \sigma(1)} \cdots b_{n-1 \; \sigma(n-1)} = \det(B).$
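
For example, $ \det \begin{bmatrix} 1 & 0 & 0 \\ 4 & 2 & 3 \\ 5 & 1 & 2 \end{bmatrix} = \det \begin{bmatrix} 2 & 3 \\ 1 & 2 \end{bmatrix} = 4 - 3 = 1$ .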

We are now ready to relate this definition of determinant with the one given in Definition 2.6.2.

THEOREM 15.3.3   Let $ A$ be an $ n \times n$ matrix. Then $ \det(A) = \sum\limits_{j=1}^n (-1)^{1 + j} a_{1j} \det\bigl(A(1\vert j)\bigr),$ where recall that $ A(1\vert j)$ is the submatrix of $ A$ obtained by removing the $ 1^{\mbox{st}}$ row and the $ j^{\mbox{th}}$ column.

Proof. For $ 1 \le j \le n$ , define two matrices

$\displaystyle B_j = \begin{bmatrix} 0 & 0 & \cdots & a_{1j} & \cdots & 0 \\ a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nj} & \cdots & a_{nn} \end{bmatrix}_{n \times n} \quad {\mbox{ and }} \quad C_j = \begin{bmatrix} a_{1j} & 0 & 0 & \cdots & 0 \\ a_{2j} & a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{nj} & a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}_{n \times n}.$

Here $ B_j$ agrees with $ A$ except that its first row has $ a_{1j}$ in the $ j^{\mbox{th}}$ column and $ 0$ elsewhere, while $ C_j$ is obtained from $ B_j$ by moving the $ j^{\mbox{th}}$ column to the first position (so in each row of $ C_j$ the entry from the $ j^{\mbox{th}}$ column appears first and is omitted from the remaining columns).

Then by Part 5 of Theorem 15.3.1,

$\displaystyle \det(A) = \sum\limits_{j=1}^n \det(B_j).$ (15.3.6)

We now compute $ \det(B_j)$ for $ 1 \le j \le n$ . Note that the matrix $ B_j$ can be transformed into $ C_j$ by $ j-1$ interchanges of adjacent columns, done in the following manner:
first interchange the $ (j-1)^{\mbox{th}}$ and $ j^{\mbox{th}}$ columns, then interchange the $ (j-2)^{\mbox{th}}$ and $ (j-1)^{\mbox{th}}$ columns, and so on (the last step interchanges the $ 1^{\mbox{st}}$ and $ 2^{\mbox{nd}}$ columns). Then by Remark 15.3.2 and Parts 1 and 2 of Theorem 15.3.1, we have $ \det(B_j) = (-1)^{j-1} \det(C_j) = (-1)^{j-1} a_{1j} \det\bigl(A(1\vert j)\bigr)$ . Therefore by (15.3.6),

$\displaystyle \det(A) = \sum\limits_{j=1}^n (-1)^{j-1} a_{1j} \det\bigl( A(1\vert j)\bigr)=\sum\limits_{j=1}^n (-1)^{j+1} a_{1j} \det\bigl( A(1\vert j)\bigr).$

$ \blacksquare$
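
For example, for $ A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10 \end{bmatrix}$ , Theorem 15.3.3 gives $ \det(A) = 1 \cdot \det \begin{bmatrix} 5 & 6 \\ 8 & 10 \end{bmatrix} - 2 \cdot \det \begin{bmatrix} 4 & 6 \\ 7 & 10 \end{bmatrix} + 3 \cdot \det \begin{bmatrix} 4 & 5 \\ 7 & 8 \end{bmatrix} = 1 \cdot 2 - 2 \cdot (-2) + 3 \cdot (-3) = -3$ .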

A K Lal 2007-09-12