Important Results

THEOREM 3.3.7   Let $ \{{\mathbf v}_1, {\mathbf v}_2, \ldots, {\mathbf v}_n \}$ be a basis of a given vector space $ V.$ If $ \{{\mathbf w}_1, {\mathbf w}_2, \ldots, {\mathbf w}_m \}$ is a set of vectors from $ V$ with $ m > n$ then this set is linearly dependent.

Proof. To determine whether the set $ \{{\mathbf w}_1, {\mathbf w}_2, \ldots, {\mathbf w}_m \}$ is linearly dependent, we consider the linear system

$\displaystyle \alpha_1 {\mathbf w}_1 + \alpha_2 {\mathbf w}_2 + \cdots + \alpha_m {\mathbf w}_m = {\mathbf 0}$ (3.3.1)

in the $ m$ unknowns $ \alpha_1, \alpha_2, \ldots, \alpha_m.$ If this homogeneous system has a non-trivial solution, then the set is linearly dependent.

As $ \{{\mathbf v}_1, {\mathbf v}_2, \ldots, {\mathbf v}_n \}$ is a basis of $ V$ and $ {\mathbf w}_i \in V,$ for each $ i, \; 1 \leq i \leq m,$ there exist scalars $ a_{ij}, \; 1 \leq i \leq n,\; 1 \leq j \leq m,$ such that

$\displaystyle {\mathbf w}_1 = a_{11} {\mathbf v}_1 + a_{21} {\mathbf v}_2 + \cdots + a_{n1} {\mathbf v}_n,$
$\displaystyle {\mathbf w}_2 = a_{12} {\mathbf v}_1 + a_{22} {\mathbf v}_2 + \cdots + a_{n2} {\mathbf v}_n,$
$\displaystyle \qquad\vdots$
$\displaystyle {\mathbf w}_m = a_{1m} {\mathbf v}_1 + a_{2m} {\mathbf v}_2 + \cdots + a_{nm} {\mathbf v}_n.$

Equation (3.3.1) can be rewritten as

$\displaystyle \alpha_1 \left( \sum\limits_{j=1}^n a_{j1} {\mathbf v}_j\right) + \alpha_2 \left( \sum\limits_{j=1}^n a_{j2} {\mathbf v}_j\right) + \cdots + \alpha_m \left( \sum\limits_{j=1}^n a_{jm} {\mathbf v}_j \right) = {\mathbf 0},$

$\displaystyle {\mbox{i.e.,}} \;\; \left( \sum\limits_{i=1}^m \alpha_i a_{1i}\right) {\mathbf v}_1 + \left( \sum\limits_{i=1}^m \alpha_i a_{2i}\right) {\mathbf v}_2 + \cdots + \left( \sum\limits_{i=1}^m \alpha_i a_{ni}\right) {\mathbf v}_n = {\mathbf 0}.$

Since the set $ \{{\mathbf v}_1, {\mathbf v}_2, \ldots, {\mathbf v}_n \}$ is linearly independent, we have

$\displaystyle \sum\limits_{i=1}^m \alpha_i a_{1i}=
\sum\limits_{i=1}^m \alpha_i a_{2i}= \cdots = \sum\limits_{i=1}^m
\alpha_i a_{ni} = 0.$

Therefore, finding $ \alpha_i$ 's satisfying Equation (3.3.1) reduces to solving the system of homogeneous equations $ A \alpha = {\mathbf 0},$ where $ \alpha^t =( \alpha_1, \alpha_2, \ldots, \alpha_m)$ and $ A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}.$ Since $ n < m,$ i.e., the number of equations is strictly less than the number of unknowns, Corollary 2.5.3 implies that the solution set has infinitely many elements. Therefore, Equation (3.3.1) has a solution in which not all $ \alpha_i, \; 1 \leq i \leq m,$ are zero. Hence, the set $ \{{\mathbf w}_1, {\mathbf w}_2, \ldots, {\mathbf w}_m \}$ is linearly dependent. $ \blacksquare$
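As an illustrative aside (not part of the original argument), the dependence relation guaranteed by the theorem can be computed numerically. The following Python sketch, assuming NumPy is available and using three arbitrarily chosen vectors in $ {\mathbb{R}}^2$ ($ m = 3 > n = 2$), extracts a non-trivial $ \alpha$ from the singular value decomposition:

```python
import numpy as np

# Three arbitrary vectors w_1, w_2, w_3 in R^2 (m = 3 > n = 2), as columns.
W = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Since m > n, the homogeneous system W @ alpha = 0 has a non-trivial
# solution; the last right-singular vector of W provides one.
_, _, Vt = np.linalg.svd(W)
alpha = Vt[-1]

print(np.allclose(W @ alpha, 0.0))             # True: a dependence relation
print(np.isclose(np.linalg.norm(alpha), 1.0))  # True: alpha is non-zero
```

Any $ m > n$ vectors would do; the SVD merely makes the non-trivial null vector explicit.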

Remark 3.3.8   Let $ V$ be a vector subspace of $ {\mathbb{R}}^n$ with spanning set $ S.$ We give a method of finding a basis of $ V$ from $ S.$
  1. Construct a matrix $ A$ whose rows are the vectors in $ S.$
  2. Use only the elementary row operations $ R_i(c)$ and $ R_{ij}(c)$ to get the row-reduced form $ B$ of $ A$ (in fact we just need to make as many zero-rows as possible).
  3. Let $ {\cal B}$ be the set of vectors in $ S$ corresponding to the non-zero rows of $ B.$
Then the set $ {\cal B}$ is a basis of $ L(S)= V.$

EXAMPLE 3.3.9   Let $ S= \{ (1,1,1,1), (1,1,-1,1), (1,1,0,1), (1, -1, 1,1)\}$ be a subset of $ {\mathbb{R}}^4.$ Find a basis of $ L(S).$
Solution: Here $ A =\begin{bmatrix}1& 1& 1 & 1 \\ 1& 1 & -1 &1
\\ 1 & 1 & 0 &1\\ 1& -1 & 1 &1 \end{bmatrix}.$ Applying row-reduction to $ A,$ we have
$ \begin{bmatrix}1& 1& 1 &1 \\ 1& 1 & -1 &1 \\ 1 & 1 & 0 &1\\ 1& -1 & 1 &1 \end{bmatrix} \longrightarrow \begin{bmatrix}1& 1& 1 &1\\ 0& 0 & 0 &0 \\ 0 & 0 & -1 &0 \\ 0 & -2 & 0 &0 \end{bmatrix}. $
Observe that the rows $ 1, 3$ and $ 4$ are non-zero. Hence, a basis of $ L(S)$ consists of the first, third and fourth vectors of the set $ S.$ Thus, $ {\cal B}=\{(1,1,1,1), \; (1,1,0,1),\; (1, -1, 1,1) \}$ is a basis of $ L(S).$

Observe that at the last step, in place of the elementary row operation $ R_{32}(-2)$ , we could instead apply $ R_{23}(-\frac{1}{2})$ to make the third row the zero row. In that case, we get $ \{(1,1,1,1), \; (1,1,-1,1),\; (1, -1, 1,1) \}$ as a basis of $ L(S)$ .
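The idea behind Remark 3.3.8 can also be mimicked numerically: scan the vectors of $ S$ in order and keep each one that strictly increases the rank. A Python sketch of this (assuming NumPy; the function name is ours), applied to the set $ S$ of Example 3.3.9:

```python
import numpy as np

def basis_from_spanning_set(S):
    """Scan S in order and keep each vector that strictly increases the
    rank; the kept vectors form a basis of L(S).  A numerical sketch of
    the idea behind Remark 3.3.8."""
    basis = []
    for v in S:
        candidate = np.array(basis + [v], dtype=float)
        if np.linalg.matrix_rank(candidate) == len(basis) + 1:
            basis.append(v)
    return basis

S = [(1, 1, 1, 1), (1, 1, -1, 1), (1, 1, 0, 1), (1, -1, 1, 1)]
print(basis_from_spanning_set(S))  # keeps vectors 1, 2 and 4 of S
```

Note that this greedy scan keeps the first, second and fourth vectors of $ S,$ another valid basis of $ L(S);$ which subset survives depends on the order of elimination, just as with the choice of row operations.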

COROLLARY 3.3.10   Let $ V$ be a finite dimensional vector space. Then any two bases of $ V$ have the same number of vectors.

Proof. Let $ \{{\mathbf u}_1, {\mathbf u}_2, \ldots, {\mathbf u}_n\} $ and $ \{{\mathbf v}_1, {\mathbf v}_2, \ldots, {\mathbf v}_m \}$ be two bases of $ V,$ and suppose $ m > n.$ Then, taking $ \{{\mathbf u}_1, {\mathbf u}_2, \ldots, {\mathbf u}_n\} $ as the basis in Theorem 3.3.7, the set $ \{{\mathbf v}_1, {\mathbf v}_2, \ldots, {\mathbf v}_m \}$ is linearly dependent. This contradicts the assumption that $ \{{\mathbf v}_1, {\mathbf v}_2, \ldots, {\mathbf v}_m \}$ is also a basis of $ V.$ Hence, we get $ m = n.$ $ \blacksquare$

DEFINITION 3.3.11 (Dimension of a Vector Space)   The dimension of a finite dimensional vector space $ V$ is the number of vectors in a basis of $ V,$ denoted $ \dim(V).$

Note that Corollary 3.2.6 can be used to generate a basis of any non-trivial finite dimensional vector space.

EXAMPLE 3.3.12  
  1. Consider the complex vector space $ {\mathbb{C}}^2 ({\mathbb{C}}).$ Then,

    $\displaystyle (a+ i b, c + i d) = (a+ i b) (1 , 0) + (c + i d) ( 0, 1).$

    So, $ \{(1, 0), (0,1) \}$ is a basis of $ {\mathbb{C}}^2 ({\mathbb{C}})$ and thus $ \dim \bigl( {\mathbb{C}}^2 ({\mathbb{C}}) \bigr) = 2.$
  2. Consider the real vector space $ {\mathbb{C}}^2 ({\mathbb{R}}).$ In this case, any vector

    $\displaystyle (a + i b, c + i d) = a (1, 0) + b (i, 0) + c (0,1) + d (0, i).$

    Hence, the set $ \{(1,0), (i, 0), (0,1), (0, i)\}$ is a basis and $ \dim \bigl( {\mathbb{C}}^2 ({\mathbb{R}}) \bigr) = 4.$
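The contrast between the two parts of Example 3.3.12 can be verified numerically. The following sketch (assuming NumPy) computes the rank of the four vectors first over $ {\mathbb{C}},$ and then over $ {\mathbb{R}}$ after identifying $ {\mathbb{C}}^2$ with $ {\mathbb{R}}^4$ by splitting real and imaginary parts:

```python
import numpy as np

# The vectors (1,0), (i,0), (0,1), (0,i) of C^2, stored as rows.
V = np.array([[1, 0], [1j, 0], [0, 1], [0, 1j]])

# Over C, at most two of them can be linearly independent.
rank_over_C = np.linalg.matrix_rank(V)

# Over R, identify C^2 with R^4 by splitting real and imaginary parts;
# all four vectors are then R-independent.
rank_over_R = np.linalg.matrix_rank(np.hstack([V.real, V.imag]))

print(rank_over_C, rank_over_R)  # 2 4
```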

Remark 3.3.13   It is important to note that the dimension of a vector space may change if the underlying field (the set of scalars) is changed.

EXAMPLE 3.3.14   Let $ V$ be the set of all functions $ f: {\mathbb{R}}^n {\longrightarrow}{\mathbb{R}}$ with the property that $ f({\mathbf x}+ {\mathbf y}) = f({\mathbf x}) + f({\mathbf y})$ and $ f( \alpha {\mathbf x}) = \alpha f({\mathbf x}).$ For $ f, g \in V,$ and $ t \in {\mathbb{R}},$ define
$\displaystyle (f \oplus g)({\mathbf x})$ $\displaystyle =$ $\displaystyle f({\mathbf x}) + g({\mathbf x}) \;\; {\mbox{ and }}$  
$\displaystyle (t \odot f) ({\mathbf x})$ $\displaystyle =$ $\displaystyle f( t {\mathbf x}).$  

Then $ V$ is a real vector space.

For $ 1 \leq i \leq n,$ consider the functions

$\displaystyle {\mathbf e_i}({\mathbf x}) =
{\mathbf e_i} \bigl( (x_1, x_2, \ldots, x_n) \bigr) = x_i.$

Then it can be easily verified that the set $ \{ {\mathbf e_1}, {\mathbf e_2}, \ldots, {\mathbf e_n} \}$ is a basis of $ V$ and hence $ \dim (V) = n.$

The next theorem follows directly from Corollary 3.2.6 and Theorem 3.3.7. Hence, the proof is omitted.

THEOREM 3.3.15   Let $ S$ be a linearly independent subset of a finite dimensional vector space $ V.$ Then the set $ S$ can be extended to form a basis of $ V.$

Theorem 3.3.15 is equivalent to the following statement:
Let $ V$ be a vector space of dimension $ n.$ Suppose we have found a linearly independent set $ S = \{{\mathbf v}_1, {\mathbf v}_2, \ldots, {\mathbf v}_r\} \subset V.$ Then there exist vectors $ {\mathbf v}_{r+1}, \ldots, {\mathbf v}_n$ in $ V$ such that $ \{{\mathbf v}_1, {\mathbf v}_2, \ldots, {\mathbf v}_n \}$ is a basis of $ V.$

COROLLARY 3.3.16   Let $ V$ be a vector space of dimension $ n.$ Then any set of $ n$ linearly independent vectors forms a basis of $ V.$ Also, every set of $ m$ vectors, $ m > n,$ is linearly dependent.

COROLLARY 3.3.17   Let $ V$ be a vector space of dimension $ n.$ Then any set of vectors $ S = \{{\mathbf u}_1, {\mathbf u}_2, \ldots,
{\mathbf u}_n \}$ such that $ L(S) = V$ forms a basis of $ V$ .

EXAMPLE 3.3.18   Let $ V = \{(v,w,x,y,z) \in {\mathbb{R}}^5\; : v + x - 3y + z = 0\}$ and $ W = \{(v,w,x,y,z) \in {\mathbb{R}}^5\; : w -x - z = 0, v = y\}$ be two subspaces of $ {\mathbb{R}}^5.$ Find bases of $ V$ and $ W$ containing a basis of $ V \cap W.$
Solution: Let us find a basis of $ V \cap W.$ The solution set of the linear equations

$\displaystyle v + x - 3y + z = 0,
\;\; w -x - z = 0 \;\; {\mbox{ and }} \;\; v = y$

is given by

$\displaystyle (v,w,x,y,z)^t = (y, 2y, x, y, 2y-x)^t = y (1,2,0,1,2)^t +
x (0,0,1,0,-1)^t. $

Thus, a basis of $ V \cap W$ is

$\displaystyle \{ (1,2,0,1,2), (0,0,1,0,-1)\}.$

To find a basis of $ W$ containing a basis of $ V \cap W,$ we can proceed as follows:
  1. Find a basis of $ W.$
  2. Take the basis of $ V \cap W$ found above as the first two vectors and that of $ W$ as the next set of vectors.

    Now use Remark 3.3.8 to get the required basis.

Heuristically, we can also find the basis in the following way:
A vector of $ W$ has the form $ (y,x+z,x,y,z)$ for $ x,y,z \in {\mathbb{R}}.$ Substituting $ y=1,x=1,$ and $ z = 0$ in $ (y,x+z,x,y,z)$ gives us the vector $ (1,1,1,1,0) \in W.$ It can be easily verified that a basis of $ W$ is

$\displaystyle \{ (1,2,0,1,2), (0,0,1,0,-1), (1,1,1,1,0) \}.$

Similarly, a vector of $ V$ has the form $ (v,w,x,y,3y-v-x)$ for $ v,w, x,y\in {\mathbb{R}}.$ Substituting $ v=0,w=1,x=0$ and $ y=0,$ gives a vector $ (0,1,0,0,0) \in V.$ Also, substituting $ v=0,w=1,x=1$ and $ y=1,$ gives another vector $ (0,1,1,1,2) \in V.$ So, a basis of $ V$ can be taken as

$\displaystyle \{ (1,2,0,1,2), (0,0,1,0,-1), (0,1,0,0,0), (0,1,1,1,2)\}.$

Recall that for two vector subspaces $ M$ and $ N$ of a vector space $ V({\mathbb{F}}),$ the vector subspace $ M + N$ (see Exercise 3.1.18.7) is defined by

$\displaystyle M+N = \{ {\mathbf u}+ {\mathbf v}: {\mathbf u}\in M, \; {\mathbf v}\in N \}.$

With this definition, we have the following very important theorem (for a proof, see Appendix 14.4.1).

THEOREM 3.3.19   Let $ V ({\mathbb{F}})$ be a finite dimensional vector space and let $ M$ and $ N$ be two subspaces of $ V.$ Then

$\displaystyle \dim (M) + \dim(N) = \dim (M + N) + \dim (M \cap N).$ (3.3.2)
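Equation (3.3.2) can be checked numerically on the subspaces of Example 3.3.18, using the bases computed there. The sketch below (assuming NumPy) obtains each dimension as the rank of the matrix whose rows are the corresponding basis vectors, and $ \dim(M+N)$ as the rank of the two bases stacked together:

```python
import numpy as np

# Bases found in Example 3.3.18 (subspaces V = M and W = N of R^5).
basis_M = np.array([(1, 2, 0, 1, 2), (0, 0, 1, 0, -1),
                    (0, 1, 0, 0, 0), (0, 1, 1, 1, 2)])   # basis of V
basis_N = np.array([(1, 2, 0, 1, 2), (0, 0, 1, 0, -1),
                    (1, 1, 1, 1, 0)])                     # basis of W
basis_cap = basis_M[:2]                                   # basis of V ∩ W

dim_M = np.linalg.matrix_rank(basis_M)
dim_N = np.linalg.matrix_rank(basis_N)
dim_cap = np.linalg.matrix_rank(basis_cap)
# M + N is spanned by the union of the two bases.
dim_sum = np.linalg.matrix_rank(np.vstack([basis_M, basis_N]))

print(dim_M, dim_N, dim_cap, dim_sum)       # 4 3 2 5
print(dim_M + dim_N == dim_sum + dim_cap)   # True
```

In particular, $ 4 + 3 = 5 + 2,$ in agreement with Theorem 3.3.19.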

EXERCISE 3.3.20  
  1. Find a basis of the vector space $ {\cal P}_n({\mathbb{R}}).$ Also, find $ \dim ( {\cal P}_n ({\mathbb{R}}) ).$ What can you say about the dimension of $ {\cal P}({\mathbb{R}})?$
  2. Consider the real vector space, $ C([0, 2 \pi]),$ of all real valued continuous functions. For each $ n$ consider the vector $ {\mathbf e}_n$ defined by $ {\mathbf e}_n(x) = \sin (n x).$ Prove that the collection of vectors $ \{ {\mathbf e}_n : 1 \leq n < \infty \}$ is a linearly independent set.
    [Hint: On the contrary, assume that the set is linearly dependent. Then we have a finite set of vectors, say $ \{{\mathbf e}_{k_1}, {\mathbf e}_{k_2}, \ldots,
{\mathbf e}_{k_\ell}\}$ that are linearly dependent. That is, there exist scalars $ {\alpha}_i \in {\mathbb{R}}$ for $ 1 \leq i \leq \ell$ not all zero such that

    $\displaystyle {\alpha}_1 \sin(k_1 x) + {\alpha}_2 \sin (k_2 x) + \cdots + {\alpha}_\ell \sin(k_\ell x) = 0 \;\;$    for all $\displaystyle \;\; x \in [0, 2 \pi].$

    Now, for different values of $ m,$ evaluate the integral

    $\displaystyle \int_0^{2 \pi} \sin(m x) \left( {\alpha}_1 \sin(k_1 x) + {\alpha}_2 \sin (k_2 x) + \cdots + {\alpha}_\ell \sin(k_\ell x) \right) \, dx$

    to get the required result.]
  3. Show that the set $ \{(1,0,0), (1,1,0), (1,1,1) \}$ is a basis of $ {\mathbb{C}}^3 ({\mathbb{C}}).$ Is it a basis of $ {\mathbb{C}}^3 ({\mathbb{R}})$ also?
  4. Let $ W= \{(x,y,z,w) \in {\mathbb{R}}^4 : x + y - z + w = 0 \}$ be a subspace of $ {\mathbb{R}}^4.$ Find its basis and dimension.
  5. Let $ V = \{(x,y,z,w) \in {\mathbb{R}}^4 : x + y - z + w = 0,
x + y + z + w = 0 \}$ and $ W = \{(x,y,z,w)\in {\mathbb{R}}^4 : x - y - z + w = 0, x + 2 y -w = 0 \}$ be two subspaces of $ {\mathbb{R}}^4.$ Find bases and dimensions of $ V,$ $ W,$ $ V \cap W$ and $ V + W.$
  6. Let $ V$ be the set of all real symmetric $ n \times n$ matrices. Find its basis and dimension. What if $ V$ is the complex vector space of all $ n \times n$ Hermitian matrices?
  7. If $ M$ and $ N$ are $ 4$ -dimensional subspaces of a vector space $ V$ of dimension $ 7$ then show that $ M$ and $ N$ have at least one vector in common other than the zero vector.
  8. Let $ P=L\{(1,0,0), (1,1,0) \}$ and $ Q=L\{(1,1,1)\}$ be vector subspaces of $ {\mathbb{R}}^3.$ Show that $ P+Q = {\mathbb{R}}^3$ and $ P \cap Q =
\{{\mathbf 0}\}.$ If $ {\mathbf u}\in {\mathbb{R}}^3,$ determine $ {\mathbf u}_P, {\mathbf u}_Q$ such that $ {\mathbf u}=
{\mathbf u}_P + {\mathbf u}_Q$ where $ {\mathbf u}_P \in P$ and $ {\mathbf u}_Q \in Q.$ Is it necessary that $ {\mathbf u}_P$ and $ {\mathbf u}_Q$ are unique?
  9. Let $ W_1$ be a $ k$ -dimensional subspace of an $ n$ -dimensional vector space $ V ({\mathbb{F}})$ where $ k \geq 1.$ Prove that there exists an $ (n-k)$ -dimensional subspace $ W_2$ of $ V$ such that $ W_1 \cap W_2 =
\{ {\mathbf 0}\}$ and $ W_1 + W_2 = V.$
  10. Let $ P$ and $ Q$ be subspaces of $ {\mathbb{R}}^n$ such that $ P + Q = {\mathbb{R}}^n$ and $ P \cap Q =
\{{\mathbf 0}\}.$ Then show that each $ {\mathbf u}\in {\mathbb{R}}^n$ can be uniquely expressed as $ {\mathbf u}=
{\mathbf u}_P + {\mathbf u}_Q$ where $ {\mathbf u}_P \in P$ and $ {\mathbf u}_Q \in Q.$
  11. Let $ P=L\{(1,-1,0), (1,1,0) \}$ and $ Q=L\{(1,1,1), (1,2,1)\}$ be vector subspaces of $ {\mathbb{R}}^3.$ Show that $ P+Q = {\mathbb{R}}^3$ and $ P \cap Q
\neq \{{\mathbf 0}\}.$ Show that there exists a vector $ {\mathbf u}\in {\mathbb{R}}^3$ such that $ {\mathbf u}$ cannot be written uniquely in the form $ {\mathbf u}=
{\mathbf u}_P + {\mathbf u}_Q$ where $ {\mathbf u}_P \in P$ and $ {\mathbf u}_Q \in Q.$
  12. Let $ W_1$ and $ W_2$ be two subspaces of a vector space $ V$ such that $ W_1 \subset W_2$ . Show that $ W_1 = W_2$ if and only if $ \dim(W_1) = \dim(W_2)$ .
  13. Let $ W_1$ and $ W_2$ be two subspaces of a vector space $ V$ . If $ \dim(W_1) + \dim(W_2) > \dim(V)$ , then prove that $ W_1 \cap W_2$ contains a non-zero vector.
  14. Recall the vector space $ {\cal P}_4({\mathbb{R}}).$ Is the set,

    $\displaystyle W = \{p(x) \in {\cal P}_4({\mathbb{R}}) \; : \; p(-1) = p(1) = 0 \}$

    a subspace of $ {\cal P}_4({\mathbb{R}})?$ If yes, find its dimension.
  15. Let $ V$ be the set of all $ 2 \times 2$ matrices with complex entries and $ a_{11} + a_{22} = 0.$ Show that $ V$ is a real vector space. Find its basis. Also let $ W = \{A \in V : a_{21} = {\overline{- a_{12}}}
\}.$ Show $ W$ is a vector subspace of $ V,$ and find its dimension.
  16. Let $ A = \begin{bmatrix}1 & 2 & 1 & 3 & 2\\ 0 & 2
& 2 & 2 & 4\\ 2 & -2 & 4 & 0 & 8\\ 4 & 2 & 5 & 6 & 10
\end{bmatrix},$ and $ B =\begin{bmatrix}2 & 4 & 0 &
6\\ -1 & 0 & -2 &5 \\ -3 & -5 & 1 & -4 \\ -1 & -1 & 1 & 2
\end{bmatrix}$ be two matrices. For $ A$ and $ B$ find the following:
    1. their row-reduced echelon forms.
    2. the matrices $ P_1$ and $ P_2$ such that $ P_1 A$ and $ P_2 B$ are in row-reduced form.
    3. a basis each for the row spaces of $ A$ and $ B.$
    4. a basis each for the range spaces of $ A$ and $ B.$
    5. bases of the null spaces of $ A$ and $ B.$
    6. the dimensions of all the vector subspaces so obtained.
  17. Let $ M(n,{\mathbb{R}})$ denote the space of all $ n \times n$ real matrices. For the sets given below, check that they are subspaces of $ M(n,{\mathbb{R}})$ and also find their dimension.
    1. $ {\mathit sl}(n,{\mathbb{R}}) = \{ A \in M(n,{\mathbb{R}}) \; : \; {\mbox{tr}}(A) = 0\},$ where recall that $ {\mbox{tr }}(A)$ stands for trace of $ A.$
    2. $ S(n,{\mathbb{R}}) = \{ A \in M(n,{\mathbb{R}}) \; : \; A = A^t\}.$
    3. $ A(n,{\mathbb{R}}) = \{ A \in M(n,{\mathbb{R}}) \; : \; A+A^t = {\mathbf 0}\}.$





Before going to the next section, we prove that for any matrix $ A$ of order $ m \times n$

$\displaystyle {\mbox{Row rank}}(A) = {\mbox{Column rank}}(A).$

PROPOSITION 3.3.21   Let $ A$ be an $ m \times n$ real matrix. Then

$\displaystyle {\mbox{Row rank}}(A) = {\mbox{Column rank}}(A).$

Proof. Let $ R_1, R_2, \ldots, R_m $ be the rows of $ A$ and $ C_1, C_2, \ldots, C_n$ be the columns of $ A.$ Note that $ {\mbox{Row rank}}(A)= r$ means that

$\displaystyle \dim \bigl( L(R_1, R_2, \ldots, R_m) \bigr) = r.$

Hence, there exist vectors

$\displaystyle {\mathbf u}_1= (u_{11}, \ldots, u_{1n}),
{\mathbf u}_2=(u_{21}, \ldots, u_{2n}), \ldots,
{\mathbf u}_r=(u_{r1}, \ldots, u_{rn}) \in {\mathbb{R}}^n$

with

$\displaystyle R_i \in L ({\mathbf u}_1, {\mathbf u}_2, \ldots, {\mathbf u}_r) \subseteq {\mathbb{R}}^n, \; {\mbox{ for all }} \; i, \; 1 \leq i \leq m.$

Therefore, there exist real numbers $ {\alpha}_{ij}, \; 1 \leq i
\leq m, \; 1 \leq j \leq r$ such that

$\displaystyle R_1 = {\alpha}_{11} {\mathbf u}_1 + {\alpha}_{12}{\mathbf u}_2 + \cdots + {\alpha}_{1r} {\mathbf u}_r = \Bigl( \sum_{i=1}^r {\alpha}_{1i}u_{i1}, \sum_{i=1}^r {\alpha}_{1i}u_{i2}, \ldots, \sum_{i=1}^r {\alpha}_{1i}u_{in} \Bigr),$

$\displaystyle R_2 = {\alpha}_{21} {\mathbf u}_1 + {\alpha}_{22}{\mathbf u}_2 + \cdots + {\alpha}_{2r} {\mathbf u}_r = \Bigl( \sum_{i=1}^r {\alpha}_{2i}u_{i1}, \sum_{i=1}^r {\alpha}_{2i}u_{i2}, \ldots, \sum_{i=1}^r {\alpha}_{2i}u_{in} \Bigr),$

and so on, till

$\displaystyle R_m = {\alpha}_{m1} {\mathbf u}_1 + \cdots + {\alpha}_{mr} {\mathbf u}_r = \Bigl( \sum_{i=1}^r {\alpha}_{mi}u_{i1}, \sum_{i=1}^r {\alpha}_{mi}u_{i2}, \ldots, \sum_{i=1}^r {\alpha}_{mi}u_{in} \Bigr).$

So,

$\displaystyle C_1 = \begin{bmatrix}\sum\limits_{i=1}^r {\alpha}_{1i}u_{i1} \\ \sum\limits_{i=1}^r {\alpha}_{2i}u_{i1} \\ \vdots \\ \sum\limits_{i=1}^r {\alpha}_{mi}u_{i1} \end{bmatrix} = u_{11} \begin{bmatrix} {\alpha}_{11} \\ {\alpha}_{21} \\ \vdots \\ {\alpha}_{m1} \end{bmatrix} + u_{21} \begin{bmatrix} {\alpha}_{12} \\ {\alpha}_{22} \\ \vdots \\ {\alpha}_{m2} \end{bmatrix} + \cdots + u_{r1} \begin{bmatrix} {\alpha}_{1r} \\ {\alpha}_{2r} \\ \vdots \\ {\alpha}_{mr} \end{bmatrix}.$

In general, for $ 1 \leq j \leq n,$ we have

$\displaystyle C_j = \begin{bmatrix}\sum\limits_{i=1}^r {\alpha}_{1i}u_{ij} \\ \sum\limits_{i=1}^r {\alpha}_{2i}u_{ij} \\ \vdots \\ \sum\limits_{i=1}^r {\alpha}_{mi}u_{ij} \end{bmatrix} = u_{1j} \begin{bmatrix} {\alpha}_{11} \\ {\alpha}_{21} \\ \vdots \\ {\alpha}_{m1} \end{bmatrix} + u_{2j} \begin{bmatrix} {\alpha}_{12} \\ {\alpha}_{22} \\ \vdots \\ {\alpha}_{m2} \end{bmatrix} + \cdots + u_{rj} \begin{bmatrix} {\alpha}_{1r} \\ {\alpha}_{2r} \\ \vdots \\ {\alpha}_{mr} \end{bmatrix}.$

Therefore, we observe that the columns $ C_1, C_2, \ldots, C_n$ are linear combinations of the $ r$ vectors

$\displaystyle ({\alpha}_{11}, {\alpha}_{21}, \ldots, {\alpha}_{m1})^t, \; ({\alpha}_{12}, {\alpha}_{22}, \ldots, {\alpha}_{m2})^t, \; \ldots, \; ({\alpha}_{1r}, {\alpha}_{2r}, \ldots, {\alpha}_{mr})^t.$

Therefore,

$\displaystyle {\mbox{Column rank}}(A) = \dim\bigl( L(C_1, C_2, \ldots, C_n) \bigr) \leq r = {\mbox{Row rank}}(A).$

A similar argument gives

$\displaystyle {\mbox{Row rank}}(A) \leq {\mbox{Column rank}}(A).$

Thus, we have the required result. $ \blacksquare$
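As a numerical illustration (an aside, assuming NumPy), the rank of the matrix $ A$ from Exercise 3.3.20.16 is unchanged by transposition, reflecting the equality of row rank and column rank:

```python
import numpy as np

# The matrix A of Exercise 3.3.20.16.
A = np.array([[1, 2, 1, 3, 2],
              [0, 2, 2, 2, 4],
              [2, -2, 4, 0, 8],
              [4, 2, 5, 6, 10]])

row_rank = np.linalg.matrix_rank(A)    # dimension of the row space
col_rank = np.linalg.matrix_rank(A.T)  # dimension of the column space
print(row_rank, col_rank)  # 3 3
```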

A K Lal 2007-09-12