
Row Space, Column Space, and Null Space

Definition. Let A be an $m \times n$ matrix.

(a) The row vectors of A are the vectors in $F^n$ corresponding to the rows of A. The row space of A is the subspace of $F^n$ spanned by the row vectors of A.

(b) The column vectors of A are the vectors in $F^m$ corresponding to the columns of A. The column space of A is the subspace of $F^m$ spanned by the column vectors of A.


Example. Consider the real matrix

$$A = \left[\matrix{1 & 0 \cr 0 & 1 \cr 0 & 0 \cr}\right].$$

The row vectors are $(1,0)$, $(0,1)$, and $(0,0)$. The row space is the subspace of $\real^2$ spanned by these vectors. Since the first two vectors are the standard basis vectors for $\real^2$, the row space is $\real^2$.

The column vectors are $(1,0,0)$ and $(0,1,0)$. The column space is the subspace of $\real^3$ spanned by these vectors. Thus, the column space consists of all vectors of the form

$$a\cdot (1,0,0) + b\cdot (0,1,0) = (a,b,0).\quad\halmos$$
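For small examples like this one, it's easy to check the computation with software. Here is a minimal sketch using Python's sympy library (an assumption on my part; any computer algebra system would do):

    from sympy import Matrix

    A = Matrix([[1, 0],
                [0, 1],
                [0, 0]])

    # Nonzero rows of the row reduced echelon form span the row space.
    print(A.rowspace())     # [Matrix([[1, 0]]), Matrix([[0, 1]])]

    # Pivot columns of A span the column space.
    print(A.columnspace())  # [Matrix([[1], [0], [0]]), Matrix([[0], [1], [0]])]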


Lemma. If E is an elementary row operation and A is a matrix, then $E(A)$ has the same row space as A.

Proof. If E is an operation of the form $r_i \leftrightarrow r_j$, then $E(A)$ and A have the same rows (except for order), so it's clear that their row vectors have the same span.

If E is an operation of the form $r_i \rightarrow a r_i$ (where $a \ne 0$), then A and $E(A)$ agree except in the i-th row. Since

$$a_1 r_1 + \cdots + a_i r_i + \cdots + a_m r_m = a_1 r_1 + \cdots + \left(a_i \dfrac{1}{a}\right) (a r_i) + \cdots + a_m r_m,$$

any element of the row space of A is in the row space of $E(A)$, and vice versa.

Finally, suppose E is a row operation of the form $r_i \rightarrow r_i + ar_j$. Then

$$a_1 r_1 + \cdots + a_i r_i + \cdots + a_m r_m = a_1 r_1 + \cdots + a_i(r_i + ar_j) + \cdots + (a_j - a_ia)r_j + \cdots + a_m r_m,$$

which shows that the row space of A is contained in the row space of $E(A)$ .

Conversely,

$$a_1 r_1 + \cdots + a_i (r_i + ar_j) + \cdots + a_j r_j + \cdots + a_m r_m = a_1 r_1 + \cdots + a_i r_i + \cdots + (a_i a + a_j)r_j + \cdots + a_m r_m,$$

so the row space of $E(A)$ is contained in the row space of A.

Definition. Two matrices are row equivalent if one can be obtained from the other via elementary row operations.

Since row operations preserve row space, row equivalent matrices have the same row space. In particular, a matrix and its row reduced echelon form have the same row space.

Lemma. Let $R = \{ r_{ij}\}$ be a row reduced echelon matrix with nonzero rows $r_1$ , ..., $r_p$ . Suppose the leading coefficients of R occur at

$$(1,j_1), (2, j_2), \ldots, \quad \hbox{where} \quad j_1 < j_2 < \cdots.$$

If $v = (v_1, \ldots, v_n)$ and

$$v = a_1 r_1 + \cdots + a_p r_p,$$

then $v_{j_k} = a_k$ .

In other words, this lemma describes the components of a vector in the row space of R.

Proof.

$$v_{j_k} = \sum_{i=1}^p a_i r_{ij_k}.$$

But the only nonzero element in column $j_k$ is $r_{kj_k} = 1$. Therefore, the only nonzero term in the sum is $a_k r_{kj_k} = a_k$.


Example. Here is a row reduced echelon matrix over $\real$ :

$$\left[\matrix{ 0 & 1 & 2 & 0 & -1 & 0 \cr 0 & 0 & 0 & 1 & 2 & 0 \cr 0 & 0 & 0 & 0 & 0 & 1 \cr 0 & 0 & 0 & 0 & 0 & 0 \cr}\right].$$

Note that $j_1 = 2$, $j_2 = 4$, and $j_3 = 6$. Consider the following element of the row space:

$$5\cdot (0, 1, 2, 0, -1, 0) + (-3)\cdot (0, 0, 0, 1, 2, 0) + 1\cdot (0, 0, 0, 0, 0, 1) = (0, 5, 10, -3, -11, 1).$$

So

$$v_2 = a_1 = 5, \quad v_4 = a_2 = -3, \quad v_6 = a_3 = 1.\quad\halmos$$
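As a quick numerical confirmation of the lemma (again a sketch, assuming sympy), the coefficients can be read off from components 2, 4, and 6 of v:

    from sympy import Matrix

    r1 = Matrix([[0, 1, 2, 0, -1, 0]])
    r2 = Matrix([[0, 0, 0, 1,  2, 0]])
    r3 = Matrix([[0, 0, 0, 0,  0, 1]])

    v = 5*r1 + (-3)*r2 + 1*r3
    print(v)                 # Matrix([[0, 5, 10, -3, -11, 1]])

    # Components in the leading-coefficient columns (1-based: 2, 4, 6)
    # recover the coefficients a_1, a_2, a_3:
    print(v[1], v[3], v[5])  # 5 -3 1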


Corollary. The nonzero rows of a row reduced echelon matrix are independent.

Proof. Suppose R is a row reduced echelon matrix with nonzero rows $r_1$ , ..., $r_p$ . Suppose the leading coefficients of R occur at $(1,j_1), (2, j_2), \ldots$ , where $j_1 < j_2 < \cdots$ . If

$$0 = a_1 r_1 + \cdots + a_p r_p,$$

then the lemma implies that $a_k = v_{j_k} = 0$ for all k. Therefore, the $\{ r_i\}$ are independent.

Corollary. The nonzero rows of a row reduced echelon matrix form a basis for the row space of the matrix.

Proof. The nonzero rows span the row space, and are independent, by the preceding corollary.

Algorithm. Let V be a finite-dimensional vector space, and let $v_1, \ldots, v_m$ be vectors in V. The object is to find a basis for $W = \langle v_1, \ldots, v_m \rangle$, the subspace spanned by the $v_i$.

Let M be the matrix whose i-th row is $v_i$. The row space of M is W. Let R be a row reduced echelon matrix which is row equivalent to M. Then R and M have the same row space W, and the nonzero rows of R form a basis for W.


Example. Consider the vectors $v_1 = (1, 0, 1, 1)$, $v_2 = (-2, 1, 1, 0)$, and $v_3 = (7, -2, 1, 3)$ in $\real^4$. I'll find a basis for the subspace $\langle v_1, v_2, v_3 \rangle$ spanned by the vectors. Construct a matrix with the $v_i$ as its rows and row reduce:

$$\left[\matrix{1 & 0 & 1 & 1 \cr -2 & 1 & 1 & 0 \cr 7 & -2 & 1 & 3 \cr}\right] \to \left[\matrix{1 & 0 & 1 & 1 \cr 0 & 1 & 3 & 2 \cr 0 & 0 & 0 & 0 \cr}\right]$$

The vectors $(1, 0, 1, 1)$ and $(0, 1, 3, 2)$ form a basis for $\langle v_1, v_2, v_3 \rangle$.
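The whole algorithm is one call to a row reducer. A sketch, assuming sympy:

    from sympy import Matrix

    M = Matrix([[ 1,  0, 1, 1],
                [-2,  1, 1, 0],
                [ 7, -2, 1, 3]])

    R, pivots = M.rref()    # row reduced echelon form and pivot columns
    basis = [R.row(i) for i in range(len(pivots))]   # the nonzero rows of R
    print(basis)            # [Matrix([[1, 0, 1, 1]]), Matrix([[0, 1, 3, 2]])]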


Example. Determine the dimension of the subspace of $\real^3$ spanned by $(1, 2, -1)$ , $(1, 1, 1)$ , and $(2, -2, 1)$ .

$$\left[\matrix{1 & 2 & -1 \cr 1 & 1 & 1 \cr 2 & -2 & 1 \cr}\right] \rightarrow \left[\matrix{1 & 0 & 0 \cr 0 & 1 & 0 \cr 0 & 0 & 1 \cr}\right]$$

The subspace has dimension 3, since the row reduced echelon matrix has three nonzero rows.


Definition. The rank of a matrix is the dimension of its row space.


Example. Consider the following matrix over $\integer_5$ :

$$\left[\matrix{ 1 & 4 & 2 & 1 \cr 3 & 3 & 1 & 2 \cr 0 & 1 & 0 & 4 \cr}\right].$$

Row reduce it:

$$\left[\matrix{ 1 & 4 & 2 & 1 \cr 3 & 3 & 1 & 2 \cr 0 & 1 & 0 & 4 \cr}\right] \to \left[\matrix{ 1 & 0 & 2 & 0 \cr 0 & 1 & 0 & 4 \cr 0 & 0 & 0 & 0 \cr}\right]$$

The row reduced echelon matrix has 2 nonzero rows. Therefore, the original matrix has rank 2.
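Most computer algebra systems row reduce over the rationals by default; to work over $\integer_p$ you can roll your own reduction. Here is a sketch in Python (the helper name rref_mod_p is mine, and I'm assuming p is prime so that inverses exist):

    def rref_mod_p(rows, p):
        """Gauss-Jordan elimination over Z_p (p prime).
        Returns (row reduced echelon matrix, list of pivot columns)."""
        A = [[x % p for x in row] for row in rows]
        m, n = len(A), len(A[0])
        pivots, r = [], 0
        for c in range(n):
            # Find a row at or below r with a nonzero entry in column c.
            pr = next((i for i in range(r, m) if A[i][c] != 0), None)
            if pr is None:
                continue
            A[r], A[pr] = A[pr], A[r]        # swap it into position
            inv = pow(A[r][c], p - 2, p)     # inverse via Fermat's little theorem
            A[r] = [x * inv % p for x in A[r]]
            for i in range(m):               # clear the rest of column c
                if i != r and A[i][c]:
                    f = A[i][c]
                    A[i] = [(A[i][j] - f * A[r][j]) % p for j in range(n)]
            pivots.append(c)
            r += 1
        return A, pivots

    R, pivots = rref_mod_p([[1, 4, 2, 1],
                            [3, 3, 1, 2],
                            [0, 1, 0, 4]], 5)
    print(R)            # [[1, 0, 2, 0], [0, 1, 0, 4], [0, 0, 0, 0]]
    print(len(pivots))  # 2, the rank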


I'll need the following fact about matrix multiplication for the proof of the next lemma. Consider the following multiplication:

$$\left[\matrix{a_1 & a_2 & \cdots & a_n \cr}\right] \left[\matrix{\leftarrow & r_1 & \rightarrow \cr \leftarrow & r_2 & \rightarrow \cr & \vdots & \cr \leftarrow & r_n & \rightarrow \cr}\right].$$

In doing the multiplication, each $a_i$ multiplies the corresponding row $r_i$.


Therefore,

$$\left[\matrix{a_1 & a_2 & \cdots & a_n \cr}\right] \left[\matrix{\leftarrow & r_1 & \rightarrow \cr \leftarrow & r_2 & \rightarrow \cr & \vdots & \cr \leftarrow & r_n & \rightarrow \cr}\right] = \left[\matrix{a_1r_1 + a_2r_2 + \cdots + a_nr_n \cr}\right].$$

If instead of a single row vector on the left I have an entire matrix, here's what I get:

$$\left[\matrix{a_{11} & a_{12} & \cdots & a_{1n} \cr a_{21} & a_{22} & \cdots & a_{2n} \cr \vdots & \vdots & & \vdots \cr a_{m1} & a_{m2} & \cdots & a_{mn} \cr}\right] \left[\matrix{\leftarrow & r_1 & \rightarrow \cr \leftarrow & r_2 & \rightarrow \cr & \vdots & \cr \leftarrow & r_n & \rightarrow \cr}\right] = \left[\matrix{ \leftarrow & a_{11}r_1 + a_{12}r_2 + \cdots + a_{1n}r_n & \rightarrow\cr \leftarrow & a_{21}r_1 + a_{22}r_2 + \cdots + a_{2n}r_n & \rightarrow\cr & \vdots & \cr \leftarrow & a_{m1}r_1 + a_{m2}r_2 + \cdots + a_{mn}r_n & \rightarrow\cr}\right].$$

Here's the point: The rows of the product are linear combinations of the rows $r_1$, $r_2$, ..., $r_n$.

Lemma. Let M and N be matrices over a field F which are compatible for multiplication. Then

$$\rank (MN) \le \rank N.$$

Proof. The preceding discussion shows that the rows of $MN$ are linear combinations of the rows of N. Therefore, the rows of $MN$ are all contained in the row space of N.

The row space of N is a subspace, so it's closed under taking linear combinations of vectors. Hence, any linear combination of the rows of $MN$ is in the row space of N. Therefore, the row space of $MN$ is contained in the row space of N.

From this, it follows that the dimension of the row space of $MN$ is less than or equal to the dimension of the row space of N --- that is, $\rank (MN) \le \rank N$.
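The inequality is easy to see in a small example (a sketch, assuming sympy). Here M has rank 1, so the product can't have rank bigger than 1, whatever N is:

    from sympy import Matrix

    M = Matrix([[1, 2],
                [2, 4]])                 # rank 1
    N = Matrix([[1, 0, 1],
                [0, 1, 1]])              # rank 2
    print((M * N).rank(), N.rank())      # 1 2, so rank(MN) <= rank(N)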

I already have one algorithm for testing whether a set of vectors in $F^n$ is independent. That algorithm involves constructing a matrix with the vectors as the columns, then row reducing. The algorithm will also produce a linear combination of the vectors which adds up to the zero vector if the set is dependent.

If all you care about is whether or not a set of vectors in $F^n$ is independent --- i.e. you don't care about a possible dependence relation --- the results on rank can be used to give an alternative algorithm. In this approach, you construct a matrix with the given vectors as the rows.

Algorithm. Let V be a finite-dimensional vector space, and let $v_1, \ldots, v_m$ be vectors in V. The object is to determine whether the set $\{v_1, \ldots, v_m\}$ is independent.

Let M be the matrix whose i-th row is $v_i$. Let R be a row reduced echelon matrix which is row equivalent to M. If R has m nonzero rows, then $\{v_1, \ldots, v_m\}$ is independent. Otherwise, the set is dependent.

If R has p nonzero rows, then R and M have rank p. (They have the same rank, because they have the same row space.) Suppose $p = m$. Since $\{v_i\}$ spans the row space, some subset of $\{v_i\}$ is a basis. However, a basis must contain $p = m$ elements. Therefore, $\{v_i\}$ must be independent.

Any independent subset of the row space must contain $\le p$ elements. Hence, if $m > p$, $\{v_i\}$ must be dependent.


Example. The vectors $v_1 = (1, 0, 1, 1)$ , $v_2 = (-2, 1, 1, 0)$ , and $v_3 = (7, -2, 1, 3)$ in $\real^4$ are dependent:

$$\left[\matrix{1 & 0 & 1 & 1 \cr -2 & 1 & 1 & 0 \cr 7 & -2 & 1 & 3 \cr}\right] \rightarrow \left[\matrix{1 & 0 & 1 & 1 \cr 0 & 1 & 3 & 2 \cr 0 & 0 & 0 & 0 \cr}\right]$$

The row reduced echelon matrix has only two nonzero rows.
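In code, the independence test is a one-line rank comparison (a sketch, assuming sympy):

    from sympy import Matrix

    vectors = [(1, 0, 1, 1), (-2, 1, 1, 0), (7, -2, 1, 3)]
    M = Matrix(vectors)               # one vector per row
    print(M.rank())                   # 2
    print(M.rank() == len(vectors))   # False: the set is dependent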


I already know that every matrix can be row reduced to a row reduced echelon matrix. The next result completes the discussion by showing that the row reduced echelon form is unique.

Proposition. Every matrix can be row reduced to a unique row reduced echelon matrix.

Proof. Suppose M row reduces to R, a row reduced echelon matrix with nonzero rows $r_1, \ldots, r_p$ . Suppose the leading coefficients of R occur at $(1,j_1), (2, j_2),     \ldots $ , where $j_1 < j_2 < \cdots $ .

Let W be the row space of R and let

$$v = a_1 r_1 + \cdots + a_p r_p$$

be an element of W. In component form, $v = (v_1, \ldots, v_n)$.

Claim: The first nonzero component of v must occur in column $j_k$, for some $k = 1, 2, \ldots$.

Suppose $a_k$ is the first $a_i$ which is nonzero. The sum looks like

$$a_k r_k + \cdots + a_p r_p.$$

The first nonzero element of $r_k$ is a 1 at $(k,j_k)$. The first nonzero element in $r_{k+1}, \ldots, r_p$ lies to the right of column $j_k$. Thus, $v_j = 0$ for $j < j_k$, and $v_{j_k} = a_k$. Evidently, this is the first nonzero component of v. This proves the claim.

This establishes that if a row reduced echelon matrix $R'$ is row equivalent to M, its leading coefficients must lie in the same columns as those of R. For the rows of $R'$ are elements of W, and the claim applies.

Next, I'll show that the nonzero rows of $R'$ are the same as the nonzero rows of R.

Consider, for instance, the first nonzero rows of R and $R'$. Their first nonzero components are 1's lying in column $j_1$. Moreover, both $r_1$ and $r_1'$ have zeros in columns $j_2$, $j_3$, ... .

Suppose $r_1 \ne r_1'$. Then $r_1 - r_1'$ is a nonzero vector in W whose first nonzero component is not in column $j_1$, $j_2$, ..., which is a contradiction.

The same argument applies to show that $r_k = r_k'$ for all k. Therefore, $R = R'$.

I showed earlier that you can add vectors to an independent set to make a basis. Here's how it works in a particular case.


Example. Add vectors to the following set to make a basis for $\real^5$ :

$$\left\{(2,4,3,0,7), (1,2,0,3,-1), (1,2,2,0,9), (-2,-4,-2,-2,-3)\right\}.$$

If I make a matrix with these vectors as rows and row reduce, the row reduced echelon form will have the same row space (i.e. the same span) as the original set of vectors:

$$\left[\matrix{ 2 & 4 & 3 & 0 & 7 \cr 1 & 2 & 0 & 3 & -1 \cr 1 & 2 & 2 & 0 & 9 \cr -2 & -4 & -2 & -2 & -3 \cr}\right] \to \left[\matrix{ 1 & 2 & 0 & 0 & 0 \cr 0 & 0 & 1 & 0 & 0 \cr 0 & 0 & 0 & 1 & 0 \cr 0 & 0 & 0 & 0 & 1 \cr}\right]$$

Since there are 4 nonzero rows, and since these rows are clearly independent vectors, the original set of vectors is independent.

By examining the row reduced echelon form, I see that the vector $(0, 1, 0, 0, 0)$ will not be a linear combination of the others. (I'm choosing a standard basis vector with a "1" in a position not occupied by a leading coefficient.) Therefore, I can add it to the set and get a new independent set:

$$\left\{(2, 4, 3, 0, 7), (1, 2, 0, 3, -1), (1, 2, 2, 0, 9), (-2, -4, -2, -2, -3), (0, 1, 0, 0, 0)\right\}.$$

There are five vectors in this set, so it is a basis for $\real^5$.
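Here is the same extension carried out in sympy (a sketch; col_join stacks a new row onto the matrix):

    from sympy import Matrix

    S = Matrix([[ 2,  4,  3,  0,  7],
                [ 1,  2,  0,  3, -1],
                [ 1,  2,  2,  0,  9],
                [-2, -4, -2, -2, -3]])
    R, pivots = S.rref()
    print(pivots)        # (0, 2, 3, 4): no leading coefficient in column 1

    # Append the standard basis vector with a 1 in the unoccupied position:
    extended = S.col_join(Matrix([[0, 1, 0, 0, 0]]))
    print(extended.rank())   # 5, so the five rows form a basis for R^5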


I also showed earlier that you can remove vectors from a spanning set to get a basis. You can do this using the same algorithm that gives a basis for the column space of a matrix.

First, here's a reminder about matrix multiplication. If A is an $m \times n$ matrix and $v \in F^n$, then the product $A v$ is a linear combination of the columns of A.

In fact, if $c_i$ is the i-th column of A and $v = (v_1, \ldots, v_n)$,

$$\left[\matrix{\uparrow & \uparrow & & \uparrow \cr c_1 & c_2 & \cdots & c_n \cr \downarrow & \downarrow & & \downarrow \cr}\right] \left[\matrix{v_1 \cr v_2 \cr \vdots \cr v_n \cr}\right] = v_1 c_1 + v_2 c_2 + \cdots + v_n c_n.$$

Lemma. Let A be a matrix, and let R be the row reduced echelon matrix which is row equivalent to A. Suppose the leading coefficients of R occur in columns $j_1, \ldots, j_p$, where $j_1 < \cdots < j_p$, and let $c_i$ denote the i-th column of A. Then $\{c_{j_1}, \ldots, c_{j_p}\}$ is independent.

Proof. Suppose that

$$a_{j_1} c_{j_1} + \cdots + a_{j_p} c_{j_p} = 0, \quad\hbox{for}\quad a_i \in F.$$

Form the vector $v = (v_i)$ , where

$$v_i = \cases{0 & if $i \notin \{ j_1, \ldots, j_p\}$ \cr a_i & if $i \in \{ j_1, \ldots, j_p\}$ \cr}$$

The equation above implies that $A v = 0$ .

It follows that v is in the solution space of the system $Ax = 0$. Since $Rx = 0$ has the same solution space, $Rv = 0$. Let $c'_i$ denote the i-th column of R. Then

$$0 = Rv = a_{j_1} c'_{j_1} + \cdots + a_{j_p} c'_{j_p}.$$

However, since R is in row reduced echelon form, $c'_{j_k}$ is a vector with 1 in the k-th row and 0's elsewhere. These vectors are independent, so $a_{j_1} = \cdots = a_{j_p} = 0$. Hence, $\{c_{j_1}, \ldots, c_{j_p}\}$ is independent.


Example. Consider the real matrix

$$A = \left[\matrix{1 & 1 & -2 & 3 \cr 2 & 0 & -1 & 1 \cr 1 & 1 & 1 & 0 \cr}\right].$$

It row reduces to

$$R = \left[\matrix{1 & 0 & 0 & 0 \cr 0 & 1 & 0 & 0 \cr 0 & 0 & 1 & -1 \cr}\right].$$

The leading coefficients occur in the first three columns. Hence, the first three columns of A are independent:

$$\left\{\left[\matrix{1 \cr 2 \cr 1 \cr}\right], \left[\matrix{1 \cr 0 \cr 1 \cr}\right], \left[\matrix{2 \cr -1 \cr 1 \cr}\right]\right\}$$

In fact, they form a basis for the column space of A.
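In code, the lemma translates directly: row reduce, then pull the pivot columns out of the original matrix. A sketch assuming sympy, whose columnspace method packages the same steps:

    from sympy import Matrix

    A = Matrix([[1, 1, -2, 3],
                [2, 0, -1, 1],
                [1, 1,  1, 0]])

    _, pivots = A.rref()
    print(pivots)                       # (0, 1, 2)
    basis = [A.col(j) for j in pivots]  # columns of A itself, not of R
    print(basis)
    print(A.columnspace())              # the built-in gives the same basis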


Example. Find a subset of the following set of vectors which forms a basis for $\real^3$.

$$\left\{\left[\matrix{1 \cr 2 \cr 1 \cr}\right], \left[\matrix{-1 \cr 1 \cr -1 \cr}\right], \left[\matrix{1 \cr 1 \cr 1 \cr}\right], \left[\matrix{4 \cr -1 \cr 2 \cr}\right]\right\}$$

Make a matrix with the vectors as columns and row reduce:

$$\left[\matrix{1 & -1 & 1 & 4 \cr 2 & 1 & 1 & -1 \cr 1 & -1 & 1 & 2 \cr}\right] \to \left[\matrix{1 & 0 & \dfrac{2}{3} & 0 \cr \noalign{\vskip2pt} 0 & 1 & -\dfrac{1}{3} & 0 \cr \noalign{\vskip2pt} 0 & 0 & 0 & 1 \cr}\right]$$

The leading coefficients occur in columns 1, 2, and 4. Therefore, the corresponding columns of the original matrix are independent, and form a basis for $\real^3$ :

$$\left\{\left[\matrix{1 \cr 2 \cr 1 \cr}\right], \left[\matrix{-1 \cr 1 \cr -1 \cr}\right], \left[\matrix{4 \cr -1 \cr 2 \cr}\right]\right\}.\quad\halmos$$


Proposition. Let A be a matrix. Then

$$\hbox{row rank}(A) = \hbox{column rank}(A).$$

Proof. Let R be the row reduced echelon matrix which is row equivalent to A. Suppose the leading coefficients of R occur in columns $j_1, \ldots, j_p$, where $j_1 < \cdots < j_p$, and let $c_i$ denote the i-th column of A. By the preceding lemma, $\{c_{j_1}, \ldots, c_{j_p}\}$ is independent. There is one vector in this set for each leading coefficient, and the number of leading coefficients equals the row rank. Therefore,

$$\hbox{row rank}(A) \le \hbox{column rank}(A).$$

Now consider $A^T$. This is A with the rows and columns swapped, so

$$\hbox{row rank}(A^T) = \hbox{column rank}(A),$$

$$\hbox{column rank}(A^T) = \hbox{row rank}(A).$$

Applying the first part of the proof to $A^T$ ,

$$\hbox{column rank}(A) = \hbox{row rank}(A^T) \le \hbox{column rank}(A^T) = \hbox{row rank}(A).$$

Therefore,

$$\hbox{column rank}(A) = \hbox{row rank}(A).\quad\halmos$$

Remark. The proof provides an algorithm for finding a basis for the column space of a matrix. Specifically, row reduce the matrix A to a row reduced echelon matrix R. If the leading coefficients of R occur in columns $j_1, \ldots, j_p$ , then consider the columns $c_{j_1}, \ldots, c_{j_p}$ of A. These columns form a basis for the column space of A.


Example. Consider the real matrix

$$\left[\matrix{1 & -2 & 3 & 1 & 1 \cr 2 & 1 & 0 & 3 & 1 \cr 0 & -5 & 6 & -1 & 1 \cr 7 & 1 & 3 & 10 & 4 \cr}\right].$$

It row reduces to

$$\left[\matrix{1 & 0 & 0.6 & 1.4 & 0.6 \cr 0 & 1 & -1.2 & 0.2 & -0.2 \cr 0 & 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 \cr}\right]$$

The leading coefficients occur in columns 1 and 2. Therefore, $(1,2,0,7)$ and $(-2,1,-5,1)$ form a basis for the column space of A.


Example. If A and B are row equivalent, they don't necessarily have the same column space. For example,

$$\left[\matrix{1 & 2 & 1 \cr 1 & 2 & 1 \cr}\right] \matrix{\rightarrow \cr r_2 \rightarrow r_2 - r_1 \cr} \left[\matrix{1 & 2 & 1 \cr 0 & 0 & 0 \cr}\right].$$

However, all the elements of the column space of the second matrix have their second component equal to 0; this is obviously not true of elements of the column space of the first matrix.


Proposition. Let A, B, P and Q be matrices, where P and Q are invertible. Suppose $A = PBQ$. Then

$$\rank A = \rank B.$$

Proof. I showed earlier that $\rank MN \le \rank N$ . This was row rank; a similar proof shows that

$$\hbox{column rank}(MN) \le \hbox{column rank}(M).$$

Since row rank and column rank are the same, $\rank MN \le \rank M$ .

Now

$$\rank A = \rank PBQ \le \rank BQ = \hbox{column rank}(BQ) \le \hbox{column rank}(B) = \rank B.$$

But $B = P^{-1}AQ^{-1}$, so repeating the computation gives $\rank B \le \rank A$. Therefore, $\rank A = \rank B$.

Definition. The null space (or kernel) of a matrix A is the set of vectors $\vec{x}$ such that $A\vec{x} = \vec{0}$. The dimension of the null space of A is called the nullity of A, and is denoted $\nullity(A)$.

Remark. The null space is the same as the solution space of the system of equations $A\vec{x} = \vec{0}$. I showed earlier that if A is an $m \times n$ matrix, then the solution space is a subspace of $F^n$. Thus, the null space of a matrix is a subspace of $F^n$.


Example. Consider the real matrix

$$A = \left[\matrix{3 & -1 & 1 \cr -1 & 1 & 1 \cr}\right].$$

The vector $(1,2,-1)$ is in the null space of A, since

$$\left[\matrix{3 & -1 & 1 \cr -1 & 1 & 1 \cr}\right] \left[\matrix{1 \cr 2 \cr -1 \cr}\right] = \left[\matrix{0 \cr 0 \cr}\right].$$

The vector $(1,1,1)$ is not in the null space of A, since

$$\left[\matrix{3 & -1 & 1 \cr -1 & 1 & 1 \cr}\right] \left[\matrix{1 \cr 1 \cr 1 \cr}\right] = \left[\matrix{3 \cr 1 \cr}\right] \ne \left[\matrix{0 \cr 0 \cr}\right].\quad\halmos$$
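Checking membership in the null space is just a matrix-vector multiplication (a sketch, assuming sympy):

    from sympy import Matrix

    A = Matrix([[ 3, -1, 1],
                [-1,  1, 1]])

    print(A * Matrix([1, 2, -1]))   # Matrix([[0], [0]]): in the null space
    print(A * Matrix([1, 1, 1]))    # Matrix([[3], [1]]): not in the null space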


Algorithm. Let A be an $m \times n$ matrix. The object is to find a basis for the null space of A.

Let $\vec{x} = (x_1, x_2, \ldots, x_n)$. Solve the following system by row reducing A to row reduced echelon form:

$$A\vec{x} = \vec{0}.$$

In the row reduced echelon form, suppose that $\{x_{i_1}, x_{i_2}, \ldots, x_{i_p}\}$ are the variables corresponding to the leading coefficients, and suppose that $\{x_{j_1}, x_{j_2}, \ldots, x_{j_q}\}$ are the free variables. Note for future reference that $p + q = n$.

As usual, put the solution in parametric form, writing $\{x_{i_1}, x_{i_2}, \ldots, x_{i_p}\}$ in terms of $\{x_{j_1}, x_{j_2}, \ldots, x_{j_q}\}$ :

$$\eqalign{x_{i_1} &= f_{i_1}(x_{j_1}, x_{j_2}, \ldots, x_{j_q}) \cr x_{i_2} &= f_{i_2}(x_{j_1}, x_{j_2}, \ldots, x_{j_q}) \cr &\vdots \cr x_{i_p} &= f_{i_p}(x_{j_1}, x_{j_2}, \ldots, x_{j_q}) \cr}$$

Plug the expressions for $\{x_{i_1}, x_{i_2}, \ldots, x_{i_p}\}$ into the general solution vector $\vec{x} = (x_1, x_2, \ldots, x_n)$, expressing it in terms of $\{x_{j_1}, x_{j_2}, \ldots, x_{j_q}\}$. Schematically, the result looks like this:

$$\left[\matrix{x_1 \cr x_2 \cr \vdots \cr x_n \cr}\right] = \left[\matrix{(f_{i_k}\,'s) \cr x_{j_1} \cr (f_{i_k}\,'s) \cr x_{j_2} \cr (f_{i_k}\,'s) \cr \vdots \cr (f_{i_k}\,'s) \cr x_{j_q} \cr (f_{i_k}\,'s) \cr}\right] = x_{j_1}\left[\matrix{\ast \cr 1 \cr \ast \cr 0 \cr \ast \cr \vdots \cr \ast \cr 0 \cr \ast \cr}\right] + x_{j_2}\left[\matrix{\ast \cr 0 \cr \ast \cr 1 \cr \ast \cr \vdots \cr \ast \cr 0 \cr \ast \cr}\right] + \cdots + x_{j_q}\left[\matrix{\ast \cr 0 \cr \ast \cr 0 \cr \ast \cr \vdots \cr \ast \cr 1 \cr \ast \cr}\right].$$

In the last expression, the vectors which are being multiplied by $x_{j_1}$, $x_{j_2}$, ..., $x_{j_q}$ form a basis for the null space.

The vectors span the null space, because the equation above has expressed an arbitrary vector in the null space as a linear combination of the vectors.

The vectors are independent, because (ignoring the $\ast$ 's) the 1's and 0's in the $j_1$, $j_2$, ..., $j_q$ components form a q-dimensional standard basis.

This description is probably hard to understand with all the subscripts flying around, but you'll see in the examples below that it's very easy to implement. Before giving an example, here's an important result that comes out of this discussion. Notice that p, the number of leading coefficient variables, is the rank of A. Notice also that q, the number of free variables, is the same as the number of vectors in the basis for the null space. That is, $q = \nullity(A)$. Finally, I observed earlier that $p + q = n$.

Theorem. Let A be an $m \times n$ matrix. Then

$$n = \rank A + \nullity A.\quad\halmos$$


Example. Find the nullity and a basis for the null space of the real matrix

$$\left[\matrix{1 & 2 & 0 & 3 \cr 1 & 2 & 1 & -2 \cr 2 & 4 & 1 & 1 \cr}\right].$$

Row reduce the matrix to row reduced echelon form:

$$\left[\matrix{1 & 2 & 0 & 3 \cr 1 & 2 & 1 & -2 \cr 2 & 4 & 1 & 1 \cr}\right] \to \left[\matrix{1 & 2 & 0 & 3 \cr 0 & 0 & 1 & -5 \cr 0 & 0 & 0 & 0 \cr}\right]$$

I'll use w, x, y, and z as my solution variables. Thinking of the last matrix as representing equations for a homogeneous system, I have

$$w + 2x + 3z = 0, \quad\hbox{or}\quad w = -2x - 3z,$$

$$y - 5z = 0, \quad\hbox{or}\quad y = 5z.$$

Thus,

$$\left[\matrix{w \cr x \cr y \cr z \cr}\right] = \left[\matrix{-2x - 3z \cr x \cr 5z \cr z \cr}\right] = x\left[\matrix{-2 \cr 1 \cr 0 \cr 0 \cr}\right] + z \left[\matrix{-3 \cr 0 \cr 5 \cr 1 \cr}\right].$$

$\{(-2,1,0,0), (-3,0,5,1)\}$ is a basis for the null space. The nullity is 2.

Notice that the rank is 2, the number of columns is 4, and $4 = 2 + 2$.
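sympy's nullspace method packages the parametric-solution algorithm above, and the rank-nullity theorem is easy to confirm (a sketch):

    from sympy import Matrix

    A = Matrix([[1, 2, 0,  3],
                [1, 2, 1, -2],
                [2, 4, 1,  1]])

    for v in A.nullspace():
        print(v.T)    # Matrix([[-2, 1, 0, 0]]) and Matrix([[-3, 0, 5, 1]])

    # rank + nullity = number of columns:
    print(A.rank() + len(A.nullspace()), A.cols)   # 4 4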


Example. Consider the following matrix over $\integer_3$ :

$$A = \left[\matrix{1 & 1 & 0 & 2 \cr 2 & 2 & 1 & 2 \cr 1 & 1 & 1 & 0 \cr}\right].$$

Find bases for the row space, column space, and null space.

Row reduce the matrix:

$$\left[\matrix{1 & 1 & 0 & 2 \cr 2 & 2 & 1 & 2 \cr 1 & 1 & 1 & 0 \cr}\right] \to \left[\matrix{1 & 1 & 0 & 2 \cr 0 & 0 & 1 & 1 \cr 0 & 0 & 0 & 0 \cr}\right]$$

$\{(1,1,0,2), (0,0,1,1)\}$ is a basis for the row space.

The leading coefficients occur in columns 1 and 3. Taking the first and third columns of the original matrix, I find that $\{(1,2,1), (0,1,1)\}$ is a basis for the column space.

Using a, b, c, and d as variables, I find that the row reduced matrix says

$$a + b + 2d = 0, \quad\hbox{or}\quad a = 2b + d,$$

$$c + d = 0, \quad\hbox{or}\quad c = 2d.$$

Thus,

$$\left[\matrix{a \cr b \cr c \cr d \cr}\right] = \left[\matrix{2b + d \cr b \cr 2d \cr d \cr}\right] = b\left[\matrix{2 \cr 1 \cr 0 \cr 0 \cr}\right] + d\left[\matrix{1 \cr 0 \cr 2 \cr 1 \cr}\right].$$

Therefore, $\{(2,1,0,0), (1,0,2,1)\}$ is a basis for the null space.
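Since this matrix is over $\integer_3$, the rref_mod_p sketch from the rank example (not sympy's rational rref) is the right tool here; with $p = 3$ it reproduces the row space basis and the pivot columns:

    # Assumes the rref_mod_p helper defined in the rank example above.
    R, pivots = rref_mod_p([[1, 1, 0, 2],
                            [2, 2, 1, 2],
                            [1, 1, 1, 0]], 3)
    print(R)        # [[1, 1, 0, 2], [0, 0, 1, 1], [0, 0, 0, 0]]
    print(pivots)   # [0, 2]: columns 1 and 3 of A give a column space basis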


Send comments about this page to: Bruce.Ikenaga@millersville.edu.

Bruce Ikenaga's Home Page

Copyright 2013 by Bruce Ikenaga
