Rank–nullity theorem

In mathematics, the rank–nullity theorem of linear algebra, in its simplest form, states that the rank and the nullity of a matrix add up to the number of columns of the matrix. Specifically, if A is an m-by-n matrix (with m rows and n columns) over some field, then

$\operatorname {rk} (A)+\operatorname {nul} (A)=n.$

This applies to linear maps as well. Let V and W be vector spaces over some field and let T : V → W be a linear map. Then the rank of T is the dimension of the image of T and the nullity of T is the dimension of the kernel of T, so we have

$\operatorname {dim} (\operatorname {im} (T))+\operatorname {dim} (\operatorname {ker} (T))=\operatorname {dim} (V),$

or, equivalently,

$\operatorname {rk} (T)+\operatorname {nul} (T)=\operatorname {dim} (V).$

One can refine this statement (via the splitting lemma or the proof below) to a statement about an isomorphism of spaces, not just an equality of dimensions.
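As a quick numeric illustration (a sketch using NumPy and SciPy; the matrix below is an arbitrary example), the rank and the nullity of a matrix can be computed independently and checked to add up to the number of columns:

```python
import numpy as np
from scipy.linalg import null_space

# An arbitrary 3-by-4 example matrix (m = 3 rows, n = 4 columns).
# The third row is the sum of the first two, so the rank is 2.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 1.],
              [1., 3., 1., 2.]])

rank = np.linalg.matrix_rank(A)      # rk(A), computed via SVD
nullity = null_space(A).shape[1]     # nul(A): dimension of the kernel

assert rank + nullity == A.shape[1]  # rk(A) + nul(A) = n
```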

More generally, one can consider the image, kernel, coimage, and cokernel, which are related by the fundamental theorem of linear algebra.

Proofs

We give two proofs. The first proof uses notations for linear transformations but can easily be adapted to matrices by writing T(x) = Ax, where A is m × n. The second proof looks at the homogeneous system Ax = 0 associated with an m × n matrix A of rank r and shows explicitly that there exists a set of n − r linearly independent solutions that spans the null space of A.

First proof: Suppose $\{{\mathbf {u} }_{1},\ldots ,{\mathbf {u} }_{m}\}$ forms a basis of ker T. We can extend this to form a basis of V: $\{{\mathbf {u} }_{1},\ldots ,{\mathbf {u} }_{m},{\mathbf {w} }_{1},\ldots ,{\mathbf {w} }_{n}\}$ . Since the dimension of ker T is m and the dimension of V is m + n, it suffices to show that the dimension of im T is n.

We now show that $\{T{\mathbf {w} }_{1},\ldots ,T{\mathbf {w} }_{n}\}$ is a basis of im T. Let v be an arbitrary vector in V. There exist unique scalars ai and bj such that:

${\mathbf {v} }=a_{1}{\mathbf {u} }_{1}+\cdots +a_{m}{\mathbf {u} }_{m}+b_{1}{\mathbf {w} }_{1}+\cdots +b_{n}{\mathbf {w} }_{n}$

$\Rightarrow T{\mathbf {v} }=a_{1}T{\mathbf {u} }_{1}+\cdots +a_{m}T{\mathbf {u} }_{m}+b_{1}T{\mathbf {w} }_{1}+\cdots +b_{n}T{\mathbf {w} }_{n}$

$\Rightarrow T{\mathbf {v} }=b_{1}T{\mathbf {w} }_{1}+\cdots +b_{n}T{\mathbf {w} }_{n}\;\;\because T{\mathbf {u} }_{i}=0$

Hence $\{T{\mathbf {w} }_{1},\ldots ,T{\mathbf {w} }_{n}\}$ spans im T. It remains to show that this list is not redundant; that is, that $\{T{\mathbf {w} }_{1},\ldots ,T{\mathbf {w} }_{n}\}$ is linearly independent. We do this by showing that a linear combination of these vectors is zero only if the coefficient on each vector is zero. Let:

$c_{1}T{\mathbf {w} }_{1}+\cdots +c_{n}T{\mathbf {w} }_{n}=0\Leftrightarrow T(c_{1}{\mathbf {w} }_{1}+\cdots +c_{n}{\mathbf {w} }_{n})=0$

$\therefore c_{1}{\mathbf {w} }_{1}+\cdots +c_{n}{\mathbf {w} }_{n}\in \operatorname {ker} \;T$

Then, since the ui span ker T, there exists a set of scalars di such that:

$c_{1}{\mathbf {w} }_{1}+\cdots +c_{n}{\mathbf {w} }_{n}=d_{1}{\mathbf {u} }_{1}+\cdots +d_{m}{\mathbf {u} }_{m}$

But, since $\{{\mathbf {u} }_{1},\ldots ,{\mathbf {u} }_{m},{\mathbf {w} }_{1},\ldots ,{\mathbf {w} }_{n}\}$ is a basis of V, all the ci and di must be zero. Therefore, $\{T{\mathbf {w} }_{1},\ldots ,T{\mathbf {w} }_{n}\}$ is linearly independent and indeed a basis of im T. This proves that the dimension of im T is n, as desired.

In more abstract terms, the map T : V → im T splits.
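The construction in the first proof can be mirrored numerically for a matrix map T(x) = Ax (a sketch with NumPy and SciPy; the example matrix and the greedy basis-extension step are illustrative choices, not part of the proof): take a basis of ker T, extend it to a basis of V, and check that the images of the added vectors form a basis of im T.

```python
import numpy as np
from scipy.linalg import null_space

# T(x) = Ax for an arbitrary rank-2 example matrix; V = R^4.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 1.],
              [1., 3., 1., 2.]])
n = A.shape[1]

U = null_space(A)        # columns form a basis of ker T
k = U.shape[1]           # k = dim ker T

# Extend the kernel basis to a basis of V: append standard basis
# vectors and greedily keep those that increase the rank.
candidates = np.hstack([U, np.eye(n)])
cols = []
for j in range(candidates.shape[1]):
    if np.linalg.matrix_rank(candidates[:, cols + [j]]) == len(cols) + 1:
        cols.append(j)
    if len(cols) == n:
        break
basis = candidates[:, cols]   # full basis of V; first k columns span ker T
W = basis[:, k:]              # the added vectors w_1, ..., w_{n-k}

TW = A @ W                    # their images T w_1, ..., T w_{n-k}
# As the proof shows, the images are linearly independent and span im T.
assert np.linalg.matrix_rank(TW) == n - k
assert np.linalg.matrix_rank(A) == n - k
```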

Second proof: Let A be an m × n matrix with r linearly independent columns (i.e. the rank of A is r). We will show that: (i) there exists a set of n − r linearly independent solutions to the homogeneous system Ax = 0, and (ii) every other solution is a linear combination of these n − r solutions. In other words, we will produce an n × (n − r) matrix X whose columns form a basis of the null space of A.

Without loss of generality, assume that the first r columns of A are linearly independent. So, we can write A = [A1:A2], where A1 is m × r with r linearly independent column vectors and A2 is m × (n − r), each of whose n − r columns is a linear combination of the columns of A1. This means that A2 = A1B for some r × (n − r) matrix B (see rank factorization) and, hence, A = [A1:A1B]. Let $\displaystyle {\mathbf {X} }={\begin{pmatrix}-{\mathbf {B} }\\{\mathbf {I} }_{n-r}\end{pmatrix}}$ , where ${\mathbf {I} }_{n-r}$ is the (n − r) × (n − r) identity matrix. We note that X is an n × (n − r) matrix that satisfies

${\mathbf {A} }{\mathbf {X} }=[{\mathbf {A} }_{1}:{\mathbf {A} }_{1}{\mathbf {B} }]{\begin{pmatrix}-{\mathbf {B} }\\{\mathbf {I} }_{n-r}\end{pmatrix}}=-{\mathbf {A} }_{1}{\mathbf {B} }+{\mathbf {A} }_{1}{\mathbf {B} }={\mathbf {O} }\;.$

Therefore, each of the n − r columns of X is a particular solution of Ax = 0. Furthermore, the n − r columns of X are linearly independent, because Xu = 0 implies u = 0:

${\mathbf {X} }{\mathbf {u} }={\mathbf {0} }\Rightarrow {\begin{pmatrix}-{\mathbf {B} }\\{\mathbf {I} }_{n-r}\end{pmatrix}}{\mathbf {u} }={\mathbf {0} }\Rightarrow {\begin{pmatrix}-{\mathbf {B} }{\mathbf {u} }\\{\mathbf {u} }\end{pmatrix}}={\begin{pmatrix}{\mathbf {0} }\\{\mathbf {0} }\end{pmatrix}}\Rightarrow {\mathbf {u} }={\mathbf {0} }\;.$

Therefore, the column vectors of X constitute a set of n − r linearly independent solutions of Ax = 0.

We next prove that any solution of Ax = 0 must be a linear combination of the columns of X. For this, let $\displaystyle {\mathbf {u} }={\begin{pmatrix}{\mathbf {u} }_{1}\\{\mathbf {u} }_{2}\end{pmatrix}}$ be any vector such that Au = 0. Note that since the columns of A1 are linearly independent, A1x = 0 implies x = 0. Therefore,

${\mathbf {A} }{\mathbf {u} }={\mathbf {0} }\Rightarrow [{\mathbf {A} }_{1}:{\mathbf {A} }_{1}{\mathbf {B} }]{\begin{pmatrix}{\mathbf {u} }_{1}\\{\mathbf {u} }_{2}\end{pmatrix}}={\mathbf {0} }\Rightarrow {\mathbf {A} }_{1}({\mathbf {u} }_{1}+{\mathbf {B} }{\mathbf {u} }_{2})={\mathbf {0} }\Rightarrow {\mathbf {u} }_{1}+{\mathbf {B} }{\mathbf {u} }_{2}={\mathbf {0} }\Rightarrow {\mathbf {u} }_{1}=-{\mathbf {B} }{\mathbf {u} }_{2}$

$\Rightarrow {\mathbf {u} }={\begin{pmatrix}{\mathbf {u} }_{1}\\{\mathbf {u} }_{2}\end{pmatrix}}={\begin{pmatrix}-{\mathbf {B} }\\{\mathbf {I} }_{n-r}\end{pmatrix}}{\mathbf {u} }_{2}={\mathbf {X} }{\mathbf {u} }_{2}.$

This proves that any vector u that is a solution of Ax = 0 must be a linear combination of the n − r special solutions given by the columns of X. And we have already seen that the columns of X are linearly independent. Hence, the columns of X constitute a basis for the null space of A. Therefore, the nullity of A is n − r. Since r equals the rank of A, it follows that rk(A) + nul(A) = n. QED.
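The second proof is constructive, and the construction can be carried out numerically (a sketch with NumPy; the example matrix is arbitrary and is chosen so that, as the proof assumes, its first r columns are linearly independent):

```python
import numpy as np

# Arbitrary 3-by-4 example of rank r = 2 whose first r columns are independent.
A = np.array([[1., 0., 2., 1.],
              [0., 1., 1., 1.],
              [1., 1., 3., 2.]])
m, n = A.shape
r = np.linalg.matrix_rank(A)

A1, A2 = A[:, :r], A[:, r:]   # A = [A1 : A2], with A2 = A1 B

# Recover B from A1 B = A2 (least squares is exact here, since an
# exact solution exists by the rank factorization).
B, *_ = np.linalg.lstsq(A1, A2, rcond=None)

# X = [[-B], [I_{n-r}]] stacks -B on top of the identity matrix.
X = np.vstack([-B, np.eye(n - r)])

assert np.allclose(A @ X, 0)              # A X = 0: columns solve Ax = 0
assert np.linalg.matrix_rank(X) == n - r  # and they are linearly independent
```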

Reformulations and generalizations

This theorem is the statement of the first isomorphism theorem of algebra specialized to the case of vector spaces; it generalizes to the splitting lemma.

In more modern language, the theorem can also be phrased as follows: if

0 → U → V → R → 0

is a short exact sequence of vector spaces, then

dim(U) + dim(R) = dim(V).

Here R plays the role of im T and U is ker T, i.e.

$0\rightarrow \ker T~{\overset {Id}{\rightarrow }}~V~{\overset {T}{\rightarrow }}~\operatorname {im} T\rightarrow 0$

In the finite-dimensional case, this formulation admits a generalization: if

0 → V1 → V2 → ... → Vr → 0

is an exact sequence of finite-dimensional vector spaces, then

$\sum _{i=1}^{r}(-1)^{i}\dim(V_{i})=0.$
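The alternating-sum formula can be checked on a small concrete example (a sketch with NumPy and SciPy; the maps f and g are hypothetical choices forming an exact sequence 0 → R¹ → R³ → R² → 0):

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical exact sequence 0 -> R^1 --f--> R^3 --g--> R^2 -> 0.
f = np.array([[1.], [0.], [0.]])   # injective (rank 1)
g = np.array([[0., 1., 0.],
              [0., 0., 1.]])       # surjective (rank 2)

# Exactness at R^3: im f = ker g (both are the first coordinate axis).
assert np.allclose(g @ f, 0)                               # im f lies in ker g
assert np.linalg.matrix_rank(f) == null_space(g).shape[1]  # equal dimensions

# Alternating sum of dimensions: -1 + 3 - 2 = 0.
dims = [1, 3, 2]
assert sum((-1) ** i * d for i, d in enumerate(dims, start=1)) == 0
```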

The rank–nullity theorem for finite-dimensional vector spaces may also be formulated in terms of the index of a linear map. The index of a linear map T : V → W, where V and W are finite-dimensional, is defined by

index T = dim(ker T) − dim(coker T).

Intuitively, dim(ker T) is the number of independent solutions x of the equation Tx = 0, and dim(coker T) is the number of independent restrictions that have to be put on y to make Tx = y solvable. The rank–nullity theorem for finite-dimensional vector spaces is equivalent to the statement

index T = dim(V) − dim(W).

Thus the index of the linear map T can be read off from the spaces involved, without any need to analyze T in detail. This effect also occurs in a much deeper result: the Atiyah–Singer index theorem states that the index of certain differential operators can be read off from the geometry of the spaces involved.
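The finite-dimensional statement can be seen numerically (a sketch with NumPy and SciPy; the random matrix is an arbitrary stand-in for T): whatever the entries of the matrix, the index comes out to dim(V) − dim(W).

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)

# A linear map T : R^5 -> R^3 given by a random 3-by-5 matrix.
m, n = 3, 5
T = rng.standard_normal((m, n))

dim_ker = null_space(T).shape[1]           # independent solutions of Tx = 0
dim_coker = m - np.linalg.matrix_rank(T)   # independent constraints on y

index = dim_ker - dim_coker
assert index == n - m   # the index depends only on the spaces, not on T
```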