Idempotent matrix
{{More footnotes|date=February 2012}}

In [[algebra]], an '''idempotent matrix''' is a [[matrix (mathematics)|matrix]] which, when multiplied by itself, yields itself.<ref>{{cite book |last=Chiang |first=Alpha C. |title=Fundamental Methods of Mathematical Economics |publisher=McGraw–Hill |edition=3rd |year=1984 |page=80 |location=New York |isbn=0070108137 }}</ref><ref name=Greene>{{cite book |last=Greene |first=William H. |title=Econometric Analysis |publisher=Prentice–Hall |location=Upper Saddle River, NJ |edition=5th |year=2003 |pages=808–809 |isbn=0130661899 }}</ref> That is, the matrix ''M'' is idempotent if and only if ''MM'' = ''M''. For this product ''MM'' to be [[Matrix multiplication|defined]], ''M'' must necessarily be a [[square matrix]]. Viewed this way, idempotent matrices are [[idempotent element]]s of [[matrix ring]]s.
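As a concrete illustration, the following minimal numerical sketch (the particular 2×2 matrix is chosen purely for demonstration and is not taken from the cited sources) checks the defining property ''MM'' = ''M'' with [[NumPy]]:

<syntaxhighlight lang="python">
import numpy as np

# Illustrative 2x2 idempotent matrix (chosen for demonstration only).
M = np.array([[3.0, -6.0],
              [1.0, -2.0]])

# The defining property MM = M.
print(np.allclose(M @ M, M))  # True
</syntaxhighlight>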
==Properties==
With the exception of the [[identity matrix]], an idempotent matrix is [[singular matrix|singular]]; that is, its number of independent rows (and columns) is less than its number of rows (and columns). This can be seen from writing ''MM'' = ''M'', assuming that ''M'' has full rank (is non-singular), and pre-multiplying by ''M''<sup>−1</sup> to obtain ''M'' = ''M''<sup>−1</sup>''M'' = ''I''.

When an idempotent matrix is subtracted from the identity matrix, the result is also idempotent. This holds since [''I'' − ''M''][''I'' − ''M''] = ''I'' − ''M'' − ''M'' + ''M''<sup>2</sup> = ''I'' − ''M'' − ''M'' + ''M'' = ''I'' − ''M''.

An idempotent matrix is always [[diagonalizable]] and its [[eigenvalue]]s are either 0 or 1.<ref>{{cite book |first=Roger A. |last=Horn |first2=Charles R. |last2=Johnson |title=Matrix analysis |publisher=Cambridge University Press |year=1990 |page={{Google books quote|id=PlYQN0ypTwEC|page=148|text=every idempotent matrix is diagonalizable|p. 148}} |isbn=0521386322 }}</ref> The [[trace (linear algebra)|trace]] of an idempotent matrix (the sum of the elements on its main diagonal) equals the [[rank (linear algebra)|rank]] of the matrix and thus is always an integer. This provides an easy way of computing the rank, or alternatively an easy way of determining the trace of a matrix whose elements are not specifically known (which is helpful in [[econometrics]], for example, in establishing the degree of [[bias (statistics)|bias]] in using a [[variance|sample variance]] as an estimate of a [[variance|population variance]]).
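These properties can be checked numerically on the same illustrative matrix used above (a sketch for this one example, not a general proof):

<syntaxhighlight lang="python">
import numpy as np

M = np.array([[3.0, -6.0],
              [1.0, -2.0]])        # the illustrative idempotent matrix from above
I = np.eye(2)

# I - M is again idempotent.
print(np.allclose((I - M) @ (I - M), I - M))        # True

# The eigenvalues are 0 or 1, and the trace equals the rank.
print(np.sort(np.round(np.linalg.eigvals(M), 10)))  # [0. 1.]
print(np.trace(M), np.linalg.matrix_rank(M))        # 1.0 1
</syntaxhighlight>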
==Applications==
Idempotent matrices arise frequently in [[regression analysis]] and [[econometrics]]. For example, in [[ordinary least squares]], the regression problem is to choose a vector <math>\beta</math> of coefficient estimates so as to minimize the sum of squared residuals (mispredictions) ''e''<sub>''i''</sub>: in matrix form,
:<math>\text{Minimize } (y - X \beta)^T(y - X \beta) \, </math>
where ''y'' is a vector of [[Dependent and independent variables#Use in statistics|dependent variable]] observations, and ''X'' is a matrix each of whose columns is a column of observations on one of the [[Dependent and independent variables#Use in statistics|independent variables]]. The resulting estimator is
:<math>\beta = (X^TX)^{-1}X^Ty \, </math>
where superscript ''T'' indicates a [[transpose]], and the vector of residuals is<ref name=Greene/>
:<math>e = y - X \beta = y - X(X^TX)^{-1}X^Ty = [I - X(X^TX)^{-1}X^T]y = My. \, </math>
Here both ''M'' and <math>X(X^TX)^{-1}X^T</math> (the latter being known as the [[hat matrix]]) are idempotent matrices, a fact which allows simplification when the sum of squared residuals is computed:
:<math> e^Te = (My)^T(My) = y^TM^TMy = y^TMMy = y^TMy. \, </math>
The idempotency of ''M'' plays a role in other calculations as well, such as in determining the variance of the estimator <math>\beta</math>.
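A small simulated example (the data below are generated only for illustration; nothing beyond the formulas above is assumed) shows the hat matrix and the residual-maker matrix ''M'' behaving as described:

<syntaxhighlight lang="python">
import numpy as np

# Simulated data; the design matrix X and response y are purely illustrative.
rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept plus two regressors
y = rng.normal(size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix X(X'X)^{-1}X'
M = np.eye(n) - H                      # residual-maker matrix

print(np.allclose(H @ H, H), np.allclose(M @ M, M))  # True True: both idempotent

e = M @ y                              # residual vector e = My
print(np.allclose(e @ e, y @ M @ y))   # True: e'e = y'My
</syntaxhighlight>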
An idempotent linear operator ''P'' is a projection operator onto its [[Column space|range space]] ''R''(''P'') along its [[null space]] ''N''(''P''). ''P'' is an [[orthogonal projection]] operator if and only if it is idempotent and [[Symmetric matrix|symmetric]].
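As an illustrative sketch of this last point (again with simulated, purely illustrative data), a symmetric idempotent matrix such as the hat matrix projects orthogonally, so the residual component lies in ''N''(''P'') and is orthogonal to the columns of ''X'':

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2))
P = X @ np.linalg.inv(X.T @ X) @ X.T   # symmetric and idempotent, hence an orthogonal projector

print(np.allclose(P, P.T), np.allclose(P @ P, P))  # True True

y = rng.normal(size=20)
r = y - P @ y                          # the component of y in the null space N(P)
print(np.allclose(X.T @ r, 0))         # True: r is orthogonal to the range R(P) = col(X)
</syntaxhighlight>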
==See also==
* [[Idempotence]]
* [[Nilpotent]]
* [[Projection (linear algebra)]]
* [[Hat matrix]]

==References==
{{reflist}}

[[Category:Algebra]]
[[Category:Econometrics]]
[[Category:Matrices]]