In [[mathematics]], a '''square matrix''' is a [[Matrix (mathematics)|matrix]] with the same number of rows and columns. An ''n''-by-''n'' matrix is known as a square matrix of order ''n''. For instance, this is a square matrix of order 4:
 
: <math>\mathbf{A} = \begin{bmatrix}
9 & 13 & 5 & 2 \\
1 & 11 & 7 & 6 \\
3 & 7 & 4 & 1 \\
6 & 0 & 7 & 10 \end{bmatrix}.</math>
 
The entries ''a''<sub>''ii''</sub> form the [[main diagonal]] of a square matrix. For instance, the main diagonal of the 4-by-4 matrix above contains the elements ''a''<sub>11</sub>&nbsp;=&nbsp;9, ''a''<sub>22</sub>&nbsp;=&nbsp;11, ''a''<sub>33</sub>&nbsp;=&nbsp;4, ''a''<sub>44</sub>&nbsp;=&nbsp;10.
 
Any two square matrices of the same order can be added and multiplied.
 
Square matrices are often used to represent simple [[linear transformation]]s, such as [[Shear mapping|shearing]] or [[Rotation (mathematics)|rotation]]. For example, if '''R''' is a square matrix representing a rotation ([[rotation matrix]]) and '''v''' is a [[column vector]] describing the [[Position (vector)|position]] of a point in space, the product '''Rv''' yields another column vector describing the position of that point after that rotation. If '''v''' is a [[row vector]], the same transformation can be obtained using '''vR'''<sup>T</sup>, where '''R'''<sup>T</sup> is the [[transpose]] of '''R'''.
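
To make the rotation example concrete, here is a minimal sketch in Python using the [[NumPy]] library (the library choice and the particular angle are illustrative assumptions, not part of the definition):

<syntaxhighlight lang="python">
import numpy as np

# Rotation matrix for a 90-degree counterclockwise rotation of the plane.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([[1.0], [0.0]])  # column vector: a point on the x-axis

print(R @ v)      # Rv: the rotated point, approximately [[0.], [1.]]
print(v.T @ R.T)  # row-vector form vR^T yields the same point, as a row
</syntaxhighlight>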
 
==Main diagonal==
{{Main|Main diagonal}}
The entries ''a''<sub>''ii''</sub> (''i'' = 1, ..., ''n'') form the [[main diagonal]] of a square matrix. They lie on the imaginary line which runs from the top left corner to the bottom right corner of the matrix. For instance, the main diagonal of the 4-by-4 matrix above contains the elements ''a''<sub>11</sub>&nbsp;=&nbsp;9, ''a''<sub>22</sub>&nbsp;=&nbsp;11, ''a''<sub>33</sub>&nbsp;=&nbsp;4, ''a''<sub>44</sub>&nbsp;=&nbsp;10.
 
The diagonal of a square matrix from the top right to the bottom left corner is called the ''antidiagonal'' or ''counterdiagonal''.
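
As a short illustrative sketch (again assuming NumPy), both diagonals of the 4-by-4 matrix above can be read off programmatically:

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[9, 13, 5,  2],
              [1, 11, 7,  6],
              [3,  7, 4,  1],
              [6,  0, 7, 10]])

print(np.diag(A))             # main diagonal: [ 9 11  4 10]
print(np.diag(np.fliplr(A)))  # antidiagonal (top right to bottom left): [2 7 7 6]
</syntaxhighlight>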
 
== Special kinds ==
:{| class="wikitable" style="float:right; margin:0ex 0ex 2ex 2ex;"
|-
! Name !! Example with ''n'' = 3
|-
| [[Diagonal matrix]] || style="text-align:center;" | <math>
      \begin{bmatrix}
          a_{11} & 0 & 0 \\
          0 & a_{22} & 0 \\
          0 & 0 & a_{33}
      \end{bmatrix}
  </math>
|-
| [[Lower triangular matrix]] || style="text-align:center;" | <math>
      \begin{bmatrix}
          a_{11} & 0 & 0 \\
          a_{21} & a_{22} & 0 \\
          a_{31} & a_{32} & a_{33}
      \end{bmatrix}
  </math>
|-
| [[Upper triangular matrix]] || style="text-align:center;" | <math>
      \begin{bmatrix}
          a_{11} & a_{12} & a_{13} \\
          0 & a_{22} & a_{23} \\
          0 & 0 & a_{33}
      \end{bmatrix}
  </math>
|}
=== Diagonal or triangular matrix ===
If all entries outside the main diagonal are zero, '''A''' is called a [[diagonal matrix]]. If all entries above (respectively below) the main diagonal are zero, '''A''' is called a lower (respectively upper) [[triangular matrix]].
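
The following sketch (using NumPy; the sample matrix is arbitrary) extracts the diagonal, lower triangular, and upper triangular parts of a matrix:

<syntaxhighlight lang="python">
import numpy as np

A = np.arange(1, 10).reshape(3, 3)  # an arbitrary 3-by-3 matrix

print(np.diag(np.diag(A)))  # diagonal matrix keeping only the main diagonal
print(np.tril(A))           # lower triangular: entries above the diagonal zeroed
print(np.triu(A))           # upper triangular: entries below the diagonal zeroed
</syntaxhighlight>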
 
=== Identity matrix ===
The [[identity matrix]] '''I'''<sub>''n''</sub> of size ''n'' is the ''n''-by-''n'' matrix in which all the elements on the [[main diagonal]] are equal to 1 and all other elements are equal to 0, e.g.
:<math>
I_1 = \begin{bmatrix} 1 \end{bmatrix}
,\
I_2 = \begin{bmatrix}
        1 & 0 \\
        0 & 1
      \end{bmatrix}
,\ \cdots ,\
I_n = \begin{bmatrix}
        1 & 0 & \cdots & 0 \\
        0 & 1 & \cdots & 0 \\
        \vdots & \vdots & \ddots & \vdots \\
        0 & 0 & \cdots & 1
      \end{bmatrix}
</math>
It is a square matrix of order ''n'', and also a special kind of [[diagonal matrix]]. It is called the identity matrix because multiplication by it leaves a matrix unchanged:
:{{nowrap begin}}'''AI'''<sub>''n''</sub> = '''I'''<sub>''m''</sub>'''A''' = '''A'''{{nowrap end}} for any ''m''-by-''n'' matrix '''A'''.
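
A minimal sketch of this identity property, with an arbitrary 2-by-3 matrix as the example:

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])      # an arbitrary 2-by-3 matrix

I2, I3 = np.eye(2), np.eye(3)  # identity matrices of orders 2 and 3

# I_m A = A I_n = A for any m-by-n matrix A.
assert np.array_equal(I2 @ A, A)
assert np.array_equal(A @ I3, A)
</syntaxhighlight>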
 
===Symmetric or skew-symmetric matrix===
A square matrix '''A''' that is equal to its transpose, i.e., {{nowrap begin}}'''A''' = '''A'''<sup>T</sup>{{nowrap end}}, is a [[symmetric matrix]]. If instead '''A''' is equal to the negative of its transpose, i.e., {{nowrap begin}}'''A''' = −'''A'''<sup>T</sup>,{{nowrap end}} then '''A''' is a [[skew-symmetric matrix]]. For complex matrices, symmetry is often replaced by the concept of [[Hermitian matrix|Hermitian matrices]], which satisfy '''A'''<sup>∗</sup> = '''A''', where the star or [[asterisk]] denotes the [[conjugate transpose]] of the matrix, i.e., the transpose of the [[complex conjugate]] of '''A'''.
 
By the [[spectral theorem]], real symmetric matrices and complex Hermitian matrices have an [[eigenbasis]]; i.e., every vector is expressible as a [[linear combination]] of eigenvectors. In both cases, all eigenvalues are real.<ref>{{Harvard citations |last1=Horn |last2=Johnson |year=1985 |nb=yes |loc=Theorem 2.5.6 }}</ref> This theorem can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns; see [[#Infinite matrices|below]].
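
A sketch of the real-eigenvalue guarantee for a small symmetric example (the matrix is an arbitrary choice):

<syntaxhighlight lang="python">
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
assert np.array_equal(S, S.T)  # S equals its transpose, so it is symmetric

# eigh is NumPy's routine for symmetric/Hermitian matrices; it returns
# real eigenvalues, in line with the spectral theorem.
eigenvalues, eigenvectors = np.linalg.eigh(S)
print(eigenvalues)  # [1. 3.]
</syntaxhighlight>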
 
===Invertible matrix and its inverse===
A square matrix '''A''' is called ''[[invertible matrix|invertible]]'' or ''non-singular'' if there exists a matrix '''B''' such that
: '''AB''' = '''BA''' = '''I'''<sub>''n''</sub>.<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Definition I.2.28 }}</ref><ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Definition I.5.13 }}</ref>
If '''B''' exists, it is unique and is called the ''[[inverse matrix]]'' of '''A''', denoted '''A'''<sup>−1</sup>.
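
A sketch of computing and verifying an inverse numerically (the matrix is an arbitrary invertible example):

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # determinant is -2, so A is invertible

B = np.linalg.inv(A)        # the inverse matrix A^-1

# AB = BA = I_n, up to floating-point rounding.
assert np.allclose(A @ B, np.eye(2))
assert np.allclose(B @ A, np.eye(2))
</syntaxhighlight>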
 
===Definite matrix===
{| class="wikitable" style="float:right; text-align:center; margin:0ex 0ex 2ex 2ex;"
|-
! [[Positive definite matrix|Positive definite]] !! [[Indefinite matrix|Indefinite]]
|-
| <math> \begin{bmatrix}
        1/4 & 0 \\
        0 & 1/4 \\
    \end{bmatrix} </math>
| <math> \begin{bmatrix}
        1/4 & 0 \\
        0 & -1/4
    \end{bmatrix} </math>
|-
| ''Q''(''x'',''y'') = 1/4 ''x''<sup>2</sup> + 1/4 ''y''<sup>2</sup>
| ''Q''(''x'',''y'') = 1/4 ''x''<sup>2</sup> − 1/4 ''y''<sup>2</sup>
|-
| [[File:Ellipse in coordinate system with semi-axes labelled.svg|150px]] <br>Points such that ''Q''(''x'',''y'')&nbsp;=&nbsp;1 <br> ([[Ellipse]]).
| [[File:Hyperbola2.png|100px]] <br> Points such that ''Q''(''x'',''y'')&nbsp;=&nbsp;1 <br> ([[Hyperbola]]).
|}
A symmetric ''n''×''n''-matrix is called ''[[positive-definite matrix|positive-definite]]'' (respectively negative-definite; indefinite), if for all nonzero vectors '''x'''&nbsp;∈&nbsp;'''R'''<sup>''n''</sup> the associated [[quadratic form]] given by
:<cite id=quadratic_forms>''Q''('''x''') = '''x'''<sup>T</sup>'''Ax'''</cite>
takes only positive values (respectively only negative values; both some negative and some positive values).<ref>{{Harvard citations |last1=Horn |last2=Johnson |year=1985 |nb=yes |loc=Chapter 7 }}</ref> If the quadratic form takes only non-negative (respectively only non-positive) values, the symmetric matrix is called positive-semidefinite (respectively negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.
 
A symmetric matrix is positive-definite if and only if all its eigenvalues are positive.<ref>{{Harvard citations |last1=Horn |last2=Johnson |year=1985 |nb=yes |loc=Theorem 7.2.1 }}</ref> The table at the right shows two possibilities for 2-by-2 matrices.
 
Allowing as input two different vectors instead yields the [[bilinear form]] associated to '''A''':
:''B''<sub>'''A'''</sub> ('''x''', '''y''') = '''x'''<sup>T</sup>'''Ay'''.<ref>{{Harvard citations |last1=Horn |last2=Johnson |year=1985 |nb=yes |loc=Example 4.0.6, p. 169 }}</ref>
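
Since a symmetric matrix is positive-definite exactly when all its eigenvalues are positive, definiteness can be checked numerically; the sketch below classifies the two matrices from the table (the helper function name is an arbitrary choice):

<syntaxhighlight lang="python">
import numpy as np

def definiteness(A):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(A)  # real eigenvalues of a symmetric matrix
    if np.all(w > 0):
        return "positive-definite"
    if np.all(w < 0):
        return "negative-definite"
    return "indefinite or semidefinite"

print(definiteness(np.array([[0.25, 0.0], [0.0,  0.25]])))  # positive-definite
print(definiteness(np.array([[0.25, 0.0], [0.0, -0.25]])))  # indefinite or semidefinite
</syntaxhighlight>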
 
=== Orthogonal matrix ===
An ''orthogonal matrix'' is a [[Matrix (mathematics)#Square matrices|square matrix]] with [[real number|real]] entries whose columns and rows are [[orthogonal]] [[unit vector]]s (i.e., [[orthonormality|orthonormal]] vectors). Equivalently, a matrix ''A'' is orthogonal if its [[transpose]] is equal to its [[inverse matrix|inverse]]:
:<math>A^\mathrm{T}=A^{-1}, \,</math>
which entails
:<math>A^\mathrm{T} A = A A^\mathrm{T} = I, \,</math>
where ''I'' is the [[identity matrix]].
 
An orthogonal matrix ''A'' is necessarily [[Invertible matrix|invertible]] (with inverse {{nowrap|1=''A''<sup>&minus;1</sup> = ''A''<sup>T</sup>}}), [[Unitary matrix|unitary]] ({{nowrap|1=''A''<sup>&minus;1</sup> = ''A''*}}), and [[Normal matrix|normal]] ({{nowrap|1=''A''*''A'' = ''AA''*}}). The [[determinant]] of any orthogonal matrix is either +1 or −1. A ''special orthogonal matrix'' is an orthogonal matrix with [[determinant]] +1. As a [[linear transformation]], every orthogonal matrix with determinant +1 is a pure [[Rotation (mathematics)|rotation]], while every orthogonal matrix with determinant&nbsp;−1 is either a pure [[reflection (mathematics)|reflection]], or a composition of reflection and rotation.
 
The [[complex number|complex]] analogue of an orthogonal matrix is a [[unitary matrix]].
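
A sketch verifying the defining property for a sample rotation matrix (the angle is arbitrary):

<syntaxhighlight lang="python">
import numpy as np

theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # a rotation matrix

# For an orthogonal matrix the transpose equals the inverse: R^T R = R R^T = I.
assert np.allclose(R.T @ R, np.eye(2))
assert np.allclose(R @ R.T, np.eye(2))
print(np.linalg.det(R))  # +1, so R is a special orthogonal matrix (a pure rotation)
</syntaxhighlight>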
 
==Operations==
===Trace===
The [[trace of a matrix|trace]], tr('''A'''), of a square matrix '''A''' is the sum of its diagonal entries. While matrix multiplication is not commutative as mentioned [[#non commutative|above]], the trace of the product of two matrices is independent of the order of the factors:
: tr('''AB''') = tr('''BA''').
This is immediate from the definition of matrix multiplication:
:<math>\operatorname{tr}(\mathsf{AB}) = \sum_{i=1}^m \sum_{j=1}^n A_{ij} B_{ji} = \operatorname{tr}(\mathsf{BA}).</math>
Also, the trace of a matrix is equal to that of its transpose, i.e.,
:{{nowrap begin}}tr('''A''') = tr('''A'''<sup>T</sup>){{nowrap end}}.
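
A sketch checking both trace identities on random matrices (the seed and sizes are arbitrary):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# tr(AB) = tr(BA), even though AB and BA generally differ.
assert np.isclose(np.trace(A @ B), np.trace(B @ A))

# tr(A) = tr(A^T).
assert np.isclose(np.trace(A), np.trace(A.T))
</syntaxhighlight>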
 
===Determinant===
{{Main|Determinant}}
[[File:Determinant example.svg|thumb|300px|right|A linear transformation on '''R'''<sup>2</sup> given by the indicated matrix. The determinant of this matrix is −1, as the area of the green parallelogram at the right is 1, but the map reverses the [[orientation (mathematics)|orientation]], since it turns the counterclockwise orientation of the vectors to a clockwise one.]]
 
The ''determinant'' det('''A''') or |'''A'''| of a square matrix '''A''' is a number encoding certain properties of the matrix. A matrix is invertible [[if and only if]] its determinant is nonzero. Its [[absolute value]] equals the area (in '''R'''<sup>2</sup>) or volume (in '''R'''<sup>3</sup>) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved.
 
The determinant of 2-by-2 matrices is given by
:<math>\det \begin{bmatrix}a&b\\c&d\end{bmatrix} = ad-bc.</math>
The determinant of 3-by-3 matrices involves 6 terms ([[rule of Sarrus]]). The more lengthy [[Leibniz formula for determinants|Leibniz formula]] generalises these two formulae to all dimensions.<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Definition III.2.1 }}</ref>
 
The determinant of a product of square matrices equals the product of their determinants:
:{{nowrap begin}}det('''AB''') = det('''A''') · det('''B''').<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Theorem III.2.12 }}</ref>{{nowrap end}}
Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1.<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Corollary III.2.16 }}</ref> Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix. Finally, the [[Laplace expansion]] expresses the determinant in terms of [[minor (linear algebra)|minors]], i.e., determinants of smaller matrices.<ref>{{Harvard citations |last1=Mirsky |year=1990 |nb=yes |loc=Theorem 1.4.1 }}</ref> This expansion can be used for a recursive definition of determinants (taking as starting case the determinant of a 1-by-1 matrix, which is its unique entry, or even the determinant of a 0-by-0 matrix, which is 1), that can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve [[linear system]]s using [[Cramer's rule]], where the division of the determinants of two related square matrices equates to the value of each of the system's variables.<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Theorem III.3.18 }}</ref>
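
A sketch of the 2-by-2 formula and the product rule for determinants (the matrices are arbitrary examples):

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(np.linalg.det(A))  # ad - bc = 1*4 - 2*3 = -2 (up to rounding)

# det(AB) = det(A) * det(B).
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
</syntaxhighlight>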
 
===Eigenvalues and eigenvectors===
{{Main|Eigenvalue, eigenvector and eigenspace|l1=Eigenvalues and eigenvectors}}
A number λ and a non-zero vector '''v''' satisfying
:'''Av''' = λ'''v'''
are called an ''eigenvalue'' and an ''eigenvector'' of '''A''', respectively.<ref group="nb">''Eigen'' means "own" in [[German language|German]] and in [[Dutch language|Dutch]].</ref><ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Definition III.4.1 }}</ref> The number λ is an eigenvalue of an ''n''×''n''-matrix '''A''' if and only if '''A'''−λ'''I'''<sub>''n''</sub> is not invertible, which is [[logical equivalence|equivalent]] to
:<math>\det(\mathsf{A}-\lambda \mathsf{I}) = 0.\ </math><ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Definition III.4.9 }}</ref>
The polynomial ''p''<sub>'''A'''</sub> in an [[indeterminate (variable)|indeterminate]] ''X'' given by evaluating the determinant det(''X'''''I'''<sub>''n''</sub>−'''A''') is called the [[characteristic polynomial]] of '''A'''. It is a [[monic polynomial]] of [[degree of a polynomial|degree]] ''n''. Therefore, the polynomial equation ''p''<sub>'''A'''</sub>(λ)&nbsp;=&nbsp;0 has at most ''n'' different solutions, i.e., eigenvalues of the matrix.<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Corollary III.4.10 }}</ref> They may be complex even if the entries of '''A''' are real. According to the [[Cayley–Hamilton theorem]], {{nowrap begin}}''p''<sub>'''A'''</sub>('''A''') = '''0'''{{nowrap end}}, that is, the result of substituting the matrix itself into its own characteristic polynomial yields the [[zero matrix]].
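
A sketch checking the eigenvalue equation and the Cayley–Hamilton theorem for a 2-by-2 example, where the characteristic polynomial is ''p''<sub>'''A'''</sub>(''X'') = ''X''<sup>2</sup> − tr('''A''')''X'' + det('''A'''):

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

w, V = np.linalg.eig(A)  # eigenvalues and a matrix whose columns are eigenvectors
for lam, v in zip(w, V.T):
    assert np.allclose(A @ v, lam * v)  # Av = (lambda)v

# Cayley-Hamilton for n = 2: substituting A into its own characteristic
# polynomial X^2 - tr(A) X + det(A) gives the zero matrix.
p_of_A = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
assert np.allclose(p_of_A, np.zeros((2, 2)))
</syntaxhighlight>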
 
==Notes==
{{Reflist|colwidth=30em}}
{{Reflist|group=nb|3}}
 
{{linear algebra}}
[[Category:Matrices| ]]
