[[File:Symmetric_group_3;_Cayley_table;_matrices.svg|thumb|320px|Matrices describing the permutations of 3 elements<br> The [[Matrix multiplication|product]] of two permutation matrices is a permutation matrix as well.<br><br>These are the positions of the six matrices:<br>[[File:Symmetric_group_3;_Cayley_table;_positions.svg|310px]]<br>(They are also permutation matrices.)]]
In [[mathematics]], in [[Matrix (mathematics)|matrix theory]], a '''permutation matrix''' is a square [[binary matrix]] that has exactly one entry 1 in each row and each column and 0s elsewhere. Each such matrix represents a specific [[permutation]] of ''m'' elements and, when used to multiply another matrix, can produce that permutation in the rows or columns of the other matrix.
 
== Definition ==
 
Given a permutation &pi; of ''m'' elements,
:<math>\pi : \lbrace 1, \ldots, m \rbrace \to \lbrace 1, \ldots, m \rbrace</math>
given in two-line form by
:<math>\begin{pmatrix} 1 & 2 & \cdots & m \\ \pi(1) & \pi(2) & \cdots & \pi(m) \end{pmatrix},</math>
its permutation matrix is the ''m &times; m'' matrix ''P''<sub>&pi;</sub> whose entries are all 0 except that in row ''i'', the entry in column &pi;(''i'') equals 1.  We may write
:<math>P_\pi = \begin{bmatrix} \mathbf e_{\pi(1)} \\ \mathbf e_{\pi(2)} \\ \vdots \\ \mathbf e_{\pi(m)} \end{bmatrix},</math>
where <math>\mathbf e_j</math> denotes a row vector of length ''m'' with 1 in the ''j''th position and 0 in every other position.<ref name=Bru2>Brualdi (2006) p.2</ref>
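This definition translates directly into a short computation. The following is a minimal sketch in Python with NumPy; the helper name <code>perm_matrix</code> and the 0-based indexing (the text above is 1-based) are conventions of the example only.

<syntaxhighlight lang="python">
import numpy as np

def perm_matrix(pi):
    """Permutation matrix whose i-th row is the basis row vector e_{pi[i]}.

    Here pi is a permutation of 0, ..., m-1 given as a sequence with
    pi[i] = pi(i); this is the 0-based analogue of the definition above.
    """
    pi = np.asarray(pi)
    return np.eye(len(pi), dtype=int)[pi]   # row i of the result is row pi[i] of the identity

# The permutation 1 -> 2, 2 -> 3, 3 -> 1 (1-based) is [1, 2, 0] in 0-based form.
print(perm_matrix([1, 2, 0]))
# [[0 1 0]
#  [0 0 1]
#  [1 0 0]]
</syntaxhighlight>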
 
== Properties ==
 
Given two permutations &pi; and &sigma; of ''m'' elements and the corresponding permutation matrices ''P''<sub>&pi;</sub> and ''P''<sub>&sigma;</sub>
:<math>P_{\sigma} P_{\pi}  = P_{\pi\,\circ\,\sigma} </math>
This somewhat unfortunate rule is a consequence of the definitions of multiplication of permutations (composition of bijections) and of matrices, and of the choice of using the vectors <math>\mathbf{e}_{\pi(i)}</math> as rows of the permutation matrix; if one had used columns instead then the product above would have been equal to <math>P_{\sigma\,\circ\,\pi}</math> with the permutations in their original order.
 
As permutation matrices are [[orthogonal matrix|orthogonal matrices]] (i.e., <math>P_{\pi}P_{\pi}^{T} = I</math>), the inverse matrix exists and can be written as
:<math>P_{\pi}^{-1} = P_{\pi^{-1}} = P_{\pi}^{T}.</math>
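Both identities can be checked numerically. Below is a hedged sketch (the helper <code>perm_matrix</code> and the particular permutations are the example's own; 0-based indices are used, and <code>np.argsort</code> of a permutation yields its inverse):

<syntaxhighlight lang="python">
import numpy as np

def perm_matrix(pi):
    # Row i is the basis row vector e_{pi[i]} (0-based), as in the definition.
    return np.eye(len(pi), dtype=int)[np.asarray(pi)]

pi    = np.array([2, 0, 3, 1])     # a permutation of {0, 1, 2, 3}
sigma = np.array([1, 3, 0, 2])
P_pi, P_sigma = perm_matrix(pi), perm_matrix(sigma)

# P_sigma P_pi = P_{pi o sigma}: (pi o sigma)(i) = pi(sigma(i)), i.e. pi[sigma] in NumPy.
assert np.array_equal(P_sigma @ P_pi, perm_matrix(pi[sigma]))

# Orthogonality: P_pi P_pi^T = I, so the transpose is the inverse,
# and it is also the matrix of the inverse permutation.
assert np.array_equal(P_pi @ P_pi.T, np.eye(4, dtype=int))
assert np.array_equal(P_pi.T, perm_matrix(np.argsort(pi)))
</syntaxhighlight>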
 
Multiplying <math>P_{\pi}</math> times a [[column vector]] '''g''' will permute the rows of the vector:
:<math>P_\pi \mathbf{g}
=
\begin{bmatrix}
\mathbf{e}_{\pi(1)} \\
\mathbf{e}_{\pi(2)} \\
\vdots \\
\mathbf{e}_{\pi(n)}
\end{bmatrix}
 
\begin{bmatrix}
g_1 \\
g_2 \\
\vdots \\
g_n
\end{bmatrix}
=
\begin{bmatrix}
g_{\pi(1)} \\
g_{\pi(2)} \\
\vdots \\
g_{\pi(n)}
\end{bmatrix}.
</math>
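In terms of array indexing this is the statement that multiplying by ''P''<sub>&pi;</sub> is the same as indexing the vector with &pi;. A small sketch (0-based, with an arbitrary example permutation):

<syntaxhighlight lang="python">
import numpy as np

pi = np.array([3, 0, 2, 1])              # 0-based permutation
P  = np.eye(4, dtype=int)[pi]            # rows e_{pi(i)}
g  = np.array([10.0, 20.0, 30.0, 40.0])

# (P g)_i = g_{pi(i)} -- the matrix product equals plain fancy indexing.
assert np.array_equal(P @ g, g[pi])
print(P @ g)                             # [40. 10. 30. 20.]
</syntaxhighlight>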
 
Now applying <math>P_\sigma</math> after applying <math>P_\pi</math> gives the same result as applying <math>P_{\pi\circ\sigma}</math> directly, in accordance with the above multiplication rule: call <math>P_\pi\mathbf{g} = \mathbf{g}'</math>, in other words
:<math>g'_i=g_{\pi(i)}\,</math>
for all ''i'', then
:<math>P_\sigma(P_\pi(\mathbf{g})) = P_\sigma(\mathbf{g}')
=
\begin{bmatrix}
g'_{\sigma(1)} \\
g'_{\sigma(2)} \\
\vdots \\
g'_{\sigma(n)}
\end{bmatrix}
=
\begin{bmatrix}
g_{\pi(\sigma(1))} \\
g_{\pi(\sigma(2))} \\
\vdots \\
g_{\pi(\sigma(n))}
\end{bmatrix}.
</math>
 
Multiplying a [[row vector]] '''h''' times <math>P_{\pi}</math> will permute the columns of the vector by the inverse of &pi;:
:<math>\mathbf{h}P_\pi
=
\begin{bmatrix} h_1 \; h_2 \; \dots \; h_n \end{bmatrix}
 
\begin{bmatrix}
\mathbf{e}_{\pi(1)} \\
\mathbf{e}_{\pi(2)} \\
\vdots \\
\mathbf{e}_{\pi(n)}
\end{bmatrix}
=
\begin{bmatrix} h_{\pi^{-1}(1)} \; h_{\pi^{-1}(2)} \; \dots \; h_{\pi^{-1}(n)} \end{bmatrix}
</math>
 
Again it can be checked that <math>(\mathbf{h}P_\sigma)P_\pi = \mathbf{h}P_{\pi\circ\sigma}</math>.
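The row-vector case admits the same kind of numerical check (again a sketch; <code>np.argsort</code> of a permutation gives its inverse):

<syntaxhighlight lang="python">
import numpy as np

pi = np.array([3, 0, 2, 1])
P  = np.eye(4, dtype=int)[pi]
h  = np.array([10.0, 20.0, 30.0, 40.0])

# (h P)_j = h_{pi^{-1}(j)}, so the product equals indexing with the inverse permutation.
assert np.array_equal(h @ P, h[np.argsort(pi)])
print(h @ P)                             # [20. 40. 30. 10.]
</syntaxhighlight>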
 
== Notes ==
 
Let ''S<sub>n</sub>'' denote the [[symmetric group]], or group of permutations, on {1,2,...,''n''}.  Since there are ''n''! permutations, there are ''n''! permutation matrices.  By the formulas above, the ''n'' &times; ''n'' permutation matrices form a [[Group (mathematics)|group]] under matrix multiplication with the identity matrix as the [[identity element]].
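For small ''n'' the group structure can be verified by brute force. A sketch that enumerates all 3! = 6 permutation matrices and checks closure under multiplication (the variable names are the example's own):

<syntaxhighlight lang="python">
import numpy as np
from itertools import permutations

n = 3
# All n! permutation matrices of order n.
mats = [np.eye(n, dtype=int)[list(p)] for p in permutations(range(n))]
assert len(mats) == 6

# Closure: the product of any two permutation matrices is again a permutation matrix.
for A in mats:
    for B in mats:
        assert any(np.array_equal(A @ B, C) for C in mats)

# The identity matrix is among them and serves as the identity element.
assert any(np.array_equal(np.eye(n, dtype=int), C) for C in mats)
</syntaxhighlight>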
 
If (1) denotes the identity permutation, then ''P''<sub>(1)</sub> is the [[identity matrix]].
 
One can view the permutation matrix of a permutation &sigma; as the permutation &sigma; of the columns of the identity matrix ''I'', or as the permutation &sigma;<sup>&minus;1</sup> of the rows of ''I''.
 
A permutation matrix is a [[doubly stochastic matrix]].  The [[Birkhoff–von Neumann theorem]] says that every doubly stochastic real matrix is a [[convex combination]] of permutation matrices of the same order and the permutation matrices are precisely the [[extreme point]]s of the set of doubly stochastic matrices. That is, the [[Birkhoff polytope]], the set of doubly stochastic matrices, is the [[convex hull]] of the set of permutation matrices.<ref name=Bru19>Brualdi (2006) p.19</ref>
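The converse direction, that any convex combination of permutation matrices is doubly stochastic, is easy to check directly; a sketch with arbitrarily chosen weights:

<syntaxhighlight lang="python">
import numpy as np

P1 = np.eye(3)[[1, 2, 0]]      # two permutation matrices of the same order
P2 = np.eye(3)[[2, 1, 0]]

D = 0.3 * P1 + 0.7 * P2        # convex combination: nonnegative weights summing to 1

# D is doubly stochastic: nonnegative entries, unit row sums and unit column sums.
assert np.all(D >= 0)
assert np.allclose(D.sum(axis=0), 1.0) and np.allclose(D.sum(axis=1), 1.0)
</syntaxhighlight>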
 
The product ''P''<sub>&pi;</sub>''M'', premultiplying a matrix ''M'' by a permutation matrix ''P''<sub>&pi;</sub>, permutes the rows of ''M'': row &pi;(''i'') of ''M'' becomes row ''i'' of the product.  Likewise, ''MP''<sub>&pi;</sub> permutes the columns of ''M''.
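A sketch of both products on a small matrix (0-based, matching the convention of the earlier examples):

<syntaxhighlight lang="python">
import numpy as np

pi = np.array([2, 0, 1])
P  = np.eye(3, dtype=int)[pi]
M  = np.arange(9).reshape(3, 3)

# P M: row i of the product is row pi(i) of M (rows permuted).
assert np.array_equal(P @ M, M[pi, :])

# M P: column j of the product is column pi^{-1}(j) of M (columns permuted).
assert np.array_equal(M @ P, M[:, np.argsort(pi)])
</syntaxhighlight>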
 
The map ''S''<sub>''n''</sub> &rarr; ''A'' &sub; GL(''n'', '''Z'''<sub>2</sub>) that sends a permutation to its permutation matrix, where ''A'' denotes the group of ''n'' &times; ''n'' permutation matrices, is a [[faithful representation]].  Thus, |''A''| = ''n''!.
 
The [[Trace (linear algebra)|trace]] of a permutation matrix is the number of fixed points of the permutation. If the permutation has fixed points, so that it can be written in cycle form as &pi; = (''a''<sub>1</sub>)(''a''<sub>2</sub>)...(''a''<sub>''k''</sub>)&sigma; where &sigma; has no fixed points, then '''''e'''''<sub>''a''<sub>1</sub></sub>, '''''e'''''<sub>''a''<sub>2</sub></sub>, ..., '''''e'''''<sub>''a''<sub>''k''</sub></sub> are [[eigenvector]]s of the permutation matrix.
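A quick numerical check (the example permutation below fixes the 0-based positions 0 and 3):

<syntaxhighlight lang="python">
import numpy as np

pi = np.array([0, 2, 1, 3])            # fixes positions 0 and 3 (0-based)
P  = np.eye(4, dtype=int)[pi]

# The trace counts the fixed points of the permutation.
assert np.trace(P) == np.count_nonzero(pi == np.arange(4))
assert np.trace(P) == 2

# The basis vectors at the fixed points are eigenvectors with eigenvalue 1.
e0, e3 = np.eye(4)[0], np.eye(4)[3]
assert np.array_equal(P @ e0, e0) and np.array_equal(P @ e3, e3)
</syntaxhighlight>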
 
From [[group theory]] we know that any permutation may be written as a product of [[transposition (mathematics)|transposition]]s. Therefore, any permutation matrix ''P'' factors as a product of row-interchanging [[elementary matrix|elementary matrices]], each having determinant &minus;1. Thus the determinant of a permutation matrix ''P'' is just the [[signature of a permutation|signature]] of the corresponding permutation.
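This can be checked by comparing the determinant with the parity of the permutation; in the sketch below the parity is obtained by counting inversions (the helper <code>sign</code> is illustrative):

<syntaxhighlight lang="python">
import numpy as np

def sign(pi):
    # Signature via the inversion count: +1 for even permutations, -1 for odd ones.
    inversions = sum(pi[i] > pi[j]
                     for i in range(len(pi)) for j in range(i + 1, len(pi)))
    return -1 if inversions % 2 else 1

pi = [2, 0, 3, 1]                      # a 4-cycle, hence an odd permutation
P  = np.eye(4)[pi]

assert round(np.linalg.det(P)) == sign(pi) == -1
</syntaxhighlight>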
 
== Examples ==
===Permutation of rows and columns===
When a permutation matrix ''P'' is multiplied with a matrix ''M'' from the left, the product ''PM'' permutes the rows of ''M'' (here the elements of a column vector);<br>when ''P'' is multiplied with ''M'' from the right, the product ''MP'' permutes the columns of ''M'' (here the elements of a row vector):
{|  style="text-align: center; width: 100%;"
|style="width:50%"|[[File:Permutation matrix; P * column.svg|thumb|center|180px|''P'' * (1,2,3,4)<sup>T</sup> = (4,1,3,2)<sup>T</sup>]]
|style="width:50%"|[[File:Permutation matrix; row * P.svg|thumb|center|257px|(1,2,3,4) * ''P'' = (2,4,3,1)]]
|}
 
Examples of such permutations of rows and columns are reflections (see below) and cyclic permutations (see [[Circulant matrix#Properties|cyclic permutation matrix]]).
 
{| class="collapsible collapsed" style="width: 100%; border: 1px solid #aaaaaa;"
! bgcolor="#ccccff"|reflections
|-
|
{|  style="text-align: center; width: 100%;"
|style="width:50%"|[[File:Permutation matrix; row * P^T.svg|thumb|center|257px|(1,2,3,4) * ''P''<sup>T</sup> = (4,1,3,2)]]
|style="width:50%"|[[File:Permutation matrix; P^T * column.svg|thumb|center|180px|''P''<sup>T</sup> * (1,2,3,4)<sup>T</sup> = (2,4,3,1)<sup>T</sup>]]
|}
 
These arrangements of matrices are reflections of those directly above.<br>
This follows from the rule <math>\left( \mathbf{A B} \right) ^\mathrm{T} = \mathbf{B}^\mathrm{T} \mathbf{A}^\mathrm{T} \,</math> &nbsp;&nbsp;&nbsp;&nbsp; (Compare: [[Transpose#Properties|Transpose]])
|}
 
===Permutation of rows===
The permutation matrix ''P''<sub>&pi;</sub> corresponding to the permutation
:<math>\pi=\begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 1 & 4 & 2 & 5 & 3 \end{pmatrix}</math>
is
:<math>P_\pi
=
\begin{bmatrix}
\mathbf{e}_{\pi(1)} \\
\mathbf{e}_{\pi(2)} \\
\mathbf{e}_{\pi(3)} \\
\mathbf{e}_{\pi(4)} \\
\mathbf{e}_{\pi(5)}
\end{bmatrix}
=
\begin{bmatrix}
\mathbf{e}_{1} \\
\mathbf{e}_{4} \\
\mathbf{e}_{2} \\
\mathbf{e}_{5} \\
\mathbf{e}_{3}
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0
\end{bmatrix}.
</math>
 
Given a vector '''g''',
:<math>P_\pi \mathbf{g}
=
\begin{bmatrix}
\mathbf{e}_{\pi(1)} \\
\mathbf{e}_{\pi(2)} \\
\mathbf{e}_{\pi(3)} \\
\mathbf{e}_{\pi(4)} \\
\mathbf{e}_{\pi(5)}
\end{bmatrix}
 
\begin{bmatrix}
g_1 \\
g_2 \\
g_3 \\
g_4 \\
g_5
\end{bmatrix}
=
\begin{bmatrix}
g_1 \\
g_4 \\
g_2 \\
g_5 \\
g_3
\end{bmatrix}.
</math>
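This worked example translates directly into the following sketch (0-based: the permutation &pi; above becomes <code>[0, 3, 1, 4, 2]</code>, and the numbers 1&ndash;5 stand in for ''g''<sub>1</sub>, ..., ''g''<sub>5</sub>):

<syntaxhighlight lang="python">
import numpy as np

pi = np.array([0, 3, 1, 4, 2])     # the permutation from the example, shifted to 0-based
P  = np.eye(5, dtype=int)[pi]      # the 5 x 5 matrix shown above
g  = np.array([1, 2, 3, 4, 5])     # stands in for (g_1, ..., g_5)

print(P @ g)                       # [1 4 2 5 3], i.e. (g_1, g_4, g_2, g_5, g_3)
</syntaxhighlight>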
 
== Explanation ==
A permutation matrix will always be in the form
:<math>\begin{bmatrix}
\mathbf{e}_{a_1} \\
\mathbf{e}_{a_2} \\
\vdots \\
\mathbf{e}_{a_j} \\
\end{bmatrix}</math>
where '''e'''<sub>''a''<sub>''i''</sub></sub> represents the ''a''<sub>''i''</sub>th standard basis vector (as a row) of '''R'''<sup>''j''</sup>, and where
:<math>\begin{pmatrix}
1  & 2  & \ldots & j \\
a_1 & a_2 & \ldots & a_j\end{pmatrix}</math>
is the [[permutation]] form of the permutation matrix.
 
Now, in performing matrix multiplication, one essentially forms the dot product of each row of the first matrix with each column of the second. In this instance, we form the dot product of each row of this matrix with the vector of elements we want to permute. That is, for '''v''' = (''g''<sub>1</sub>, ..., ''g''<sub>''j''</sub>)<sup>T</sup>,
:'''e'''<sub>''a''<sub>''i''</sub></sub>&middot;'''v'''=''g''<sub>''a''<sub>''i''</sub></sub>
 
So, the product of the permutation matrix with the vector '''v''' above will be a vector of the form (''g''<sub>''a''<sub>1</sub></sub>, ''g''<sub>''a''<sub>2</sub></sub>, ..., ''g''<sub>''a''<sub>''j''</sub></sub>), and this is then a permutation of '''v''' since we have said that the permutation form is
:<math>\begin{pmatrix}
1  & 2  & \ldots & j \\
a_1 & a_2 & \ldots & a_j\end{pmatrix}.</math>
So, permutation matrices do indeed permute the order of elements in vectors multiplied with them.
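The dot-product argument corresponds line by line to the following sketch (the names <code>a</code> and <code>v</code> are the example's own):

<syntaxhighlight lang="python">
import numpy as np

a = [1, 3, 0, 2]                     # bottom row of the two-line form, 0-based
v = np.array([5.0, 6.0, 7.0, 8.0])   # the vector of elements to permute

rows = np.eye(4)[a]                  # each row is the basis row vector e_{a_i}
# Each entry of the product is the dot product e_{a_i} . v = v_{a_i}.
result = np.array([np.dot(row, v) for row in rows])

assert np.array_equal(result, v[a])
print(result)                        # [6. 8. 5. 7.]
</syntaxhighlight>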
 
== See also ==
* [[Alternating sign matrix]]
* [[Generalized permutation matrix]]
 
==References==
{{reflist}}
* {{cite book | last=Brualdi | first=Richard A. | title=Combinatorial matrix classes | series=Encyclopedia of Mathematics and Its Applications | volume=108 | location=Cambridge | publisher=[[Cambridge University Press]] | year=2006 | isbn=0-521-86565-4 | zbl=1106.05001 }}
 
[[Category:Matrices]]
[[Category:Permutations]]
[[Category:Sparse matrices]]
