{{About|the cross product of two vectors in three-dimensional Euclidean space}}
In [[mathematics]], the '''cross product''' or '''vector product''' is a [[binary operation]] on two [[Euclidean vector|vector]]s in three-dimensional [[Euclidean space|space]]. It results in a vector which is [[perpendicular]] to both and therefore [[Normal (geometry)|normal]] to the plane containing them. It has many applications in mathematics, [[physics]], and [[engineering]].

If the vectors have the same direction or one has zero length, then their cross product is zero. More generally, the magnitude of the product equals the area of a parallelogram with the vectors for sides; in particular, for perpendicular vectors this is a rectangle and the magnitude of the product is the product of their lengths. The cross product is [[anticommutativity|anticommutative]] and [[distributivity|distributive]] over addition. The space and product form an [[algebra over a field]], which is neither [[commutative]] nor [[associative]], but is a [[Lie algebra]] with the cross product being the Lie bracket.

Like the [[dot product]], it depends on the [[metric space|metric]] of Euclidean space, but unlike the dot product, it also depends on the choice of [[orientation (mathematics)|orientation]] or "handedness". The product can be generalized in various ways; it can be made independent of orientation by changing the result to a [[pseudovector]], or in arbitrary dimensions the [[exterior algebra|exterior product]] of vectors can be used with a [[bivector]] or [[two-form]] result. Also, using the orientation and metric structure just as for the traditional 3-dimensional cross product, one can in ''n'' dimensions take the product of ''n'' − 1 vectors to produce a vector perpendicular to all of them. But if the product is limited to non-trivial binary products with vector results, it exists only in three and [[Seven-dimensional cross product|seven dimensions]].<ref name=Massey2>{{cite journal |title=Cross products of vectors in higher dimensional Euclidean spaces |author=WS Massey |year=1983 |jstor=2323537 |quote=If one requires only three basic properties of the cross product ... it turns out that a cross product of vectors exists only in 3-dimensional and 7-dimensional Euclidean space. |pages=697–701 |journal=The American Mathematical Monthly |volume=90 |issue=10 |ref=harv |doi=10.2307/2323537}}</ref>

[[Image:Cross product vector.svg|thumb|right|The cross product with respect to a right-handed coordinate system]]
== Definition ==

[[Image:Right hand rule cross product.svg|right|thumb|Finding the direction of the cross product by the [[right-hand rule]]]]

The cross product of two vectors '''a''' and '''b''' is defined only in three-dimensional space and is denoted by {{nowrap|1='''a''' × '''b'''}}. In [[physics]], the notation {{nowrap|1='''a''' ∧ '''b'''}} is sometimes used,<ref>{{cite book|author=Jeffreys, H and Jeffreys, BS|title=Methods of mathematical physics|year=1999|publisher=Cambridge University Press|url=http://worldcat.org/oclc/41158050?tab=details}}</ref> though this is avoided in mathematics to prevent confusion with the [[exterior product]].

The cross product {{nowrap|'''a''' × '''b'''}} is defined as a vector '''c''' that is [[perpendicular]] to both '''a''' and '''b''', with a direction given by the [[right-hand rule]] <!-- this is how first time students, who also use right-hand coordinates, learn --> and a magnitude equal to the area of the [[parallelogram]] that the vectors span.

The cross product is defined by the formula<ref>{{harvnb|Wilson|1901|page=60–61}}</ref><ref name=Cullen>{{cite book |title=Advanced engineering mathematics |author=Dennis G. Zill, Michael R. Cullen |edition=3rd |year=2006 |publisher=Jones & Bartlett Learning |url=http://books.google.com/?id=x7uWk8lxVNYC&pg=PA324 |page=324 |chapter=Definition 7.4: Cross product of two vectors |isbn=0-7637-4591-X}}</ref>

:<math>\mathbf{a} \times \mathbf{b} = \left\| \mathbf{a} \right\| \left\| \mathbf{b} \right\| \sin \theta \ \mathbf{n}</math>

where ''θ'' is the [[angle]] between '''a''' and '''b''' in the plane containing them, ‖'''a'''‖ and ‖'''b'''‖ are the [[Magnitude (vector)|magnitudes]] of the vectors '''a''' and '''b''', and '''n''' is a [[unit vector]] [[perpendicular]] to the plane containing '''a''' and '''b''', in the direction given by the right-hand rule (illustrated). If the vectors '''a''' and '''b''' are parallel (i.e., the angle ''θ'' between them is either 0° or 180°), then by the above formula the cross product of '''a''' and '''b''' is the [[zero vector]] '''0'''.

[[Image:Cross product animation.gif|left|thumb|The cross product (vertical) changes as the angle between the vectors changes]]

The direction of the vector '''n''' is given by the right-hand rule: point the forefinger of the right hand in the direction of '''a''' and the middle finger in the direction of '''b'''; the vector '''n''' then points along the thumb (see the picture on the right). Using this rule implies that the cross product is [[Anticommutativity|anti-commutative]], i.e., {{nowrap|1='''b''' × '''a''' = −('''a''' × '''b''')}}. By pointing the forefinger toward '''b''' first, and then pointing the middle finger toward '''a''', the thumb is forced in the opposite direction, reversing the sign of the product vector.

Using the cross product requires the handedness of the coordinate system to be taken into account (as is explicit in the definition above). If a [[Cartesian coordinate system#Orientation and handedness|left-handed coordinate system]] is used, the direction of the vector '''n''' is given by the left-hand rule and points in the opposite direction.

This, however, creates a problem, because transforming from one arbitrary reference system to another (''e.g.'', a mirror image transformation from a right-handed to a left-handed coordinate system) should not change the direction of '''n'''. The problem is clarified by realizing that the cross product of two vectors is not a (true) vector, but rather a ''[[pseudovector]]''. See [[Cross product#Cross product and handedness|cross product and handedness]] for more detail.
== Names ==

[[File:Sarrus rule.svg|upright=1.25|thumb|right|According to [[Sarrus' rule]], the [[determinant]] of a 3×3 matrix involves multiplications between matrix elements identified by crossed diagonals]]

In 1881, [[Josiah Willard Gibbs]], and independently [[Oliver Heaviside]], introduced both the [[dot product]] and the cross product, using a period ({{nowrap|1='''a . b'''}}) and an "x" ({{nowrap|1='''a''' x '''b'''}}), respectively, to denote them.<ref name=ucd>[https://www.math.ucdavis.edu/~temple/MAT21D/SUPPLEMENTARY-ARTICLES/Crowe_History-of-Vectors.pdf ''A History of Vector Analysis'' by Michael J. Crowe], Math. UC Davis</ref>

In 1877, to emphasize the fact that the result of a dot product is a [[scalar (mathematics)|scalar]] while the result of a cross product is a [[Euclidean vector|vector]], [[William Kingdon Clifford]] coined the alternative names '''scalar product''' and '''vector product''' for the two operations.<ref name=ucd/> These alternative names are still widely used in the literature.

Both the cross notation ({{nowrap|1='''a''' × '''b'''}}) and the name '''cross product''' were possibly inspired by the fact that each [[scalar component]] of {{nowrap|1='''a''' × '''b'''}} is computed by multiplying non-corresponding components of '''a''' and '''b'''. Conversely, a dot product {{nowrap|1='''a · b'''}} involves multiplications between corresponding components of '''a''' and '''b'''. As explained [[#Matrix notation|below]], the cross product can be expressed in the form of a [[determinant]] of a special 3×3 matrix. According to [[Sarrus' rule]], this involves multiplications between matrix elements identified by crossed diagonals.
==Computing the cross product==

===Coordinate notation===

[[Image:3D Vector.svg|300px|thumb|right|[[Standard basis]] vectors ('''i''', '''j''', '''k''', also denoted '''e'''<sub>1</sub>, '''e'''<sub>2</sub>, '''e'''<sub>3</sub>) and [[vector component]]s of '''a''' ('''a'''<sub>x</sub>, '''a'''<sub>y</sub>, '''a'''<sub>z</sub>, also denoted '''a'''<sub>1</sub>, '''a'''<sub>2</sub>, '''a'''<sub>3</sub>)]]

The [[standard basis]] vectors '''i''', '''j''', and '''k''' satisfy the following equalities:

:<math>\begin{align}
\mathbf{i}&=\mathbf{j}\times\mathbf{k}\\
\mathbf{j}&=\mathbf{k}\times\mathbf{i}\\
\mathbf{k}&=\mathbf{i}\times\mathbf{j}
\end{align}</math>

which imply, by the [[anticommutativity]] of the cross product, that

:<math>\begin{align}
\mathbf{k}\times\mathbf{j}&=-\mathbf{i}\\
\mathbf{i}\times\mathbf{k}&=-\mathbf{j}\\
\mathbf{j}\times\mathbf{i}&=-\mathbf{k}
\end{align}</math>

The definition of the cross product also implies that

:<math>\mathbf{i}\times\mathbf{i}=\mathbf{j}\times\mathbf{j}=\mathbf{k}\times\mathbf{k}=\mathbf{0}</math> (the [[zero vector]]).

These equalities, together with the [[distributivity]] and [[linearity]] of the cross product, are sufficient to determine the cross product of any two vectors '''u''' and '''v'''. Each vector can be written as the sum of three orthogonal components parallel to the standard basis vectors:

:<math>\mathbf{u}=u_1\mathbf{i}+u_2\mathbf{j}+u_3\mathbf{k}</math>
:<math>\mathbf{v}=v_1\mathbf{i}+v_2\mathbf{j}+v_3\mathbf{k}</math>

Their cross product {{nowrap|1='''u''' × '''v'''}} can be expanded using [[distributivity]]:

:<math>\begin{align}
\mathbf{u}\times\mathbf{v}=&(u_1\mathbf{i}+u_2\mathbf{j}+u_3\mathbf{k})\times(v_1\mathbf{i}+v_2\mathbf{j}+v_3\mathbf{k})\\
=&u_1v_1(\mathbf{i}\times\mathbf{i})+u_1v_2(\mathbf{i}\times\mathbf{j})+u_1v_3(\mathbf{i}\times\mathbf{k})\\
&+u_2v_1(\mathbf{j}\times\mathbf{i})+u_2v_2(\mathbf{j}\times\mathbf{j})+u_2v_3(\mathbf{j}\times\mathbf{k})\\
&+u_3v_1(\mathbf{k}\times\mathbf{i})+u_3v_2(\mathbf{k}\times\mathbf{j})+u_3v_3(\mathbf{k}\times\mathbf{k})
\end{align}</math>

This can be interpreted as the decomposition of {{nowrap|1='''u''' × '''v'''}} into the sum of nine simpler cross products involving vectors aligned with '''i''', '''j''', or '''k'''. Each of these nine cross products operates on two vectors that are easy to handle, as they are either parallel or orthogonal to each other. From this decomposition, by using the above-mentioned equalities and collecting similar terms, we obtain:

:<math>\begin{align}
\mathbf{u}\times\mathbf{v}=&u_1v_1\mathbf{0}+u_1v_2\mathbf{k}-u_1v_3\mathbf{j}\\
&-u_2v_1\mathbf{k}+u_2v_2\mathbf{0}+u_2v_3\mathbf{i}\\
&+u_3v_1\mathbf{j}-u_3v_2\mathbf{i}+u_3v_3\mathbf{0}\\
=&(u_2v_3-u_3v_2)\mathbf{i}+(u_3v_1-u_1v_3)\mathbf{j}+(u_1v_2-u_2v_1)\mathbf{k}
\end{align}</math>

meaning that the three [[scalar component]]s of the resulting vector '''s''' = ''s''<sub>1</sub>'''i''' + ''s''<sub>2</sub>'''j''' + ''s''<sub>3</sub>'''k''' = {{nowrap|1='''u''' × '''v'''}} are

:<math>\begin{align}
s_1&=u_2v_3-u_3v_2\\
s_2&=u_3v_1-u_1v_3\\
s_3&=u_1v_2-u_2v_1
\end{align}</math>

Using [[column vector]]s, we can represent the same result as follows:

:<math>\begin{pmatrix}s_1\\s_2\\s_3\end{pmatrix}=\begin{pmatrix}u_2v_3-u_3v_2\\u_3v_1-u_1v_3\\u_1v_2-u_2v_1\end{pmatrix}</math>
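The component formula above translates directly into code. The following is a minimal Python sketch (the helper names and sample vectors are illustrative, not from the article) that computes the components and checks the defining properties: the result is perpendicular to both inputs and its magnitude is ‖'''u'''‖ ‖'''v'''‖ sin ''θ''.

```python
import math

def cross(u, v):
    """Cross product of two 3-D vectors via the component formula."""
    return (u[1] * v[2] - u[2] * v[1],   # s1 = u2*v3 - u3*v2
            u[2] * v[0] - u[0] * v[2],   # s2 = u3*v1 - u1*v3
            u[0] * v[1] - u[1] * v[0])   # s3 = u1*v2 - u2*v1

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

u, v = (1.0, 2.0, 3.0), (4.0, 0.0, -1.0)
s = cross(u, v)

# s is perpendicular to both u and v ...
assert abs(dot(s, u)) < 1e-12 and abs(dot(s, v)) < 1e-12

# ... and its magnitude equals ||u|| ||v|| sin(theta)
theta = math.acos(dot(u, v) / (norm(u) * norm(v)))
assert math.isclose(norm(s), norm(u) * norm(v) * math.sin(theta))
```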
===Matrix notation===

The cross product can also be expressed as the [[formal calculation|formal]]<ref group="note">Here, "formal" means that this notation has the form of a determinant, but does not strictly adhere to the definition; it is a mnemonic used to remember the expansion of the cross product.</ref> [[determinant]]:

:<math>\mathbf{u\times v}=\begin{vmatrix}
\mathbf{i}&\mathbf{j}&\mathbf{k}\\
u_1&u_2&u_3\\
v_1&v_2&v_3\\
\end{vmatrix}</math>

This determinant can be computed using [[Rule of Sarrus|Sarrus' rule]] or [[cofactor expansion]]. Using Sarrus' rule, it expands to

:<math>\mathbf{u\times v}=(u_2v_3\mathbf{i}+u_3v_1\mathbf{j}+u_1v_2\mathbf{k})-(u_3v_2\mathbf{i}+u_1v_3\mathbf{j}+u_2v_1\mathbf{k}).</math>

Using cofactor expansion along the first row instead, it expands to<ref name=Cullen2>{{cite book |title=''cited work'' |url=http://books.google.com/?id=x7uWk8lxVNYC&pg=PA321 |page=321 |chapter=Equation 7: '''a''' × '''b''' as sum of determinants |isbn=0-7637-4591-X |author=Dennis G. Zill, Michael R. Cullen |publisher=Jones & Bartlett Learning |year=2006}}</ref>

:<math>\mathbf{u\times v}=
\begin{vmatrix}u_2&u_3\\v_2&v_3\end{vmatrix}\mathbf{i}
-\begin{vmatrix}u_1&u_3\\v_1&v_3\end{vmatrix}\mathbf{j}
+\begin{vmatrix}u_1&u_2\\v_1&v_2\end{vmatrix}\mathbf{k}
</math>

which gives the components of the resulting vector directly.
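As a sanity check on the cofactor expansion, here is a short Python sketch (the helper names are illustrative) that builds each component from the three 2×2 minors and reproduces the basis relations:

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

def cross_cofactor(u, v):
    """Cross product via cofactor expansion along the first row
    of the formal determinant |i j k; u1 u2 u3; v1 v2 v3|."""
    u1, u2, u3 = u
    v1, v2, v3 = v
    return (det2(u2, u3, v2, v3),     # +|u2 u3; v2 v3| along i
            -det2(u1, u3, v1, v3),    # -|u1 u3; v1 v3| along j
            det2(u1, u2, v1, v2))     # +|u1 u2; v1 v2| along k

# Reproduces i x j = k, j x k = i, k x i = j
assert cross_cofactor((1, 0, 0), (0, 1, 0)) == (0, 0, 1)
assert cross_cofactor((0, 1, 0), (0, 0, 1)) == (1, 0, 0)
assert cross_cofactor((0, 0, 1), (1, 0, 0)) == (0, 1, 0)
```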
== Properties ==

=== Geometric meaning ===
{{See also|Triple product}}
[[Image:Cross product parallelogram.svg|right|thumb|Figure 1. The area of a parallelogram as a cross product]]
[[Image:Parallelepiped volume.svg|right|thumb|240px|Figure 2. Three vectors defining a parallelepiped]]

The [[Euclidean norm|magnitude]] of the cross product can be interpreted as the positive [[area]] of the [[parallelogram]] having '''a''' and '''b''' as sides (see Figure 1):

:<math>A = \left\| \mathbf{a} \times \mathbf{b} \right\| = \left\| \mathbf{a} \right\| \left\| \mathbf{b} \right\| \sin \theta. \,\!</math>

Indeed, one can also compute the volume ''V'' of a [[parallelepiped]] having '''a''', '''b''' and '''c''' as sides by using a combination of a cross product and a dot product, called the [[scalar triple product]] (see Figure 2):

:<math>
\mathbf{a}\cdot(\mathbf{b}\times \mathbf{c})=
\mathbf{b}\cdot(\mathbf{c}\times \mathbf{a})=
\mathbf{c}\cdot(\mathbf{a}\times \mathbf{b}).
</math>

Since the result of the scalar triple product may be negative, the volume of the parallelepiped is given by its absolute value:

:<math>V = |\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})|.</math>
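Both geometric interpretations can be checked numerically. A minimal Python sketch, with illustrative sample vectors chosen so the results are exact integers:

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a, b, c = (2, 0, 0), (1, 3, 0), (0, 1, 4)

# Area of the parallelogram spanned by a and b: ||a x b||
ab = cross(a, b)
area = math.sqrt(dot(ab, ab))
assert area == 6.0                 # base 2, height 3

# The scalar triple product is invariant under cyclic permutation
v1 = dot(a, cross(b, c))
v2 = dot(b, cross(c, a))
v3 = dot(c, cross(a, b))
assert v1 == v2 == v3 == 24

# The parallelepiped volume is its absolute value
V = abs(v1)
assert V == 24
```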
Because the magnitude of the cross product goes by the sine of the angle between its arguments, the cross product can be thought of as a measure of ''perpendicularity'' in the same way that the [[dot product]] is a measure of ''parallelism''. Given two [[unit vectors]], their cross product has a magnitude of 1 if the two are perpendicular and a magnitude of zero if the two are parallel. The opposite is true for the dot product of two unit vectors.

Unit vectors enable two convenient identities: the dot product of two unit vectors yields the cosine (which may be positive or negative) of the angle between the two unit vectors. The magnitude of the cross product of two unit vectors yields the sine (which will always be positive).

=== Algebraic properties ===

[[File:Cross product distributivity.svg|250px|thumb|Cross product distributivity over vector addition. The vectors '''b''' and '''c''' are resolved into components parallel and perpendicular to '''a''': parallel components vanish in the cross product, perpendicular ones remain. The planes indicate the axial vectors normal to those planes, and are '''''not''''' [[bivector]]s.<ref>{{cite book|title=Vector Analysis|author=M. R. Spiegel, S. Lipschutz, D. Spellman|series=Schaum's outlines|year=2009|page=29|publisher=McGraw Hill|isbn=978-0-07-161545-7}}</ref>]]

* If the cross product of two vectors is the zero vector ('''a''' × '''b''' = '''0'''), then either at least one of them is the zero vector ('''a''' = '''0''' or '''b''' = '''0'''), or else they are parallel or antiparallel ('''a''' ∥ '''b'''), so that the sine of the angle between them is zero (''θ'' = 0° or ''θ'' = 180°, and sin ''θ'' = 0).

* The self cross product of a vector is the zero vector: '''a''' × '''a''' = '''0'''.

* The cross product is [[anticommutative]],

:<math>\mathbf{a} \times \mathbf{b} = -\mathbf{b} \times \mathbf{a},</math>

* [[distributive]] over addition,

: <math>\mathbf{a} \times (\mathbf{b} + \mathbf{c}) = (\mathbf{a} \times \mathbf{b}) + (\mathbf{a} \times \mathbf{c}),</math>

* and compatible with scalar multiplication, so that

:<math>(r\mathbf{a}) \times \mathbf{b} = \mathbf{a} \times (r\mathbf{b}) = r(\mathbf{a} \times \mathbf{b}).</math>

* It is not [[associative]], but satisfies the [[Jacobi identity]]:

:<math>\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) + \mathbf{b} \times (\mathbf{c} \times \mathbf{a}) + \mathbf{c} \times (\mathbf{a} \times \mathbf{b}) = \mathbf{0}.</math>

Distributivity, linearity and the Jacobi identity show that the '''R'''<sup>3</sup> [[Euclidean space#Real coordinate space|vector space]] together with vector addition and the cross product forms a [[Lie algebra]], the Lie algebra of the real [[orthogonal group]] in 3 dimensions, [[SO(3)]].

* The cross product does not obey the [[cancellation law]]: '''a''' × '''b''' = '''a''' × '''c''' with non-zero '''a''' does not imply that '''b''' = '''c'''. Instead, if '''a''' × '''b''' = '''a''' × '''c''', then:

:<math>\begin{align}
\mathbf{0} &= (\mathbf{a} \times \mathbf{b}) - (\mathbf{a} \times \mathbf{c})\\
&= \mathbf{a} \times (\mathbf{b} - \mathbf{c}).
\end{align}</math>

If neither '''a''' nor '''b''' − '''c''' is zero, then from the definition of the cross product the angle between them must be zero and they must be parallel. They are related by a scale factor, so one of '''b''' or '''c''' can be expressed in terms of the other, for example

: <math>\mathbf{c} = \mathbf{b} + t\mathbf{a},</math>

for some scalar ''t''.

* If '''a''' · '''b''' = '''a''' · '''c''' and '''a''' × '''b''' = '''a''' × '''c''' for a non-zero vector '''a''', then '''b''' = '''c''', as

:<math>\mathbf{a} \times (\mathbf{b} - \mathbf{c}) = \mathbf{0}</math> and
:<math>\mathbf{a} \cdot (\mathbf{b} - \mathbf{c}) = 0,</math>

so '''b''' − '''c''' is both parallel and perpendicular to the non-zero vector '''a''', something that is only possible if '''b''' − '''c''' = '''0''', so '''b''' and '''c''' are identical.

* From the geometrical definition, the cross product is invariant under [[rotation]]s about the axis defined by '''a''' × '''b'''. More generally, the cross product obeys the following identity under [[matrix (math)|matrix]] transformations:

:<math>(M\mathbf{a}) \times (M\mathbf{b}) = (\det M) M^{-T}(\mathbf{a} \times \mathbf{b})</math>

where <math>\scriptstyle M</math> is a 3-by-3 [[matrix (math)|matrix]] and <math>\scriptstyle M^{-T}</math> is the [[transpose]] of the [[inverse matrix|inverse]].

* The cross product of two vectors in 3-D always lies in the [[null space]] of the matrix with the vectors as rows:

:<math>\mathbf{a} \times \mathbf{b} \in NS\left(\begin{bmatrix}\mathbf{a} \\ \mathbf{b}\end{bmatrix}\right).</math>

* For the sum of two cross products, the following identity holds:

:<math>\mathbf{a} \times \mathbf{b} + \mathbf{c} \times \mathbf{d} = (\mathbf{a} - \mathbf{c}) \times (\mathbf{b} - \mathbf{d}) + \mathbf{a} \times \mathbf{d} + \mathbf{c} \times \mathbf{b}.</math>
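The algebraic properties above can be spot-checked numerically. A short Python sketch (illustrative helper names and sample vectors; integer arithmetic keeps the checks exact):

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

def neg(u):
    return tuple(-x for x in u)

def sub(u, v):
    return add(u, neg(v))

a, b, c, d = (1, 2, 3), (4, 5, 6), (7, 8, 10), (1, -1, 2)

# Anticommutativity: a x b = -(b x a)
assert cross(a, b) == neg(cross(b, a))

# Self cross product is the zero vector
assert cross(a, a) == (0, 0, 0)

# Jacobi identity: a x (b x c) + b x (c x a) + c x (a x b) = 0
jacobi = add(add(cross(a, cross(b, c)),
                 cross(b, cross(c, a))),
             cross(c, cross(a, b)))
assert jacobi == (0, 0, 0)

# Sum identity: a x b + c x d = (a - c) x (b - d) + a x d + c x b
lhs = add(cross(a, b), cross(c, d))
rhs = add(add(cross(sub(a, c), sub(b, d)), cross(a, d)), cross(c, b))
assert lhs == rhs
```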
=== Differentiation ===
{{Main|Vector-valued_function#Derivative_and_vector_multiplication|l1=Vector-valued function: Derivative and vector multiplication}}

The [[product rule]] applies to the cross product in a similar manner:

:<math>\frac{d}{dx}(\mathbf{a} \times \mathbf{b}) = \frac{d\mathbf{a}}{dx} \times \mathbf{b} + \mathbf{a} \times \frac{d\mathbf{b}}{dx}.</math>

This identity can be easily proved using the [[#Conversion to matrix multiplication|matrix multiplication]] representation.
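The product rule can also be confirmed numerically: the sketch below (illustrative vector-valued functions and derivatives, chosen by hand) compares a central-difference approximation of d/dx ('''a''' × '''b''') against the product-rule expression.

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# Hand-picked vector-valued functions and their exact derivatives
def a(x):  return (x, x * x, 1.0)
def da(x): return (1.0, 2 * x, 0.0)
def b(x):  return (1.0, x, x ** 3)
def db(x): return (0.0, 1.0, 3 * x * x)

x, h = 0.7, 1e-5

# Central-difference approximation of d/dx (a x b)
num = tuple((p - m) / (2 * h)
            for p, m in zip(cross(a(x + h), b(x + h)),
                            cross(a(x - h), b(x - h))))

# Product rule: (da/dx) x b + a x (db/dx)
exact = tuple(p + q for p, q in zip(cross(da(x), b(x)),
                                    cross(a(x), db(x))))

assert all(abs(n - e) < 1e-6 for n, e in zip(num, exact))
```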
=== Triple product expansion ===
{{Main|Triple product}}

The cross product is used in both forms of the triple product. The [[scalar triple product]] of three vectors is defined as

:<math>\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}).</math>

It is the signed volume of the [[parallelepiped]] with edges '''a''', '''b''' and '''c''', and as such the vectors can be used in any order that is an [[even permutation]] of the above ordering. The following are therefore equal:

:<math>\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \mathbf{b} \cdot (\mathbf{c} \times \mathbf{a}) = \mathbf{c} \cdot (\mathbf{a} \times \mathbf{b}).</math>

The [[vector triple product]] is the cross product of a vector with the result of another cross product, and is related to the dot product by the following formula:

:<math>\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b}(\mathbf{a} \cdot \mathbf{c}) - \mathbf{c}(\mathbf{a} \cdot \mathbf{b}).</math>

The [[mnemonic]] "BAC minus CAB" is used to remember the order of the vectors on the right-hand side. This formula is used in [[physics]] to simplify vector calculations. A special case, regarding [[gradient]]s and useful in [[vector calculus]], is

:<math>\begin{align}
\nabla \times (\nabla \times \mathbf{f}) & = \nabla (\nabla \cdot \mathbf{f} ) - (\nabla \cdot \nabla) \mathbf{f} \\
& = \nabla (\nabla \cdot \mathbf{f} ) - \nabla^2 \mathbf{f},
\end{align}</math>

where ∇<sup>2</sup> is the [[vector Laplacian]] operator.

Another identity relates the cross product to the scalar triple product:

:<math>(\mathbf{a}\times \mathbf{b})\times (\mathbf{a}\times \mathbf{c}) = (\mathbf{a}\cdot(\mathbf{b}\times \mathbf{c})) \mathbf{a}</math>
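Both triple-product identities can be verified with a short Python sketch (illustrative sample vectors; all arithmetic is exact):

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a, b, c = (1, 2, 3), (4, 5, 6), (7, 8, 10)

# BAC minus CAB: a x (b x c) = b(a.c) - c(a.b)
lhs = cross(a, cross(b, c))
ac, ab = dot(a, c), dot(a, b)
rhs = tuple(bi * ac - ci * ab for bi, ci in zip(b, c))
assert lhs == rhs == (-12, 9, -2)

# (a x b) x (a x c) = (a . (b x c)) a
lhs2 = cross(cross(a, b), cross(a, c))
t = dot(a, cross(b, c))
assert lhs2 == tuple(t * ai for ai in a) == (-3, -6, -9)
```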
===Alternative formulation===

The cross product and the dot product are related by:

:<math> |\mathbf{a} \times \mathbf{b}|^2 = |\mathbf{a}|^2 |\mathbf{b}|^2 - (\mathbf{a} \cdot \mathbf{b})^2.</math>

The right-hand side is the [[Gramian matrix#Gram determinant|Gram determinant]] of '''a''' and '''b''', the square of the area of the parallelogram defined by the vectors. This condition determines the magnitude of the cross product. Namely, since the dot product is defined, in terms of the angle ''θ'' between the two vectors, as:

:<math> \mathbf{a \cdot b} = | \mathbf a | | \mathbf b | \cos \theta , </math>

the above relationship can be rewritten as follows:

:<math> |\mathbf{a \times b}|^2 = |\mathbf{a}|^2 |\mathbf{b}|^2 \left(1-\cos^2 \theta \right) .</math>

Invoking the [[Pythagorean trigonometric identity]] one obtains:

:<math> |\mathbf{a} \times \mathbf{b}| = |\mathbf{a}| |\mathbf{b}| |\sin \theta| , </math>

which is the magnitude of the cross product expressed in terms of ''θ'', equal to the area of the parallelogram defined by '''a''' and '''b''' (see [[#Definition|definition]] above).

The combination of this requirement and the property that the cross product be orthogonal to its constituents '''a''' and '''b''' provides an alternative definition of the cross product.<ref name=Massey>{{cite journal |title=Cross products of vectors in higher dimensional Euclidean spaces |author=WS Massey |journal=The American Mathematical Monthly |volume=90 |month=Dec |year=1983 |pages=697–701 |issue=10 |doi=10.2307/2323537 |publisher=The American Mathematical Monthly, Vol. 90, No. 10 |ref=harv |jstor=2323537}}</ref>

===Lagrange's identity===

The relation

:<math> |\mathbf{a} \times \mathbf{b}|^2 = |\mathbf{a}|^2 |\mathbf{b}|^2 - (\mathbf{a} \cdot \mathbf{b})^2 </math>

can be compared with another relation involving the right-hand side, namely [[Lagrange's identity]] expressed as:<ref name=Boichenko>{{cite book |title=Dimension theory for ordinary differential equations |author=Vladimir A. Boichenko, Gennadiĭ Alekseevich Leonov, Volker Reitmann |url=http://books.google.com/?id=9bN1-b_dSYsC&pg=PA26 |page=26 |isbn=3-519-00437-2 |year=2005 |publisher=Vieweg+Teubner Verlag}}</ref>

:<math>\sum_{1 \le i < j \le n} \left(a_ib_j-a_jb_i \right)^2 = | \mathbf a |^2 | \mathbf b |^2 - (\mathbf {a \cdot b } )^2\ , </math>

where '''a''' and '''b''' may be ''n''-dimensional vectors. In the case ''n'' = 3, combining these two equations results in the expression for the magnitude of the cross product in terms of its components:<ref name=Lounesto1>{{cite book |url=http://books.google.com/?id=kOsybQWDK4oC&pg=PA94&dq=%22which+in+coordinate+form+means+Lagrange%27s+identity%22&cd=1#v=onepage&q=%22which%20in%20coordinate%20form%20means%20Lagrange%27s%20identity%22 |author=Pertti Lounesto |page=94 |title=Clifford algebras and spinors |isbn=0-521-00551-5 |edition=2nd |publisher=Cambridge University Press |year=2001}}</ref>

:<math> |\mathbf{a} \times \mathbf{b}|^2 = \sum_{1 \le i < j \le 3} \left(a_ib_j-a_jb_i \right)^2 = (a_1b_2 - b_1a_2)^2 + (a_2b_3 - a_3b_2)^2 + (a_3b_1-a_1b_3)^2 \ . </math>
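Lagrange's identity is easy to check in code for any dimension. A minimal Python sketch (the function name `lagrange_sum` and the sample vectors are illustrative):

```python
def lagrange_sum(a, b):
    """Left-hand side of Lagrange's identity: sum over i < j
    of (a_i b_j - a_j b_i)^2, for n-dimensional vectors."""
    n = len(a)
    return sum((a[i] * b[j] - a[j] * b[i]) ** 2
               for i in range(n) for j in range(i + 1, n))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# The identity holds in any dimension, e.g. n = 4 ...
a4, b4 = (1, 2, 3, 4), (2, -1, 0, 3)
assert lagrange_sum(a4, b4) == dot(a4, a4) * dot(b4, b4) - dot(a4, b4) ** 2

# ... and for n = 3 the sum is exactly |a x b|^2
a3, b3 = (1, 2, 3), (4, 0, -1)
sq = ((a3[1] * b3[2] - a3[2] * b3[1]) ** 2 +
      (a3[2] * b3[0] - a3[0] * b3[2]) ** 2 +
      (a3[0] * b3[1] - a3[1] * b3[0]) ** 2)
assert lagrange_sum(a3, b3) == sq
```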
The same result is found directly using the components of the cross product found from:

:<math>\mathbf{a}\times\mathbf{b}=\det \begin{bmatrix}
\mathbf{i} & \mathbf{j} & \mathbf{k} \\
a_1 & a_2 & a_3 \\
b_1 & b_2 & b_3 \\
\end{bmatrix}.</math>

In '''R'''<sup>3</sup>, Lagrange's equation is a special case of the multiplicativity |'''vw'''| = |'''v'''||'''w'''| of the norm in the [[Quaternion#Algebraic properties|quaternion algebra]].

It is a special case of another formula, also sometimes called Lagrange's identity, which is the three-dimensional case of the [[Binet-Cauchy identity]]:<ref name=Liu/><ref name=Weisstein>{{cite book |author=Eric W. Weisstein |chapter=Binet-Cauchy identity |title=CRC concise encyclopedia of mathematics |url=http://books.google.com/?id=8LmCzWQYh_UC&pg=PA228 |page=228 |isbn=1-58488-347-2 |edition=2nd |year=2003 |publisher=CRC Press}}</ref>

: <math>(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = (\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c}).</math>

If '''a''' = '''c''' and '''b''' = '''d''', this simplifies to the formula above.
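The Binet–Cauchy identity and its reduction to the Lagrange form can both be confirmed with a short Python sketch (illustrative sample vectors, exact integer arithmetic):

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a, b, c, d = (1, 2, 3), (4, 5, 6), (7, 8, 10), (1, -1, 2)

# Binet-Cauchy: (a x b).(c x d) = (a.c)(b.d) - (a.d)(b.c)
lhs = dot(cross(a, b), cross(c, d))
rhs = dot(a, c) * dot(b, d) - dot(a, d) * dot(b, c)
assert lhs == rhs

# With c = a and d = b, it reduces to |a x b|^2 = |a|^2 |b|^2 - (a.b)^2
assert dot(cross(a, b), cross(a, b)) == dot(a, a) * dot(b, b) - dot(a, b) ** 2
```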
| |
| | |
| == Alternative ways to compute the cross product ==
| |
| | |
| === Conversion to matrix multiplication ===
The vector cross product can also be expressed as the product of a [[skew-symmetric matrix]] and a vector:<ref name=Liu>{{cite journal |title=Hadamard, Khatri-Rao, Kronecker and other matrix products |journal=Int J Information and systems sciences |volume=4 |pages=160–177 |year=2008 |publisher=Institute for scientific computing and education |url=http://www.math.ualberta.ca/ijiss/SS-Volume-4-2008/No-1-08/SS-08-01-17.pdf |author=Shuangzhe Liu and Götz Trenkler |issue=1 |ref=harv}}</ref>
:<math>\mathbf{a} \times \mathbf{b} = [\mathbf{a}]_{\times} \mathbf{b} = \begin{bmatrix}\,0&\!-a_3&\,\,a_2\\ \,\,a_3&0&\!-a_1\\-a_2&\,\,a_1&\,0\end{bmatrix}\begin{bmatrix}b_1\\b_2\\b_3\end{bmatrix}</math>
:<math>\mathbf{a} \times \mathbf{b} = [\mathbf{b}]_{\times}^\mathrm T \mathbf{a} = \begin{bmatrix}\,0&\,\,b_3&\!-b_2\\ -b_3&0&\,\,b_1\\\,\,b_2&\!-b_1&\,0\end{bmatrix}\begin{bmatrix}a_1\\a_2\\a_3\end{bmatrix}</math>
where superscript <sup>T</sup> refers to the [[transpose]] operation, and ['''a''']<sub>×</sub> is defined by:
:<math>[\mathbf{a}]_{\times} \stackrel{\rm def}{=} \begin{bmatrix}\,\,0&\!-a_3&\,\,\,a_2\\\,\,\,a_3&0&\!-a_1\\\!-a_2&\,\,a_1&\,\,0\end{bmatrix}.</math>
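As a quick numerical check of the matrix form above, the following Python sketch (an editor's illustration; the sample vectors are arbitrary) compares ['''a''']<sub>×</sub>'''b''' against the component formula for the cross product:

```python
# Sketch: verify that a x b equals [a]_x b for sample vectors (pure Python).

def cross(a, b):
    # Component formula for the cross product in R^3
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def skew(a):
    # The skew-symmetric matrix [a]_x defined above
    return [[0,     -a[2],  a[1]],
            [a[2],   0,    -a[0]],
            [-a[1],  a[0],  0]]

def matvec(M, v):
    # 3x3 matrix times 3-vector
    return [sum(M[i][j]*v[j] for j in range(3)) for i in range(3)]

a, b = [1, 2, 3], [4, 5, 6]
print(cross(a, b))         # [-3, 6, -3]
print(matvec(skew(a), b))  # [-3, 6, -3]
```

Both lines print the same vector, as the identity requires.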

Also, if '''a''' is itself a cross product:
:<math>\mathbf{a} = \mathbf{c} \times \mathbf{d}</math>
then
:<math>[\mathbf{a}]_{\times} = \mathbf{d}\mathbf{c}^\mathrm T - \mathbf{c}\mathbf{d}^\mathrm T.</math>
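This dyadic identity can likewise be spot-checked entrywise; in the Python sketch below (an editor's illustration) the vectors are arbitrary examples:

```python
# Sketch: check [c x d]_x == d c^T - c d^T entrywise for sample vectors.

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def skew(a):
    return [[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]]

def outer(u, v):
    # Outer product u v^T as a 3x3 matrix
    return [[u[i]*v[j] for j in range(3)] for i in range(3)]

c, d = [1, 0, 2], [3, 4, 5]
lhs = skew(cross(c, d))
rhs = [[outer(d, c)[i][j] - outer(c, d)[i][j] for j in range(3)]
       for i in range(3)]
print(lhs == rhs)  # True
```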
| :{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
| |
| !Proof by substitution
| |
| |-
| |
| |Evaluation of the cross product gives
| |
| :<math> \mathbf{a} = \mathbf{c} \times \mathbf{d} = \begin{pmatrix}
| |
| c_2 d_3 - c_3 d_2 \\
| |
| c_3 d_1 - c_1 d_3 \\
| |
| c_1 d_2 - c_2 d_1 \end{pmatrix}
| |
| </math>
| |
| Hence, the left hand side equals
| |
| :<math> [\mathbf{a}]_{\times} = \begin{bmatrix}
| |
| 0 & c_2 d_1 - c_1 d_2 & c_3 d_1 - c_1 d_3 \\
| |
| c_1 d_2 - c_2 d_1 & 0 & c_3 d_2 - c_2 d_3 \\
| |
| c_1 d_3 - c_3 d_1 & c_2 d_3 - c_3 d_2 & 0 \end{bmatrix}
| |
| </math>
| |
| Now, for the right hand side,
| |
| :<math> \mathbf{c} \mathbf{d}^{\mathrm T} = \begin{bmatrix}
| |
| c_1 d_1 & c_1 d_2 & c_1 d_3 \\
| |
| c_2 d_1 & c_2 d_2 & c_2 d_3 \\
| |
| c_3 d_1 & c_3 d_2 & c_3 d_3 \end{bmatrix}
| |
| </math>
| |
| And its transpose is
| |
| :<math> \mathbf{d} \mathbf{c}^{\mathrm T} = \begin{bmatrix}
| |
| c_1 d_1 & c_2 d_1 & c_3 d_1 \\
| |
| c_1 d_2 & c_2 d_2 & c_3 d_2 \\
| |
| c_1 d_3 & c_2 d_3 & c_3 d_3 \end{bmatrix}
| |
| </math>
| |
| Evaluation of the right hand side gives
| |
| :<math> \mathbf{d} \mathbf{c}^{\mathrm T} -
| |
| \mathbf{c} \mathbf{d}^{\mathrm T} = \begin{bmatrix}
| |
| 0 & c_2 d_1 - c_1 d_2 & c_3 d_1 - c_1 d_3 \\
| |
| c_1 d_2 - c_2 d_1 & 0 & c_3 d_2 - c_2 d_3 \\
| |
| c_1 d_3 - c_3 d_1 & c_2 d_3 - c_3 d_2 & 0 \end{bmatrix}
| |
| </math>
| |
| Comparison shows that the left hand side equals the right hand side.
| |
| |}

This result can be generalized to higher dimensions using [[geometric algebra]]. In particular, in any dimension [[bivector]]s can be identified with skew-symmetric matrices, so the product between a skew-symmetric matrix and a vector is equivalent to the grade-1 part of the product of a bivector and a vector.{{Citation needed|date=May 2010}} In three dimensions bivectors are [[Hodge dual|dual]] to vectors, so the product is equivalent to the cross product, with the bivector instead of its vector dual. In higher dimensions the product can still be calculated, but bivectors have more degrees of freedom and are not equivalent to vectors.{{Citation needed|date=May 2010}}

This notation is also often much easier to work with, for example, in [[epipolar geometry]].

From the general properties of the cross product it follows immediately that
:<math>[\mathbf{a}]_{\times} \, \mathbf{a} = \mathbf{0}</math> and <math>\mathbf{a}^\mathrm T \, [\mathbf{a}]_{\times} = \mathbf{0}</math>
and from the fact that ['''a''']<sub>×</sub> is skew-symmetric it follows that
:<math>\mathbf{b}^\mathrm T \, [\mathbf{a}]_{\times} \, \mathbf{b} = 0. </math>

The above-mentioned triple product expansion (bac-cab rule) can be easily proven using this notation.{{Citation needed|date=May 2010}}

The above definition of ['''a''']<sub>×</sub> means that there is a one-to-one mapping between the set of 3×3 skew-symmetric matrices, also known as the [[Lie algebra]] of [[SO(3)]], and the operation of taking the cross product with some vector '''a'''.{{Citation needed|date=May 2010}}

| ===Index notation for tensors===
The cross product can alternatively be defined in terms of the [[Levi-Civita symbol]] ε<sub>ijk</sub> and a dot product ''η<sup>mi</sup>'' (= δ<sup>''mi''</sup> for an orthonormal basis), which are useful in converting vector notation into tensor notation:
:<math>\mathbf{a \times b} = \mathbf{c}\Leftrightarrow\ c^m = \sum_{i=1}^3 \sum_{j=1}^3 \sum_{k=1}^3 \eta^{mi} \varepsilon_{ijk} a^j b^k</math>
where the [[Indexed family|indices]] <math>\scriptstyle i,j,k</math> correspond to vector components. This characterization of the cross product is often expressed more compactly using the [[Einstein summation convention]] as
:<math>\mathbf{a \times b} = \mathbf{c}\Leftrightarrow\ c^m = \eta^{mi} \varepsilon_{ijk} a^j b^k</math>
in which repeated indices are summed over the values 1 to 3. Note that this representation is another form of the skew-symmetric representation of the cross product:
:<math>\eta^{mi} \varepsilon_{ijk} a^j = [\mathbf{a}]_\times.</math>
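The summation-convention formula transcribes directly into code; this Python sketch (an editor's illustration, assuming an orthonormal basis so that η is the identity) sums over all index triples:

```python
# Sketch: cross product via the Levi-Civita symbol, c^m = eps_{mjk} a^j b^k.

def epsilon(i, j, k):
    # Levi-Civita symbol on 0-based indices: +1/-1 for even/odd permutations
    return {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
            (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.get((i, j, k), 0)

def cross_levi_civita(a, b):
    return [sum(epsilon(m, j, k) * a[j] * b[k]
                for j in range(3) for k in range(3))
            for m in range(3)]

print(cross_levi_civita([1, 2, 3], [4, 5, 6]))  # [-3, 6, -3]
```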

In [[classical mechanics]], representing the cross product with the Levi-Civita symbol can make mechanical symmetries obvious when physical systems are [[isotropic]]. (Quick example: consider a particle in a Hooke's law potential in three-space, free to oscillate in three dimensions; none of these dimensions is "special" in any sense, so symmetries lie in the cross-product-represented angular momentum, which are made clear by the above-mentioned Levi-Civita representation.){{Citation needed|date=November 2009}}

| === Mnemonic ===
The word "[[xyzzy#Origin|xyzzy]]" can be used to remember the definition of the cross product.

If
:<math>\mathbf{a} = \mathbf{b} \times \mathbf{c}</math>
where:
:<math>
\mathbf{a} = \begin{bmatrix}a_x\\a_y\\a_z\end{bmatrix},
\mathbf{b} = \begin{bmatrix}b_x\\b_y\\b_z\end{bmatrix},
\mathbf{c} = \begin{bmatrix}c_x\\c_y\\c_z\end{bmatrix}
</math>
then:
:<math>a_x = b_y c_z - b_z c_y \, </math>
:<math>a_y = b_z c_x - b_x c_z \, </math>
:<math>a_z = b_x c_y - b_y c_x. \, </math>

The second and third equations can be obtained from the first by simply rotating the subscripts vertically, ''x'' → ''y'' → ''z'' → ''x''. The problem, of course, is how to remember the first equation, and two options are available for this purpose: either to remember the relevant two diagonals of Sarrus's scheme (those containing '''''i'''''), or to remember the xyzzy sequence.

Since the first diagonal in Sarrus's scheme is just the [[main diagonal]] of the [[cross product#Matrix notation|above]]-mentioned <math>\scriptstyle 3 \times 3</math> matrix, the first three letters of the word xyzzy can be very easily remembered.

| === Cross visualization ===
Similarly to the mnemonic device above, a "cross" or X can be visualized between the two vectors in the equation, which can help in remembering the correct cross product formula.

If
:<math>\mathbf{a} = \mathbf{b} \times \mathbf{c}</math>
then:
:<math>
\mathbf{a} =
\begin{bmatrix}b_x\\b_y\\b_z\end{bmatrix} \times
\begin{bmatrix}c_x\\c_y\\c_z\end{bmatrix}.
</math>

To obtain the formula for <math>a_x</math>, simply drop <math>b_x</math> and <math>c_x</math> from the formula, and take the next two components down:
:<math>
a_x =
\begin{bmatrix}b_y\\b_z\end{bmatrix} \times
\begin{bmatrix}c_y\\c_z\end{bmatrix}.
</math>

Note that when doing this for <math>a_y</math>, the next two elements down should "wrap around" the matrix so that after the z component comes the x component: for <math>a_y</math> the next two components are z and x (in that order), while for <math>a_z</math> they are x and y.
:<math>
a_y =
\begin{bmatrix}b_z\\b_x\end{bmatrix} \times
\begin{bmatrix}c_z\\c_x\end{bmatrix},
a_z =
\begin{bmatrix}b_x\\b_y\end{bmatrix} \times
\begin{bmatrix}c_x\\c_y\end{bmatrix}
</math>

For <math>a_x</math>, then, if we visualize the cross operator as pointing from an element on the left to an element on the right, we can take the first element on the left and multiply it by the element that the cross points to in the right-hand matrix. We then subtract the next element down on the left, multiplied by the element that the cross points to here as well. This yields the <math>a_x</math> formula:
:<math>a_x = b_y c_z - b_z c_y. \, </math>

We can do this in the same way for <math>a_y</math> and <math>a_z</math> to construct their associated formulas.

| == Applications ==
| === Computational geometry ===
| The cross product can be used to calculate the normal for a triangle or polygon, an operation frequently performed in [[computer graphics]]. For example, the winding of a polygon (clockwise or anticlockwise) about a point within the polygon can be calculated by triangulating the polygon (like spoking a wheel) and summing the angles (between the spokes) using the cross product to keep track of the sign of each angle.
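A typical graphics use is computing the normal of a triangle from two of its edges; a minimal Python sketch (an editor's illustration, result left unnormalized):

```python
# Sketch: normal of a triangle (p1, p2, p3) as the cross product of two edges.

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def sub(u, v):
    return [u[i] - v[i] for i in range(3)]

def triangle_normal(p1, p2, p3):
    # Unnormalized normal; its length is twice the triangle's area
    return cross(sub(p2, p1), sub(p3, p1))

print(triangle_normal([0, 0, 0], [1, 0, 0], [0, 1, 0]))  # [0, 0, 1]
```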
| In [[computational geometry]] of [[the plane]], the cross product is used to determine the sign of the [[acute angle]] defined by three points <math>\scriptstyle p_1=(x_1,y_1)</math>, <math>\scriptstyle p_2=(x_2,y_2)</math> and <math>\scriptstyle p_3=(x_3,y_3)</math>. It corresponds to the direction of the cross product of the two coplanar [[vector (geometry)|vector]]s defined by the pairs of points <math>\scriptstyle p_1, p_2</math> and <math>\scriptstyle p_1, p_3</math>, i.e., by the sign of the expression <math>\scriptstyle P = (x_2-x_1)(y_3-y_1)-(y_2-y_1)(x_3-x_1)</math>. In the "right-handed" coordinate system, if the result is 0, the points are collinear; if it is positive, the three points constitute a positive angle of rotation around <math>\scriptstyle p_1</math> from <math>\scriptstyle p_2</math> to <math>\scriptstyle p_3</math>, otherwise a negative angle. From another point of view, the sign of <math>\scriptstyle P</math> tells whether <math>\scriptstyle p_3</math> lies to the left or to the right of line <math>\scriptstyle p_1, p_2</math>.
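The sign test described here is the standard two-dimensional orientation predicate; a Python sketch (an editor's illustration, with arbitrary sample points):

```python
# Sketch: sign of P = (x2-x1)(y3-y1) - (y2-y1)(x3-x1) classifies three points.

def orientation(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)

print(orientation((0, 0), (1, 0), (1, 1)))   # 1  -> p3 left of line p1p2
print(orientation((0, 0), (1, 0), (2, 0)))   # 0  -> collinear
print(orientation((0, 0), (1, 0), (1, -1)))  # -1 -> p3 right of line p1p2
```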
| === Mechanics ===
The [[Moment (physics)|moment]] of a force <math>\scriptstyle\mathbf{F}_\mathrm{B}</math> applied at point B about point A is given as:
| :: <math>\mathbf{M}_\mathrm{A} = \mathbf{r}_\mathrm{AB} \times \mathbf{F}_\mathrm{B}. \,</math>
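Numerically, the moment is a single cross product; in this Python sketch the lever arm and force are arbitrary illustrative values chosen by the editor:

```python
# Sketch: M_A = r_AB x F_B for an illustrative lever arm and force.

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

r_AB = [1.0, 0.0, 0.0]   # lever arm from A to B (illustrative units)
F_B  = [0.0, 10.0, 0.0]  # force applied at B
print(cross(r_AB, F_B))  # [0.0, 0.0, 10.0]
```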

=== Other ===
The cross product occurs in the formula for the [[vector operator]] [[Curl (mathematics)|curl]]. It is also used to describe the [[Lorentz force]] experienced by a moving electrical charge in a magnetic field. The definitions of [[torque]] and [[angular momentum]] also involve the cross product.

The trick of rewriting a cross product in terms of a matrix multiplication appears frequently in epipolar and multi-view geometry, in particular when deriving matching constraints.

| == Cross product as an exterior product ==
[[Image:Exterior calc cross product.svg|right|thumb|The cross product in relation to the exterior product. In red are the orthogonal [[unit vector]], and the "parallel" unit bivector.]]
The cross product can be viewed in terms of the [[exterior product]]. This view allows for a natural geometric interpretation of the cross product. In [[exterior algebra]] the exterior product (or wedge product) of two vectors is a [[bivector]]. A bivector is an oriented plane element, in much the same way that a vector is an oriented line element. Given two vectors ''a'' and ''b'', one can view the bivector {{nowrap|1=''a'' ∧ ''b''}} as the oriented parallelogram spanned by ''a'' and ''b''. The cross product is then obtained by taking the [[Hodge dual]] of the bivector {{nowrap|1=''a'' ∧ ''b''}}, mapping [[p-vector|2-vectors]] to vectors:
| :<math>a \times b = *(a \wedge b) \,.</math>
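In coordinates, the bivector ''a'' ∧ ''b'' has components ''w''<sub>ij</sub> = ''a''<sub>i</sub>''b''<sub>j</sub> − ''a''<sub>j</sub>''b''<sub>i</sub>, and its Hodge dual is the vector (''w''<sub>23</sub>, ''w''<sub>31</sub>, ''w''<sub>12</sub>). The following Python sketch (an editor's illustration, arbitrary sample vectors) confirms that this reproduces the cross product:

```python
# Sketch: Hodge dual of the wedge product a ^ b in R^3 equals a x b.

def wedge_dual(a, b):
    w = lambda i, j: a[i] * b[j] - a[j] * b[i]  # bivector components w_ij
    return [w(1, 2), w(2, 0), w(0, 1)]          # (w_23, w_31, w_12), 0-based

print(wedge_dual([1, 2, 3], [4, 5, 6]))  # [-3, 6, -3]
```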
| This can be thought of as the oriented multi-dimensional element "perpendicular" to the bivector. Only in three dimensions is the result an oriented line element – a vector – whereas, for example, in 4 dimensions the Hodge dual of a bivector is two-dimensional – another oriented plane element. So, only in three dimensions is the cross product of ''a'' and ''b'' the vector dual to the bivector {{nowrap|1=''a'' ∧ ''b''}}: it is perpendicular to the bivector, with orientation dependent on the coordinate system's handedness, and has the same magnitude relative to the unit normal vector as {{nowrap|1=''a'' ∧ ''b''}} has relative to the unit bivector; precisely the properties described above.

== Cross product and handedness ==<!-- This section is linked from [[Vector calculus]] -->

When measurable quantities involve cross products, the ''handedness'' of the coordinate systems used cannot be arbitrary. However, when physics laws are written as equations, it should be possible to make an arbitrary choice of the coordinate system (including handedness). To avoid problems, one should be careful never to write down an equation where the two sides do not behave equally under all transformations that need to be considered. For example, if one side of the equation is a cross product of two vectors, one must take into account that when the handedness of the coordinate system is ''not'' fixed a priori, the result is not a (true) vector but a [[pseudovector]]. Therefore, for consistency, the other side '''must''' also be a pseudovector.{{Citation needed|date=April 2008}}

More generally, the result of a cross product may be either a vector or a pseudovector, depending on the type of its operands (vectors or pseudovectors). Namely, vectors and pseudovectors are interrelated in the following ways under application of the cross product:
* vector × vector = pseudovector
* pseudovector × pseudovector = pseudovector
* vector × pseudovector = vector
* pseudovector × vector = vector.

So by the above relationships, the unit basis vectors '''i''', '''j''' and '''k''' of an orthonormal, right-handed (Cartesian) coordinate frame '''must''' all be pseudovectors (if a basis of mixed vector types is disallowed, as it normally is) since '''i''' × '''j''' = '''k''', '''j''' × '''k''' = '''i''' and '''k''' × '''i''' = '''j'''.

Because the cross product may also be a (true) vector, it may not change direction with a mirror image transformation. This happens, according to the above relationships, if one of the operands is a (true) vector and the other one is a pseudovector (''e.g.'', the cross product of two vectors). For instance, a [[vector triple product]] involving three (true) vectors is a (true) vector.

A handedness-free approach is possible using [[exterior algebra]].

| == Generalizations ==
There are several ways to generalize the cross product to higher dimensions.

=== Lie algebra ===
{{Main|Lie algebra}}
The cross product can be seen as one of the simplest Lie products, and is thus generalized by [[Lie algebra]]s, which are axiomatized as binary products satisfying the axioms of multilinearity, skew-symmetry, and the Jacobi identity. Many Lie algebras exist, and their study is a major field of mathematics, called [[Lie theory]].

For example, the [[Heisenberg algebra]] gives another Lie algebra structure on <math>\scriptstyle\mathbf{R}^3.</math> In the basis <math>\scriptstyle\{x,y,z\},</math> the product is <math>\scriptstyle [x,y]=z, [x,z]=[y,z]=0.</math>
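The Lie-algebra axioms mentioned above can be spot-checked numerically for the cross product; a Python sketch (an editor's illustration, with arbitrary integer vectors so the arithmetic is exact):

```python
# Sketch: the cross product is antisymmetric and satisfies the Jacobi identity.

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def add(u, v):
    return [u[i] + v[i] for i in range(3)]

a, b, c = [1, 2, 3], [-1, 0, 4], [2, 5, -2]
jacobi = add(add(cross(a, cross(b, c)),
                 cross(b, cross(c, a))),
             cross(c, cross(a, b)))
print(jacobi)                                    # [0, 0, 0]
print(cross(a, b) == [-x for x in cross(b, a)])  # True (antisymmetry)
```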

=== Quaternions ===
{{Further|quaternions and spatial rotation}}
The cross product can also be described in terms of [[quaternion]]s, and this is why the letters '''i''', '''j''', '''k''' are a convention for the standard basis on '''R'''<sup>3</sup>. The unit vectors '''i''', '''j''', '''k''' correspond to "binary" (180 deg) rotations about their respective axes (Altmann, S. L., 1986, Ch. 12), said rotations being represented by "pure" quaternions (zero scalar part) with unit norms.

| For instance, the above given cross product relations among '''i''', '''j''', and '''k''' agree with the multiplicative relations among the quaternions ''i'', ''j'', and ''k''. In general, if a vector [''a''<sub>1</sub>, ''a''<sub>2</sub>, ''a''<sub>3</sub>] is represented as the quaternion ''a''<sub>1</sub>''i'' + ''a''<sub>2</sub>''j'' + ''a''<sub>3</sub>''k'', the cross product of two vectors can be obtained by taking their product as quaternions and deleting the real part of the result. The real part will be the negative of the [[dot product]] of the two vectors.
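This recipe is easy to verify with a generic quaternion multiplier; in the Python sketch below (an editor's illustration) the product of two pure quaternions has scalar part −'''u'''·'''v''' and vector part '''u'''×'''v''':

```python
# Sketch: product of pure quaternions (0, u)(0, v) = (-u.v, u x v).

def qmul(p, q):
    # Hamilton product of quaternions given as (w, x, y, z) tuples
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

u, v = (1, 2, 3), (4, 5, 6)
print(qmul((0, *u), (0, *v)))  # (-32, -3, 6, -3): scalar -u.v, vector u x v
```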

Alternatively, using the above identification of the 'purely imaginary' quaternions with '''R'''<sup>3</sup>, the cross product may be thought of as half of the [[commutator]] of two quaternions.

| === Octonions ===
{{See also|Seven-dimensional cross product|Octonion}}
A cross product for 7-dimensional vectors can be obtained in the same way by using the [[octonions]] instead of the quaternions. The nonexistence of nontrivial vector-valued cross products of two vectors in other dimensions is related to the result from [[Hurwitz's theorem (normed division algebras)|Hurwitz's theorem]] that the only [[normed division algebra]]s are the ones with dimension 1, 2, 4, and 8.

| === Wedge product ===
{{Main|Exterior algebra}}
In general dimension, there is no direct analogue of the binary cross product that yields specifically a vector. There is however the [[wedge product]], which has similar properties, except that the wedge product of two vectors is now a [[p-vector|2-vector]] instead of an ordinary vector. As mentioned above, the cross product can be interpreted as the wedge product in three dimensions after using Hodge duality to map 2-vectors to vectors. The Hodge dual of the wedge product yields an (''n''−2)-vector, which is a natural generalization of the cross product in any number of dimensions.

The wedge product and dot product can be combined (through summation) to form the [[Geometric algebra|geometric product]].

| === Multilinear algebra ===
In the context of [[multilinear algebra]], the cross product can be seen as the (1,2)-tensor (a [[mixed tensor]], specifically a [[bilinear map]]) obtained from the 3-dimensional [[volume form]],<ref group="note">By a volume form one means a function that takes in ''n'' vectors and gives out a scalar, the volume of the [[parallelotope]] defined by the vectors: <math>\scriptstyle V\times \cdots \times V \to \mathbf{R}.</math> This is an ''n''-ary multilinear skew-symmetric form. In the presence of a basis, such as on <math>\scriptstyle\mathbf{R}^n,</math> this is given by the [[determinant]], but in an abstract vector space, this is added structure. In terms of [[G-structure|''G''-structures]], a volume form is an [[Special linear group|<math>\scriptstyle SL</math>]]-structure.</ref> a (0,3)-tensor, by [[Raising and lowering indices|raising an index]].

In detail, the 3-dimensional volume form defines a product <math>\scriptstyle V \times V \times V \to \mathbf{R},</math> by taking the determinant of the matrix given by these 3 vectors. By [[Dual space|duality]], this is equivalent to a function <math>\scriptstyle V \times V \to V^*,</math> (fixing any two inputs gives a function <math>\scriptstyle V \to \mathbf{R}</math> by evaluating on the third input) and in the presence of an [[inner product]] (such as the [[dot product]]; more generally, a non-degenerate bilinear form), we have an isomorphism <math>\scriptstyle V \to V^*,</math> and thus this yields a map <math>\scriptstyle V \times V \to V,</math> which is the cross product: a (0,3)-tensor (3 vector inputs, scalar output) has been transformed into a (1,2)-tensor (2 vector inputs, 1 vector output) by "raising an index".

| Translating the above algebra into geometry, the function "volume of the parallelepiped defined by <math>\scriptstyle (a,b,-)</math>" (where the first two vectors are fixed and the last is an input), which defines a function <math>\scriptstyle V \to \mathbf{R}</math>, can be ''represented'' uniquely as the dot product with a vector: this vector is the cross product <math>\scriptstyle a \times b.</math> From this perspective, the cross product is ''defined'' by the [[scalar triple product]], <math>\scriptstyle\mathrm{Vol}(a,b,c) = (a\times b)\cdot c.</math>
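This defining relation, Vol(''a'', ''b'', ''c'') = (''a'' × ''b'') · ''c'', is straightforward to confirm numerically; a Python sketch (an editor's illustration, arbitrary vectors):

```python
# Sketch: det[a; b; c] (the volume form) equals (a x b) . c.

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(u, v):
    return sum(u[i] * v[i] for i in range(3))

def det3(a, b, c):
    # Determinant of the 3x3 matrix with rows a, b, c
    return (a[0] * (b[1]*c[2] - b[2]*c[1])
          - a[1] * (b[0]*c[2] - b[2]*c[0])
          + a[2] * (b[0]*c[1] - b[1]*c[0]))

a, b, c = [1, 2, 3], [-1, 0, 4], [2, 5, -2]
print(det3(a, b, c), dot(cross(a, b), c))  # -23 -23
```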
In the same way, in higher dimensions one may define generalized cross products by raising indices of the ''n''-dimensional volume form, which is a <math>\scriptstyle (0,n)</math>-tensor. The most direct generalizations of the cross product are to define either:
* a <math>\scriptstyle (1,n-1)</math>-tensor, which takes as input <math>\scriptstyle n-1</math> vectors, and gives as output 1 vector – an <math>\scriptstyle (n-1)</math>-ary vector-valued product, or
* a <math>\scriptstyle (n-2,2)</math>-tensor, which takes as input 2 vectors and gives as output a [[skew-symmetric]] tensor of rank ''n''−2 – a binary product with rank ''n''−2 tensor values. One can also define <math>\scriptstyle (k,n-k)</math>-tensors for other ''k.''

These products are all multilinear and skew-symmetric, and can be defined in terms of the determinant and [[parity (physics)|parity]].

The <math>\scriptstyle (n-1)</math>-ary product can be described as follows: given <math>\scriptstyle n-1</math> vectors <math>\scriptstyle v_1,\dots,v_{n-1}</math> in <math>\scriptstyle\mathbf{R}^n,</math> define their generalized cross product <math>\scriptstyle v_n = v_1 \times \cdots \times v_{n-1}</math> as:
* perpendicular to the hyperplane defined by the <math>\scriptstyle v_i,</math>
* with magnitude equal to the volume of the [[parallelotope]] defined by the <math>\scriptstyle v_i,</math> which can be computed as the [[Gram determinant]] of the <math>\scriptstyle v_i,</math>
* oriented so that <math>\scriptstyle v_1,\dots,v_n</math> is positively oriented.
This is the unique multilinear, alternating product which evaluates to <math>\scriptstyle e_1 \times \cdots \times e_{n-1} = e_n</math>, <math>\scriptstyle e_2 \times \cdots \times e_n = e_1,</math> and so forth for cyclic permutations of indices.

In coordinates, one can give a formula for this <math>\scriptstyle (n-1)</math>-ary analogue of the cross product in '''R'''<sup>''n''</sup> by:

:<math>\bigwedge(\mathbf{v}_1,\dots,\mathbf{v}_{n-1})=
\begin{vmatrix}
v_1{}^1 &\cdots &v_1{}^{n}\\
\vdots &\ddots &\vdots\\
v_{n-1}{}^1 & \cdots &v_{n-1}{}^{n}\\
\mathbf{e}_1 &\cdots &\mathbf{e}_{n}
\end{vmatrix}.</math>
| This formula is identical in structure to the determinant formula for the normal cross product in '''R'''<sup>3</sup> except that the row of basis vectors is the last row in the determinant rather than the first. The reason for this is to ensure that the ordered vectors ('''v'''<sub>1</sub>,...,'''v'''<sub>''n''-1</sub>,Λ('''v'''<sub>1</sub>,...,'''v'''<sub>''n''-1</sub>)) have a positive [[orientation (mathematics)|orientation]] with respect to ('''e'''<sub>1</sub>,...,'''e'''<sub>''n''</sub>). If ''n'' is odd, this modification leaves the value unchanged, so this convention agrees with the normal definition of the binary product. In the case that ''n'' is even, however, the distinction must be kept. This <math>\scriptstyle (n-1)</math>-ary form enjoys many of the same properties as the vector cross product: it is [[alternating form|alternating]] and linear in its arguments, it is perpendicular to each argument, and its magnitude gives the hypervolume of the region bounded by the arguments. And just like the vector cross product, it can be defined in a coordinate independent way as the Hodge dual of the wedge product of the arguments.
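For concreteness, the determinant formula can be evaluated in '''R'''<sup>4</sup> by cofactor expansion along the last row; this Python sketch (an editor's illustration) implements the ternary product and reproduces e<sub>1</sub> × e<sub>2</sub> × e<sub>3</sub> = e<sub>4</sub>:

```python
# Sketch: the ternary cross product in R^4, expanding the determinant above
# along its last row of basis vectors.

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def cross4(v1, v2, v3):
    rows = [v1, v2, v3]
    result = []
    for m in range(4):
        # Cofactor of the basis vector e_{m+1} in the last row
        minor = [[r[j] for j in range(4) if j != m] for r in rows]
        result.append((-1) ** (3 + m) * det3(minor))
    return result

print(cross4([1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]))  # [0, 0, 0, 1]
```

The result is perpendicular to each of its arguments, just as in the binary case.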
| == History ==
| In 1773, the Italian mathematician [[Joseph Louis Lagrange]], (born Giuseppe Luigi Lagrancia), introduced the component form of both the dot and cross products in order to study the [[tetrahedron]] in three dimensions.<ref>{{cite book|author=Lagrange, JL|title=Oeuvres|volume=vol 3|chapter=Solutions analytiques de quelques problèmes sur les pyramides triangulaires|year=1773}}</ref> In 1843 the Irish mathematical physicist Sir [[William Rowan Hamilton]] introduced the [[quaternion]] product, and with it the terms "vector" and "scalar". Given two quaternions [0, '''u'''] and [0, '''v'''], where '''u''' and '''v''' are vectors in '''R'''<sup>3</sup>, their quaternion product can be summarized as [−'''u'''·'''v''', '''u'''×'''v''']. [[James Clerk Maxwell]] used Hamilton's quaternion tools to develop his famous [[Maxwell's equations|electromagnetism equations]], and for this and other reasons quaternions for a time were an essential part of physics education.
| In 1878 [[William Kingdon Clifford]] published his [[Elements of Dynamic]] which was an advanced text for its time. He defined the product of two vectors<ref>[[William Kingdon Clifford]] (1878) [http://dlxs2.library.cornell.edu/cgi/t/text/text-idx?c=math;cc=math;view=toc;subview=short;idno=04370002 Elements of Dynamic], Part I, page 95, London: MacMillan & Co; online presentation by [[Cornell University]] ''Historical Mathematical Monographs''</ref> to have magnitude equal to the [[area]] of the [[parallelogram]] of which they are two sides, and direction perpendicular to their plane.
| [[Oliver Heaviside]] in England and [[Josiah Willard Gibbs]], a professor at [[Yale University]] in [[Connecticut]], also felt that quaternion methods were too cumbersome, often requiring the scalar or vector part of a result to be extracted. Thus, about forty years after the quaternion product, the [[dot product]] and cross product were introduced—to heated opposition. Pivotal to (eventual) acceptance was the efficiency of the new approach, allowing Heaviside to reduce the equations of electromagnetism from Maxwell's original 20 to the four commonly seen today.<ref>{{Cite book |first = Paul J.|last = Nahin |title = Oliver Heaviside: the life, work, and times of an electrical genius of the Victorian age|publisher = JHU Press |isbn = 0-8018-6909-9 |year = 2000 |pages = 108–109}}</ref>
| Largely independent of this development, and largely unappreciated at the time, [[Hermann Grassmann]] created a geometric algebra not tied to dimension two or three, with the [[exterior product]] playing a central role. [[William Kingdon Clifford]] combined the algebras of Hamilton and Grassmann to produce [[Clifford algebra]], where in the case of three-dimensional vectors the bivector produced from two vectors dualizes to a vector, thus reproducing the cross product.

The cross notation and the name "cross product" began with Gibbs. Originally they appeared in privately published notes for his students in 1881 as ''Elements of Vector Analysis''. The utility for mechanics was noted by [[Aleksandr Kotelnikov]]. Gibbs's notation and the name "cross product" later reached a wide audience through [[Vector Analysis]], a textbook by [[Edwin Bidwell Wilson]], a former student. Wilson rearranged material from Gibbs's lectures, together with material from publications by Heaviside, Föppl, and Hamilton. He divided [[vector analysis]] into three parts:
{{quote|First, that which concerns addition and the scalar and vector products of vectors. Second, that which concerns the differential and integral calculus in its relations to scalar and vector functions. Third, that which contains the theory of the linear vector function.}}
Two main kinds of vector multiplications were defined, and they were called as follows:
*The '''direct''', '''scalar''', or '''dot''' product of two vectors
*The '''skew''', '''vector''', or '''cross''' product of two vectors
Several kinds of [[triple product]]s and products of more than three vectors were also examined. The above-mentioned triple product expansion was also included.
| == See also ==
{{Portal|Geometry}}
* [[Bivector]]
* [[Cartesian product]] – A product of two sets
* [[Exterior algebra]]
* [[Multiple cross products]] – Products involving more than three vectors
* [[Pseudovector]]
* [[×]] (the symbol)

| == Notes ==
{{Reflist|group=note}}

| == References ==
{{Reflist}}
* {{Cite book | last=Cajori | first=Florian | author-link=Florian Cajori | title=A History Of Mathematical Notations Volume II | year=1929 | publisher=[[Open Court Publishing Company|Open Court Publishing]] | url=http://www.archive.org/details/historyofmathema027671mbp | isbn=978-0-486-67766-8 | page= 134 | ref=harv | postscript=<!--None-->}}
* [[E. A. Milne]] (1948) [[Vectorial Mechanics]], Chapter 2: Vector Product, pp 11–31, London: [[Methuen Publishing]].
* {{Cite book | last=Wilson | first=Edwin Bidwell | author-link= | title=Vector Analysis: A text-book for the use of students of mathematics and physics, founded upon the lectures of J. Willard Gibbs | year=1901 | publisher=[[Yale University Press]] | isbn=<!--none--> | url=http://www.archive.org/details/117714283 | ref=harv | postscript=<!--None-->}}

| == External links ==
* {{springer|title=Cross product|id=p/c027120}}
* {{Mathworld|title=Cross Product|urlname=CrossProduct}}
* [http://behindtheguesses.blogspot.com/2009/04/dot-and-cross-products.html A quick geometrical derivation and interpretation of cross products]
* [http://uk.arxiv.org/abs/math.la/0204357 Z.K. Silagadze (2002). Multi-dimensional vector product. Journal of Physics. A35, 4949] (it is only possible in 7-D space)
* [http://www.cut-the-knot.org/arithmetic/algebra/RealComplexProducts.shtml Real and Complex Products of Complex Numbers]
* [http://physics.syr.edu/courses/java-suite/crosspro.html An interactive tutorial] created at [[Syracuse University]] - (requires [[Java (programming language)|java]])
* [http://www.cs.berkeley.edu/~wkahan/MathH110/Cross.pdf W. Kahan (2007). Cross-Products and Rotations in Euclidean 2- and 3-Space. University of California, Berkeley (PDF).]

| {{linear algebra}}
| {{DEFAULTSORT:Cross Product}}
[[Category:Bilinear operators]]
[[Category:Binary operations]]
[[Category:Vector calculus]]
[[Category:Analytic geometry]]