Difference between revisions of "Dot product"

{{redirect|Scalar product|the abstract scalar product|Inner product space|the product of a vector and a scalar|Scalar multiplication}}
  
In [[mathematics]], the '''dot product''', or '''scalar product''' (or sometimes '''inner product''' in the context of Euclidean space), is an algebraic operation that takes two equal-length sequences of numbers (usually [[coordinate vector]]s) and returns a single number.  This operation can be defined either algebraically or geometrically.  Algebraically, it is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the [[Euclidean vector#Length|magnitudes]] of the two vectors and the [[cosine]] of the angle between them. The name "dot product" is derived from the [[Dot operator|centered dot]] "&nbsp;'''·'''&nbsp;" that is often used to designate this operation; the alternative name "scalar product" emphasizes the [[scalar (mathematics)|scalar]] (rather than [[Euclidean vector|vectorial]]) nature of the result.
  
In [[Euclidean space]], the [[inner product]] of two vectors (expressed in terms of coordinate vectors on an [[orthonormal basis]]) is the same as their dot product.  However, in more general contexts (e.g., [[vector space]]s that use [[complex number]]s as scalars) the inner and the dot products are typically defined in ways that do not coincide.
In three-dimensional space, the dot product contrasts with the [[cross product]] of two vectors, which produces a [[pseudovector]] as the result.  The dot product is directly related to the cosine of the angle between two vectors in Euclidean space of any number of dimensions.
  
==Definition==
The dot product is often defined in one of two ways: algebraically or geometrically.  The geometric definition is based on the notions of angle and distance (magnitude of vectors).  The equivalence of these two definitions relies on having a [[Cartesian coordinate system]] for Euclidean space.
  
In modern presentations of [[Euclidean geometry]], the points of space are defined in terms of their Cartesian coordinates, and [[Euclidean space]] itself is commonly identified with the [[real coordinate space]] '''R'''<sup>''n''</sup>. In such a presentation, the notions of length and angle are not primitive.  They are defined by means of the dot product:  the length of a vector is defined as the square root of the dot product of the vector with itself, and the [[cosine]] of the (non-oriented) angle between two vectors of length one is defined as their dot product.  So the equivalence of the two definitions of the dot product is a part of the equivalence of the classical and the modern formulations of Euclidean geometry.
  
===Algebraic definition===
The dot product of two vectors {{nowrap|1='''a''' = [''a''<sub>1</sub>, ''a''<sub>2</sub>, ..., ''a''<sub>''n''</sub>]}} and {{nowrap|1='''b''' = [''b''<sub>1</sub>, ''b''<sub>2</sub>, ..., ''b''<sub>''n''</sub>]}} is defined as:<ref name="Lipschutz2009">{{cite book |author= S. Lipschutz, M. Lipson |first1= |title= Linear Algebra (Schaum’s Outlines)|edition= 4th |year= 2009|publisher= McGraw Hill|isbn=978-0-07-154352-1}}</ref>
  
:<math>\mathbf{a}\cdot \mathbf{b} = \sum_{i=1}^n a_ib_i = a_1b_1 + a_2b_2 + \cdots + a_nb_n</math>
  
where Σ denotes [[Summation|summation notation]] and ''n'' is the dimension of the vector space. For instance, in [[three-dimensional space]], the dot product of vectors {{nowrap|[1, 3, −5]}} and {{nowrap|[4, −2, −1]}} is:
 
 
:<math>
\begin{align}
\ [1, 3, -5] \cdot [4, -2, -1] &= (1)(4) + (3)(-2) + (-5)(-1) \\
&= 4 - 6 + 5 \\
&= 3.
\end{align}
</math>
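The sum-of-products formula maps directly onto code. The following is a minimal sketch in plain Python (the function name `dot` is illustrative, not from any particular library):

```python
def dot(a, b):
    """Sum of products of corresponding entries of two equal-length sequences."""
    assert len(a) == len(b), "vectors must have the same dimension"
    return sum(x * y for x, y in zip(a, b))

print(dot([1, 3, -5], [4, -2, -1]))  # (1)(4) + (3)(-2) + (-5)(-1) = 3
```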
  
Given two [[column vector]]s, their dot product can also be obtained by multiplying the [[transpose]] of one vector with the other vector and extracting the unique coefficient of the resulting 1 × 1 matrix. The operation of extracting the coefficient of such a matrix can be written as taking its [[determinant]] or its [[Trace (linear algebra)|trace]] (which is the same thing for 1 × 1 matrices); since in general {{nowrap|1=tr(''AB'') = tr(''BA'')}} whenever ''AB'' or equivalently ''BA'' is a square matrix, one may write

:<math>\mathbf{a} \cdot \mathbf{b}
= \det( \mathbf{a}^{\mathrm{T}}\mathbf{b} )
= \det( \mathbf{b}^{\mathrm{T}}\mathbf{a} )
= \mathrm{tr} ( \mathbf{a}^{\mathrm{T}}\mathbf{b} )
= \mathrm{tr} ( \mathbf{b}^{\mathrm{T}}\mathbf{a} )
= \mathrm{tr} ( \mathbf{a}\mathbf{b}^{\mathrm{T}} )
= \mathrm{tr} ( \mathbf{b}\mathbf{a}^{\mathrm{T}} ). </math>

More generally, the coefficient (''i'', ''j'') of a product of matrices is the dot product of the transpose of row ''i'' of the first matrix and column ''j'' of the second matrix.

===Geometric definition===
In [[Euclidean space]], a [[Euclidean vector]] is a geometrical object that possesses both a magnitude and a direction. A vector can be pictured as an arrow.  Its magnitude is its length, and its direction is the direction the arrow points. The magnitude of a vector '''A''' is denoted by <math>\|\mathbf{A}\|</math>. The dot product of two Euclidean vectors '''A''' and '''B''' is defined by<ref name="Spiegel2009">{{cite book |author= M.R. Spiegel, S. Lipschutz, D. Spellman |title= Vector Analysis (Schaum’s Outlines)|edition= 2nd |year= 2009|publisher= McGraw Hill|isbn=978-0-07-161545-7}}</ref>

:<math>\mathbf A\cdot\mathbf B = \|\mathbf A\|\,\|\mathbf B\|\cos\theta,</math>

where ''θ'' is the [[angle]] between '''A''' and '''B'''.
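Solving the geometric relation <math>\mathbf A\cdot\mathbf B = \|\mathbf A\|\,\|\mathbf B\|\cos\theta</math> for the angle can be sketched in plain Python; the helper names `dot`, `norm`, and `angle` are illustrative:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    """Length of a vector: the square root of its dot product with itself."""
    return math.sqrt(dot(a, a))

def angle(a, b):
    """Angle between two nonzero vectors, from A.B = |A||B|cos(theta)."""
    return math.acos(dot(a, b) / (norm(a) * norm(b)))

# Angle between [1,0,0] and [1,1,0]: approximately 45 degrees.
print(math.degrees(angle([1, 0, 0], [1, 1, 0])))
```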
 
 
 
 
 
  
In particular, if '''A''' and '''B''' are [[orthogonal]], then the angle between them is 90° and

:<math>\mathbf A\cdot\mathbf B=0.</math>

At the other extreme, if they are codirectional, then the angle between them is 0° and

:<math>\mathbf A\cdot\mathbf B = \|\mathbf A\|\,\|\mathbf B\|.</math>

This implies that the dot product of a vector '''A''' with itself is

:<math>\mathbf A\cdot\mathbf A = \|\mathbf A\|^2,</math>

which gives

:<math>\|\mathbf A\| = \sqrt{\mathbf A\cdot\mathbf A},</math>

the formula for the [[Euclidean length]] of the vector.

===Scalar projection and first properties===
[[File:Dot Product.svg|thumb|right|Scalar projection]]
The [[scalar projection]] (or scalar component) of a Euclidean vector '''A''' in the direction of a Euclidean vector '''B''' is given by

:<math>A_B=\|\mathbf A\|\cos\theta,</math>

where ''θ'' is the angle between '''A''' and '''B'''. In terms of the geometric definition of the dot product, this can be rewritten as

:<math>A_B = \mathbf A\cdot\widehat{\mathbf B},</math>

where <math>\widehat{\mathbf B} = \mathbf B/\|\mathbf B\|</math> is the [[unit vector]] in the direction of '''B'''.
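The projection formula can be sketched as follows (plain Python; `scalar_projection` is an illustrative name, and the example vectors are arbitrary):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scalar_projection(a, b):
    """Scalar component of a in the direction of b: a . (b / |b|)."""
    b_norm = math.sqrt(dot(b, b))
    return dot(a, b) / b_norm

# Projecting [3, 4] onto the x-axis direction [1, 0] recovers the x component.
print(scalar_projection([3, 4], [1, 0]))  # 3.0
```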
  
[[File:Dot product distributive law.svg|thumb|right|Distributive law for the dot product]]
The dot product is thus characterized geometrically by<ref>{{Cite book | last1=Arfken | first1=G. B. | last2=Weber | first2=H. J. | title=Mathematical Methods for Physicists | publisher=[[Academic Press]] | location=Boston, MA | edition=5th | isbn=978-0-12-059825-0 | year=2000 | pages=14–15 }}.</ref>

:<math>\mathbf A\cdot\mathbf B = A_B\|\mathbf{B}\|=B_A\|\mathbf{A}\|.</math>

The dot product, defined in this manner, is homogeneous under scaling in each variable, meaning that for any scalar ''α'',

:<math>(\alpha\mathbf{A})\cdot\mathbf B=\alpha(\mathbf A\cdot\mathbf B)=\mathbf A\cdot(\alpha\mathbf B).</math>

It also satisfies a [[distributive law]], meaning that

:<math>\mathbf A\cdot(\mathbf B+\mathbf C) = \mathbf A\cdot\mathbf B+\mathbf A\cdot\mathbf C.</math>

These properties may be summarized by saying that the dot product is a [[bilinear form]]. Moreover, this bilinear form is [[positive definite bilinear form|positive definite]], which means that <math>\mathbf A\cdot \mathbf A</math> is never negative and is zero if and only if <math>\mathbf A = \mathbf 0.</math>

===Equivalence of the definitions===
If '''e'''<sub>1</sub>, ..., '''e'''<sub>''n''</sub> are the [[standard basis|standard basis vectors]] in '''R'''<sup>''n''</sup>, then we may write

:<math>\begin{align}
\mathbf A &= [A_1,\dots,A_n] = \sum_i A_i\mathbf e_i\\
\mathbf B &= [B_1,\dots,B_n] = \sum_i B_i\mathbf e_i.
\end{align}</math>

The vectors '''e'''<sub>''i''</sub> form an [[orthonormal basis]], which means that they have unit length and are at right angles to each other. Since these vectors have unit length,

:<math>\mathbf e_i\cdot\mathbf e_i=1,</math>

and since they form right angles with each other, if ''i''&nbsp;≠&nbsp;''j'',

:<math>\mathbf e_i\cdot\mathbf e_j = 0.</math>

Now applying the distributivity of the geometric version of the dot product gives

:<math>\mathbf A\cdot\mathbf B = \sum_i B_i(\mathbf A\cdot\mathbf e_i) = \sum_i B_iA_i,</math>

where <math>\mathbf A\cdot\mathbf e_i</math> is the scalar projection of '''A''' onto the unit vector '''e'''<sub>''i''</sub>, that is, the component ''A<sub>i</sub>''. This is precisely the algebraic definition of the dot product.  So the (geometric) dot product equals the (algebraic) dot product.
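The homogeneity and distributivity identities can be spot-checked numerically; the vectors and scalar below are arbitrary test values:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a, b, c = [1.0, 2.0, 3.0], [-4.0, 0.5, 2.0], [7.0, -1.0, 0.0]
alpha = 2.5

# Homogeneity: (alpha A) . B == alpha (A . B)
lhs = dot([alpha * x for x in a], b)
assert abs(lhs - alpha * dot(a, b)) < 1e-12

# Distributivity: A . (B + C) == A . B + A . C
lhs = dot(a, [x + y for x, y in zip(b, c)])
assert abs(lhs - (dot(a, b) + dot(a, c))) < 1e-12

print("bilinearity checks passed")
```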
  
==Properties==
The dot product fulfils the following properties if '''a''', '''b''', and '''c''' are real [[vector (geometry)|vectors]] and ''r'' is a [[scalar (mathematics)|scalar]].<ref name="Lipschutz2009" /><ref name="Spiegel2009" />
  
# '''[[Commutative]]:'''
#: <math> \mathbf{a} \cdot \mathbf{b} = \mathbf{b} \cdot \mathbf{a},</math>
#: which follows from the definition (''θ'' is the angle between '''a''' and '''b'''):
#: <math>\mathbf{a}\cdot \mathbf{b} = \|\mathbf{a}\|\|\mathbf{b}\|\cos\theta = \|\mathbf{b}\|\|\mathbf{a}\|\cos\theta = \mathbf{b}\cdot\mathbf{a}.</math>
# '''[[Distributive]] over vector addition:'''
#: <math> \mathbf{a} \cdot (\mathbf{b} + \mathbf{c}) = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \cdot \mathbf{c}. </math>
# '''[[bilinear form|Bilinear]]:'''
#: <math> \mathbf{a} \cdot (r\mathbf{b} + \mathbf{c}) = r(\mathbf{a} \cdot \mathbf{b}) + (\mathbf{a} \cdot \mathbf{c}).</math>
# '''[[Scalar multiplication]]:'''
#: <math> (c_1\mathbf{a}) \cdot (c_2\mathbf{b}) = c_1 c_2 (\mathbf{a} \cdot \mathbf{b}).</math>
# '''[[Orthogonal]]:'''
#: Two non-zero vectors '''a''' and '''b''' are ''orthogonal'' [[if and only if]] {{nowrap|1='''a''' ⋅ '''b''' = 0}}.
# '''No [[cancellation law|cancellation]]:'''
#: Unlike multiplication of ordinary numbers, where if {{nowrap|1=''ab'' = ''ac''}}, then ''b'' always equals ''c'' unless ''a'' is zero, the dot product does not obey the [[cancellation law]]:
#: If {{nowrap|1='''a''' ⋅ '''b''' = '''a''' ⋅ '''c'''}} and {{nowrap|'''a''' ≠ '''0'''}}, then we can write {{nowrap|1='''a''' ⋅ ('''b''' − '''c''') = 0}} by the [[distributive law]]; the result above says this just means that '''a''' is perpendicular to {{nowrap|('''b''' − '''c''')}}, which still allows {{nowrap|('''b''' − '''c''') ≠ '''0'''}}, and therefore {{nowrap|'''b''' ≠ '''c'''}}.
# '''[[Derivative]]:''' If '''a''' and '''b''' are [[function (mathematics)|functions]], then the derivative ([[Notation for differentiation#Lagrange's notation|denoted by a prime]] ′) of {{nowrap|'''a''' ⋅ '''b'''}} is {{nowrap|'''a'''′ ⋅ '''b''' + '''a''' ⋅ '''b'''′}}.
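The failure of cancellation is easy to exhibit with a concrete (arbitrarily chosen) example in which '''b''' and '''c''' differ only in directions perpendicular to '''a''':

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a = [1, 0, 0]
b = [5, 1, 0]
c = [5, -2, 7]   # differs from b only in components perpendicular to a

print(dot(a, b), dot(a, c))  # 5 5  -- equal dot products, yet b != c
assert dot(a, b) == dot(a, c) and b != c
# a is perpendicular to (b - c):
assert dot(a, [x - y for x, y in zip(b, c)]) == 0
```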
  
===Application to the cosine law===
[[File:Dot product cosine rule.svg|100px|thumb|Triangle with vector edges '''a''' and '''b''', separated by angle ''θ''.]]
{{main|law of cosines}}

Given two vectors '''a''' and '''b''' separated by angle ''θ'' (see image right), they form a triangle with a third side {{nowrap|1='''c''' = '''a''' − '''b'''}}. The dot product of this with itself is:

:<math>
\begin{align}
\mathbf{c}\cdot\mathbf{c} & = (\mathbf{a}-\mathbf{b})\cdot(\mathbf{a}-\mathbf{b}) \\
& = \mathbf{a}\cdot\mathbf{a} - \mathbf{a}\cdot\mathbf{b} - \mathbf{b}\cdot\mathbf{a} + \mathbf{b}\cdot\mathbf{b} \\
& = a^2 - \mathbf{a}\cdot\mathbf{b} - \mathbf{a}\cdot\mathbf{b} + b^2 \\
& = a^2 - 2\mathbf{a}\cdot\mathbf{b} + b^2 \\
c^2 & = a^2 + b^2 - 2ab\cos \theta,
\end{align}
</math>

which is the [[law of cosines]].
{{clear}}
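The law of cosines can be spot-checked numerically for an arbitrary pair of vectors:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    return math.sqrt(dot(v, v))

a = [3.0, 0.0]
b = [1.0, 2.0]
c = [x - y for x, y in zip(a, b)]          # c = a - b

theta = math.acos(dot(a, b) / (norm(a) * norm(b)))
lhs = norm(c) ** 2
rhs = norm(a) ** 2 + norm(b) ** 2 - 2 * norm(a) * norm(b) * math.cos(theta)
assert abs(lhs - rhs) < 1e-9               # c^2 == a^2 + b^2 - 2ab cos(theta)
print(lhs, rhs)
```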
 
  
 
==Triple product expansion==
{{Main|Triple product}}

This is a very useful identity (also known as '''Lagrange's formula''') involving the dot and [[cross product|cross products]]. It is written as<ref name="Lipschutz2009" /><ref name="Spiegel2009" />

:<math>\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b}(\mathbf{a}\cdot\mathbf{c}) - \mathbf{c}(\mathbf{a}\cdot\mathbf{b}),</math>

which is [[mnemonic|easier to remember]] as "BAC minus CAB", keeping in mind which vectors are dotted together. This formula is commonly used to simplify vector calculations in [[physics]].
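Lagrange's formula can likewise be verified for sample vectors; the `cross` helper below hard-codes the three-dimensional cross product:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    """Cross product of two 3-dimensional vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

a, b, c = [1, 2, 3], [4, 5, 6], [7, 8, 9]

lhs = cross(a, cross(b, c))
# "BAC minus CAB": b(a.c) - c(a.b)
rhs = [bi * dot(a, c) - ci * dot(a, b) for bi, ci in zip(b, c)]
assert lhs == rhs
print(lhs)  # [-24, -6, 12]
```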
  
==Physics==
In [[physics]], vector magnitude is a [[scalar (physics)|scalar]] in the physical sense, i.e. a [[physical quantity]] independent of the coordinate system, expressed as the [[product (mathematics)|product]] of a [[number|numerical value]] and a [[physical unit]], not just a number. The dot product is also a scalar in this sense, given by the formula, independent of the coordinate system.

Examples include:<ref name="Riley2010">{{cite book |author= K.F. Riley, M.P. Hobson, S.J. Bence |title= Mathematical methods for physics and engineering|edition= 3rd|year= 2010|publisher= Cambridge University Press|isbn=978-0-521-86153-3}}</ref><ref>{{cite book |author= M. Mansfield, C. O’Sullivan|title= Understanding Physics|edition= 4th |year= 2011|publisher= John Wiley & Sons|isbn=978-0-47-0746370}}</ref>

* [[Mechanical work]] is the dot product of [[force]] and [[Displacement (vector)|displacement]] vectors.
* [[Magnetic flux]] is the dot product of the [[magnetic field]] and the [[Area vector|area]] vectors.
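As a concrete (hypothetical) instance of the work example: a 10 N force applied at 60° to a 2 m displacement does |'''F'''||'''d'''|cos 60° = 10 J of work. The numbers below are arbitrary illustration values:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# A 10 N force at 60 degrees to a 2 m displacement along the x-axis.
force = [10 * math.cos(math.radians(60)), 10 * math.sin(math.radians(60))]
displacement = [2.0, 0.0]

work = dot(force, displacement)   # W = F . d, in joules
print(work)                       # approximately 10 J
```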
 
  
 
==Generalizations==
===Complex vectors===
For vectors with [[complex number|complex]] entries, using the given definition of the dot product would lead to quite different properties. For instance the dot product of a vector with itself would be an arbitrary complex number, and could be zero without the vector being the zero vector (such vectors are called [[Isotropic quadratic form|isotropic]]); this in turn would have consequences for notions like length and angle. Properties such as the positive-definite norm can be salvaged at the cost of giving up the symmetric and bilinear properties of the scalar product, through the alternative definition<ref name="Lipschutz2009" />
For vectors with [[complex number|complex]] entries, using the given definition of the dot product would lead to quite different geometric properties. For instance the dot product of a vector with itself can be an arbitrary complex number, and can be zero without the vector being the zero vector (such vectors are called [[Isotropic quadratic form|isotropic]]); this in turn would have severe consequences for notions like length and angle. Many geometric properties can be salvaged, at the cost of giving up the symmetric and bilinear properties of the scalar product, by alternatively defining:<ref>{{cite book |author= S. Lipcshutz, M. Lipson |first1= |title= Linear Algebra (Schaum’s Outlines)|edition= 4th |year= 2009|publisher= McGraw Hill|isbn=978-0-07-154352-1}}</ref>
 
  
 
:<math>\mathbf{a}\cdot \mathbf{b} = \sum{a_i \overline{b_i}} </math>
where <span style="text-decoration: overline">''b<sub>i</sub>''</span> is the [[complex conjugate]] of ''b<sub>i</sub>''. Then the scalar product of any vector with itself is a non-negative real number, and it is nonzero except for the zero vector. This scalar product is thus [[sesquilinear]] rather than bilinear: it is [[conjugate linear]] in '''b''' rather than linear, and it is not symmetric, since
 
:<math> \mathbf{a} \cdot \mathbf{b} = \overline{\mathbf{b} \cdot \mathbf{a}}. </math>
The angle between two complex vectors is then given by
:<math>\cos\theta = \frac{\operatorname{Re}(\mathbf{a}\cdot\mathbf{b})}{\|\mathbf{a}\|\,\|\mathbf{b}\|}.</math>
  
This type of scalar product is nevertheless useful, and leads to the notions of [[Hermitian form]] and of general [[inner product space]]s.
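These properties of the complex scalar product can be illustrated numerically. The following sketch (Python with NumPy; the example vectors are arbitrary) checks that the self-product is a non-negative real number and that swapping the arguments conjugates the result:

```python
import numpy as np

def cdot(x, y):
    # Complex scalar product: the sum of x_i times the conjugate of y_i.
    return np.sum(x * np.conj(y))

# Arbitrary example vectors with complex entries.
a = np.array([1 + 2j, 3 - 1j])
b = np.array([2 - 1j, 0 + 1j])

# The scalar product of a vector with itself is real and non-negative.
self_prod = cdot(a, a)
assert np.isclose(self_prod.imag, 0.0) and self_prod.real >= 0

# Not symmetric: a.b is the complex conjugate of b.a.
assert np.isclose(cdot(a, b), np.conj(cdot(b, a)))
```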
  
===Inner product===
{{main|Inner product space}}
The inner product generalizes the dot product to [[vector space|abstract vector spaces]] over a [[field (mathematics)|field]] of [[scalar (mathematics)|scalars]], either the field of [[real number]]s <math>\mathbb{R}</math> or the field of [[complex number]]s <math>\mathbb{C}</math>. It is usually denoted by <math>\langle\mathbf{a}\, , \mathbf{b}\rangle</math>.

The inner product of two vectors over the field of complex numbers is, in general, a complex number, and is [[Sesquilinear form|sesquilinear]] instead of bilinear. An inner product space is a [[normed vector space]], and the inner product of a vector with itself is real and positive-definite.

===Functions===
The dot product is defined for vectors that have a finite number of [[coordinate vector|entries]]. Thus these vectors can be regarded as [[discrete function]]s: a length-{{mvar|n}} vector {{mvar|u}} is, then, a function with [[domain of a function|domain]] {{math|{''k'' ∈ ℕ ∣ 1 ≤ ''k'' ≤ ''n''}}}, and {{math|''u''<sub>''i''</sub>}} is a notation for the image of {{math|''i''}} by the function/vector {{math|''u''}}.
  
This notion can be generalized to [[continuous function]]s: just as the inner product on vectors uses a sum over corresponding components, the inner product on functions is defined as an integral over some [[Interval (mathematics)|interval]] {{math|''a'' ≤ ''x'' ≤ ''b''}} (also denoted {{math|[''a'', ''b'']}}):<ref name="Lipschutz2009" />
  
:<math>\langle u , v \rangle = \int_a^b u(x)v(x)dx </math>
  
This can be generalized further to [[complex function]]s {{math|''ψ''(''x'')}} and {{math|''χ''(''x'')}}, by analogy with the complex inner product above:<ref name="Lipschutz2009" />
  
:<math>\langle \psi , \chi \rangle = \int_a^b \psi(x)\overline{\chi(x)}dx.</math>
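Such function inner products can be approximated numerically. As an illustrative sketch (Python with NumPy, trapezoidal rule on an arbitrarily fine grid; not part of the article), one can check that sin and cos are orthogonal on the interval [0, π]:

```python
import numpy as np

def inner(u, v, a=0.0, b=np.pi, n=100001):
    # Approximate the inner product <u, v> = integral of u(x)v(x) over [a, b]
    # with the trapezoidal rule on an n-point grid.
    x = np.linspace(a, b, n)
    y = u(x) * v(x)
    dx = (b - a) / (n - 1)
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx

# sin and cos are orthogonal on [0, pi]; <sin, sin> there equals pi/2.
assert abs(inner(np.sin, np.cos)) < 1e-8
assert abs(inner(np.sin, np.sin) - np.pi / 2) < 1e-6
```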
  
 
===Weight function===
Inner products can have a [[weight function]], that is, a function that weights each term of the inner product with a value. For instance, the weighted inner product of real functions {{math|''u''(''x'')}} and {{math|''v''(''x'')}} with respect to a positive weight function {{math|''w''(''x'')}} on an interval {{math|[''a'', ''b'']}} is
:<math>\langle u , v \rangle = \int_a^b w(x)u(x)v(x)dx.</math>
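For finite-dimensional vectors, a weighted inner product simply multiplies each term by the corresponding weight. A minimal sketch (Python with NumPy; the vectors and weights are arbitrary illustrative values):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
w = np.array([0.5, 1.0, 2.0])  # positive weights (illustrative values)

# Weighted inner product: the sum of w_i * u_i * v_i.
weighted = np.sum(w * u * v)
assert np.isclose(weighted, 48.0)  # 0.5*4 + 1.0*10 + 2.0*18
```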
 
  
 
===Dyadics and matrices===
[[Matrix (mathematics)|Matrices]] have the [[Frobenius inner product]], which is analogous to the vector inner product. It is defined as the sum of the products of the corresponding components of two matrices '''A''' and '''B''' of the same size:
:<math>\bold{A}:\bold{B} = \sum_i\sum_j A_{ij}\overline{B_{ij}} = \mathrm{tr}(\mathbf{B}^* \mathbf{A}) = \mathrm{tr}(\mathbf{A} \mathbf{B}^*).</math>
For real matrices,
:<math>\bold{A}:\bold{B} = \sum_i\sum_j A_{ij}B_{ij} = \mathrm{tr}(\mathbf{A}^\mathrm{T} \mathbf{B}) = \mathrm{tr}(\mathbf{A} \mathbf{B}^\mathrm{T}).</math>
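The equivalence of the componentwise sum and the trace formulations can be checked numerically. A sketch (Python with NumPy; random matrices with a fixed, arbitrary seed):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))

# Componentwise definition: sum over i, j of A_ij * conjugate(B_ij).
frob = np.sum(A * np.conj(B))

# Equivalent trace forms; B.conj().T is the conjugate transpose of B.
assert np.isclose(frob, np.trace(A @ B.conj().T))
assert np.isclose(frob, np.trace(B.conj().T @ A))

# For real matrices the conjugate is a no-op and transposes suffice.
Ar, Br = A.real, B.real
assert np.isclose(np.sum(Ar * Br), np.trace(Ar.T @ Br))
```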
  
[[Dyadics]] have a dot product and "double" dot product defined on them, see [[Dyadics#Product of dyadic and dyadic|Dyadics (Product of dyadic and dyadic)]] for their definitions.
  
 
===Tensors===
The inner product between a [[tensor]] of order ''n'' and a tensor of order ''m'' is a tensor of order {{nowrap|''n'' + ''m'' − 2}}; see [[tensor contraction]] for details.
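The order-counting rule can be illustrated with NumPy's tensordot, which here contracts a single pair of indices (the shapes are arbitrary illustrative choices):

```python
import numpy as np

T = np.arange(24.0).reshape(2, 3, 4)  # tensor of order n = 3
M = np.arange(12.0).reshape(4, 3)     # tensor of order m = 2

# Contract the last index of T against the first index of M:
# the result is a tensor of order n + m - 2 = 3.
C = np.tensordot(T, M, axes=([2], [0]))
assert C.ndim == 3
assert C.shape == (2, 3, 3)
```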
 
  
 
==See also==
* [[Cauchy–Schwarz inequality]]
* [[Cross product]]
* [[Matrix multiplication]]
  
 
==References==
{{reflist}}
  
 
==External links==
* {{springer|title=Inner product|id=p/i051240}}
* {{mathworld|urlname=DotProduct|title=Dot product}}
* [http://www.mathreference.com/la,dot.html Explanation of dot product including with complex vectors]
 
[[Category:Vectors]]
[[Category:Analytic geometry]]
 
[[am:ጥላ ብዜት]]
 
[[ar:جداء قياسي]]
 
[[bs:Skalarni proizvod]]
 
[[ca:Producte escalar]]
 
[[cs:Skalární součin]]
 
[[da:Skalarprodukt]]
 
[[de:Skalarprodukt]]
 
[[et:Skalaarkorrutis]]
 
[[es:Producto escalar]]
 
[[eo:Skalara produto]]
 
[[fa:ضرب داخلی]]
 
[[fr:Produit scalaire]]
 
[[gl:Produto escalar]]
 
[[ko:스칼라곱]]
 
[[it:Prodotto scalare]]
 
[[he:מכפלה סקלרית]]
 
[[kk:Скаляр көбейтінді]]
 
[[la:Productum interius]]
 
[[lv:Skalārais reizinājums]]
 
[[lt:Skaliarinė sandauga]]
 
[[hu:Skaláris szorzat]]
 
[[ms:Hasil darab bintik]]
 
[[nl:Inwendig product]]
 
[[ja:ドット積]]
 
[[no:Indreprodukt]]
 
[[nn:Indreprodukt]]
 
[[pl:Iloczyn skalarny]]
 
[[pt:Produto escalar]]
 
[[ru:Скалярное произведение]]
 
[[simple:Dot product]]
 
[[sk:Skalárny súčin]]
 
[[sl:Skalarni produkt]]
 
[[sr:Скаларни производ вектора]]
 
[[sv:Skalärprodukt]]
 
[[tl:Produktong tuldok]]
 
[[th:ผลคูณจุด]]
 
[[tr:Nokta çarpım]]
 
[[uk:Скалярний добуток]]
 
[[vi:Tích vô hướng]]
 
[[zh:数量积]]
 

Revision as of 18:49, 1 February 2014
