Dot product
In mathematics, the dot product, or scalar product (or sometimes inner product in the context of Euclidean space), is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors) and returns a single number obtained by multiplying corresponding entries and then summing those products. The name "dot product" is derived from the centered dot " · " that is often used to designate this operation; the alternative name "scalar product" emphasizes the scalar (rather than vector) nature of the result.
In Euclidean space, the inner product of two vectors (expressed in terms of coordinate vectors on an orthonormal basis) is the same as their dot product. However, in more general contexts (e.g., vector spaces that use complex numbers as scalars) the inner and the dot products are typically defined in ways that do not coincide.
In three-dimensional space, the dot product contrasts with the cross product, which produces a vector as its result. The dot product is directly related to the cosine of the angle between two vectors in Euclidean space of any number of dimensions.
Definition (real vector spaces)
Dot product
The dot product of two vectors a = [a_{1}, a_{2}, ..., a_{n}] and b = [b_{1}, b_{2}, ..., b_{n}] is defined as:^{[1]}^{[2]}

a · b = Σ_{i=1}^{n} a_{i}b_{i} = a_{1}b_{1} + a_{2}b_{2} + ⋯ + a_{n}b_{n}

where Σ denotes summation notation and n is the dimension of the vector space.
 In two-dimensional space, the dot product of vectors [a, b] and [c, d] is ac + bd.
 Similarly, in three-dimensional space, the dot product of vectors [a, b, c] and [d, e, f] is ad + be + cf. For example, if the vectors are [1, 3, −5] and [4, −2, −1], their dot product is:

(1)(4) + (3)(−2) + (−5)(−1) = 4 − 6 + 5 = 3
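The coordinate definition above can be sketched in a few lines of Python; the function name `dot` is illustrative, not part of the article:

```python
# Minimal sketch of the coordinate definition: multiply corresponding
# entries and sum the products.
def dot(a, b):
    assert len(a) == len(b), "vectors must have equal length"
    return sum(x * y for x, y in zip(a, b))

# The worked example from the text: 4 - 6 + 5 = 3
print(dot([1, 3, -5], [4, -2, -1]))  # 3
```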
Given two column vectors, their dot product can also be obtained by multiplying the transpose of one vector with the other vector and extracting the unique coefficient of the resulting 1 × 1 matrix. The operation of extracting the coefficient of such a matrix can be written as taking its determinant or its trace (which is the same thing for 1 × 1 matrices); since in general tr(AB) = tr(BA) whenever AB or equivalently BA is a square matrix, one may write

a · b = a^{T}b = tr(a^{T}b) = tr(ba^{T})
More generally the coefficient (i, j) of a product of matrices is the dot product of the transpose of row i of the first matrix and column j of the second matrix.
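The entry-by-entry view of matrix multiplication can be sketched directly; this is a plain Python illustration, not an optimized implementation:

```python
# Sketch: entry (i, j) of a matrix product is the dot product of row i of
# the first matrix with column j of the second.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matmul(A, B):
    cols = list(zip(*B))  # columns of B as tuples
    return [[dot(row, col) for col in cols] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```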
Inner product
The inner product generalizes the dot product to abstract vector spaces over the real numbers and is usually denoted by ⟨a, b⟩. More precisely, if V is a vector space over ℝ, the inner product is a function

⟨·, ·⟩ : V × V → ℝ

that is symmetric, bilinear, and positive-definite.
Owing to the geometric interpretation of the dot product, the norm ‖a‖ of a vector a in such an inner product space is defined as:^{[3]}

‖a‖ = √⟨a, a⟩

such that it generalizes length, and the angle θ between two vectors a and b by

cos θ = ⟨a, b⟩ / (‖a‖ ‖b‖)
In particular, two vectors are considered orthogonal if their inner product is zero, i.e. ⟨a, b⟩ = 0.
Geometric interpretation
In Euclidean geometry, the dot product of vectors expressed in an orthonormal basis is related to their length and angle. For such a vector a, the dot product a · a is the square of the length (magnitude) of a, denoted by ‖a‖:^{[4]}^{[5]}^{[6]}

a · a = ‖a‖^{2}

If b is another such vector, and θ is the angle between them:

a · b = ‖a‖ ‖b‖ cos θ

This formula can be rearranged to determine the size of the angle between two nonzero vectors:

θ = arccos((a · b) / (‖a‖ ‖b‖))
The Cauchy–Schwarz inequality guarantees that the argument of arccos is valid.
One can also first convert the vectors to unit vectors by dividing by their magnitude:

â = a / ‖a‖,  b̂ = b / ‖b‖

then the angle θ is given by

θ = arccos(â · b̂)
The terminal points of both unit vectors lie on the unit circle, where the values of the six trigonometric functions are defined. After substitution, the first vector component is cosine and the second vector component is sine, i.e. (cos x, sin x) for some angle x. The dot product of the two unit vectors then takes (cos x, sin x) and (cos y, sin y) for angles x and y and returns

cos x cos y + sin x sin y = cos(x − y)

where x − y = θ.
Note that the above method will always determine the smaller of the two possible angles: it will never be more than 180°, as the value of the cosine is symmetric between the upper and lower halves of the unit circle.
As the cosine of 90° is zero, the dot product of two orthogonal vectors is always zero. Moreover, two nonzero vectors are orthogonal if and only if their dot product is zero. This property provides a simple method to test for orthogonality.
Sometimes these properties are also used for "defining" the dot product, especially in 2 and 3 dimensions; this definition is equivalent to the above one. For higher dimensions the formula can be used to define the concept of angle.
The geometric properties rely on the basis being orthonormal, i.e. composed of pairwise perpendicular vectors with unit length.
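The angle formula above can be sketched in Python; the helper names `dot`, `norm`, and `angle` are illustrative:

```python
import math

# Sketch of the angle formula: theta = arccos(a.b / (|a| |b|)).
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def angle(a, b):
    return math.acos(dot(a, b) / (norm(a) * norm(b)))

# Orthogonal vectors have a zero dot product, hence a 90 degree angle.
print(math.degrees(angle([1, 0], [0, 1])))  # 90.0
```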
Scalar projection
If both a and b have length equal to unity (i.e., they are unit vectors), their dot product simply gives the cosine of the angle between them.^{[7]}
If only b is a unit vector, then the dot product a·b gives ‖a‖ cos θ, i.e., the magnitude of the projection of a in the direction of b, with a minus sign if the direction is opposite. This is called the scalar projection of a onto b, or scalar component of a in the direction of b (see figure). This property of the dot product has several useful applications (for instance, see next section).
If neither a nor b is a unit vector, then the magnitude of the projection of a in the direction of b is a · b / ‖b‖, as the unit vector in the direction of b is b / ‖b‖.
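A short sketch of the scalar projection formula a · b / ‖b‖; the function name is illustrative:

```python
import math

# Sketch of the scalar projection of a onto b: a.b / |b|.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scalar_projection(a, b):
    return dot(a, b) / math.sqrt(dot(b, b))

# Component of [3, 4] along the x-axis direction [1, 0]
print(scalar_projection([3, 4], [1, 0]))  # 3.0
```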
Rotation
When the orthonormal basis in which a vector a is represented is rotated, the coordinates of a in the new basis are obtained by multiplying a by a rotation matrix R. This matrix multiplication is just a compact representation of a sequence of dot products.
For instance, let
 B_{1} = {x, y, z} and B_{2} = {u, v, w} be two different orthonormal bases of the same space ℝ^{3}, with B_{2} obtained by just rotating B_{1},
 a_{1} = (a_{x}, a_{y}, a_{z}) represent vector a in terms of B_{1},
 a_{2} = (a_{u}, a_{v}, a_{w}) represent the same vector in terms of the rotated basis B_{2},
 u_{1}, v_{1}, w_{1} be the rotated basis vectors u, v, w represented in terms of B_{1}.
Then the rotation from B_{1} to B_{2} is performed as follows:

a_{2} = Ra_{1}
Notice that the rotation matrix R is assembled by using the rotated basis vectors u_{1}, v_{1}, w_{1} as its rows, and these vectors are unit vectors. By definition, Ra_{1} consists of a sequence of dot products between each of the three rows of R and vector a_{1}. Each of these dot products determines a scalar component of a in the direction of a rotated basis vector (see previous section).
If a_{1} is a row vector, rather than a column vector, then R must contain the rotated basis vectors in its columns, and must postmultiply a_{1}:

a_{2} = a_{1}R
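The row-by-row dot products described above can be sketched in the two-dimensional analogue; this is an illustration under the passive-rotation convention (rows of R are the rotated basis vectors), not code from the article:

```python
import math

# Sketch: each new coordinate is the dot product of one row of R
# (a rotated basis vector) with the old coordinate vector.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rotate2d(a, theta):
    # Rows are the rotated basis vectors expressed in the old basis.
    R = [[math.cos(theta), math.sin(theta)],
         [-math.sin(theta), math.cos(theta)]]
    return [dot(row, a) for row in R]

# Rotating the basis by 90 degrees: the point (1, 0) gets new
# coordinates (0, -1) in the rotated basis.
x, y = rotate2d([1, 0], math.pi / 2)
print(round(x, 10), round(y, 10))
```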
Physics
In physics, vector magnitude is a scalar in the physical sense, i.e. a physical quantity independent of the coordinate system, expressed as the product of a numerical value and a physical unit, not just a number. The dot product is also a scalar in this sense, given by the formula, independent of the coordinate system. Examples include:^{[8]}^{[9]}
 Mechanical work is the dot product of force and displacement vectors.
 Magnetic flux is the dot product of the magnetic field and the area vectors.
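The work example can be computed directly; the force and displacement values below are made up for illustration:

```python
# Illustrative physics use: mechanical work W = F . d (hypothetical values).
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

force = [10.0, 0.0, 5.0]        # newtons
displacement = [2.0, 3.0, 0.0]  # metres
work = dot(force, displacement)
print(work)  # 20.0 joules
```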
Properties
The following properties hold if a, b, and c are real vectors and r is a scalar.^{[10]}^{[11]}
The dot product is commutative:

a • b = b • a

The dot product is distributive over vector addition:

a • (b + c) = a • b + a • c

The dot product is bilinear:

a • (rb + c) = r(a • b) + (a • c)

When multiplied by a scalar value, the dot product satisfies:

(c_{1}a) • (c_{2}b) = c_{1}c_{2}(a • b)

(these last two properties follow from the first two).
Two nonzero vectors a and b are orthogonal if and only if a • b = 0.
Unlike multiplication of ordinary numbers, where if ab = ac, then b always equals c unless a is zero, the dot product does not obey the cancellation law:
 If a • b = a • c and a ≠ 0, then we can write: a • (b − c) = 0 by the distributive law; the result above says this just means that a is perpendicular to (b − c), which still allows (b − c) ≠ 0, and therefore b ≠ c.
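A concrete counterexample to cancellation is easy to construct: pick b and c whose difference is perpendicular to a. The vectors below are illustrative:

```python
# Cancellation fails: a.b = a.c with b != c, because a is
# perpendicular to (b - c).
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a = [1, 0]
b = [5, 2]
c = [5, 7]  # b - c = [0, -5] is perpendicular to a
print(dot(a, b), dot(a, c))  # 5 5, yet b != c
```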
Provided that the basis is orthonormal, the dot product is invariant under isometric changes of the basis: rotations, reflections, and combinations, keeping the origin fixed. The above mentioned geometric interpretation relies on this property. In other words, for an orthonormal space with any number of dimensions, the dot product is invariant under a coordinate transformation based on an orthogonal matrix. This corresponds to the following two conditions:
 The new basis is again orthonormal (i.e., it is orthonormal expressed in the old one).
 The new base vectors have the same length as the old ones (i.e., unit length in terms of the old basis).
If a and b are differentiable vector-valued functions, then the derivative of a • b is a′ • b + a • b′.
Triple product expansion
This is a very useful identity (also known as Lagrange's formula) involving the dot and cross products. It is written as:^{[12]}^{[13]}

a × (b × c) = b(a • c) − c(a • b)

which is easier to remember as "BAC minus CAB", keeping in mind which vectors are dotted together. This formula is commonly used to simplify vector calculations in physics.
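The "BAC minus CAB" identity can be checked numerically; the helper names and test vectors below are illustrative:

```python
# Numeric check of Lagrange's formula a x (b x c) = b(a.c) - c(a.b).
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

a, b, c = [1, 2, 3], [4, 5, 6], [7, 8, 9]
lhs = cross(a, cross(b, c))
rhs = [bi * dot(a, c) - ci * dot(a, b) for bi, ci in zip(b, c)]
print(lhs == rhs)  # True
```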
Proof of the geometric interpretation
Consider the element of ℝ^{n}

v = v_{1}ê_{1} + v_{2}ê_{2} + ... + v_{n}ê_{n}

Repeated application of the Pythagorean theorem yields for its length ‖v‖

‖v‖^{2} = v_{1}^{2} + v_{2}^{2} + ... + v_{n}^{2}

But this is the same as

v • v = v_{1}^{2} + v_{2}^{2} + ... + v_{n}^{2}

so we conclude that taking the dot product of a vector v with itself yields the squared length of the vector.
 v • v = ‖v‖^{2}   (Lemma 1)
Now consider two vectors a and b extending from the origin, separated by an angle θ. A third vector c may be defined as

c ≝ a − b

creating a triangle with sides a, b, and c. According to the law of cosines, we have

‖c‖^{2} = ‖a‖^{2} + ‖b‖^{2} − 2‖a‖‖b‖ cos θ

Substituting dot products for the squared lengths according to Lemma 1, we get

c • c = a • a + b • b − 2‖a‖‖b‖ cos θ   (1)

But as c ≡ a − b, we also have

c • c = (a − b) • (a − b)

which, according to the distributive law, expands to

c • c = a • a − 2(a • b) + b • b   (2)

Merging the two c • c equations, (1) and (2), we obtain

a • a + b • b − 2‖a‖‖b‖ cos θ = a • a − 2(a • b) + b • b

Subtracting a • a + b • b from both sides and dividing by −2 leaves

a • b = ‖a‖‖b‖ cos θ
Generalizations
Complex vectors
For vectors with complex entries, using the given definition of the dot product would lead to quite different geometric properties. For instance, the dot product of a vector with itself can be an arbitrary complex number, and can be zero without the vector being the zero vector (such vectors are called isotropic); this in turn would have severe consequences for notions like length and angle. Many geometric properties can be salvaged, at the cost of giving up the symmetric and bilinear properties of the scalar product, by alternatively defining:^{[14]}

a · b = Σ_{i} a_{i}b_{i}^{∗}

where b_{i}^{∗} is the complex conjugate of b_{i}. Then the scalar product of any vector with itself is a nonnegative real number, and it is nonzero except for the zero vector. However, this scalar product is not linear in b (but rather conjugate linear), and the scalar product is not symmetric either, since

a · b = (b · a)^{∗}

The angle between two complex vectors is then given by

cos θ = Re(a · b) / (‖a‖‖b‖)
This type of scalar product is nevertheless quite useful, and leads to the notions of Hermitian form and of general inner product spaces.
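The conjugate-linear scalar product maps naturally onto Python's built-in complex numbers; the function name `cdot` and the sample vectors are illustrative:

```python
# Sketch of the conjugate-linear scalar product: sum of a_i * conj(b_i).
# Note it is not symmetric: swapping arguments conjugates the result.
def cdot(a, b):
    return sum(x * y.conjugate() for x, y in zip(a, b))

a = [1 + 2j, 3j]
b = [2 - 1j, 1 + 1j]
print(cdot(a, b), cdot(b, a))  # complex conjugates of each other
print(cdot(a, a))              # a nonnegative real number
```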
Functions
Vectors have a discrete number of entries, that is, an integer correspondence between natural-number indices and the entries.
A function f(x) is the continuous analogue: an uncountably infinite number of entries, where the correspondence is between the variable x and the value f(x) (see domain of a function for details).
Just as the inner product on vectors uses a sum over corresponding components, the inner product on functions is defined as an integral over some interval. For example, the inner product of two real continuous functions u(x), v(x) may be defined on the interval a ≤ x ≤ b (also denoted [a, b]):^{[15]}

⟨u, v⟩ = ∫_{a}^{b} u(x)v(x) dx
This can be generalized to complex functions ψ(x) and χ(x), by analogy with the complex inner product above:^{[16]}

⟨ψ, χ⟩ = ∫_{a}^{b} ψ(x)χ(x)^{∗} dx
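The integral inner product can be approximated numerically, for example with a midpoint Riemann sum; the function name `inner` and the step count are illustrative choices:

```python
import math

# Sketch: approximate <u, v> = integral of u(x) v(x) dx over [a, b]
# with a midpoint Riemann sum of n slices.
def inner(u, v, a, b, n=10000):
    h = (b - a) / n
    return sum(u(a + (i + 0.5) * h) * v(a + (i + 0.5) * h)
               for i in range(n)) * h

# sin and cos are orthogonal on [0, 2*pi] under this inner product.
print(round(inner(math.sin, math.cos, 0.0, 2.0 * math.pi), 6))  # 0.0
```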
Weight function
Inner products can have a weight function, i.e. a function that weights each term of the inner product with a value. Explicitly, the inner product of functions u(x) and v(x) with respect to the weight function r(x) > 0 is

⟨u, v⟩ = ∫_{a}^{b} r(x)u(x)v(x) dx
Dyadics and matrices
Matrices have the Frobenius inner product, which is analogous to the vector inner product. It is defined as the sum of the products of the corresponding components of two matrices A and B having the same size:

A : B = Σ_{i} Σ_{j} A_{ij}B_{ij}
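The Frobenius inner product is a direct two-index analogue of the vector case; the function name and sample matrices below are illustrative:

```python
# Sketch of the Frobenius inner product: sum of products of
# corresponding entries of two same-size matrices.
def frobenius(A, B):
    return sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(frobenius(A, B))  # 5 + 12 + 21 + 32 = 70
```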
Dyadics have a dot product and a "double" dot product defined on them; see Dyadics for their definitions.
Tensors
The inner product between a tensor of order n and a tensor of order m is a tensor of order n + m − 2, see tensor contraction for details.
References
 ↑ {{#invoke:citation/CS1citation CitationClass=book }}
 ↑ {{#invoke:citation/CS1citation CitationClass=book }}
 ↑ {{#invoke:citation/CS1citation CitationClass=book }}
 ↑ {{#invoke:citation/CS1citation CitationClass=book }}
 ↑ {{#invoke:citation/CS1citation CitationClass=book }}
 ↑ {{#invoke:citation/CS1citation CitationClass=book }}
 ↑ {{#invoke:citation/CS1citation CitationClass=book }}
 ↑ {{#invoke:citation/CS1citation CitationClass=book }}
 ↑ {{#invoke:citation/CS1citation CitationClass=book }}
 ↑ {{#invoke:citation/CS1citation CitationClass=book }}
 ↑ {{#invoke:citation/CS1citation CitationClass=book }}
 ↑ {{#invoke:citation/CS1citation CitationClass=book }}
 ↑ {{#invoke:citation/CS1citation CitationClass=book }}
 ↑ {{#invoke:citation/CS1citation CitationClass=book }}
 ↑ {{#invoke:citation/CS1citation CitationClass=book }}
 ↑ {{#invoke:citation/CS1citation CitationClass=book }}
External links
 Weisstein, Eric W., "Dot product", MathWorld.
 Explanation of the dot product, including complex vectors
 "Dot Product" by Bruce Torrence, Wolfram Demonstrations Project, 2007.