# Talk:Dot product

## Regarding bases

In 2D and 3D Euclidean space, the scalar product a.b between vectors a and b was defined for me as |a||b|cos θ. If you use an orthonormal basis then it can be shown that a.b = (a1, a2, a3).(b1, b2, b3) = a1b1 + a2b2 + a3b3, where (x1, x2, x3) means x1e1 + x2e2 + x3e3 where {ei} is an orthonormal basis (and the xi are scalars).

However, for any other basis, the scalar product will NOT turn out to be the sum of the products of the components with respect to the basis, i.e. will not equal a1b1 + a2b2 + a3b3. However this is how the scalar product is defined by the article.

What is the resolution of this apparent contradiction? --81.152.176.225 (talk) 17:46, 30 March 2011 (UTC)
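A quick numeric illustration of the questioner's point (a sketch in Python; the skew basis and the particular vectors are my own illustrative choices, not from the discussion):

```python
# Dot product as a sum of products of components.
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Two vectors in standard (orthonormal) coordinates of R^2.
a = (3.0, 1.0)
b = (1.0, 2.0)
print(dot(a, b))  # 5.0, and this equals |a||b|cos(theta)

# The same vectors re-expressed in the non-orthonormal basis
# e1' = (1, 0), e2' = (1, 1): if v = c1*e1' + c2*e2', then
# c2 = v_y and c1 = v_x - v_y.
def to_skew_basis(v):
    return (v[0] - v[1], v[1])

# Summing products of the new components gives a different number,
# which is exactly the discrepancy the question is about:
print(dot(to_skew_basis(a), to_skew_basis(b)))  # 0.0, not 5.0
```

So the componentwise formula agrees with the geometric value only in an orthonormal basis, which is the apparent contradiction being asked about.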

- Your confusion is between the notions of inner product and dot product (at least as they are used here; their meanings are not always well distinguished). A Euclidean space comes equipped with an inner product. It is related to the norms of vectors and the cosine of the angle as you indicated, although I would say this defines the angle rather than the inner product. As defined here, a dot product is just an algebraic expression in terms of entries, like the determinant of a matrix (but a different expression of course). It happens that Euclidean spaces always have (many) orthonormal bases, and that in terms of the coordinates on a given orthonormal basis the inner product is given by the dot product of the respective coordinates. In terms of coordinates on other bases this is not the case, but that is no contradiction; it is actually more of a surprise that it is given by the very same expression in terms of coordinates of *any* orthonormal basis (in general, changing coordinates also changes the expressions for functions of the coordinates). As Euclidean spaces are often identified with **R**^{n} (in which case the dot product of the vector entries **is** an inner product), the notions are sometimes confounded, but this should not really be done (and I'm not sure to which one of the two "scalar product" should refer; I think inner product would be the most appropriate). The dot product is defined here in such a manner that the phrase "a coefficient of a matrix product is given by the dot product of a row of the left matrix and a column of the right matrix" makes sense; in this case there is no Euclidean space or inner product involved at all. Hope this helps. Marc van Leeuwen (talk) 17:29, 31 March 2011 (UTC)

- The deeper issue here is that a vector space alone does not have any concept of orthogonality or length. For a finite dimensional vector space over the reals, if we take *any* basis, then we can define the scalar product relative to that basis. The basis will be orthonormal relative to the scalar product defined from that basis; the lengths and angles between other vectors are again only relative to the choice of basis, and they can be found using the usual formulas and the scalar product for that basis. — Carl (CBM · talk) 23:24, 30 June 2013 (UTC)

## Remove section Dot product, Generalization to tensors...

...since this section doesn't say much at all. All the info is in the main articles tensor, tensor contraction. It would be better to briefly include various other generalizations such as weight functions, inner products on functions, dot products in matrices and dyadics... which are not mentioned, in addition to complex vectors and tensors. Maschen (talk) 20:24, 25 August 2012 (UTC)

## Recent edits

Following my major revision of the article, I thought I should summarize here the most important points of my edit, and highlight what I think still needs to be done.

- My chief concern in the sequence of edits was the demands of WP:NPOV, that we should give equal weight to the two different definitions of the dot product: the geometric and algebraic one. The earlier revision of the article took for granted that the dot product should be defined algebraically in terms of the components of a vector. As a result, the article had to worry about things like coordinate dependence and orthonormal bases. This is generally a bad way to define the dot product for that reason. That is why in physics/tensor analysis, the dot product is defined in a coordinate-independent manner at the outset, and its algebraic properties are then deduced. Of course, in much of applied mathematics, the dot product is just defined componentwise. That's fine, but it seems like we should present both points of view in parallel. It simplifies much of the discussion I think.
- I have removed the section on Rotations. This was so badly written that it completely failed to convey the relevant point: that the dot product is invariant under rotations. That is something that should be mentioned in the "Properties" section (which also needs to be taken in hand and organized a bit better). It is a trivial consequence of the geometric definition of the dot product (!) that doesn't require matrices to explain.
- I have added a more intuitive proof sketch of the equivalence of the algebraic and geometric definitions using the scalar projection. In doing so, I have unfortunately removed the connection with the law of cosines. This should probably be mentioned somewhere in the article, but it was not particularly enlightening where it was anyway, situated in the middle of a very dull proof that no one was going to read.
- I removed the paragraph from the lead that stated: "In Euclidean space, the inner product of two vectors (expressed in terms of coordinate vectors on an orthonormal basis) is the same as their dot product. However, in more general contexts (e.g., vector spaces that use complex numbers as scalars) the inner and the dot products are typically defined in ways that do not coincide." This is misleading because many many authors define the dot product geometrically in Euclidean space rather than in terms of components in an orthonormal basis.
- I shortened the discussion of the scalar product and geometric definition. This section managed to drag on without saying anything substantive.

--Sławomir Biały (talk) 15:52, 25 November 2012 (UTC)

- If it's ok I added the cosine law to the end of the properties section. Maschen (talk) 17:02, 25 November 2012 (UTC)
- Good idea. Sławomir Biały (talk) 18:09, 25 November 2012 (UTC)

## More recent edits

I edited the properties section without realizing Quondum's edit before:

*"The cross product can also produce a vector (when the inputs are a vector and a pseudovector); is this worth getting into in this article?)"*

I would say no since this article is about the dot product, not the cross...

Hope my changes to the properties section are ok - is it harder or clearer to read? **M∧ Ŝ***c*^{2}*ħ*ε*И*_{τlk} 18:20, 14 December 2012 (UTC)

## IMHO, A simple 2D example of the algebraic formula would make this page useful to a wider audience

IMHO, this page assumes a high level of comfort with mathematical terminology.

As a software writer, using computer graphics, I was looking for confirmation of the simple case of dot product for a vector in (x,y): "(x1,y1) dot (x2,y2) = (x1*x2 + y1*y2)"

I presume the Algebraic section confirms that. But it would take me too long to think through what it is saying. (So I looked elsewhere.)

I suggest this example to anyone who might be thinking "this page could be made more accessible"... ToolmakerSteve (talk) 06:37, 5 February 2013 (UTC)
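For the record, the simple 2D case asked about above can be stated directly (a minimal sketch; the function name `dot2` is just illustrative):

```python
# (x1, y1) . (x2, y2) = x1*x2 + y1*y2, with coordinates in the standard basis.
def dot2(x1, y1, x2, y2):
    return x1 * x2 + y1 * y2

# e.g. (1, 2) . (3, 4) = 1*3 + 2*4 = 11
print(dot2(1, 2, 3, 4))  # 11
```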

## "Sequences of numbers" revert

Hoo-boy. This [revert] somewhat surprises me. It is understood that the dot product may be defined on the Cartesian product of any finite number of copies of **R**. Indeed, it seems that this generalizes to any field. However, "sequences of numbers" does not adequately capture the requirement that the sequences must form a vector space before it can be considered for the dot product; also usually that the two vectors must belong to the *same* vector space. And damningly, this definition leads to explicit basis dependence (and even if one starts with a coordinate space, one can always find a nonstandard basis and thus another sequence of numbers, to which this definition would still apply, but with a basis-dependent result). Thus, however you word it, you need to either imply a standard basis, or restrict the definition to the original Cartesian product/power if you do not want to reference any basis. Perhaps, Sławomir, you are objecting to my reference to bases? These can be removed, but not the reference to a vector space. Every reference I've seen has clearly intended the definition to refer to a vector space, even if said vector space was defined in terms of the Cartesian product of *n* copies of **R**. One disclaimer: my access to references is limited, but certainly in Schaum's Outlines the intended vectorial nature of the "sequence" (and it is not called a sequence) is clear. The article also claims equivalence of the geometric and algebraic definitions, which does not hold as defined in the lead. What makes the idea of arbitrary "sequences of numbers" so notable? — *Quondum* 01:57, 30 June 2013 (UTC)

- If you're referring to vectors, instead of "sequences of numbers", why not "ordered tuple" (of components)? **M∧ Ŝ***c*^{2}*ħ*ε*И*_{τlk} 08:01, 30 June 2013 (UTC)

- My point is that the properties of a vector space are needed, specifically scalar multiplication and addition. These are not implied in the definition of a tuple, which is the same thing as "a sequence". Without these, half the listed properties of the dot product do not follow. Let's get back to basic WP principles: what notable reference defines a dot product on an arbitrary tuple of numbers (even for the case of reals), without the assumption that the tuple belongs to a vector space? And if you find such a reference, we will have to make it clear that two different dot products exist in the literature, one for which properties 2, 3, 4, 6, 7 and 8 *cannot* be derived. — *Quondum* 12:13, 30 June 2013 (UTC)

- I'm afraid I don't understand what specifically the issue is. Is it the wording "sequences"? I'm not married to this wording, and would be happy with *n*-tuples or another equivalent expression. My complaint with the original edit was that there is no need to mention bases (or even vector spaces) to define the dot product. It is specifically defined on **R**^{n}. By *definition*, **R**^{n} is the space of *n*-tuples of real numbers, and the dot product of two *n*-tuples is *a*_{1}*b*_{1} + ... + *a*_{n}*b*_{n}. That's really all there is to it. The dot product is *not* defined on a general vector space. (That's a related notion, called an inner product.) In a general vector space *V*, if you have a basis (equivalently, an isomorphism φ : **R**^{n} → *V*), then you can push forward the dot product along φ to define an inner product on *V*, but this is not the same thing as the dot product, since *V* is not in general a space of tuples (or, even if it is a space of tuples, the inner product defined in this manner might be different from the dot product on that space). Here you can discuss basis-dependence, because the basis is actually relevant, but that's not really a topic for the article.
- I really also don't see what the issue is with the properties section. It's certainly well-known, without bringing in special bases, how to add tuples and multiply them by scalars. Perhaps these notions could be described more clearly than they currently are in the properties section, but I don't see how User:Quondum could possibly be confused by what is intended.
- I also don't understand the final charge that our article somehow diverges from the account in most published sources. Many thousands of sources exist that define the dot product the way this article does (some of them give both definitions; some deduce one as a consequence of the other). See, for instance, the Harvard consortium textbook "Calculus" (the "reform calculus" textbook, which includes both definitions side by side). Volume 1 of Tom Apostol's *Calculus*, a classic book which is well-regarded by many mathematicians, Peter Lax's "Linear algebra", and Paul Halmos's "Finite dimensional vector spaces" all include the algebraic definition and deduce the geometric definition from it. Josiah Willard Gibbs's classic textbook "Vector analysis" starts from the geometric definition and deduces the algebraic definition from it, as does Richard Courant's "Differential and integral calculus". There are slight differences in exposition in these sources, but there is certainly no essential logical departure from the account given in this article. Sławomir Biały (talk) 23:54, 30 June 2013 (UTC)
- The article starts, *In mathematics*, not *In linear algebra*. As such, the concept of tuple covers the Cartesian product of arbitrary sets, and does not carry with it the structure of addition or scalar multiplication of tuples. The texts that you mention, from their very titles, are in the context of linear algebra, where tuples (sequences) could well be taken to refer specifically to vectors, and to form a vector space with the associated properties. — *Quondum* 00:09, 1 July 2013 (UTC)
- I'm sorry, but I am still grasping to see what the point of this discussion actually is. The structure of addition and scalar multiplication is not actually needed to *define* the dot product. You just need a tuple of numbers (let's say real numbers, for the sake of conversation). Many of the aforementioned texts do not even mention vector spaces, which would seem to obviate your point that they all implicitly rely on this more general perspective. But, in any case, what additional sources would you have us consider that might elucidate your point of view on the subject? Is there some particular text in the article that you would like to change and, if so, what would you change it to? Sławomir Biały (talk) 00:16, 1 July 2013 (UTC)
- I am happy to have the dot product defined on tuples of pair-wise multiplication-compatible components. However, said dot product then does not in general have all the properties attributed to it in the article; it can only acquire these if the tuples are additionally constrained to being in a vector space. What I am really driving at is trying to distinguish whether the dot product for the purposes of this article should be this sum-of-products-over-tuples that you evidently prefer, or else is defined specifically on a vector space. If we assume the former, then half the guts of the article does not belong, and the claimed equivalence with the more abstract geometric definition on Euclidean spaces does not hold (because the latter is at best a subclass). — *Quondum* 00:34, 1 July 2013 (UTC)
- Well, I wouldn't say that it's necessarily *my* preference to define the dot product as a sum of products over tuples. But that's certainly the prevailing definition in most sources I have seen. I've already given you a sample. To say that the dot product is "defined specifically on a vector space" is as immaterial as saying that it is "defined specifically on a topological space" or "defined specifically on a measure space". It's defined on **R**^{n}. This space of tuples happens to support operations of addition and scalar multiplication (among other things: it's also a measure space, a topological space, a topological group, and so on). It also happens to be the Cartesian model for Euclidean space, and that's how the two definitions are related. (Incidentally, even the geometric definition does not rely specifically on the vector space structure of Euclidean space, even though Euclidean space does support such a structure in addition to being a topological space, a measure space, and so on.) Saying that "half the guts of the article does not belong, and the claimed equivalence with the more abstract geometric definition on Euclidean spaces does not hold" seems either to be WP:POINTy, or at least to show a disregard for what many high quality sources have to say on the matter. Sławomir Biały (talk) 01:32, 1 July 2013 (UTC)
- I can imagine addition on **R**^{n} that does not correspond to component-wise addition, so unless we specifically define a tuple addition, to me it is not defined. I can imagine it being used to parameterize a map on a manifold, which would make predefined addition and scalar multiplication undesirable. As you suggest, this process is absorbing your and my energy – too much now relative to the potential benefit to WP. I'm not happy that what I consider to be at best non-obvious properties of **R**^{n} are not stated in the article (or anywhere else?). You obviously have the superior mathematical background and experience, but this leaves me concerned by what you feel the reader can be expected to assume. The dot product should be a simple topic, but as you can see, I am having serious difficulty pinning down a rigorous understanding of the starting point. I suggest that I should take a (long) rest from it. — *Quondum* 02:22, 1 July 2013 (UTC)
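The "push forward the dot product along φ" construction discussed above can be sketched concretely (a hypothetical illustration, not article text; the matrix M and the vectors are my own choices, and V is simply modeled as R^2):

```python
import numpy as np

# The dot product lives on tuples; phi maps tuples into V.  Modeling V
# itself as R^2, take phi to be multiplication by an invertible matrix M
# (whose columns are the chosen basis of V).  The pushed-forward inner
# product of v, w in V is then dot(M^{-1} v, M^{-1} w).
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])   # a non-orthonormal basis of V (my choice)
Minv = np.linalg.inv(M)

def induced_inner(v, w):
    return float(np.dot(Minv @ v, Minv @ w))

v, w = np.array([2.0, 3.0]), np.array([4.0, 1.0])
# The induced inner product need not agree with the plain dot product
# of the entries of v and w, which is why the basis matters here:
print(float(np.dot(v, w)), induced_inner(v, w))  # 11.0 vs 5.0
```

With an orthogonal M the two numbers would coincide; the discrepancy above is exactly the basis-dependence under discussion.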

## Constraint on basis transformation for invariance of the dot product

§Properties currently reads:

**Invariance under isometric changes of basis**

Ignoring the dispute about the algebraic definition, the geometric definition is invariant under *any* change of basis, exactly as it should be. Any definition of the dot product that produces a value that could vary with a change in basis is simply insane. Why is there any restriction placed on the change of basis? — *Quondum* 22:32, 30 June 2013 (UTC)

- I am new to this particular article, but it seems to me that if we divide all the components of a vector by 2 (which corresponds to a certain change of basis) then the dot product of that vector with itself will not usually remain the same, if the dot product is defined (as I assume it is) in terms of the components. Really the issue is that the dot product is only defined in that sense relative to a basis, and then "invariance" is just jargon for a theorem that two bases which are isometric with respect to each other (whatever that means...) will give the same values for all dot products. Actually, I am not certain exactly what an "isometric change of basis" is supposed to be. One possibility: if the new basis gives every vector the same length as the old basis, then by the Parallelogram law the two dot products have to be the same. — Carl (CBM · talk) 23:19, 30 June 2013 (UTC)
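The two cases Carl contrasts can be checked numerically (a sketch; the vectors and the angle are arbitrary illustrative values):

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def rotate(v, t):
    # rotate v by angle t (an isometry of the plane)
    c, s = math.cos(t), math.sin(t)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

a, b = (3.0, 1.0), (1.0, 2.0)
t = 0.7  # an arbitrary angle

# Rotating both vectors preserves the dot product...
print(dot(a, b))                        # 5.0
print(dot(rotate(a, t), rotate(b, t)))  # 5.0 up to rounding

# ...but halving all components (also a change of basis) does not:
half = lambda v: (v[0] / 2.0, v[1] / 2.0)
print(dot(half(a), half(b)))            # 1.25
```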

- There used to be a paragraph in the lede that clarified what was going on, but it was removed in this edit: [1]. As that paragraph used to explain, the dot product defined in terms of components in an arbitrary basis is not, in general, the same as the inner product of two vectors in **R**^{n}, which is the special case of the dot product in the standard basis. — Carl (CBM · talk) 23:53, 30 June 2013 (UTC)
- That removed paragraph maybe does not clarify much. However, what you might be reading from it is that the dot product is defined on the coordinate vector representing an abstract vector in terms of a basis. In this picture, the original vector and the basis are not relevant, and we are dealing with a linear transformation of coordinate vectors. The section would have to be updated to *Invariance under isometric linear transformations*, though this is a tautology: an isometry is defined as a transformation that preserves the dot product. — *Quondum* 00:24, 1 July 2013 (UTC)
- Because any coordinate-free definition of the dot product is (as you said originally) preserved under every isomorphism, for that definition of an isometry to make sense, the dot product has to be defined relative to a basis in a way that makes essential use of the specific coordinates. The geometric definition is such a definition: it is just a special case of the coordinate-based dot product relative to a particular basis (the standard basis). — Carl (CBM · talk) 00:30, 1 July 2013 (UTC)
- Okay, so we define the dot product as a sum of products of corresponding (real) coordinates of two tuples. Then you identify a map onto Euclidean space so that the original tuple happens to be equal to the coordinate vector of each Euclidean vector relative to some basis, as well as this basis being orthonormal. Fine, you can do this. The dot product then matches the Euclidean length (we'll ignore inconvenient things such as units for now). The problem comes in with the implicit pull-back (well, I think you call it that) of properties such as scalar multiplication and vector addition: *you added these properties when you did the mapping*. You do not have these properties without the mapping, or without defining them on the original space of tuples. You also do not have the concept of a basis without these properties. — *Quondum* 01:02, 1 July 2013 (UTC)
- If I have **R**^{n} as an abstract vector space (i.e. I have a vector space over **R** of dimension *n* but I know nothing else about the space) then I already have scalar multiplication and vector addition, because I have a vector space. But I have no dot product or inner product, because I just have an abstract vector space. One way to define an inner product on my space is to choose any basis for the space (the definition of a basis is coordinate-free, and a basis exists for each vector space), so that I can write a coordinate tuple for each vector relative to the basis. Then I can define an inner product on the original space to be the dot product on the coordinate tuples of the two vectors. Now I can ask: for which pairs of bases *B*, *B′* do I obtain the same values for all these dot products from *B* and from *B′*? One answer is that if *B* and *B′* give every vector the same length (i.e. the dot product of every vector with itself relative to *B* is the same as the dot product with itself relative to *B′*) then all the dot products will be equal under *B* and *B′*. That is what the original sentence says to me. The key point is that the dot product of two vectors is only defined relative to a choice of basis, when all I know about the vectors is that they belong to an abstract copy of **R**^{n}. — Carl (CBM · talk) 01:31, 1 July 2013 (UTC)
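Carl's construction above can be sketched concretely (an illustrative sketch; the basis B, the rotation Q, and the vectors are my own choices, and the space is modeled as R^2):

```python
import numpy as np

# Pick a basis for a space modeled as R^2 (columns of B), write each
# vector's coordinate tuple relative to that basis, and define
# <v, w>_B = dot(coords_B(v), coords_B(w)).
def inner_rel(B):
    Binv = np.linalg.inv(B)
    return lambda v, w: float(np.dot(Binv @ v, Binv @ w))

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])    # an arbitrary basis (my choice)
Q = np.array([[0.0, -1.0],
              [1.0, 0.0]])    # an orthogonal (rotation) matrix
B2 = B @ Q                    # a second basis; it assigns every vector
                              # the same length as B does

v, w = np.array([2.0, 3.0]), np.array([4.0, 1.0])
# Two bases related by an orthogonal matrix give the same value for
# every dot product, matching the "same lengths" criterion above:
print(inner_rel(B)(v, w), inner_rel(B2)(v, w))
```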

The main issue here is that what I call **elementary** books (e.g. for Calculus 3, or elementary linear algebra) define **R**^{n} to be a particular space, the space of *n*-tuples of real numbers with pointwise operations. But from a **categorical** viewpoint **R**^{n} is any space that is a product **R** × ⋯ × **R** with *n* factors, and this product is *not* unique - like any categorical product it is only unique up to isomorphism. From the viewpoint of the elementary texts **R**^{n} already comes with a standard basis using the components of the tuples; from the categorical viewpoint the choice of basis is arbitrary, because **R**^{n} is only defined up to isomorphism. The original versions of this article were written from the categorical perspective, i.e. [2], where the dot product can be *interpreted* geometrically when an appropriate basis is used. At some point someone edited the article to use more of the elementary viewpoint, where the dot product *is* the geometrical one - but then parts of the previous article that talk about the dot product in other bases no longer fit. — Carl (CBM · talk) 01:43, 1 July 2013 (UTC)

- I don't think this distinction between "elementary" and "categorical" is right. As a categorical product, **R**^{n} is equipped with *n* projection maps into **R**. This is all we need to define the dot product. Sławomir Biały (talk) 02:00, 1 July 2013 (UTC)

- Each individual copy of **R**^{n} has projections, but there is no unique space that "is" **R**^{n} in the category of vector spaces over **R**. So there is an element of arbitrariness in choosing one particular space to call "**R**^{n}", which is the same as the arbitrariness of choosing a basis. But in any case my point is that the original article was written from a viewpoint where a vector space does not come equipped with a basis - including **R**^{n} - while the present version assumes that **R**^{n} is a specific space of tuples. Either approach is fine with me, but it helps if everyone realizes which approach is being taken, because it will affect the way that the material is presented. — Carl (CBM · talk) 02:15, 1 July 2013 (UTC)

- I still think this is a false dichotomy. Surely defining a dot product requires that we work in **R**^{n}, not in a vector space that is abstractly isomorphic to **R**^{n}. The latter would more properly be called an inner product (perhaps an inner product induced by the isomorphism), but not the dot product. See, for instance, the text "Linear algebra" by Greub. He uses the term "standard inner product" to refer to the dot product on **R**^{n} and then discusses the induced inner product in a basis as a completely different case, referring to it as an "inner product". Sławomir Biały (talk) 12:04, 1 July 2013 (UTC)
- We both agree that the "dot" product requires working with tuples of real numbers. The distinction is whether **R**^{n} is defined to be the set of such tuples, as is common in elementary settings, or is instead a more abstract structure. In the former case, because vectors are defined as tuples, we can define the "dot product of two vectors" by referring directly to the tuples, or by identifying the vectors with geometric objects and referring to their length. In the latter case, when a "vector" does not come pre-equipped with components, it is impossible to define the "dot product of two vectors", only "the dot product of two vectors relative to a basis". The latter is exactly the induced inner product on a basis. In this more abstract setting, for example, the "angle" between two vectors is only defined *after* we pick (arbitrarily) an inner product; the angle is not an inherent property of the vectors themselves. (The same distinction would come if we treated **R**^{n} as an affine space: if **R**^{n} is the set of tuples then we know which vector is "really" zero, but if **R**^{n} is presented as an abstract affine space we have no way to tell.) — Carl (CBM · talk) 13:02, 1 July 2013 (UTC)
- Then I think the point is not that there is a problem with the meaning of **R**^{n}, but rather what "Euclidean space" signifies. Then I would agree that there is an elementary notion of Euclidean space (one that satisfies the axioms); an elementary model of Euclidean space: the "Cartesian model" **R**^{n} equipped with the usual metric; and a less elementary model of Euclidean space: an inner product space. Sławomir Biały (talk) 19:44, 1 July 2013 (UTC)

- Back on topic: I have removed the property under discussion. It seems to me that this is already mentioned in a more appropriate context at the end of the section on the equivalence of the two definitions. Sławomir Biały (talk) 12:25, 1 July 2013 (UTC)

## Section on "Scalar projection and the equivalence of the definitions" is jumbled and missing steps

1. It is not clear how "Scalar projection" and "equivalence of the definitions" are related.

2. It is not clear where discussion of scalar projection ends and proof of equivalence begins.

3. It is not clear how the displayed equation follows.

Furthermore, the section does not start with the trig definition and end with the algebraic definition, or vice versa. So where is the equivalence?
I'm addressing this to whoever reverted my changes from 2013-07-27, which were an improvement. You know who you are. Terse mathematics is fine for research papers and dissertations, where the reader is expected to be an expert and math is a common language. But "Dot product" is not advanced mathematics. This subject can and should be readily accessible to first year college students who have never had a course in abstract algebra and probably never will. So please step down off your high horse and join us peasants laboring in the fields. You may not be as smart as you think. — Preceding unsigned comment added by ThomasMcLeod (talk • contribs) 02:01, 2 August 2013 (UTC)
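For concreteness, the equivalence being discussed can be checked numerically (a sketch under my own choice of vectors, not proposed article text): the componentwise value, the |a||b|cos θ value, and |a| times the scalar projection of b onto a should all agree.

```python
import math

# a . b computed three ways: componentwise, as |a||b|cos(theta), and as
# |a| times the scalar projection of b onto a.
a, b = (3.0, 1.0), (1.0, 2.0)

dot_alg = a[0] * b[0] + a[1] * b[1]        # algebraic definition
na, nb = math.hypot(*a), math.hypot(*b)    # lengths |a|, |b|
theta = math.atan2(b[1], b[0]) - math.atan2(a[1], a[0])  # angle between a and b
dot_geo = na * nb * math.cos(theta)        # geometric definition
proj = nb * math.cos(theta)                # scalar projection of b onto a

print(dot_alg, dot_geo, na * proj)  # all three agree (5.0, up to rounding)
```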