Talk:Linear map
Result of using matrix representation
The result gives coordinates in the destination vector space. It is important to stress this, as the original linear map f(v) can be an m-by-n matrix applied to a vector space with n-by-1 matrices as elements, which results in an m-by-1 matrix. Applying the matrix from a matrix representation of this linear map also gives an m-by-1 matrix, but this one is not f(v). 193.174.53.122 (talk) 10:31, 25 February 2009 (UTC)
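The distinction above can be made concrete with a small NumPy sketch (my own illustration; the matrices A and B are made up for the example). The representation in a non-standard basis of the destination space yields a coordinate column, not the image vector itself:

```python
import numpy as np

# The linear map f itself sends vectors of R^3 (3-by-1 columns)
# to vectors of R^2 (2-by-1 columns): f(v) = A @ v.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

v = np.array([[1.0], [1.0], [1.0]])
fv = A @ v                     # the actual image vector f(v)

# A matrix representation of f with respect to a different basis of the
# destination space produces a 2-by-1 column of coordinates, not f(v).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])     # columns: a new basis of R^2
M = np.linalg.inv(B) @ A       # representation of f in that basis
coords = M @ v                 # coordinates of f(v) in basis B

print(np.allclose(coords, fv))      # False: same shape, different vector
print(np.allclose(B @ coords, fv))  # True: reassembling recovers f(v)
```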
Definition and first consequences
The claim "For example, the conjugation of complex numbers is an R-linear map C → C, but it is not C-linear" needs a proof. The claim should be removed if it's not proved right there. — Preceding unsigned comment added by ThinkerFeeler (talk • contribs) 06:58, 23 January 2012 (UTC)
Mapping of closed shapes
From my browsing I can't find any page which talks about the area of closed shapes under a linear transformation. Unless anyone could direct me to such a page I believe that this article would be the place to put such a comment. I do know that the area of the image of a closed shape under a linear transformation T:(x,y)→(ax+by,cx+dy) is:
- Area of the original shape × |ad − bc| —Preceding unsigned comment added by 124.168.226.120 (talk) 09:54, 14 November 2008 (UTC)
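For what it's worth, the claim that T:(x,y)→(ax+by,cx+dy) scales areas by |ad − bc| can be checked numerically. A quick sketch of my own, applying the shoelace formula to the image of a unit square under made-up coefficients:

```python
def polygon_area(pts):
    """Shoelace formula for a simple polygon given as (x, y) tuples."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

a, b, c, d = 2.0, 1.0, 0.0, 3.0                     # arbitrary example
square = [(0, 0), (1, 0), (1, 1), (0, 1)]           # unit square, area 1
image = [(a * x + b * y, c * x + d * y) for x, y in square]

print(polygon_area(image))      # 6.0, which equals |a*d - b*c| * 1
```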
Preliminary remarks
Endomorphism f from V into V, where V is a vector space. Is it necessary for f to be a one-to-one mapping?
- Nope.
Part about solving a system of equations
("f(x)=0 is called...") should probably be moved somewhere else.
What about other articles about that topic?
dimension formula only in finite dimensions?
the article states that the formula dim ker f + dim im f = dim V is only valid if V is finite dimensional. Is that really true? Can someone give me a counterexample? -July 2, 2005 01:19 (UTC)
- The formula either doesn't make sense (initially) or isn't very useful (if you define addition on infinite cardinals somehow) in the infinite dimensional case. By and large, you're likely to get formulae such as ∞ + ∞ = ∞, which are not very useful. If you're really prepared for some heavy mathematics, however, you might want to check out Fredholm operators, which have an associated index, a finite number that in the case where V, W are finite dimensional is just dim V − dim W (by the formula in the article). The index has many mystical properties I wot not of. Ben 13:02, 10 August 2006 (UTC)
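The finite-dimensional formula dim ker f + dim im f = dim V is easy to check numerically. A small sketch of my own (the matrix is made up for the example) using NumPy's rank computation:

```python
import numpy as np

# A made-up linear map f: R^5 -> R^3 with a deliberate rank deficiency:
# the third row is row1 + row2, so the rank is 2, not 3.
A = np.array([[1., 2., 0., 1., 0.],
              [0., 1., 1., 0., 0.],
              [1., 3., 1., 1., 0.]])

rank = np.linalg.matrix_rank(A)    # dim im f
nullity = A.shape[1] - rank        # dim ker f, by rank-nullity

print(rank, nullity)               # 2 3, and 2 + 3 = 5 = dim V
```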
Clarification of my move
I moved a paragraph to the bottom of the article, with the edit summary
- moved the continuity section to the very bottom. Linear transformations are about vector spaces. For continuity, you need a Banach space. So, continuity is not the primary concern of this article.
Let me clarify myself. First of all, I definitely agree with what the paragraph says, that linear transformations are not necessarily continuous. But the problem is the following. Linear transformations are about vector spaces. That vector space can be the reals, the complex numbers, a vector space over a field of finite characteristic, over a field which is a Galois extension, etc. All that matters in a vector space is addition and multiplication by a scalar.
As such, inserting into the middle of that article a paragraph about operators on a Banach space (or if you wish, a linear topological space) was wrong. It distracts the reader from the main point, which is the linearity, addition and multiplication by scalars. To talk about continuity you need topology, you need a norm. It is a totally different realm from that of a vector space. That's why inserting that continuity paragraph was out of place. It has of course to be mentioned somewhere, but since all the other topics in this article are closely bound together, I put this peripheral one at the bottom. Oleg Alexandrov 18:34, 24 September 2005 (UTC)
- Ok, that is fine.--Patrick 00:07, 25 September 2005 (UTC)
Examples
Why are the examples talking about eigenvectors and eigenvalues? Is this not a distraction? --anon
- I agree. I cut that out. Oleg Alexandrov (talk) 15:32, 20 July 2006 (UTC)
Example Suggestion x->x+1 is a non-linear map
Currently, the following is in the examples section:
The map x to x^2 is nonlinear because it does not satisfy homogeneity or additivity. Would it not be more "enlightening" to add the example x → x + 1?
I suggest this because x^2 is automatically parsed as non-linear by anyone who has plotted a function in 2D space; however, x + 1 is a nice line in 2D space, yet is a non-linear transformation. Raazer 19:48, 12 October 2007 (UTC)
- Yeah, it's got me fooled. Why is it non-linear?--ProperFraction (talk) 02:08, 8 August 2008 (UTC)
- If f(x) = x+1, then f(0) + f(0) is not equal to f(0 + 0). JackSchmidt (talk) 03:08, 8 August 2008 (UTC)
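JackSchmidt's counterexample can be checked in a couple of lines (my own sketch):

```python
# f(x) = x + 1 is an affine map, not a linear one: additivity fails.
f = lambda x: x + 1

print(f(0) + f(0))   # 2
print(f(0 + 0))      # 1 -- differs from the line above, so f is not linear
# Homogeneity fails too: f(2*3) = 7, but 2*f(3) = 8.
```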
Title
I wonder if this article should be entitled Linear map rather than Linear transformation.
Both terms are in common use, but the term transformation suggests a specialization to the case of endomorphisms (see function), just as the term Linear operator suggests an infinite dimensional context. Geometry guy 20:10, 9 February 2007 (UTC)
No objections received, so I've moved it, fixed the double redirects, and edited the article. Geometry guy 19:25, 13 February 2007 (UTC)
- I object! The traditional terminology is "linear transformation" and this is still very widely used. Also, the term "map" could just as well be "mapping". How to choose one? The term "operator" usually means it is an endomorphism. The term "transformation" does not. The article should be moved back to the old title. I say this based on teaching linear algebra.
- But are there other opinions, pro or con? Zaslav (talk) 07:28, 26 February 2011 (UTC)
Examples question
the example matrix of "rotation by 90 degrees counterclockwise" under "Examples of linear transformation matrices" -- doesn't this example rotate 90 degrees clockwise? Or is it meant to work on the computer-screen type of coordinates, where the +y axis points down?
Thanks for any clarification; I'm just starting to learn about this stuff. This is a great resource.
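For what it's worth, here is a quick check (my own sketch) of what the standard 90-degree counterclockwise rotation matrix does to the +x basis vector, assuming the usual mathematical y-up orientation:

```python
import numpy as np

# Rotation by 90 degrees counterclockwise in standard (y-up) coordinates.
R = np.array([[0., -1.],
              [1.,  0.]])

e_x = np.array([1., 0.])
print(R @ e_x)   # [0. 1.]: the +x axis maps to the +y axis,
                 # i.e. a counterclockwise turn when +y points up.
# On screen coordinates with +y pointing down, the same matrix *looks*
# like a clockwise rotation, which is the likely source of confusion.
```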
Modules
Why vector spaces? I wandered over to this page from preadditive category which tells me that "composition of morphisms is bilinear over the integers"; and that page says it means "linear in both of its arguments." Of course, the integers are not a field, and the abelian group of homomorphisms do not form a vector space. On the other hand, any abelian group is a Z-module. --192.75.48.150 20:47, 3 August 2007 (UTC)
Clarification Request
I'm taking multivariate calc at the moment, and our book uses the term 'Linear Transformation' as synonymous with 'Linear Function'. Unfortunately, it gives no explanation as to WHY linear functions are also called transformations. Could someone in the know please make an addition to address this? Celemourn 14:37, 4 October 2007 (UTC)
An example of a mapping where the additive property is satisfied but not the homogeneity
I've discussed this with some of the professors at my university, asking if they could come up with a mapping satisfying:
f(x+y)=f(x)+f(y)
but with f(a*x) != a*f(x)
They couldn't think of one, but I assume that there is one? Snailwalker | talk 16:31, 7 March 2008 (UTC)
The complex numbers form a vector space over themselves, so take f: C -> C to be complex conjugation. Then f(a+b) = f(a) + f(b), but −1 = f(i*i) != i*f(i) = 1. JackSchmidt (talk) 16:47, 7 March 2008 (UTC)
- Thanks for the help. Snailwalker | talk 17:12, 7 March 2008 (UTC)
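Snailwalker's question and JackSchmidt's answer can be verified directly with Python's built-in complex numbers (my own sketch):

```python
# Conjugation on C: additive (indeed R-linear), but not C-linear,
# because homogeneity fails for complex scalars.
f = lambda z: z.conjugate()

a, b = 1 + 2j, 3 - 1j                 # arbitrary example values
print(f(a + b) == f(a) + f(b))        # True : additivity holds
print(f(1j * a) == 1j * f(a))         # False: homogeneity over C fails
```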
Removed unclear sentence about alternative name
- "It immediately follows from the definition that f(0) = 0. Hence linear maps are sometimes called homogeneous linear maps (see linear function)."
The logic of the second part of this sentence is not clear. Why should they call homogeneous something which is by definition both additive and homogeneous? It seems (incompletely) redundant. Either we call it "additive homogeneous map" or simply "linear map". Are you sure that the expression "homogeneous linear map" is used in the literature? Paolo.dL (talk) 18:38, 28 June 2008 (UTC)
Terminology wrong!
I have known linear algebra for decades, and have seen many books on the subject. Every book has used "transformation" as the main term, with "map" used informally later (presumably because the author gets too lazy to write "transformation"). According to Google, "linear map" is more common than "linear transformation", but again, I am inclined to attribute this to laziness or the existence of other uses of "linear map" (i.e., a linear function specified by a linear polynomial). I move that the main term be changed to "transformation", as well as the name of the article, but maybe my bias is cultural.
More importantly, the statement that a "transformation" is often from a set to itself is totally bogus. The literature is clear that "transformation" is synonymous with "function" or "map", but "operator" has a tendency to mean a function with inputs and outputs drawn from the same set. Not only does my own experience say this, but so does the American Heritage Dictionary[1]. Can anyone back me up here? I think we really need to change this in the introduction, and I also posted something to the "operator" wikipedia page, which I think is also factually incorrect. —Preceding unsigned comment added by B2smith (talk • contribs) 23:26, 9 December 2008 (UTC)
- The terminology is not wrong, but correct. Which of the two terms you prefer is a matter of personal taste, but either is OK. I don't think a lengthy (and most likely rather subjective) discussion of which term might be slightly "more common" or "more appropriate" is of any use for the article.--Kmhkmh (talk) 00:40, 2 March 2009 (UTC)
- The first commenter is basically correct in both points. The usual term is "linear transformation". That is the most commonly recognized term. (Google is certainly no authority.) There are indeed some people who prefer "map" and it is used, so it's not "wrong". Nevertheless the title should be changed back on the ground of greatest familiarity. (I also teach linear algebra.) Zaslav (talk) 07:32, 26 February 2011 (UTC)
- "Linear transformation" is unnecessarily long, old fashioned and misleading: it is sometimes used to mean that the domain and codomain are the same, sometimes not. If you do not believe me, search for "linear transformation of" in Google books. You will also find lots of informal usages such as "linear transformation of a random variable", "linear transformation of the Lebesgue integral on R". Many of the textbooks are old with usages such as "linear transformation of V into W" which may suggest injectivity when it is not intended. In contrast, many modern textbooks, as well as some modern classics, use "linear map". See:
- A. Knapp, Basic Algebra
- K. Janich, Linear Algebra
- H. E. Rose, Linear Algebra, a pure mathematical approach
- S. Lang, Introduction to linear algebra
- S. Axler, Linear algebra done right
- A good mathematician is a lazy mathematician: far from being informal, the term "linear map" is precise: it is a map which is linear, nothing more, nothing less. Geometry guy 14:23, 26 February 2011 (UTC)
- It's hard not to think Geometry guy's last remarks are pretty silly (as well as POV). Unless he really is The Ultimate Authority. Geometry guy, please explain what makes your opinions authoritative. This is not a snide remark, I really want to know. My credentials: I have been a working mathematician for several decades. Notice that I am not claiming that makes me an Authority. It means I have a lot of experience. Zaslav (talk) 03:29, 27 February 2011 (UTC)
- I am the only editor commenting here (so far) with the ability to refer to reliable secondary sources. Geometry guy 03:40, 27 February 2011 (UTC)
- There are plenty of examples of books using either term, see these Google Books searches:
- However, it might be instructive to read some of the books which mention both terms:
- Paul August ☎ 13:49, 27 February 2011 (UTC)
Material from eigenvalue article
I removed the following text from the eigenvalue article as extraneous. But just in case it is not all repeated here (I have not yet checked) I wanted to preserve it:
Linear transformations of a vector space, such as rotation, reflection, stretching, compression, shear or any combination of these, may be visualized by the effect they produce on vectors. In other words, they are vector functions. More formally, in a vector space L, a vector function A is defined if for each vector x of L there corresponds a unique vector y = A(x) of L. For the sake of brevity, the parentheses around the vector on which the transformation is acting are often omitted. A vector function A is linear if it has the following two properties:
- Additivity: A(x + y) = Ax + Ay
- Homogeneity: A(αx) = αAx
where x and y are any two vectors of the vector space L and α is any scalar.[1] Such a function is variously called a linear transformation, linear operator, or linear endomorphism on the space L.
Dhollm (talk) 21:01, 15 July 2010 (UTC)
Differentiating the vectors from numbers of the field K
Why don't we use boldface (𝐯) or an arrow (v⃗) instead of plain italic v to denote vectors?--Netheril96 (talk) 01:47, 15 October 2010 (UTC)
- Usually lower-case Latin letters are vectors, lowercase greek letters are scalars, and upper-case Latin letters are vector spaces. Thus, no confusion. Bender2k14 (talk) 23:05, 31 January 2011 (UTC)
- Bender's right that there's no confusion, but I think saying that it is the "usual" convention is a bit introverted. It is the convention usually used by pure mathematicians, whereas bold Latin letters for vectors is the one usually used by applied mathematicians, physicists, engineers, etc. By sheer weight of numbers, I imagine the latter convention is much more common. The real answer to the question is that this article is a pure mathematics one, so that convention makes most sense here. Of course either convention is clear so long as it is used consistently. Quietbritishjim (talk) 17:03, 26 February 2011 (UTC)
Change of Basis Section
The section about change-of-basis matrices is written very badly. There are multiple grammatical errors. His explanation barely makes sense to me, and I already understand what he's trying to say. I could go into further detail about what needs to be fixed, but I'm hoping that others will see what I mean. Omarschall (talk) 02:27, 19 December 2011 (UTC)
Proof that f(0) = 0
I'm not a regular user of Wikipedia, so I won't change this (I can't figure out how to do it without messing up the math notation). But in my view, the current proof that the linearity of f implies that f(0) = 0 leaves a bit to be desired.
The last equality of this proof asserts without comment that the result of scaling the vector f(0) by the scalar 0 is the 0 vector. The issue is that "0v = 0 for all vectors v" is not one of the vector space axioms. Rather, it is a quick but nontrivial consequence of these axioms. And its proof from the axioms is of roughly "the same difficulty" as a proof (from the axioms and the definition of linearity) that f(0) = 0. So the current proof that f(0) = 0, although not wrong, is certainly less "elementary" and "self-contained" than it might otherwise be.
Here is a chain of equalities establishing that f(0) = 0 without appealing to nontrivial consequences of the vector space axioms.
0 = f(0) + (−f(0)) = f(0 + 0) + (−f(0)) = (f(0) + f(0)) + (−f(0)) = f(0) + (f(0) + (−f(0))) = f(0) + 0 = f(0).
Note that each equality in this chain is an immediate specialization of either a single vector space axiom or the additivity of f (and note that the current proof does not have this property). FWIW, this chain also shows that only the additivity of f is needed to deduce that f(0) = 0; homogeneity is not required.
IMVHO the rest of the page's content would benefit greatly from being re-organized somewhat to distinguish more carefully between fundamentals (common to all linear transformations) and specific examples. At the moment, there is a prominent and untamed zoo of 2x2 matrix examples on this page, and quite a lot of discussion of theory that recapitulates stuff that seems (to me) to be more appropriate for pages on more specific topics (e.g. the discussion of cokernel, and "algebraic classification of linear transformations"). 68.40.167.85 (talk) —Preceding undated comment added 00:31, 1 June 2012 (UTC)
"a linear map is a homomorphism of modules" – incorrect?
A homomorphism of modules does not imply homogeneity: one can have a homomorphism of modules in which the respective scalar rings are merely isomorphic, but not even the same ring; alternatively, where they are the same ring but the homomorphism involves a non-identity automorphism between the scalar rings of the two modules. This statement would therefore have to be modified to read something like "a linear map is a homomorphism of modules over the same ring". — Quondum 07:40, 15 December 2012 (UTC)
- ↑ Rowland, Todd and Weisstein, Eric W. "Linear Transformation". From MathWorld − A Wolfram Web Resource.