{{about||the measure of steepness of a road or other physical feature|Grade (slope)|the measure of steepness of a line|Slope|a progression between colors|Color gradient}}

{{no footnotes|date=December 2011}}

[[File:Gradient2.svg|thumb|300px|In the above two images, the values of the function are represented in black and white, black representing higher values, and its corresponding gradient is represented by blue arrows.]]

In [[mathematics]], the '''gradient''' is a generalization of the usual concept of [[derivative]] to [[function of several variables|functions of several variables]]. If {{math|''f''(''x''<sub>1</sub>, ..., ''x''<sub>''n''</sub>)}} is a [[differentiable function|differentiable]] function of several variables, also called a "[[scalar field]]", its '''gradient''' is the [[vector (mathematics)|vector]] of the ''n'' [[partial derivative]]s of ''f''. It is thus a [[vector-valued function]], also called a [[vector field]].

Similarly to the usual derivative, the gradient represents the [[slope]] of the [[tangent]] of the [[graph of a function|graph of the function]]. More precisely, the gradient points in the direction of the greatest rate of increase of the function, and its [[magnitude (mathematics)|magnitude]] is the slope of the graph in that direction. The components of the gradient are the non-constant coefficients of the equation of the [[tangent space]] to the graph.

Let {{math|''f''}} be a differentiable function defined on a [[Euclidean space]]. It becomes a differentiable function of several variables as soon as one chooses an [[orthonormal basis|orthonormal frame]]. The gradient does not depend on the choice of this orthonormal frame, so one may speak of the gradient of {{math|''f''}} without choosing a frame explicitly.

The [[Jacobian matrix and determinant|Jacobian]] is the generalization of the gradient for vector-valued functions of several variables and [[differentiable map]]s between [[Euclidean space]]s or, more generally, [[manifold]]s. A further generalization for a function between [[Banach space]]s is the [[Fréchet derivative]].

== Interpretations ==

[[File:Gradient of a Function.tif|thumb|350px|Gradient of the 2-d function {{math|''f''(''x'',''y'') {{=}} ''xe''<sup>-''x''<sup>2</sup>-''y''<sup>2</sup></sup>}} is plotted as blue arrows over the pseudocolor plot of the function.]]

Consider a room in which the temperature is given by a scalar field {{math|''T''}}, so at each point {{math|(''x'',''y'',''z'')}} the temperature is {{math|''T''(''x'',''y'',''z'')}}. (We will assume that the temperature does not change over time.) At each point in the room, the gradient of ''T'' shows the direction in which the temperature rises most quickly. The magnitude of the gradient determines how fast the temperature rises in that direction.
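
This steepest-ascent behaviour can be checked numerically. The sketch below (illustrative Python; the temperature field is an arbitrary made-up example) approximates the gradient by central finite differences, and at the chosen point it points back toward the warmest spot:

```python
# A hypothetical, time-independent temperature field T(x, y, z),
# warmest at the origin (an arbitrary example, not from the article).
def T(x, y, z):
    return 100.0 - (x**2 + 2*y**2 + 3*z**2)

def grad_T(x, y, z, h=1e-6):
    # Central finite differences approximate the three partial derivatives.
    return (
        (T(x + h, y, z) - T(x - h, y, z)) / (2 * h),
        (T(x, y + h, z) - T(x, y - h, z)) / (2 * h),
        (T(x, y, z + h) - T(x, y, z - h)) / (2 * h),
    )

# Analytically grad T = (-2x, -4y, -6z); at (1, 1, 1) every component is
# negative, i.e. the gradient points back toward the warm origin.
g = grad_T(1.0, 1.0, 1.0)
```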

Consider a surface whose height above sea level at a point (''x'', ''y'') is ''H''(''x'', ''y''). The gradient of ''H'' at a point is a vector pointing in the direction of the steepest [[slope]] or [[Grade (slope)|grade]] at that point. The steepness of the slope at that point is given by the magnitude of the gradient vector.

The gradient can also be used to measure how a scalar field changes in other directions, rather than just the direction of greatest change, by taking a [[dot product]]. Suppose that the steepest slope on a hill is 40%. If a road goes directly up the hill, then the steepest slope on the road will also be 40%. If, instead, the road goes around the hill at an angle, then it will have a shallower slope. For example, if the angle between the road and the uphill direction, projected onto the horizontal plane, is 60°, then the steepest slope along the road will be 20%, which is 40% times the [[cosine]] of 60°.

This observation can be stated mathematically as follows. If the hill height function ''H'' is [[differentiable function|differentiable]], then the gradient of ''H'' [[dot product|dotted]] with a [[unit vector]] gives the slope of the hill in the direction of the vector. More precisely, when ''H'' is differentiable, the dot product of the gradient of ''H'' with a given unit vector equals the [[directional derivative]] of ''H'' in the direction of that unit vector.
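
The 40%/20% road example above reduces to a single dot product (illustrative Python; the gradient is taken to lie along the x-axis for simplicity):

```python
import math

# Steepest slope of 40% in the uphill direction; a road at 60° to it
# has slope 40% * cos(60°) = 20%, via the dot product with a unit vector.
grad_H = (0.4, 0.0)                      # gradient aligned with the x-axis
theta = math.radians(60)
u = (math.cos(theta), math.sin(theta))   # unit vector at 60° to uphill
slope = grad_H[0] * u[0] + grad_H[1] * u[1]
```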

== Definition ==

[[File:Gradient99.png|thumb|350px|The gradient of the function {{math|''f''(''x'',''y'') {{=}} −(cos<sup>2</sup>''x'' + cos<sup>2</sup>''y'')<sup>2</sup>}} depicted as a projected vector field on the bottom plane.]]

The gradient (or gradient vector field) of a scalar function ''f''(''x''<sub>1</sub>, ''x''<sub>2</sub>, ''x''<sub>3</sub>, ..., ''x<sub>n</sub>'') is denoted ∇''f'' or <math>\vec{\nabla} f</math>, where ∇ (the [[nabla symbol]]) denotes the vector [[differential operator]] [[del]]. The notation grad(''f'') is also commonly used for the gradient. The gradient of ''f'' is defined as the unique [[vector field]] whose [[dot product]] with any [[Euclidean vector|vector]] '''v''' at each point ''x'' is the [[directional derivative]] of ''f'' along '''v'''. That is,

:<math>(\nabla f(x))\cdot \mathbf{v} = D_{\mathbf v}f(x).</math>

In a rectangular coordinate system, the gradient is the vector field whose components are the [[partial derivative]]s of ''f'':

:<math> \nabla f = \frac{\partial f}{\partial x_1 }\mathbf{e}_1 + \cdots + \frac{\partial f}{\partial x_n }\mathbf{e}_n</math>
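
A minimal numerical sketch of this component description (illustrative Python; the function is an arbitrary example): each component of the gradient is one partial derivative, approximated here by a central difference.

```python
def gradient(f, x, h=1e-6):
    # Build the vector of the n partial derivatives of f at x,
    # one central difference per coordinate.
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

# Example: f(x, y) = x^2 + 3xy has gradient (2x + 3y, 3x).
f = lambda v: v[0]**2 + 3*v[0]*v[1]
g = gradient(f, [1.0, 2.0])   # analytically (8, 3) at (1, 2)
```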

where the '''e'''<sub>''i''</sub> are the orthogonal unit vectors pointing in the coordinate directions. When a function also depends on a parameter such as time, the gradient often refers only to the vector of its spatial derivatives.

In the three-dimensional [[Cartesian coordinate system]], this is given by

:<math>\nabla f = \frac{\partial f}{\partial x} \mathbf{i} + \frac{\partial f}{\partial y} \mathbf{j} + \frac{\partial f}{\partial z} \mathbf{k}</math>

where '''i''', '''j''', '''k''' are the [[standard basis|standard]] [[unit vector]]s. For example, the gradient of the function

:<math>f(x,y,z)= 2x+3y^2-\sin(z)</math>

is

:<math>\nabla f = \frac{\partial f}{\partial x} \mathbf{i} + \frac{\partial f}{\partial y} \mathbf{j} + \frac{\partial f}{\partial z} \mathbf{k} = 2\mathbf{i}+ 6y\mathbf{j} -\cos(z)\mathbf{k}.</math>
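
This example gradient can be spot-checked numerically (illustrative Python):

```python
import math

def f(x, y, z):
    return 2*x + 3*y**2 - math.sin(z)

def grad_f(x, y, z):
    # The gradient computed in the text: (2, 6y, -cos z).
    return (2.0, 6*y, -math.cos(z))

# Cross-check the y-component against a central finite difference.
y0, h = 1.5, 1e-6
fd = (f(0.0, y0 + h, 0.0) - f(0.0, y0 - h, 0.0)) / (2 * h)
```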

In some applications it is customary to represent the gradient as a [[row vector]] or [[column vector]] of its components in a rectangular coordinate system.

== Gradient and the derivative or differential ==

{{Calculus |Vector}}

===Linear approximation to a function===

The gradient of a [[function (mathematics)|function]] ''f'' from the [[Euclidean space]] ℝ<sup>''n''</sup> to ℝ at any particular point ''x''<sub>0</sub> in ℝ<sup>''n''</sup> characterizes the best [[linear approximation]] to ''f'' at ''x''<sub>0</sub>. The approximation is as follows:

:<math> f(x) \approx f(x_0) + (\nabla f)_{x_0}\cdot(x-x_0) </math>

for ''x'' close to ''x''<sub>0</sub>, where <math>(\nabla f)_{x_0}</math> is the gradient of ''f'' computed at ''x''<sub>0</sub>, and the dot denotes the [[dot product]] on ℝ<sup>''n''</sup>. This equation is equivalent to the first two terms in the multivariable [[Taylor series]] expansion of ''f'' at ''x''<sub>0</sub>.
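
The quality of this linear approximation is easy to observe numerically (illustrative Python; the function and the base point are arbitrary choices): near x<sub>0</sub>, the first-order error is much smaller than the displacement.

```python
import math

def f(x, y):
    return math.exp(x) * math.sin(y)   # an arbitrary smooth function

def grad_f(x, y):
    return (math.exp(x) * math.sin(y), math.exp(x) * math.cos(y))

x0, y0 = 0.0, 0.0
dx, dy = 0.01, 0.02
gx, gy = grad_f(x0, y0)

# First-order Taylor approximation f(x0) + grad f(x0) . (x - x0).
approx = f(x0, y0) + gx * dx + gy * dy
exact = f(x0 + dx, y0 + dy)
err = abs(exact - approx)              # quadratically small in (dx, dy)
```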

===Differential or (exterior) derivative===

The best linear approximation to a function

:<math>f: \mathbb{R}^n \to \mathbb{R}</math>

at a point ''x'' in ℝ<sup>''n''</sup> is a linear map from ℝ<sup>''n''</sup> to ℝ which is often denoted by d''f<sub>x</sub>'' or ''Df''(''x'') and called the '''[[differential (calculus)|differential]]''' or [[total derivative|('''total''') '''derivative''']] of ''f'' at ''x''. The gradient is therefore related to the differential by the formula

:<math> (\nabla f)_x\cdot v = \mathrm d f_x(v)</math>

for any ''v'' ∈ ℝ<sup>''n''</sup>. The function d''f'', which maps ''x'' to d''f''<sub>''x''</sub>, is called the differential or [[exterior derivative]] of ''f'' and is an example of a [[differential 1-form]].

If ℝ<sup>''n''</sup> is viewed as the space of (length ''n'') column vectors (of real numbers), then one can regard d''f'' as the row vector with components

:<math> \left( \frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n}\right) </math>

so that d''f''<sub>''x''</sub>(''v'') is given by matrix multiplication. The gradient is then the corresponding column vector, i.e.,

:<math>\nabla f = (\mathrm{d} f)^\mathsf{T}.</math>
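
In concrete terms (illustrative Python; the component values are arbitrary): the row vector d''f'' acts on a column vector by matrix multiplication, and its transpose, the gradient, produces the same number via the dot product.

```python
# Components of df, viewed as a row vector (arbitrary example values).
partials = [3.0, -1.0, 2.0]

def df(v):
    # df_x(v) as matrix multiplication: row vector times column vector v.
    return sum(p * vi for p, vi in zip(partials, v))

# The gradient is the corresponding column vector: the same components.
grad = list(partials)

v = [1.0, 2.0, 0.5]
pairing = df(v)                                 # df_x(v)
dot = sum(g * vi for g, vi in zip(grad, v))     # (grad f) . v
```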

===Gradient as a derivative===

Let ''U'' be an [[open set]] in '''R'''<sup>''n''</sup>. If the function {{nowrap|''f'' : ''U'' → '''R'''}} is [[Fréchet derivative|differentiable]], then the differential of ''f'' is the [[Fréchet derivative|(Fréchet) derivative]] of ''f''. Thus ∇''f'' is a function from ''U'' to the space '''R'''<sup>''n''</sup> such that

:<math>\lim_{h\to 0} \frac{\|f(x+h)-f(x) -\nabla f(x)\cdot h\|}{\|h\|} = 0</math>

where ⋅ is the dot product.

As a consequence, the usual properties of the derivative hold for the gradient:

;[[Linearity]]
The gradient is linear in the sense that if ''f'' and ''g'' are two real-valued functions differentiable at the point {{nowrap|''a'' ∈ '''R'''<sup>''n''</sup>}}, and α and β are two constants, then {{nowrap|α''f'' + β''g''}} is differentiable at ''a'', and moreover

: <math>\nabla\left(\alpha f+\beta g\right)(a) = \alpha \nabla f(a) + \beta\nabla g (a).</math>

;[[Product rule]]
If ''f'' and ''g'' are real-valued functions differentiable at a point {{nowrap|''a'' ∈ '''R'''<sup>''n''</sup>}}, then the product rule asserts that the product {{nowrap|1=(''fg'')(''x'') = ''f''(''x'')''g''(''x'')}} of the functions ''f'' and ''g'' is differentiable at ''a'', and

:<math>\nabla (fg)(a) = f(a)\nabla g(a) + g(a)\nabla f(a).</math>
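
A numerical spot check of the product rule (illustrative Python; ''f'', ''g'' and the point ''a'' are arbitrary choices), comparing a finite-difference gradient of ''fg'' against ''f''∇''g'' + ''g''∇''f'':

```python
def f(x, y):
    return x**2 * y

def g(x, y):
    return x + 3*y

def grad(fn, x, y, h=1e-6):
    # Central-difference gradient of a function of two variables.
    return ((fn(x + h, y) - fn(x - h, y)) / (2 * h),
            (fn(x, y + h) - fn(x, y - h)) / (2 * h))

a = (1.5, -2.0)
lhs = grad(lambda x, y: f(x, y) * g(x, y), *a)          # grad(fg) at a
gf = grad(f, *a)
gg = grad(g, *a)
rhs = tuple(f(*a) * dg + g(*a) * df for df, dg in zip(gf, gg))
```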

;[[Chain rule]]
Suppose that {{nowrap|''f'' : ''A'' → '''R'''}} is a real-valued function defined on a subset ''A'' of '''R'''<sup>''n''</sup>, and that ''f'' is differentiable at a point ''a''. There are two forms of the chain rule applying to the gradient. First, suppose that the function ''g'' is a [[parametric curve]]; that is, a function {{nowrap|''g'' : ''I'' → '''R'''<sup>''n''</sup>}} maps a subset {{nowrap|''I'' ⊂ '''R'''}} into '''R'''<sup>''n''</sup>. If ''g'' is differentiable at a point {{nowrap|''c'' ∈ ''I''}} such that {{nowrap|1=''g''(''c'') = ''a''}}, then

:<math>(f\circ g)'(c) = \nabla f(a)\cdot g'(c),</math>

where ∘ is the [[composition operator]]: {{nowrap|1=(''f'' ∘ ''g'')(''x'') = ''f''(''g''(''x''))}}.
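
This first form of the chain rule can be verified numerically (illustrative Python; the curve is the unit circle and ''f'' an arbitrary quadratic):

```python
import math

def f(x, y):
    return x**2 + 2*y**2                 # an arbitrary f : R^2 -> R

def g(t):
    return (math.cos(t), math.sin(t))    # a parametric curve (unit circle)

c = 0.7
h = 1e-6

# Left-hand side: ordinary derivative of the composition f∘g at t = c.
lhs = (f(*g(c + h)) - f(*g(c - h))) / (2 * h)

# Right-hand side: grad f at g(c), dotted with the velocity g'(c).
x, y = g(c)
grad_f = (2 * x, 4 * y)
g_prime = (-math.sin(c), math.cos(c))
rhs = grad_f[0] * g_prime[0] + grad_f[1] * g_prime[1]
```

Here f(g(t)) = 1 + sin²t, so both sides should equal sin 2t at t = c.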

More generally, if instead {{nowrap|''I'' ⊂ '''R'''<sup>''k''</sup>}}, then the following holds:

:<math>\nabla (f\circ g)(c) = (Dg(c))^\mathsf{T} (\nabla f(a))</math>

where (''Dg'')<sup>T</sup> denotes the transpose [[Jacobian matrix]].

For the second form of the chain rule, suppose that {{nowrap|''h'' : ''I'' → '''R'''}} is a real-valued function on a subset ''I'' of '''R''', and that ''h'' is differentiable at the point {{nowrap|''f''(''a'') ∈ ''I''}}. Then

:<math>\nabla (h\circ f)(a) = h'(f(a))\nabla f(a).</math>

==Further properties and applications==

===Level sets===

{{see also|Level set#Level sets versus the gradient}}

A level surface, or [[isosurface]], is the set of all points where some function has a given value.

If ''f'' is differentiable, then the [[dot product]] {{nowrap|(∇''f'')<sub>''x''</sub> ⋅ ''v''}} of the gradient at a point ''x'' with a vector ''v'' gives the [[directional derivative]] of ''f'' at ''x'' in the direction ''v''. It follows that in this case the gradient of ''f'' is [[orthogonal]] to the [[level set]]s of ''f''. For example, a level surface in three-dimensional space is defined by an equation of the form {{nowrap|1=''F''(''x'', ''y'', ''z'') = ''c''}}. The gradient of ''F'' is then normal to the surface.
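
For instance, on the unit sphere ''F''(''x'', ''y'', ''z'') = ''x''² + ''y''² + ''z''² = 1 the gradient (2''x'', 2''y'', 2''z'') is radial, hence orthogonal to every tangent direction (illustrative Python):

```python
import math

p = (1 / math.sqrt(3),) * 3          # a point on the unit sphere F = 1
grad_F = tuple(2 * c for c in p)     # gradient of F(x,y,z) = x^2+y^2+z^2
tangent = (1.0, -1.0, 0.0)           # a tangent direction to the sphere at p
dot = sum(g * t for g, t in zip(grad_F, tangent))
```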

More generally, any [[embedded submanifold|embedded]] [[hypersurface]] in a Riemannian manifold can be cut out by an equation of the form {{nowrap|1=''F''(''P'') = 0}} such that d''F'' is nowhere zero. The gradient of ''F'' is then normal to the hypersurface.

Similarly, an [[affine algebraic variety|affine algebraic hypersurface]] may be defined by an equation {{nowrap|1=''F''(''x''<sub>1</sub>, ..., ''x''<sub>n</sub>) = 0}}, where ''F'' is a polynomial. The gradient of ''F'' is zero at a singular point of the hypersurface (this is the definition of a singular point). At a non-singular point, it is a nonzero normal vector.

===Conservative vector fields and the gradient theorem===

{{main|Gradient theorem}}

The gradient of a function is called a gradient field. A (continuous) gradient field is always a [[conservative vector field]]: its [[line integral]] along any path depends only on the endpoints of the path, and can be evaluated by the [[gradient theorem]] (the fundamental theorem of calculus for line integrals). Conversely, a (continuous) conservative vector field is always the gradient of a function.
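
Path independence is easy to observe numerically (illustrative Python; the potential ''f'' and the two paths are arbitrary choices): the line integral of ∇''f'' along either path equals ''f''(''B'') − ''f''(''A'').

```python
import math

def f(x, y):
    # An arbitrary potential function; its gradient field is conservative.
    return x**2 * y + math.sin(y)

def grad_f(x, y):
    return (2*x*y, x**2 + math.cos(y))

def line_integral(path, n=4000):
    # Midpoint-rule line integral of grad f along a path on [0, 1].
    total = 0.0
    for k in range(n):
        t0, t1 = k / n, (k + 1) / n
        xm, ym = path((t0 + t1) / 2)
        dx = path(t1)[0] - path(t0)[0]
        dy = path(t1)[1] - path(t0)[1]
        gx, gy = grad_f(xm, ym)
        total += gx * dx + gy * dy
    return total

A, B = (0.0, 0.0), (1.0, 1.0)
straight = lambda t: (t, t)          # straight segment from A to B
curved = lambda t: (t, t**2)         # a different path, same endpoints
expected = f(*B) - f(*A)             # the gradient theorem's prediction
```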

==Riemannian manifolds==

For any smooth function ''f'' on a [[Riemannian manifold]] (''M'', ''g''), the gradient of ''f'' is the [[vector field]] ∇''f'' such that for any vector field ''X'',

:<math>g(\nabla f, X) = \partial_X f, \qquad \text{i.e.,}\quad g_x((\nabla f)_x, X_x ) = (\partial_X f) (x)</math>

where {{nowrap|''g''<sub>''x''</sub>( , )}} denotes the [[inner product]] of tangent vectors at ''x'' defined by the metric ''g'', and ∂<sub>''X''</sub>''f'' (sometimes denoted ''X''(''f'')) is the function that takes any point {{nowrap|''x'' ∈ ''M''}} to the [[directional derivative]] of ''f'' in the direction ''X'', evaluated at ''x''. In other words, in a [[coordinate chart]] φ from an open subset of ''M'' to an open subset of '''R'''<sup>''n''</sup>, (∂<sub>''X''</sub>''f'')(''x'') is given by:

:<math>\sum_{j=1}^n X^{j} (\varphi(x)) \frac{\partial}{\partial x_{j}}(f \circ \varphi^{-1}) \Big|_{\varphi(x)},</math>

where ''X''<sup>''j''</sup> denotes the ''j''th component of ''X'' in this coordinate chart.

So, the local form of the gradient takes the form:

:<math> \nabla f= g^{ik}\frac{\partial f}{\partial x^{k}}\frac{\partial}{\partial x^{i}}.</math>

Generalizing the case {{nowrap|1=''M'' = '''R'''<sup>''n''</sup>}}, the gradient of a function is related to its [[exterior derivative]], since

:<math>(\partial_X f) (x) = df_x(X_x)\ .</math>

More precisely, the gradient ∇''f'' is the vector field associated to the differential 1-form d''f'' using the [[musical isomorphism]]

:<math>\sharp=\sharp^g\colon T^*M\to TM</math>

(called "sharp") defined by the metric ''g''. The relation between the exterior derivative and the gradient of a function on '''R'''<sup>''n''</sup> is a special case of this in which the metric is the flat metric given by the dot product.

==Cylindrical and spherical coordinates==

{{main|Del in cylindrical and spherical coordinates}}

In [[cylindrical coordinates]], the gradient is given by {{harv|Schey|1992|pp=139–142}}:

:<math>\nabla f(\rho, \phi, z) = \frac{\partial f}{\partial \rho}\mathbf{e}_\rho + \frac{1}{\rho}\frac{\partial f}{\partial \phi}\mathbf{e}_\phi + \frac{\partial f}{\partial z}\mathbf{e}_z</math>

where ϕ is the azimuthal angle, ''z'' is the axial coordinate, and '''e'''<sub>ρ</sub>, '''e'''<sub>ϕ</sub> and '''e'''<sub>''z''</sub> are unit vectors pointing along the coordinate directions.
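
As a consistency check (illustrative Python; the scalar field is an arbitrary example), the cylindrical-coordinate formula reproduces the Cartesian gradient once the components are rotated back through the local unit vectors '''e'''<sub>ρ</sub> and '''e'''<sub>ϕ</sub>:

```python
import math

def f_cyl(rho, phi, z):
    # The field f(x, y, z) = x*y + z^2 written in cylindrical coordinates,
    # using x = rho cos(phi), y = rho sin(phi).
    return (rho * math.cos(phi)) * (rho * math.sin(phi)) + z**2

def grad_cyl(rho, phi, z, h=1e-6):
    # Components (df/drho, (1/rho) df/dphi, df/dz) from the formula above.
    d_rho = (f_cyl(rho + h, phi, z) - f_cyl(rho - h, phi, z)) / (2 * h)
    d_phi = (f_cyl(rho, phi + h, z) - f_cyl(rho, phi - h, z)) / (2 * h) / rho
    d_z = (f_cyl(rho, phi, z + h) - f_cyl(rho, phi, z - h)) / (2 * h)
    return (d_rho, d_phi, d_z)

rho, phi, z = 2.0, 0.6, 1.0
g_rho, g_phi, g_z = grad_cyl(rho, phi, z)

# Rotate back to Cartesian components via e_rho and e_phi.
gx = g_rho * math.cos(phi) - g_phi * math.sin(phi)
gy = g_rho * math.sin(phi) + g_phi * math.cos(phi)
gz = g_z

x, y = rho * math.cos(phi), rho * math.sin(phi)
expected = (y, x, 2 * z)    # analytic Cartesian gradient of x*y + z^2
```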

In [[spherical coordinates]] {{harv|Schey|1992|pp=139–142}}:

:<math>\nabla f(r, \theta, \phi) = \frac{\partial f}{\partial r}\mathbf{e}_r + \frac{1}{r}\frac{\partial f}{\partial \theta}\mathbf{e}_\theta + \frac{1}{r \sin\theta}\frac{\partial f}{\partial \phi}\mathbf{e}_\phi</math>

where ϕ is the [[azimuth]] angle and θ is the [[zenith#Relevance and use|zenith]] angle.

For the gradient in other [[orthogonal coordinate system]]s, see [[Orthogonal coordinates#Differential operators in three dimensions|Orthogonal coordinates (Differential operators in three dimensions)]].

==Gradient of a vector==

In rectangular coordinates, the gradient of a vector '''f''' = (''f''<sub>1</sub>, ''f''<sub>2</sub>, ''f''<sub>3</sub>) is defined by

:<math>\nabla \mathbf{f}=\frac{\partial f_i}{\partial x_j}\mathbf{e}_i\mathbf{e}_j</math>

or the [[Jacobian matrix]]

:<math>\frac{\partial (f_1,f_2,f_3)}{\partial (x_1,x_2,x_3)}.</math>

In curvilinear coordinates, the gradient involves [[Christoffel symbols]].

==See also==

*[[Curl (mathematics)|Curl]]
*[[Del]]
*[[Divergence]]
*[[Gradient theorem]]
*[[Hessian matrix]]
*[[Skew gradient]]

==References==

{{Reflist}}

*{{citation |first1=Theresa M. |last1=Korn |first2=Granino Arthur |last2=Korn |title=Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review |publisher=Dover Publications |location=New York |year=2000 |pages=157–160 |isbn=0-486-41147-8 |oclc=43864234}}.
*{{citation |first=H.M. |last=Schey |title=Div, Grad, Curl, and All That |publisher=W.W. Norton |edition=2nd |year=1992 |isbn=0-393-96251-2 |oclc=25048561}}.
*{{citation |first1=B.A. |last1=Dubrovin |first2=A.T. |last2=Fomenko |first3=S.P. |last3=Novikov |title=Modern Geometry--Methods and Applications: Part I: The Geometry of Surfaces, Transformation Groups, and Fields |series=Graduate Texts in Mathematics |publisher=Springer |edition=2nd |year=1991 |pages=14–17 |isbn=978-0-387-97663-1}}

==External links==

{{wiktionary}}

* [[Khan Academy]] [http://www.khanacademy.org/video/gradient-1?playlist=Calculus Gradient lesson 1]
*{{springer |title=Gradient |id=G/g044680 |last=Kuptsov |first=L.P.}}
*{{MathWorld |title=Gradient |urlname=Gradient}}

[[Category:Differential calculus]]
[[Category:Generalizations of the derivative]]
[[Category:Linear operators in calculus]]
[[Category:Vector calculus]]