{{single source|date=June 2013}}
{{Merge|Differentiation under the integral sign|date=January 2013}}
 
In [[calculus]], '''Leibniz's rule''' for '''[[differentiation under the integral sign]]''', named after [[Gottfried Leibniz]], states that for an [[integral]] of the form
 
: <math>\int_{y_0}^{y_1} f(x, y) \,dy</math>
 
then for ''x'' in (''x''<sub>0</sub>, ''x''<sub>1</sub>) the derivative of this integral may be expressed as
 
: <math>{d\over dx} \left ( \int_{y_0}^{y_1} f(x, y) \,dy \right )= \int_{y_0}^{y_1} f_x(x,y)\,dy</math>
 
provided that ''f'' and its [[partial derivative]] ''f''<sub>''x''</sub> are both continuous over a region of the form [''x''<sub>0</sub>, ''x''<sub>1</sub>] × [''y''<sub>0</sub>, ''y''<sub>1</sub>].
 
Thus under certain conditions, one may interchange the integral and partial differential [[operator (mathematics)|operators]]. This important result is particularly useful in the differentiation of [[integral transform]]s. An example is the [[moment generating function]] in [[probability|probability theory]], a variation of the [[Laplace transform]], which can be differentiated to generate the [[moment (mathematics)|moments]] of a [[random variable]]. Whether Leibniz's integral rule applies is essentially a question about the interchange of [[limit (mathematics)|limits]].
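
As a simple illustration of the basic rule, take ''f''(''x'', ''y'') = ''e''<sup>''xy''</sup> on [0, 1]. For ''x'' ≠ 0, direct evaluation gives

: <math>{d\over dx} \int_0^1 e^{xy} \,dy = {d\over dx} \left( \frac{e^x - 1}{x} \right) = \frac{x e^x - e^x + 1}{x^2},</math>

which agrees with differentiating under the integral sign and then integrating by parts:

: <math>\int_0^1 y e^{xy} \,dy = \left[ \frac{y e^{xy}}{x} \right]_0^1 - \int_0^1 \frac{e^{xy}}{x} \,dy = \frac{e^x}{x} - \frac{e^x - 1}{x^2} = \frac{x e^x - e^x + 1}{x^2}.</math>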
 
==Formal statement==
Let ''f''(''x'', θ) be a function such that the partial derivative ''f''<sub>θ</sub>(''x'', θ) exists and is continuous. Then,
 
:<math>\frac{d}{d\theta} \left (\int_{a(\theta)}^{b(\theta)} f(x,\theta)\,dx \right )= \int_{a(\theta)}^{b(\theta)}f_{\theta} (x,\theta)\,dx + f(b(\theta),\theta)b'(\theta)-f(a(\theta),\theta)a'(\theta)</math>
 
where the partial derivative of ''f'' indicates that inside the integral only the variation of ''f''(''x'', θ) with θ is considered in taking the derivative.
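
As a quick check of this formula, take ''f''(''x'', θ) = ''x'' + θ with ''a''(θ) = 0 and ''b''(θ) = θ. Evaluating the integral first gives

:<math>\frac{d}{d\theta} \int_0^\theta (x+\theta)\,dx = \frac{d}{d\theta} \left( \frac{3\theta^2}{2} \right) = 3\theta,</math>

and the formula reproduces this:

:<math>\int_0^\theta 1\,dx + f(\theta,\theta)\cdot 1 - f(0,\theta)\cdot 0 = \theta + 2\theta = 3\theta.</math>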
 
==Three-dimensional, time-dependent case==
{{See also|Differentiation under the integral sign in higher dimensions}}
 
[[Image:Vector field on a surface.PNG|right|thumb|250px|Figure 1: A vector field '''F'''('''r''', ''t'') defined throughout space, and a surface Σ bounded by curve ∂Σ moving with velocity '''v''' over which the field is integrated.]]
A Leibniz integral rule for [[Differentiation under the integral sign#Higher dimensions|three dimensions]] is:<ref>Flanders, Harley (June–July 1973). "Differentiation under the integral sign". ''American Mathematical Monthly'' '''80''' (6): 615–627. [http://dx.doi.org/10.2307/2319163 doi:10.2307/2319163]</ref>
 
:<math>\frac {d}{dt} \iint_{\Sigma (t)} \mathbf{F} (\mathbf{r}, t) \cdot d \mathbf{A} = \iint_{\Sigma (t)}\left(\mathbf{F}_t (\mathbf{r}, t) + \left[\mathrm{\nabla} \cdot \mathbf{F} (\mathbf{r}, t) \right] \mathbf{v} \right) \cdot d \mathbf{A} -\oint_{\partial \Sigma (t)} \left[ \mathbf{v} \times \mathbf{F} ( \mathbf{r}, t) \right] \cdot d \mathbf{s} </math>
 
where:
::'''F'''('''r''', ''t'') is a vector field at the spatial position '''r''' at time ''t''
::Σ is a moving surface in three-space bounded by the closed curve ∂Σ
::''d'''''A''' is a vector element of the surface Σ
::''d'''''s''' is a vector element of the curve ∂Σ
::'''v''' is the velocity of movement of the region Σ
::∇⋅ is the vector [[divergence]]
::× is the [[vector cross product]]
::The double integrals are [[surface integral]]s over the surface Σ, and the [[line integral]] is over the bounding curve ∂Σ.
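
As a sanity check, for a stationary surface ('''v''' = '''0''') both velocity-dependent terms vanish and the rule reduces to differentiation under the integral sign:

:<math>\frac{d}{dt} \iint_{\Sigma} \mathbf{F}(\mathbf{r}, t) \cdot d\mathbf{A} = \iint_{\Sigma} \mathbf{F}_t(\mathbf{r}, t) \cdot d\mathbf{A}.</math>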
 
==Measure theory statement==
 
Let <math>X</math> be an open subset of <math>\mathbb{R}</math>, and <math>\Omega</math> be a [[measure space]]. Suppose <math>f: X \times \Omega \rightarrow \mathbb{R}</math> satisfies the following conditions:
 
::(1) <math>f(x,\omega)</math> is a Lebesgue-integrable function of <math>\omega</math> for each <math>x \in X</math>.

::(2) For almost all <math>\omega \in \Omega</math>, the derivative <math>f_x</math> exists for all <math>x \in X</math>.

::(3) There is an integrable function <math>\theta: \Omega \rightarrow \mathbb{R}</math> such that <math>|f_x(x,\omega)| \leq \theta(\omega)</math> for all <math>x \in X</math>.
 
Then for all <math>x \in X</math>,
::<math> \frac{\mathrm{d}}{\mathrm{d} x} \int_{\Omega} \, f(x, \omega) \mathrm{d} \omega = \int_{\Omega}  \, f_x ( x, \omega) \mathrm{d} \omega </math>
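
For example, take <math>X = (a, \infty)</math> for some <math>a > 0</math>, <math>\Omega = [0, \infty)</math> with Lebesgue measure, and <math>f(x,\omega) = e^{-x\omega}</math>. Condition (3) holds with the dominating function <math>\theta(\omega) = \omega e^{-a\omega}</math>, since <math>|f_x(x,\omega)| = \omega e^{-x\omega} \leq \omega e^{-a\omega}</math> for all <math>x > a</math>, and the conclusion can be verified directly:

::<math> \frac{\mathrm{d}}{\mathrm{d} x} \int_0^\infty e^{-x\omega} \,\mathrm{d}\omega = \frac{\mathrm{d}}{\mathrm{d} x} \left( \frac{1}{x} \right) = -\frac{1}{x^2} = \int_0^\infty \left( -\omega e^{-x\omega} \right) \mathrm{d}\omega </math>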
 
== Proofs ==
 
=== Proof of basic form ===
Let:
 
: <math> u(x) = \int_{y_0}^{y_1} f(x, y) \,dy \qquad (1)</math>
 
Then, by the definition of the derivative as a limit of [[difference quotient]]s,
 
: <math> u'(x) = \lim_{h \rightarrow 0} \frac{u(x + h) - u(x)}{h} \qquad (2)</math>
 
Substitute equation (1) into equation (2), combine the integrals (since the difference of two integrals equals the integral of the difference) and use the fact that 1/''h'' is a constant:
 
:<math>\begin{align}
u'(x) &= \lim_{h \rightarrow 0} \frac{\int_{y_0}^{y_1}f(x + h, y)\,dy - \int_{y_0}^{y_1}f(x, y)\,dy}{h} \\
&= \lim_{h \rightarrow 0} \frac{\int_{y_0}^{y_1}\left( f(x + h, y) - f(x,y) \right)\,dy}{h} \\
&= \lim_{h \rightarrow 0} \int_{y_0}^{y_1} \frac{f(x + h, y) - f(x, y)}{h} \,dy
\end{align}</math>
 
Provided that the limit can be passed under the integral sign, we obtain
 
: <math>u'(x) = \int_{y_0}^{y_1} f_x(x, y)\,dy </math>
 
We claim that the passage of the limit under the integral sign is valid. Indeed, the bounded convergence theorem (a corollary of the [[dominated convergence theorem]]) of real analysis states that if a sequence of functions on a set of finite measure is uniformly bounded and converges pointwise, then passage of the limit under the integral is valid.  To complete the proof, we show that these hypotheses are satisfied by the family of difference quotients
 
:<math> f_n(y) = \frac{f(x + \tfrac{1}{n}, y) -f (x, y)}{\tfrac{1}{n}}.</math>
 
Continuity of ''f''<sub>''x''</sub>(''x'', ''y'') on the compact rectangle implies that ''f''<sub>''x''</sub>(''x'', ''y'') is uniformly bounded. Uniform boundedness of the difference quotients follows from uniform boundedness of ''f''<sub>''x''</sub>(''x'', ''y'') and the [[mean value theorem]], since for all ''y'' and ''n'', there exists ''z'' in the interval [''x'', ''x'' + 1/''n''] such that
 
:<math> f_x(z, y) =  \frac{f(x + \tfrac{1}{n}, y) -f (x, y)}{\tfrac{1}{n}}.</math>
 
The difference quotients converge pointwise to ''f<sub>x</sub>''(''x'', ''y'') since ''f<sub>x</sub>''(''x'', ''y'') exists.  This completes the proof.
 
For a simpler proof using [[Fubini's theorem]], see the references.
 
=== Variable limits form ===
For a continuous function ''g'' of a single variable:
 
: <math> {d\over dx} \left( \int_{f_1(x)}^{f_2(x)} g(t) \,dt \right )= g[f_2(x)] {f_2'(x)} -  g[f_1(x)] {f_1'(x)} </math>
 
This follows from the [[fundamental theorem of calculus]] together with the [[chain rule]].
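
This form is useful even when ''g'' has no elementary antiderivative. For instance, with <math>g(t) = e^{t^2}</math>, <math>f_1(x) = x</math> and <math>f_2(x) = x^2</math>:

: <math> {d\over dx} \left( \int_x^{x^2} e^{t^2} \,dt \right) = 2x\,e^{x^4} - e^{x^2} </math>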
 
=== General form with variable limits ===
Now, set
 
:<math>\varphi(\alpha) = \int_a^b f(x,\alpha)dx,</math>
 
where ''a'' and ''b'' are functions of α that exhibit increments Δ''a'' and Δ''b'', respectively, when α is increased by Δα. Then,
 
: <math>\begin{align}
\Delta\varphi &= \varphi(\alpha + \Delta\alpha) - \varphi(\alpha) \\
&= \int_{a + \Delta a}^{b + \Delta b}f(x, \alpha + \Delta\alpha)\,dx - \int_a^b f(x, \alpha)\,dx \\
&= \int_{a + \Delta a}^af(x, \alpha + \Delta\alpha)dx + \int_a^bf(x, \alpha + \Delta\alpha)dx + \int_b^{b + \Delta b} f(x, \alpha+\Delta\alpha)\,dx - \int_a^b f(x, \alpha)\,dx \\
&= -\int_a^{a + \Delta a} f(x, \alpha + \Delta\alpha) \, dx + \int_a^b [f(x, \alpha + \Delta\alpha) - f(x,\alpha)]\,dx + \int_b^{b + \Delta b} f(x, \alpha + \Delta\alpha)\,dx
\end{align}</math>
 
A form of the [[mean value theorem]], <math>\int_a^bf(x)dx = (b - a)f(\xi)</math>, where ''a'' < ξ < ''b'', may be applied to the first and last integrals of the formula for Δφ above, resulting in
 
:<math>\Delta\varphi = -\Delta a f(\xi_1, \alpha + \Delta\alpha) + \int_a^b [f(x, \alpha + \Delta\alpha) - f(x,\alpha)]\,dx + \Delta b f(\xi_2, \alpha + \Delta\alpha)</math>
 
Dividing by Δα, letting Δα → 0, noting that ξ<sub>1</sub> → ''a'' and ξ<sub>2</sub> → ''b'', and using the result
 
:<math>\lim_{\Delta\alpha\to 0}\int_a^b \frac{f(x,\alpha + \Delta\alpha) - f(x,\alpha)}{\Delta\alpha} dx = \int_a^b f_{\alpha} (x,\alpha)\,dx</math>
 
yields the general form of the Leibniz integral rule below:
 
:<math>\frac{d\varphi}{d\alpha} = \int_a^b f_{\alpha}(x, \alpha)\,dx + f(b, \alpha) \frac{db}{d\alpha} - f(a, \alpha)\frac{da}{d\alpha} </math>
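
A classic application of this rule (with constant limits, so the boundary terms vanish) evaluates an integral that resists direct attack. For α > −1, let

:<math>\varphi(\alpha) = \int_0^1 \frac{x^\alpha - 1}{\ln x}\,dx.</math>

Differentiating under the integral sign gives

:<math>\varphi'(\alpha) = \int_0^1 \frac{\partial}{\partial\alpha} \left( \frac{x^\alpha - 1}{\ln x} \right) dx = \int_0^1 x^\alpha \,dx = \frac{1}{\alpha + 1},</math>

and since φ(0) = 0, integrating back yields φ(α) = ln(α + 1).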
 
===Three-dimensional, time-dependent form===
{{See also|Differentiation under the integral sign in higher dimensions}}
 
At time ''t'' the surface Σ in [[#Three-dimensional, time-dependent case|Figure 1]] contains a set of points arranged about a centroid '''C'''(''t''), so the function '''F'''('''r''', ''t'') can be written as '''F'''('''C'''(''t'') + '''r''' − '''C'''(''t''), ''t'') = '''F'''('''C'''(''t'') + '''I''', ''t''), where '''I''' = '''r''' − '''C'''(''t'') is independent of time. Variables are shifted to a new frame of reference attached to the moving surface, with origin at '''C'''(''t''). For a rigidly translating surface, the limits of integration are then independent of time, so:
 
:<math>\frac {d}{dt} \left (\iint_{\Sigma (t)} d \mathbf{A}_{\mathbf{r}}\cdot \mathbf{F}(\mathbf{r}, t) \right) = \iint_{\Sigma} d \mathbf{A}_{\mathbf{I}} \cdot \frac {d}{dt}\mathbf{F}(\mathbf{C}(t) + \mathbf{I}, t)</math>
 
where the limits of integration confining the integral to the region Σ are no longer time-dependent, so differentiation passes through the integral sign to act only on the integrand:
 
:<math> \frac {d}{dt}\mathbf{F}( \mathbf{C}(t) + \mathbf{I}, t) = \mathbf{F}_t(\mathbf{C}(t) + \mathbf{I}, t) + \mathbf{v \cdot \nabla F}(\mathbf{C}(t) + \mathbf{I}, t) = \mathbf{F}_t(\mathbf{r}, t) + \mathbf{v} \cdot \nabla \mathbf{F}(\mathbf{r}, t) </math>
 
with the velocity of motion of the surface defined by:
 
:<math>\mathbf{v} = \frac {d}{dt} \mathbf{C} (t) </math>
 
This equation expresses the [[material derivative]] of the field, that is, the derivative with respect to a coordinate system attached to the moving surface. Having found the derivative, variables can be switched back to the original frame of reference. By a standard vector identity (see the [[Curl (mathematics)#Three_common_examples|article on curl]]):
 
:<math>\mathbf{ \nabla \times} \left( \mathbf{v \times F} \right) = (\nabla \cdot \mathbf{F} + \mathbf{F} \cdot \nabla) \mathbf{v}- (\nabla \cdot  \mathbf{v} + \mathbf{v} \cdot \nabla) \mathbf{F} </math>
 
and [[Stokes_theorem#Kelvin–Stokes_theorem|Stokes' theorem]] allows the surface integral of this curl over Σ to be converted into a line integral over ∂Σ:
 
:<math>\frac {d}{dt} \left ( \iint_{\Sigma (t)} \mathbf{F} (\mathbf{r}, t) \cdot d \mathbf{A} \right ) = \iint_{\Sigma (t)} \big(\mathbf{F}_t (\mathbf{r}, t) +\left( \mathbf{F \cdot \nabla} \right)\mathbf{v} +  \left(\mathbf{ \nabla \cdot F } \right)  \mathbf{v} -(\nabla \cdot \mathbf{v})\mathbf{F}\big) \cdot d \mathbf{A} - \oint_{\partial \Sigma (t) }\left( \mathbf{\mathbf{v} \times F }\right)\mathbf{\cdot} d \mathbf{s}. </math>
 
The sign of the line integral is based on the [[right-hand rule]] for the choice of direction of the line element ''d'''''s'''. To establish this sign, for example, suppose the field '''F''' points in the positive ''z''-direction, and the surface Σ is a portion of the ''xy''-plane with perimeter ∂Σ. We take the normal to Σ to be in the positive ''z''-direction, so positive traversal of ∂Σ is counterclockwise (right-hand rule with thumb along the ''z''-axis). Then the integral on the left-hand side determines a ''positive'' flux of '''F''' through Σ. Suppose Σ translates in the positive ''x''-direction at velocity '''v'''. An element of the boundary of Σ parallel to the ''y''-axis, say ''d'''''s''', sweeps out an area '''v'''''t'' × ''d'''''s''' in time ''t''. If we integrate around the boundary ∂Σ in a counterclockwise sense, '''v'''''t'' × ''d'''''s''' points in the negative ''z''-direction on the left side of ∂Σ (where ''d'''''s''' points downward), and in the positive ''z''-direction on the right side of ∂Σ (where ''d'''''s''' points upward), which makes sense because Σ is moving to the right, adding area on the right and losing it on the left. On that basis, the flux of '''F''' is increasing on the right of ∂Σ and decreasing on the left. However, the dot product satisfies '''v''' × '''F''' • ''d'''''s''' = −'''F''' × '''v''' • ''d'''''s''' = −'''F''' • '''v''' × ''d'''''s'''. Consequently, the sign of the line integral is taken as negative.
 
If '''v''' is a constant,
 
:<math>\frac {d}{dt} \iint_{ \Sigma (t)} \mathbf{F} (\mathbf{r}, t) \cdot d \mathbf{A} = \iint_{\Sigma (t)} \big(\mathbf{F}_t (\mathbf{r}, t) +  \left(\mathbf{ \nabla \cdot F } \right)  \mathbf{v}\big) \cdot d \mathbf{A} - \oint_{\partial \Sigma (t)}\left(\mathbf{\mathbf{v} \times F }\right)\mathbf{\cdot} d \mathbf{s} </math>
 
which is the quoted result. This proof does not consider the possibility of the surface deforming as it moves.
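
As a consistency check, suppose '''F''' and '''v''' are both constant vectors. Then '''F'''<sub>''t''</sub> = '''0''' and ∇ ⋅ '''F''' = 0, while '''v''' × '''F''' is a constant vector, so its line integral around the closed curve ∂Σ vanishes:

:<math>\oint_{\partial \Sigma (t)} \left( \mathbf{v} \times \mathbf{F} \right) \cdot d \mathbf{s} = \left( \mathbf{v} \times \mathbf{F} \right) \cdot \oint_{\partial \Sigma (t)} d \mathbf{s} = 0. </math>

The formula therefore predicts a constant flux, as expected for a uniform field and a rigidly translating surface.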
 
==See also==
*[[Chain rule]]
*[[Leibniz rule (generalized product rule)]]
*[[Differentiation under the integral sign#Higher dimensions|Differentiation under the integral sign]]
*[[Reynolds transport theorem]], a generalization of the Leibniz rule
 
==References and notes==
{{Reflist}}
 
==External links==
*[http://math.bu.edu/people/rharron/teaching/MAT203/LeibnizRule.pdf "The Leibniz Rule" by Rob Harron]
 
{{DEFAULTSORT:Leibniz Integral Rule}}
[[Category:Gottfried Leibniz]]
[[Category:Multivariable calculus]]
[[Category:Integral calculus]]
[[Category:Articles containing proofs]]
