In [[mathematics]], a '''differential operator''' is an [[Operator (mathematics)|operator]] defined as a function of the [[derivative|differentiation]] operator. It is helpful, as a matter of notation first, to consider differentiation as an abstract operation that accepts a function and returns another function (in the style of a [[higher-order function]] in [[computer science]]).


This article considers mainly [[linear map|linear]] operators, which are the most common type. However, non-linear differential operators, such as the [[Schwarzian derivative]], also exist.


==Notations==
The most common differential operator is the action of taking the [[derivative]] itself. Common notations for taking the first derivative with respect to a variable ''x'' include:


: <math>{d \over dx},  D,\,  D_x,\,</math>  and  <math>\partial_x.</math>


When taking higher, ''n''th order derivatives, the operator may also be written:


: <math>{d^n \over dx^n},</math>  <math>D^n\,,</math>  or <math>D^n_x.\,</math>


The derivative of a function ''f'' of an argument ''x'' is sometimes given as either of the following:


: <math>[f(x)]'\,\!</math>
: <math>f'(x).\,\!</math>


The ''D'' notation's use and creation is credited to [[Oliver Heaviside]], who considered differential operators of the form


: <math>\sum_{k=0}^n c_k D^k</math>


in his study of [[differential equation]]s.


One of the most frequently seen differential operators is the [[Laplace operator|Laplacian operator]], defined by


:<math>\Delta=\nabla^{2}=\sum_{k=1}^n {\partial^2\over \partial x_k^2}.</math>


Another differential operator is the Θ operator, or [[theta operator]], defined by<ref>{{cite web|url=http://mathworld.wolfram.com/ThetaOperator.html|title=Theta Operator|author=E. W. Weisstein|accessdate=2009-06-12}}</ref>


:<math>\Theta = z {d \over dz}.</math>


This is sometimes also called the '''homogeneity operator''', because its [[eigenfunction]]s are the [[monomial]]s in ''z'':
:<math>\Theta (z^k) = k z^k,\quad k=0,1,2,\dots </math>
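
Indeed, this is immediate from the definition: <math>\Theta (z^k) = z\,\tfrac{d}{dz}\,z^k = z\cdot k z^{k-1} = k z^k</math>.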
 
In ''n'' variables the homogeneity operator is given by
 
:<math>\Theta = \sum_{k=1}^n x_k \frac{\partial}{\partial x_k}.</math>
 
As in one variable, the [[eigenspace]]s of Θ are the spaces of [[homogeneous polynomial]]s.
 
An arrow written over the differential operator indicates which function it acts on: a left arrow means the operator differentiates the function on its left, a right arrow the function on its right, and a double-headed arrow denotes the difference of the two (the right action minus the left action):
:<math>f \overleftarrow{\partial_x} g = g \partial_x f</math>
:<math>f \overrightarrow{\partial_x} g = f \partial_x g</math>
:<math>f \overleftrightarrow{\partial_x} g = f \partial_x g - g \partial_x f.</math>
Such a bidirectional-arrow notation is frequently used for describing the [[probability current]] of quantum mechanics.
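For instance, in one spatial dimension the probability current of a particle of mass ''m'' with wavefunction ''ψ'' takes the compact form
:<math>j = \frac{\hbar}{2mi}\left(\psi^*\,\partial_x \psi - \psi\,\partial_x \psi^*\right) = \frac{\hbar}{2mi}\,\psi^*\,\overleftrightarrow{\partial_x}\,\psi.</math>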
 
==Del==
{{Main|Del}}
The differential operator del, also called the nabla operator, is an important [[Euclidean vector|vector]] differential operator. It appears frequently in [[physics]], for example in the differential form of [[Maxwell's equations]]. In three-dimensional [[Cartesian coordinates]], del is defined as:
 
:<math>\nabla = \mathbf{\hat{x}} {\partial \over \partial x}  + \mathbf{\hat{y}} {\partial \over \partial y} + \mathbf{\hat{z}} {\partial \over \partial z}.</math>
 
Del is used to calculate the [[gradient]], [[curl (mathematics)|curl]], [[divergence]], and [[Laplacian]] of various objects.
 
==Adjoint of an operator==
{{See also|Hermitian adjoint}}
Given a linear differential operator ''T''
: <math>Tu = \sum_{k=0}^n a_k(x) D^k u</math>
the [[Hermitian adjoint|adjoint of this operator]] is defined as the operator <math>T^*</math> such that
: <math>\langle Tu,v \rangle = \langle u, T^*v \rangle</math>
where the notation <math>\langle\cdot,\cdot\rangle</math> is used for the [[scalar product]] or [[inner product]].  This definition  therefore depends on the definition of the scalar product.
 
=== Formal adjoint in one variable ===
 
In the function space of [[square integrable]] functions on a real interval (''a'',&nbsp;''b''), the scalar product is defined by
 
: <math>\langle f, g \rangle = \int_a^b f(x) \, \overline{g(x)} \,dx , </math>
 
where the line over ''g(x)'' denotes the complex conjugate of ''g(x)''.  If one moreover adds the condition that ''f'' or ''g'' vanishes for <math>x \to a</math> and <math>x \to b</math>, one can also define the adjoint of ''T'' by
 
: <math>T^*u = \sum_{k=0}^n (-1)^k D^k [\overline{a_k(x)}u].\,</math>
 
This formula does not explicitly depend on the definition of the scalar product.  It is therefore sometimes chosen as a definition of the adjoint operator.  When <math>T^*</math> is defined according to this formula, it is called the '''formal adjoint''' of ''T''.
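 
For example, for the first-derivative operator ''T'' = ''D'' (so that ''a''<sub>1</sub> = 1 and ''a''<sub>0</sub> = 0), a single integration by parts with vanishing boundary terms gives
: <math>\langle Du, v \rangle = \int_a^b u'(x)\,\overline{v(x)}\,dx = -\int_a^b u(x)\,\overline{v'(x)}\,dx = \langle u, -Dv \rangle,</math>
so that ''D''<sup>*</sup> = −''D'', in agreement with the formula above.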
 
A (formally) '''[[self-adjoint operator|self-adjoint]]''' operator is an operator equal to its own (formal) adjoint.
 
=== Several variables ===
 
If Ω is a domain in '''R'''<sup>n</sup>, and ''P'' a differential operator on Ω, then the adjoint of ''P'' is defined in [[Lp space|''L''<sup>2</sup>(&Omega;)]] by duality in the analogous manner:
 
:<math>\langle f, P^* g\rangle_{L^2(\Omega)} = \langle P f, g\rangle_{L^2(\Omega)}</math>
 
for all smooth ''L''<sup>2</sup> functions ''f'', ''g''.  Since smooth functions are dense in ''L''<sup>2</sup>, this defines the adjoint on a dense subset of ''L''<sup>2</sup>:  P<sup>*</sup> is a [[densely defined operator]].
 
=== Example ===
The [[Sturm&ndash;Liouville theory|Sturm&ndash;Liouville]] operator is a well-known example of a formal self-adjoint operator.  This second-order linear differential operator ''L'' can be written in the form
 
: <math>Lu = -(pu')'+qu=-(pu''+p'u')+qu=-pu''-p'u'+qu=(-p) D^2 u +(-p') D u + (q)u.\;\!</math>
 
This property can be proven using the formal adjoint definition above.
 
: <math>\begin{align}
L^*u & {} = (-1)^2 D^2 [(-p)u] + (-1)^1 D [(-p')u] + (-1)^0 (qu) \\
& {} = -D^2(pu) + D(p'u)+qu \\
& {} = -(pu)''+(p'u)'+qu \\
& {} = -p''u-2p'u'-pu''+p''u+p'u'+qu \\
& {} = -p'u'-pu''+qu \\
& {} = -(pu')'+qu \\
& {} = Lu
\end{align}</math>
 
This operator is central to [[Sturm&ndash;Liouville theory]] where the [[eigenfunctions]] (analogues to [[eigenvectors]]) of this operator are considered.
 
==Properties of differential operators==
 
Differentiation is [[linearity of differentiation|linear]], i.e.,
 
:<math>D(f+g) = (Df)+(Dg)\,</math>
 
:<math>D(af) = a(Df)\,</math>
 
where ''f'' and ''g'' are functions, and ''a'' is a constant.
 
Any polynomial in ''D'' with function coefficients is also a differential operator. We may also compose differential operators by the rule
 
:<math>(D_1 \circ D_2)(f) = D_1(D_2(f)).\,</math>
 
Some care is then required: firstly any function coefficients in the operator ''D''<sub>2</sub> must be [[differentiable]] as many times as the application of ''D''<sub>1</sub> requires. To get a [[ring (mathematics)|ring]] of such operators we must assume derivatives of all orders of the coefficients used. Secondly, this ring will not be [[commutative]]: an operator ''gD'' isn't the same in general as ''Dg''. In fact we have for example the relation basic in [[quantum mechanics]]:
 
:<math>Dx - xD = 1.\,</math>
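 
This identity follows from the product rule: for any function ''f'',
:<math>(Dx - xD)f = D(xf) - x\,Df = f + x\,Df - x\,Df = f.\,</math>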
 
The subring of operators that are polynomials in ''D'' with [[constant coefficients]] is, by contrast, commutative. It can be characterised another way: it consists of the translation-invariant operators.


The differential operators also obey the [[shift theorem]].
 
==Several variables==
 
The same constructions can be carried out with [[partial derivative]]s, differentiation with respect to different variables giving rise to operators that commute (see [[symmetry of second derivatives]]).
 
==Coordinate-independent description==
In [[differential geometry]] and [[algebraic geometry]] it is often convenient to have a [[coordinate]]-independent description of differential operators between two [[vector bundle]]s.  Let ''E'' and ''F'' be two vector bundles over a [[differentiable manifold]] ''M''. An '''R'''-linear mapping of [[vector bundle|sections]] {{nowrap|''P'' : &Gamma;(''E'') &rarr; &Gamma;(''F'')}} is said to be a '''''k''th-order linear differential operator''' if it factors through the [[jet bundle]] ''J''<sup>''k''</sup>(''E'').
In other words, there exists a linear mapping of vector bundles
 
:<math>i_P: J^k(E) \rightarrow F\,</math>
 
such that
 
:<math>P = i_P\circ j^k</math>
 
where {{nowrap | ''j''<sup>''k''</sup>: &Gamma;(''E'') &rarr; &Gamma;(''J''<sup>''k''</sup>(''E''))}} is the prolongation that associates to any section of ''E'' its [[jet (mathematics)|''k''-jet]].
 
This just means that for a given [[vector bundle|section]] ''s'' of ''E'', the value of ''P''(''s'') at a point ''x''&nbsp;&isin;&nbsp;''M'' is fully determined by the ''k''th-order infinitesimal behavior of ''s'' at ''x''. In particular, this implies that ''P''(''s'')(''x'') is determined by the [[sheaf (mathematics)|germ]] of ''s'' at ''x'', which is expressed by saying that differential operators are local. A foundational result is the [[Peetre theorem]] showing that the converse is also true: any (linear) local operator is differential.
 
===Relation to commutative algebra===
An equivalent, but purely algebraic description of linear differential operators is as follows: an '''R'''-linear map ''P'' is a ''k''th-order linear differential operator, if for any ''k''&nbsp;+&nbsp;1 smooth functions <math>f_0,\ldots,f_k \in C^\infty(M)</math> we have
 
:<math>[f_k,[f_{k-1},[\cdots[f_0,P]\cdots]]]=0.</math>
 
Here the bracket <math>[f,P]:\Gamma(E)\rightarrow \Gamma(F)</math> is defined as the commutator
 
:<math>[f,P](s)=P(f\cdot s)-f\cdot P(s).\,</math>
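 
For instance, take ''M'' = '''R''' with trivial line bundles (so that sections are simply smooth functions) and ''P'' = ''D'' = ''d''/''dx''. Then the product rule gives
:<math>[f_0,D](s) = D(f_0 s) - f_0\,Ds = f_0' s,</math>
which is a multiplication (zeroth-order) operator, and hence <math>[f_1,[f_0,D]]=0</math> for all <math>f_0, f_1</math>; by the criterion above, ''D'' is a first-order differential operator.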
 
This characterization of linear differential operators shows that they are particular mappings between [[module (mathematics)|modules]] over a commutative [[algebra (ring theory)|algebra]], allowing the concept to be seen as a part of [[commutative algebra]].
 
==Examples==
 
* In applications to the physical sciences, operators such as the [[Laplace operator]] play a major role in setting up and solving [[partial differential equation]]s.
 
* In [[differential topology]] the [[exterior derivative]] and [[Lie derivative]] operators have intrinsic meaning.
 
* In [[abstract algebra]], the concept of a [[derivation (abstract algebra)|derivation]] allows for generalizations of differential operators which do not require the use of calculus.  Frequently such generalizations are employed in [[algebraic geometry]] and [[commutative algebra]].  See also [[jet (mathematics)]].
 
* In the development of [[holomorphic function]]s of a [[complex variable]] ''z'' = ''x'' + i ''y'', sometimes a complex function is considered to be a function of two real variables ''x'' and ''y''. Use is made of the [[Wirtinger derivative]]s, which are partial differential operators:
::<math> \frac{\partial}{\partial z} = \frac{1}{2} \left( \frac{\partial}{\partial x} - i \frac{\partial}{\partial y} \right) \quad,\quad \frac{\partial}{\partial\bar{z}}= \frac{1}{2} \left( \frac{\partial}{\partial x} + i \frac{\partial}{\partial y} \right) \ .</math>
This approach is also used to study functions of [[several complex variables]] and functions of a [[motor variable]].
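 
For example, applied to ''f''(''z'') = ''z'' = ''x'' + i''y'' these operators give <math>\partial z/\partial z = \tfrac{1}{2}(1 - i\cdot i) = 1</math> and <math>\partial z/\partial\bar{z} = \tfrac{1}{2}(1 + i\cdot i) = 0</math>; more generally, a function is holomorphic exactly when <math>\partial f/\partial\bar{z} = 0</math>, which is one way of stating the [[Cauchy–Riemann equations]].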
 
==History==
The conceptual step of writing a differential operator as something free-standing is attributed to [[Louis François Antoine Arbogast]] in 1800.<ref>James Gasser (editor), ''A Boole Anthology: Recent and classical studies in the logic of George Boole'' (2000), p. 169; [http://books.google.co.uk/books?id=A2Q5Yghl000C&pg=PA169 Google Books].</ref>
 
==See also==
* [[Difference operator]]
* [[Delta operator]]
* [[Elliptic operator]]
* [[Fractional calculus]]
* [[Invariant differential operator]]
* [[Differential calculus over commutative algebras]]
* [[Lagrangian system]]
* [[Spectral theory]]
* [[Energy operator]]
* [[Momentum operator]]
* [[DBAR operator]]
 
==References==
{{Reflist}}
==External links==
* {{springer|title=Differential operator|id=p/d032250}}


[[Category:Calculus]]
[[Category:Multivariable calculus]]
[[Category:Differential operators|*]]


'''Systematic sampling''' is a [[statistics|statistical method]] involving the selection of elements from an ordered [[sampling frame]].

The most common form of systematic sampling is an equal-probability method. In this approach, progression through the list is treated circularly, with a return to the top once the end of the list is passed. The sampling starts by selecting an element from the list at random, and then every ''k''<sup>th</sup> element in the frame is selected, where ''k'' is the sampling interval (sometimes known as the ''skip''), calculated as:<ref name="ken_black_india">{{cite book | title = Business Statistics for Contemporary Decision Making | author = Ken Black | edition = Fourth (Wiley Student Edition for India) | publisher = Wiley-India | isbn = 978-81-265-0809-9 | year = 2004 }}</ref>

:<math>k = \frac{N}{n}</math>

where ''n'' is the sample size, and ''N'' is the population size.

Using this procedure, each element in the [[statistical population|population]] has a known and equal probability of selection. This makes systematic sampling functionally similar to [[simple random sampling]]. It is, however, much more efficient if the variance within the systematic sample is greater than the variance of the population.{{cn|date=July 2012}}

Systematic sampling should be applied only if the given population is logically homogeneous, because systematic sample units are distributed uniformly over the population. The researcher must ensure that the chosen sampling interval does not conceal a pattern, as any pattern would threaten randomness.

Example: Suppose a supermarket wants to study the buying habits of its customers. Using systematic sampling, it can select every 10th or 15th customer entering the supermarket and conduct the study on this sample.

This is random sampling with a system. From the sampling frame, a starting point is chosen at random, and choices thereafter are at regular intervals. For example, suppose you want to sample 8 houses from a street of 120 houses. 120/8=15, so every 15th house is chosen after a random starting point between 1 and 15. If the random starting point is 11, then the houses selected are 11, 26, 41, 56, 71, 86, 101, and 116.
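
The equal-probability procedure can be sketched in a few lines of Python; the numbers below are simply the illustrative values from the example above (a sample of 8 from 120 houses):
<syntaxhighlight lang="python">
import random

N = 120           # population size (houses on the street)
n = 8             # desired sample size
k = N // n        # sampling interval ("skip"): 120/8 = 15

start = random.randint(1, k)                 # random starting point between 1 and k
sample = [start + i * k for i in range(n)]   # every k-th house thereafter
print(sample)                                # start = 11 gives [11, 26, 41, 56, 71, 86, 101, 116]
</syntaxhighlight>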

If, as is more often the case, the population is not evenly divisible (suppose you want to sample 8 houses out of 125, where 125/8=15.625), should you take every 15th house or every 16th house? If you take every 16th house, 8*16=128, so there is a risk that the last house chosen does not exist. On the other hand, if you take every 15th house, 8*15=120, so the last five houses will never be selected. The random starting point should instead be selected as a noninteger between 0 and 15.625 (inclusive on one endpoint only) to ensure that every house has an equal chance of being selected; the interval should now be nonintegral (15.625); and each noninteger selected should be rounded up to the next integer. If the random starting point is 3.6, then the houses selected are 4, 20, 35, 51, 67, 82, 98, and 113, where there are 3 cyclic intervals of 15 and 5 intervals of 16.
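
The noninteger-interval variant just described can be sketched the same way, rounding each selection point up to the next integer (again with the illustrative values of 125 houses and a sample of 8):
<syntaxhighlight lang="python">
import math
import random

N, n = 125, 8
k = N / n                     # fractional sampling interval: 15.625

start = random.uniform(0, k)  # noninteger random start (ideally drawn from the half-open interval (0, k])
sample = [math.ceil(start + i * k) for i in range(n)]
print(sample)                 # start = 3.6 gives [4, 20, 35, 51, 67, 82, 98, 113]
</syntaxhighlight>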

To illustrate the danger of systematic skip concealing a pattern, suppose we were to sample a planned neighbourhood where each street has ten houses on each block. This places houses No. 1, 10, 11, 20, 21, 30... on block corners; corner blocks may be less valuable, since more of their area is taken up by streetfront etc. that is unavailable for building purposes. If we then sample every 10th household, our sample will either be made up only of corner houses (if we start at 1 or 10) or have no corner houses (any other start); either way, it will not be representative.

Systematic sampling may also be used with non-equal selection probabilities. In this case, rather than simply counting through elements of the population and selecting every ''k''<sup>th</sup> unit, we allocate each element a space along a [[number line]] according to its selection probability. We then generate a random start from a uniform distribution between 0 and 1, and move along the number line in steps of 1.

Example: We have a population of 5 units (A to E). We want to give unit A a 20% probability of selection, unit B a 40% probability, and so on up to unit E (100%). Assuming we maintain alphabetical order, we allocate each unit to the following interval:

*A: 0 to 0.2
*B: 0.2 to 0.6 (= 0.2 + 0.4)
*C: 0.6 to 1.2 (= 0.6 + 0.6)
*D: 1.2 to 2.0 (= 1.2 + 0.8)
*E: 2.0 to 3.0 (= 2.0 + 1.0)

If our random start was 0.156, we would first select the unit whose interval contains this number (i.e. A). Next, we would select the interval containing 1.156 (element C), then 2.156 (element E). If instead our random start was 0.350, we would select from points 0.350 (B), 1.350 (D), and 2.350 (E).
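
This unequal-probability procedure can also be sketched in Python; the unit names and probabilities below are the illustrative values from this example:
<syntaxhighlight lang="python">
import bisect
import random

units = ["A", "B", "C", "D", "E"]
probs = [0.2, 0.4, 0.6, 0.8, 1.0]   # desired selection probabilities

# Upper bound of each unit's interval on the number line (cumulative totals 0.2, 0.6, 1.2, 2.0, 3.0).
bounds = []
total = 0.0
for p in probs:
    total += p
    bounds.append(total)

start = random.random()                                   # uniform random start in [0, 1)
points = [start + i for i in range(int(round(total)))]   # move along the line in steps of 1
sample = [units[bisect.bisect_right(bounds, x)] for x in points]
print(sample)   # start = 0.156 gives ['A', 'C', 'E']; start = 0.350 gives ['B', 'D', 'E']
</syntaxhighlight>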

== References ==
{{reflist|colwidth=30em}}


==External links==
*[http://trsl.sourceforge.net/ TRSL &ndash; Template Range Sampling Library] is a free-software and open-source C++ library that implements systematic sampling behind an (STL-like) iterator interface.
*[http://www.juliantrubin.com/encyclopedia/mathematics/dictionary_statistics.html The use of systematic sampling to estimate the No. of headwords in a dictionary]

{{DEFAULTSORT:Systematic Sampling}}
[[Category:Sampling (statistics)]]
[[Category:Sampling techniques]]

[[de:Systematische Stichprobe]]
[[pl:Losowanie systematyczne]]
[[fi:Systemaattinen otanta]]
[[tr:Örüntülü örnekleme]]
