In [[mathematics]], a '''π-system''' on a [[Set (mathematics)|set]] Ω is a collection ''P'' of certain [[subsets]] of Ω, such that
* ''P'' is non-empty.
* ''A'' ∩ ''B'' ∈ ''P'' whenever ''A'' and ''B'' are in ''P''.
That is, ''P'' is a [[non-empty]] family of subsets of Ω that is [[closure (mathematics)|closed]] under finite [[intersection (set theory)|intersections]].
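For a finite family, both conditions can be verified mechanically. The following is a minimal Python sketch (the helper name <code>is_pi_system</code> is illustrative, not standard) that checks non-emptiness and closure under pairwise intersection, from which closure under all finite intersections follows by induction:
<syntaxhighlight lang="python">
from itertools import combinations

def is_pi_system(family):
    """Check that a finite family of sets is a π-system:
    non-empty and closed under pairwise intersection."""
    family = {frozenset(s) for s in family}
    if not family:
        return False
    # A ∩ A = A is always in the family, so distinct pairs suffice.
    return all(a & b in family for a, b in combinations(family, 2))

print(is_pi_system([{"a"}, {"a", "b"}]))  # True: {"a"} ∩ {"a","b"} = {"a"}
print(is_pi_system([{"a"}, {"b"}]))       # False: the intersection ∅ is missing
</syntaxhighlight>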
The importance of π-systems arises from the fact that if two probability measures agree on a π-system, then they agree on the [[sigma algebra|σ-algebra]] generated by that π-system. Moreover, if some property, such as independence, holds for the π-system, then it holds for the generated σ-algebra as well.
This is desirable because in practice π-systems are often simpler to work with than σ-algebras. For example, it may be awkward to work with σ-algebras generated by infinitely many sets <math>\sigma(E_1, E_2, \ldots)</math>. Instead, we may examine the union of all σ-algebras generated by finitely many of these sets, <math>\bigcup_n \sigma(E_1, \ldots, E_n)</math>. This forms a π-system that generates the desired σ-algebra.
==Properties==
* For any non-empty collection Σ of subsets of Ω, there exists a π-system <math> \mathcal I_{\Sigma} </math> which is the unique smallest π-system on Ω containing every element of Σ, and is called the π-system ''generated'' by Σ (see the sketch after this list).
* For any measurable function <math> f \colon \Omega \rightarrow \mathbb{R} </math>, the set <math> \mathcal{I}_f = \left \{ f^{-1}\left(\left( - \infty, x \right]\right) \colon x \in \mathbb{R} \right \} </math> defines a π-system, and is called the π-system ''generated'' by ''f''.
* Any σ-algebra is a π-system.
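For a finite collection Σ, the generated π-system <math>\mathcal I_{\Sigma}</math> can be computed by repeatedly adding pairwise intersections until nothing new appears. A minimal sketch, assuming finite input (the function name is illustrative):
<syntaxhighlight lang="python">
def generate_pi_system(sigma):
    """Return the smallest π-system containing the finite collection sigma,
    by closing it under pairwise intersection until a fixed point."""
    system = {frozenset(s) for s in sigma}
    while True:
        new = {a & b for a in system for b in system} - system
        if not new:
            return system
        system |= new

# Closing {{a,b}, {b,c}} under intersection adds {b}:
print(generate_pi_system([{"a", "b"}, {"b", "c"}]))
# {frozenset({'b'}), frozenset({'a', 'b'}), frozenset({'b', 'c'})}
</syntaxhighlight>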
==Relationship to λ-Systems==
A [[Dynkin system|λ-system]] on Ω is a set ''D'' of subsets of Ω satisfying
* Ω ∈ ''D'',
* if ''A'', ''B'' ∈ ''D'' and ''A'' ⊆ ''B'', then ''B'' \ ''A'' ∈ ''D'',
* if ''A''<sub>1</sub>, ''A''<sub>2</sub>, ''A''<sub>3</sub>, ... is a sequence of sets in ''D'' and ''A''<sub>''n''</sub> ⊆ ''A''<sub>''n''+1</sub> for all ''n'' ≥ 1, then <math>\bigcup_{n=1}^{\infty}A_n\in D</math>.
Any σ-algebra is both a π-system and a λ-system, but the converse is false in general: a π-system need not be a λ-system, nor need it be a σ-algebra. However, a useful fact is that any set system which is both a λ-system and a π-system is a σ-algebra. This is used as a step in proving the π-λ theorem.
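On a finite ground set the increasing-union axiom holds automatically, so checking the λ-system axioms reduces to two conditions. The following sketch (helper names are illustrative) exhibits a λ-system that fails to be a π-system, and hence is not a σ-algebra:
<syntaxhighlight lang="python">
def is_lambda_system(omega, family):
    """Check the λ-system axioms for a family of subsets of a finite
    ground set omega (increasing unions are automatic here)."""
    omega = frozenset(omega)
    family = {frozenset(s) for s in family}
    if omega not in family:
        return False
    # Closure under proper differences: B \ A whenever A ⊆ B.
    return all(b - a in family for a in family for b in family if a <= b)

omega = {1, 2, 3, 4}
d = [frozenset(), frozenset({1, 2}), frozenset({3, 4}),
     frozenset({2, 3}), frozenset({1, 4}), frozenset(omega)]
print(is_lambda_system(omega, d))  # True
# But {1,2} ∩ {2,3} = {2} is not in d, so d is not a π-system.
</syntaxhighlight>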
===The π-λ Theorem===
Let <math> D </math> be a λ-system, and let <math> \mathcal{I} \subseteq D </math> be a π-system contained in <math> D </math>. The π-λ theorem<ref name="Kallenberg">Kallenberg, ''Foundations of Modern Probability'', p. 2.</ref> states that the σ-algebra <math>\sigma( \mathcal{I}) </math> generated by <math> \mathcal{I} </math> is contained in <math> D </math>: <math> \sigma( \mathcal{I}) \subseteq D </math>.
The π-λ theorem can be used to prove many elementary measure-theoretic results. For instance, it is used in proving the uniqueness claim of the [[Carathéodory extension theorem]] for σ-finite measures.<ref name="Durrett">Durrett, ''Probability: Theory and Examples'', p. 345.</ref>
The π-λ theorem is closely related to the [[monotone class theorem]], which provides a similar relationship between monotone classes and algebras, and can be used to derive many of the same results. Whilst the two theorems are different, the π-λ theorem is sometimes referred to as the monotone class theorem.<ref name="Kallenberg"/>
====Example====
Let ''μ''<sub>1</sub>, ''μ''<sub>2</sub> : ''F'' → ''R'' be two measures on the σ-algebra ''F'', and suppose that ''F'' = ''σ''(''I'') is generated by a π-system ''I''. If
#''μ''<sub>1</sub>(''A'') = ''μ''<sub>2</sub>(''A'') for all ''A'' ∈ ''I'', and
#''μ''<sub>1</sub>(Ω) < ∞ and ''μ''<sub>2</sub>(Ω) < ∞,
then ''μ''<sub>1</sub> = ''μ''<sub>2</sub>. This is the uniqueness statement of the Carathéodory extension theorem for finite measures.
'''Idea of Proof'''<ref name="Durrett"/>
Define the collection of sets
: <math> D = \left\{ A \in \sigma(I) \colon \mu_1(A) = \mu_2(A) \right\}. </math>
It can be shown that ''D'' defines a λ-system; this is where the finiteness assumption is used, since it guarantees that ''μ''<sub>''i''</sub>(''B'' \ ''A'') = ''μ''<sub>''i''</sub>(''B'') − ''μ''<sub>''i''</sub>(''A'') whenever ''A'' ⊆ ''B''. By the first assumption, ''μ''<sub>1</sub> and ''μ''<sub>2</sub> agree on ''I'', so ''I'' ⊆ ''D''. The π-λ theorem therefore gives ''σ''(''I'') ⊆ ''D''; since ''D'' ⊆ ''σ''(''I'') by construction, ''D'' = ''σ''(''I''). That is to say, the measures agree on ''σ''(''I'').
==π-Systems in Probability==
π-systems are more commonly used in the study of probability theory than in the general field of measure theory. This is primarily due to the probabilistic notion of independence, though it may also be a consequence of the fact that the π-λ theorem was proved by the probabilist [[Eugene Dynkin]]. Standard measure theory texts typically prove the same results via monotone classes rather than π-systems.
===Equality in Distribution===
The π-λ theorem motivates the definition of [[Convergence of random variables#Convergence in distribution|equality in distribution]]. Recall that the [[cumulative distribution function]] of a random variable <math> X \colon(\Omega, \mathcal F, \mathbb P) \rightarrow \mathbb R </math> is defined as
: <math>F_X(a) = \mathbb{P}\left[ X \leq a \right], \qquad a \in \mathbb{R},</math>
and the law of the variable is the probability measure
: <math>\mathcal{L}_X(B) = \mathbb{P}\left[ X^{-1}(B) \right], \qquad B \in \mathcal{B}(\mathbb R), </math>
where <math> \mathcal{B}(\mathbb R) </math> is the Borel σ-algebra. We say that the random variables <math> X \colon(\Omega, \mathcal F, \mathbb P) \rightarrow \mathbb R </math> and <math> Y \colon(\tilde\Omega,\tilde{ \mathcal F}, \tilde{\mathbb P}) \rightarrow \mathbb R </math> (on two possibly different probability spaces) are equal in distribution (or ''law''), written <math> X \stackrel{\mathcal D}{=} Y</math>, if they have the same cumulative distribution function: ''F''<sub>''X''</sub> = ''F''<sub>''Y''</sub>. The motivation for the definition stems from the observation that if ''F''<sub>''X''</sub> = ''F''<sub>''Y''</sub>, then <math> \mathcal{L}_X </math> and <math> \mathcal{L}_Y </math> agree on the π-system <math> \left\{(-\infty, a] \colon a \in \mathbb R \right\}</math>, which generates <math> \mathcal{B}(\mathbb R) </math>, and so by the [[#Example|example]] above, <math>\mathcal{L}_X = \mathcal{L}_Y</math>.
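As a concrete illustration (a Monte Carlo sketch, not part of the formal development): if ''X'' is standard normal, then ''X'' and −''X'' have the same cumulative distribution function by symmetry, so they are equal in distribution even though they are almost surely unequal.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
y = -x  # same law as x by symmetry, but x = y only when x = 0

for a in (-1.0, 0.0, 1.0):
    # Empirical CDFs F_X(a) and F_Y(a) agree up to Monte Carlo error.
    print(a, (x <= a).mean(), (y <= a).mean())
</syntaxhighlight>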
In the theory of stochastic processes, two processes <math> (X_t)_{t \in T}, (Y_t)_{t \in T}</math> are known to be equal in distribution if and only if they agree on all finite-dimensional distributions, i.e. for all <math>t_1,\ldots, t_n \in T, \, n \in \mathbb N</math>,
:<math> (X_{t_1},\ldots, X_{t_n}) \stackrel{\mathcal{D}}{=} (Y_{t_1},\ldots, Y_{t_n})</math>.
The proof of this is an application of the π-λ theorem.<ref>Kallenberg, ''Foundations of Modern Probability'', p. 48.</ref>
===Independent Random Variables===
The theory of π-systems plays an important role in the probabilistic notion of [[Independence (probability theory)|independence]]. If ''X'' and ''Y'' are two [[random variables]] defined on the same probability space <math> (\Omega, \mathcal{F}, \mathbb{P}) </math>, then the random variables are independent if and only if their generated π-systems <math> \mathcal{I}_X, \mathcal{I}_Y </math> satisfy
: <math> \mathbb{P}\left[ A \cap B \right] = \mathbb{P}\left[ A \right] \mathbb{P}\left[ B \right], \qquad \forall A \in \mathcal{I}_X, \, B \in \mathcal{I}_Y, </math>
which is to say that <math> \mathcal{I}_X, \mathcal{I}_Y</math> are independent.
====Example====
Let <math> Z = (Z_1, Z_2) </math>, where <math> Z_1, Z_2 \sim \mathcal{N}(0,1) </math> are [[Independent and identically distributed random variables|iid]] standard normal random variables. Define the radius and argument variables
: <math> R = \sqrt{Z_1^2 + Z_2^2}, \qquad \Theta = \tan^{-1}(Z_2/Z_1).</math>
Then <math>R</math> and <math>\Theta</math> are independent random variables.
To prove this, it is sufficient to show that the π-systems <math> \mathcal{I}_R, \mathcal{I}_\Theta</math> are independent, i.e.
: <math> \mathbb P [ R \leq \rho, \Theta \leq \theta] = \mathbb P[R \leq \rho] \mathbb P[\Theta \leq \theta] \quad \forall \rho \in [0,\infty), \, \theta \in [0,2\pi].</math>
Confirming that this is the case is an exercise in changing variables. Fix <math> \rho \in [0,\infty), \, \theta \in [0,2\pi]</math>; then the probability can be expressed as an integral of the probability density function of <math>Z</math>:
: <math>\begin{align} \mathbb P [ R \leq \rho, \Theta \leq \theta] &= \int_{R \leq \rho, \, \Theta \leq \theta} \frac{1}{2\pi}\exp\left({-\frac12(z_1^2 + z_2^2)}\right) \, dz_1 \, dz_2 \\
& = \int_0^\theta \int_0^\rho \frac{1}{2\pi}e^{-\frac{r^2}{2}} r \, dr \, d\tilde\theta \\
& = \left( \int_0^\theta \frac{1}{2\pi} \, d\tilde \theta \right) \left( \int_0^\rho e^{-\frac{r^2}{2}} r \, dr\right) \\
& = \mathbb P[\Theta \leq \theta] \, \mathbb P[R \leq \rho]. \end{align}
</math>
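The computation can also be checked numerically. A Monte Carlo sketch, assuming the argument variable is realised on [0, 2π) via <code>atan2</code> (a slight strengthening of the arctan written above):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
z1, z2 = rng.standard_normal((2, 1_000_000))

r = np.hypot(z1, z2)                       # radius R
theta = np.arctan2(z2, z1) % (2 * np.pi)   # argument Θ in [0, 2π)

# Joint versus product probabilities on generating sets {R ≤ ρ} ∩ {Θ ≤ θ};
# agreement up to sampling error is consistent with independence.
for rho, th in [(1.0, np.pi / 2), (1.5, np.pi), (2.0, 3 * np.pi / 2)]:
    joint = ((r <= rho) & (theta <= th)).mean()
    product = (r <= rho).mean() * (theta <= th).mean()
    print(f"ρ={rho}, θ={th:.2f}: joint={joint:.4f}, product={product:.4f}")
</syntaxhighlight>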
==See also==
* [[Dynkin system|λ-systems]]
* [[Sigma-algebra|σ-algebra]]
* [[Monotone class theorem]]
* [[Independence (probability theory)|Independence]]
==Notes==
<references/>
{{DEFAULTSORT:Pi System}}
[[Category:Measure theory]]
[[Category:Set families]]