# Talk:Kolmogorov's zero–one law

## ??

There is a minor sloppiness in the language. The term ${\displaystyle \sum _{k=1}^{\infty }X_{k}}$ is the limit of the series ${\displaystyle {\big (}\sum _{k=1}^{N}X_{k}{\big )}_{N=1}^{\infty }}$.

That's not a series; that's a sequence! Michael Hardy 00:36, 1 Nov 2004 (UTC)

The latter converges or not; the former exists or not.

What does it mean for a sum of random variables not to exist? I think you've misunderstood the point.

I don't know how to correct this without an overhead of explanation.

Btw., what are the exact requirements on ${\displaystyle X_{k}^{}}$ for the series to converge at all? Certainly ${\displaystyle \operatorname {E} (X_{k})=0}$ is needed, but not sufficient. I even believe the convergence of the series would be equivalent to ${\displaystyle \operatorname {P} \{X_{k}=0\}=1}$.

No on both counts. See, e.g., the central limit theorem.
Ouch, now I see where I was wrong: in many theorems the random variables have to be identically distributed; not so here. 217.230.28.82 12:16, 31 Oct 2004 (UTC)
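The point under discussion can be illustrated numerically. A minimal sketch (my own illustrative code, not from this thread): take independent ${\displaystyle X_{k}=\varepsilon _{k}/k}$ with random signs ${\displaystyle \varepsilon _{k}=\pm 1}$. Then ${\displaystyle \operatorname {E} (X_{k})=0}$ and ${\displaystyle \sum \operatorname {Var} (X_{k})=\sum 1/k^{2}<\infty }$, so the series converges almost surely even though ${\displaystyle \operatorname {P} \{X_{k}=0\}=1}$ fails for every ${\displaystyle k}$, countering the conjecture above. Note that convergence of the series is itself a tail event, so by the zero–one law its probability must be 0 or 1.

```python
import random

def partial_sums(n_terms, seed):
    """Partial sums S_N = sum_{k=1}^N eps_k / k with i.i.d. random signs eps_k."""
    rng = random.Random(seed)
    s, sums = 0.0, []
    for k in range(1, n_terms + 1):
        s += rng.choice((-1.0, 1.0)) / k
        sums.append(s)
    return sums

sums = partial_sums(100_000, seed=0)
# The tail of the partial-sum sequence barely moves: consistent with
# almost-sure convergence (tail variance sum_{k>50000} 1/k^2 ~ 1e-5).
spread = max(sums[50_000:]) - min(sums[50_000:])
print(spread)
```

A single simulated path cannot prove almost-sure convergence, of course; the rigorous statement is Kolmogorov's two-series (or three-series) theorem.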

How about a rigorous definition for a tail event or tail sigma algebra? —Preceding unsigned comment added by 71.207.219.120 (talk) 07:48, 23 April 2009 (UTC)
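For reference, the standard definition found in most probability texts can be sketched as follows: given a sequence of random variables ${\displaystyle X_{1},X_{2},\ldots }$, the tail σ-algebra is

```latex
% Tail sigma-algebra of a sequence (X_1, X_2, \ldots):
% events measurable with respect to every tail (X_n, X_{n+1}, \ldots)
\mathcal{T} \;=\; \bigcap_{n=1}^{\infty} \sigma\!\left(X_n, X_{n+1}, X_{n+2}, \ldots\right)
```

and a tail event is any element of ${\displaystyle {\mathcal {T}}}$, i.e. an event whose occurrence is unaffected by changing any finite initial segment of the sequence.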

Are ${\displaystyle F_{k}}$ and ${\displaystyle B_{k}}$ the same, or am I missing something? Pr.elvis (talk) 14:15, 10 August 2009 (UTC)

Let ${\displaystyle X_{i}(\omega )\ :\ i\in \mathbb {N} }$ be i.i.d. Bernoulli random variables on ${\displaystyle \Omega }$ satisfying ${\displaystyle \operatorname {P} (X_{i}=1)=1/2}$. Define ${\displaystyle I_{X}\subset \mathbb {N} }$ as the set of all ${\displaystyle i\in \mathbb {N} }$ such that ${\displaystyle X_{i}=1}$. Using the axiom of choice, choose a family ${\displaystyle {\mathcal {A}}}$ of subsets of ${\displaystyle \mathbb {N} }$ such that for each ${\displaystyle B\subset \mathbb {N} }$ there exists a unique ${\displaystyle A\in {\mathcal {A}}}$ such that the symmetric difference of ${\displaystyle A}$ and ${\displaystyle B}$ is finite. Now let ${\displaystyle {\mathcal {A}}'}$ be a subset of ${\displaystyle {\mathcal {A}}}$ of cardinality ${\displaystyle 2^{\aleph _{0}}}$ whose complement is also of cardinality ${\displaystyle 2^{\aleph _{0}}}$, and let ${\displaystyle Z}$ be the set ${\displaystyle \{\omega \in \Omega :\exists A\in {\mathcal {A}}'{\textrm {s.t.}}\,I_{X}\bigtriangleup A\,{\textrm {is}}\,{\textrm {finite}}\}}$.

Let ${\displaystyle {\mathcal {F}}}$ be the σ-algebra on ${\displaystyle \Omega }$ generated by the ${\displaystyle X_{i}}$, and let ${\displaystyle {\mathcal {G}}}$ be the σ-algebra generated by ${\displaystyle {\mathcal {F}}\cup \{Z\}}$. An arbitrary element of ${\displaystyle {\mathcal {G}}}$ can be expressed in the form ${\displaystyle (A\cap Z)\cup (B\cap Z^{c})}$, where ${\displaystyle A,B\in {\mathcal {F}}}$; define the probability of such an event to be ${\displaystyle (\mathop {P} (A)+\mathop {P} (B))/2}$.

Then the event ${\displaystyle Z}$ is independent of every finite subset of the ${\displaystyle X_{i}}$ and is uniquely determined by the 'tail' ${\displaystyle (X_{n},X_{n+1},X_{n+2},\ldots )}$ for any ${\displaystyle n}$, yet it has probability 1/2.
This does not contradict the theorem, because the theorem should only be stated for events belonging to ${\displaystyle {\mathcal {F}}}$.
Another issue is that you define a tail event as one that is independent of all finite subsets of the ${\displaystyle X_{i}}$, but this itself follows from the hypothesis that the ${\displaystyle X_{i}}$ are independent. In fact, under the article's definition of 'tail event', surely we don't even need to assume that the ${\displaystyle X_{i}}$ are independent for the result to be true (provided it is only stated for events in ${\displaystyle {\mathcal {F}}}$)? —Preceding unsigned comment added by 92.15.133.253 (talk) 10:38, 6 August 2010 (UTC)