{{Mergefrom|Coupon collector's problem (generating function approach)|date=August 2011}}
[[File:Coupon_collector_problem.svg|thumb|400px|Graph of number of coupons, ''n'' vs the expected number of tries needed to collect them all, ''E''&thinsp;(''T''&thinsp;)]]
In [[probability theory]], the '''coupon collector's problem''' describes "collect all coupons and win" contests. It asks the following question: Suppose that there are ''n'' different [[coupon]]s, each equally likely to be drawn, and that coupons are collected with replacement. What is the probability that more than ''t'' sample trials are needed to collect all ''n'' coupons? An alternative statement is: Given ''n'' coupons, how many coupons do you expect to need to draw with replacement before having drawn each coupon at least once? The mathematical analysis of the problem reveals that the [[expected value|expected number]] of trials needed grows as <math>\Theta(n\log(n))</math>.<ref>Here and throughout this article, "log" refers to the [[natural logarithm]] rather than a logarithm to some other base. The use of &Theta; here invokes [[big O notation]].</ref> For example, when ''n''&nbsp;=&nbsp;50 it takes about 225<ref>E(50) = 50&middot;(1 + 1/2 + 1/3 + &hellip; + 1/50) = 224.96 is the expected number of trials to collect all 50 coupons. The approximation <math>n\log n+\gamma n+1/2</math> for this expected number gives in this case <math>50\log 50+50\gamma+1/2 \approx 195.60+28.86+0.5\approx 224.96</math>.</ref> trials to collect all 50 coupons.
 
==Understanding the problem ==
The key to solving the problem is to observe that the first few coupons are collected very quickly, whereas the last few take a long time. In fact, for 50 coupons, it takes on average 50 trials to collect the very last coupon after the other 49 have already been collected. This is why the expected time to collect all coupons is much longer than 50. The idea is to split the total time into 50 intervals, one for each new coupon, whose expected lengths can be calculated separately.
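This decomposition can be illustrated with a short Monte Carlo simulation. The following is a minimal Python sketch (the function and variable names are illustrative, not fixed notation) that records, for each new coupon, the number of extra draws spent waiting for it:

<syntaxhighlight lang="python">
import random

def collect_all(n, rng):
    """One run: return the number of extra draws spent waiting for each new coupon."""
    seen, gaps, draws = set(), [], 0
    while len(seen) < n:
        draws += 1
        c = rng.randrange(n)        # draw one coupon uniformly, with replacement
        if c not in seen:
            seen.add(c)
            gaps.append(draws)      # draws spent waiting for this new coupon
            draws = 0
    return gaps

rng = random.Random(1)
n, runs = 50, 2000
avg = [0.0] * n
for _ in range(runs):
    for i, g in enumerate(collect_all(n, rng)):
        avg[i] += g / runs

print(round(avg[0], 2), round(avg[-1], 1))   # first coupon: about 1 draw, last coupon: about 50 draws
print(round(sum(avg), 1))                    # total: close to n*H_n, about 225 for n = 50
</syntaxhighlight>

The averages illustrate the point above: the first coupon arrives after about one draw, while the last coupon alone takes about ''n'' draws.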
 
==Answer==
 
The following table (click [show] to expand) gives the expected number of tries to get sets of 1 to 100 coupons.
{| class="wikitable collapsible collapsed" style="text-align:center; font:80% monospace; border:none;"
|- style="font:125% sans-serif;"
! Number<br /> of coupons,<br /> ''n'' !! Expected number<br /> of tries per coupon<br /> = ''H<sub>n</sub>'' !! Expected total<br /> number of tries,<br /> ''E''&thinsp;(''T''&thinsp;) = &#8968;''n&nbsp;H<sub>n</sub>&nbsp;''&#8969;
| rowspan="51" style="background:none; border:none;" |
! Number<br /> of coupons,<br /> ''n'' !! Expected number<br /> of tries per coupon<br /> = ''H<sub>n</sub>'' !! Expected total<br /> number of tries,<br /> ''E''&thinsp;(''T''&thinsp;) = &#8968;''n&nbsp;H<sub>n</sub>&nbsp;''&#8969;
|-
|  1 || 1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ||  1 ||  51 || 4.51881 || 231
|-
|  2 || 1.5&nbsp;&nbsp;&nbsp;&nbsp; ||  3 ||  52 || 4.53804 || 236
|-
|  3 || 1.83333 ||  6 ||  53 || 4.55691 || 242
|-
|  4 || 2.08333 ||  9 ||  54 || 4.57543 || 248
|-
|  5 || 2.28333 ||  12 ||  55 || 4.59361 || 253
|-
|  6 || 2.45&nbsp;&nbsp;&nbsp; ||  15 ||  56 || 4.61147 || 259
|-
|  7 || 2.59286 ||  19 ||  57 || 4.62901 || 264
|-
|  8 || 2.71786 ||  22 ||  58 || 4.64625 || 270
|-
|  9 || 2.82897 ||  26 ||  59 || 4.66320 || 276
|-
|  10 || 2.92897 ||  30 ||  60 || 4.67987 || 281
|-
|  11 || 3.01988 ||  34 ||  61 || 4.69626 || 287
|-
|  12 || 3.10321 ||  38 ||  62 || 4.71239 || 293
|-
|  13 || 3.18013 ||  42 ||  63 || 4.72827 || 298
|-
|  14 || 3.25156 ||  46 ||  64 || 4.74389 || 304
|-
|  15 || 3.31823 ||  50 ||  65 || 4.75928 || 310
|-
|  16 || 3.38073 ||  55 ||  66 || 4.77443 || 316
|-
|  17 || 3.43955 ||  59 ||  67 || 4.78935 || 321
|-
|  18 || 3.49511 ||  63 ||  68 || 4.80406 || 327
|-
|  19 || 3.54774 ||  68 ||  69 || 4.81855 || 333
|-
|  20 || 3.59774 ||  72 ||  70 || 4.83284 || 339
|-
|  21 || 3.64536 ||  77 ||  71 || 4.84692 || 345
|-
|  22 || 3.69081 ||  82 ||  72 || 4.86081 || 350
|-
|  23 || 3.73429 ||  86 ||  73 || 4.87451 || 356
|-
|  24 || 3.77596 ||  91 ||  74 || 4.88802 || 362
|-
|  25 || 3.81596 ||  96 ||  75 || 4.90136 || 368
|-
|  26 || 3.85442 || 101 ||  76 || 4.91451 || 374
|-
|  27 || 3.89146 || 106 ||  77 || 4.92750 || 380
|-
|  28 || 3.92717 || 110 ||  78 || 4.94032 || 386
|-
|  29 || 3.96165 || 115 ||  79 || 4.95298 || 392
|-
|  30 || 3.99499 || 120 ||  80 || 4.96548 || 398
|-
|  31 || 4.02725 || 125 ||  81 || 4.97782 || 404
|-
|  32 || 4.05850 || 130 ||  82 || 4.99002 || 410
|-
|  33 || 4.08880 || 135 ||  83 || 5.00207 || 416
|-
|  34 || 4.11821 || 141 ||  84 || 5.01397 || 422
|-
|  35 || 4.14678 || 146 ||  85 || 5.02574 || 428
|-
|  36 || 4.17456 || 151 ||  86 || 5.03737 || 434
|-
|  37 || 4.20159 || 156 ||  87 || 5.04886 || 440
|-
|  38 || 4.22790 || 161 ||  88 || 5.06022 || 446
|-
|  39 || 4.25354 || 166 ||  89 || 5.07146 || 452
|-
|  40 || 4.27854 || 172 ||  90 || 5.08257 || 458
|-
|  41 || 4.30293 || 177 ||  91 || 5.09356 || 464
|-
|  42 || 4.32674 || 182 ||  92 || 5.10443 || 470
|-
|  43 || 4.35000 || 188 ||  93 || 5.11518 || 476
|-
|  44 || 4.37273 || 193 ||  94 || 5.12582 || 482
|-
|  45 || 4.39495 || 198 ||  95 || 5.13635 || 488
|-
|  46 || 4.41669 || 204 ||  96 || 5.14676 || 495
|-
|  47 || 4.43796 || 209 ||  97 || 5.15707 || 501
|-
|  48 || 4.45880 || 215 ||  98 || 5.16728 || 507
|-
|  49 || 4.47921 || 220 ||  99 || 5.17738 || 513
|-
|  50 || 4.49921 || 225 || 100 || 5.18738 || 519
|}
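The entries above can be reproduced directly from the formula in the column headers. A minimal Python sketch using exact rational arithmetic (illustrative only):

<syntaxhighlight lang="python">
from fractions import Fraction
from math import ceil

def harmonic(n):
    """Exact n-th harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

# Reproduce a few rows of the table: n, H_n (rounded), ceil(n * H_n).
for n in (1, 10, 50, 100):
    h = harmonic(n)
    print(n, round(float(h), 5), ceil(n * h))
# n = 50 gives 4.49921 and 225; n = 100 gives 5.18738 and 519, matching the table
</syntaxhighlight>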
 
==Solution==
=== Calculating the expectation ===
Let ''T'' be the time to collect all ''n'' coupons, and let ''t<sub>i</sub>'' be the time to collect the ''i''-th coupon after ''i''&nbsp;&minus;&nbsp;1 coupons have been collected. Think of ''T'' and ''t<sub>i</sub>'' as [[random variable]]s. Observe that the probability of collecting a '''new''' coupon, given that ''i''&nbsp;&minus;&nbsp;1 coupons have already been collected, is ''p<sub>i</sub>'' = (''n''&nbsp;&minus;&nbsp;(''i''&nbsp;&minus;&nbsp;1))/''n''. Therefore, ''t<sub>i</sub>'' has a [[geometric distribution]] with expectation 1/''p<sub>i</sub>''. By the [[Expected value#Linearity|linearity of expectations]] we have:
 
:<math>
\begin{align}
\operatorname{E}(T)& = \operatorname{E}(t_1) + \operatorname{E}(t_2) + \cdots + \operatorname{E}(t_n)
= \frac{1}{p_1} + \frac{1}{p_2} +  \cdots + \frac{1}{p_n} \\
& = \frac{n}{n} + \frac{n}{n-1} +  \cdots + \frac{n}{1}  = n \cdot \left(\frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n}\right) \, = \, n \cdot H_n.
\end{align}
</math>
 
Here ''H<sub>n</sub>'' is the ''n''-th [[harmonic number]]. Using the [[asymptotics]] of the harmonic numbers, we obtain:
 
:<math>
\operatorname{E}(T)  = n \cdot H_n = n \log n + \gamma n + \frac1{2} + o(1), \ \
\text{as}  \ n \to \infty,
</math>
where <math>\gamma \approx 0.5772156649</math> is the [[Euler–Mascheroni constant]].
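For concrete values of ''n'', the exact expectation ''n&nbsp;H<sub>n</sub>'' and this asymptotic approximation agree closely. A minimal Python sketch of the comparison (illustrative only):

<syntaxhighlight lang="python">
from math import log

EULER_GAMMA = 0.5772156649015329

def expected_draws(n):
    """Exact expectation E(T) = n * H_n."""
    return n * sum(1.0 / k for k in range(1, n + 1))

def asymptotic(n):
    """Approximation n log n + gamma * n + 1/2 (dropping the o(1) term)."""
    return n * log(n) + EULER_GAMMA * n + 0.5

for n in (10, 50, 1000):
    print(n, round(expected_draws(n), 2), round(asymptotic(n), 2))
# n = 50 gives 224.96 for both expressions, the value quoted in the introduction
</syntaxhighlight>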
 
Now one can use the [[Markov inequality]] to bound the desired probability:
 
:<math>\operatorname{P}(T \geq c \, n H_n) \le \frac1c.</math>
 
=== Calculating the variance ===
Using the independence of random variables ''t<sub>i</sub>'', we obtain:
 
:<math>
\begin{align}
\operatorname{Var}(T)& = \operatorname{Var}(t_1) + \operatorname{Var}(t_2) + \cdots + \operatorname{Var}(t_n) \\
& = \frac{1-p_1}{p_1^2} + \frac{1-p_2}{p_2^2} +  \cdots + \frac{1-p_n}{p_n^2} \\
& \leq \frac{n^2}{n^2} + \frac{n^2}{(n-1)^2} +  \cdots + \frac{n^2}{1^2} \\
& \leq n^2 \cdot \left(\frac{1}{1^2} + \frac{1}{2^2} + \cdots \right) = \frac{\pi^2}{6} n^2 < 2 n^2,
\end{align}
</math>
 
where the last equality uses a value of the [[Riemann zeta function]] (see [[Basel problem]]).
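A minimal Python sketch checking this computation numerically for ''n''&nbsp;=&nbsp;50 (illustrative only; the exact variance is compared against the bounds <math>\tfrac{\pi^2}{6} n^2</math> and <math>2n^2</math>):

<syntaxhighlight lang="python">
from math import pi

def variance_T(n):
    """Exact Var(T) = sum over i of (1 - p_i) / p_i^2, with p_i = (n - i + 1) / n."""
    total = 0.0
    for i in range(1, n + 1):
        p = (n - i + 1) / n
        total += (1 - p) / p ** 2
    return total

n = 50
print(round(variance_T(n), 1))                      # exact variance, about 3837.9
print(round(pi ** 2 / 6 * n ** 2, 1), 2 * n ** 2)   # the bounds: about 4112.3 and 5000
</syntaxhighlight>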
 
Now one can use the [[Chebyshev inequality]] to bound the desired probability:
 
:<math>\operatorname{P}\left(|T- n H_n| \geq c\, n\right) \le \frac{2}{c^2}.</math>
 
=== Tail estimates ===
 
A different upper bound can be derived from the following observation. Let <math>{Z}_i^r</math> denote the event that the <math>i</math>-th coupon was not picked in the first <math>r</math> trials. Then:
:<math>
\begin{align}
P\left [ {Z}_i^r \right ] = \left(1-\frac{1}{n}\right)^r \le e^{-r / n}.
\end{align}
</math>
 
Thus, for <math>r = \beta n \log n</math>, we have <math>P\left [ {Z}_i^r \right ] \le e^{(-\beta n \log n ) / n} = n^{-\beta}</math>. Since ''T'' exceeds <math>r</math> exactly when some coupon has not been picked within the first <math>r</math> trials, the union bound gives:
:<math>
\begin{align}
P\left [ T > \beta n \log n \right ] = P \left [ \bigcup_i {Z}_i^{\beta n \log n} \right ] \le n \cdot P [ {Z}_1^{\beta n \log n} ] \le n^{-\beta + 1}
\end{align}
</math>
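This bound can be compared against simulation. A minimal Python sketch (the choices ''n''&nbsp;=&nbsp;10 and <math>\beta = 2</math> are made only for illustration):

<syntaxhighlight lang="python">
import random
from math import log

def draws_to_collect(n, rng):
    """Number of draws T needed before every one of the n coupons has been seen."""
    seen, t = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        t += 1
    return t

rng = random.Random(7)
n, beta, runs = 10, 2.0, 20000
threshold = beta * n * log(n)
tail = sum(draws_to_collect(n, rng) > threshold for _ in range(runs)) / runs
print(round(tail, 4), n ** (1 - beta))   # empirical tail (about 0.08) vs the bound 0.1
</syntaxhighlight>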
 
=== Connection to probability generating functions ===
Another [[combinatorial]] technique can also be used to resolve the problem: see [[Coupon collector's problem (generating function approach)]].
 
==Extensions and generalizations==
* [[Paul Erdős]] and [[Alfréd Rényi]] proved a limit theorem for the distribution of ''T'', which sharpens the bounds given above:
 
::<math>
\operatorname{P}(T < n\log n + cn) \to e^{-e^{-c}}, \ \  \text{as}  \ n \to \infty.
</math>
 
* [[Donald J. Newman]] and [[Lawrence Shepp]] found a generalization of the coupon collector's problem in which ''m'' copies of each coupon need to be collected. Let ''T<sub>m</sub>'' be the first time ''m'' copies of each coupon are collected (a simulation sketch of this setting is given after this list). They showed that the expectation in this case satisfies:
 
::<math>
\operatorname{E}(T_m) =  n \log n + (m-1) n \log\log n + O(n), \ \
\text{as}  \ n \to \infty.
</math>
:Here ''m'' is fixed.  When ''m'' = 1 we get the earlier formula for the expectation.
 
* A common generalization, also due to Erdős and Rényi, gives the limiting distribution of ''T<sub>m</sub>'':

::<math>
\operatorname{P}(T_m < n\log n + (m-1) n \log\log n + cn) \to e^{-e^{-c}/(m-1)!}, \ \  \text{as}  \ n \to \infty.
</math>
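The Newman–Shepp generalization above can be explored by simulation. A minimal Python sketch of the ''m''-copy setting, comparing the empirical mean of ''T<sub>m</sub>'' with the leading terms of the asymptotic expansion (the ''O''(''n'') term is omitted, which accounts for the gap):

<syntaxhighlight lang="python">
import random
from math import log

def draws_for_m_copies(n, m, rng):
    """Draws until every one of the n coupon types has been seen at least m times."""
    counts = [0] * n
    remaining = n            # coupon types still holding fewer than m copies
    t = 0
    while remaining > 0:
        c = rng.randrange(n)
        counts[c] += 1
        if counts[c] == m:
            remaining -= 1
        t += 1
    return t

rng = random.Random(3)
n, m, runs = 50, 2, 2000
avg = sum(draws_for_m_copies(n, m, rng) for _ in range(runs)) / runs
leading = n * log(n) + (m - 1) * n * log(log(n))
print(round(avg, 1), round(leading, 1))
# the gap between the two numbers corresponds to the O(n) term of the expansion
</syntaxhighlight>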
 
== See also ==
 
* [[Watterson estimator]]
 
== Notes ==
{{Reflist}}
 
==References==
*{{citation
| last1 = Blom | first1 = Gunnar
| last2 = Holst | first2 = Lars
| last3 = Sandell | first3 = Dennis
| contribution = 7.5 Coupon collecting I, 7.6 Coupon collecting II, and 15.4 Coupon collecting III
| isbn = 0-387-94161-4
| location = New York
| mr = 1265713
| pages = 85–87, 191
| publisher = Springer-Verlag
| title = Problems and Snapshots from the World of Probability
| url = http://books.google.com/books?id=KCsSWFMq2u0C&pg=PA85
| year = 1994}}.
*{{citation
| last = Dawkins | first = Brian
| issue = 1
| journal = The American Statistician
| jstor = 2685247
| pages = 76–82
| title = Siobhan's problem: the coupon collector revisited
| volume = 45
| year = 1991}}.
*{{citation
| last1 = Erdős | first1 = Paul | author1-link = Paul Erdős
| last2 = Rényi | first2 = Alfréd | author2-link = Alfréd Rényi
| journal = Magyar Tudományos Akadémia Matematikai Kutató Intézetének Közleményei
| mr = 0150807
| pages = 215–220
| title = On a classical problem of probability theory
| url = http://www.renyi.hu/~p_erdos/1961-09.pdf
| volume = 6
| year = 1961}}.
*{{citation
| last1 = Newman | first1 = Donald J. | author1-link = Donald J. Newman
| last2 = Shepp | first2 = Lawrence | author2-link = Lawrence Shepp
| doi = 10.2307/2308930
| journal = [[American Mathematical Monthly]]
| mr = 0120672
| pages = 58–61
| title = The double dixie cup problem
| volume = 67
| year = 1960}}.
*{{citation
| last1 = Flajolet | first1 = Philippe | author1-link = Philippe Flajolet
| last2 = Gardy | first2 = Danièle
| last3 = Thimonier | first3 = Loÿs
| doi = 10.1016/0166-218X(92)90177-C
| issue = 3
| journal = Discrete Applied Mathematics
| mr = 1189469
| pages = 207–229
| title = Birthday paradox, coupon collectors, caching algorithms and self-organizing search
| url = http://algo.inria.fr/flajolet/Publications/alloc2.ps.gz
| volume = 39
| year = 1992}}.
*{{citation
| last = Isaac | first = Richard
| contribution = 8.4 The coupon collector's problem solved
| isbn = 0-387-94415-X
| location = New York
| mr = 1329545
| pages = 80–82
| publisher = Springer-Verlag
| series = Undergraduate Texts in Mathematics
| title = The Pleasures of Probability
| url = http://books.google.com/books?id=a_2vsIx4FQMC&pg=PA80
| year = 1995}}.
*{{citation
| last1 = Motwani | first1 = Rajeev | author1-link = Rajeev Motwani
| last2 = Raghavan | first2 = Prabhakar
| contribution = 3.6. The Coupon Collector's Problem
| location = Cambridge
| mr = 1344451
| pages = 57–63
| publisher = Cambridge University Press
| title = Randomized algorithms
| url = http://books.google.com/books?id=QKVY4mDivBEC&pg=PA57
| year = 1995}}.
 
==External links==
* "[http://demonstrations.wolfram.com/CouponCollectorProblem/ Coupon Collector Problem]" by [[Ed Pegg, Jr.]], the [[Wolfram Demonstrations Project]]. Mathematica package.
* [http://www-stat.stanford.edu/~susan/surprise/Collector.html Coupon Collector Problem], a simple [[Java applet]].
* ''[http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/coupon.html How Many Singles, Doubles, Triples, Etc., Should The Coupon Collector Expect?]'', a short note by [[Doron Zeilberger]].
 
[[Category:Articles containing proofs]]
[[Category:Games (probability)]]
[[Category:Probability theorems]]
[[Category:Named probability problems]]
