{{Statistical mechanics|cTopic=[[Particle statistics|Particle Statistics]]}}
In [[quantum statistics]], '''Bose–Einstein statistics''' (or more colloquially '''B–E statistics''') is one of two possible ways in which a collection of non-interacting indistinguishable [[particles]] may occupy a set of available discrete [[Energy level|energy states]], at [[thermodynamic equilibrium]]. The aggregation of particles in the same state, which is a characteristic of particles obeying Bose–Einstein statistics, accounts for the cohesive streaming of [[Laser#Quantum vs. classical emission processes|laser light]] and the frictionless creeping of [[superfluid helium]]. The theory of this behaviour was developed (1924–25) by [[Satyendra Nath Bose]], who recognized that a collection of identical and indistinguishable particles can be distributed in this way. The idea was later adopted and extended by [[Albert Einstein]] in collaboration with Bose.
 
The Bose–Einstein statistics apply only to those particles not limited to single occupancy of the same state—that is, particles that do not obey the [[Pauli exclusion principle]] restrictions. Such particles have integer values of [[spin (physics)|spin]] and are named [[boson]]s, after the statistics that correctly describe their behaviour. There must also be no significant interaction between the particles.
 
==Concept==
At low temperatures, bosons behave differently from [[fermion]]s (which obey the [[Fermi–Dirac statistics]]) in that an unlimited number of them can "condense" into the same energy state. This apparently unusual property also gives rise to the special state of matter known as the [[Bose–Einstein condensate]]. Fermi–Dirac and Bose–Einstein statistics apply when [[quantum|quantum effects]] are important and the particles are "[[Identical particles|indistinguishable]]". Quantum effects appear if the concentration of particles satisfies
 
:<math>\frac{N}{V} \ge n_q </math>
 
where ''N'' is the number of particles, ''V'' is the volume, and ''n''<sub>''q''</sub> is the [[quantum concentration]], for which the interparticle distance is equal to the [[thermal de Broglie wavelength]], so that the [[wavefunction]]s of the particles are barely overlapping. Fermi–Dirac statistics apply to fermions (particles that obey the [[Pauli exclusion principle]]), and Bose–Einstein statistics apply to [[bosons]]. Since the quantum concentration depends on temperature, most systems at high temperatures obey the classical (Maxwell–Boltzmann) limit unless they also have a very high density, as for a [[white dwarf]]. Both Fermi–Dirac and Bose–Einstein statistics become [[Maxwell–Boltzmann statistics]] at high temperature or at low concentration.
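As a rough numerical illustration of this criterion, the short sketch below (illustrative only; the helium-4 mass and the liquid-helium number density used here are assumed values, not taken from this article) computes the quantum concentration from the thermal de Broglie wavelength and compares it with ''N''/''V'':

<syntaxhighlight lang="python">
import math

# Physical constants (SI units)
h = 6.62607015e-34    # Planck constant, J s
kB = 1.380649e-23     # Boltzmann constant, J/K

def quantum_concentration(mass, temperature):
    """n_q = 1 / lambda_dB^3, with lambda_dB = h / sqrt(2 pi m kB T)."""
    lam = h / math.sqrt(2 * math.pi * mass * kB * temperature)
    return 1.0 / lam**3

# Illustrative check for helium-4 near 2 K (assumed values)
m_he = 6.646e-27        # kg, approximate helium-4 atomic mass
T = 2.0                 # K
n_q = quantum_concentration(m_he, T)
n_liquid_he = 2.2e28    # m^-3, rough number density of liquid helium (assumed)

print(f"n_q ~ {n_q:.2e} m^-3, N/V ~ {n_liquid_he:.2e} m^-3")
print("quantum effects expected:", n_liquid_he >= n_q)
</syntaxhighlight>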
 
B–E statistics was introduced for [[photon]]s in 1924 by [[Satyendra Nath Bose|Bose]] and generalized to atoms by [[Albert Einstein|Einstein]] in 1924–25.
 
The expected number of particles in an energy state ''i''&nbsp; for B–E statistics is
 
:<math>n_i(\varepsilon_i) = \frac{g_i}{e^{(\varepsilon_i-\mu)/kT}-1}</math>
with ''ε<sub>i</sub>''&nbsp;> ''μ'' and where ''n<sub>i</sub>''&nbsp; is the number of particles in state ''i'', ''g<sub>i</sub>''&nbsp; is the [[Degenerate energy level|degeneracy]] of state ''i'', ''ε<sub>i</sub>''&nbsp; is the [[energy]] of the ''i''th state, ''μ'' is the [[chemical potential]], ''k'' is the [[Boltzmann constant]], and ''T'' is absolute [[temperature]]. For comparison, the average number of fermions with energy <math>\epsilon_i</math> given by [[Fermi–Dirac statistics#Distribution of particles over energy|Fermi–Dirac particle-energy distribution]] has a similar form,
 
:<math> \bar{n}_i(\epsilon_i) = \frac{g_i}{e^{(\epsilon_i-\mu) / k T} + 1} </math>
 
B–E statistics reduces to the [[Rayleigh–Jeans Law]] distribution for <math> kT \gg \varepsilon_i-\mu </math>, namely <math>
n_i = \frac{g_i kT}{\varepsilon_i-\mu} </math>.
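To get a concrete feel for these occupancy formulas, the following sketch (an illustrative comparison; the sampled values of (''ε'' − ''μ'')/''kT'' are arbitrary choices) evaluates the Bose–Einstein, Fermi–Dirac, and Maxwell–Boltzmann occupancies at the same reduced energy:

<syntaxhighlight lang="python">
import math

def bose_einstein(x, g=1.0):
    """Mean occupancy g / (exp(x) - 1), with x = (eps - mu)/(kT) > 0."""
    return g / (math.exp(x) - 1.0)

def fermi_dirac(x, g=1.0):
    return g / (math.exp(x) + 1.0)

def maxwell_boltzmann(x, g=1.0):
    return g * math.exp(-x)

for x in (0.1, 1.0, 3.0, 10.0):
    print(f"x={x:5.1f}  BE={bose_einstein(x):9.4f}  "
          f"FD={fermi_dirac(x):7.4f}  MB={maxwell_boltzmann(x):7.4f}")

# At small x the BE occupancy approaches kT/(eps - mu), the Rayleigh-Jeans limit;
# at large x all three expressions converge to the Maxwell-Boltzmann value.
</syntaxhighlight>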
 
==History==
While presenting a lecture at the [[University of Dhaka]] on the theory of radiation and the ultraviolet catastrophe, [[Satyendra Nath Bose]] intended to show his students that the contemporary theory was inadequate, because it predicted results that did not agree with experiment. During this lecture, Bose committed an error in applying the theory, which unexpectedly gave a prediction that agreed with experiment. The error was a simple mistake, similar to arguing that flipping two fair coins will produce two heads one-third of the time, and it would appear obviously wrong to anyone with a basic understanding of statistics (remarkably, it resembled the famous blunder by [[Jean le Rond d'Alembert|d'Alembert]] known from his "[http://www.cs.xu.edu/math/Sources/Dalembert/croix_ou_pile.pdf Croix ou Pile]" article). Because the predicted results agreed with experiment, Bose realized that it might not be a mistake after all. He took the position, for the first time, that the Maxwell–Boltzmann distribution does not hold for microscopic particles, for which fluctuations arising from Heisenberg's uncertainty principle are significant. Instead, he considered the probability of finding particles in cells of phase space, each cell having volume h<sup>3</sup>, and discarded the distinct positions and momenta of the individual particles.
 
Bose adapted this lecture into a short article called "Planck's Law and the Hypothesis of Light Quanta"<ref>See p. 14, note 3, of the Ph.D. Thesis entitled
''Bose–Einstein condensation: analysis of problems and rigorous results'', presented by Alessandro Michelangeli to the International School for Advanced Studies, Mathematical Physics Sector, October 2007 for the degree of Ph.D. See: http://digitallibrary.sissa.it/handle/1963/5272?show=full, and download from
http://digitallibrary.sissa.it/handle/1963/5272</ref><ref>To download the Bose paper, see: http://www.condmat.uni-oldenburg.de/TeachingSP/bose.ps</ref> and submitted it to the ''[[Philosophical Magazine]]''. However, the referee's report was negative, and the paper was rejected. Undaunted, he sent the manuscript to Albert Einstein requesting publication in the ''[[Zeitschrift für Physik]]''. Einstein immediately agreed, personally translated the article into German (Bose had earlier translated Einstein's article on the theory of General Relativity from German to English), and saw to it that it was published. Bose's theory achieved respect when Einstein sent his own paper in support of Bose's to ''Zeitschrift für Physik'', asking that they be published together. This was done in 1924.
 
The reason Bose produced accurate results was that since photons are indistinguishable from each other, one cannot treat any two photons having equal energy as being two distinct identifiable photons. By analogy, if in an alternate universe coins were to behave like photons and other bosons, the probability of producing two heads would indeed be one-third, as would the probability of getting one head and one tail, whereas for conventional (classical, distinguishable) coins the latter probability is one-half. Bose's "error" led to what is now called Bose–Einstein statistics.
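The counting behind this coin analogy can be spelled out by brute force. The sketch below (a toy illustration, assuming that the "bosonic" coins are counted by unordered outcomes, each taken as equally likely, exactly as described above) enumerates both cases:

<syntaxhighlight lang="python">
from itertools import product, combinations_with_replacement

# Classical, distinguishable coins: ordered outcomes, each equally likely.
classical = list(product("HT", repeat=2))                 # HH, HT, TH, TT -> 4 outcomes
# Bose-like, indistinguishable coins: unordered outcomes, each taken as equally likely.
bose_like = list(combinations_with_replacement("HT", 2))  # HH, HT, TT -> 3 outcomes

print("classical P(two heads) =", classical.count(("H", "H")) / len(classical))  # 1/4
print("bose-like P(two heads) =", bose_like.count(("H", "H")) / len(bose_like))  # 1/3
print("bose-like P(one head, one tail) =",
      bose_like.count(("H", "T")) / len(bose_like))                              # 1/3
</syntaxhighlight>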
 
Bose and Einstein extended the idea to atoms, and this led to the prediction of the existence of the phenomenon that became known as the [[Bose–Einstein condensate]], a dense collection of bosons (particles with integer spin, named after Bose), which was demonstrated to exist by experiment in 1995.
 
==Two derivations of the Bose–Einstein distribution==
 
=== Derivation from the grand canonical ensemble ===
The Bose-Einstein distribution, which applies only to a quantum system of non-interacting bosons, is easily derived from the [[grand canonical ensemble]].<ref name="sriva">Chapter 7 of {{cite isbn|9788120327825}}</ref> In this ensemble, the system is able to exchange energy and exchange particles with a reservoir (temperature ''T'' and chemical potential ''µ'' fixed by the reservoir).
 
Because the particles do not interact, each available single-particle level (with energy ''ϵ'') forms a separate thermodynamic system in contact with the reservoir.
In other words, each single-particle level is a separate, tiny grand canonical ensemble.
With bosons there is no limit on the number of particles ''N'' in the level, but due to [[indistinguishability]] each possible ''N'' corresponds to only one [[microstate (statistical mechanics)|microstate]] (with energy ''Nϵ'').
The resulting partition function for that single-particle level therefore forms a [[geometric series]]:
:<math> \begin{align}\mathcal Z & = \sum_{N=0}^{\infty} \exp(N(\mu - \epsilon)/k_B T) = \sum_{N=0}^{\infty} [\exp((\mu - \epsilon)/k_B T)]^N \\
& = \frac{1}{1 - \exp((\mu - \epsilon)/k_B T)}\end{align}</math>
and the average particle number for that single-particle substate is given by
:<math> \langle N\rangle = k_B T \frac{1}{\mathcal Z} \left(\frac{\partial \mathcal Z}{\partial \mu}\right)_{V,T} = \frac{1}{\exp((\epsilon-\mu)/k_B T)-1} </math>
This result applies to each single-particle level and thus gives the Bose–Einstein distribution for the entire state of the system.<ref name="sriva6">Chapter 6 of {{cite isbn|9788120327825}}</ref>
 
The variance in particle number (due to [[thermal fluctuations]]) may also be derived:
:<math> \langle (\Delta N)^2 \rangle = k_B T \left(\frac{d\langle N\rangle}{d\mu}\right)_{V,T} = \langle N\rangle^2 + \langle N\rangle </math>
This level of fluctuation is much larger than for [[Maxwell-Boltzmann statistics|distinguishable particles]], which would instead show [[Poisson statistics]] (<math>\langle (\Delta N)^2 \rangle = \langle N\rangle </math>).
This is because the [[probability distribution]] for the number of bosons in a given energy level is a [[geometric distribution]], not a [[Poisson distribution]].
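The closed forms above are easy to check numerically. The following sketch (an illustrative verification; the chosen values of (''ϵ'' − ''µ'')/''k''<sub>B</sub>''T'' are arbitrary) truncates the sums over ''N'' and compares the results with the closed-form mean and variance:

<syntaxhighlight lang="python">
import math

def be_moments_numeric(x, n_max=2000):
    """Truncated sums over occupation N for one level, with x = (eps - mu)/(kB*T) > 0."""
    weights = [math.exp(-n * x) for n in range(n_max + 1)]   # Boltzmann factors exp(N(mu-eps)/kT)
    Z = sum(weights)
    mean = sum(n * w for n, w in enumerate(weights)) / Z
    mean_sq = sum(n * n * w for n, w in enumerate(weights)) / Z
    return mean, mean_sq - mean**2

for x in (0.5, 1.0, 2.0):
    mean_num, var_num = be_moments_numeric(x)
    mean_cf = 1.0 / (math.exp(x) - 1.0)     # closed-form <N>
    var_cf = mean_cf**2 + mean_cf           # closed-form <(Delta N)^2>
    print(f"x={x}: <N> {mean_num:.6f} vs {mean_cf:.6f}, "
          f"var {var_num:.6f} vs {var_cf:.6f}")
</syntaxhighlight>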
 
=== Derivation in the canonical approach ===
 
It is also possible to derive approximate Bose–Einstein statistics in the [[canonical ensemble]].
These derivations are lengthy and only yield the above results in the asymptotic limit of a large number of particles.
The reason is that the total number of bosons is fixed in the canonical ensemble. That contradicts the implication in Bose–Einstein statistics that each energy level is filled independently from the others (which would require the number of particles to be flexible).
 
{{collapse top|title=Derivation}}
Suppose we have a number of energy levels, labeled by index
<math>\displaystyle i</math>, each level
having energy <math>\displaystyle \varepsilon_i</math> and containing a total of
<math>\displaystyle n_i</math> particles.  Suppose each level contains
<math>\displaystyle g_i</math>
distinct sublevels, all of which have the same energy, and which are distinguishable. For example, two particles may have different momenta, in which case they are distinguishable from each other, yet they can still have the same energy.
The value of 
<math>\displaystyle g_i</math> associated with level <math>\displaystyle i</math> is called the "degeneracy" of that energy level. Any number of bosons can occupy the same sublevel.
 
Let <span id="w(n,g)"><math>\displaystyle w(n,g)</math></span> be the number of ways of distributing
<math>\displaystyle n</math> particles among the
<math>\displaystyle g</math> sublevels of an energy level. There is only one way of distributing
<math>\displaystyle n</math> particles in one sublevel, therefore
<math>\displaystyle w(n,1)=1</math>. It is easy to see that
there are <math>\displaystyle (n+1)</math> ways of distributing
<math>\displaystyle n</math> particles in two sublevels, which we will write as:
 
:<math>
w(n,2)=\frac{(n+1)!}{n!1!}.
</math>
 
With a little thought
(see [[#Notes|Notes]] below)
it can be seen that the number of ways of distributing
<math>\displaystyle n</math> particles in three sublevels is
 
:<math>w(n,3) = w(n,2) + w(n-1,2) + \cdots + w(1,2) + w(0,2)
</math>
so that
 
:<math>
w(n,3)=\sum_{k=0}^n w(n-k,2) = \sum_{k=0}^n\frac{(n-k+1)!}{(n-k)!1!}=\frac{(n+2)!}{n!2!}
</math>
 
where we have used the following <span id="theorem">theorem</span> involving [[binomial coefficient]]s:
 
:<math>
\sum_{k=0}^n\frac{(k+a)!}{k!a!}=\frac{(n+a+1)!}{n!(a+1)!}.
</math>
 
Continuing this process, we can see that
<span id="w(n,g)"><math>\displaystyle w(n,g)</math></span>
is just a binomial coefficient
(See [[#Notes|Notes]] below)
 
:<math>
w(n,g)=\frac{(n+g-1)!}{n!(g-1)!}.
</math>
 
For example, the population numbers for two particles in three sublevels are 200, 110, 101, 020, 011, or 002 for a total of six which equals 4!/(2!2!). The number of ways that a set of occupation numbers <math>\displaystyle n_i</math> can be realized is the product of the ways that each individual energy level can be populated:
 
:<math>
W = \prod_i w(n_i,g_i) =  \prod_i \frac{(n_i+g_i-1)!}{n_i!(g_i-1)!}
\approx\prod_i \frac{(n_i+g_i)!}{n_i!(g_i-1)!}
</math>
 
where the approximation assumes that <math>n_i \gg 1</math>.
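The counting formula for <math>\displaystyle w(n,g)</math> is easy to verify by direct enumeration. The following sketch (a small illustrative check, not part of the derivation) compares the closed form with a brute-force count of multisets:

<syntaxhighlight lang="python">
from itertools import combinations_with_replacement
from math import comb

def w_formula(n, g):
    """w(n, g) = (n + g - 1)! / (n! (g - 1)!) as a binomial coefficient."""
    return comb(n + g - 1, n)

def w_bruteforce(n, g):
    """Count the ways to place n indistinguishable bosons in g distinguishable sublevels."""
    return sum(1 for _ in combinations_with_replacement(range(g), n))

for n, g in [(2, 3), (4, 3), (3, 2), (5, 4)]:
    assert w_formula(n, g) == w_bruteforce(n, g)
    print(f"w({n},{g}) = {w_formula(n, g)}")

# w(2,3) = 6 reproduces the six occupation patterns 200, 110, 101, 020, 011, 002.
</syntaxhighlight>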
 
Following the same procedure used in deriving the [[Maxwell–Boltzmann statistics]], we wish to find the set of  <math>\displaystyle n_i</math> for which  ''W'' is maximised, subject to the constraint that there be a fixed total number of particles, and a fixed total energy. The maxima of <math>\displaystyle W</math> and <math>\displaystyle \ln(W)</math> occur at the same value of <math>\displaystyle n_i</math> and, since it is easier to accomplish mathematically, we will maximise the latter function instead. We constrain our solution using [[Lagrange multipliers]] forming the function:
 
:<math>
f(n_i)=\ln(W)+\alpha(N-\sum n_i)+\beta(E-\sum n_i \varepsilon_i)
</math>
 
Using the <math>n_i \gg 1</math> approximation and using [[Stirling's approximation]] for the factorials <math>\left(x!\approx x^x\,e^{-x}\,\sqrt{2\pi x}\right)</math> gives
 
:<math>f(n_i)=\sum_i (n_i + g_i) \ln(n_i + g_i) - n_i \ln(n_i) +\alpha\left(N-\sum n_i\right)+\beta\left(E-\sum n_i \varepsilon_i\right)+K.</math>
 
where ''K'' is the sum of a number of terms that are not functions of the <math>n_i</math>. Taking the derivative with respect to <math>\displaystyle n_i</math>, setting the result to zero, and solving for <math>\displaystyle n_i</math> yields the Bose–Einstein population numbers:
 
:<math>
n_i = \frac{g_i}{e^{\alpha+\beta \varepsilon_i}-1}.
</math>
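
For completeness, the stationarity condition behind this result can be written out explicitly. Differentiating the approximated <math>f</math> with respect to a single <math>n_i</math> gives

:<math>
\frac{\partial f}{\partial n_i} = \ln(n_i+g_i)-\ln(n_i)-\alpha-\beta \varepsilon_i = 0
\quad\Longrightarrow\quad
\frac{n_i+g_i}{n_i} = e^{\alpha+\beta \varepsilon_i},
</math>

which rearranges to the population numbers above.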
 
By a process similar to that outlined in the [[Maxwell–Boltzmann statistics]] article, it can be seen that:
:<math>d\ln W=\alpha\,dN+\beta\,dE</math>
 
which, using Boltzmann's famous relationship <math>S=k\,\ln W</math> becomes a statement of the [[second law of thermodynamics]] at constant volume, and it follows that <math>\beta = \frac{1}{kT}</math> and <math>\alpha = - \frac{\mu}{kT}</math> where ''S'' is the [[entropy]], <math>\mu</math> is the [[chemical potential]], ''k'' is [[Boltzmann's constant]] and ''T'' is the [[temperature]], so that finally:
 
:<math>
n_i = \frac{g_i}{e^{(\varepsilon_i-\mu)/kT}-1}.
</math>
 
Note that the above formula is sometimes written:
 
:<math>
n_i = \frac{g_i}{e^{\varepsilon_i/kT}/z-1},
</math>
 
where
<math>\displaystyle z=\exp(\mu/kT)</math>
is the absolute [[Thermodynamic activity|activity]], as noted by McQuarrie.<ref>See McQuarrie in citations</ref>
 
Note also that when the particle numbers are not conserved, removing the particle-number conservation constraint is equivalent to setting <math>\alpha</math>, and therefore the chemical potential <math>\mu</math>, to zero. This is the case for photons and for massive particles in mutual equilibrium, and the resulting distribution is the [[Planck's law|Planck distribution]].
{{collapse bottom}}
 
{{collapse top|title=Notes}}
 
A much simpler way to think of the Bose–Einstein distribution function is to consider that '''n''' particles are denoted by identical balls and '''g''' shells are marked by '''g-1''' line partitions.
It is clear that the [[permutation]]s of these '''n balls''' and '''g-1 partitions''' will give different ways of arranging bosons in different energy levels.
 
For example, with n = 3 particles and g = 3 shells, so that g-1 = 2, the arrangements include '''|●●|●''', '''||●●●''', '''|●|●●''', and so on.
 
Hence the number of distinct permutations of n + (g-1) objects, of which n (the balls) are identical and g-1 (the partitions) are identical, is:

:<math>\frac{(n+g-1)!}{n!(g-1)!}</math>
 
'''OR'''
 
The purpose of these notes is to clarify some aspects of the derivation of the Bose–Einstein (B–E)
distribution for beginners. The enumeration of cases (or ways) in the B–E distribution can be recast as
follows.  Consider a game of dice throwing in which there are
<math>\displaystyle n</math> dice,
with each die taking values in the set
<math>\displaystyle \left\{ 1, \dots, g \right\}</math>, for <math>g \ge 1</math>. 
The constraints of the game are that the value of a die
<math>\displaystyle i</math>, denoted by <math>\displaystyle m_i</math>, has to be
'''''greater than or equal to''''' the value of die
<math>\displaystyle (i-1)</math>, denoted by
<math>\displaystyle m_{i-1}</math>, in the previous throw, i.e.,
<math>m_i \ge m_{i-1}</math>.  Thus a valid sequence of die throws can be described by an
''n''-tuple
<math>\displaystyle \left( m_1 , m_2 , \dots , m_n \right)</math>, such that <math>m_i \ge m_{i-1}</math>.  Let
<math>\displaystyle S(n,g)</math> denote the set of these valid ''n''-tuples:
 
:{| style="width:100%" border="0"
|-
| style="width:95%" |
<math>
  S(n,g) =
  \Big\{
      \left( m_1 , m_2 , \dots , m_n \right)
      \Big| \Big.
      m_i \ge m_{i-1} ,
      m_i \in \left\{ 1,  \dots, g \right\} ,
      \forall i = 1, \dots , n
  \Big\}.
</math>
| style="width:5%" | (1)
|}
 
Then the quantity <math>\displaystyle w(n,g)</math> ([[#w(n,g)|defined above]] as the number of ways to distribute
<math>\displaystyle n</math> particles among the
<math>\displaystyle g</math> sublevels of an energy level) is the cardinality of <math>\displaystyle S(n,g)</math>, i.e., the number of elements (or valid ''n''-tuples) in <math>\displaystyle S(n,g)</math>.
Thus the problem of finding an expression for
<math>\displaystyle w(n,g)</math>  
becomes the problem of counting the elements in <math>\displaystyle S(n,g)</math>.
<!--
'''Example ''n'' = 3, ''g'' = 3:'''
-->
 
'''Example ''n'' = 4, ''g'' = 3:'''
:<math>
  S(4,3) =
  \left\{
      \underbrace{(1111), (1112), (1113)}_{(a)},
      \underbrace{(1122), (1123), (1133)}_{(b)},
      \underbrace{(1222), (1223), (1233), (1333)}_{(c)},
  \right.
</math>
:::::<math>
  \left.
      \underbrace{(2222), (2223), (2233), (2333), (3333)}_{(d)}
  \right\}
</math>
:<math>\displaystyle w(4,3) = 15</math> (there are <math>\displaystyle 15</math> elements in <math>\displaystyle S(4,3)</math>)
<!--
<math>
  \displaystyle S(4,3)
</math>)
-->
 
Subset
<math>\displaystyle (a)</math>
is obtained by fixing all indices
<math>\displaystyle m_i</math> to
<math>\displaystyle 1</math>, except for the last index,
<math>\displaystyle m_n</math>, which is incremented from
<math>\displaystyle 1</math> to
<math>\displaystyle g=3</math>.
Subset
<math>\displaystyle (b)</math>
is obtained by fixing
<math>\displaystyle m_1 = m_2 = 1</math>, and incrementing
<math>\displaystyle m_3</math> from
<math>\displaystyle 2</math> to
<math>\displaystyle g=3</math>.  Due to the constraint
<math>
  \displaystyle
  m_i \ge m_{i-1}
</math>
on the indices in
<math>\displaystyle S(n,g)</math>,
the index
<math>\displaystyle m_4</math> must
automatically
take values in
<math>\displaystyle \left\{ 2, 3 \right\}</math>.
The construction of subsets
<math>\displaystyle (c)</math> and
<math>\displaystyle (d)</math>
follows in the same manner.
 
Each element of  
<math>\displaystyle S(4,3)</math> can be thought of as a
[[multiset]]
of cardinality
<math>\displaystyle n=4</math>;
the elements of such multiset are taken from the set
<math>\displaystyle \left\{ 1, 2, 3 \right\}</math>
of cardinality
<math>\displaystyle g=3</math>,
and the number of such multisets is the  
[[multiset#Multiset coefficients|multiset coefficient]]
:<math>
  \displaystyle
  \left\langle
      \begin{matrix}
3
\\
4
      \end{matrix}
  \right\rangle
  = {3 + 4 - 1 \choose 3-1}
  = {3 + 4 - 1 \choose 4}
  =
  \frac
  {6!}
  {4! 2!}
  = 15
</math>
 
More generally, each element of
<math>\displaystyle S(n,g)</math>
is a
[[multiset]]
of cardinality
<math>\displaystyle n</math>
(number of dice)
with elements taken from the set
<math>\displaystyle \left\{ 1, \dots, g \right\}</math>
of cardinality
<math>\displaystyle g</math>
(number of possible values of each die),
and the number of such multisets, i.e.,
<math>\displaystyle w(n,g)</math>
is the
[[multiset#Multiset coefficients|multiset coefficient]]
:{| style="width:100%" border="0"
|-
| style="width:95%" |
<math>
  \displaystyle
  w(n,g)
  =
  \left\langle
      \begin{matrix}
g
\\
n
      \end{matrix}
  \right\rangle
  = {g + n - 1 \choose g-1}
  = {g + n - 1 \choose n}
  =
  \frac{(g + n - 1)!}
  {n! (g-1)!}
</math>
| style="width:5%" | (2)
|}
which is exactly the same as the
[[#w(n,g)|formula]] for <math>\displaystyle w(n,g)</math>, as derived above with the aid
of
a [[#theorem|theorem]] involving binomial coefficients, namely
 
:{| style="width:100%" border="0"
|-
| style="width:95%" |
<math>
\sum_{k=0}^n\frac{(k+a)!}{k!a!}=\frac{(n+a+1)!}{n!(a+1)!}.
</math>
| style="width:5%" | (3)
|}
 
To understand the decomposition
:{| style="width:100%" border="0"
|-
| style="width:95%" |
<math>
  \displaystyle
  w(n,g)
  =
  \sum_{k=0}^{n}
  w(n-k, g-1)
  =
  w(n, g-1)
  +
  w(n-1, g-1)
  +
  \cdots
  +
  w(1, g-1)
  +
  w(0, g-1)
</math>
| style="width:5%" | (4)
|}
or for example,
<math>\displaystyle n=4</math>
and
<math>\displaystyle g=3</math>
:<math>
  \displaystyle
  w(4,3)
  =
  w(4,2)
  +
  w(3,2)
  +
  w(2,2)
  +
  w(1,2)
  +
  w(0,2),
</math>
 
let us rearrange the elements of  
<math>\displaystyle S(4,3)</math> as follows
:<math>
  S(4,3) =
  \left\{
      \underbrace{
(1111),
(1112),
(1122),
(1222),
(2222)
      }_{(\alpha)},
      \underbrace{
(111{\color{Red}\underset{=}{3}}),
(112{\color{Red}\underset{=}{3}}),
(122{\color{Red}\underset{=}{3}}),
(222{\color{Red}\underset{=}{3}})
      }_{(\beta)},
  \right.
</math>
:::::<math>
  \left.
      \underbrace{
(11{\color{Red}\underset{==}{33}}),
(12{\color{Red}\underset{==}{33}}),
(22{\color{Red}\underset{==}{33}})
      }_{(\gamma)},
      \underbrace{
(1{\color{Red}\underset{===}{333}}),
(2{\color{Red}\underset{===}{333}})
      }_{(\delta)}
      \underbrace{
({\color{Red}\underset{====}{3333}})
      }_{(\omega)}
  \right\}.
</math>
 
Clearly, the subset
<math>\displaystyle (\alpha)</math>
of  
<math>\displaystyle S(4,3)</math>
is the same as the set
:<math>
  \displaystyle
  S(4,2)
  =
  \left\{
(1111),
(1112),
(1122),
(1222),
(2222)
  \right\}
</math>.
 
By deleting the index
<math>\displaystyle m_4=3</math>
(shown in <span style="color:red;">red with double underline</span>)
in
the subset
<math>\displaystyle (\beta)</math>
of
<math>\displaystyle S(4,3)</math>,
one obtains
the set
:<math>
  \displaystyle
  S(3,2)
  =
  \left\{
(111),
(112),
(122),
(222)
  \right\}
</math>.
In other words, there is a one-to-one correspondence between the subset
<math>\displaystyle (\beta)</math>
of  
<math>\displaystyle S(4,3)</math>
and the set
<math>\displaystyle S(3,2)</math>. We write
:<math>
  \displaystyle
  (\beta)
  \longleftrightarrow
  S(3,2)
</math>.
 
Similarly, it is easy to see that
:<math>
  \displaystyle
  (\gamma)
  \longleftrightarrow
  S(2,2)
  =
  \left\{
(11),
(12),
(22)
  \right\}
</math>
:<math>
  \displaystyle
  (\delta)
  \longleftrightarrow
  S(1,2)
  =
  \left\{
(1),
(2)
  \right\}
</math>
:<math>
  \displaystyle
  (\omega)
  \longleftrightarrow
  S(0,2)
  =
  \left\{ () \right\}
</math> (the set whose only element is the empty tuple, consistent with <math>\displaystyle w(0,2)=1</math>).
 
Thus we can write
:<math>
  \displaystyle
  S(4,3)
  =
  \bigcup_{k=0}^{4}
  S(4-k,2)
</math>
 
or more generally,
:{| style="width:100%" border="0"
|-
| style="width:95%" |
<math>
  \displaystyle
  S(n,g)
  =
  \bigcup_{k=0}^{n}
  S(n-k,g-1)
</math>;
| style="width:5%" | (5)
|}
and since the sets
:<math>
  \displaystyle
  S(i,g-1) \ , \ {\rm for} \ i = 0, \dots , n
</math>
are non-intersecting, we thus have
:{| style="width:100%" border="0"
|-
| style="width:95%" |
<math>
  \displaystyle
  w(n,g)
  =
  \sum_{k=0}^{n}
  w(n-k,g-1)
</math>,
| style="width:5%" | (6)
|}
with the convention that
:{| style="width:100%" border="0"
|-
| style="width:95%" |
<math>
  \displaystyle
  w(0,g)
  =
  1 \ , \forall g
  \ ,
  {\rm and}
  \
  w(n,1)
  =
  1 \ , \forall n
</math>.
| style="width:5%" | (7)
|}
Continuing the process, we arrive at the following formula
:<math>
  \displaystyle
  w(n,g)
  =
  \sum_{k_1=0}^{n}
  \sum_{k_2=0}^{n-k_1}
  w(n - k_1 - k_2, g-2)
  =
  \sum_{k_1=0}^{n}
  \sum_{k_2=0}^{n-k_1}
  \cdots
  \sum_{k_{g-1}=0}^{n-\sum_{j=1}^{g-2} k_j}
  w\left(n - \sum_{i=1}^{g-1} k_i, 1\right).
</math>
Using the convention (7)<sub>2</sub> above, we obtain the formula
:{| style="width:100%" border="0"
|-
| style="width:95%" |
<math>
  \displaystyle
  w(n,g)
  =
  \sum_{k_1=0}^{n}
  \sum_{k_2=0}^{n-k_1}
  \cdots
  \sum_{k_{g-1}=0}^{n-\sum_{j=1}^{g-2} k_j}
  1,
</math>
| style="width:5%" | (8)
|}
 
keeping in mind that for
<math>\displaystyle q</math>
and
<math>\displaystyle p</math>
being constants, we have
:{| style="width:100%" border="0"
|-
| style="width:95%"  |
<math>
  \displaystyle
  \sum_{k=0}^{q}
  p
  =
  (q+1)\, p
</math>.
| style="width:5%" | (9)
|}
 
It can then be verified that (8) and (2) give the same result for
<math>\displaystyle w(4,3)</math>,
<math>\displaystyle w(3,3)</math>,
<math>\displaystyle w(3,2)</math>, etc.
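
Such a verification can also be carried out with a short script (an illustrative check based on a direct recursive implementation of equation (6), not part of the original notes):

<syntaxhighlight lang="python">
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def w_recursive(n, g):
    """Equation (6): w(n, g) = sum_{k=0}^{n} w(n-k, g-1), with w(n, 1) = 1."""
    if g == 1:
        return 1
    return sum(w_recursive(n - k, g - 1) for k in range(n + 1))

def w_multiset(n, g):
    """Equation (2): the multiset coefficient (n + g - 1)! / (n! (g - 1)!)."""
    return comb(n + g - 1, n)

for n, g in [(4, 3), (3, 3), (3, 2), (6, 4)]:
    assert w_recursive(n, g) == w_multiset(n, g)
    print(f"w({n},{g}) = {w_multiset(n, g)}")
</syntaxhighlight>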
 
{{collapse bottom}}
 
==Interdisciplinary applications==
 
Viewed as a pure [[probability distribution]], the Bose–Einstein distribution has found application in other fields:
 
* In recent years, Bose–Einstein statistics have also been used as a method for term weighting in [[information retrieval]]. The method is one of a collection of DFR ("Divergence From Randomness") models,<ref name=bia>Amati, G.; C. J. Van Rijsbergen (2002). "[http://dl.acm.org/citation.cfm?id=582416 Probabilistic models of information retrieval based on measuring the divergence from randomness]" ''[[ACM Transactions on Information Systems|ACM TOIS]]'' '''20 (4)''':357–389.</ref> the basic notion being that Bose–Einstein statistics may be a useful indicator in cases where a particular term and a particular document have a significant relationship that would not have occurred purely by chance. Source code for implementing this model is available from the [http://ir.dcs.gla.ac.uk/terrier/doc/dfr_description.html Terrier project] at the University of Glasgow.
 
* {{Main|Bose–Einstein condensation (network theory)}} The evolution of many complex systems, including the World Wide Web, business, and citation networks, is encoded in the dynamic web describing the interactions between the system’s constituents. Despite their irreversible and nonequilibrium nature, these networks follow Bose statistics and can undergo Bose–Einstein condensation. Addressing the dynamical properties of these nonequilibrium systems within the framework of equilibrium quantum gases predicts that the “first-mover-advantage,” “fit-get-rich ('''FGR'''),” and “winner-takes-all” phenomena observed in competitive systems are thermodynamically distinct phases of the underlying evolving networks.<ref name="bianconi">Bianconi, G.; Barabási, A.-L. (2001). "[http://prola.aps.org/abstract/PRL/v86/i24/p5632_1 Bose–Einstein Condensation in Complex Networks.]" ''[[Physical Review Letters|Phys. Rev. Lett.]]'' '''86''': 5632–35.</ref>
 
==See also==
* [[Bose–Einstein correlations]]
* [[Higgs boson]]
* [[Parastatistics]]
* [[Planck's law of black body radiation]]
* [[Superconductivity]]
 
==Notes==
{{Reflist}}
 
==References==
*{{cite book |title=Superconductivity, Superfluids and Condensates |last=Annett |first=James F. |authorlink= |coauthors= |year=2004 |publisher=Oxford University Press |location=New York |isbn=0-19-850755-0 |page= |pages= |url= }}
* Bose<!--The paper gives the name of the author as just this single word--> (1924). "Plancks Gesetz und Lichtquantenhypothese", ''Zeitschrift für Physik'' 26:178–181. [[doi:10.1007/BF01327326]] ''(Einstein's translation into German of Bose's paper on Planck's law)''.
*{{cite book |title=Classical and Statistical Thermodynamics |last=Carter |first=Ashley H. |authorlink= |coauthors= |year=2001 |publisher=Prentice Hall |location=Upper Saddle River, NJ |isbn=0-13-779208-5 |page= |pages= |url= }}
*{{cite book |title=Introduction to Quantum Mechanics |last=Griffiths |first=David J. |authorlink= |coauthors= |year=2005 |edition=2nd |publisher=Pearson, Prentice Hall |location=Upper Saddle River, NJ |isbn=0-13-191175-9 |page= |pages= |url= }}
*{{cite book |title=Statistical Mechanics |last=McQuarrie |first=Donald A. |authorlink= |coauthors= |year=2000 |edition=1st |publisher=University Science Books |location=Sausalito, CA 94965 |isbn=1-891389-15-7 |page=55 |pages= |url= }}
 
{{Statistical mechanics topics}}
{{Einstein}}
<!-- Editors: Please do not add the probability distributions template here. The Bose Einstein distribution is not a probability distribution. -->
 
{{DEFAULTSORT:Bose-Einstein statistics}}
[[Category:Bose–Einstein statistics| ]]
[[Category:Concepts in physics]]
[[Category:Quantum field theory]]
[[Category:Albert Einstein]]
[[Category:Continuous distributions]]
[[Category:Statistical mechanics]]
