{{For|the partition function in number theory|Partition (number theory)}}
The '''partition function''' or '''configuration integral''', as used in [[probability theory]], [[information science]] and [[dynamical systems]], is a generalization of the definition of a [[partition function in statistical mechanics]]. It is a special case of a [[normalizing constant]] in probability theory, for the [[Boltzmann distribution]]. The partition function occurs in many problems of probability theory because, in situations where there is a natural symmetry, its associated [[probability measure]], the [[Gibbs measure]], has the [[Markov property]]. This means that the partition function occurs not only in physical systems with translation symmetry, but also in such varied settings as neural networks (the [[Hopfield network]]), and applications such as [[genomics]], [[corpus linguistics]] and [[artificial intelligence]], which employ [[Markov network]]s, and [[Markov logic network]]s. The Gibbs measure is also the unique measure that has the property of maximizing the [[entropy (general concept)|entropy]] for a fixed expectation value of the energy; this underlies the appearance of the partition function in [[maximum entropy method]]s and the algorithms derived therefrom.  


The partition function ties together many different concepts, and thus  offers a general framework in which many different kinds of quantities may be calculated.  In particular, it shows how to calculate [[expectation value]]s and [[Green's function]]s, forming a bridge to [[Fredholm theory]]. It also provides a natural setting for the [[information geometry]] approach to [[information theory]], where  the [[Fisher information metric]] can be understood to be a [[correlation function]] derived from the partition function; it happens to define a [[Riemannian manifold]].
 
When the setting for random variables is on [[complex projective space]] or [[projective Hilbert space]], geometrized with the [[Fubini-Study metric]], the theory of [[quantum mechanics]] and more generally [[quantum field theory]] results.  In these theories, the partition function is heavily exploited in the [[path integral formulation]], with great success, leading to many formulas nearly identical to those reviewed here. However, because the underlying measure space is complex-valued, as opposed to the real-valued [[simplex]] of probability theory, an extra factor of ''i'' appears in many formulas. Tracking this factor is troublesome, and is not done here. This article focuses primarily on classical probability theory, where the sum of probabilities totals to one.
 
==Definition==
Given a set of [[random variables]] <math>X_i</math> taking on values <math>x_i</math>, and some sort of [[potential function]] or [[Hamiltonian function|Hamiltonian]]  <math>H(x_1,x_2,\dots)</math>, the partition function is defined as
 
:<math>Z(\beta) = \sum_{x_i} \exp \left(-\beta H(x_1,x_2,\dots) \right)</math>
 
The function ''H'' is understood to be a real-valued function on the space of states <math>\{X_1,X_2,\cdots\}</math>, while <math>\beta</math> is a real-valued free parameter (conventionally, the [[inverse temperature]]). The sum over the <math>x_i</math> is understood to be a sum over all possible values that each of the random variables <math>X_i</math> may take. The sum is to be replaced by an [[integral]] when the <math>X_i</math> are continuous, rather than discrete; in that case, one writes
 
:<math>Z(\beta) = \int \exp \left(-\beta H(x_1,x_2,\dots) \right) dx_1 dx_2 \cdots</math>
 
for the case of continuously-varying <math>X_i</math>.
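
For a small discrete system, the defining sum can be evaluated directly. The following is a minimal numerical sketch, not part of the standard exposition: it enumerates every configuration of a hypothetical three-spin nearest-neighbour chain (the coupling <code>J</code> and the value of <code>beta</code> are arbitrary choices for illustration) and accumulates the Boltzmann weights.

<syntaxhighlight lang="python">
import itertools
import math

def hamiltonian(spins, J=1.0):
    """Nearest-neighbour coupling: H = -J * sum_i s_i * s_{i+1}."""
    return -J * sum(s1 * s2 for s1, s2 in zip(spins, spins[1:]))

def partition_function(n_spins, beta):
    """Z(beta) = sum over all configurations of exp(-beta * H)."""
    return sum(
        math.exp(-beta * hamiltonian(spins))
        for spins in itertools.product([-1, +1], repeat=n_spins)
    )

print(partition_function(3, beta=0.5))
</syntaxhighlight>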
 
When ''H'' is an [[observable]], such as a finite-dimensional [[matrix (mathematics)|matrix]] or an infinite-dimensional [[Hilbert space]] [[operator (mathematics)|operator]] or element of a [[C-star algebra]], it is common to express the summation as a [[trace (linear algebra)|trace]], so that
 
:<math>Z(\beta) = \mbox{tr}\left(\exp\left(-\beta H\right)\right)</math>
 
When ''H'' is infinite-dimensional, then, for the above notation to be valid, the argument must be [[trace class]], that is, of a form such that the summation exists and is bounded.
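
For a finite-dimensional ''H'' the trace formula can be checked directly. The sketch below (an illustration only; the matrix and <code>beta</code> are arbitrary) compares <math>\mbox{tr}\left(\exp\left(-\beta H\right)\right)</math> computed via the matrix exponential with the equivalent sum of <math>e^{-\beta\lambda}</math> over the eigenvalues <math>\lambda</math>.

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

beta = 0.7
# An arbitrary real symmetric (hence Hermitian) matrix.
H = np.array([[1.0, 0.5, 0.0],
              [0.5, 2.0, 0.3],
              [0.0, 0.3, 0.5]])

Z_trace = np.trace(expm(-beta * H))
Z_eigen = np.sum(np.exp(-beta * np.linalg.eigvalsh(H)))
print(Z_trace, Z_eigen)  # the two values agree
</syntaxhighlight>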
 
The number of variables <math>X_i</math> need not be [[countable]], in which case the sums are to be replaced by [[functional integral]]s. Although there are many notations for functional integrals, a common one would be
 
:<math>Z = \int \mathcal{D} \phi \exp \left(- \beta H[\phi] \right)</math>
 
Such is the case for the [[partition function in quantum field theory]].
 
A common, useful modification to the partition function is to introduce auxiliary functions. This allows, for example, the partition function to be used as a [[generating function]] for [[correlation function]]s. This is discussed in greater detail below.
 
==The parameter β==
 
The role or meaning of the parameter <math>\beta</math> can be understood in a variety of different ways. In classical thermodynamics, it is an [[inverse temperature]]. More generally, one would say that it is the variable that is [[Conjugate variables (thermodynamics)|conjugate]] to some (arbitrary) function <math>H</math> of the random variables <math>X</math>. The word ''conjugate'' here is used in the sense of conjugate [[generalized coordinates]] in [[Lagrangian mechanics]]; thus, properly speaking, <math>\beta</math> is a [[Lagrange multiplier]]. It is sometimes called the [[generalized force]]. All of these concepts have in common the idea that one value is meant to be kept fixed, as others, interconnected in some complicated way, are allowed to vary. In the current case, the value to be kept fixed is the [[expectation value]] of <math>H</math>, even as many different [[probability distribution]]s can give rise to exactly this same (fixed) value.
 
For the general case, one considers a set of functions <math>\{H_k(x_1,\cdots)\}</math> that each depend on the random variables <math>X_i</math>. These functions are chosen because one wants to hold their expectation values constant, for one reason or another. To constrain the expectation values in this way, one applies the method of [[Lagrange multiplier]]s.  In the general case, [[maximum entropy method]]s illustrate the manner in which this is done. 
 
Some specific examples are in order. In basic thermodynamics problems, when using the [[canonical ensemble]], the use of just one parameter <math>\beta</math> reflects the fact that there is only one expectation value that must be held constant: the average [[energy]] (due to [[conservation of energy]]). For chemistry problems involving chemical reactions, the [[grand canonical ensemble]] provides the appropriate foundation, and there are two Lagrange multipliers. One is to hold the energy constant, and another, the [[fugacity]], is to hold the particle count constant (as chemical reactions involve the recombination of a fixed number of atoms).
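
The constraint picture can be made concrete numerically: given a target expectation value for <math>H</math>, one solves for the multiplier <math>\beta</math> that achieves it. Below is a minimal sketch on a hypothetical three-level system (the energy levels, target value, and bracketing interval are arbitrary choices), using root finding.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import brentq

levels = np.array([0.0, 1.0, 3.0])  # toy energy levels

def avg_energy(beta):
    """Expectation value of H under the Gibbs distribution at inverse temperature beta."""
    w = np.exp(-beta * levels)
    return (levels * w).sum() / w.sum()

target = 0.8  # the value of <H> to hold fixed
beta = brentq(lambda b: avg_energy(b) - target, 1e-6, 50.0)
print(beta, avg_energy(beta))  # avg_energy(beta) equals the target
</syntaxhighlight>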
 
For the general case, one has
 
:<math>Z(\beta) = \sum_{x_i} \exp \left(-\sum_k\beta_k H_k(x_i) \right)</math>
 
with <math>\beta=(\beta_1, \beta_2,\cdots)</math> a point in a space.
 
For a collection of observables <math>H_k</math>, one would write
 
:<math>Z(\beta) = \mbox{tr}\left[\,\exp \left(-\sum_k\beta_k H_k\right)\right]</math>
 
As before, it is presumed that the argument of tr is [[trace class]].
 
The corresponding [[Gibbs measure]] then provides a probability distribution such that the expectation value of each <math>H_k</math> is a fixed value. More precisely, one has
 
:<math>\frac{\partial}{\partial \beta_k} \left(- \log Z \right) = \langle H_k\rangle = \mathrm{E}\left[H_k\right]</math>
 
with the angle brackets <math>\langle H_k \rangle</math> denoting the expected value of <math>H_k</math>, and <math>\mathrm{E}[\;]</math> being a common alternative notation.  A precise definition of this expectation value is given below.
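
This identity lends itself to a direct numerical check. The sketch below (hypothetical data: two observables on a four-point state space, with arbitrary <math>\beta_k</math>) compares a finite-difference derivative of <math>-\log Z</math> with the expectation values computed from the Gibbs probabilities.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(2, 4))  # H[k, x]: value of observable H_k at state x
beta = np.array([0.4, 0.9])

def logZ(b):
    return np.log(np.sum(np.exp(-b @ H)))

p = np.exp(-beta @ H)
p /= p.sum()  # Gibbs probabilities

eps = 1e-6
for k in range(2):
    db = np.zeros(2)
    db[k] = eps
    deriv = -(logZ(beta + db) - logZ(beta - db)) / (2 * eps)
    print(deriv, H[k] @ p)  # the two numbers agree for each k
</syntaxhighlight>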
 
Although the value of <math>\beta</math> is commonly taken to be real, it need not be, in general; this is discussed in the section [[#Normalization|Normalization]] below. The values of <math>\beta</math> can be understood to be the coordinates of points in a space; this space is in fact a [[manifold]], as sketched below.  The study of these spaces as manifolds constitutes the field of [[information geometry]].
 
== Symmetry ==
The potential function itself commonly takes the form of a sum:
 
:<math>H(x_1,x_2,\dots) = \sum_s V(s)\,</math>
 
where the sum over ''s'' is a sum over some subset of the [[power set]] ''P''(''X'') of the set <math>X=\lbrace x_1,x_2,\dots \rbrace</math>.  For example, in [[statistical mechanics]], such as the [[Ising model]], the sum is over pairs of nearest neighbors. In probability theory, such as [[Markov networks]], the sum might be over the [[clique (graph theory)|cliques]] of a graph; so, for the Ising model and other [[lattice model (physics)|lattice models]], the maximal cliques are edges. 
 
The fact that the potential function can be written as a sum usually reflects the fact that it is invariant under the [[group action|action]] of a [[group (mathematics)|group symmetry]], such as [[translational invariance]]. Such symmetries can be discrete or continuous; they materialize in the [[correlation function]]s for the random variables (discussed below). Thus a symmetry in the Hamiltonian becomes a symmetry of the correlation function (and vice-versa).
 
This symmetry has a critically important interpretation in probability theory: it implies that the [[Gibbs measure]] has the [[Markov property]]; that is, it is independent of the random variables in a certain way, or, equivalently, the measure is identical on the [[equivalence class]]es of the symmetry.  This leads to the widespread appearance of the partition function in problems with the Markov property, such as [[Hopfield network]]s.
 
==As a measure==
The value of the expression
:<math>\exp \left(-\beta H(x_1,x_2,\dots) \right)</math>
 
can be interpreted as a likelihood that a specific [[configuration space|configuration]] of values <math>(x_1,x_2,\dots)</math> occurs in the system. Thus, given a specific configuration <math>(x_1,x_2,\dots)</math>,  
 
:<math>P(x_1,x_2,\dots) = \frac{1}{Z(\beta)} \exp \left(-\beta H(x_1,x_2,\dots) \right)</math>
 
is the [[probability density function|probability]] of the configuration <math>(x_1,x_2,\dots)</math> occurring in the system, which is now properly normalized so that <math>0\le P(x_1,x_2,\dots)\le 1</math>, and such that the sum over all configurations totals to one.  As such, the partition function can be understood to provide a [[measure (mathematics)|measure]] (a [[probability measure]]) on the [[probability space]]; formally, it is  called the [[Gibbs measure]].  It generalizes the narrower concepts of the [[grand canonical ensemble]] and [[canonical ensemble]] in statistical mechanics.
 
There exists at least one configuration <math>(x_1,x_2,\dots)</math> for which the probability is maximized; this configuration is conventionally called the [[ground state]]. If the configuration is unique, the ground state is said to be '''non-degenerate''', and the system is said to be [[ergodic]]; otherwise the ground state is '''degenerate'''.  The ground state may or may not commute with the generators of the symmetry; if it commutes, it is said to be an [[invariant measure]]. When it does not commute, the symmetry is said to be [[spontaneously broken]].
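
Both the normalization and the ground state are easy to exhibit on a small example. The sketch below (a hypothetical three-spin ferromagnetic chain, values arbitrary) builds the Gibbs probabilities, confirms they sum to one, and locates a maximum-probability configuration; for this Hamiltonian the ground state is doubly degenerate (all spins up or all spins down).

<syntaxhighlight lang="python">
import itertools
import numpy as np

beta = 0.5
states = list(itertools.product([-1, +1], repeat=3))
energies = np.array([-sum(a * b for a, b in zip(s, s[1:])) for s in states])

weights = np.exp(-beta * energies)
probs = weights / weights.sum()  # the Gibbs measure

print(probs.sum())                    # 1.0: properly normalized
print(states[int(np.argmax(probs))])  # one of the two degenerate ground states
</syntaxhighlight>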
 
Conditions under which a ground state exists and is unique are given by the [[Karush–Kuhn–Tucker conditions]]; these conditions are commonly used to justify the use of the Gibbs measure in maximum-entropy problems.{{Citation needed|date=June 2013}}
 
==Normalization==
The values taken by <math>\beta</math> depend on the [[mathematical space]] over which the random field varies. Thus, real-valued random fields take values on a [[simplex]]: this is the geometrical way of saying that the sum of probabilities must total to one.  For quantum mechanics, the random variables range over [[complex projective space]] (or complex-valued [[projective Hilbert space]]), where the random variables are interpreted as [[probability amplitude]]s. The emphasis here is on the word ''projective'', as the amplitudes are still normalized to one. The normalization for the potential function is the [[Jacobian matrix and determinant|Jacobian]] for the appropriate mathematical space: it is 1 for ordinary probabilities, and ''i'' for Hilbert space; thus, in [[quantum field theory]], one sees <math>it H</math> in the exponential, rather than <math>\beta H</math>.  The partition function is very heavily exploited in the [[path integral formulation]] of quantum field theory, to great effect. The theory there is very nearly identical to that presented here, aside from this difference, and the fact that it is usually formulated on four-dimensional space-time, rather than in a general way.
 
==Expectation values==
The partition function is commonly used as a [[generating function]] for [[expectation value]]s of various functions of the random variables.  So, for example, taking <math>\beta</math> as an adjustable parameter, the derivative of <math>\log(Z(\beta))</math> with respect to <math>\beta</math>
 
:<math>\bold{E}[H] = \langle H \rangle = -\frac {\partial \log(Z(\beta))} {\partial \beta}</math>
 
gives the average (expectation value) of ''H''. In physics, this would be called the average [[energy]] of the system.  
 
Given the definition of the probability measure above, the expectation value of any function ''f'' of the random variables ''X'' may now be written as expected: so, for discrete-valued ''X'', one writes
:<math>\begin{align}
\langle f\rangle
& = \sum_{x_i} f(x_1,x_2,\dots) P(x_1,x_2,\dots) \\
& = \frac{1}{Z(\beta)} \sum_{x_i} f(x_1,x_2,\dots) \exp \left(-\beta H(x_1,x_2,\dots) \right)
\end{align}
</math>
 
The above notation is strictly correct for a finite number of discrete random variables, but should be seen to be somewhat 'informal' for continuous variables; properly, the summations above should be replaced with the notations of the underlying [[sigma algebra]] used to define a [[probability space]].  That said, the identities continue to hold, when properly formulated on a [[measure space]].
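
As a minimal sketch (toy energies and an arbitrary function ''f'', chosen only for illustration), the discrete formula can be evaluated directly and, for <math>f = H</math>, cross-checked against the derivative of <math>\log Z</math>:

<syntaxhighlight lang="python">
import numpy as np

energies = np.array([0.0, 1.0, 1.0, 3.0])   # toy H over four states
f_values = np.array([2.0, -1.0, 0.5, 4.0])  # an arbitrary f over the same states
beta = 0.8

def logZ(b):
    return np.log(np.sum(np.exp(-b * energies)))

p = np.exp(-beta * energies)
p /= p.sum()

print(f_values @ p)  # <f> as a probability-weighted sum
eps = 1e-6
print(energies @ p, -(logZ(beta + eps) - logZ(beta - eps)) / (2 * eps))  # <H> both ways
</syntaxhighlight>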
 
Thus, for example, the [[entropy (general concept)|entropy]] is given by
 
:<math>\begin{align} S
& = -k_B \langle\ln P\rangle \\
& = -k_B\sum_{x_i} P(x_1,x_2,\dots) \ln P(x_1,x_2,\dots) \\
& = k_B(\beta \langle H\rangle + \log Z(\beta))
\end{align}
</math>
 
The Gibbs measure is the unique statistical distribution that maximizes the entropy for a fixed expectation value of the energy; this underlies its use in [[maximum entropy method]]s.
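
The two expressions for the entropy can be checked against each other numerically. A minimal sketch, with <math>k_B = 1</math> and the same toy system as above:

<syntaxhighlight lang="python">
import numpy as np

energies = np.array([0.0, 1.0, 1.0, 3.0])  # toy H; units with k_B = 1
beta = 0.8

w = np.exp(-beta * energies)
Z = w.sum()
p = w / Z

S_direct = -np.sum(p * np.log(p))              # -<ln P>
S_formula = beta * (energies @ p) + np.log(Z)  # beta*<H> + log Z
print(S_direct, S_formula)  # the two agree
</syntaxhighlight>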
 
== Information geometry ==
The points <math>\beta</math> can be understood to form a space, and specifically, a [[manifold]]. Thus, it is reasonable to ask about the structure of this manifold; this is the task of [[information geometry]].
 
Taking multiple derivatives with respect to the Lagrange multipliers gives rise to a positive semi-definite [[covariance matrix]]
:<math>g_{ij}(\beta) = \frac{\partial^2}{\partial \beta^i\partial \beta^j} \log Z(\beta) =
\langle \left(H_i-\langle H_i\rangle\right)\left( H_j-\langle H_j\rangle\right)\rangle</math>
This matrix is positive semi-definite, and may be interpreted as a [[metric tensor]], specifically, a [[Riemannian metric]].  Equipping the space of Lagrange multipliers with a metric in this way turns it into a [[Riemannian manifold]].<ref>Gavin E. Crooks, "Measuring thermodynamic length" (2007), [http://arxiv.org/abs/0706.0559 arXiv:0706.0559]</ref>  The study of such manifolds is referred to as [[information geometry]]; the metric above is the [[Fisher information metric]]. Here, <math>\beta</math> serves as a coordinate on the manifold.  It is interesting to compare the above definition to the simpler [[Fisher information]], from which it is inspired.
 
That the above defines the Fisher information metric can be readily seen by explicitly substituting for the expectation value:
:<math>\begin{align} g_{ij}(\beta)
& = \langle \left(H_i-\langle H_i\rangle\right)\left( H_j-\langle H_j\rangle\right)\rangle \\
& = \sum_{x} P(x) \left(H_i-\langle H_i\rangle\right)\left( H_j-\langle H_j\rangle\right) \\
& = \sum_{x} P(x)
\left(H_i + \frac{\partial\log Z}{\partial \beta_i}\right)
\left(H_j + \frac{\partial\log Z}{\partial \beta_j}\right)
\\
& = \sum_{x} P(x)
\frac{\partial \log P(x)}{\partial \beta^i}
\frac{\partial \log P(x)}{\partial \beta^j} \\
\end{align}
</math>
 
where <math>P(x)</math> has been written for <math>P(x_1,x_2,\dots)</math>, and the summation is understood to be over all values of all random variables <math>X_k</math>. For continuous-valued random variables, the summations are replaced by integrals.
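
The equality of the covariance form and the second derivative of <math>\log Z</math> can also be verified numerically. The sketch below (hypothetical observables on a five-point state space, values arbitrary) computes the covariance directly and compares it with a finite-difference mixed second derivative.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(2, 5))  # H[k, x]: observable H_k at state x
beta = np.array([0.3, 0.7])

def logZ(b):
    return np.log(np.sum(np.exp(-b @ H)))

p = np.exp(-beta @ H)
p /= p.sum()
means = H @ p
centered = H - means[:, None]
cov = (centered * p) @ centered.T  # <(H_i - <H_i>)(H_j - <H_j>)>

eps = 1e-3
g = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        ei = np.eye(2)[i] * eps
        ej = np.eye(2)[j] * eps
        g[i, j] = (logZ(beta + ei + ej) - logZ(beta + ei - ej)
                   - logZ(beta - ei + ej) + logZ(beta - ei - ej)) / (4 * eps**2)

print(np.allclose(g, cov, atol=1e-5))  # second derivative of log Z matches the covariance
</syntaxhighlight>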
 
Curiously, the [[Fisher information metric]] can also be understood as the flat-space [[Euclidean metric]], after appropriate change of variables, as described in the main article on it.  When the <math>\beta</math> are complex-valued, the resulting metric is the [[Fubini-Study metric]].  When written in terms of [[mixed state (physics)|mixed states]], instead of [[pure state]]s, it is known as the [[Bures metric]].
 
== Correlation functions==
By introducing artificial auxiliary functions <math>J_k</math> into the partition function, one can use it to obtain the expectation values of the random variables.  Thus, for example, by writing
 
:<math>\begin{align} Z(\beta,J)
& = Z(\beta,J_1,J_2,\dots) \\
& = \sum_{x_i} \exp \left(-\beta H(x_1,x_2,\dots) +
\sum_n J_n x_n
\right)
\end{align}
</math>
 
one then has
:<math>\bold{E}[x_k] = \langle x_k \rangle = \left.
\frac{\partial}{\partial J_k}
\log Z(\beta,J)\right|_{J=0}
</math>
 
as the expectation value of <math>x_k</math>. In the [[path integral formulation]] of [[quantum field theory]], these auxiliary functions are commonly referred to as [[source field]]s.
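
A sketch of this on the three-spin chain used earlier (values arbitrary): append the <math>\textstyle\sum_n J_n x_n</math> term, differentiate <math>\log Z</math> at <math>J=0</math> by finite differences, and compare with the directly computed averages. By the up-down symmetry of this particular Hamiltonian, both vanish.

<syntaxhighlight lang="python">
import itertools
import numpy as np

beta = 0.5
states = np.array(list(itertools.product([-1, +1], repeat=3)))
energies = np.array([-(s[:-1] @ s[1:]) for s in states])  # nearest-neighbour chain

def logZ(J):
    return np.log(np.sum(np.exp(-beta * energies + states @ J)))

p = np.exp(-beta * energies)
p /= p.sum()

eps = 1e-6
for k in range(3):
    J = np.zeros(3)
    J[k] = eps
    deriv = (logZ(J) - logZ(-J)) / (2 * eps)
    print(deriv, states[:, k] @ p)  # both are zero here, by symmetry
</syntaxhighlight>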
 
Multiple differentiations lead to the [[Ursell function|connected correlation function]]s of the random variables. Thus the correlation function <math>C(x_j,x_k)</math> between variables <math>x_j</math> and <math>x_k</math> is given by:
 
:<math>C(x_j,x_k) = \left.
\frac{\partial}{\partial J_j}
\frac{\partial}{\partial J_k}
\log Z(\beta,J)\right|_{J=0}
</math>
 
For the case where ''H'' can be written as a [[quadratic form]] involving a [[differential operator]], that is, as
 
:<math>H = \frac{1}{2} \sum_n x_n D x_n</math>
 
then the correlation function <math>C(x_j,x_k)</math> can be understood to be the [[Green's function]] for the differential operator (and generally giving rise to [[Fredholm theory]]).  In the quantum field theory setting, such functions are referred to as [[propagator]]s; higher order correlators are called n-point functions; working with them defines the [[effective action]] of a theory.
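
In finite dimensions the claim can be illustrated with ''D'' an ordinary positive-definite matrix, for which the Gaussian integral gives <math>C = (\beta D)^{-1}</math>, the matrix analogue of the Green's function. A sketch (matrix, grid, and <code>beta</code> chosen arbitrarily) that brute-forces the integral on a grid and differentiates:

<syntaxhighlight lang="python">
import numpy as np

beta = 1.3
D = np.array([[2.0, 0.4],
              [0.4, 1.0]])  # a positive-definite "operator"

# Brute-force the two-variable Gaussian integral on a grid.
grid = np.linspace(-6.0, 6.0, 241)
dx = grid[1] - grid[0]
X = np.array(np.meshgrid(grid, grid)).reshape(2, -1)  # shape (2, N): all grid points

def logZ(J):
    H = 0.5 * np.einsum('in,ij,jn->n', X, D, X)  # (1/2) x^T D x at each point
    return np.log(np.sum(np.exp(-beta * H + J @ X)) * dx**2)

eps = 1e-3
C = np.zeros((2, 2))
for j in range(2):
    for k in range(2):
        ej = np.eye(2)[j] * eps
        ek = np.eye(2)[k] * eps
        C[j, k] = (logZ(ej + ek) - logZ(ej - ek)
                   - logZ(-ej + ek) + logZ(-ej - ek)) / (4 * eps**2)

print(C)
print(np.linalg.inv(beta * D))  # matches C, the connected two-point function
</syntaxhighlight>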
 
==General properties==
Partition functions are used to discuss [[critical scaling]] and [[universality (dynamical systems)|universality]], and are subject to the [[renormalization group]].
 
==See also==
* [[Exponential family]]
* [[Partition function (statistical mechanics)]]
==References==
{{reflist}}
 
[[Category:Partition functions| ]]
[[Category:Entropy and information]]
