In [[statistics]], '''importance sampling''' is a general technique for estimating properties of a particular [[probability distribution|distribution]], using only samples generated from a distribution different from the distribution of interest. It is related to [[umbrella sampling]] in [[computational physics]]. Depending on the application, the term may refer to the process of sampling from this alternative distribution, the process of inference, or both.
 
== Basic theory ==
Let <math>X:\Omega\to \mathbb{R}</math> be a [[random variable]] in some [[probability space]] <math>(\Omega,\mathcal{F},P)</math>. We wish to estimate the [[expected value]] of ''X'' under ''P''. If we have random samples <math>x_1, \ldots, x_n</math>, generated according to ''P'', then an empirical estimate of '''E'''[''X;P''] is
 
: <math>
  \hat{\mathbf{E}}_{n}[X;P] = \frac{1}{n} \sum_{i=1}^n x_i.
</math>
 
The basic idea of importance sampling is to change the probability ''P'' so that the estimation of '''E'''[''X;P''] is easier. Choose a random variable <math>L\geq 0</math> such that '''E'''''[L;P]=1'' and such that <math>L(\omega)\neq 0</math> ''P''-[[almost everywhere]]. The variate ''L'' defines another probability <math>P^{(L)}=L\, P</math> that satisfies
: <math>
  \mathbf{E}[X;P] = \mathbf{E}\left[\frac{X}{L};P^{(L)}\right].
</math>
 
The variable ''X/L'' will thus be sampled under ''P<sup>(L)</sup>'' to estimate <math> \mathbf{E}[X;P]</math> as above. This procedure will improve the estimation when <math>\operatorname{var}\left[\frac{X}{L};P^{(L)}\right] < \operatorname{var}[X;P]</math>. Another case of interest is when ''X/L'' is easier to sample under ''P<sup>(L)</sup>'' than ''X'' under ''P''.
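A minimal sketch of this change of measure in Python with NumPy; the target ''P'' = N(0,1), the variate ''X''(ω) = ω⁴, and the proposal ''P''<sup>(''L'')</sup> = N(0,2²) are hypothetical choices made here for illustration, not prescribed by the text above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def normal_pdf(x, sigma=1.0):
    """Density of N(0, sigma^2)."""
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

X = lambda w: w**4        # E[X; P] = 3 when P = N(0, 1)

# Naive estimate: sample directly under P = N(0, 1).
w_p = rng.standard_normal(n)
naive = X(w_p).mean()

# Importance sampling: choose P^(L) = N(0, 2^2), so L is the density
# ratio q/p (note E[L; P] = 1), and average X/L under P^(L).
sigma = 2.0
w_q = sigma * rng.standard_normal(n)
L = normal_pdf(w_q, sigma) / normal_pdf(w_q, 1.0)
is_est = (X(w_q) / L).mean()
```

Both estimates converge to '''E'''[ω⁴] = 3, but the heavier-tailed proposal reduces the per-sample variance (from roughly 96 to roughly 8 in this configuration), because ''L'' concentrates samples where ω⁴ contributes most.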
 
When ''X'' is of constant sign over Ω, the best variable ''L'' would clearly be <math>L^*=\frac{X}{\mathbf{E}[X;P]}\geq 0</math>, so that ''X/L*'' is the sought constant '''E'''''[X;P]'' and a single sample under ''P<sup>(L*)</sup>'' suffices to give its value. Unfortunately we cannot make that choice, because '''E'''''[X;P]'' is precisely the value we are looking for! However, this theoretical best case ''L*'' gives us an insight into what importance sampling does:
 
: <math>
\begin{align}\forall a\in\mathbb{R}, \; P^{(L^*)}(X\in[a;a+da]) &= \int_{\omega\in\{X\in[a;a+da]\}} \frac{X(\omega)}{E[X;P]}dP(\omega) \\ &= \frac{1}{E[X;P]}\; a\,P(X\in[a;a+da])
\end{align}</math>
On the right-hand side, <math>a\,P(X\in[a;a+da])</math> is one of the infinitesimal elements that sum up to '''E'''''[X;P]'':
 
: <math>E[X;P] = \int_{a=-\infty}^{+\infty} a\,P(X\in[a;a+da]) </math>
therefore, '''a good probability change ''P<sup>(L)</sup>'' in importance sampling will redistribute the law of ''X'' so that its samples' frequencies are sorted directly according to their weights in''' '''E'''''[X;P]''. Hence the name "importance sampling."
 
Note that when <math>P</math> is the uniform distribution on an interval <math>\Omega\subset\mathbb{R}</math>, we are just estimating (up to the normalizing constant) the integral of the real function <math>X:\Omega\to\mathbb{R}</math>, so the method can also be used for estimating simple integrals.
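A sketch of that special case, assuming for illustration the target integral ∫₀¹ 3''x''² d''x'' = 1 and the proposal density ''q''(''x'') = 2''x'' (both hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Target: I = ∫₀¹ 3x² dx = 1, i.e. E[3U²] for U uniform on [0, 1].
u = rng.random(n)
plain = np.mean(3 * u**2)

# Importance sampling with proposal density q(x) = 2x on [0, 1],
# sampled by inverse transform (x = √U); each draw is weighted by 1/q(x).
x = np.sqrt(rng.random(n))
weighted = np.mean(3 * x**2 / (2 * x))
```

The weighted estimator has smaller variance here because the proposal places more samples where the integrand 3''x''² is large.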
 
<!-- There are two main applications of importance sampling methods which, naturally, are related.  The field of probabilistic inference focuses more on the estimation of <math>p</math> or related statistics, while the field of simulation focuses more on the tradeoffs involved with choices of <math>q</math>.  Nevertheless, the basic theory and tools are identical. -->
 
== Application to probabilistic inference ==
 
Such methods are frequently used to estimate posterior densities or expectations in state and/or parameter estimation problems in probabilistic models that are too hard to treat analytically, for example in [[Bayesian network]]s.
 
== Application to simulation ==
'''Importance sampling''' is a [[variance reduction]] technique that can be used in the [[Monte Carlo method]]. The idea behind importance sampling is that certain values of the input [[random variables]] in a [[simulation]] have more impact on the parameter being estimated than others. If these "important" values are emphasized by sampling more frequently, then the [[estimator]] variance can be reduced. Hence, the basic methodology in importance sampling is to choose a distribution which "encourages" the important values. This use of "biased" distributions will result in a biased estimator if it is applied directly in the simulation. However, the simulation outputs are weighted to correct for the use of the biased distribution, and this ensures that the new importance sampling estimator is unbiased. The weight is given by the [[likelihood ratio]], that is, the [[Radon–Nikodym derivative]] of the true underlying distribution with respect to the biased simulation distribution.  
 
The fundamental issue in implementing importance sampling simulation is the choice of the biased distribution which encourages the important regions of the input variables. Choosing or designing a good biased distribution is the "art" of importance sampling. The rewards for a good distribution can be huge run-time savings; the penalty for a bad distribution can be longer run times than for a general Monte Carlo simulation without importance sampling.
 
=== Mathematical approach ===
 
Consider estimating by simulation the probability <math>p_t\,</math> of an event <math>\{ X \ge t \}</math>, where <math>X</math> is a random variable with [[probability distribution|distribution]] <math>F</math> and [[probability density function]] <math>f(x)= F'(x)\,</math>, where prime denotes [[derivative]]. A <math>K</math>-length [[independent and identically distributed]] (i.i.d.) sequence <math>X_i\,</math> is generated from the distribution <math>F</math>, and the number <math>k_t</math> of random variables that lie above the threshold <math>t</math> is counted. The random variable <math>k_t</math> is characterized by the [[binomial distribution]]
 
:<math>P(k_t = k)={K\choose k}p_t^k(1-p_t)^{K-k},\,\quad \quad k=0,1,\dots,K.</math>
 
One can show that <math>\operatorname{E} [k_t/K] = p_t</math> and <math>\operatorname{var} [k_t/K] = p_t(1-p_t)/K</math>, so in the limit <math>K \to \infty</math> we recover <math>p_t</math>. Note, however, that for a rare event (small <math>p_t</math>) the relative error of this estimate, <math>\sqrt{\operatorname{var}[k_t/K]}/p_t = \sqrt{(1-p_t)/(K p_t)}</math>, is large, so a very long sequence is required. Importance sampling is concerned with the determination and use of an alternate density function <math>f_*\,</math> (for <math>X</math>), usually referred to as a biasing density, for the simulation experiment. This density allows the event <math>\{ X \ge t \}</math> to occur more frequently, so the required sequence length <math>K</math> becomes smaller for a given [[estimator]] variance. Alternatively, for a given <math>K</math>, use of the biasing density results in a variance smaller than that of the conventional Monte Carlo estimate. From the definition of <math>p_t\,</math>, we can introduce <math>f_*\,</math> as below.
 
:<math>
\begin{align}
p_t & {} = {E} [1(X \ge t)] \\
& {} = \int 1(x \ge t) \frac{f(x)}{f_*(x)} f_*(x) \,dx \\
& {} = {E_*} [1(X \ge t) W(X)]
\end{align}
</math>
 
where
 
:<math>W(\cdot) \equiv \frac{f(\cdot)}{f_*(\cdot)} </math>
 
is a likelihood ratio and is referred to as the weighting function. The last equality in the above equation motivates the estimator
 
:<math> \hat p_t = \frac{1}{K}\,\sum_{i=1}^K 1(X_i \ge t) W(X_i),\,\quad \quad X_i \sim  f_*</math>
 
This is the importance sampling estimator of <math>p_t\,</math> and is unbiased. That is, the estimation procedure is to generate i.i.d. samples from <math>f_*\,</math> and for each sample which exceeds <math>t\,</math>, the estimate is incremented by the weight <math>W\,</math> evaluated at the sample value. The results are averaged over <math>K\,</math> trials. The variance of the importance sampling estimator is easily shown to be
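A sketch of this estimator in Python, taking as illustrative (hypothetical) choices <math>f</math> the standard normal density, <math>t = 3</math>, and the shifted-exponential biasing density <math>f_*(x) = t\,e^{-t(x-t)}</math> on <math>[t,\infty)</math>, one common choice for Gaussian tails rather than the only one:

```python
import numpy as np

rng = np.random.default_rng(2)
K, t = 100_000, 3.0

def f(x):                          # true density: standard normal
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Crude Monte Carlo: fraction of K samples exceeding t.
crude = np.mean(rng.standard_normal(K) >= t)

# Biasing density f_*(x) = t·exp(-t(x - t)) on [t, ∞): every draw
# already exceeds t, and each is weighted by W(x) = f(x)/f_*(x).
x = t + rng.exponential(1 / t, K)
w = f(x) / (t * np.exp(-t * (x - t)))
p_hat = np.mean((x >= t) * w)      # the indicator is identically 1 here
```

Both estimators target <math>1-\Phi(3)\approx 1.35\times 10^{-3}</math>, but at this <math>K</math> the importance sampling estimate is far more precise than the crude count.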
 
:<math>
\begin{align}
\operatorname{var}_*\hat p_t & {} =  \frac{1}{K}\operatorname{var}_* [1(X \ge t)W(X)] \\
& {} = \frac{1}{K}\left\{{E_*}[1(X \ge t)^2 W^2(X)] - p_t^2\right\} \\
& {} = \frac{1}{K}\left\{{E}[1(X \ge t) W(X)] - p_t^2\right\}
\end{align}
</math>
 
The importance sampling problem then focuses on finding a biasing density <math>f_*\,</math> such that the variance of the importance sampling estimator is less than the variance of the general Monte Carlo estimate. A biasing density that minimizes the variance (and, under certain conditions, reduces it to zero) is called an optimal biasing density.
 
=== Conventional biasing methods ===
 
Although there are many kinds of biasing methods, the following two methods are most widely used in the applications of importance sampling.
 
==== Scaling ====
 
Shifting probability mass into the event region <math>\{ X \ge t \}</math> by scaling the random variable <math>X\,</math> by a number greater than unity increases both the mean and the variance of the density. The resulting heavier tail raises the event probability. Scaling is probably one of the earliest biasing methods known and has been extensively used in practice. It is simple to implement and usually provides conservative simulation gains compared to other methods.
 
In importance sampling by scaling, the simulation density is chosen as the density function of the scaled random variable <math>aX\,</math>, where usually <math>a>1</math> for tail probability estimation. By transformation,
 
:<math> f_*(x)=\frac{1}{a} f \bigg( \frac{x}{a} \bigg)\,</math>
 
and the weighting function is
 
:<math> W(x)= a \frac{f(x)}{f(x/a)} \,</math>
 
While scaling shifts probability mass into the desired event region, it also pushes mass into the complementary region <math>X<t\,</math>, which is undesirable. If <math>X\,</math> is a sum of <math>n\,</math> random variables, the spreading of mass takes place in an <math>n\,</math>-dimensional space. The consequence is a decreasing importance sampling gain as <math>n\,</math> increases; this is called the dimensionality effect.
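A sketch of biasing by scaling for the same hypothetical tail-probability problem (standard normal <math>f</math>, <math>t = 3</math>, scale factor <math>a = 2</math>; all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
K, t, a = 100_000, 3.0, 2.0

# Simulate the scaled variable aZ, Z ~ N(0,1): f_*(x) = (1/a) f(x/a).
x = a * rng.standard_normal(K)

# For the standard normal, W(x) = a·f(x)/f(x/a) = a·exp(-x²(1 - 1/a²)/2).
w = a * np.exp(-x**2 * (1 - 1 / a**2) / 2)
p_hat = np.mean((x >= t) * w)
```

The estimate again targets <math>1-\Phi(3)</math>; the gain over crude Monte Carlo is real but smaller than with a biasing density concentrated on the event region, reflecting the mass spread into <math>X<t</math>.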
 
==== Translation ====
 
Another simple and effective biasing technique employs translation of the density function (and hence random variable) to place much of its probability mass in the rare event region. Translation does not suffer from a dimensionality effect and has been successfully used in several applications relating to simulation of [[digital communication]] systems. It often provides better simulation gains than scaling. In biasing by translation, the simulation density is given by
 
:<math> f_*(x)= f(x-c), \quad c>0 \,</math>
 
where <math>c\,</math> is the amount of shift and is to be chosen to minimize the variance of the importance sampling estimator.
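A sketch of biasing by translation for the same hypothetical setting (standard normal <math>f</math>, <math>t = 3</math>), with the illustrative shift <math>c = t</math>, a common heuristic rather than the variance-minimizing value:

```python
import numpy as np

rng = np.random.default_rng(4)
K, t = 100_000, 3.0
c = t                            # shift the mode onto the threshold

def f(x):                        # true density: standard normal
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Simulate under f_*(x) = f(x - c): a standard normal shifted by c.
x = rng.standard_normal(K) + c
w = f(x) / f(x - c)              # reduces to exp(c²/2 - c·x) here
p_hat = np.mean((x >= t) * w)
```

Roughly half the translated samples now land in the event region, versus about 0.1% for crude sampling, and the weights correct the resulting bias.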
 
=== Effects of system complexity  ===
 
The fundamental problem with importance sampling is that designing good biased distributions becomes more complicated as the system complexity increases. In this context, complex systems are systems with long memory, since complex processing of a small number of inputs is much easier to handle. This dimensionality or memory can cause problems in three ways:
 
* long memory (severe [[intersymbol interference]] (ISI))
* unknown memory ([[Viterbi decoder]]s)
* possibly infinite memory (adaptive equalizers)
 
In principle, the importance sampling ideas remain the same in these situations, but the design becomes much harder. A successful approach to combat this problem is essentially to break a simulation down into several smaller, more sharply defined subproblems; importance sampling strategies are then used to target each of the simpler subproblems. Examples of techniques for breaking the simulation down are conditioning, error-event simulation (EES), and regenerative simulation.
 
=== Evaluation of importance sampling ===
 
In order to identify successful importance sampling techniques, it is useful to be able to quantify the run-time savings due to the use of the importance sampling approach. The performance measure commonly used is <math>\sigma^2_{MC} / \sigma^2_{IS} \,</math>, which can be interpreted as the speed-up factor by which the importance sampling estimator achieves the same precision as the Monte Carlo estimator. This has to be computed empirically, since the estimator variances are unlikely to be analytically tractable when the mean itself is not. Other useful concepts in quantifying an importance sampling estimator are the variance bounds and the notion of asymptotic efficiency.
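The empirical speed-up factor can be estimated from the per-sample variances of the two estimators. A sketch for the hypothetical Gaussian tail problem used above (standard normal, <math>t = 3</math>, translation biasing with shift <math>t</math>):

```python
import numpy as np

rng = np.random.default_rng(5)
K, t = 100_000, 3.0

f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Per-sample terms of the crude Monte Carlo estimator.
z = rng.standard_normal(K)
mc_terms = (z >= t).astype(float)

# Per-sample terms of the importance sampling estimator (translation by t).
x = rng.standard_normal(K) + t
is_terms = (x >= t) * f(x) / f(x - t)

# Empirical speed-up factor: ratio of per-sample variances.
speedup = mc_terms.var() / is_terms.var()
```

For this configuration the ratio comes out in the low hundreds, i.e. the importance sampling estimator needs roughly two orders of magnitude fewer samples for the same precision; as noted below, this ratio ignores the extra cost of computing the weights.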
 
=== Variance cost function ===
 
Variance is not the only possible [[Loss function|cost function]] for a simulation, and other cost functions, such as the mean absolute deviation, are used in various statistical applications. Nevertheless, the variance is the primary cost function addressed in the literature, probably due to the use of variances in [[confidence interval]]s and in the performance measure <math>\sigma^2_{MC} / \sigma^2_{IS} \,</math>.
 
An associated issue is the fact that the ratio <math>\sigma^2_{MC} / \sigma^2_{IS} \,</math> overestimates the run-time savings due to importance sampling since it does not include the extra computing time required to compute the weight function. Hence, some people evaluate the net run-time improvement by various means. Perhaps a more serious overhead to importance sampling is the time taken to devise and program the technique and analytically derive the desired weight function.
 
== See also ==
* [[Monte Carlo method]]
* [[variance reduction]]
* [[Stratified sampling]]
* [[Monte_Carlo_integration#Recursive_stratified_sampling|Recursive stratified sampling]]
* [[VEGAS algorithm]]
* [[Particle filter]] &mdash; a sequential Monte Carlo method, which uses importance sampling
* [[Auxiliary field Monte Carlo]]
* [[Rejection sampling]]
 
== References ==
* Arouna, ''Adaptative Monte Carlo Method, A Variance Reduction Technique'', Monte Carlo Methods and Their Applications, 2004.
* ''Introduction to rare event simulation'', James Antonio Bucklew, Springer-Verlag, New York, 2004.
* ''Sequential Monte Carlo Methods in Practice'', by A Doucet, N de Freitas and N Gordon. Springer, 2001. ISBN 978-0-387-95146-1
* M. Ferrari, S. Bellini, "Importance Sampling simulation of turbo product codes," ICC2001, The IEEE International Conference on Communications, vol. 9, pp.&nbsp;2773–2777, June 2001.
* Tommy Oberg, Modulation, Detection, and Coding, John Wiley & Sons, Inc., New York, 2001.
*{{Cite book | last1=Press | first1=WH | last2=Teukolsky | first2=SA | last3=Vetterling | first3=WT | last4=Flannery | first4=BP | year=2007 | title=Numerical Recipes: The Art of Scientific Computing | edition=3rd | publisher=Cambridge University Press |  publication-place=New York | isbn=978-0-521-88068-8 | chapter=Section 7.9.1 Importance Sampling | chapter-url=http://apps.nrbook.com/empanel/index.html#pg=411}}
* B. D. Ripley, ''Stochastic Simulation'', 1987, Wiley & Sons
* P. J.Smith, M.Shafi, and H. Gao, "Quick simulation: A review of importance sampling techniques in communication systems," IEEE J.Select.Areas Commun., vol. 15, pp.&nbsp;597–613, May 1997.
* R. Srinivasan, ''Importance sampling - Applications in communications and detection'', Springer-Verlag, Berlin, 2002.
 
==External links==
* [http://www-sigproc.eng.cam.ac.uk/smc/ Sequential Monte Carlo Methods (Particle Filtering)] homepage on University of Cambridge
* [http://www.iop.org/EJ/abstract/0143-0807/22/4/315 Introduction to importance sampling in rare-event simulations], ''European Journal of Physics''. PDF document.
* [http://portal.acm.org/citation.cfm?id=1030470 Adaptive Monte Carlo methods for rare event simulation], Winter Simulation Conference
 
[[Category:Monte Carlo methods]]
[[Category:Variance reduction]]
[[Category:Stochastic simulation]]
