'''Simulated annealing (SA)''' is a generic [[probabilistic algorithm|probabilistic]] [[metaheuristic]] for the [[global optimization]] problem of locating a good approximation to the [[global optimum]] of a given [[function (mathematics)|function]] in a large [[Mathematical optimization#Optimization problems|search space]].  It is often used when the search space is discrete (e.g., all tours that visit a given set of cities).  For certain problems, simulated annealing may be more efficient than [[brute force search|exhaustive enumeration]] — provided that the goal is merely to find an acceptably good solution in a fixed amount of time, rather than the best possible solution.
 
The name and inspiration come from [[annealing (metallurgy)|annealing in metallurgy]], a technique involving heating and controlled cooling of a material to increase the size of its [[crystal]]s and reduce their [[crystallographic defect|defects]]. Both are attributes of the material that depend on its [[thermodynamic free energy]]. Heating and cooling the material affects both the temperature and the thermodynamic free energy.  While the same amount of cooling brings the same decrease in temperature, the size of the decrease in thermodynamic free energy depends on the rate at which the cooling occurs, with a slower rate producing a larger decrease.
 
This notion of slow cooling is implemented in the simulated annealing algorithm as a slow decrease in the probability of accepting worse solutions as the algorithm explores the solution space. Accepting worse solutions is a fundamental property of metaheuristics because it allows for a more extensive search for the optimal solution.
 
The method was independently described by Scott Kirkpatrick, C. Daniel Gelatt and Mario P. Vecchi in 1983,<ref>{{cite journal |jstor=1690046 |pages=671–680 |last1=Kirkpatrick |first1=S. |last2=Gelatt Jr |first2=C. D. |last3=Vecchi |first3=M. P. |title=Optimization by Simulated Annealing |volume=220 |issue=4598 |journal=Science |year=1983 |pmid=17813860 |doi=10.1126/science.220.4598.671}}</ref> and by Vlado Černý in 1985.<ref>{{cite journal |doi=10.1007/BF00940812 |title=Thermodynamical approach to the traveling salesman problem: An efficient simulation algorithm |year=1985 |last1=Černý |first1=V. |journal=Journal of Optimization Theory and Applications |volume=45 |pages=41–51}}</ref> The method is an adaptation of  the [[Metropolis-Hastings algorithm]], a [[Monte Carlo method]] to generate sample states of a thermodynamic system, invented by [[Marshall Rosenbluth|M.N. Rosenbluth]] and published in a paper by [[Nicholas Metropolis|N. Metropolis]] et al. in 1953.<ref>{{cite journal |doi=10.1063/1.1699114 |title=Equation of State Calculations by Fast Computing Machines |year=1953 |last1=Metropolis |first1=Nicholas |last2=Rosenbluth |first2=Arianna W. |last3=Rosenbluth |first3=Marshall N. |last4=Teller |first4=Augusta H. |last5=Teller |first5=Edward |journal=The Journal of Chemical Physics |volume=21 |issue=6 |pages=1087}}</ref>
 
==Overview==
[[File:Hill Climbing with Simulated Annealing.gif|thumb|Simulated annealing solving a hill climbing problem. The objective here is to reach the highest point; however, it is not enough to use a simple [[hill climbing|hill climb algorithm]], as there are many [[Hill climbing algorithm#Local maxima|local maxima]]. By cooling the temperature slowly, the global maximum is found.|500px]]
 
In this analogy, each point ''s'' of the search space corresponds to a [[thermodynamic state|state]] of some [[physical system]], and the function ''E''(''s'') to be minimized is analogous to the [[internal energy]] of the system in that state. The goal is to bring the system, from an arbitrary ''initial state'', to a state with the minimum possible energy.
 
===The basic iteration===
At each step, the SA heuristic considers some neighbouring state ''s' '' of the current state ''s'', and [[probabilistic]]ally decides between moving the system to state ''s' '' or staying in state ''s''.  These probabilities ultimately lead the system to move to states of lower energy.  Typically this step is repeated until the system reaches a state that is good enough for the application, or until a given computation budget has been exhausted.
 
===The neighbours of a state===
The neighbours of a state are new states of the problem that are produced after altering a given state in some well-defined way. For example, in the [[traveling salesman problem]] each state is typically defined as a [[permutation]] of the cities to be visited. The neighbours of a state are the set of permutations that are produced, for example, by reversing the order of any two successive cities. The well-defined way in which the states are altered in order to find neighbouring states is called a "move" and different moves give different sets of neighbouring states. These moves usually result in minimal alterations of the last state, as the previous example depicts, in order to help the algorithm keep the better parts of the solution and change only the worse parts. In the traveling salesman problem, the parts of the solution are the city connections.
 
Searching for neighbours of a state is fundamental to optimization because the final solution will come after a tour of successive neighbours. Simple [[heuristic]]s move by finding best neighbour after best neighbour and stop when they have reached a solution which has no neighbours that are better solutions. The problem with this approach is that the neighbours of a state are not guaranteed to contain any of the existing better solutions, which means that failure to find a better solution among them does not guarantee that no better solution exists. This is why the best solution found by such algorithms is called a [[local optimum]], in contrast with the actual best solution, which is called a [[global optimum]]. Metaheuristics use the neighbours of a solution as a way to explore the solution space, and although they prefer better neighbours, they also accept worse neighbours in order to avoid getting stuck in local optima. As a result, if the algorithm is run for an infinite amount of time, the global optimum will be found.
 
===Acceptance probabilities===
The probability of making the [[state transition|transition]] from the current state <math>s</math> to a candidate new state <math>s'</math> is specified by an ''acceptance probability function'' <math>P(e, e', T)</math>, that depends on the energies <math>e = E(s)</math> and <math>e' = E(s')</math> of the two states, and on a global time-varying parameter <math>T</math> called the ''temperature''.
States with a smaller energy are better than those with a greater energy.
The probability function <math>P</math> must be positive even when <math>e'</math> is greater than <math>e</math>.  This feature prevents the method from becoming stuck at a local minimum that is worse than the global one.
 
When <math>T</math> tends to zero, the probability <math>P(e, e', T)</math> must tend to zero if <math>e' > e</math> and to a positive value otherwise. For sufficiently small values of <math>T</math>, the system will then increasingly favor moves that go "downhill" (i.e., to lower energy values), and avoid those that go "uphill."  With <math>T=0</math> the procedure reduces to the [[greedy algorithm]], which makes only the downhill transitions.
 
In the original description of SA, the probability <math>P(e, e', T)</math> was equal to 1 when <math>e' < e</math> — i.e., the procedure always moved downhill when it found a way to do so, irrespective of the temperature.  Many descriptions and implementations of SA still take this condition as part of the method's definition.  However, this condition is not essential for the method to work.
 
The <math>P</math> function is usually chosen so that the probability of accepting a move decreases when the difference
<math>e'-e</math> increases—that is, small uphill moves are more likely than large ones.  However, this requirement is not strictly necessary, provided that the above requirements are met.
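
A minimal sketch in Python of an acceptance function with these properties, using the exponential form of Kirkpatrick et al. discussed later in this article (one common choice, not the only valid one):

<syntaxhighlight lang="python">
import math

def acceptance_probability(e, e_new, t):
    """Standard SA acceptance rule: always accept downhill moves;
    accept uphill moves with probability exp(-(e_new - e) / t)."""
    if e_new < e:
        return 1.0
    if t <= 0:
        return 0.0  # at T = 0 the rule degenerates to greedy descent
    return math.exp(-(e_new - e) / t)

# The same uphill move is accepted far more often at high temperature:
print(acceptance_probability(10.0, 11.0, 5.0))   # ~0.82
print(acceptance_probability(10.0, 11.0, 0.1))   # ~4.5e-5
</syntaxhighlight>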
 
Given these properties, the temperature <math>T</math> plays a crucial role in controlling the evolution of the state <math>s</math> of the system with regard to its sensitivity to the variations of system energies. To be precise, for a large <math>T</math>, the evolution of <math>s</math> is sensitive to coarser energy variations, while it is sensitive to finer energy variations when <math>T</math> is small.
 
===The annealing schedule===
The name and inspiration of the algorithm require a feature related to temperature variation to be embedded in its operational characteristics: the temperature must be gradually reduced as the simulation proceeds. The algorithm starts initially with <math>T</math> set to a high value (or infinity), and then it is decreased at each step following some ''annealing schedule''—which may be specified by the user, but must end with <math>T=0</math> towards the end of the allotted time budget. In this way, the system is expected to wander initially towards a broad region of the search space containing good solutions, ignoring small features of the energy function; then drift towards low-energy regions that become narrower and narrower; and finally move downhill according to the [[steepest descent]] heuristic.
 
<center>
<table border=0 width="90%">
<tr>
<td>[[Image:SimulatedAnnealingFast.jpg|center|thumb|200px]]</td>
<td>[[Image:SimulatedAnnealingSlow.jpg|center|thumb|200px]]</td>
</tr>
<tr>
<td colspan=2>
Example illustrating the effect of cooling schedule on the performance of simulated annealing.  The problem is to rearrange the [[pixel]]s of an image so as to minimize a certain [[potential energy]] function, which causes similar [[colour]]s to attract at short range and repel at a slightly larger distance.  The elementary moves swap two adjacent pixels.  These images were obtained with a fast cooling schedule (left) and a slow cooling schedule (right), producing results similar to [[amorphous solid|amorphous]] and [[crystalline solid]]s, respectively.
</td>
</tr>
</table>
</center>
 
For any given finite problem, the probability that the simulated annealing algorithm terminates with a [[global optimum|global optimal]] solution approaches 1 as the annealing schedule is extended.<ref>{{cite journal |doi=10.1109/34.295910 |title=Simulated annealing: A proof of convergence |year=1994 |last1=Granville |first1=V. |last2=Krivanek |first2=M. |last3=Rasson |first3=J.-P. |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |volume=16 |issue=6 |pages=652–656}}</ref> This theoretical result, however, is not particularly helpful, since the time required to ensure a significant probability of success will usually exceed the time required for a [[Brute-force search|complete search]] of the [[solution space]].{{Citation needed|date=June 2011}}
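
The article does not prescribe a particular schedule. As a hedged illustration, here are two common choices written as <tt>temperature(r)</tt> functions of the fraction <tt>r</tt> of the time budget expended so far; the initial temperature <tt>t0</tt> and the decay ratio <tt>alpha</tt> are hypothetical parameters, not values taken from the text:

<syntaxhighlight lang="python">
def temperature_linear(r, t0=100.0):
    """Linear schedule: T falls from t0 at r = 0 to exactly 0 at r = 1."""
    return t0 * (1.0 - r)

def temperature_geometric(r, t0=100.0, alpha=1e-3):
    """Geometric schedule: T decays smoothly from t0 to t0 * alpha.
    Unlike the linear schedule it never reaches exactly 0, so a run
    would truncate it to 0 at the very end of the budget."""
    return t0 * alpha ** r

for r in (0.0, 0.5, 1.0):
    print(r, temperature_linear(r), temperature_geometric(r))
</syntaxhighlight>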
 
==Pseudocode==
The following [[pseudocode]] presents the simulated annealing heuristic as described above. It starts from a state <tt>s0</tt> and continues to either a maximum of <tt>kmax</tt> steps or until a state with an energy of <tt>emax</tt> or less is found. In the process, the call <tt>neighbour(s)</tt> should generate a randomly chosen neighbour of a given state s; the call <tt>random()</tt> should return a random value in the range <math>[0, 1]</math>. The annealing schedule is defined by the call <tt>temperature(r)</tt>, which should yield the temperature to use, given the fraction <tt>r</tt> of the time budget that has been expended so far.
 
<code>
s ← s0; e ← E(s)                                  // ''Initial state, energy.''
sbest ← s; ebest ← e                              // ''Initial "best" solution''
k ← 0                                            // ''Energy evaluation count.''
'''while''' k < kmax '''and''' e > emax                      // ''While time left & not good enough:''
  T ← temperature(k/kmax)                        // ''Temperature calculation.''
  snew ← neighbour(s)                            // ''Pick some neighbour.''
  enew ← E(snew)                                  // ''Compute its energy.''
  '''if''' P(e, enew, T) > random() '''then'''                // ''Should we move to it?''
    s ← snew; e ← enew                            // ''Yes, change state.''
  '''if''' enew < ebest '''then'''                            // ''Is this a new best?''
    sbest ← snew; ebest ← enew                    // ''Save 'new neighbour' to 'best found'.''
  k ← k + 1                                      // ''One more evaluation done''
'''return''' sbest                                      // ''Return the best solution found.''
</code>
 
Pedantically speaking, the "pure" SA algorithm does not keep track of the best solution found so far: it does not use the variables <tt>sbest</tt> and <tt>ebest</tt>, it lacks the second <tt>'''if'''</tt> inside the loop, and, at the end,  it returns the current state <tt>s</tt> instead of <tt>sbest</tt>. While remembering the best state is a standard technique in optimization  that can be used in any [[metaheuristic]], it does not have an analogy with physical annealing — since a physical system can "store" a single state only.
 
Even more pedantically speaking, saving the best state is not necessarily an improvement, since one may have to specify a smaller <tt>kmax</tt> in order to compensate for the higher cost per iteration and since there is a good probability that <tt>sbest</tt> equals <tt>s</tt> in the final iteration anyway. However, the step <tt>sbest ← snew</tt> happens only on a small fraction of the moves. Therefore, the optimization is usually worthwhile, even when state-copying is an expensive operation.{{Citation needed|date=November 2010}}
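
For readers who want something executable, the following is a direct Python transcription of the pseudocode above, including the optional best-so-far bookkeeping, applied to a toy one-dimensional problem. The energy function, neighbour generator, schedule, and constants are illustrative assumptions, not part of the method's definition:

<syntaxhighlight lang="python">
import math
import random

def simulated_annealing(s0, E, neighbour, temperature, P, k_max, e_max=-math.inf):
    """Direct transcription of the pseudocode above."""
    s, e = s0, E(s0)
    s_best, e_best = s, e
    k = 0
    while k < k_max and e > e_max:       # while time left & not good enough
        t = temperature(k / k_max)       # temperature calculation
        s_new = neighbour(s)             # pick some neighbour
        e_new = E(s_new)                 # compute its energy
        if P(e, e_new, t) > random.random():
            s, e = s_new, e_new          # probabilistic move
        if e_new < e_best:
            s_best, e_best = s_new, e_new  # optional best-state tracking
        k += 1
    return s_best

# Toy problem; every choice below is an illustrative assumption:
E = lambda s: s * s + 10 * math.sin(3 * s)           # multimodal energy
neighbour = lambda s: s + random.uniform(-1, 1)      # small random step
temperature = lambda r: max(1e-9, 100.0 * (1 - r))   # linear cooling
P = lambda e, e_new, t: 1.0 if e_new < e else math.exp(-(e_new - e) / t)

random.seed(42)
print(simulated_annealing(5.0, E, neighbour, temperature, P, k_max=100_000))
</syntaxhighlight>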
 
==Selecting the parameters==
In order to apply the SA method to a specific problem, one must specify the following parameters: the state space, the energy (goal) function <tt>E()</tt>, the candidate generator procedure <tt>neighbour()</tt>, the acceptance probability function <tt>P()</tt>, and the annealing schedule <tt>temperature()</tt> together with its initial temperature. These choices can have a significant impact on the method's effectiveness.  Unfortunately, there are no choices of these parameters that will be good for all problems, and there is no general way to find the best choices for a given problem. The following sections give some general guidelines.
 
===Diameter of the search graph===
Simulated annealing may be modeled as a random walk on a ''search [[graph theory|graph]]'', whose vertices are all possible states, and whose edges are the candidate moves. An essential requirement for the <tt>neighbour()</tt> function is that it must provide a sufficiently short path on this graph from the initial state to any state which may be the global optimum.  (In other words, the [[diameter (graph theory)|diameter]] of the search graph must be small.)  In the traveling salesman example above, for instance, the search space for <math>n=20</math> cities has [[factorial|<math>n!</math>]] = 2,432,902,008,176,640,000 (2.4 [[quintillion]]) states; yet the neighbour generator function that swaps two consecutive cities can get from any state (tour) to any other state in at most <math>n(n-1)/2 = 190</math> steps (this is <math>\sum_{i=1}^{n-1} i</math>, the worst-case number of adjacent swaps).
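
The bound holds because <math>n(n-1)/2</math> is the maximum number of inversions between two permutations, the same worst case as [[bubble sort]]. A small illustrative check in Python:

<syntaxhighlight lang="python">
def adjacent_swaps_to_reach(start, target):
    """Count adjacent swaps needed to turn tour `start` into tour `target`
    (bubble-sort style count; illustrative, not an SA component)."""
    pos = {city: i for i, city in enumerate(target)}
    a = [pos[c] for c in start]          # start expressed in target order
    swaps = 0
    for _ in range(len(a)):
        for j in range(len(a) - 1):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return swaps

n = 20
worst = adjacent_swaps_to_reach(list(range(n))[::-1], list(range(n)))
print(worst, n * (n - 1) // 2)   # both print 190
</syntaxhighlight>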
 
===Transition probabilities===
For each edge <math>(s,s')</math> of the search graph, one defines a ''transition probability'', which is the probability that the SA algorithm will move to state  <math>s'</math> when its current state is <math>s</math>.  This probability depends on the current temperature as specified by <tt>temperature()</tt>, on the order in which the candidate moves are generated by the <tt>neighbour()</tt> function, and on the acceptance probability function <tt>P()</tt>. (Note that the transition probability is '''not''' simply <math>P(e, e', T)</math>, because the candidates are tested serially.)
 
===Acceptance probabilities===
The specification of <tt>neighbour()</tt>, <tt>P()</tt>, and <tt>temperature()</tt> is partially redundant.  In practice, it is common to use the same acceptance function <tt>P()</tt> for many problems and adjust the other two functions according to the specific problem.
 
In the formulation of the method by Kirkpatrick et al., the acceptance probability function <math>P(e, e', T)</math> was defined as 1 if <math>e' < e</math>, and <math>\exp(-(e'-e)/T)</math> otherwise. This formula was superficially justified by analogy with the transitions of a physical system; it corresponds to the [[Metropolis-Hastings algorithm]], in the case where the proposal distribution of Metropolis-Hastings is symmetric. However, this acceptance probability is often used for simulated annealing even when the <tt>neighbour()</tt> function, which is analogous to the proposal distribution in Metropolis-Hastings, is not symmetric, or not probabilistic at all.  As a result, the transition probabilities of the simulated annealing algorithm do not correspond to the transitions of the analogous physical system, and the long-term distribution of states at a constant temperature <math>T</math> need not bear any resemblance to the thermodynamic equilibrium distribution over states of that physical system, at any temperature. Nevertheless, most descriptions of SA assume the original acceptance function, which is probably hard-coded in many implementations of SA.
 
===Efficient candidate generation===
When choosing the candidate generator <tt>neighbour()</tt>, one must consider that after a few iterations of the SA algorithm, the current state is expected to have much lower energy than a random state. Therefore, as a general rule, one should skew the generator towards candidate moves where the energy of the destination state <math>s'</math> is likely to be similar to that of the current state.  This [[heuristic]] (which is the main principle of the [[Metropolis-Hastings algorithm]]) tends to exclude "very good" candidate moves as well as "very bad" ones; however, the former are usually much less common than the latter, so the heuristic is generally quite effective.
 
In the traveling salesman problem above, for example, swapping two ''consecutive'' cities in a low-energy tour is expected to have a modest effect on its energy (length); whereas swapping two ''arbitrary'' cities is far more likely to increase its length than to decrease it.  Thus, the consecutive-swap neighbour generator is expected to perform better than the arbitrary-swap one, even though the latter could provide a somewhat shorter path to the optimum (with <math>n-1</math> swaps, instead of <math>n(n-1)/2</math>).
 
A more precise statement of the heuristic is that one should try first those candidate states <math>s'</math> for which <math>P(E(s), E(s'), T)</math> is large.  For the "standard" acceptance function <math>P</math> above, this means that <math>E(s') - E(s)</math> is on the order of <math>T</math> or less. Thus, in the traveling salesman example above, one could use a <tt>neighbour()</tt> function that swaps two random cities, where the probability of choosing a city pair vanishes as their distance increases beyond <math>T</math>.
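
As a hedged sketch of such a generator for the traveling salesman example: pick one city uniformly, then choose its swap partner with a probability that decays with inter-city distance on a scale set by <math>T</math>. The exponential decay and all names below are illustrative assumptions; the text only requires that the choice probability vanish for distant pairs.

<syntaxhighlight lang="python">
import math
import random

def swap_nearby_cities(tour, coords, t):
    """Swap a random city with a partner chosen with probability
    proportional to exp(-distance / t); every detail is illustrative."""
    i = random.randrange(len(tour))

    def dist(a, b):
        (x1, y1), (x2, y2) = coords[a], coords[b]
        return math.hypot(x1 - x2, y1 - y2)

    weights = [0.0 if c == tour[i] else math.exp(-dist(tour[i], c) / max(t, 1e-9))
               for c in tour]
    j = random.choices(range(len(tour)), weights=weights)[0]
    new_tour = tour[:]
    new_tour[i], new_tour[j] = new_tour[j], new_tour[i]
    return new_tour

# Hypothetical usage with 10 random city coordinates:
random.seed(1)
coords = {c: (random.random(), random.random()) for c in range(10)}
print(swap_nearby_cities(list(range(10)), coords, t=0.2))
</syntaxhighlight>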
 
===Barrier avoidance===
When choosing the candidate generator <tt>neighbour()</tt> one must also try to reduce the number of "deep" local minima — states (or sets of connected states) that have much lower energy than all their neighbouring states.  Such "closed [[drainage basin|catchment]] basins" of the energy function may trap the SA algorithm with high probability (roughly proportional to the number of states in the basin) and for a very long time (roughly exponential in the energy difference between the surrounding states and the bottom of the basin).
 
As a rule, it is impossible to design a candidate generator that will satisfy this goal and also prioritize candidates with similar energy.  On the other hand, one can often vastly improve the efficiency of SA by relatively simple changes to the generator. In the traveling salesman problem, for instance, it is not hard to exhibit two tours <math>A</math>, <math>B</math>, with nearly equal lengths, such that (1) <math>A</math> is optimal, (2) every sequence of city-pair swaps that converts <math>A</math> to <math>B</math> goes through tours that are much longer than both, and (3) <math>A</math> can be transformed into <math>B</math> by flipping (reversing the order of) a set of consecutive cities.  In this example, <math>A</math> and <math>B</math> lie in different "deep basins" if the generator performs only random pair-swaps; but they will be in the same basin if the generator performs random segment-flips.
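
A segment-reversal move of the kind described above takes only a few lines in Python (a sketch; this "2-opt style" flip is a standard example, but index conventions vary):

<syntaxhighlight lang="python">
import random

def flip_segment(tour):
    """Reverse a random contiguous segment of the tour (illustrative)."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

random.seed(0)
print(flip_segment(list(range(10))))
</syntaxhighlight>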
 
===Cooling schedule===
 
The physical analogy that is used to justify SA assumes that the cooling rate is low enough for the probability distribution of the current state to be near [[thermodynamic equilibrium]] at all times.  Unfortunately, the ''relaxation time''—the time one must wait for the equilibrium to be restored after a change in temperature—strongly depends on the "topography" of the energy function and on the current temperature.  In the SA algorithm, the relaxation time also depends on the candidate generator, in a very complicated way.  Note that all these parameters are usually provided as [[procedural parameter|black box functions]] to the SA algorithm. Therefore, the ideal cooling rate cannot be determined beforehand, and should be empirically adjusted for each problem. [[Adaptive simulated annealing]] algorithms address this problem by connecting the cooling schedule to the search progress.
 
==Restarts==
Sometimes it is better to move back to a solution that was significantly better rather than always moving from the current state.  This process is called ''restarting'' of simulated annealing. To do this we set <code>s</code> and <code>e</code> to <code>sbest</code> and <code>ebest</code> and perhaps restart the annealing schedule.  The decision to restart could be based on several criteria. Notable criteria include restarting after a fixed number of steps, restarting when the current energy is too high compared to the best energy obtained so far, and restarting at random. A sketch of how such a check might look is given after this paragraph.
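
A hedged sketch of one such criterion, to be called once per iteration of the pseudocode's main loop after the best-state bookkeeping; the 10% threshold and the assumption of positive energies are illustrative, not part of the method:

<syntaxhighlight lang="python">
def maybe_restart(s, e, s_best, e_best, threshold=1.10):
    """Fall back to the best state found so far whenever the current
    energy exceeds the best energy by more than 10% (hypothetical rule;
    assumes energies are positive)."""
    if e > threshold * e_best:
        return s_best, e_best  # restart from the best solution seen
    return s, e
</syntaxhighlight>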
 
==Related methods ==
* [[Quantum annealing]] uses "quantum fluctuations" instead of thermal fluctuations to get through high but thin barriers in the target function.
 
* [[Stochastic tunneling]] attempts to overcome the increasing difficulty simulated annealing runs have in escaping from local minima as the temperature decreases, by 'tunneling' through barriers.
 
* [[Tabu search]] normally moves to neighbouring states of lower energy, but will take uphill moves when it finds itself stuck in a local minimum; and avoids cycles by keeping a "taboo list" of solutions already seen.
 
* [[Reactive search optimization]] focuses on combining machine learning with optimization, by adding an internal feedback loop to self-tune the free parameters of an algorithm to the characteristics of the problem, of the instance, and of the local situation around the current solution.
 
* [[Stochastic gradient descent]] runs many greedy searches from random initial locations.
 
* [[Genetic algorithms]] maintain a pool of solutions rather than just one. New candidate solutions are generated not only by "mutation" (as in SA), but also by "recombination" of two solutions from the pool. Probabilistic criteria, similar to those used in SA, are used to select the candidates for mutation or combination, and for discarding excess solutions from the pool.
 
* [[Graduated optimization]] digressively "smooths" the target function while optimizing.
 
* [[Ant colony optimization]] (ACO) uses many ants (or agents) to traverse the solution space and find locally productive areas.
 
* The [[cross-entropy method]] (CE) generates candidate solutions via a parameterized probability distribution. The parameters are updated via cross-entropy minimization, so as to generate better samples in the next iteration.
 
* [[Harmony search]] mimics musicians in an improvisation process, where each musician plays a note in search of the best harmony together.
 
* [[Stochastic optimization]] is an umbrella set of methods that includes simulated annealing and numerous other approaches.
 
* [[Particle swarm optimization]] is an algorithm modelled on swarm intelligence that finds a solution to an optimization problem in a search space, or models and predicts social behavior in the presence of objectives.
 
* [[Intelligent Water Drops]] (IWD) mimics the behavior of natural water drops to solve optimization problems.
 
* [[Parallel tempering]] is a simulation of model copies at different temperatures (or [[Hamiltonian (quantum mechanics)|Hamiltonian]]s) to overcome the potential barriers.
 
==See also==
{{columns-list|2|
* [[Adaptive simulated annealing]]
* [[Markov chain]]
* [[Combinatorial optimization]]
* [[Automatic label placement]]
* [[Multidisciplinary optimization]]
* [[Place and route]]
* [[Molecular dynamics]]
* [[Traveling salesman problem]]
* [[Reactive search optimization]]
* [[Graph cuts in computer vision]]
* [[Particle swarm optimization]]
* [[Intelligent Water Drops]]
}}
 
==References==
{{Reflist}}
 
==Further reading==
* A. Das and B. K. Chakrabarti (Eds.), ''Quantum Annealing and Related Optimization Methods'', Lecture Notes in Physics, Vol. 679, Springer, Heidelberg (2005).
*{{cite journal |doi=10.1007/BF00202749 |title=Correlated and uncorrelated fitness landscapes and how to tell the difference |year=1990 |last1=Weinberger |first1=E. |journal=Biological Cybernetics |volume=63 |issue=5 |pages=325–336}}
*{{cite journal |doi=10.1016/j.physleta.2003.08.070 |title=Placement by thermodynamic simulated annealing |year=2003 |last1=De Vicente |first1=Juan |last2=Lanchares |first2=Juan |last3=Hermida |first3=Román |journal=Physics Letters A |volume=317 |issue=5–6 |pages=415–423}}
*{{Cite book | last1=Press | first1=WH | last2=Teukolsky | first2=SA | last3=Vetterling | first3=WT | last4=Flannery | first4=BP | year=2007 | title=Numerical Recipes: The Art of Scientific Computing | edition=3rd | publisher=Cambridge University Press | publication-place=New York | isbn=978-0-521-88068-8 | chapter=Section 10.12. Simulated Annealing Methods | chapter-url=http://apps.nrbook.com/empanel/index.html#pg=549}}
 
==External links==
* [http://yuval.bar-or.org/index.php?item=9 Simulated Annealing visualization] A visualization of a simulated annealing solution to the N-Queens puzzle by Yuval Baror.
*[http://biomath.ugent.be/~brecht/downloads.html Global optimization algorithms for MATLAB]
* [http://www.heatonresearch.com/articles/64/page1.html Simulated Annealing] A Java applet that allows you to experiment with simulated annealing. Source code included.
* [http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=10548&objectType=file "General Simulated Annealing Algorithm"] An open-source MATLAB program for general simulated annealing exercises.
* [http://en.wikiversity.org/wiki/Simulated_Annealing_Project Self-Guided Lesson on Simulated Annealing] A Wikiversity project.
* [http://arstechnica.com/science/news/2009/12/uncertainty-hovers-over-claim-googles-using-quantum-computer.ars Google in superposition of using, not using quantum computer] Ars Technica discusses the possibility that the D-Wave computer being used by Google may, in fact, be an efficient SA co-processor.
* [http://www.netlib.org/opt/simann.f Minimizing Multimodal Functions of Continuous Variables with Simulated Annealing] A Fortran 77 simulated annealing code.
* [http://apmonitor.com/me575/index.php/Main/SimulatedAnnealing Simulated Annealing Tutorial and Code in Python and MATLAB]
* Simulated Annealing in R via [http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optim.html optim] or [http://cran.r-project.org/web/packages/GenSA/ GenSA]
 
{{Major subfields of optimization}}
 
{{DEFAULTSORT:Simulated Annealing}}
[[Category:Heuristic algorithms]]
[[Category:Optimization algorithms and methods]]
[[Category:Monte Carlo methods]]
