{{Refimprove|date=July 2012}}
'''Loss of significance''' is an undesirable effect in calculations using [[floating point|floating-point]] arithmetic. It occurs when an operation on two numbers increases [[relative error]] substantially more than it increases [[absolute error]], for example in subtracting two nearly equal numbers (known as ''catastrophic cancellation''). The effect is that the number of [[significant digit|accurate (significant) digits]] in the result is reduced unacceptably. Ways to avoid this effect are studied in [[numerical analysis]].
 
==Demonstration of the problem==
The effect can be demonstrated with decimal numbers. The following example shows loss of significance for a decimal floating-point data type with 10 significant digits:
 
Consider the decimal number
 
    0.1234567891234567890
 
A floating-point representation of this number on a machine that keeps 10 floating-point digits would be
 
    0.1234567891
 
which is fairly close – the difference is very small in comparison with either of the two numbers.
 
Now perform the calculation
 
    0.1234567891234567890 − 0.1234567890
 
The answer, accurate to 10 digits, is
 
    0.0000000001234567890
 
However, on the 10-digit floating-point machine, the calculation yields
 
    0.1234567891 − 0.1234567890 = 0.0000000001
 
Whereas the original numbers are accurate in all of the first (most significant) 10 digits, their floating-point difference is only accurate in its first nonzero digit. This amounts to loss of significance.
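As an illustrative sketch, the 10-digit arithmetic above can be reproduced with Python's <code>decimal</code> module by setting the context precision to 10:

```python
from decimal import Decimal, getcontext

# Mimic a decimal floating-point machine that keeps 10 significant digits.
getcontext().prec = 10

# Unary plus rounds a stored value to the current context precision.
x = +Decimal("0.1234567891234567890")   # stored as 0.1234567891
y = +Decimal("0.1234567890")            # stored as 0.1234567890

computed = x - y                        # 1E-10: one significant digit left
exact = Decimal("0.0000000001234567890")

print(computed, exact)
```

Only the first nonzero digit of <code>computed</code> agrees with the exact difference, exactly as in the worked example.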
 
==Workarounds==
It is possible to do computations using an exact fractional representation of rational numbers and keep all significant digits, but this is often prohibitively slow compared with floating-point arithmetic. Furthermore, it usually only postpones the problem: what if the data are accurate to only ten digits? The same effect will occur.
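For instance, Python's <code>fractions</code> module carries out the subtraction from the demonstration above exactly (a sketch of exact rational arithmetic, at correspondingly higher cost):

```python
from fractions import Fraction

# Exact rational arithmetic: every digit is kept, at a cost in speed.
x = Fraction("0.1234567891234567890")
y = Fraction("0.1234567890")

diff = x - y            # exactly 0.0000000001234567890
print(diff)             # 123456789/1000000000000000000
```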
 
An important part of numerical analysis is to avoid or minimize loss of significance in calculations. If the underlying problem is well-posed, there should be a stable algorithm for solving it.
 
== Loss of significant bits ==
 
Let ''x'' and ''y'' be positive normalized floating-point numbers with ''y'' < ''x''.
 
In the subtraction ''x'' − ''y'', ''r'' significant bits are lost, where
 
:<math>q \le r \le p </math>
 
:<math>2^{-p} \le 1 - \frac{y}{x} \le 2^{-q} </math>
 
for some positive integers ''p'' and ''q''.
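As a rough sketch, the bounds ''q'' and ''p'' can be computed from <math>1 - y/x</math>; the helper below (the function name is illustrative, not standard) does this in Python:

```python
import math

def bits_lost_bounds(x, y):
    """Return (q, p) with 2**-p <= 1 - y/x <= 2**-q, bounding the number
    of significant bits lost in computing x - y (0 < y < x assumed)."""
    t = 1.0 - y / x
    return math.floor(-math.log2(t)), math.ceil(-math.log2(t))

# x and y agree in roughly their first 13 bits, so 13 or 14 bits are lost.
print(bits_lost_bounds(1.0, 0.9999))    # (13, 14)
```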
 
== Instability of the quadratic equation ==
 
For example, consider the [[quadratic equation]]:
 
:<math>a x^2 + b x + c = 0</math>,
 
with the two exact solutions:
 
:<math> x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.</math>
 
This formula may not always produce an accurate result. For example, when ''c'' is very small the square root is very close to |''b''|, and loss of significance occurs in one of the two root calculations, depending on the sign of ''b''.
 
The case <math>a = 1</math>, <math>b = 200</math>, <math>c = -0.000015</math> will serve to illustrate the problem:
 
:<math>x^2 + 200 x - 0.000015 = 0.</math>
 
We have
 
:<math>\sqrt{b^2 - 4 a c} = \sqrt{200^2 + 4 \times 1 \times 0.000015} = 200.00000015...</math>
 
In real arithmetic, the roots are
 
:<math>( -200 - 200.00000015 ) / 2 = -200.000000075,</math>
:<math>( -200 + 200.00000015 ) / 2 = 0.000000075.</math>
 
In 10-digit floating-point arithmetic,
 
:<math>( -200 - 200.0000001 ) / 2 = -200.00000005,</math>
:<math>( -200 + 200.0000001 ) / 2 = 0.00000005.</math>
 
Notice that the solution of greater [[absolute value|magnitude]] is accurate to ten digits, but the first nonzero digit of the solution of lesser magnitude is wrong.
 
Because one of the two subtractions suffers catastrophic cancellation, the standard quadratic formula does not constitute a stable algorithm for calculating both roots.
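The unstable behaviour can be reproduced in 10-significant-digit arithmetic with Python's <code>decimal</code> module (an illustrative sketch):

```python
from decimal import Decimal, getcontext

# Reproduce the example above in 10-significant-digit decimal arithmetic.
getcontext().prec = 10

a, b, c = Decimal(1), Decimal(200), Decimal("-0.000015")
d = (b * b - 4 * a * c).sqrt()      # 200.0000001 at this precision

# -b + d cancels almost completely: the result has one significant digit.
x_small = (-b + d) / (2 * a)
print(x_small)                      # 5E-8, but the true root is about 7.5E-8
```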
 
=== A better algorithm ===
A careful [[floating point]] computer implementation combines several strategies to produce a robust result. Assuming the discriminant, {{nowrap|''b''<sup>2</sup> − 4''ac''}}, is positive and ''b'' is nonzero, the computation would be as follows:<ref>{{Citation
|last=Press
|first= William H.
|last2= Flannery
|first2= Brian P.
|last3= Teukolsky
|first3= Saul A.
|last4= Vetterling
|first4= William T.
|title= Numerical Recipes in C
|year= 1992
|edition= Second
|url=http://www.nrbook.com/a/bookcpdf.php}}, Section 5.6: "Quadratic and Cubic Equations."</ref>
 
:<math>\begin{align}
x_1 &= \frac{-b - \sgn (b) \,\sqrt {b^2-4ac}}{2a}, \\
x_2 &= \frac{2c}{-b - \sgn (b) \,\sqrt {b^2-4ac}} = \frac{c}{ax_1}.
\end{align}</math>
 
Here sgn denotes the [[sign function]]: sgn(''b'') is 1 if ''b'' is positive and −1 if ''b'' is negative. This avoids cancellation between ''b'' and the square root of the discriminant by ensuring that only numbers of the same sign are added.
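A sketch of this variant in Python double-precision arithmetic, using <code>math.copysign</code> to implement sgn(''b'')·√Δ (assuming ''a'' ≠ 0, ''b'' ≠ 0 and a positive discriminant):

```python
import math

def quadratic_stable(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via the cancellation-free variant.
    Assumes a != 0, b != 0 and a positive discriminant."""
    d = math.sqrt(b * b - 4 * a * c)
    # -b and -copysign(d, b) share a sign, so this sum never cancels.
    q = -b - math.copysign(d, b)
    return q / (2 * a), (2 * c) / q     # x1, x2 (x2 == c / (a * x1))

x1, x2 = quadratic_stable(1.0, 200.0, -0.000015)
print(x1, x2)   # both roots accurate, unlike the naive formula for x2
```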
 
To illustrate the instability of the standard quadratic formula ''versus'' this variant formula, consider a quadratic equation with roots <math>1.786737589984535</math> and <math>1.149782767465722 \times 10^{-8}</math>. To sixteen significant figures, roughly corresponding to [[double-precision]] accuracy on a computer, the monic quadratic equation with these roots may be written as:
 
::<math>x^2 - 1.786737601482363 x + 2.054360090947453 \times 10^{-8} = 0</math>
Maintaining sixteen significant figures at each step, the standard quadratic formula yields
::<math>\sqrt{\Delta} = 1.786737578486707 </math>
::<math>x_1 = (1.786737601482363 + 1.786737578486707) / 2 = 1.786737589984535</math>
::<math>x_2 = (1.786737601482363 - 1.786737578486707) / 2 = 0.000000011497828</math>
Note how cancellation has resulted in <math>x_2</math> being computed to only eight significant digits of accuracy.
The variant formula presented here, however, yields the following:
::<math>x_1 = (1.786737601482363 + 1.786737578486707) / 2 = 1.786737589984535</math>
::<math>x_2 = 2.054360090947453 \times 10^{-8} / 1.786737589984535 = 1.149782767465722 \times 10^{-8}</math>
Note the retention of all significant digits for <math>x_2 .</math>
 
Note that while the above formulation avoids catastrophic cancellation between ''b'' and <math>\scriptstyle\sqrt{b^2-4ac}</math>, there remains a form of cancellation between the terms <math>b^2</math> and <math>-4ac</math> of the discriminant, which can still lead to loss of up to half of the correct significant figures.<ref name="kahan"/><ref name="Higham2002">{{Citation |first=Nicholas |last=Higham |title=Accuracy and Stability of Numerical Algorithms |edition=2nd |publisher=SIAM |year=2002 |isbn=978-0-89871-521-7 |page=10 }}</ref> The discriminant <math>b^2-4ac</math> needs to be computed in arithmetic of twice the precision of the result to avoid this (e.g. [[Quadruple-precision floating-point format|quad]] precision if the final result is to be accurate to full [[double-precision floating-point format|double]] precision).<ref>{{Citation|last=Hough|first=David|journal=IEEE Computer|title=Applications of the proposed IEEE 754 standard for floating point arithmetic|volume=14|issue=3|pages=70–74|doi=10.1109/C-M.1981.220381|date=March 1981|postscript=.}}</ref> This computation can be performed with a [[fused multiply-add]] operation.<ref name="kahan"/>
 
To illustrate this, consider the following quadratic equation, adapted from Kahan (2004):<ref name="kahan">{{Citation |first=William |last=Kahan |title=On the Cost of Floating-Point Computation Without Extra-Precise Arithmetic |url=http://www.cs.berkeley.edu/~wkahan/Qdrtcs.pdf |date=November 20, 2004 |accessdate=2012-12-25}}</ref>
:<math>94906265.625x^2 - 189812534x + 94906268.375 = 0.</math>
This equation has <math>\Delta = 7.5625</math> and has roots
:<math>x_1 = 1.000000028975958</math>
:<math>x_2 = 1.000000000000000 .</math>
However, when computed using IEEE 754 double-precision arithmetic (roughly 15 to 17 significant decimal digits), the computed discriminant <math>b^2 - 4ac</math> rounds to 0.0, and the computed roots are
:<math>x_1 = 1.000000014487979</math>
:<math>x_2 = 1.000000014487979</math>
both of which are wrong after the eighth significant digit. This is despite the fact that, superficially, the problem seems to require only eleven significant digits of accuracy for its solution.
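This behaviour can be checked in Python, whose floats are IEEE 754 double precision; exact rational arithmetic recovers the true discriminant (an illustrative sketch):

```python
from fractions import Fraction

# Kahan's example: the exact discriminant is 7.5625, but in double
# precision b*b and 4*a*c round to the same number.
a, b, c = 94906265.625, 189812534.0, 94906268.375

naive = b * b - 4 * a * c           # 0.0 in IEEE 754 double precision
exact = Fraction(b) ** 2 - 4 * Fraction(a) * Fraction(c)

print(naive, float(exact))          # 0.0 7.5625
```

All three coefficients are exactly representable as doubles, so the loss happens entirely inside the discriminant computation.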
 
==See also==
* [[wikibooks:Fractals/Mathematics/Numerical#Escaping_test|example in wikibooks : Cancellation of significant digits in numerical computations]]
* [[Kahan summation algorithm]]
 
==References==
{{Reflist}}
 
[[Category:Numerical analysis]]
