The '''false position method''' or '''regula falsi method''' is a term for problem-solving methods in arithmetic, algebra, and calculus.  In simple terms, these methods begin by attempting to evaluate a problem using test ("false") values for the variables, and then adjust the values accordingly.

Two basic types of false position method can be distinguished, ''simple false position'' and ''double false position''. ''Simple false position'' is aimed at solving problems involving direct proportion. Such problems can be written algebraically in the form: determine ''x'' such that
<blockquote>
<math> ax = b </math>,
</blockquote>
if ''a'' and ''b'' are known. ''Double false position'' is aimed at solving more difficult problems that can be written algebraically in the form: determine ''x'' such that
<blockquote>
<math> f(x) = b </math>,
</blockquote>
if it is known that
<blockquote>
<math> f(x_1) = b_1, f(x_2) = b_2</math>.
</blockquote>
Double false position is mathematically equivalent to [[linear interpolation]]; for an affine [[linear function]],
<blockquote>
<math> f(x) = ax + c</math>,
</blockquote>
it provides the exact solution, while for a [[nonlinear system|nonlinear]] function ''f'' it provides an [[approximation]] that can be successively improved by [[iterative method|iteration]].
 
==Arithmetic and algebra==
In problems involving [[arithmetic]] or [[algebra]],  the '''false position method''' or '''regula falsi''' is used to refer to basic [[trial and error]] methods of solving problems by substituting test values for the unknown quantities.  This is sometimes also referred to as "guess and check".  Versions of this method predate the advent of [[algebra]] and the use of [[equations]].
 
For simple false position, the method of solving what we would now write as ''ax'' = ''b'' begins by using a test input value ''x''′, and finding the corresponding output value ''b''′ by multiplication:  ''ax''′  = ''b''′. The correct answer is then found by proportional adjustment, ''x''  =  ''x''′ · ''b'' ÷ ''b''′. This technique is found in [[cuneiform]] tablets from ancient [[Babylonian mathematics]], and possibly in [[papyrus|papyri]] from ancient [[Egyptian mathematics]].<ref>Jean-Luc Chabert, ed., ''A History of Algorithms: From the Pebble to the Microchip'' (Berlin: Springer, 1999), pp. 86-91.</ref>
 
Likewise, double false position arose in late antiquity as a purely arithmetical algorithm. It was used mostly to solve what are now called affine linear problems by using a pair of test inputs and the corresponding pair of outputs. This algorithm would be memorized and carried out by rote. In the ancient [[Chinese mathematics|Chinese mathematical]] text called ''[[The Nine Chapters on the Mathematical Art]]'' (九章算術), dated from 200 BC to AD 100, most of Chapter 7 was devoted to the algorithm. There, the procedure was justified by concrete arithmetical arguments, then applied creatively to a wide variety of story problems, including one involving what we would call [[secant line]]s on a [[quadratic polynomial]]. A more typical example is this "joint purchase" problem:
<blockquote>
Now an item is purchased jointly; everyone contributes 8 [coins], the excess is 3; everyone contributes 7, the deficit is 4. Tell: The number of people, the item price, what is each? Answer: 7 people, item price 53.<ref>Shen Kangshen, John N. Crossley and Anthony W.-C. Lun, 1999. ''The Nine Chapters on the Mathematical Art: Companion and Commentary''. Oxford: Oxford University Press, p. 358.</ref>
</blockquote>
 
Between the 9th and 10th centuries, the [[Egyptians|Egyptian]] [[Muslim]] mathematician [[Abu Kamil]] wrote a now-lost treatise on the use of double false position, known as the ''Book of the Two Errors'' (''Kitāb al-khaṭāʾayn''). The oldest surviving writing on double false position from the [[Middle East]] is that of [[Qusta ibn Luqa]] (10th century), a [[Christian]] [[Arab]] mathematician from [[Baalbek]], [[Lebanon]]. He justified the technique by a formal, [[Euclidean geometry|Euclidean-style geometric proof]]. Within the tradition of [[Mathematics in medieval Islam|medieval Muslim mathematics]], double false position was known as ''hisāb al-khaṭāʾayn'' ("reckoning by two errors"). It was used for centuries, especially in the [[Maghreb]], to solve practical problems such as commercial and juridical questions (estate partitions according to rules of [[Islamic inheritance jurisprudence|Quranic inheritance]]), as well as purely recreational problems. The algorithm was often memorized with the aid of [[mnemonics]], such as a verse attributed to [[Ibn al-Yasamin]] and balance-scale diagrams explained by [[al-Hassar]] and [[Ibn al-Banna]], all three being mathematicians of [[Moroccan people|Moroccan]] origin.<ref name="Schwartz">{{Cite conference |conference=Eighth North African Meeting on the History of Arab Mathematics |last=Schwartz |first=R. K. |title=Issues in the Origin and Development of Hisab al-Khata’ayn (Calculation by Double False Position) |location=Radès, Tunisia |year=2004}} Available online at:  http://facstaff.uindy.edu/~oaks/Biblio/COMHISMA8paper.doc and http://www.ub.edu/islamsci/Schwartz.pdf</ref>
 
Leonardo of Pisa ([[Fibonacci]]) devoted Chapter 13 of his book ''[[Liber Abaci]]'' (AD 1202) to explaining and demonstrating the uses of double false position, terming the method ''regulis elchatayn'' after the ''al-khaṭāʾayn'' method that he had learned from [[Arab]] sources.<ref name="Schwartz"/>
 
==Numerical analysis==
 
In [[numerical analysis]], double false position became a [[root-finding algorithm]] that combines features from the [[bisection method]] and the [[secant method]].
 
[[Image:False position method.svg|right|351px|thumb|The first two iterations of the false position method. The red curve shows the function f and the blue lines are the secants.]]
Like the bisection method, the false position method starts with two points ''a''<sub>0</sub> and ''b''<sub>0</sub> such that ''f''(''a''<sub>0</sub>) and ''f''(''b''<sub>0</sub>) are of opposite signs, which implies by the [[intermediate value theorem]] that the function ''f'' has a root in the interval [''a''<sub>0</sub>, ''b''<sub>0</sub>], assuming continuity of the function ''f''. The method proceeds by producing a sequence of shrinking intervals [''a''<sub>''k''</sub>, ''b''<sub>''k''</sub>] that all contain a root of ''f''.
 
At iteration number ''k'', the number
:<math> c_k = b_k-\frac{f(b_k) (b_k-a_k)}{f(b_k)-f(a_k)} </math>
is computed. As explained below, ''c''<sub>''k''</sub> is the root of the secant line through (''a''<sub>''k''</sub>, ''f''(''a''<sub>''k''</sub>)) and (''b''<sub>''k''</sub>, ''f''(''b''<sub>''k''</sub>)). If ''f''(''a''<sub>''k''</sub>) and ''f''(''c''<sub>''k''</sub>) have the same sign, then we set ''a''<sub>''k''+1</sub> = ''c''<sub>''k''</sub> and ''b''<sub>''k''+1</sub> = ''b''<sub>''k''</sub>; otherwise we set ''a''<sub>''k''+1</sub> = ''a''<sub>''k''</sub> and ''b''<sub>''k''+1</sub> = ''c''<sub>''k''</sub>. This process is repeated until the root is approximated sufficiently well.
 
The above formula is also used in the secant method, but the secant method always retains the last two computed points, while the false position method retains two points which certainly bracket a root. On the other hand, the only difference between the false position method and the bisection method is that the latter uses ''c''<sub>''k''</sub> = (''a''<sub>''k''</sub> + ''b''<sub>''k''</sub>) / 2.
 
===Finding the root of the secant===
 
Given ''a''<sub>''k''</sub> and ''b''<sub>''k''</sub>, we construct the line through the points (''a''<sub>''k''</sub>, ''f''(''a''<sub>''k''</sub>)) and (''b''<sub>''k''</sub>, ''f''(''b''<sub>''k''</sub>)), as demonstrated in the picture immediately above. Note that this line is a [[secant method|secant]] or chord of the graph of the function ''f''.  In [[slope|point-slope form]], it can be defined as
 
:<math> y - f(b_k) = \frac{f(b_k)-f(a_k)}{b_k-a_k} (x-b_k). </math>
 
We now choose ''c''<sub>''k''</sub> to be the root of this line: substituting ''c''<sub>''k''</sub> for ''x'' and setting <math>y = 0</math>, we see that
 
:<math> f(b_k) + \frac{f(b_k)-f(a_k)}{b_k-a_k} (c_k-b_k) = 0. </math>
 
Solving this equation gives the above equation for ''c''<sub>''k''</sub>.
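Explicitly, solving for ''c''<sub>''k''</sub> yields both the form quoted above and an equivalent symmetric form, which is the one used in the example code below:

:<math> c_k = b_k - \frac{f(b_k)\,(b_k-a_k)}{f(b_k)-f(a_k)} = \frac{f(b_k)\,a_k - f(a_k)\,b_k}{f(b_k)-f(a_k)}. </math>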
 
==Analysis==
 
If the initial end-points
''a''<sub>0</sub> and ''b''<sub>0</sub> are chosen such that ''f''(''a''<sub>0</sub>) and ''f''(''b''<sub>0</sub>) are of opposite signs, then at each step, one of the end-points will get closer to a root of ''f''.
If the second derivative of ''f'' is of constant sign (so there is no [[inflection point]]) in the interval, then one endpoint (the one where ''f'' has the same sign as ''f″'') remains fixed for all subsequent iterations, while only the other endpoint is updated. As a result, unlike the [[bisection method]], the width of the bracket does not tend to zero (unless the zero is at an inflection point around which sign(''f'')&nbsp;=&nbsp;&minus;sign(''f″'')). As a consequence, the linear approximation to ''f''(''x''), which is used to pick the false position, does not improve in quality.
 
One example of this phenomenon is the function
:<math> f(x) = 2x^3-4x^2+3x </math>
on the initial bracket
[&minus;1,1].  The left end, &minus;1, is never replaced (after the first three iterations, ''f″'' is negative on the interval) and thus the width
of the bracket never falls below 1.  Hence, the right endpoint approaches 0 at
a linear rate (the number of accurate digits grows linearly, with a [[rate of convergence]] of 2/3).
 
For discontinuous functions, this method can only be expected to find a point where the function changes sign (for example at ''x=0'' for [[multiplicative inverse|''1/x'']] or the [[sign function]]).  In addition to sign changes, it is also possible for the method to converge to a point where the limit of the function is zero, even if the function is undefined (or has another value) at that point (for example at ''x=0'' for the function given by ''f(x)=abs(x)-x²'' when ''x≠0'' and by ''f(0)=5'', starting with the interval [-0.5, 3.0]).
It is mathematically possible with discontinuous functions for the method to fail to converge to a zero limit or sign change, but this is not a problem in practice since it would require an infinite sequence of coincidences for both endpoints to get stuck converging to discontinuities where the sign does not change (for example at ''x=±1'' in ''f(x)=1/(x-1)²+1/(x+1)²'').  The [[method of bisection]] avoids this hypothetical convergence problem.
 
==Illinois algorithm==
While the unmodified method of false position can stagnate as described above, it would be a mistake to think that it is unsalvageable. The failure mode is easy to detect (the same end-point is retained twice in a row) and easily remedied by next picking a modified false position, such as
:<math> c_k = \frac{\frac{1}{2}f(b_k) a_k- f(a_k) b_k}{\frac{1}{2}f(b_k)-f(a_k)}</math>
or
:<math> c_k = \frac{f(b_k) a_k- \frac{1}{2}f(a_k) b_k}{f(b_k)-\frac{1}{2}f(a_k)}</math>
down-weighting one of the endpoint values to force the next ''c''<sub>k</sub> to occur on that side of the function. The factor of 2 above looks like a hack, but it guarantees superlinear convergence (asymptotically, the algorithm will perform two regular steps after any modified step, and has order of convergence 1.442).  There are other ways to pick the rescaling which give even better superlinear convergence rates.{{citation needed|date=June 2013}}
 
The above adjustment to ''regula falsi'' is sometimes called the '''Illinois algorithm'''.<ref>{{cite book |title=Numerical Methods |first1=Germund |last1=Dahlquist |authorlink1=Germund Dahlquist |first2=Åke |last2=Björck |pages=231–232 |url=http://books.google.com/books?id=armfeHpJIwAC&pg=PA232 |origyear=1974 |year=2003 |publisher=Dover |isbn=978-0486428079 }}</ref><ref>{{cite doi|10.1007/BF01934364}}</ref> Ford (1995) summarizes and analyzes this and other similar superlinear variants of the method of false position.<ref>{{Citation |first=J. A. |last=Ford |year=1995 |url=http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.8676 |title=Improved Algorithms of Illinois-type for the Numerical Solution of Nonlinear Equations |series=Technical Report |id=CSM-257 |publisher=University of Essex Press }}</ref>
 
==Example code==
 
This example program, written in the [[C (programming language)|C programming language]], is written for clarity rather than efficiency. It solves the same problem as the [[Newton's method]] and [[secant method]] example code: finding the positive number ''x'' where cos(''x'') = ''x''<sup>3</sup>. This problem is transformed into a root-finding problem of the form ''f''(''x'') = cos(''x'') &minus; ''x''<sup>3</sup> = 0. Note that the halving steps in the code implement the Illinois adjustment described above.
 
<syntaxhighlight lang="c">
#include <stdio.h>
#include <math.h>

double f(double x)
{
    return cos(x) - x*x*x;
}

/* s,t: endpoints of an interval where we search
   e: half of upper bound for relative error
   m: maximal number of iterations */
double FalsiMethod(double s, double t, double e, int m)
{
    double r = s, fr;   /* initialize r in case the loop body never runs */
    int n, side = 0;
    /* starting values at endpoints of interval */
    double fs = f(s);
    double ft = f(t);

    for (n = 0; n < m; n++)
    {
        r = (fs*t - ft*s) / (fs - ft);
        if (fabs(t-s) < e*fabs(t+s)) break;
        fr = f(r);

        if (fr * ft > 0)
        {
            /* fr and ft have the same sign, copy r to t */
            t = r; ft = fr;
            if (side == -1) fs /= 2;   /* Illinois: halve the retained value */
            side = -1;
        }
        else if (fs * fr > 0)
        {
            /* fr and fs have the same sign, copy r to s */
            s = r; fs = fr;
            if (side == +1) ft /= 2;   /* Illinois: halve the retained value */
            side = +1;
        }
        else
        {
            /* fr is zero (or numerically indistinguishable from zero) */
            break;
        }
    }
    return r;
}

int main(void)
{
    printf("%0.15f\n", FalsiMethod(0, 1, 5E-15, 100));
    return 0;
}
</syntaxhighlight>
 
After running this code, the final answer is approximately
0.865474033101614
 
==See also==
* [[Ridders' method]], another root-finding method based on the false position method
* [[Brent's method]]
* [[Secant method]]
 
==References==
{{reflist|30em}}
 
==Further reading==
* Richard L. Burden, J. Douglas Faires (2000). ''Numerical Analysis'', 7th ed. Brooks/Cole. ISBN 0-534-38216-9.
* L.E. Sigler (2002). ''Fibonacci's Liber Abaci, Leonardo Pisano's Book of Calculation''. Springer-Verlag, New York. ISBN 0-387-40737-5.
 
==External links==
*[http://math.fullerton.edu/mathews/n2003/RegulaFalsiMod.html The Regula Falsi Method by John H. Mathews]
 
[[Category:Root-finding algorithms]]
[[Category:Articles with example C code]]

