[[Image:Binary logarithm plot.png|thumbnail|right|200px|Plot of log<sub>2</sub> ''n'']]


In [[mathematics]], the '''binary logarithm''' (log<sub>2</sub>&nbsp;''n'') is the [[logarithm]] to the [[Binary numeral system|base 2]]. It is the [[inverse function]] of ''n''&nbsp;↦&nbsp;2<sup>''n''</sup>. The binary logarithm of ''n'' is the power to which the number 2 must be raised to obtain the value&nbsp;''n''.  This makes the binary logarithm useful for anything involving [[powers of 2]], i.e. doubling. For example, the binary logarithm of 1 is 0, the binary logarithm of 2 is 1, the binary logarithm of 4 is&nbsp;2, the binary logarithm of 8 is 3, the binary logarithm of 16 is 4 and the binary logarithm of 32 is&nbsp;5.
 
==Applications==
===Information theory===
The binary logarithm is often used in [[computer science]] and [[information theory]] because it is closely connected to the [[binary numeral system]]. It is frequently written '''ld ''n''''', from [[Latin]] ''[[wikt:en:logarithmus#Latin|logarithmus]] [[wikt:en:dualis#Latin|duālis]]'', or '''lg ''n''''', although the [[ISO 31-11|ISO specification]] is that it should be '''lb (''n'')''', lg (''n'') being reserved for log<sub>10</sub> ''n''. The number of digits ([[bit]]s) in the binary representation of a positive integer ''n'' is the [[Floor and ceiling functions|integral part]] of 1&nbsp;+&nbsp;lb&nbsp;''n'', i.e.
 
:<math> \lfloor \operatorname{lb}\, n\rfloor + 1. \, </math>
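The bit-count relationship can be checked with a short Python sketch (the function name <code>num_bits</code> is chosen here for illustration; Python's built-in <code>int.bit_length()</code> computes the same quantity):

```python
import math

def num_bits(n):
    """Number of digits in the binary representation of n >= 1,
    i.e. floor(lb n) + 1."""
    return math.floor(math.log2(n)) + 1

# int.bit_length() and the length of bin(n) agree with the formula.
for n in (1, 2, 255, 256, 1000):
    assert num_bits(n) == n.bit_length() == len(bin(n)) - 2
```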
 
In information theory, the definition of the amount of [[self-information]] and [[information entropy]] involves the binary logarithm; this is needed because the unit of information, the bit, refers to information resulting from an occurrence of one of two equally probable alternatives.
 
===Computational complexity===
The binary logarithm also frequently appears in the [[analysis of algorithms]]. If a number ''n'' greater than 1 is divided by 2 repeatedly, the number of iterations needed to get a value at most 1 is again the integral part of lb&nbsp;''n''. This idea is used in the analysis of several [[algorithm]]s and [[data structure]]s. For example, in [[binary search]], the size of the problem to be solved is halved with each iteration, and therefore roughly lb ''n'' iterations are needed to obtain a problem of size 1, which is solved easily in constant time. Similarly, a perfectly balanced [[binary search tree]] containing ''n'' elements has height lb&nbsp;''n''&nbsp;+&nbsp;1.
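As a sketch of how this bound shows up in practice, the following Python binary search also counts its loop iterations (the iteration counter is added purely for illustration):

```python
import math

def binary_search(sorted_list, target):
    """Standard binary search that also reports how many times the
    loop ran; the count is at most floor(lb n) + 1."""
    lo, hi = 0, len(sorted_list) - 1
    iterations = 0
    while lo <= hi:
        iterations += 1
        mid = (lo + hi) // 2          # halve the remaining range
        if sorted_list[mid] == target:
            return mid, iterations
        if sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, iterations

data = list(range(1024))              # n = 1024, lb(1024) = 10
index, iters = binary_search(data, 500)
assert index == 500
assert iters <= math.floor(math.log2(len(data))) + 1
```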
 
However, the running time of an algorithm is usually expressed in [[big O notation]], ignoring constant factors.  Since log<sub>2</sub> ''n'' = (1/log<sub>''k''</sub>&nbsp;2)log<sub>''k''</sub>&nbsp;''n'', where ''k'' can be any number greater than 1, algorithms that run in ''O''(log<sub>2</sub>&nbsp;''n'') time can also be said to run in, say, ''O''(log<sub>13</sub>&nbsp;''n'') time. The base of the logarithm in expressions such as ''O''(log&nbsp;''n'') or ''O''(''n''&nbsp;log&nbsp;''n'') is therefore not important.
In other contexts, though, the base of the logarithm needs to be specified. For example ''O''(2<sup>lb&nbsp;''n''</sup>) is not the same as ''O''(2<sup>ln&nbsp;''n''</sup>) because the former is equal to ''O''(''n'') and the latter to ''O''(''n''<sup>0.6931...</sup>).
 
Algorithms with running time ''n''&nbsp;lb&nbsp;''n'' are sometimes called [[linearithmic]]. Some examples of algorithms with running time ''O''(lb&nbsp;''n'') or ''O''(''n''&nbsp;lb&nbsp;''n'') are:
 
*[[quicksort|average time quicksort]]
*[[binary search tree]]s
*[[merge sort]]
*[[Monge array]] calculation
 
=== Single-elimination tournaments ===
 
In competitive games and sports involving two players/teams in each game/match, the binary logarithm indicates the number of rounds necessary in a [[single-elimination tournament]] to determine a winner. For example, a tournament of 4 players requires lb (4) = 2 rounds to determine the winner, a tournament of 32 teams requires lb (32) = 5 rounds, etc. When the number of players/teams ''n'' is not a power of 2, lb (''n'') is rounded up, since it will be necessary to have at least one round in which not all remaining competitors play. For example, lb (6) is approximately 2.585, which rounded up indicates that a tournament of 6 requires 3 rounds (either 2 teams will sit out the first round, or one team will sit out the second round).
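The rounding rule above is just the ceiling of the binary logarithm; a one-line Python version (the function name is illustrative, and for the small tournament sizes discussed here floating-point log2 is exact enough):

```python
import math

def rounds_needed(competitors):
    """Rounds in a single-elimination tournament with the given
    number of players/teams: lb(n), rounded up."""
    return math.ceil(math.log2(competitors))

assert rounds_needed(4) == 2    # lb(4) = 2 exactly
assert rounds_needed(32) == 5   # lb(32) = 5 exactly
assert rounds_needed(6) == 3    # lb(6) is about 2.585, rounded up
```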
 
==Using calculators==
An easy way to calculate log<sub>2</sub>(''n'') on [[calculator]]s that do not have a log<sub>2</sub> function is to use the [[natural logarithm]] "ln" or the [[common logarithm]] "log" function, both of which are found on most scientific calculators. The relevant [[Logarithm#Change_of_base|change of logarithm base]] [[formulae]] are:
 
:log<sub>2</sub>(''n'') = ln(''n'')/ln(2) = log(''n'')/log(2)
 
so
 
:log<sub>2</sub>(''n'') = log<sub>''e''</sub>(''n'')&times;1.442695... = log<sub>10</sub>(''n'')&times;3.321928...
 
and this produces the curiosity that log<sub>2</sub>(''n'') is within 0.6% of log<sub>''e''</sub>(''n'')&nbsp;+&nbsp;log<sub>10</sub>(''n''). In fact, log<sub>''e''</sub>(''n'')&nbsp;+&nbsp;log<sub>10</sub>(''n'') is log<sub>2.0081359...</sub>(''n''), where the base is ''e''<sup>1/(1+log<sub>10</sub>''e'')</sup> =&nbsp;10<sup>1/(1&nbsp;+&nbsp;log<sub>''e''</sub>10)</sup> ≈&nbsp;2.00813&nbsp;59293&nbsp;46243&nbsp;95422&nbsp;87563&nbsp;25191&nbsp;0 (to 32 significant figures). Of course, log<sub>10</sub>10 =&nbsp;log<sub>''e''</sub>''e''&nbsp;=&nbsp;1.
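The same change-of-base arithmetic can be verified in Python, whose <code>math</code> module exposes all three logarithms:

```python
import math

n = 100.0
via_ln  = math.log(n) / math.log(2)      # ln(n)/ln(2)
via_log = math.log10(n) / math.log10(2)  # log(n)/log(2)
direct  = math.log2(n)

assert abs(via_ln - direct) < 1e-12
assert abs(via_log - direct) < 1e-12

# Equivalently, multiply by the precomputed constants from the text.
assert abs(math.log(n)   * 1.442695040888963 - direct) < 1e-9
assert abs(math.log10(n) * 3.321928094887362 - direct) < 1e-9
```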
 
==Algorithm==
===Integer===
For integer [[domain of a function|domain]] and [[range (mathematics)|range]], the binary logarithm can be computed [[rounding]] up or down.  These two forms of integer binary logarithm are related by this formula:
 
:<math> \lfloor \log_2(n) \rfloor = \lceil \log_2(n + 1) \rceil - 1, \text{ if }n \ge 1.</math> <ref name="Hackers">{{Cite book | title=[[Hacker's Delight]] | first1=Henry S. | last1=Warren Jr. | year=2002 | publisher=Addison Wesley | isbn=978-0-201-91465-8 | pages=215}}</ref>
The definition can be extended by defining <math> \lfloor \log_2(0) \rfloor = -1</math>. This function is related to the [[number of leading zeros]] of the 32-bit unsigned binary representation of ''n'', nlz(''n''):
:<math>\lfloor \log_2(n) \rfloor = 31 - \operatorname{nlz}(n).</math><ref name="Hackers" />
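Python integers have no nlz operation, but <code>int.bit_length()</code> carries the equivalent information; here is a sketch of both rounding forms and the identity relating them (function names are illustrative):

```python
def floor_log2(n):
    """floor(lb n) for n >= 1; -1 for n == 0 by the extended definition."""
    return n.bit_length() - 1

def ceil_log2(n):
    """ceil(lb n) for n >= 1."""
    return (n - 1).bit_length() if n > 1 else 0

# The identity floor(lb n) = ceil(lb(n + 1)) - 1 for n >= 1:
for n in range(1, 1000):
    assert floor_log2(n) == ceil_log2(n + 1) - 1

assert floor_log2(0) == -1  # extended definition
```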
 
The integer binary logarithm can be interpreted as the zero-based index of the most significant 1 bit in the input. In this sense it is the complement of the [[find first set]] operation, which finds the index of the least significant 1 bit. The article [[find first set]] contains more information on algorithms, architecture support, and applications for the integer binary logarithm.
 
===Real number===
 
For a general [[positive number|positive real number]], the binary logarithm may be computed in two parts:
# Compute the [[integer]] part, <math>\lfloor\operatorname{lb}(x)\rfloor</math>
# Compute the fractional part
 
Computing the integral part is straightforward.  For any ''x''&nbsp;>&nbsp;0, there exists a unique integer ''n'' such that 2<sup>''n''</sup>&nbsp;≤&nbsp;''x''&nbsp;<&nbsp;2<sup>''n''+1</sup>, or equivalently 1&nbsp;≤&nbsp;2<sup>&minus;''n''</sup>''x''&nbsp;<&nbsp;2.  Now the integer part of the logarithm is simply ''n'', and the fractional part is lb(2<sup>&minus;''n''</sup>''x'').  In other words:
 
:<math>\operatorname{lb}(x) = n + \operatorname{lb}(y) \quad\text{where } y = 2^{-n}x \text{ and } y \in [1,2)</math>
 
The fractional part of the result is <math>\operatorname{lb} y</math>, and can be computed [[recursion|recursively]], using only elementary multiplication and division.  To compute the fractional part:
# We start with a real number <math>y \in [1,2)</math>.  If <math>y=1</math>, then we are done and the fractional part is zero.
# Otherwise, square <math>y</math> repeatedly until the result is <math>z \in [2,4)</math>.  Let <math>m</math> be the number of squarings needed.  That is, <math>z = y^{2^m}</math> with <math>m</math> chosen such that <math>z \in [2,4)</math>.
# Taking the logarithm of both sides and doing some algebra:
#:<math>\begin{align}
\operatorname{lb}\,z &= 2^m \operatorname{lb}\,y \\
\operatorname{lb}\,y &= \frac{ \operatorname{lb} z }{ 2^m } \\
&= \frac{ 1 + \operatorname{lb}(z/2) }{ 2^m } \\
&= 2^{-m} + 2^{-m}\operatorname{lb}(z/2)
\end{align}</math>
# Notice that <math>z/2</math> is once again a real number in the interval <math>[1,2)</math>.
# Return to step 1, and compute the binary logarithm of <math>z/2</math> using the same method recursively.
 
The result of this is expressed by the following formulas, in which <math>m_i</math> is the number of squarings required in the ''i''-th recursion of the algorithm:
:<math>\begin{align}
\operatorname{lb}\,x &= n + 2^{-m_1} \left( 1 + 2^{-m_2} \left( 1 + 2^{-m_3} \left( 1 + \cdots \right)\right)\right) \\
&= n + 2^{-m_1} + 2^{-m_1-m_2} + 2^{-m_1-m_2-m_3} + \cdots
\end{align}</math>
 
In the special case where the fractional part in step 1 is found to be zero, this is a ''finite'' sequence terminating at some point.  Otherwise, it is an [[infinite series]] which [[convergent series|converge]]s by the [[ratio test]], since each term is at most half the previous one (because every <math>m_i \ge 1</math>).  For practical use, this infinite series must be truncated to reach an approximate result.  If the series is truncated after the ''i''-th term, then the error in the result is less than <math>2^{-(m_1+m_2+\cdots+m_i)}</math>.
 
Fortunately, in practice we can do the computation and know the error margin without doing any algebra or any infinite series truncation. Suppose we want to compute the binary log of 1.65 with four binary digits. Repeat these steps four times:
# square the number
# if the square is ≥ 2, divide it by two and write a 1; otherwise, write a 0.
The digits we write down are the fractional part of the logarithm, expressed in binary. This works when we start with any number between 1 and 2. So:
* 1.65 squared is 2.72, which is more than two, so we halve it to 1.36 and write a 1
* 1.36 squared is 1.85, less than two, so no halving, and we write a 0
* 1.85 squared is 3.43, more than two, so we halve it to 1.72 and write a 1
* 1.72 squared is 2.95, more than two, so we write a 1 (no need to halve 2.95 because we are already done)
We wrote 1011, so the binary logarithm of 1.65 written in binary is 0.1011 (or, written as a fraction, 11/16), and the error is less than 1/16. Adding 1/32, we get 23/32, which has error less than 1/32.  In general, to get error less than 0.5<sup>''N''+1</sup>, we need ''N'' squarings and at most ''N'' halvings.
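The squaring-and-halving procedure above translates directly into Python; this sketch computes the fractional digits exactly as in the worked example (the function name is illustrative):

```python
def lb_fraction_digits(y, num_digits):
    """Fractional binary digits of lb(y) for y in [1, 2), by the
    repeated squaring-and-halving method described above."""
    assert 1 <= y < 2
    digits = []
    for _ in range(num_digits):
        y = y * y           # square the number
        if y >= 2:          # write a 1 and halve ...
            y /= 2
            digits.append(1)
        else:               # ... otherwise write a 0
            digits.append(0)
    return digits

# The worked example: lb(1.65) to four binary digits is 0.1011.
assert lb_fraction_digits(1.65, 4) == [1, 0, 1, 1]
```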
 
==See also==
* [[Natural logarithm]] (base [[e (mathematical constant)|e]])
* [[Common logarithm]] (base 10)
 
==References==
{{reflist}}
 
[[Category:Binary arithmetic]]
[[Category:Calculus]]
[[Category:Logarithms]]
[[Category:Articles with example Perl code]]
[[Category:Articles with example Python code]]

Revision as of 18:48, 24 January 2014
