Machine epsilon

Machine epsilon gives an upper bound on the relative error due to rounding in floating point arithmetic. This value characterizes computer arithmetic in the field of numerical analysis, and by extension in the subject of computational science. The quantity is also called macheps or unit roundoff, and it has the symbols Greek epsilon ϵ or bold Roman u, respectively.

Values for standard hardware floating point arithmetic

The following values of machine epsilon apply to standard floating point formats:

IEEE 754-2008   Common name                    C++ data type    Base b   Precision p                 Machine epsilon b^-(p-1)/2   Machine epsilon b^-(p-1)
binary16        half precision                 not available    2        11 (one bit is implicit)    2^-11 = 4.88e-04             2^-10 = 9.77e-04
binary32        single precision               float            2        24 (one bit is implicit)    2^-24 = 5.96e-08             2^-23 = 1.19e-07
binary64        double precision               double           2        53 (one bit is implicit)    2^-53 = 1.11e-16             2^-52 = 2.22e-16
binary80        extended precision             __float80[1]     2        64                          2^-64 = 5.42e-20             2^-63 = 1.08e-19
binary128       quad(ruple) precision          __float128[1]    2        113 (one bit is implicit)   2^-113 = 9.63e-35            2^-112 = 1.93e-34
decimal32       single precision decimal       _Decimal32[2]    10       7                           5 × 10^-7                    10^-6
decimal64       double precision decimal       _Decimal64[2]    10       16                          5 × 10^-16                   10^-15
decimal128      quad(ruple) precision decimal  _Decimal128[2]   10       34                          5 × 10^-34                   10^-33

The machine epsilon values in the b^-(p-1)/2 column follow the definition used by Prof. Demmel, LAPACK and Scilab; the values in the b^-(p-1) column follow the definition used by Prof. Higham, the ISO C standard, the C, C++ and Python language constants, Mathematica, MATLAB and Octave, and various textbooks - see below for the latter definition.

Formal definition

Rounding is a procedure for choosing the representation of a real number in a floating point number system. For a number system and a rounding procedure, machine epsilon is the maximum relative error of the chosen rounding procedure.

Some background is needed to determine a value from this definition. A floating point number system is characterized by a radix, also called the base, b, and by the precision p, i.e. the number of radix-b digits of the significand (including any leading implicit bit). All the numbers with the same exponent, e, have the spacing b^(e-(p-1)). The spacing changes at the numbers that are perfect powers of b; the spacing on the side of larger magnitude is b times larger than the spacing on the side of smaller magnitude. For example, in binary64 (b = 2, p = 53) the numbers in [1, 2) are spaced 2^-52 apart, while those in [2, 4) are spaced 2^-51 apart.

Since machine epsilon is a bound for relative error, it suffices to consider numbers with exponent e = 0. It also suffices to consider positive numbers. For the usual round-to-nearest kind of rounding, the absolute rounding error is at most half the spacing, or b^-(p-1)/2. This value is the biggest possible numerator for the relative error. The denominator in the relative error is the number being rounded, which should be as small as possible to make the relative error large. The worst relative error therefore happens when rounding is applied to numbers of the form 1 + a where a is between 0 and b^-(p-1)/2. All these numbers round to 1 with relative error a/(1 + a). The maximum occurs when a is at the upper end of its range. The 1 + a in the denominator hardly matters, so it is left off for expediency, and just b^-(p-1)/2 is taken as machine epsilon. As has been shown here, the relative error is worst for numbers that round to 1, so machine epsilon is also called unit roundoff, meaning roughly "the maximum error that can occur when rounding to the unit value".

Thus, the maximum spacing between a normalised floating point number, x, and an adjacent normalised number is 2ϵ·|x|.[3]
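
To make these quantities concrete, the following minimal C sketch (an added illustration, not one of the textbook exercises below) prints the spacing of the binary64 numbers at 1.0, the corresponding unit roundoff, and the spacing at a larger magnitude. It assumes an IEEE 754 double and the default round-to-nearest mode; the comparison with 1.0 may differ on systems that evaluate intermediates in extended precision.

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    /* Spacing of the binary64 numbers at 1.0: b^-(p-1) = 2^-52, i.e. DBL_EPSILON */
    double spacing = nextafter(1.0, 2.0) - 1.0;

    /* Unit roundoff in the sense described above: half the spacing, 2^-53 */
    volatile double u = spacing / 2.0;

    printf("spacing at 1.0   = %.17g (DBL_EPSILON = %.17g)\n", spacing, DBL_EPSILON);
    printf("unit roundoff u  = %.17g\n", u);
    /* Under round-to-nearest-even, 1 + u rounds back to exactly 1 */
    printf("(1.0 + u) == 1.0 : %d\n", (1.0 + u) == 1.0);

    /* The spacing grows with magnitude; for normalised x it never exceeds 2*u*|x| */
    printf("spacing at 1000  = %.17g\n", nextafter(1000.0, 2000.0) - 1000.0);
    return 0;
}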

Arithmetic model

Numerical analysis uses machine epsilon to study the effects of rounding error. The actual errors of machine arithmetic are far too complicated to be studied directly, so instead, the following simple model is used. The IEEE arithmetic standard says all floating point operations are done as if it were possible to perform the infinite-precision operation, and then, the result is rounded to a floating point number. Suppose (1) x, y are floating point numbers, (2) ⊕ denotes a machine arithmetic operation on floating point numbers, such as floating point addition or multiplication, and (3) ∘ denotes the corresponding exact, infinite-precision operation. According to the standard, the computer calculates:

x ⊕ y = round(x ∘ y)

By the meaning of machine epsilon, the relative error of the rounding is at most machine epsilon in magnitude, so:

x ⊕ y = (x ∘ y)(1 + z)

where z in absolute magnitude is at most ϵ or u. The books by Demmel and Higham in the references can be consulted to see how this model is used to analyze the errors of, say, Gaussian elimination.
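
As a concrete sketch of this model (an illustration, not taken from the referenced books), the C fragment below compares a double-precision sum against the same sum carried out in long double, used here as a stand-in for the infinite-precision result. It assumes long double is wider than double on the target platform (if it is not, z simply comes out as zero), and the operands 0.1 and 0.7 are arbitrary.

#include <stdio.h>
#include <float.h>

int main(void)
{
    double x = 0.1, y = 0.7;                               /* arbitrary operands */
    double computed = x + y;                               /* the machine operation */
    long double exact = (long double)x + (long double)y;   /* stand-in for the exact operation */

    /* The model: computed = exact * (1 + z) with |z| <= u = DBL_EPSILON / 2 */
    long double z = ((long double)computed - exact) / exact;
    long double abs_z = (z < 0 ? -z : z);

    printf("z        = %Lg\n", z);
    printf("|z| <= u : %d\n", abs_z <= (long double)DBL_EPSILON / 2.0L);
    return 0;
}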

Variant definitions

The IEEE standard does not define the terms machine epsilon and unit roundoff, so differing definitions of these terms are in use, which can cause some confusion.

The definition given here for machine epsilon is the one used by Prof. James Demmel in lecture scripts[4] and his LAPACK linear algebra package,[5] and by numerics research papers[6] and some scientific computing software.[7] Most numerical analysts use the words machine epsilon and unit roundoff interchangeably with this meaning.

The following different definition is much more widespread outside academia: Machine epsilon is defined as the smallest number that, when added to one, yields a result different from one. By this definition, ϵ equals the value of the unit in the last place relative to 1, i.e. b^-(p-1),[8] and for the round-to-nearest kind of rounding procedure, u=ϵ/2. The prevalence of this definition is rooted in its use in the ISO C Standard for constants relating to floating-point types[9][10] and corresponding constants in other programming languages.[11][12] It is also widely used in scientific computing software,[13][14][15] in the numerics and computing literature[16][17][18][19] and other academic resources.[20][21]

How to determine machine epsilon

Where standard libraries do not provide precomputed values (as <float.h> does with FLT_EPSILON, DBL_EPSILON and LDBL_EPSILON for C, and <limits> does with std::numeric_limits<T>::epsilon() in C++), the best way to determine machine epsilon is to refer to the table above and compute the appropriate power of the base. Computing machine epsilon is often given as a textbook exercise. The examples below compute machine epsilon in the sense of the spacing of the floating point numbers at 1 rather than in the sense of the unit roundoff.
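
Where those constants are available, printing them directly is the quickest check. The short C sketch below (an added illustration, assuming a hosted C implementation) prints the <float.h> values, which are epsilons in the spacing sense, together with their halves, the corresponding unit roundoff values.

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* Spacing-sense epsilons from <float.h>, and the unit roundoff u = epsilon / 2 */
    printf("FLT_EPSILON  = %g\t(u = %g)\n",   FLT_EPSILON, FLT_EPSILON / 2);
    printf("DBL_EPSILON  = %g\t(u = %g)\n",   DBL_EPSILON, DBL_EPSILON / 2);
    printf("LDBL_EPSILON = %Lg\t(u = %Lg)\n", LDBL_EPSILON, LDBL_EPSILON / 2.0L);
    return 0;
}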

Note that results depend on the particular floating-point format used, such as float, double, long double, or similar as supported by the programming language, the compiler, and the runtime library for the actual platform.

Some formats supported by the processor might not be supported by the chosen compiler and operating system. Other formats might be emulated by the runtime library, including arbitrary-precision arithmetic available in some languages and libraries.

Strictly speaking, the term machine epsilon means the 1 + eps accuracy directly supported by the processor (or coprocessor), not the 1 + eps accuracy supported by a specific compiler for a specific operating system, unless that combination is known to use the best available format.

IEEE 754 floating-point formats monotonically increase over positive values and monotonically decrease over negative values. They also have the property that where f(x) is the reinterpretation of x from an unsigned or two's-complement integer format to a floating-point format of the same width, and 0 < |f(x)| < ∞, |f(x+1) − f(x)| ≥ |f(x) − f(x−1)|. In languages that allow type punning and always use IEEE 754-1985, we can exploit this to compute a machine epsilon in constant time. For example, in C:

typedef union {
  long long i64;
  double d64;
} dbl_64;

double machine_eps (double value)
{
    dbl_64 s;
    s.d64 = value;
    s.i64++;                  // step to the adjacent representable double
    return s.d64 - value;     // spacing between value and its neighbour
}

This will give a result of the same sign as value. If a positive result is always desired, the return statement of machine_eps can be replaced with:

    return (s.i64 < 0 ? value - s.d64 : s.d64 - value);

64-bit doubles give 2.220446e-16, which is 2^-52 as expected.

Approximation using C

The following C program does not actually determine the machine epsilon; rather, it determines a number within a factor of two (one binary order of magnitude) of the true machine epsilon, using a linear search.

 #include <stdio.h>

 int main( int argc, char **argv )
 {
    float machEps = 1.0f;

    printf( "current Epsilon, 1 + current Epsilon\n" );
    do {
       printf( "%G\t%.20f\n", machEps, (1.0f + machEps) );
       machEps /= 2.0f;
       // If next epsilon yields 1, then break, because current
       // epsilon is the machine epsilon.
    }
    while ((float)(1.0 + (machEps/2.0)) != 1.0);

    printf( "\nCalculated Machine epsilon: %G\n", machEps );
    return 0;
 }

Abridged Output

$ cc machine_epsilon.c; ./a.out
 current Epsilon, 1 + current Epsilon
1       2.00000000000000000000
0.5     1.50000000000000000000
...
0.000244141     1.00024414062500000000
0.00012207      1.00012207031250000000
6.10352E-05     1.00006103515625000000
3.05176E-05     1.00003051757812500000
1.52588E-05     1.00001525878906250000
7.62939E-06     1.00000762939453125000
3.8147E-06      1.00000381469726562500
1.90735E-06     1.00000190734863281250
9.53674E-07     1.00000095367431640625
4.76837E-07     1.00000047683715820312
2.38419E-07     1.00000023841857910156
Calculated Machine epsilon: 1.19209E-07

Approximation using C++

In languages such as C or C++, an expression like while( 1.0 + eps > 1.0 ) may be evaluated not in 64-bit double precision but in the processor's extended precision (80 bits or more, depending on the processor and compiler options). The program below performs the computation exactly in 32 bits (float) and 64 bits (double) by comparing the stored bit patterns.

#include <iostream>
#include <stdint.h>
#include <iomanip>

template<typename float_t, typename int_t>
float_t machine_eps()
{
	union
	{
		float_t f;
		int_t   i;
	} one, one_plus, little, last_little;

	one.f    = 1.0;
	little.f = 1.0;
	last_little.f = little.f;

	while(true)
	{
		one_plus.f = one.f;
		one_plus.f += little.f;

		if( one.i != one_plus.i )
		{
			last_little.f = little.f;
			little.f /= 2.0;
		}
		else
		{
			return last_little.f;
		}
	}
}

int main()
{
	std::cout << "machine epsilon:\n";
	std::cout << "float: " << std::setprecision(18)<< machine_eps<float, uint32_t>() << std::endl;
	std::cout << "double: " << std::setprecision(18) << machine_eps<double, uint64_t>() << std::endl;
}
$ g++ -o test test.cpp && ./test
machine epsilon:
float: 1.1920928955078125e-07
double: 2.22044604925031308e-16

Instead of writing all of that, the standard library provides the value directly:

#include <iostream>
#include <limits>

int main()
{
	//using a built-in function to display the machine-epsilon given the data type
	std::cout << "The machine precision for double is : " << std::numeric_limits<double>::epsilon() << std::endl;
	std::cout << "The machine precision for long double is : " << std::numeric_limits<long double>::epsilon() << std::endl;
	return 0;
}

You can use std::setprecision(desiredPrecision) to display as many digits as you want; remember to #include <iomanip> before using std::setprecision.

Approximation using Java

A similar Java method:

    private static float calculateMachineEpsilonFloat() {
        float machEps = 1.0f;

        do
           machEps /= 2.0f;
        while ((float) (1.0 + (machEps / 2.0)) != 1.0);

        return machEps;
    }

Another Java implementation (together with a Smalltalk version) can be found in the appendix to Besset's (2000) numerical methods in Java & Smalltalk book. This appendix presents a complete implementation of MACHAR (Cody, 1988) in Smalltalk and Java.

Approximation using Pascal

function machine_epsilon: double;
var one_plus_halfepsilon: double;
begin
  Result := 1.0;
  repeat
    Result := 0.5 * Result;
    { to ensure that the result of the addition has the desired type,
      store it into a variable of this type }
    one_plus_halfepsilon := 1.0 + 0.5 * Result;
  until one_plus_halfepsilon <= 1.0;
end;

Approximation using a Perl one-liner

The following Perl one-liner prints the value of machine epsilon for the float data type (as implemented by the given Perl interpreter).

perl -le '$e=1; $e/=2 while $e/2+1>1; print $e'

On Windows, single-quotes must be changed to double ones.

Approximation using Erlang

The following module will calculate machine epsilon.

-module(machine_epsilon).

-export([calculate_machine_epsilon/0,calculate_machine_epsilon/1]).

calculate_machine_epsilon() -> calculate_machine_epsilon(2).

calculate_machine_epsilon(Base) -> calculate_machine_epsilon(Base, 1, no).

calculate_machine_epsilon(_Base, LastGuess, yes) -> LastGuess;
calculate_machine_epsilon(Base, LastGuess, no) -> 
		CurrentGuess = LastGuess/Base, 
		calculate_machine_epsilon(Base, CurrentGuess, is_less_than_machine_epsilon(CurrentGuess)).

is_less_than_machine_epsilon(Number) -> machine_epsilon_test(Number + 1).

machine_epsilon_test(1.0) -> yes;
machine_epsilon_test(_Other) -> no.

Approximation using Haskell

main = print . last . map (subtract 1) . takeWhile (/= 1) . map (+ 1) . iterate (/2) $ 1

Approximation using MATLAB

macheps = 1;

while (1 + macheps/2) ~= 1.0
    macheps = macheps/2;
end

or

>> fprintf('%.16e\n',eps(1))
2.2204460492503131e-16

Approximation using Python

With NumPy, use the eps attribute of the finfo class. There is also a convenience function, spacing(), which "return(s) the distance between x and the nearest adjacent number" according to its docstring.

For example:

>>> import numpy as np
>>> np.finfo(np.float64).eps  # np.finfo(<dtype>).eps
2.2204460492503131e-16
>>> np.spacing(1)  # == np.finfo(np.float64).eps

The following Python function uses an approximation method to determine the value of machine epsilon for an arbitrary numerical data type.

def machineEpsilon(func=float):
    machine_epsilon = func(1)
    while func(1)+func(machine_epsilon) != func(1):
        machine_epsilon_last = machine_epsilon
        machine_epsilon = func(machine_epsilon) / func(2)
    return machine_epsilon_last

Some examples of its output (using IPython):

In [1]: machineEpsilon(int)
Out[1]: 1
In [2]: machineEpsilon(float)
Out[2]: 2.2204460492503131e-16
In [3]: machineEpsilon(complex)
Out[3]: (2.2204460492503131e-16+0j)

A one-liner to find the machine epsilon is

>>> import itertools
>>> next(2 ** -i for i in itertools.count() if 1 + 2 ** -(i + 1) == 1)
2.220446049250313e-16

The machine epsilon is available from the Python standard library:

>>> from sys import float_info
>>> float_info.epsilon
2.2204460492503131e-16

Using NumPy, the value of an inexact type's machine epsilon can be determined using the eps attribute of numpy.finfo as follows (again, in IPython):

In [1]: import numpy
In [2]: numpy.finfo(numpy.float).eps
Out[2]: 2.2204460492503131e-16
In [3]: numpy.finfo(numpy.complex).eps
Out[3]: 2.2204460492503131e-16

Approximation using Ada

In Ada, epsilon may be inferred from the 'Digits attribute.[22]

Epsilon : constant Real'Base := 1.0 / (10.0 ** Real'Digits);

Approximation using Prolog

An approximation using arithmetic and recursive predicates in Prolog is:

epsilon(X):-
	Y is (1.0 + X),
	Y = 1.0,
	write(X).
epsilon(X):-
	Y is X/2,
	epsilon(Y).

An example execution in the SWI-Prolog interpreter:

1 ?- epsilon(1.0).
1.1102230246251565e-16
true .

Approximation using Fortran

In Fortran 90 and more recent standards, machine epsilon can be obtained from the intrinsic function epsilon().

integer,parameter :: dp = selected_real_kind(15, 307)

macheps = epsilon(1.0_dp)

On most machines, one can just use

macheps = epsilon(0d0)

to get the machine epsilon for double precision numbers.

An approximation in Fortran 77 is:

      PROGRAM MACHINEEPSILON
      IMPLICIT NONE
      DOUBLE PRECISION MACHEPS
     
      MACHEPS = 1.D0
 100  CONTINUE
      MACHEPS = MACHEPS / 2.D0
      IF ( 1.D0 + MACHEPS / 2.D0 .EQ. 1.D0 ) GOTO 110
      GO TO 100
 110  CONTINUE

      PRINT*, MACHEPS
      END

An example, assuming the above text is saved in the file "machine_epsilon.f", using the gfortran compiler:

   $ gfortran -O3 machine_epsilon.f -o prog_eps.exe
   $ ./prog_eps.exe
      2.22044604925031308E-016

Approximation using C#

The Single.Epsilon and Double.Epsilon constants in the .NET runtime libraries are not machine epsilon in the sense used here: they are the smallest positive (denormal) Single and Double values (1.401E-45 and 4.94E-324 respectively), not the values of epsilon given above.

    private static float CalculateFloatEpsilon()
    {
        float machineEpsilon = 1.0f;

        while ((float)(1.0 + (machineEpsilon / 2.0)) != 1.0f)
        {
            machineEpsilon /= 2.0f;
        } 

        return machineEpsilon;
    }

Notes and references


  • Anderson, E.; LAPACK Users' Guide, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, third edition, 1999.
  • Cody, William J.; MACHAR: A Subroutine to Dynamically Determine Machine Parameters, ACM Transactions on Mathematical Software, Vol. 14(4), 1988, 303-311.
  • Besset, Didier H.; Object-Oriented Implementation of Numerical Methods, Morgan & Kaufmann, San Francisco, CA, 2000.
  • Demmel, James W., Applied Numerical Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1997.
  • Higham, Nicholas J.; Accuracy and Stability of Numerical Algorithms, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, second edition, 2002.
  • Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; and Flannery, Brian P.; Numerical Recipes in Fortran 77, 2nd ed., Chap. 20.2, pp. 881–886
  • Forsythe, George E.; Malcolm, Michael A.; Moler, Cleve B.; "Computer Methods for Mathematical Computations", Prentice-Hall, ISBN 0-13-165332-6, 1977

See also

  • Floating point, general discussion of accuracy issues in floating point arithmetic
  • Ulp, unit in the last place


  1. Floating Types - Using the GNU Compiler Collection (GCC)
  2. Decimal Float - Using the GNU Compiler Collection (GCC)
  3.
  4. Template:Cite web
  5. Template:Cite web
  6. Template:Cite web
  7. Template:Cite web
  8. note that here p is defined as the precision, i.e. the total number of bits in the significand including implicit leading bit, as used in the table above
  9.
  10. Template:Cite web
  11. Template:Cite web
  12. Template:Cite web
  13. Template:Cite web
  14. Template:Cite web
  15. Template:Cite web
  16.
  17.
  18.
  19.
  20. Template:Cite web
  21. Template:Cite web
  22. "3.5.8 Operations of Floating Point Types" Template:Cite web