Unit in the last place

{{Short description|Floating-point accuracy metric}}

{{more citations needed|date=March 2015}}

{{Use dmy dates|date=July 2021}}

In computer science and numerical analysis, unit in the last place or unit of least precision (ulp) is the spacing between two consecutive floating-point numbers, i.e., the value the least significant digit (rightmost digit) represents if it is 1. It is used as a measure of accuracy in numeric calculations.{{cite journal |author-first=David |author-last=Goldberg |title=What Every Computer Scientist Should Know About Floating-Point Arithmetic |journal=ACM Computing Surveys |date=March 1991 |volume=23 |issue=1 |pages=5–48 |doi=10.1145/103162.103163 |doi-access=free|s2cid=222008826}} (With the addendum "Differences Among IEEE 754 Implementations": [https://web.archive.org/web/20171011072644/http://www.cse.msu.edu/~cse320/Documents/FloatingPoint.pdf], [https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html]).

==Definition==

The most common definition is: In radix b with precision p, if b^e \le |x| < b^{e+1}, then {{nowrap|\operatorname{ulp}(x) = b^{\max \{ e, \, e_\min \} - p + 1},{{cite book |author-last1=Muller |author-first1=Jean-Michel |author-last2=Brunie |author-first2=Nicolas |author-last3=de Dinechin |author-first3=Florent |author-last4=Jeannerod |author-first4=Claude-Pierre |author-first5=Mioara |author-last5=Joldes |author-last6=Lefèvre |author-first6=Vincent |author-last7=Melquiond |author-first7=Guillaume |author-last8=Revol |author-first8=Nathalie|author8-link=Nathalie Revol |author-last9=Torres |author-first9=Serge |title=Handbook of Floating-Point Arithmetic |date=2018 |orig-year=2010 |publisher=Birkhäuser |edition=2 |isbn=978-3-319-76525-9 |doi=10.1007/978-3-319-76526-6}}}} where e_\min is the minimal exponent of the normal numbers. In particular, \operatorname{ulp}(x) = b^{e - p + 1} for normal numbers, and \operatorname{ulp}(x) = b^{e_\min - p + 1} for subnormals.
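
For illustration, the definition can be evaluated directly for the IEEE 754 binary64 format (b = 2, p = 53, e_\min = -1022) and compared with Java's Math.ulp. The following fragment is a minimal sketch in the style of Example 2 below; the variable names are only illustrative:

// evaluate b^(max(e, e_min) - p + 1) for binary64 and compare with Math.ulp
double x = Math.PI;
int p = 53, eMin = -1022;
// for normal x, Math.getExponent returns e with 2^e <= |x| < 2^(e+1);
// for subnormal x it returns eMin - 1, which Math.max(...) maps back to eMin
int e = Math.getExponent(x);
double ulpFromDefinition = Math.scalb(1.0, Math.max(e, eMin) - p + 1);
// -> 4.440892098500626E-16 (2^-51)
double ulpLibrary = Math.ulp(x);
// -> 4.440892098500626E-16 (2^-51)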

Another definition, suggested by John Harrison, is slightly different: \operatorname{ulp}(x) is the distance between the two closest straddling floating-point numbers a and b (i.e., satisfying a \le x \le b and a \neq b), assuming that the exponent range is not upper-bounded.{{cite web|last=Harrison|first=John|title=A Machine-Checked Theory of Floating Point Arithmetic|url=https://www.cl.cam.ac.uk/~jrh13/papers/fparith.html|access-date=2013-07-17}}Muller, Jean-Michel (November 2005). "On the definition of ulp(x)". INRIA Technical Report 5504. Retrieved March 2012 from http://ljk.imag.fr/membres/Carine.Lucas/TPScilab/JMMuller/ulp-toms.pdf. These definitions differ only at signed powers of the radix.
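
For example, at a power of the radix the two definitions give different values. The following Java fragment (a sketch using the standard library methods Math.ulp and Math.nextDown) shows this for x = 1.0 in binary64:

double x = 1.0;                            // a power of the radix b = 2
double ulpStandard = Math.ulp(x);          // common definition above
// -> 2.220446049250313E-16 (2^-52)
double ulpHarrison = x - Math.nextDown(x); // Harrison: distance of the closest
                                           // straddling pair (nextDown(1.0), 1.0)
// -> 1.1102230246251565E-16 (2^-53)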

The IEEE 754 specification, which all modern floating-point hardware follows, requires that the result of an elementary arithmetic operation (addition, subtraction, multiplication, division, and square root since 1985, and FMA since 2008) be correctly rounded. In rounding to nearest, this implies that the rounded result is within 0.5 ulp of the mathematically exact result, using John Harrison's definition; conversely, this property implies that the distance between the rounded result and the mathematically exact result is minimized (for the halfway cases, it is satisfied by two consecutive floating-point numbers). Reputable numeric libraries compute the basic transcendental functions to between 0.5 and about 1 ulp. Only a few libraries compute them within 0.5 ulp, this problem being complex due to the Table-maker's dilemma.{{cite web |last=Kahan |first=William |title=A Logarithm Too Clever by Half |url=https://people.eecs.berkeley.edu/~wkahan/LOG10HAF.TXT |access-date=2008-11-14}}
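
The 0.5 ulp bound can be checked directly, since a BigDecimal (as in Example 2 below, assuming java.math.BigDecimal is imported) represents every finite double exactly. The following Java fragment is a sketch measuring the rounding error of a correctly rounded addition in ulps; the variable names are only illustrative:

double a = 0.1, b = 0.2;
BigDecimal exact   = new BigDecimal(a).add(new BigDecimal(b)); // exact sum of the two doubles
BigDecimal rounded = new BigDecimal(a + b);                    // correctly rounded sum
BigDecimal errorInUlps = rounded.subtract(exact).abs()
        .divide(new BigDecimal(Math.ulp(a + b)));
// -> 0.5 (this particular sum happens to be a halfway case, so the 0.5 ulp bound is attained)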

Since the 2010s, advances in floating-point mathematics have allowed correctly rounded functions to be almost as fast on average as these earlier, less accurate functions. A correctly rounded function would also be fully reproducible. {{Clarify|date=June 2024|reason=It seems that 0.501 ulp is just an example, not really an earlier, intermediate milestone. |text=An earlier, intermediate milestone was the 0.501 ulp functions,}} which theoretically would only produce one incorrect rounding out of 1000 random floating-point inputs.{{cite web |last1=Brisebarre |first1=Nicolas |last2=Hanrot |first2=Guillaume |last3=Muller |first3=Jean-Michel |last4=Zimmermann |first4=Paul |title=Correctly-rounded evaluation of a function: why, how, and at what cost? |url=https://hal.science/hal-04474530 |date=May 2024}}

==Examples==

===Example 1===

Let x be a positive floating-point number and assume that the active rounding mode is round to nearest, ties to even, denoted \operatorname{RN}. If \operatorname{ulp}(x) \le 1, then \operatorname{RN} (x + 1) > x. Otherwise, \operatorname{RN} (x + 1) = x or \operatorname{RN} (x + 1) = x + \operatorname{ulp}(x), depending on the value of the least significant digit and the exponent of x. This is demonstrated in the following Haskell code typed at an interactive prompt:

> until (\x -> x == x+1) (+1) 0 :: Float
1.6777216e7
> it-1
1.6777215e7
> it+1
1.6777216e7

Here we start with 0 in single precision (binary32) and repeatedly add 1 until the operation does not change the value. Since the significand for a single-precision number contains 24 bits, the first integer that is not exactly representable is 2^{24} + 1, and this value rounds to 2^{24} in round to nearest, ties to even. Thus the result is equal to 2^{24}.
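
The same saturation can be observed in Java, whose float type is the binary32 format. The following fragment is a minimal sketch analogous to the Haskell session above:

float x = 0f;
while (x != x + 1f) {
    x = x + 1f;   // exact as long as the integer result fits in the 24-bit significand
}
// -> x == 1.6777216E7, i.e. 2^24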

===Example 2===

The following example in Java approximates {{pi}} as a floating-point value by finding the two double values bracketing \pi: p_0 < \pi < p_1.

// π with 20 decimal digits
BigDecimal π = new BigDecimal("3.14159265358979323846");

// round to the nearest double floating-point value
double p0 = π.doubleValue();
// -> 3.141592653589793 (hex: 0x1.921fb54442d18p1)

// p0 is smaller than π, so find next number representable as double
double p1 = Math.nextUp(p0);
// -> 3.1415926535897936 (hex: 0x1.921fb54442d19p1)

Then \operatorname{ulp}(\pi) is determined as \operatorname{ulp}(\pi) = p_1 - p_0.

// ulp(π) is the difference between p1 and p0
BigDecimal ulp = new BigDecimal(p1).subtract(new BigDecimal(p0));
// -> 4.44089209850062616169452667236328125E-16
// (this is precisely 2**(-51))

// same result when using the standard library function
double ulpMath = Math.ulp(p0);
// -> 4.440892098500626E-16 (hex: 0x1.0p-51)

===Example 3===

Another example, in Python, also typed at an interactive prompt, is:

>>> x = 1.0
>>> p = 0
>>> while x != x + 1:
...     x = x * 2
...     p = p + 1
...
>>> x
9007199254740992.0
>>> p
53
>>> x + 2 + 1
9007199254740996.0

In this case, we start with x = 1 and repeatedly double it until x = x + 1. Similarly to Example 1, the result is 2^{53} because the double-precision floating-point format uses a 53-bit significand.

==Language support==

The Boost C++ libraries provide the functions boost::math::float_next, boost::math::float_prior, boost::math::nextafter and boost::math::float_advance to obtain nearby (and distant) floating-point values,{{cite book | url=https://www.boost.org/doc/libs/release/libs/math/doc/html/math_toolkit/next_float/float_advance.html | title=Boost float_advance}} and boost::math::float_distance(a, b) to calculate the floating-point distance between two doubles.{{cite book | url=https://www.boost.org/doc/libs/release/libs/math/doc/html/math_toolkit/next_float/float_distance.html | title=Boost float_distance}}

The C language library provides functions to calculate the next floating-point number in some given direction: nextafterf and nexttowardf for float, nextafter and nexttoward for double, nextafterl and nexttowardl for long double, declared in the header math.h. It also provides the macros FLT_EPSILON, DBL_EPSILON, LDBL_EPSILON, which represent the positive difference between 1.0 and the next greater representable number in the corresponding type (i.e. the ulp of one).{{cite book | url=https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf | title=ISO/IEC 9899:1999 specification | at=p. 237, §7.12.11.3 The nextafter functions and §7.12.11.4 The nexttoward functions}}

The Go standard library provides the functions math.Nextafter (for 64-bit floats) and math.Nextafter32 (for 32-bit floats), both of which return the next representable floating-point value towards another provided floating-point value.{{cite web |title=math package - math - Go Packages |url=https://pkg.go.dev/math |website=pkg.go.dev |access-date=19 May 2025}}

The Java standard library provides the functions {{Javadoc:SE|java/lang|Math|ulp(double)}} and {{Javadoc:SE|java/lang|Math|ulp(float)}}. They were introduced with Java 1.5.
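
For example, the following short sketch prints the ulp of a few values; the printed results follow from the binary64 and binary32 formats:

System.out.println(Math.ulp(1.0));    // -> 2.220446049250313E-16 (2^-52, double)
System.out.println(Math.ulp(1.0f));   // -> 1.1920929E-7 (2^-23, float)
System.out.println(Math.ulp(0.0));    // -> 4.9E-324 (Double.MIN_VALUE)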

The Swift standard library provides access to the next floating-point number in some given direction via the instance properties nextDown and nextUp. It also provides the instance property ulp and the type property ulpOfOne (which corresponds to C macros like FLT_EPSILON{{cite web | url=https://developer.apple.com/documentation/swift/floatingpoint/ulpofone-7hdlb | title=ulpOfOne - FloatingPoint {{pipe}} Apple Developer Documentation | website=Apple Inc. | access-date=2019-08-18}}) for Swift's floating-point types.{{cite web | url=https://developer.apple.com/documentation/swift/floatingpoint | title=FloatingPoint - Swift Standard Library {{pipe}} Apple Developer Documentation | website=Apple Inc. | access-date=2019-08-18}}


==References==

{{Reflist}}

==Bibliography==

{{Wiktionary|ulp}}

  • Goldberg, David (March 1991). "Rounding Error", in "What Every Computer Scientist Should Know About Floating-Point Arithmetic". ACM Computing Surveys. Retrieved from http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html#689.
  • {{Cite book|title=Handbook of floating-point arithmetic|last=Muller|first=Jean-Michel|publisher=Birkhäuser|year=2010|isbn=978-0-8176-4704-9|location=Boston|pages=32–37}}

Category:Computer arithmetic

Category:Floating point