
Myths About Floating-Point Numbers (2021)

Wed 17 Mar 2021

Floating-point numbers are a great invention in computer science, but they can also be tricky and troublesome to use correctly. I've already written about them in my Floating-Point Formats Cheatsheet and in the presentation "Pitfalls of Floating-Point Numbers" ("Pułapki liczb zmiennoprzecinkowych" – the slides are in Polish).

If you implement two versions of your formula, similar but not exactly the same, the compiler may, for example, optimize (a * b + c) from a MUL + ADD sequence into a single FMA (fused multiply-add) instruction, which performs the 3-argument operation in one step. Because FMA rounds once instead of twice, its result can differ from MUL + ADD in the least significant bit (see the first sketch below). Similarly, the floating-point standard (IEEE 754) defines only the required precision of operations like sin, cos, etc., so their exact results may vary in the least significant bit between implementations.

Precision also runs out for large arguments. If we recall from math class that sine cycles between -1 and 1 every 2*PI ≈ 6.283185, and take into account that above 16,777,216 (2^24) a 32-bit float can no longer represent every integer exactly, but starts jumping by 2, then by 4, and so on, we can conclude that we don't have enough precision to know whether our result should be -1, 1, or anything in between. The second sketch below illustrates this.
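To make the FMA point concrete, here is a minimal C++ sketch (my own illustration, not code from the article). It evaluates a * a - p once as separate MUL and SUB instructions and once through std::fma; the fused version rounds only at the end, so it recovers the actual rounding error of the product instead of 0. Compile with contraction disabled (e.g. -ffp-contract=off on GCC/Clang) so the compiler does not fuse the first expression on its own:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // a is chosen so that a*a is not exactly representable in a double.
    double a = 1.0 + 0x1p-30;           // 1 + 2^-30
    double p = a * a;                   // the product, rounded to the nearest double

    double separate = a * a - p;        // MUL then SUB: the product rounds to p, so this is 0
    double fused = std::fma(a, a, -p);  // single rounding at the end: yields the true
                                        // rounding error of a*a, which is 2^-60

    std::printf("MUL+SUB: %.17g\n", separate); // prints 0
    std::printf("FMA:     %.17g\n", fused);    // prints ~8.67e-19
    return 0;
}
```

The two expressions are algebraically identical, yet they print different numbers – exactly the kind of discrepancy that appears when a compiler fuses one code path but not another.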

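The large-argument problem is just as easy to observe. In this sketch (again my own, hypothetical example), x = 1e8 lies far above 2^24, so the next representable float after x is x + 8 – a step wider than a full sine period – and two neighboring floats give completely different sines:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    float x = 100000000.0f;                  // 1e8, far above 2^24 = 16,777,216
    float y = std::nextafterf(x, INFINITY);  // the next representable float: exactly x + 8

    // One step between representable floats (8) already exceeds a full sine
    // period (2*PI ~ 6.283185), so any rounding error in how x was computed
    // makes the "true" result unknowable.
    std::printf("sin(%.1f) = %f\n", x, std::sin(x));
    std::printf("sin(%.1f) = %f\n", y, std::sin(y));
    return 0;
}
```

How accurately a given math library even reduces such huge arguments modulo 2*PI is implementation-dependent, which ties back to the earlier point about sin and cos varying between platforms.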