Floating-point arithmetic is a reasonably complex subject.
The issue comes from the binary representation of floating-point numbers: not every number can (obviously) be represented exactly, which leads to rounding errors in operations, and yes, those errors can propagate.
Here is a link on the subject. It isn't the simplest read out there, but it gives you a good perspective if you want to understand the topic in depth:
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
The bottom-line problem is that a given floating-point number, even a simple rational, may need more precision in binary than the language's type provides.
For instance, a float's significand holds 24 bits; if the exact binary representation of your value needs 25 bits, it gets rounded, and the stored value is slightly off.
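To make this concrete, here is a small Scala sketch (using the expression from the question) showing the rounding error and one common workaround, `BigDecimal`, which keeps decimal digits exactly at the cost of speed:

```scala
object FloatDemo extends App {
  // 0.3 has no exact binary representation, so the stored double is
  // slightly below 0.3; multiplying by 3 exposes the accumulated error.
  val x: Double = 0.3 * 3
  println(x) // 0.8999999999999999, not 0.9

  // BigDecimal built from the string "0.3" stores the decimal value
  // exactly, so the multiplication comes out as expected.
  val y = BigDecimal("0.3") * 3
  println(y) // 0.9
}
```

Note that `BigDecimal(0.3)` (built from the double) would inherit the same inexact value; building it from the string `"0.3"` is what preserves the decimal digits.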
Edit:
As Péter Török noted in the comments, most well-known programming languages use the same representations for the common data types (float -> 32 bits, double -> 64 bits), so the precision can usually be deduced from the data type alone, regardless of the language.