>Use 64-bit arithmetic.
>
>Think Cray.
The same problem exists either way; it is an inherent limitation of floating-point numbers, just as it is a [practical] limitation of decimal notation (consider writing out 1/3).
It is inviting problems to ignore the limitations of the tools you are using and simply assume them adequate for the task at hand; perhaps the one disadvantage of standardizing floating-point architecture [on the IEEE 754 spec] is that scientists have become lax about considering its limits (whereas previously they had to verify that the precision of the machine's floating-point would suffice).
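To see the decimal/binary parallel concretely, here is a quick Python sketch (any IEEE 754 binary64 implementation behaves the same way):

```python
# 0.1 has no finite binary expansion, just as 1/3 has no finite
# decimal expansion, so an IEEE-754 double can only approximate it.
a = 0.1 + 0.2
print(a)         # 0.30000000000000004 -- not 0.3
print(a == 0.3)  # False
```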
Along these lines of floating-point precision limitations there is the "real number" of Prolog, which [IIRC] is a software implementation and can have [in theory; there are still finite-memory limitations] arbitrarily long precision. Something like that might be more appropriate for the situation at hand: instead of floating-point numbers, you would have a record representing, say, the rational form of a number;
{Example:
   type Numeric is private; -- say, something that can represent the square root of 3, the cube root of 2, i, pi, etc.
   type Rational is record
      Numerator, Denominator : Numeric;
   end record;
}
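Python's standard-library fractions.Fraction is one readily available realization of the same idea: an exact numerator/denominator pair over arbitrary-precision integers, roughly analogous to the Rational record sketched above (my Prolog recollection is hazy, so take Python here merely as an illustration):

```python
from fractions import Fraction

# Exact rational arithmetic: numerator and denominator are kept as
# arbitrary-precision integers, so 1/3 is stored exactly.
third = Fraction(1, 3)
print(third + third + third)  # 1 -- exact, no rounding
print(Fraction(1, 10) * 3)    # 3/10 -- compare 0.1 * 3 in floats
```

Of course a Fraction still cannot hold the square root of 3; representing that exactly is what a symbolic Numeric type would be for.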
That being said, you are right: there are some numbers which cannot be represented exactly using floating-point arithmetic, including the square root of two (and of three; any irrational number, in fact).
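A quick Python check makes the point: math.sqrt returns only the double nearest to the true square root, and squaring it does not recover 2 exactly:

```python
import math

# sqrt(2) is irrational, so math.sqrt returns only the nearest
# representable double; squaring it exposes the rounding error.
r = math.sqrt(2)
print(r * r == 2)  # False
print(r * r)       # 2.0000000000000004
```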
I recall working on optimizing an engineering simulation program where we had to turn off certain compiler options to get the answers to come out right: the compiler was taking a group of operands and rearranging the order in which they were added, with the side effect that some of the smaller numbers were "dropped" as rounding errors. We had to change the code by hand so that it sorted the numbers first and then added them from smallest to largest...
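That rearrangement effect is easy to reproduce; here is a minimal Python sketch (the values are mine, chosen to make the loss obvious) summing one large operand and a million tiny ones in both orders:

```python
# Largest-first, every tiny addend falls below the rounding threshold
# of the running total; smallest-first, the tiny values accumulate
# before the large one joins in.
values = [1.0] + [1e-16] * 1_000_000

naive = 0.0
for v in values:          # 1.0 first: each 1e-16 is lost to rounding
    naive += v

careful = 0.0
for v in sorted(values):  # tiny values first: they sum to about 1e-10
    careful += v

print(naive)    # 1.0 -- the million tiny addends all vanished
print(careful)  # just over 1.0, about 1e-10 larger
```

(Python's math.fsum does compensated summation and recovers the tiny contributions regardless of order, which is the library-level fix for the same problem.)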
Cheers!