I am getting old and am forgetting answers to things I used to know. I believe it has to do with the way the floating point numbers are stored/operated on. I'm sure someone will straighten out my answer.
Correct. It's because "float" and "double" numbers are stored in base 2. You could say that base 10 numbers are only approximated in base 2.
The easiest way to explain this is with a square root. Using a base 10 number system, what is the square root of 3? The question has no exact answer: you can't write it down as a finite decimal number. You might write it as 1.73205. Now put that in your calculator:
1.73205 x 1.73205 = what?
I'm getting 2.9999972025
What went wrong? The number system can't represent the value exactly, so you squared a rounded approximation.
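The same thing happens in base 2: a base-10 fraction like 0.1 has no finite binary expansion, so a float can only store a rounded stand-in. A quick sketch in Python (any language using IEEE 754 doubles behaves identically):

```python
# 0.1 cannot be represented exactly in base 2, so the nearest
# representable double gets stored instead.
print(f"{0.1:.20f}")        # 0.10000000000000000555

# Adding those rounded stand-ins exposes the error,
# just like squaring 1.73205 missed 3.
print(0.1 + 0.2 == 0.3)     # False
print(0.1 + 0.2)            # 0.30000000000000004
```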
Modern programs don't have to live with this problem. The newer "decimal" type of variable is more appropriate for money-related calculations. Instead of using a plain base 2 number system, decimal uses base 10 expressed in base 2. What does that mean? It means lots of bit combinations are not allowed. The classic example is binary-coded decimal (BCD): each group of 4 bits is only allowed to hold a value from 0 to 9.
1001 - this is the maximum number allowed for a set of 4 bits
As a decimal digit, 4 bits has a maximum value of 9. In plain binary, 4 bits would have a maximum value of 8+4+2+1 = 15.
You can see how memory-expensive this gets as soon as you want one higher. How many bits does it take to represent the number 10? Since the digits come in sets of 4, I need 8 bits:
(0001)(0000)
Expressing the number 10 in regular base 2 only needs 4 bits:
1010
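To see the trade-off pay off, here's a minimal sketch using Python's decimal module, which does base-10 arithmetic in software (other languages' decimal types rest on the same principle, though their internal encodings differ):

```python
from decimal import Decimal

# Ten dimes in binary floating point: the base-2 rounding
# error in 0.1 accumulates with every addition.
print(sum([0.1] * 10))                 # 0.9999999999999999

# Ten dimes as base-10 decimals: exact, because 0.1 is a
# native value in base 10.
print(sum([Decimal("0.1")] * 10))      # 1.0
```

The decimal version costs more memory and more CPU per operation, but for money that's usually a price worth paying.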
It's a lot like the Y2K problem. Cheap technology leads to cheap math. Why use 4-digit years when 2-digit years could save millions of dollars in hardware? Why use expensive decimal types to represent non-whole numbers when you can cheap out and use binary types? They're usually close enough. Usually.