Double vs. decimal

HFS+

Senior member
Dec 19, 2011
What's the difference between a double variable and a decimal variable?
 

brandonb

Diamond Member
Oct 17, 2006
Basically, a double loses precision when you have a very high number of digits on either side of the decimal point. (0.0 is fine, 1.01234305 is fine; 10000000000234234234.101101001023412341234 is not fine and will start to be approximated, because there are a lot of digits.)
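For example, in C# (an assumption on my part; the thread hasn't named a language yet, but C# has both types) the tail digits simply vanish:

double d = 10000000000234234234.101101001023412341234;
Console.WriteLine(d); // ~1.0000000000234234E+19: everything past ~16 digits is gone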

A decimal is stored almost as two integer values (one for each side of the decimal point) that are glued together. So having a large number of digits to the left of the decimal point does not affect the precision on the right-hand side, because internally they are handled separately.

Even though decimal data types are 16 bytes and take a lot of memory (doubles are 8 bytes), I normally just declare all my numeric types (the ones that have decimal points) as decimal. Unless I'm coding a 3D engine, because DirectX is based on single precision (not sure if that changed in DX10 or 11; I haven't played with it enough in the last few years), not decimal.
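A quick C# illustration of both the sizes and the trade-off (assuming .NET, where these sizes hold):

Console.WriteLine(sizeof(double));      // 8
Console.WriteLine(sizeof(decimal));     // 16
Console.WriteLine(0.1 + 0.2 == 0.3);    // False: binary double can't represent 0.1 exactly
Console.WriteLine(0.1m + 0.2m == 0.3m); // True: decimal keeps base-10 digits exactly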
 

HFS+

Senior member
Dec 19, 2011
Wait, each digit in a decimal data type is 16 bytes? Or just the entire decimal?
 

Cogman

Lifer
Sep 19, 2000
The question that wasn't asked and should have been: what language and what platform are we talking about? We can't really say anything about the sizes (or even the behavior) of either without knowing this information.
 

eLiu

Diamond Member
Jun 4, 2001
Cogman said:
The question that wasn't asked and should have been: what language and what platform are we talking about? We can't really say anything about the sizes (or even the behavior) of either without knowing this information.

It sounds like 'decimal variable' really narrowed down the possibilities... I'm guessing maybe C# or VB given the mention of DirectX.

Also, decimal *IS NOT* exact precision. It's not even arbitrary precision (arbitrary precision is a floating point implementation that adaptively adds more storage as more precision is needed).

Decimal is *FIXED POINT* vs the *FLOATING POINT* of double. Doubles are 8 bytes. As I recall, it's 1 sign bit, 11 exponent bits, 52 mantissa bits. The mantissa essentially encodes a number of the form 1.xxxxxxxxx... (where each x is a binary digit). Then the exponent scales the mantissa by anywhere from roughly 2^-1022 up to 2^1023 (for normalized numbers) to get your resultant number. The exponent is why it's called "floating point": depending on what exponent you set, the decimal point can end up anywhere in the number.
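You can pull those three fields apart yourself. A minimal C# sketch using BitConverter (my own illustration):

long bits = BitConverter.DoubleToInt64Bits(1.5);   // 1.5 = 1.1 (binary) * 2^0
long sign = (bits >> 63) & 0x1;                    // 1 sign bit
long expo = ((bits >> 52) & 0x7FF) - 1023;         // 11 exponent bits, bias 1023
long mant = bits & 0xFFFFFFFFFFFFFL;               // 52 mantissa bits (the ".1" part)
Console.WriteLine($"{sign} {expo} {mant:X}");      // 0 0 8000000000000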

Floating point is limited by relative precision. Say I have a double precision number x: what is the smallest double larger than x that I can represent? This is limited by the mantissa. The spacing between x and the next double is about eps*|x|, where eps = 2^-52, or about 2.2*10^-16. (Hence why people say doubles have ~16 digits of precision.) "eps" is short for epsilon; it represents a quantity called "machine precision". So taking x = 1, (x+eps)-x = eps, but (x+eps/10)-x = 0.* This might seem surprising: double precision can represent both x and eps/10 just fine on their own. But when you add them, eps/10 falls below the spacing between doubles near x, so the sum rounds back to x and nothing happens.
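In C# (again assuming .NET; this needs nothing beyond Math.Pow and plain doubles) you can watch this happen:

double eps = Math.Pow(2, -52); // machine epsilon, ~2.22e-16
double x = 1.0;
Console.WriteLine((x + eps) - x);      // prints eps: 1+eps is the next double after 1
Console.WriteLine((x + eps / 10) - x); // prints 0: eps/10 is below the spacing near 1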

But in trade for the relative precision limit, you can represent a huge range of numbers: from roughly 1.8*10^308 down to 2.2*10^-308 (and smaller still with denormals). The catch is that the representable numbers are not evenly spaced; the gaps grow with the exponent, which is the "floating" part of floating point.
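.NET exposes these limits as constants (note that double.Epsilon is the smallest positive denormal, not the machine epsilon above):

Console.WriteLine(double.MaxValue); // ~1.7976931348623157E+308
Console.WriteLine(double.MinValue); // ~-1.7976931348623157E+308
Console.WriteLine(double.Epsilon);  // ~4.94E-324, smallest positive denormal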

So these are the kinds of problems double precision can have. You only get ~16 significant digits. So if x and y agree in their leading 14 digits and I compute (x-y), getting something like 10^-14, I've lost a lot of precision! Of the original data, I now only have 2 valid digits and the rest is noise. This is called cancellation.
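For instance (a minimal C# sketch, my own illustration):

double x = 1.0 + 1e-14; // agrees with 1.0 in the first ~14 digits
double y = 1.0;
Console.WriteLine(x - y); // ~9.992e-15, not 1e-14: only the leading digits are meaningful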
The other issue is easily illustrated by this example:
double y = Math.Pow(2, 53); // 2^53: adjacent doubles up here are 2 apart
for (int i = 0; i < (1 << 17); i++)
    y += 1.0; // each add rounds straight back down to 2^53
y = ?
If you guessed 2^53 + 2^17, you'd be wrong. 2^53 + 1 = 2^53 in double precision: the 1 falls below the spacing and rounds away, so at the end of that loop y = 2^53 still. The error incurred here is huge. If instead, we computed:
double x = 0.0;
for (int i = 0; i < (1 << 17); i++)
    x += 1.0; // exact: these integers are far below 2^53
double y = x + Math.Pow(2, 53);
What is y now? 2^53 + 2^17, give or take machine precision: the answer is correct to a relative error of roughly eps, i.e. an absolute error of about 2^53*eps = 2. (In fact, here it comes out exact.)

*I'm ignoring denormalized numbers.

FIXED POINT stands opposed to all of this. The decimal point is always in the same place. There is no exponent. You get a fixed number of digits before the decimal point, and another (possibly different) fixed number of digits after it. This limits the range of numbers you can represent, and it fixes the minimum difference between numbers. That minimum difference does not change as numbers get smaller or bigger (as it did with floating point, where it scales with the exponent).
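The classic way to implement fixed point is scaled integers. A toy C# sketch of the idea (my own illustration, not how any particular decimal type is necessarily stored):

long a = 1050; // two digits after the point: this long means 10.50
long b = 3;    // and this one means 0.03
long sum = a + b;               // 1053, i.e. 10.53
Console.WriteLine(sum / 100.0); // the spacing is always 0.01, at any magnitude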

But this is of course *different* from exact precision. Exact precision would imply that I could represent numbers like pi, and we know that's not possible. Another fundamental example: say I can represent one digit before and one digit after the decimal point. The smallest nonzero value I have is 0.1. What if I try to divide that by 2? 0.05 is not representable. Not exact.
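C#'s decimal shows the same limitation: it carries about 28 significant digits and then simply stops.

Console.WriteLine(1m / 3m);      // 0.3333333333333333333333333333
Console.WriteLine(1m / 3m * 3m); // 0.9999999999999999999999999999, not 1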