What is double precision useful for?

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
Apparently I don't know what it's useful for, since most people say it's only useful for compute (even though games use computations all the time, and graphics must be computed before you see them on your monitor). So I want others to explain to me why more precision can't make games look better... the lower the precision, the more error there has to be, if I'm not wrong.

The way I see it, if the render targets are 16 bits per channel (RGBA), with some SGSSAA/MSAA or TXAA going on, fog (as an example) that needed to be really complex (a really "revolutionary" kind of fog), and the lighting, depth, and fog calcs done per pixel, then more precision is better. Or even if old features were improved and mixed with new features (new data), then more precision would allow for less error, right? Like if an effect had more colors than ever before, was a revolutionary shape, was intense like never seen before, and could follow you around everywhere, morph, shift colors, or dissolve itself fluidly, by color, by shape, etc., all at once, then more precision would make it look better... right?

And why do most say double precision is not necessary for gaming? Aren't some CG movies (which are animated like games could be) rendered in DP when the studio doesn't want severe rendering errors? James Cameron doesn't seem like someone who would be satisfied with much rendering error.

I also realize that it is impractical to make everything from the instruction set all the way to the output device 64-bit (for example, RGB10 monitors are still not very common), but the instruction set and data processing are more responsible for what you see than the other way around. And of course, FP reduces rounding errors, but why is the rounding error less important than the error caused by not having enough unique values? Isn't 64-bit fixed point better than 32-bit floating point for some applications, and vice versa?

Lastly... should it be concluded that the main reason most don't value more precision is that most people aren't paranoid about rendering errors and inaccuracy smaller than I can imagine? Why or why not?
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
So I want others to explain to me why more precision can't make games look better... the lower the precision, the more error there has to be, if I'm not wrong. [...]
DP is useful where you can't afford errors. We use it in our medical imaging app, but remember that DP without ECC is worthless. As for games, it's not critical enough to warrant DP.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
Moving to DP could potentially improve the accuracy of the calculations in games as the amount of processing per pixel goes up, but right now it's not worth the performance impact for a slightly more accurate result.

The genuine real-world applications that use DP tend to be scientific in nature: they work with the very large or the very small, with a high amount of calculation and compounding errors, and hence doubles make a lot of sense to reduce that error component. But there are also quite a few programs that use DP and probably don't need to; because they are 64-bit programs on the CPU, it gets carried over into GPU algorithms using doubles as well.
 

Ventanni

Golden Member
Jul 25, 2011
1,432
142
106
Double precision hardware primarily caters to the scientific community and is not used in games. If game designers were to begin utilizing this capability, it would likely be used for more accurate lighting and physics effects, but the performance trade-off would be huge.

I am curious, though, just how AMD and Nvidia implement their double precision capability. Are all shader cores DP-capable, or do AMD and Nvidia partition off some sections that are DP-capable while the rest are single precision only?
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Why is 32-bit called single precision? Why not 16-bit or something? Does this have to do with IEEE setting some kind of standard?
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
A lot of programming languages have decided that floats are 32-bit, doubles are 64-bit, and stuff beyond that has crazy names (like double-double and all sorts of other silly names).

Basically, SP/DP got created as part of the 32-bit OS world, and that is what has driven the existing sizes.

As a side note, it's kind of odd to think of today's CPUs as 64-bit: that's the addressable space in theory, but in practice they only support 40 bits of it, and computation-wise they can process 256-bit values in the vector units. So it's kind of just a historical oddity.
 

Wall Street

Senior member
Mar 28, 2012
691
44
91
32 bit is single because it could be written as one word on a 32-bit processor.

Double performance doesn't impact games because the number format is specified in the code, and pretty much all games choose single precision floats because of the performance difference. I think this is a good idea because the difference in quality of the calculations is minimal, while single precision allows the programmers to add a greater quantity of effects.

Also keep in mind that single has 24 bits of precision while monitors have 8 bits per channel, or sometimes 10 bits. So the "errors" as you call them, or the rounding, still rarely impact the final result.
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
32 bit is single because it could be written as one word on a 32-bit processor.

So is 64-bit a single on a 64-bit processor?
 

jpiniero

Lifer
Oct 1, 2010
16,116
6,577
136
I am curious, though, just how AMD and Nvidia implement their double precision capability. Are all shader cores DP-capable, or do AMD and Nvidia partition off some sections that are DP-capable while the rest are single precision only?

I think it's the latter but that is changing in Maxwell.
 

Imouto

Golden Member
Jul 6, 2011
1,241
2
81
Double precision is useless in the consumer market and not worth it in the pro market, with the scientific field being almost the sole market for DP GPUs. That said, DP is useless without ECC memory.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
So I want others to explain to me why more precision can't make games look better... the lower the precision, the more error there has to be, if I'm not wrong. [...]
Just because something does a calculation doesn't mean it needs the highest precision available.

1.2 + 4.3 = 5.5; even single precision is vastly overkill to accurately store any of those values.

Can ECC memory check 100% and correct 100%? Just wondering :)
Of course not. But in the situations where you really need it (e.g. not regular consumer space), it's a lot better than having no ECC.
 

Mark R

Diamond Member
Oct 9, 1999
8,513
16
81
Can ECC memory check 100% and correct 100%? Just wondering :)
Typical ECC implementations can correct 100% of single-bit errors, and detect 100% of 2-bit errors. More extensive errors may not be detected or corrected reliably.

However, the exact capability varies with hardware, bus width, etc.
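To make the single-correct / double-detect idea concrete, here is a toy Hamming(7,4) sketch in C - real ECC memory uses a much wider code (typically 72 bits protecting 64) plus an overall parity bit, which is what buys the double-error detection, but the mechanics are the same:

Code:
#include <stdio.h>

/* Toy SECDED building block: Hamming(7,4). Encode 4 data bits into a
   7-bit codeword; the syndrome of a corrupted word is the position of
   the flipped bit. Adding one overall parity bit would extend this to
   detecting (not correcting) 2-bit errors, as in real SECDED DIMMs. */
static unsigned encode(unsigned d) {
    unsigned b[8] = {0};                    /* codeword bits, positions 1..7 */
    b[3] = (d >> 3) & 1; b[5] = (d >> 2) & 1;
    b[6] = (d >> 1) & 1; b[7] = d & 1;
    b[1] = b[3] ^ b[5] ^ b[7];              /* parity over positions 1,3,5,7 */
    b[2] = b[3] ^ b[6] ^ b[7];              /* parity over positions 2,3,6,7 */
    b[4] = b[5] ^ b[6] ^ b[7];              /* parity over positions 4,5,6,7 */
    unsigned w = 0;
    for (int i = 1; i <= 7; i++) w |= b[i] << i;
    return w;
}

static unsigned syndrome(unsigned w) {      /* 0 if clean, else error position */
    unsigned s = 0;
    for (int i = 1; i <= 7; i++)
        if ((w >> i) & 1) s ^= (unsigned)i;
    return s;
}

int main(void) {
    unsigned w = encode(0xB);               /* data bits 1011 */
    unsigned bad = w ^ (1u << 5);           /* cosmic ray flips one bit */
    unsigned s = syndrome(bad);
    printf("syndrome points at position %u\n", s);
    bad ^= 1u << s;                         /* flip it back */
    printf("corrected codeword matches original: %s\n", bad == w ? "yes" : "no");
    return 0;
}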

DP and ECC are not related, and there are plenty of reasons why you might need one and not the other. Although in general, DP means serious work of some kind, which means ECC.

DP gives you more significant figures, and therefore reduces the effect of rounding errors. (Rounding errors can crop up where they're not expected. For example, the number 0.1, when converted to binary, is actually infinitely long - it is a repeating number, like 1/3 = 0.333.... So 0.1 will always be rounded off. Add enough 0.1s together, and the rounding errors will combine and you will get a noticeable discrepancy.)
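A quick C demo of exactly that discrepancy - summing 0.1 a million times in single vs double precision:

Code:
#include <stdio.h>

int main(void) {
    float  f = 0.0f;
    double d = 0.0;
    for (int i = 0; i < 1000000; i++) {
        f += 0.1f;   /* each add rounds, and the error compounds */
        d += 0.1;
    }
    printf("float : %.6f\n", f);   /* ~100958.34 - visibly wrong */
    printf("double: %.6f\n", d);   /* ~100000.000001 - off far past the point */
    return 0;
}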

Certain types of computation, where the result of one computation is used in the next, over and over, can accumulate errors. This is commonly found in scientific simulations, where the next step in the simulation depends on the previous step's solution.

There are also certain types of computation which by their nature require high precision at an intermediate step - sometimes it is possible to use a different method to work around this requirement, but sometimes you just have to use higher precision calculations. This problem comes up in mathematical techniques like FFTs and matrix inversions.

For gaming, the need for precision and computational stability is low, and therefore DP isn't required. This is the case with most graphics. That said, I wrote a medical imaging app which had a volume rendering function - true volume rendering requires a render pass for each plane of voxels at each distance away from you. This app would use 200-300-pass rendering, with alpha blending on each pass. This worked fine with single precision floats, but with half precision (16-bit), you could start to see faint artefacts appearing where the rounding errors had built up.

In general, scientific and other industrial/engineering work uses DP to ensure that rounding errors don't pile up and create a noticeable error. Financial work shouldn't use floats at all (not even DP), because a fixed point system, which can avoid rounding errors completely, is better.
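A trivial illustration in C, with cents as the fixed point unit:

Code:
#include <stdio.h>

int main(void) {
    /* Binary floating point can't hit decimal fractions exactly... */
    double a = 0.1 + 0.2;
    printf("0.1 + 0.2 = %.17f (== 0.3? %s)\n", a, a == 0.3 ? "yes" : "no");

    /* ...but integer cents are exact: $0.10 + $0.20 */
    long cents = 10 + 20;
    printf("fixed point: $%ld.%02ld\n", cents / 100, cents % 100);
    return 0;
}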

ECC is used anywhere you have to be sure that your data is correct: any kind of file server, business data, scientific/industrial work, etc. An alternative, however, is to run the computation multiple times and see if the results are the same (this approach is used by distributed scientific projects such as Folding@home).
 

Beavermatic

Senior member
Oct 24, 2006
374
8
81
Sounds like someone is trying to do some soul searching on whether they should buy a GTX Titan or a GTX 780 Ti... double the VRAM with double precision, or more cores and better memory bandwidth (but half the VRAM) and higher "today" performance.
 

selni

Senior member
Oct 24, 2013
249
0
41
Double precision is useless in the consumer market and not worth it in the pro market, with the scientific field being almost the sole market for DP GPUs. That said, DP is useless without ECC memory.

DP and ECC correct totally different types of errors, so it's strange to say one is useless without the other. There are certainly classes of problems that don't require ECC but simply aren't practical to solve using single precision - linear programming, for example, not that it's been well adapted to GPUs yet.

Both are irrelevant for gaming though, yeah.
 

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
Sounds like someone is trying to do some soul searching on whether they should buy a GTX Titan or a GTX 780 Ti... double the VRAM with double precision, or more cores and better memory bandwidth (but half the VRAM) and higher "today" performance.
Good reply. :) I actually bought a 780 a few months ago (and I would've returned it, except I would have had nothing until I could save up $480 more for a Titan), thinking it didn't have a fuse in it that cripples DP... that's just low class and not really necessary of Nvidia. I know it helped them make more money, but considering they won't open up their drivers much due to IP, and that they have no competition (except maybe AMD, but AMD has always neglected image quality and extra features ever since R300, if not before), it was rather amoral.

Additionally, I am worried about price ceilings on processors of all kinds; those would be more favorable to Intel, NV, and AMD than IP repeal would. Corruption goes up as power is consolidated, and to get around that, one would need to be offered no IP, no other regulations, and no subsidies; then these things could be designed, made, and shipped out of a place no larger than an 8-car garage. Not saying that would work best for me if I were a businessman, but then I am not trying to be a businessman yet.

[mods: I don't mind this thread being closed now, since I got good answers and may have just derailed it; although I wish I had the self-control to never make another DP thread again lol]
 

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
Typical ECC implementations can correct 100% of single-bit errors, and detect 100% of 2-bit errors. More extensive errors may not be detected or corrected reliably. [...]
+1. :) I need to remember all of what you explained.
Just because something does a calculation doesn't mean it needs the highest precision available. 1.2 + 4.3 = 5.5; even single precision is vastly overkill to accurately store any of those values.
Don't agree 100%, but +1. :)
Of course not. But in the situations where you really need it (e.g. not regular consumer space), it's a lot better than having no ECC.
+1.:)
 

Pottuvoi

Senior member
Apr 16, 2012
416
2
81
Doubles are important when dealing with large areas and/or speeds, especially if physics is involved (and especially when dealing with planetary scales).
With singles, one has to do all sorts of tweaks and local spaces to get the needed precision, and even then physics might become unstable (e.g. Kerbal Space Program).

In pixel shaders most games use 16-bit floats for HDR, which is quite limiting, and games tend to clamp results.
Moving to 32-bit floats would help, but doubles would be overkill.
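To get a feel for how tight 16-bit floats are, here's a small C demo (this assumes a compiler with the _Float16 extension, e.g. recent GCC/Clang on x86-64 or ARM):

Code:
#include <stdio.h>

int main(void) {
    /* Half has an 11-bit significand: at 2048 the spacing between
       representable values is already 2, so adding 1 changes nothing. */
    _Float16 h   = (_Float16)2048.0f;
    _Float16 one = (_Float16)1.0f;
    _Float16 sum = h + one;
    printf("2048 + 1 in half: %f\n", (double)sum);        /* 2048.000000 */

    /* The largest finite half is 65504; go past it and you get infinity,
       which is why HDR shaders end up clamping. */
    _Float16 big   = (_Float16)65504.0f;
    _Float16 twice = big + big;
    printf("65504 + 65504 in half: %f\n", (double)twice); /* inf */
    return 0;
}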

Using a 64-bit z-buffer (or 48Z/16S) would be an easy way out of most z-fighting errors, but it would be costly, and I'm pretty sure no GPU currently supports the format (although coming 'DX12?' parts might fix this).
It would also mean quite a big increase in needed bandwidth.
 

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
Using a 64-bit z-buffer (or 48Z/16S) would be an easy way out of most z-fighting errors, but it would be costly, and I'm pretty sure no GPU currently supports the format (although coming 'DX12?' parts might fix this). It would also mean quite a big increase in needed bandwidth.
Knowing Microsoft, we will see continued use of the stencil buffer and multiples of 24. I also doubt there will ever be a 100% lossless, pure 64-bit z-buffer without any IQ-reducing optimizations, because lower-quality formats are already favored by most devs over max quality (even when there is less than a 1% performance difference on high-end hardware), and also because the whole cartelized industry is so into forced lossy compression and subtle (but noticeable) IQ reductions... they don't make IQ reductions optional because most tech reviewers largely ignore comparing old drivers to new drivers (in terms of IQ and bugs); they don't analyze them, and they would get threats from NV and AMD anyway.
 

_Rick_

Diamond Member
Apr 20, 2012
3,948
70
91
Double precision is crucial when you reuse the results of a first calculation multiple times. Numerical mathematics tries to avoid this, as even when using arbitrary fixed-length precision, you will still degrade your end result as an exponential function of the initial imprecision the more you reuse calculated values.
Furthermore, single precision only gives you about six to seven significant decimal digits, and rounding errors compound with every operation on such values. Do worse things, like x^y, and you can quickly break coherency through overflows or underflows.
In single, if you add 1 to 10^10 one billion times, the result will still be 10^10. It's therefore not even suitable for long-term accumulation.
Single precision floating point math has to be used even more carefully than floating point math in general, as the smallest error in writing down a formula might break the result, despite the formula being mathematically correct.

So yeah, double precision is essential when you do anything further with the result of your calculation; single precision is fine when all you do is draw triangles and pixels for fun, and you need to be fast about it.
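That 10^10 claim is easy to verify in C - the gap between adjacent floats at 1e10 is 1024, so a lone 1 is simply absorbed:

Code:
#include <stdio.h>

int main(void) {
    float acc = 1e10f;                /* exactly representable, conveniently */
    for (int i = 0; i < 1000; i++)    /* a billion iterations behave the same */
        acc += 1.0f;                  /* 1 < half the gap (512), so: dropped */
    printf("float : %.1f (unchanged? %s)\n", acc, acc == 1e10f ? "yes" : "no");

    double dacc = 1e10;
    dacc += 1.0;
    printf("double: %.1f\n", dacc);   /* 10000000001.0 - double keeps it */
    return 0;
}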
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
so is 64bit a single on a 64bit processor?
A single word for that hardware? Yes.

Is it called that? No.

A word has pretty much been standardized as 16 bits in Windows, and 32 bits everywhere else. Both are just due to historical reasons, but have since been enshrined in docs and standards.

1.2 + 4.3 = 5.5; even single precision is vastly overkill to accurately store any of those values.
That's more what fixed point is for, which is alive and well, though dedicated hardware for it is rare these days. You will get rounding error in the above calculation the instant it has to convert "1.2" from a string into an FP number, and must keep that approximation in mind when comparing what appears to be another number like 1.2, or 5.5 (I think 5.5 can be represented exactly, though). It's especially annoying in scripting languages that don't expose the actual number types, but convert everything to SP or DP anyway. Floating point and integer really are different, and need to be treated as such (fixed point types are extensions of integers).
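You can watch that happen in C - neither precision stores 1.2 exactly, the double is just wrong much further out, while 5.5 really is exact:

Code:
#include <stdio.h>

int main(void) {
    float  f = 1.2f;
    double d = 1.2;
    printf("1.2 as float : %.20f\n", f);     /* 1.20000004768371582031 */
    printf("1.2 as double: %.20f\n", d);     /* 1.19999999999999995559 */
    printf("5.5 as float : %.20f\n", 5.5f);  /* exact: 5.5 is 1011.1 in binary */
    return 0;
}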

IEEE 754 FP is for results that don't need to be extremely accurate or precise, but for which a fixed data size with varying precision is good enough (for example: the code managing this text input box I'm using needs lots of infinitely precise and accurate numbers, while the graphical renderer of its contents does not). It does, however, give sufficiently predictable behavior to be good enough for a wide variety of applications, so long as what you are doing doesn't need as many digits as you'll end up with, and equality comparisons being tricky is either not a problem or would exist anyway. Using multiple doubles for added precision is also fast on modern hardware compared to other ways of adding precision, which is why quad is basically unsupported and further extensions are unlikely to be made.
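The multiple-doubles trick rests on error-free transforms like Knuth's TwoSum, which recovers exactly what one addition rounded away - a minimal sketch (don't compile with -ffast-math, which deliberately breaks this):

Code:
#include <stdio.h>

int main(void) {
    double a = 1e16, b = 1.0;   /* b falls below a's last bit */
    double s  = a + b;          /* rounded sum: still 1e16 */
    double bv = s - a;          /* the portion of b that made it into s */
    double err = (a - (s - bv)) + (b - bv);  /* exact rounding error */
    printf("s = %.1f, recovered error = %.1f\n", s, err);  /* 1e16 and 1.0 */
    return 0;
}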

Z-issues will be with us until devs are using engines that can incorporate explicit surface orders, based on either expected position or view angle, which has only recently become possible for them to do. Higher-precision Z will not get used, because Z work, even compressed, is a bandwidth hog, not merely for VRAM but inside the chip. For instance, something flagging clothing as always over the skin, so that conflicts via lower-LOD models, or the two moving through each other in animations, won't matter. Likewise with overlaid texture surfaces that cause distant flickering due to being too close.
 

Wall Street

Senior member
Mar 28, 2012
691
44
91
That's more what fixed point is for, which is alive and well, though dedicated hardware for it is rare these days. You will get rounding error in the above calculation the instant it has to convert "1.2" from a string into an FP number, and must keep that approximation in mind when comparing what appears to be another number like 1.2, or 5.5 (I think 5.5 can be represented exactly, though). [...]

The issue that most people aren't seeing here is that, yes, double floats round less than single floats, and quad floats round even less. I think BFG10K's point is that when you have a number with two significant figures, you don't need double floats to capture all of its precision.

In games, the objective is rendering to a target that can display 8 bits per color channel - maybe 10 on some really high-end displays that aren't common. At what point does the rounding down to an 8-bit target vastly exceed the rounding from the float format used? Here is the precision of each format:

Half: 11 bits
Single: 24 bits
Double: 53 bits
Ext. Double (x87): 64 bits
Quad: 113 bits

I understand that half precision isn't good enough, because rounding errors in 11-bit calculations can show up in an 8- to 10-bit output. I haven't seen anyone make the case that 24 bits of precision vs 53 bits of precision would matter. Maybe it would in a calculation with thousands of iterations, but such calculations are impossible in a game because to do that for each pixel in each frame would mean < 1 FPS.
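Here's that argument as a C sketch: inject roughly one single-precision ulp of error into an arbitrary shader-ish value and see whether the 8-bit channel even notices (a value sitting right on a quantization boundary can still flip by one step, but that's the rare case):

Code:
#include <stdio.h>
#include <math.h>

int main(void) {
    double exact = 0.7310585786300049;        /* arbitrary value in [0,1] */
    float  noisy = (float)exact * 1.0000001f; /* ~1 ulp of SP error injected */

    int q_exact = (int)lround(exact * 255.0);         /* 8-bit channel code */
    int q_noisy = (int)lround((double)noisy * 255.0);
    printf("8-bit output: %d vs %d (same? %s)\n",
           q_exact, q_noisy, q_exact == q_noisy ? "yes" : "no");
    return 0;
}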
 

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
Higher precision Z will not get used, because Z work, even compressed, is a bandwidth-hog, not merely for VRAM, but inside the chip.
Agree with that, considering how long we were stuck with only partial-precision z-buffers. And because of that, we never really saw any DX9 games that had anything like the original Unreal's hugely spaced-out levels. 32-bit fixed-point log depth buffers and D32FS8 (the latter for when screen-space linearity was a necessity) should've been allowed in DX9 games, but Microsoft wouldn't allow it.
 

dipster

Junior Member
Jan 19, 2014
6
0
0
That's more what fixed point is for, which is alive and well, though dedicated hardware for it is rare these days. You will get rounding error in the above calculation the instant it has to convert "1.2" from a string into an FP number, and must keep that approximation in mind when comparing what appears to be another number like 1.2, or 5.5 (I think 5.5 can be represented exactly, though).

The inexactness of 1.2 has to do with the base-2 representation, not the floatingness of the point. And most people use a base-2 representation, both for floats and for fixed point numbers.

e.g. if we have 3 bits for the integer part and two bits for the fractional part, then 1.2 is represented as 001 01, i.e. 1 and 1/4, i.e. 1.25. This is the most common kind of fixed point number representation, and it's what you'll find on GPUs supporting fixed point numbers, for example.

OTOH, if we store the value as an integer scaled by ten (the integer part is x/10 and the fractional part is the remainder), then 1.2 is just 12, exactly. This is slow, however, so it isn't common.
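Both flavors in a few lines of C (Q3.2 binary fixed point first, positive values only for brevity):

Code:
#include <stdio.h>

#define FRAC_BITS 2   /* Q3.2: 3 integer bits, 2 fractional bits */

int main(void) {
    /* Encode 1.2: round to the nearest multiple of 1/4 -> raw 5 = 001.01 */
    int raw = (int)(1.2 * (1 << FRAC_BITS) + 0.5);
    printf("raw=%d -> value=%.2f\n", raw, raw / (double)(1 << FRAC_BITS));

    /* Decimal scaling: store 1.2 as the integer 12 - exact, but slower math */
    int tenths = 12;
    printf("decimal-scaled: %d.%d\n", tenths / 10, tenths % 10);
    return 0;
}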

It's especially annoying in scripting languages that don't expose the actual number types, but convert everything to SP or DP anyway.
Javascript, you were the chosen one! :( :( :(

IEEE 754 FP is for results that don't need to be extremely accurate or precise, but for which a fixed data size with varying precision is good enough
Scientific computations that need a lot of accuracy do use floats. Fixed-width floats have a lot of good things about them and very little bad: great performance, good error behavior, relatively easy to analyze, etc. (compared to e.g. fixed point numbers, or quote notation, or pairs of integers, which are all worse on all fronts).

To answer the OP, doubles are great because increasing your precision decreases representation error exponentially. Everyone loves doubles.

Maybe it would in a calculation with thousands of iterations, but such calculations are impossible in a game because to do that for each pixel in each frame would mean < 1 FPS.

It doesn't take thousands. 1/(a - b), if a and b are close -- oops!
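For instance, in C:

Code:
#include <stdio.h>

int main(void) {
    /* a and b agree in all leading digits, so a - b keeps only the
       noisy trailing ones: catastrophic cancellation in one step. */
    float a = 1.0000001f;   /* rounds to 1 + 1 ulp = 1.00000011920928955... */
    float b = 1.0000000f;
    printf("float  1/(a-b): %g\n", 1.0f / (a - b));          /* 8.38861e+06 */
    printf("double 1/(a-b): %g\n", 1.0 / (1.0000001 - 1.0)); /* ~1e+07 */
    /* the float answer is ~16 percent low after a single subtraction */
    return 0;
}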

I think a point could be made that the worst that could happen is visual artifacts, and probably only for a few pixels, and probably only for a single frame. Pretty harmless.
 