Apparently I don't understand what double precision is useful for, since most people say it's only useful for compute (even though games do computation all the time, and even though graphics have to be computed before you see them on your monitor). So I'd like someone to explain to me why more precision can't make games look better... if I'm not wrong, the lower the precision, the more error there has to be.
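To make that concrete, here's a tiny sketch of what I mean by rounding error (just Python with numpy so I can get 16-, 32-, and 64-bit floats; 0.1 is an arbitrary example value, nothing special):

```python
import numpy as np

value = 0.1  # the number I actually want to store (held as a 64-bit double here)
for dtype in (np.float16, np.float32, np.float64):
    stored = float(dtype(value))  # what survives after rounding to that precision
    rel_error = abs(stored - value) / value  # error relative to the double value
    print(f"{dtype.__name__:>8}: stored as {stored:.12f}, relative error ~{rel_error:.1e}")
```

Half precision is off by roughly one part in four thousand, single precision by roughly one part in a billion, and that gap is the kind of thing I'm thinking about.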
The way I see it: if the render targets were 16 bits per RGBA channel, there was some SGSSAA/MSAA or TXAA going on, the fog (as an example) needed to be really complex (a really "revolutionary" kind of fog), and the lighting, depth, and fog calculations were done per pixel, then more precision would be better. Or even if old features were improved and mixed with new features (new data), more precision would allow for less error, right? Like if the fog had more colors than ever before, had a revolutionary shape, was more intense than anything seen before, and could follow you around everywhere, morph, shift colors, or dissolve itself fluidly, by color, by shape, etc., all at once, then more precision would make it look better... right?
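Here's a rough sketch of the kind of error I'm imagining, again in Python/numpy (the step count and the per-step fog/light contribution are made-up numbers, just to show how error piles up when you accumulate into a low-precision target):

```python
import numpy as np

def accumulate(dtype, steps=4096, contribution=1e-4):
    # Keep a running total in the given precision, rounding after every add,
    # the way a low-precision render target or accumulation buffer would.
    total = dtype(0.0)
    step = dtype(contribution)
    for _ in range(steps):
        total = dtype(total + step)
    return float(total)

exact = 4096 * 1e-4  # what the sum should be with no rounding at all
for dtype in (np.float16, np.float32):
    result = accumulate(dtype)
    print(f"{dtype.__name__}: got {result:.6f}, exact {exact:.6f}, "
          f"error {abs(result - exact):.6f}")
```

At 16 bits the total stalls noticeably short of the exact answer once the increments get smaller than the rounding step, while at 32 bits the error stays tiny. That's the sort of thing I assume more precision buys you.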
And why do most people say double precision isn't necessary for gaming? Aren't some CG movies (which are animated much like games could be) rendered in double precision when the studio doesn't want severe rendering errors? James Cameron doesn't seem like someone who would be satisfied with much rendering error.
I also realize it's impractical to make everything 64-bit, from the instruction set all the way to the output device (for example, RGB10 monitors are still not very common), but the instruction set and the data processing are more responsible for what you see than the other way around. And of course floating point reduces rounding errors, but why is rounding error less important than the error caused by not having enough unique values? Isn't 64-bit fixed point better than 32-bit floating point for some applications, and vice versa?
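To illustrate that last question, here's a quick comparison (a hypothetical Q32.32 fixed-point format with 32 integer and 32 fractional bits against 32-bit float, using numpy's spacing() to get the float32 step size):

```python
import numpy as np

FIXED_STEP = 2.0 ** -32  # smallest step of a hypothetical Q32.32 fixed-point format

for magnitude in (0.001, 1.0, 1_000.0, 10_000_000.0):
    float32_step = float(np.spacing(np.float32(magnitude)))  # gap to the next representable float32
    finer = "fixed point" if FIXED_STEP < float32_step else "float32"
    print(f"near {magnitude:>14,}: float32 step {float32_step:.2e}, "
          f"Q32.32 step {FIXED_STEP:.2e} -> finer: {finer}")
```

Near small values the float is finer, but far from zero the 64-bit fixed-point step stays the same while the float32 step grows to a whole unit, which is why I suspect the answer really does depend on the application.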
Lastly... should I conclude that the main reason most people don't value more precision is that they just aren't as paranoid as I am about rendering errors and about accuracy lower than I can even imagine? Why or why not?