zephyrprime
Diamond Member
Originally posted by: tanishalfelven
"They use ATI 1900 only because they're the first ones to put 32-bit floating point hardware on their graphics card. The 'issue' with nVidia's cards is that they don't have that capability. While the math can still be done on an nVidia card - the same way having a 32-bit CPU doesn't mean you can't process 64-bit variables - the processing overhead takes away much of the advantage. Development may be more complex, too."
This was one of the comments on the DailyTech page. Would someone here mind explaining what it means?

He seems to be saying that the G70/G71 doesn't do true 32-bit floating-point math, but instead emulates it with lower-precision math. It's true that 32-bit CPUs can do 64-bit integer math this way, but floating point doesn't split into independent halves the way integers do, so that trick doesn't carry over. Floating point can be emulated with integer instructions, but doing so is extremely slow compared to hardware FP.
Nvidia's site also contradicts what he claims:
http://www.nvidia.com/object/7_series_techspecs.html
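For anyone wondering what "a 32-bit CPU doesn't mean you can't process 64-bit variables" looks like in practice, here's a minimal C sketch (my own illustration, not from the DailyTech thread): a 64-bit add built from two 32-bit halves plus a manual carry, which is essentially what a compiler emits for 64-bit integers on a 32-bit target.

#include <stdint.h>
#include <stdio.h>

/* A 64-bit value held as two 32-bit halves, the way a 32-bit CPU
   keeps it in a register pair. */
typedef struct { uint32_t lo, hi; } u64_pair;

/* Add the low words, detect the carry via unsigned wraparound, and
   fold it into the sum of the high words. */
static u64_pair add64(u64_pair a, u64_pair b)
{
    u64_pair r;
    r.lo = a.lo + b.lo;
    uint32_t carry = (r.lo < a.lo);   /* wrapped => carry out of bit 31 */
    r.hi = a.hi + b.hi + carry;
    return r;
}

int main(void)
{
    /* 0x00000001FFFFFFFF + 1: the carry must ripple into the high word. */
    u64_pair a = { 0xFFFFFFFFu, 0x00000001u };
    u64_pair b = { 0x00000001u, 0x00000000u };
    u64_pair s = add64(a, b);
    printf("0x%08X%08X\n", s.hi, s.lo);   /* prints 0x0000000200000000 */
    return 0;
}

The trick works because the only thing the high word needs from the low word is a single carry bit. A floating-point value has no such clean split, which is why gluing two lower-precision FP operations together doesn't buy you higher FP precision.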
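And to see why emulating floating point with integer instructions is so slow, here's a stripped-down single-precision multiply done entirely with integer operations. This is a sketch under big simplifying assumptions (positive, normal inputs; truncating rounding; no zero/infinity/NaN handling), and even so it takes over a dozen integer operations for one multiply that FP hardware retires as a single instruction.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Sketch only: multiplies two positive, normal IEEE-754 singles using
   integer operations.  Zeros, denormals, infinities, NaNs, and proper
   rounding (this truncates) are all left out to keep it short. */
static float softfloat_mul(float fa, float fb)
{
    uint32_t a, b;
    memcpy(&a, &fa, sizeof a);                      /* reinterpret bits */
    memcpy(&b, &fb, sizeof b);

    uint32_t sign = (a ^ b) & 0x80000000u;          /* sign of product */
    int32_t  e = (int32_t)((a >> 23) & 0xFF)        /* unbiased exponents, */
               + (int32_t)((b >> 23) & 0xFF) - 254; /* summed: ea + eb     */
    uint32_t ma = (a & 0x007FFFFFu) | 0x00800000u;  /* restore implicit 1 */
    uint32_t mb = (b & 0x007FFFFFu) | 0x00800000u;

    /* 24x24 -> 48-bit mantissa product.  On a real 32-bit CPU this one
       line alone becomes several 32-bit multiply/add instructions. */
    uint64_t prod = (uint64_t)ma * mb;

    /* Product of two [1,2) mantissas lies in [1,4): renormalize. */
    if (prod & (1ull << 47)) { prod >>= 24; e += 1; }
    else                     { prod >>= 23; }

    uint32_t bits = sign | ((uint32_t)(e + 127) << 23)
                         | ((uint32_t)prod & 0x007FFFFFu);
    float out;
    memcpy(&out, &bits, sizeof out);
    return out;
}

int main(void)
{
    printf("%g\n", softfloat_mul(1.5f, 2.5f));      /* prints 3.75 */
    return 0;
}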