BenSkywalker
Diamond Member
However, how is nvidia going to make $$ selling a $400 card with <= $300 card performance and a $650 card with ~ $400 performance?
T10 costs $8,000. No, I didn't add an extra zero. Comparable performing Intel hardware is likely in the $150K-$250K range. Look at it from that price/performance metric.
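Quick math on that gap, just using the ballpark figures above (not quotes):

```python
# Price ratio only, from the rough figures above.
t10_cost = 8000                            # single T10-based part
intel_low, intel_high = 150000, 250000     # comparable-performing Intel setup, rough range

print("Intel setup: %.0fx to %.0fx the cost for comparable performance"
      % (intel_low / t10_cost, intel_high / t10_cost))
# -> roughly 19x to 31x
```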
I'm not even sure what your point is, especially since it's not even quite correct.
Looking over every piece of documentation I can find, I was way out of line saying it was 120; it appears to be 0. Based on all the documentation I can see, ATi can't come close to any of the 754r specs on any of the metrics used. Maybe I am just missing something somewhere? Can someone please link me to the level of 754 compliance ATi's hardware has for DP?
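For reference, these are the kinds of behaviors full 754 compliance implies for DP. A quick host-side sketch like this (Python, running on the CPU's compliant doubles, illustrative checks only) is roughly what a GPU DP implementation gets graded against:

```python
import math
import sys

# Gradual underflow: smallest normal / 2 must be a nonzero denormal,
# not flushed to zero.
denorm = sys.float_info.min / 2.0
print("denormals:        ", "supported" if denorm > 0.0 else "flushed to zero")

# NaN propagation: arithmetic on a NaN must yield NaN.
print("NaN propagation:  ", "ok" if math.isnan(float("nan") + 1.0) else "broken")

# Round-to-nearest-even: 1.0 + eps/2 sits exactly on a tie and must round
# back down to 1.0 (the even neighbor).
print("round-to-nearest: ", "ok" if 1.0 + sys.float_info.epsilon / 2.0 == 1.0 else "broken")
```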
Yep, that's exactly what it is (VLIW). In general the hardware is less complex but the burden is shifted onto the compiler to extract good performance.
AMD couldn't manage to get a decent 3DNow! compiler or even a solid x86-64 compiler going, despite the enormous potential benefit of getting it done. Intel, arguably the best compiler coders in the world, couldn't get VLIW compilers decent for a decade. Perhaps it is much simpler for them with shader code, but on a software basis it is an enormously complex task, and quite frankly, ATi can't even seem to keep WoW working on their PC drivers (which is played by more people than all the other games in the top 20 PC charts combined, every month).
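To make the "burden shifts to the compiler" point concrete, here's a toy sketch of the packing problem a VLIW5 shader compiler is up against (made-up instruction format, nothing like ATi's actual ISA): it only gets its five slots' worth of throughput when it can find five independent ops per bundle, and dependency chains kill that.

```python
WIDTH = 5  # VLIW5-style slot count

def pack(ops):
    """Greedily pack (dest, [sources]) ops into WIDTH-wide bundles,
    never issuing an op before the bundle that produces one of its sources."""
    bundles = []          # each bundle is a list of ops issued together
    produced_in = {}      # value name -> index of the bundle producing it
    for dest, srcs in ops:
        # earliest legal bundle: after every bundle producing one of our sources
        earliest = max((produced_in[s] + 1 for s in srcs if s in produced_in),
                       default=0)
        idx = earliest
        while idx < len(bundles) and len(bundles[idx]) >= WIDTH:
            idx += 1
        if idx == len(bundles):
            bundles.append([])
        bundles[idx].append((dest, srcs))
        produced_in[dest] = idx
    return bundles

# Five independent MAD-style ops: all fit in a single 5-wide bundle.
independent = [("r%d" % i, ["a%d" % i, "b%d" % i]) for i in range(5)]
# A five-op dependency chain: one op per bundle, 4/5 of the slots sit idle.
chain = [("r0", ["a", "b"]), ("r1", ["r0", "c"]), ("r2", ["r1", "d"]),
         ("r3", ["r2", "e"]), ("r4", ["r3", "f"])]

print(len(pack(independent)), "bundle(s) for independent ops")   # 1
print(len(pack(chain)), "bundle(s) for a dependency chain")      # 5
```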
AMD in particular should seriously be thinking about this. nVidia is already going after Intel and is devoting considerable transistor budget to knocking Intel out of the HPC market as much as possible; this is something AMD should be very much aware of. Given that the two processors are starting to push into each other's territories, how much longer before a VIA CPU paired with an nV GPU is considered a more viable alternative to Intel than AMD? It may never get that far, and I'm not saying it will, but that is the current direction things are going in, and AMD seems to be several years behind the curve. I know they were banking on the CPU side taking over the GPU end; they likely would have been wiser to prepare for both scenarios.
Fortunately for ATi having 800 shaders is quite useful and their compiler is probably quite good now.
Mainly it's a good thing that they are handling the easiest possible code to get working on a VLIW setup, so they aren't losing as horribly as they could be. Straight up, the 3850 (no mistype) should throttle the 9800GTX under huge shader loads. Either something is wrong with the hardware, the software, or somewhere in between (my money is on the compiler; scheduling for that has to be an absolute nightmare).
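Quick back-of-envelope on why I keep pointing at packing efficiency (clocks are rough from memory, MAD counted as two flops, numbers for illustration only):

```python
# How RV670's paper FLOPS melt away as the compiler fails to fill
# the 5 slots of each VLIW unit.
ALUS = 320            # 64 VLIW5 units x 5 slots (HD 3850/3870 class)
CLOCK_GHZ = 0.67      # HD 3850 core clock, roughly
PEAK_GFLOPS = ALUS * 2 * CLOCK_GHZ   # ~429 GFLOPS with a MAD in every slot

for filled in (5, 4, 3, 2, 1):
    achieved = PEAK_GFLOPS * filled / 5
    print("avg %d/5 slots filled -> ~%3.0f GFLOPS" % (filled, achieved))
```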