NVIDIA 9800GTX+ Review Thread


BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
However, how is nvidia going to make $$ selling a $400 card with <= $300 card performance and a $650 card with ~ $400 performance?

T10 costs $8,000. No, I didn't add an extra zero. Comparably performing Intel hardware is likely in the $150K-$250K range. Look at it from that price/performance metric.

I'm not even sure what your point is, especially since it's not even quite correct.

Looking over every piece of documentation I can find, I was way out of line saying it was 120; it appears to be 0. Based on all the documentation I can see, ATi can't come close to any of the 754r specs on any of the metrics used. Maybe I am just missing something somewhere? Can someone please link me to the level of 754 compliance ATi's hardware has for DP?

Yep, that's exactly what it is (VLIW). In general the hardware is less complex but the burden is shifted onto the compiler to extract good performance.

AMD couldn't manage to get a decent 3DNow! compiler or even a solid x86-64 compiler going, despite enormous potential benefit in getting it done. Intel, arguably the best compiler coders in the world, couldn't get VLIW compilers decent for a decade. Perhaps it is much simpler with shader code, but on a software basis it is an enormously complex task, and quite frankly, ATi can't seem to keep WoW working on their PC drivers (which is played by more people than all the other games in the top 20 PC charts combined every month ;) ). Given AMD's and ATi's priorities, I don't see them sinking a fraction of the resources needed to be where they should be right now.
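To see what that compiler burden actually looks like, here's a toy sketch (plain Python, purely illustrative, nothing like ATi's real scheduler) of the packing problem a VLIW compiler solves every cycle:

```python
# Toy VLIW scheduler: greedily pack independent ops into fixed-width
# bundles. Real shader compilers do far more (register allocation,
# latency hiding), but the core constraint is the same: every slot
# issued in a cycle must be independent of the others in that bundle.

WIDTH = 5  # e.g. a 5-wide unit, like the R600 family's VLIW ALUs

# Each op is (name, set of ops whose results it needs)
ops = [
    ("a", set()),
    ("b", set()),
    ("c", {"a"}),        # c depends on a's result
    ("d", {"a", "b"}),
    ("e", {"c"}),
]

def schedule(ops, width):
    done, bundles, pending = set(), [], list(ops)
    while pending:
        bundle = []
        for op in list(pending):
            name, deps = op
            # Issue only if every dependency finished in an earlier cycle
            if deps <= done and len(bundle) < width:
                bundle.append(name)
                pending.remove(op)
        done.update(bundle)
        bundles.append(bundle)
    return bundles

for cycle, bundle in enumerate(schedule(ops, WIDTH)):
    print(f"cycle {cycle}: issue {bundle} ({len(bundle)}/{WIDTH} slots)")
# Dependent chains leave slots empty; filling them well is exactly the
# burden that gets shifted onto the compiler.
```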

AMD in particular should seriously be thinking about this. nVidia is already going after Intel and is devoting considerable transistor budget to knocking Intel out of the HPC market as much as possible; this is something AMD should be very much aware of. Given that the two processors are starting to push into each other's territories, how much longer before a VIA CPU paired with an nV GPU is considered a more viable alternative to Intel than AMD? It may never get that far, and I'm not saying it will, but that is the current direction things are going in, and AMD seems to be several years behind the curve. I know they were banking on the CPU side taking over the GPU end; they would likely have been wiser to prepare for both scenarios.

Fortunately for ATi, having 800 shaders is quite useful, and their compiler is probably quite good now.

Mainly it's a good thing that they are handling the easiest possible code to get working on a VLIW setup, so they aren't losing as badly as they could be. Straight up, the 3850 (no mistype) should throttle the 9800GTX under huge shader loads. Either something is wrong with the hardware, the software, or somewhere in between (my money is on the compiler; scheduling for that has to be an absolute nightmare).
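To put rough numbers on that: the 3850's 320 ALUs are really 64 five-wide VLIW units (per the public RV670 specs), so its effective shader count is whatever the compiler manages to fill. A quick illustrative sketch, with made-up fill rates:

```python
# The 3850's 320 ALUs are organized as 64 five-wide VLIW units, so the
# effective ALU count is 64 * (average slots the compiler fills).
units, width = 64, 5
for filled in (5, 4, 3, 2):
    print(f"avg {filled}/{width} slots filled -> {units * filled} effective ALUs")
# At 2/5 fill the chip is down to 128 effective ALUs, the same count as
# the 9800GTX's scalar shaders, before even considering NVIDIA's much
# higher shader clocks. That is why my money is on the compiler.
```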
 

Denithor

Diamond Member
Apr 11, 2004
6,298
23
81
I know it was mentioned earlier that you cannot compare stream processors 1:1 across different architectures, but I'm curious about the CUDA power of a GTX 280 versus a 9800GTX. Would these SPs be close enough to just say the 280 has an 87.5% increase in processing power (240/128), or is there more to it than that?
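Back-of-the-envelope, folding in the stock shader clocks (values from memory, so treat them as assumptions):

```python
# SP count alone vs. clock-adjusted throughput, GTX 280 vs. 9800 GTX.
gtx280_sp, gtx280_clk = 240, 1296e6   # GTX 280 shader domain, ~1296 MHz
g92_sp,    g92_clk    = 128, 1688e6   # 9800 GTX shader domain, ~1688 MHz

sp_only  = gtx280_sp / g92_sp
adjusted = (gtx280_sp * gtx280_clk) / (g92_sp * g92_clk)
print(f"SP count alone:  +{sp_only - 1:.1%}")    # +87.5%
print(f"clock-adjusted:  +{adjusted - 1:.1%}")   # roughly +44%
# GT200 also adds double precision, doubled register files, and better
# memory coalescing for CUDA, so "more to it" is definitely the answer.
```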
 

stasdm

Junior Member
Jun 28, 2008
5
0
0
After analyzing some published tests, I came to the following conclusions:

1. ATI chips are less CPU-dependent than NVIDIA ones, so they will outperform the latter on low- and mid-range processors but fall behind on quicker CPUs.
2. The new GTX 200 not only "takes over" the shaders of slave cards but also uses the other cards' main-processor abilities. This allows a 1+0.8+0.64 productivity increase in some tests (compared to 1+0.3+0.1 in the 980 case).

Also, if I were an NVIDIA guy, I would not start a new production line with an old product. And I think they are no bigger fools than I am.

So I think the so-called 9800 GTX+ is nothing less than a test product of the GTX 200 GPU on the new line (there are some significant indications of that).

One of them: the almost total lack of any information on 9800 GTX+ performance in SLI mode. If (on a speedy CPU with two real x16 connectors) 2x 9800 GTX+ scores <= 1.3x a single card, then it's the old core. If >= 1.5x, then it is an "abridged" version of the GTX 200.

The only test I saw shows a 1.6x increase (though one test is not enough).
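To make my test concrete (the thresholds are mine, from the reasoning above):

```python
# Classify the 9800 GTX+ core from its SLI scaling factor.
def classify(single_fps: float, sli_fps: float) -> str:
    scaling = sli_fps / single_fps
    if scaling <= 1.3:
        return f"{scaling:.2f}x -> old G92-style core"
    if scaling >= 1.5:
        return f"{scaling:.2f}x -> 'abridged' GTX 200"
    return f"{scaling:.2f}x -> inconclusive"

print(classify(60.0, 96.0))  # the 1.6x case I saw points to GTX 200
```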

The only way to answer that is a test with a fast CPU on an Intel X48 or NVIDIA 790i motherboard.

PS. I'd like to see your opinions on My dream computer
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
the reason there's an almost "total lack of any information on 9800 gtx+ performance in sli mode" is that nobody outside of nvidia has more than one of these cards, because they're not available yet.
 

Denithor

Diamond Member
Apr 11, 2004
6,298
23
81
What are you guys talking about, no benchmarks on 9800GTX+ SLI? The 4870 review has two pages (page 20, page 21) of multiGPU benchmarks showing the 9800GTX+ in SLI absolutely crushing almost everything they threw at it. There is only one benchmark (Bioshock) where it doesn't beat the GTX 280 (and something is fishy with that result, as the 8800GT SLI beats the GTX 280 there). Except for that single benchmark, 9800GTX+ SLI places between 2nd and 4th in all the other tests.

Which is why I've been saying it's such a good value: $400 beats out the $500 9800GX2 and the $650 GTX 280, and basically matches the performance of the $600 4870CF.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: bryanW1995
the reason there's an almost "total lack of any information on 9800 gtx+ performance in sli mode" is that nobody outside of nvidia has more than one of these cards, because they're not available yet.

What the.....
 

stasdm

Junior Member
Jun 28, 2008
5
0
0
Originally posted by: Denithor
What are you guys talking about, no benchmarks on 9800GTX+ SLI? The 4870 review has two pages (page 20, page 21) of multiGPU benchmarks showing the 9800GTX+ in SLI.

And those results are an outrageous lie!!!!!
The 9800 GX2 cannot be 1.5 times faster than the 9800 GTX by definition (unless tested under different conditions)! The 9800 GX2 uses only 20-30% more bandwidth than the 9800 GTX, and the master core is connected to the slave only by an SLI link, so how can it be 50% faster?
All other tests show only a 20-25% rise, which is much more plausible.
 

HOOfan 1

Platinum Member
Sep 2, 2007
2,337
15
81
Originally posted by: stasdm

And those results are an outrageous lie!!!!!
The 9800 GX2 cannot be 1.5 times faster than the 9800 GTX by definition (unless tested under different conditions)! The 9800 GX2 uses only 20-30% more bandwidth than the 9800 GTX, and the master core is connected to the slave only by an SLI link, so how can it be 50% faster?
All other tests show only a 20-25% rise, which is much more plausible.

except for the fact that multiple sites show similar performance separation between the 9800GTX and 9800GX2

Legit Reviews

The Tech Report

Hothardware

Legion Hardware

Bit-Tech

Driver Heaven

 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: stasdm
Originally posted by: Denithor
What are you guys talking about, no benchmarks on 9800GTX+ SLI? The 4870 review has two pages (page 20, page 21) of multiGPU benchmarks showing the 9800GTX+ in SLI.

And those results are an outrageous lie!!!!!
The 9800 GX2 cannot be 1.5 times faster than the 9800 GTX by definition (unless tested under different conditions)! The 9800 GX2 uses only 20-30% more bandwidth than the 9800 GTX, and the master core is connected to the slave only by an SLI link, so how can it be 50% faster?
All other tests show only a 20-25% rise, which is much more plausible.

Ok, I'm dying to know. "By definition"? Can you "define" things for us?
By the way, you are talking with people that know these cards inside and out, so do your homework first! ;)
 

stasdm

Junior Member
Jun 28, 2008
5
0
0
Originally posted by: Keysplayr
Ok, I'm dying to know. "By definition"? Can you "define" things for us?
By the way, you are talking with people that know these cards inside and out, so do your homework first! ;)

OK!

There are three factors on which GPU productivity depends:

1. CPU speed. The GPU just cannot eat more than the CPU feeds it. I could not find any CPU-dependence comparison in the presented tests.

2. Shader availability. In some tests, results may be nearly proportional to the number of available shaders (up to a point, after which any addition adds nothing). That only says that the GPU's main processor is not working at full capacity. Try blocking PCIe lanes to see the bandwidth used and compare in SLI mode. This shows the game's demand for shaders, but not overall GPU productivity.
I also could not find REAL tests showing the main GPU's utilization.

3. GPU main unit limitations. This is common to all multi-processor systems. Approximately, it may be stated as a 0.8 rule: each added processor contributes no more than 0.8 times what the previous one did, i.e. 1+0.8+0.64... (I put numbers on this in the sketch below.) It works, of course, only when the main GPU is fully busy in the single-chip test, and this is the "by definition" case.

As the 980 chip does not use the main processor of the second/third card (or else it could not be attached the way it is in the GT2), it may gain only in the second case, i.e. in tests where shader counts weigh more than the GPU's ability to manage them. But it will add no more than 2-3% where its own speed matters.
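In numbers, the rule predicts these ceilings (this is only the model itself, not any measured card):

```python
# The 0.8 rule as a geometric series: GPU n contributes 0.8**(n-1)
# of a single GPU's productivity.
for n_gpus in range(1, 5):
    total = sum(0.8 ** i for i in range(n_gpus))
    print(f"{n_gpus} GPU(s): {total:.2f}x of one GPU")
# 1 -> 1.00x, 2 -> 1.80x, 3 -> 2.44x, 4 -> 2.95x
```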

And as Mark Twain said, there are lies, damned lies, and statistics.
 

allies

Platinum Member
Jun 18, 2002
2,572
0
71
Originally posted by: stasdm
Originally posted by: Keysplayr
Ok, I'm dying to know. "By definition"? Can you "define" things for us?
By the way, you are talking with people that know these cards inside and out, so do your homework first! ;)

OK!

There are three factors on which GPU productivity depends:

1. CPU speed. The GPU just cannot eat more than the CPU feeds it. I could not find any CPU-dependence comparison in the presented tests.

2. Shader availability. In some tests, results may be nearly proportional to the number of available shaders (up to a point, after which any addition adds nothing). That only says that the GPU's main processor is not working at full capacity. Try blocking PCIe lanes to see the bandwidth used and compare in SLI mode. This shows the game's demand for shaders, but not overall GPU productivity.
I also could not find REAL tests showing the main GPU's utilization.

3. GPU main unit limitations. This is common to all multi-processor systems. Approximately, it may be stated as a 0.8 rule: each added processor contributes no more than 0.8 times what the previous one did, i.e. 1+0.8+0.64... It works, of course, only when the main GPU is fully busy in the single-chip test, and this is the "by definition" case.

As the 980 chip does not use the main processor of the second/third card (or else it could not be attached the way it is in the GT2), it may gain only in the second case, i.e. in tests where shader counts weigh more than the GPU's ability to manage them. But it will add no more than 2-3% where its own speed matters.

And as Mark Twain said, there are lies, damned lies, and statistics.

No offense, but what the heck are you talking about???
 

HOOfan 1

Platinum Member
Sep 2, 2007
2,337
15
81
Originally posted by: allies

No offense, but what the heck are you talking about???

He doesn't know....

there is a preponderance of evidence out there that proves he is wrong...I already linked to it.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: stasdm
Originally posted by: Denithor
What are you guys talking about, no benchmarks on 9800GTX+ SLI? The 4870 review has two pages (page 20, page 21) of multiGPU benchmarks showing the 9800GTX+ in SLI.

And those results are an outrageous lie!!!!!
The 9800 GX2 cannot be 1.5 times faster than the 9800 GTX by definition (unless tested under different conditions)! The 9800 GX2 uses only 20-30% more bandwidth than the 9800 GTX, and the master core is connected to the slave only by an SLI link, so how can it be 50% faster?
All other tests show only a 20-25% rise, which is much more plausible.

It's a nice little theory, but theory and simulation are not reality.
The reality is that this is how it tests out; therefore there are other factors your theory does not take into account, and the theory should be revised.
 

stasdm

Junior Member
Jun 28, 2008
5
0
0
PS to my previous post.

The GT2 could outperform two GTXs in some group 2 tests only if tested on an NVIDIA 780i/790i chipset with both GTX cards in the northbridge slots. That will necessarily block at least 4 lanes of the northbridge connection (really only one x16 connection), giving the GT2 a bandwidth handicap. But if the second card is installed in the southbridge slot, or the test is run on an Intel X48 chipset, the pair will outperform the GT2 by roughly the shader speed difference.
 

HOOfan 1

Platinum Member
Sep 2, 2007
2,337
15
81
What is a GT2? There is a card called a GX2...but not a card called a GT2.

Besides, who ever posted anything about the GX2 beating dual 9800GTXs?

In the links you are whining about being "lies by definition", the SLI'd 9800GTX+ beat the 9800GX2 in all but 2 games... in one of those games it matched the GX2, and in the other game obviously something in the drivers or SLI scaling is not working....

go look at the benchmarks again....
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: stasdm
PS to my previous post.

The GT2 could outperform two GTXs in some group 2 tests only if tested on an NVIDIA 780i/790i chipset with both GTX cards in the northbridge slots. That will necessarily block at least 4 lanes of the northbridge connection (really only one x16 connection), giving the GT2 a bandwidth handicap. But if the second card is installed in the southbridge slot, or the test is run on an Intel X48 chipset, the pair will outperform the GT2 by roughly the shader speed difference.

There is not a single nvidia card, of any type or make, that could outperform two GTX 280s...
unless you mean two GX2s in SLI (for four GPUs) under your "theoretical" performance... that, or you mean compared to a 9800GTX instead....

"GTX" is no longer sufficient to describe a card: there is the 8800 (but we know you don't mean it), the 9800, the 9800+, the 260, and the 280... so with 5 individual GPUs carrying the GTX moniker, you can no longer just call it a "GTX".
 

stasdm

Junior Member
Jun 28, 2008
5
0
0
Sorry for some typos in my late-night post (it was 02:50, by the way).

1. I meant the 9800 GX2 vs. 9800 GTX comparison, where some tests give the win to the 9800 GX2 over a pair of SLI'd 9800 GTXs. This is not a driver mistake but a testing mistake. The NVIDIA 790i has two PCIe v2.0 x16 wired connections at the northbridge, but only one PCIe v1.1 x16 internal channel. Just plug any x8 card (even an unused one) into one of the slots and you'll see the difference.

2. The 200 chip is quite another game: in SLI mode they use this interconnection.

3. I'm not quite sure, but there are some indications that the 9800 GTX+ also uses this interconnection, which suggests it is based on a heavily "cut-down" 200 chip.