HardOCP 6990+6970 CF vs 580 Tri SLI


zerocool84

Lifer
Nov 11, 2004
36,041
472
126
A lot of people seem to forget that the super-expensive stuff that no one can afford still gets people to look at that company. Why do people care about how fast a Porsche 911 Turbo is compared to a Nissan GT-R? 99% of people can't afford those cars, but it's those types of cars that get people to like that company. It's just e-peen; Nvidia fans do it, ATI fans do it, AMD fans do it, Intel fans do it.
 

Aristotelian

Golden Member
Jan 30, 2010
1,246
11
76
The 580 @ 1.5GB is enough for 2560x1600 30-inch monitors; some people decide to get 2 or 3 of these cards for added fps. Or for 3D.

One GTX580 1.5GB is most certainly not enough for 2560x1440, let alone x1600, for gamers who like eye candy as well. I'm stating this for people interested in that setup. I have 2x 580s and there are games where I need to turn down settings (Dragon Age 2 being an obvious example) or else have intermittent fps dips (Crysis 2 at max settings) and so forth. If you're sensitive to min fps like I am, and enjoy max eye candy like I do, then 1x GTX580 1.5GB at 2560x1440 or 1600 just won't cut it.
 

NoQuarter

Golden Member
Jan 1, 2001
1,006
0
76
I'll step in again and disagree there: if most people game at 1080p, then the field is pretty even. One of the things I've noticed is that what eats VRAM is AA/MSAA, etc. At very high resolutions, nV and AMD are pretty equal if you don't involve mind-bending settings like [H] does. If you triple that, then you get into the issue of CF/SLI scaling, which is another horse entirely. Single-card CF scales badly over 3+ cards, while 2-card (6990+6970) CF is the shiznit.
For what it's worth, 6990+6970 operates as 3-GPU CF, so 3x 6970 should work equally well. Even though it's only 2 physical cards, the drivers are still running 3x CF.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
Their conclusion that the Tri-Fire system did not scale well with the increase in CPU performance is a bit off. The reason the 3-way 580s performed better is the use of the NF200. It's an nVidia chip that is really made with SLI in mind, not Crossfire.
The NF200 is basically a bridge chip that duplexes PCI-e lanes; it allows greater bandwidth whether it's CrossFire, SLI, or a RAID card.
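To make "duplexes PCI-e lanes" concrete: the NF200 is essentially a PCI-e switch, so each downstream slot can burst at full x16 speed even though all traffic still funnels through a single upstream link to the chipset. A rough back-of-the-envelope model (assuming PCI-e 2.0 rates of ~500 MB/s per lane per direction; the real arbitration is more complicated than a simple min()):

```python
# Rough model of a "lane-duplexing" bridge chip like the NF200: downstream
# slots can each see a full x16 link, but simultaneous traffic still shares
# the single upstream link to the chipset.
# Assumes PCI-e 2.0 (~500 MB/s per lane, per direction).

PCIE2_MBPS_PER_LANE = 500  # MB/s per lane, per direction (PCI-e 2.0)

def slot_peak_bandwidth(lanes: int) -> int:
    """Peak bandwidth a single slot can burst to, in MB/s."""
    return lanes * PCIE2_MBPS_PER_LANE

def aggregate_bandwidth(slot_lanes: list[int], upstream_lanes: int = 16) -> int:
    """Sustained combined bandwidth, capped by the upstream link."""
    downstream = sum(slot_lanes) * PCIE2_MBPS_PER_LANE
    upstream = upstream_lanes * PCIE2_MBPS_PER_LANE
    return min(downstream, upstream)

# Two x16 slots behind the bridge: each can burst to full x16 speed...
print(slot_peak_bandwidth(16))        # 8000 MB/s per slot
# ...but together they still share one x16 uplink to the chipset.
print(aggregate_bandwidth([16, 16]))  # 8000 MB/s combined
```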
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
Uh... call me old-fashioned, but what the extreme niche top end is doing on 3 monitors is not how I make a purchase, unless I have that setup.

I look at my budget, my resolution, and pick the best card(s) based on multiple reviews on the games I play or plan on playing.


If you make your card choice based on what $1500 worth of product can do on $1k worth of monitors, I guess that is your decision...

Still doesn't mean the review is useless. Reviews like these can help people like dac7nco make a good purchase. I'm sure he wouldn't be too happy if he had to blindly spend $1000+ on a setup and hope he picked the right one.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
The NF200 is basically a bridge chip that duplexes PCI-e lanes; it allows greater bandwidth whether it's CrossFire, SLI, or a RAID card.

Obviously there is something wrong with the setup and the AMD cards.

One thing would be the NVIDIA system getting higher performance increases while AMD wouldn't - that would be perfectly legit.

Another thing is the AMD setup getting a performance decrease compared to a slower CPU.

Why that is, I don't know, but something isn't right.

Other than that, it seems the trend continues: AMD cards don't require big CPUs to reach their potential, while NVIDIA wants every extra MHz.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
Obviously there is something wrong with the setup and the AMD cards.

One thing would be the NVIDIA system getting higher performance increases while AMD wouldn't - that would be perfectly legit.

Another thing is the AMD setup getting a performance decrease compared to a slower CPU.

Why that is, I don't know, but something isn't right.

Other than that, it seems the trend continues: AMD cards don't require big CPUs to reach their potential, while NVIDIA wants every extra MHz.
The NF200 operates off the BIOS; there is no special driver. It's more of a conflict in that one game.
The conflict arises, IMO, because the 6990's dual GPUs share the 1st PCI-e slot on the motherboard.
On the 1366 system, the AMD setup saw 16x lanes for the 6990 and 16x lanes for the 6970.
The 3 GTX 580s saw 16x, 16x, 8x.
There is a bridge chip on the 6990 that handles communication between the 2 GPUs sharing the single slot.
Something screwy is obviously happening in that one game with that combination of hardware.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
The NF200 operates off the BIOS; there is no special driver. It's more of a conflict in that one game.
The conflict arises, IMO, because the 6990's dual GPUs share the 1st PCI-e slot on the motherboard.
On the 1366 system, the AMD setup saw 16x lanes for the 6990 and 16x lanes for the 6970.
The 3 GTX 580s saw 16x, 16x, 8x.
There is a bridge chip on the 6990 that handles communication between the 2 GPUs sharing the single slot.
Something screwy is obviously happening in that one game with that combination of hardware.

I didn't say whether it was the hardware or not - part of the setup is drivers and profiles.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Their conclusion that the Tri-Fire system did not scale well with the increase in CPU performance is a bit off. The reason the 3-way 580s performed better is the use of the NF200. It's an nVidia chip that is really made with SLI in mind, not Crossfire.

I was reading the comments - apparently their original motherboard was PCI-E x4 for the 3rd slot used by the 3rd GTX 580? The Radeons didn't need that, as they used only two slots due to one of them being a dual card.

The new motherboard gives 16, 8 & 8 for the GeForces or 16 & 16 for the Radeons.

Perhaps that is why the NF200 is better - it's nothing to do with the color of the graphics card, it just provides better bandwidth for 3 cards.
 

AdamK47

Lifer
Oct 9, 1999
15,676
3,529
136
Did they run the Radeons in native PCI-E 8x/8x mode or were they using the NF200 for the 16x/16x?
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
The ASUS P8P67 WS Revolution has 4 PCI-e x16 slots; two of them have 16 lanes and the other two have 8 lanes, so we have 2x 16 and 2x 8.

http://www.asus.com/Motherboards/Intel_Socket_1155/P8P67_WS_Revolution/#specifications

I think the NF200 has 32 lanes, hence you can either use both blue slots (16 + 16) or one blue slot and both back slots (16 + 8 + 8).

Anyway, the net effect is, before:
GeForces had (16 + 8 + 4)
Radeons (16 + 8)

now:
GeForces get (16 + 8 + 8)
Radeons (16 + 16)

I suspect [H] will say the x4 lane made no difference, but then until this review came out they would have told you that using a faster CPU made no difference either. An obvious further test would have been to underclock the new CPU to see if the GeForces really were only CPU-bound, or if that x4 slot was in fact stuffing things up too.
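For anyone keeping score, here is the lane math from this post in one place (these are the figures quoted in the thread - later posts revise the old-board numbers - not anything measured). A minimal sketch:

```python
# Lane allocations as described in the thread; the "old board" figures are
# this post's numbers and are revised by later posts, so treat them as
# quoted claims rather than measured data.

configs = {
    "old board": {
        "3x GTX 580":  [16, 8, 4],
        "6990 + 6970": [16, 8],
    },
    "new board (NF200)": {
        "3x GTX 580":  [16, 8, 8],
        "6990 + 6970": [16, 16],
    },
}

for board, setups in configs.items():
    for setup, lanes in setups.items():
        layout = "/".join(f"x{n}" for n in lanes)
        print(f"{board:18} {setup:12} {layout:10} ({sum(lanes)} lanes total)")
```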
 

BrentJ

Member
Jul 17, 2003
135
6
76
www.hardocp.com
I was reading the comments - apparently their original motherboard was PCI-E x4 for the 3rd slot used by the 3rd GTX 580? The Radeons didn't need that, as they used only two slots due to one of them being a dual card.

The new motherboard gives 16, 8 & 8 for the GeForces or 16 & 16 for the Radeons.

Perhaps that is why the NF200 is better - it's nothing to do with the color of the graphics card, it just provides better bandwidth for 3 cards.

'ding, we have a winrar

AMD cards were running at x16/x16 on the new system
NV cards were running at x16/x8/x8 on the new system
 

AdamK47

Lifer
Oct 9, 1999
15,676
3,529
136
'ding, we have a winrar

AMD cards were running at x16/x16 on the new system
NV cards were running at x16/x8/x8 on the new system

So the Radeons were running on the NF200 chip then. I'm curious to see how that compares to the native x8/x8 PCI-E mode. I'm willing to bet the cards perform better without the NF200.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
I think the NF200 has 32 lanes, hence you can either use both blue slots (16 + 16) or one blue slot and both back slots (16 + 8 + 8).

Anyway, the net effect is, before:
GeForces had (16 + 8 + 4)
Radeons (16 + 8)

now:
GeForces get (16 + 8 + 8)
Radeons (16 + 16)

I suspect [H] will say the x4 lane made no difference, but then until this review came out they would have told you that using a faster CPU made no difference either. An obvious further test would have been to underclock the new CPU to see if the GeForces really were only CPU-bound, or if that x4 slot was in fact stuffing things up too.

Well, at best it made a 3.8% difference, assuming that the tests showing the lowest CPU scaling are GPU-limited.
Since the lowest improvement is 3.8%, it seems unlikely that x4 vs x8 made much difference, although it's not impossible, since not all games respond exactly the same way to PCIe bandwidth changes.
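To spell out the arithmetic behind that 3.8% floor (the FPS numbers below are invented purely for illustration; only the percentage comes from the review):

```python
# Worked example of the scaling comparison being made. The 52.6 -> 54.6 FPS
# pair is hypothetical; it is just one pair of numbers that produces the
# ~3.8% gain quoted above.

def scaling_gain(old_fps: float, new_fps: float) -> float:
    """Percent FPS gain from the old test bed to the new one."""
    return (new_fps / old_fps - 1.0) * 100.0

# A GPU-limited title improving this little after the platform swap marks
# the "floor" for what the faster CPU (and the x4 -> x8 slot change) did:
print(f"{scaling_gain(52.6, 54.6):.1f}%")  # -> 3.8%
```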
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
It's going to get interesting when Bulldozer arrives; IMO, Bulldozer will have to be compared to 1155 and the old 1366 system.
The 990FX has plenty of PCI-e lanes - that's never been the bottleneck. Will the CPU have the grunt that a game like F1 seems to need?
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
I think the NF200 has 32 lanes, hence you can either use both blue slots (16 + 16) or one blue slot and both back slots (16 + 8 + 8).

Anyway, the net effect is, before:
GeForces had (16 + 8 + 4)
Radeons (16 + 8)

now:
GeForces get (16 + 8 + 8)
Radeons (16 + 16)

I suspect [H] will say the x4 lane made no difference, but then until this review came out they would have told you that using a faster CPU made no difference either. An obvious further test would have been to underclock the new CPU to see if the GeForces really were only CPU-bound, or if that x4 slot was in fact stuffing things up too.

Yes, you are right: the P67 has 16 lanes plus 32 from the NF200.

In the first review they used the MSI Eclipse SLI (X58), which has 32 lanes, so the HD6990+HD6970 was on 16 + 16 and 3x SLI was on 16+16+4 or 16+8+4.
 

AdamK47

Lifer
Oct 9, 1999
15,676
3,529
136
Yes, you are right: the P67 has 16 lanes plus 32 from the NF200.

In the first review they used the MSI Eclipse SLI (X58), which has 32 lanes, so the HD6990+HD6970 was on 16 + 16 and 3x SLI was on 16+16+4 or 16+8+4.

16-8-8
 

SolMiester

Diamond Member
Dec 19, 2004
5,330
17
76
So now that the [H] review has been fixed, we can summarise that the AMD Tri-Fire (6990+6970) setup is slower, hotter and louder than the 580 Tri-SLI setup...!? But cheaper!
 
Feb 19, 2009
10,457
10
76
So now that the [H] review has been fixed, we can summarise that the AMD Tri-Fire (6990+6970) setup is slower, hotter and louder than the 580 Tri-SLI setup...!? But cheaper!

No, we can conclude that the setup with the NV chipset screwed AMD's Tri-Fire scaling, horribly. Worst was F1, with a big FPS reduction. BC2 was a slight reduction in min/average fps. If it's doing that in games that are vendor-neutral, it's probably limiting the ability of CF to scale in some way... especially considering the big CPU performance difference.
 

ActionParsnip

Junior Member
Apr 15, 2009
8
0
0
Most of my experience at [H] has been snarky mods trolling you for a ban. Actually, there's one mod, a self-proclaimed engineer, who deletes my account every time he sees it because he lost a polite science argument. It seems the other mods keep validating my new accounts, fully knowing my IP, because they know he's full of sh-t and has no grounds for banning me.

Yes, yes, I'm sure that's exactly how it happened...


...what probably happened: arsey poster decided he knew better than the editors and began preaching to them. Poster strayed off-topic, which gets you noticed and warned. All posters are warned once. Poster was banned and has tried to re-register. Poster mystified that changing nick/email address did not grant him forum access once more.

Not true? OK, then direct us to your ban-inducing thread on the HardForum.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
No, we can conclude that the setup with the NV chipset screwed AMD's Tri-Fire scaling, horribly. Worst was F1, with a big FPS reduction. BC2 was a slight reduction in min/average fps. If it's doing that in games that are vendor-neutral, it's probably limiting the ability of CF to scale in some way... especially considering the big CPU performance difference.

It's a conspiracy, I tell you...

All joking aside, I doubt a simple bridge chip could impact CrossFire scaling. Plus, it was only in one game, while the others were within the margin of error.