How does my 7900gt SLI setup compare to ATI x1900 series cards in shader intensive games?


evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: Cookie Monster
Originally posted by: MonkeyFaces
I have had one major advantage with SLI, which is image quality. Using SLI anti-aliasing, you can force applications to render much smoother textures, although this will reduce performance significantly on top of the already weak shader performance.

By no means does nVIDIA have weak shader performance. Considering nVIDIA's 24 pixel shaders do just fine against the 48 pixel shaders of the R580, you can't say they have weak shader performance.

If you compare the 7900GTX against the X1950XTX in a pure shader benchmark, ATi will simply crush the 7900GTX, not only because the ATi solution has more shaders, but also because of its efficiency: its smaller pixel shader granularity improves performance when using dynamic branching. But that doesn't mean the same will be reflected in games. You can see that they perform very closely in many games; there are so many variables like polygon count, texture fillrate and shader fillrate that only the most efficient architecture will have the edge in performance. Having more shaders or pixel pipelines doesn't necessarily mean more performance: look at the XGI Volari Duo V8, with 16 pixel pipelines, losing miserably against the 8-pipeline 6600GT, or the GeForce FX, with a higher pixel fillrate and higher shader instruction count, losing miserably against the 9600 PRO in DX9 titles like F.E.A.R. and BF2.
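Just to put rough numbers on that (back-of-the-envelope only, treating every shader unit as equal, which real games never do, and assuming the ~650MHz stock core clocks on both cards):

# Back-of-the-envelope peak shader math. Unit counts are from the post above;
# the 650MHz clocks are my assumption. "Ops" is just units x clock, ignoring
# co-issue, texture stalls and compiler efficiency, so real games look nothing
# like this.
cards = {
    "7900GTX  (G71)":  {"pixel_shader_units": 24, "core_mhz": 650},
    "X1950XTX (R580)": {"pixel_shader_units": 48, "core_mhz": 650},
}

for name, spec in cards.items():
    peak_ops = spec["pixel_shader_units"] * spec["core_mhz"] * 1_000_000
    print(f"{name}: ~{peak_ops / 1e9:.1f} billion shader ops/sec (theoretical)")

On paper that's a 2x gap, which is exactly why the near-parity you see in actual games tells you the bottleneck is usually somewhere else.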
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: evolucion8
Originally posted by: Wreckage
Originally posted by: evolucion8

Remember that if the game doesn't have an SLI profile, there simply won't be a boost in performance.

ATI also relies on profiles. Often, if a game does not have a profile, the default render mode will not give you a boost either, and in some cases it can run worse than a single card.

ATi's approach with CrossFire profiles is to select the frame rendering method that gives the best performance in a game. If for some reason your game is not in the profiles, you can still take advantage of it with the afrfriendlyd3d executable-rename trick, or through the Catalyst Control Center option. nVIDIA's SLI doesn't have as many options in frame rendering modes, so their approach relies on just letting the game run in SLI. CrossFire is more flexible.

Firstly,

ATi's approach with CrossFire profiles is to select the frame rendering method that gives the best performance in a game.

You do realise this is exactly what an nVIDIA SLI profile does; you have it the other way around. nVIDIA has many SLI rendering modes, from SFR to AFR to AFR2, etc. The SLI profiles let the cards render in the best mode for that type of game, while also picking up performance boosts here and there from optimisations. As nVIDIA has over 300 SLI profiles, their support in this area is really broad. This is why SLI is MUCH more mature in terms of stability and performance.
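Conceptually a profile is nothing fancier than a lookup keyed on the game's executable that picks the rendering mode and some tweak bits; roughly like this (made-up entries and values, obviously not nVIDIA's real driver format):

# Toy illustration of what an SLI-style profile boils down to: the driver
# matches the game's exe name and picks a multi-GPU rendering mode plus
# compatibility tweaks. Every entry and bit value here is invented.
PROFILES = {
    "oblivion.exe": {"mode": "AFR",  "compat_bits": 0x02},
    "fear.exe":     {"mode": "AFR2", "compat_bits": 0x00},
    "prey.exe":     {"mode": "SFR",  "compat_bits": 0x01},
}

def pick_render_mode(exe_name: str) -> dict:
    # No profile -> fall back to single-GPU rendering, i.e. no SLI boost.
    return PROFILES.get(exe_name.lower(), {"mode": "single-GPU", "compat_bits": 0x00})

print(pick_render_mode("Oblivion.exe"))     # profiled game -> AFR
print(pick_render_mode("somenewgame.exe"))  # unknown game -> no boost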

On the other hand you've got CrossFire. ATi advertised that CrossFire needs NO profiles (something you contradicted). They defaulted to the SuperTiling rendering method, which in a lot of cases showed hardly any performance gains, opposite to what they were claiming. ATi misled people about the need for profiles, and CrossFire suffered a lot because of it. It was unstable (games were buggy and glitchy, and CrossFire also suffered SLI limitations such as vsync issues), you often wouldn't see performance gains, and then there's the whole dongle and master card situation. Their support lagged behind nVIDIA's, as can be seen from users having to rename game .exe files themselves to enable AFR and get the correct performance. CrossFire does have profiles now, but very few of them compared to nVIDIA's hundreds. The only real advantage of CrossFire is Super AA: thanks to the compositing engine, the performance hit is much smaller than with SLI AA.

This is really generalised, but I could dig up the articles and benchmarks if you think otherwise. SLI is very mature (for the 6 and 7 series) and CrossFire still has some catching up to do.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: evolucion8
Originally posted by: Cookie Monster
Originally posted by: MonkeyFaces
I have had one major advantage with SLI, which is image quality. Using SLI anti-aliasing, you can force applications to render much smoother textures, although this will reduce performance significantly on top of the already weak shader performance.

By no means does nVIDIA have weak shader performance. Considering nVIDIA's 24 pixel shaders do just fine against the 48 pixel shaders of the R580, you can't say they have weak shader performance.

If you compare the 7900GTX against the X1950XTX in a pure shader benchmark, ATi will simply crush the 7900GTX, not only because the ATi solution has more shaders, but also because of its efficiency: its smaller pixel shader granularity improves performance when using dynamic branching. But that doesn't mean the same will be reflected in games. You can see that they perform very closely in many games; there are so many variables like polygon count, texture fillrate and shader fillrate that only the most efficient architecture will have the edge in performance. Having more shaders or pixel pipelines doesn't necessarily mean more performance: look at the XGI Volari Duo V8, with 16 pixel pipelines, losing miserably against the 8-pipeline 6600GT, or the GeForce FX, with a higher pixel fillrate and higher shader instruction count, losing miserably against the 9600 PRO in DX9 titles like F.E.A.R. and BF2.

What has dynamic branching (performance) got to do with shader performance? No games even use dynamic branching.

Do you have any idea what you are talking about there?

Polygon count?! What has that got to do with shader performance? This is the first I've heard of it.

Texture fillrate and shader fillrate both depend directly on the number of pixel pipelines (and therefore pixel shaders) and texture mapping units. They may be variables to some extent, but to my knowledge they remain constant for a given card. The main factor in shader performance is how efficient the shader algorithms used are.

Some of the examples you gave me make no sense whatsoever. :confused:
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: Cookie Monster
Originally posted by: evolucion8
Originally posted by: Cookie Monster
Originally posted by: MonkeyFaces
I have had one major advantage with SLI, which is image quality. Using SLI anti-aliasing, you can force applications to render much smoother textures, although this will reduce performance significantly on top of the already weak shader performance.

By no means does nVIDIA have weak shader performance. Considering nVIDIA's 24 pixel shaders do just fine against the 48 pixel shaders of the R580, you can't say they have weak shader performance.

If you compare the 7900GTX against the X1950XTX in a pure shader benchmark, ATi will simply crush the 7900GTX, not only because the ATi solution has more shaders, but also because of its efficiency: its smaller pixel shader granularity improves performance when using dynamic branching. But that doesn't mean the same will be reflected in games. You can see that they perform very closely in many games; there are so many variables like polygon count, texture fillrate and shader fillrate that only the most efficient architecture will have the edge in performance. Having more shaders or pixel pipelines doesn't necessarily mean more performance: look at the XGI Volari Duo V8, with 16 pixel pipelines, losing miserably against the 8-pipeline 6600GT, or the GeForce FX, with a higher pixel fillrate and higher shader instruction count, losing miserably against the 9600 PRO in DX9 titles like F.E.A.R. and BF2.

What has dynamic branching (performance) got to do with shader performance? No games even use dynamic branching.

Do you have any idea what you are talking about there?

Polygon count?! What has that got to do with shader performance? This is the first I've heard of it.

Texture fillrate and shader fillrate both depend directly on the number of pixel pipelines (and therefore pixel shaders) and texture mapping units. They may be variables to some extent, but to my knowledge they remain constant for a given card. The main factor in shader performance is how efficient the shader algorithms used are.

Some of the examples you gave me make no sense whatsoever. :confused:

If you had read the whole thread, you would know that I was talking about the limiting factors on FPS in many games; if you don't know what I'm talking about, you shouldn't bash me. Dynamic branching is an SM 3.0 feature: it lets you reduce shader overhead when using long shaders, increasing performance by letting the hardware skip ahead in the shader. It hardly helps anyway; it's just too hard to predict a floating point calculation in a game. And you won't necessarily get more performance by adding pixel pipelines; that's just the standard way to increase performance, but there's more under the hood, like algorithms, data flow, calculations, etc. Do I have to recall again that the XGI Volari Duo V8, with its 16 pixel pipelines, didn't fare very well against other midrange cards? Do I have to recall that the 7900GTX, with its 24 pixel pipelines and 24 shader pipelines, performs pretty much the same as the 16 pixel pipelines and 48 shaders of the Radeon X1950XTX? I'm talking about general performance in games. Or are you going to tell me that shaders are the only limiting factor in a game? Like a game is made only of shaders? Confused?!
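To make the dynamic branching point concrete, here is a rough CPU-side sketch of what an SM 3.0 branch lets a long shader do (pure illustration, not real shader code): pixels outside a light's radius take the cheap path and skip the expensive math. The catch is that GPUs shade pixels in batches, so a batch only gets the saving when every pixel in it takes the same side of the branch, which is where ATi's smaller branching granularity helps.

import math

def expensive_lighting(px, py, light):
    # Stand-in for a long chain of shader instructions.
    lx, ly, intensity = light
    dist = math.hypot(px - lx, py - ly)
    return intensity / (1.0 + dist * dist)

def shade_pixel(px, py, light, radius):
    lx, ly, _ = light
    if math.hypot(px - lx, py - ly) > radius:
        return 0.0  # dynamic branch: cheap early-out, expensive path skipped
    return expensive_lighting(px, py, light)

light = (8.0, 8.0, 100.0)
frame = [[shade_pixel(x, y, light, radius=6.0) for x in range(16)] for y in range(16)]
lit = sum(1 for row in frame for v in row if v > 0.0)
print(f"{lit} of 256 pixels took the expensive path")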
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
Originally posted by: ArchAngel777
Originally posted by: evolucion8
Originally posted by: ArchAngel777
Originally posted by: evolucion8
Originally posted by: josh6079
The X1900XT(X) will do pretty well, but a 7900GT SLI setup will almost always crank out more frames - Click


That article is quite old; with recent driver optimizations the gap has closed, though SLI scales a bit better than CrossFire anyway. But if you look here you will see that the gap is no longer what it used to be:
http://www23.tomshardware.com/graphics....elx=33&model1=520&model2=546&chart=219
http://www23.tomshardware.com/graphics....elx=33&model1=520&model2=546&chart=203
http://www23.tomshardware.com/graphics....elx=33&model1=520&model2=546&chart=215
http://www23.tomshardware.com/graphics....elx=33&model1=520&model2=546&chart=223
You can search for more and compare.


There is something majorly wrong with those benchmarks. Let us take a look at them.

First of all, compare a 7900 GT in SLI to a 7800 GTX; the two are essentially the same card (the 7900 GT is a tad faster, actually). Now, take a look... not much of a difference in any of the games, is there? It is almost as if SLI is broken in those tests. I wouldn't TRUST Tom's Hardware after looking through these; something wasn't tested correctly.

LOL, check this out... turn the benchmark to Half Life 2 at the highest settings and compare the 7800 GTX to the 7800 GTX in SLI, and you get a huge 0 frames per second difference! In fact, SLI does nothing at all according to them! :thumbsdown: I would be very careful about ever linking to those again.

Edit ** I compared two identical cards, one in SLI, the other not, and came up with these results on their website:

41% Increase from SLI in Prey (highest settings)

33% Increase from SLI in Oblivion (highest settings)

0% Increase from SLI in Half Life 2 (highest Settings)

0% Increase from SLI in Hard Truck Apoc (highest settings)

77% Increase from SLI in 3d Mark 2006 (highest settings)

Now, I am NOT an owner of an SLI system, but I am pretty damn sure that 1) Half Life 2 and Hard Truck Apoc will see an increase in framerate no matter what, and 2) the increases in Prey and Oblivion are quite low... I think they scale better than that in those two games. Again, I wouldn't trust these benchmarks one bit.
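(For reference, those percentages are just SLI fps over single-card fps, minus one; the fps values below are made up purely to show the arithmetic.)

def sli_gain(single_fps: float, sli_fps: float) -> float:
    # Percentage increase from adding the second card.
    return (sli_fps / single_fps - 1.0) * 100.0

print(f"{sli_gain(58, 82):.0f}%")  # ~41%, like their Prey number
print(f"{sli_gain(60, 60):.0f}%")  # 0%, like their Half Life 2 result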

Remember that if the game doesn't have an SLI profile, there simply won't be a boost in performance. Look at the 6800GT in SLI: no new games have been added to the SLI profiles for the 6800 series (correct me if I'm wrong), and that's why in newer games even a single Radeon X1800GTO can outperform it. My friend bought a pair of Radeon X1650XTs, set them up in CrossFire, and they run faster in games like Oblivion and F.E.A.R., even though a single 6800GT will easily outperform a single X1650XT.


I don't know enough about SLI to confirm your statement, but I was under the impression you can force alternate frame rendering on any game you want; it just won't give as big a performance advantage as the other methods do in some games.

If what you say about SLI is true, then I consider it a rather worthless product. I mean no offense to those who own it, but if a game has to be specifically written for or supported by it to see an increase, then I don't see much value in it.


Besides, HL2 does indeed support SLI, so those numbers are bogus, and I was pretty damn certain it scaled better than Tom's HW showed.

You can force any SLI rendering mode you like on any app. Having said that, in practice you usually just set AFR2 and, if necessary, tweak the compatibility bits. Also, on G71 with recent drivers SLI AA is a worthwhile option (some games get a very noticeable boost from SLI AA).
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
CrossFire is more flexible.
Hardly.

Especially when you take into account that not just any two ATi GPUs can be paired together, even if they are the same model. ATi's Master/Slave card circus limits what you can run CrossFire with. When the limiting factor (almost always the Master card) is either too expensive or nowhere to be found, an SLI setup that can be built from any two cards of the same model is much more "flexible".

It's getting there, but right now I'd still go SLI over CrossFire.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: josh6079
CrossFire is more flexible.
Hardly.

Especially when you take into account that not just any two ATi GPUs can be paired together, even if they are the same model. ATi's Master/Slave card circus limits what you can run CrossFire with. When the limiting factor (almost always the Master card) is either too expensive or nowhere to be found, an SLI setup that can be built from any two cards of the same model is much more "flexible".

It's getting there, but right now I'd still go SLI over CrossFire.

But you forgot that ATi has updated their CrossFire platform with the Radeon X1950 PRO, a dongleless solution that incorporates the communication logic on the GPU itself, so no more dongles or master cards, and that advancement will be carried over to future GPUs too.
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
But you forgot that ATi has updated their CrossFire platform with the Radeon X1950 PRO...
No, they did not update their entire CrossFire "platform" just because they did it with one model. If you buy an X1900XT(X) or X1950XTX you still abide by the Master/Slave card requirement. CrossFire is getting better, and the X1950 Pro shows that; however, just because ATi can do it with one model doesn't make it more flexible than Nvidia's solution, which does the same across several models.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: josh6079
But you forgot that ATi has updated their CrossFire platform with the Radeon X1950 PRO...

No, they did not update their entire CrossFire "platform" just because they did it with one model. If you buy an X1900XT(X) or X1950XTX you still abide by the Master/Slave card requirement. CrossFire is getting better, and the X1950 Pro shows that; however, just because ATi can do it with one model doesn't make it more flexible than Nvidia's solution, which does the same across several models.

Remember that the CrossFire implemented on the X1950XTX is older than the one implemented on the X1950 PRO, so obviously, if you buy an X1950XTX, the older version is what you get, whether you buy it today or three months ago. To implement the new CrossFire in the X1950XTX and below, changes would have to be made to the GPU itself, which would mean another R580+-style respin; that's simply not worth it, since it would require another product launch with the R600 just around the corner. So if I say that the CrossFire platform has been updated, it's because its latest iteration is better than the one before and will be the one used going forward; the older ones are just a thing of the past, clear?
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
So if I say that the CrossFire platform has been updated, it's because its latest iteration is better than the one before and will be the one used going forward; the older ones are just a thing of the past, clear?
Yes, that's clear, but it still doesn't support your original claim that CrossFire was more flexible than SLI.

Saying that CrossFire has been updated only on one card and shows signs of being more like SLI in the future doesn't mean that it's more flexible than SLI now.
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
It still doesn't matter. CrossFire without a master card currently only exists on one model, which is too low-end to make doubling up worthwhile. For all intents and purposes, a reasonable CrossFire setup still requires a master card.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: ArchAngel777
Originally posted by: wizboy11
Originally posted by: MonkeyFaces
Originally posted by: wizboy11
Overclock them like mine and you won't have a problem ;) (look at sig)

Overclocking won't give me a realistic performance boost (maybe up to 10 fps, but I still won't be playing shader intensive games maxed out). To get your clocks, I would need a voltmod, and I am really not up for physically modifying my cards' PCBs, though I probably do have adequate cooling with my two VF900s. Still, every way I look at it, I should have opted for an ATI video card. However, at the time, a $270 7900GT looked like a much better buy than a $400 X1900XT.

I'm still wondering whether it's rare not to be able to play many of the newest games at max settings at 1280x1024 with SLI (AA+AF maxed too). I was expecting a little more, because two-card setups were supposedly overkill for my resolution. I only added the second card for future-proofing, but they are showing their age pretty quickly.
Regarding AA and AF, do ATI cards take the same performance hit with them turned on compared to Nvidia cards? I recall reading that the Xbox 360 GPU has the ability to crank them out without a loss in frames; I'm just wondering if it's the same for ATI's video cards.

The reason the Xbox 360 can do that is because of the (I think) 16MB of on-die RAM, or something to that effect.

ATI cards do have a similar performance decrease, although sometimes it is slightly less, but then you're just splitting hairs.

You're fine with what you have. I'm surprised that your cards don't perform that well at your resolution. Can you give an example of a game?

Also, it really isn't that hard to vmod the card. Just get some special ink and some time, and you can do it with the cards in your case if you choose to. The safe part is the vmod on the core; what killed everyone's cards was the vmod on the memory (which is why my memory isn't clocked as high as some other vmodded 7900GTs).

Take a look at the 7900GT Vmod thread.
I'd recommend you do the 1.4 or 1.45 vmods. Anything over that is overkill on the voltage.

Trust me, going up by 100MHz+ on the core can do wonders.

For all of $20 worth of parts (less if you have them already), you can make your cards faster (noticeably faster, too).
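(Rough math on that, assuming the usual ~450MHz stock 7900GT core: a +100MHz bump is roughly 22% more of every clock-bound rate, fillrate included.)

# Quick sanity check on the core overclock; the 450MHz stock clock is an assumption.
stock_mhz, oc_mhz = 450, 550
gain = (oc_mhz / stock_mhz - 1.0) * 100.0
print(f"+{oc_mhz - stock_mhz}MHz on the core = ~{gain:.0f}% more clock-bound throughput")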


The Xbox 360 has 10MB of EDRAM. However, there are arguments over whether this is actually used in even a single Xbox 360 game. I won't get into the details, but if you do a search I am sure you will find much more than you wanted to know.

I believe Gears of War does in fact utilize it.
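For what it's worth, here is the back-of-the-envelope math on why that 10MB figure gets debated (assuming a 720p target with 4 bytes of colour plus 4 bytes of depth/stencil per sample):

# Why 10MB of EDRAM is enough at 720p without AA but not with it.
# The per-sample byte counts are assumptions for illustration.
width, height = 1280, 720
bytes_per_sample = 4 + 4   # colour + depth/stencil
edram_mb = 10

for msaa in (1, 2, 4):
    size_mb = width * height * bytes_per_sample * msaa / (1024 ** 2)
    verdict = "fits" if size_mb <= edram_mb else "needs tiling"
    print(f"720p at {msaa}x AA: {size_mb:.1f} MB -> {verdict}")

Once AA pushes the framebuffer past 10MB, the GPU has to render the frame in tiles, which is the extra work developers argue about.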