The Difference Between nVidia and ATI

MotF Bane

No Lifer
Dec 22, 2006
60,801
10
0
Just trying to understand the differences in the cards...

I visit Newegg, and ATI cards tend to have 12 or 16 pixel pipelines, while at the same memory amount and price range, nVidia cards have 20 or even 24. ATI also seems to have a lot more model numbers (X1950 XT, etc.). All my experience is with nVidia GeForce 7xxx cards. Can anybody give me an idea of how to compare their performance?
 

Warren21

Member
Jan 4, 2006
118
0
0
ATI is in its 11th generation (X = Roman numeral 10, + 1 = 11 ... X-series cards like the X850 were the 10th) 'cause they cheated and started their series numbering at 7000, haha.

ATI's recent cards have been designed around a theory of more pixel shaders, fewer pixel pipes, and really high clocks. Example: the R580 (see: X1900 XT/XTX, X1900 AIW, X1950 XT) has 48 pixel shaders, 16 pipes, and a ~650 MHz core. nVidia cards, however, usually have a 1:1 ratio of shaders to pipes, or sometimes a little more. The 7900 GTX, for example, is 16 pipes and 24 pixel shaders. ATI comes out slightly on top in some games, but many of those extra pixel shaders are wasted or not fully utilized. The flaw in ATI's design is that the shaders are too complex and never see enough optimization for 100% utilization, or else the X1900 should theoretically be much faster.
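
To put rough numbers on that ratio idea, here's a quick sketch using the figures above as given (treat them as approximate; the exact 7900 GTX counts get debated further down the thread):

```python
# Shader-to-pipe ratios from the figures in this post, taken as given here.
cards = {
    "X1900 XT/XTX (R580)": {"shaders": 48, "pipes": 16},
    "7900 GTX (G71)":      {"shaders": 24, "pipes": 16},
}

for name, c in cards.items():
    print(f"{name}: {c['shaders'] / c['pipes']:.1f} shaders per pipe")
```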

As far as X1xxx ATI vs 7xxx nV: nV does better in flight sims, RTS games, and OpenGL titles; ATI usually does better in FPS games and D3D titles. ATI holds a slight edge over nV in visual quality versus the 7 series, but the 8800s are even better than the X1900s.

Not a very well-organized post but I hope it helps.
 

Butterbean

Banned
Oct 12, 2006
918
1
0
I have been trying to figure out the same thing. A fair amount of people tell me the graphics quality is better with ATI but that the drivers suck. I can't say ATI graphics are better, since I've never had an nVidia card to use, but I did have an ATI All-In-Wonder card for 3 years and the drivers did indeed suck. I lost 3 weeks of my life screwing around with those things. I am desperate to avoid ATI again (just built a new PC) but will probably end up with them again.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: Warren21
ATI's recent cards have been designed around a theory of more pixel shaders, fewer pixel pipes, and really high clocks... The 7900 GTX, for example, is 16 pipes and 24 pixel shaders... The flaw in ATI's design is that the shaders are too complex and never see enough optimization for 100% utilization, or else the X1900 should theoretically be much faster.

Actually, the GeForce 7900 GTX has 24 pixel pipelines and 24 shaders, so games typically scale with the performance of this card. ATi, on the other hand, bet that most games would be shader-limited: since this has become a shader era, they saw that the X1800 XT's 16 pixel pipelines and 16 shaders weren't enough, so they created the X1900 XTX with 16 pixel pipelines and 48 pixel shaders.

And ATi's shaders aren't "too complex" to create; it's not a design flaw. The software used to create shaders is DX9, and both the 7900 GTX and the X1900 XTX run the same software. ATi's role is in the driver, optimizing shaders to fully utilize all of the shader pipelines. But today's games don't use that many shaders, so the X1900 XTX's shader core is rarely challenged by the load and ends up fillrate-limited. In shader-intensive games or benchmarks, the X1900 XTX pulls far ahead of the 7900 GTX.

In OpenGL, ATi has done a great job optimizing and updating its driver; titles like Quake 4, which uses the Doom 3 engine, now run as fast as on nVidia hardware, and sometimes faster. Doom 3 is the one game where ATi trails behind, because it uses a lookup table for textures, which runs slower on ATi: since ATi runs the rest of the code faster, it has to wait on the fetch. That's why ATi found that computing the value with math instead improved performance. There may also be other tricks in the engine that leave ATi behind; otherwise, why does Quake 4, on the same engine, run as fast or faster on ATi hardware? And yes, the 8800 series has better image quality; it is simply outstanding.
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Originally posted by: evolucion8
Actually, the GeForce 7900 GTX has 24 pixel pipelines and 24 shaders, so games typically scale with the performance of this card...

Ummm no.

Both cards, IIRC, have 16 ROPs. A ROP is a "Render Output Pipeline": the point where the scene is basically "assembled," with all the color and Z values.

Each card has 8 vertex pipelines.

Pixel pipelines are where it gets somewhat confusing. Each card is limited in the number of pixels it can output (NOT process) per clock by its number of ROPs. The G70 has 24 pixel pipelines and the R580 has 48. So while the R580 can process far more, giving it higher shader throughput, it cannot output the results of all 48 shaders at once, because it only has 16 ROPs.
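
As a toy model of that output limit (my own sketch; it assumes one shader op per unit per clock, and real hardware is far messier):

```python
# Toy model of the shader-vs-ROP bottleneck: per clock, the shader core can
# finish (units / ops_per_pixel) pixels, but only 16 can be written out.
# Assumes one shader op per unit per clock; purely illustrative.

def pixels_per_clock(shader_units, rops, ops_per_pixel):
    shaded = shader_units / ops_per_pixel  # pixels the shader core completes
    return min(shaded, rops)               # ROPs cap what gets written out

for ops in (1, 3, 6, 12):                  # shader length per pixel (made up)
    r580 = pixels_per_clock(48, 16, ops)
    g71 = pixels_per_clock(24, 16, ops)
    print(f"{ops:2d} ops/pixel: R580 {r580:4.1f} px/clk, G71 {g71:4.1f} px/clk")
```

With short shaders both chips hit the 16-ROP wall; only with long shaders does the R580's extra shader hardware show up.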

Now to address some of the ridiculous notions you've made:
The software used to create shaders is DX9, and both the 7900 GTX and the X1900 XTX run the same software.

Are you just making this up?? DirectX is a programming API. It doesn't "create" anything. Its standard feature set gives programmers methods to use pixel shading.

24 pixel pipelines and 24 shaders

I have no idea what you mean by "shaders." I assume you mean ROPs, in which case that is false: it has 16 ROPs. I believe the G80 has 24, though.

But today's games don't use that many shaders, so the X1900 XTX's shader core is rarely challenged by the load and ends up fillrate-limited.

The games don't know or care what shader configuration is there; there is no "sensing program" to determine it. The driver and the hardware on the video card balance the load across all 48 shaders regardless. They don't fill up one by one like gas tanks in a car!

In shader-intensive games or benchmarks, the X1900 XTX pulls far ahead of the 7900 GTX.

While there is some merit to what you say, in that the X1900 has much more shader power than the 7900, it will never be "far ahead" of the 7900, because it still only has 16 ROPs.

Doom 3 is the one game where ATi trails behind, because it uses a lookup table for textures, which runs slower on ATi: since ATi runs the rest of the code faster, it has to wait on the fetch. That's why ATi found that computing the value with math instead improved performance. There may also be other tricks in the engine that leave ATi behind.

Ok, you have it backwards. While ATI has vastly improved their drivers, THEY, not Nvidia, use lookups. By "lookups" you mean shader replacement. The basic principle is that ATI has always been stronger at math-intensive calculations, so the driver recognizes what the original shader code needs and essentially converts it into calculations their chips can process much faster. (Sorry for the basic explanation, but for the sake of this thread no more is really needed.)

Generally, engineers want to stay away from this. When Nvidia used it on the NV3x, it left a bad taste in everyone's mouth, because they incorporated very large amounts of IQ-degrading optimizations. ATI, with Catalyst A.I., seems to have done an exceptional job of retaining IQ.
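
In rough pseudocode, the idea looks something like this (a conceptual sketch only; the shader strings and the matching scheme are invented, not ATI's actual driver code):

```python
# Minimal sketch of driver-side shader replacement: recognize a known game
# shader (here by hashing its bytecode) and swap in a hand-tuned equivalent.
# Everything below is made up for illustration.

import hashlib

def shader_hash(bytecode):
    return hashlib.md5(bytecode).hexdigest()

GAME_SHADER = b"TEX r0, t0, lut_2d; MUL r0, r0, v0;"   # fictional LUT-based shader
REPLACEMENT = b"POW r0, t0.x, c0.x; MUL r0, r0, v0;"   # fictional math-based version

REPLACEMENTS = {shader_hash(GAME_SHADER): REPLACEMENT}  # table shipped in the driver

def compile_shader(bytecode):
    # On a hash match, the driver compiles its replacement instead of the original.
    return REPLACEMENTS.get(shader_hash(bytecode), bytecode)

print(compile_shader(GAME_SHADER))   # the substituted, math-based version
```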

-Kevin
 

Piuc2020

Golden Member
Nov 4, 2005
1,716
0
0
Originally posted by: Warren21
ATI is in its 11th generation (X = Roman numeral 10, + 1 = 11 ... X-series cards like the X850 were the 10th) 'cause they cheated and started their series numbering at 7000, haha.

Actually, it's not that they cheated; rather, the first number denoted the DirectX version (though there were a few exceptions, like the Radeon 9000): 7xxx cards were DirectX 7, 8xxx cards were DirectX 8, and 9xxx cards were DirectX 9. They broke that naming scheme for obvious reasons.

Not trying to be a pedantic jerk but oh well...
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: Gamingphreek
Ok, you have it backwards. While ATI has vastly improved their drivers, THEY, not Nvidia, use lookups...

I was clarifying Warren21's point, because he said ATi's shader pipelines were hard to program, and that's not true: nobody optimizes a game for one specific card at the cost of bad performance or issues on everything else. So as a shortcut I said that DX9 is the software used to create shaders, software that works on every DX9 card regardless of its configuration. We all know it does more than that, but why complicate things with unnecessary explanations? When a game is created, it isn't written to get the most out of ATi's shaders; it's written to work efficiently on any DX9 platform. The driver is what then tries to optimize the game for the card (Catalyst A.I., etc.). Some optimizations happen under programs like The Way It's Meant To Be Played; those games are usually heavier in textures, lower in polygon count, and don't use many pixel shaders (Doom 3's blocky heads, anyone?), which usually favors nVidia cards. The ones under the Get In The Game program use a great number of pixel shaders and a high polygon count, and don't rely as much on heavy textures, favoring the ATi architecture.

The 7900 GTX has 24 pixel pipelines, 24 pixel shader units, and 16 ROPs; the X1900 XTX has 16 pixel pipelines, 16 ROPs, and 48 pixel shaders. Why? Because each of its pixel pipelines has 3 independent pixel shader units. So next time somebody says a card has a certain number of "shaders," it refers to pixel shader units. Is that in any way unclear?

And for Doom 3, read the following:

In Doom 3, there's actually a bit of shader replacement going on. For example, the game uses texture lookups to determine the values of specular lighting fall-off rates, essentially reading a precomputed value by fetching that value from a texture. It just so happens that ATI's hardware can compute the algorithm faster than it can fetch the answer from a texture, so that's what it does: It replaces texture lookups with the math it approximates. The result should be identical, but with a significant speedup.

Source: http://www.findarticles.com/p/articles/mi_zdext/is_200409/ai_n7184457

So Doom 3 does use a lookup table for textures. So who got it backwards now?
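
To make the article's point concrete, here's the same trick as a toy CPU-side sketch (my own example; the exponent, table size, and function names are all made up):

```python
# Specular falloff read from a precomputed table (one fetch per pixel, like
# a texture lookup) versus computed directly with pow() (like ALU math).
# The real thing runs per-pixel on the GPU; this is only an illustration.

SPEC_EXP = 32      # shininess exponent (assumed)
TABLE_SIZE = 256   # "texture" resolution (assumed)

lut = [(i / (TABLE_SIZE - 1)) ** SPEC_EXP for i in range(TABLE_SIZE)]

def falloff_lut(n_dot_h):
    return lut[int(n_dot_h * (TABLE_SIZE - 1))]   # fetch the precomputed value

def falloff_math(n_dot_h):
    return n_dot_h ** SPEC_EXP                    # compute it with math

print(falloff_lut(0.9), falloff_math(0.9))        # approximately equal results
```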


 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
So Doom 3 does use a lookup table for textures. So who got it backwards now?

I never denied that Doom 3 does that for ATI cards (Catalyst AI). Nvidia, however, hasn't used shader replacement since the NV3x series. That was one of the reasons the NV4x was so good: it didn't need shader replacement anymore.

The rest I agree with you on; it was just tough to understand exactly what you meant earlier.

-Kevin
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
That's alright. Both the NV40 and the R420 are good cards. The NV40 didn't need shader replacement because Doom 3 was created and tuned around its architecture, and since the R420 and the NV40 work in different ways to get the exact same result, a workaround was needed on the R420 to make the game run well. That issue doesn't happen in D3D games; it happens mostly in OpenGL, where vendor-specific extensions are used.
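
By vendor-specific extensions I mean things like these (the extension names are real, but the string here is hard-coded for the example; a real renderer would query it with glGetString(GL_EXTENSIONS) on a live context):

```python
# Grouping OpenGL extensions by vendor prefix. The names are real extensions;
# the string itself is hard-coded for this sketch.

ext_string = ("GL_ARB_multitexture GL_ARB_vertex_program "
              "GL_ATI_fragment_shader GL_ATI_texture_float "
              "GL_NV_fragment_program2 GL_NV_vertex_program3")

by_vendor = {}
for ext in ext_string.split():
    vendor = ext.split("_")[1]                 # GL_<VENDOR>_<name>
    by_vendor.setdefault(vendor, []).append(ext)

# An engine picks its fast path based on which vendor extensions are present.
print(by_vendor)
```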
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Originally posted by: evolucion8
That's alright. Both the NV40 and the R420 are good cards. The NV40 didn't need shader replacement because Doom 3 was created and tuned around its architecture, and since the R420 and the NV40 work in different ways to get the exact same result, a workaround was needed on the R420 to make the game run well. That issue doesn't happen in D3D games; it happens mostly in OpenGL, where vendor-specific extensions are used.

Yeah, you really have to hand it to ATI for a great job on shader replacement for OpenGL. Though if they get the time, a huge leap would be to completely rewrite their OpenGL driver and get away from shader replacement altogether.

-Kevin
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Gamingphreek
So while the R580 can process far more, giving it higher shader throughput, it cannot output the results of all 48 shaders at once, because it only has 16 ROPs.

I'm not sure what you're getting at, but the R580 certainly can and does utilize all of its shader units. That's not to say its shaders are weaker than those of the R520 (as AT seems to believe) or the G71; rather, games do not rely purely on shaders, and that's why you don't see a literal 3x improvement over the R520. In synthetic PS benches, the R580 does get close to 3x the performance of the R520.

http://www.beyond3d.com/reviews/ati/r580/index.php?p=12
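
One way to see that scaling (a crude Amdahl-style model; the shader-bound fractions are invented just to show the shape of it):

```python
# Only the shader-limited fraction of frame time speeds up when the shader
# units triple (R520 -> R580). Fractions below are made up for illustration.

def speedup(shader_fraction, unit_ratio=3.0):
    return 1.0 / ((1.0 - shader_fraction) + shader_fraction / unit_ratio)

for name, frac in [("synthetic PS bench", 0.98),
                   ("shader-heavy game",  0.60),
                   ("typical game",       0.30)]:
    print(f"{name}: ~{speedup(frac):.2f}x over the R520")
```

A nearly pure shader test comes out close to 3x, while a typical game barely moves.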
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Originally posted by: munky
I'm not sure what you're getting at, but the R580 certainly can and does utilize all of its shader units...

It can use them. Notice I said it can still process many times more information; I only said that it cannot output more, because it is limited by its 16 ROPs.

-Kevin
 

Whirlwind

Senior member
Nov 4, 2006
540
18
81
Both ATI and Nvidia make great video cards.

You can get a great card from either for around $200; spend more than $200 and you get an even better card :)