Originally posted by: Gamingphreek
Originally posted by: evolucion8
Originally posted by: Warren21
ATI is in its 11th generation (X = Roman numeral 10, +1 = 11; X-series cards like the X850 were the 10th) 'cause they cheated and started their series numbering at 7000, haha.
ATI cards have recently been designed around a theory of more pixel shaders, fewer pixel pipes and really high clocks. Example: R580 (see: X1900 XT/XTX, X1900 AIW, X1950 XT)
48 pixel shaders, 16 pipes, ~650 MHz core. The thing is, however, that nVidia cards usually have a 1:1 ratio of shaders to pipes, or sometimes a little more. The 7900 GTX, for example, is 16 pipes and 24 pixel shaders -- ATI comes out slightly on top in some games, but you see, many more of its pixel shaders are wasted/not fully utilized. The flaw in ATI's design is that the shaders are too complex and never see enough optimization for 100% utilization, or else the X1900 should theoretically be much faster.
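To put rough numbers on it, here's a quick back-of-envelope sketch in C. It uses the unit counts above and assumes both cores run at ~650 MHz, each shader unit retires one operation per clock and each pipe outputs one pixel per clock -- a big simplification of the real hardware, so treat the figures as illustrative only:

#include <stdio.h>

/* Back-of-envelope throughput comparison using the unit counts quoted
 * above.  Assumes one shader op per unit per clock and one pixel out
 * per pipe per clock -- a big simplification of the real hardware. */
int main(void) {
    const double clk = 650e6;            /* ~650 MHz core, both cards   */

    double x1900_shader = 48 * clk;      /* R580: 48 pixel shader units */
    double x1900_output = 16 * clk;      /* 16 pipes limit pixels out   */

    double g7x_shader   = 24 * clk;      /* 7900 GTX: 24 pixel shaders  */
    double g7x_output   = 16 * clk;      /* 16 pipes here too           */

    printf("X1900 XTX: %.1f Gops/s shading, %.1f Gpix/s output\n",
           x1900_shader / 1e9, x1900_output / 1e9);
    printf("7900 GTX:  %.1f Gops/s shading, %.1f Gpix/s output\n",
           g7x_shader / 1e9, g7x_output / 1e9);
    /* Both top out at 10.4 Gpix/s of output, but the X1900 can spend
     * twice the shader math on every pixel before that cap matters --
     * which is exactly the "wasted shaders" question. */
    return 0;
}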
As far as X1xxx ATI vs 7xxx nV goes, nV does better in flight sims, RTS games and OpenGL titles. ATI usually does better in FPS games and D3D-based titles. ATI holds a slight edge over nV in visual quality versus the 7 series, but the 8800s are even better than the X1900s.
Not a very well-organized post but I hope it helps.
Actually, the GeForce 7900 GTX uses 24 pixel pipelines and 24 shaders, so a game will typically scale with the performance of this card. ATi, on the other hand, thinks most games will be shader-limited -- this has become a shader era -- and they saw that the X1800 XT's 16 pixel pipelines and 16 shaders weren't enough, so they created the X1900 XTX with 16 pixel pipelines and 48 pixel shaders.

ATi's shaders aren't too complex to create, and it is not a "design flaw", because the software used to create shaders is DX9, and both the 7900 GTX and the X1900 XTX run the same software. ATi's part is played in the driver, optimizing shaders to fully utilize all of the shader pipelines. But today's games don't use that many shaders, so the X1900 XTX's shader core will remain unchallenged by the load and the card will be fillrate-limited. In intensive games or benchmarks with plenty of shaders, the X1900 XTX will pull far away from the 7900 GTX.

In OpenGL, ATi has done a great job optimizing and updating its driver, and you can see that titles like Quake 4, which uses the Doom 3 engine, run as fast as on nVidia hardware, and sometimes even faster. Doom 3 is the one game where ATi trails behind, because nVidia uses a lookup table for textures, and that runs slower on ATi: since it runs the rest of the code faster, it has to wait for the lookup. That's why they found that calculating the value with math improved performance -- and there may be other tricks in the engine to leave ATi behind. Otherwise, why would Quake 4, which uses the same engine, run as fast or faster on ATi hardware?

And yes, the 8800 series has better image quality -- it's simply outstanding.
Ummm no.
Both cards IIRC have 16 ROPs. A ROP is a "Render Output Pipeline": the point where the scene is basically "assembled" with all the color and Z values.
Each card has 8 Vertex Pipelines.
Pixel pipelines are where it gets somewhat confusing. Each card is limited in the number of textures it can output (NOT process) by its number of ROPs. The G70 has 24 pixel pipelines and the R580 has 48. So while the R580 can process far more, resulting in a higher fill rate, it cannot output the results of all 48 shaders at once, because it only has 16 ROPs.
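To make that concrete, here's a toy bottleneck model in C -- my own simplification, nothing like the real dispatch hardware. The pixels per clock you actually get is whichever is smaller: the rate the shader units finish results, or the rate the ROPs can write them.

#include <stdio.h>

/* Toy model: delivered pixels/clock = min(shader results/clock, ROP
 * writes/clock).  Purely illustrative -- real GPUs don't behave this
 * simply. */
static double pixels_per_clock(int shader_units, double ops_per_pixel,
                               int rops) {
    double from_shaders = shader_units / ops_per_pixel;
    return from_shaders < rops ? from_shaders : rops;
}

int main(void) {
    /* Light shader load (~2 ops/pixel): the R580 slams into its 16 ROPs. */
    printf("light load: R580 %.1f vs G70 %.1f pix/clk\n",
           pixels_per_clock(48, 2.0, 16), pixels_per_clock(24, 2.0, 16));
    /* Heavy shader load (~8 ops/pixel): now the 48 shaders pull ahead. */
    printf("heavy load: R580 %.1f vs G70 %.1f pix/clk\n",
           pixels_per_clock(48, 8.0, 16), pixels_per_clock(24, 8.0, 16));
    return 0;
}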
Now to address some of the ridiculous notions you've put forward:
because the software used to create shaders is DX9, and both the 7900 GTX and the X1900 XTX run the same software.
Are you just making this up?? DirectX is a programming API. It doesn't "create" anything. The features within its set of standards give programmers the methods to use pixel shading.
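For what it's worth, here's what a DX9 shader actually looks like from the programmer's side -- a trivial, made-up HLSL pixel shader embedded in C the way a game might carry it around. The point is that this same source (or rather the ps_2_0 bytecode the HLSL compiler turns it into) is handed to ATI's and Nvidia's drivers alike; DirectX just defines the language and the bytecode format:

#include <stdio.h>

/* A made-up, minimal DX9 HLSL pixel shader: sample a texture, darken it.
 * Both vendors' drivers receive the same compiled ps_2_0 bytecode and
 * translate it into their own hardware instructions. */
static const char PS_SRC[] =
    "sampler tex0 : register(s0);\n"
    "float4 main(float2 uv : TEXCOORD0) : COLOR\n"
    "{\n"
    "    return tex2D(tex0, uv) * 0.5f;\n"
    "}\n";

int main(void) {
    /* In a real app this goes through the D3DX HLSL compiler; printing it
     * here just keeps the example self-contained. */
    puts(PS_SRC);
    return 0;
}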
24 pixel pipelines and 24 shaders
I have no idea what you mean by "shaders". I assume you mean ROPs, in which case that is false: it has 16 ROPs. I believe the G80 has 24, though.
But today's games don't use that many shaders, so the X1900 XTX's shader core will remain unchallenged by the load and the card will be fillrate-limited.
The games don't know or care what shader configuration is present. There is no "sensing program" to determine it. The drivers and the hardware on the video card balance the load across all 48 shaders regardless. They don't fill up one by one like gas tanks in a car!
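If you want a mental picture, here's a toy round-robin dispatcher in C -- again my own cartoon, the real scheduler is far smarter -- showing that work gets spread across every unit rather than filling them up one at a time:

#include <stdio.h>

#define NUM_SHADERS 48

int main(void) {
    int work[NUM_SHADERS] = {0};
    int quads = 1000;                    /* pretend-frame of pixel quads */

    /* Round-robin dispatch: every incoming quad goes to the next unit. */
    for (int q = 0; q < quads; q++)
        work[q % NUM_SHADERS]++;

    printf("quads on unit 0:  %d\n", work[0]);
    printf("quads on unit 47: %d\n", work[NUM_SHADERS - 1]);
    /* Each of the 48 units ends up with ~21 quads.  A light load means
     * every unit is lightly loaded -- not 47 idle units and one full one. */
    return 0;
}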
In intensive games or benchmarks with plenty of shaders, the X1900 XTX will pull far away from the 7900 GTX.
While there is some merit to what you say, in that the X1900 has much more shader power than the 7900, it will never be "far away" from the 7900, because it still only has 16 ROPs.
Doom 3 is the one game where ATi trails behind, because nVidia uses a lookup table for textures, and that runs slower on ATi: since it runs the rest of the code faster, it has to wait for the lookup. That's why they found that calculating the value with math improved performance -- and there may be other tricks in the engine to leave ATi behind.
Ok, you have it backwards. While ATI has vastly improved their drivers, it is THEY, not Nvidia, who use lookups. By "lookups" you mean shader replacement. The basic principle is that ATI has always been stronger at math-intensive calculations, so the driver logs what the original code needs and essentially converts it into calculations their chips can process much faster. (Sorry for the basic explanation, but for the sake of this thread no more is really needed.)
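Here's a sketch of the general idea in C -- NOT id's or ATI's actual code, just the principle: a specular-style pow(x, n) can come from a precomputed lookup table (cheap when texture fetches are fast) or from straight math (cheap when the ALUs are fast). Swapping one for the other is what shader replacement does:

#include <math.h>
#include <stdio.h>

#define LUT_SIZE 256

static float lut[LUT_SIZE];

/* Precompute pow(x, n) into a table -- the "lookup texture" approach. */
static void build_lut(float n) {
    for (int i = 0; i < LUT_SIZE; i++)
        lut[i] = powf((float)i / (LUT_SIZE - 1), n);
}

static float falloff_lookup(float x) {          /* texture-fetch path */
    return lut[(int)(x * (LUT_SIZE - 1))];
}

static float falloff_math(float x, float n) {   /* raw-ALU path */
    return powf(x, n);
}

int main(void) {
    build_lut(16.0f);
    float x = 0.87f;
    printf("lookup: %f  math: %f\n",
           falloff_lookup(x), falloff_math(x, 16.0f));
    /* Same answer either way (to within table precision); which one is
     * faster depends on whether the chip has spare texture bandwidth or
     * spare math throughput -- exactly what a driver-side replacement
     * exploits. */
    return 0;
}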
Generally, engineers want to stay away from this. When Nvidia used it on the NV3x, it left a bad taste in everyone's mouth, because they packed it with IQ-degrading optimizations. ATI, with Catalyst A.I., seems to have done an exceptional job of retaining IQ.
-Kevin