
So, with GDDR5, maybe we will get cards that can play Crysis at Very High.

Originally posted by: BFG10K

Huh? What does the GeForce 7 series or Oblivion have to do with it?

In addition to being a different architecture, there's the other issue of the filtering fiasco on the GF7.

In a direct comparison to the X1800 XT (the latter having pretty much the same specs but 1/3 the shaders), the X1900 had a healthy lead even at launch across a range of games.

In particular, look at the pasting it delivers in Riddick, FEAR and Call of Duty 2 when you crank the details.

Oblivion was the first game where the GeForce 7 started to show its weakness against the X1900 series. The GeForce 7 had particularly weaker shaders, with about 1/2 the performance of its rival.

You do know that companies tweak their cards between generations, like adding the 512-bit memory ring bus, not to mention the X1900 XTX had faster clocks and faster memory than the X1800 XT.


You've already been made a fool of, as your own benchmark proved you wrong.

That's why it doesn't beat it at any of the other resolutions. Only when SP-limited, in extreme situations, did it ever lose to the 8800GTS with faster SP clocks, and by 1/2 fps. Now that's just one benchmark at one resolution. I assure you the 8800GTX will beat the G92 8800GTS in a slew of games, and you'll be in your basement trying to find articles to make a comeback like usual.
 
Lost in this discussion about shader vs. texel/bandwidth is the rather large difference in ROPs (pixel fillrate) between the parts. Personally I think this is the biggest difference in performance, with VRAM being the next major difference. The G92 GT (and probably GTS) weighs in at 16, the G80 GTS has 20, and the G80 GTX/Ultra have 24. Clock speeds being the same, the GTX/Ultra have a 20-30% advantage in raw fillrate, which is where we'd see the differences between the parts: at high resolutions, in shader-intensive games, and with AA enabled.
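For a rough sense of scale, the raw fillrate arithmetic (ROPs x core clock) works out as below. ROP counts are from the paragraph above; the core clocks are the commonly cited stock values, assumed here purely for illustration:

```python
# Raw pixel fillrate = ROPs x core clock. ROP counts from the post;
# clocks are commonly cited stock values (assumptions, not measurements).
cards = {
    "G92 GT (16 ROPs @ 600MHz)":    (16, 600),
    "G80 GTS (20 ROPs @ 513MHz)":   (20, 513),
    "G80 GTX (24 ROPs @ 575MHz)":   (24, 575),
    "G80 Ultra (24 ROPs @ 612MHz)": (24, 612),
}
for name, (rops, mhz) in cards.items():
    print(f"{name}: {rops * mhz / 1000:.2f} Gpixels/s")
```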

I highly doubt shaders are the bottleneck once you exceed a certain point. The GT for instance should be very close to the GTX/Ultra in terms of shader ops/sec, since its shader core runs much faster than the GTX/Ultra's. Should be something like 1400 x 128 = 179,200 vs. 1700 x 112 = 190,400. Also, recent reviews of two GTS parts (G80 96SP and 112SP) run at clockspeeds similar to the GT's (576MHz GTS vs. 600MHz GT) show the GTS is typically faster than the GT, with the GTS's shader count equal to or less than the GT's. Keep in mind that due to the higher shader core speed of the GT, the GT would still have the edge in shader ops/sec, but it yields little benefit over the GTS variants. This points to ROPs and VRAM as the main factors in the performance differences.
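Spelled out, that shader ops/sec proxy is just shader clock times SP count; note the 1400/1700MHz figures above are the post's own round numbers, not stock clocks (stock is roughly 1350MHz for the GTX and 1500MHz for the GT):

```python
# Shader throughput proxy: shader clock (MHz) x number of SPs.
# 1400/1700MHz are the post's round figures, not stock clocks.
gtx_ops = 1400 * 128  # 179,200
gt_ops = 1700 * 112   # 190,400
print(f"GTX: {gtx_ops}, GT: {gt_ops}, ratio: {gt_ops / gtx_ops:.3f}")
```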

I also doubt memory bandwidth is an issue, as the G80 GTS/GTX showed very little performance gain when bandwidth was increased by raising memory clockspeeds, much less than from increasing core clock speeds in any case (again, raising ROP speed and shader speed when linked). It may yield a higher performance increase on the GT due to the smaller memory bus, but again I doubt it yields any significant increase, or we'd see GDDR4 variants of the GT to compensate for the 256-bit bus. It might show up with the G92 GTS, but I highly doubt it'll have any significant impact on performance relative to the G92 GT.
 
I don't think ROPs matter as much until you get to extreme resolutions, because the 8800GTS has more pixel fillrate than the 8800GT yet loses at most common resolutions except in extreme cases. But you are right about one thing: an OC version of the 8800GT cannot beat an 8800GTX even with higher shader throughput.

GTX shader clocks are actually 1350MHz and the original GT's are 1500MHz, but OC versions can go as high as 1600MHz and still can't beat the 8800GTX.
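Spelling that out: at 1600MHz the OC'd GT's 112 SPs push more raw shader ops/sec than a stock GTX's 128 SPs at 1350MHz, yet it still loses, which is the point:

```python
# Shader ops/sec proxy (shader clock x SP count) for an OC'd GT vs a
# stock GTX, using the clocks stated above.
gt_oc = 1600 * 112   # 179,200
gtx = 1350 * 128     # 172,800
print(gt_oc > gtx)   # True: the OC'd GT has more raw shader throughput,
                     # yet still doesn't beat the 8800GTX in games.
```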

It's always good to have a little more texel fillrate than bandwidth, because you don't waste any memory bandwidth that way. The 8800GT does this well. I think Nvidia learned this back in the days of the 7600GT and so on.
 
Originally posted by: Azn
I don't think ROPs matter as much until you get to extreme resolutions, because the 8800GTS has more pixel fillrate than the 8800GT yet loses at most common resolutions except in extreme cases.

That's the case with any of the 8800 variants provided you run them at similar clockspeeds, and it was the reason the G92 GT drew rave reviews of "GTX performance at half the price" from the review community. Point is, you won't see that much difference between the parts until you actually stress them at higher resolutions or enable AA.

Firing Squad 8800GTS SSC (112 SP G80) Review

As you can see, the GTS SSC tends to outperform the G92 GT at all resolutions from 1280 to 1920 despite the GT's advantage in core clock and shader clock. There's also a Tech Report review that compares the G80 96SP to the GT at similar clockspeeds and mimics these results, although unfortunately they only tested higher resolutions. It does, however, show that SP count does not seem to be the main bottleneck for 8800-series parts.
 
3DMark is still a useful benchmark when you look at the individual tests, because they can show how cards are being saturated and used.

The 8800GTS SSC is also clocked higher than the original 8800GTS; its core clock is 576MHz. That gives it an FP16 texel filter rate of 13.824 Gtexels/s, which is completely utilized by its bandwidth, while the 8800GT has 16.8 Gtexels/s but can't fully utilize it. At lower resolutions like 1280x1024, texel fillrate matters more than ROPs. Clearly that's what FiringSquad is showing with the 112SP G80 8800GTS in those lower-resolution benchmarks. I wish someone would run the 3DMark multitexture test to show this.
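One way to reproduce those two figures is to assume FP16 bilinear filtering costs two texture-filter (TF) ops per texel; the unit counts below are assumptions chosen to match the post's arithmetic (13.824 is 24 units x 576MHz, and 16.8 is (56 units / 2) x 600MHz):

```python
# FP16 texel rate, bounded by texture-address (TA) units and by half the
# texture-filter (TF) units, assuming FP16 bilinear costs 2 TF ops/texel.
def fp16_gtexels(ta, tf, clock_mhz):
    return min(ta, tf / 2) * clock_mhz / 1000.0

print(fp16_gtexels(24, 48, 576))  # G80 GTS: 24 x 576 -> 13.824 Gtexels/s
print(fp16_gtexels(56, 56, 600))  # G92 GT: (56/2) x 600 -> 16.8 Gtexels/s
```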
 
Originally posted by: Azn
3DMark is still a useful benchmark when you look at the individual tests, because they can show how cards are being saturated and used.
3DMark certainly serves its purposes, but how can you discount real-world performance in actual games in favor of 3DMark? In any case, there's myriad 3DMark results posted in the initial GT reviews that disprove your claim that the GT is bandwidth limited. As with the GTS/GTX, there's very little improvement in 3DMark (@1280 ofc) with the GT when increasing bandwidth by increasing memory clockspeed.

Even increasing the core clock yields lower gains than the GTS/GTX, which again, points to ROPs being the bottleneck. There's plenty of user-feedback here to support this as well, just look around that GT OC'ing thread and you'll see 3DMark results, including my own.

The 8800GTS SSC is also clocked higher than the original 8800GTS; its core clock is 576MHz.
Which is why that review, along with the Tech Report review, is better than most reviews, which often recycle year-old benches based on artificially imposed stock clock speeds and specifications. Point is, there are "stock" GTS available at 576MHz if one doesn't want to OC their 513MHz GTS to GT/GTX speeds, and it's certainly a more accurate comparison than benching a 500MHz GTS against a 650MHz+ GT OC SSOMGzies and saying the GT (or even the GTX for that matter) is 20-40% faster than the GTS.

That gives it an FP16 texel filter rate of 13.824 Gtexels/s, which is completely utilized by its bandwidth, while the 8800GT has 16.8 Gtexels/s but can't fully utilize it. At lower resolutions like 1280x1024, texel fillrate matters more than ROPs. Clearly that's what FiringSquad is showing with the 112SP G80 8800GTS in those lower-resolution benchmarks. I wish someone would run the 3DMark multitexture test to show this.
Again, the GT should have the advantage in all situations in terms of texel fillrate with its 1:1 texture addressing/mapping units and higher core clock speeds, even over the GTX. At lower resolutions, bandwidth is less important and less likely to be saturated regardless of what part you're referring to, but a simple test would be to demonstrate a tangible improvement by increasing memory bandwidth by increasing memory clocks. Which once again points to no increase with either the GT/GTS/GTX in the benchmark (at the same resolution) you stated would take advantage of the increased bandwidth.
 
Originally posted by: chizow

3DMark certainly serves its purposes, but how can you discount real-world performance in actual games in favor of 3DMark? In any case, there's myriad 3DMark results posted in the initial GT reviews that disprove your claim that the GT is bandwidth limited. As with the GTS/GTX, there's very little improvement in 3DMark (@1280 ofc) with the GT when increasing bandwidth by increasing memory clockspeed.

Even increasing the core clock yields lower gains than the GTS/GTX, which again, points to ROPs being the bottleneck. There's plenty of user-feedback here to support this as well, just look around that GT OC'ing thread and you'll see 3DMark results, including my own.

I'm not discounting real-world game benchmarks. The 3DMark score is irrelevant; however, it can tell us what is being bottlenecked, whether that's texel fillrate or memory bandwidth.

http://techreport.com/articles.x/12285/4

Clearly a card like the 8600GTS is being bottlenecked in these fillrate tests. The 8600GTS has a theoretical pixel fillrate of 5400 M/pixels and a peak texel rate of 10800 M/texels, with 32GB/s of bandwidth. The test shows a pixel fillrate of 2934.1 M/pixels and a texel fillrate of 7650.9 M/texels, which is clearly bottlenecked by its bandwidth. The Radeon X1950 Pro shows no bottleneck here; neither does the 7900GS as far as texel fillrate goes. Memory bandwidth plays a big role in how those texels get saturated.
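A back-of-the-envelope version of that argument, using the post's own numbers; the implied bytes-per-pixel figure bundles color, Z, and texture traffic together, so treat it only as a rough indicator:

```python
# How far the measured 8600GTS pixel rate sits below theory, and the
# memory traffic per pixel implied by its 32GB/s of bandwidth.
bandwidth = 32e9              # bytes/s (post's figure)
theoretical_pixels = 5400e6   # pixels/s (8 ROPs x 675MHz)
measured_pixels = 2934.1e6    # pixels/s (Tech Report fillrate test)

print(f"measured/theoretical: {measured_pixels / theoretical_pixels:.0%}")
print(f"implied traffic: {bandwidth / measured_pixels:.1f} bytes/pixel")
```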

http://techreport.com/articles.x/12379/3
http://techreport.com/articles.x/11211/5

Now let's look at the 8800 Ultra or GTX fillrate tests. All of its texel fillrate is being utilized here because it has enough memory bandwidth to saturate its theoretical texel fillrate.


Which is why that review, along with the Tech Report review, is better than most reviews, which often recycle year-old benches based on artificially imposed stock clock speeds and specifications. Point is, there are "stock" GTS available at 576MHz if one doesn't want to OC their 513MHz GTS to GT/GTX speeds, and it's certainly a more accurate comparison than benching a 500MHz GTS against a 650MHz+ GT OC SSOMGzies and saying the GT (or even the GTX for that matter) is 20-40% faster than the GTS.

You still can't discount memory bandwidth for the SSC card, whose memory is clocked at 900MHz, higher than the original GTS. Fillrate gives you higher average fps, but memory bandwidth gives you higher minimum fps. The Tech Report review shows us this in those extreme conditions. Certainly memory bandwidth plays a vital role.


Again, the GT should have the advantage in all situations in terms of texel fillrate with its 1:1 texture addressing/mapping units and higher core clock speeds, even over the GTX. At lower resolutions, bandwidth is less important and less likely to be saturated regardless of what part you're referring to, but a simple test would be to demonstrate a tangible improvement by increasing memory bandwidth by increasing memory clocks. Which once again points to no increase with either the GT/GTS/GTX in the benchmark (at the same resolution) you stated would take advantage of the increased bandwidth.

No. The GT is being bottlenecked by its memory bandwidth. It can only do so much with 57.6GB/s of memory bandwidth. It cannot saturate all of its FP16 texel fillrate, which is what matters in modern games that use HDR; its full texel fillrate only matters in games without HDR. If you tested 3DMark, its texel fillrate would not come close to 33.6 Gtexels/s; it would show somewhere in the range of 14-16 Gtexels/s. I'm just estimating here, but I dare anyone with an 8800GT and 3DMark06 Pro to post some results from these fillrate tests.

Here's the only one I could find, and I was right all along. This fillrate test shows the 8800GT cannot reach its theoretical fillrate.

http://images.vnu.net/gb/inqui...-dx10-hit/fillrate.jpg
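A worst-case bandwidth bound lands in the same range, assuming every sampled texel is a 4-byte fetch that misses the texture cache; real caches reuse texels heavily, so this is a deliberately pessimistic sketch rather than a prediction:

```python
# Pessimistic bandwidth ceiling on texel rate: every texel is a 4-byte
# uncached fetch (assumption). Compare with the theoretical
# 33.6 Gtexels/s (56 units x 600MHz) and the 14-16 Gtexels/s estimate.
bandwidth = 57.6e9      # bytes/s (256-bit bus @ 900MHz GDDR3)
bytes_per_texel = 4     # 32-bit texel, zero cache reuse (assumption)
print(bandwidth / bytes_per_texel / 1e9)  # -> 14.4 Gtexels/s
```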

Yes, I would like to see a test where a GT is overclocked only on the memory clock; it would show a huge improvement, whereas raising the core would have some effect but not as much as raising memory clock speeds. The GTS and GTX are different beasts and would show good results if you raised core and memory speeds together.
 
Oblivion was the first game where the GeForce 7 started to show its weakness against the X1900 series.
Again, more dodging and total goal-post shifting on your part. Nobody was discussing the GF7 series because it's a completely different architecture.

The GeForce 7 had particularly weaker shaders, with about 1/2 the performance of its rival.
Which is exactly why the filtering issue is so important, given GF7 shaders were tied to their texturing units, unlike the decoupled design on the Radeon.

But this was never under discussion, you've simply chopped and changed the issue just to obfuscate the fact that you're wrong.

You do know that companies tweak their cards between generations, like adding the 512-bit memory ring bus, not to mention the X1900 XTX had faster clocks and faster memory than the X1800 XT.
What the hell are you talking about? There was no change to the memory ring bus from the X1800 to the X1900, and their clocks were pretty much identical, which is exactly why it's a perfect test-bed to demonstrate shader differences. There are plenty of games in that list that pre-date Oblivion.

That's why it doesn't beat it at any of the other resolutions.
"Other resolutions"? You mean like 1280x1024 which is CPU limited and hence influenced more by the CPU/platform rather than shaders?

Only when SP-limited, in extreme situations, did it ever lose to the 8800GTS with faster SP clocks, and by 1/2 fps. Now that's just one benchmark at one resolution.
You were the one harping on about using that review despite my protestations. To quote you: "What do you think that was? Fake review?"

So don't start crying about it when we start using it, because you forgot to actually check whether it backs your claims.

I assure you the 8800GTX will beat the G92 8800GTS in a slew of games, and you'll be in your basement trying to find articles to make a comeback like usual.
Fortunately for us your assurances mean nothing.

Yes, I would like to see a test where a GT is overclocked only on the memory clock; it would show a huge improvement, whereas raising the core would have some effect but not as much as raising memory clock speeds.
Again, you need to provide evidence for your claims, and thus far you have nothing except a bunch of meaningless, theoretical 3DMark results.

I'm still waiting for a retraction of your original claim: "you can see the biggest jump when textures are saturated by its bandwidth combined with current SP. There are still many games that rely on texture prowess over shaders. Actually it's about 95% of PC games out today."

The two graphs showed us that you pulled that out of your orifice, and when they did, you ignored the graphs and the results and started rambling on about Oblivion and the GF7 series.

You need to retract your claims and stop trolling.
 
Originally posted by: BFG10K
*snip*

Just take a hike. Stop your trolling. You got all the techno mumbo jumbo right, but you clearly have some learning disability and love getting into arguments.

If debate is what you want I can give it but if you are coming to run your mouth I have nothing for you.
 
Nice, totally dodge anything that disproves your claims and keep on posting the same old stuff. There was another individual who used to post here and did exactly the same thing.

Your arguments have all the hallmarks of trolling.
 
3DMark is a benchmarking tool that tests different subsections of each card, like PS, fillrate, etc., which can tell us many things if you know how to read it right.

You got one thing right, BFG. 3DMark's actual scores are irrelevant because the overall score can be swayed by whatever Futuremark thinks should give you higher numbers at the time, whether that's CPU, PS, or texture fillrate.
 
LOL. You non-educated guys can pick sides all you want; that's why ignorance can't be stopped.

It doesn't change a thing.

The 8800GTS 112SP with its slower-clocked SPs beats the 8800GT with faster SPs, as shown by chizow.
 
Originally posted by: Azn
LOL. You non-educated guys can pick sides all you want; that's why ignorance can't be stopped.

Non-educated? Ignorant?

That's funny, those sound like personal attacks to me. I could have sworn that there were rules about that sort of thing on this forum.
 
If you took it as a personal attack, I apologize, but you need to be knowledgeable in this field to join the discussion instead of picking sides when you don't understand...

If you want to join the discussion I welcome you.
 
It doesn't change a thing.

The 8800GTS 112SP with its slower-clocked SPs beats the 8800GT with faster SPs, as shown by chizow.
LOL, you're quite the comic, aren't ya?

I love how you use results only when you think they back your claims and then dismiss them when they don't.

Chizow's FiringSquad link lends evidence to the ROPs being the bottleneck in that particular GTS vs GT situation, which I have no trouble accepting.

You on the other hand continue to deny this and keep going on about your texturing/memory bandwidth fantasies.
 
I'm a comic who knows how 3D works. 😀

ROPs have nothing to do with it at lower resolutions.

Chizow's FiringSquad benchmarks clearly show that your claim that SPs give you the biggest gain is utter crap. :beer:
 
ROPs have nothing to do with it at lower resolutions.
Using that reasoning neither does texturing.

Chizow's FiringSquad benchmarks clearly show that your claim that SPs give you the biggest gain is utter crap.
But according to your theory the GT should be faster because it has more texture fillrate, yet it isn't.

It's not even faster at a low resolution in the CPU-limited HL2, where memory bandwidth absolutely will not be a factor.

Hell, HL2 doesn't even use FP rendering, so it should be the perfect testbed to demonstrate your theories, yet even that game fails to back your claims.

So yet again we have overwhelming evidence proving you're wrong, and yet again you continue to troll like a certain other individual who used to post here.
 
You don't even know what a ROP does, so how are you going to explain it to me?

ROPs handle single-texture performance. They only matter at higher resolutions and for AA performance.
 
Are you still going to deny that the 8800GT's theoretical fillrate can't be achieved with only 57.6GB/s?
Who gives a shit about theoretical fillrate? It's not relevant to games because they aren't showing the trends your little nebulous tests show.

One day knowledge (truth) will prevail instead of ignorance.
That will be the day you stop posting.

I'm still waiting for a retraction of your claim: "you can see the biggest jump when textures are saturated by its bandwidth combined with current SP. There are still many games that rely on texture prowess over shaders. Actually it's about 95% of PC games out today."

You need to retract that claim, given that the two graphs I posted prove you pulled that comment out of your orifice. The fact is even a two-year-old game like FEAR has a 7:1 arithmetic-to-texture ratio.
 
Who is this other individual you keep talking about? Did he make you look stupid time and again?
 
Originally posted by: Azn
Yes, I would like to see a test where a GT is overclocked only on the memory clock; it would show a huge improvement, whereas raising the core would have some effect but not as much as raising memory clock speeds. The GTS and GTX are different beasts and would show good results if you raised core and memory speeds together.
VR-Zone OC'ing results

And as I said before, you can see these results mimicked in the GT OC'ing thread on these forums, except no one saw the massive increases VR-Zone saw from the core/shader increases. Most saw 500-1000 point increases from raising the core/shader clocks, with absolutely no difference when touching the memory clocks.

Anyways, we'll know more soon enough with the release of the G92 GTS.

8800 Series Specs as Leaked by NV

Edit: link fixed
 
You don't even know what a ROP does, so how are you going to explain it to me?

ROPs handle single-texture performance. They only matter at higher resolutions and for AA performance.
I'm still waiting for your retractions.

I'm also still waiting for your explanation as to why even HL2 @ low resolutions is slower on the GT despite said card having a large texturing advantage and memory bandwidth not being a factor.
 