So, with GDDR5 maybe we will get cards that can play Crysis at Very High


AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: BFG10K
Are you still going to deny that the 8800GT's theoretical fillrate can't be achieved with only 57.6GB/s?
Who gives a shit about theoretical fillrate? It's not relevant to games because they aren't showing the trends your little nebulous tests show.


Who cares about theoretical fillrate? Because it would show that in real gaming situations the card is being bottlenecked by memory bandwidth, with its potential cut roughly in half. So in games it is always a step behind the GTX, which reaches its full theoretical fillrate, one that is higher than the fillrate the 8800GT actually achieves.
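
As a rough back-of-envelope sketch of the arithmetic behind that claim (reference specs recalled from memory, so treat the exact figures as approximate), here is what feeding the 8800GT's peak texel rate straight from DRAM would look like in the worst case, in Python:

# Back-of-envelope sketch: can the 8800 GT's memory bus feed its peak
# bilinear texel rate if every sample had to come from DRAM uncompressed?
CORE_MHZ        = 600      # 8800 GT core clock
TEX_UNITS       = 56       # texture filtering units on G92
BYTES_PER_TEXEL = 4        # uncompressed 32-bit texel (ignores DXT compression)
BANDWIDTH_GBS   = 57.6     # 256-bit bus at 1800 MHz effective GDDR3

peak_texels_per_s      = TEX_UNITS * CORE_MHZ * 1e6              # ~33.6 GTexels/s
worst_case_traffic_gbs = peak_texels_per_s * BYTES_PER_TEXEL / 1e9

print(f"Peak texel rate:        {peak_texels_per_s / 1e9:.1f} GTexels/s")
print(f"Worst-case DRAM demand: {worst_case_traffic_gbs:.1f} GB/s")
print(f"Available bandwidth:    {BANDWIDTH_GBS} GB/s")
print(f"Shortfall factor:       {worst_case_traffic_gbs / BANDWIDTH_GBS:.1f}x")

# Caveat: texture caches, DXT compression and sample reuse between
# neighbouring pixels cut the real external traffic well below this worst
# case, which is why the gap shows up mainly in synthetic multi-texture
# fill tests rather than in games.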
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: chizow
Originally posted by: Azn
Yes, I would like to see a test where a GT is overclocked only on the memory clock; it would show a huge improvement, whereas raising the core would have some effect but not as much as raising memory clock speeds. The GTS and GTX are a different beast and would show good results if you raised core and memory speeds together.
VR-Zone OC'ing results

And as I said before, you can see these results mimicked over in the GT OC'ing thread on these forums, except no one saw the massive increases VRZone saw with the core/shader increases. Most saw 500-1000 point increases by raising the core/shader clocks with absolutely no difference when touching the memory clocks.

Anyways, we'll know more soon enough with the release of the G92 GTS.

8800 Series Specs as Leaked by NV

You need to fix the link. I can't see the results. That top link takes me to TweakTown instead of VR-Zone.

Are you talking about raising the GT's core? I think I said in my previous post that you would NOT see big results from raising the core, but you would from raising memory speeds. Gaming benchmarks would be better than 3DMark for seeing these kinds of results. 3DMark06 scores sway more with shader performance than anything else, which is why the 2900XT bests the GTS in that benchmark but can't beat it in real gaming situations.
 

Zenoth

Diamond Member
Jan 29, 2005
5,202
216
106
So, any chances to see GeForce 9 and/or AMD R700 using GDDR5 ?
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,005
126
Because this would show in real gaming situations that it is being bottlenecked by memory bandwidth.
How exactly would it show that if games are not using texturing like a theoretical texturing test does?

Do you not understand the simple concept that having a 7:1 arithmetic:texturing ratio shifts the bottleneck away from texturing and onto shaders?
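
As a minimal toy model of that bottleneck shift (the throughput numbers below are invented purely for illustration), in Python:

# Toy model: a pixel costs some ALU instructions and some texture fetches;
# whichever unit takes longer sets the pace.
def pixel_time(alu_ops, tex_fetches, alu_rate, tex_rate):
    """Time (arbitrary units) to shade one pixel, limited by the slower unit."""
    return max(alu_ops / alu_rate, tex_fetches / tex_rate)

ALU_RATE = 4.0   # hypothetical ALU ops retired per unit time
TEX_RATE = 1.0   # hypothetical texture fetches retired per unit time

# Old-style 1:1 shader -> texturing (and the bandwidth behind it) is the limit.
print(pixel_time(1, 1, ALU_RATE, TEX_RATE))
# 7:1 arithmetic:texturing shader -> the ALUs are the limit...
print(pixel_time(7, 1, ALU_RATE, TEX_RATE))
# ...so doubling texture throughput changes nothing.
print(pixel_time(7, 1, ALU_RATE, TEX_RATE * 2))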

I'm still waiting for a retraction for your claims and your HL2 explanation.

And Chizow, can you please fix your VR link? Thanks. :thumbsup:
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Zenoth
So, any chances to see GeForce 9 and/or AMD R700 using GDDR5 ?

I think the real question is whether GDDR5 is worth a $100-$200 price premium over GDDR3/GDDR4 for very little improvement in performance. Until a GPU actually demonstrates that it is being held back by current memory bandwidth, I'd rather spend the money on more GPU rather than more unused bandwidth.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,005
126
VR Zone overclocking.

Memory overclocking showed the lowest gain.

Shader overclocking showed the highest gain.

It's even your beloved 3DMark that you claim means so much.

I'll be waiting for yet another retraction of your false claims.
 

Quiksilver

Diamond Member
Jul 3, 2005
4,725
0
71
This thread is hilarious.
I'm a grab some popcorn brb...

K, back with my Popcorn.
Time to continue watching the tech version of 'The View'.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
What don't you understand? That 3DMark scores are swayed by the shader portion of the tests.

It's the same reason an X1600 Pro can beat an X850 XT in 3DMark06, but real gaming situations are another story.

When you raise the core speed of the 8800GT you are also raising the SP clocks, which benefits 3DMark06 scores.

The biggest gains were shown when VR-Zone took everything at stock and just raised the SP clocks. That right there tells us 3DMark scores are swayed by shaders. But in real gaming situations raising SP clocks alone does not give the biggest gain, as shown by the FiringSquad benchmarks Chizow posted.
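
As an illustration only (this is NOT Futuremark's actual scoring formula; the weights below are invented), a composite score whose sub-tests lean heavily on the shaders will move with the shader clock and barely react to the memory clock:

# Toy composite score: hypothetical sub-tests weighted toward shader clock.
def subtest(shader_clk, core_clk, mem_clk, sw, cw, mw):
    return shader_clk * sw + core_clk * cw + mem_clk * mw

def composite(shader_clk, core_clk, mem_clk):
    gt = subtest(shader_clk, core_clk, mem_clk, 0.7, 0.2, 0.1)
    sm = subtest(shader_clk, core_clk, mem_clk, 0.8, 0.1, 0.1)
    return gt + sm

stock  = composite(shader_clk=1500, core_clk=600, mem_clk=900)
sp_oc  = composite(shader_clk=1650, core_clk=600, mem_clk=900)   # +10% shader
mem_oc = composite(shader_clk=1500, core_clk=600, mem_clk=990)   # +10% memory

print(f"+10% shader clock: {100 * (sp_oc / stock - 1):.1f}% score gain")
print(f"+10% memory clock: {100 * (mem_oc / stock - 1):.1f}% score gain")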
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Azn, you prove time and time again that you're talking out of your arse, while BFG's technical "mumble jumble" (I assume you mean mumbo jumbo?) comes from the fact that he knows what he's talking about. I'll just take one example here of how spectacularly you're wrong:


Originally posted by: Azn


Oblivion was the first game where the GeForce 7 started to show its weakness against the X1900 series. The GeForce 7 had particularly weaker shaders, about 1/2 the performance of its rival.

You do know that each company tweaks its cards, like adding a 512-bit memory ring bus, not to mention the X1900XTX had faster clocks and faster memory than the X1800XT.

Take a look at the X1900XT/XTX review on this very site to see how shader power made a difference even back in January of '06, nearly two years ago. First, check out the specs of the X1900 series versus the X1800 series:

X1900 specs vs X1800 specs

The X1900XT has the exact same clockspeed as the X1800XT, 50 MHz slower memory (1.45 GHz vs 1.55 GHz), exact same 512-bit ring bus, etc, same number of texture units, etc. The big difference is 48 pixel shader pipelines on the X1900 vs 16 on the X1800.

If we weren't shader limited then there should be no difference between the cards (the X1800 series should win due to the 50 MHz more memory bandwidth), yet the X1900XT beats the X1800XT on EVERY SINGLE BENCHMARK.

The X1900XTX was 20-30% faster than the X1800XT, as seen here . This was before driver updates further improved the X1900 series.

The gap has widened significantly since then in terms of pixel shaders in new games; current engines like the Crysis engine and Unreal Engine 3 use WAY more pixel shading power than games of early 2006.

-------

Now take a look at the 8800GT review and study the specs of the cards.

The 8800GTS has more memory bandwidth than the 8800GT, while the 8800GT has a faster core and shader clock and more texture addressing/filtering units and stream processors. The 8800GT is faster in everything.
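
Putting rough numbers on that last comparison (commonly cited reference specs; treat them as approximate), a short Python sketch:

# 8800 GTS (G80) vs 8800 GT (G92): the GT gives up memory bandwidth but
# gains shader throughput and texel rate, and it wins in games anyway.
cards = {
    #                  core  shader  SPs  tex  ROPs  bus  mem_eff_MHz
    "8800 GTS (G80)": (500, 1200,   96,  24,  20,   320, 1600),
    "8800 GT (G92)":  (600, 1500,  112,  56,  16,   256, 1800),
}

for name, (core, sclk, sps, tex, rops, bus, mem) in cards.items():
    bandwidth  = bus / 8 * mem * 1e6 / 1e9    # GB/s
    shader     = sps * sclk                   # relative shader throughput
    texel_rate = tex * core / 1000            # GTexels/s
    pixel_rate = rops * core / 1000           # GPixels/s
    print(f"{name}: {bandwidth:5.1f} GB/s, shader {shader}, "
          f"{texel_rate:.1f} GTex/s, {pixel_rate:.1f} GPix/s")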
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: jiffylube1024
Azn, you prove time and time again that you're talking out of your arse, while BFG's technical "mumble jumble" (I assume you mean mumbo jumbo?) comes from the fact that he knows what he's talking about. I'll just take one example here of how spectacularly you're wrong:


Originally posted by: Azn


Oblivion was the first game where the GeForce 7 started to show its weakness against the X1900 series. The GeForce 7 had particularly weaker shaders, about 1/2 the performance of its rival.

You do know that each company tweaks its cards, like adding a 512-bit memory ring bus, not to mention the X1900XTX had faster clocks and faster memory than the X1800XT.

Take a look at the X1900XT/XTX review on this very site to see how shader power made a difference even back in January of '06, nearly two years ago. First, check out the specs of the X1900 series versus the X1800 series:

X1900 specs vs X1800 specs

The X1900XT has the exact same clockspeed as the X1800XT, 50 MHz slower memory (1.45 GHz vs 1.55 GHz), exact same 512-bit ring bus, etc, same number of texture units, etc. The big difference is 48 pixel shader pipelines on the X1900 vs 16 on the X1800.

If we weren't shader limited then there should be no difference between the cards (the X1800 series should win due to the 50 MHz more memory bandwidth), yet the X1900XT beats the X1800XT on EVERY SINGLE BENCHMARK.

The X1900XTX was 20-30% faster than the X1800XT, as seen here . This was before driver updates further improved the X1900 series.

The gap has widened significantly since then in terms of pixel shaders in new games; current engines like the Crysis engine and Unreal Engine 3 use WAY more pixel shading power than games of early 2006.

-------

Now take a look at the 8800GT review and study the specs of the cards.

The 8800GTS has more memory bandwidth than the 8800GT, while the 8800GT has a faster core and shader clock and more texture addressing/filtering units and stream processors. The 8800GT is faster in everything.


I never said shaders don't affect games, but they don't affect them as much as fillrate and bandwidth do. It wasn't until Oblivion came out that shaders mattered a lot more; the GeForce 7 series took its biggest nose dive when that game came out.

Shaders didn't have a prevailing role when the 1900 series first came out. When all those games were benchmarked the 1800XT was very close to 1900XT performance, but as time passed more games used more shaders and the 1900 series took a bigger jump. Once you have enough shader power, the performance increase reaches a point of diminishing returns. It's texture fillrate and the right amount of bandwidth that prevail then.
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: Azn
What don't you understand? That 3DMark scores are swayed by the shader portion of the tests.

It's the same reason an X1600 Pro can beat an X850 XT in 3DMark06, but real gaming situations are another story.

When you raise the core speed of the 8800GT you are also raising the SP clocks, which benefits 3DMark06 scores.

The biggest gains were shown when VR-Zone took everything at stock and just raised the SP clocks. That right there tells us 3DMark scores are swayed by shaders. But in real gaming situations raising SP clocks alone does not give the biggest gain, as shown by the FiringSquad benchmarks Chizow posted.

So what the heck are you trying to say? You're praising 3DMark for its ability to tangibly measure changes in theoretical values, but they're just that -- theoretical values! They do not correlate 1:1 with changes in actual gaming performance, and you have to take 3DMark numbers with a grain of salt because it doesn't tax systems in the same way that games do. In particular, 3DMark seems to be much more CPU-sensitive than any game, where the bottleneck is always strongly on the GPU side.
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: Azn

Do you have anything to add that would offer some insight, or anything to prove I'm talking out of my arse? Of course not.

Um - read my post and follow my links that disprove your earlier statements. Your 'I can't hear you la la la' defense is silly.
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: BFG10K
VR Zone overclocking.

Memory overclocking showed the lowest gain.

Shader overclocking showed the highest gain.

It's even your beloved 3DMark that you claim means so much.

I'll be waiting for yet another retraction of your false claims.

Probably the best proof so far BFG - well done. It even uses 3dmark, his program of choice. Bravo!

Azn -- what can't you understand about this simple concept? Memory bandwidth does play its part in current-gen cards, but it's not the #1 factor, not by a long shot. GPU clockspeed is more important, and shader power is more important still!

Why would Nvidia update its lineup with a bunch of cards that drastically cut bandwidth and yet boost shader power? And these changes somehow produce improvements over the last-gen mid-range and approach 8800GTX performance in some cases. Perhaps because the trend in games is a growing emphasis on shader power?
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: jiffylube1024
Originally posted by: Azn
What don't you understand? That 3DMark scores are swayed by the shader portion of the tests.

It's the same reason an X1600 Pro can beat an X850 XT in 3DMark06, but real gaming situations are another story.

When you raise the core speed of the 8800GT you are also raising the SP clocks, which benefits 3DMark06 scores.

The biggest gains were shown when VR-Zone took everything at stock and just raised the SP clocks. That right there tells us 3DMark scores are swayed by shaders. But in real gaming situations raising SP clocks alone does not give the biggest gain, as shown by the FiringSquad benchmarks Chizow posted.

So what the heck are you trying to say? You're praising 3DMark for its ability to tangibly measure changes in theoretical values, but they're just that -- theoretical values! They do not correlate 1:1 with changes in actual gaming performance, and you have to take 3DMark numbers with a grain of salt because it doesn't tax systems in the same way that games do. In particular, 3DMark seems to be much more CPU-sensitive than any game, where the bottleneck is always strongly on the GPU side.

You clearly can't understand why 3DMark scores are swayed by the shader portion of the test.

I think I said earlier that 3DMark scores are irrelevant, but it is a tool to measure PS, fillrate, etc...
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: jiffylube1024
Originally posted by: BFG10K
VR Zone overclocking.

Memory overclocking showed the lowest gain.

Shader overclocking showed the highest gain.

It's even your beloved 3DMark that you claim means so much.

I'll be waiting for yet another retraction of your false claims.

Probably the best proof so far BFG - well done. It even uses 3dmark, his program of choice. Bravo!

Azn -- what can't you understand about this simple concept? Memory bandwidth does play its part in current-gen cards, but it's not the #1 factor, not by a long shot. GPU clockspeed is more important, and shader power is more important still!

Why would Nvidia update its lineup with a bunch of cards that drastically cut bandwidth and yet boost shader power? And these changes somehow produce improvements over the last-gen mid-range and approach 8800GTX performance in some cases. Perhaps because the trend in games is a growing emphasis on shader power?

Because Nvidia is cutting costs and being more efficient. 384-bit memory costs more than 256-bit memory, 24 ROPs cost more to produce than 16 ROPs, and so on, but they raised the texture fillrate to make up the performance.

I think Chizow posted this, which shows that PS is not as important as you or BFG claim.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: jiffylube1024
Originally posted by: Azn

Do you have anything to add that would offer some insight, or anything to prove I'm talking out of my arse? Of course not.

Um - read my post and follow my links that disprove your earlier statements. Your 'I can't hear you la la la' defense is silly.

Well, it was your mistake; don't point fingers at me because you can't post links properly.

It's not a defense when I'm trying to educate some of you who are too ignorant to acknowledge it. :beer:
 

nitro28

Senior member
Dec 11, 2004
221
0
76
Wow, this did spark quite the conversation. The reality is that being able to advertise shiny new GDDR5 is advantageous to the card makers whether it actually makes any real difference or not. People love to buy new technology because it is new. We have seen time after time that creating a perception of value or intrigue is all that is needed in advertising. Having the object actually perform doesn't seem to matter, at least not at first. I work in the pharmaceutical industry and we see it all the time. A new version of an older drug comes out, says it's new and improved, and people switch to it, only to find out down the road that it just costs them more and performs about the same.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Azn
I think Chizow posted this, which shows that PS is not as important as you or BFG claim.
I didn't say SP performance wasn't important, as that's clearly something that helps the GT immensely. I said that ROPs seem to be the main bottleneck for 8800s, not SPs. If the G92 GTS has 20-24 ROPs I'm sure it'll be the GTX/Ultra killer everyone is expecting, but instead its significant gains in SP performance are probably going to be crippled by its lack of ROPs, similar to what we're already seeing with the GT.

It's pretty clear that there's a strong relationship between # of SPs/shader clock and # of ROPs/clock speed, which makes sense. My guess is that the G92 shaders bring it to the point where ROPs become the bottleneck, and its higher core speed and more efficient texturing units help close that gap some. But at the same time, the G80 benches at the same clock speeds were provided to show that SPs only get you so far and that ROPs still yield the bigger performance gains (again, 112SP G80 vs. 112SP G92 and 96SP G80 vs. 112SP G92 favor the G80s at the same clock speeds).

What's pretty clear, however, is that memory bandwidth has little to no impact on any of the parts at any resolution, setting, or game (and yes, I've owned and tested all 3).
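
For a rough sense of that ordering, here are the 8800 parts ranked by raw pixel fillrate (ROPs x core clock) at commonly cited reference clocks, which are approximate:

# ROP throughput comparison: rank the 8800 parts by pixel fillrate rather
# than by stream-processor count.
parts = {
    #                 ROPs  core_MHz  SPs
    "8800 Ultra":     (24,   612,     128),
    "8800 GTX":       (24,   575,     128),
    "8800 GTS (G80)": (20,   500,      96),
    "8800 GT (G92)":  (16,   600,     112),
}

for name, (rops, core, sps) in sorted(
        parts.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
    print(f"{name}: {rops * core / 1000:5.1f} GPix/s from {rops} ROPs, {sps} SPs")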
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Maybe you didn't, but FiringSquad is sure telling us that the 8800GTS with slower SP clocks is besting the 8800GT with higher SP clocks.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,005
126
Probably the best proof so far BFG - well done
Thanks but to be fair Chizow deserves the credit for that find. :thumbsup:

I think I've said this earlier that 3dmark scores irrelevant but it is a tool to measure PS, fillrate, etc...
You mean 3DMark is only relevant when you say it is, just like any other results that are presented?

Your "get in the last word no matter how wrong I am" is tiresome. Keep it up and you'll be reported for trolling.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
ROPs have nothing to do with lower resolutions.

I think I've explained it to you. ROPs are for higher resolutions and AA. It's the same reason the 2600XT with 4 ROPs can best the 8600 with 8 ROPs in many games when AA is not used. Same with the 8800GTS vs the 8800GT.

I think you should find a more tangible benchmark instead of 3DMark scores, which mostly reflect the PS portion of the tests.

Memory bandwidth plays a vital role in fillrate. Why not stick a 128-bit memory controller in there to cut costs if memory bandwidth is not important?
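
As a purely directional worked example of why ROP load grows with resolution and AA (the overdraw figure is assumed, and real G8x ROPs can retire multiple Z/AA samples per clock, so the absolute numbers are not to be trusted):

# ROPs have to write every covered sample, so per-frame sample count
# scales with both resolution and AA sample count.
def rop_ms_per_frame(width, height, aa_samples, overdraw, rops, core_mhz):
    samples    = width * height * aa_samples * overdraw
    pixel_rate = rops * core_mhz * 1e6        # samples/s the ROPs can retire
    return samples / pixel_rate * 1000        # milliseconds

GT_ROPS, GT_CLK = 16, 600                     # 8800 GT
OVERDRAW = 3                                  # assumed average overdraw

for (w, h), aa in [((1280, 1024), 1), ((1280, 1024), 4),
                   ((1920, 1200), 1), ((1920, 1200), 4)]:
    ms = rop_ms_per_frame(w, h, aa, OVERDRAW, GT_ROPS, GT_CLK)
    print(f"{w}x{h} {aa}xAA: ~{ms:.2f} ms of ROP work per frame")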
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: BFG10K

You mean 3DMark is only relevant when you say it is, just like any other results that are presented?

Your "get in the last word no matter how wrong I am" is tiresome. Keep it up and you'll be reported for trolling.

You can't understand why 3DMark SCORES reflect the PS portion of the test. That is what Futuremark was aiming for at the time. It's the same reason the 8800GT shows huge gains in 3DMark from raising SP clocks but will not show huge gains in games in the real world. Like I said, it is a tool to measure each subsection of these cards and should be treated as such. 3DMark scores are irrelevant, but the actual data is relevant.

Do whatever you deem necessary. I came here to discuss; you, however, can't seem to agree, so you say I am trolling. More power to you.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Azn
ROPs have nothing to do with lower resolutions.

I think I've explained it to you. ROPs are for higher resolutions and AA. It's the same reason the 2600XT with 4 ROPs can best the 8600 with 8 ROPs in many games when AA is not used. Same with the 8800GTS vs the 8800GT.

I think you should find a more tangible benchmark instead of 3DMark scores, which mostly reflect the PS portion of the tests.

Memory bandwidth plays a vital role in fillrate. Why not stick a 128-bit memory controller in there to cut costs if memory bandwidth is not important?

Bandwidth plays the smallest role... it only becomes important when there's not enough of it, which is least likely to occur at low resolutions.

Fill rate is always important, as that determines how fast a frame is rendered. For example, if you needed to fill your pool with 48 gallons of water, which would let you accomplish that faster: using 24 buckets or 16, provided each trip took you the same amount of time?

I only used 3DMark because you stated it would show the GT was bandwidth-limited and I knew there was a review that specifically showed it wasn't. There are plenty of other benchmarks out there, but honestly it's as simple as moving the "Memory" slider up on any 8800 part and running some benchmarks to show bandwidth is simply a non-issue.

As for the 128-bit controller: if you used GDDR that could hit 4000MHz effective speeds it might be fine compared to a 256-bit controller at 2000MHz effective. Honestly I'm not sure, as some argue bus width is also important rather than just relying on theoretical bandwidth from faster RAM speeds. Also, a 128-bit controller would create a bigger problem on G80/G92 parts since the ROP clusters seem to be tied to the memory controllers.
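
The bucket analogy as plain arithmetic:

# Same trip time, more buckets per trip (more ROPs per clock) -> the
# 48-gallon pool fills in fewer trips.
POOL_GALLONS = 48
TRIP_SECONDS = 1.0            # assume every trip takes the same time

for buckets in (24, 16):
    trips = -(-POOL_GALLONS // buckets)       # ceiling division
    print(f"{buckets} buckets/trip -> {trips} trips, {trips * TRIP_SECONDS:.0f} s")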
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: chizow
Originally posted by: Azn
ROPs have nothing to do with lower resolutions.

I think I've explained it to you. ROPs are for higher resolutions and AA. It's the same reason the 2600XT with 4 ROPs can best the 8600 with 8 ROPs in many games when AA is not used. Same with the 8800GTS vs the 8800GT.

I think you should find a more tangible benchmark instead of 3DMark scores, which mostly reflect the PS portion of the tests.

Memory bandwidth plays a vital role in fillrate. Why not stick a 128-bit memory controller in there to cut costs if memory bandwidth is not important?

Bandwidth plays the smallest role... it only becomes important when there's not enough of it, which is least likely to occur at low resolutions.

Fill rate is always important, as that determines how fast a frame is rendered. For example, if you needed to fill your pool with 48 gallons of water, which would let you accomplish that faster: using 24 buckets or 16, provided each trip took you the same amount of time?

I only used 3DMark because you stated it would show the GT was bandwidth-limited and I knew there was a review that specifically showed it wasn't. There are plenty of other benchmarks out there, but honestly it's as simple as moving the "Memory" slider up on any 8800 part and running some benchmarks to show bandwidth is simply a non-issue.

As for the 128-bit controller: if you used GDDR that could hit 4000MHz effective speeds it might be fine compared to a 256-bit controller at 2000MHz effective. Honestly I'm not sure, as some argue bus width is also important rather than just relying on theoretical bandwidth from faster RAM speeds. Also, a 128-bit controller would create a bigger problem on G80/G92 parts since the ROP clusters seem to be tied to the memory controllers.

I can agree with that. But to achieve its full potential, memory bandwidth has to be able to keep that fillrate saturated, like the bucket (how much it can hold). It's the same reason the 8800GT doesn't reach its full potential in the multi-texture test of 3DMark06: the bucket wasn't big enough, and the water was overflowing before it reached the pool.

I think everyone can agree that 3DMark scores mostly reflect the PS portion of the tests. Real gaming benchmarks would prove what I've been saying all this time.