So, with GDDR5 maybe we will get cards that can play Crysis at Very High

nitro28

Senior member
Dec 11, 2004
221
0
76
Samsung GDDR5

So, if this memory is due out in the first half of 2008 and samples have been given to all the big card makers, we should see some new cards in the first half, I would hope. How much difference should this memory make? It does appear to have much better specs.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
How much difference should this memory make?
Not that much, given most games these days are shader bound. Witness how well the 8800 GT and 3870 do with just a 256-bit memory bus compared to the 320/384/512-bit derivatives.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
I wouldn't say it's just shader bound with the current generation of games. I would say it's more texture bound, and about feeding it the right amount of memory bandwidth, than anything else. The 8800 GT does so well in this department and pulls ahead of the original 8800 GTS because of this. If the new 8800 GT had a 384-bit memory bus like the GTX, it would stomp an Ultra in its tracks.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
I don't think GDDR5 would be enough. You will need GDDR5 and the next-gen architecture (i.e., GeForce 10).
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
I wouldn't say it's just shader bound with the current generation of games.
Most of the bottleneck these days is with shaders given each pixel can have hundreds if not thousands of instructions running on it before it's drawn.

The 8800 GT does so well in this department and pulls ahead of the original 8800 GTS because of this.
The main reason it pulls away from the GTS is because it has more stream processors (aka shaders), which is exactly what I was saying earlier.

I would say it's more texture bound, and about feeding it the right amount of memory bandwidth, than anything else.
Texturing is certainly important but memory bandwidth isn't that important unless you're really crippled on 128 bit or something.

If the new 8800 GT had a 384-bit memory bus like the GTX, it would stomp an Ultra in its tracks.
I doubt that very much; witness how the 2900 XT with ~50% more memory bandwidth isn't really faster than the 3870 because the rest of the specs are pretty much the same.

Clearly memory bandwidth isn't having much of an impact in that situation.

Similarly the Ultra has ~80% more memory bandwidth than the GT but it's not even close to being 80% faster so again memory bandwidth isn't the limiting factor there.
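
For reference, that ~80% figure is easy to sanity-check from the commonly quoted specs. A minimal back-of-the-envelope sketch; the bus widths and memory clocks below are assumed stock reference values, not numbers taken from this thread:

```python
# Peak memory bandwidth = bus width in bytes * effective memory data rate.
# Assumed stock reference clocks: 8800 GT 900 MHz GDDR3 (1.8 GT/s effective),
# 8800 Ultra 1080 MHz GDDR3 (2.16 GT/s effective).

def bandwidth_gbs(bus_bits, data_rate_gtps):
    """Theoretical peak memory bandwidth in GB/s."""
    return (bus_bits / 8) * data_rate_gtps

gt    = bandwidth_gbs(256, 1.80)   # ~57.6 GB/s
ultra = bandwidth_gbs(384, 2.16)   # ~103.7 GB/s

print(gt, ultra, ultra / gt)       # ratio ~1.8, i.e. roughly 80% more bandwidth
```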
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,379
126
w3 n33dZ teh GDDR16 Nvidia 12000 and Core8OctoGoblin with 128GB of m3m0ry to play teh Crysis!

/2nd mortgage for PC parts
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: BFG10K

Most of the bottleneck these days is with shaders given each pixel can have hundreds if not thousands of instructions running on it before it's drawn.

Current games still need texture fillrate to draw screens. Without it, shaders would be useless. The main reason the 2900 XT did so poorly is that it didn't have enough texture fillrate to saturate its massive memory bandwidth.

The main reason it pulls away from the GTS is because it has more stream processors (aka shaders), which is exactly what I was saying earlier.

Kind of like how the new GTS with 128 SPs that are clocked higher still can't beat the original GTX with slower SP clocks.


Texturing is certainly important but memory bandwidth isn't that important unless you're really crippled on 128 bit or something.

Memory bandwidth is as important as texture fillrate. When you have massive texture fillrate you need memory bandwidth to saturate it, or else the texture fillrate sits there idle, much like unused memory bandwidth.

http://techreport.com/r.x/gefo...600/3dm-multi-1280.gif

I couldn't find the 8800 GT, so we can use the 8600 GTS, which is also memory bandwidth starved, as an example. The 8800 GT and 8600 GT/GTS have more in common than the 8800 GTS or the GTX. The 8600 GTS has a theoretical fillrate of 10800 Mtexels/s, but in the 3DMark fillrate test it shows as 7650 because it needs more bandwidth to saturate its fillrate.
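
As a quick check on those numbers, the theoretical figure falls out of the core clock and the texture unit count (assuming the stock 8600 GTS configuration of 16 bilinear texture units at 675 MHz), and the measured 3DMark result lands at roughly 70% of it:

```python
# Theoretical bilinear texel fillrate vs the 3DMark figure quoted above.
# Assumes a stock 8600 GTS: 16 texture units at a 675 MHz core clock.

core_mhz = 675
texture_units = 16

theoretical = core_mhz * texture_units   # 10800 Mtexels/s
measured = 7650                          # 3DMark multi-texture result cited above

print(theoretical, measured / theoretical)   # 10800, ~0.71 of theoretical
```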




I doubt that very much; witness how the 2900 XT with ~50% more memory bandwidth isn't really faster than the 3870 because the rest of the specs are pretty much the same.

Clearly memory bandwidth isn't having much of an impact in that situation.

Similarly the Ultra has ~80% more memory bandwidth than the GT but it's not even close to being 80% faster so again memory bandwidth isn't the limiting factor there.

The 2900 XT had GTS-level texture fillrate. Only at higher resolutions did its extra bandwidth ever serve any purpose.

It's different with the 8800 GT because it has more texture fillrate than the Ultra. Most of that texture fillrate is idling because it's memory bandwidth starved. Once more memory bandwidth can saturate the 8800 GT's fillrate, it can surpass the Ultra by a good margin.

 

nitro28

Senior member
Dec 11, 2004
221
0
76
GDDR5 would make a good excuse for the card makers to release a new card with a new engine since it will be one more difference that they can advertise between the old cards and new. A good marketing move whether it makes much of a difference or not.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
Current games still need texture fillrate to draw screens.
That isn't under debate and never was. My original claim was that shader performance is the most important factor these days while memory bandwidth isn't much of a factor unless a card is hopelessly starved like 128 bits or something.

Without it, shaders would be useless.
It's possible to write shader code that doesn't require any texels (e.g. particle effects, procedural textures, etc). Furthermore you might be running thousands of instructions per pixel while only requiring a few texel fetches for each.
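
As a toy illustration of the point (a CPU-side sketch, not real shader code), here is a "pixel shader" whose output needs zero texel fetches; everything comes from arithmetic on the pixel coordinates:

```python
# Toy CPU-side sketch of a procedural "pixel shader": the colour is computed
# purely from the pixel's coordinates, with no texture lookup at all.

def procedural_checker(x, y, cell=2):
    """Return an RGB colour for pixel (x, y) using only arithmetic."""
    on = ((x // cell) + (y // cell)) % 2 == 0
    return (255, 255, 255) if on else (40, 40, 40)

print(procedural_checker(0, 0))   # (255, 255, 255)
print(procedural_checker(2, 0))   # (40, 40, 40)
```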

The main reason the 2900 XT did so poorly is that it didn't have enough texture fillrate to saturate its massive memory bandwidth.
No, the main reason it does so poorly is because the shaders can't do as much work per clock cycle as nVidia's and also require optimal code to expose their full potential.

That and because they have no hardware AA resolve which loads the shaders even more.

Kind of like how the new GTS with 128 SPs that are clocked higher still can't beat the original GTX with slower SP clocks.
When official specs & reviews are out with proper benchmarks we can discuss that point further, assuming anything's left to discuss.

And again I'm not claiming memory bandwidth is irrelevant and makes no difference, what I'm claiming is shader performance is much more important.

Memory bandwidth is as important as texture fillrate.
No it isn't, given we have numerous examples of cards not affected much by a reduction in memory bandwidth.

The days of 100% memory bandwidth limitation because of fillrate saturation are long gone. Modern games load pixel shaders almost to the breaking point and that's where the bottleneck is, not on the memory.

I couldn't find the 8800 GT, so we can use the 8600 GTS, which is also memory bandwidth starved, as an example.
The 8600 GTS's situation is not even close to being that of an 8800 GT.

The 8800 GT and 8600 GT/GTS have more in common than the 8800 GTS or the GTX.
That claim is utter lunacy; the 8800 GT vs 8600 GTS is like apples vs oranges.

112 SPs vs 32 SPs, 16 ROPs vs 8 ROPs, the list goes on.

It's different with the 8800 GT because it has more texture fillrate than the Ultra. Most of that texture fillrate is idling because it's memory bandwidth starved.
You need to provide evidence of your claims. You need to demonstrate that overclocking the memory on the GT creates a linear performance increase compared to bandwidth in a range of modern titles; otherwise you're simply speculating.

As it stands now, the performance spread between the GTX / GT / GTS 640/320 is pretty much where you'd expect based on SPs and core clocks. The GT is clearly unfazed by its 256-bit memory bus because it gives the GTX/Ultra a run for its money.

And like I said earlier the Ultra is not 80% faster than the GT even though it has 80% more memory bandwidth.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
Additionally I forgot to mention that the GT only has a texel fillrate advantage when filtering regular INT textures.

When operating on FP formats (which most modern games use for HDR) its texel fillrate is actually lower than that of the GTX.

So again your texturing claims are largely unfounded.
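
For what it's worth, the usual back-of-the-envelope numbers behind that claim look like this. The unit counts and the half-rate FP16 filtering on G92 are assumptions based on commonly cited specs, not figures stated in this thread:

```python
# Rough texel fillrate comparison, INT8 vs FP16 bilinear filtering.
# Assumptions: 8800 GT (G92) has 56 filtering units at 600 MHz and filters FP16
# at half rate; 8800 GTX (G80) addresses 32 texels/clk at 575 MHz but has 64
# filtering units, so FP16 bilinear still runs at the full 32/clk rate.

gt_int8  = 0.600 * 56        # ~33.6 Gtexels/s INT8 bilinear
gt_fp16  = 0.600 * 56 / 2    # ~16.8 Gtexels/s FP16 bilinear

gtx_int8 = 0.575 * 32        # ~18.4 Gtexels/s INT8 bilinear
gtx_fp16 = 0.575 * 32        # ~18.4 Gtexels/s FP16 bilinear

print(gt_int8, gt_fp16, gtx_int8, gtx_fp16)
```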
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
You notice that when cards advertise their specs they throw in clock speeds, DDR type, shaders, and advertising-induced nicknames for the various methods they use (OMFGBBQ this uses a ringbus), but never any information on how it PERFORMS? I wish card manufacturers made all that data available but not front stage; instead of "detailed specs" they would have "performance charts" comparing it to other cards of its generation... but then again, it works.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: BFG10K

That isn't under debate and never was. My original claim was that shader performance is the most important factor these days while memory bandwidth isn't much of a factor unless a card is hopelessly starved like 128 bits or something.

Of course that was the debate. You said shader is the most important factor. I personally believe it's the texture fillrate, and the right amount of bandwidth to saturate that fillrate, that matter with these high-end cards.


It's possible to write shader code that doesn't require any texels (e.g. particle effects, procedural textures, etc). Furthermore you might be running thousands of instructions per pixel while only requiring a few texel fetches for each.

Yes, it's possible, but we are talking about the current generation of games here. You can go on and on about your theories, but the current crop of games needs fillrate.

No, the main reason it does so poorly is because the shaders can't do as much work per clock cycle as nVidia's and also require optimal code to expose their full potential.

That and because they have no hardware AA resolve which loads the shaders even more.

That is your hypothesis. The 2900 XT can do more shader operations than the 8800 Ultra, but it only rivals an 8800 GTS even when AA is not used. Its texture fillrate only rivals the 8800 GTS's, and it performs relative to the 8800 GTS.


When official specs & reviews are out with proper benchmarks we can discuss that point further, assuming anything's left to discuss.

And again I'm not claiming memory bandwidth is irrelevant and makes no difference, what I'm claiming is shader performance is much more important.

Did you not see the benchmarks at TweakTown? The 8800 GTX still beat the G92-core 8800 GTS.


No it isn't, given we have numerous examples of cards not affected much by a reduction in memory bandwidth.

The days of 100% memory bandwidth limitation because of fillrate saturation are long gone. Modern games load pixel shaders almost to the breaking point and that's where the bottleneck is, not on the memory.

That's not entirely true. Current games still rely on texturing to draw screens; then come the shaders to do their effects.


The 8600 GTS's situation is not even close to being that of an 8800 GT.

Surely the 8600 GTS has more in common with the 8800 GT than with the GTS or GTX. G92 is an upgraded version of G84, and G84 was upgraded from G80. Except G92 can grab twice as many texels per clock.

That claim is utter lunacy; the 8800 GT vs 8600 GTS is like apples vs oranges.

112 SPs vs 32 SPs, 16 ROPs vs 8 ROPs, the list goes on.

That's where you are mistaken. Nvidia's designs correspond across their whole lineup.

You need to provide evidence of your claims. You need to demonstrate that overclocking the memory on the GT creates a linear performance increase compared to bandwidth in a range of modern titles; otherwise you're simply speculating.

As it stands now, the performance spread between the GTX / GT / GTS 640/320 is pretty much where you'd expect based on SPs and core clocks. The GT is clearly unfazed by its 256-bit memory bus because it gives the GTX/Ultra a run for its money.

And like I said earlier the Ultra is not 80% faster than the GT even though it has 80% more memory bandwidth.

The GT was unfazed by its 256-bit memory bus because its memory is clocked much higher than the original GTS's. The GT has memory bandwidth of 57.6 GB/s, while a GTS has 64.0 GB/s; that is not much of a difference. The GT, however, excels in texel filtering rate over the GTS, and that is what commands its lead over the GTS, not the fact that it has 16 more shaders: it nearly triples the GTS's bilinear texel filtering rate and is about 1/3 faster in bilinear FP16 texel filtering.
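
For reference, those two bandwidth figures follow directly from bus width and memory data rate; a minimal sketch, assuming the stock reference memory clocks (900 MHz GDDR3 on the GT, 800 MHz on the original GTS 640):

```python
# Where the 57.6 vs 64.0 GB/s figures come from, assuming stock reference clocks.

def bandwidth_gbs(bus_bits, data_rate_gtps):
    return (bus_bits / 8) * data_rate_gtps

gt_8800 = bandwidth_gbs(256, 1.8)   # 8800 GT: 256-bit @ 1.8 GT/s -> 57.6 GB/s
gts_640 = bandwidth_gbs(320, 1.6)   # 8800 GTS 640: 320-bit @ 1.6 GT/s -> 64.0 GB/s

print(gt_8800, gts_640)             # only about 11% apart
```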

Memory bandwidth alone doesn't give you automatic performance. I think I explained it to you in my previous post.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: BFG10K
Additionally I forgot to mention that the GT only has a texel fillrate advantage when filtering regular INT textures.

When operating on FP formats (which most modern games use for HDR) its texel fillrate is actually lower than that of the GTX.

So again your texturing claims are largely unfounded.

You are right, I almost forgot about this. Depending on the game, it would beat the Ultra if the 8800 GT had enough bandwidth to saturate its fillrate.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
Of course that was the debate.
No it wasn't. You originally stated current games still need texture fillrate to draw screens. That was never the debate so stop concocting strawman arguments.

You said shader is the most important factor.
That's because for modern titles it is.

I personally believe it's the texture fillrate, and the right amount of bandwidth to saturate that fillrate, that matter with these high-end cards.
But again you have absolutely nothing to back that belief.

To provide evidence you'd need to overclock a GT's memory and show a linear relationship between memory bandwidth & performance (e.g. +10% memory bandwidth = +10% performance gain) in a range of modern games.

If you can't show that you're simply speculating.
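
A minimal sketch of what such a scaling test would report, assuming you benchmark at stock and overclocked memory speeds (the frame rates below are placeholders, not measurements):

```python
# Sketch of a memory-scaling test: compare the relative gain in frame rate to the
# relative gain in memory clock. The fps numbers are placeholders, not measurements.

stock_mem_mhz, oc_mem_mhz = 900, 990   # +10% memory clock on a hypothetical 8800 GT
stock_fps, oc_fps = 60.0, 61.5         # placeholder benchmark averages

bandwidth_gain = oc_mem_mhz / stock_mem_mhz - 1.0
fps_gain = oc_fps / stock_fps - 1.0

print(f"bandwidth +{bandwidth_gain:.0%}, fps +{fps_gain:.1%}")
# A ~1:1 relationship across several games would indicate a bandwidth limit;
# a much smaller fps gain points to the bottleneck being elsewhere.
```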

Pretty much all of the evidence thus-far points to modern games being shader bound because the performance we're seeing most closely corresponds to relative shading power, specifically SPs & core/shader clocks.

That's not to say other things like texturing and memory bandwidth aren't a factor, they just aren't as important. We aren't seeing large differences between memory bandwidth with either vendor unless the card is really crippled with 128 bits or less.

Yes, it's possible, but we are talking about the current generation of games here. You can go on and on about your theories, but the current crop of games needs fillrate.
No, the current crop of games need shading power, specifically pixel shading power for exactly this reason. They run a lot of instructions on the same pixel while not necessarily needing the same amount of texel operations.

We can see this from benchmarks and also from reviewer commentary like Tech Report.

To quote them: And, believe it or not, memory bandwidth is arguably at less of a premium these days, since games produce "richer" pixels that spend more time looping through shader programs and thus occupying on-chip storage like registers and caches.

That is your hypothesis.
Hypothesis? LOL, it's a proven fact. Look at any technical dissection of the two architectures.

The 2900 XT can do more shader operations than the 8800 Ultra
In theory perhaps, with code massaged well and under ideal conditions. But really that was kind of the case with the FX line too, and in practice it hardly ever materialized.

Did you not see the benchmarks at TweakTown? The 8800 GTX still beat the G92-core 8800 GTS.
Again when I see proper card specs and reviews then we'll talk about it. I'm not going to jump to conclusions based on leaked benchmarks from a single website.

Current games still rely on texturing to draw screens; then come the shaders to do their effects.
I'm not sure you quite understand what happens in a modern rendering system. Legacy multi-texturing is gone, replaced with pixel shading.

You do realize that for every texel that ends up on your screen there could have been hundreds if not thousands of calculations done on the pixel shader to alter its original form? And that most operations probably didn't involve any texturing at all?

Surely the 8600 GTS has more in common with the 8800 GT than with the GTS or GTX.
LOL, are you for real? How about checking something as basic as the specs before jumping to wildly inaccurate conclusions?

G92 is an upgraded version of G84, and G84 was upgraded from G80. Except G92 can grab twice as many texels per clock.
And?

That's where you are mistaken. Nvidia's designs correspond across their whole lineup.
Huh? 112 SPs vs 32 SPs is fact; look it up. To claim the cards are in a similar class with respect to performance is utter lunacy.

The GT was unfazed by its 256-bit memory bus because its memory is clocked much higher than the original GTS's.
Except I was referring to the GTX/Ultra which most certainly have significantly more memory bandwidth. Like I've said for the third time the Ultra has 80% more memory bandwidth than the GT.

Did you get that? 80%. Almost double; despite this the GT performs relative to its SPs and core clocks, not memory bandwidth or texel fillrate.

Memory bandwidth alone doesn't give you automatic performance. I think I explained it to you in my previous post.
Likewise texture fillrate. What is pretty much automatic is pixel shading since most commercial games made in the last three years or so use it extensively.

Depending on the game, it would beat the Ultra if the 8800 GT had enough bandwidth to saturate its fillrate.
In other words, not many games then, since most modern mainstream titles use FP rendering, so even if your texturing theory is valid it's a moot point in such titles.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: BFG10K
No it wasn't. You originally stated current games still need texture fillrate to draw screens. That was never the debate so stop concocting strawman arguments.

Are we going to have a 'NO you didn't, YES I did' debate here? That is part of the debate here. You specifically said shaders matter more. Without textures there are no shaders. Get it?

That's because for modern titles it is.

Sure, but it will never show as huge an improvement as having bigger texture fillrate combined with bigger bandwidth in the current crop of games. Shading matters only to a certain extent in most of the games out today. I can only think of one game where shading matters more than in any other game: Call of Juarez, I think, uses massive amounts of shading.


But again you have absolutely nothing to back that belief.

To provide evidence you'd need to overclock a GT's memory and show a linear relationship between memory bandwidth & performance (e.g. +10% memory bandwidth = +10% performance gain) in a range of modern games.

If you can't show that you're simply speculating.

Pretty much all of the evidence thus-far points to modern games being shader bound because the performance we're seeing most closely corresponds to relative shading power, specifically SPs & core/shader clocks.

That's not to say other things like texturing and memory bandwidth aren't a factor, they just aren't as important. We aren't seeing large differences between memory bandwidth with either vendor unless the card is really crippled with 128 bits or less.

Nothing to back it up? The GTX still beats the G92 GTS because it still has the right amount of bandwidth combined with its texture fillrate, while having slower SP clocks.

The G92 8800 GTS has a texture fillrate of 18.2 Gtexels/s but it's still starved by its 62.1 GB/s of bandwidth. A GTX has a texture fillrate of 18.4 Gtexels/s, but that fillrate is actually being utilized because its bigger memory bandwidth can saturate it. I assure you, if you tested a GT or the new G92 GTS with the 3DMark multitexture fillrate test, it ISN'T achieving its theoretical fillrate by a good margin.
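
To put a rough number on "saturate", here is a crude worst-case estimate of the bandwidth needed to feed those fillrates if every filtered texel had to come from memory, assuming 4 bytes per texel; real texture caches, reuse between neighbouring pixels, and compression cut this drastically, so treat it only as an upper bound:

```python
# Crude worst-case estimate: bandwidth needed to feed a given texel rate if every
# texel were fetched from memory (ignores texture caches, reuse and compression).

def worst_case_bandwidth_gbs(gtexels_per_s, bytes_per_texel=4):
    return gtexels_per_s * bytes_per_texel

print(worst_case_bandwidth_gbs(18.2))   # ~72.8 GB/s vs the GTS 512's 62.1 GB/s
print(worst_case_bandwidth_gbs(18.4))   # ~73.6 GB/s vs the GTX's 86.4 GB/s
```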


No, the current crop of games need shading power, specifically pixel shading power for exactly this reason. They run a lot of instructions on the same pixel while not necessarily needing the same amount of texel operations.

We can see this from benchmarks and also from reviewer commentary like Tech Report.

To quote them: And, believe it or not, memory bandwidth is arguably at less of a premium these days, since games produce "richer" pixels that spend more time looping through shader programs and thus occupying on-chip storage like registers and caches.

It needs shaders to do its effects, but textures and bandwidth still command performance in the real world, as shown in benchmarks. http://www.tweaktown.com/artic...arks_crysis/index.html


Hypothesis? LOL, it's a proven fact. Look at any technical dissection of the two architectures.

I don't see any proven fact. All I see is you telling me it is. :laugh:

In theory perhaps, with code massaged well and under ideal conditions. But really that was kind of the case with the FX line too, and in practice it hardly ever materialized.

But it can, as shown here in the shader portion of the 3DMark test of the 2900 XT vs the 8800 lineup. http://techreport.com/articles.x/12458/3

As you can see, the 2900 XT does quite well compared to the 8800 GTX and doesn't fall behind the 8800 lineup, yet its performance is still slower than the 8800 GTS.

Again when I see proper card specs and reviews then we'll talk about it. I'm not going to jump to conclusions based on leaked benchmarks from a single website.

What do you think that was? Fake review?


I'm not sure you quite understand what happens in a modern rendering system. Legacy multi-texturing is gone, replaced with pixel shading.

You do realize that for every texel that ends up on your screen there could have been hundreds if not thousands of calculations done on the pixel shader to alter its original form? And that most operations probably didn't involve any texturing at all?

It is not completely gone. It is still there. I'm not a full-time programmer so I can't comment.


LOL, are you for real? How about checking something as basic as the specs before jumping to wildly inaccurate conclusions?

Yes, I'm for real. :p They are still the same core design using different numbers of ROPs, texture units, PS, and clock speeds.



That was your rebuttal? Priceless. :laugh:



Huh? 112 SPs vs 32 SPs is fact; look it up. To claim the cards are in a similar class with respect to performance is utter lunacy.

Number of SP makes the cards somehow different? They're still designed relatively similarly to one another.

Except I was referring to the GTX/Ultra which most certainly have significantly more memory bandwidth. Like I've said for the third time the Ultra has 80% more memory bandwidth than the GT.

Did you get that? 80%. Almost double; despite this the GT performs relative to its SPs and core clocks, not memory bandwidth or texel fillrate.

You couldn't even understand what I said previously; how are you going to understand now? You just reply because you are hard-headed and think you know it all. Reread what I said about fillrate and bandwidth in my first reply to you.

Likewise texture fillrate. What is pretty much automatic is pixel shading since most commercial games made in the last three years or so use it extensively.

Last 3 years? I thought it's been about a year and a half since games like Oblivion and titles based on the Unreal 3 and Crysis engines showed up that use shading extensively. You can see the biggest jump when textures are saturated by bandwidth combined with the current SPs. There are still many games that rely on texture prowess over shaders. Actually it's about 95% of PC games out today.


In other words, not many games then, since most modern mainstream titles use FP rendering, so even if your texturing theory is valid it's a moot point in such titles.

I think I mentioned in my previous post that I forgot about FP texel fillrate. But yes, the 8800 GT would still beat an Ultra in some games if it had enough memory bandwidth to saturate its texel fillrate.

 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
Are we going to have a 'NO you didn't, YES I did' debate here?
If you want to claim something was said when it wasn't you should expect to be called out on it.

Without textures there are no shaders. Get it?
Rubbish, not to mention pixel shader operations don't have a 1:1 relationship to texture operations. Get it?

Sure, but it will never show as huge an improvement as having bigger texture fillrate combined with bigger bandwidth in the current crop of games. Shading matters only to a certain extent in most of the games out today. I can only think of one game where shading matters more than in any other game: Call of Juarez, I think, uses massive amounts of shading.
Again total and unfounded speculation with absolutely zero evidence to back it up. That you keep repeating your theories doesn't make them true.

Nothing to back it up? The GTX still beats the G92 GTS because it still has the right amount of bandwidth combined with its texture fillrate, while having slower SP clocks.
Where?

I assure you, if you tested a GT or the new G92 GTS with the 3DMark multitexture fillrate test, it ISN'T achieving its theoretical fillrate by a good margin.
Who gives a crap about a theoretical test like 3DMark? You run a synthetic test specifically designed to bench multi-texturing and you somehow think that translates to a real game?

It needs shaders to do its effects, but textures and bandwidth still command performance in the real world, as shown in benchmarks. http://www.tweaktown.com/artic...arks_crysis/index.html
Pardon me? Your own link proves you're wrong.

In Crysis @ 1920 x 1200 the GTS 512 IS faster than the GTX.

It isn't faster than the Ultra but that's likely because the XXX moniker means its core/shader is overclocked and probably higher than the GTS's. In fact it's likely the GTX is overclocked too but that article is so poorly done we can't be sure.

So to spell it out in simple terms for you the 64 GB/sec card is beating the 86.4 GB/sec card on account of more powerful shading capability.

What part of this are you having trouble understanding? Your own links are proving how wrong you are, for heaven's sake.

I don't see any proven fact.
Then you need to do some research as it's not my job to educate you.
But it can, as shown here in the shader portion of the 3DMark test of the 2900 XT vs the 8800 lineup. http://techreport.com/articles.x/12458/3
I don't see anything relevant in that link except the 2900 XT is at best tied with the GTS in a synthetic pixel shader test, which again is synthetic and doesn't necessarily relate to games.

What do you think that was? Fake review?
LOL, please tell me you aren't this naive. Do you always treat solitary pre-release reviews as gospel?

In either case your own link proves you're wrong. A card with lesser bandwidth but with more shading power is beating the other card.

It is not completely gone. It is still there.
Actually it is, given DX10 has no fixed-function pipeline anymore and given that cards have been doing fixed functions on shaders since the GF3 days.

I'm not a full-time programmer so I can't comment.
Funny, it hasn't stopped you thus-far.

Yes, I'm for real. They are still the same core design using different numbers of ROPs, texture units, PS, and clock speeds.
Same core but everything else is different? So in your mind that scenario is classed as equality, is it? :roll:

Your reasoning is simply comical.

That was your rebuttal?
Your own comment refuted itself.

"It's the same even though everything else is different!"

LOL!

Number of SP makes the cards somehow different?
Is that a trick question or something? The number of SPs dictates performance. Do you know what an SP is?

You couldn't even understand what I said previously; how are you going to understand now? You just reply because you are hard-headed and think you know it all. Reread what I said about fillrate and bandwidth in my first reply to you.
I'm not going to waste my time reading that rubbish again when once was more than enough to demonstrate to me you don't know what you're talking about.

Last 3 years? I thought it's been about a year and a half since games like Oblivion and titles based on the Unreal 3 and Crysis engines showed up that use shading extensively.
LOL, how about the likes of Fear, Riddick, Far Cry et al? Given you're so wrapped up with HL2 I'm not at all surprised you failed to notice.

You can see the biggest jump when textures are saturated by bandwidth combined with the current SPs. There are still many games that rely on texture prowess over shaders. Actually it's about 95% of PC games out today.
Evidence?

I think I mentioned in my previous post that I forgot about FP texel fillrate. But yes, the 8800 GT would still beat an Ultra in some games if it had enough memory bandwidth to saturate its texel fillrate.
More flip-flopping nonsense. You've been proven wrong time and time again yet you keep posting as if you know what you're talking about.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Shading power has been a limiting factor since ~2004. Beyond3D has a few articles up showing the difference between texture and shader use in games. Memory bandwidth as a bottleneck is rare from what I can tell. The latest from Nvidia helps prove that: less bandwidth, more SPs, better performance.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
Click.

According to that link, in 2004 over 75% of new games were using pixel shaders, and by 2006 it was 100%.

And in the lower graph you can clearly see the trend of increased shader operations vs texturing. The fact is texturing demands are not increasing anywhere near as fast as pixel shading demands.

I don't think you have anything left to argue given the evidence pretty much backs everything I've said.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: Genx87
Shading power has been a limiting factor since ~2004. Beyond3D has a few articles up showing the difference between texture and shader use in games. Memory bandwidth as a bottleneck is rare from what I can tell. The latest from Nvidia helps prove that: less bandwidth, more SPs, better performance.

But it has more texture fillrate as well.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: BFG10K
Click.

According to that link, in 2004 over 75% of new games were using pixel shaders, and by 2006 it was 100%.

And in the lower graph you can clearly see the trend of increased shader operations vs texturing. The fact is texturing demands are not increasing anywhere near as fast as pixel shading demands.

I don't think you have anything left to argue given the evidence pretty much backs everything I've said.

Yes, games are using pixel shaders, but it wasn't until Oblivion that it ever showed heavier use, which is where the X1900 series outperforms the GeForce 7 series.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
I don't think I have time for this to continue, nor do I want to sit here arguing with ignorance. All you're going to do is reply again and again with nonsense, word for word. That's why you have 15000 posts. :roll:

When the review of the new G92 8800 GTS with 128 SPs and faster clocks is posted with many different game benchmarks on major sites, you will be the one made a fool of.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
Yes, games are using pixel shaders, but it wasn't until Oblivion that it ever showed heavier use, which is where the X1900 series outperforms the GeForce 7 series.
Huh? What does the GeForce 7 series or Oblivion have to do with it?

In addition to being a different architecture there's the other issue of the filtering fiasco on the GF7.

In a direct comparison to the X1800 XT (the latter having pretty much the same specs but 1/3 the shaders) the X1900 had a healthy lead even at launch across a range of games.

In particular look at the pasting it delivers in Riddick, Fear and Call of Duty 2 when you crank the details.

When the review of the new G92 8800 GTS with 128 SPs and faster clocks is posted with many different game benchmarks on major sites, you will be the one made a fool of.
You've already been made the fool as your own benchmark proved you were wrong.