9800 GTX+/GTS 250


Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: Azn
It's pointless to argue with you when I already gave you an explanation of why the 8800GT beats the 8800GTS 640MB. Although the 8800GT is bandwidth-starved, it's able to beat the 8800GTS 640MB because it has more SPs and TMUs. With more bandwidth it would easily beat the 8800GTS 640MB twofold or more, while the 8800GTS 640MB is already saturated with bandwidth, so more bandwidth would not help that card much, just like the 2900XT.

You have a good point here; I don't think everyone sees it.
A nice test would be to take an 8800GTS640, and run some benchmarks, first with the memory slightly underclocked, then with the memory slightly overclocked.
If what you're saying is correct, then the benchmark results will be pretty close.
Doing the same test on an 8800GT should show a more direct impact on changing the memory speed (and therefore bandwidth).

I'd like to add in general... the first 8800s probably had 'too much' bandwidth because nVidia didn't have any DX10 software to analyse. They just had to estimate bandwidth requirements, and build their cards around those.
I think people focus mostly on fillrate and AA in this thread, but memory is used for more than just that. Another huge factor in the consumption of memory bandwidth is texturing.
I guess that's the core of the issue here. DX10 games aren't all that texture-heavy; they are mostly shader-limited. nVidia probably thought there would be a lot more texture usage when they originally designed the 8800 series. In terms of pure fillrate (ROPs and all), it doesn't have the horsepower to use anywhere near its memory bandwidth. I think that's what Azn is alluding to: it has a lot of 'spare' bandwidth which can be used for texturing.
Later generations of DX10 hardware focused more on shader performance, maximizing fillrate/AA and such, and less on memory bandwidth, which resulted in better performance at a lower cost. That's exactly what the transition from G80 to G92 was: a leaner, meaner chip.
With AMD it seems they overshot the bandwidth requirements even more with the original 2900XT, and you see a similar trend going to the 3000 and 4000 series.
But, hindsight is always 20/20. Neither AMD nor nVidia could predict what requirements today's DX10 games would have, back in 2006.
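
(A minimal sketch of the bookkeeping for the test Scali proposes above; the fps and clock numbers here are hypothetical placeholders, not measurements:)

    # Ratio of relative fps change to relative memory-clock change.
    # Near 1.0 means the card scales with bandwidth (bandwidth-starved);
    # near 0.0 means extra bandwidth goes unused.
    def mem_scaling(fps_base, fps_oc, clk_base, clk_oc):
        return ((fps_oc - fps_base) / fps_base) / ((clk_oc - clk_base) / clk_base)

    # Hypothetical numbers for illustration only:
    print(mem_scaling(30.0, 30.6, 800, 900))   # 8800GTS640-like card: ~0.16
    print(mem_scaling(30.0, 32.4, 900, 1000))  # 8800GT-like card: ~0.72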
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
"Keys says a lot of things. Primarily tell other people what to do even though he's persuaded by Nvidia and other forum members. I'm trying to make you understand by giving you examples. You ask questions because that's just how you post to win an argument. I did however gave you a straight answer after I gave you an explanation and examples."


Here is something else Keys does, Mr. Congeniality: point out potential errors in your methods.

"8800GTS
CORE REDUCTION 590/1836/1026
32.13 fps -7.2% difference"

Did you neglect to downclock the shaders here as well, or is this a typo?
I would gather that if you downclocked the shaders to 1350MHz too, you might notice a bit more than a 7.2% difference in performance.

 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: Keysplayr
"Keys says a lot of things. Primarily tell other people what to do even though he's persuaded by Nvidia and other forum members. I'm trying to make you understand by giving you examples. You ask questions because that's just how you post to win an argument. I did however gave you a straight answer after I gave you an explanation and examples."


Here is something else Keys does, Mr. Congeniality: point out potential errors in your methods.

"8800GTS
CORE REDUCTION 590/1836/1026
32.13 fps -7.2% difference"

Did you neglect to downclock the shaders here as well, or is this a typo?
I would gather that if you downclocked the shaders to 1350MHz too, you might notice a bit more than a 7.2% difference in performance.

Nope, that's no error, but nice try. I'm showing you the difference between core and memory clocks. After all, I'm trying to show you that fillrate is hindered by bandwidth, not SPs.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: Keysplayr
Did you neglect to downclock the shaders here as well, or is this a typo?
I would gather that if you downclocked the shaders to 1350MHz too, you might notice a bit more than a 7.2% difference in performance.

Well, assuming it's not a typo, you'd have a point that the test wasn't conducted properly.
However, it seems to me that this test is irrelevant to what Azn claims.
I think you just need to look at the memory speed reduction to see that a reduction in bandwidth has a direct and profound effect on performance. That's what he claimed. If there was plenty of bandwidth, then reducing the memory speed wouldn't have had such a profound effect.

Now I'd like to see what the same card would do with MORE memory speed/bandwidth.
I think it's likely that performance will increase. This would demonstrate that indeed the G92 core is held back by the memory.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Scali
Originally posted by: Keysplayr
Did you neglect to downclock the shaders here as well, or is this a typo?
I would gather that if you downclocked the shaders to 1350MHz too, you might notice a bit more than a 7.2% difference in performance.

Well, assuming it's not a typo, you'd have a point that the test wasn't conducted properly.
However, it seems to me that this test is irrelevant to what Azn claims.
I think you just need to look at the memory speed reduction to see that a reduction in bandwidth has a direct and profound effect on performance. That's what he claimed. If there was plenty of bandwidth, then reducing the memory speed wouldn't have had such a profound effect.

Now I'd like to see what the same card would do with MORE memory speed/bandwidth.
I think it's likely that performance will increase. This would demonstrate that indeed the G92 core is held back by the memory.

The whole thing is, the reason the 9800GTX+ would get near a GTX260 if memory bandwidth were equal is the 9800GTX+'s high core and insane shader clocks.

I don't know if I'm agreeing with Azn or not, but I can definitely concede that there is a remaining doubt: if only the memory on the 9800GTX+ were overclocked to its max, would there be a linear increase in performance? Or would there be diminishing returns because, as it sits at stock, bandwidth is sufficient for the core? I just wish he had a 9800GTX+ or a GTS250 to test this with.

Apoppin, you have a 512MB GTS250. Do you notice a linear increase in performance when overclocking only the memory?

There I go again, telling people what to do. Tsk tsk tsk, Keys.

 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: Scali
Originally posted by: Keysplayr
Did you neglect to downclock the shaders here as well, or is this a typo?
I would gather that if you downclocked the shaders to 1350MHz too, you might notice a bit more than a 7.2% difference in performance.

Well, assuming it's not a typo, you'd have a point that the test wasn't conducted properly.
However, it seems to me that this test is irrelevant to what Azn claims.
I think you just need to look at the memory speed reduction to see that a reduction in bandwidth has a direct and profound effect on performance. That's what he claimed. If there was plenty of bandwidth, then reducing the memory speed wouldn't have had such a profound effect.

Now I'd like to see what the same card would do with MORE memory speed/bandwidth.
I think it's likely that performance will increase. This would demonstrate that indeed the G92 core is held back by the memory.

Nope, not a typo. It actually has everything to do with what I've been saying since I joined this forum. If the 8800GTS weren't starved for bandwidth, the core reduction would have come out on top, but it didn't. Considering you can't raise the memory clocks much on the 8800GTS, reducing the clocks should show the same kind of results as raising them.

On BFG's Ultra, on the other hand, downclocking the core made the biggest drop in performance. The complete opposite of the G92 chip.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: Keysplayr
The whole thing is, the reason the 9800GTX+ would get near a GTX260 if memory bandwidth were equal is the 9800GTX+'s high core and insane shader clocks.

I don't know if I'm agreeing with Azn or not, but I can definitely concede that there is a remaining doubt: if only the memory on the 9800GTX+ were overclocked to its max, would there be a linear increase in performance? Or would there be diminishing returns because, as it sits at stock, bandwidth is sufficient for the core? I just wish he had a 9800GTX+ or a GTS250 to test this with.

This wouldn't be the first time you doubted me, Keys. Remember when the 9600GT was released? ;)
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: Keysplayr
The whole thing is, the reason the 9800GTX+ would get near a GTX260 if memory bandwidth were equal is the 9800GTX+'s high core and insane shader clocks.

Well, isn't that what was said all along?
Take away the memory limitation, and you can get a 9800GTX+ to perform near a GTX260?
Thing is, the 9800GTX+ already pushes GDDR3 about as far as it will go on its 256-bit bus, with effective speeds in excess of 2 GHz.
GDDR5 would be a good solution. A wider bus would mean a larger and more complex GPU design, which in turn would probably mean lower clockspeeds... so you'd get back to something like the GTX260.
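
(For reference, peak bandwidth is just bus width times effective transfer rate; a quick sketch, with the GTX260's memory clock assumed rather than taken from this thread:)

    # Peak memory bandwidth in GB/s = (bus width in bits / 8) * effective GT/s.
    def bandwidth_gbs(bus_bits, mtransfers_per_s):
        return bus_bits / 8 * mtransfers_per_s / 1000

    print(bandwidth_gbs(256, 2200))  # 9800GTX+, GDDR3 @ 2.2 GT/s: 70.4 GB/s
    print(bandwidth_gbs(448, 1998))  # GTX260, GDDR3 @ ~2.0 GT/s: ~111.9 GB/s
    print(bandwidth_gbs(256, 3600))  # GDDR5 @ 3.6 GT/s on the same 256-bit bus: 115.2 GB/s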
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Azn
Originally posted by: Keysplayr
The whole thing is, the reason the 9800GTX+ would get near a GTX260 if memory bandwidth were equal is the 9800GTX+'s high core and insane shader clocks.

I don't know if I'm agreeing with Azn or not, but I can definitely concede that there is a remaining doubt: if only the memory on the 9800GTX+ were overclocked to its max, would there be a linear increase in performance? Or would there be diminishing returns because, as it sits at stock, bandwidth is sufficient for the core? I just wish he had a 9800GTX+ or a GTS250 to test this with.

This wouldn't be the first time you doubted me, Keys. Remember when the 9600GT was released? ;)

I dunno, Azn. Maybe it's just your pompous personality that rubs people the wrong way.
Yes, I remember when the 9600GT was released. It was demonstrated that 64 shaders seemed to be enough in certain games. It seemed to be very close to the 8800GT in performance, and we wondered why. CoD4 was one of those games, if I recall.

 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: SSChevy2001
Originally posted by: apoppin
why is my Galaxy GTS 250/512MB consistently outperforming my Palit 9800GT/512MB ?
- i thought they were fairly close .. is the 9800GTX that much faster than the GT?
:confused:
Are you serious?

GTS250 738/1836/1100 128SPs 16ROPs 64TMUs
9800GT 600/1500/900 112SPs 16ROPs 56TMUs

22% increase in Memory Bandwidth
40% increase in FLOPS
23% increase in Pixel Fill Rate
41% increase in Texture Fill Rate

thanks for that; i was surprised, that is all - i thought they were a bit closer (see the sketch below for where those numbers come from)

... that is about where one would expect my results then - there are a couple of games where the minimums favor the GT; but generally the GTS runs away from the GT - some heavy overclocking would bring the GT a lot closer; but the GTS is also a good overclocker

it is a pretty decent midrange GPU now .. in its third iteration

i am going to set up an 8800GTX now just for fun to see how the original compares


does this story ever have a conclusion .. or is it "neverending"
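
(SSChevy2001's percentages above follow directly from the specs; a quick sketch, assuming bandwidth scales with memory clock since both cards share a 256-bit bus:)

    # GTS250 vs 9800GT: clocks in MHz (core/shader/memory) plus unit counts.
    gts250 = dict(core=738, shader=1836, mem=1100, sp=128, rop=16, tmu=64)
    gt9800 = dict(core=600, shader=1500, mem=900,  sp=112, rop=16, tmu=56)

    pct = lambda a, b: round((a / b - 1) * 100)  # percent increase, rounded

    print(pct(gts250['mem'], gt9800['mem']), "% bandwidth")                                   # 22%
    print(pct(gts250['shader'] * gts250['sp'], gt9800['shader'] * gt9800['sp']), "% FLOPS")   # 40%
    print(pct(gts250['core'] * gts250['rop'], gt9800['core'] * gt9800['rop']), "% pixel fill")    # 23%
    print(pct(gts250['core'] * gts250['tmu'], gt9800['core'] * gt9800['tmu']), "% texture fill")  # 41%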
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Here are some interesting results:
http://www.firingsquad.com/har..._performance/page8.asp

It seems that the 9800GTX can hold its own against the 8800GTX/Ultra in most tests... but when you go to the color fill test, the 8800GTX/Ultra show a HUGE advantage.
So it seems it balances out the wrong way: it has all this fillrate, but it can't texture and shade quickly enough for that to be useful in most games.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: Keysplayr
I dunno, Azn. Maybe it's just your pompous personality that rubs people the wrong way.
Yes, I remember when the 9600GT was released. It was demonstrated that 64 shaders seemed to be enough in certain games. It seemed to be very close to the 8800GT in performance, and we wondered why. CoD4 was one of those games, if I recall.

My pompous internet personality has nothing to do with what we are discussing. I've been at this computer science thing for a long time; if I sound pompous because I act like I know a lot about it, it's because I do. If that makes me pompous in your eyes, so be it. I have no love for haters.

I remember you telling me to observe instead of posting because I didn't know what I was talking about. If I remember correctly, the facts crumbled and you literally ate your own statement in that discussion when we tested the 8800GT by disabling SPs with RivaTuner. :p

Again, that old thread's argument is similar to this thread's. I remember trying to explain to you in that old thread that the 8800GT's fillrate was limited by bandwidth while the 9600GT was more balanced and not limited, considering there weren't many SP-hungry games at the time to hinder it even with 64 SPs. Now it's a little different: more SP-hungry PC games have appeared. This is the 'SPs are for the future' comment I made to BFG in that late-2007 thread, which we argued over for 100 pages. ;)
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: Scali
Here are some interesting results:
http://www.firingsquad.com/har..._performance/page8.asp

It seems that the 9800GTX can hold its own against the 8800GTX/Ultra in most tests... but when you go to the color fill test, the 8800GTX/Ultra show a HUGE advantage.
So it seems it balances out the wrong way: it has all this fillrate, but it can't texture and shade quickly enough for that to be useful in most games.

That's because the 8800GTX/Ultra has more theoretical pixel fillrate, plus the bandwidth to saturate more of it, than the 9800GTX.

If you look at the texture fill test, the 8800GT doesn't quite catch up to the 8800GTX even though the 8800GT has 82% more theoretical texture fillrate. That also leads me to believe that bandwidth is holding back nearly half of its texture fillrate.
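
(The 82% figure works out if you count texture address units, assuming the usual counts of 32 TAs on the G80-based 8800GTX and 56 on the G92-based 8800GT; a quick check:)

    # Theoretical texture fillrate ~ core clock (MHz) * texture address units.
    gt_8800  = 600 * 56   # 8800GT:  33,600 MTexels/s
    gtx_8800 = 575 * 32   # 8800GTX: 18,400 MTexels/s
    print(f"{(gt_8800 / gtx_8800 - 1) * 100:.1f}%")  # 82.6%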
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: Azn
That's because the 8800GTX/Ultra has more theoretical pixel fillrate, plus the bandwidth to saturate more of it, than the 9800GTX.

If you look at the texture fill test, the 8800GT doesn't quite catch up to the 8800GTX even though the 8800GT has 82% more theoretical texture fillrate. That also leads me to believe that bandwidth is holding back nearly half of its texture fillrate.

My point exactly. If you look at the changes from G80 to G92, they reduced the number of ROPs, which would account for the pixel fillrate decrease... at the same time they doubled the texture addressing units, which should give it more texturing power.
At the same time, the memory bandwidth was reduced slightly. So you can never get the full theoretical texturing power... however, in practice you aren't just texturing, and you aren't just filling pixels. You'll be doing a combination of both, and then the balance apparently starts to make a whole lot of sense.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Azn
Originally posted by: Keysplayr
I dunno, Azn. Maybe it's just your pompous personality that rubs people the wrong way.
Yes, I remember when the 9600GT was released. It was demonstrated that 64 shaders seemed to be enough in certain games. It seemed to be very close to the 8800GT in performance, and we wondered why. CoD4 was one of those games, if I recall.

My pompous internet personality has nothing to do with what we are discussing. I've been at this computer science thing for a long time; if I sound pompous because I act like I know a lot about it, it's because I do. If that makes me pompous in your eyes, so be it. I have no love for haters.

I remember you telling me to observe instead of posting because I didn't know what I was talking about. If I remember correctly, the facts crumbled and you literally ate your own statement in that discussion when we tested the 8800GT by disabling SPs with RivaTuner. :p

Again, that old thread's argument is similar to this thread's. I remember trying to explain to you in that old thread that the 8800GT's fillrate was limited by bandwidth while the 9600GT was more balanced and not limited, considering there weren't many SP-hungry games at the time to hinder it even with 64 SPs. Now it's a little different: more SP-hungry PC games have appeared. This is the 'SPs are for the future' comment I made to BFG in that late-2007 thread, which we argued over for 100 pages. ;)

"My pompous internet personality has nothing to do with what we are discussing."

Yup. It's your attitude that gets in the way, or I should say, I let it get in the way. I can admit when I am wrong and I usually keep an open mind on things, but I can't let your approach get in the way of that in the future. I have to step back and consider this the next time I read your posts, keeping in mind the ego posting them. It's not a bad thing, Azn. I just need to be more aware of it.

/cheers.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: Keysplayr
"My pompous internet personality has nothing to do with what we are discussing."

Yup. It's your attitude that gets in the way, or I should say, I let it get in the way. I can admit when I am wrong and I usually keep an open mind on things, but I can't let your approach get in the way of that in the future. I have to step back and consider this the next time I read your posts, keeping in mind the ego posting them. It's not a bad thing, Azn. I just need to be more aware of it.

A matter of perspective, I suppose. I didn't find Azn all that pompous. Then again, I think there's a point to what he says; if you think there isn't a point, it may sound pompous, I don't know.
That said, you have to be careful not to give off an "everyone who doesn't agree with me is an idiot" vibe.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: Keysplayr
"My pompous internet personality has nothing to do with what we are discussing."

Yup. It's your attitude that gets in the way, or I should say, I let it get in the way. I can admit when I am wrong and I usually keep an open mind on things, but I can't let your approach get in the way of that in the future. I have to step back and consider this the next time I read your posts, keeping in mind the ego posting them. It's not a bad thing, Azn. I just need to be more aware of it.

/cheers.

And you telling me to observe instead of posting? I just wonder who the pompous ass really is... Hmmm.. :p
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: Scali
Originally posted by: Azn
That's because the 8800GTX/Ultra has more theoretical pixel fillrate, plus the bandwidth to saturate more of it, than the 9800GTX.

If you look at the texture fill test, the 8800GT doesn't quite catch up to the 8800GTX even though the 8800GT has 82% more theoretical texture fillrate. That also leads me to believe that bandwidth is holding back nearly half of its texture fillrate.

My point exactly. If you look at the changes from G80 to G92, they reduced the number of ROPs, which would account for the pixel fillrate decrease... at the same time they doubled the texture addressing units, which should give it more texturing power.
At the same time, the memory bandwidth was reduced slightly. So you can never get the full theoretical texturing power... however, in practice you aren't just texturing, and you aren't just filling pixels. You'll be doing a combination of both, and then the balance apparently starts to make a whole lot of sense.

It just depends on the game. Most games aren't really shader-dependent, but some are; GRID, for instance, loves shaders. Most games, though, see a bigger impact from the right combination of core and bandwidth.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Azn
Originally posted by: Keysplayr
"My pompous internet personality has nothing to do with what we are discussing."

Yup. It's your attitude that gets in the way, or I should say, I let it get in the way. I can admit when I am wrong and I usually keep an open mind on things, but I can't let your approach get in the way of that in the future. I have to step back and consider this the next time I read your posts, keeping in mind the ego posting them. It's not a bad thing, Azn. I just need to be more aware of it.

/cheers.

And you telling me to observe instead of posting? I just wonder who the pompous ass really is... Hmmm.. :p

It all, and I mean ALL, stems from your original attitude in the 9600GT thread, Azn. I really never speak to anyone that way unless provoked. If I recall, we were so aggravated at each other that you actually blocked my PMs. I responded in kind. Funny, that.
 

AmberClad

Diamond Member
Jul 23, 2005
4,914
0
0
Enough. How about you guys take the back-and-forth bickering to PM instead of sending this thread any further off track.

AmberClad
Video Moderator
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: Azn
It just depends on the game. Most games aren't really shader-dependent, but some are; GRID, for instance, loves shaders. Most games, though, see a bigger impact from the right combination of core and bandwidth.

Obviously it always depends on what you do with it. That was my point... when nVidia designed the first 8800 series, they couldn't really predict what the average DX10 game would demand, as I already said. In theory it's quite possible to design a game that runs fantastically on G80 and much slower on G92. But as it turns out, G80 wasn't exactly where the average game was going, so with G92 nVidia could fine-tune the design to reality a bit.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: Scali
Originally posted by: Azn
It just depends on the game. Most games aren't really shader-dependent, but some are; GRID, for instance, loves shaders. Most games, though, see a bigger impact from the right combination of core and bandwidth.

Obviously it always depends on what you do with it. That was my point... when nVidia designed the first 8800 series, they couldn't really predict what the average DX10 game would demand, as I already said. In theory it's quite possible to design a game that runs fantastically on G80 and much slower on G92. But as it turns out, G80 wasn't exactly where the average game was going, so with G92 nVidia could fine-tune the design to reality a bit.

On the contrary. If you look at those Vantage results you linked, they paint a picture of how performance divides into separate groups. There's the color fill test for AA and high-resolution performance; the texture fill test for multi-texturing performance, which also sways AA and raw frame rates; the particle test for smoke and such in games; and then a group of shader tests that determine processing power. The 8800GTX will always be faster than the 8800GT, but the 8800GTS is fairly close to the 8800GTX, and so on.

G80 is still just as fast as G92 in real game benches. What hinders performance on the G92 is bandwidth, when you consider that it doesn't quite reach peak performance in the Vantage fillrate tests while G80 does. The average game is a combination of these things, but mostly swayed by fillrate performance, followed by shader performance. Of course there are a few games that sway more towards shaders, like GRID.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: Azn
On the contrary. If you look at those Vantage results you linked, they paint a picture of how performance divides into separate groups. There's the color fill test for AA and high-resolution performance; the texture fill test for multi-texturing performance, which also sways AA and raw frame rates; the particle test for smoke and such in games; and then a group of shader tests that determine processing power. The 8800GTX will always be faster than the 8800GT, but the 8800GTS is fairly close to the 8800GTX, and so on.

On the contrary of what?
Not sure how what you say relates to what I say.

Originally posted by: Azn
G80 is still just as fast as G92 in real game benches.

I think you are looking at it from the opposite direction.
What I'm saying is that it's G92 that is 'still just as fast' as G80, despite having a narrower memory interface and fewer ROPs. So G92 has a more efficient balance than G80 did. Because of the 256-bit bus they could use a cheaper PCB, and they used less memory to keep the cost down, without sacrificing performance. Quite a feat, really.

Originally posted by: Azn
What hinders performance on the G92 is bandwidth, when you consider that it doesn't quite reach peak performance in the Vantage fillrate tests while G80 does. The average game is a combination of these things, but mostly swayed by fillrate performance, followed by shader performance. Of course there are a few games that sway more towards shaders, like GRID.

You'll also see that 8800GTX/Ultra is generally faster when you go for really high resolutions and high AA settings. The combination of the extra memory, the extra bandwidth and the extra fillrate will then kick in.
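
(A rough illustration of why resolution and AA lean on memory: per-sample color plus depth/stencil; this ignores framebuffer compression and the resolve target, so treat it as a ballpark only:)

    # Approximate MSAA framebuffer footprint: 4 B color + 4 B depth/stencil per sample.
    def fb_mb(width, height, samples):
        return width * height * samples * (4 + 4) / 2**20

    print(fb_mb(1680, 1050, 4))  # ~53.8 MB
    print(fb_mb(2560, 1600, 4))  # ~125 MB, before textures and geometry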
 

BFG10K

Lifer
Aug 14, 2000
22,477
2,399
126
Originally posted by: Azn

It's pointless to argue with you when I already gave you an explanation why the 8800gt beats the 8800gts 640mb. Although the 8800gt is bandwidth starved it's able to beat 8800gts 640 mb because it has more SP and TMU.
Again, all you're doing is simply confirming what I've been saying all along: that despite having reduced memory bandwidth, the 8800 GT is faster because of improvements to the core, thereby proving memory bandwidth isn't the primary limiting factor.

Do you understand what bandwidth limitation is? It's when a card can't fully stretch its performance because of bandwidth, not when a card has more bandwidth and so should be able to beat another card. That's what you base your entire argument on, and you have a hard time understanding this concept.
Uh, no. My entire argument comes from the fact that you're claiming memory is the limitation, yet parts still get faster when it's reduced. Clearly it isn't the primary limitation.

It's like narrowing the neck of a bottle but still observing a higher flow of water. Clearly, then, the neck wasn't primarily holding things back.

1680x1050
Crysis 4xAF high settings

8800gts @ 756/1836/1026
34.43 fps

CORE REDUCTION 590/1836/1026
32.13 fps -7.2% difference

BANDWIDTH REDUCTION 756/1836/800
29.72 fps -15.8% difference

So you were saying? I suggest you go pick up a G92 and study up on it. This card is severely bottlenecked by bandwidth.
Where to even start with this?

How about the fact that you used a single game, a single benchmark, and a single card and are trying to claim that sole result is somehow the norm when multiple G92 benchmarks disagree with you?

How about the fact that you never even touched the shader clock?

How about the fact that you didn't even paint any kind of average across a range of scenarios?

Additionally, I'm pretty sure Chizow posted several tests from his G92 that showed the opposite of yours (i.e. memory showed the lowest performance impact). There were even several G92 overclocking threads that showed the same thing on the whole.

So yeah, your figures may be accurate, but they're also an outlier. You absolutely cannot infer it's the norm based on your sole result.
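
(For reference, the posted deltas work out as stock fps over reduced fps, minus one:)

    # Reproducing the percentages from the Crysis numbers above.
    stock, core_red, bw_red = 34.43, 32.13, 29.72
    print(f"core reduction: -{(stock / core_red - 1) * 100:.1f}%")     # -7.2%
    print(f"bandwidth reduction: -{(stock / bw_red - 1) * 100:.1f}%")  # -15.8%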

Again, you have a hard time understanding what bandwidth limitation means. I tried to explain it to you on many occasions and gave you plenty of examples. It's not my fault you can't seem to understand it and just spit out rhetoric. All I can hope is that you try. ;)
Again, you seem to have a hard time understanding that if you reduce something but performance goes up, it's not a limitation.

We are trying to find out if G92 is hindered by bandwidth. After all that is the topic on hand.
Again, we know it isn't, because G92 parts beat other parts that have more bandwidth.

We also know it isn't, because G92 parts' relative performance increased despite having their bandwidth reduced.

That, and you've repeatedly agreed that core improvements to the G92 more than offset the reduction in bandwidth, because that's the explanation you gave us as to why it's faster.

The 4850 sure is limited by bandwidth. You move the memory slider up and you get better average and minimum frame rates. Not to mention the 4850's core clocks are locked to its SP clocks.
Oh, so you use the term 'limited' to describe anything that holds back performance? In that case every DX10 part is basically limited by everything (core/memory/shader/texturing/CPU), since it's possible to find a scenario where improving one of these facets improves performance to some degree.

Heck, even overclocking the 2900XT's VRAM can yield some performance improvement in certain situations. I guess using your terminology I can say the 2900XT is limited by bandwidth, since moving the memory slider improves performance?

Using the term like that makes it lose meaning. More accurate usage would be based on the lowest proportionate ratio between the slider increase and the actual improvement. In that case the 4850 is clearly limited by the core, not the memory.
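
(BFG's "lowest proportionate ratio" idea expressed as a metric: percent fps gained per percent of clock added, per slider. The numbers below are hypothetical placeholders:)

    # Scaling efficiency per slider: %fps gain / %clock gain.
    # The slider with the highest ratio is the primary limiter.
    def efficiency(fps_gain_pct, clock_gain_pct):
        return fps_gain_pct / clock_gain_pct

    sliders = {
        "core":   efficiency(9.0, 10.0),  # 0.90 -> near-linear scaling
        "shader": efficiency(5.0, 10.0),  # 0.50
        "memory": efficiency(3.0, 10.0),  # 0.30 -> secondary factor
    }
    print(max(sliders, key=sliders.get), "is the primary limiter")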

Now what you are describing is which gets more gains from the GPU: SP/core vs. memory bandwidth. In this case the combination of SP and core wins over bandwidth.
That's my point, namely that the primary limitation comes from the core.

LOL... again with that same rhetoric. I already posted the answers multiple times. ;)
The only answers you posted yet again prove the 4850 isn't primarily limited by bandwidth.

But it was TechReport who posted the 3DMark theoretical figures and even posted a conclusion on why bandwidth limits fillrate. I'm just citing what they posted and giving you examples.
But again, these 'examples' have little to no basis in the real world. I'm showing you actual game figures while you post 3DMark synthetic tests as 'proof'. Uh-huh.

Then you should do something about your partner, since you can't trust that the 2900XT is faster than the 8800GTX in 3DMark.
Uh, no, I don't need to 'do' anything. Go assign homework exercises to someone who gives a shit.

If you don't know the reason why the 2900XT beats the 8800GTX in 3DMark06, what makes you qualified to write any kind of article and draw conclusions?
Is that supposed to incite some kind of a reaction on my part? Try harder.

To answer the OP's question: yes, G92 is hindered by memory bandwidth.
Again, multiple independent benchmarks disagree with your sole test. If the card were hindered by memory bandwidth then it couldn't possibly be showing performance gains after reducing it.

In the case of G92, memory bandwidth makes the most difference, as shown by the Crysis benchmark.
Yes, in your sole benchmark, which is neither complete nor the norm.

Look at the 4870's minimum frame rates compared to the Gainward 4850 that's clocked to the 4870's core speed with maxed-out memory clocks. You see 25 fps on the Gainward @ 1920x1200 while the 4870's minimum frame rate is 33 fps. That's quite a bit more of a jump than the 10-15% average frame rate gains. That's 25% better minimum frame rates.
Right, but again, I'm not trying to imply bandwidth makes zero difference. Not to mention that the Gainward is sometimes faster than the 4870 despite having less bandwidth, clearly indicating driver or benchmarking noise.

I've never said bandwidth is the primary factor with most DX10 parts. What you did say, however, is that bandwidth is a non-issue among DX10 parts, although it is very much an issue.
From my second post I stated no, it's not, not when it's readily demonstrated that SP clocks generally have a bigger impact on performance than memory clocks. Again, this is something that I've tested repeatedly with several parts.

Again my point is that bandwidth is not the primary limiting factor on DX10 parts (including the G92), not that bandwidth makes zero difference.

But based on my first comment in this thread I can see I used a poor choice of words to convey this meaning.
 

ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
Originally posted by: BFG10K
But based on my first comment in this thread I can see I used a poor choice of words to convey this meaning.

I don't agree. I knew what you were saying all along. No one in this thread has claimed that it makes '0' difference. However, Azn seems to think someone has said that and is arguing against a ghost.