40nm Battle Heats Up

BFG10K

Lifer
Aug 14, 2000
22,709
2,976
126
Originally posted by: chizow

I said a criteria like "Top 10 games from the last 2-3 months" is always going to be more objective and always going to be more relevant than what he claimed were objective benchmarks, especially if he's going to claim it was a list of popular titles.
How are they more relevant if those five titles are the games nVidia asked reviewers to use?

How are they more relevant if they don't include an adequate cross-section from a range of titles?

I realize this is a foreign concept to you, but some people don't have the ADD equivalent when gaming and hence still actively play slightly older titles, titles that are still quite demanding and still quite recent.

Are you saying the inclusion of random older titles like CoJ, RS:Vegas, and Jericho that never even satisfied any such popularity, relevance or best-selling criteria are more relevant than recent titles like COD5, L4D, FO3 or FC2?
No, I'm saying a benchmark suite that includes both modern and legacy titles is more relevant and robust than one that only includes titles nVidia told reviewers to use. Without a large cross-section of titles the results can be skewed towards a particular vendor depending on what titles are used. A large enough title list reduces or even eliminates this problem.

How is a list of "Top 10 titles in the last 3 months" less biased than "Whatever is on Wolfgang's Hard Drive"?
What are you talking about? In the Cat 9.1 article ComputerBase included a range of new titles like CoD 5, FC 2 and S:CS. The difference is that they also included legacy titles, which is my point.

What a joke, this has nothing to do with the titles still being very demanding,
Actually it has everything to do with it. I play Call of Juarez and Jericho and I'll take any extra performance I can get.

Unless your criteria is perhaps "Titles that hit the $9.99 Bargain Bin within 3 months of release" or perhaps "Poorly reviewed titles with Metacritic scores less than 60",
What the hell are you talking about?

http://www.metacritic.com/game...=rainbow%20six%20vegas (score=85)

http://www.metacritic.com/game...z?q=call%20of%20juarez (score=72)

http://www.metacritic.com/game...richo?q=clive%20barker's%20jericho (score=63)

Not one of those titles has a rating below 60. Are you going to shut up about reviews now?

I'm really not sure how you can argue Nvidia's guidance is somehow "cherry-picked" or "marketing".
LMAO. Given they stated what titles have to be tested (unlike ATi who only stipulated the title count), I'm not sure how anyone could come to any other conclusion. But there you go, you never cease to amaze me.

You should if you're going to claim "12 random titles on Wolfgang's Hard Drive" is somehow more objective or relevant than Top 10 titles from the last 3-4 months.
This has already been covered repeatedly.

I've already considered the possibility and cross-referenced older results and found they're not a carbon copy as they were in the past with months-old archived results.
That doesn't change the fact that the scores could be wrong like they were in Big Bang. I'm not saying one way or another, just pointing out that someone who was asking Derek to stand down as a reviewer should be considering such a scenario.

And Grid showed improvement, so you need to retract your lie.
Retract your lie Chizow: Nvidia did not list improvements in the titles AT tested.

You were wrong.

Retract it immediately and stop playing rhetorical games.

And Grid showed improvement, so you need to retract your lie.
Err, no. 1.79% is well within the margin of benchmarking error.

Anyway, I never claimed there weren't any performance gains; I merely pointed out that the scores were an outlier compared to other reviews, and after later testing it was found they weren't accurate. Therefore you should be questioning the figures in your linked review, but you're not. You're happy to accept those because they paint nVidia in a good light.

Stop arguing in circles with your useless rhetoric and retract your lie: Nvidia did not list improvements in the titles AT tested.

Retract your lie Chizow and stop trolling.

And Crysis showed improvement once resolution/AA was increased.
Only after retesting was done that proved the first batch of benchmarks was not showing the true story. Which is my whole point: the first benchmarks were not indicative of reality.

Also I wasn't directly referring to their "benchmark result", as I've stated numerous times not all games, even if listed, would show improvement; I was referring to his conclusion, which was clearly the outlier. He stated the drivers did not make any noticeable impact, which was more or less a foregone conclusion as he didn't bother to test enough of the games listed or sufficient resolutions and settings to come to that conclusion.
Oh, I see. So you don't even need benchmark scores now, it's enough you can read Derek's mind to tell us what he was thinking, and therefore retroactively apply this logic backwards to his results?

So what are you arguing now exactly? That the scores were an accurate indication of reality but the conclusion wasn't? :roll:

What utter hair-splitting and semantic games on your part.

No the benchmark wasn't wrong,
If they weren't wrong then how come they didn't mimic that of other sites?

If they weren't wrong how come AT corrected them later and admitted they weren't an accurate indication of the state of affairs?

Derek simply didn't test thoroughly enough to come to the conclusion he came to and later corrected his mistake by testing more titles and more resolutions/settings, which was my point about it being the outlier all along.
More total hair-splitting, semantic games and trolling on your part.

Answer the question Chizow: was the initial review an accurate reflection of Big Bang or not?

Answer the question and stop trolling and playing semantic games.

Are you really going to get behind those benchmarks as evidence to back your point?
Absolutely, namely the point that the initial figures weren't a true indicator in relation to what others were getting, and neither was the conclusion. But given that at this time you don't even understand what's being argued, it's no wonder you've totally lost the plot and just keep typing simply because you can use a keyboard.

Your arguments are like a fish out of water: they keep flapping out of reflex but they never achieve anything useful.

Yep, I understand you claimed numerous times that ATI had better and more robust drivers based on your experience, and now you're claiming you bought another Nvidia part because Nvidia's drivers are better and more robust. Makes total sense.
Actually that comment doesn't make any sense whatsoever and it's not surprising given it mimics your state of understanding of the situation.

No it shows averages can clearly be skewed by subjective selections that favor one vendor or another, which is why averages and aggregates should not be used as a cumulative indication of performance.
Again, fuck the averages. Ignore them if you like. We're focusing on the scores that don't include nVidia's cherry-picked games, and observing performance gains missed in many other reviews.

As for it being a "fact", I'm not so sure of that given they couldn't even replicate their performance gains a day later:
Why don't you ask them? Perhaps they tested another benchmark. Anyone with the most basic level of benchmark understanding knows you can't compare figures across reviews.

Afterward? No you made all those idiotic comments about ATI drivers being superior in your experience long before you touched a 4850, which was what? 4 years after the last ATI part you used?
Except those "idiotic" comments were later backed by Derek and his peers (according to him).

Answer the question Chizow: did Derek end up backing my claims about ATi driver superiority in the early Vista days?

It's hilarious you're attempting to justify comments made years before you finally decided to refresh your frame of reference.
Yet after I refreshed my frame of reference you were still claiming I couldn't make a comparison. Meanwhile your frame of reference stopped at the 9700 Pro but you were all too eager to make sweeping generalizations about the state of ATi's monthly drivers.

Derek didn't even enter the discussion until months later.
How is that relevant? He still ended up backing my claims and proved you wrong.

Answer the question Chizow: did Derek end up backing my claims about ATi's driver superiority during the early Vista days, thereby proving you wrong?

But nice try, you explicitly claimed your opinion was based on your experience despite the fact you hadn't used an ATI part in years.
That's another lie on your part. I frequently continued to use older ATi parts when I swapped them into my system for testing purposes. But keep digging that hole further for yourself.

And there you go again, trying to clump experiences you didn't have with online feedback, which aren't your experiences. You still can't seem to make the distinction, but this isn't surprising given your comments about Vista, Nvidia drivers and hot fixes, as a devout XP user.
You still can't seem to understand Derek ended up backing my claims which proved you wrong.

You still can't seem to understand I have relevant experience with the 4850.

You still can't seem to understand your frame of reference stopped at the 9700 Pro so you're in no position to be commenting about the merits of monthly drivers.

You still can't seem to understand your frame of reference stopped at the 9700 Pro so you're in no position to be commenting about the state of running modern games on modern ATi parts.

You still can't seem to understand your frame of reference stopped at the 9700 Pro so you're in no position to be attempting to argue against my claims about driver comparisons.

I never refused to accept Derek's claims, as has already been linked for you.
So you admit I was right then and you were wrong, given Derek ended up backing my claims?

And I haven't said anything about your 4850 experiences other than I'm sure the conclusion was predictable in order to justify previously ignorant comments.
You're "sure"? How exactly? Did you pull that certainty out of your orifice?

I also found it incredibly ironic and not surprisingly hypocritical that you would still choose to purchase an Nvidia part that was by most accounts inferior to the 4870 1GB based on criteria you've set. And now, you're claiming your decision was based on Nvidia having superior driver features?!?!? LMAO. We certainly have come full circle with your hypocrisy.
I see Azn is now drilling you about your choice of hardware purchases. So Chizow tell me, how does it feel to have someone questioning your buying rationale when they clearly have no idea what they're talking about?

How does that medicine of yours taste, hmmm?

Again, the difference is, I didn't make an idiotic claim that the comments were based on my experience.
Right, you made idiotic comments, period.

And yes, quoting the likes of Anand, Derek, and now Jarred is certainly compelling testimony, as they're absolutely more qualified to comment than you given they actually have access and relevant experience with the hardware simultaneously at any given time, unlike you.
So again I'll ask whether they backed my claims, thereby making me right and you wrong?

I'm not making sweeping generalizations and these results don't seem to be the outlier, they're pervasive.
Pervasive to whom? You haven't touched an ATi part since the 9700 Pro, so how are they pervasive to you?

As for your fixes...what's that supposed to mean other than they were already working on a fix?
So working on a fix is bad now?

It's already been demonstrated numerous times and confirmed by ATI's own driver team that it would most likely take 2 months in order to get a fix in due to alternating driver trunks.
It's also been demonstrated that I've received fixes the very next month that I reported a problem. Maybe they were already working on a fix, maybe they weren't. The point is the end result, which was a fix within one month of me reporting it.

Is FC2 stuttering even fixed?
I'm honestly not sure. Has the physics freezing been fixed in Mirror's Edge? Even after another emergency hot-fix for another TWIMTBP title (pervasive, to use your terms) Azn is still reporting freezes with PhysX enabled.

It's possible Nvidia takes longer with fixes for legacy titles, but it's also clearly obvious Nvidia has better support for new titles, a claim I made very early on.
Obvious to whom? You've tried newer titles on your 9700 Pro, have you? Or are you making sweeping generalizations again based on FC2, which many who actually use ATi parts clearly recognize as an outlier?

I've used modern ATi hardware (at the time) to run launch titles (at the time) without issue while nVidia users had problems in some of those titles, especially many TWIMTBP titles.

No need to start this again, I know you prefer support for old titles and aren't like the majority of users who buy new video cards for new games, and that's fine.
You still don't get it: new titles are very important to me, but so are old titles.

That's nice, except my references, quotes and links to credible sources are always going to be more relevant than your non-concurrent experiences riddled with 3 year holes.
But I had references, quotes and links to credible sources and you still denied them. You were claiming claptrap like "they were caused by other things in the system", "they're not nVidia problems" and even worse, outright denying them.

And this was still after I had linked to forum threads with multiple dozen pages replicating the problems on a range of systems, and even after quoting nVidia's fixes in their own driver readme.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: MegaWorks
Originally posted by: chizow
Originally posted by: MegaWorks
I'm still trying to get a direct answer to my question from Mr. Member of Nvidia Focus Group wannabe, so let me ask him again. chizow, are you saying that nVidia drivers are more robust than ATI? Yes or No?
What would it matter what I think? You'll just ignore it and maybe over react again. :) I'm very happy with Nvidia's drivers though, especially support for new and popular titles, which is the main reason I keep my hardware up-to-date in the first place.

No I won't, I've asked you directly, how is that ignoring? But by reading some of your posts I guess the answer is yes. :p But I disagree with you on that! Why? I play a lot of games, and I've used the R300, R420, R520, RV670 and the RV770 and none of them gave me problems with any of my games. OK, maybe the RV770 with FC2 but that's it, and my situation with CrossFire was a BIOS issue rather than the software drivers.

The GTX 260 that I'm using is ok! I mean nothing revolutionary, it works just like my 4850. I really don't see the difference the way you put it. :confused:

AMD is not alone when it comes to driver problems. I'm using the latest official Nvidia drivers with Mirror's Edge and PhysX and the game constantly freezes. I even used the latest Nvidia beta drivers that fix Mirror's Edge yet it still freezes. Turn off PhysX and it runs fine. I don't even know why I bought this card. It's not all that much faster than my 8800gs was. :/ I could have gone with a 4830 for cheaper and had the same results, but I felt the G92 was the better raw performer.

I might just crossfire 2 4830 together for less than $200 and it should easily outperform GTX 280.
 

Mana

Member
Jul 3, 2007
109
0
0
Hm, I hope ATI releases these sooner rather than later, as I'd say I'm due for an upgrade soon.

Also, not to be a jerk, but can we stop with the mass quoting and replying to individual sentences? It makes it really difficult to read and follow what you're saying.
 

dug777

Lifer
Oct 13, 2004
24,778
4
0
Amazing how much effort young chizow has put into defending the green team in here ;)

I'm rather looking forward to the next generation of cards, the current crop are getting somewhat boring. That said, my 4850@800/1023 is more than fast enough for my needs at the moment.

 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Azn
Incompetent like your fingernail explanation, when you tried to make the case that ROPs make the most performance impact in games? :laugh:
No, that I shouldn't have to detail everything that "ROP" entails when cutting ROPs also results in a loss of those logical units.

Uh. Yes you did. :roll: You also said bandwidth didn't matter either. :laugh:
Keep in mind though, the Shader Core speed difference doesn't produce nearly the results in performance as a straight increase to core or memory clocks. I think AT did a comparison in that OCZ 8800GTX review and the difference wasn't linear, maybe 20-50% improvement relative to Shader Core increase.

I posted that on 2/27/2007 almost two years ago, so what were you saying again? You were probably still talking about vertex shaders being more important than pixel shaders on your crippled 7300GS back then.

WRONG!!! Raising core frequencies has minimal impact on G92. This has already been tested on 2 G92 cards, the 8800gs and 8800gts, evidence which you have been ignoring. G94 is a balanced product so you would need to raise everything, not just the core.
Minimal impact from core frequencies? Really? Is that why G92 GT, to GTS, to GTX to GTX+ and 9800GX2 compared to the SLI G92 solutions all show significant gains from core clock increases, despite similar 1000-1100MHz bandwidth? That's 600MHz to 738MHz, a 23% difference in core clock. Are you saying the performance difference between G92 is closer to the minimal differences in SP and memory frequency, or closer to the 23% differences in core clock? G92 benefits less than G80 and GT200 from core clock frequencies, but it still has the greatest impact on performance.

ROFL... I even provided evidence back then which you ignored because it just made you look foolish with your ever retarded claim that ROPs make the biggest performance difference. Here it is again. :eek:

http://techreport.com/r.x/rade...850/3dm-color-fill.gif

2900xt 3.8 Gpixels/second
3870 3.1 Gpixels/second

Why does it outperform in high color fill? Oh that's right, ROPs are tied to the memory controller. In this case the 2900xt had more bandwidth so it has more pixel performance than the 3870. That 20% advantage in ROP performance equates to a 1% performance difference in the real world. :roll:
Rofl, 3DMark synthetics again. ROP performance between the parts is almost the same, only a 4% difference, so you can stop with whatever nonsense about ROP performance being tied to memory controllers:
R600 and RV670 specs

But this isn't surprising from someone who claimed "Bigger bus is just better. It's wider and able to hit some peaks a smaller bus can't sustain." when discussing the 3870 and 2900XT, while linking a bunch of garbage 3DMark theoreticals (again, about as useful as counting frames on a loading screen) and ignoring actual game benchmarks from the same site that showed minimal difference between the parts.
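For what it's worth, here's a quick back-of-the-envelope sketch in Python (the clock and bus figures are approximate reference specs I'm assuming, not numbers taken from the linked graph) of why a color-fill synthetic tracks bandwidth far more than ROP throughput:

# Rough theoretical throughput for R600 (2900 XT) vs RV670 (3870).
# Clock/bus figures are approximate reference specs, assumed for illustration.
cards = {
    "2900 XT": {"rops": 16, "core_mhz": 742, "bus_bits": 512, "mem_mhz_eff": 1650},
    "3870":    {"rops": 16, "core_mhz": 775, "bus_bits": 256, "mem_mhz_eff": 2250},
}

for name, c in cards.items():
    fill_gpix = c["rops"] * c["core_mhz"] / 1000           # peak Gpixels/s
    bw_gbs = c["bus_bits"] / 8 * c["mem_mhz_eff"] / 1000   # peak GB/s
    print(f"{name}: ~{fill_gpix:.1f} Gpix/s fill, ~{bw_gbs:.1f} GB/s bandwidth")

# The theoretical fillrates differ by only ~4% (the core clock gap), while the
# 3870 has roughly a third less bandwidth, which is the gap the bandwidth-bound
# color-fill test actually mirrors.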

ROFL. You've proven to yourself that bandwidth mattered even at low resolution when you applied AA, which you said it wouldn't. You even said it yourself... GTX 280 bandwidth was being wasted because it didn't have enough fillrate. When you downclocked your memory, even by 28% the GTX 280 still has enough bandwidth to run efficiently. Now try downclocking it to 9800gtx+ memory bandwidth and see your card perform more like a 9800gtx+ than a GTX 280. :brokenheart:
Except the comparison was never with the 9800GTX+; I already know the GT200 is always faster than the 9800GTX+. The point of the exercise was to show memory bandwidth has much less impact than core clock increases, and that the difference in bandwidth between the GTX 295 and GTX 280 was less relevant than the loss of ROPs. My results clearly show that. :)

Your arguments are hypocritical and illogical at best. If someone actually read your arguments to Nvidia or AMD engineer or in a court they would surely laugh in your face. :laugh:
Like how a 27% decrease in memory bandwidth results in a 3-8% difference in performance? Anyone would clearly see that's less than the 15-25% difference between GTX 295 and GTX 280 SLI.

It was bandwidth limited @ 1680x1050. What makes you think it's not bandwidth limited @ 1280x1024? :laugh: Do you want me to test Crysis again but @ 1280x1024 with my 8800gts? Don't try to tell me about bandwidth limitation when you don't even have the slightest idea. You made a wild guess and you were wrong. Everyone can make a guess and be wrong. It doesn't mean you are stupid.
LOL? Uh, maybe because reducing resolution means a reduction in bandwidth requirements? There's no wild guessing here other than what gibberish will come out of your mouth next.

Bandwidth plays a role whether it be low or high resolution. What you haven't figured out is that the more fillrate you have, the more bandwidth you need to run efficiently. Just because you have a certain amount of bandwidth doesn't mean you aren't bandwidth limited.
Yes it plays a role but it clearly plays less of a role at lower resolutions as there's simply fewer pixels per frame, which reduces how much data passes to/from the frame buffer. I'm not arguing efficiency, I'm arguing which factor has the larger impact and clearly it's not bandwidth. Bandwidth is only an issue if you clearly don't have enough and it's completely crippling performance so that gains in other areas show no gain.

Your GTX 280 or GTX 260 isn't bandwidth starved. You and I can both agree to that, but your implication that G92 isn't bandwidth starved, when it has texture fillrate closer to the GTX 280's, is ludicrous.
It's not ludicrous when G92 always benefits more from core/shader clock increases than increases to memory bandwidth. If G92 were as bandwidth starved as you claim, simply increasing memory bandwidth by itself would yield a bigger gain than core/shader increases, but it does not. There's at least 5 G92 parts that show this to be the case, scaling from 600 to 750MHz with memory clocks locked at 1000-1100MHz.

That's all you can come up with (BS) when evidence is provided for you. :disgust:

I pretty much explained this in my previous post in this thread and 9600gt thread with Keys. Here I will explain it again since you seem to think I couldn't explain it.

G92 is bandwidth starved and I've pretty much said this from the very start, while G94 is a balanced product. With AA the 9600gt comes within 10% of the 9800gt because of the bandwidth limitations of the 9800gt, but when you compare raw performance without AA the 9800gt is roughly 20-35% faster than the 9600gt when bandwidth restrictions were less critical. Now if both cards had all the bandwidth they needed the 9800gt would be much faster with or without AA.
Yep, it is BS because you haven't and still can't explain away the G94. But this should help straighten things out for you. You should be familiar with it, as you've referenced it in the past:

Expreview G94 to G92 GS

Please explain how a G92 card with more pixel/texture fillrate and shader performance is able to perform within 5% of G94, despite a 33% reduction in bandwidth? You said G92 didn't have enough bandwidth to satisfy its texture fillrate, yet here's a G92 part that shows no adverse effects from less bandwidth.

Also please explain to us how G94 with 33% fewer SP and TMUs is able to stay competitive with G92 GS if SP and TMU are the most important aspects of performance. Yes it has more bandwidth, but that shouldn't matter since there's less texture fillrate to begin with.
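To put rough numbers on that comparison, here's a minimal sketch (the 9600GT and 192-bit G92 GS specs are approximate reference figures I'm assuming, not values from the Expreview article, and review samples may have been clocked differently):

# Approximate reference specs, assumed for illustration only.
g94_9600gt = {"sp": 64, "tmu": 32, "core_mhz": 650, "shader_mhz": 1625, "bw_gbs": 57.6}
g92_gs     = {"sp": 96, "tmu": 48, "core_mhz": 550, "shader_mhz": 1375, "bw_gbs": 38.4}

def tex_fill_gtexps(card):
    # Theoretical bilinear texture fillrate in Gtexels/s.
    return card["tmu"] * card["core_mhz"] / 1000

def shader_gflops_mad(card):
    # Rough MAD-only shader throughput in GFLOPS (2 flops per SP per clock).
    return card["sp"] * card["shader_mhz"] * 2 / 1000

for label, card in (("G94 (9600GT)", g94_9600gt), ("G92 GS", g92_gs)):
    print(f"{label}: ~{tex_fill_gtexps(card):.1f} Gtex/s, "
          f"~{shader_gflops_mad(card):.0f} GFLOPS, {card['bw_gbs']} GB/s")

# On these assumed specs the G92 GS comes out ~27% ahead in texture fillrate
# and shader throughput on ~33% less bandwidth, yet the reviewed gap was
# reportedly only ~5%.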

ROFL. You are the one here spreading Nvidia marketing jargon. :p I've proven almost everything you claimed wrong with my benchmarks and your benchmarks. :laugh: If that makes me a troll to you I guess I am. :D
Explaining the differences in architecture and how they correlate to real-world performance between the parts is marketing jargon? More like squashing misinformation from someone who has repeatedly demonstrated incompetence and the inability to absorb readily available information.

Because you need bandwidth to take full advantage of that texture fillrate not to mention AA needs more bandwidth.
But the 4870 has less texture fillrate than 9800GTX+ and beats the 9800GTX+ even without AA, so additional bandwidth shouldn't be an issue, yet the 4870, like the 260 runs circles around the 9800GTX+. Weird. :confused:

I wouldn't say circles now. It's roughly 20% faster.
Yep, it's always faster despite lower texture fillrate theoreticals. But is this surprising given the GTX 260 follows the same pattern as well? :) Gotta love how you throw up all these theoretical numbers which never bear out in real world applications. Just shows theoreticals are just that, theoretical and ultimately useless.

Exactly my point. The 9800gtx+ is bandwidth limited to the point that increasing the core had very little impact on performance, as shown in my Crysis benchmark. Did I say shader didn't impact performance? Of course I didn't, I said texture and shader make a bigger impact than ROP. :roll:
Except that's clearly not true as G92 has gone through 25% increases to core from 8800GT to 9800GTX+ with a minimal 10%-15% increase in memory speed, yet it still scaled significantly with core clock increases.

ROFL. What hint? That you are making shit up? RV770 can only do 16 pixels per clock at 32bit color and GTX 280 can do 32 pixels per clock. Only time it can write/blend 2 pixels per clock is with MSAA or 64bit color consisting of HDR scenes. :laugh:

http://techreport.com/r.x/rade...ender-backend-ppcs.gif
Ah yep, I hadn't referenced that graph in a while and was still thinking of parts that could only write/blend at half speed. However it still shows the 4870 is faster than the 9800GTX+ even without AA, even though it can write a similar number of pixels per clock.

Grasping at straws is right. You were trying to compare L2 cache to GPU logical units, and I corrected your flawed argument. :gift: Depending on what you were doing those units can have a bigger performance impact. That's the point. But in a game ROPs don't pull nearly as many frames as texture or SP would, as long as you weren't limited anyway.
LOL, BS, you linked a die shot of GT200 and claimed ROPs that appeared to be only 25% of the die couldn't have the biggest impact on performance because they were only 25%, at which point I illustrated that equating die size to performance is clearly a flawed analysis. See, the difference is I was using L2 cache to clearly show die size is not proportionate to performance, whereas you tried to show that it is with GT200. So do you think ROP performance and size is still a relevant comparison, or not?

Why don't you test your GTX 280 while you are at it. Try to see if you get the same results as I did. Of course not, because the GTX 280 is core hungry while my GTS is bandwidth hungry. Now if your GTX 280 showed the same results as I did then I would be wrong and you would be right, but then again you are too chicken shit to test your GTX 280 because you already know what the outcome will be. I would test more games but the majority of games out there don't have a built in benchmark. I don't have WIC and the demo doesn't work with the latest drivers. GTA4 shouldn't be a factor because that game is CPU dependent.
Blah blah blah. I did test Crysis, it showed an 8% difference from a 27% reduction in memory clocks. I also tested 4 other titles that showed a 0-5% difference. It's obvious you're too "chicken shit" to stray away from Crysis as it's one of the few titles that is bandwidth intensive enough to show a significant decrease in performance from a reduction in bandwidth at a lower resolution like 1680, yet it's still only 8%. Run a straight line and use FRAPS for all I care, just don't hide behind a lame excuse like "I don't have enough games with a built-in benchmark so I can only use Crysis".

Look at your minimum frame rate in Crysis and WIC at 21% and 14%. I don't know how much more relevance you need. :roll: Now clock your GTX 280 memory clocks to 550mhz and do the benches again and compare it with 9800gtx+. I dare you to post your results.
I'll only do it if you double dog dare me! :laugh: What about minimum frame rates? You claimed the difference between GTX 295 and GTX 280 SLI was due to bandwidth, they used averages so minimums were never in question. My benchmarks clearly showed a 3-8% difference from a 27% reduction in memory, proving your theory wrong while showing memory bandwidth was indeed less significant at lower resolutions, just as I stated.

ROFL. You never made any claims? WTF??? In this post alone you were taking jabs about the GTX 260 vs 9800gtx+ not being bandwidth starved for performance, and you made no claims? W@W!! I'm speechless. Pathetic. High resolutions? I tested 1680x1050 with my 8800gts and 1440x900 with my 8800gs. :roll:
What do your results with the 8800GTS and GS have to do with my claims about the GTX 260 and 9800GTX+? If you're going to try and illustrate they're bandwidth limited, then show how much of an increase in performance is gained from an increase in memory clocks. Increasing bandwidth requirements and then reducing bandwidth doesn't prove your point about being bandwidth limited. That'd be like saying the 9800GTX+ is bandwidth limited, so to prove this, I'm going to clock its memory down to 128-bit 8600GT levels.

That's because you don't have the slightest clue when it comes to GPUs. I think you are no better than any of the new forum members asking which card is faster, except those newbies are willing to learn, unlike you. :laugh: Nazis used to believe something too. That didn't make them right. ;)
Ya, you're an imbecile for equating anything on a tech forum to Nazism. But I'm sure all those new forum members are tickled pink to hear garbage like "GTX 260 isn't much better than 9800GTX+" or "9800GTX+ would stomp GT200 with more memory bandwidth" or "9800GTX+ is actually faster than GT200, but not really". And you still can't explain away the G94. :laugh:

Employee discount I bet, for spreading Nvidia marketing jargon. :laugh: There's no way I'm paying more than $200 for a GTX 280. The card performs fine for older games but in new games like Crysis it still chokes, not to mention the ridiculous power requirements for such a slow ass card.
Yeah, employee discount from one of my other employers, Microsoft. Cash back employee benefit program was great, you didn't even have to be an employee to get in on it. :) As for marketing jargon, slow ass card. LMAO. Funny coming from someone who has a long history of only using "slow ass cards". But keep pouring it on my GTX 280, it can take the criticism, really. LOL.

I bet you downclocked your core and SP clocks by now only to find out I was right all along. That 10% turned into 15-25% with AA. ROFL... No wonder you don't want to show results. Such a silly boy.
I bet....you have no clue what you're talking about. So you're saying GT200 gets more than a linear increase from core clock increases? I'm a big fan of overclocking Nvidia parts and the performance gain they give, but even I can't make that claim. A 4% increase/decrease is going to yield a 4% increase/decrease at best, which isn't enough to make up for the 15-25% difference between GTX 295 and GTX 280 SLI.

Knowledgeable observers like who? Yourself? :laugh: You've also linked bit-tech benchmarks with GTX 260 SLI beating GTX 295 too when it's not theoretically possible. :laugh:
Like Derek and Bit-Tech who both stated ROPs or bandwidth were bottlenecking GTX 295, and I'm sure most other review sites as well. And it is theoretically possible that GTX 260 SLI beats GTX 295, which just shows TMU/SPs aren't the most significant factor when it comes to performance.

Exactly no proof. When you lowered your bandwidth your minimum frames were all over the place dropping as much as 21%.
It showed a 27% reduction in bandwidth resulted in 3-8% difference in FPS and proved my point that bandwidth alone wasn't enough to explain away 15-25% differences between GTX 295 and GTX 280 SLI.

Only if it worked that way. Poor Chizow. :frown: When you lower core, SP and bandwidth in combination you get much greater drops than clocking each down separately. Of course you would have to be knowledgeable about GPUs to actually know this, but then again you don't know anything about GPUs, so. :(
Yes, you get bigger decreases when you lower all simultaneously, but if you decrease all equally the drop in performance will not be greater than the % you decreased them by; the difference should be very linear. When you change them individually, you can then come to a conclusion about which factor has the greatest impact on performance by comparing the actual clock decrease to the % drop in performance. And this is clearly illustrated when I decreased memory bandwidth 27% and only saw a 3-8% drop in performance.

This is really elementary stuff when it comes to overclocking, enthusiasts have been making these comparisons for years and have generally found memory clocks historically have much less impact on performance than core clock. I guess introducing that third shader clock threw you off somehow lol.
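That comparison boils down to a one-line calculation. A minimal sketch (the FPS and clock numbers below are illustrative, built around the 27% / 3-8% figures quoted in this thread, not actual benchmark data):

def scaling_sensitivity(fps_base, fps_lowered, clock_base, clock_lowered):
    # Ratio of the % FPS drop to the % clock drop for one domain
    # (core, shader or memory) changed in isolation; ~1.0 means the card
    # is fully limited by that domain, near 0 means it barely matters.
    fps_drop = (fps_base - fps_lowered) / fps_base
    clock_drop = (clock_base - clock_lowered) / clock_base
    return fps_drop / clock_drop

# Illustrative numbers only: a 27% memory downclock (assumed ~1107 MHz to
# ~808 MHz on a GTX 280) costing 8% FPS gives a sensitivity of roughly 0.3,
# i.e. bandwidth is far from the dominant factor at that setting.
print(scaling_sensitivity(fps_base=60.0, fps_lowered=55.2,
                          clock_base=1107, clock_lowered=808))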

Of course I did but it's okay. I can easily clock my 8800gts to stock clocks and only raise memory clocks and core clocks separately to show you the same kind of results, but then again it would make your arguments look silly. You wouldn't want that. Of course not. You will just say it doesn't prove anything when the evidence is before you. :p
Just as long as you understand why your initial benchmark was flawed, I don't really care if you run the benchmark again or not because I already know you're going to see less return on the increase to bandwidth.

Benchmarks talk chizow barks. :(
Yep and 27% talks, 3-8% barks, really softly. :)

Ruff ruff. errr.... :laugh: Useless without benches.
There's about 20 GTX 285 benchmarks that show a 10% increase in clocks results in <=10% more performance. No need for me to prove a 4% difference in clocks is going to result in a <=4% difference in performance when that information is readily available and a verifiable fact for anyone who has actual experience with the parts (and isn't completely incompetent, like you).

Make some sense. Why would I bench a 3dmark fillrate test and measure fps? In the 3dmark fillrate test the screen isn't black with a logo. It's testing peak textures with multiple layers of textures.
Because it shows GPU workload and not theoreticals ultimately determine fillrate and FPS. It also shows pixels are still being drawn regardless how blank/empty you think they are.

What does downclocking the core of your GTX 280 have to do with what was stated? You said memory clocks wouldn't matter but it showed huge drops in minimum frame rates and in average frame rates by 3-8%. As for core I've already mentioned this numerous times in this thread. Your GTX 280 is core hungry and my 8800gts is bandwidth hungry. :roll:
No, I said bandwidth was less significant at lower resolutions even with AA, and it clearly is. Now going back to my original point about ROPs being more important than SP and TMU and bandwidth, do you think downclocking the core clock by 27% would result in a miniscule 3-8% drop in performance? Think about this a second before you reply, given you've already questioned how much a 4% core clock difference would make.

At lower resolutions you need less VRAM. Come on chizow, you should know this. I thought you were brighter than this but apparently you are not. I only brought it up because the GTX 260 has more VRAM for higher resolutions and/or AA. When that VRAM advantage isn't a factor, I implied it's not much faster.
I'm well aware VRAM is less significant at lower resolutions, which is why I asked why you brought it up when we were specifically discussing lower resolutions where VRAM would be less of an issue. Oh right, it's because you're not smart enough to focus on what you're arguing.

So you are implying all cards are built like your GTX 280, so all cards behave the same to core frequency like your GTX 280. How retarded of you. :eek:
Yep, all Nvidia parts since G80 that I've owned have behaved similarly where they benefit the most from increases to core clock over shader or memory. This is obvious to anyone who has used these parts, the fact you haven't come to the same conclusion, despite experience with only older/crippled parts would indicate you're incompetent.

Significantly? :laugh: I wouldn't say all that when it only performs 5-10% faster.

This is where the extra shader clocks and core clocks come in. Although it's bandwidth limited it's still able to pull more fillrate with high core clocks, but not as much as a card that has a combination of bandwidth and core efficiency.
LMAO finally, progress. It's more than 5-10% from 8800GT or GTS to 9800GTX+, it's closer to the 15-25% core/shader difference between the parts. It's obvious core/shader still has a greater impact than bandwidth despite your claims G92 was bandwidth limited to begin with.

If I prove it will you acknowledge that g92 is bandwidth starved? Of course not. You will say something as stupid as Crysis is bandwidth starved and we are back to square 1. It wouldn't matter anyway because I already have the results ready if you are willing to acknowledge you were wrong.
Sure you do, post them. I already know a memory bandwidth increase will result in less of an increase than core/shader increases. It's really simple, which of the 3 yields the bigger increase? That means increasing each individually, not keeping 2 factors as high as they can go and then dropping the one you claim will gain from an increase by 25%.

Did you even understand what I had originally posted? You are disagreeing with something you didn't even understand? :roll:

I mentioned that if the 8800gs showed a 10% drop from downclocking memory, the 8800gts should show bigger drops at the same resolution. In this case my 8800gts dropped 16% while the 8800gs dropped 10%. Why, you ask? Because the 8800gs was bandwidth starved in the first place. If you add 25% more of everything to a bandwidth limited card you get amplified results.
That might be true; unfortunately you didn't test them at the same resolution, which was my point to begin with. Do you think the 8800GTS would've shown a bigger % drop at 1440 than the 8800GS? Maybe, but not with any certainty given you've decreased resolution 36% and decreased bandwidth 33% as well.

ROFL. You said you would show me benchmarks of the exact same card performing the same at lower resolution with lower bandwidth, when in fact there are no such benchmarks available on the web, which you have lied about. Then you say go research? Pathetic, feeble attempts. :laugh: I can easily do this with my 8800gts, cut my bandwidth in half and make it crawl, but what's the point. :roll:
Yep pretty sure I came across a benchmark with it, of course you claimed it was a BS part when it clearly isn't. And yes cutting your bandwidth in half would prove the point that insufficient bandwidth at higher resolutions can cripple performance.

You're cool, you drop a lot of money on gaming hardware. I'm 33 years old. I game on the side. Nothing major, I do it as a hobby, but that's me. I'm content with what I have because that's all I need.
Oh wow you said that as if I cared. It just makes your comments and comparisons about GTX 280 that much funnier. :)

ROFL... Now you are denying... Seriously Lame. :thumbsdown:
What am I denying? One of the first things I said was not to bother pulling Crysis benches as I'm well aware it's responsive to core/shader/memory clocks individually, and much more so than other games.
 

Zstream

Diamond Member
Oct 24, 2005
3,396
277
136
*grabs the popcorn*

Seriously chizow, your arguments are not holding very much weight. In fact you go off on tangents and this is why the thread is where it is at.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: BFG10K
How are they more relevant if those five titles are the games nVidia asked reviewers to use?

How are they more relevant if they don't include an adequate cross-section from a range of titles?

I realize this is a foreign concept to you, but some people don't have the ADD equivalent when gaming and hence still actively play slightly older titles, titles that are still quite demanding and still quite recent.
They're more relevant because Qbfx claimed the Computerbase review was more objective than Nvidia's biased picks based on popularity, when that's clearly not the case.

No, I'm saying a benchmark suite that includes both modern and legacy titles is more relevant and robust than one that only includes titles nVidia told reviewers to use. Without a large cross-section of titles the results can be skewed towards a particular vendor depending on what titles are used. A large enough title list reduces or even eliminates this problem.
Yet none of that matters as his claim was one suite was more objective than another, based on popularity, which is clearly false. Are COJ, RS:Vegas, and Jericho more popular or more relevant than the likes of COD5, L4D, or FO3?

How is a list of "Top 10 titles in the last 3 months" less biased than "Whatever is on Wolfgang's Hard Drive"?
What are you talking about? In the Cat 9.1 article ComputerBase included a range of new titles like CoD 5, FC 2 and S:CS. The difference is that they also included legacy titles, which is my point.

Actually it has everything to do with it. I play Call of Juarez and Jericho and I'll take any extra performance I can get.
That's nice, except neither satisfy the popular or relevant criteria more than titles tested by other sites based on Nvidia's Top 10 criteria.

What the hell are you talking about?

http://www.metacritic.com/game...=rainbow%20six%20vegas (score=85)

http://www.metacritic.com/game...z?q=call%20of%20juarez (score=72)

http://www.metacritic.com/game...richo?q=clive%20barker's%20jericho (score=63)

Not one of those titles has a rating below 60. Are you going to shut up about reviews now?
LOL! I never checked the ratings actually, I just know Jericho and Call of Juarez received poor ratings. Still, none of the 3 satisfy any popular or relevant criteria, so they're clearly less relevant than Nvidia's "hand-picked" popular, best-selling titles of 2008.

LMAO. Given they stated what titles have to be tested (unlike ATi who only stipulated the title count), I'm not sure how anyone could come to any other conclusion. But there you go, you never cease to amaze me.
Yep they stated 5 titles with a selection of titles to choose from, which ultimately serves the same purpose as ATI's 5 title limit, to limit exposure of driver issues. Given the small selection sample, the chance many of the same games would've been reviewed anyways is highly likely.

That doesn't change the fact that the scores could be wrong like they were in Big Bang. I'm not saying one way or another, just pointing out that someone who was asking Derek to stand down as a reviewer should be considering such a scenario.
The Big Bang scores weren't wrong, at least I had no reason to believe so. And as I said earlier, I did consider the newer drivers were archived, but again, I cross-referenced and found they weren't a carbon copy.

Retract your lie Chizow: Nvidia did not list improvements in the titles AT tested.

You were wrong.

Retract it immediately and stop playing rhetorical games.

Err, no. 1.79% is well within the margin of benchmarking error.
But Grid showed gains, so you need to retract your lie and stop playing rhetorical games. The titles I was referring to were actually the ones in their guidance package (Top 10 list) that most other sites did review; if you'd like to see which they were, I'm sure nRollo can provide them for you again. Anyway, like I said, his results were too small a sample to be conclusive, especially since the driver release notes themselves clearly stated:
  • Boosts performance in numerous 3D applications. The following are some examples of improvements measured with Release 180 drivers vs. Release 178 drivers (results will vary depending on your GPU, system configuration, and game settings)

Anyway, I never claimed there weren't any performance gains; I merely pointed out that the scores were an outlier compared to other reviews, and after later testing it was found they weren't accurate. Therefore you should be questioning the figures in your linked review, but you're not. You're happy to accept those because they paint nVidia in a good light.

Stop arguing in circles with your useless rhetoric and retract your lie: Nvidia did not list improvements in the titles AT tested.

Retract your lie Chizow and stop trolling.
ROFL What??? You claimed the results were an outlier? No, you started off by spewing garbage about me running around screaming the tests were an outlier:

  • Ah yes, the old "quote Anandtech whenever it suits me". Anandtech also claimed Big Bang does nothing for performance and you were running around screaming their tests were an outlier. You've also criticized their testing methodology whenever ATi is shown in a positive light, even going so far as to ask Derek to step down as a reviewer.

And now you're saying you never said there weren't performance gains, and that you thought AT's results were an outlier the entire time? LMAO.

Only after retesting was done that proved the first batch of benchmarks was not showing the true story. Which is my whole point: the first benchmarks were not indicative of reality.
BS, you took exception to my analysis that the initial conclusion was an outlier, which means you agreed the results were satisfactory and accurate. Updates weren't posted until after my comments pointing out the discrepancy and how AT was the outlier compared to other review sites.

Oh, I see. So you don't even need benchmark scores now, it's enough you can read Derek's mind to tell us what he was thinking, and therefore retroactively apply this logic backwards to his results?

So what are you arguing now exactly? That the scores were an accurate indication of reality but the conclusion wasn't? :roll:

What utter hair-splitting and semantic games on your part.
Why would I need to read his mind when he told us what he was thinking in plain English in the review? It's obvious his conclusion was that there was no tangible gain from the drivers, but it was also obvious 5 games at 1 resolution was simply not a sufficient testing sample to come to such a conclusion, especially since the games he tested weren't included in the guidance package sent to reviewers.

If they weren't wrong then how come they didn't mimic that of other sites?

If they weren't wrong how come AT corrected them later and admitted they weren't an accurate indication of the state of affairs?
Maybe because.....AT tested 1 resolution and 5 games, many of which weren't included in the guidance package that claimed performance increases?

And once again, the results weren't wrong, the conclusion was based on insufficient data. AT did not correct their initial results, they simply expanded the scope of their initial testing and revised their conclusion. This should be simple enough to understand.

More total hair-splitting, semantic games and trolling on your part.

Answer the question Chizow: was the initial review an accurate reflection of Big Bang or not?

Answer the question and stop trolling and playing semantic games.
LMAO. No the initial conclusion was not accurate, they were clearly the outlier as I've stated numerous times, just as I did in the review comments.

Absolutely, namely the point that the initial figures weren't a true indicator in relation to what others were getting, and neither was the conclusion. But given that at this time you don't even understand what's being argued, it's no wonder you've totally lost the plot and just keep typing simply because you can use a keyboard.

Your arguments are like a fish out of water: they keep flapping out of reflex but they never achieve anything useful.
ROFL. I don't know what's being argued? I'm the one running around screaming about AT's results being the outlier remember? LMAO. Such an idiot.

Actually that comment doesn't make any sense whatsoever and it's not surprising given it mimics your state of understanding of the situation.
Sure it does, you claimed you ultimately chose Nvidia again based on superior AA and game profiles, both driver features, yet you've claimed on numerous occasions that ATI's drivers are more robust.

Again, fuck the averages. Ignore them if you like. We're focusing on the scores that don't include nVidia's cherry-picked games, and observing performance gains missed in many other reviews.
The point is the averages include Wolfgang's cherry-picked games, so yes it's easy to ignore them. But actually the gains were observed in Nvidia's cherry-picked games and it's obvious other reviews would've missed them as they weren't using the drivers that claimed gains.

Why don't you ask them? Perhaps they tested another benchmark. Anyone with the most basic level of benchmark understanding knows you can't compare figures across reviews.
You can't compare reviews done by the same site, same person, same hardware, same drivers published within a day of each other? OK, so what conclusion would you come to given results published one day after another showed none of the gains, and often decreases in performance between driver versions from one review to another? Which one are you to believe?

And I would ask them if I spoke German, or if I visited that site more often than never.

Except those "idiotic" comments were later backed by Derek and his peers (according to him).

Answer the question Chizow: did Derek end up backing my claims about ATi driver superiority in the early Vista days?
Yep he did, but surely disagrees with you now. And of course none of that changes the fact you made idiotic comments and claimed they were based on your experience a full year before you touched a 4850.

Yet after I refreshed my frame of reference you were still claiming I couldn't make a comparison. Meanwhile your frame of reference stopped at the 9700 Pro but you were all too eager to make sweeping generalizations about the state of ATi's monthly drivers.
When did I claim you couldn't make a comparison? I'm too busy laughing at your attempts to juxtapose experiences you had 3 months ago with idiotic claims you made over a year ago. Almost as funny as hearing your justifications for buying another Nvidia part, despite refreshing your frame of reference and all the comments you've made in the past.

How is that relevant? He still ended up backing my claims and proved you wrong.

Answer the question Chizow: did Derek end up backing my claims about ATi's driver superiority during the early Vista days, thereby proving you wrong?
It's relevant because you weren't basing your ignorant comments on his opinion, he only offered his opinion months later. This is different than me quoting him, Anand and Jarred after the fact about ATI drivers being horrible, right now.

That's another lie on your part. I frequently continued to use older ATi parts when I swapped them into my system for testing purposes. But keep digging that hole further for yourself.
Sure you did, swapping in the X800XL every few weeks and testing all the games you had, to come to the conclusion ATI drivers were better based on your experience, right?

You still can't seem to understand Derek ended up backing my claims which proved you wrong.

You still can't seem to understand I have relevant experience with the 4850.

You still can't seem to understand your frame of reference stopped at the 9700 Pro so you're in no position to be commenting about the merits of monthly drivers.

You still can't seem to understand your frame of reference stopped at the 9700 Pro so you're in no position to be commenting about the state of running modern games on modern ATi parts.

You still can't seem to understand your frame of reference stopped at the 9700 Pro so you're in no position to be attempting to argue against my claims about driver comparisons.
Just as he disagrees with you now, proving you wrong.

Your 4850 experiences have nothing to do with idiotic claims you made over a year ago, based on experiences you didn't have.

I wouldn't need to use an ATI part to see their monthly driver schedule is clearly broken, it's plainly obvious with each hot fix/WHQL that doesn't actually fix what it was intended to.

I wouldn't need to use an ATI part to see complaints and problems with ATI drivers for newly launched games from sites like PCGH.

And I certainly don't need to own an ATI part to point out the inconsistencies and contradictions you make when comparing drivers.

So you admit I was right then and you were wrong, given Derek ended up backing my claims?
Yep, you were right then according to Derek, but you're certainly more wrong now than you were right then.

You're "sure"? How exactly? Did you pull that certainty out of your orifice?
So are you saying Nvidia's drivers are better than ATI's?

I also found it incredibly ironic and not surprisingly hypocritical that you would still choose to purchase an Nvidia part that was by most accounts inferior to the 4870 1GB based on criteria you've set. And now, you're claiming your decision was based on Nvidia having superior driver features?!?!? LMAO. We certainly have come full circle with your hypocrisy.
I see Azn is now drilling you about your choice of hardware purchases. So Chizow tell me, how does it feel to have someone questioning your buying rationale when they clearly have no idea what they're talking about?

How does that medicine of yours taste, hmmm?
If drilling me means telling me how cool and expensive my hardware is, then yeah it sucks I guess lol. :roll: The difference is my purchases have satisfied criteria important to me without exception. Can you say the same based on comments you've made? No, you can't. Why did you buy another Nvidia part again? More robust drivers. Yet you've claimed repeatedly ATI has more robust drivers.

Right, you made idiotic comments, period.
But nothing like the idiotic comments you made over a year ago based on experiences you never had.

So again I'll ask whether they backed my claims, thereby making me right and you wrong?
Yep, but not based on your claims and experience. And of course you're certainly more wrong now than you were right then. Yet you still insist ATI has the better drivers right?

Pervasive to whom? You haven't touched an ATi part since the 9700 Pro, so how are they pervasive to you?
Potentially anyone using Vista?
Anyone using Vista + CF?
I thought you "cured" these problems by telling people to switch from Nvidia to ATI? :)

So working on a fix is bad now?
Nope, I just thought it was funny how you'd be naive enough to think a fix that came within a month was a result of your reports and not because they were already working on a fix. Gotta love those monthly placebo drivers.

It's also been demonstrated that I've received fixes the very next month that I reported a problem. Maybe they were already working on a fix, maybe they weren't. The point is the end result, which was a fix within one month of me reporting it.
Ya I get that feeling a lot too. Like when I walk up to the elevator and it just so happens to open, right as I walk up! Amazing.

I'm honestly not sure. Has the physics freezing been fixed in Mirror's Edge? Even after another emergency hot-fix for another TWIMTBP title (pervasive, to use your terms) Azn is still reporting freezes with PhysX enabled.
Looks like it's still around. What's that now? 3 WHQL and 5-7 Hot Fix/Betas?

As for Mirror's Edge, a PhysX patch was released for it the day of release, which again further supports my claims Nvidia releases fixes for their drivers in a more timely manner.

Obvious to whom? You've tried newer titles on your 9700 Pro, have you? Or are you making sweeping generalizations again based on FC2, which many who actually use ATi parts clearly recognize as an outlier?
Nope I'm making sweeping generalizations based on updates I've seen for my Nvidia parts and their games, like Beta drivers for specific title launches (WiC, TimeShift, Crysis etc.) and more recently, GTA4, and Big Bang 2. I then compare them to the same problems recurring over and over month to month, hot fix to hot fix for ATI users.

I've used modern ATi hardware (at the time) to run launch titles (at the time) without issue while nVidia users had problems in some of those titles, especially many TWIMTBP titles.
Sure you have, your X800XL testing all your games every few months again right?

You still don't get it: new titles are very important to me, but so are old titles.
But obviously older titles are more important to you, since it's the main reasoning behind your conclusion that AMD drivers are better than Nvidia's.

But I had references, quotes and links to credible sources and you still denied them. You were claiming claptrap like "they were caused by other things in the system" and "they're not nVidia problems", and even worse, outright denying them.

And this was still after I had linked to forum threads, dozens of pages long, replicating the problems on a range of systems, and even after quoting nVidia's fixes in their own driver readme.
Yep, and I was giving relevant experiences I had with the hardware and OS in question. The difference is, I had actual experience with the hardware and software and went through all the various hot fixes for Vista that greatly reduced problems with Nvidia hardware. I also linked to clear references confirming as much: an article here on AT, along with a dedicated page on Nvidia's site and numerous game developers' web pages linking directly to MS hot fixes. You claimed ATI didn't have such problems; I said there simply weren't enough people using ATI parts to know. Amazing what happened once people actually had a reason to use ATI parts again...
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: Zstream
*grabs the popcorn*

Seriously chizow, your arguments are not holding very much weight. In fact you go off on tangents and this is why the thread is where it is at.

Pretty much. :p
 

AzN

Banned
Nov 26, 2001
4,112
2
0
Originally posted by: chizow

No, that I shouldn't have to detail everything that "ROP" entails when cutting ROPs also results in a loss of those logical units.

Like your fingernails, right? :laugh:



ROFL... Again, the 8800GTX is a core-hungry, bandwidth-happy card. Of course it's going to make a bigger difference when you raise core clocks. You couldn't figure this out on your own but had to quote Anandtech benches, because you wouldn't know what the benchmarks meant even if they were right in front of you. :laugh: You need to be told what is what. That's the difference between someone who knows what they are talking about and someone who does not. :p

So raising core clocks is raising only the ROPs again? ROFL... It's raising the clocks of the 32 TMUs on the 8800GTX as well. FYI I had a 1950 Pro back then, but hey, if it makes you feel any better about $500 purchases, all power to you. ;)


Minimal impact from core frequencies? Really? Is that why the G92 parts from GT to GTS to GTX to GTX+, and the 9800GX2 compared to the SLI G92 solutions, all show significant gains from core clock increases despite similar memory clocks of 1000-1100MHz? That's 600MHz to 738MHz, a 23% difference in core clock. Are you saying the performance difference between G92 parts is closer to the minimal differences in SP and memory frequency, or closer to the 23% difference in core clock? G92 benefits less than G80 and GT200 from core clock increases, but core clock still has the greatest impact on performance.

You are so blinded by your hypocrisy that you don't even mention the memory clock and SP differences, just core clocks, to compare your results. This shows how pathetic and feeble your arguments are, when the 8800GT has 112 SPs and a 1800MHz memory clock while the 9800GTX/+ has 128 SPs clocked at 1688 or 1836MHz and a 2200MHz memory clock.

Now all of a sudden you say G92 gets less impact from core clock frequencies than GT200. ROFL... :laugh:


Rofl, 3DMark synthetics again. ROP performance between the parts is almost the same, only a 4% difference, so you can stop with whatever nonsense about ROP performance being tied to memory controllers:
R600 and RV670 specs

Nonsense to you because you know nothing about GPUs. :laugh:

http://techreport.com/articles.x/12458/4
Radeon HD 2900 XT gets closer than any of the other cards to its theoretical maximum pixel fill rate, probably because it has sufficient memory bandwidth to make that happen.

http://techreport.com/articles.x/14168/4
The single-textured fill rate test is typically limited by memory bandwidth, which helps explain why the Palit 9600 GT beats out our stock GeForce 8800 GT.

http://techreport.com/articles.x/14967/3
color fill test is typically limited by memory bandwidth,

http://techreport.com/articles.x/14934/7
Speaking of bandwidth, we've found that synthetic tests of pixel fill rate tend to be limited more by memory bandwidth than anything else. That seems to be the case here, since none of the cards reach anything close to a theoretical peak and the top four finish in order of memory bandwidth.

You were saying? :p
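
(Side note: the point those quotes make can be sanity-checked with a back-of-envelope formula: a synthetic color-fill test can only run as fast as the slower of the ROP-limited rate and the bandwidth-limited rate. The sketch below is illustrative only; the 16 ROPs / 738MHz / 70.4GB/s figures and the 8-bytes-per-pixel read+write assumption for blending are example values, not taken from any review.)

def fill_rate_limits(rops, core_mhz, mem_bw_gbps, bytes_per_pixel=8):
    """Return (ROP-limited, bandwidth-limited) pixel fill rates in Mpixels/s.

    bytes_per_pixel=8 assumes a 32-bit color write plus a read for blending,
    which is roughly what a synthetic color-fill test exercises.
    """
    rop_limit = rops * core_mhz                       # pixels per clock * MHz
    bw_limit = mem_bw_gbps * 1000 / bytes_per_pixel   # GB/s -> MB/s, then per pixel
    return rop_limit, bw_limit

# Illustrative card: 16 ROPs at 738MHz with 70.4GB/s of memory bandwidth
rop_limit, bw_limit = fill_rate_limits(rops=16, core_mhz=738, mem_bw_gbps=70.4)
print(f"ROP-limited:       {rop_limit:.0f} Mpixels/s")   # ~11800
print(f"Bandwidth-limited: {bw_limit:.0f} Mpixels/s")    # ~8800
print(f"Effective ceiling: {min(rop_limit, bw_limit):.0f} Mpixels/s")

(With these example numbers the bandwidth ceiling is the lower one, which is consistent with the TechReport quotes about fill tests finishing in order of memory bandwidth.)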


But this isn't surprising from someone who claimed "Bigger bus is just better. It's wider and able to hit some peaks a smaller bus can't sustain." when discussing the 3870 and 2900XT, while linking a bunch of garbage 3DMark theoreticals (again, about as useful as counting frames on a loading screen) and ignoring actual game benchmarks from the same site that showed minimal difference between the parts.

Wider is better. It can move data all at once while a smaller bus is doing 2x the work at faster clock speeds. Not to mention a smaller bus usually means fewer ROPs, because they're tied to the memory controllers. Again, 3DMark is a tool to measure what the video card is capable of. The 3DMark overall score does not represent real-world gaming experience, but the data is as viable as any benchmarking utility's. Just because you don't know how to use a tool, don't blame me for your incompetence. ;)



Except the comparison was never with the 9800GTX+; I already know the GT200 is always faster than the 9800GTX+. The point of the exercise was to show memory bandwidth has much less impact than core clock increases, i.e. that the difference in bandwidth between the GTX 295 and GTX 280 was less relevant than the loss of ROPs. My results clearly show that. :)

Why did you downclock your GTX 280 memory to 550MHz, only to find out it performs like a 9800GTX+? :D A 21% drop in minimum frame rates is hardly less impact. :p



Like how a 27% decrease in memory bandwidth results in a 3-8% difference in performance? Anyone would clearly see that's less than the 15-25% difference between GTX 295 and GTX 280 SLI.

Oh, don't forget 21% lower minimum frame rates. :laugh: Would you rather have higher average frame rates or better minimum frame rates? Then again, you still haven't downclocked your core and SP to GTX 295 levels. Probably because the drop would be similar to GTX 280 SLI vs GTX 295. :laugh:


LOL? Uh, maybe because reducing resolution means a reduction in bandwidth requirements? There's no wild guessing here other than what gibberish will come out of your mouth next.

You know how retarded that sounds? By your logic, if a card has more fillrate and bandwidth it won't perform any faster. :roll:



Yes it plays a role, but it clearly plays less of a role at lower resolutions as there are simply fewer pixels per frame, which reduces how much data passes to/from the frame buffer. I'm not arguing efficiency, I'm arguing which factor has the larger impact, and clearly it's not bandwidth. Bandwidth is only an issue if you clearly don't have enough and it's completely crippling performance, so that gains in other areas show no benefit.

WRONG!!! My Crysis benches show bandwidth made a big difference even at lower resolution.




It's not ludicrous when G92 always benefits more from core/shader clock increases than from increases to memory bandwidth. If G92 were as bandwidth starved as you claim, simply increasing memory bandwidth by itself would yield a bigger gain than core/shader increases, but it does not. There are at least five G92 parts that show this to be the case, scaling from 600 to 750MHz with memory clocks locked at 1000-1100MHz.

ROFL.... My Crysis benches from the 8800GS and 8800GTS showed reducing bandwidth had a bigger detrimental effect on performance than reducing core clocks. :p


Yep, it is BS because you haven't and still can't explain away the G94. But this should help straighten things out for you. You should be familiar with it, as you've referenced it in the past:

Expreview G94 to G92 GS

Please explain how a G92 card with more pixel/texture fillrate and shader performance is able to perform within 5% of G94, despite a 33% reduction in bandwidth. You said G92 didn't have enough bandwidth to satisfy its texture fillrate, yet here's a G92 part that shows no adverse effects from less bandwidth.

Also please explain to us how G94, with 33% fewer SPs and TMUs, is able to stay competitive with the G92 GS if SPs and TMUs are the most important aspects of performance. Yes it has more bandwidth, but that shouldn't matter since there's less texture fillrate to begin with.

What else? Shader and texture units. Definitely NOT ROPs!!! You also forget the 8800GS is quite a bit slower with AA because of its bandwidth limitation.

ROFL... The 8800GS has 12 ROPs and the 9600GT has 16 ROPs. The 8800GS is clocked lower than the 9600GT, yet it still pulls within 5% of, or at times surpasses, the 9600GT when it comes to raw performance. :laugh:


Explaining the differences in architecture and how they correlate to real-world performance between the parts is marketing jargon? More like squashing misinformation from someone who has repeatedly demonstrated incompetence and an inability to absorb readily available information.

Then again, you shouldn't even be talking about architectural differences and rooting for company X, because you know nothing about GPU architecture.



But the 4870 has less texture fillrate than the 9800GTX+ and beats the 9800GTX+ even without AA, so additional bandwidth shouldn't be an issue, yet the 4870, like the 260, runs circles around the 9800GTX+. Weird. :confused:

It's still not enough: 64GB/s of bandwidth can't fully saturate the 4870's 30000 Mtexels/s. More bandwidth fully saturates the card, while the 9800GTX+ is limited by memory bandwidth in how much data it can process. Same reason my 8800GTS showed minimal impact in Crysis when I lowered core clocks with the same bandwidth.
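
(For what it's worth, the bandwidth-vs-texel-rate tie-in above can be put into rough numbers. The 30000 Mtexels/s figure is the one quoted in the post; the 4 bytes per texel and the zero-cache-hit worst case are my assumptions, and real texture caches and compression cut the requirement drastically.)

# Worst-case texture fetch bandwidth if every sampled texel missed the cache.
texel_rate_mtexels = 30000   # Mtexels/s, figure quoted in the post for the 4870
bytes_per_texel = 4          # assumed uncompressed 32-bit texels

worst_case_gbps = texel_rate_mtexels * bytes_per_texel / 1000
print(f"Worst-case fetch bandwidth: {worst_case_gbps:.0f} GB/s")  # ~120 GB/s
# Far above 64 GB/s, which is why caches and compression matter; the only point
# here is that peak texel rate can outrun a mid-range memory bus.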


Yep, it's always faster despite lower texture fillrate theoreticals. But is this surprising, given the GTX 260 follows the same pattern as well? :) Gotta love how you throw up all these theoretical numbers which never bear out in real-world applications. Just shows theoreticals are just that: theoretical, and ultimately useless.

This proves how clueless you really are. So with more bandwidth and more processing power, it should be slower than a card that is limited by bandwidth to the point where raising core clocks doesn't do anything? :laugh:


Except that's clearly not true, as G92 has gone through 25% increases to core clock from the 8800GT to the 9800GTX+ with a minimal 10-15% increase in memory speed, yet it still scaled significantly with core clock increases.

First of all, the 8800GT has 112 SPs and 57.6GB/s of bandwidth and the 9800GTX+ has 128 SPs and 70.4GB/s of bandwidth. You are trying to compare two different cards with different SP and texture unit counts to support your argument. :p Meanwhile my test used exactly the same setup and changed core frequencies and memory clocks separately to determine the bottlenecks of G92. Which is more accurate, Chizow? :laugh:
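
(Azn's objection here is easy to quantify: going from an 8800GT to a 9800GTX+ changes several specs at once, so the gap between the two SKUs can't be pinned on any single factor from that comparison alone. A quick sketch using only the figures quoted in this exchange; the helper name is just illustrative.)

def pct_increase(a, b):
    """Percent increase going from a to b."""
    return (b - a) / a * 100

# Figures as quoted in the thread for 8800GT -> 9800GTX+
deltas = {
    "SP count":         pct_increase(112, 128),    # ~14%
    "memory bandwidth": pct_increase(57.6, 70.4),  # ~22%
    "core clock":       pct_increase(600, 738),    # ~23%
}
for name, delta in deltas.items():
    print(f"{name:17s} +{delta:.0f}%")
# Every variable moves together, so comparing the two SKUs can't isolate one
# bottleneck; only changing one clock at a time on the same card can do that.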




Ah yep, I hadn't referenced that graph in a while and was still thinking of parts that could only write/blend at half speed.

ROFL. See how your arguments crumble when you've been proven wrong again. :eek: Now it's suddenly not your fault for making an ass out of yourself, when in fact what I stated originally was right all along. :p


However it still shows the 4870 is faster than the 9800GTX+ even without AA, even though it can write a similar number of pixels per clock.

You are asking a question that proves ROPs aren't the biggest limiting factor when it comes to performance. :laugh:



LOL, BS, you linked a die shot of GT200 and claimed ROPs that only appeared to be 25% of the die couldn't have the biggest impact on performance because they were only 25% of it, at which point I illustrated that equating die area to performance is a flawed analysis. See, the difference is I was using L2 cache to show die area is not proportionate to performance, where you tried to show that it is with GT200. So do you think ROP performance vs. die size is still a relevant comparison, or not?

That's not even remotely close to what I said. I simply asked you if you thought that small section of the chip could make the most difference, and then went on to ask why Nvidia doesn't kill off some more texture units and SPs and add more ROPs if ROPs made the most impact on performance.

Your arguments are flawed because a game is a fixed workload we are discussing, while a CPU can be doing different things at any given time. You were trying to use L2 cache performance in games to support your argument, when in fact L2 cache has a dramatic impact in server environments.



Blah blah blah. I did test Crysis; it showed an 8% difference from a 27% reduction in memory clocks. I also tested 4 other titles that showed a 0-5% difference. It's obvious you're too "chicken shit" to stray away from Crysis as it's one of the few titles that is bandwidth intensive enough to show a significant decrease in performance from a reduction in bandwidth at a lower resolution like 1680, yet it's still only 8%. Run a straight line and use FRAPS for all I care, just don't hide behind a lame excuse like "I don't have enough games with a built-in benchmark so I can only use Crysis".

I implied that if my 8800GTS shows bigger drops from memory clocks, your GTX 280 should show similar results, because you said Crysis is one of the few titles that scales with all facets of the GPU. ;) If that were the case, your GTX 280 should also show similar results if G92 weren't bandwidth limited, which is what you've been arguing.

Obviously you don't have the slightest idea when it comes to problem solving. No wonder you've been proven wrong multiple times in this thread alone with actual numbers. Ignorance is bliss, I suppose. :p


I'll only do it if you double dog dare me! :laugh: What about minimum frame rates? You claimed the difference between the GTX 295 and GTX 280 SLI was due to bandwidth; those reviews used averages, so minimums were never in question. My benchmarks clearly showed a 3-8% difference from a 27% reduction in memory, proving your theory wrong while showing memory bandwidth was indeed less significant at lower resolutions, just as I stated.

Yet you still haven't shown results by downclocking your GPU and shader. I double dare you. :cookie:


What do your results with the 8800GTS and GS have to do with my claims about the GTX 260 and 9800GTX+? If you're going to try and illustrate they're bandwidth limited, then show how much performance is gained from an increase in memory clocks. Increasing bandwidth requirements and then reducing bandwidth doesn't prove your point about being bandwidth limited. That'd be like saying the 9800GTX+ is bandwidth limited, so to prove it, I'm going to clock its memory down to 128-bit 8600GT levels.

'Cause they are the same G92 chip. :laugh: Like I said, if you are willing to acknowledge G92 is bandwidth limited, I would be willing to show you the results from increasing bandwidth and core frequencies separately. You also have to tell me you were wrong. Simple, isn't it? :D


Ya, you're an imbecile for equating anything on a tech forum to Nazism. But I'm sure all those new forum members are tickled pink to hear garbage like "GTX 260 isn't much better than 9800GTX+" or "9800GTX+ would stomp GT200 with more memory bandwidth" or "9800GTX+ is actually faster than GT200, but not really." And you still can't explain away the G94. :laugh:

I thought you and the Nazis made a good resemblance. The Nazis were ignorant just like you and believed retarded things. Much like your claim that ROPs make the most impact on performance, when it's been clearly illustrated to you with multiple findings and benches that they don't, while you just run your mouth and call people imbeciles. :laugh:

I've illustrated G92's bandwidth limitations with benchmarks on two different G92 cards, yet here you are arguing against a proven case. Pathetic, really. Next thing you know you'll be arguing the world is flat because it seems flat to you. :laugh:



Yeah, employee discount from one of my other employers, Microsoft. The cash back employee benefit program was great; you didn't even have to be an employee to get in on it. :) As for "marketing jargon" and "slow ass card", LMAO. Funny coming from someone who has a long history of only using "slow ass cards". But keep pouring it on my GTX 280, it can take the criticism, really. LOL.

There's no way Microsoft would hire you. They don't need clueless people spreading marketing jargon, but Nvidia does. :laugh: They need every retarded kid they can get.

Funny how your $300-something card that you got from the bargain bin basement is still only 30% faster than a $100 card when it comes to raw frame rates. You sure are brainwashed, just like how it's meant to be played. :p



I bet....you have no clue what you're talking about. So you're saying GT200 gets more than a linear increase from core clock increases? I'm a big fan of overclocking Nvidia parts and the performance gain they give, but even I can't make that claim. A 4% increase/decrease is going to yield a 4% increase/decrease at best, which isn't enough to make up for the 15-25% difference between GTX 295 and GTX 280 SLI.

If it makes you happy, sure Chizow, I don't know what I'm talking about... :laugh: Just downclock and post results. I don't even want to hear your theories at this point because you've been proven wrong on multiple counts in this thread alone.
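
(The 4% argument above is just an upper-bound calculation. A minimal sketch, assuming at-best-linear scaling with clock, which is chizow's premise rather than a measured result, and using the 4% and 15-25% figures from the post.)

clock_deficit = 0.04          # 4% lower core clock, per the post
observed_gap = (0.15, 0.25)   # 15-25% gap quoted between GTX 295 and GTX 280 SLI

# Best case: performance scales 1:1 with clock, so a 4% clock deficit can
# explain at most ~4% of the performance gap.
leftover = [gap - clock_deficit for gap in observed_gap]
print(f"Explained by clock difference: at most {clock_deficit:.0%}")
print(f"Left unexplained: {leftover[0]:.0%} to {leftover[1]:.0%}")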


Like Derek and Bit-Tech who both stated ROPs or bandwidth were bottlenecking GTX 295, and I'm sure most other review sites as well. And it is theoretically possible that GTX 260 SLI beats GTX 295, which just shows TMU/SPs aren't the most significant factor when it comes to performance.

Now why did Derek and Bit-Tech say this? Is it because the GTX 295 has lower bandwidth and fewer ROPs? This should be self-explanatory, but neither Derek nor Bit-Tech mentioned clock differences either. They are roughly estimating here and nothing more; they didn't test enough to know exactly what caused those performance differences. It would be simple if you just downclocked your core, SP, and memory to show the ROP differences, but there you go, saying TMU/SP weren't a factor when this hasn't even been tested.


It showed a 27% reduction in bandwidth resulted in a 3-8% difference in FPS and proved my point that bandwidth alone wasn't enough to explain away the 15-25% difference between the GTX 295 and GTX 280 SLI.

ROFL. What did your minimum frames do? Dropped 21%. :laugh: You also said it wouldn't have an impact on performance, but it did. You still haven't downclocked your core or SP to show the results. :p



Yes, you get bigger decreases when you lower all of them simultaneously, but if you decrease them all equally, the drop in performance will not be greater than the percentage you decreased them by; the relationship should be very close to linear. When you change them individually, you can then conclude which factor has the greatest impact on performance by comparing the actual clock decrease to the % drop in performance. And this is clearly illustrated when I decrease memory bandwidth 27% and only see a 3-8% drop in performance.

I don't need to hear about your theories and assumptions, which have been proven wrong on multiple counts. Just put up the benches and stop arguing.
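
(The one-variable-at-a-time method described two posts up boils down to a sensitivity ratio: how much of a clock change actually shows up as a frame-rate change. A sketch with the 27% / 3-8% numbers from that post plugged in; the helper and the interpretation are mine, not from the thread.)

def sensitivity(clock_drop_pct, fps_drop_pct):
    """Fraction of a clock reduction that shows up as an fps reduction.
    Close to 1.0 suggests that clock is the bottleneck; near 0 suggests it isn't."""
    return fps_drop_pct / clock_drop_pct

# Memory underclock result quoted in the post: -27% bandwidth -> 3-8% lower fps
low = sensitivity(27, 3)
high = sensitivity(27, 8)
print(f"bandwidth sensitivity: {low:.2f} to {high:.2f}")  # ~0.11 to ~0.30

# To finish the comparison you'd repeat the same single-variable test for core
# and shader clocks on the same card and settings, then compare the ratios.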


This is really elementary stuff when it comes to overclocking; enthusiasts have been making these comparisons for years and have generally found memory clocks have much less impact on performance than core clocks. I guess introducing that third shader clock threw you off somehow lol.

It's not elementary when you didn't know it, is it? :laugh: Memory clocks sure do impact minimum frame rates. :p Core frequency determines max frame rates. An efficient card would yield better frame rates. ;)



Just as long as you understand why your initial benchmark was flawed, I don't really care if you run the benchmark again or not because I already know you're going to see less return on the increase to bandwidth.

Like I said, chizow, if you are willing to acknowledge G92 is bandwidth starved after seeing the results, I'm willing to post them. That's only if you tell me you were wrong. :laugh:



Yep and 27% talks, 3-8% barks, really softly. :)

21% talks and chizow barks. LOUDLY! :laugh:



There are about 20 GTX 285 benchmarks that show a 10% increase in clocks results in <=10% more performance. No need for me to prove a 4% difference in clocks is going to result in a <=4% difference in performance when that information is readily available and a verifiable fact for anyone who has actual experience with the parts (and isn't completely incompetent, like you).

Just post benches. No need to hear any more of your hypotheses when you've been proven wrong repeatedly.


Because it shows GPU workload, and not theoreticals, ultimately determines fillrate and FPS. It also shows pixels are still being drawn regardless of how blank/empty you think they are.

Make shit up as you go; it's the best you have to offer. What does that have to do with using FRAPS to determine FPS, when 3DMark is testing multiple layers of textures while a game sitting on a logo is doing absolutely nothing? NOTHING! NO RELEVANCE!!! CHIZOW STRIKES AGAIN!!!! RUFF RUFF!!!!



No, I said bandwidth was less significant at lower resolutions even with AA, and it clearly is. Now, going back to my original point about ROPs being more important than SPs, TMUs, and bandwidth: do you think downclocking the core clock by 27% would result in a minuscule 3-8% drop in performance? Think about this for a second before you reply, given you've already questioned how much a 4% core clock difference would make.

But then again it made a huge difference at the resolution you tested, dropping your minimum frame rates by 21%.

This is how retarded your arguments are. You imply a 27% core clock drop would cost huge performance, but that doesn't prove ROPs were the biggest factor, because the core clock is also tied to the texture clocks. :roll:



I'm well aware VRAM is less significant at lower resolutions, which is why I asked why you brought it up when we were specifically discussing lower resolutions where VRAM would be less of an issue. Oh right, it's because you're not smart enough to focus on what you're arguing.

I brought it up because outside of VRAM- or bandwidth-limited situations the GTX 260 is barely faster than the 9800GTX+, while you were arguing that the GTX 260 is so much faster when it isn't. That was the original point, but you don't even remember what the hell you were going off about. :laugh:


Yep, all the Nvidia parts since G80 that I've owned have behaved similarly, in that they benefit the most from increases to core clock over shader or memory. This is obvious to anyone who has used these parts; the fact you haven't come to the same conclusion, even with your experience limited to older/crippled parts, would indicate you're incompetent.

ROFL... This just makes you look like an idiot, when in fact not all cards are built the same. Also, my tests show this, yet you are still here arguing about something that's been proven with two different G92 chips.

The only two cards you've really played around with are the 8800GTX and GTX 280. Both of those cards have plenty of bandwidth and not enough fillrate to complement it, so of course you are going to be limited by clock frequencies. Isn't it so obvious? :disgust: G92 is a different beast: with more than 2x the texture fillrate of the 8800GTX and nearly as much texture fillrate as the GTX 280, it is constrained by bandwidth. This is the same reason I said to downclock your GTX 280's memory to 550MHz and see it perform more like a 9800GTX+. But you won't, because in fact it's all true. :p



LMAO, finally, progress. It's more than 5-10% from the 8800GT or GTS to the 9800GTX+; it's closer to the 15-25% core/shader difference between the parts. It's obvious core/shader still has a greater impact than bandwidth, despite your claims G92 was bandwidth limited to begin with.

Progress? I don't even know where to begin with your idiocy. Tell me, chizow, how does increasing memory, shader, and core clocks all at once make it obvious? Full of shit as usual. Now you've changed your ROP argument to core and shader having more impact? :disgust:



Sure you do, post them. I already know a memory bandwidth increase will result in less of an increase than core/shader increases. It's really simple: which of the three yields the bigger increase? That means increasing each individually, not keeping two factors as high as they can go and then dropping the one you claim will gain from an increase by 25%.

Again, if I post them will you acknowledge g92 is bandwidth starved and that you are WRONG? :laugh:


That might be true; unfortunately you didn't test them at the same resolution, which was my point to begin with. Do you think the 8800GTS would've shown a bigger % drop at 1440 than the 8800GS? Maybe, but not with any certainty, given you decreased resolution 36% and decreased bandwidth 33% as well.

Of course it's true. I make logical claims while you make illogical claims about something that's been proven to death. :p Are you acknowledging that G92 is bandwidth starved? :laugh:
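
(The resolution complaint above checks out numerically if the "1440" test means 1440x900; that is an assumption on my part, since the posts only say 1440 and 1680x1050.)

# Pixels per frame at each resolution (1440x900 is assumed; the post says "1440")
px_1680x1050 = 1680 * 1050   # 1,764,000 pixels
px_1440x900 = 1440 * 900     # 1,296,000 pixels

extra = (px_1680x1050 - px_1440x900) / px_1440x900 * 100
print(f"1680x1050 pushes {extra:.0f}% more pixels per frame than 1440x900")
# ~36% more pixels alongside a ~33% bandwidth change: two variables moved at
# once, so the comparison can't isolate bandwidth as the cause of the drop.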



Yep, pretty sure I came across a benchmark with it; of course you claimed it was a BS part when it clearly isn't. And yes, cutting your bandwidth in half would prove the point that insufficient bandwidth at higher resolutions can cripple performance.

If you were sure, why not post it instead of BS'ing?

So what do you consider low resolution? Is 1280x1024 low enough for you? Should I test my GTS and clock the memory down to 513MHz from 1026MHz to prove you wrong again? Considering my 8800GTS dropped 16% at 1680x1050 with a 28% reduction in bandwidth, what do you think will happen if I lower the bandwidth even more? :laugh:


Oh wow you said that as if I cared. It just makes your comments and comparisons about GTX 280 that much funnier. :)

Obviously you cared enough to brag about your GTX 280 and try to belittle people with slower cards. :p


What am I denying? One of the first things I said was not to bother pulling Crysis benches, as I'm well aware it's responsive to core/shader/memory clocks individually, and much more so than other games.

You are hopeless. :(
 

mhouck

Senior member
Dec 31, 2007
401
0
0
So... what page do I have to go back to for anything relevant to the OP and that discussion?

However, it is impressive to see the amount of cross quoting that a human being can do when they put their mind to it. :D