
FarCry 1.2 patch - up to 33% improvement for NV40... DoH! JUST KIDDING!

Page 3
Originally posted by: keysplayr2003


Because review sites are not credible, Ackmed, and probably never were, except for maybe the very beginning. I only say this because the review sites vary so much from each other's results that they aren't really any help. The only thing these reviews accomplish is getting everyone all fired up, directing traffic to their forums, and ensuring everyone sees the banner ads.

So because different hardware sites get different scores in a benchmark, they're biased? I really hope you are not that... you know.

There are MANY reasons why they get different scores. Some use the timedemos from the game, some use custom ones, and some use FRAPS. I seriously doubt you will ever find the same setup on two different hardware sites. The real question is, why do you expect them to be the same?

As long as they show the same thing, it helps. True, there are some that are probably biased, but to say that all review sites are not credible is pretty ignorant.
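
A minimal sketch of how the measurement method alone can move an "average FPS" figure, separate from any bias: two plausible ways to average the very same run (total frames divided by total time, vs. the mean of per-frame instantaneous FPS) disagree whenever frame times vary. Which method a given site or tool uses is an assumption here, and the frame-time data below is invented purely for illustration.

# Toy illustration (Python): one identical run, summarized two different ways.
# Frame times in milliseconds - invented data with a brief stutter in the middle.
frame_times_ms = [10] * 500 + [40] * 50 + [12] * 450

frames = len(frame_times_ms)
total_time_s = sum(frame_times_ms) / 1000.0

# Method A: total frames divided by total time (a timedemo-style figure).
avg_a = frames / total_time_s

# Method B: mean of instantaneous FPS (1000 / frame time), which weights
# the fast frames more heavily than the slow ones.
avg_b = sum(1000.0 / t for t in frame_times_ms) / frames

print(f"frames / total time  : {avg_a:.1f} fps")   # ~80.6 fps
print(f"mean of per-frame fps: {avg_b:.1f} fps")   # ~88.8 fps

Add different demo content, drivers, and background load on top of that, and identical numbers across sites would be the surprising result.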
 
Originally posted by: Ackmed
Originally posted by: keysplayr2003


Because review sites are not credible, Ackmed, and probably never were, except for maybe the very beginning. I only say this because the review sites vary so much from each other's results that they aren't really any help. The only thing these reviews accomplish is getting everyone all fired up, directing traffic to their forums, and ensuring everyone sees the banner ads.

So because different hardware sites get different scores in a benchmark, they're biased? I really hope you are not that... you know.

There are MANY reasons why they get different scores. Some use the timedemos from the game, some use custom ones, and some use FRAPS. I seriously doubt you will ever find the same setup on two different hardware sites. The real question is, why do you expect them to be the same?

As long as they show the same thing, it helps. True, there are some that are probably biased, but to say that all review sites are not credible is pretty ignorant.


Myself, I always prefer to see what the end user is actually getting... granted, most with high-end machines have both cards... these are the people I would simply love to see post THEIR benchmarks, using the in-game counter and FRAPS, trying the same settings and whatnot that the review sites use... you may be surprised at exactly how far off these sites may be vs. what you or I will actually see... most sites use a clean install of Windows with no installed software other than the benchies... no other programs running in the background, etc., etc... you get the point. So seeing what an actual end user is getting interests me a lot more than any site... this is why, for video cards, I give up on sites and rely on users, whereas for things like mobos and whatnot I'll look to sites 🙂
 
That doesn't make any sense. If a card is faster than another card with 10 gigs of software installed to the HD, it's going to be faster still on a machine with no installed software. Same goes for apps running in the background. It's not magically going to make the slower card faster.
 
Originally posted by: Ackmed
That doesn't make any sense. If a card is faster than another card with 10 gigs of software installed to the HD, it's going to be faster still on a machine with no installed software. Same goes for apps running in the background. It's not magically going to make the slower card faster.

I'm not speaking in reference to just the card, my friend... I'm speaking of the computer as a whole.
I can tell right off my system doesn't run as "fast" once everything is installed vs. a fresh install with only drivers...
 
Originally posted by: Ackmed
Originally posted by: Apophis

Weren't you the guy with an X800 Pro saying it ran at some ridiculously cool temp, thus discounting the many websites that said they ran hot as hell? If so, that's pretty lol considering you're running a _WATERCOOLED SYSTEM_ that's not putting out nearly as much heat onto the card as those of us with normal airflow cooling have to deal with.

You may want to actually check your facts before looking foolish. I didn't post "ridiculously cool temp". They are average for an X800 Pro: 38C-ish idle, 66C-ish load. Also, if you had even looked at my system, only my CPU and chipset are watercooled right now.

Show me the "many websites that said they ran hot as hell"; still waiting on those links.

You're calling me foolish when you're discounting the amount of heat dumped into your case by the CPU and chipset when they AREN'T watercooled? That's exactly what I mentioned in my post. I -know- your GPU isn't watercooled, and never said it was. Good job champ.
 
Originally posted by: Apophis
Originally posted by: Ackmed
Originally posted by: Apophis

Weren't you the guy with an X800 Pro saying it ran at some ridiculously cool temp, thus discounting the many websites that said they ran hot as hell? If so, that's pretty lol considering you're running a _WATERCOOLED SYSTEM_ that's not putting out nearly as much heat onto the card as those of us with normal airflow cooling have to deal with.

You may want to actually check your facts before looking foolish. I didn't post "ridiculously cool temp". They are average for an X800 Pro: 38C-ish idle, 66C-ish load. Also, if you had even looked at my system, only my CPU and chipset are watercooled right now.

Show me the "many websites that said they ran hot as hell"; still waiting on those links.

You're calling me foolish when you're discounting the amount of heat dumped into your case by the CPU and chipset when they AREN'T watercooled? That's exactly what I mentioned in my post. I -know- your GPU isn't watercooled, and never said it was. Good job champ.

You'll learn not to argue with Ackmed; if you call him out on something, he will tell you what he MEANT to say, or use some stupid technicality so he can imagine he is right. It gets old very fast. My suggestion: just ignore him entirely.
 
Originally posted by: Ackmed
That doesn't make any sense. If a card is faster than another card with 10 gigs of software installed to the HD, it's going to be faster still on a machine with no installed software. Same goes for apps running in the background. It's not magically going to make the slower card faster.
It depends. Some drivers are more CPU-dependent than others. Drivers that need more CPU time devoted to them are obviously going to suffer if there are many other applications running in the background. Simply having the software installed generally isn't enough, though.

Originally posted by: Shad0hawK
the final authority on the matter has spoken, all arguments may now end.
Hahaha, I think he was at Shader Day, wasn't he? 😀
 
Originally posted by: VisableAssassin
Originally posted by: Ackmed
That doesn't make any sense. If a card is faster than another card with 10 gigs of software installed to the HD, it's going to be faster still on a machine with no installed software. Same goes for apps running in the background. It's not magically going to make the slower card faster.

I'm not speaking in reference to just the card, my friend... I'm speaking of the computer as a whole.
I can tell right off my system doesn't run as "fast" once everything is installed vs. a fresh install with only drivers...


You're still not getting it. If a card is faster with software installed, and programs running in the background, it will be faster still on a clean install.

The amount of heat from a CPU is negligible, my temps were the same when I was on air. Still waiting on those website links, where are they?
 
Originally posted by: Ackmed
Originally posted by: VisableAssassin
Originally posted by: Ackmed
That doesn't make any sense. If a card is faster than another card with 10 gigs of software installed to the HD, it's going to be faster still on a machine with no installed software. Same goes for apps running in the background. It's not magically going to make the slower card faster.

I'm not speaking in reference to just the card, my friend... I'm speaking of the computer as a whole.
I can tell right off my system doesn't run as "fast" once everything is installed vs. a fresh install with only drivers...


You're still not getting it. If a card is faster with software installed, and programs running in the background, it will be faster still on a clean install.

The amount of heat from a CPU is negligible, my temps were the same when I was on air. Still waiting on those website links, where are they?

Correct way this sentence should have been done:

"The amount of heat from my CPU is negligable. My temps were the same when I was on air. Still patiently waiting for those website links, can I please have them?"

Where are your manners? We will teach them to you, do not worry. 😉
 
Why should I trust some forum poster over credible review sites? I'm not saying rollo would lie, I don't even know him. He's obviously very pro-NV, however. The increases he shows simply do not match up with other reviews, not any of them.

I'm no more "pro nVidia" than I am "pro ATI". I do like the nV40 line more this time around, just like I obviously liked the R300 line more last time around.
(I did use a 9700/9800 for sixteen of the last twenty months.)
What I don't like is the strange "nVidia bashing" that has been going on since the nV30 launch, and I've given my reasons for that enough in other threads that anyone can pretty easily find.

As far as my benchmarking goes, I don't have any reason to lie about it or try to cheat you guys - I just wanted to show you what you can expect if you bench Far Cry using the application profile in Forceware and control AA/AF in the application.

Wouldn't you rather have that info than not? The application profile is available in the drivers, so any of you can run it the same as I did. If the application profile offers that kind of speed enhancement, and you don't notice any diminished IQ, I can't see why you wouldn't run it that way. Because ATI doesn't have a profile? :roll:
 
personally i see more nvidiot bashing than nvidia bashing.

while we can certainly make fun of dual molex connectors and the real estate the nv40 requires, in terms of performance and IQ, i really can't see how you can bash it.

the xt is a little faster than the ultra.

the GT at times appears faster than the PRO, but overall very comparable - if you can get your hands on one.

the vanilla 6800 is in a class of its own, with nothing comparable in its price range, and no new ati card announced for that segment yet.

sm3 benefits are blown out of proportion. they offer performance enhancements to either close the gap, or best the ati in some circumstances, but overall (with the exception of kicking ati's ass in CoD for some reason) the performance is pretty equitable.

nvidia does offer full precision shaders/hdr (as opposed to ati's 24-bit), but hdr performance is unknown, and iq differences are most likely inconsequential, as has been shown in comparing 24-bit/32-bit shaders.

while i can certainly agree nvidia bashing is hardly justified, so too could i say the ati-bashing i've seen is just as unjustified. there's simply too much personal baggage making its way into your observations, or so it seems to me...
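
A rough sketch of what the 24-bit vs. 32-bit shader precision point above amounts to numerically. The bit layouts are the commonly cited ones (fp24 with a 16-bit mantissa, IEEE-style fp32 with a 23-bit mantissa) and should be read as an assumption, not a spec quote.

# Python: rough relative precision of the two shader float formats.
# Assumed layouts: fp24 = 1 sign / 7 exponent / 16 mantissa bits,
#                  fp32 = 1 sign / 8 exponent / 23 mantissa bits (IEEE single).
formats = {"fp24 (assumed s1e7m16)": 16, "fp32 (IEEE s1e8m23)": 23}

for name, mantissa_bits in formats.items():
    eps = 2.0 ** -mantissa_bits        # relative rounding step, ~1 part in 2^bits
    digits = mantissa_bits * 0.30103   # roughly equivalent decimal digits
    print(f"{name}: relative step ~{eps:.1e} (~{digits:.1f} decimal digits)")

Both steps are far below one 8-bit colour step (1/255 ~ 3.9e-3), which fits the "inconsequential IQ difference" observation for ordinary shaders; long shader chains or HDR accumulation are where the extra fp32 headroom could start to matter.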
 
Originally posted by: CaiNaM
personally i see more nvidiot bashing than nvidia bashing.

while we can certainly make fun of dual molex connectors and the real estate the nv40 requires, in terms of performance and IQ, i really can't see how you can bash it.

the xt is a little faster than the ultra.

the GT at times appears faster than the PRO, but overall very comparable - if you can get your hands on one.

the vanilla 6800 is in a class of its own, with nothing comparable in its price range, and no new ati card announced for that segment yet.

sm3 benefits are blown out of proportion. they offer performance enhancements to either close the gap, or best the ati in some circumstances, but overall (with the exception of kicking ati's ass in CoD for some reason) the performance is pretty equitable.

nvidia does offer full precision shaders/hdr (as opposed to ati's 24-bit), but hdr performance is unknown, and iq differences are most likely inconsequential, as has been shown in comparing 24-bit/32-bit shaders.

while i can certainly agree nvidia bashing is hardly justified, so too could i say the ati-bashing i've seen is just as unjustified. there's simply too much personal baggage making its way into your observations, or so it seems to me...

i would have to disagree with you on a few points.

judging the effectiveness of SM3 on a patch that barely utilizes it is very premature... that is like saying a new model ford cobra will not be fast because a ford festiva that uses few of the same parts is not fast... oversimplistic thinking at best.

in fact what we see is just the opposite. according to crytek the performance hit of using a long instruction set turned out to be smaller than thought, while the performance gains in some cases actually exceeded expectations where multiple SM2 shaders were replaced by a single SM3 shader. and in the 1.2 patch, some shadowing was replaced... and that was it. the simple fact is crytek is saying SM3 kicks @$$.. they just did not utilize it as much as people thought they would... for the time being.

even with the limited use the 1.2 patch made of SM3, in complex shading areas there was a HUGE performance increase seen... but some people do their best not to talk about that.
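
A toy cost model (not Far Cry's actual renderer) of why folding several per-light SM2 passes into one looping SM3 shader pays off mainly in light-heavy indoor scenes; every constant below is invented purely to show the shape of the effect, not its real magnitude.

# Python toy model: shading cost for one frame region under two strategies.
PASS_OVERHEAD = 0.5    # fixed cost per pass: geometry resubmit, framebuffer traffic (invented)
LIGHT_COST = 1.0       # cost of evaluating one light's shading (invented)
LOOP_OVERHEAD = 0.05   # extra per-light cost of looping/branching inside one shader (invented)

def sm2_multipass(lights):
    # One pass per light: the fixed pass overhead is paid every time.
    return lights * (PASS_OVERHEAD + LIGHT_COST)

def sm3_single_pass(lights):
    # One pass whose shader loops over all the lights.
    return PASS_OVERHEAD + lights * (LIGHT_COST + LOOP_OVERHEAD)

for lights in (1, 2, 4, 8):    # outdoor scenes ~1 light, indoor scenes several
    a, b = sm2_multipass(lights), sm3_single_pass(lights)
    print(f"{lights} light(s): multipass {a:.2f}, single pass {b:.2f}, saving {100 * (a - b) / a:.0f}%")

With a single light the looping path is actually a touch slower in this model, which lines up with the small or zero gains reported for the outdoor levels, while scenes with several lights show the larger gains.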
 
Originally posted by: Shad0hawK
Originally posted by: CaiNaM
personally i see more nvidiot bashing than nvidia bashing.

while we can certainly make fun of dual molex connectors and the real estate the nv40 requires, in terms of performance and IQ, i really can't see how you can bash it.

the xt is a little faster than the ultra.

the GT at times appears faster than the PRO, but overall very comparable - if you can get your hands on one.

the vanilla 6800 is in a class of its own, with nothing comparable in its price range, and no new ati card announced for that segment yet.

sm3 benefits are blown out of proportion. they offer performance enhancements to either close the gap, or best the ati in some circumstances, but overall (with the exception of kicking ati's ass in CoD for some reason) the performance is pretty equitable.

nvidia does offer full precision shaders/hdr (as opposed to ati's 24-bit), but hdr performance is unknown, and iq differences are most likely inconsequential, as has been shown in comparing 24-bit/32-bit shaders.

while i can certainly agree nvidia bashing is hardly justified, so too could i say the ati-bashing i've seen is just as unjustified. there's simply too much personal baggage making its way into your observations, or so it seems to me...

i would have to disagree with you on a few points.

judging the effectiveness of SM3 on a patch that barely utilizes it is very premature... that is like saying a new model ford cobra will not be fast because a ford festiva that uses few of the same parts is not fast... oversimplistic thinking at best.

in fact what we see is just the opposite. according to crytek the performance hit of using a long instruction set turned out to be smaller than thought, while the performance gains in some cases actually exceeded expectations where multiple SM2 shaders were replaced by a single SM3 shader. and in the 1.2 patch, some shadowing was replaced... and that was it. the simple fact is crytek is saying SM3 kicks @$$.. they just did not utilize it as much as people thought they would... for the time being.

even with the limited use the 1.2 patch made of SM3, in complex shading areas there was a HUGE performance increase seen... but some people do their best not to talk about that.

well, my concern isn't what crytek perceived the "cost" of long instructions to be, rather the overall performance benefits.

no IQ benefit (as many fanboys had previously claimed, and therefore being "bashed"), and a small performance benefit from sm3 which wasn't enough for the ultra to overtake the xt in far cry. i think i'm right on the money.

while i'd have to agree further performance could be achieved down the road, the point i was making was that the nvidia fanasses were claiming r420 users would cry over lack of sm3 support when far cry came out, and that certainly is not the case. in fact, it's quite the opposite -*yawn*.

is sm3 an improvement over sm2? certainly. but i haven't seen anything to convince me it will be very relevant in this generation of cards.

that's bashing neither nvidia nor sm3... rather the idiots who made the erroneous claims.
 
Originally posted by: CaiNaM

well, my concern isn't what crytek perceived the "cost" of long instructions to be, rather the overall performance benefits.

no IQ benefit (as many fanboys had previously claimed, and therefore being "bashed"), and a small performance benefit from sm3 which wasn't enough for the ultra to overtake the xt in far cry. i think i'm right on the money.

while i'd have to agree further performance could be achieved down the road, the point i was making was that the nvidia fanasses were claiming r420 users would cry over lack of sm3 support when far cry came out, and that certainly is not the case. in fact, it's quite the opposite -*yawn*.

is sm3 an improvement over sm2? certainly. but i haven't seen anything to convince me it will be very relevant in this generation of cards.

that's bashing neither nvidia nor sm3... rather the idiots who made the erroneous claims.


well let's look at the facts.

1. in complex shading scenes SM3 works just as advertised, actually moreso.

2. the main stumbling block many thought SM3 would have was the performance hit of a single shader with multiple instructions. as it turns out the performance hit is not that great, at least according to crytek, but what would they know? 😉

3. the 1.2 patch does not utilize very many SM3 shaders; in a few complex scenes some SM2 shaders were replaced by SM3 for the sake of performance.

can you logically explain to me how judging a patch replacing a few shaders is enough criteria to determine the actual performance benefit of SM3 as a whole?

that makes about as much sense as showing benchmarks where SM3 "makes no difference in performance" and neglecting to mention (or realize) that in many areas no (or very, very few) SM3 instructions are even used (thus no difference...)

oh wait, plenty of people are doing just that... nevermind! 😀
 
The 6800 GT is the crown king. It performs as well as, if not better than, the X800 at the same asking price, has all the new features, and can be overclocked to a 6800U with only a 300 watt PSU requirement instead of the Ultra's 480 watt requirement.

Does SM3 make a difference? I don't know and I don't care. It's better to have than not to have. Makes no sense to get an X800 Pro, simply because it doesn't have it. Just let the web sites do the analyzing and arguing, since that's what they get paid to do.
 
in complex shading scenes SM3 works just as advertised, actually moreso.
moreso compared to what? what the nvidiots were claiming? no...
the main stumbling block many thought SM3 would have was the performance hit of a single shader with multiple instructions. as it turns out the performance hit is not that great, at least according to crytek, but what would they know?
i never questioned that.
the 1.2 patch does not utilize very many SM3 shaders; in a few complex scenes some SM2 shaders were replaced by SM3 for the sake of performance.
how many and where?
can you logically explain to me how judging a patch replacing a few shaders is enough criteria to determine the actual performance benefit of SM3 as a whole?
again, how "few" shaders?

but you miss the entire point of what i said.. it's not about sm3 bashing.. why is that so hard to grasp?

a) nvidiots screamed i'd cry when the patch came. it's here. i'm not crying, nor do i have reason to.

b) the topic of the post i ridicule stated 33% increases, which are false, and based on erroneous tests. the truth is (according to the source you like to quote):

Based on our results with Far Cry, Shader Model 3.0 does bring NVIDIA tangible performance benefits but the results will vary based on usage. In levels with multiple light sources such as the "Volcano" and "Research" demos, we saw performance gains of up to 18%. The beauty of this is it's a "free" performance increase to the end user, there's no need to turn off eye candy features to get this kind of performance.

In outdoor areas such as our custom Monkey Bay demo and the "Training" and "Regulator" levels present in Far Cry 1.2, the performance benefits are more limited. At best the performance increase is about five percent, roughly the boost you may get from a driver update. This still isn't bad, but it isn't a dramatic difference, we wouldn't be surprised if some users hardly noticed it.

could these results be improved? sure, it's possible. does it make nv40 faster than r420? well, not really. it's a mixed bag. the combination of driver improvements (even tho 56.72 is the last "official" nv driver) and sm3 improvements has resulted in the 6800 being almost as fast as the XT, and the GT faster in some areas than the PRO. overall, it's pretty comparable.

as for the judging of criteria, my only criteria was refuting the idiotic claims made by zealots. beyond that, the only claims i can make are those made above. any claims as to "how much benefit" can be gained "sometime in the future" is speculative, at best.

i've always said sm3 will have benefits. i've also said it won't see its potential till much further down the road. how can you logically argue with that?

that makes about as much sense as showing benchmarks where SM3 "makes no difference in performance" and neglecting to mention (or realize) that in many areas no (or very, very few) SM3 instructions are even used (thus no difference...)

as i've stated many times, far cry is predominantly sm1.x shaders.. why would it? at the same time, how would it have a significant impact on the game? it won't. it will benefit those limited areas that rely on ps2 shaders being replaced by ps3.

again, i'm not sure what you're trying to argue?

now.. getting off the specific topic, i was just pointed to a new article where crytek stated they will only support fp32 HDR, tho the effects/quality of fp24 would be the same.. interesting they would deny 95% of gamers simply because of nvidia marketing? hmmm.. at any rate, this may be the first actual visual effect which will be sm3 specific (a marketing decision, not a technology one), but i didn't see an eta of when this might happen. will be interesting.

oh wait, plenty of people are doing just that... nevermind!

well, at any rate, it seems to me you are arguing just to argue, since you're really not sticking to my specific comments, rather making arguments based on a broader scope than my comments...

and since you're directly quoting me, well... it seems you're simply trying to "stir the pot"... or you've simply missed the context of what i stated.
 
Originally posted by: Regs
The 6800 GT is the crown king. It performs as well as, if not better than, the X800 at the same asking price, has all the new features, and can be overclocked to a 6800U with only a 300 watt PSU requirement instead of the Ultra's 480 watt requirement.

Does SM3 make a difference? I don't know and I don't care. It's better to have than not to have. Makes no sense to get an X800 Pro, simply because it doesn't have it. Just let the web sites do the analyzing and arguing, since that's what they get paid to do.

i somewhat agree here...

i disagree anyone is the "crown king"...

i do agree it's better to have than not have.

i also believe there are valid reasons to get a PRO/XT.

it comes down to preference. overall, i'd say nvidia has the edge, even if it's a bit slower. doesn't mean it's bad to have an ati.. and doesn't mean you should run and sell your PRO/XT and get a 6800.

i think sm3 does indeed have benefits (3dc may, but it's hard to say, and i'd have to lean towards sm3 being of better overall benefit), but not so much so that anyone needs to get in a fuss - at least at this point. i've always stated sm3 support was good, but won't have much impact this generation.

we'll just have to see i guess... but i hardly think the 'race' is over.
 
Some of you people remind me of the morons who bashed SSE, SSE2, and MMX.
Yapped about how it meant nothing even though you knew your favorite CPU would have it eventually.

SM3 will be used, ATI is building an SM3 card in the future, SM3 > SM2.

Get over it.

You must be completely retarted, can you not see that 60% is better than 40%? Are you that dumb? Are you in grade 2?

retarted?

Nice 😀

The increases he shows simply do not match up with other reviews, not any of them. The difference should go down, not up, when lowering the res. It becomes more CPU bound at 1024x768 vs. 1600x1200.

Yes, because we all know reviews are the end-all. According to the reviews there is no way I should be playing Far Cry @ 1600x1200 on my 5900. According to them my FPS should be 20 or less. I get 20 FPS or less in about 5% of the game and more like 30-50 in 80% of the game. The other 15% is 20-30.
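
A toy frame-time model of the CPU-bound point quoted above, assuming the CPU/driver side of a frame costs a roughly fixed amount while the GPU side scales with pixel count; the millisecond figures and card labels are invented.

# Python toy model: a frame is roughly limited by the slower of CPU work and GPU work.
CPU_MS = 12.0                                                # per-frame CPU/driver cost (invented)
GPU_MS_PER_MPIXEL = {"fast card": 8.0, "slow card": 10.0}    # per-megapixel GPU cost (invented)

def fps(card, width, height):
    gpu_ms = GPU_MS_PER_MPIXEL[card] * width * height / 1e6
    return 1000.0 / max(CPU_MS, gpu_ms)     # whichever side is slower sets the frame time

for width, height in ((1024, 768), (1600, 1200)):
    fast = fps("fast card", width, height)
    slow = fps("slow card", width, height)
    print(f"{width}x{height}: fast {fast:.0f} fps, slow {slow:.0f} fps, gap {fast / slow - 1:.0%}")

At 1024x768 both toy cards hit the same CPU ceiling and the gap collapses; at 1600x1200 the GPU dominates and the gap reappears. That is why a lead that grows as the resolution drops looks suspicious.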
 
Originally posted by: CaiNaM
in complex shading scenes SM3 works just as advertised, actually moreso.
moreso compared to what? what the nvidiots were claiming? no...
the main stumbling block many thought SM3 would have was the performance hit of a single shader with multiple instructions. as it turns out the performance hit is not that great, at least according to crytek, but what would they know?
i never questioned that.


moreso meaning moreso than was previously thought... as far as you questioning what i raised in my second comment, unless your name is "many" i did not say you specifically did.


the 1.2 patch does not utilize very many SM3 shaders; in a few complex scenes some SM2 shaders were replaced by SM3 for the sake of performance.
how many and where?
can you logically explain to me how judging a patch replacing a few shaders is enough criteria to determine the actual performance benefit of SM3 as a whole?
again, how "few" shaders?[q/]]

as far as an exact number, you would have to ask crytek that. all i know is that in scenes containing multiple light sources and shadows, sm3 shaders were used to speed up performance; this reflects the performance difference that mainly gets divided between indoor scenes, where most of the complex shaders occur, and outdoor ones, where the lighting is not as complicated.


but you miss the entire point of what i said.. it's not about sm3 bashing.. why is that so hard to grasp?

a) nvidiots screamed i'd cry when the patch came. it's here. i'm not crying, nor do i have reason to.

b) the topic of the post i ridicule stated 33% increases, which are false, and based on erroneous tests. the truth is (according to the source you like to quote):

Based on our results with Far Cry, Shader Model 3.0 does bring NVIDIA tangible performance benefits but the results will vary based on usage. In levels with multiple light sources such as the "Volcano" and "Research" demos, we saw performance gains of up to 18%. The beauty of this is it's a "free" performance increase to the end user, there's no need to turn off eye candy features to get this kind of performance.

In outdoor areas such as our custom Monkey Bay demo and the "Training" and "Regulator" levels present in Far Cry 1.2, the performance benefits are more limited. At best the performance increase is about five percent, roughly the boost you may get from a driver update. This still isn't bad, but it isn't a dramatic difference, we wouldn't be surprised if some users hardly noticed it.

could these results be improved? sure, it's possible. does it make nv40 faster than r420? well, not really. it's a mixed bag. the combination of driver improvements (even tho 56.72 is the last "official" nv driver) and sm3 improvements has resulted in the 6800 being almost as fast as the XT, and the GT faster in some areas than the PRO. overall, it's pretty comparable.

as for the judging of criteria, my only criteria was refuting the idiotic claims made by zealots. beyond that, the only claims i can make are those made above. any claims as to "how much benefit" can be gained "sometime in the future" is speculative, at best.

i've always said sm3 will have benefits. i've also said it won't see its potential till much further down the road. how can you logically argue with that?

i never accused you of "SM3 bashing", i just disagreed with you on a few points. why is THAT so hard to grasp? 😉 as far as the benchmarks you mention, i have already addressed that as well.



that makes about as much sense as showing benchmarks where SM3 "makes no difference in performance" and neglecting to mention (or realize) that in many areas no (or very, very few) SM3 instructions are even used (thus no difference...)

as i've stated many times, far cry is predominantly sm1.x shaders.. why would it? at the same time, how would it have a significant impact on the game? it won't. it will benefit those limited areas that rely on ps2 shaders being replaced by ps3.

again, i'm not sure what you're trying to argue?

it is rather simple. i am merely pointing out facts, which are

1. in complex shading sm3 has proven a definite (and sometimes large) performance advantage over SM2
2. the performance hit many expected is not as bad as some thought
3. as you yourself just pointed out, far cry is still "predominantly sm1.x shaders"

now.. getting off the specific topic, i was just pointed to a new article where crytek stated they will only support fp32 HDR, tho the effects/quality of fp24 would be the same.. interesting they would deny 95% of gamers simply because of nvidia marketing? hmmm.. at any rate, this may be the first actual visual effect which will be sm3 specific (a marketing decision, not a technology one), but i didn't see an eta of when this might happen. will be interesting.

i guess it all depends on how you look at it. maybe crytek decided not to hold their technology back just because ATI did not build an SM3-capable card. the game still runs and looks great on SM2 cards. it is not crytek's fault ATI decided to stay with old technology, it is ATI's... whose only answer is to find ways to say "SM3 is not that big a deal". of course that is not stopping ATI from making an SM3 card, though, is it? 😉 i wonder if ATI's marketing dept will have the same cavalier attitude toward SM3 when they actually build a card that can run it... one can only speculate. but i do not think so 😀


oh wait, plenty of people are doing just that... nevermind!

well, at any rate, it seems to me you are arguing just to argue, since you're really not sticking to my specific comments, rather making arguments based on a broader scope than my comments...

and since you're directly quoting me, well... it seems you're simply trying to "stir the pot"... or you've simply missed the context of what i stated.

not really (on both points). when it comes to the facts we both have presented, they actually concur if you go back and re-read our discussion.

i guess the main difference is you see the SM3 glass as half empty while i see it as half full.

goodnight 🙂
 
Originally posted by: Amuro
They might not be called 6800UE, but most of the 6800Us are shipped with a default core speed of 425MHz or higher.
That's news to me, though I'll admit I haven't followed the 6800U market *that* closely. 🙂 Even so, 425MHz is not the same as 450+MHz. Neither is 1100MHz the same as 1200MHz in very bandwidth-limited situations like 1600x1200 with 4xAA/8xAF.
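
A quick worked example of that bandwidth point, assuming a 256-bit memory bus (which the high-end parts of that generation used); the two clocks are just the figures mentioned above.

# Python: peak memory bandwidth = bus width in bytes x effective memory clock.
BUS_BITS = 256                  # assumed 256-bit memory bus

for effective_mhz in (1100, 1200):
    gb_per_s = (BUS_BITS / 8) * effective_mhz * 1e6 / 1e9
    print(f"{effective_mhz} MHz effective: {gb_per_s:.1f} GB/s")

# 1100 MHz -> 35.2 GB/s, 1200 MHz -> 38.4 GB/s: about 9% more peak bandwidth,
# which is roughly the ceiling on any gain in a purely bandwidth-limited scene.
print(f"headroom: {1200 / 1100 - 1:.1%}")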
 