Originally posted by: PeteRev, I'm surprised to see you registered here just to set the local wildlife straight. I thought you were supposed to be busy with other projects?
I am. But you know me... there are always those instances where I scold my 6-year-old son's friends. I have my "can't... resist... it... must... reply..." moments.
Anyway, I hadn't looked at the post that started this thread. Here's what it said:
Originally posted by: X
Beyond3D has a nice comparison of SM3.0 with SM1.1 here. What I find interesting is that 3.0 is touted as being more efficient. However, if you enable all of its additional features, you end up being unable to render them at an acceptable frame rate with today's technology.
So why get SM3.0 today when cards aren't fast enough to take advantage of its features? Not to say there aren't other reasons to choose Nvidia over ATI (I've had both and have no particular brand loyalty). I just don't understand why people advocate SM3.0 as an advantage of the 68 series, given that today's cards can't take advantage of it. Am I missing something?
1) The word "efficient" was mentioned. And it is correct, up to a degree. It depends on what a programmer seeks. For SC:CT, a couple of examples of what is meant by "efficiency":
(a) In SM3 there are fewer draw calls than in 1.1, because more lights can be processed in a single pass (unrestricted by shader instruction count limits). That makes it more efficient from both a CPU and a GPU point of view (less redundant work).
(b) Static branching is used in SM3 to greatly simplify some of the game's shaders. The shader combinations of SC:CT's uber shaders that are actually used are instantiated during the "Caching SM3 shaders" phase (what you see on the screen whenever you load a level/savedgame using the SM3 path), which avoids runtime stalls caused by the unified compiler compiling shaders on the fly.
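The idea behind (b) can be sketched roughly: enumerate the shader variants a level actually uses and compile them once at load time, so rendering never hits a mid-frame compile. This is a minimal illustration in Python; the class and function names (ShaderCache, compile_variant, the feature flags) are my own invention, not Ubisoft's or any real engine's code.

```python
# Hypothetical sketch of load-time shader-variant caching.
# All names and feature flags here are illustrative, not from SC:CT.

def compile_variant(flags):
    """Stand-in for the expensive driver/compiler call."""
    return "ubershader[" + ",".join(sorted(flags)) + "]"

class ShaderCache:
    def __init__(self):
        self._cache = {}

    def precache(self, used_combinations):
        # "Caching SM3 shaders" phase: compile every combination the
        # level actually uses, before rendering starts.
        for flags in used_combinations:
            self._cache[frozenset(flags)] = compile_variant(flags)

    def get(self, flags):
        # At render time this is a pure dictionary lookup --
        # no compilation, no stall.
        return self._cache[frozenset(flags)]

# The level reports which feature combinations its materials use.
used = [{"diffuse"}, {"diffuse", "shadow"}, {"diffuse", "shadow", "parallax"}]
cache = ShaderCache()
cache.precache(used)
print(cache.get({"shadow", "diffuse"}))  # ubershader[diffuse,shadow]
```

The point is only the shape of the trade: pay all compilation cost up front at load time, keep the per-frame path to a lookup.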
Do these mean you're gonna get "acceptable framerates"? No, it means exactly what it means -- using SM3 is more efficient than SM1.1. Whether you get "acceptable framerates" is another matter altogether. "More efficient" does not mean "acceptable framerates". It means "less sh*tty framerates". There's a difference.
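The draw-call saving in (a) is easy to see with back-of-the-envelope arithmetic: if instruction limits force one additive pass per light, draw calls scale with objects times lights, while a longer SM3 shader that loops over the lights issues one call per object. A sketch, with numbers made up purely for illustration:

```python
# Rough draw-call arithmetic; the counts are illustrative only.

def draw_calls_multipass(num_objects, num_lights):
    # SM1.1-style: instruction limits force one additive pass per
    # light, so every object is drawn once per light.
    return num_objects * num_lights

def draw_calls_single_pass(num_objects, num_lights):
    # SM3-style: one longer shader loops over all lights in a single
    # pass, so each object is drawn once regardless of light count.
    return num_objects

objects, lights = 500, 4
print(draw_calls_multipass(objects, lights))    # 2000
print(draw_calls_single_pass(objects, lights))  # 500
```

Fewer calls means less CPU time spent in the API/driver and less redundant vertex work on the GPU -- which is the "efficiency" being claimed, not a promise of any particular framerate.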
2) "Unable to render at reasonable framerate when all SM3 features are enabled" was mentioned. This is a complaint that deserves a big "DUH!". No next-gen hardware on its debut will render anything at "acceptable framerates" if many (I'm not even saying all) of its next-gen features are enabled -- even with the fastest CPU... remember, we're talking TnL-shader-capable hardware here. Oh please, c'mon folks...
3) The important question mentioned by the poster: "So why get SM3.0 today when cards aren't fast enough to take advantage of its features?"
Because the currently available mid-to-high-end SM3.0 cards can run "SM2.0 games" competitively with other SM2.0-only offerings. IOW, in "SM2.0 apps/games", what has an SM3.0 card got to lose?
NOTE: Please remember point #2 above. Think logically. Know how developers do their business.
4) Quoted: "I just don't understand why people advocate SM3.0 as an advantage of the 68 series, given that today's cards can't take advantage of it."
Please read points #1, 2 and 3 above. I can understand the mistake of using the words "today's cards" instead of "today's games". Again, read points #1, 2 and 3.
If you still don't understand why, at this time, an SM3.0 card makes sense when compared to an SM2.0 card you already own (I assume the bolded words really are what's important, and not the general sense of "SM3.0 now or later?"), then I won't offer any recommendations.
No wait, haven't I already done that?
5) Finally, on the fact that the B3D "article" is the basis for starting this thread.
Everything (well, I would say 90% of it, excepting the screenies, which should originate from B3D) in that "article" is basically a public version of what Ubisoft told B3D via emails. Ubisoft, seeing B3D as a media outlet, probably does not see any harm in "using" B3D to tell the public what SCCT does graphically. The article tells it like it is, as informed by Ubisoft to B3D -- all (or 90%) of the bullet-point features of the game are provided by Ubisoft, the feature-specific explanations are provided by Ubisoft, and the other comments in the article are either provided by Ubisoft or prompted by comments from Ubisoft. The article, basically, could've been written by Ubisoft... 90% of it is facts about the game, as provided by Ubisoft to B3D. I should know... I did it before for B3D, and (no disrespect intended towards Nick) Nick simply isn't capable enough (3D-wise) nor knowledgeable enough about the game (without Ubi's contribution) to write that article without some sort of help. Anyone who says B3D is "biased" for or against any IHV based solely on the content of that article is using their preconceptions of B3D (rightly, wrongly... I don't really give a sh*t) to divert readers of this thread to something that belongs in another forum, and away from the subject matter raised by the starter of this thread.
End of my comments on the post that started this thread.
I think everyone should stop talking about which media outlet's head-honcho is biased. We will always have different opinions.
What you guys should attempt to find out is how the game development business is run in certain development houses and/or in certain publishing companies.
Only then will you realize how this really determines which video card you should buy -- not how you perceive a reviewer's comments in a media outlet's review of a video card. In such a review, look at the settings used and the resultant framerates. That is basically all you need. Not the conclusion, not the odd comment about certain behavioural aspects of a video chip (the game developers know damned well better about this than a media outlet's video card reviewer!)... just the facts.
Unless of course, that reviewer happens to be me.
