The Ultimate SM3.0 Game Thread


apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: VirtualLarry
Originally posted by: ZobarStyl
but I doubt (with Far Cry as my basis) that SM3.0 in 6xxx cards is the same as DX9 in 5xxx cards, or we would've seen a massive downturn in performance in Anand's review. My two cents.
I guess the primary question, then, is whether the performance dip when "2.0++" mode is enabled in Chronicles of Riddick is simply poor coding/optimization for SM3.0 on the part of the developers, or whether it is truly the harbinger of things to come in terms of SM3.0 core performance on current-gen GPU parts. That's the key, and at this point, with only a single "real" data-point, it's very hard to say. (IOW, it is a bit dangerous to extrapolate, but I've made a few "what if" posts recently along that same line of thinking.) I have no idea whether the UE3 tech demo shown was more CPU- or GPU-heavy, or really any details at all about it. I would be interested in finding out more. More data-points for SM3.0 performance comparison will help too.

i am of the opinion that games made from the "ground up" with SM 3.0 will be MORE efficient for SM 3.0 enabled HW . . . . i have been saying this in several of my earlier posts but it has been ignored.

because of the 'variability' of results with SM 3.0 in FC/Riddick/PK etc., i'd have to say "adding" SM 3.0 after the fact is way less efficient . . . but now i am conjecturing.
 

Bar81

Banned
Mar 25, 2004
1,835
0
0
Same to you, thanks for the civility. I think we agree on everything; the only point of discussion right now is whether the 6800 line will be able to run future SM3.0 games at acceptable framerates. With the limited information we have, I'm very concerned that may not be the case, and I don't want people to buy the card and find out later that SM3.0 isn't a feature they can actually use. As to 2006, I know for a fact that I'll be kicking it with an SM3.0-enabled card, but one that I hope will run at acceptable framerates with the effects utilized via SM3.0.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Found your List
nVidia SM 3.0 List Includes H-L 2
Friday August 27, 2004

Description
nVidia seems to have made good use of its multimillion dollar investment in securing games. Much of that money has gone into getting games to adopt Shader Model 3.0, and a list, revealed on NVNews and subsequently removed, indicated that quite a few titles will support the controversial SM 3.0.

If this extensive list of titles proves to be correct then ATI's main argument, that it is too early to adopt SM 3.0 and that SM 2.0b is good enough, will no longer be valid. This list, however, also puts ATI's choice to unleash the R520 in 2005 with SM 3.0 support into perspective. The Canadians, it seems, plan to use nVidia's investment to their advantage.

The plot thickens, however, as one title is, controversially, included in nVidia's SM 3.0 list: Half-Life 2. It could be that since Valve will release the game in late September, and since 3-4 months later ATI will support SM 3.0, the companies have agreed to use it in H-L 2. If that is the case, nVidia will get a good six-month window of being the only company with boards available that fully support Half-Life 2. Unless, of course, SM 3.0 support is released as a patch only after ATI releases its R520 boards. Confused? They probably want you to be; it makes choosing a card a more emotional process than it should be.

However Half-Life 2 SM 3.0 support works out, nVidia seems to have made very good use of its money, and the list of games was impressive. Titles included Lord of the Rings: Battle for Middle-earth, Stalker: Shadow of Chernobyl, Splinter Cell 3, Tiger Woods 2005, Vampire: Bloodlines, Madden 2005, Driver 3.0, Grafan, Medal of Honor: Pacific Assault, patched Painkiller and patched Far Cry.
 

Bar81

Banned
Mar 25, 2004
1,835
0
0
NICE find. I'm going to bed now but I'll update everything tomorrow. I was looking *everywhere* for something like that.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
Originally posted by: Rage187
Judging from every thread you have started, including this one, you just like to bash NV or downplay any advantage they have over ATI.

That makes you a troll.

Just because we know your style, it's no excuse for trolling.

Sorry, I thought that this thread started off pretty usefully, myself. Your posts thus far in this thread, however, have been nothing more than those of a trolling NV fanboy, and to bring up a totally-unrelated thread (also started by the OP) in this one, simply in an attempt to discredit the OP, without contributing anything additional to this thread, proves it.

The precept that "SM3.0 features may not be useful for current-gen cards" is an interesting one, very worthy of debate. Unfortunately, that same precept seems to have angered NV fanboys who believe that anything NV creates/introduces, including the "all-mighty SM3.0 support", is beyond question. What those people fail to consider is whether this is a replay of NV3x's FP precision in shaders all over again, and whether SM3.0 support in current-gen parts has any actual utility at all, above just being a marketing bullet-point feature that they can claim over their competition.

(Kind of reminds me of those "Blast Processing" advertisements for the 16-bit Sega Genesis console. Yet the competing consoles were just as powerful in most cases. Of course, that term was just a purely marketing term, whereas SM3.0 support is in fact real tech, but if no games actually make use of it, I'm not sure if it matters if the tech is real. But the similarities of how the fanboys throw around the term is apt.)

IOW, Rage187, the fact that you consider SM3.0 support, for the current selection of available game software, to be indeed "an advantage they have over ATI", without any debate over the matter at all (which is what the context of this thread is about, is it not?), proves that you have already made up your mind in a dogmatic, fanboy style, and thus anyone who opposes your viewpoint in the matter is automatically wrong, correct?

"Blast Processing" indeed.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
Originally posted by: Slaimus
One thing about the graphics rendering pipeline... there really should not be branching or looping. What most likely happens is that the DXSL compiler merely repeats the code however many times for loops, and executes both branches, picking the result from the chosen branch afterwards for branching.
See, that's the thing - in terms of real-world implementation, is the ability to have branching shaders in SM3.0 actually a good thing? Or a performance-loss nightmare waiting to happen, when it is truly utilized by real-world games, on current-gen parts? Sometimes, newer is not better.
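To make that concrete, here is a toy C sketch of the difference Slaimus is describing: an SM2.0-style compiler "flattens" a branch by evaluating both sides and selecting afterwards, while a true dynamic branch skips the unneeded path. The shading functions and per-path costs below are invented purely for illustration; real shader compilers and GPU schedulers are far more involved.

```c
/* Sketch only (not real HLSL or driver code): contrast a "flattened" branch,
 * where both sides are always evaluated and the result selected afterwards,
 * with a dynamic branch that skips the unneeded work. Costs are made up. */
#include <stdio.h>

static float cheap_shade(float x)     { return x * 0.5f; }          /* ~1 "cycle"  */
static float expensive_shade(float x) { return x * x * x + 0.25f; } /* ~4 "cycles" */

/* SM2.0-style flattening: both paths run, a select picks the answer. */
static float shade_flattened(float x, int in_shadow, int *cost)
{
    float a = cheap_shade(x);
    float b = expensive_shade(x);
    *cost += 1 + 4;                        /* pays for both paths on every pixel */
    return in_shadow ? a : b;
}

/* SM3.0-style dynamic branch: only the taken path is executed. */
static float shade_branched(float x, int in_shadow, int *cost)
{
    if (in_shadow) { *cost += 1; return cheap_shade(x); }
    *cost += 4;
    return expensive_shade(x);
}

int main(void)
{
    int cost_flat = 0, cost_branch = 0;
    for (int px = 0; px < 1000; ++px) {
        int in_shadow = (px % 4 == 0);     /* pretend 25% of pixels are shadowed */
        float x = px / 1000.0f;
        shade_flattened(x, in_shadow, &cost_flat);
        shade_branched(x, in_shadow, &cost_branch);
    }
    printf("flattened cost: %d, branched cost: %d\n", cost_flat, cost_branch);
    return 0;
}
```

Run over a frame's worth of pixels, the flattened version always pays for both paths, while the dynamic-branch version only pays for the path it takes - assuming, of course, that the hardware can actually take a branch without stalling, which is the open question for current-gen parts.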
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: Bar81
Same to you, thanks for the civility. I think we agree on everything; the only point of discussion right now is whether the 6800 line will be able to run future SM3.0 games at acceptable framerates. With the limited information we have, I'm very concerned that may not be the case, and I don't want people to buy the card and find out later that SM3.0 isn't a feature they can actually use. As to 2006, I know for a fact that I'll be kicking it with an SM3.0-enabled card, but one that I hope will run at acceptable framerates with the effects utilized via SM3.0.

Again . . . . my opinion . . . .

SM 3.0 is supposed to make shaders more efficient, not less efficient. Games are typically 1-1/2 to 3 years behind in development. We just got SM 3.0 in the 6800 series . . . . we already see an impressive number of titles and the card is only ONE year old . . . . now ADD ati's support for SM 3.0 and i'd say 'case closed' for 3.0 being quickly adopted.

i believe that the 6800gt/ultra will be "fine" till DX 10 and Unreal 3 in (probably late) '06.

the final 'kicker' to my case is that it does not "hurt" if your "gamble" is wrong and you have a slow or useless SM 3.0 feature . . . it does not affect the rest of the videocard in any way.

in other words, IF your 6800ultra isn't running STALKER in SM 3.0 as well as your friend's x850xt is in SM 2.0, JUST TURN IT OFF. ;)

is my "logic" making any sense to you?
i think it is just my way of expressing my perception of the "future" of video


edit

Originally posted by: Bar81
NICE find. I'm going to bed now but I'll update everything tomorrow. I was looking *everywhere* for something like that.
Thanks, a Google search using "sm 3.0" games got me a lucky hit on the 2nd page: Results 11 - 20 of about 959 for "sm 3.0"+games. (0.21 seconds)
That list is also about 6 months old and there may be other newer titles announced . . . . some of them look to be "must haves".


:D

i'll also check in later

edited
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: VirtualLarry
Originally posted by: Slaimus
One thing about the graphics rendering pipeline... there really should not be branching or looping. What most likely happens is that the DXSL compiler merely repeats the code however many times for loops, and executes both branches, picking the result from the chosen branch afterwards for branching.
See, that's the thing - in terms of real-world implementation, is the ability to have branching shaders in SM3.0 actually a good thing? Or a performance-loss nightmare waiting to happen, when it is truly utilized by real-world games, on current-gen parts? Sometimes, newer is not better.

it's gonna take a LONG time (and awesome future HW) to take FULL advantage of SM 3.0.

HOWEVER, from the article i linked to, it appears that SM 3.0 is much more efficient at reaching a certain gfx result than SM 2.0.
It should be mentioned that basically all SM 3.0 effects can be made using SM 2.0, and are likely to be made using 2.0 because of broader compatibility. Though, because of the performance advantage, game developers are projected to use SM 3.0 on supporting hardware in order to allow their games to perform better. Moreover, eventually those who have SM 3.0-supporting hardware may see some image quality benefits too.
 

lotus503

Diamond Member
Feb 12, 2005
6,502
1
76
"I think you are correct. But again, the issue is whether the current generation 6800 cards will be able to run at acceptable framerates when these new PS2.0 shader effects are implemented via SM3.0"

Well if my current 6800gt won't run it, I am sure my SLI rig will. And if it will not, I will plop down another grand for the latest and greatest.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: S0Lstice
"I think you are correct. But again, the issue is whether the current generation 6800 cards will be able to run at acceptable framerates when these new PS2.0 shader effects are implemented via SM3.0"

Well if my current 6800gt won't run it, I am sure my SLI rig will. And if it will not, I will plop down another grand for the latest and greatest.
They do in the SM 3.0 demos. If not in your game, drop it back down to 2.0 with no loss whatsoever (compared to a non-SM 3.0 competing ati card). ;)

i may be advocating SM 3.0 but it's not as a 'nVidia fan' . . . . . actually i really DO think ATI's 'timing' re: SM 3.0 is ok IF they launch the r520 on time. ;)


 

lotus503

Diamond Member
Feb 12, 2005
6,502
1
76
And just to add, I do love these discussions. Truth is I work too much and am too lazy to nitpick all of the details of things, so I am thankful I have a wonderful supporting cast of posters who are willing to call different companies and examine each little feature. In the end I'm the real winner. As I sit at home, down another beer, and plop down to do a little gaming, I can rest assured that someone, somewhere is picking every little detail apart, and I can spend a few minutes and learn all about it :)

In short THANKS :)
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
Originally posted by: Rollo
I keep saying it's the 2002 feature set because, for the most part, it is.
Rollo, all current NV parts are more-or-less descendants of the Riva TNT2 cores too.
Do you see anyone running around bashing NV based on that line of commentary? No? Perhaps you can learn something from that then. The line is getting old, and as new features get added, it starts to make you look a little more foolish each time that you repeat it.
Originally posted by: Rollo
ATI will have a SM3 card out soon, and all the people duped into buying a high cost X800 as a long term investment in 2005 are going to gnash their teeth as the value of their cards plummet.
Likewise, so are many of the people that purchased current high-end NV4x parts, including those expensive SLI rigs - once they find out that their investment isn't actually that high-performance when running a game that uses "real" SM3.0 code... well, they're not going to be very happy, I don't think.

But WTF am I doing, here I am, feeding the troll, and helping to contribute to devolving this thread into an NV/ATI comparison, when in fact, we should be focusing on whether or not SM3.0 has any real value, as a usable feature, on today's current-gen cards (which just happens to be NV cards, right now.)

I also just wanted to posit some technical reasoning behind this all. First off, the disclaimer - I'm not intimately familiar with NV's NV4x pipeline structure, nor SM3.0 code in particular. But I am very familiar with CPU micro-architecture, on many platforms, so I'm going to generalize my understanding of that, and use it to generate a hypothesis here for discussion. If anyone has any more accurate/direct knowledge of NV4x pipeline architecture or SM3.0 coding, please step up. :)

I'm going to compare branching vs. non-branching throughput-oriented, pipelined architectures here.

From a development/programming standpoint, offering the capability for branching can greatly simplify the work/algorithms needed, compared to coding for a dataflow architecture that lacks control-flow operations. So branching is a win in terms of developer ease and productivity here.

But in terms of actual low-level hardware implementation, it can be a nightmare. For longer-pipelined architectures, allowing branching/looping constructs creates the possibility of pipeline stalls/flushes. So if you, as the silicon designer, implement a chip with 8 pipelines rather than 4, that's double (or more) the chip real-estate, requiring higher development/validation and mfg costs to make those chips.

Now, if those pipelines were "non-branching", that means that you've just doubled the effective throughput of that chip, compared to the prior-gen chip with only half the pipelines. However, here's the crux of this issue - if you then allow those pipelines to implement control-flow operations, which can cause stalls/flushes - let's hypothetically say that out of those 8 pipelines in operation, half of them at any one particular point in time may be experiencing stalls or whatnot. (Here's where some accurate low-level knowledge of the actual implementation would come in handy - experts?) Now the effective throughput of your new, more-expensive 8-pipeline chip is back down to the same sort of effective throughput as the cheaper, 4-pipeline chip that didn't implement a branching programming model. Uh-oh. Sure, developer productivity is up, but actual hardware performance is down. Way down. Back to a prior-generation level of performance, even. People will start to wonder why they even decided to spend twice the money to replace their older card with a newer one that promised this new feature, only to find out that it... doesn't do much for them, at least strictly speaking performance-wise. (It may allow those games that they want to play to get released onto the market sooner, but that doesn't help their actual frame-rates any.)
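A quick back-of-the-envelope version of that argument, in C; the pipeline counts and the 50% stall fraction are purely hypothetical placeholders, not measured NV4x figures:

```c
/* Toy throughput model for the argument above. The 50% stall figure is an
 * assumption for illustration only, not a real measurement of any GPU. */
#include <stdio.h>

/* Effective work per clock = pipelines that are not stalled, on average. */
static double effective_throughput(int pipelines, double stall_fraction)
{
    return pipelines * (1.0 - stall_fraction);
}

int main(void)
{
    printf("4 pipes, no branching, no stalls  : %.1f ops/clk\n", effective_throughput(4, 0.0));
    printf("8 pipes, no branching, no stalls  : %.1f ops/clk\n", effective_throughput(8, 0.0));
    printf("8 pipes, branchy code, 50%% stalls : %.1f ops/clk\n", effective_throughput(8, 0.5));
    return 0;
}
```

If a stall rate anywhere near that hypothetical 50% showed up in branch-heavy shaders, the doubled-up chip would land right back at the older part's effective rate, which is the whole concern.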

Now do people understand what (may) be at issue here?

Now if NV is planning, on their next-gen parts, to implement something akin to Intel's HT support for their P4 CPUs - which effectively hides pipeline stalls by allowing a secondary thread to take advantage of the opportunity and use the CPU's otherwise-idle functional units, ensuring that they stay maximally utilized - then that would be a step in the right direction. It would likely help "fix" the potential problem that might otherwise become apparent in branch-heavy code on chips that didn't implement something like that.
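If it helps, here is a crude C sketch of why a second hardware context can hide stalls: with one context the pipe sits idle whenever that context stalls, while with two it only idles when both stall at once. The 50% stall probability is an assumption for illustration, not a claim about any real part.

```c
/* Toy model of latency hiding via a second thread/context, in the spirit of
 * the HT analogy above. Stall probability and cycle count are invented. */
#include <stdio.h>
#include <stdlib.h>

#define CYCLES    100000
#define STALL_PCT 50   /* assume each context is stalled 50% of the time */

static int ready(void) { return (rand() % 100) >= STALL_PCT; }

int main(void)
{
    int issued_single = 0, issued_dual = 0;
    srand(42);
    for (int c = 0; c < CYCLES; ++c) {
        if (ready()) issued_single++;          /* one context: idle whenever it stalls  */
        if (ready() || ready()) issued_dual++; /* two contexts: idle only if both stall */
    }
    printf("single context utilization: %.0f%%\n", 100.0 * issued_single / CYCLES);
    printf("dual context utilization  : %.0f%%\n", 100.0 * issued_dual / CYCLES);
    return 0;
}
```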

But current-gen parts, don't have that feature, do they?

So, if this is a real issue, in terms of the developer support for maximum hardware performance for current-gen parts, it would actually be best to avoid using branching shader code - which is one of the only major differences between SM3.0 and the prior-gen 2.0 stuff. (More or less.)

Higher-level graphical effects "features" like the much-touted "HDR" are not inherent features of the SM2 or SM3 specs, but can be implemented in either. But due to the additional ease that a branching programming model offers, most devs will take the easy way out, and code it that way. However, that of course will likely not result in the highest level of actual hardware performance on currently-released parts.

I hope that I've hit most of the highlights here, and hopefully this thread can get back on track... but I doubt it. :p

Originally posted by: Rollo
I think their engineers were so proud of the R300 (justifiably so) they bought all the pot and Pink Floyd records in Canada and moved to the Caymans.
LOL. Sounds like possibly they might be partying with some of my cohorts from my game-programming days, after a developer release party. :p

Originally posted by: Rollo
They ran out of money for coconut shrimps and margaritas sometime late last year, so they stowed away on a fishing trawler, showed up back at the office with notes explaining their absence from Ziggy Marley, and announced the R520.
:roll:
LOL. Hey man, nothing beats Margaritaville.
 

imported_Noob

Senior member
Dec 4, 2004
812
0
0
When it comes down to it, if the 6800 can't run CoR (which is a game that apparently utilizes SM 3.0 very well) then it won't be able to run it in the future. And that is why people buy X800's, because those people know for a fact that 3.0 doesn't make a difference. Plus the X800 cards run at higher FPS.
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Originally posted by: BFG10K
It probably is, but you missed the point of the subdiscussion entirely which was whether ATi's touted features actually are implemented
To date I'd say SM 3.0 implementation has outclassed Truform implementation.

as nvidia apparently has a history of touting a feature and then not enabling it/having it work correctly (at least that's what I gleaned from the poster's comments.)
Maybe...but ATi certainly has a history of hiding features, like SM 2.0b which nobody knew about until it magically appeared after FC 1.2 was available.

It is the end of days that was foretold- I agree with BFG almost all the time now and have no urge to debate him!

;):beer:
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
The only thing i can say to ease your concerns, VirtualLarry, is that SM 3.0 will be implemented gradually, just as 2.0 was and 1.4 before it. Game designers know better than to bring out a game with graphics so advanced that no one can play it. ;)

But the future is speculation and i eagerly anticipate the unfolding of the details.

and i am outta here . . . . the discussion DOES seem to be finally 'on track'.

________________
Originally posted by: Noob
When it comes down to it, if the 6800 can't run CoR (which is a game that apparently utilizes SM 3.0 very well) then it won't be able to run it in the future. And that is why people buy X800's, because those people know for a fact that 3.0 doesn't make a difference. Plus the X800 cards run at higher FPS.
it ain't SM 3.0, rather SM 2.0+++ according to the readme. And i doubt it is very efficient.

However, the 6800 series CAN run CoR just as well as the x800 series on the well-coded SM 2.0. ;)
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: Noob
Another reason people will buy an X800 over a 6800:

http://www.xbitlabs.com/articles/video/display/half-life.html

The 6800 Ultra can't even outperform an X800 Pro in the HL2 benchmarks

What's your point? the 6800 kicks the x800 all over the place in Doom 3's engine . . . .
your benchmarks have nothing to do with SM 3.0 - which - incidentally - HL2 supports. :p
:roll:

edit: i am really OUT of here . . . . i know better than to discuss anything with a n00b fanATIc.
:roll:

LD
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
Originally posted by: BFG10K
What graphical effect does looping and branching have on the game?
Where did I claim it had one? Looping and branching allow multiple rendering passes to be collapsed into fewer passes, providing better performance. In many games SM 3.0 does nothing more than add performance for free, which is always a good thing.

I'm not so sure that it's quite truly "performance for free", since the pipelines that implement the control-flow instructions are going to suffer in performance because of it. I think a better question is whether it offers better *overall* performance. You're right that if it can be used to collapse the number of rendering passes needed to render a scene, then it's going to be a *big* win in terms of memory bandwidth. Considering that memory bandwidth is currently the biggest obstacle to higher frame-rates, that seems like a good thing. But in the future, as games start to use shader code more than raw multi-pass texturing ops to achieve the same "look", then shader pipeline cycles may well become *more* valuable than memory bandwidth. It really comes down to how the developers actually use these features. Poorly-written SM3.0 code could well be orders of magnitude slower than SM2.0 code, if they over-use control-flow and "churn" pipeline cycles without any real benefit.
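As a rough illustration of the pass-collapsing point (my numbers, not from any real engine): with SM2.0-style multi-pass lighting, every light costs a full framebuffer read and write per pixel, while an SM3.0-style loop can accumulate all the lights in registers and write the framebuffer once.

```c
/* Toy comparison of framebuffer traffic for multi-pass vs. looped lighting.
 * The resolution, light count, and "one read + one write per pass" model are
 * simplifying assumptions for illustration only. */
#include <stdio.h>

#define PIXELS     (1024L * 768L)
#define NUM_LIGHTS 4

int main(void)
{
    /* SM2.0-style: one pass per light; each pass reads and writes every pixel. */
    long multipass_accesses = PIXELS * NUM_LIGHTS * 2;

    /* SM3.0-style: loop over the lights inside the shader, write each pixel once. */
    long looped_accesses = PIXELS * 1;

    printf("multi-pass framebuffer accesses: %ld\n", multipass_accesses);
    printf("looped single-pass accesses    : %ld\n", looped_accesses);
    return 0;
}
```

That bandwidth saving is the upside; the downside, as above, is whatever the loop and branch instructions themselves cost the shader pipelines.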

IOW, when someone began programming back in the day, it was usually with BASIC, and they were told (usually sternly) that "GOTO is a bad command to use" - even though it was a feature provided by the language, one should avoid it at all costs. Depending on how dogmatic the teacher was, that might be drilled into the programming student even more often. The problem is that, without restricting the use of "GOTO", the code from a beginning programmer inevitably starts to devolve into "spaghetti code" - a mess to read, and very inefficient to execute. IOW, a tangled mess. However, it is a very powerful tool, and when used correctly, in limited cases, it can actually provide a performance advantage.

I see a direct analog here, in terms of the transition from SM2 to SM3.0 - the shader-code programmers have just been given the ability to execute control-flow operations - in effect, they have just been given a "GOTO".

Experienced programmers can use that to their performance advantage. One example is in terms of collapsing multiple rendering passes into one, as you said, BFG10K.

But the danger is, inexperienced programmers will likewise use it as an easy and cheap way out, faster to program, without regard to the performance implications of using the tool wrongly.

So the addition of branching to shader code in SM3.0 is definitely a double-edged sword, one that should be carefully watched to see how it is being swung around.

Originally posted by: BFG10K
Since people are lumping HDR in FarCry in with SM3.0, and because it *requires* SM3.0 to use in FarCry it's a legitimate feature worth noting.
Actually it has nothing to do with SM 3.0 but everything to do with FP blending which only the NV4x series supports.
Completely correct, I wish people would stop lumping in HDR support with SM3.0 support, because they are not the same. It's just that due to *one* decision by the developers of *one* game - that game in question currently requires hardware SM3.0 support to enable the option for HDR. That doesn't mean that HDR == SM3.0 somehow. :|


 

imported_Noob

Senior member
Dec 4, 2004
812
0
0
Originally posted by: apoppin
Originally posted by: Noob
Another reason people will buy an X800 over a 6800:

http://www.xbitlabs.com/articles/video/display/half-life.html

The 6800 Ultra can't even outperform an X800 Pro in the HL2 benchmarks

What's your point? the 6800 kicks the x800 all over the place in Doom 3's engine . . . .
your benchmarks have nothing to do with SM 3.0 - which - incidentally - HL2 supports. :p
:roll:

edit: i am really OUT of here . . . . i know better than to discuss anything with a n00b fanATIc.


:roll:

LD

HL2 doesn't support SM 3.0. Get your facts straight.

Maximum PC, PC Gamer, Anandtech, Xbit Labs, Toms Hardware, Hard OCP, Guru 3d, etc. all say that the X800 is better than the 6800. These are non-debatable facts.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: Noob
Originally posted by: apoppin
Originally posted by: Noob
Another reason people will buy an X800 over a 6800:

http://www.xbitlabs.com/articles/video/display/half-life.html

The 6800 Ultra can't even outperform an X800 Pro in the HL2 benchmarks

What's your point? the 6800 kicks the x800 all over the place in Doom 3's engine . . . .
your benchmarks have nothing to do with SM 3.0 - which - incidentally - HL2 supports. :p
:roll:

edit: i am really OUT of here . . . . i know better than to discuss anything with a n00b fanATIc.


:roll:

LD

HL2 doesn't support SM 3.0. Get your facts straight.

Maximum PC, PC Gamer, Anandtech, Xbit Labs, Toms Hardware, Hard OCP, Guru 3d, etc. all say that the X800 is better than the 6800. These are non-debatable facts.

http://www.megagames.com/news/html/hardware/nvidiasm30listincludesh-l2.shtml
get yours

aloha

your credibility just went to zero with your ATI propaganda BS . . . i have nothing further to say to you
 

imported_Noob

Senior member
Dec 4, 2004
812
0
0
Originally posted by: apoppin
Originally posted by: Noob
Originally posted by: apoppin
Originally posted by: Noob
Another reason people will buy an X800 over a 6800:

http://www.xbitlabs.com/articles/video/display/half-life.html

The 6800 Ultra can't even outperform an X800 Pro in the HL2 benchmarks

What's your point? the 6800 kicks the x800 all over the place in Doom 3's engine . . . .
your benchmarks have nothing to do with SM 3.0 - which - incidentally - HL2 supports. :p
:roll:

edit: i am really OUT of here . . . . i know better than to discuss anything with a n00b fanATIc.


:roll:

LD

HL2 doesn't support SM 3.0. Get your facts straight.

Maximum PC, PC Gamer, Anandtech, Xbit Labs, Toms Hardware, Hard OCP, Guru 3d, etc. all say that the X800 is better than the 6800. These are non-debatable facts.

http://www.megagames.com/news/html/hardware/nvidiasm30listincludesh-l2.shtml
get yours

aloha

your credibility just went to zero with your ATI propaganda BS . . . i have nothing further to say to you

Guess they patched it. Not that it's gonna make it look any better, or perform any better. Just like every other game.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
Originally posted by: Bar81
Originally posted by: MisterChief
Thank you, my good sir! Now, these "new" effects you mention. What exactly do you mean? Are there actually new effects, or are earlier effects like shadows, reflections, particles, etc. being improved on?
They aren't new effects per se, they are SM2.0 effects but SM3.0 has enhancements that allow these effects to be rendered more efficiently, which in *theory* should allow SM3.0 enabled cards to use effects that simply would cripple SM2.0 only cards.
The thing is - it could be completely the opposite too. It could be that if poorly implemented, the SM3.0 codepath could offer far less performance on the same hardware than the SM2.x codepath, if the hardware doesn't have enough "oomph". Much like some cards that are technically DX9-compliant in hardware, are much better off being used with the DX8.1 codepath, for performance reasons. Adding branching to the shader code could well have the same sort of result.
Originally posted by: Bar81
Unfortunately, I'm having a hard time seeing where this is the case, and Riddick's results really concern me as it seems that maybe today's cards simply aren't powerful enough, *even with* SM3.0 to use the so far unused parts of the SM2.0 spec.
More data-points are needed on this, badly.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: VirtualLarry
Originally posted by: Bar81
Originally posted by: MisterChief
Thank you, my good sir! Now, these "new" effects you mention. What exactly do you mean? Are there actually new effects, or are earlier effects like shadows, reflections, particles, etc. being improved on?
They aren't new effects per se, they are SM2.0 effects but SM3.0 has enhancements that allow these effects to be rendered more efficiently, which in *theory* should allow SM3.0 enabled cards to use effects that simply would cripple SM2.0 only cards.
The thing is - it could be completely the opposite too. It could be that if poorly implemented, the SM3.0 codepath could offer far less performance on the same hardware than the SM2.x codepath, if the hardware doesn't have enough "oomph". Much like some cards that are technically DX9-compliant in hardware, are much better off being used with the DX8.1 codepath, for performance reasons. Adding branching to the shader code could well have the same sort of result.
Originally posted by: Bar81
Unfortunately, I'm having a hard time seeing where this is the case, and Riddick's results really concern me as it seems that maybe today's cards simply aren't powerful enough, *even with* SM3.0 to use the so far unused parts of the SM2.0 spec.
More data-points are needed on this, badly.

Is Riddick "really" 3.0 or as the readme calls it "2.0+++ for 6800 only"?

i vote for inefficient coding that is nVidia's "attempt" to show something different . . . .

nite all