Shader model 3.0 (a civilized discussion)


n7

Elite Member
Originally posted by: Actaeon
The fanboy-ism in this thread is rampant.


Agreed.

There's a lot of FUD in this thread too.
And by FUD, I mean my definition, which is Fuck3d Up Data ;)
 

housecat

Banned
I don't know if you are referring to my data (which no one seems to be able to refute; that happens when someone speaks the truth), but there's a lot of DOTF on this forum about SM3/DX9C as well.

And by my definition, DOTF is denial of the facts. ;)
 

sellmen

Senior member
Originally posted by: housecat
I don't know if you are referring to my data (which no one seems to be able to refute; that happens when someone speaks the truth), but there's a lot of DOTF on this forum about SM3/DX9C as well.

And by my definition, DOTF is denial of the facts. ;)

You haven't "proved" anything. The benchmarks we've seen so far (Far Cry) have shown small performance gains, not the huge gains you claim...until there are more PS3.0 benchmarks out, claiming "much higher performance" and "exponentially greater image quality" is pure speculation.

Same with your claim that Nvidia's "expertise" will give them "huge performance gains" in the next generation. Pure BS, unless you have something to back it up (and seeing how ATI's next card isn't out yet, you probably don't).

This thread is mostly pointless until more PS3.0 games come out.

 

Drayvn

Golden Member
Everyone seems so adamant that these features are part of SM3, when I thought they were features of DirectX...

Displacement Mapping
Parallax Mapping
Tone Mapping
HDR
And lots more

These are all DirectX features, and SM3 can be used to make them run faster. You can still have SM3 without any of them, because SM3 doesn't include or require them; DirectX, though, does.

You might see games putting those features under an SM3 option, because the only way to run them without drastically dropping frame rates is to implement them through SM3, where they run more efficiently.

A good example: HL2 will support HDR, and it already supports parallax mapping and lots more. I've also watched a real-time demo of Far Cry: The Project with HDR, hardware soft shadows, and loads more on my X850XT-PE. And that's only using SM2.
 

Ackmed

Diamond Member
Originally posted by: housecat
Originally posted by: Noob
Do you guys think the 6800s will even have the power to run next-generation games that support SM3.0?

I do.
I can run insane resolutions with HDR on in Far Cry today.

Not only that, but the NV50 will likely be able to do even better.

It's the R520 you should worry about; it's ATI's first-gen SM3 part, while NV will be on their second revision of SM3, having much more experience with it than ATI.

So if you are looking for your next card to be SM3, I'd stick with NV50.

Anyone with a monitor that can display a high enough res can run any game at "insane resolutions". That doesn't mean it's going to be smooth, especially since smooth is very subjective. Personally, I don't consider 25-30 frames smooth, which is what I got with my 6800GT, even lower with lots of action.

Does it really matter if it's their first card for SM3? I don't think so. Look at ATI's first try at DX9 and PS2.0: a MUCH better outing than NV's first.

I also agree with whoever said to wait till we actually see the cards to decide which to "stick" with.
 

Keysplayr

Elite Member
Show me 17 articles from various sources that say SM3.0 offers NO advantages over SM2.0++ and I'll show you another 17 articles from other sources that say that it does.

Which ones are right? Which ones are wrong? Article posting is pretty much worthless, just like anything else here; it's politics, à la propaganda, from both sides of the SM fence.
Can we not talk about this anymore? It never gets us anywhere.

I own a 6800GT and an X800XT PE. Both cards are amazing. Some games are better on one card than the other, some in visual quality and some in speed. So my eyes tell me there is no difference. When a true 100% SM3.0-coded game emerges (are there any yet? Riddick?), maybe we will see some sort of performance difference along with some sort of visual one. "I don't know and you don't know" is the bottom line and the truest statement that can be made right now.

Just flew back from Italy and boy are my arms tired. Ba dump bump bump.

Buon giorno,
Keys
 

sbuckler

Senior member
Just to clear up a few things about Unreal 3. As far as I can tell, the reason it looks prettier than existing stuff is because it uses:
1) HDR lighting: not SM3.0, but currently only supported by 6-series GeForces. This makes a big difference, and IMO is where the X800s are really missing out.
2) Virtual displacement mapping (also known as parallax mapping; see the sketch below). This is supported by everything since at least the 9800. 6800s also support true displacement mapping but, to be honest, don't have the grunt to use it -- it's very much something for future generations of cards.
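To show what parallax mapping actually does: it offsets the texture lookup per pixel using a height map, so a flat surface appears to have depth. A minimal HLSL sketch (the sampler names and constants are my own illustration, not from any engine):

// Minimal parallax mapping sketch (ps_2_0-level HLSL).
// A height map nudges the texture coordinates along the
// tangent-space view direction -- an illusion of depth with
// no extra geometry, unlike true displacement mapping.
sampler2D heightMap;   // grayscale height texture
sampler2D diffuseMap;  // color texture
float heightScale;     // e.g. 0.04
float heightBias;      // e.g. -0.02

float4 ParallaxPS(float2 uv : TEXCOORD0,
                  float3 viewTS : TEXCOORD1) : COLOR
{
    // Sample height and remap it to a small offset range.
    float h = tex2D(heightMap, uv).r * heightScale + heightBias;
    // Shift the lookup toward the viewer by that amount.
    float2 offsetUV = uv + h * normalize(viewTS).xy;
    return tex2D(diffuseMap, offsetUV);
}

True displacement mapping, by contrast, moves the actual vertices, which is why it needs so much more grunt.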
I am sure it will run on everything but the kitchen sink -- Epic will have it running on all sorts of newish PCs, all the next-gen consoles, Linux, Mac, etc. -- as they want it to become the de facto engine every company develops its games on.

As for what SM3.0 brings to the table that really matters over 2.0b, I am not convinced there is a lot.
There is stuff such as 32-bit floats being the standard precision, as opposed to 24-bit in earlier SM2, but I doubt that'll make a huge difference to graphics quality. The performance improvements of SM3.0 are mostly in 2.0b, which the X800s have, so performance-wise I don't think X800s lose out much, as long as the game's developer bothers to make a 2.0b rendering path.

Personally, although graphics improvements are always nice, I want my games to have better gameplay, which pretty graphics don't really give you much of. That really requires more processing grunt, so I await the arrival of dual cores and hardware physics engines with the most interest.
 

Munky

Diamond Member
Originally posted by: housecat
*snip: housecat's post, quoted in full earlier in the thread*

You are starting to sound more and more like a fanboy. How many generations did it take for NV to get DX9 right? I'll give you a hint - it wasn't 1. And I just laugh every time I see somebody using the word "future-proof", because they're just fooling themselves. If you get a high-end card today, then trust me on this: two years from now you won't be thinking about SM3, you'll be more worried about whether you can push decent framerates at 800x600.
 

Munky

Diamond Member
Originally posted by: humey
1ST i never said it couldnt play 2days games, i said not futureproof it not even dx9c

2ND plz dont flood screen with lots of quotes its PIA and i feel for 56k users trying to load 2 screens of repeats. :D

3rd, please write in at least half-decent English, because I don't have the patience to decipher your posts, and I doubt anyone else does.
 

Matthias99

Diamond Member
Originally posted by: housecat
Originally posted by: pyrosity
The only difference between 2.0 and 3.0 is instruction lengths allowed and other shader programming differences. None of this results in new features. Everything that 3.0 does, 2.0 can do as well. That's including displacement mapping. I believe that Epic claims d-mapping as "3.0 exclusive" because they couldn't have pulled it off if they didn't have the speed boost of 3.0. The lighting differences shown in Far Cry 1.3 are done by HDR which you know is supported in PS 2.0. Every visual effect and improvement made is not done by switching/changing some shader language; it is done by adding different effects that come from the 2.0 era.

This is so misinformed.

1. PS3.0 offers slightly to moderately better efficiency and will offer negligible to somewhat better performance than PS2.0

2. Effects may be (somewhat) easier to implement, and those same effects may be slightly less stressing on the hardware through PS3.0

Fixed those for you. One big shader is not necessarily faster than multiple small shaders. And a lot of simple shader code just doesn't need much (if any) branching. Frankly, once you start getting into the sizes of shader programs where SM3.0 might make a noticeable (>= 10%) performance difference, it's running so slowly on today's hardware that the difference is mostly irrelevant.
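To make the branching point concrete, here is roughly what a ps_3_0 dynamic branch looks like -- a sketch with made-up names, not code from any shipping game:

// Sketch of ps_3_0 dynamic branching (illustrative names).
// PS2.0 would typically evaluate both paths and blend, or
// split the work across shaders/passes; ps_3_0 lets the GPU
// skip the expensive path per pixel.
sampler2D normalMap;
float3 lightDirTS;      // tangent-space light direction
float4 specularColor;

float4 LightingPS(float2 uv : TEXCOORD0,
                  float3 viewTS : TEXCOORD1) : COLOR
{
    float3 n = normalize(tex2D(normalMap, uv).rgb * 2 - 1);
    float ndotl = dot(n, lightDirTS);
    float4 color = float4(0, 0, 0, 1);

    // Only pixels facing the light pay for the specular
    // math -- and it's only a win if neighboring pixels
    // mostly take the same path.
    if (ndotl > 0)
    {
        float3 h = normalize(lightDirTS + normalize(viewTS));
        color.rgb = ndotl + specularColor.rgb
                  * pow(saturate(dot(n, h)), 32);
    }
    return color;
}

And as you can see, for a shader this short the branch saves almost nothing, which is exactly the point above.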

I haven't seen developers saying it's worlds easier to write PS3.0 code than PS2.0. Having a high-level shading language (HLSL) is more of a benefit in terms of development.

3. PS3.0 offers full precision (32bit floating point), while 2.0 does not (24bit).

This is just flat-out wrong; PS2.0 offers 32-bit shaders if the hardware supports it (see, for instance, the GeForce FX 5 series, which can do 32-bit PS2.0); it only REQUIRES that you support at least 24-bit shaders. PS3.0 requires 32-bit shader support. It's unclear how important this is anyway; HDR looks pretty damn good even at 24-bit precision.

Anyone who has studied Nvidia's new pixel shaders will know that it has PLENTY of power to use this as well.

Sort of. They don't really perform any better than ATI does in shader-heavy games like Far Cry and HL2. Doom3 is a different situation, because NVIDIA can do stencil shadows way, WAY faster than ATI can.

4. VS (vertex shader) 3.0 will bring exponentially greater image quality over VS2.0 and where we'll see the biggest IQ improvements. This feature finally brings us displacement mapping that DX9 originally promised.

VS3.0 does add hardware displacement mapping (well, actually, what it adds is letting vertex shaders access texture memory, which among other things lets you do HW-level displacement mapping). However, I do not believe this will run at a usable speed on any (single) GeForce6 card, and so I do not think it is that big a deal. It's no good if it brings $400-500 hardware to its knees on today's games; tomorrow's will be unplayable. Furthermore, no games are currently using this, and I don't know of any before UE3 that are specifically planning on using it.
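For reference, the vertex-texture-fetch path looks roughly like this in vs_3_0 HLSL (a sketch with names of my own choosing, not from any game):

// Sketch of vs_3_0 displacement mapping via vertex texture
// fetch (illustrative). The vertex shader samples a height
// map and pushes each vertex along its normal -- changing
// real geometry, unlike parallax mapping's per-pixel trick.
sampler2D heightMap;    // must be a format the VS can sample
float4x4 worldViewProj;
float displaceScale;

struct VSOut { float4 pos : POSITION; float2 uv : TEXCOORD0; };

VSOut DisplaceVS(float4 pos : POSITION,
                 float3 normal : NORMAL,
                 float2 uv : TEXCOORD0)
{
    VSOut o;
    // Vertex shaders have no derivatives, so an explicit
    // LOD sample (tex2Dlod) is required.
    float h = tex2Dlod(heightMap, float4(uv, 0, 0)).r;
    pos.xyz += normal * (h * displaceScale);
    o.pos = mul(pos, worldViewProj);
    o.uv = uv;
    return o;
}

The catch is that every vertex now costs a texture fetch in the vertex pipe, which is exactly where today's hardware is slow.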

Also, people keep bringing up FarCry's HDR -- which does not use SM3.0 at all. It is implemented through a floating-point framebuffer, which is (so far) an NVIDIA-only feature, and not part of the SM3.0 spec (this is also why you cannot use AA with HDR in FarCry -- NVIDIA does not support AA on FP framebuffers). If a card has FP framebuffers, even if it does not have SM3.0 support, FarCry HDR will work on it. Now, you can do HDR via pixel shaders -- HL2 is supposed to be doing this in a patch sometime -- but it is not clear what future games will go for.
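To be clear about what "HDR via pixel shaders" means: you render the scene with color values allowed to go above 1.0, then tone-map the result back to displayable range in a final pass. A minimal sketch (the exposure curve and names are mine, purely for illustration):

// Minimal tone-mapping pass for pixel-shader HDR
// (illustrative). The scene was rendered with colors allowed
// to exceed 1.0 (e.g. into an FP16 target, or encoded into
// an integer target); this maps it back into the 0-1 range
// for display.
sampler2D hdrScene;
float exposure;   // e.g. 1.5

float4 ToneMapPS(float2 uv : TEXCOORD0) : COLOR
{
    float3 hdr = tex2D(hdrScene, uv).rgb;
    // Simple exponential exposure curve.
    float3 ldr = 1 - exp(-hdr * exposure);
    return float4(ldr, 1);
}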

Basically, where all these threads wind up is:

PS3.0 does not improve IQ over PS2.0, but may improve performance in some situations. However, the performance boost mostly comes with very long shaders, which will not run at usable speeds on today's cards anyway. Also, ATI's "PS2.0b" offers much of the same improvement for developers wishing to support it (Far Cry did this; the performance difference between PS2.0b and PS3.0 is negligible).

VS3.0 can improve IQ over VS2.0, but the really advanced features will probably run too slowly to be useful on today's hardware. It can also improve performance in some situations through geometry instancing, but again, PS2.0b also offers this.
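On the shader side, instancing is nothing exotic -- the per-instance data just shows up as extra vertex inputs, fed from a second stream configured through the API. A sketch (names are illustrative):

// Sketch of geometry instancing in D3D9-era HLSL
// (illustrative). Per-instance data arrives as extra vertex
// inputs fed from a second vertex stream that advances once
// per instance; the shader reads them like any attribute.
float4x4 viewProj;

struct VSOut { float4 pos : POSITION; float4 color : COLOR0; };

VSOut InstanceVS(float4 pos : POSITION,
                 float4 instPos   : TEXCOORD1, // per-instance offset
                 float4 instColor : TEXCOORD2) // per-instance tint
{
    VSOut o;
    o.pos = mul(float4(pos.xyz + instPos.xyz, 1), viewProj);
    o.color = instColor;
    return o;
}

The win is purely in draw-call overhead: one call draws many copies.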

FarCry's HDR is not related to SM3.0 at all, but uses an NVIDIA-specific feature. It is unclear if other games will use this as well, and/or if ATI's next-gen hardware will support this. At least some other games will be doing HDR through pixel shaders, which should work on any PS2.0/3.0 card.

IMO, at this point, it seems unlikely that SM3.0 will provide any sizable benefits for GF6 hardware going forwards. Certainly it will not make a 6600GT beat an X800XL/XT (and anyone who says this probably has a fuzzy grasp at best of how graphics hardware works), although I could see a 6600GT maybe getting close to an X800Pro in extremely optimal circumstances. 500+-line shaders and HW displacement mapping are likely to bring any GF6 card to a crawl even with SM3.0.
 

Keysplayr

Elite Member
Originally posted by: Matthias99
*snip: Matthias99's post, quoted in full directly above*

That's two "slightly"s and two "somewhat"s. Seems to me these might add up to a single "substantial". Not saying you're right or wrong. I'm saying you don't know, along with the rest of us, informed or not, although I do respect your opinions, of course.
 

Munky

Diamond Member
Originally posted by: hans030390
Um, no, SM3.0 allows like... 65,000 somethings... I forget the term, lol, as 2.0 supports 512... so yeah.

REMEMBER!!! SM3.0 supports displacement mapping, and 2.0 doesn't! D-mapping will be used HEAVILY in next-gen games! So 2.0 might try to look like it, and it will go slow, while 3.0 will have less of a problem making it look like that.

Displacement mapping: instead of making the object appear to have 3D bumps, as in bump mapping, D-mapping actually physically changes the geometry of the object it is applied to. What you get is an extremely detailed model without all the extra polys and modeling, but with real bumps... not only that, the shadows show the D-mapping bumps as well.

Don't say 2.0 and 3.0 are the same, and that 3.0 is just more efficient with no eye candy... read up on it before you reply; that was the purpose of my first post.

Actually, Far Cry did have a change with 3.0. It rendered lights differently on objects, and I think it applied it to foliage. It made a slight change, but you won't notice the real 2.0-to-3.0 difference unless the game uses all of the 3.0 features.

Do you even know what normal/displacement mapping is? The whole point of normal mapping is to make an object look like it has more detail than is actually present in its polygon model. And that stuff is only a lighting trick, so, for example, a flat wall texture actually looks like it has the fine grain of the wall material.

Now tell me, when you look at the shadow of a stone, do you see in it the fine-grained detail of its surface? Because in real life I certainly don't, and I sure as hell don't care whether you can or can't in a game. I bet a lot of people were saying the same thing about ATI's TruForm feature when it came out, and most games don't even use it. Also, cards as early as the Radeon 9800 have an F-buffer that lets them process more shader instructions than the DX9b spec requires, so program length is not much of an issue either.
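To put the program-length point in context, the sort of thing SM3.0's higher limits and loops enable is something like several lights in a single pass -- a rough HLSL sketch (the array size and names are mine):

// Sketch of a ps_3_0 loop (illustrative): several lights
// accumulated in one pass. Under ps_2_0's much tighter
// instruction limits, a shader like this would be cramped or
// split across passes (or rely on something like ATI's
// F-buffer for genuinely long programs).
sampler2D normalMap;
float3 lightDir[8];     // tangent-space light directions
float3 lightColor[8];
int numLights;          // set by the application per draw

float4 ManyLightsPS(float2 uv : TEXCOORD0) : COLOR
{
    float3 n = normalize(tex2D(normalMap, uv).rgb * 2 - 1);
    float3 c = 0;
    // Literal bound so the compiler can unroll; the guard
    // against numLights becomes per-pixel flow control.
    for (int i = 0; i < 8; i++)
    {
        if (i < numLights)
            c += lightColor[i] * saturate(dot(n, lightDir[i]));
    }
    return float4(c, 1);
}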

Point is, SM3 is not as big of a deal as you would like people to believe, especially in current games. And it definitely won't convince me to choose one brand over another.
 

Matthias99

Diamond Member
Originally posted by: keysplayr2003
That's two "slightly"s and two "somewhat"s. Seems to me these might add up to a single "substantial". Not saying you're right or wrong. I'm saying you don't know, along with the rest of us, informed or not, although I do respect your opinions, of course.

This is correct; it is possible that, in the right circumstances, you might see a noticeable performance boost from SM3.0 (between more efficient shaders and VS3.0 geometry instancing). I, like everyone else, cannot predict the future. :p

However, IMVHO, I think that you are unlikely to see big performance improvements on GF6 hardware with SM3.0. The situations where it is most likely to help you (for instance, when running very long shaders) are ones where even a 10-20% improvement in speed won't be enough to make things playable. And advanced SM3.0 IQ features like displacement mapping seem to be VERY GPU-intensive -- the only demo video I've seen from NVIDIA showed very simple uses of it, and they haven't really been showing it off since then, making me suspect they know it will run too slowly to be useful.

Certainly, a 6600GT is just not going to run like an X800XT. It would have to see a ~100% improvement in shader efficiency, which -- if you understand much about how shader programs are written -- is just not going to happen with the kinds of changes made going from 2.0->3.0.
 

Munky

Diamond Member
Originally posted by: Rollo
Originally posted by: munky
How many generations did it take for NV to get DX9 right? I'll give you a hint - it wasn't 1.
How is this relevant?

It's relevant to the original post questioning somebody's ability to get a feature implemented right the first time around, which you conveniently left out.
 

housecat

Banned
Originally posted by: Ackmed
*snip: the nested housecat/Noob exchange, quoted in full earlier in the thread*

Anyone with a monitor that can display a high enough res can run any game at "insane resolutions". That doesn't mean it's going to be smooth, especially since smooth is very subjective. Personally, I don't consider 25-30 frames smooth, which is what I got with my 6800GT, even lower with lots of action.

Does it really matter if it's their first card for SM3? I don't think so. Look at ATI's first try at DX9 and PS2.0: a MUCH better outing than NV's first.

I also agree with whoever said to wait till we actually see the cards to decide which to "stick" with.

While this would normally be a great and witty response, I must say I disagree.

ATI's first outing with DX9 that was so great was not really ATI's outing. They had purchased ArtX, who had done the bulk of the development on the R300.
And not that it mattered: how long did it take for DX9 to become a powerful force after the 9700 was released?
Now, now... don't be a fanboy, you must admit that. Just like SM3/DX9C today, it wasn't useful in many games! :p
When the shoe is on the other foot is when you always spot the fanboys.

ATI itself really didn't do anything spectacular (R300) to speak of. We have no idea how ATI would have done with DX9 hardware had they not bought ArtX; the same technology was used in the GameCube.

I should note, it was said that the 3dfx technology NV acquired was supposed to have been used in the NV30. So we know who got the better deal there.

NV would've been better off continuing to do their own thing. It's probably safe to say that 3dfx going under when they did was a good thing, because their next few GPUs were probably going to be horrid (at least in DX9).

Regardless, ATI has a LOT on the line now with the R520. I'll briefly outline:
1. They have to have a successful launch of AMR, their SLI competitor, and it has to come out on time and be fast.
2. The R520 has to be out ON TIME, and IN QUANTITY this time.
3. The R520's DX9C/SM3 performance must at least match the NV40 without being a disgrace. Instead of assuming it's faster, people had better settle on equal to NV's first SM3 outing. Everyone assumes ATI has all of NV's research on SM3; NV not only researched it, they put it out in retail form!
4. This is ATI's first core done on their own since before the R300. That means NEW DRIVERS need to be written after riding the easy train from the 9700 to the X800, not a good thing for ATI, as they do not excel at this (see the HDTV blunder). And the core has to be damn fast at everything it does (besides DX Next features).

A lot of risk. Needless to say, I will be selling ATI stock and buying Nvidia stock.
 

PrayForDeath

Diamond Member
The Splinter Cell 3 demo is an example of an SM3 game that runs horribly on NV4x hardware. I think what the UE3 engine programmers said about the 6600GT is pure marketing BS.
That doesn't mean SM3 is pointless; it actually has some pretty good features. It's just that, IMHO, NV4x isn't fast enough to support it.
 

Creig

Diamond Member
Originally posted by: housecat

While this would normally be a great and witty response, I must say I disagree.

ATI's first outing with DX9 that was so great was not really ATI's outing. They had purchased ArtX, who had done the bulk of the development on the R300.
And not that it mattered: how long did it take for DX9 to become a powerful force after the 9700 was released?
Now, now... don't be a fanboy, you must admit that. Just like SM3/DX9C today, it wasn't useful in many games! :p
When the shoe is on the other foot is when you always spot the fanboys.

So why are you so gung-ho on SM3.0 then? Going from DX8.1 to DX9.0 brought a lot of graphical enhancements. Going from SM2.0b to SM3.0 is mostly just a speed increase.



ATI itself really didn't do anything spectacular (R300) to speak of. We have no idea how ATI would have done with DX9 hardware had they not bought ArtX; the same technology was used in the GameCube.

I should note, it was said that the 3dfx technology NV acquired was supposed to have been used in the NV30. So we know who got the better deal there.

NV would've been better off continuing to do their own thing. It's probably safe to say that 3dfx going under when they did was a good thing, because their next few GPUs were probably going to be horrid (at least in DX9).

"We have no idea how ATI would have done", "supposed to have been used", "prob safe to say", "probably going to be horrid"...

How about sticking to facts?



Regardless, ATI has a LOT on the line now with the R520. I'll briefly outline:
1. They have to have a successful launch of AMR, their SLI competitor, and it has to come out on time and be fast.
2. The R520 has to be out ON TIME, and IN QUANTITY this time.
3. The R520's DX9C/SM3 performance must at least match the NV40 without being a disgrace. Instead of assuming it's faster, people had better settle on equal to NV's first SM3 outing. Everyone assumes ATI has all of NV's research on SM3; NV not only researched it, they put it out in retail form!
4. This is ATI's first core done on their own since before the R300. That means NEW DRIVERS need to be written after riding the easy train from the 9700 to the X800, not a good thing for ATI, as they do not excel at this (see the HDTV blunder). And the core has to be damn fast at everything it does (besides DX Next features).

1. ATI seems to be doing a pretty good job of selling their cards as it is. AMR motherboards will only increase their sales numbers.
2. Nvidia had an even longer "paper launch" period with the NV4X than ATI did with the R4XX, and it doesn't seem to have hurt their sales. The 6800U and 6800GT were very scarce for quite some time.
3. You're assuming.
4. You're assuming again. Stop that.



A lot of risk. Needless to say, I will be selling ATI stock and buying Nvidia stock.

You own ATI stock? Riiiggghhhhttt........


When the shoe is on the other foot is when you always spot the fanboys

'Bout the only thing you said that I agree with.


Originally posted by: housecat
This product is too expensive and has too much advanced technology.

Originally posted by: housecat
Who in the hell is against superior/advanced technology?

Oh wait, I'll answer that... ATI devotees, in this case.
 

SneakyStuff

Diamond Member
Originally posted by: PrayForDeath
The Splinter Cell 3 demo is an example of an SM3 game that runs horribly on NV4x hardware. I think what the UE3 engine programmers said about the 6600GT is pure marketing BS.
That doesn't mean SM3 is pointless; it actually has some pretty good features. It's just that, IMHO, NV4x isn't fast enough to support it.

I agree. It's not like I resent it or something, I just want it on my next card, along with support for whatever DX comes next. I don't think it matters now, because by the time the games that use it become mainstream, I'll have a new card.
 

Drayvn

Golden Member
Just want to add, if I can.

ATI's cards do support HDR!

About the SC3 thing: for some reason I have all 3 of the "SM3 features" (though SM3 isn't required for them) turned on -- HDR, parallax mapping, soft shadows. They are all enabled in my config file in the demo, but it doesn't actually show them in the game.

Weird...

When HL2 comes out with its HDR, it will also have parallax mapping, soft shadows, and all the rest that, for some reason, nearly everyone thinks you HAVE to have SM3 to run.

 

housecat

Banned
LOL, look at Creig defending ATI like a true champ.

I don't give a damn about NV/ATI. I do not understand the downplaying of SM3/DX9C by everyone here. It makes no sense to me at all.

Bring me two cards with identical speed where one has DX9C hardware, and I'm going to take the DX9C hardware. It's that simple.
The rest is all ATI fanboyism.
 

Drayvn

Golden Member
Originally posted by: housecat
LOL, look at Creig defending ATI like a true champ.

I don't give a damn about NV/ATI. I do not understand the downplaying of SM3/DX9C by everyone here. It makes no sense to me at all.

Bring me two cards with identical speed where one has DX9C hardware, and I'm going to take the DX9C hardware. It's that simple.
The rest is all ATI fanboyism.

SM3/DX9C are good things.

What's pretty funny is that the R400 cores have nearly everything DX9C requires!

All that's missing from SM3 is two things: looping and branching. That's it. Nada, two things, w00t!

 

Matthias99

Diamond Member
Originally posted by: housecat
I don't give a damn about NV/ATI. I do not understand the downplaying of SM3/DX9C by everyone here. It makes no sense to me at all.

Did you read any of the replies above (especially mine)? Today's cards are, IMO, too slow to really take advantage of it (unless you're talking about SLI 6800GTs or better, but that's currently an $800+ solution). Okay, so you get slightly better framerates in Far Cry -- the same framerates that ATI's R4XX hardware gets with PS2.0b. Yee-haw.

Bring me two cards with identical speed where one has DX9C hardware, and I'm going to take the DX9C hardware. It's that simple.
The rest is all ATI fanboyism.

I don't think anyone's saying that, at the same price and overall performance, they wouldn't rather have SM3.0 than SM2.0. Obviously, more functionality is better than less functionality.

But if it's $300 for an X800XL and $400 for a 6800GT (as it is right now in PCIe), then what? You'd rather pay $100 more *now* for SM3.0 that *might* help you a little bit in a year -- and possibly end up being too slow for you to use anyway?