imported_michaelpatrick33
Platinum Member
- Jun 19, 2004
Long live CaiNaM! Hear me, Hear me, Long Live CaiNaM who posted above with rational thought and fun loving joy! Long live Rational CAiNaM
Originally posted by: Shad0hawK
Originally posted by: CaiNaM
sounds like some folks might be in for a letdown.... straight from the horse's mouth (crytek):
In current engine there are no visible difference between PS2.0 and PS3.0. PS3.0 is used automatically for per-pixel lighting depending on some conditions to improve speed of rendering.
http://www.pcper.com/article.php?ai...pe=expert&pid=2
6800 owners will not benefit from "better image IQ" compared to x800 users in Far Cry. sounds like there could be some performance benefits, however.
i "lmao" every time i read this comment. while true in a vastly oversimplified sense... many are often misled because they take it out of context, not realizing (or simply ignoring) the fact that speed has a great impact on visual quality in the following 2 ways.
1. the ability to do more shaders means the ability to do more shader effects... more shader effects means better IQ, because a given scene can have more (or more complex) lighting effects through shaders, thus the IQ is "better"... because of the speed of shader performance.
2. a card that can "just run faster" can also run at a higher resolution... and we all know higher resolutions look better than lower resolutions. if you do not believe me, fire up your favorite game at 640x480, then run it at 1600x1200 at the same quality settings... it will be obvious which "looks better" even with the same LOD settings...
so snipping that quote (or a few quotes) and performing a gross act of oversimplification by taking it to mean "SM3 will not look any better than SM2" is nothing more than deluding yourself... anyone that has increased a resolution to make a game look better has the verifiable evidence right in front of them on their monitor.
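the resolution comparison above is easy to check with back-of-the-envelope arithmetic (a rough sketch; it only counts pixels per frame and assumes rendering cost scales with pixels shaded, i.e. a fill-rate-bound scene):

```python
# Pixel counts for the two resolutions mentioned above.
low = 640 * 480        # 307,200 pixels per frame
high = 1600 * 1200     # 1,920,000 pixels per frame

ratio = high / low
print(f"{ratio:.2f}x more pixels to shade per frame")  # 6.25x
```

so a card that is "just faster" at running the same shaders buys you roughly 6x the pixel budget at the same frame rate, which is exactly the IQ-through-speed argument being made.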
Taking what we have been shown of Shader 3.0 by NVIDIA and utilizing the information gathered within this interview, we begin to have a clearer picture of how Shader 3.0 will affect your gaming experience. Whereas many had taken the position that Shader 3.0 would make significant improvements in image quality over Shader 2.0, we are beginning to see that is hardly the case. Although the foundation is present in Shader 3.0 to give developers even more freedom to maximize image quality, the immediate gains seem to be based entirely upon performance. Mr. Yerli's answer to question eight is an excellent summation of where we currently stand regarding Shader 3.0.
Several new features in 3.0 shader model aren't for free. Texture access in vertex shader is expensive, dynamic branching is not for free. So if developer has to utilize some features of PS3.0 shader model he/she should design shader in way which will remove other much important bottlenecks of application (CPU limitations, states/shaders changing, make shorter shader execution path, decrease streams bandwidth...).
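Yerli's point that "dynamic branching is not for free" can be sketched with a toy cost model (the numbers are hypothetical, purely to illustrate why branches cost extra on SIMD-style pixel pipelines: pixels are shaded in groups, and a group that diverges pays for both sides of the branch):

```python
# Toy SIMD cost model: pixels are shaded in small groups (quads).
# If pixels within one group take different branches ("divergence"),
# the hardware effectively executes both branch bodies for the group.
def group_cost(branch_taken, cost_if, cost_else):
    """branch_taken: one bool per pixel in the group."""
    if all(branch_taken):
        return cost_if                 # uniform: only the 'if' path runs
    if not any(branch_taken):
        return cost_else               # uniform: only the 'else' path runs
    return cost_if + cost_else         # divergent: pay for both paths

# Uniform group: the branch is effectively free.
print(group_cost([True, True, True, True], cost_if=10, cost_else=2))   # 10
# Divergent group: both paths are executed.
print(group_cost([True, False, True, True], cost_if=10, cost_else=2))  # 12
```

which is why a developer "should design shader in way which will remove other bottlenecks" rather than assuming branching alone makes SM3.0 shaders cheaper.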
Originally posted by: CaiNaM
you should read the article which i linked before accusing anyone of anything; it's not like anything was taken out of context. here's their conclusion:
Originally posted by: CaiNaM
Taking what we have been shown of Shader 3.0 by NVIDIA and utilizing the information gathered within this interview, we begin to have a clearer picture of how Shader 3.0 will affect your gaming experience. Whereas many had taken the position that Shader 3.0 would make significant improvements in image quality over Shader 2.0, we are beginning to see that is hardly the case. Although the foundation is present in Shader 3.0 to give developers even more freedom to maximize image quality, the immediate gains seem to be based entirely upon performance. Mr. Yerli's answer to question eight is an excellent summation of where we currently stand regarding Shader 3.0.
this was primarily stated for those who make comments such as, "I would love to see the SM3.0 difference, especially lighting and water ", etc. as there will be no differences to see.
again, it may be possible that some things can be done at less "cost", however it's uncertain whether there will be any tangible performance benefits with comments from the developers such as:
Several new features in 3.0 shader model aren't for free. Texture access in vertex shader is expensive, dynamic branching is not for free. So if developer has to utilize some features of PS3.0 shader model he/she should design shader in way which will remove other much important bottlenecks of application (CPU limitations, states/shaders changing, make shorter shader execution path, decrease streams bandwidth...).
there's also other misinformation, such as claims dynamic branching requires sm3, which is certainly not the case as shown in the demo you will find, along with some background, in this
thread.
Originally posted by: CaiNaM
there are of course tangible benefits to sm3, but marketing has blown it much out of proportion. while it's certainly not as insignificant as ati tries to make it seem, neither is it as significant as nvidia makes it out to be. the truth is, as we often find, somewhere in the middle.
There are no IQ differences between the two paths. SM 3.0 just makes things faster which basically now makes the NV40 the preferred card for Far Cry.
I'm most interested to see whether the enhanced IQ of this patch is provided through offset or displacement mapping as well. (not to mention just see the difference in IQ!)
it was the first game, AFAIK, to have any PS2 effects in it.
Actually no, it wasn't. I've pointed this out to you many times but you seem to feel that continually repeating the same rubbish makes it true.
I hadn't seen as good of screenshots of PS2 differences, and was not that impressed with the shinier pipes and water, as you say.
How could you be impressed given you never even played Far Cry then? Or are you admitting that you were talking rubbish back then?
The patched Far Cry screenshots I've seen lately look a lot better than the Far Cry I'm playing, so I'm more impressed.
That's astonishing considering Anand made it clear there was no IQ difference between the two. If you're going to be nVidia biased I suggest you actually find out what it is you're biased about.
As for my "love" of SM3, it's just part of the big picture for me:
But no such "big picture" on ATi's 2x performance gain and IQ gain in SM 2.0, right?
Potential improvements of displacement mapping and geometric instancing, with huge developer support for this hardware/feature set.
I thought "potential" doesn't count? Or is that only when ATi has potential?
2. Potential SLI power heretofore unimagined
Which has absolutely nothing to do with SM 3.0.
4. Not the same feature set I've been playing with the better part of the last two years
I thought features don't matter as long as performance is similar? Or was that the tune that you sang only when you had your NV30 fetish?
What we saw here at PC Perspective was a moderate to impressive increase in performance in FarCry with the new 1.2 patch enabling SM3.0 support on NVIDIA's 6800 series of cards. The performance enhancements varied across the different levels of AA and AF as well as resolution, but the best results for NVIDIA came when the resolution was high and all bells and whistles were turned on in the control panel. If you have a 6800 series card, this new patch and graphics driver will give you a nice increase in performance for one of the most popular shooter games for free -- and that is something you can't beat. Is this performance increase enough to get people to take notice of SM3.0 and buy a card that utilizes it? I don't know if a single game is enough to convince anyone what card to buy, especially with the likes of Doom3 and Half Life 2 coming out within 2 months. In either event, the success of Pixel Shader 3.0 here is a feather in NVIDIA's cap.
Originally posted by: Shad0hawK
i agree with you there, but being able to up my res because of faster performance will still make my game look better
goodnight!
The main point that the performance numbers make is not that SM3.0 has a speed advantage over SM2.0 (as even the opposite may be true), but that single pass per-pixel lighting models can significantly reduce the impact of adding an ever increasing number of lights to a scene.
It remains to be seen whether or not SM3.0 offers a significant reduction in complexity for developers attempting to implement this advanced functionality in their engines, as that will be where the battle surrounding SM3.0 will be won or lost.
Originally posted by: CaiNaM
Originally posted by: Shad0hawK
i agree with you there, but being able to up my res because of faster performance will still make my game look better
goodnight!
i certainly havent read anything that would make me expect a performance increase to that degree. from anand's fc patch 2 preview:
The main point that the performance numbers make is not that SM3.0 has a speed advantage over SM2.0 (as even the opposite may be true), but that single pass per-pixel lighting models can significantly reduce the impact of adding an ever increasing number of lights to a scene.
It remains to be seen whether or not SM3.0 offers a significant reduction in complexity for developers attempting to implement this advanced functionality in their engines, as that will be where the battle surrounding SM3.0 will be won or lost.
to me that seems to say the "jury is still out" on whether it will decrease the "cost" of coding to where IQ could be improved by using more specialized rendering effects allowed by possible performance gains....
and goodnight!
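Anand's point about single-pass per-pixel lighting is essentially an overhead argument, and it can be sketched with a toy cost model (the per-pass and per-light costs below are hypothetical, chosen only to show the shape of the two curves: multipass rendering resubmits geometry and changes state once per light, while a longer single-pass shader loops over lights in one submission):

```python
# Toy cost model for lighting a scene with N lights.
PASS_OVERHEAD = 100   # hypothetical: geometry resubmit + state changes per pass
LIGHT_COST = 30       # hypothetical: per-light shading work

def multipass(lights):
    # One full pass per light: overhead is paid every time.
    return lights * (PASS_OVERHEAD + LIGHT_COST)

def single_pass(lights):
    # One pass total; only the shading work grows with light count.
    return PASS_OVERHEAD + lights * LIGHT_COST

for n in (1, 4, 8):
    print(n, multipass(n), single_pass(n))
```

with one light the two models cost the same, but each extra light adds the full pass overhead in the multipass model and only the shading work in the single-pass one, which is why "adding an ever increasing number of lights" is where the single-pass approach pulls ahead.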
Originally posted by: Regs
Well, as it looks right now Far Cry 1.2 doesn't offer much new to the user as expected. It's more helpful to the game developer right now.
The performance gain is negligible to the point where there are too many other variables to consider, while IQ doesn't improve one bit.
Site Link
Still, I'm buying a 6800GT though.
Originally posted by: keysplayr2003
Originally posted by: Safeway
Wait for X800 optimized drivers. Maybe ATI can actually match the nVidia cards. XT PE to Ultra Extreme, XT to Ultra, Pro to GT.
Match the Nvidia cards how? By handling SM3.0 via software drivers? Not likely. The only way for ATI to increase its performance at this point is to sacrifice its IQ and that has been ATI's bread and butter for the last 2 years. I don't know how badly they want to stay ahead of NV but if they start to trade performance for IQ, it will be all over the review sites. Just like Nvidia sacrificed major IQ in the NV3x series to compete with the performance of ATI R360. We shall see what unfolds. I guess it all comes full circle eventually.
Originally posted by: ZobarStyl
The Inq (for whatever that's worth) said that the 6800's were running at 30% below top speed, but I've never heard anyone say the x800's were that way as well...link? From a basic logic standpoint it makes sense that a new architecture could be pushed farther than one that's been refined for 2 years already. Does that mean ATi won't get any driver gains out of the x800's? Hell no. But it does mean that the 6800's may have some power left to be tapped that won't show for a couple more driver revisions...but point at hand they have already gone from totally losing to ATi in FC perf to the GT sometimes beating the XTPE...it's safe to say that there's definitely been some improvement on the NV end.
Originally posted by: CaiNaM
Originally posted by: ZobarStyl
The Inq (for whatever that's worth) said that the 6800's were running at 30% below top speed, but I've never heard anyone say the x800's were that way as well...link? From a basic logic standpoint it makes sense that a new architecture could be pushed farther than one that's been refined for 2 years already. Does that mean ATi won't get any driver gains out of the x800's? Hell no. But it does mean that the 6800's may have some power left to be tapped that won't show for a couple more driver revisions...but point at hand they have already gone from totally losing to ATi in FC perf to the GT sometimes beating the XTPE...it's safe to say that there's definitely been some improvement on the NV end.
ati's engineers stated that during the ati/adaptive af chat on ati's website some time back. the mem controller was only running at around 30% efficiency... when all was said and done they were talking overall performance gains in the 10-15% area when drivers were optimized.
I am sure nVidia spends a lot more on R&D, although I could be deathly mistaken.
Originally posted by: Safeway
Can you get an R&D quote from ATI?
Originally posted by: reever
Originally posted by: Safeway
Can you get an R&D quote from ATI?
Unless they specifically state how much they spent, there is no way of knowing from simple annual reports, as neither company makes only graphics cards.