Originally posted by: Wreckage
http://developer.nvidia.com/object/using_vertex_textures.html
The person you are quoting is an ATI employee. Do you expect him to say they f***ed up?
Originally posted by: rbV5
Why would we listen to Nvidia more than ATI?
Isn't it obvious?
Originally posted by: rbV5
Are we just picking on ATI? Or are we ignoring the fact that, according to my October 2005 DirectX card capabilities chart from Microsoft, there are no parts that support 100% of DX9, including NV's 7800 series? Far from it, in fact.
Looks to me like compromises have been made by every GPU manufacturer, and there is a lot of the DX API unsupported in hardware in even the newest cards.
MaxVertexShader30InstructionSlots = 544
VertexShaderVersion = 3.0
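For what it's worth, caps values like the two quoted above are what a developer would compare against the SM3.0 minimums; the DX9 spec requires VS 3.0 parts to report at least 512 instruction slots. A minimal sketch of that comparison (illustrative Python, not real D3DCAPS9 queries; the dictionary layout is made up):

```python
# Hypothetical sketch: checking reported D3D9 caps against the VS 3.0
# minimums. Real code reads these from a D3DCAPS9 structure; here the
# caps are just a dict using the values quoted in the post above.

SM3_MIN_VS_SLOTS = 512  # DX9 spec minimum for MaxVertexShader30InstructionSlots

def meets_vs3(caps: dict) -> bool:
    """Return True if the reported caps satisfy the VS 3.0 minimums."""
    major, minor = caps["VertexShaderVersion"]
    return (major, minor) >= (3, 0) and \
        caps["MaxVertexShader30InstructionSlots"] >= SM3_MIN_VS_SLOTS

# A card reporting the values quoted above (3.0, 544 slots) passes.
reported = {"VertexShaderVersion": (3, 0),
            "MaxVertexShader30InstructionSlots": 544}
print(meets_vs3(reported))  # -> True
```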
Originally posted by: crazydingo
Originally posted by: rbV5
Why would we listen to Nvidia more than ATI?
Isn't it obvious?
As stated earlier, this is not mandatory in the SM3 spec.
Such a workaround would likely involve a performance penalty, but I doubt it would be a major hit. The larger issue is probably just the fact that the workaround would require special consideration from developers, because the GPUs lack a straightforward vertex texture fetch capability.
I looked at that chart. The 6800 and 7800 met or exceeded all SM3.0 functions for DirectX9. The X1xxx series from ATI was not even listed.
Originally posted by: Pete
crazydingo, actually I was wrong. Apparently Pacific Fighters makes use of vertex textures. I found this out from an older thread at B3D that was recently linked, and the nVidia page that Wreckage linked confirms it.
dnavarro, again, virtual is not the same thing as displacement mapping. The former is a pixel shader effect (read: an optical illusion of depth on a flat surface, just like the excellent parallax occlusion mapping in ATI's R520's Toy Shop demo), while real displacement mapping creates geometry (read: real bumps).
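The distinction here can be sketched in a few lines (illustrative Python, not shader code; the heights, vectors, and scale factors are made-up example values): parallax-style techniques only shift the texture lookup coordinate per pixel, while displacement mapping actually moves vertex positions.

```python
# Illustrative sketch (not shader code) of the difference between a
# pixel-shader depth illusion and real displacement mapping.

def parallax_uv(uv, height, view_xy, scale=0.05):
    """Pixel-shader illusion: shift the texture lookup; geometry untouched."""
    u, v = uv
    dx, dy = view_xy  # view direction projected onto the surface plane
    return (u + dx * height * scale, v + dy * height * scale)

def displace_vertex(pos, normal, height, scale=1.0):
    """Real displacement: move the vertex along its normal -> new geometry."""
    return tuple(p + n * height * scale for p, n in zip(pos, normal))

uv2 = parallax_uv((0.5, 0.5), height=0.2, view_xy=(1.0, 0.0))
pos2 = displace_vertex((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), height=0.2)
print(uv2)   # only the lookup coordinate changed; the surface is still flat
print(pos2)  # the vertex itself moved along its normal: a real bump
```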
malG, the bottom line is actually what Humus said (and what ATI confirmed MS is OK with): R520 is technically within the SM3 spec. And they can do the equivalent of vertex texturing using their R2VB fourcc implementation.
Tod33, the special consideration is that vertex texturing on R520 is currently expected to be supported with a workaround, so devs will indeed need to pay special attention to see if the GPU uses either the expected SM3 bit or ATI's R2VB fourcc workaround. ATI did the same with geometry instancing and their SM2 series, AFAIK.
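That "special consideration" amounts to a capability check with a fallback. A hedged sketch of what that developer-side logic might look like (the function and flag names are made up for illustration; the real checks would go through `IDirect3D9::CheckDeviceFormat`, roughly with the `D3DUSAGE_QUERY_VERTEXTEXTURE` usage flag for native vertex texturing and with ATI's `R2VB` fourcc for the workaround):

```python
# Hypothetical sketch of the developer-side fallback described above.
# The boolean inputs stand in for the results of driver capability
# probes; this is not an actual D3D9 call sequence.

def choose_vertex_texture_path(supports_native_vtf, supports_r2vb_fourcc):
    """Pick how to feed per-vertex texture data to the vertex pipeline."""
    if supports_native_vtf:
        return "native-vtf"   # NV40/G70 style: texture fetch in the vertex shader
    if supports_r2vb_fourcc:
        return "r2vb"         # R520 style: render-to-vertex-buffer workaround
    return "cpu-fallback"     # neither path: do the displacement on the CPU

print(choose_vertex_texture_path(True, False))   # -> native-vtf
print(choose_vertex_texture_path(False, True))   # -> r2vb
```

The point is that the renderer ends up with two code paths for the same effect, which is exactly the extra work devs would have to sign up for.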
Originally posted by: M0RPH
Originally posted by: ubergeekmeister
Ooh. Sounds like ATI is being dodgy. That could end up bringing many X1000 buyers to tears.
ATI is not being dodgy, they're being smart. read here
Originally posted by: Pete
jim, I don't understand your first point at all. They have 16 pipes, but they're obviously clocked much higher, and apparently they're more efficient. (And there are actually two ALUs per pipe in the G70, and "1.5" in the R520, so it's more like 48 vs. 24--ignoring clock speeds and efficiency.)
Devs would probably be more handicapped by the huge amount of less-than-amazing hardware on the market, like X300s and 6200s. And in fact the X1600XT looks to be amazing in its segment, what with 12 pixel "pipes" running at 590MHz. That is, if ALUs were all that mattered, which X1600XT vs. 6600GT/6800 benchmarks indicate is not the case with most current games....
Originally posted by: jim1976
Originally posted by: Pete
jim, I don't understand your first point at all. They have 16 pipes, but they're obviously clocked much higher, and apparently they're more efficient. (And there are actually two ALUs per pipe in the G70, and "1.5" in the R520, so it's more like 48 vs. 24--ignoring clock speeds and efficiency.)
Devs would probably be more handicapped by the huge amount of less-than-amazing hardware on the market, like X300s and 6200s. And in fact the X1600XT looks to be amazing in its segment, what with 12 pixel "pipes" running at 590MHz. That is, if ALUs were all that mattered, which X1600XT vs. 6600GT/6800 benchmarks indicate is not the case with most current games....
Pete, I didn't say that this will affect the overall performance, or that the XT is not efficient; I just stated the fact..
At the most GPU-limited resolution, 2048x1536, though, the XT and GTX have almost identical performance. Doesn't that say anything?
Originally posted by: Wreckage
Quoting another forum does nothing, as other people in that same forum dispute it. When more SM3.0 games start showing up, the problem will become more evident.
Yes, but I don't see your point either; you're making a mountain out of a molehill. :roll:
Originally posted by: crazydingo
Originally posted by: Wreckage
Quoting another forum does nothing, as other people in that same forum dispute it. When more SM3.0 games start showing up, the problem will become more evident.
Yes, but I don't see your point either; you're making a mountain out of a molehill. :roll:
R520 passed SM3 spec.
Originally posted by: Matt2
It may be optional, but if the developer chooses to implement it, i.e. Unreal3, then doesn't that make R520 suckers up sh1t creek without a paddle?
Well, Unreal3 also makes heavy use of dynamic branching, where the R520 is 60-70% better than the G70.
Originally posted by: johnnqq
this generation sucks
Originally posted by: Matt2
It may be optional, but if the developer chooses to implement it, i.e. Unreal3, then doesn't that make R520 suckers up sh1t creek without a paddle?
First of all, not all cards on the market are VS3 to begin with. Secondly, do you think Epic will spend time making a feature integral to its game knowing that half the market won't support it? Thirdly, apparently ATI can support it, and in a way where the dev wouldn't know they're not using vertex hardware for the texturing. So this may indeed be making a mountain out of a molehill. I guess it depends on ATI stepping up to the plate.