X1000 series not 100% with SM 3.0?

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
A technical guy weighed in here. There was also a thread about G70 vertex texturing performance here.
 

crazydingo

Golden Member
May 15, 2005
1,134
0
0
Originally posted by: RajunCajun
I guess that means benchmarks done using SM 3.0 should be looked at very carefully to make sure we're comparing apples to apples.
ATI cards might take a hit if that feature is used; in any case, it will be obvious in the benchmarks.
 

RajunCajun

Senior member
Nov 30, 2000
213
0
0
I am far from being an expert, but this is the way I see it.

The Shader Model 3.0 spec has certain features that video cards have to support in order to run in that mode. It's very much like how each version of DirectX (5, 5.2, 6, etc.) builds on the features of the previous version and adds some new ones.

Remember nVidia's FX line of cards (I'm still a proud owner of an FX5900)? People questioned those cards' claim to the DirectX 9 title because their pixel shaders ran at either 16-bit or 32-bit precision, while DX9 called for 24-bit precision (which ATI used). The main reason FX cards were slow in SOME titles, most notably Half-Life 2, is that Pixel Shader 2.0 called for 24-bit precision, so the FX had to run at 32-bit instead, resulting in slow performance. To nVidia's credit, they were able to optimize shader performance in games by replacing the PS 2.0 calls for 24-bit precision with something lower, so the game ran well. Look at Halo - in general the FX5900/5950 outperformed the 9800 Pro/9800 XT because of this. The problem was that EACH game had to be coded like that in the drivers.

So I feel we should question ATI's usage of SM 3.0 in the X1000 line. It may be that using their "workaround" results in better performance versus 7800 cards.

I don't really know, but I can question!
 

crazydingo

Golden Member
May 15, 2005
1,134
0
0
Originally posted by: RajunCajun
So I feel we should question ATI's usage of SM 3.0 in the X1000 line. It may be that using their "workaround" results in better performance versus 7800 cards.
Maybe you haven't read the articles you posted yourself. :roll: The article mentions that the "workaround" needs to be used by the developer, not ATI.
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
RajunCajun, maybe read my links. The truth is in there, not in your second post.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
I guess that means benchmarks done using SM 3.0 should be looked at very carefully to make sure we're comparing apples to apples.
How is that any different to any benchmark?
 

tuteja1986

Diamond Member
Jun 1, 2005
3,676
0
0
But from the benchmark results I have been seeing at various hardware websites, the X1000 series has the better Shader 3.0 implementation.

But then how does ATI win in Far Cry and Splinter Cell ;( with Shader 3.0 turned on?
Also, Xbit Labs had a review, and they also said the Shader 3.0 support on the X1000 was great.
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Simple, I don't believe any SM3 game uses vertex texturing ATM. I believe it's mainly a theoretical issue at the moment.

Also, most reviews are praising the X1000's pixel shaders (specifically, their low branching penalty), not necessarily their entire implementation.
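
For anyone wondering what "supports vertex texturing" actually means in practice, here is a rough, untested sketch (it assumes the DirectX 9 SDK headers and linking against d3d9.lib) of how an application asks the D3D9 runtime which texture formats a card accepts for vertex texture fetch. A part can expose vertex shader 3.0 and still report no usable formats here, which is basically the whole argument in this thread; the format list below is only for illustration.

// Query which texture formats the default adapter supports for vertex
// texture fetch (D3DUSAGE_QUERY_VERTEXTEXTURE).
#include <d3d9.h>
#include <cstdio>

int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    // Formats commonly discussed for vertex texturing; NV40/G70 reportedly
    // accept only the two 32-bit float formats, while R520 reportedly
    // accepts none at all.
    const D3DFORMAT formats[] = { D3DFMT_R32F, D3DFMT_A32B32G32R32F,
                                  D3DFMT_A16B16G16R16F, D3DFMT_A8R8G8B8 };
    const char* names[] = { "R32F", "A32B32G32R32F", "A16B16G16R16F", "A8R8G8B8" };

    for (int i = 0; i < 4; ++i)
    {
        HRESULT hr = d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                            D3DFMT_X8R8G8B8,              // display format
                                            D3DUSAGE_QUERY_VERTEXTEXTURE,
                                            D3DRTYPE_TEXTURE, formats[i]);
        printf("%-16s vertex texture fetch: %s\n", names[i],
               SUCCEEDED(hr) ? "supported" : "not supported");
    }

    d3d->Release();
    return 0;
}

Nothing in the SM3 spec forces any particular format to succeed in that query, which is why "zero supported formats" can still be technically compliant.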
 

crazydingo

Golden Member
May 15, 2005
1,134
0
0
Originally posted by: Pete
Simple, I don't believe any SM3 game uses vertex texturing ATM. I believe it's mainly a theoretical issue at the moment.

Also, most reviews are praising the X1000's pixel shaders (specifically, their low branching penalty), not necessarily their entire implementation.
Thanks. :thumbsup:
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: Pete
Simple, I don't believe any SM3 game uses vertex texturing ATM. I believe it's mainly a theoretical issue at the moment.

Also, most reviews are praising the X1000's pixel shaders (specifically, their low branching penalty), not necessarily their entire implementation.

Will Unreal 3 use it? That's killer if it does.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
A lot of sites are now questioning ATI's "SM3.0 done right" statement. They clearly left out a key feature of the spec and are hoping that game developers will program a "workaround" for it.
 

dnavarro

Member
Oct 10, 2004
46
0
0
I have read about this topic on several forums, and it was brought up that the new Unreal Engine does make use of this feature. There was in fact a news item months ago (before the G70 or R520 were released) where Tim Sweeney at Epic said that the NVIDIA cards would render the new Unreal engine correctly while the ATI cards would have some type of incompatibility issues (or artifacts). I also remember hearing about this but not thinking about it at the time. Is it possible Tim Sweeney was talking about this SM3.0 issue?

Edit: found a related item. Here's a blurb where Tim Sweeney talks about UE3 and displacement mapping:

"With regards to "virtual displacement mapping" or what is known as offset-mapping, other than the normally expected nice bumpy edges of corner walls, will this software technology be used for other things like, maybe, bullet-holes on walls or enemies? Currently, what is virtual displacement mapping utilized for in UE3??

We're using virtual displacement mapping on surfaces where large-scale tessellation is too costly to be practical as a way of increasing surface detail. Mostly, this means world geometry -- walls, floors, and other surfaces that tend to cover large areas with thousands of polygons, which would balloon up to millions of polygons with real displacement mapping. On the other hand, our in-game characters have enough polygons that we can build sufficient detail into the actual geometry that virtual displacement mapping is unnecessary. You would need to view a character's polygons from centimeters away to see the parallax"

link :
http://www.beyond3d.com/interviews/sweeneyue3/index.php?p=2

There's a thread at HardForum with Brent Justice (HardOCP video editor) stating that one thing this would affect is "displacement mapping". His take is that nothing uses it now, but it sure sounds like Unreal Engine 3.0 will :(
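
For reference, the "virtual displacement mapping" Sweeney describes is just a per-pixel texture coordinate shift rather than real geometry. Here's a minimal sketch of the math in plain C++ (it would normally run in a pixel shader); the scale/bias values and the sampleHeight() helper are made-up placeholders.

#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Stand-in for a height-map lookup; a real engine samples a texture here.
float sampleHeight(Vec2 uv)
{
    // hypothetical procedural bumps, just to keep the sketch self-contained
    return 0.5f + 0.5f * std::sin(uv.x * 40.0f) * std::sin(uv.y * 40.0f);
}

// Shift the texture coordinate along the tangent-space view direction in
// proportion to the stored height. This fakes surface relief per pixel
// without adding polygons, which is exactly the trade-off Sweeney describes.
Vec2 parallaxOffset(Vec2 uv, Vec3 viewTS, float scale = 0.04f, float bias = -0.02f)
{
    float h = sampleHeight(uv) * scale + bias;       // signed offset amount
    return { uv.x + viewTS.x / viewTS.z * h,         // offset toward the viewer
             uv.y + viewTS.y / viewTS.z * h };
}

Real displacement mapping, by contrast, actually moves vertices, which is where vertex texture fetch (or a workaround like R2VB) comes into play.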


D
 

crazydingo

Golden Member
May 15, 2005
1,134
0
0
Originally posted by: Wreckage
A lot of sites are now questioning ATI's "SM3.0 done right" statement. They clearly left out a key feature of the spec and are hoping that game developers will program a "workaround" for it.
Perhaps you should read a little bit before claiming it as a "key feature". :roll:

Taken from here:
Vertex fetch is optional, so it's not needed to be SM3.0 compliant.

You basically have three options. Do like nVidia and implement it, but be slow and very limited. Or spend an assload of die space to put equal texturing capabilities in the vertex shaders, at the expense of general performance or other features you could use that space for. Or defer this feature until it makes more sense and use R2VB instead for today's needs as that's going to outperform vertex fetch and will work on previous generation hardware as well. Personally I think the last option is the right choice.
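
To make that concrete, here's a rough sketch of the choice a developer would face, assuming a plain D3D9 renderer: use vertex texture fetch if the device exposes a usable format, otherwise fall back to something like R2VB or a CPU path. setupR2VBPath() and setupCPUPath() are hypothetical placeholders, not real API calls.

#include <d3d9.h>

void setupR2VBPath() { /* hypothetical: render displacements into a vertex buffer */ }
void setupCPUPath()  { /* hypothetical: displace the vertices on the CPU */ }

void setupDisplacement(IDirect3D9* d3d, IDirect3DDevice9* dev,
                       IDirect3DTexture9* heightTex, bool r2vbAvailable)
{
    // Does the HAL device accept an FP32 texture for vertex texture fetch?
    HRESULT hr = d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                        D3DFMT_X8R8G8B8,
                                        D3DUSAGE_QUERY_VERTEXTEXTURE,
                                        D3DRTYPE_TEXTURE, D3DFMT_R32F);
    if (SUCCEEDED(hr))
    {
        // Vertex texture fetch path: bind the height map to a vertex sampler.
        // Point sampling only - NV40/G70 don't filter vertex textures, which
        // is part of the "slow and very limited" complaint above.
        dev->SetTexture(D3DVERTEXTEXTURESAMPLER0, heightTex);
        dev->SetSamplerState(D3DVERTEXTEXTURESAMPLER0, D3DSAMP_MINFILTER, D3DTEXF_POINT);
        dev->SetSamplerState(D3DVERTEXTEXTURESAMPLER0, D3DSAMP_MAGFILTER, D3DTEXF_POINT);
    }
    else if (r2vbAvailable)
    {
        setupR2VBPath();   // ATI's render-to-vertex-buffer alternative
    }
    else
    {
        setupCPUPath();
    }
}

Either way, the point stands: it's the game developer, not the driver, who has to pick the path.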
 

Hacp

Lifer
Jun 8, 2005
13,923
2
81
Originally posted by: crazydingo
Originally posted by: Wreckage
A lot of sites are now questioning ATI's "SM3.0 done right" statement. They clearly left out a key feature of the spec and are hoping that game developers will program a "workaround" for it.
Perhaps you should read a little bit before claiming it as a "key feature". :roll:

Taken from here:
Vertex fetch is optional, so it's not needed to be SM3.0 compliant.

You basically have three options. Do like nVidia and implement it, but be slow and very limited. Or spend an assload of die space to put equal texturing capabilities in the vertex shaders, at the expense of general performance or other features you could use that space for. Or defer this feature until it makes more sense and use R2VB instead for today's needs as that's going to outperform vertex fetch and will work on previous generation hardware as well. Personally I think the last option is the right choice.


UE3 is an engine that people have been waiting for. I'm hoping that they will code for ATI, or that ATI pays them to.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: crazydingo
Originally posted by: Wreckage
A lot of sites are now questioning ATI's "SM3.0 done right" statement. They clearly left out a key feature of the spec and are hoping that game developers will program a "workaround" for it.
Perhaps you should read a little bit before claiming it as a "key feature". :roll:

Taken from here:
Vertex fetch is optional, so it's not needed to be SM3.0 compliant.

You basically have three options. Do like nVidia and implement it, but be slow and very limited. Or spend an assload of die space to put equal texturing capabilities in the vertex shaders, at the expense of general performance or other features you could use that space for. Or defer this feature until it makes more sense and use R2VB instead for today's needs as that's going to outperform vertex fetch and will work on previous generation hardware as well. Personally I think the last option is the right choice.

The person you are quoting is an ATI employee. Do you expect him to say they f***ed up?
 

malG

Senior member
Jun 2, 2005
309
0
76
The bottom line: the lack of it means the X1000 series is NOT fully SM3.0 compliant.

 

Sc4freak

Guest
Oct 22, 2004
953
0
0
Originally posted by: malG
The bottom line: the lack of it means the X1000 series is NOT fully SM3.0 compliant.

Um, why? The vertex fetch feature is optional, so the X1000 series is still SM3 compliant without it.
 

swatX

Senior member
Oct 16, 2004
573
0
0
Originally posted by: Hacp
Originally posted by: crazydingo
Originally posted by: Wreckage
A lot of sites are now questioning ATI's "SM3.0 done right" statement. They clearly left out a key feature of the spec and are hoping that game developers will program a "workaround" for it.
Perhaps you should read a little bit before claiming it as a "key feature". :roll:

Taken from here:
Vertex fetch is optional, so it's not needed to be SM3.0 compliant.

You basically have three options. Do like nVidia and implement it, but be slow and very limited. Or spend an assload of die space to put equal texturing capabilities in the vertex shaders, at the expense of general performance or other features you could use that space for. Or defer this feature until it makes more sense and use R2VB instead for today's needs as that's going to outperform vertex fetch and will work on previous generation hardware as well. Personally I think the last option is the right choice.


UE3 is an engine that people have been waiting for. I'm hoping that they will code for ATI, or that ATI pays them to.


Too late, it's already a "The Way It's Meant To Be Played" game, i.e. Nvidia got to them first.
 

crazydingo

Golden Member
May 15, 2005
1,134
0
0
Originally posted by: Wreckage
The person you are quoting is an ATI employee. Do you expect him to say they f***ed up?
Care to prove him wrong? :roll: It's not like he can BS his way around there.