JB-
You pointed out that Halo did not show the same delta.
I pointed out that Halo does not have a full precision mode and uses partial precision for the FX.
Partial precision for the NV3X under HL2 also showed the R9800 with a substantial lead, while Halo has the 5900 ahead of the 9800.
John C has already said there IS an IQ difference in D3 between the two. It's gonna depend on the game, I feel.
He said there was 'no discernable quality difference'; it's in one of B3D's interviews. Of course in the theoretical sense there is, as there is between FP32 and FP24.
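To put a rough number on that theoretical difference, here's a toy sketch. It's plain C++, not shader code, and a crude model that ignores exponent range and denormals, so it only shows relative mantissa precision:

    #include <cmath>
    #include <cstdio>

    // Round v to 'mant_bits' explicit mantissa bits (plus the implicit
    // leading 1), mimicking what storing it in a narrower format loses.
    double quantize(double v, int mant_bits)
    {
        if (v == 0.0) return 0.0;
        int e;
        double m = std::frexp(v, &e);             // v = m * 2^e, 0.5 <= |m| < 1
        double scale = std::ldexp(1.0, mant_bits + 1);
        return std::ldexp(std::floor(m * scale + 0.5) / scale, e);
    }

    int main()
    {
        double v = 1.0 / 3.0;                     // some shader intermediate
        std::printf("FP32 (23-bit mantissa): %.9f\n", quantize(v, 23));
        std::printf("FP24 (16-bit mantissa): %.9f\n", quantize(v, 16));
        std::printf("FP16 (10-bit mantissa): %.9f\n", quantize(v, 10));
    }

FP16's 10 mantissa bits buy you roughly 3 decimal digits, FP24's 16 bits about 5, and FP32's 23 bits about 7, which is why the gap is real on paper but hard to spot in a final frame.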
The general conclusion looking at TR, HL2, Halo, ShaderMark, 3DMark, RightMark, etc. is that the R9800 is much faster running PS2.0 shaders.
And the one actual game on that list that is out and that people want to own is as fast or faster on the 5900. That is what I can see with crystal clarity.
I am sure PS2.0 will become more of a limiting factor. But you have to remember not every game will use them in the same manner. AQ3, for example, only has one PS2.0 shader; HL2 had a ton of PS2.0 shaders.
When will PS 2.0 become more of a limiting factor? We have been hearing that this will happen 'soon' for about a year now, and so far we have two titles shipping, only one of them worth owning, and that one happens to be faster for the most part on nV hardware.
Radeon 9800 Pro, stock, and Radeon 9800 Pro 256 MB (OVERCLOCKED 405/371), Catalyst 3.7 drivers (cheat detection enabled/disabled... no difference)
GeForce 5900 Ultra, Det 52.xx, PS 2.0a, partial precision mode.
It ends up showing close to what I would expect in terms of a performance gap. I understand there is a difference, but...
Notice how many tests the FX can not even run. Shall we factor those in?
Refrast doesn't handle some of those properly. If refrast can't do it, then no way should any board be expected to.
Does it make more sense to map your benchmark to the DX9 standard or map it to something non-standard?
PS2.0A is part of the standard.
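And targeting it is nothing exotic; it's just another profile string to the stock HLSL compiler. A minimal sketch, assuming the updated DX9 SDK headers, with "shader.psh" and "main" as hypothetical names:

    #include <d3dx9shader.h>

    // Compile an HLSL pixel shader against the PS2.0a profile.
    LPD3DXBUFFER CompilePS20a()
    {
        LPD3DXBUFFER code = NULL, errors = NULL;
        HRESULT hr = D3DXCompileShaderFromFile(
            "shader.psh",                 // HLSL source (hypothetical file)
            NULL, NULL,                   // no #defines, default #include handling
            "main",                       // entry point (hypothetical)
            "ps_2_a",                     // the PS2.0a target profile
            D3DXSHADER_PARTIALPRECISION,  // force partial precision throughout
            &code, &errors, NULL);
        if (errors) errors->Release();    // compile log ignored for brevity
        return SUCCEEDED(hr) ? code : NULL;
    }

That D3DXSHADER_PARTIALPRECISION flag is also how you get the 'partial precision mode' in the 5900 numbers above without touching the shader source.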
Spam-
Departures from these APIs (OpenGL and Direct3D) would only add to the difficulties, wouldn't they?
Here's the problem: DX10 parts are being designed right now, and DX10 won't be finalized for some time yet. Who will we end up saying got it wrong? As of right now I'd have to say nVidia will be the odd man out, as they aren't supplying the next XBox chip. What nV ends up having to do is take a guess as to where ATi is going to go and try to exceed that with their NV50. If they waited for the DX10 specs to be finalized before beginning work on their DX10 part, they would be dead in the high end for years.
ATI, who developed their DX9 products EARLIER than Nvidia, got it right.
Microsoft adopted ATi's submission for DX9 over nVidia's.
Are you saying that ATI had greater access than Nvidia?
Yes, and it isn't just me saying it. You can ask the others involved in this discussion: MS went with ATi's submission for DX9. The reverse happened for DX8, where MS went with nVidia (which everyone knew was going to happen due to the XBox).
But if so, would it not only further the argument that an open consultative process of design is even more necessary?
You can run into issues with this. The IHVs are not going to want to show all of their cards; they are not going to want to let the other side know exactly what they are doing. For this generation of parts, nVidia decided to offer FP16, which is what Carmack has been asking for, along with FP32, which had a preexisting standard (IEEE) and is also useful for their Quadro parts in preview rendering for viz applications. Looking at their part, I would wager they were of the mind that game developers were going to utilize FP16 for their shaders until they made the leap to FP32 (as a side note, Sweeney has mentioned this is his intention, which lines up with Carmack's desire for FP16).
For the DX10 parts everyone will be FP32; it will allow combining the shader units so you won't have to deal with separate vertex and pixel shader units. It's easy in hindsight to say shoulda/coulda/woulda, but based on the information nV had, I don't think their choice in terms of accuracy was a bad one.
If Nvidia had designed for 24-bit precision instead of 16 and 32 bit, would we not have a different market in this place and time?
It would have hurt their potential on the pro part end. If they had the same register limitation, it actually would have hurt them in the gaming space, as they would be forced to do partial precision in 12-bit integer instead of FP16.
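To see why that would sting, here's a toy model, assuming the commonly cited NV3x fixed-point layout (12 bits, range [-2, 2), 10 fractional bits, usually called FX12). Unlike FP16, which floats its exponent out to a max of roughly 65504, anything overbright simply clamps:

    #include <cstdio>

    // Assumed FX12 layout: [-2, 2) with 10 fractional bits, no exponent.
    float to_fx12(float v)
    {
        if (v >=  2.0f) v =  2.0f - 1.0f / 1024.0f;  // clamp at top of range
        if (v <  -2.0f) v = -2.0f;
        long q = static_cast<long>(v * 1024.0f);     // 10 fractional bits
        return q / 1024.0f;
    }

    int main()
    {
        std::printf("8.0 -> %f (clamped)\n", to_fx12(8.0f));   // ~1.999
        std::printf("0.3 -> %f\n",           to_fx12(0.3f));   // ~0.2998
    }

Any overbright lighting term dies at 2.0, which is exactly the kind of thing FP16 was meant to avoid.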
Problems with the system of design, development, and implementation can be managed. What are the alternatives? An even more fragmented and divisive process? That would be in nobody's interest.
Things are actually a lot better now than they have been in the past. If you look at the specifications that existed when nV and ATi went into the design phase, nV is the only one that was following an existing standard. What's more, they also took the path with FP32 that ATi will be following with their DX10 parts. Neither the R3x0 nor the NV3X is fully DX9 compliant, neither of them is close actually, but the one thing we are left arguing about the majority of the time is their differences in PS performance.

Look back to the days of 3dfx, where they seemed utterly hell-bent on leaving out every feature they could, refusing to advance unless they could do so in a proprietary manner the overwhelming majority of the time. Back when it was nV versus 3dfx in the gaming enthusiast market, nV did what they could to end up being closest to the DX spec. They never got it completely right back then either (prior to the XB deal and the NV2X line being the basis for it); they were simply the closest of the big two. They are back to being 'close but not quite' there. What if we had the Rampage3 showing up with PS 1.1 and VS 1.1 support but with the next-generation TBuffer3 effects?

Really, ATi and nVidia both have a fairly close view right now of where the industry is headed, and even without standards set in advance they can come out with parts that are quite competitive.
Would it be advantageous for the specs to be out early? Maybe, but what happens if MS guesses wrong? With the NV30 it is easy to blame nV for being too aggressive with their expectations of the build process, but if MS sets the standards and date for a new generation of parts (due to DX availability) with requirements that are either far too simplistic or far too complex for the build process available, you end up with an entire multi-billion-dollar industry SOL.
Dave-
That didn't appear to be what you were saying at all:
That's what the quoting gets us; the conversation was in the context of 'cheating' when enabling AF.
If that's what the developer feels, then that's what the developer feels. Applying something through the control panel goes outside the specification of the application and is then up to the user and the board they purchased.
With the board they purchased they have quality-mode sliders that are supposed to enable AF and full trilinear; you are not getting this with either company right now if you enable it in the CP. I consider both of these implementations a hack; however, if one of them is cheating, then both of them are.
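To make 'the specification of the application' concrete, this is roughly what an app asks for in D3D9 when it wants AF plus full trilinear (a sketch; device creation elided, and the anisotropy level of 8 is just an example value):

    #include <d3d9.h>

    // The app itself requests AF + trilinear per sampler. A control panel
    // override substitutes filtering the application never asked for.
    void RequestAfTrilinear(IDirect3DDevice9 *dev)
    {
        dev->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_ANISOTROPIC);
        dev->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
        dev->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);  // trilinear between mips
        dev->SetSamplerState(0, D3DSAMP_MAXANISOTROPY, 8);
    }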
Let's repeat this one more time: ATI had the vast majority of the high-end DX9 market, and the largest slice of NVIDIA's DX9 market was for a board that is/was primarily being treated as DX8; in other words, they were not the market leader for their target DX9 base.
'High-end DX9 parts' is a minuscule market. If a game is coming from a smaller developer, they had better target it so it can run on a 5200 if they don't want to remain a small (or dead) developer.
Well, I've not read everyone's comments here; however, it doesn't seem that way - could it just be the way you are interpreting things?
Absolutely not. I'm not talking about this thread, I'm talking about on these forums. There are forum regulars in this discussion, I don't think it leaves much to try and figure out.
However, as I've said before, I do question whether the D3 engine will see such wide-scale adoption as previous id engines, due to the inherent nature of the lighting model not coping well with outdoor scenes. I want to see if D3 has any levels outdoors at all, and if so, how it handles them.
There are outdoor areas in Doom3. Since you don't have a helmet in Doom3, you have to run around and grab air canisters while you are outdoors. The biggest 'problem' with Doom3's outdoor engine is that you lose a lot of the dramatic effect of the lighting, as the infinite light is too strong (guess you have been following HL2 closer than D3). This has actually been discussed at B3D, IIRC (though I may be recalling wrong; I know I've seen it discussed in forums somewhere, and I doubt it was here; id talked about this issue explicitly in an interview in one of the print publications. I can't recall which one now, but if I remember I'll look through my last few months' worth of PC gaming mags and tell you which article it is in).