Anand's 9800XT and FX5950 review, part 2

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
If we pile all the evidence of this together we have nothing, nothing except comments from a man we know was paid millions of dollars from ATi.
And how much did nVidia pay you to ignore everything that's been going on?

You have ranted on about cheats in numerous other benches, show any evidence of it at all.
ROFL. Where have you been for the last six months?

Let's start with something simple - if nVidia's cheats are really bugs then explain to me why nVidia would take the time to implement measures so that anti-cheat programs stop working. If the "bugs" have been fixed then how could these programs have any effect on them? Why go to so much trouble to hide something that isn't there?

The things that supported your 'cheat' accusation against every bench nV ran are gone.
Right, so the compression has been removed and you've tried the anti-cheat programs and verified that there is no difference in either performance or image quality when they're employed. If not, then unless you have access to nVidia's driver source code you simply have no evidence to make that claim. And calling Gabe a liar isn't evidence either and doesn't disprove his comments.

Actually I've mentioned in multiple threads the reason I won't buy a FX is because of driver bugs. Funny that.
Right, so instead you'll stick to an inferior NV25 board and continue to shout that ATi's AF is unusable.

Which version was it directly before 3DMark2K3 came out?
Catalyst 2.5 was the last DX 8.x driverset, Catalyst 3.0 was the first DX 9 driverset.

Then they released their driver sets with replaced code and the performance went up considerably.
Prove it. Show me the report from FutureMark that illustrates their findings just like the cheats they found in nVidia's boards.

You won't find it of course. The only thing you'll find is one shader that was optimised but unfortunately relied on application detection to work. For good measure ATi promised to remove it and they did.

OTOH nVidia's cheats were numerous, and when anti-cheat programs like RivaTuner defeated the detection, performance across the board dropped to levels similar to those of previous drivers that hadn't been "optimised" yet. nVidia's response? In addition to badgering FutureMark they released drivers that stopped anti-cheat programs from working.

Rather bizarre actions from a company that you claim has genuine driver bugs that have now been fixed, wouldn't you say?

If they already had a bunch of shader replacement code in there, and they added a whole bunch more, how does their file size keep dropping?
If they changed the previous code, which I'm sure they did. Hell, I could give you a 1 MB driver, compress it, then give you a 2 MB driver that compresses down to something even smaller than the 1 MB one does. More code, but it's smaller.
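Here's a trivial sketch of what I mean (nothing to do with nVidia's actual driver packaging, just zlib on made-up data): compressed size tracks how redundant the data is, not how big it is, so a larger file can easily compress to something smaller than a shorter one does.

```python
import os
import zlib

# 1 MB of incompressible (random) data vs 2 MB of highly repetitive data.
small_but_random = os.urandom(1 * 1024 * 1024)
large_but_redundant = b"mov eax, ebx\n" * (2 * 1024 * 1024 // 13)

print(len(zlib.compress(small_but_random)))     # still ~1 MB: random data barely shrinks
print(len(zlib.compress(large_but_redundant)))  # a few KB: the repetition compresses away
```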

However the real issue here is the compression to begin with: if these are truly driver bugs that have now been fixed then what does nVidia have to hide? Why are they preventing the public from testing them out for themselves?

You have to work under the assumption that nVidia changed their driver structure for the purpose of looking better on a few benches instead of it being planned already. If they did it in response to the anti detect, they have some incredibly fast thinking people on their staff to come up with a countermeasure and release it as quickly as they did.
You didn't appear to answer the question at all.

Answer the question: if nVidia is not cheating and has fixed what you call genuine driver bugs, then why implement counter-measures against programs that illustrate cheating?

Now they get their drivers sorted out and performance is up and you still can't acknowledge the fact that maybe they just plain fvcked up. It has to be a giant conspiracy.
When they drop the compression mechanism, the anti-cheat programs prove the drivers work just fine, all of the flaky "does not use AA even when requested" problems are fixed and the likes of Gabe and Futuremark release detailed reports outlining that the issues have been fixed, then I'll be glad to admit it.

Until then all I've got is your comments which I'm afraid don't hold a lot of weight.

The answer to that is no if you look at S3's implementations. They followed the creator, they didn't one up them.
The reason I asked was because you brought up the Quack issue. If you claim ATi was cheating then you must also claim that nVidia was cheating at the hardware level since all early Quake III benchmarks were done with texture compression and that meant that the nV boards were all running with 16 bit textures. Also this was cheating that could not be fixed and continued through several generations of boards. Even the NV25 is still loading 16 bit textures but they use a dithering technique to mask the problem; not sure about the NV3x series though.

Thus nVidia have been cheating in every single benchmark using DXT1/S3TC that was run on NV25 boards and earlier.

So nVidia was cheating then? Don't hide behind the spec, just answer yes or no.

Do not mistake your own zealotry at the time for mine.
I'm not the only one in this thread who feels the same way about your attitude.
 

DaveBaumann

Member
Mar 24, 2000
164
0
0
Why not optimize for the board? In comparative terms a recompile and an additional path mapped to it should be very simplistic.

All they needed to add was a switch for the DX9 'Pure' path to use the compiled code for the FX.

So, you know their engine better than they do now? I'm sure they'll be so pleased to hear how simplistic these things are!! ;)

Except for improved performance. Newell talked about the reduced IQ of the MM path, why wouldn't anyone want to see an improved performing version of the highest quality path?

Ben, they are attempting to minimise the reduction in quality whilst still maintaining their performance. Again, the performance increase is unquantified and may not be usable in the full-precision DX9 path.

HDR was disabled for the public bench so it's not particularly relevant to this discussion (although I see how it will affect the final product).

HDR had only just been implemented. The benchmark was from the E3 levels/build and HDR had not been implemented in them.

Given the lengths they claimed they went through bending over backwards for the FX they could have at least compiled the default shaders for the 5900.

Again, you are making the assumption that this can 'just' happen without the creation of an entirely separate path.

However, the other issue is that the entire FX-specific path is a short-term measure for the purposes of the initial HL2 game as far as Valve are concerned. They have already stated that as new content is added it will most likely be full-precision DX9, that they are moving over entirely to DX9/HLSL development ASAP, and that they have concerns as to whether smaller developers will be able to put the effort into the FX path that they have. It seems fairly clear from their statements that they view this as a short-term measure, and I hardly think they want to increase their load by creating more paths for what they probably view as having an extremely limited lifespan.
 

Sazar

Member
Oct 1, 2003
62
0
0
Originally posted by: BenSkywalker
Sazar-

the IQ on the fx cards IS worse... specially if you are using 45.23 as the baseline... this has been shown in other tests all around the internet...

They were using the R3x0 core boards as a baseline.


okies... you did not get my point... the baseline I am suggesting is based on det versions...

it makes sense to compare various det revisions for IQ differences to see improvements/degradations... this is only prudent when there is a suggestion of lowered IQ ongoing...

many other sites have tested with 44.03 v/s 45.23 and there are differences... IQ on 44.03 is better...

that is why I had suggested perhaps there could be benches using the det's with better IQ as a baseline...
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
I have two things for you to look at Ben.

Firstly, regarding shader substitution: you claim it's a valid optimisation while everyone else and I can see it's obviously a cheat. Here is a point taken from FutureMark's audit whitepaper, page 3 (emphasis added by me):

4. In game test 4, the water pixel shader (M_Water.psh) is detected. The driver uses this detection to artificially achieve a large performance boost - more than doubling the early frame rate on some systems. In our inspection we noticed a difference in the rendering when compared either to the DirectX reference rasterizer or to those of other hardware. It appears the water shader is being totally discarded and replaced with an alternative more efficient shader implemented IN THE DRIVERS THEMSELVES. The drivers produce a similar looking rendering, but not an identical one.

In other words it has absolutely nothing to do with the compilation or conversion process like you claim - the driver instead completely ignores the instructions fed to it by the program and uses its own special custom shader that was hardcoded into it for the sole purpose of inflating 3DMark's performance.

This is about as much of an optimisation as a driver that is instructed to draw a straight line but instead draws a single pixel when it detects application "A" is running, but not when "B", "C" or "D" programs are running. Honestly, are you going to keep up this nonsense and continue to claim that what nVidia is doing is a valid optimisation? Or that it's a bug and it happened by accident?
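To spell out the difference (a purely hypothetical sketch of my own - none of us can see nVidia's actual driver code, so this only illustrates the concept): a genuine optimisation transforms whatever shader it is handed, while detection-based substitution throws the submitted shader away whenever it recognises one specific program.

```python
import hashlib

# Hypothetical table of hand-written substitute shaders, keyed on a fingerprint of
# the shader the application submits. The placeholders are illustrative only.
HARDCODED_SUBSTITUTES = {
    "<fingerprint of M_Water.psh>": "<hand-written replacement shader text>",
}

def detection_based_path(submitted_source: str) -> str:
    """If the submitted shader is recognised, discard it and return the hardcoded
    replacement; every unrecognised application gets the normal path."""
    fingerprint = hashlib.sha1(submitted_source.encode()).hexdigest()
    if fingerprint in HARDCODED_SUBSTITUTES:
        return HARDCODED_SUBSTITUTES[fingerprint]   # fires only for the known benchmark
    return general_compile(submitted_source)        # everything else

def general_compile(source: str) -> str:
    # Stand-in for a real compiler: a legitimate optimisation would transform *any*
    # input it is given, not just one recognised program.
    return source
```

A driver keying off the executable name instead of the shader contents is the same idea, just with a cruder fingerprint.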

Secondly, I'd like to ask you whether a driver changing its texture filtering behaviour on the basis of the application's executable name is a "bug" or an "optimisation".

Beyond3D Link.
 

Redviffer

Senior member
Oct 30, 2002
830
0
0
Great review.

My thoughts on Nvidia drivers: its obvious that these cards have the horsepower to do well, they just do it differently than ATi does. Nvidia has had trouble getting their drivers to work to appease the masses, or, in truth, the people who have something against Nvidia. Why else would you really take a magnifying glass out to look at still pictures for games that are normally played in constant motion? That is nit-picking, plain and simple.

Nvidia should make 2 sets of drivers:
1) an OEM/Regular set of drivers for compatible use, adequate performance, etc.
2) a performance set of drivers for maximum performance, and if it sacrifices a little bit of visual quality or "cheats" then so be it, use it at your own risk.

This way, those that actually own these cards can get what they want: drivers that make their card as fast as it can be, and those who enjoy crying about driver optimization/cheating can have their oem drivers. I suspect a lot of people complaining about these cards/drivers don't even own a GeforceFX card.
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
I suspect you haven't been keeping up with the state of 3D hardware for the past year or so, Redviffer. How else would you attempt to show the difference in IQ to internet readers--bandwidth-choking movies? And nV already offers both "Performance" and "High Performance" options (as an alternative to "Quality") in their current drivers.
 

Redviffer

Senior member
Oct 30, 2002
830
0
0
Originally posted by: Pete
I suspect you haven't been keeping up with the state of 3D hardware for the past year or so, Redviffer. How else would you attempt to show the difference in IQ to internet readers--bandwidth-choking movies? And nV already offers both "Performance" and "High Performance" options (as an alternative to "Quality") in their current drivers.

LOL! I would have to have been under a rock to not notice the current state of 3D hardware. :)

My take on it, by having both a GeforceFX and an ATi card, is that I don't ever freeze a frame of my games to stop and take a look, and I don't think that anyone really does this either, other than the benchmarking sites. When the first drivers came out from Nvidia that increased their score in 3DMark2003, people found that they were using optimizations that they considered cheating. I really don't care how well it scores on 3DMark2001/3, what I want is for it to run my games at good enough frame rates so it doesn't look like I'm watching a slideshow. In other words, if they have to reduce the IQ to increase the performance, I'd rather have that than a card that looks great but runs like crap. ATi has great hardware, they can do both, I think we've already established that they are the kings of the hill right now. What I want is the choice to run "optimized" drivers that "cheat" on games (by lowering IQ, not by actually cheating in the game to provide any unnecessary advantages) so that I can use my GeforceFX. Let me decide if I want a card that looks great or performs great. They actually used to do this by having access to beta drivers on their site, I don't see that choice anymore. Their site now only shows the 45.23 drivers, where I'd like to try out the 52.14's for myself, even though they are beta.
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: Redviffer

Nvidia should make 2 sets of drivers:
1) an OEM/Regular set of drivers for compatible use, adequate performance, etc.
2) a performance set of drivers for maximum performance, and if it sacrifices a little bit of visual quality or "cheats" then so be it, use it at your own risk.

Talk about a step in the wrong direction. You can adjust settings in the drivers, we don't need a series with "cheats" all over it. Let me guess, the "performance" set is the benchmarking standard, but the OEM/Regular set is the one you'd need to run games with all of the proper effects and settings on.
 
shady06

Apr 17, 2003
37,622
0
76
Originally posted by: Redviffer
Originally posted by: Pete
I suspect you haven't been keeping up with the state of 3D hardware for the past year or so, Redviffer. How else would you attempt to show the difference in IQ to internet readers--bandwidth-choking movies? And nV already offers both "Performance" and "High Performance" options (as an alternative to "Quality") in their current drivers.

LOL! I would have to have been under a rock to not notice the current state of 3D hardware. :)

My take on it, by having both a GeforceFX and an ATi card, is that I don't ever freeze a frame of my games to stop and take a look, and I don't think that anyone really does this either, other than the benchmarking sites. When the first drivers came out from Nvidia that increased their score in 3DMark2003, people found that they were using optimizations that they considered cheating. I really don't care how well it scores on 3DMark2001/3, what I want is for it to run my games at good enough frame rates so it doesn't look like I'm watching a slideshow. In other words, if they have to reduce the IQ to increase the performance, I'd rather have that than a card that looks great but runs like crap. ATi has great hardware, they can do both, I think we've already established that they are the kings of the hill right now. What I want is the choice to run "optimized" drivers that "cheat" on games (by lowering IQ, not by actually cheating in the game to provide any unnecessary advantages) so that I can use my GeforceFX. Let me decide if I want a card that looks great or performs great. They actually used to do this by having access to beta drivers on their site, I don't see that choice anymore. Their site now only shows the 45.23 drivers, where I'd like to try out the 52.14's for myself, even though they are beta.


would you notice if fog wasnt rendered at all? i think you will
 

Redviffer

Senior member
Oct 30, 2002
830
0
0
Originally posted by: jiffylube1024
Originally posted by: Redviffer

Nvidia should make 2 sets of drivers:
1) an OEM/Regular set of drivers for compatible use, adequate performance, etc.
2) a performance set of drivers for maximum performance, and if it sacrifices a little bit of visual quality or "cheats" then so be it, use it at your own risk.

Talk about a step in the wrong direction. You can adjust settings in the drivers, we don't need a series with "cheats" all over it. Let me guess, the "performance" set is the benchmarking standard, but the OEM/Regular set is the one you'd need to run games with all of the proper effects and settings on.

No, the OEM/Regular ones would work fine, but would be WHQL, and would be the ones that are "standard" for benchmarking, if that's your sort of thing. The other ones would be totally performance oriented, meaning that they would still work, but would have optimizations to give increased framerates in games at the expense of IQ. Give the choice of which drivers to use to the people. Kinda like the CPU that controls power for car engines: most come with a regular power band, but if you want, you can get a hop-up one that will give you more power (at the expense of gas economy). The choice is up to you. Some people will want to use the regular/quality drivers: look at the Omega drivers that claim to have better image quality, and lots of people prefer them to even the regular drivers for that reason. Well, I want some that give performance instead of quality. I guess that you could also use SSE/SSE2 as an example: if everything was processed equally among all CPUs then the P4's would suck a$$, but with SSE/SSE2 optimizations they perform better. Although I'm not sure that they are giving up anything to do it, my point is that they are allowed to use optimizations to better their product. Why the beef when Nvidia does the same thing? In all the benchmarks that I see I can't tell the difference, and it usually takes some sort of outside viewer to actually see that Nvidia is doing things differently than ATi; I would only think that is normal since their GPUs are different.
 

Redviffer

Senior member
Oct 30, 2002
830
0
0
Originally posted by: shady06
Originally posted by: Redviffer
Originally posted by: Pete
I suspect you haven't been keeping up with the state of 3D hardware for the past year or so, Redviffer. How else would you attempt to show the difference in IQ to internet readers--bandwidth-choking movies? And nV already offers both "Performance" and "High Performance" options (as an alternative to "Quality") in their current drivers.

LOL! I would have to have been under a rock to not notice the current state of 3D hardware. :)

My take on it, by having both a GeforceFX and an ATi card, is that I don't ever freeze a frame of my games to stop and take a look, and I don't think that anyone really does this either, other than the benchmarking sites. When the first drivers came out from Nvidia that increased their score in 3DMark2003, people found that they were using optimizations that they considered cheating. I really don't care how well it scores on 3DMark2001/3, what I want is for it to run my games at good enough frame rates so it doesn't look like I'm watching a slideshow. In other words, if they have to reduce the IQ to increase the performance, I'd rather have that than a card that looks great but runs like crap. ATi has great hardware, they can do both, I think we've already established that they are the kings of the hill right now. What I want is the choice to run "optimized" drivers that "cheat" on games (by lowering IQ, not by actually cheating in the game to provide any unnecessary advantages) so that I can use my GeforceFX. Let me decide if I want a card that looks great or performs great. They actually used to do this by having access to beta drivers on their site, I don't see that choice anymore. Their site now only shows the 45.23 drivers, where I'd like to try out the 52.14's for myself, even though they are beta.


would you notice if fog wasnt rendered at all? i think you will

That's exactly what I'm saying WOULDN'T fly, you can give performance, but not that way: no fog rendered = cheating in the game so you can see things that others can't. I'm not a cheater, and I wouldn't appreciate someone else doing that sort of thing.
 
shady06

Apr 17, 2003
37,622
0
76
Originally posted by: Redviffer
Originally posted by: shady06
Originally posted by: Redviffer
Originally posted by: Pete
I suspect you haven't been keeping up with the state of 3D hardware for the past year or so, Redviffer. How else would you attempt to show the difference in IQ to internet readers--bandwidth-choking movies? And nV already offers both "Performance" and "High Performance" options (as an alternative to "Quality") in their current drivers.

LOL! I would have to have been under a rock to not notice the current state of 3D hardware. :)

My take on it, by having both a GeforceFX and an ATi card, is that I don't ever freeze a frame of my games to stop and take a look, and I don't think that anyone really does this either, other than the benchmarking sites. When the first drivers came out from Nvidia that increased their score in 3DMark2003, people found that they were using optimizations that they considered cheating. I really don't care how well it scores on 3DMark2001/3, what I want is for it to run my games at good enough frame rates so it doesn't look like I'm watching a slideshow. In other words, if they have to reduce the IQ to increase the performance, I'd rather have that than a card that looks great but runs like crap. ATi has great hardware, they can do both, I think we've already established that they are the kings of the hill right now. What I want is the choice to run "optimized" drivers that "cheat" on games (by lowering IQ, not by actually cheating in the game to provide any unnecessary advantages) so that I can use my GeforceFX. Let me decide if I want a card that looks great or performs great. They actually used to do this by having access to beta drivers on their site, I don't see that choice anymore. Their site now only shows the 45.23 drivers, where I'd like to try out the 52.14's for myself, even though they are beta.


would you notice if fog wasnt rendered at all? i think you will

That's exactly what I'm saying WOULDN'T fly, you can give performance, but not that way: no fog rendered = cheating in the game so you can see things that others can't. I'm not a cheater, and I wouldn't appreciate someone else doing that sort of thing.

my point is you dont need to freeze gameplay to notice IQ anomilies
 

Redviffer

Senior member
Oct 30, 2002
830
0
0
Originally posted by: shady06
Originally posted by: Redviffer
Originally posted by: shady06
Originally posted by: Redviffer
Originally posted by: Pete
I suspect you haven't been keeping up with the state of 3D hardware for the past year or so, Redviffer. How else would you attempt to show the difference in IQ to internet readers--bandwidth-choking movies? And nV already offers both "Performance" and "High Performance" options (as an alternative to "Quality") in their current drivers.

LOL! I would have to have been under a rock to not notice the current state of 3D hardware. :)

My take on it, by having both a GeforceFX and an ATi card, is that I don't ever freeze a frame of my games to stop and take a look, and I don't think that anyone really does this either, other than the benchmarking sites. When the first drivers came out from Nvidia that increased their score in 3DMark2003, people found that they were using optimizations that they considered cheating. I really don't care how well it scores on 3DMark2001/3, what I want is for it to run my games at good enough frame rates so it doesn't look like I'm watching a slideshow. In other words, if they have to reduce the IQ to increase the performance, I'd rather have that than a card that looks great but runs like crap. ATi has great hardware, they can do both, I think we've already established that they are the kings of the hill right now. What I want is the choice to run "optimized" drivers that "cheat" on games (by lowering IQ, not by actually cheating in the game to provide any unnecessary advantages) so that I can use my GeforceFX. Let me decide if I want a card that looks great or performs great. They actually used to do this by having access to beta drivers on their site, I don't see that choice anymore. Their site now only shows the 45.23 drivers, where I'd like to try out the 52.14's for myself, even though they are beta.


would you notice if fog wasnt rendered at all? i think you will

That's exactly what I'm saying WOULDN'T fly, you can give performance, but not that way: no fog rendered = cheating in the game so you can see things that others can't. I'm not a cheater, and I wouldn't appreciate someone else doing that sort of thing.

my point is you dont need to freeze gameplay to notice IQ anomilies

yeah, but if it isn't too noticeable does it really matter? I'm willing to put up with a little blurriness for increased speed, especially since most of the time it's stuff that is far out of range or is out of sight anyway. Take the fog example, let's just say that ATi renders it at x-amount of pixels, and Nvidia renders it at x-amount that is less than that of ATi, who cares as long as the fog is there in enough quantity to make it playable, but not cheating. As long as the user knows that the drivers they are using are going to give up that IQ. Let me be the one to make that choice. The same thing happened to ATi with the Quake/Quack thing, did it affect the gameplay? Did it allow people to actually cheat in the game? They did reduce certain IQ levels of the game but not to the point where cheating was possible. I'm ok with that optimization, as long as the video card manufacturers let you know that you will be getting reduced IQ for more speed. Once again, let me make that choice.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Dave-

So, you know their engine better than they do now? I'm sure they'll be so pleased to hear how simplistic these things are!!

However, the other issue is that the entire FX specific path is a short term measure for the purposes of the initial HL2 game as far as Valve are concerned - they have already stated that as new content is added it will most likely be full precision DX9, they are moving over entirely to DX9/HLSL development ASAP and they have concerns as to whether smaller developers will be able to put the effort into the FX path that they have.

So the obvious smart thing to do is leave the 'Pure' DX9 path the least optimized option, right?

It seems fairly clear from their statements that they view this as a short term measure and I hardly think they want to increase their load by creating more paths with what they probably view as having an extremely limited lifespan.

The millions of FX board users won't want to play the other HL2 powered games? How does it make more sense to focus enormous effort on one highly customized path that won't be replicated by licensees and ignore optimizing the path that will?

BFG-

You need some serious help with your memory. I'm not talking about your attempted lies about my position in the past, you can't even keep what has been said in this thread straight.

You claim it's a valid optimisation

You have listed one. I have not argued that one bench cheat.

You are trying to project your own rabid stance onto me; it won't work. This forum has archives; I have already shown that you like to make things up to try to make yourself look better, and you go and do it again. Since you are so keen to hype up Half-Life 2, why don't you list off all the Black Mesa games you have played through? Why don't you let everyone know how much of a fan of Valve's titles and their offspring you really are.

Answer the question: if nVidia is not cheating and have fixed what you call genuine driver bugs then why implement counter-measures against programs that illustrate cheating?

You assume that is why they did it instead of it being a planned event. I know nVidia's driver team is good, but to come up with a proprietary compression method to defeat optimization checks in that short of time? I don't think they are that good.

Right, so instead you'll stick to an inferior NV25 board and continue to shout that ATi's AF is unusable.

I've tried the R300 core boards, and I'm not going to waste any more money on a product with driver issues. I don't have any problems with the games I'm playing right now. I actually am hoping the Volari will be a decent part at the moment.

If you claim ATi was cheating then you must also claim that nVidia was cheating at the hardware level since all early Quake III benchmarks were done with texture compression and that meant that the nV boards were all running with 16 bit textures.

WTF? OK, a few different points-

1) They weren't using 16bit textures, they were using 16bit interpolation, an entirely different thing. Where did you ever get the idea that they were using 16bit textures?

2) nV's S3TC implementation followed S3's.

3) Initial Quake3 benches sure as he!l didn't show S3TC scores as S3TC wasn't enabled in nV's drivers until after the cross-licensing agreement was made due to the lawsuit that was pending at the launch of the GeForce. After the drivers were released there was a performance boost from using S3TC, mainly on Rev's Quaver bench, but the artifacts were significantly reduced simply by making a switch to use S3TC3 over S3TC1. I converted over the executables for Quake3 and was hosting them on the Basement with this fix, I even posted here about it (and it looked better than ATi's solution while performance was within a few percent, even in Quaver).

So nVidia was cheating then?

Absolutely not, particularly considering you are far off base with your accusation to begin with. If you want to argue the finer nuances of this topic I would be glad to; I still have correspondence from Carmack on the situation here from a backup of the Basement, and numerous tests and their results on the situation. I also still have the modified Quake3 executables, although last I knew they wouldn't work on Radeons (they didn't support S3TC3, although I never tried them on the R9500Pro; no need for TC now on any card in Q3).

Secondly, I'd like to ask you whether an application changing its texture filtering behaviour on the basis of its executable name is a "bug" or an "optimisation".

Neither, it's a hack. I have had this same question posed about other hardware that wasn't nV in the past and I answered the same way. What's more, I posted here about it too. If you want to keep testing your own memories of how you felt at the time versus what I actually stated feel free to deny it. I don't need to jump around and change my stories, I don't care who does what, I feel the same way about most things now in terms of hardware that I have for the last few years.
 

DaveBaumann

Member
Mar 24, 2000
164
0
0
The millions of FX board users won't want to play the other HL2 powered games?

The vast majority of FX users are using 5200's, which are mainly using DX8 anyway.

How does it make more sense to focus enormous effort on one highly customized path that won't be replicated by licensees and ignore optimizing the path that will?

It doesn't - that's their issue. I think these were the points they were trying to convey.
 

oldfart

Lifer
Dec 2, 1999
10,207
0
0
I remember the whole S3TC/DXTC thing on the GF/2/3 cards. It was quite a topic of conversation.

The way it went was that nVidia uses 16 bit color for DXTC1 while ATi uses 32 bit color. This caused the well-known "ugly sky" issue in Q3A, and also banding and other anomalies in UT when run in OpenGL mode with high-res S3TC textures.

The fix was a hack to force DXTC1 calls to DXTC3. nVidia uses 32 bit color for DXTC3 which got around the problem in Q3A based games, but does not work in UT since the textures are precompressed. This hack is easily implemented via the various tweak tools like Rivatuner.

And yes, DXTC3 does look better than DXTC1. The reason is that DXTC3 uses a lower compression ratio than DXTC1. I forget the numbers, but it was something like 6:1 vs 3:1?? So nVidia using the hack with a lower level of compression resulted in a better-looking "Q3A sky" (big surprise), but no improvement in UT. It did also cause a (small) performance hit in Q3A.

From worst quality to best is

nVidia DXTC1
ATi DXTC1
nVidia DXTC3
ATi/nVidia - no compression


At that time, the nVidia fans claimed there was no bug, no problem. Many claimed it was a Q3A bug, but it was proven to be an nVidia issue. nVidia was not cheating then (I don't think so), but they certainly had a bug/image quality issue that some of the nVidia fan base would not acknowledge (still won't).

 

THUGSROOK

Elite Member
Feb 3, 2001
11,847
0
0
lets put it this way~

9800XT is on my Xmas list :D
(im tired of all this NV BS, been putting up with it for too long)
 
shady06

Apr 17, 2003
37,622
0
76
Originally posted by: Redviffer
Originally posted by: shady06
Originally posted by: Redviffer
Originally posted by: shady06
Originally posted by: Redviffer
Originally posted by: Pete
I suspect you haven't been keeping up with the state of 3D hardware for the past year or so, Redviffer. How else would you attempt to show the difference in IQ to internet readers--bandwidth-choking movies? And nV already offers both "Performance" and "High Performance" options (as an alternative to "Quality") in their current drivers.

LOL! I would have to have been under a rock to not notice the current state of 3D hardware. :)

My take on it, by having both a GeforceFX and an ATi card, is that I don't ever freeze a frame of my games to stop and take a look, and I don't think that anyone really does this either, other than the benchmarking sites. When the first drivers came out from Nvidia that increased their score in 3DMark2003, people found that they were using optimizations that they considered cheating. I really don't care how well it scores on 3DMark2001/3, what I want is for it to run my games at good enough frame rates so it doesn't look like I'm watching a slideshow. In other words, if they have to reduce the IQ to increase the performance, I'd rather have that than a card that looks great but runs like crap. ATi has great hardware, they can do both, I think we've already established that they are the kings of the hill right now. What I want is the choice to run "optimized" drivers that "cheat" on games (by lowering IQ, not by actually cheating in the game to provide any unnecessary advantages) so that I can use my GeforceFX. Let me decide if I want a card that looks great or performs great. They actually used to do this by having access to beta drivers on their site, I don't see that choice anymore. Their site now only shows the 45.23 drivers, where I'd like to try out the 52.14's for myself, even though they are beta.



would you notice if fog wasnt rendered at all? i think you will

That's exactly what I'm saying WOULDN'T fly, you can give performance, but not that way: no fog rendered = cheating in the game so you can see things that others can't. I'm not a cheater, and I wouldn't appreciate someone else doing that sort of thing.

my point is you dont need to freeze gameplay to notice IQ anomilies

yeah, but if it isn't too noticable does it really matter? I'm willing to put up with a little blurryness for increased speed, especially since most of the time it's stuff that is far out of range or is out of site anyway. Take the fog example, let's just say that ATi renders it at x-amount of pixels, and Nvidia renders it at x-amount that is less than that of ATi, who cares as long as the fog is there in enough quantity to make it playable, but not cheating. As long as the user knows that the drivers they are using are going to give up that IQ. Let me be the one to make that choice. The same thing happened to ATi with the Quake/Quack thing, did it affect the gameplay? Did it allow people to actually cheat in the game? They did reduce certain IQ levels of the game but not to the point where cheating was possible. I'm ok with that optimizaton, as long as the video card manufacturers let you know that you will be getting reduced IQ for more speed. Once again, let me make that choice.

how could missing fog not be noticable :D?

with some games, the fog doesnt get rendered AT ALL
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
These "let Nvidia run with some reduced IQ in their drivers" arguments are growing VERY thin. If this is the case, then it's no longer an apples-to-apples comparaison in all video card benchmarks. If Nvidia gets away with this, then what is to prevent ATI from doing this?

This will cause an all-out war over high framerates, in which the definitions of "cheats", "hacks", etc. in drivers will become so skewed and convoluted that both companies will be fighting tooth and nail to cheat the most in their drivers to get the highest numerical framerates in benchmarks. IQ will get crappier over time, with both companies deeming their visual quality "acceptable" while maintaining that their drivers are the fastest.

Game companies develop games to perform with the specific effects they created for a reason. If Nvidia (or ATI) cuts the fog out of a foggy explosion, that kinda changes it from what the developers intended, eh?
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Redviffer, I could care less if nV lowered IQ across the board for the sake of performance ... as long as reviewers note this. The problem is, the great majority of reviewers do not note this IQ discrepancy clearly or comprehensively.

And the IQ difference is noticeable, according to people who look into it. A few people in B3D's forums report that when moving from an ATi DX9 card to an nV DX9 card, they notice the lack of true trilinear filtering on nV's cards (MIP-map boundaries are visible when they shouldn't be). And Mike C's excellent, unorthodox (read: go the extra mile) reviewing at NVNews.net has produced some Quake 3 videos that show fairly clearly that ATi's AA is better than nV's at smoothing jaggies. I think he included those large but short videos in an NVNews.net review, but I can't find the link right now. I'll admit that the videos focused on AA quality on a near-horizontal line, where ATi's jittered-grid and gamma-corrected AA will look better than nV's ordered-grid (no gamma correction) method (whereas OG will look better than JG with diagonal lines, though ATi's gamma correction closes that gap). You can check for static differences using Mike's AA comparison tool here but AA, like trilinear filtering, is really best appreciated in motion.
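For anyone wondering what "gamma-corrected AA" actually buys you, here's a minimal sketch of the idea (my own toy numbers, not either vendor's actual resolve hardware): averaging edge samples in linear light and then re-encoding for the display gives a visibly brighter, smoother blend than naively averaging the gamma-encoded values.

```python
def srgb_to_linear(c: float) -> float:
    # Simple gamma-2.2 approximation; real sRGB uses a piecewise curve.
    return (c / 255.0) ** 2.2

def linear_to_srgb(c: float) -> float:
    return 255.0 * (c ** (1.0 / 2.2))

# A 4x AA pixel straddling a black/white edge: two white samples, two black.
samples = [255, 255, 0, 0]

naive = sum(samples) / len(samples)  # average in gamma space, no correction
correct = linear_to_srgb(sum(srgb_to_linear(s) for s in samples) / len(samples))

print(round(naive), round(correct))  # ~128 vs ~186: the gamma-correct blend is brighter
```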

You've gotten your wish, though: the 52.14 betas totally remove trilinear filtering from D3D rendering. You want trilinear? You'll have to uninstall the 52.14's and reinstall the 45.23's. This is not good for consumers, IMO.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
This forum has archives, I have already shown that you like to make things up to try and make yourself look better, and you go and do it again.
I didn't make up anything but you're right, I probably shouldn't have pushed the issue so much as there was simply no need for it in a discussion like this.

If I've offended you in a personal capacity then I sincerely apologise and I will drop the whole bias issue completely.

You have listed one. I have not argued that one bench cheat.
Uh no, you admitted that the static clip planes were a cheat, however you never admitted that the other seven cheats found by FutureMark were cheats, nor have you admitted that the other "optimisations" and questionable driver bugs in other games are cheats.

FutureMark's audit found no fewer than eight different nVidia cheats, and IIRC Unwinder found over forty application-detection strings in the same version of the Detonators that FutureMark performed the audit on.

And since you think I have problems with my memory, I'll refresh yours. When I asked you:
Are you telling me that any of the above nVidia problems can be justified as driver bugs?

You responded:
The static clip planes I've already said was a cheat, the rest absolutely.

You were not only claiming that shader substitution (which was in my list) was a driver bug, you also later tried to lump it into the compilation process to claim it was normal, when in reality it was being modified at runtime, something that isn't normal by any stretch of the imagination.

Why don't you let everyone know how much of a fan of Valve's titles and their offspring you really are.
What on earth does that have to do with anything?

You assume that is why they did it instead of it being a planned event.
Well OK, let's go down that road then - why plan it? The sole purpose of such an action is to defeat any applications that expose driver cheating so again I'll ask the question, why go to such trouble to implement something that according to you isn't needed?

I've tried the R300 core boards, and I'm not going to waste any more money on a product with driver issues.
That's fair enough. All I'm pointing out is that there are plenty of people who have little/no issues with ATi's drivers (myself included) and that doesn't make them fanATIcs.

Where did you ever get the idea that they were using 16bit textures?
Because there was both a performance gain and less texture swapping in Quake III when DXT1 was used instead of DXT3. If the textures themselves were the same size then the texture swapping should've been the same and since AFAIK the decompression is done entirely on the core, there shouldn't have been a performance difference between the two either.

On my nVidia cards I observed a ~10% performance difference between the two compression methods in memory-bandwidth-limited situations.
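As a rough back-of-the-envelope check on why the two formats behave differently (my own arithmetic, not a measurement of any particular card): DXT1 stores each 4x4 texel block in 8 bytes while DXT3 uses 16 bytes, so the same texture set occupies twice the memory and twice the bandwidth under DXT3.

```python
def compressed_size_bytes(width: int, height: int, bytes_per_block: int) -> int:
    # Both DXT1 and DXT3 compress 4x4 texel blocks; they differ in block size.
    blocks = ((width + 3) // 4) * ((height + 3) // 4)
    return blocks * bytes_per_block

# Example: a 1024x1024 texture.
dxt1 = compressed_size_bytes(1024, 1024, 8)    # 8-byte blocks  -> 512 KB
dxt3 = compressed_size_bytes(1024, 1024, 16)   # 16-byte blocks -> 1 MB
print(dxt1 // 1024, dxt3 // 1024)              # 512, 1024
```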

nV's S3TC implementation followed S3's.
So they both sucked ass and couldn't really be compared to the other vendors using 32 bits. Right?

Initial Quake3 benches sure as he!l didn't show S3TC scores as S3TC wasn't enabled in nV's drivers until after the cross licensing agreement was made due to the lawsuit that was pending at the launch of the GeForce.
You're grasping at straws as you know exactly what I mean by "initial". By the time the Radeon/GeForce/Voodoo5 benchmarking was in full swing nVidia was using 16 bit compression and gained an unfair advantage by doing so.

I converted over the executables for Quake3 and was hosting them on the Basement with this fix,
I know exactly what happened, and I'm not suggesting you don't either. In fact your conversion wasn't even needed after version 1.25 of Quake III, since it stopped compressing lightmaps and RivaTuner started allowing DXT1 to be forced into DXT3 calls through nVidia's drivers.

That's not the issue though, the issue is the thousands of benchmarks that were already published and were wrong, not to mention that not a single reviewer to my knowledge ever used the DXT3 fix for nVidia's boards.

Absolutely not,
Right, so according to you an nVidia hardware level problem that exists through three generations of boards and reduces image quality to gain performance isn't cheating, but a software problem that exists in a single ATi driver and when removed in the next driver not only fixes the issue but also improves performance is a cheat?

That's a very interesting piece of logic you employ there and I'd love to hear how you arrived at it.

I also still have the modified Quake3 executables although last I knew they wouldn't work on Radeons(they didn't support S3TC3,
Like I already explained to you, those executables are obsolete and haven't been needed for a long time. And yes, Radeons support S3TC/DXTx just fine and even DXT1 looks great on them. I use S3TC (wrapped to DXT1) all the time in Quake III based games and in UT as well.

Neither, it's a hack.
Please explain what you mean by this and what your stance on the position is.
 

Redviffer

Senior member
Oct 30, 2002
830
0
0
Originally posted by: shady06
Originally posted by: Redviffer
Originally posted by: shady06
Originally posted by: Redviffer
Originally posted by: shady06
Originally posted by: Redviffer
Originally posted by: Pete
I suspect you haven't been keeping up with the state of 3D hardware for the past year or so, Redviffer. How else would you attempt to show the difference in IQ to internet readers--bandwidth-choking movies? And nV already offers both "Performance" and "High Performance" options (as an alternative to "Quality") in their current drivers.

LOL! I would have to have been under a rock to not notice the current state of 3D hardware. :)

My take on it, by having both a GeforceFX and an ATi card, is that I don't ever freeze a frame of my games to stop and take a look, and I don't think that anyone really does this either, other than the benchmarking sites. When the first drivers came out from Nvidia that increased their score in 3DMark2003, people found that they were using optimizations that they considered cheating. I really don't care how well it scores on 3DMark2001/3, what I want is for it to run my games at good enough frame rates so it doesn't look like I'm watching a slideshow. In other words, if they have to reduce the IQ to increase the performance, I'd rather have that than a card that looks great but runs like crap. ATi has great hardware, they can do both, I think we've already established that they are the kings of the hill right now. What I want is the choice to run "optimized" drivers that "cheat" on games (by lowering IQ, not by actually cheating in the game to provide any unnecessary advantages) so that I can use my GeforceFX. Let me decide if I want a card that looks great or performs great. They actually used to do this by having access to beta drivers on their site, I don't see that choice anymore. Their site now only shows the 45.23 drivers, where I'd like to try out the 52.14's for myself, even though they are beta.



would you notice if fog wasnt rendered at all? i think you will

That's exactly what I'm saying WOULDN'T fly, you can give performance, but not that way: no fog rendered = cheating in the game so you can see things that others can't. I'm not a cheater, and I wouldn't appreciate someone else doing that sort of thing.

my point is you dont need to freeze gameplay to notice IQ anomilies

yeah, but if it isn't too noticable does it really matter? I'm willing to put up with a little blurryness for increased speed, especially since most of the time it's stuff that is far out of range or is out of site anyway. Take the fog example, let's just say that ATi renders it at x-amount of pixels, and Nvidia renders it at x-amount that is less than that of ATi, who cares as long as the fog is there in enough quantity to make it playable, but not cheating. As long as the user knows that the drivers they are using are going to give up that IQ. Let me be the one to make that choice. The same thing happened to ATi with the Quake/Quack thing, did it affect the gameplay? Did it allow people to actually cheat in the game? They did reduce certain IQ levels of the game but not to the point where cheating was possible. I'm ok with that optimizaton, as long as the video card manufacturers let you know that you will be getting reduced IQ for more speed. Once again, let me make that choice.

how could missing fog not be noticable :D?

with some games, the fog doesnt get rendered AT ALL

Yeah, I'm in agreement with you about this: they need to fix those kinds of problems.
 

Redviffer

Senior member
Oct 30, 2002
830
0
0
Originally posted by: Pete
Redviffer, I could care less if nV lowered IQ across the board for the sake of performance ... as long as reviewers note this. The problem is, the great majority of reviewers do not note this IQ discrepancy clearly or comprehensively.

And the IQ difference is noticable, according to people who look into it. A few people in B3D's forums report that when moving from an ATi DX9 card to an nV DX9 card, they notice the lack of true trilinear filtering in nV's cards (MIP-map boundaries are visible, when they shouldn't be). And Mike C's excellent, unorthodox (read: go the extra mile) reviewing at NVNews.net has produced some Quake 3 videos that show fairly clearly that ATi's AA is better than nV's in smoothing jaggies. I think he included those large and short videos in an NVNews.net review, but I can't find the link right now. I'll admit that the videos focused on AA quality on a near-horizontal line, where ATi's jittered-grid and gamma-corrected AA will look better than nV's ordered-grid (no gamma correction) method (whereas OG will look better than JG with diagonal lines, though ATi's gamma correction closes that gap). You can check for static differences using Mike's AA comparison tool here but AA, like trilinear filtering, is really best appreciated in motion.

You've gotten your wish, though: the 52.14 betas totally remove trilinear filtering from D3D rendering. You want trilinear? You'll have to uninstall the 52.14's and reinstall the 45.23's. This is not good for consumers, IMO.

The way they have been going lately, they will fix that. It seems like they release a performance increase first, then a later release adds the quality back while keeping the prior performance increase. In a way it seems that they have to learn how to go faster before they can do it with visual quality.

In all honesty, I've gotten what I truly desire from both ATi and Nvidia: drivers decent enough to play any game that I load without problems.

 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Dave-

It doesn't - that's their issue. I think these were the points they were trying to convey.

So Valve isn't trying to license this engine...? I was under the impression that they were, but if you say they aren't then I guess that would make sense. They don't care about engine licensees at all, but if someone happens to be willing to work with what they have they can license it; I guess if that's what's going on it makes sense. So Gabe isn't going after Sweeney and Carmack.

Oldfart-

The way it went was nVidia uses 16 bit color for DXTC1, ATi uses 32 bit color.

It's not 16bit color, it's 16bit interpolation. S3TC works by compressing on a grid; it removes data out of the center of a square (to simplify it a bit). The creators of the technique and nVidia utilized 16bit accuracy to interpolate the data and extract the missing portions (again, a simplification). The original textures used were not 16bit color, nor were the output textures 16bit color; 16bit interpolation was used to extract the data held inside the compressed information.
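Since the 16bit point keeps getting tangled up, here's a simplified sketch of how a DXT1/S3TC block decodes (my own illustration of the published format, not anyone's silicon): the two anchor colours in each 4x4 block are stored as 16-bit RGB565 values and the in-between colours are interpolated from them, and how much internal precision the hardware uses for that expansion and interpolation is an implementation choice - which is where the vendors differed.

```python
import struct

def decode_rgb565(value: int) -> tuple:
    # Each block's two anchor colours are stored as 16-bit 5:6:5 values.
    r = (value >> 11) & 0x1F
    g = (value >> 5) & 0x3F
    b = value & 0x1F
    # Expand to 8 bits per channel; real hardware may do this (and the
    # interpolation below) at higher or lower internal precision.
    return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

def decode_dxt1_block(block: bytes) -> list:
    """Decode one 8-byte DXT1 block into 16 RGB texels (opaque case only)."""
    c0_raw, c1_raw, indices = struct.unpack("<HHI", block)
    c0, c1 = decode_rgb565(c0_raw), decode_rgb565(c1_raw)
    # Two interpolated colours at 1/3 and 2/3 between the anchors.
    c2 = tuple((2 * a + b) // 3 for a, b in zip(c0, c1))
    c3 = tuple((a + 2 * b) // 3 for a, b in zip(c0, c1))
    palette = [c0, c1, c2, c3]
    # Each texel picks one of the four palette entries via a 2-bit index.
    return [palette[(indices >> (2 * i)) & 0x3] for i in range(16)]
```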

I forget the numbers, but it was something like 6:1 vs 3:1??

6:1 and 4:1.

Many claimed it was a Q3A bug

There was a particular issue with Quake3: they were compressing alpha textures and lightmaps. That should not be done with S3TC on any hardware; even ATi's image quality suffered too much because of it.

but they certainly had a bug/image quality issue that some of the nVidia fan base would not acknowledge

Following your logic the R8500 had a bug/image quality issue: it did not support FP24. The Radeon was even worse, it didn't even support PS1.0 or VS1.0. It also used fixed-function T&L and relied on a crude method for EMCM and EMBM. Do you want to 'acknowledge' these bug/image quality issues? What about the Voodoo3 not supporting larger than 256x256 textures? Was that a bug or an image quality issue? How about the Voodoo1's 800x600 resolution limit, was that a bug or an image quality issue? You are talking about a feature that nV didn't have implemented. Some of the shaders Maya uses require FP32 support to render properly; does that mean the FireGL has a bug/image quality issue since it can't handle FP32? This is the same thing you are saying now. I would say that they were missing a feature that nV had. But hey, what do I know, I'm just the biggest nVidiot on the net who for some strange reason was telling people to buy the Radeon over the GF2 back in those days......

BFG-

Uh no, you admitted that the static clip planes were a cheat, however you never admitted that the other seven cheats found by FutureMark were cheats, nor have you admitted that the other "optimisations" and questionable driver bugs in other games are cheats.

They cheated in 3DMark2K3; the reason I didn't address the rest of the accusations individually is that you have a bunch of them lumped together in a take-it-or-leave-it fashion.

You were not only claiming that shader subsitution (which was in my list) was a driver bug, you also later tried lump it into the compilation process to claim it was normal when in reality in was being modified at runtime, something that isn't normal by any stretch of the imagination.

You claim nV uses shader replacement all over the place; that I take issue with. I never said that they didn't use shader replacement under 3DMark2K3, I only commented that ATi did the same thing. If you want to bash nVidia on that one you'd better bash ATi too.

The sole purpose of such an action is to defeat any applications that expose driver cheating so again I'll ask the question, why go to such trouble to implement something that according to you isn't needed?

You familiar with Linux at all? You know how they have two different batches of drivers: one is open source and seriously limits functionality while the other is proprietary, closed, and has full functionality. Why do they do that? Because they don't want people to be able to take a close look at their architecture, which the drivers would give a good glimpse into. Now they are at the point where they are working on a compiler for their hardware that would likely expose a great deal of detail about how their shader architecture works. By trying to encrypt the information as much as possible they reduce the odds of anyone being able to reverse engineer it and use the information to gather insight into their design. Now if nVidia had always been open with their driver architecture I would say that this line of thought was a stretch, but they have always been very protective of their drivers, much to the dismay of the Linux community.

Because there was both a performance gain and less texture swapping in Quake III when DXT1 was used instead of DXT3.

6:1 vs 4:1 compression - of course memory usage changed.

So they both sucked ass and couldn't really be compared to the other vendors using 32 bits. Right?

On one side, yes; on the other side, the other vendor didn't support S3TC 3/4/5, which sucked ass compared to theirs.

You're grasping at straws as you know exactly what I mean by "initial". By the time the Radeon/GeForce/Voodoo5 benchmarking was in full swing nVidia was using 16 bit compression and gained an unfair advantage by doing so.

How was it unfair? Was it more unfair than the V5's inability to do trilinear filtering? Was it unfair to pit the V5 against the R100 and NV1X when the V5 had superior FSAA? What about the V3 not being able to run the highest quality textures? It was a factor that should have been and was looked at by numerous sites - the site I was writing for more than any other, IIRC.

Right, so according to you an nVidia hardware level problem that exists through three generations of boards and reduces image quality to gain performance isn't cheating

It was as much a cheat for nVidia as the R300 core boards not supporting PS 3.0 is. I have two games that it impacted, one of them I fixed myself.

but a software problem that exists in a single ATi driver and when removed in the next driver not only fixes the issue but also improves performance is a cheat?

It's as much a cheat as what nV has been experiencing in most of the recent benches.

And yes, Radeons support S3TC/DTXx just fine and even DXT1 looks great on them. I use S3TC (wrapped to DXT1) all the time in Quake III based games and in UT as well.

The original Radeons supported S3TC1(DXTC is the exact same thing, bit for bit, MS licensed the technology from S3). They didn't support the other levels.

Please explain what you mean by this and what your stance on the position is.

It is another hack in the same mold as ATi's and nVidia's current AF, the same way that PowerVR's box-filter trilinear approximation is, the same way that most other hacks are. It reduces IQ and improves performance. The difference between a hack and a cheat is that they aren't rendering anything 'wrong' per se, they are simply reducing the workload to get a comparable (though always lower-IQ) effect. The brilinear filtering isn't doing pure bilinear or pure trilinear (although it's not quite as bad as PVR's), and it is reducing the amount of proper AF filtering even further (although not as bad as the R200 core boards), reducing the overall IQ compared to a proper full implementation but improving it compared to what the image would look like without the technique. Actually, in the case of the PVR hack I tended to choose bilinear filtering over the box-filter pseudo-trilinear, as at least with bilinear you had a lot fewer noticeable mip transitions in terms of quantity (although the transitions themselves, on an individual basis, were far more noticeable with bilinear than with the approximation).
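To make the brilinear point concrete, here's a toy sketch of the idea (my own simplification, not a description of any shipped driver): full trilinear blends the two nearest mip levels across the whole fractional LOD range, while a brilinear-style filter behaves as pure bilinear except inside a narrow band around the mip transition, saving texture fetches at the cost of more visible mip boundaries.

```python
def trilinear_weight(lod_fraction: float) -> float:
    # Full trilinear: blend the two mip levels linearly over the whole [0, 1) range.
    return lod_fraction

def brilinear_weight(lod_fraction: float, band: float = 0.25) -> float:
    # "Brilinear": sample only the nearer mip level (pure bilinear) outside a
    # narrow window around the transition, and squeeze the blend into that window.
    lo, hi = 0.5 - band / 2, 0.5 + band / 2
    if lod_fraction < lo:
        return 0.0                      # only the higher-detail mip is sampled
    if lod_fraction > hi:
        return 1.0                      # only the lower-detail mip is sampled
    return (lod_fraction - lo) / band   # short blend across the remaining band

for f in (0.1, 0.4, 0.5, 0.6, 0.9):
    print(f, trilinear_weight(f), round(brilinear_weight(f), 2))
```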
 

jbirney

Member
Jul 24, 2000
188
0
0
First of all, I did not find this article very useful. Without full-screen images, any IQ comparisons are worthless. That, along with the fact that there is no trilinear in any D3D app with these drivers without a single peep from Anandtech, does not give me a good confidence level in the rest of the findings. If they miss the simple and obvious, then what else did they miss?

Also, I am very disappointed that Anandtech used these special benchmark drivers. They did the same thing back with the 8500 launch and benched ATI's new card against drivers that we never saw. They stated in part 1 that these Det 50s would be available soon. Today, about two weeks later, there are still no drivers to download. Once again I find it very poor that Anandtech benches with non-public drivers that we will never see. And no, two weeks is not soon. You bench with public drivers or you don't bench at all. Pretty simple concept.


Ben,

Seriously though, any image glitch in a game from nV now is considered a cheat instead of the possibility of it being a bug.

Given that the majority of these "bugs" seem to only affect games/apps used in benchmarks, and given their documented 3DMark cheats, then yep. That's the hole NV dug for themselves when they admittedly overstepped the optimization bounds. Remember how every ATI driver was scrubbed for a while after the Q3 fiasco? I am not saying it's right. But that's the reaction here, at nVnews, 3DGPU, Rage3D, TR, FS and other sites.


nV's drivers certainly have some bugs, haven't run in to ones like this though.

Cat 3.8 fixed that issue, and my son and I have been enjoying playing MP for the last 4 days with no lock-ups on his 8500. However, that little SOB is much better with the Banshee than I am... oh well. Btw, there is a command line switch to get AA to work in Halo on the R9700. It disables the camouflage effect.


This rigged-up test is continually being used as an example of how much faster ATi's parts are than nV's in shader-intensive benches.

Not really. When you consider the fact that 3DMark, ShaderMark, FutureMark, Halo and TR:AOD all showed the same thing. Granted, the new Dets may help. But until they are public and have been analyzed by other sites (sorry, I don't trust AT anymore in this capacity), they really don't count. Sorry, but I just don't think non-public drivers are a good basis for real-world performance.


For nV, I think the overwhelming majority of 'cheats' were bugs.

Sorry, I disagree. Yes, you say you lumped all of the 3DMark issues into one, but that's where most of the cheats come in. Besides, there were 8 cheats, not one, and that's the majority we have been talking about. Then we have the issue where FS found that when you run custom timedemos on Q3/SS you get drastically different results. Then what about the total lack of trilinear filtering in any D3D app/game? That's not a bug. That's there only to increase FPS scores, and it's a hack/cheat. It was an option that NV added to their drivers, not a built-in hardware limitation like ATI's AF... And I know you're going to say that ATI's AF is the same thing. No, it's not. It was a design trade-off ATI's engineers made while they were working on the R300 core. We knew about this limitation from day one. The trilinear cheat is much different.


As far as AF, I've never seen an ATI card with any driver that did it properly, why don't you point one out.

Current ATI R3xx boards do AF by the definition of AF. However, they do drop the level of AF, and yes, it's an optimization that I wish I could avoid. However, as mentioned before, dropping trilinear only to increase FPS scores is not a bug. It's a cheat, IMHO. If I am paying $500 for a card that could do trilinear before but now can't because NV needs a few more FPS, that's sad.
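Just to illustrate what "dropping the levels of AF" means in practice (a toy model only; the angles and the 0.75 fall-off factor are invented for illustration, not ATI's actual hardware logic): the effective AF degree falls off as a surface rotates away from the axis-aligned cases, so a requested 16x might only be fully honored on near-horizontal or near-vertical surfaces.

import math

def effective_af(requested, surface_angle_deg):
    # Toy model of angle-dependent AF: full strength at 0/90 degrees,
    # reduced toward 45 degrees. The penalty curve is made up.
    penalty = abs(math.sin(math.radians(2 * surface_angle_deg)))
    level = max(2.0, requested * (1.0 - 0.75 * penalty))
    # snap down to the nearest power-of-two AF degree
    return 2 ** int(math.log2(level))

for angle in (0, 15, 30, 45):
    print(angle, "deg ->", effective_af(16, angle), "x AF")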


I don't believe Gabe in any way shape or form on that. Not even close.

Sorry, as BFG said, that's your issue. Just because you do not believe him does not make it false. Nor does the fact that he said it make it true. However, he is the one in the best position to make that call, not you or I. Besides, did you know that Activision was paid 4 million by NV for Doom3 rights?

You realize that most of your ranting about nV cheats and changing shader code has vanished with the latest drivers, and they are faster than before?

Again, that remains to be seen, and with no public drivers we can take that claim with a truckload of salt. So far, AT is no longer a reliable source for IQ tests.


Apply Occam's razor to your logic and see what you think.

Funny you mention that. Look at the info here:
NV has never before been the leader that just lost the lead (yes, they have been in the leader role, but not the just-lost-the-lead role). They stopped being the performance or technology leader the day the R300 shipped. They lost the Xbox2 contract to ATI and suffered massive losses due to the delay of the FX line. Then, when the FX line shipped, it showed massive issues with DX9 code. NV went on a campaign to discredit 3DMark, dropped out of the 3DMark beta program, then threatened legal action, and finally rejoined the beta program only after FutureMark agreed to let certain optimizations in. Then you have NV saying it over-optimized (not driver/compiler bugs) and admitting to overzealous optimizations. And on HardOCP they posted their new driver optimization guidelines. All of a sudden there are drivers removing the option to do trilinear in D3D (which breaks their own optimization guidelines), the fact that custom timedemos show no difference in ATI scores yet large drops in NV scores, and then all of Valve's claims. Finally, a few days ago ATI's stock was higher than nV's, which is a first in a long time.

So you add it up and what do you have? That tells me (my theory) that NV is still scared of being in 2nd place to ATI and has to increase their performance by continuing known driver optimizations which trade off IQ for speed. Your theory seems to be based on new driver bugs that only crop up in 3D benchmark games/applications, from a driver team that is supposedly gold, with many remarkably timed driver releases that disable anti-cheat programs the moment they appear, legal maneuvering with 3DMark that has turned into a soap opera, a major developer (Valve) conspiring to hurt NV, which would in turn hurt Valve's sales since NV fans have already said they won't buy HL2 due to the poor performance on FX cards, unexplainable custom demo performance, and drivers that have yet to be made public so anyone can verify whether any IQ changes were made. Again, by Occam's razor, which you quoted, given two explanations the less complicated one is more often correct. Pretty easy to see which one is less complex than the other. But again, it's not Occam's law, so it may just be wrong this time :)