New 3dmark03 patch - nVIDIA cheating ... again???

THUGSROOK

Elite Member
Feb 3, 2001
11,847
0
0
i think i have found the perfect solution to this problem....

i uninstalled 3dmark2001 and 3dmark2003 from my system ;)
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: Genx87
nVidia's reply about their compiler is absolute BS. A proper realtime shader compiler/optimizer (read: like the ones ATI and Matrox obviously have) detects the general cases of a shader instruction or group of instructions and can re-order the instructions or replace the instructions with faster (yet correct) instructions. This methodology allows optimizations to occur on shaders the moment they are introduced (such as while a developer is creating a shader). This is how MS's compilers and optimizers work, right?


This is what the unified compiler does... see Anand's article on the 5700 Ultra. It will reorder the instructions to better fit Nvidia's arch.

I don't doubt that their unified compiler architecture is doing this. But NVIDIA is saying that Futuremark stopped their compiler from working (or from executing on the GPU, or something -- I haven't seen anything with a straight answer yet) with their new patch, which is why the scores dropped so much.

Futuremark says all they did was change the order of some of their rendering instructions to prevent app detection. If the unified compiler is just reordering instructions anyway, why would that make it stop working? The general idea is that it should take *any* DX9 code and make it more efficient, not just replace shaders with new ones that are hardcoded into the drivers.
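
For illustration, here's a toy sketch (the IR and every name in it are my own invention, not anything from NVIDIA's actual driver) of the kind of semantics-preserving reordering an optimizing shader compiler is supposed to handle. Swapping two independent instructions changes nothing about the result:

```cpp
// Toy shader IR and interpreter (entirely invented for illustration).
// The point: reordering *independent* instructions preserves the result,
// which is exactly the transform a real-time shader optimizer must handle.
#include <cstdio>
#include <vector>

struct Instr { char op; int dst, a, b; };  // op: '*' = mul, '+' = add

// Safe to swap two instructions when neither reads or writes a register
// that the other writes.
bool independent(const Instr& x, const Instr& y) {
    return x.dst != y.a && x.dst != y.b &&
           y.dst != x.a && y.dst != x.b && x.dst != y.dst;
}

void run(const std::vector<Instr>& prog, float r[]) {
    for (const Instr& i : prog)
        r[i.dst] = (i.op == '*') ? r[i.a] * r[i.b] : r[i.a] + r[i.b];
}

int main() {
    // r2 = r0*r1; r3 = r0+r1; r4 = r2*r3 -- the first two instructions
    // don't depend on each other, so their order is irrelevant.
    std::vector<Instr> original  = {{'*', 2, 0, 1}, {'+', 3, 0, 1}, {'*', 4, 2, 3}};
    std::vector<Instr> reordered = {{'+', 3, 0, 1}, {'*', 2, 0, 1}, {'*', 4, 2, 3}};

    std::printf("safe to swap: %s\n",
                independent(original[0], original[1]) ? "yes" : "no");

    float ra[5] = {2, 3}, rb[5] = {2, 3};
    run(original, ra);
    run(reordered, rb);
    std::printf("original r4=%g, reordered r4=%g\n", ra[4], rb[4]);  // both 30
    return 0;
}
```

If the "unified compiler" really does this kind of dependency analysis, Futuremark's reshuffling should be the easiest case imaginable for it.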
 

spam

Member
Jul 3, 2003
141
0
0
Originally posted by: THUGSROOK
i think i have found the perfect solution to this problem....

i uninstalled 3dmark2001 and 3dmark2003 from my system ;)

Did you ever hear about the desert nomad who woke up one night in his tent and was hungry? He lit a lamp and ate a fig, then noticed it had a worm, so he threw it out. He picked up another fig and bit it, found another worm, and looked at his bowl of figs for a moment. Then he picked up his lamp, blew it out, and ate the rest of the figs.

Go ahead Thugsrook - eat your worms.
 

THUGSROOK

Elite Member
Feb 3, 2001
11,847
0
0
spam ~ that's not what i meant.

what i do mean is that i'm not wasting any more time on the figs ;)
 

stardust

Golden Member
May 17, 2003
1,282
0
0
What's with these personal attacks lol... Thugsrook's comment was just what it was, a comment. Spam, you don't need to get all worried about what people think... unless you are the big bad "T" word.

I think this thread was created to generate controversy.

Don't reply to me
 

quikah

Diamond Member
Apr 7, 2003
4,198
743
126
An update on the Beyond3D 340 patch review is saying that the IQ differences are supposed to be there (the hair, fire, and fog are random). So now we have practically no IQ differences but a big drop in performance for the FX card. This would point to having replacement shader code in their drivers (bad), BUT it also means that the FX CAN perform well in DX9-type games, it just takes a lot of extra work.
 

spam

Member
Jul 3, 2003
141
0
0
You are right Stardust,

I meant it to be funny but if it was offensive I do apologize to Thugsrook. I did not mean to have it taken as a personal insult.
As for not replying to you, well - you'll get over it. :D
 

spam

Member
Jul 3, 2003
141
0
0
Originally posted by: quikah
An update on the Beyond3D 340 patch review is saying that the IQ differences are supposed to be there (the hair, fire, and fog are random). So now we have practically no IQ differences but a big drop in performance for the FX card. This would point to having replacement shader code in their drivers (bad), BUT it also means that the FX CAN perform well in DX9-type games, it just takes a lot of extra work.

That is the problem with this situation: how do you give a fair comparison between the contenders when one uses optimized code and the others (ATI and Matrox) apparently do not? If it goes unaddressed, then whatever shred of value this benchmark has left is lost. Remember that Derek Perez said that it is alright to optimize and hand-code for any benchmark that is not a game! At least that is the implication I took from Dave Baumann's conversation with him at Beyond3D. In that case inflated scores would be given on all of these benchmarks.

Unless someone can convince the trade magazines and web review sites not to use these benchmarking utilities, it will be an unfair advantage for Nvidia.

- Futuremark has to draw the line, otherwise it loses its relevance
 

Rage187

Lifer
Dec 30, 2000
14,276
4
81
it already did when they made a benchmark that did not represent how a true dx9 game would function.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
I don't doubt that their unified compiler architecture is doing this. But NVIDIA is saying that Futuremark stopped their compiler from working (or from executing on the GPU, or something -- I haven't seen anything with a straight answer yet) with their new patch, which is why the scores dropped so much.

Why would you doubt it? Do you have proof this is not what it is really doing?

Futuremark says all they did was change the order of some of their rendering instructions to prevent app detection. If the unified compiler is just reordering instructions anyway, why would that make it stop working? The general idea is that it should take *any* DX9 code and make it more efficient, not just replace shaders with new ones that are hardcoded into the drivers.


If it can detect and reorder, and they made it so the Unified Compiler can't work at its fullest potential, then it surprises you that this lowers performance?
 

Rogodin2

Banned
Jul 2, 2003
3,219
0
0
It is utterly naive to think that NVIDIA would ever stop optimizing for synthetic benchmarks given that they've said several times that they would not stop.

Unless you can find me somewhere where NVIDIA explicitly clarified the statement to mean what the public has interpreted it to mean, I don't think you can hold NVIDIA reasonably responsible for it. Sure, they probably worded it ambiguously... but that's hardly the point.

We've known for some time now that NVIDIA has been breaking their guidelines by changing IQ in UT2003 and 3dmark2001/3dmark03 anyway. Is there anyone that is actually surprised that scores fell with this new patch? I think the interesting thing is why the Pixel Shader 2.0 test didn't lose any performance. Does anyone know if Futuremark changed the shader here to prevent detection?
 

JavaTenor

Junior Member
Dec 17, 2002
10
0
0
This "lowered Universal Compiler performance" excuse is utter crap. If you run the same 3dMark 3.3 -> 3.4 test on the 45.23 or the 44.03 NVidia drivers, you get similar drops in performance, and neither of those drivers contain the vaunted "Universal Compiler". Unless "Universal Compiler" is a codename for "specific shader replacements when a particular benchmark is detected", of course.

And the reason that this is important is that I doubt NVidia will be able or willing to write new hand-crafted shaders for every 3D game that gets released, so anyone wanting to play a game that's not a common benchmark gets screwed.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
After the pasting nVidia received the first time they did this, coupled with the PR rubbish they then published (which basically stated they weren't going to perform such "optimisations" again), one might be forgiven for thinking that nVidia's cheating was behind them. But clearly this isn't the case.

Also, those of you who are stating that 3DMark is a synthetic benchmark and is therefore somehow invalid are completely missing the point of these findings, findings which can (and already have been proven to) extend to real games.

As it stands now, nobody has any real idea as to how fast the FX line really is since nobody really knows how much cheating nVidia is doing.
 

Rogodin2

Banned
Jul 2, 2003
3,219
0
0
The synthetic benchmarks are supposed to be the 1/4 mile for video cards, but nvidia is doing their best to fubar anything that levels the playing field.

It's very disappointing as a small business owner. I know that I could fleece my customers, but since I'm an honest man I won't do it regardless of how much more money I could make.

Read The Jungle, all of you "ginfests"

rogo
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: Genx87
I don't doubt that their unified compiler architecture is doing this. But NVIDIA is saying that Futuremark stopped their compiler from working (or from executing on the GPU, or something -- I haven't seen anything with a straight answer yet) with their new patch, which is why the scores dropped so much.

Why would you doubt it? Do you have proof this is not what it is really doing?

I said I *don't* doubt it -- I believe fully that NVIDIA actually has some sort of optimizing real-time compiler running on shaders that get passed into the drivers. But *how* would they possibly stop the driver from optimizing the code? The driver code runs at a *lower* level than the application, and the application only deals with the DirectX API. The only thing they can do is modify the actual shaders and operations, which should have no impact whatsoever on the functioning of the compiler.

Futuremark says all they did was change the order of some of their rendering instructions to prevent app detection. If the unified compiler is just reordering instructions anyway, why would that make it stop working? The general idea is that it should take *any* DX9 code and make it more efficient, not just replace shaders with new ones that are hardcoded into the drivers.


If it can detect and reorder, and they made it so the Unified Compiler can't work at its fullest potential, then it surprises you that this lowers performance?

What did they do that's making it not work at its fullest potential? If the compiler can "detect and reorder" DX9 shader code (which is its *primary feature*!), then it should easily be able to handle Futuremark flipping the order or registers of a few shader operations in its code. That's the most trivial case possible for the compiler to handle! And if they made the code less efficient, why aren't ATI's boards affected? Wait, clearly 3DMark03 must be detecting when it's running on an NVIDIA board and making it go 30% slower...

The most logical explanation here is that NVIDIA is still doing its "replace known shader X with hand-optimized shader Y" routine -- which is application detection, and thereby prohibited by Futuremark. It's also in violation of their own optimization guidelines!

NVIDIA will claim, I'm sure, that prohibiting them from optimizing like this for benchmarking applications is unrealistic, since they optimize for many real games. But giving them a green light to optimize for benchmarking programs is a slippery slope -- NVIDIA will inevitably spend more time working on popular games and benchmarks, much to the detriment of smaller game developers and the buying public. Is an 'optimized' benchmark going to have any correlation with real-world performance? Ideally NVIDIA should continue improving their overall driver performance and getting their DX9 compiler to put out code that can compete with their hand-optimized shaders -- and with ATI -- rather than relying on the crutch of faking it with app detection and preprogrammed shaders.
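
To be clear about what "application detection" means here, a purely hypothetical sketch (every name, shader string, and the fingerprinting scheme are invented by me, not taken from any driver) of the mechanism being alleged:

```cpp
// Hypothetical sketch of the *alleged* mechanism (nothing here is NVIDIA's
// code): fingerprint incoming shader bytecode and, on a match, substitute a
// hand-tuned replacement instead of compiling what the application sent.
#include <cstdint>
#include <cstdio>
#include <string>
#include <unordered_map>

// Toy fingerprint (FNV-1a) standing in for whatever matching a driver might do.
uint64_t fingerprint(const std::string& bytecode) {
    uint64_t h = 14695981039346656037ull;
    for (unsigned char c : bytecode) { h ^= c; h *= 1099511628211ull; }
    return h;
}

// Stand-in for the honest path: the general-purpose optimizing compile.
std::string compileGenerically(const std::string& bytecode) {
    return "generic compile (" + std::to_string(bytecode.size()) + " bytes)";
}

int main() {
    const std::string build330 = "mul r2,r0,r1; add r3,r0,r1; mul r4,r2,r3;";
    const std::string build340 = "add r3,r0,r1; mul r2,r0,r1; mul r4,r2,r3;";

    // Replacement table keyed by fingerprints of *known* benchmark shaders.
    const std::unordered_map<uint64_t, std::string> replacements = {
        {fingerprint(build330), "hand-tuned replacement shader"},
    };

    auto select = [&](const std::string& bc) {
        auto it = replacements.find(fingerprint(bc));
        return it != replacements.end() ? it->second : compileGenerically(bc);
    };

    std::printf("build 330 shader -> %s\n", select(build330).c_str());
    std::printf("build 340 shader -> %s\n", select(build340).c_str());
    return 0;
}
```

Reordering even one instruction changes the fingerprint, so the hand-tuned path silently stops firing and the driver falls back to the generic path -- which is exactly the kind of drop the 340 patch produced.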
 

Sazar

Member
Oct 1, 2003
62
0
0
just fyi for everyone who thinks FM is doing something damaging to nvidia's compiler

Luciano Alibrandi, European Product PR Manager for NVIDIA Corporation, has made a correction regarding previous information about NVIDIA's Unified Compiler and the 3DMark03 benchmark after getting into the details with the company's engineers. Apparently, the statement claiming that NVIDIA's Unified Compiler, deployed to optimize pixel shader performance, is disabled by the new version of 3DMark03 is not fully correct.

"I would like to inform you that a part of my response was not accurate. I stated that the compiler gets disabled by 3DMark, and that is in fact not true," he said.

So, after all, NVIDIA has denied the problems between the Unified Compiler technology and the latest version of the popular 3DMark03 benchmark. As a result, we may now conclude that the accusations in Futuremark's direction from Hans-Wolfram Tismer, a Managing Director for Gainward Europe GmbH, were not correct at all.

In October 2003 Santa Clara, California-based NVIDIA Corporation introduced its Unified Compiler, integrated in its ForceWare 52.16 drivers, to optimize Pixel Shader code for the NVIDIA GeForce FX architecture in an attempt to improve the performance of graphics cards powered by NVIDIA's latest GPUs in a variety of demanding applications.

NVIDIA said that the Unified Compiler technology tunes DirectX 9.0 execution on the GeForce FX GPUs, and can be used to correct any similar conflict that arises with future APIs. NVIDIA described the Unified Compiler as an automatic tuning tool that optimizes Pixel Shader performance in all applications, not just specific ones. Officials from NVIDIA again stressed today that one of the things the Unified Compiler does is reorder the lines of code in a shader. By simply doing this the performance can increase dramatically, since the GeForce FX technology is very sensitive to instruction order. So, if the re-ordering is not happening, NVIDIA's GeForce FX parts take a performance penalty.

Since the compiler is still active with the new version of 3DMark03, there is currently no explanation for the performance drops of certain GeForce FX parts in the latest build 340 of the famous 3DMark03.

"The only change in build 340 is the order of some instructions in the shaders or the registers they use. This means that the new shaders are mathematically equivalent to the previous shaders. A GPU compiler should process the old and the new shader code with basically the same performance," said Tero Sarkkinen, Executive Vice President of Sales and Marketing for Futuremark Corporation, the developer of the 3DMark03 application.

His statement was indirectly confirmed by an ATI official yesterday, who said: "ATI has had a compiler since CATALYST 3.6. We did not have any problems with Futuremark's changes."

ergo... perhaps some PR people should learn about the technical feasibility of their statements before they make them..

so anyways.. back to square one... it seems there were application-specific optimizations, not meeting FM's guidelines, that were seemingly disabled...
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
According to Jeff, you'll never see 3DMark report the same (or even similar) results twice. Ask him for an explanation.

PS: Jeff, I have been trying to download 3DMark03 since your first post, I reinstalled fresh recently, and wanted to see my scores using 340, but the download keeps failing for me. I will continue trying, and let you know once I've run the benchmarks.
Since I'm just me, why don't you ask HardOCP, which has a much better reputation than I do. Apparently they think the same thing I do.

*EDIT* By the way... I hate being misquoted and having my words twisted...

Originally posted by: Jeff7181
Looks like a bunch of BS to me... they're showing the 9800XT scores changing by only 1 point... that's HIGHLY unlikely... run 3DMark twice in a row without changing ANYTHING and I guarantee you won't come within 1 point of the previous test.
 

JustStarting

Diamond Member
Dec 13, 2000
3,135
0
76
excerpt from the HardOCP article on the 340 patch thing....

"This reminds me of the early days of CPUs, when for some weird reason the industry felt the need to run "unoptimized code" through fear of the "new" optimizing compilers. It took the CPU industry a couple of decades to accept optimizing compilers as legitimate, and of course now everyone assumes that as standard practice. Let's hope that we've learned from that experience and that it doesn't take the GPU industry anything near that time to accept compiler technology as legitimate and proper in this new age of programmable GPUs."


Funny how all the guys knocking Nvidia have NF2 mobos?? Do you feel "robbed" with your NVidia mobo and Barton running optimized code for performance?....... Doubt it, cause you own it.

Oh... and thanks for boosting my stock shares the last few months buying those NF2 mobos... you bought my FX5950 for me!!

Getting 6547 with the old 330 build.... Nvidia "cheats" included.
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
Originally posted by: JustStarting
excerpt from the HardOCP article on the 340 patch thing....

"This reminds me of the early days of CPUs, when for some weird reason the industry felt the need to run "unoptimized code" through fear of the "new" optimizing compilers. It took the CPU industry a couple of decades to accept optimizing compilers as legitimate, and of course now everyone assumes that as standard practice. Let's hope that we've learned from that experience and that it doesn't take the GPU industry anything near that time to accept compiler technology as legitimate and proper in this new age of programmable GPUs."


Funny how all the guys knocking Nvidia have NF2 mobos?? Do you feel "robbed" with your NVidia mobo and Barton running optimized code for performance?....... Doubt it, cause you own it.

Oh... and thanks for boosting my stock shares the last few months buying those NF2 mobos... you bought my FX5950 for me!!

Getting 6547 with the old 330 build.... Nvidia "cheats" included.

MMX, SSE, 3DNow, SSE2... hmmmmm... I wonder what those do?
 

JustStarting

Diamond Member
Dec 13, 2000
3,135
0
76
Originally posted by: Jeff7181
Originally posted by: JustStarting
excerpt from the HardOCP article on the 340 patch thing....

"This reminds me of the early days of CPUs, when for some weird reason the industry felt the need to run "unoptimized code" through fear of the "new" optimizing compilers. It took the CPU industry a couple of decades to accept optimizing compilers as legitimate, and of course now everyone assumes that as standard practice. Let's hope that we've learned from that experience and that it doesn't take the GPU industry anything near that time to accept compiler technology as legitimate and proper in this new age of programmable GPUs."


Funny how all the guys knocking Nvidia have NF2 mobos?? Do you feel "robbed" with your NVidia mobo and Barton running optimized code for performance?....... Doubt it, cause you own it.

Oh... and thanks for boosting my stock shares the last few months buying those NF2 mobos... you bought my FX5950 for me!!

Getting 6547 with the old 330 build.... Nvidia "cheats" included.

MMX, SSE, 3DNow, SSE2... hmmmmm... I wonder what those do?

My point exactly!! In a few years everyone will expect optimization once it's an industry standard. I'll stick with the optimization weenies for now and see where this all ends up.

I feel sooo cheated now.. I own an NF2 mobo and an AMD CPU... Oh wait... I can't play both sides of the fence.

Or can I??

 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: JustStarting
excerpt from the HardOCP article on the 340 patch thing....

"This reminds me of the early days of CPUs, when for some weird reason the industry felt the need to run "unoptimized code" through fear of the "new" optimizing compilers. It took the CPU industry a couple of decades to accept optimizing compilers as legitimate, and of course now everyone assumes that as standard practice. Let's hope that we've learned from that experience and that it doesn't take the GPU industry anything near that time to accept compiler technology as legitimate and proper in this new age of programmable GPUs."


Funny how all the guys knocking Nvidia have NF2 mobos?? Do you feel "robbed" with your NVidia mobo and Barton running optimized code for performance?....... Doubt it, cause you own it.

Oh... and thanks for boosting my stock shares the last few months buying those NF2 mobos... you bought my FX5950 for me!!

Getting 6547 with the old 330 build.... Nvidia "cheats" included.

The comparison to optimizing compilers for programming languages is completely inapplicable. An optimizing C compiler had better damn well put out the same result -- you can't degrade the quality of a series of computations without screwing everything up in general purpose programming, whereas a few missing pixels in graphics apps won't make it crash.

A better analogy here would be NVIDIA trying to compare a precompiled program running on their system against a JIT-compiled program on ATI's system and claiming that their computer is faster, whereas if they both run the same exact thing it's the other way around. The question is: would you rather have a faster computer to begin with, or hope that they can keep coming up with a better compiler to make up the difference?
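
To make that contrast concrete (a toy example of my own, nothing vendor-specific): a legitimate compiler optimization like strength reduction must preserve the answer bit-for-bit, while the graphics-world equivalent of "a few missing pixels" is quietly trading precision for speed:

```cpp
// Toy contrast (my own example): a semantics-preserving optimization vs. a
// precision-trading one. A C compiler may only do the former; the allegation
// against the drivers is effectively the latter.
#include <cstdio>

// Legitimate: x / 2.0 strength-reduced to x * 0.5. Both are exact
// power-of-two scalings, so the result is bit-identical.
double honest(double x) { return x * 0.5; }

// Not legitimate for a C compiler: silently dropping to single precision
// for speed. Faster, but no longer the same answer.
float lossy(double x) { return static_cast<float>(x) * 0.5f; }

int main() {
    double x = 1.0000000123456789;
    std::printf("honest: %.17g\n", honest(x));         // ~0.5000000061728...
    std::printf("lossy : %.17g\n", (double)lossy(x));  // exactly 0.5
    return 0;
}
```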
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
The question is: would you rather have a faster computer to begin with, or hope that they can keep coming up with a better compiler to make up the difference?
correction...

The question is: would you rather have a faster computer to begin with that may not always work, or hope that they have the resources to modify the compiler for new games?

:D
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
or hope that they have the resources to modify the compiler for new games?

That is a significant problem, IMHO, since DX9 games are just going to get more and more complex and even NVIDIA cannot hope to create hand-coded optimizations like this for every game. Clearly their much-vaunted "unified compiler" -- at least the reordering component of it -- is not all it's cracked up to be. Can they improve it a great deal? Probably. Will they do so, or do it quickly? I don't know, but I'd rather have a sure thing than a crapshoot.

ATI's recent drivers (despite your comments) seem to have few to no problems with the vast majority of current games -- or at least the benchmarking sites aren't running around complaining about them, and they should know. Compare that to NVIDIA, whose drivers are not bug-free and are also being questioned in terms of IQ all over the place.
 

Sazar

Member
Oct 1, 2003
62
0
0
did you guys even bother to read the quote I posted from xbit labs... attributed to the nvidia PR director for Europe?

:confused:

um... FM is doing NOTHING to the compiler... the nvidia PR guru himself admits his claims were bogus (as we all knew anyways)...

therefore the optimizations have NOTHING to do with the compiler's automatic shader thinger or whatever they are calling it nowadays...

nvidia are specifically targeting 3dmark.. these are not generic optimizations, else they would not suffer a drop in performance...
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
As someone else pointed out... then why are the scores for the CPU tests affected as well?