Originally posted by: Genx87
nVidia's reply about their compiler is absolute BS. A proper realtime shader compiler/optimizer (read: like the ones ATI and Matrox obviously have) detects the general cases of an instruction or group of instructions and can reorder them or replace them with faster (yet correct) instructions. This methodology allows optimizations to occur on shaders the moment they are introduced (such as while a developer is creating a shader). This is how MS's compilers and optimizers work, right?
This is what the unified compiler does... see Anand's article on the 5700 Ultra. It reorders the instructions to better fit Nvidia's arch.
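To make the reordering idea concrete, here is a toy sketch of what such a pass does. The instruction tuples, the dependency test, and the "avoid back-to-back texture fetches" rule are all invented for illustration; this is not NVIDIA's actual compiler, just the general technique it claims to use: move independent instructions around so the result stays identical but the instruction mix suits the hardware better.

```python
# Toy sketch of an instruction-reordering pass. Real drivers model latency,
# register pressure, and pairing rules in far more detail; this only shows
# that reordering can respect data dependencies and still change the schedule.

def depends_on(a, b):
    """True if instruction a cannot be moved above b (data hazard)."""
    _, a_dest, a_s1, a_s2 = a
    _, b_dest, b_s1, b_s2 = b
    return (b_dest in (a_s1, a_s2)      # a reads what b writes
            or a_dest == b_dest          # both write the same register
            or a_dest in (b_s1, b_s2))   # a overwrites something b still reads

def reorder(shader):
    """Greedily pick the next instruction whose dependencies are satisfied,
    preferring not to issue two texture fetches in a row."""
    scheduled, pending = [], list(shader)
    while pending:
        for i, instr in enumerate(pending):
            # Only legal if nothing still waiting ahead of it must run first.
            if any(depends_on(instr, earlier) for earlier in pending[:i]):
                continue
            # Heuristic: interleave math between expensive texture fetches.
            if scheduled and scheduled[-1][0] == "tex" and instr[0] == "tex" \
                    and i + 1 < len(pending):
                continue
            scheduled.append(pending.pop(i))
            break
        else:
            scheduled.append(pending.pop(0))  # nothing better, keep source order
    return scheduled

original = [
    ("tex", "r0", "t0", "s0"),   # texture fetch
    ("tex", "r1", "t1", "s1"),   # texture fetch
    ("mul", "r2", "c0", "c1"),   # independent math
    ("add", "r3", "r0", "r1"),
]
print(reorder(original))  # the mul gets hoisted between the two fetches
```

The output is mathematically identical either way; only the issue order changes, which is exactly the kind of rewrite a hardware-sensitive architecture would benefit from.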
Originally posted by: THUGSROOK
i think i have found the perfect solution to this problem....
i uninstalled 3dmark2001 and 3dmark2003 from my system!
Originally posted by: quikah
An update on the Beyond3D 340 patch review says that the IQ differences are supposed to be there (the hair, fire, and fog are random). So now we have practically no IQ differences but a big drop in performance for the FX card. This would point to their drivers having replacement shader code (bad), BUT it also means that the FX CAN perform well in DX9-type games, it just takes a lot of extra work.
Originally posted by: Genx87
I don't doubt that their unified compiler architecture is doing this. But NVIDIA is saying that Futuremark stopped their compiler from working (or from executing on the GPU, or something -- I haven't seen anything with a straight answer yet) with their new patch, which is why the scores dropped so much.
Why would you doubt it? Do you have proof this is not what it is really doing?
Futuremark says all they did was change the order of some of their rendering instructions to prevent app detection. If the unified compiler is just reordering instructions anyway, why would that make it stop working? The general idea is that it should take *any* DX9 code and make it more efficient, not just replace shaders with new ones that are hardcoded into the drivers.
It can detect and reorder, but they made it so the Unified Compiler can't work at its fullest potential. Then it surprises you that this lowers performance?
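The distinction being argued about here, a general optimizer versus detect-and-replace, can be sketched like this. Both driver strategies below are hypothetical simplifications made up for the sake of the argument, not anything documented by NVIDIA or ATI:

```python
import hashlib

# Strategy 1: shader detection. Hand-tuned replacements are keyed on a hash of
# the exact shader text, so changing even one instruction or register name
# (which is all build 340 did) misses the table and the speedup vanishes.
HAND_TUNED = {
    # hash of a benchmark's original shader -> pre-written fast version
    "3a1f...": ["hand-written replacement shader"],   # illustrative entry only
}

def replace_known_shader(shader_text: str):
    key = hashlib.sha1(shader_text.encode()).hexdigest()
    return HAND_TUNED.get(key, shader_text.splitlines())

# Strategy 2: a general compiler pass. It never asks *which* shader it was
# handed; it applies equivalence-preserving rewrites to anything, so shuffling
# the source barely changes the outcome.
def generic_optimize(shader_text: str):
    out = []
    for line in shader_text.splitlines():
        op, dest, a, b = line.split()
        # toy peephole rule: x*2 is the same math as x+x but may issue faster
        if op == "mul" and b == "2":
            out.append(f"add {dest} {a} {a}")
        else:
            out.append(line)
    return out

shader = "mul r0 r1 2\nadd r2 r0 c0"
print(replace_known_shader(shader))  # unknown hash -> falls back to the original
print(generic_optimize(shader))      # rewrite applies no matter who wrote the shader
```

If the driver is doing the second thing, reordering the benchmark's shaders should barely matter; if it is doing the first, a one-line change is enough to make the scores fall back to unoptimized levels.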
Luciano Alibrandi, European Product PR Manager for NVIDIA Corporation, has made a correction regarding previous information about NVIDIA's Unified Compiler and the 3DMark03 benchmark after getting into the details with the company's engineers. Apparently, the statement claiming that NVIDIA's Unified Compiler, deployed to optimize pixel shader performance, is disabled by the new version of 3DMark03 is not fully correct.
"I would like to inform you that a part of my response was not accurate. I stated that the compiler gets disabled by 3DMark, and that is in fact not true," he said.
So, after all, NVIDIA has denied that there is any conflict between the Unified Compiler technology and the latest version of the popular 3DMark03 benchmark. As a result, we may now conclude that the accusations aimed at Futuremark by Hans-Wolfram Tismer, Managing Director for Gainward Europe GmbH, were not correct at all.
In October 2003, Santa Clara, California-based NVIDIA Corporation introduced its Unified Compiler, integrated in its ForceWare 52.16 drivers, to optimize Pixel Shader code for the NVIDIA GeForce FX architecture in an attempt to improve the performance of graphics cards powered by NVIDIA's latest GPUs in a variety of demanding applications.
NVIDIA said that the Unified Compiler technology tunes DirectX 9.0 execution on GeForce FX GPUs and can be used to correct any similar conflict that arises with future APIs. NVIDIA described the Unified Compiler as an automatic tuning tool that optimizes Pixel Shader performance in all applications, not just specific ones. Officials from NVIDIA again stressed today that one of the things the Unified Compiler does is reorder the lines of code in a shader. Simply doing this can increase performance dramatically, since the GeForce FX architecture is very sensitive to instruction order. So, if the reordering is not happening, NVIDIA's GeForce FX parts pay a performance penalty.
Since the compiler is still active with the new version of 3DMark03, there is currently no explanation for the performance drops certain GeForce FX parts show in the latest build 340 of the benchmark.
"The only change in build 340 is the order of some instructions in the shaders or the registers they use. This means that the new shaders are mathematically equivalent to the previous shaders. A GPU compiler should process the old and the new shader code with basically the same performance," said Tero Sarkkinen, Executive Vice President of Sales and Marketing for Futuremark Corporation, the developer of the 3DMark03 application.
His statement was indirectly confirmed by an ATI official yesterday, who said: "ATI has had a compiler since CATALYST 3.6. We did not have any problems with Futuremark's changes."
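Futuremark's "mathematically equivalent" claim above is easy to illustrate with a made-up example (this is not actual 3DMark03 shader code): two instruction sequences that differ only in ordering and one temporary register, run through a tiny interpreter to show they produce the same result.

```python
# Illustrative only: a minimal interpreter for three-operand instructions,
# used to check that two differently ordered shaders compute the same value.

def run(program, regs):
    regs = dict(regs)
    for op, dest, a, b in program:
        x, y = regs[a], regs[b]
        regs[dest] = x * y if op == "mul" else x + y
    return regs["r2"]

old_shader = [  # build 330-style ordering (made up for illustration)
    ("mul", "r0", "c0", "c1"),
    ("mul", "r1", "c2", "c3"),
    ("add", "r2", "r0", "r1"),
]
new_shader = [  # build 340-style: same math, different order and temp register
    ("mul", "r3", "c2", "c3"),
    ("mul", "r0", "c0", "c1"),
    ("add", "r2", "r0", "r3"),
]

inputs = {"c0": 0.5, "c1": 2.0, "c2": 3.0, "c3": 4.0}
assert run(old_shader, inputs) == run(new_shader, inputs)  # identical result
```

If a driver's compiler really is a general pass, both versions should schedule to roughly the same machine code; the only way a change this small tanks performance is if something was keyed to the original sequence.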
Since I'm just me, why don't you ask HardOCP, which has a much better reputation than I do? Apparently they think the same thing I do. According to Jeff, you'll never see 3DMark report the same (or even similar) results twice. Ask him for an explanation.
PS: Jeff, I have been trying to download 3DMark03 since your first post, I reinstalled fresh recently, and wanted to see my scores using 340, but the download keeps failing for me. I will continue trying, and let you know once I've run the benchmarks.
Originally posted by: Jeff7181
Looks like a bunch of BS to me... they're showing the 9800XT scores changing by only 1 point... that's HIGHLY unlikely... run 3DMark twice in a row without changing ANYTHING and I guarantee you won't come within 1 point of the previous test.
Originally posted by: JustStarting
excerpt from the HardOCP article on the 340 patch thing...
"This reminds me of the early days of CPU's, when for some weird reason the industry felt the need to run "unoptimized code" through fear of the "new" optimizing compilers. It took the CPU industry a couple decades to accept optimizing compilers as legitimate, and of course now everyone assumes that as standard practice. Lets hope that we've learned from that experience and that it doesn't take the GPU industry anything near that time to accept compiler technology as legitimate and proper in this new age of programmable GPU's."
Funny how all the guys knocking Nvidia have NF2 mobos. Do you feel "robbed" with your Nvidia mobo and Barton running optimized code for performance? Doubt it, 'cause you own it.
Oh... and thanks for boosting my stock shares the last few months buying those NF2 mobos... you bought my FX5950 for me!!
Getting 6547 with the old 330 build... Nvidia "cheats" included.
Originally posted by: Jeff7181
Originally posted by: JustStarting
excerpt from the HardOCP article on the 340 patch thing...
"This reminds me of the early days of CPU's, when for some weird reason the industry felt the need to run "unoptimized code" through fear of the "new" optimizing compilers. It took the CPU industry a couple decades to accept optimizing compilers as legitimate, and of course now everyone assumes that as standard practice. Lets hope that we've learned from that experience and that it doesn't take the GPU industry anything near that time to accept compiler technology as legitimate and proper in this new age of programmable GPU's."
Funny how all the guys knocking Nvidia have NF2 mobos. Do you feel "robbed" with your Nvidia mobo and Barton running optimized code for performance? Doubt it, 'cause you own it.
Oh... and thanks for boosting my stock shares the last few months buying those NF2 mobos... you bought my FX5950 for me!!
Getting 6547 with the old 330 build... Nvidia "cheats" included.
MMX, SSE, 3DNow, SSE2... hmmmmm... I wonder what those do?
Originally posted by: JustStarting
excerpt from the HardOCP article on the 340 patch thing...
"This reminds me of the early days of CPU's, when for some weird reason the industry felt the need to run "unoptimized code" through fear of the "new" optimizing compilers. It took the CPU industry a couple decades to accept optimizing compilers as legitimate, and of course now everyone assumes that as standard practice. Lets hope that we've learned from that experience and that it doesn't take the GPU industry anything near that time to accept compiler technology as legitimate and proper in this new age of programmable GPU's."
Funny how all the guys knocking Nvidia have NF2 mobos. Do you feel "robbed" with your Nvidia mobo and Barton running optimized code for performance? Doubt it, 'cause you own it.
Oh... and thanks for boosting my stock shares the last few months buying those NF2 mobos... you bought my FX5950 for me!!
Getting 6547 with the old 330 build... Nvidia "cheats" included.
Correction... the question is: would you rather have a faster computer to begin with, or hope that they can keep coming up with a better compiler to make up the difference? Or hope that they have the resources to modify the compiler for new games?