Why would AMD "cheat" on its drivers in five-year-old games that no sites use to benchmark? Think about that for a second to see how ludicrous the conjecture proposed in this thread is.
http://blogs.nvidia.com/ntersect/2010/11/testing-nvidia-vs-amd-image-quality.html
FP16 Render Observations
In addition to the above recent findings, for months AMD had been performing a background optimization for certain DX9 applications where FP16 render targets are demoted to R11G11B10 render targets, which are half the size and less accurate. When recently exposed publicly, AMD finally provided a user-visible control panel setting to enable/disable it, but the demotion is enabled by default. Reviewers and users testing DX9 applications such as Need for Speed Shift or Dawn of War 2 should uncheck the "Enable Surface Format Optimization" checkbox in the Catalyst AI settings area of the AMD control panel to turn off FP16 demotion when conducting comparative performance testing.
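For anyone wondering what the demotion actually changes: an FP16 render target stores four 16-bit float channels (64 bits per pixel), while R11G11B10 packs three lower-precision, unsigned floats into a single 32-bit word and drops alpha entirely, so the driver halves the target's memory footprint and bandwidth in exchange for precision. Here's a rough back-of-envelope sketch; the 1920x1200 resolution is just an example I picked, not anything from NVIDIA's post:

```python
# Back-of-envelope: why R11G11B10 demotion halves render-target size.
# The per-pixel sizes follow from the formats; the resolution is arbitrary.
width, height = 1920, 1200

fp16_bpp = 4 * 16             # RGBA16F: four 16-bit float channels = 64 bits/pixel
r11g11b10_bpp = 11 + 11 + 10  # three small unsigned floats packed into 32 bits, no alpha

def target_size_mib(bits_per_pixel):
    """Size of one full-screen render target of this format, in MiB."""
    return width * height * bits_per_pixel / 8 / (1024 * 1024)

print(f"FP16 RGBA target: {target_size_mib(fp16_bpp):.1f} MiB")        # ~17.6 MiB
print(f"R11G11B10 target: {target_size_mib(r11g11b10_bpp):.1f} MiB")   # ~8.8 MiB
```

Half the bytes written and read back per HDR pass is where the performance gain comes from; what the thread is really arguing about is whether the dropped alpha channel and the reduced precision are ever actually visible.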
If people want something to read, here is an article (from September 15) that explains one of the last issues Nvidia raises in the current blog post.
ATI cheating benchmarks and degrading game quality, says NVIDIA
So does FP16 Demotion actually do anything at all?
From our testing it's clear that ATI's implementation of FP16 Demotion does affect image quality in games, as challenged by NVIDIA. However, the extent and prevalence of it is not universal: of the four games tested, only one showed any real visible influence. On the other side of the coin are the performance improvements, which are plentiful in two of the four games: boosting performance by 17 per cent at high resolutions, and a still-impressive 10 per cent at lower resolutions.
Ultimately the decision to run FP16 Demotion has always been up to the end-user, as the use of Catalyst A.I has been optional since its first inclusion in Catalyst drivers - now many years ago - and as it appears predominantly beneficial or benign in the majority of cases, we'd suggest you leave it enabled.
We also can't help but wonder why NVIDIA attempted to make such an issue over this rendering technique; one that they wholeheartedly support and simultaneously condemn. In the two titles that their hack functioned with we saw neither an image quality nor a performance loss, and rather the opposite: a performance boost. Why haven't NVIDIA included support for FP16 Demotion in their drivers until now for specific titles? Why choose now to kick up a fuss?
The answer, as always, will never really be known.
Here in the snowy town of Bruma, near the East Gate, we see one very slight change - the torch to the far left of both pictures. With Catalyst A.I enabled the flame is quite average, and with it disabled, the flame is slightly higher. That wouldn't be so weird, until we saw...
Catalyst A.I Enabled
Catalyst A.I Disabled
That inside the town of Bravil we had the same torch flame height difference. Does this mean that the extra half-frame performance boost comes from a slight reduction in flame height? It would appear so!
This had me rolling on the floor.
This is the Glenn Beck method of "not" saying something.
"Now people! I can't prove that Keysplayer has gout. But I can't disprove it either. I'm just asking questions here!"
I mean, you posted this thread, you claim to hold a neutral "let's wait for more information" stance, but everybody else who has actually said anything neutral has gotten shot down (by you) without any technical explanation.
In the end, this is a technical forum. We want answers, not spin. And definitely not fried rice. Not today.
Don't know what your problem is, but you definitely need to get a handle on it. And if you think you do, you don't.
Me having gout has nothing to do with this thread!! And how did you find out about my gout anyway!!!!
Just relax dude and "focus" less on me, and more on the subject. "focus", get it?
We have a multi-page thread with almost no concrete information (until recently).
You are solely responsible for this. Make this thread better. Answer my questions. Send me free stuff.
Why has this turned into an attack Keysplayr thread instead of a lets find out for ourselves and get answers thread?
Can't this all be put to rest by testing newer games to see if it is actually a "cheat"? How hard is that really? If I had the cards I would test it myself.
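It isn't that hard, at least for the image quality half of it. If someone with the cards grabs two screenshots from the exact same spot, one with Catalyst A.I enabled and one disabled, a per-pixel diff takes the eyeballing out of the argument. A minimal sketch with NumPy and Pillow; the file names are placeholders for whatever you capture:

```python
# Minimal sketch: objectively compare two screenshots taken from the same
# camera position with Catalyst A.I enabled vs. disabled.
# Both captures must be the same resolution; file names are placeholders.
import numpy as np
from PIL import Image

enabled  = np.asarray(Image.open("cat_ai_enabled.png").convert("RGB"), dtype=np.int16)
disabled = np.asarray(Image.open("cat_ai_disabled.png").convert("RGB"), dtype=np.int16)

diff = np.abs(enabled - disabled)
print("pixels that differ:    ", np.count_nonzero(diff.max(axis=2)))
print("max per-channel delta: ", int(diff.max()))
print("mean per-channel delta:", round(float(diff.mean()), 3))

# Write an amplified difference image so subtle changes are visible at a glance.
Image.fromarray((diff * 8).clip(0, 255).astype(np.uint8)).save("diff_x8.png")
```

That won't settle the performance side, but it at least turns "I can't see a difference" into a number anyone can reproduce.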
I think you already know the answer to that question. AMD vs Nvidia is like rooting for your favorite team on Sunday. And when that 'no call' that went against your team and angered you to no end is suddenly committed by your team, you somehow find a way to justify why it wasn't called and was 'legal', even though the same thing occurred.
In regards to the issue at hand, if AMD had provided a way to disable the optimization they enabled by default and been up front about it, this would be a non-issue imho. Of course, now that there is a toggle, it is somewhat of a non-issue...though reviewers need to double-check that the driver settings are indeed the same with each driver release when benchmarks are made, to be sure accurate comparisons are being made, because trusting either AMD or Nvidia to do the right thing is a bit naive at this point imho.
That is what has been asked throughout this thread. Nothing yet.
Not everyone has HD6850/70 cards on hand; so asking people to mysteriously come "up" with benchmarks without cards in their possession is unrealistic, don't you think? In the previous Catalyst 10.9 vs. 10.10 thread, I asked for owners of HD6850/70 cards to test image quality with Q and HQ and make comments, but no one volunteered.
Most importantly, all of these websites found that the only way for HD6850/70 series to have superior filtering quality to HD5870 was to enable HQ in the first place; and even AnandTech made note of the changes in the driver control panel. It's difficult to claim that HD68xx series has superior image quality to the previous generation of AMD cards without testing it at HQ in reviews. Therefore, the "increased image quality over HD58xx series" as it is being touted by AMD as a feature is somewhat misleading since it's not enabled by default. Without even bringing NV into the equation, this alone is a valid reason to test HD68xx series at HQ only.
Even if some people disagree that there is a material difference in filtering image quality between both camps, the fact is that a lot of websites have changed their testing methodologies. As independent reviewers, they have every right to independently test image quality and arrive at their own conclusions. We have seen FX5800/5900 series penalized for trying to diminish image quality and some of the same websites took notice (Xbitlabs for example). This example clearly highlights they will revise their testing if they feel the workload is not comparable, regardless of the brand.
The irony of this thread is that some people who dismiss image quality differences unless they are tested objectively find completely subjective HardOCP benchmarking to be the best method for testing videocards. Kyle Bennett at HardOCP comes up with his own "smooth gameplay" benchmarks which he claims are the only right way to test videocards. The problem is that Kyle is the only one who determines what "smooth" is.
So why do we dismiss subjective opinions of some people about image quality and yet accept subjective opinions of others on what constitutes "smooth frames per second" as facts?