[AMD_Robert] Concerning the AOTS image quality controversy

Page 13 - AnandTech Forums

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
I don't think they did it with malice but I do think it's not fair game to not use the release driver for a product you can buy in a store.

So by that logic we should also ignore every single 1070 review out there, seeing as they also used this driver, and were posted after the retail driver was made available?
 

Magee_MC

Senior member
Jan 18, 2010
217
13
81
How is it fishy? Driver development never stops. Would you prefer to have IHVs not fix driver bugs?

No, I wouldn't prefer to have IHVs not fixing driver bugs. I don't actually think it's fishy; I was just using your phrasing to show that the opposite position could be advanced as well. I probably should have been clearer that I was making a point rather than a statement.

I honestly don't have a problem with what either AMD or NV did here. AMD benched against the review drivers. NV fixed a bug. The real problem here is that the reviewers aren't going to go back and rerun their charts with the updated drivers.

That is the crux of many of the problems we (as a group) have when trying to discuss issues like these. The reviews are only one slice of information at one point in time, yet they hang around and are used for years, long after their sell by date has passed.
 

Magee_MC

Senior member
Jan 18, 2010
217
13
81
Lol, yeah. 'Cause Pascal can't handle snow in DX12.

:p




Yes, basically. But it doesn't appear many of the reviews even used the .19 driver.

https://www.reddit.com/r/Amd/commen..._twitter_so_it_appears_to_us_that_the/d3yo1v8

To me that makes it even more confusing. Why were so many different drivers used on the press release cards? Did NV send different drivers with different cards, using the most up to date as they were produced, or did the sites download and install different drivers on their own?

I would think that the review guidelines would have stated which drivers to use.
 
May 11, 2008
19,491
1,163
126
Correct, the reason it was removed is because it's a marketing term from AMD.

That's your own prejudice. Nvidia is being called out by their own fans just as much.

To my understanding Pascal can now switch between Compute and Graphics at the GPC boundary, which definitely is a form of concurrent compute + graphics. I'm just not entirely convinced of the usefulness when you can only switch between the two in 25% granularity (GP106 might be limited to 33% or 50% granularity, GP102 could increase this to 17%).
In comparison, AMD can switch between the two workloads at finer than 3% granularity on Hawaii to stuff holes in the graphics pipeline. This seems a lot more useful, to be honest, for filling both expected and unexpected gaps in the graphics workload.
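Those percentages follow directly from the unit counts, assuming Pascal partitions work per GPC and Hawaii schedules per CU (the unit counts below are commonly cited figures, used here only for illustration):

```python
def granularity(units: int) -> float:
    """Smallest fraction of the GPU (in percent) that can be switched
    between graphics and compute, if work is partitioned per unit."""
    return 100.0 / units

gp104  = granularity(4)    # GTX 1080: 4 GPCs -> 25.0%
gp106  = granularity(2)    # 2 GPCs -> 50.0%
gp102  = granularity(6)    # 6 GPCs -> ~16.7%
hawaii = granularity(44)   # 44 CUs -> ~2.3%, i.e. finer than 3%

print(gp104, gp106, gp102, hawaii)
```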


The issue is that the term "Asynchronous Compute" was a bad choice to describe what is actually happening.

Technically Maxwell can do asynchronous compute - as in, once the chip's context has been switched to compute mode, you can have multiple compute threads scheduled asynchronously (sorry for the complicated sentence). DX11 or 12 doesn't matter in that case.
What it can't do (to the best of my knowledge) is to do both compute and graphics workloads at the same time on different parts of the GPU (also called concurrent compute and graphics). GCN can do this, it can freely schedule any number of CUs to graphics and the rest to compute at the same time with very little side effects.

It should have been called concurrent c&g from the start, really.


Asynchronous Compute as a name makes perfect sense. It implies that you can feed the CUs with compute shaders at any point in the pipeline, without having to worry about the order they need to be executed in. It allows your engine to be multithreaded and queue graphics and compute shaders asynchronously, as in independent of each other.

Traditionally this was done synchronously, meaning compute shaders were coupled to the pipeline's order of things before they could be executed.

Maxwell and to some degree Pascal aren't as flexible as GCN in this regard.

Correct, and Maxwell can do this as long as you already are in compute context.

That isn't described in the term "Async Compute" though, that's just something GCN can do beyond scheduling compute tasks asynchronously - to schedule compute along graphics with no need for global context switching.

This may be clear now, but it made for a lot of confusion when this whole discussion started, and it probably led to some of the contradictory statements that came from AMD/Nvidia/developers.
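A toy cost model can show why the distinction matters for utilization. This is my own sketch, not hardware-accurate: the task costs and the context-switch penalty are invented numbers.

```python
from dataclasses import dataclass

@dataclass
class Task:
    kind: str   # "gfx" or "comp"
    cost: int   # abstract time units

def serialized_time(tasks):
    """Maxwell-style (toy model): a global context switch separates
    graphics work from compute work, so the two kinds cannot overlap;
    total time is the sum of both streams plus a switch penalty."""
    SWITCH = 2
    gfx  = sum(t.cost for t in tasks if t.kind == "gfx")
    comp = sum(t.cost for t in tasks if t.kind == "comp")
    return gfx + SWITCH + comp

def concurrent_time(tasks):
    """GCN-style (toy model): compute fills idle CUs while graphics
    runs, so total time is bounded by the longer of the two streams."""
    gfx  = sum(t.cost for t in tasks if t.kind == "gfx")
    comp = sum(t.cost for t in tasks if t.kind == "comp")
    return max(gfx, comp)

tasks = [Task("gfx", 6), Task("comp", 2), Task("gfx", 3), Task("comp", 4)]
print(serialized_time(tasks))  # 17
print(concurrent_time(tasks))  # 9
```

The gap between the two numbers is exactly the compute work that concurrent scheduling hides inside the graphics workload, plus the saved switch penalty.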



Piroko and Sirmo.
Thanks for the explanation about async compute. It sure makes a lot more sense now. From the other poster Renderstate i learned nothing.

It may or may not be a part of dx12, but it sure makes sense it is going to be used in the near future. Since Nvidia can do it as well but with limitations with the current gfx cards (Remains to be seen how much of a real world limitation that will be, since game engine developers work with AMD and Nvidia for the 3d rendering )seems that it is here to stay.

EDIT:

Forgot to write that async compute can also be used with the Vulkan API (which seems to be derived from Mantle).
The two dominant graphics APIs of the future both seem to be promoted with async compute as an important GPU capability that both can use.
And since the Xbox One and PS4 can make use of async compute as well, it is very possible this will show up in more and more game engines in the near future.
 
Last edited:

Yakk

Golden Member
May 28, 2016
1,574
275
81
Regardless of performance, I think the takeaway here is that there should NOT be "press only" drivers which consumers do not have access to. It breeds mistrust and opens the door wide to potentially shady activities.

If not this time, maybe the next?
 

Piroko

Senior member
Jan 10, 2013
905
79
91
I don't think they did it with malice but I do think it's not fair game to not use the release driver for a product you can buy in a store.
Careful there. It definitely is tolerable to reproduce the test that was used to advertise a product. Discrepancies between reviews and later tests that hurt customers are a legal minefield.

Thankfully in this case performance wasn't affected but once again we have seen how it's so easy for people to scream NVIDIA cheated without a shred of evidence. Moreover, as you can see in this thread, they won't stop claiming NVIDIA cheated. I'll assure you they are driven by the same mindset of flat earthers and global warming deniers, nothing will change their minds.
I think i've said this before:
The market leader has to endure more scrutiny, as he can damage both competition and customers more easily than any other market member.

Also, evidence was there in the form of pictures.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
I think nvidia is probing the market, just like they did with $1000 titan.

They are watching reaction from the press, from consumers.

Who is guilty of reducing YouTube video playback quality from full HD to 720p to get a smooth experience without buffering breaks? Well, YouTube does that itself nowadays, but that was not the case previously. This is clear proof that people are keen to sacrifice quality for performance.

You start with little things like that. Oops, a bug in a driver for reviews only that doesn't render some little things. Good thing we fixed that before it was even reported! It was so minor that it's not even included in the changelog, but hey!

Then you go one step further and reduce quality for more performance in the press driver. Reviews show only bars and FPS numbers; they don't care. And even if they notice, they will ignore it (how many sites reported the bug discussed here?). People won't notice, since not everyone has GPUs from both vendors to compare, and they don't have access to the NDA press driver.
 

imaheadcase

Diamond Member
May 9, 2005
3,850
7
76
Anytime a new card is released, both sides claim the other is doing something to affect performance. Which is fine; the average consumer really does not care as long as the game runs nice and looks nice. Based on the screenshots, it's not a huge deal.

I mean, you can mince words however you like, but if one company wants to render a certain feature in a game differently than another card does, that is a design decision.

Let's be honest here, benchmarks are really a thing of the past. You should look at them like you look at AMD or Nvidia slides with benchmarks: with a grain of salt. Gone are the days of huge jumps in performance every year, when benchmarks meant something major. Now it's just nitpicking over minute changes that the average consumer does not care about.
 

renderstate

Senior member
Apr 23, 2016
237
0
0
So by that logic we should also ignore every single 1070 review out there, seeing as they also used this driver, and were posted after the retail driver was made available?


Yet another straw man. Can you walk into a store and buy a 1070? No.
Could AMD walk into a store, buy a 1080, and therefore install the release driver at the time of the demo? Yes.

Reviewers use the best driver available at review time. AMD didn't use the best driver at the time of their demo, despite it being available for several days.
 

Piroko

Senior member
Jan 10, 2013
905
79
91
Piroko and Sirmo.
Thanks for the explanation about async compute. It sure makes a lot more sense now.
You're welcome. I picked up most of this knowledge in the beyond3d forums if you're curious.

It may or may not be a part of dx12, but it sure makes sense it is going to be used in the near future.
Yes. The whole concept of asynchronous jobs and multiple queues probably is the only sensible way to keep scaling intact with and past the GPUs that we'll see in the coming years. The discrepancy between CPU and GPU performance is getting ridiculous. GP104 has between 35 and 70 times the processing power of a Skylake 6700K; that is a lot of work that needs to be fed continuously.
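For what it's worth, the 35-70x range is roughly what you get if you compare GP104's FP32 peak against a 6700K that only sustains a quarter to a half of its AVX2 peak. A quick sanity check, using assumed spec-sheet figures rather than anything from this thread:

```python
# Back-of-envelope FP32 throughput comparison (assumed spec figures).
gpu_peak = 2 * 2560 * 1.733e9     # GTX 1080: 2 FLOPs/cycle (FMA) x cores x boost clock
cpu_peak = 4 * 4.0e9 * 32         # i7-6700K: cores x clock x (2 AVX2 FMA ports x 8 lanes x 2)

print(gpu_peak / cpu_peak)        # ~17x, peak vs peak
print(gpu_peak / (cpu_peak / 2))  # ~35x if the CPU sustains half its SIMD peak
print(gpu_peak / (cpu_peak / 4))  # ~69x at a quarter of peak
```

In other words, the ratio depends heavily on how well the CPU code is vectorized, which is exactly why feeding the GPU from the CPU side is becoming the bottleneck.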
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
Has anyone confirmed that the newer drivers actually fix the problem?

I have those drivers, and I ran the AOTS benchmark at 1440p with the Crazy preset. I still didn't see the "snow".

I did have a 980 Ti; honestly, I never knew about the "snow" issue until I received my 1080. So I can't test on another card, and I can't confirm whether the 980 Ti rendered the "snow".
I am running the 368.25 drivers.

https://www.reddit.com/r/nvidia/com..._twitter_so_it_appears_to_us_that_the/d3yfltv
 

zinfamous

No Lifer
Jul 12, 2006
110,568
29,179
146
Do you guys really think of yourselves as being part of a "team" in this?

My pointless piece of hardware is better than your pointless piece of hardware! The soulless company that doesn't care about me is better than the other soulless company that doesn't care about me! waaaa!

Why does what appears to be a simple mistake on one or both sides of this have to result in an internet pissing match--every freaking time?

..grow up guys.
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
Yes. The whole concept of asynchronous jobs and multiple queues probably is the only sensible way to keep scaling intact with and past the GPUs that we'll see in the coming years. The discrepancy between CPU and GPU performance is getting ridiculous. GP104 has between 35 and 70 times the processing power of a Skylake 6700k, that is a lot of work that needs to be fed continuously.

Yeah, this part of a Total War: Warhammer review jumped out at me:

The summary of these tests are that if you want to game at a steady 60FPS, then it might not be possible until and unless you have a really powerful CPU strapped in the socket. Even an i7-5930K overclocked at 4.2GHz came short at 1080p Ultra, but according to the source, a Skylake i7-6700K overclocked at 4.7GHz might just feature the processing power you need to render sufficient number of frames.

http://wccftech.com/total-war-warhammer-benchmarks-unveiled-invest-powerful-cpu

When we need the best CPU out there OVERCLOCKED to run the game at 60 FPS at 1080p, then we are screwed. Without DirectX 12 I don't even see how the industry can move forward given current CPU technology.
 

Magee_MC

Senior member
Jan 18, 2010
217
13
81
Yet another straw man. Can you walk in a store and buy a 1070? No.
Could AMD walk in a store buy a 1080 and therefore install the release driver at the time of the demo. Yes.

Reviewers use the best driver available at review time. AMD didn't use the best driver at the time of their demo, despite it being available for several days.

So you're saying that they shouldn't be allowed to test hardware on an even playing field with the same drivers?

I think that if you're looking at comparing hardware then there should be some level of consistency in which drivers are used. If you're comparing drivers then the newest should be used against whichever older driver you're comparing against.

Again, the question is even muddier since it appears from this link https://www.reddit.com/r/Amd/comment...at_the/d3yo1v8 that different review sites used different drivers.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Yet another straw man. Can you walk in a store and buy a 1070? No.
Could AMD walk in a store buy a 1080 and therefore install the release driver at the time of the demo. Yes.

Reviewers use the best driver available at review time. AMD didn't use the best driver at the time of their demo, despite it being available for several days.

You don't get it, do you? The best drivers available at the time of the 1070 reviews were the retail drivers, not the 368.19 drivers (368.25 was available 3 days before the 1070 NDA ended), and yet the reviewers didn't use them; they used the 368.19 drivers, just like AMD.

So if you think what AMD did was fishy, then obviously it must also be fishy for reviewers to do the exact same thing.
 
Last edited:
May 11, 2008
19,491
1,163
126
You're welcome. I picked up most of this knowledge in the beyond3d forums if you're curious.

Yes. The whole concept of asynchronous jobs and multiple queues probably is the only sensible way to keep scaling intact with and past the GPUs that we'll see in the coming years. The discrepancy between CPU and GPU performance is getting ridiculous. GP104 has between 35 and 70 times the processing power of a Skylake 6700k, that is a lot of work that needs to be fed continuously.

Thanks for the explanation.
I already read that asynchronous jobs will be very important for Virtual Reality. :)
I have another question. I found this PDF on the internet about async compute:

http://amd-dev.wpengine.netdna-cdn....10/Asynchronous-Shaders-White-Paper-FINAL.pdf

ASYNCHRONOUS COMPUTING
For many tasks in the graphics rendering pipeline, the GPU needs to know about ordering; that is, it
requires information about which tasks must be executed in sequence (synchronous tasks), and
which can be executed in any order (asynchronous tasks). This requires a graphics application
programming interface (API) that allows developers to provide this information. This is a key
capability of the new generation of graphics APIs, including Mantle, DirectX® 12, and Vulkan™.
In DirectX 12, this is handled by allowing applications to submit work to multiple queues. The API
defines three types of queues:
* Graphics queues for primary rendering tasks
* Compute queues for supporting GPU tasks (physics, lighting, post-processing, etc.)
* Copy queues for simple data transfers
Command lists within a given queue must execute synchronously, while those in different queues
can execute asynchronously (i.e. concurrently and in parallel). Overlapping tasks in multiple queues
maximize the potential for performance improvement.
Developers of games for the major console systems are already familiar with this idea of multiple
queues and understand how to take advantage of it. This is an important reason why those game
consoles have typically been able to achieve higher levels of graphics performance and image quality
than PCs equipped with a similar level of GPU processing power. However the availability of new
graphics APIs is finally bringing similar capabilities to the PC platform.

I think I am getting confused about async compute. I thought it meant all these compute threads would run in parallel, one with a higher priority than another. But reading about the 3 different queues in DirectX 12, that just seems to match the hardware of GCN: DMA engines for copying, the graphics command processor for graphics, and 8 asynchronous compute engines (ACEs) for the compute tasks.

If I understand it correctly, all these compute threads would be run by the 8 ACEs asynchronously from each other. How many of these async compute threads (I hope that is the correct word for it) can run in parallel on GCN? 8?
And how many can run on Pascal or Maxwell(2)?


EDIT:

I think I understand: they mention queues, not a single queue, for each described type. So there can be, for example, multiple compute queues that run in parallel. Within a given queue the command lists execute synchronously, but multiple compute queues can all run asynchronously, giving asynchronously running command lists with compute tasks. Yes?
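That reading matches the whitepaper text: ordering is guaranteed within a queue, while separate queues run independently of each other. A minimal CPU-side sketch of those semantics (a toy model, not actual D3D12 code; the queue and command-list names are made up):

```python
import threading, queue

class CommandQueue:
    """Toy model of a D3D12-style queue: command lists submitted to the
    same queue execute in submission order, while separate queues drain
    independently (asynchronously) of each other."""
    def __init__(self, name, log, lock):
        self.name, self._log, self._lock = name, log, lock
        self._pending = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def submit(self, command_list):
        self._pending.put(command_list)

    def _drain(self):
        while True:
            cmd = self._pending.get()
            with self._lock:
                self._log.append((self.name, cmd))  # "execute" the command list
            self._pending.task_done()

    def wait_idle(self):  # roughly: waiting on a fence
        self._pending.join()

log, lock = [], threading.Lock()
gfx, comp, copy = (CommandQueue(n, log, lock)
                   for n in ("graphics", "compute", "copy"))

for q, cmds in [(gfx,  ["shadow pass", "main pass"]),
                (comp, ["particle sim", "light culling"]),
                (copy, ["upload textures"])]:
    for c in cmds:
        q.submit(c)

for q in (gfx, comp, copy):
    q.wait_idle()

gfx_order = [c for n, c in log if n == "graphics"]
print(gfx_order)  # ['shadow pass', 'main pass']
```

Within each queue the execution order always matches the submission order, while the interleaving between the three queues varies from run to run; that interleaving freedom is what the APIs expose for overlap.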
 
Last edited:

renderstate

Senior member
Apr 23, 2016
237
0
0
I don't get the question about an even playing field. If NVIDIA were using an old AMD driver to perform a public demonstration, you can bet this forum would turn into hell on earth, Reddit servers would break down, and Twitter would shut down to contain the rage of the tinfoil hatters who see NVIDIA conspiracies everywhere. This thread is the perfect example of that.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
I don't get the question about an even playing field. If NVIDIA were using an old AMD driver to perform a public demonstration you could bet this forum would turn into hell on earth, Reddit servers would break down and twitter would shut down to contain the rage of the tinfoil hatters that see NVIDIA conspiracies everywhere. This thread is the perfect example of that.

You do realise that this whole controversy started because people on the Nvidia side were accusing AMD of cheating and using lower settings for their RX 480 setup?

And guess who was among the people accusing AMD of cheating. Hypocrisy much?
 
Last edited:

Elixer

Lifer
May 7, 2002
10,376
762
126
AMD selected the drivers. It's a driver not used by any reviews. And it's a driver they selected despite the retail driver being out.

So, ask AMD :)

Funny guy. Nope, no reviews at all...
http://www.hardwarezone.com.sg/revi...less-half-price/performance-benchmarks-page-1 "NVIDIA supplied us with beta driver version 368.19 for testing"

http://www.tomshardware.com/reviews/nvidia-geforce-gtx-1070-8gb-pascal-performance,4585-2.html "Nvidia pushed out a 368.19 update ahead of today’s story."

There are plenty of other reviews out there that also used these 368.19 drivers.

I am also sure you know that this wasn't a live demo, and it does take time to make these things, so everything had to be ready before they made the trip down to China.
 

Krteq

Senior member
May 22, 2015
991
671
136
Reviewers use the best driver available at review time. AMD didn't use the best driver at the time of their demo, despite it being available for several days.
And once again
What the heck. Do you know how long it takes to prepare this kind of presentation?
They simply don't have enough time to re-test with the newest driver and edit the slide deck if they want to present scores on time. No conspiracy; simple as f*ck.
 

Coalscraper

Junior Member
Jun 1, 2016
11
0
6
Or you know, prerelease drivers having a bug. Would hardly be the first time.
Is it also possible that some German cars have a software bug that only shows up on emission tests? Please think about what you are saying. Will the next bug be a reduction in details and no AA, regardless of what is displayed in the settings?