Our clarification on how Time Spy works:
http://www.futuremark.com/pressreleases/a-closer-look-at-asynchronous-compute-in-3dmark-time-spy
What you see in GPUView is time, not workload. It doesn't say anything about the amount of work a card has to do.
/edit: You can see it with the Fury X and GTX 1080: the first compute workload looks "longer" on the GTX 1080, but that is just the time the hardware needed to finish the job.
I see. Does it do the same if you were to rename 3DMark to something else?

That is, as far as we can tell, what the drivers or D3D are doing. "Not ours, probably the drivers, but we can't be 100% sure" was what the programmers said.
Ask NVIDIA? 🙂
What's with all the fences?
How do you check for executable-based optimizations? Detect that a driver has detected that your software is running? And are exe-name-based optimizations the only way to optimize?
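One crude check that was floated earlier in the thread: copy the benchmark binary under a neutral name, run both, and compare scores. A minimal sketch of that idea, assuming hypothetical file names and a made-up `score: NNNN` output format (real 3DMark does not report scores this way, so the parsing here is purely illustrative):

```python
# Sketch: probe for exe-name-based driver optimizations by running the
# same binary under its original and a renamed filename, then comparing
# scores. All file names and the output format are assumptions.
import re
import shutil
import subprocess

def run_and_parse_score(exe_path: str) -> int:
    """Run one benchmark pass and pull the final score out of stdout.
    The "score: NNNN" format is a placeholder, not real 3DMark output."""
    out = subprocess.run([exe_path, "--run"],
                         capture_output=True, text=True).stdout
    match = re.search(r"score:\s*(\d+)", out)
    return int(match.group(1)) if match else 0

def looks_exe_name_sensitive(orig_score: int, renamed_score: int,
                             tolerance: float = 0.03) -> bool:
    """Flag a gap between the two runs larger than the tolerance (3% by
    default, to stay above normal run-to-run variance)."""
    if orig_score == 0:
        return False
    return abs(orig_score - renamed_score) / orig_score > tolerance

# Usage (hypothetical, with the real binary present):
#   shutil.copy("3DMarkTimeSpy.exe", "renamed_benchmark.exe")
#   a = run_and_parse_score("./3DMarkTimeSpy.exe")
#   b = run_and_parse_score("./renamed_benchmark.exe")
#   print("exe-name sensitive:", looks_exe_name_sensitive(a, b))
```

Note this only catches optimizations keyed off the executable name; a driver could just as well fingerprint shader bytecode or API call patterns, which a rename test would never reveal.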
Detailed write-up here on the usage of "Async Compute" in Time Spy DX12:
http://www.overclock-and-game.com/news/pc-gaming/50-analyzing-futuremark-time-spy-fiasco
I find it funny that NV showed off "Async Compute" on DX11 with Pascal at one point.
Preemption has never been about parallel execution of work, yet they market it as "Async Compute" anyway.

The one-size-fits-all approach FM has taken with Time Spy shows NV's preemption tech can indeed improve shader utilization. That, at least, is a good thing.
That article is full of errors and is just plain poorly written.
And we've already established there is nothing preventing this from being possible in DX11; there's just no explicit control over it on the application side. It was odd for them to show it that way, though.
It would be nice if you could find some credible developers who say such things about DX11 to back your claims, but whenever you talk about this subject, it always comes back to DX12/Vulkan/Mantle.
@FM_Jarnis
That article ain't mine. It's from a tech blogger.
Wow, that website referenced above is one of the most biased sites I've ever seen posted here. Suggest you don't post any of their articles again, unless you want to be laughed at...
Jarnis, I commend you for trying to explain the choices FM made for this benchmark, but do understand you are talking to some people who will just not accept anything you say. Don't take it personally; that's just the way things are 'round here.
Seriously? This is what you log on to Anandtech forums and post?
What I really miss in 3DMark are those extra feature tests, like the fillrate tests we used to have. Perhaps we could see them added later.
It's even worse for ASC (async compute). This would have been a great opportunity to compare the maximum possible gains from async compute across GPU vendors and generations, but with suboptimal utilization on some vendors, that isn't possible at the moment.