DirectX 12 Futuremark 3DMark Time Spy Benchmarks Thread

Page 2 - AnandTech Forums
Feb 19, 2009
10,457
10
76
+ 4 ACEs == 8 total.

Another post covered it in more detail.

Yup. RX 480 gains are decent, but note that it has 32 ROPs and lower bandwidth, which may or may not be a bottleneck depending on the game/synthetic.

The really interesting thing here is finding out how NV enabled async compute in Time Spy for Pascal.
 

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
Rig 1 below (5960X@4.4 + GTX 1080 stock)
Total 7,507
Graphics 7,212
CPU 9,780

Rig 2 below (4790K@4.7 + GTX 980 Ti SC stock)
Total 5,336
Graphics 5,381
CPU 5,096
 
Last edited:

AdamK47

Lifer
Oct 9, 1999
15,231
2,851
126
My Results:
3DMark Score - 13051
Graphics Score - 13453
CPU Score - 11163
Graphics Test 1 - 88.33 fps
Graphics Test 2 - 76.64 fps
CPU Test - 37.5 fps

CPU @ 4.0GHz, memory @ 3000, GPUs at +100 core / +100 memory.

http://www.3dmark.com/spy/19118
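For anyone who wants to sanity-check the totals posted in this thread: if I'm reading Futuremark's technical guide right, the overall Time Spy score is a weighted harmonic mean of the graphics and CPU scores, with weights 0.85 and 0.15. A quick sketch reproduces the numbers above:

```python
def timespy_total(graphics_score: float, cpu_score: float) -> int:
    """Overall Time Spy score: weighted harmonic mean of the
    sub-scores (0.85 graphics, 0.15 CPU per the technical guide),
    truncated to an integer."""
    return int(1.0 / (0.85 / graphics_score + 0.15 / cpu_score))

# Scores posted in this thread:
print(timespy_total(13453, 11163))  # 13051 (AdamK47's SLI rig)
print(timespy_total(7212, 9780))    # 7507  (guskline's 5960X + GTX 1080)
print(timespy_total(5381, 5096))    # 5336  (guskline's 4790K + 980 Ti)
```

Note how the heavy 0.85 graphics weighting explains why AdamK47's total sits much closer to his graphics score than his CPU score.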
 
Last edited:

YBS1

Golden Member
May 14, 2000
1,945
129
106
I figured two 1080s would be faster than three 980 Tis in a DX12 bench. I'm on my phone so I don't know how my scores split out, but overall it was 13697 with the cards at default.
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
This has me wondering about how benchmarks are made. At least dedicated benchmark apps. What was their goal? best case on each arch? worst case? What level of optimizations? How did they choose what they put in? Why does it run like crap and look lame?

Why should it matter, if they choose to make it in a way that won't fit most games? i kinda see why some sites ignore 3dmark.

IMO they should do maximum feature support to achieve the same visuals. that might be worth something.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
This has me wondering about how benchmarks are made. At least dedicated benchmark apps. What was their goal? best case on each arch? worst case? What level of optimizations? How did they choose what they put in? Why does it run like crap and look lame?

Why should it matter, if they choose to make it in a way that won't fit most games? i kinda see why some sites ignore 3dmark.

IMO they should do maximum feature support to achieve the same visuals. that might be worth something.

It also has me wondering what prompts you to ask this at this time.
 

nurturedhate

Golden Member
Aug 27, 2011
1,743
676
136
It also has me wondering what prompts you to ask this at this time.

Yep, because wanting to know how something works and why is a terrible thing...

Asking at this time? Probably because there's a new release and a post about it.

Why does your post smell like you are trying to start something that doesn't belong in this thread?
 

IllogicalGlory

Senior member
Mar 8, 2013
934
346
136
Results from my sig rig, although with a higher overclock - 1125/1525

Stock 947/1250:
A-sink off:
2828764
A-sink on:
2828738

Overclocked:
A-sink off:
2828765
A-sink on:
2828739

Looking at the graphics scores, the gains from async are consistent at 15.3%–15.4%.
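For anyone recomputing gains like this from their own runs: the percentage is just the relative change in graphics score with async on versus off. A trivial sketch, with made-up scores (the actual results above are linked by result ID only):

```python
def async_gain_pct(score_async_on: float, score_async_off: float) -> float:
    """Relative graphics-score gain from enabling async compute."""
    return (score_async_on - score_async_off) / score_async_off * 100.0

# Hypothetical graphics scores, chosen to land near the ~15.3% reported:
print(round(async_gain_pct(5766, 5000), 1))  # 15.3
```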
 

Elixer

Lifer
May 7, 2002
10,376
762
126

Does anyone have a debugger (like https://www.visualstudio.com/en-us/features/directx-game-dev-vs.aspx) & different cards (like a 1080 & a 480) that they can test the actual draw / shader calls with?

I am curious whether they use the exact same code path for each vendor, and whether they are playing any special tricks to artificially increase the score depending on vendor.
Guess I am looking for an in-depth breakdown of this application and what it is doing behind the scenes.
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
Under the hood, the engine only makes use of FL 11_0 features, which means it can run on video cards as far back as GeForce GTX 680 and Radeon HD 7970. At the same time it doesn't use any of the features from the newer feature levels, so while it ensures a consistent test between all cards, it doesn't push the very newest graphics features such as conservative rasterization.

That said, Futuremark has definitely set out to make full use of FL 11_0. Futuremark has published an excellent technical guide for the benchmark, which should go live at the same time as this article, so I won't recap it verbatim. But in brief, everything from asynchronous compute to resource heaps get used. In the case of async compute, Futuremark is using it to overlap rendering passes, though they do note that "the asynchronous compute workload per frame varies between 10-20%." On the work submission front, they're making full use of multi-threaded command queue submission, noting that every logical core in a system is used to submit work.

http://www.anandtech.com/show/10486/futuremark-releases-3dmark-time-spy-directx12-benchmark

Feature level 11_0. Does anyone know what this means for the relevance of the benchmark to actual DX12 games?

And the meaning of this:

Both cards pick up 300-400 points in score. On a relative basis this is a 10.8% gain for the RX 480, and a 5.4% gain for the GTX 1070. Though whenever working with async, I should note that the primary performance benefit as implemented in Time Spy is via concurrency, so everything here is dependent on a game having additional work to submit and a GPU having execution bubbles to fill.

Futuremark is using it to overlap rendering passes, though they do note that "the asynchronous compute workload per frame varies between 10-20%."

Guess it would be a good idea to read the technical guide

http://s3.amazonaws.com/download-aws.futuremark.com/3DMark_Technical_Guide.pdf
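The AnandTech quote above is worth unpacking: the async gain as implemented in Time Spy comes from concurrency, so it's capped by how much idle "bubble" time actually exists in the graphics workload. A toy model of that idea (nothing to do with Futuremark's actual implementation, just the concept):

```python
def frame_time_ms(gfx_ms: float, compute_ms: float,
                  bubble_ms: float, async_on: bool) -> float:
    """Toy model of async compute. Without async, the compute work
    runs serially after the graphics work. With async, compute can
    hide inside the graphics queue's idle 'bubbles', but only up to
    the bubble time that actually exists on that GPU/workload."""
    if not async_on:
        return gfx_ms + compute_ms
    hidden = min(compute_ms, bubble_ms)
    return gfx_ms + (compute_ms - hidden)

# A GPU/workload with big bubbles hides all the compute work:
print(frame_time_ms(10.0, 2.0, 2.0, async_on=False))  # 12.0
print(frame_time_ms(10.0, 2.0, 2.0, async_on=True))   # 10.0
# A GPU/workload with few bubbles gains much less:
print(frame_time_ms(10.0, 2.0, 0.5, async_on=True))   # 11.5
```

That asymmetry is one plausible reading of why the RX 480 and GTX 1070 show different relative gains in the quote above: the same async workload fills more idle time on one architecture than the other.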
 
Last edited:
Feb 19, 2009
10,457
10
76
@Azix

That's a terrible technical guide, it does not even go into the technical aspects.

They don't specify further, just that they use Async Compute to increase GPU utilization.

id Software uses Async Compute to both increase shader utilization with post effects, and to actually run Rasterizers & DMAs in parallel with Shaders via Shadow Maps & Megatexture streaming.

If it's just filling out gaps in shader usage, then Fiji should have much bigger gains than Tahiti, Tonga or RX 480, due to the scheduler : shader ratio being so shader heavy.
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
@Azix

That's a terrible technical guide, it does not even go into the technical aspects.

They don't specify further, just that they use Async Compute to increase GPU utilization.

id Software uses Async Compute to both increase shader utilization with post effects, and to actually run Rasterizers & DMAs in parallel with Shaders via Shadow Maps & Megatexture streaming.

If it's just filling out gaps in shader usage, then Fiji should have much bigger gains than Tahiti, Tonga or RX 480, due to the scheduler : shader ratio being so shader heavy.

yeah I was disappointed. adds nothing much.

They probably held off on some things for the same reason they use FL 11_0. But then that brings up the question of the relevance of the benchmark. If it doesn't reflect what we see in games then using it would give inaccurate information.
 

Samwell

Senior member
May 10, 2015
225
47
101
yeah I was disappointed. adds nothing much.

They probably held off on some things for the same reason they use FL 11_0. But then that brings up the question of the relevance of the benchmark. If it doesn't reflect what we see in games then using it would give inaccurate information.

There is no game out there at the moment which uses a higher FL than 11_0, so that's probably why they chose it. All DX12 games are FL 11_0, so going to a higher FL would lower the relevance of the bench even more.
 

redzo

Senior member
Nov 21, 2007
547
5
81
With the arrival of DX12 and Vulkan, I'm wondering how much we can trust synthetic benchmarks like this. Developers are now the ones more responsible for optimization, so there should be less impact from driver tuning by the likes of AMD and Nvidia.
Isn't the gap between 3DMark and a real game engine now bigger than ever?
 
Last edited:

Hitman928

Diamond Member
Apr 15, 2012
5,316
7,988
136
Can someone with Pascal do me a favor? Can you run this at 4k with async on and off?
 

dacostafilipe

Senior member
Oct 10, 2013
772
244
116
So, is this the first time we've seen "Async Compute" working on Nvidia hardware?

Let's hope it gets integrated into games in the near future!