[DX12] Fable Legends Beta Benchmarks

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
Just imagine how good DX-12 GE games will be with GCN cards.

edit. Also, no need for high-end CPUs; a Core i3 is enough even for a Fury X or GTX 980 Ti.
 
Last edited:
Feb 19, 2009
10,457
10
76
Not showing the same as AOTS. How...unexpected.

An RTS with heavy focus on draw calls & AI doesn't behave like a run-through scenery?

Who would have thought? Different games will be... different.

Interestingly, on the official UE4 dev documentation portal, it says Async Compute (the function is added to UE4 by Lionhead Studios) is only functional for Xbone and not for PC. Wonder what's going on there?

ps. This is a damn amazing result for AMD (even on unoptimized drivers, re: AT's article), other UE4 games have the 970/980 destroying 390X and Fury X.
 

4K_shmoorK

Senior member
Jul 1, 2015
464
43
91
I hope that as more new DX12 titles and their benchmarks are released, people will begin to see that the disparity between team green's and team red's DX12 tech is really overstated. I'm also really anxious to see what kind of DX12 performance Pascal/Arctic Islands is capable of compared to this gen. Also can't wait for people to get over this Ashes of the Singularity nonsense. I still can't play Galactic Civilizations 3 for over 30 min without it crashing. Not to mention, the menus freeze and lag when navigating through creating your own race.

In my experience, Stardock games have always performed relatively poorly compared to other 4X/strategy games.

[image: FRP 1080p benchmark chart]
 

BigDaveX

Senior member
Jun 12, 2014
440
216
116
AMD's cards do seem to cope a little better comparatively speaking. That said, the 980 Ti doesn't seem to suffer at all, and even Nvidia's lower-end cards don't experience the GeForce FX-style implosion that many of AMD's more... enthusiastic fans have spent the last few months assuring us would happen.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
GCN showing its console roots, dat 720p performance! :awe:

Kidding aside, good to see that my expensive GTX 980 Ti didn't become a paperweight overnight, as some forum posters here were predicting.

If the performance keeps up, my GF won't shed a tear when she inherits it and I upgrade next year :D
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
AMD's cards do seem to cope a little better comparatively speaking. That said, the 980 Ti doesn't seem to suffer at all, and even Nvidia's lower-end cards don't experience the GeForce FX-style implosion that many of AMD's more... enthusiastic fans have spent the last few months assuring us would happen.

Most probably because Async Compute is not working in the PC version, at least for the Beta.
 
Feb 19, 2009
10,457
10
76
Railven, do you remember when Oxide said UE4 may not even use async compute?

I find this situation fascinating. :)

https://docs.unrealengine.com/lates...ing/ShaderDevelopment/AsyncCompute/index.html

I wonder if Lionhead Studios, who claim Async Compute is "free performance on GCN", only meant it for the Xbone. Given it's an MS-sponsored title, they would care a great deal about Xbone performance.

DX12 has a lot of great features, which GPU has the advantage will depend entirely on which features are used. There's no point generalizing about all DX12 games as if they were the same.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Railven, do you remember when Oxide said UE4 may not even use async compute?

I find this situation fascinating. :)

https://docs.unrealengine.com/lates...ing/ShaderDevelopment/AsyncCompute/index.html

Yes, yes I do. My comment was mostly tongue-in-cheek. From the start I said we need more games to really see anything.

I wonder if Lionhead Studios, who claim Async Compute is "free performance on GCN", only meant it for the Xbone. Given it's an MS-sponsored title, they would care a great deal about Xbone performance.

My console comment wasn't as tongue-in-cheek. There is clearly a huge benefit to GCN running at lower resolutions, akin to console resolutions. I'd be interested if they can explain why AMD sees such huge gains at lower resolutions.

DX12 has a lot of great features, which GPU has the advantage will depend entirely on which features are used. There's no point generalizing about all DX12 games as if they were the same.

Well, yeah. But AMD would benefit greatly if AC is used. If enough AC is used it could crush Nvidia. Dat Tessellation complex!
 

zlatan

Senior member
Mar 15, 2011
580
291
136
An RTS with heavy focus on draw calls & AI doesn't behave like a run-through scenery?

Who would have thought? Different games will be... different.

Interestingly, on the official UE4 dev documentation portal, it says Async Compute (the function is added to UE4 by Lionhead Studios) is only functional for Xbone and not for PC. Wonder what's going on there?

ps. This is a damn amazing result for AMD (even on unoptimized drivers, re: AT's article), other UE4 games have the 970/980 destroying 390X and Fury X.

Lionhead probably wasn't able to merge the async compute implementation back into the main branch in time, so this feature will be added in a later build. Xbox One is fixed hardware, so it's much easier to write a stable implementation there. But Lionhead can still use this feature in the Fable Legends PC port.
 
Feb 19, 2009
10,457
10
76
Btw, do people here recall the Oxide v. NV issue? At the end of it, Oxide said Async Compute is not functional on Maxwell because it's not ready; the drivers disabled it. NV did not deny this in any way.

This raises the question of why this Fable benchmark advertises itself as using Async Compute when UE4 isn't capable of it and the feature isn't yet supported in NV's drivers.
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
Another one,

http://www.extremetech.com/gaming/2...o-head-to-head-in-latest-directx-12-benchmark

Test Results

We observed a higher-than-usual amount of variation between benchmark runs on both AMD and Nvidia hardware and adjusted our testing methodology to compensate. We ran the benchmark 4x on each card, at each quality preset, but threw out the first run in each case. We also threw out runs that appeared unusually far from the average — AMD and Nvidia both had at least one 720p run that returned a result of ~115 FPS, for example.
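That discard-the-outliers procedure is easy to sketch. The snippet below is only my interpretation of ExtremeTech's description; the 15% cutoff and the sample numbers are assumptions, not their actual script:

```python
from statistics import median

def filtered_average(fps_runs, max_deviation=0.15):
    """Average FPS after dropping the warm-up run and any outlier runs.

    A run counts as an outlier if it deviates from the median of the
    remaining runs by more than `max_deviation` (as a fraction).
    """
    runs = fps_runs[1:]  # throw out the first run in each case
    mid = median(runs)   # median so the outlier can't drag the reference value
    kept = [r for r in runs if abs(r - mid) <= max_deviation * mid]
    return sum(kept) / len(kept)

# Example: a ~115 FPS fluke among ~60 FPS runs gets discarded.
print(filtered_average([52.0, 60.0, 61.0, 62.0, 115.0]))  # -> 61.0
```

Using the median rather than the mean as the reference keeps a single ~115 FPS fluke from pulling the average toward itself before the filter runs.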

[image: Fable Legends benchmark chart]
 

zlatan

Senior member
Mar 15, 2011
580
291
136
This raises the question of why this Fable benchmark advertises itself as using Async Compute when UE4 isn't capable of it and the feature isn't yet supported in NV's drivers.

Async is working in this benchmark (Dynamic Global Illumination / Compute Shader Simulation & Culling) for all GCN cards and also for Maxwell 2. The Nano can run those algorithms more than twice as fast with async shaders.
 

geoxile

Senior member
Sep 23, 2014
327
25
91
Async is working in this benchmark (Dynamic Global Illumination / Compute Shader Simulation & Culling) for all GCN cards and also for Maxwell 2. The Nano can run those algorithms more than twice as fast with async shaders.

Where does it say that? Anand's rendering sub-system breakdown shows GCN is up to 3 times slower than Maxwell in GI.
 

flopper

Senior member
Dec 16, 2005
739
19
76
My console comment wasn't as tongue-in-cheek. There is clearly a huge benefit to GCN running at lower resolutions, akin to console resolutions. I'd be interested if they can explain why AMD sees such huge gains at lower resolutions.




Nvidia is CPU-limited at that resolution...
 

tential

Diamond Member
May 13, 2008
7,355
642
121
So AMD doesn't have a massive DX12 advantage? These look like pretty normal results; I've only looked at AnandTech so far.

I'm more interested in seeing how this translates to graphics and physics quality in DX12 games than in which vendor does better, though.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
So AMD doesn't have a massive DX12 advantage? These look like pretty normal results; I've only looked at AnandTech so far.

I'm more interested in seeing how this translates to graphics and physics quality in DX12 games than in which vendor does better, though.

For those on mobile with 720p or similar resolutions, AMD has an advantage.

I would assume that advantage will translate to lower-end GPUs and CPUs, so AMD can essentially carve out a niche in the entry-level/casual PC gamer demographic.

I wonder what their APUs would score.
 

desprado

Golden Member
Jul 16, 2013
1,645
0
0
Railven, do you remember when Oxide said UE4 may not even use async compute?

I find this situation fascinating. :)

https://docs.unrealengine.com/lates...ing/ShaderDevelopment/AsyncCompute/index.html

I wonder if Lionhead Studios, who claim Async Compute is "free performance on GCN", only meant it for the Xbone. Given it's an MS-sponsored title, they would care a great deal about Xbone performance.

DX12 has a lot of great features, which GPU has the advantage will depend entirely on which features are used. There's no point generalizing about all DX12 games as if they were the same.
Next time, please provide a better excuse.
http://forums.overclockers.co.uk/showthread.php?t=18690671
 

zlatan

Senior member
Mar 15, 2011
580
291
136
Where does it say that? Anand's rendering sub-system breakdown shows GCN is up to 3 times slower than Maxwell in GI.

That's a compiler issue. The D3D bytecode wasn't designed for GCN, and FXC's optimizations do more harm than good for AMD, so Catalyst has to de-optimize the compiled bytecode before it can compile a good binary for GCN.
On Xbox One the HLSL compiler can produce much faster code; the same HLSL code on PC is 30 percent slower on average, and the reason for this is the D3D bytecode. SPIR-V will be an ultimate weapon for AMD because it will be a modern IR.

Microsoft will probably replace the D3D bytecode in the future, because most companies will build GCN-like architectures, and a change in the compiler space is really needed for that.
 

naukkis

Senior member
Jun 5, 2002
705
576
136
Where does it say that? Anand's rendering sub-system breakdown shows GCN is up to 3 times slower than Maxwell in GI.

Of course a subtest run async will be much slower than reserving the whole GPU to run that one subtest.
 

Genx87

Lifer
Apr 8, 2002
41,095
513
126
Bleh, so much for AMD crushing Nvidia in DX12. I would think that after being proven wrong so many times, people would stop buying into the AMD marketing slides.
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
Bleh, so much for AMD crushing Nvidia in DX12. I would think that after being proven wrong so many times, people would stop buying into the AMD marketing slides.


From the GTX 980 being 15-20% faster than the R9 290X at launch

http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/13
[image: AnandTech GTX 980 launch review chart]


To the R9 390X being faster than the GTX 980 in DX-12 games

[image: Fable Legends 1080p average FPS chart]


Also, the R9 290 is faster than the GTX 970 and very close to the double-priced GTX 980.
Anyone who bought an R9 290 at $240-250 in the last few months should be commended ;)

Edit:

Also, the GTX 960 should start to look way overpriced in DX-12 games against the R9 380.

[image: 1080p i7 benchmark chart]
 
Last edited: