[DX12] Fable Legends Beta Benchmarks


3DVagabond

Lifer
Aug 10, 2009
FWIU, the *_1 features were added to DX12 at nVidia's request. Thus, the separate nomenclature from DX12_0. It's not like Msft simply decided to split DX12 into two feature sets before it was even released.

Of course we've already had claims that certain functions were supported only to find out they aren't. So, we'll know better as we go along. Assuming no more shens though, nVidia should completely support the *_1 feature set.
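Since claimed feature-level support keeps turning out to be wrong, the only reliable check is to ask the driver directly. A minimal sketch (Windows-only, linking `d3d12.lib`; error handling kept to a bare minimum) that queries the highest Direct3D feature level a device actually advertises:

```cpp
// Sketch: query the highest supported D3D12 feature level on the
// default adapter. Windows-only; link against d3d12.lib.
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    ID3D12Device* device = nullptr;
    // Create a device at the 11_0 baseline first; CheckFeatureSupport
    // then reports how far above that baseline the hardware goes.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12-capable device available.\n");
        return 1;
    }

    D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_12_1, D3D_FEATURE_LEVEL_12_0,
        D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0,
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels        = _countof(requested);
    levels.pFeatureLevelsRequested = requested;

    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_FEATURE_LEVELS, &levels, sizeof(levels)))) {
        // 0xc100 = 12_1, 0xc000 = 12_0, 0xb100 = 11_1, 0xb000 = 11_0
        std::printf("Max supported feature level: 0x%x\n",
                    levels.MaxSupportedFeatureLevel);
    }
    device->Release();
    return 0;
}
```

Note this only reports the feature level; individual capabilities within a level (conservative rasterization tiers, ROVs, and so on) have their own `CheckFeatureSupport` queries.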
 

ShintaiDK

Lifer
Apr 22, 2012
Feature level 12_1 is more than two years old, more likely three. Otherwise it wouldn't be in Intel's Gen9 graphics, as seen with Skylake.
 

jpiniero

Lifer
Oct 1, 2010
So long story short, if we see improvements of only 10-15% with a sizeable reduction in power usage as AtenRa is predicting it would have nothing to do with a lack of transistor cost reduction as you claimed, and everything to do with this new node being business as usual.

If you're referring to a later timeframe (2017+), then that's fine, but that's not what was being discussed.

16FF+ isn't a new node in the academic sense; it's really more of a minor patch to 16FF, which itself is essentially 20 nm with FinFETs added. That would make it a nearly two-year-old node.
 

lilltesaito

Member
Aug 3, 2010
jpiniero said:
16FF+ isn't a new node in the academic sense; it's really more of a minor patch to 16FF, which itself is essentially 20 nm with FinFETs added. That would make it a nearly two-year-old node.

By that same logic, every video game ever made isn't new compared to Pong. :whistle:
 

antihelten

Golden Member
Feb 2, 2012
jpiniero said:
16FF+ isn't a new node in the academic sense; it's really more of a minor patch to 16FF, which itself is essentially 20 nm with FinFETs added. That would make it a nearly two-year-old node.

That is possible, but it would simply mean that 16FF+ could start out at a lower transistor cost than is usual for a new node (being more mature, with better yields), not, as ShintaiDK was implying, a higher transistor cost than is usually seen for a new node.
 

ShintaiDK

Lifer
Apr 22, 2012
antihelten said:
That is possible, but it would simply mean that 16FF+ could start out at a lower transistor cost than is usual for a new node (being more mature, with better yields), not, as ShintaiDK was implying, a higher transistor cost than is usually seen for a new node.

It still costs more than 28 nm. It's all about multi-patterning once you go below 28 nm; the only thing that will change that trend is EUV, and that's not there yet.

[Attached images: LithoCost.jpg and sfdsoi2.jpg — lithography cost-per-node charts]
 
Last edited:

Erenhardt

Diamond Member
Dec 1, 2012
I hate all of you. Every [redacted] time I come here to see any news on this benchmark I have some Legacy node cost structure BS to read.

Someone msg me if they release a new version of this benchmark or make it public.

Profanity isn't allowed in the technical forums.
-- stahlhart
 
Last edited by a moderator:

readers

Member
Oct 29, 2013
We see only SoCs on new Low Power nodes, there are NO high performance nodes available for mass production to this day, only 28nm.

If we had 16nm FF+ last year, both AMD and NVIDIA would release a high performance, high price, low volume product to replace the high-end 28nm GPUs.

Only if you exclude Intel, which has been making large, powerful chips at 22 nm for a long time and is at 14 nm now.

So yes, they do exist. Not having access to a node and the node not existing are two completely different cases.
 

stahlhart

Super Moderator Graphics Cards
Dec 21, 2010
Please take the process node digression elsewhere, and get this discussion back on topic.
-- stahlhart
 

Kenmitch

Diamond Member
Oct 10, 1999
Erenhardt said:
I hate all of you. Every [redacted] time I come here to see any news on this benchmark I have some Legacy node cost structure BS to read.

Someone msg me if they release a new version of this benchmark or make it public.

I feel your pain.

Please take the process node digression elsewhere, and get this discussion back on topic.
-- stahlhart

Thanks!

Any new benchmarks with the latest drivers?
 
Last edited:

readers

Member
Oct 29, 2013
It really depends on what type of article/review one wishes to portray, i.e. motive. Comparing a nearly 1400 MHz overclocked card to a stock one is hardly fair. If you have no problem with that, then there's not much to debate really.

[redacted]

BTW, it seems either a 980 Ti or a 390 is still the way to go if you want a new GPU today, unless you're thermally or power limited, in which case there's the 970.

Warning issued for profanity.
-- stahlhart
 
Last edited by a moderator: