[DX12] Fable Legends Beta Benchmarks

Page 11

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Not really, node changes have historically come with fairly large transistor cost increases, not decreases. It is only down the line (after 1.5-3 years) that the new node becomes cheaper than the previous one.

The industry disagrees with you.
 

AtenRa

Lifer
Feb 2, 2009
14,002
3,357
136
Ya, but perf/watt should skyrocket 50-100%. That means a GPU priced at $200 that replaces a 960, $350 that replaces a 970 and $550 that replaces the Fury/980 would smash those cards in performance. Do you think NV will just bump the performance only 15-30% for each segment in 2016? In other words 960 successor just 15-30% faster for $200, 970 successor just 15-30% faster for $330?

Yes, same price, 15-30% higher performance, half the power.


You are forgetting that 680 was 35-40% faster than the 580 and as well forgetting that the level of performance of the 580 dropped to far lower pricing levels. For instance, HD7850 OC traded blows with 580 OC and the former cost just $250 and had 2GB of VRAM vs. 1.5GB on the $450 580. Also, those faster GPUs will eventually start dropping in price. From now until December 2016 is a long time. Don't forget that existing cards will have sales too as they are EOL. R9 390 has already dropped to $280-290, R9 390X can be had for $370-380 and these cards were released just this year.

GTX680 was not 35-40% faster than GTX580 on release:

[Image: perfrel_1920.gif, relative performance chart]


Also, what I said was that you will not see large performance gains at the same price next year compared to what you pay TODAY. The $300 GPUs were an example.

980Ti came out at $650, not $750 though.

Yes, my bad. I meant that GP204 will be 15-30% faster than GTX980Ti at $750, i.e. $100 more expensive.

Even if that were true, there is a disproportionate premium for the fastest cards, which leaves room for lower-tier version of GP204 to be priced at a far more reasonable $400-450 level. Furthermore, if GP204 beats GM200 by 15-30%, that's A LOT faster than Fury X which means Fury X will need to drop massively from $650 to $400 or lower to make sense. So no matter how we slice it, by December 2016 the increase in price/performance should be significant from where we are sitting today.

First of all, nobody will buy a 28nm card at $300 at the end of 2016 when a $300 16nm card will offer the same or better performance at half the power.

Secondly, if 16nm cards arrive in summer 2016, there will be no high-end 28nm cards available by December 2016.

Thirdly, you have to understand that 16nm in 2016 will be lower volume and lower yield vs 28nm, and that will make a 350mm2 16nm die cost more than a 600mm2 28nm die, even before adding HBM 2.0.

Both AMD and NVIDIA will be 16nm volume-limited for the entirety of 2016, and that will have an effect on 16nm GPU availability that will translate into higher prices.

Anyone who believes he can buy a $300 GPU like the R9 390 / GTX970 today and then buy a card with GTX980Ti-equivalent performance on 16nm for $300 twelve months from now (H2 2016) is badly mistaken.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
First off it isn't GCN's implementation. It's by the book. MSFT DX12 book. People need to quit trying to make it sound like it's anything except an nVidia hardware issue.

Hmmm... so what you're saying is that AMD and Nvidia have the exact same implementation of tessellation? Roger roger.

Please show me where MSFT dictates that AS be hardware-driven.

2nd, I'm just asking a simple question. Is async compute up and working on any nVidia DX12 cards? Either through software/drivers or in hardware? A special rendering path? Anyway, anyhow?

Define "working." If "working" means it takes NV cards extra long to process it due to a lack of hardware and reliance on simulation, then yes, it's working. Is it beneficial to them? No, it seems not to be. However, it didn't seem to break their performance in another sample, Fable Legends, which, if recent accounts are true, uses Async Compute (though presumably nowhere near as much as AOTS).
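For what it's worth, the scheduling difference being argued here can be shown with a toy model (purely illustrative; real GPU schedulers are far more complex, and the millisecond numbers are made up):

```python
# Toy model of the scheduling difference (illustrative only; real GPU
# schedulers are far more complex, and the numbers are made up).

def frame_time_serialized(graphics_ms, compute_ms):
    """One pipeline: compute jobs queue up behind the graphics work."""
    return graphics_ms + sum(compute_ms)

def frame_time_async(graphics_ms, compute_ms):
    """Idealized async compute: compute jobs overlap graphics on idle
    units, so the frame takes as long as the slower concurrent stream."""
    return max(graphics_ms, sum(compute_ms))

graphics = 10        # ms of graphics work per frame (made-up)
compute = [1, 1]     # ms of compute jobs on a separate compute queue

print(frame_time_serialized(graphics, compute))  # → 12
print(frame_time_async(graphics, compute))       # → 10
```

If the compute work instead lands in the single graphics queue, you get the serialized case, which is the behavior being attributed to Maxwell above.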

That was pre Titan where we had GPU prices double in one generation.

Pretty sure it was AMD that jacked up the price of their former lead chip from $370 to $550. AND they didn't even beat Nvidia's middle chip at $500. [Yes, and I defended AMD's $550 price tag; still do.]
 
Last edited:

tential

Diamond Member
May 13, 2008
7,355
642
121
If you stick around, you will get accustomed to the heat. Just try to spend a lot of time outdoors from the end of winter on through spring and into the summer. The summers will still be hot, but you will adapt.

As for you PC expenditures.....
Why would you be embarrassed? It is your money and your life. There is nothing wrong with spending your own money on things you enjoy. That is what it is all about, and it doesn't matter if there are others who don't get it. It's your hobby and it brings you joy. Unless you are maxing out credit cards and burying yourself in mountains of debt with an obsessive spending addiction, you have no reason for shame.

Your hobby is actually great compared to some. It's not dangerous or at odds with any law. There are people out there hooked on really reckless pastimes. See, we all have interests and we all need outlets. If you worked, worked, worked and never had time or money for the things you enjoy and love to do... what life is that?
True on all counts!!
I have no debt so I'll enjoy away!
 

EarthwormJim

Diamond Member
Oct 15, 2003
3,239
0
76
Depends on how well the person can time their GPU upgrade. Surely next year cards like GTX970/390 will dip towards the $200 mark and that would represent a 50%+ increase in performance and way more VRAM over the $199 MSRP 960 2GB.

R9 290X cost $549 October 2013. By October 2014 last year when 970 came out, those 290X cards dropped to $300. As I mentioned above, 780Ti's $700 MSRP turned into a $330 GTX970 in just 10 months. In just 2.5 years since $650 780 came out, better performance can be had in a $240 R9 290, $280 R9 390, $280 GTX970.

I am not denying that AMD/NV might charge premiums for the latest SKUs. For example, remember when GTX580 cost $500 and it was possible to buy GTX480 for $300? We will always have situations like that.

Just make a mental note of how much it'll cost to buy $480 GTX980/$550 Fury level of performance by October-December 2016, regardless of what generation/vendor. I bet it's going to be possible for $300-325, or even less.
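The price/performance math in these posts is easy to sanity-check. A quick sketch using the 780 and R9 290 prices quoted above; the relative-performance figure is a rough assumption for illustration, not a benchmark result:

```python
# Back-of-the-envelope price/performance check using prices quoted in
# this thread; the relative-performance figure is an assumption.

def perf_per_dollar(relative_perf, price_usd):
    """Performance per dollar, with performance as a relative index."""
    return relative_perf / price_usd

gtx_780 = perf_per_dollar(1.00, 650)  # 2013 flagship as the baseline
r9_290  = perf_per_dollar(1.10, 240)  # assumed ~10% faster, at its later price

print(round(r9_290 / gtx_780, 2))  # → 2.98, i.e. roughly 3x the perf/$
```

Under those assumptions, perf/$ at this tier nearly tripled in about 2.5 years, which is the kind of shift the post is describing.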

You're listing prices, the person I quoted was talking about node jumps and new architectures. Midrange is often the last to see those.

Not really, node changes have historically come with fairly large transistor cost increases, not decreases. It is only down the line (after 1.5-3 years) that the new node becomes cheaper than the previous one.

The (alleged) difference this time around is that the new nodes might never transition to a level where they are cheaper than the previous node.

That's really not true if you're talking at the transistor level. New nodes, provided yields aren't atrocious, have always brought a lower cost per transistor.

Now if you're talking whole wafers or chips, they don't necessarily bring lower costs, since there are other design factors involved.
 
Last edited:
Feb 19, 2009
10,457
10
76
Funny that these guys are doing the investigative piece when the likes of AT just regurgitate info fed to them.

With GPUView giving full access to the rendering pipelines, Maxwell is aware of Async Compute queues when they are requested, but it shoves them into the same single graphics pipeline. It's what the guys on b3d have been saying.

http://wccftech.com/asynchronous-compute-investigated-in-fable-legends-dx12-benchmark/2/

The benchmark uses up to 18% compute, but it only uses 1 parallel async compute queue, or 1 ACE, in its current form. This data falls in line with what Zlatan said when he disabled AC: performance drops by around that mark, more than the 5% Lionhead told journalists.

An 18% uplift plus multi-threaded rendering explains the strong performance of AMD GCN in a UE4 game, where the deficit is normally quite big. Can't wait to see some more DX12 titles in neutral engines. :D
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Funny that these guys are doing the investigative piece when the likes of AT just regurgitate info fed to them.

With GPUView giving full access to the rendering pipelines, Maxwell is aware of Async Compute queues when they are requested, but it shoves them into the same single graphics pipeline. It's what the guys on b3d have been saying.

http://wccftech.com/asynchronous-compute-investigated-in-fable-legends-dx12-benchmark/2/

The benchmark uses up to 18% compute, but it only uses 1 parallel async compute queue, or 1 ACE, in its current form. This data falls in line with what Zlatan said when he disabled AC: performance drops by around that mark, more than the 5% Lionhead told journalists.

An 18% uplift plus multi-threaded rendering explains the strong performance of AMD GCN in a UE4 game, where the deficit is normally quite big. Can't wait to see some more DX12 titles in neutral engines. :D

wccftech is getting better by the day.
 

Snafuh

Member
Mar 16, 2015
115
0
16
Can't wait to see some more DX12 titles in neutral engines. :D

Fable Legends uses its own lighting system. It might run just as well on AMD with DX11 as with DX12. The source code for the Unreal Engine is public; everybody is free to look into why it is not "neutral," and devs can change everything for their games.
Epic works closely with Nvidia, but don't judge the engine based on some indie games.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
The industry disagrees with you.

That's really not true if you're talking at the transistor level. New nodes, provided yields aren't atrocious, have always brought a lower cost per transistor.

Now if you're talking whole wafers or chips, they don't necessarily bring lower costs, since there are other design factors involved.

Unless you guys have a better source than Nvidia and AMD, then you are both wrong.

[Image: NV-Pres3.jpg, Nvidia slide on cost per transistor by node]


[Image: 9501d1386985215-amd32-jpg, AMD slide on cost per transistor by node]


Transistor cost starts out higher on new nodes, and only reaches cost parity with the older node after 2-4 quarters on average (I said 1.5-3 years before, which was incorrect).
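That crossover dynamic is easy to sketch numerically. The starting premium and per-quarter decline rates below are made-up assumptions for illustration, not figures from the slides:

```python
# Toy cost-per-transistor model: the new node starts at a premium, but
# its cost falls faster (yield learning) until it crosses the old node.
# All numbers are illustrative assumptions, not foundry data.

def cost_curve(start_cost, decline_per_quarter, quarters):
    """Relative cost per transistor over a number of quarters."""
    return [start_cost * (1 - decline_per_quarter) ** q for q in range(quarters)]

old_node = cost_curve(start_cost=1.00, decline_per_quarter=0.02, quarters=8)
new_node = cost_curve(start_cost=1.30, decline_per_quarter=0.10, quarters=8)

# First quarter in which the new node is no more expensive than the old.
crossover = next(q for q in range(8) if new_node[q] <= old_node[q])
print(crossover)  # → 4, i.e. parity after about four quarters
```

With these assumed rates the crossover lands at four quarters, consistent with the 2-4 quarter parity window described above; steeper yield learning pulls it earlier.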
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
So you say we are wrong, while admitting you are wrong? ;)

28nm was essentially the last node to give lower transistor cost, even in the long run. And it's not going to change before EUV.
 

Piroko

Senior member
Jan 10, 2013
905
79
91
The first 28nm products were released in Q4 2011. With a fab time of 6-8 weeks and a transistor cost crossover in Q2 2012, that means anything sold prior to Q3 2012 was more expensive to produce per transistor than 40nm. antihelten is completely correct here.
 

readers

Member
Oct 29, 2013
93
0
0
The answer depends on whether we're talking as intellectually honest participants or partisan cheerleaders.

You can either say yes or no to this question: "Is acceptably fast performing software emulation the same as hardware for purposes of calling a feature 'Supported'?"

If you say yes this means:
* Maxwell2 supports Async Compute
* AMD could theoretically support DX FL 12_1 if they can write fast enough software paths for those features

If no:
* Maxwell2 does not support Async Compute (Based upon current information)
* AMD can not support DX FL 12_1

The problem is that cheerleaders want to have it both ways for their side, which is intellectually dishonest and illogical. This division also appears between Haswell iGPU-based H.265 decode vs. GM206 decode.

My opinion is that if you can write a fast enough software path (using a mix of CPU, GPGPU, and fixed hardware where appropriate), then you support it, because ultimately the consumer gets to experience the feature. Hardware-accelerated is always going to be faster and use less energy, and software-based is always going to be more flexible and can be updated after it's been shipped.

If the question is: "Is there fixed function hardware built into the GPU which supports X feature" then the answers are (based upon current information):
* Maxwell2 does not support fixed-function-hardware accelerated async compute
* AMD does not support fixed-function-hardware accelerated DX12 FL12_1 features

If anyone has verifiable information which proves otherwise I am of course completely willing to change my final assessments on these points, but based on current information that is how it stands.

Great post, but one question: does Maxwell 2 support fixed-function-hardware accelerated DX12 FL12_1 features?
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
So you say we are wrong, while admitting you are wrong? ;)

28nm was essentially the last node to give lower transistor cost, even in the long run. And it's not going to change before EUV.


Sigh.

No, I'm saying that you are wrong when you claim that new nodes traditionally come with lower transistor costs. Traditionally, new nodes come with higher transistor costs, and it is only down the line that they hit transistor cost parity. I initially stated, incorrectly, that it took 1.5-3 years to hit transistor cost parity for a new node, but historically it is closer to 0.5-1 years (feel free to claim some sort of internet points on "winning" that one, if you must).

The fundamental fact still remains that your claim is wrong and is in fact in direct opposition to the truth.

And 28nm did eventually give lower transistor costs with time, but it did not come with those lower costs, it came with higher costs, just like every other node that preceded it.
 
Last edited:

jpiniero

Lifer
Oct 1, 2010
14,804
5,427
136
The fundamental fact still remains that your claim is wrong and is in fact in direct opposition to the truth.

I'm actually confused as to what you are implying. 28nm was expensive at first, but yes, it got much cheaper over time. TSMC did a real good job of lowering costs eventually. Until EUV arrives, 16FF and future nodes will be super expensive at first, and in the long run they still won't get appreciably cheaper than where 28nm is today.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
I'm actually confused as to what you are implying. 28nm was expensive at first, but yes, it got much cheaper over time. TSMC did a real good job of lowering costs eventually. Until EUV arrives, 16FF and future nodes will be super expensive at first, and in the long run they still won't get appreciably cheaper than where 28nm is today.

ShintaiDK claimed that new nodes usually came with lower transistor cost.

In reality new nodes have always come with higher transistor costs, not lower, and it isn't until 0.5-1 year later that the new node hits cost parity (and as Piroko noted, when a new GPU generation is launched on a new node, it is usually produced within that first 0.5-1 year when transistor costs are higher, so the later lower cost is not really relevant to the discussion of the upcoming Pascal/Arctic Islands GPUs).

In other words, as far as the upcoming GPUs are concerned, 16nm will be no different from 28nm (i.e. the node change will involve higher transistor costs, just like 28nm did for GCN 1.0 and Kepler). The difference (which is entirely speculative at this point) is that 16nm may never become cheaper than 28nm, even with time. However, that will only be an issue for the inevitable refreshes of Pascal/Arctic Islands (it will potentially also affect Nvidia's and AMD's ability to lower Pascal/Arctic Islands prices over time).
 
Last edited:

jpiniero

Lifer
Oct 1, 2010
14,804
5,427
136
ShintaiDK claimed that new nodes usually came with lower transistor cost.

Over time is kind of implied.

when a new GPU generation is launched on a new node, it is usually produced within that first 0.5-1 year when transistor costs are higher, so the later lower cost is not really relevant to the discussion of the upcoming Pascal/Arctic Islands GPUs
Except you'll notice that when the 7970 shipped, it was right at the beginning of 28 nm. Both are totally skipping the "expensive early phase" because the cost would be absurd. You aren't seeing anything (unless the product is super expensive like Tesla) until the curve flattens. Even then it won't be much cheaper than 28 nm. Then you have declining volumes...
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Over time is kind of implied.

No it isn't; the discussion was explicitly about GPUs launched in 2016. If ShintaiDK wanted to expand the discussion beyond that, then he should have said so.

Except you'll notice that when the 7970 shipped, it was right at the beginning of 28 nm. Both are totally skipping the "expensive early phase" because the cost would be absurd. You aren't seeing anything (unless the product is super expensive like Tesla) until the curve flattens. Even then it won't be much cheaper than 28 nm. Then you have declining volumes...

What makes you think that Pascal/Arctic Islands are skipping the expensive early phase? Also, to my knowledge both are launching on 16FF+, not 16, although I could certainly be mistaken here.

Anyway, this is all getting a bit off the point. I don't think anyone disagrees that 16FF+ will be more expensive than 28nm (on a per-transistor basis). The claim that ShintaiDK made (and with which I disagree) is that this is somehow unusual, and that previous nodes came with large reductions in transistor cost (which they obviously didn't).
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,002
3,357
136
Exactly. :thumbsup:

And this is the key problem for nodes below 28nm until EUV arrives. And why we see high end SoCs on new nodes and not GPUs.

We only see SoCs on new low-power nodes; there are NO high-performance nodes available for mass production to this day, only 28nm.

If we had had 16nm FF+ last year, both AMD and NVIDIA would have released a high-performance, high-price, low-volume product to replace the high-end 28nm GPUs.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Sure, because cost doesn't matter. Even AMD directly said they abandoned 20nm due to cost/benefit, not because it wasn't there. 28nm is simply too attractive for AMD and NVidia to let go of until they have to.
 

thesmokingman

Platinum Member
May 6, 2010
2,307
231
106
Sure, because cost doesn't matter. Even AMD directly said they abandoned 20nm due to cost/benefit, not because it wasn't there. 28nm is simply too attractive for AMD and NVidia to let go of until they have to.


Didn't they find that 20nm gave less performance?


http://www.theregister.co.uk/2014/01/14/amd_unveils_kaveri_hsa_enabled_apu/?page=3

"What we found was with the CPU with planar transistors, when we went from 28 to 22, we actually started to slow down," he said, "because the pitch of the transistor had to become much finer, and basically we couldn't get as much oomph through the transistor."
The problem, he said, was that "our IDsat was unpleasant" at 22nm, referring to gate drain saturation current*. In addition, the chip's metal system needed to be scaled down to fit within the 22nm process, which increased resistance.
"So what we saw was the frequency just fall off the cliff," he said. "This is why it's so important to get to FinFET."
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106

That's more likely the PR version of it. Always go to the investor info, where they can only tapdance.

Before that there was no shortage of praising from Lisa Su.

20nm is an important node for us. We will be shipping products in 20nm next year and as we move forward

The truth seems to be this from Lisa Su.

AMD cancelled all these plans as she hopes more profit will come from switching to FinFET production

It's the ugly ROI face that hits all the minor companies like AMD and NVidia.
 

thesmokingman

Platinum Member
May 6, 2010
2,307
231
106
Lmao. I know now never to bother replying to any of your posts. Joe Macri, CTO = PR person? Bottom line: they scrapped 20nm because it's not cost-effective and waited for FinFET. "Cost-effective" could mean 20nm is too expensive, or that it's slower, or both, because why would their CTO admit to it being slower unless it was slower? Geeze, what a tool that guy is.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Exactly. :thumbsup:

So what timeframe exactly were you referring to?

The GPUs launched in the 2016 timeframe that AtenRa mentioned would fall quite solidly within the "more expensive than last node" time frame for 16FF+.

The corresponding timeframe for the 28nm node would be 2012 (with GPUs launched then having taped out in H2 2011 and H1 2012), and what AtenRa is predicting for $200/$300 GPUs this time around also happened on the 40nm-to-28nm transition (for example, from the $250 560 Ti on 40nm, launched Jan. 2011, to the $250 7850 on 28nm, launched May 2012, which brought roughly 10% higher performance and 40% lower power usage, almost exactly what AtenRa is predicting). AtenRa also pointed out, right before your post, that this has nothing to do with the new node being different and is simply history repeating itself.
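Taking the 560 Ti to 7850 figures above at face value, the perf/watt math works out like this (back-of-the-envelope arithmetic, not measured data):

```python
# Perf/watt arithmetic for the 560 Ti -> 7850 transition cited above,
# taking the post's ~10% perf and ~40% power figures at face value.

perf_ratio = 1.10    # ~10% higher performance
power_ratio = 0.60   # ~40% lower power draw

perf_per_watt_gain = perf_ratio / power_ratio
print(round(perf_per_watt_gain, 2))  # → 1.83, an ~83% perf/watt jump
```

That lands squarely in the 50-100% perf/watt "skyrocket" range discussed earlier in the thread, even though the raw performance gain at the price point was modest.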

So, long story short: if we see improvements of only 10-15% with a sizeable reduction in power usage, as AtenRa is predicting, it would have nothing to do with a lack of transistor cost reduction as you claimed, and everything to do with this new node being business as usual.

If you're referring to a later timeframe (2017+), then that's fine, but that's not what was being discussed.
 
Last edited: