That was the thinking beforehand, which is why people were surprised by
this:
Right, but if I'm not mistaken, their 14nm uses the BEOL of that other 20nm process? So it's more or less that, plus FinFETs and some other improvements? It is a bigger change over the production 20nm, but it's still based on 20nm development, just the version that wasn't put into production.
Although, is it the one they're going to put into production? I saw somewhere that they were going to offer a new and improved 20nm with cost benefits for chips that don't need what FinFETs bring? That's what I mentioned at the end there.
The 20nm issues are more that it was delayed, and then that second version of it never materialized, so the companies that needed what it offered struggled to work with the production version of the process and moved to 14nm, which was based on the 20nm they had been aiming for.
The issue with 14/16nm is that its cost per transistor didn't actually go down, so chips on it will actually increase in cost (costs may improve as the node matures, but that would have to be weighed against the cost of revising the design compared to staying on 20nm). I think that's the impetus behind them also offering a revised 20nm for production, as it lessens costs: cost per transistor will be lower out of the gate, and it will be easier to work with for teams used to planar.
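Just to make the cost-per-transistor point concrete, here's a toy calculation with made-up numbers (not actual foundry pricing): if the wafer cost rises faster than the density, the per-transistor cost goes up even though the chip shrinks.

```python
# Hypothetical illustration of why a denser node can still cost more per
# transistor. All numbers are invented for the example, not real pricing.

def cost_per_transistor(wafer_cost, transistors_per_wafer):
    return wafer_cost / transistors_per_wafer

# Normalized baseline: a mature planar (28nm-class) wafer.
base_wafer_cost = 1.00
base_density = 1.00

# Suppose 14/16nm FinFET roughly doubles density but the wafer costs ~2.2x
# as much (double patterning, extra FinFET steps, lower early yields).
planar = cost_per_transistor(base_wafer_cost, base_density)
finfet = cost_per_transistor(2.2 * base_wafer_cost, 2.0 * base_density)

print(f"relative cost per transistor: {finfet / planar:.2f}x")
# -> 1.10x, i.e. ~10% more per transistor despite the shrink
```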
I hope to be posting the S810 piece sometime in the next couple of weeks to finally get the numbers out there.
To further the discussion here, though: it's not the process that is to blame, at least not for the biggest part. The fact that SoCs employ the same IP, such as ARM Cortex cores, means nothing in terms of expected power consumption, as the actual physical implementation and layout can be extremely different between the various companies. Even among 20nm A57s, Samsung's looks nothing like Qualcomm's, which looks nothing like Nvidia's.
Exactly, and you also have to iterate the design into production. Apple and Samsung both managed to do that on 20nm, and before Qualcomm did, without exhibiting the issues that the 810 does.
And as was mentioned in the iPhone 6 review, there are tradeoffs dictated by the process:
In practice TSMC's 20nm process is going to be a mixed bag; it can offer 30% higher speeds, 1.9x the density, or 25% less power consumption than their 28nm process, but not all three at once. In particular power consumption and speeds will be directly opposed, so any use of higher clock speeds will eat into power consumption improvements. This of course gets murkier once we're comparing TSMC to Samsung, but the principle of clock speed/power tradeoffs remains the same regardless.
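To put rough numbers on that tradeoff, here's a minimal sketch using the usual dynamic-power relation P ≈ C·V²·f; the scaling factors are illustrative guesses, not TSMC's actual figures.

```python
# Rough dynamic-power model, P ~ C * V^2 * f, to show why the quoted
# "30% higher speed" and "25% less power" can't be had at the same time.
# The scaling factors below are illustrative, not TSMC's published numbers.

def dynamic_power(cap, voltage, freq):
    return cap * voltage**2 * freq

p_28 = dynamic_power(cap=1.0, voltage=1.0, freq=1.0)

# Option A: keep 28nm-like clocks and spend the node on power savings.
p_20_low_power = dynamic_power(cap=0.85, voltage=0.94, freq=1.0)

# Option B: raise clocks ~30%, which typically also needs a bit more voltage.
p_20_high_clock = dynamic_power(cap=0.85, voltage=1.02, freq=1.3)

print(f"iso-clock power vs 28nm:  {p_20_low_power / p_28:.2f}x")   # ~0.75x
print(f"+30% clock power vs 28nm: {p_20_high_clock / p_28:.2f}x")  # ~1.15x
```

With made-up but plausible factors, holding the clock steady lands near the quoted ~25% power saving, while chasing the full 30% clock gain pushes power above the 28nm baseline, which is the tradeoff the review describes.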
However, from what I gather, the production 20nm wasn't as big a change as full nodes have been in the past, as it was aimed at low-cost, lower-performance parts, while they planned a different version for higher-performance chips? Other nodes in the past have had multiple versions for different needs as well.
And Qualcomm has revisions (respins?) that improve the 810 (though I also got the impression the improvement was as much software, like limiting outright clock speed, as anything else?).
Another thing I wondered might be causing issues is that they had a wide variance between chips but didn't do much binning (possibly to meet demand, as the 800, 801, and 805 were popular and ended up in a lot of devices). AMD, for example, on both its CPUs and GPUs, seems to ship with a higher voltage than necessary, which is detrimental to the power/performance ratio of its chips, making them run hotter and underperform relative to their potential (which in mobile devices is doubly hurtful because of throttling).
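For a sense of how much extra voltage margin costs, a quick back-of-the-envelope using the same V² dependence (hypothetical numbers, not measured 810 or AMD data):

```python
# Why shipping with more voltage margin than a given die needs hurts
# efficiency: dynamic power scales with V^2 (and leakage grows even faster).
# Purely illustrative; no real chip data here.

def relative_dynamic_power(voltage, nominal=1.0):
    return (voltage / nominal) ** 2

for margin in (0.00, 0.05, 0.10):
    v = 1.0 + margin
    print(f"+{margin:.0%} voltage margin -> "
          f"{relative_dynamic_power(v):.2f}x dynamic power")
# +0%  -> 1.00x
# +5%  -> 1.10x
# +10% -> 1.21x at the same clock, before extra leakage or throttling
```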
Definitely would like to see what all issues resulted in the 810's lackluster showing.