Why is the Snapdragon 810 weaker/hotter than the Exynos 7420?

Oct 30, 2013
I'm just wondering why the user experience on these two chips is so wildly different.

They are both using ARM's Cortex-A57 cores in a quad configuration (for the faster cluster).

What's causing the S810 to run hotter, throttle more, and perform worse than its counterpart?

Aren't ARM reference designs, well, reference?
 

ChronoReverse

Platinum Member
Mar 4, 2004
Even reference designs aren't implemented verbatim. Plus Samsung is on their own 14nm process while Qualcomm has to use TSMC's 20nm process (on top of TSMC's silicon generally not being as good in the first place).
 

s44

Diamond Member
Oct 13, 2006
That 20nm process is a dog. Notice the GPU makers skipping it altogether.

Samsung's 14nm with FinFET (the first non-Intel FinFET process) is really good.

Plus Samsung did a dry run with A57/A53 - last fall's Exynos 5433.
 
Feb 19, 2001
Is the process that big of a difference maker? Remember in the desktop CPU world, when Intel went from 32nm to 22nm the benefits weren't that big, and part of the explanation was that concentrating that much heat in such a small area actually makes it harder to dissipate. Heck, even 45nm to 32nm wasn't that big of a gain in terms of thermals.

There has to be more with the architecture that's causing the problem as well.
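
To put some rough numbers on the heat-concentration point, here's a back-of-the-envelope sketch in Python; every value is invented for illustration, but it shows why a shrink that lowers total power can still be harder to cool.

Code:
# Back-of-the-envelope power density (all numbers invented for illustration).
# A shrink that cuts power 25% but halves die area still concentrates heat.

old_power_w, old_area_mm2 = 4.0, 100.0   # hypothetical larger-node chip
new_power_w, new_area_mm2 = 3.0, 50.0    # shrunk: -25% power, -50% area

print(f"old: {old_power_w / old_area_mm2:.3f} W/mm^2")   # 0.040
print(f"new: {new_power_w / new_area_mm2:.3f} W/mm^2")   # 0.060, 50% denser heat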
 

ChronoReverse

Platinum Member
Mar 4, 2004
Is the process that big of a difference maker? Remember in the desktop CPU world, when Intel went from 32nm to 22nm the benefits weren't that big, and part of the explanation was that concentrating that much heat in such a small area actually makes it harder to dissipate. Heck, even 45nm to 32nm wasn't that big of a gain in terms of thermals.

There has to be more with the architecture that's causing the problem as well.

It's not just the feature size; there are a lot of other details that make a process good or bad, including the tradeoffs between power usage, leakage, and performance.

TSMC's 20nm is just bad (relatively) all around, while Intel's and Samsung's (very different) 14nm processes have good characteristics.

Along with the tweaks Samsung would have done on their own silicon, the Exynos 7420 ends up far better than the SD810.
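
As a concrete sketch of those tradeoffs, the standard first-order CMOS power model is P ≈ αCV²f + V·I_leak; the snippet below compares a hypothetical tight process against a leakier one at the same voltage and clock (all values are illustrative assumptions, not figures for either chip).

Code:
# First-order CMOS power model: P = alpha*C*V^2*f + V*I_leak
# alpha = activity factor, C = switched capacitance, V = supply voltage,
# f = clock frequency, I_leak = leakage current. All values illustrative.

def soc_power(v, f_ghz, alpha=0.2, c_nf=1.0, i_leak_a=0.2):
    dynamic = alpha * (c_nf * 1e-9) * v**2 * (f_ghz * 1e9)  # watts
    leakage = v * i_leak_a                                   # watts
    return dynamic, leakage

# Same IP, same clocks, but one process leaks three times as much:
for name, leak in [("tight process", 0.2), ("leaky process", 0.6)]:
    dyn, lk = soc_power(v=1.0, f_ghz=2.0, i_leak_a=leak)
    print(f"{name}: dynamic={dyn:.2f} W, leakage={lk:.2f} W, total={dyn + lk:.2f} W")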
 
Mar 11, 2004
Is the process that big of a difference maker? Remember in the desktop CPU world, when Intel went from 32nm to 22nm the benefits weren't that big, and part of the explanation was that concentrating that much heat in such a small area actually makes it harder to dissipate. Heck, even 45nm to 32nm wasn't that big of a gain in terms of thermals.

There has to be more with the architecture that's causing the problem as well.

It is. What you need to keep in mind is that during that time Intel made other changes. They spent a large portion of the die on the GPU, while the CPU cores were relegated to a smaller, condensed portion of the die, meaning their heat wasn't spread out or placed optimally (like if they were right under center or spread out, although there are obvious reasons why they wouldn't do things like that). Another reason thermals didn't improve with Ivy Bridge is that Intel stopped soldering the IHS to the die and instead switched to thermal paste, leading to worse conduction between the two parts. And basically since Ivy Bridge, their designs and their fabbing have focused on minimizing power usage over all else.
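
For the solder-versus-paste point, junction temperature follows a simple thermal-resistance stack; a minimal sketch with invented resistances shows the direction of the effect.

Code:
# Junction temperature through a simple thermal-resistance stack:
#   T_junction = T_ambient + P * (R_die_to_IHS + R_IHS_to_ambient)
# Solder gives a lower die-to-IHS resistance than paste. Numbers invented.

def t_junction(power_w, r_die_ihs_c_per_w, r_ihs_amb_c_per_w=0.3, t_ambient_c=25.0):
    return t_ambient_c + power_w * (r_die_ihs_c_per_w + r_ihs_amb_c_per_w)

print("soldered IHS:", t_junction(80, 0.10), "C")  # 57.0 C
print("pasted IHS:  ", t_junction(80, 0.25), "C")  # 69.0 C, same chip, same cooler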

It is more complex than that, as it boils down to how well the chip's electronic design suits the process. 20nm was largely a failure (although I believe a revised version of it is supposed to come out for chips that might work better on traditional planar transistors versus FinFETs).

ChronoReverse said:
It's not just the feature size; there are a lot of other details that make a process good or bad, including the tradeoffs between power usage, leakage, and performance.

TSMC's 20nm is just bad (relatively) all around, while Intel's and Samsung's (very different) 14nm processes have good characteristics.

Along with the tweaks Samsung would have done on their own silicon, the Exynos 7420 ends up far better than the SD810.

Absolutely. Process does matter a lot, but you still have to make proper use of it. And even if you have a good design on paper that should work well with the process, you still have to iterate that design so that it can be mass-produced well. I think this latter point is the 810's real problem, but I also think it occurred because of issues with 20nm in the fabs.

From what I've gathered, their 16/14nm is basically 20nm but with FinFETs, which is why they're moving on from 20nm (and a lot of companies skipped it). It isn't comparable to Intel's 14nm, but it should be superior to 20nm.
 

s44

Diamond Member
Oct 13, 2006
From what I've gathered, their 16/14nm is basically 20nm but with FinFETs
That was the thinking beforehand, which is why people were surprised by this:
A great deal of discussion ensued over whether Samsung’s 14nm process really represented a “true” die shrink over its 20nm predecessor. We were ourselves surprised to see Chipworks announce that the chip came in at only 78mm² compared to the Exynos 5433’s 113mm². This 31% shrink was beyond what we expected, as we previously reported that Samsung’s 14nm process was to continue to use the 20nm’s BEOL (Back-End-Of-Line, a chip’s largest metal layer) and thus make for only a minor progression. Both the BEOL’s M1 metal pitch and the transistor’s contacted gate pitch equally determine the density and just how much a design is able to scale in area on a given process. It was only after Samsung’s ISSCC February 2015 presentation on the Exynos 5433 (credits to our colleagues at PC Watch) that it became clear as to what is going on:

While Samsung has in the past only referred to the 20nm node as a single process, the reality is that there seem to have been two planned variants of the node. The variant we’ve seen in the Exynos 5430 and 5433 was in fact called 20LPE. In contrast, the process from which 14nm borrows its BEOL is another variant called 20LPM – and this node sees a very different [much smaller] M1 metal pitch.
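
The quoted die sizes are easy to sanity-check; a quick sketch (the scaling rule in the comment is the usual first-order one):

Code:
# Sanity check on the quoted Chipworks figures.
area_20nm_mm2 = 113.0   # Exynos 5433, 20nm
area_14nm_mm2 = 78.0    # Exynos 7420, 14nm

print(f"area reduction: {1 - area_14nm_mm2 / area_20nm_mm2:.0%}")  # ~31%

# First-order, logic density scales with (contacted gate pitch x M1 metal pitch),
# which is why reusing a 20nm BEOL (same M1 pitch) would have capped the shrink.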
 

Andrei.

Senior member
Jan 26, 2015
I hope to be posting the S810 piece sometime in the next couple of weeks to finally get the numbers out there.

To further this discussion here, though: it's not the process which is to blame, at least not for the biggest part. The fact that SoCs employ the same IP, such as ARM Cortex cores, means nothing in terms of expected power consumption, as the actual physical implementation and layout can be extremely different between the various companies. Even on 20nm A57s, Samsung's looks nothing like Qualcomm's, which looks nothing like Nvidia's.
 

s44

Diamond Member
Oct 13, 2006
Wow, so Qualcomm actually just screwed up their chip design. Amazing.
 

Commodus

Diamond Member
Oct 9, 2004
s44 said:
Wow, so Qualcomm actually just screwed up their chip design. Amazing.

Yep. I feel sorry for the phone makers that don't control their own processors (those that aren't Apple, Huawei or Samsung, really) and depend on Qualcomm for high-end chips. They're forced to either use a problematic chip or dial things down a notch by using the 808. If Samsung is struggling to keep up with Apple at the high end even with a well-done custom CPU, imagine if you're HTC or Sony forced to settle for a wonky processor.
 
Dec 4, 2013
Andrei. said:
I hope to be posting the S810 piece sometime in the next couple of weeks to finally get the numbers out there.

To further this discussion here, though: it's not the process which is to blame, at least not for the biggest part. The fact that SoCs employ the same IP, such as ARM Cortex cores, means nothing in terms of expected power consumption, as the actual physical implementation and layout can be extremely different between the various companies. Even on 20nm A57s, Samsung's looks nothing like Qualcomm's, which looks nothing like Nvidia's.

Awesome. Looking forward to that article, Andrei!
 

Roland00Address

Platinum Member
Dec 17, 2008
It is quite common for chips with the same architecture to deliver similar performance at similar frequencies but have completely different power consumption. Does anyone else remember Tegra 2, 3, or 4?
 
Mar 11, 2004
s44 said:
That was the thinking beforehand, which is why people were surprised by this:

Right, but if I'm not mistaken, their 14nm is using the BEOL of that other 20nm process? So it's more or less that with FinFETs and some other improvements? So it is a bigger change over the production 20nm, but it's still based on 20nm development, just the variant that wasn't put into production.

Although, is that the variant they're going to put into production? I saw somewhere that they were going to offer a new and better 20nm that should offer cost benefits for chips that don't need what FinFET offers. That's what I mentioned at the end there.

The 20nm issues are more that it was delayed, and then that second version never materialized, so the companies that needed what it offered struggled to work with the production version of the process and moved on to 14nm, which was based on the 20nm they had been aiming for.

The issue with 14/16nm is that its cost per transistor didn't actually go down, so chips on it will actually increase in cost (although that can improve, and it has to be weighed against the cost of revising a design relative to 20nm). I think that is the impetus behind also offering a revised 20nm: the initial cost per transistor will be lower, and it will be easier to work with for those used to planar.
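
A hypothetical worked example of the cost-per-transistor point (all numbers invented): a denser node only cuts cost if wafer price doesn't rise faster than density.

Code:
# Cost per transistor = wafer cost / (good dies per wafer * transistors per die).
# All numbers are invented; the point is the ratio, not the absolute values.

def usd_per_btransistor(wafer_cost_usd, good_dies_per_wafer, btransistors_per_die):
    return wafer_cost_usd / (good_dies_per_wafer * btransistors_per_die)

# Same ~2B-transistor design on planar 20nm vs FinFET 14/16nm:
print("20nm  :", round(usd_per_btransistor(5000, 400, 2.0), 2))  # 6.25
print("14/16 :", round(usd_per_btransistor(8000, 600, 2.0), 2))  # 6.67
# The denser node fits more good dies, but the pricier wafer cancels the saving.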

Andrei. said:
I hope to be posting the S810 piece sometime in the next couple of weeks to finally get the numbers out there.

To further this discussion here, though: it's not the process which is to blame, at least not for the biggest part. The fact that SoCs employ the same IP, such as ARM Cortex cores, means nothing in terms of expected power consumption, as the actual physical implementation and layout can be extremely different between the various companies. Even on 20nm A57s, Samsung's looks nothing like Qualcomm's, which looks nothing like Nvidia's.

Exactly, you also have to iterate the design into production. Apple and Samsung both managed to do that on 20nm, and earlier than Qualcomm at that, without exhibiting the issues that the 810 does.

And as was mentioned in the iPhone 6 review, there are tradeoffs dictated by the process:
In practice TSMC’s 20nm process is going to be a mixed bag; it can offer 30% higher speeds, 1.9x the density, or 25% less power consumption than their 28nm process, but not all three at once. In particular power consumption and speeds will be directly opposed, so any use of higher clock speeds will eat into power consumption improvements. This of course gets murkier once we’re comparing TSMC to Samsung, but the principle of clock speed/power tradeoffs remains the same regardless.
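
Plugging those quoted figures into the same first-order model shows how spending the node on clock speed can more than erase the power saving; the +10% voltage assumed for the higher clock is my own illustrative guess.

Code:
# Dynamic power scales ~ f * V^2. Take the quoted node options literally:
node_power_factor = 0.75   # up to 25% less power at the same speed
f_scale = 1.30             # or up to 30% higher clocks...
v_scale = 1.10             # ...assuming +10% voltage to reach them (assumption)

net = node_power_factor * f_scale * v_scale**2
print(f"power vs old-node baseline at +30% clock: {net:.2f}x")  # ~1.18x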

However, from what I gather, the production 20nm wasn't quite as big of a change as full nodes in the past, as it was aimed at low-cost, lower-performance parts, while they were planning a different version for higher-performance chips? Other nodes in the past had multiple versions for different needs as well.

And Qualcomm has revisions (respins?) that improve the 810 (though I also got the impression that it was as much software, like limiting peak clocks, as anything).

Another thing I wondered might be causing issues is that they had a wide variance of chips but didn't do much binning (possibly to meet demand, as the 800, 801, and 805 were popular and ended up in quite a lot of devices). AMD, on both CPUs and GPUs, seems to go with a higher voltage than necessary in the same way, which is detrimental to the power/performance ratio of their chips, making them run hotter and underperform relative to their potential (which in mobile devices is doubly hurtful because of throttling).
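
A small sketch of why an unbinned voltage guardband hurts (both voltages are hypothetical): dynamic power scales with V², so even a modest margin shows up directly as heat.

Code:
# Dynamic power ~ V^2: shipping every die at a worst-case voltage instead of
# binning per-die voltage wastes power as heat. Voltages are hypothetical.
binned_v = 0.90
guardband_v = 1.00

print(f"dynamic power penalty: {(guardband_v / binned_v) ** 2:.2f}x")  # ~1.23x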

I'd definitely like to see which issues resulted in the 810's lackluster showing.
 

poofyhairguy

Lifer
Nov 20, 2005
s44 said:
Wow, so Qualcomm actually just screwed up their chip design. Amazing.

Not that amazing. I called it last year. The 810 is a stopgap SoC that was slapped together when Apple went 64-bit long before anyone expected.

What we have seen again and again is that it takes experience to implement standard ARM designs. The Galaxy S4 Exynos version was basically broken from the start and was a cheating and throttling disaster all around. But that failure allowed Samsung to make a great S6.

Prior to the 810, Qualcomm had only really messed around with low-end standard ARM designs. It was obvious as soon as it was announced that the 810 had albatross written all over it.
 
Mar 11, 2004
poofyhairguy said:
Not that amazing. I called it last year. The 810 is a stopgap SoC that was slapped together when Apple went 64-bit long before anyone expected.

What we have seen again and again is that it takes experience to implement standard ARM designs. The Galaxy S4 Exynos version was basically broken from the start and was a cheating and throttling disaster all around. But that failure allowed Samsung to make a great S6.

Prior to the 810, Qualcomm had only really messed around with low-end standard ARM designs. It was obvious as soon as it was announced that the 810 had albatross written all over it.

I thought Qualcomm had a history of shipping stock ARM designs when a new core came out, and then transitioning to their own version of it about a year later? Meaning it wasn't so much that the 810 was slapped together (which I take as meaning its design); it was always going to be Qualcomm's next chip. But the delays on 20nm, and then Apple buying up a lot of the early 20nm production, left Qualcomm behind, so they rushed to have chips for companies to put in products to show off at CES. It meant they didn't have the proper development and tapeout time needed to get it right and ready for production.

I think it bit Apple as well. There were reports that Apple and TSMC had deals for 20, 16, and 10nm production, but Samsung also produced 20nm chips for Apple, and won back production of the A9 on 14nm.

Samsung managing to ship an A5x 20nm design of their own last fall, and then a 14nm version for the S6, makes Qualcomm and TSMC's 20nm that much bigger of a relative failure.

Of course, there could be more to this as well. TSMC is suing Samsung over trade secrets pertaining to process tech. That, coupled with Samsung getting aggressive (specifically spending $$$ and resources) about development, is why Samsung has changed so radically in chip production in about 2-3 years.

20nm was a major "whoa" moment for fabs. If I recall, even Intel had issues, but their move to FinFET actually helped them (which is why other companies are doing the same with what is largely 20nm-based tech, just calling it 16/14nm).

So I would say Apple did have a hand in Qualcomm's 810 issues, but I don't think it had to do with going 64-bit (that was already in the cards, in my opinion, and Qualcomm was going to be more limited by Android than Apple was in that regard anyway); it was Apple getting aggressive in securing early production on the newest process nodes. And then the changes in the industry (like the success of Apple, Samsung, and other ARM design companies such as MediaTek) made the screwup look even worse. In many ways, Qualcomm's (and likewise TSMC's) own success is to blame for that. Just goes to show how quickly things can change, but we'll see how things go.