Apple A10 Fusion is **Quad-core big.LITTLE**

Nothingness

Platinum Member
Jul 3, 2013
LOL. Sign of power hungry cores (like A15), even with FinFET?

Nice job, Apple!
Yeah, they must be doomed. They should quickly get in touch with Intel, they'll provide them with a nice CPU for smartphones... Oh wait! :D

It's interesting that Apple finally went to a big.LITTLE-like configuration. OTOH, it took them much longer before they had to use it.
 

.vodka

Golden Member
Dec 5, 2014
That's unexpected for Apple. The big cores must be insane, performance- and power-wise, for them to go down this path.

The controller part is interesting; I wonder how their implementation works vs. the rest of the SoCs out there. They've already shown how high-performance ARM cores should be done; are they now going to teach the rest of the industry a lesson on how big.LITTLE should be done?
 

Andrei.

Senior member
Jan 26, 2015
.vodka said:
That's unexpected for Apple. The big cores must be insane, performance- and power-wise, for them to go down this path.

The controller part is interesting; I wonder how their implementation works vs. the rest of the SoCs out there. They've already shown how high-performance ARM cores should be done; are they now going to teach the rest of the industry a lesson on how big.LITTLE should be done?
So my bet: because they likely went with 16FF+ again this generation, reaching the higher clocks they needed meant a physical implementation with faster, higher-leakage transistors. That adversely affects power at low frequency, which opens up the need for low-leakage/low-power cores for low-performance/idle scenarios.

Because they mention a controller, i.e. a hardware governor, the kernel/OS should only see 2 cores, with the switching between pairs of big and little cores being transparent.
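To make that concrete, here's a purely speculative sketch of what a transparent hardware cluster switch might behave like. Apple has published nothing about the controller, so every name and threshold below is invented:

```c
/* Conceptual sketch only: Apple has published nothing about its
 * performance controller, so every name and threshold here is
 * invented. The idea being modeled: the OS sees two logical CPUs,
 * and hardware transparently picks which physical core (big or
 * little) backs each one, based on recent load, with hysteresis
 * so it doesn't ping-pong. A real design would also migrate
 * architectural state and cache contents between the paired cores. */
#include <stdio.h>

enum core_type { LITTLE, BIG };

struct logical_cpu {
    enum core_type active;  /* which physical core currently runs */
    unsigned load_pct;      /* recent utilization, 0..100 */
};

#define UP_THRESHOLD   70   /* switch to big above this load */
#define DOWN_THRESHOLD 30   /* drop back to little below this */

static void governor_tick(struct logical_cpu *cpu)
{
    if (cpu->active == LITTLE && cpu->load_pct > UP_THRESHOLD)
        cpu->active = BIG;
    else if (cpu->active == BIG && cpu->load_pct < DOWN_THRESHOLD)
        cpu->active = LITTLE;
}

int main(void)
{
    struct logical_cpu cpu = { LITTLE, 0 };
    const unsigned loads[] = { 10, 85, 90, 20, 5 };

    for (int i = 0; i < 5; i++) {
        cpu.load_pct = loads[i];
        governor_tick(&cpu);
        printf("load %3u%% -> %s core\n", loads[i],
               cpu.active == BIG ? "big" : "little");
    }
    return 0;
}
```

The key difference vs. HMP big.LITTLE on Android SoCs would be that the OS scheduler never sees all four cores at once.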
 

liahos1

Senior member
Aug 28, 2013
One question and one comment.

Comment: I saw Aicha Evans seated near the front, which all but confirms Intel is supplying modems for the new phone.

Question: Is there a possibility Apple's implementation of large/small cores is unique in such a way that they wouldn't have to license the IP from ARM?
 

ChronoReverse

Platinum Member
Mar 4, 2004
Isn't that what Nvidia did with their two-process trick in some of the Tegras? I wonder if they'll still have a way to flip it into the heterogeneous mode ARM has.
 
Mar 10, 2006
Nothingness said:
Yeah, they must be doomed. They should quickly get in touch with Intel, they'll provide them with a nice CPU for smartphones... Oh wait! :D

It's interesting that Apple finally went to a big.LITTLE-like configuration. OTOH, it took them much longer before they had to use it.

I don't know why some people here want to bash Apple's achievements. They do dang fine work, year in and year out.
 

Andrei.

Senior member
Jan 26, 2015
liahos1 said:
Question: Is there a possibility Apple's implementation of large/small cores is unique in such a way that they wouldn't have to license the IP from ARM?
I don't think there's anything to license to implement a big.LITTLE SoC. They must already have some sort of AMBA protocol running through the SoC, since they surely don't have fully custom IP for all the different blocks, and the coherency protocol is likely proprietary, much like Samsung's or Arteris'.
 

Eug

Lifer
Mar 11, 2000
Longer battery life in both the iPhone 7 (+2 hrs over 6s) and iPhone 7 Plus (+1 hr over 6s Plus). :) A result of big.LITTLE alone, or other factors (besides slightly bigger battery)?

No mention of H.265/HEVC video encoding. :( If it's there, Apple still isn't using it outside of FaceTime.
 

witeken

Diamond Member
Dec 25, 2013
I guess that good micro-architecture just can't compensate for the laws of physics.
Well, big.LITTLE was invented because ARM had created an architecture that just sucked. But even big.LITTLE didn't make the A15 a good core. Now, if you have a decent architecture, an unlimited R&D budget, and an unlimited transistor budget (because it's an in-house design), then sure, why not waste your time and money on making a second architecture that will result in negligible power savings?

BTW, for a phone this is more necessary than for a laptop, which has ~10x the battery capacity, so Intel won't do it, since Core is pretty good at idling.
 

Andrei.

Senior member
Jan 26, 2015
witeken said:
Well, big.LITTLE was invented because ARM had created an architecture that just sucked. But even big.LITTLE didn't make the A15 a good core. Now, if you have a decent architecture, an unlimited R&D budget, and an unlimited transistor budget (because it's an in-house design), then sure, why not waste your time and money on making a second architecture that will result in negligible power savings?
It has nothing to do with microarchitecture and everything to do with physical implementation. Even in something like the Snapdragon 820, with two clusters of identical microarchitecture, the power difference (at the same frequency) between a low-power and a high-frequency implementation can be very noticeable, so a big.LITTLE scheme is already very much validated there, not to mention when you further lower power via microarchitecture. If you go with a high-frequency implementation, you will suffer a power disadvantage in low-performance scenarios; that's just how it is.
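For a first-order picture of why (this is just the standard CMOS power model, nothing Apple-specific):

$$P \approx \underbrace{\alpha C V^2 f}_{\text{dynamic}} + \underbrace{V\, I_{\text{leak}}}_{\text{static}}$$

A high-frequency implementation raises $I_{\text{leak}}$ across the whole block. At low DVFS states the dynamic term collapses along with $V$ and $f$, but the leakage term doesn't, so the fast implementation is penalized hardest in exactly the low-performance/idle scenarios the little cores are meant to cover.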

Edit: Also, I disagree with the notion that the A15 sucked. The first SoCs that implemented it sucked. The Exynos 5430 and Kirin 920 both have fantastic A15 implementations with great efficiency.
 

Eug

Lifer
Mar 11, 2000
witeken said:
Well, big.LITTLE was invented because ARM had created an architecture that just sucked. But even big.LITTLE didn't make the A15 a good core. Now, if you have a decent architecture, an unlimited R&D budget, and an unlimited transistor budget (because it's an in-house design), then sure, why not waste your time and money on making a second architecture that will result in negligible power savings?

BTW, for a phone this is more necessary than for a laptop, which has ~10x the battery capacity, so Intel won't do it, since Core is pretty good at idling.
Not sure what you're getting at. Intel chose to bail completely out of the phone market. It doesn't even compete in this segment.
 

Eug

Lifer
Mar 11, 2000
Andrei. said:
So my bet: because they likely went with 16FF+ again this generation, reaching the higher clocks they needed meant a physical implementation with faster, higher-leakage transistors. That adversely affects power at low frequency, which opens up the need for low-leakage/low-power cores for low-performance/idle scenarios.

Because they mention a controller, i.e. a hardware governor, the kernel/OS should only see 2 cores, with the switching between pairs of big and little cores being transparent.
What happens, then, when Apple goes 10 nm with the A11? I wonder if they will choose big.LITTLE again.
 

Eug

Lifer
Mar 11, 2000
Why not? It's not like 10nm brings a big xtor performance/power enhancement.
I was just asking what he thought, since it seemed he was suggesting they needed big.LITTLE in the absence of an available process shrink. Or did I misinterpret his post?
 

witeken

Diamond Member
Dec 25, 2013
Now on to some more important questions about the SoC instead of that big.LITTLE debate.

What architecture are the little cores, and what architectural improvements do the big cores have?

Yeah, Intel mobile processors were beyond bad.
Where is the dislike button?
 

witeken

Diamond Member
Dec 25, 2013
Eug said:
I was just asking what he thought, since it seemed he was suggesting they needed big.LITTLE in the absence of an available process shrink. Or did I misinterpret his post?
No, what he means is that Apple is done with the major architecture improvements and now has to resort to higher clock speeds. They do this with faster transistors (for instance, more fins per transistor, I would suppose). So the CPU has (relatively) poor power at lower performance (leakage, etc.), so they decided to add 2 more cores to the SoC that are made with power-optimized transistors (and maybe a power-optimized architecture).
 

Eug

Lifer
Mar 11, 2000
witeken said:
No, what he means is that Apple is done with the major architecture improvements and now has to resort to higher clock speeds. They do this with faster transistors (for instance, more fins per transistor, I would suppose). So the CPU has (relatively) poor power at lower performance (leakage, etc.), so they decided to add 2 more cores to the SoC that are made with power-optimized transistors (and maybe a power-optimized architecture).
OK. So I did misinterpret his post. I got sidetracked by his comment about the process used.

BTW, I wonder if Ming-Chi Kuo's 2.4 GHz clock speed for the A10 is accurate. He seemed to be right on with most of his other predictions about this release. If so, that represents about a 30% improvement in clock speed over the A9. If Apple's +40% performance claim is accurate, it implies a small IPC improvement as well.
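For reference, assuming the A9's 1.85 GHz shipping clock in the iPhone 6s: 2.40 / 1.85 ≈ 1.30 from clock alone, and 1.40 / 1.30 ≈ 1.08, so only around 8% would need to come from IPC.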

That 40% is better than what I was expecting.

[Image: Apple A10 Fusion performance slide]

[Image: A9/A9X clock speeds chart]

[Image: Apple A9 graphs]
 

Andrei.

Senior member
Jan 26, 2015
witeken said:
They do this with faster transistors (for instance, more fins per transistor, I would suppose).
An example of vendors changing the actual physical implementation is tweaking path lengths toward the optimal frequency target at the lowest power throughout the block. For example, a power optimization: the frequency limit is determined by the critical path, so that path has to use high-leakage transistors to reach the target speed, but all the other, shorter paths don't require these high-power transistors, and it would be a waste to use them there. What vendors do is lengthen these non-critical paths by using slower transistors, lowering overall leakage with no impact on performance. The same applies in reverse when you want to go higher in frequency: you need higher-leakage transistors, but then your leakage increases, which has a particularly bad effect at low DVFS states, where it's just wasted. The actual change in the transistors is more vague and secretive, but it's not about more fins; it's more about the libraries used.
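As a toy illustration of that multi-Vt trick (all delay and leakage numbers below are invented; this is nowhere near a real EDA flow):

```c
/* Toy multi-Vt cell assignment: paths with timing slack get slow,
 * low-leakage (high-Vt) cells; only the critical path keeps fast,
 * leaky (low-Vt) cells. All delay/leakage figures are made up. */
#include <stdio.h>

struct path {
    double delay_lvt_ns;  /* delay with fast, leaky low-Vt cells        */
    double leak_lvt_uw;   /* leakage with low-Vt cells                  */
    double delay_hvt_ns;  /* delay with slow, low-leakage high-Vt cells */
    double leak_hvt_uw;   /* leakage with high-Vt cells                 */
};

int main(void)
{
    const double cycle_ns = 0.50;      /* 2 GHz frequency target */
    const struct path paths[] = {
        { 0.49, 40.0, 0.70,  8.0 },    /* critical path: must stay low-Vt */
        { 0.30, 35.0, 0.45,  7.0 },    /* slack: can take high-Vt cells   */
        { 0.25, 30.0, 0.40,  6.0 },    /* slack: can take high-Vt cells   */
    };
    double leak_mixed = 0.0, leak_all_lvt = 0.0;

    for (int i = 0; i < 3; i++) {
        leak_all_lvt += paths[i].leak_lvt_uw;
        /* Swap in slow cells wherever they still meet the cycle time. */
        if (paths[i].delay_hvt_ns <= cycle_ns)
            leak_mixed += paths[i].leak_hvt_uw;
        else
            leak_mixed += paths[i].leak_lvt_uw;
    }
    printf("all low-Vt: %.0f uW, mixed-Vt: %.0f uW\n",
           leak_all_lvt, leak_mixed);  /* 105 uW vs 53 uW */
    return 0;
}
```

Raise the frequency target and more paths become critical, forcing more leaky cells in, which is exactly the low-DVFS penalty described above.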

The Snapdragon 810 is an example of an implementation with very high-leakage transistors used to reach its target frequencies; it even reached lower voltages above 1.3 GHz than the FinFET Exynos 7420 at the same frequencies. However, its leakage was absurd, so power was horrible.

I'd put my money on the 2.4 GHz rumours being true.
 

Eug

Lifer
Mar 11, 2000
I wonder how large this chip is. Here's hoping we get another Chipworks analysis in a few weeks.
 

krumme

Diamond Member
Oct 9, 2009
Eug said:
I wonder how large this chip is. Here's hoping we get another Chipworks analysis in a few weeks.
Yep. It's far faster than I expected. And it looks like a very expensive solution.

What is all that CPU power good for?

Are they preparing to use it in laptops, with this high-frequency stuff? A quad-core A10 at 2.4 GHz is sure damn fast.

To me there looks to be a huge difference in cost between, say, a 2-wide A73 plus a tiny in-order A53/A35 versus this monster. Quite different routes: ARM is clearly going the lean way, while this looks more like Core-level performance.
 

witeken

Diamond Member
Dec 25, 2013
Andrei. said:
Example of vendors changing the actual physical implementation is by tweaking the channel lengths to the most optimal frequency target at lowest power throughout the block.
You can't change channel length with FinFETs. FinFETs come in only one flavor AFAIK (whereas with planar transistors designers can change certain aspects of the transistor's feature sizes). That's one of the ways in which Intel improved density at 14nm: higher performance per fin = fewer fins per transistor.

[Image: Intel 14nm FinFET slide]

Andrei. said:
The actual change in the transistors is more vague and secretive, but it's not about more fins; it's more about the libraries used.
I'm not sure there's a change in the transistor itself. Libraries are higher-level, right?

http://download.intel.com/newsroom/kits/22nm/pdfs/22nm-details_presentation.pdf
http://www.intel.com/content/dam/ww...ilicon-technology-leadership-presentation.pdf