Apple A14 - 5 nm, 11.8 billion transistors

Page 8 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

Doug S

Platinum Member
Feb 8, 2020
2,201
3,405
136
It's clear that:
  • Apple uses a 4-year cycle for each new ground-up uarch.
  • One major uarch change comes after 2 years (A9 was a massive IPC uplift).
  • The last step of the cycle is weak because the uarch's IPC potential is depleted, with a big frequency jump as compensation.


No, it is clear they used a 4 year cycle from A7 to A10. That doesn't imply that A11-A14 does the same. C'mon, does anyone really claim to extrapolate patterns based on a sample size of 1???

You could just as well have predicted A8 would be a 128 bit core, since they had gone from 32 to 64 bits after only one year of designing custom cores, so A8 would be 128 bit, A9 256 bit, and so forth :cool:
 

Eug

Lifer
Mar 11, 2000
23,583
996
126
No, it is clear they used a 4 year cycle from A7 to A10. That doesn't imply that A11-A14 does the same. C'mon, does anyone really claim to extrapolate patterns based on a sample size of 1???

You could just as well have predicted A8 would be a 128 bit core, since they had gone from 32 to 64 bits after only one year of designing custom cores, so A8 would be 128 bit, A9 256 bit, and so forth :cool:
128-bit or bust! ;)

We should not forget the large boosts in neural engine, machine learning, and compute speeds as well, and I hope for improved power efficiency on the CPU side too.

Anyhow, I too am not convinced that A15 will be a giant leap forward in design if the only "evidence" for this is one previous series of chips. Maybe we've finally hit the ceiling. Apple has already picked all the low-hanging fruit, and we shouldn't expect 20% IPC boosts going forward.

To be honest, at least for the iPad and iPhone, IMO the CPU speeds have outstripped needs for a while now. A10 is still decent, and A12 is outright fast, with A13 even faster. I know Apple is planning for the Macs and for the future of iDevices as well, but I can forgive Apple for de-emphasizing CPU performance boosts this time around. And it's not as if the speed boost this time is horrible. It's just that it's not mind-blowing this time.

Despite the negativity expressed by some, in real terms I think A14X (or whatever it's called) is going to be a great chip for 2021. And the A14 derived chips for Macs are going to be even faster.
 

name99

Senior member
Sep 11, 2010
404
303
136
No, it is clear they used a 4 year cycle from A7 to A10. That doesn't imply that A11-A14 does the same. C'mon, does anyone really claim to extrapolate patterns based on a sample size of 1???

You could just as well have predicted A8 would be a 128 bit core, since they had gone from 32 to 64 bits after only one year of designing custom cores, so A8 would be 128 bit, A9 256 bit, and so forth :cool:

"Clear" is a strong word. BUT
I think it's a reasonable hypothesis that they have a 4-year supercycle underlying their annual development cycle. Not only does that make sense on logical/management grounds, but we've seen two cycles that match the pattern.

Look, understanding the world is imperfect.
You can be a certain type of philosopher, so paralyzed by epistemological doubt about anything you aren't CERTAIN of that you do nothing for your entire life except engage in an irritating wankfest of "maybe/maybe not".
Or you can be a physicist, accept evidence that looks plausible and see where it leads, while always retaining in your mind the possibility that you could be wrong and will have to backtrack.
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
It should be noted though that A14 is just 4-core for the GPU, like A13 and A12, which means it’s got only half the number of GPU cores of A12Z. Yet, the compute scores of A14 and A12Z are about the same.
Jeez, you are right. A14 has almost doubled GPU performance. That's the answer to people who doubted Apple could replace Radeons. Not only replace; Apple can outperform Radeons. If Apple uses HBM2 memory, they can run a huge GPU without being bandwidth-bottlenecked like Renoir. Fujitsu's CPU already uses HBM memory, and ARM's Neoverse V1 will use it next year in the SiPearl CPU. So HBM memory is at least possible in theory.
 

Eug

Lifer
Mar 11, 2000
23,583
996
126
Jeez, you are right. A14 has almost doubled GPU performance. That's the answer to people who doubted Apple could replace Radeons. Not only replace; Apple can outperform Radeons. If Apple uses HBM2 memory, they can run a huge GPU without being bandwidth-bottlenecked like Renoir. Fujitsu's CPU already uses HBM memory, and ARM's Neoverse V1 will use it next year in the SiPearl CPU. So HBM memory is at least possible in theory.
Yes and no.

Apple has doubled Metal performance in Geekbench 5 vs A12, but Apple only claims a 30% increase in GPU speed vs. A12.

This reminds me of A10X vs A12. A10X is much faster than A12 for Metal performance in Geekbench 5, but this doesn't fully translate to gaming performance.
 
  • Like
Reactions: Mopetar

Richie Rich

Senior member
Jul 28, 2019
470
229
76
No, it is clear they used a 4 year cycle from A7 to A10. That doesn't imply that A11-A14 does the same. C'mon, does anyone really claim to extrapolate patterns based on a sample size of 1???

You could just as well have predicted A8 would be a 128 bit core, since they had gone from 32 to 64 bits after only one year of designing custom cores, so A8 would be 128 bit, A9 256 bit, and so forth :cool:
No offense, but you have no clue what you're talking about here. Jim Keller said that a CPU uarch needs a ground-up redesign every 4 years, and if possible every 2 years. Where did he get that? Wasn't he working on Apple's CPU design team? Yes, he was.

Another confirmation that A15 will be a BIG uarch change is Nuvia Phoenix's performance. The lowest end of the range for Phoenix at 4.5W is 1900 pts at 3 GHz, which is +20% above A14. That's a huge jump after only a 5% uplift for A14. And remember that A11 Monsoon, the first 6xALU core, was a +19% IPC uplift in GB5. G. Williams designed A15 before he left in 2018 (the A15 design started in 2017, or maybe together with A14 in 2016). Nuvia Phoenix is probably very similar to A15, his last child at Apple.
 
  • Like
Reactions: name99

Etain05

Junior Member
Oct 6, 2018
11
22
81
I find it a bit curious that they lowballed their GPU performance figures so much for the A14. I mean, Apple has always been very conservative with its figures, always publicising the smallest increase for their SoCs, not the largest (whether that be single-core or multi-core), and I seem to remember that even last year Andrei found they had underpromised regarding the A13 GPU and overdelivered, but this is really a massive difference.

They promised a 30% increase compared to the A12, but instead we’re getting a ~72% increase according to the Metal compute scores compared to the A13 (from ~7300 to ~12500) and a massive ~135% increase compared to the A12 (from ~5300 to ~12500). That’s a gigantic discrepancy, between +30% and +135%.

Is compute performance really that different from gaming performance? Can that explain it?

Looking at Andrei’s review of the A13, it seems peak gaming performance increased by about 20-25% compared to the A12, while Geekbench compute performance increased by about 38% so there is some discrepancy between compute and gaming, but nowhere near enough to explain the difference between figures.

If the GPU performance increase of the A14 over the A13 is suddenly 50% or more, instead of the +8.3% Andrei assumed in his article based on Apple's projections, then this immediately becomes a fantastic GPU update, and it could explain where some of those new transistors went... I never thought it possible for the NPU alone to cause that big of an increase in transistor count.
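The uplift arithmetic in the post is easy to sanity-check. A minimal Python sketch, using the approximate Metal compute scores quoted above (~5300 / ~7300 / ~12500 are rounded, community-reported figures, not official numbers):

```python
# Sanity-check of the uplift percentages quoted in the post above.
# Scores are approximate GB5 Metal compute results from the thread
# (rounded, community-reported, not official Apple figures).
scores = {"A12": 5300, "A13": 7300, "A14": 12500}

def uplift(old: float, new: float) -> float:
    """Percentage increase going from `old` to `new`."""
    return (new / old - 1) * 100

print(f"A12 -> A13: {uplift(scores['A12'], scores['A13']):+.0f}%")  # +38%
print(f"A13 -> A14: {uplift(scores['A13'], scores['A14']):+.0f}%")  # +71%
print(f"A12 -> A14: {uplift(scores['A12'], scores['A14']):+.0f}%")  # +136%
```

With these rounded inputs the increases come out at +71% and +136%, close to the ~72%/~135% figures in the post.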
 

Eug

Lifer
Mar 11, 2000
23,583
996
126
Plus there are other factors that often get overlooked, some of them quasi-GPU related. For example, hardware video encoding on A10X appears to be faster than on A12, and A12X is faster still, though it depends on the specific test: A12 is sometimes as fast as A10X, but A10X is often faster; A10X is sometimes as fast as A12X, but A12X is usually faster. Will hardware video encoding speed on A14 match A12X?

Then there are other features like USB support, but I don't know if this comes down to the SoC or to other chips. A12 devices like the iPhone XS and iPad Air 3 are USB 2, but the A10X iPad Pros have at least partial support for USB 3. A12X is fully USB 3. The A14 iPad Air 4 has to be USB 3, but I don't know if the A14 iPhone 12 series will be partially or fully USB 3. My guess is only partially, since the iPhone 12 series will be Lightning devices.
 

defferoo

Member
Sep 28, 2015
47
45
91
No offense, but you have no clue what you're talking about here. Jim Keller said that a CPU uarch needs a ground-up redesign every 4 years, and if possible every 2 years. Where did he get that? Wasn't he working on Apple's CPU design team? Yes, he was.

Another confirmation that A15 will be a BIG uarch change is Nuvia Phoenix's performance. The lowest end of the range for Phoenix at 4.5W is 1900 pts at 3 GHz, which is +20% above A14. That's a huge jump after only a 5% uplift for A14. And remember that A11 Monsoon, the first 6xALU core, was a +19% IPC uplift in GB5. G. Williams designed A15 before he left in 2018 (the A15 design started in 2017, or maybe together with A14 in 2016). Nuvia Phoenix is probably very similar to A15, his last child at Apple.
Everything you say in this post is at best speculation. There are very few facts here, just conjecture.
 
  • Like
Reactions: Avalon and Eug

smalM

Member
Sep 9, 2019
54
54
91
Several of the compute subtests are memory-bandwidth starved.
The A13's performance uplift over the A12 in those subtests is very inconsistent, ranging from scarcely anything to more than double.
The A14, in contrast, doubles the performance of nearly every subtest compared to the A12 (with the exception of Gaussian Blur).
So the A14 has the biggest uplift exactly in those subtests which had the lowest uplift in the A13. I think most of the uplift comes from better bandwidth (LPDDR5) and/or a bigger SLC.

Update: GB5 links
A12 : A13
A12 : A14
A13 : A14
 
Last edited:

NTMBK

Lifer
Nov 14, 2011
10,208
4,940
136
Jeez, you are right. A14 has almost doubled GPU performance. That's the answer to people who doubted Apple could replace Radeons. Not only replace; Apple can outperform Radeons. If Apple uses HBM2 memory, they can run a huge GPU without being bandwidth-bottlenecked like Renoir. Fujitsu's CPU already uses HBM memory, and ARM's Neoverse V1 will use it next year in the SiPearl CPU. So HBM memory is at least possible in theory.

Yeah, it all comes down to cost. I suspect Apple might be more likely to use GDDR6 main memory, like the next-gen consoles, before they go to HBM2; 16/32GB of HBM2 for a MacBook would be crazy expensive.
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
Everything you say in this post is at best speculation. There are very few facts here, just conjecture.
Are you kidding?
  • IPC is measured in GB5 from A7 up to A13, and now also A14.
  • The IPC of Nuvia Phoenix is known if we take the lowest boundary of the blue blob as the minimum.
  • Chief Apple architect Gerard Williams III is CEO of his own company, Nuvia, and he took a lot of people with him; also a known fact.
  • The uarch of Apple cores is also known indirectly (either measured or from the SW optimization guide).
However, I admit that to somebody who lacks the knowledge mentioned above it might look like conjecture :D
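For what it's worth, the "IPC" being argued over here is just benchmark score per clock. A sketch using the thread's claimed figures (the ~1600-point GB5 single-core score for A14 and the 1900-point Phoenix lower bound at an assumed 3.0 GHz are the posters' numbers, not measurements):

```python
# Score-per-clock comparison as used in the posts above. Inputs are
# the thread's claimed figures (hypothetical, not measurements).
def score_per_ghz(score: float, ghz: float) -> float:
    """A rough IPC proxy: benchmark points per GHz of clock."""
    return score / ghz

a14 = score_per_ghz(1600, 3.0)      # ~1600 GB5 SC at ~3.0 GHz (claimed)
phoenix = score_per_ghz(1900, 3.0)  # NUVIA projection, lower bound (claimed)
print(f"Phoenix vs A14, per clock: {(phoenix / a14 - 1) * 100:+.0f}%")  # +19%
```

Taking the projection at face value gives roughly the +20% per-clock gap claimed above; whether the projection holds is another matter.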
 
  • Like
Reactions: name99

Doug S

Platinum Member
Feb 8, 2020
2,201
3,405
136
Several of the compute subtests are memory-bandwidth starved.
The A13's performance uplift over the A12 in those subtests is very inconsistent, ranging from scarcely anything to more than double.
The A14, in contrast, doubles the performance of nearly every subtest compared to the A12 (with the exception of Gaussian Blur).
So the A14 has the biggest uplift exactly in those subtests which had the lowest uplift in the A13. I think most of the uplift comes from better bandwidth (LPDDR5) and/or a bigger SLC.

Update: GB5 links
A12 : A13
A12 : A14
A13 : A14


When doing these comparisons be careful about comparing iPhone to iPad. Apple uses double the memory width on iPad, so the results you are seeing here on A14 for GPU/Metal testing are not what you'll see on the iPhone 12.

I guess we can't say for sure that's the case when using a non-'X' version of the SoC (the SoC would have to have a double-wide memory controller), but since they are using a non-'X' version in an iPad, and the results are so much better than what Apple claimed for the A12 -> A14 uplift, this seems the most likely explanation.
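The bus-width point is easy to quantify with a back-of-the-envelope sketch. The LPDDR4X-4266/LPDDR5-5500 data rates below are typical parts; whether these exact speeds ship in the iPhone 12 or iPad Air is an assumption:

```python
# Peak theoretical bandwidth for the memory configurations under
# discussion. Data rates are typical LPDDR4X/LPDDR5 parts; the exact
# speeds Apple ships are an assumption.
def peak_bandwidth_gbs(bus_width_bits: int, megatransfers: int) -> float:
    """Peak bandwidth in GB/s: (bus bytes) * (MT/s) / 1000."""
    return bus_width_bits / 8 * megatransfers / 1000

print(peak_bandwidth_gbs(64, 4266))   # 64-bit LPDDR4X-4266: ~34.1 GB/s
print(peak_bandwidth_gbs(64, 5500))   # 64-bit LPDDR5-5500:  44.0 GB/s
print(peak_bandwidth_gbs(128, 4266))  # 128-bit, iPad Pro style: ~68.3 GB/s
```

Doubling the bus width doubles peak bandwidth, which is why comparing GPU/Metal scores across iPhone and iPad (Pro) configurations can mislead.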
 

defferoo

Member
Sep 28, 2015
47
45
91
Are you kidding?
  • IPC is measured in GB5 from A7 up to A13, and now also A14.
  • The IPC of Nuvia Phoenix is known if we take the lowest boundary of the blue blob as the minimum.
  • Chief Apple architect Gerard Williams III is CEO of his own company, Nuvia, and he took a lot of people with him; also a known fact.
  • The uarch of Apple cores is also known indirectly (either measured or from the SW optimization guide).
However, I admit that to somebody who lacks the knowledge mentioned above it might look like conjecture :D
unless you have actual inside sources that know what’s going on, it’s all conjecture. it’s a nice “story” but let’s wait until we have actual numbers first.
 

Entropyq3

Junior Member
Jan 24, 2005
22
22
81
When doing these comparisons be careful about comparing iPhone to iPad. Apple uses double the memory width on iPad, so the results you are seeing here on A14 for GPU/Metal testing are not what you'll see on the iPhone 12.

I guess we can't say for sure that's the case when using a non-'X' version of the SoC (the SoC would have to have a double-wide memory controller), but since they are using a non-'X' version in an iPad, and the results are so much better than what Apple claimed for the A12 -> A14 uplift, this seems the most likely explanation.
Only the iPad Pros use a 128-bit wide memory interface. Unfortunately. There may however have been a shift to LPDDR5, which would help.
 

Eug

Lifer
Mar 11, 2000
23,583
996
126
Engadget: Apple on designing the A14 Bionic for the iPad Air and beyond

The A14's Neural Engine now packs 16 cores, compared to eight in last year's A13. Doubling the engine's core count was an interesting choice since many of the iOS features that relied on it already seemed to run well enough. Since that’s the case, why not instead devote more of those new transistors to further ramping up CPU and GPU performance, which most people may more immediately notice?

The answer is two-fold. For one, Apple continues to see huge potential in supercharging neural networks, not just for the sake of its own software experiences, but for the ones app developers might be able to achieve with the right components in place. The popular image editing app Pixelmator Pro, for instance, leans on the Neural Engine for a feature that makes low-resolution images look surprisingly crisp and clean. Meanwhile, on the other end of the creative spectrum, Algoriddim’s djay Pro AI app uses the Neural Engine to more capably isolate vocals and instrument tracks in songs.


---

When the company says the A14's CPU is 30 percent more powerful than the current iPad Air's A12 chipset, for instance, it isn't going off results from popular benchmarking tools you and I have access to. According to Boger, those figures are an amalgamation of "real-world application workloads." In other words, they're composite numbers derived from many different performance factors -- all to demonstrate what it’s like to actually use this thing.

"We understand that single-thread performance for a lot of applications is really important," Millet added. "So we make sure that when we're talking about things like that, we're representing the single-thread performance well. We also represent that more forward-looking developers are actually taking advantage of the extra cores that are coming in."
 

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
Thinking of a Brad Sams comment from Thurrott dot com.

Same 3 apps used every day, same processes no matter which phone you buy today or whether you have a recent iPhone. So why upgrade? Because you are getting more distance with the same process and the same physics; you are going farther.

The A14 is amazing. Not $700-to-$1,400 amazing (depending on phone size, storage size, and camera) unless you have an older phone and need to upgrade. Yet it's amazing in the sense that you are still getting dramatic performance increases on already-fast phones, surpassing desktops in some tasks. Apple is still advancing, in contrast to how Intel feels like it is barely gaining in the wrestling match of extracting performance uplift from physics, engineering, and the plain hard technical problems of making silicon faster year over year.
 

Eug

Lifer
Mar 11, 2000
23,583
996
126
Thinking of a Brad Sams comment from Thurrott dot com.

Same 3 apps used every day, same processes no matter which phone you buy today or whether you have a recent iPhone. So why upgrade? Because you are getting more distance with the same process and the same physics; you are going farther.

The A14 is amazing. Not $700-to-$1,400 amazing (depending on phone size, storage size, and camera) unless you have an older phone and need to upgrade. Yet it's amazing in the sense that you are still getting dramatic performance increases on already-fast phones, surpassing desktops in some tasks. Apple is still advancing, in contrast to how Intel feels like it is barely gaining in the wrestling match of extracting performance uplift from physics, engineering, and the plain hard technical problems of making silicon faster year over year.
I'm upgrading mainly because of the camera. For the SoC I'd be happy with A12... that is if it could handle the camera, but I suspect A14 has purpose built upgrades specifically for that camera.

EDIT:

Oh and I shouldn't forget: 5G and 6 GB RAM, along with 128 GB entry level.


There is an iPhone 12 score too: https://browser.geekbench.com/v5/cpu/4169508
Compute: https://browser.geekbench.com/v5/compute/1640361
6 GB RAM
MT Crypto looks much better now, MT Int & FP not so much.
That iPhone 12 multi-core score is terrible in comparison to the iPad Air 4 MT scores. What gives? Surely it's not running long enough to throttle.
 
Last edited:

awesomedeluxe

Member
Feb 12, 2020
69
23
41
I wonder if anyone here took an interest in this article from the other day? Based on what we know about the performance of Apple's GPU cores, it seems like not much has changed with them - it looks like they just split the difference on power/performance benefits from N5 and made few (if any) other modifications to the core arch. In the larger context of Apple using the A14 as a platform for multiple new APU configurations, it certainly seems plausible that more engineers were focused on macro-level APU design. This may be where they put their energy. Apple has a licensing deal with Imagination (inked less than a year before this announcement), and Imagination's multi-gpu design seems like an interesting solution for future MacBooks.

That iPhone 12 multi-core score is terrible in comparison to the iPad Air 4 MT scores. What gives? Surely it's not running long enough to throttle.
If I were taking those benchmarks as legitimate, differences in the memory configuration would be my first guess. Memory can have a big impact on multicore scores and I don't think we have any information on what kinds of memory our new iDevices use.
 

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
I'm still using my 6S so it's a pretty big upgrade no matter how you look at it. I'm not doing any heavy lifting with my phone so it really doesn't matter, but I suppose that just means a longer battery life compared to now assuming the screen isn't using more power.

I am interested in how well these chips will do when they end up in laptops and desktops. It's pretty obvious they've got a massive leg up on the mobile competitors at the very least.
 

IvanKaramazov

Member
Jun 29, 2020
56
102
66
That iPhone 12 multi-core score is terrible in comparison to the iPad Air 4 MT scores. What gives?
The MacRumors article suggests multicore often runs slower when the phone is first set up, as it's dedicating processing to lots of background tasks. Presumably that would be it; otherwise the A14 is slower in multicore than the A13, which seems unlikely.

EDIT - The compute benchmark, on the other hand, I find far more realistic for the A14 than the one supposedly from the same chip in the iPad Air. At least it better fits the GPU improvements Apple itself suggested for the Air. I thought it was telling, and a bit disappointing, that they gave no numbers whatsoever for the improvement of the iPhone 12 GPU over the 11's, suggesting it is very minor.