Apple A12 & A12X *** Now A12Z as well *** Now in a Mac mini


blckgrffn

Diamond Member
May 1, 2003
9,111
3,029
136
www.teamjuchems.com
Shadow of the Tomb Raider running on that thing was mighty impressive.

Anand was pleased years ago to be offering Xbox One levels of GPU performance on an iPad.

I think the bigger question is whether traditional PC gaming matters at all on macOS. Most of the gamers I know who run a Mac as their primary machine have a gaming PC anyway OR *shudders* dual-boot into Windows to let their GPU use modern graphics APIs and optimized drivers. That seems less and less likely to remain an option.

The number of Apple PCs is surely dwarfed many times over by the phones and iPads, so it seems like a small target in general.
 

Ajay

Lifer
Jan 8, 2001
15,332
7,792
136
A12Z is used for Apple's new ARM macOS dev platform.

Logic Pro, Final Cut, Photoshop, and Office are already running on it with fat "Universal 2" binaries.

Rosetta does dynamic translation to ARM.
Hmm, wonder what the performance is for x86->ARM.
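For anyone unfamiliar with how this works, dynamic binary translation can be pictured as a loop that converts guest instructions to host instructions on first execution and then reuses the result from a translation cache. This is a deliberately toy Python model with made-up opcodes, not how Rosetta actually works:

```python
# Toy model of dynamic binary translation (illustrative only).
# Guest "x86" ops are translated to host "ARM" ops the first time a
# block runs; later executions reuse the cached translation.

GUEST_TO_HOST = {          # hypothetical 1:1 opcode mapping
    "mov": "MOV",
    "add": "ADD",
    "jmp": "B",
}

translation_cache = {}     # guest block address -> translated host ops

def translate_block(addr, guest_ops):
    """Translate a guest basic block, caching the result by address."""
    if addr not in translation_cache:
        translation_cache[addr] = [GUEST_TO_HOST[op] for op in guest_ops]
    return translation_cache[addr]

# First call translates; the second call is a pure cache hit.
block = ["mov", "add", "jmp"]
print(translate_block(0x1000, block))  # ['MOV', 'ADD', 'B']
print(0x1000 in translation_cache)     # True
```

Real translators work on machine code and do register mapping, flag emulation, and block chaining on top of this, which is where most of the overhead comes from.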
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Performance under binary translation can be summarized by a rule of thumb - 50-70% of native with 90-95% compatibility.

I wonder where you got that "rule of thumb" from, because the best binary translators achieve at most about 50%. More common implementations like QEMU are much slower. I actually know of only a single implementation that achieves up to 60% of native, and that is Microsoft's Windows-on-ARM implementation - and even that requires a high hit rate in the translation cache.
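The sensitivity to translation-cache hit rate can be made concrete with a simple time-weighted model. All the speed numbers below are assumptions for illustration (60% of native for cached translated code, 5% for untranslated/interpreted code), not measured figures:

```python
def effective_speed(hit_rate, cached_speed, miss_speed):
    """Time-weighted (harmonic-mean) model of translated performance.
    Speeds are fractions of native (1.0 = native). Time per unit of
    work is averaged, so misses dominate quickly."""
    miss_rate = 1.0 - hit_rate
    return 1.0 / (hit_rate / cached_speed + miss_rate / miss_speed)

# Assumed: cached code at 0.60 of native, uncached at 0.05 of native.
print(round(effective_speed(0.99, 0.60, 0.05), 3))  # roughly 0.54 of native
print(round(effective_speed(0.90, 0.60, 0.05), 3))  # under 0.29 of native
```

Even with a 99% hit rate the slow path eats about a tenth of the headline speed, and at 90% the effective speed collapses to under half of the cached-path speed, which is why a high hit rate is a hard requirement rather than an optimization.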
 
Last edited:

Panino Manino

Senior member
Jan 28, 2017
813
1,010
136
Shadow of the Tomb Raider running on that thing was mighty impressive.

One thing that I missed, what about the GPU?
Will Apple continue using AMD GPUs? And will it do so only for the "high end" offerings, while the low to mid range uses only its own GPU developed in-house?
 

Eug

Lifer
Mar 11, 2000
23,583
996
126
Performance under binary translation can be summarized by a rule of thumb - 50-70% of native with 90-95% compatibility.
That performance seems optimistic. However, the compatibility is probably the bigger issue for a lot of software.

Dunno if this is representative, but I lived through the last Rosetta translation period, with PowerPC --> x86 translation. For any major software running under Rosetta, it was like using very early betas. Lots of bugs in lots of software. I wonder if Rosetta 2 will be any better.
One thing that I missed, what about the GPU?
Will Apple continue using AMD GPUs? And will it do so only for the "high end" offerings, while the low to mid range uses only its own GPU developed in-house?
I would be surprised if the dev kit is using anything other than the A12Z GPU. For the low end, there isn't a necessity to use anything else. External GPUs would be useful for high end machines though.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
That performance seems optimistic. However, the compatibility is probably the bigger issue for a lot of software.

Dunno if this is representative, but I lived through the last Rosetta translation period, with PowerPC --> x86 translation. For any major software running under Rosetta, it was like using very early betas. Lots of bugs in lots of software. I wonder if Rosetta 2 will be any better.

I would be surprised if the dev kit is using anything other than the A12Z GPU.

Yea, I am likely optimistic.

We went through binary translation/emulation quite a few times.
-Alpha Windows NT
-Transmeta
-Itanium, both the hardware one and IA32EL
-Intel mobile Android
-Windows on ARM
-Many Apple transitions

Since it's trying to be something it's not, problems are bound to happen. Nothing fundamentally changes. I think you'll see 70% at least for press-release numbers.
 

Panino Manino

Senior member
Jan 28, 2017
813
1,010
136
I would be surprised if the dev kit is using anything other than the A12Z GPU. For the low end, there isn't a necessity to use anything else. External GPUs would be useful for high end machines though.

But will this high-end GPU still be AMD, or will Apple just scale its in-house GPU?
 

Arx Allemand

Member
Sep 24, 2019
57
24
81
A12X is two years old. Z doesn't really change much.
I have to wonder if there is going to be an A13X or will they just leapfrog 13 (lucky 13?) and go straight for A14X.

I have an iPad Pro 2018 and its performance is amazing. The only phone that can match its screen fluidity is my S20 Ultra set at 120Hz.

I'm going to put iPadOS 14 on it like right now, as I just got it on my 11 Pro Max and I like the direction they are going with this, particularly the camera interface. About time!

And back on topic, I cannot wait for an "Apple Silicon" 12" MacBook. That would truly rock!
 

Eug

Lifer
Mar 11, 2000
23,583
996
126
But will this high-end GPU still be AMD, or will Apple just scale its in-house GPU?
We will see, but I would envision a low-end Apple SoC (A14), a mid-range Apple SoC (A14X), a higher-end solution pairing an Apple SoC (faster A14X) with a third-party GPU, and then a super-high-end solution with some sort of Apple SoC plus up to quad-GPU support.


A12X is two years old. Z doesn't really change much.
I have to wonder if there is going to be an A13X or will they just leapfrog 13 (lucky 13?) and go straight for A14X.

I have an iPad Pro 2018 and its performance is amazing. The only phone that can match its screen fluidity is my S20 Ultra set at 120Hz.

I'm going to put iPadOS 14 on it like right now, as I just got it on my 11 Pro Max and I like the direction they are going with this, particularly the camera interface. About time!

And back on topic, I cannot wait for an "Apple Silicon" 12" MacBook. That would truly rock!
My prediction is the new iPad Pro will be released with A14X, and so will a MacBook Pro and iMac in 2020/2021. MacBooks will be A14 in 2021.
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
We will see, but I would envision a low-end Apple SoC (A14), a mid-range Apple SoC (A14X), a higher-end solution pairing an Apple SoC (faster A14X) with a third-party GPU, and then a super-high-end solution with some sort of Apple SoC plus up to quad-GPU support.
Or Apple could stay with two SoCs by using the same die for the iPhone and iPad (4 big cores, A14), while laptops/desktops use a bigger die with 8 big cores (A14X). The A13 Lightning core on N7P has a 2.61 mm² die size (link), so on the 5nm process the A14 Firestorm core will be under 2 mm². An additional 4 mm² for two more cores is like nothing, even considering L3 cache. IMHO the bigger area eaters will be the GPU and NPU.
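The area arithmetic above checks out on the back of an envelope. The 2.61 mm² figure is from the post; the ~1.8x logic density gain for N5 over N7 is an assumed round number, not an exact scaling factor:

```python
a13_core_mm2 = 2.61          # A13 Lightning core on N7P (from the post)
n5_density_gain = 1.8        # assumed N7 -> N5 logic density improvement

# Scale the core area to the newer process and price out two extra cores.
a14_core_mm2 = a13_core_mm2 / n5_density_gain
extra_two_cores_mm2 = 2 * a14_core_mm2

print(round(a14_core_mm2, 2))        # ~1.45 mm^2, under the 2 mm^2 claim
print(round(extra_two_cores_mm2, 2)) # ~2.9 mm^2 for two more big cores
```

Against a total die in the ~100 mm² class, a few mm² of extra CPU cores really is a rounding error; the GPU, NPU, and caches are what set the die size.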
 

Eug

Lifer
Mar 11, 2000
23,583
996
126
Or Apple could stay with two SoCs by using the same die for the iPhone and iPad (4 big cores, A14), while laptops/desktops use a bigger die with 8 big cores (A14X). The A13 Lightning core on N7P has a 2.61 mm² die size (link), so on the 5nm process the A14 Firestorm core will be under 2 mm². An additional 4 mm² for two more cores is like nothing, even considering L3 cache. IMHO the bigger area eaters will be the GPU and NPU.
IMO it doesn't really make sense to have A14X running in both the iPad Pro and the high end Macs... unless you're talking about multiple A14X chips in the high end machines.

They need A14X for iPad Pro, because the A14 will not have sufficient performance for what they're trying to do with the iPad Pro.

And it doesn't make sense to have A14X run on both the iPad Pro and high end Macs, since iPad Pro is fanless, and high end Macs sell an order of magnitude fewer units than the iPad Pro. If you have just one chip encompassing both, either you have a crippled high end Mac chip, or you have lots of wasted silicon for iPad Pros, unless you're saying the high end Macs would have multiple A14X chips.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
And it doesn't make sense to have A14X run on both the iPad Pro and high end Macs, since iPad Pro is fanless, and high end Macs sell an order of magnitude fewer units than the iPad Pro. If you have just one chip encompassing both, either you have a crippled high end Mac chip, or you have lots of wasted silicon for iPad Pros, unless you're saying the high end Macs would have multiple A14X chips.

Of course the MacBook Pro SoCs will have more cores, larger caches, larger GPU, higher frequency etc. in order to use the much higher power headroom.
 

Ajay

Lifer
Jan 8, 2001
15,332
7,792
136
Of course the MacBook Pro SoCs will have more cores, larger caches, larger GPU, higher frequency etc. in order to use the much higher power headroom.
I think @Glo. posted a chart showing that current Apple SoCs don't have much power headroom at all. Maybe, starting with the A14, things will be a bit different.
 

avAT

Junior Member
Feb 16, 2015
24
10
81
To be fair, two chips can already be considered a family. But I do understand your point and agree that it might be more than just two chips.

In the Platforms State of the Union it seemed pretty clear that the “family of Mac SoCs” are in addition to the A_ and A_X chips.

Sri referred to those as being designed specifically for the iPhone and iPad and talked about chips being designed specifically for the Mac.

Edit: “We’re building a family of SoCs designed specifically for the Mac. Just like we did with the iPhone, iPad, and Apple Watch, we’re making sure the chips we build are tailored to the unique needs of the Mac.”
 
Last edited:

beginner99

Diamond Member
Jun 2, 2009
5,208
1,580
136
And it doesn't make sense to have A14X run on both the iPad Pro and high end Macs, since iPad Pro is fanless, and high end Macs sell an order of magnitude fewer units than the iPad Pro. If you have just one chip encompassing both, either you have a crippled high end Mac chip, or you have lots of wasted silicon for iPad Pros, unless you're saying the high end Macs would have multiple A14X chips.

Everything you are saying actually argues in favor of using the same SoC. The lower Mac volume makes it hard to justify a separate SoC, which needs separate mask sets etc. Mask sets at 5nm cost mid double-digit millions.

You also don't need more cores. The fact that the iPad is fanless and Macs aren't will be the main difference in sustained performance (sustained clocks). With a fan the chip can run at higher clocks for much longer periods of time. I mean, the Intel chips that go into 15 W or 28 W laptops are exactly the same ones that go into a desktop i3; the only difference is binning. Binning is actually another reason to have only one SoC. The Mac market being much smaller is perfect: you can divert the best (but small) bin to Macs.
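The mask-set economics behind this argument can be sketched as per-unit amortization. All the numbers here are assumptions for illustration (a round $50M for a 5nm mask set, consistent with the "mid double-digit millions" figure above, and made-up volumes):

```python
def mask_cost_per_unit(mask_set_cost, units_shipped):
    """Amortized mask-set cost per chip sold over the product's life."""
    return mask_set_cost / units_shipped

MASK_SET_5NM = 50e6                       # assumed: ~$50M for a 5nm mask set

print(mask_cost_per_unit(MASK_SET_5NM, 200e6))  # iPhone-scale volume: $0.25/chip
print(mask_cost_per_unit(MASK_SET_5NM, 20e6))   # Mac-scale volume: $2.50/chip
```

A 10x volume difference means a 10x difference in amortized NRE per chip, which is exactly why reusing one die (and binning it) is attractive for the lower-volume Mac line.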
 

Doug S

Platinum Member
Feb 8, 2020
2,203
3,405
136
Notice it says "Family of Mac SoCs", which IMO implies more than just two chips.

While your interpretation could be correct I took that to mean future versions like A14Z (or whatever they call it) A15Z and so forth, not that they will have a family of different Mac SoCs in every A* generation. There might be differences in the same way the A12X and A12Z are "different" so they can re-use the same die with 8 big cores across a lot of stuff by disabling cores in lower end models. They might start binning so they'll have different frequencies to give higher end machines higher clock rates instead of having to select a lower bound frequency that almost every chip can meet as with the iPhone.

They might want a second die for the bigger machines, since they would need a fabric to connect multiple chips like AMD does (a monolithic huge die like Intel's just doesn't make sense to me, so I discount that possibility). They certainly could do a single die for all Macs that includes the fabric, which would remain inactive in single-chip Macs. It isn't like the fabric would impact die size much in today's processes (it is probably more costly in pad area than transistor area), so considering the ever-escalating cost of mask sets it might be cheaper to have unused fabric on all chips than to do two separate designs and pay for a second mask set.

On the other hand even though a second mask set costs more, if they want to squeeze out every bit of performance possible at the high end the Mac Pro / iMac Pro would get their own design separate from the lower end Macs. If they can add a few hundred MHz extra maybe they'll see grabbing an extra 10% of performance as worth it. It would also let them do stuff like giving it wider SVE2 units than the lower end Macs get so they really kick ass on specialized loads and Intel wouldn't be able to match them even with AVX-512 code. It really depends on Apple's goals, and if they've been tasked with the goal of simply and efficiently replacing x86 with ARM, or have been let loose with the additional goal of making Intel look bad while doing so.
 
  • Like
Reactions: deathBOB

Eug

Lifer
Mar 11, 2000
23,583
996
126
While your interpretation could be correct I took that to mean future versions like A14Z (or whatever they call it) A15Z and so forth, not that they will have a family of different Mac SoCs in every A* generation. There might be differences in the same way the A12X and A12Z are "different" so they can re-use the same die with 8 big cores across a lot of stuff by disabling cores in lower end models. They might start binning so they'll have different frequencies to give higher end machines higher clock rates instead of having to select a lower bound frequency that almost every chip can meet as with the iPhone.
I agree they will use older chips in new machines... but not in the MacBook Pros. The MacBook Pros will likely mostly have the latest generation chips, but some of them will not have all the cores of the high end models.

I also agree they may start releasing differently binned chips.

ie. I think it's going to be a combination of what both of us said.
 

Ajay

Lifer
Jan 8, 2001
15,332
7,792
136
While your interpretation could be correct I took that to mean future versions like A14Z (or whatever they call it) A15Z and so forth, not that they will have a family of different Mac SoCs in every A* generation. There might be differences in the same way the A12X and A12Z are "different" so they can re-use the same die with 8 big cores across a lot of stuff by disabling cores in lower end models. They might start binning so they'll have different frequencies to give higher end machines higher clock rates instead of having to select a lower bound frequency that almost every chip can meet as with the iPhone.

They might want a second die for the bigger machines since they would need a fabric to connect multiple chips like AMD (doing a monolithic huge die like Intel just doesn't make sense to me so I discount that as a possibility) though they certainly could do a single die for all Macs that includes the fabric which remains inactive in single chip Macs. It isn't like the fabric would impact die size much in today's processes (it is probably more costly in pad area than transistor area) so considering the ever escalating cost of mask sets it might be cheaper to have unused fabric on all chips rather than do two separate designs and require a second mask set.

On the other hand even though a second mask set costs more, if they want to squeeze out every bit of performance possible at the high end the Mac Pro / iMac Pro would get their own design separate from the lower end Macs. If they can add a few hundred MHz extra maybe they'll see grabbing an extra 10% of performance as worth it. It would also let them do stuff like giving it wider SVE2 units than the lower end Macs get so they really kick ass on specialized loads and Intel wouldn't be able to match them even with AVX-512 code. It really depends on Apple's goals, and if they've been tasked with the goal of simply and efficiently replacing x86 with ARM, or have been let loose with the additional goal of making Intel look bad while doing so.

I think Apple needs at least 3 SoCs.

1. iPhone A1? line. New every year.
2. iPad A1?Z/X. Every other year. Better GPU performance.
3. Laptop A1?+. Every other year. Higher TDP, higher frequency and/or more cores.

And maybe a chiplet solution for the 16" MacBook Pro and Mac Pro (but I'm not sure how much they care about pushing the performance envelope that far).

That's 3 mask sets, but not all are new every year. I think they will need to do this to get the right performance for each segment.
 

zinfamous

No Lifer
Jul 12, 2006
110,514
29,100
146
Yea, I am likely optimistic.

We went through binary translation/emulation quite a few times.
-Alpha Windows NT
-Transmeta
-Itanium, both the hardware one and IA32EL
-Intel mobile Android
-Windows on ARM
-Many Apple transitions

Since it's trying to be something it's not, problems are bound to happen. Nothing fundamentally changes. I think you'll see 70% at least for press-release numbers.

For someone trying to dispense advice to those set on going MacBook/iMac this year or in the very near future: what does this performance hit mean for first-gen hardware, about 5 years out? Does the translation get better over time - basically, is it a software improvement that will overcome the hit, or will the first-gen Ax processors be stuck with a comparable performance penalty for years to come?

I'm trying to consider the best course of action between picking up a last-gen x86 Apple, a first-gen ARM, or maybe a 2nd-gen ARM-based system. ...It sounds like the more common software will be OK, but I'm thinking about how niche uses - open-source software like ImageJ and various bioinformatics tools (usually Perl- or Python-based, sometimes C++) - are affected by these things. The latter tools tend to perform very well in multi-threaded environments, and I'm not sure whether that means they will just always be better on x86, or whether ARM is perfectly fine.
 

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
Does the translation get better over time - basically, is it a software improvement that will overcome the hit, or will the first-gen Ax processors be stuck with a comparable performance penalty for years to come?

The translation should be considered not much more than a crutch to get you along until the app gets converted.
 
  • Like
Reactions: Tlh97 and Ajay

Doug S

Platinum Member
Feb 8, 2020
2,203
3,405
136
The translation should be considered not much more than a crutch to get you along until the app gets converted.

Yes, there aren't really any 'legacy' macOS applications. They dropped support for PPC binaries a while back, and last year dropped support for 32-bit x86 binaries. So the only place x86 translation will matter in the long run is for an 'orphan' application that is no longer supported by the vendor, or where you have paid for an old application and don't care about it enough to pay for an upgrade to the latest native version.

Where it will really hurt is for Mac owners who want to run Windows stuff. While you can boot Windows/ARM or run it in a VM, you can't assume you will ever get native ARM binaries (unless Windows/ARM gains in popularity - and Apple's move to ARM may help some there), meaning you may have to rely on WOW64's slow JIT translation for a long time. Of course Apple is still selling x86 Macs and will support them for probably 5-7 years after they quit selling them, so those people need not be in any hurry to adopt ARM.
 

Ajay

Lifer
Jan 8, 2001
15,332
7,792
136
We went through binary translation/emulation quite a few times.
-Alpha Windows NT
Alpha translation worked very well because Digital cached the application's binary translation to the hard drive (and I was running 10K-RPM SCSI drives). If Rosetta 2 does this (Macs will have more storage space), then performance will be great. SSDs FTW!

Edit: Apple could also add some silicon-based functional unit that improves translation, since they design their own chips.
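The disk-caching trick described above - persisting translations so they survive restarts and only the first run pays the translation cost - can be sketched like this. The file layout, hashing scheme, and cache location are my own invention for illustration, not how DEC's FX!32 or Rosetta 2 actually store their caches:

```python
import hashlib
import json
import os
import tempfile

# Hypothetical on-disk cache location (illustrative only).
CACHE_DIR = os.path.join(tempfile.gettempdir(), "xlate_cache")

def cache_key(guest_code: bytes) -> str:
    """Key translations by content hash, so a changed binary re-translates."""
    return hashlib.sha256(guest_code).hexdigest()

def load_translation(guest_code: bytes):
    """Return a previously persisted translation, or None on a cold start."""
    path = os.path.join(CACHE_DIR, cache_key(guest_code) + ".json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return None

def store_translation(guest_code: bytes, host_ops: list):
    """Persist a translation so future runs skip the translation cost."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, cache_key(guest_code) + ".json")
    with open(path, "w") as f:
        json.dump(host_ops, f)

# First run: translate (here faked) and persist. Later runs: disk hit.
code = b"\x48\x89\xd8"                   # some guest machine-code bytes
if load_translation(code) is None:
    store_translation(code, ["MOV X0, X1"])
print(load_translation(code))            # ['MOV X0, X1']
```

Content-hashing the guest code is the key design choice: it makes the cache self-invalidating when an app updates, without tracking file paths or timestamps.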