Solved! ARM Apple High-End CPU - Intel replacement

Page 59 - AnandTech Forums

Richie Rich

Senior member
Jul 28, 2019
470
229
76
There is a first rumor about an Intel replacement in Apple products:
  • ARM based high-end CPU
  • 8 cores, no SMT
  • IPC +30% over Cortex A77
  • desktop performance (Core i7/Ryzen R7) with much lower power consumption
  • introduction with new gen MacBook Air in mid 2020 (considering also MacBook PRO and iMac)
  • massive AI accelerator

Source Coreteks:
 
  • Like
Reactions: vspalanki
Solution
What an understatement :D And it looks like it doesn't want to die. Yet.


Yes, A13 is competitive against Intel chips but the emulation tax is about 2x. So given that A13 ~= Intel, for emulated x86 programs you'd get half the speed of an equivalent x86 machine. This is one of the reasons they haven't yet switched.
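The arithmetic behind that claim is trivial but worth making explicit; a toy sketch, where both the 2x tax and the "A13 ~= Intel" normalization are the post's assumptions, not measurements:

```python
# Toy model of the emulation tax described above.
# Both inputs are the post's assumptions, not measured values.

def effective_speed(native_score: float, emulation_tax: float) -> float:
    """Relative speed an ARM chip achieves running x86 code under emulation."""
    return native_score / emulation_tax

a13_native = 1.0  # assume A13 ~= an equivalent Intel chip (normalized to 1.0)
tax = 2.0         # ~2x slowdown for emulated x86 code

print(effective_speed(a13_native, tax))  # 0.5, i.e. half the speed of real x86
```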

Another reason is that it would prevent the use of Windows on their machines, something some say is very important.

The level of ignorance in this thread would be shocking if it weren't depressing.
Let's state some basics:

(a) History. Apple has never let backward compatibility limit what they do. They are not Intel, they are not Windows. They don't sell perpetual compatibility as a feature. Christ, the big...

Doug S

Platinum Member
Feb 8, 2020
2,269
3,521
136
On second thought, Apple's 'tiler' GPU design will end up being a bigger issue for game performance than the translation layer ...

Why? It doesn't handicap games on the phone/tablet, so why should it handicap them on the Mac? It isn't as if hardcore gamers are buying Macs to play games anyway; that's never been a big part of Mac sales.

Heck, the Mac should be getting a lot more games than it has ever had before once it goes ARM and Apple GPU, since iOS game devs will have easy access to a Mac market they didn't previously enjoy.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
Why? It doesn't handicap games on the phone/tablet, so why should it handicap them on the Mac? It isn't as if hardcore gamers are buying Macs to play games anyway; that's never been a big part of Mac sales.

When was the last time a desktop GPU had a real tiler design? (Don't use Nvidia as an example, since their marketing material is misleading.)

Heck, the Mac should be getting a lot more games than it has ever had before once it goes ARM and Apple GPU, since iOS game devs will have easy access to a Mac market they didn't previously enjoy.

They'll never be able to compete in high-end graphics with this mindset, since they're content with iOS-quality games ...
 

soresu

Platinum Member
Dec 19, 2014
2,667
1,865
136
When was the last time a desktop GPU had a real tiler design? (Don't use Nvidia as an example, since their marketing material is misleading.)
Probably the exact same one they are planning to use now while calling it their own.

ie PowerVR.
They'll never be able to compete in high-end graphics with this mindset, since they're content with iOS-quality games ...
It won't even matter.

Apple products have never really been on the forefront of games, with only a handful from any given generation making it to Mac OS at all.

I can remember when there was a rush to port older games to iOS and Android; it dried up quite some time ago. That was likely due to the stingy attitudes of smartphone manufacturers (both Apple and Android vendors) toward increasing flash capacities, as much as to the difficulty of achieving an acceptably enjoyable/usable control scheme for PC and console games on a smartphone or tablet.
 

Doug S

Platinum Member
Feb 8, 2020
2,269
3,521
136
When was the last time a desktop GPU had a real tiler design? (Don't use Nvidia as an example, since their marketing material is misleading.)

They'll never be able to compete in high-end graphics with this mindset, since they're content with iOS-quality games ...

Well I'm glad we have such an expert here who can tell us tiling will be impossible to make perform well, without even waiting for Apple to try. That will save us all the trouble of looking at actual benchmarks when they appear.

If tiling is such a handicap, why do iPhones have better-performing GPUs than Android phones?
 
  • Like
Reactions: Etain05 and name99

awesomedeluxe

Member
Feb 12, 2020
69
23
41
How many machines do you think Apple will print N5 chips for before moving to N5P? iPhone will wipe out a lot of N5 capacity Q4 this year, but iirc Apple claims they have two more machines on the way. Those must be N5, right?
 

soresu

Platinum Member
Dec 19, 2014
2,667
1,865
136
How many machines do you think Apple will print N5 chips for before moving to N5P? iPhone will wipe out a lot of N5 capacity Q4 this year, but iirc Apple claims they have two more machines on the way. Those must be N5, right?
They will probably design and contract fabbing for separate chips only when there is sufficient expected market interest.

The fact that a higher end iPad got its own AxxX line of chips implies that it gets high enough sales figures to warrant a separate chip.

I guess the question is what kind of sales figures Apple augurs for such new ARM-based SKUs, and how many separate designs there will be.

For that matter, will they be only fully integrated SoC designs with zero upgrade path, or will some models have PCIe slots or some proprietary Apple alternative for their homegrown GPUs (proprietary being likely, to prevent their use in PCs)?
 

awesomedeluxe

Member
Feb 12, 2020
69
23
41
For that matter, will they be only fully integrated SoC designs with zero upgrade path, or will some models have PCIe slots or some proprietary Apple alternative for their homegrown GPUs (proprietary being likely, to prevent their use in PCs)?
This might sound a little wild but I was thinking Apple might make their own GPU and put it on the same package as the A14Z.

I think Apple wants to reuse the same SoC across many devices, maybe disabling a core or two on the way down. But it was also pointed out to me that Apple really emphasized the benefits of unified memory in their presentation. You can have your cake and eat it too if you stick the GPU on the same package and have it share X GB of HBM2E, like a more ambitious Kaby G. Plus, no worries about anyone putting their GPU in anything else ever!
 

soresu

Platinum Member
Dec 19, 2014
2,667
1,865
136
This might sound a little wild but I was thinking Apple might make their own GPU and put it on the same package as the A14Z.

I think Apple wants to reuse the same SoC across many devices, maybe disabling a core or two on the way down. But it was also pointed out to me that Apple really emphasized the benefits of unified memory in their presentation. You can have your cake and eat it too if you stick the GPU on the same package and have it share X GB of HBM2E, like a more ambitious Kaby G. Plus, no worries about anyone putting their GPU in anything else ever!
Kaby G was a huge frankenpackage and pretty wasteful area-wise.

Had AMD designed it all as a Zen product it would have been much more compact IMHO.

It was basically just a jury-rigged solution: a semi-custom Polaris/HBM GPU plus the Intel bridge thing (can't remember its name now) linking it to the Intel CPU SoC.

What surprises me is that AMD never did such a thing themselves. I'm half wondering if they signed some kind of short-term contract preventing a more compact solution that would make Intel look bad, because with a Renoir-generation Vega at 24 CU clocked at just 1.3-1.5 GHz it would ruin Intel's day in SFF systems like NUCs.
 

Doug S

Platinum Member
Feb 8, 2020
2,269
3,521
136
How many machines do you think Apple will print N5 chips for before moving to N5P? iPhone will wipe out a lot of N5 capacity Q4 this year, but iirc Apple claims they have two more machines on the way. Those must be N5, right?

N5P is basically a refinement of N5. Uses the same design rules and IP so Apple might start out Mac chips on N5 and then take them to N5P later. That would allow them to offer slightly higher performance if they wanted, or they could treat them as the "same" meaning cheaper and newer Macs would simply consume a little less power.

Sort of like 15-20 years ago when overclocking was a bigger deal on Intel CPUs and if you waited for a process to 'mature' you had a better chance of getting a chip that could overclock than if you bought right out of the gate. Intel didn't announce a different version of a process when they refined it, but the chips that came out of it consumed less power under normal use and thus had a better shot at overclocking than ones from the less refined process. Nowadays they call it "14nm+" and ++ and so on because they didn't want people thinking they were getting the same chips they got four years ago.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
Well I'm glad we have such an expert here who can tell us tiling will be impossible to make perform well, without even waiting for Apple to try. That will save us all the trouble of looking at actual benchmarks when they appear.

Others before Apple did try, and it ended in failure, as the Kyro did. Tile-based GPU designers couldn't figure out how to integrate hardware-accelerated T&L efficiently, and they won't figure out mesh shading either, since it runs counter to their tiling architecture ...

If tiling is such a handicap, why do iPhones have better-performing GPUs than Android phones?

The vast majority of Android GPUs use tiling as well, just like iPhone GPUs do. The only mobile GPU that isn't tile-based is the Tegra X1, and its driver quality obliterates both Apple's and the other Android graphics vendors'.

Even Samsung are starting to see the light by transitioning to AMD graphics technology, because they want high-end features like ray tracing and mesh shaders, plus decent driver quality, if they're to have any shot at porting next-generation games to mobile devices. The only sane way to target games designed for non-tiling architectures is to have a non-tiling GPU architecture to match ...
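For readers unfamiliar with what "tiler" means in this argument: a tile-based GPU first bins geometry into screen tiles, then shades one tile at a time from fast on-chip memory, instead of shading each triangle immediately. A toy sketch of the binning step (nothing like a real driver, purely illustrative):

```python
# Toy illustration of tile-based rendering's binning step: each triangle
# (represented here by its screen-space bounding box) is assigned to every
# tile it overlaps; the GPU later shades tiles one at a time on-chip.

TILE = 32  # tile size in pixels; real hardware varies

def bin_triangles(triangles):
    """Map tile coordinates -> list of triangle ids touching that tile."""
    tiles = {}
    for tri_id, (x0, y0, x1, y1) in enumerate(triangles):
        for ty in range(y0 // TILE, y1 // TILE + 1):
            for tx in range(x0 // TILE, x1 // TILE + 1):
                tiles.setdefault((tx, ty), []).append(tri_id)
    return tiles

# Two triangles: one inside the top-left tile, one spanning two tiles.
tris = [(0, 0, 10, 10), (20, 0, 40, 10)]
print(bin_triangles(tris))  # {(0, 0): [0, 1], (1, 0): [1]}
```

The deferred, per-tile shading this enables is exactly what saves memory bandwidth on mobile, and also what makes features that stream geometry straight through the pipeline awkward to bolt on.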
 

awesomedeluxe

Member
Feb 12, 2020
69
23
41
Kaby G was a huge frankenpackage and pretty wasteful area-wise.

Had AMD designed it all as a Zen product it would have been much more compact IMHO.

It was basically just a jury-rigged solution: a semi-custom Polaris/HBM GPU plus the Intel bridge thing (can't remember its name now) linking it to the Intel CPU SoC.

What surprises me is that AMD never did such a thing themselves. I'm half wondering if they signed some kind of short-term contract preventing a more compact solution that would make Intel look bad, because with a Renoir-generation Vega at 24 CU clocked at just 1.3-1.5 GHz it would ruin Intel's day in SFF systems like NUCs.
Yeah, it was a mess. From what I heard, AMD didn't have the ability at that time to get HBM2 onto the package and connect it to their APU; they needed the Intel bridge-thing for that. AMD may be able to do it now, but the business case for a costly APU with HBM is pretty narrow.

I think Apple could do it if they are wedded to the idea of unified memory. It would still help a little with cooling to have the dGPU on a separate die even if it's on the same package, and the APU would appreciate being able to power off its GPU cores when running the Firestorms at max clock.

N5P is basically a refinement of N5. Uses the same design rules and IP so Apple might start out Mac chips on N5 and then take them to N5P later. That would allow them to offer slightly higher performance if they wanted, or they could treat them as the "same" meaning cheaper and newer Macs would simply consume a little less power.
I think they will 100% be capitalizing on the performance of N5P to get the clock speeds up. Will be a little tough to hit 3GHz where they are now. I think whatever comes out this year on N5 stays on N5 because Apple will tie up N5P by themselves with the rest of their product line. They are also happy to keep N5 busy with existing products to try and keep AMD off the 5nm node until 2022.
 

name99

Senior member
Sep 11, 2010
404
303
136
They will probably design and contract fabbing for separate chips only when there is sufficient expected market interest.

The fact that a higher end iPad got its own AxxX line of chips implies that it gets high enough sales figures to warrant a separate chip.

I guess the question is what kind of sales figures Apple augurs for such new ARM-based SKUs, and how many separate designs there will be.

For that matter, will they be only fully integrated SoC designs with zero upgrade path, or will some models have PCIe slots or some proprietary Apple alternative for their homegrown GPUs (proprietary being likely, to prevent their use in PCs)?

A 5nm mask set costs ~$15M.
Apple sells ~20M macs a year.
In other words it's actually not that big a deal (from the point of view of mask costs) to create a new mask set for [all macs as a whole].
Of course one has to add into that the probably larger costs of the design and verification of the alternative mac SoCs, but Apple so far seems to be doing a very good job of ensuring that IP can be reused fairly easily across designs (from watch to phone to iPad to ...) It helps when you aren't determined to cripple some of your SoCs to force some target customers to buy the more expensive SoC...

OK. So if all we do is have a separate design for Macs, we're talking less than a dollar in extra cost per SoC from masks. Throw in $10 extra for design, verification and it's still clearly a great deal.
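The amortization argument can be made concrete. The dollar figures are the ones assumed in the post, not audited numbers:

```python
# Per-unit NRE cost of a dedicated Mac SoC, using the post's rough figures.

mask_set_cost = 15e6   # ~$15M for a 5nm mask set
annual_macs   = 20e6   # ~20M Macs sold per year

print(f"${mask_set_cost / annual_macs:.2f} per SoC")  # $0.75 per SoC
# -> "less than a dollar" if one mask set covers all Macs.

# If a dedicated design only covers ~10% of Mac volume, the same mask set
# costs ten times as much per chip, which is where it gets uncomfortable.
niche_share = 0.10
print(f"${mask_set_cost / (annual_macs * niche_share):.2f} per SoC")  # $7.50 per SoC
```

The same division explains why each further split of the lineup (iMac Pro/Mac Pro at a fifth or a tenth of the volume) multiplies the per-chip NRE again.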

Next question is how finely do we segment the Mac chips? THAT is the real question here, how finely Apple will segment the SoCs across the Mac range.
We don't know how many Macs are sold in the different categories (MB+MBA, MBP+mini+iMac, iMac Pro+Mac Pro, say). If Apple uses the iPad Pro chip for MB and MBP, that takes out a lot of the volume available for a dedicated Mac SoC. How much volume is left? 10%? An extra $10 for mask sets and an extra $100 for verification is starting to become uncomfortable.

A third split, to a dedicated iMac Pro/Mac Pro chip, makes things even worse, if those are, what, maybe a fifth of the MBP/iMac/Mac mini numbers? A tenth?

OK, that's bad? How can we make it better?
(a) Mac SoCs only get updated every two years instead of every year (like the iPad Pro has kinda sorta been for a while). That gets us a factor of two. The actual models could still get mid-life kickers on alternate years: a camera upgrade, faster flash, or whatever.

(b) Do we aggregate power (number of cores, size of GPU) as we go up by using more of the smaller SoCs, rather than different SoCs? i.e. either chiplets, or simply putting two (or three or four) SoCs on the PCB? Both seem like reasonable choices.

Going forward Apple doesn't have to follow the path they were forced to follow by Intel's pricing. It's not clear that they believe it's great for customers to have iMacs at multiple levels of i3, i5, i7, i9 all also at different frequencies. I expect they will toss this sort of complication and offer a single iMac (8+8 cores or whatever, at a single frequency) and you'll choose like you choose your phone -- by screen size, by flash, maybe by amount of RAM.

Secondly, chiplets are nicer than separate SoCs because they're smaller and lower power, with faster communication. But separate SoCs are not a TERRIBLE choice, especially for rev 1. Intel and AMD seem to have reasons to want to avoid this (you kinda want to charge more for the 8-core than the 4-core, but in particular ways, so you end up making it difficult for anyone to want to put together a system from two 4-cores), but Apple will not have to engage in that nonsense.

Bottom line:
- the numbers can work if Macs all get iPad Pro SoCs, with the high-end ones (which for the next year I see as MBP, Mac mini, iMac) getting say 2 or even 3 or 4 SoCs on the PCB.
- the numbers can (apparently barely -- but that depends on design/verification costs that we don't know) work if the lowest end Macs get iPad Pro SoCs, the others get a Mac-specific SoC

Further trout in the milk: is Apple planning to put these things in its data centers? Who knows? I can see a way to do things without much disruption for this year, which I believe will be conventional SoCs, no chiplets, and no iMac Pro or Mac Pro (so neither higher core counts nor extreme GPU demands). Next year with the A15 I can imagine multiple solutions; I have no feeling for which will happen.

Final data point. The people who do this sort of thing claim that there are *three* SoC part numbers in the macOS beta's corresponding to Apple Silicon SoCs. We have no idea what this means.
One is this year's developer silicon, one is A14X, one is A14Mac?
All three are A14Mac, but with some sort of modifications of the SoC like different amounts of RAM? (So far Apple has not used this sort of differentiator for say the SoCs that go into iPads with 4 vs 6GB, but that could change.)
Other modifications of the SoC, like the same basic design, but different mask sets (not THAT expensive) allowing for different transistor choices and thus higher frequency/higher power for the mini and iMac? (Think eg Qualcomm's one extra-fast A77 core on Snapdragon 865 and 865+)
 
  • Like
Reactions: ancientarcher

awesomedeluxe

Member
Feb 12, 2020
69
23
41
A 5nm mask set costs ~$15M.
Apple sells ~20M macs a year.
In other words it's actually not that big a deal (from the point of view of mask costs) to create a new mask set for [all macs as a whole].
Of course one has to add into that the probably larger costs of the design and verification of the alternative mac SoCs, but Apple so far seems to be doing a very good job of ensuring that IP can be reused fairly easily across designs (from watch to phone to iPad to ...) It helps when you aren't determined to cripple some of your SoCs to force some target customers to buy the more expensive SoC...

OK. So if all we do is have a separate design for Macs, we're talking less than a dollar in extra cost per SoC from masks. Throw in $10 extra for design, verification and it's still clearly a great deal.

Next question is how finely do we segment the Mac chips? THAT is the real question here, how finely Apple will segment the SoCs across the Mac range.
We don't know how many Macs are sold in the different categories (MB+MBA, MBP+mini+iMac, iMac Pro+Mac Pro, say). If Apple uses the iPad Pro chip for MB and MBP, that takes out a lot of the volume available for a dedicated Mac SoC. How much volume is left? 10%? An extra $10 for mask sets and an extra $100 for verification is starting to become uncomfortable.

A third split, to a dedicated iMac Pro/Mac Pro chip, makes things even worse, if those are, what, maybe a fifth of the MBP/iMac/Mac mini numbers? A tenth?

OK, that's bad? How can we make it better?
(a) Mac SoCs only get updated every two years instead of every year (like the iPad Pro has kinda sorta been for a while). That gets us a factor of two. The actual models could still get mid-life kickers on alternate years: a camera upgrade, faster flash, or whatever.

(b) Do we aggregate power (number of cores, size of GPU) as we go up by using more of the smaller SoCs, rather than different SoCs? i.e. either chiplets, or simply putting two (or three or four) SoCs on the PCB? Both seem like reasonable choices.

Going forward Apple doesn't have to follow the path they were forced to follow by Intel's pricing. It's not clear that they believe it's great for customers to have iMacs at multiple levels of i3, i5, i7, i9 all also at different frequencies. I expect they will toss this sort of complication and offer a single iMac (8+8 cores or whatever, at a single frequency) and you'll choose like you choose your phone -- by screen size, by flash, maybe by amount of RAM.

Secondly, chiplets are nicer than separate SoCs because they're smaller and lower power, with faster communication. But separate SoCs are not a TERRIBLE choice, especially for rev 1. Intel and AMD seem to have reasons to want to avoid this (you kinda want to charge more for the 8-core than the 4-core, but in particular ways, so you end up making it difficult for anyone to want to put together a system from two 4-cores), but Apple will not have to engage in that nonsense.

Bottom line:
- the numbers can work if Macs all get iPad Pro SoCs, with the high-end ones (which for the next year I see as MBP, Mac mini, iMac) getting say 2 or even 3 or 4 SoCs on the PCB.
- the numbers can (apparently barely -- but that depends on design/verification costs that we don't know) work if the lowest end Macs get iPad Pro SoCs, the others get a Mac-specific SoC

Further trout in the milk: is Apple planning to put these things in its data centers? Who knows? I can see a way to do things without much disruption for this year, which I believe will be conventional SoCs, no chiplets, and no iMac Pro or Mac Pro (so neither higher core counts nor extreme GPU demands). Next year with the A15 I can imagine multiple solutions; I have no feeling for which will happen.

Final data point. The people who do this sort of thing claim that there are *three* SoC part numbers in the macOS beta's corresponding to Apple Silicon SoCs. We have no idea what this means.
One is this year's developer silicon, one is A14X, one is A14Mac?
All three are A14Mac, but with some sort of modifications of the SoC like different amounts of RAM? (So far Apple has not used this sort of differentiator for say the SoCs that go into iPads with 4 vs 6GB, but that could change.)
Other modifications of the SoC, like the same basic design, but different mask sets (not THAT expensive) allowing for different transistor choices and thus higher frequency/higher power for the mini and iMac? (Think eg Qualcomm's one extra-fast A77 core on Snapdragon 865 and 865+)
Someone on macrumors had a pretty good breakdown of die size for what could be the standard-bearer N5 SoC. It came out to 131mm² for 8 performance cores, 4 efficiency cores, and 16 GPU cores. You can fit that in an iPad, and with 4 of the performance cores and a handful of GPU cores disabled it will be fine. It should be OK in the MBP16 as well; just clock the Firestorm cores up a bit and add a dGPU. You can keep this design going into N5P - it's just cheaper to make by then, and you use it in whatever needs the extra 8% clock.

I think no matter what you need a different part for the Mac Pro. I don't think it has to be too complicated. It could just be 8 Firestorm cores, and you put however many of those dies on the package as you need, a la Threadripper. May as well use this in the iMac too. Or it can just be 16 Firestorm cores on a die, and there's one very niche Mac Pro that has two of those and sells for all the dollars.
 

soresu

Platinum Member
Dec 19, 2014
2,667
1,865
136
and they won't figure out mesh shading either since it runs counter to their tiling architecture
Technically that is a DX12 feature, so a Metal API focused GPU wouldn't be bound to offer it - though if it ends up in Vulkan they may get a few developers moaning at them so that it can be supported through MoltenVK.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
Technically that is a DX12 feature, so a Metal API focused GPU wouldn't be bound to offer it - though if it ends up in Vulkan they may get a few developers moaning at them so that it can be supported through MoltenVK.

It's a desktop graphics hardware feature, and AMD/Intel/Nvidia are committed to supporting DX12 Ultimate, so mesh shaders are bound to appear in Vulkan as an extension even if only desktop IHVs support it. MoltenVK is just a translation layer, so if Metal doesn't have a feature natively it can't be mapped onto Metal from other APIs, unless the maintainers want to emulate it, which they wouldn't be too happy about ...

If Apple are serious about competing with desktop-level graphics then they'll stop making tilers and start adding features like geometry shaders and mesh shading as well. Metal also can't support transform feedback, just like every other tiler GPU out there ...
 

Doug S

Platinum Member
Feb 8, 2020
2,269
3,521
136
I think they will 100% be capitalizing on the performance of N5P to get the clock speeds up. Will be a little tough to hit 3GHz where they are now. I think whatever comes out this year on N5 stays on N5 because Apple will tie up N5P by themselves with the rest of their product line. They are also happy to keep N5 busy with existing products to try and keep AMD off the 5nm node until 2022.

Hitting 3 GHz for laptop/desktop chips on N5 is not even a question. I think there's even a decent chance they will do so in phones, though my guess is they'll end up just under that mark. TSMC offers an option of "HPC cells" for a ~10% performance bump, instead of the low-power cells Apple uses in its current SoCs, which Apple may use for Macs. Probably not for laptops, where power still matters, but they absolutely will for desktops like the iMac and Mac Pro. There are further tunables TSMC offers that add another ~10% or so above that mark.

So if they are able to hit 2.9 GHz in the phone on N5, for example, they should be able to reach 3.5 GHz on the desktop without any design changes on N5 - i.e. taking the A14 core from the phone and using it unchanged (other than using HPC cells etc.) on the desktop. If they make design changes, which they almost certainly will since there is no indication they are going to settle for the phone cores in the Mac, they could go even higher depending on what they change. Though remember the high-end desktop products like the Mac Pro won't come this year, so they will not be on N5; they will be on N5P or possibly even N3, given that the Mac Pro seems likely to be the last Mac to go ARM. So there will never be a Mac Pro with chips made on N5.
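The headroom described here compounds multiplicatively; a quick sanity check using the percentages from the post (the 2.9 GHz baseline and both 10% figures are the post's rough assumptions, not TSMC-confirmed numbers):

```python
# Compound clock headroom from process options, per the post's rough numbers.

phone_clock = 2.9    # GHz: hypothetical A14 phone clock on N5
hpc_cells   = 1.10   # ~10% bump from TSMC's HPC cell option
tunables    = 1.10   # ~10% more from further process tunables

desktop_clock = phone_clock * hpc_cells * tunables
print(f"{desktop_clock:.2f} GHz")  # 3.51 GHz, i.e. the ~3.5 GHz estimate
```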
 

name99

Senior member
Sep 11, 2010
404
303
136
Someone on macrumors had a pretty good breakdown of die size for what could the standard-bearer N5 SoC. It ended up being 131mm2 for 8 perf cores, 4 efficiency cores, and 16 GPU cores. You can fit that in an iPad and with 4 of the perf cores and a handful of GPU cores disabled it will be fine. Should be OK in the MBP16 as well, just clock the Firestorm cores up a bit and add a dGPU. You can keep this design going into N5P. It's just cheaper to make now, and you use it in whatever needs the extra 8% clock.

I think no matter what you need a different part for the Mac Pro. I don't think it has to be too complicated. It could just be 8 firestorm cores and you put however many of those dies on the package as you need, ala threadripper. May as well use this in the iMac too. Or it can just be 16 firestorm cores on a die, and there's one very niche Mac Pro that has two of those and sells for all dollars.

Die size is the least of my concerns. The issue is the NRE money, not the per-square-mm money.

But your analysis is too simplistic.
What's your story for DRAM? Anything above a MacBook will want at least two memory channels, not the iPad's single (128-bit wide) channel.
What's your story for IO (ie all those USB and Thunderbolt ports, HDMI, ethernet, ...)? A14X die? Separate chiplet? Separate chip?
What's your story for the GPU? You can grow the logic parts of the GPU "fairly easily" (hah), but at some point you have to deal with the fact that you are in two very different regimes from an iPad:
- thermal
- memory bandwidth

The issue is not that these are complicated challenges that Apple has no knowledge of how to solve; that's a stupid uninteresting claim. The issue is that these require different sorts of technologies from what's appropriate for an iPhone/iPad, and that's what the discussion is about for the engineers in this forum.

Geekbench 5 is not ideal as a GPU metric, but as a starting point its Metal benchmark has, at the high end, the AMD Radeon Pro Vega II Duo at 97,000 and the A12X at 9,105. This is not a smear on the A12X; it's a recognition that if Apple wants to match and exceed that AMD number, it will probably need some aspects of the technology AMD uses to get there. That includes a vastly larger power budget (maybe not AMD's almost 500W, but even at say 120W that implies something very different from iPad packaging), and likewise some sort of comparably high-bandwidth memory technology, something like GDDR or HBM2.
Point is you can't really get there just by tying 4 iPad SoCs together.

How far CAN you get? Well, the best iMac right now has an AMD Radeon Pro 580X. If we say the target for this year is to get to the iMac, then we need to match its 42,000. MAYBE you could get there with two iPad SoCs, each running a new A14X GPU a little over twice as fast as the A12X's. Lots of hopes there that are not completely impossible, but very much on the unlikely side. And that's the 2019 iMac. There's apparently going to be a new one this year before the ARM iMac, and Apple will want the ARM iMac to substantially beat the Intel iMac, not be "kinda sorta the same, better in some benchmarks, worse in others"....
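Putting those quoted scores into numbers (Geekbench 5 Metal figures as cited in the post; treat them as ballpark):

```python
# Required GPU scaling, using the Metal scores quoted in the post.

a12x     = 9105    # A12X Geekbench 5 Metal score
vega_duo = 97000   # AMD Radeon Pro Vega II Duo (Mac Pro class)
pro_580x = 42000   # AMD Radeon Pro 580X (2019 iMac)

print(f"Mac Pro class needs ~{vega_duo / a12x:.1f}x an A12X GPU")  # ~10.7x
print(f"iMac class needs ~{pro_580x / a12x:.1f}x an A12X GPU")     # ~4.6x

# Split across two iPad-class SoCs, each GPU must still be a bit over
# twice as fast as the A12X's to reach the 2019 iMac's discrete part.
print(f"... or ~{pro_580x / a12x / 2:.1f}x each across two SoCs")  # ~2.3x
```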

So THAT's what this is about.
Once you accept some baseline realities:
- two memory channels for the midrange, at least 4 as you move up
- lotsa IO
- beefy GPUs
then just gluing together iPad SoCs becomes sub-optimal. Gluing together enough to hit GPU goals means you have a lot of extra ISPs and media decoders and enclaves and suchlike just wasted on your motherboard.

So what's the alternative? Well that's why you get into questions of perhaps a dedicated Mac SoC? Perhaps chiplets? Perhaps daughterboards?
Certainly gluing together lots of iPad SoCs COULD be made to work, probably well enough to still beat Intel, and still cheaply enough to not be a problem. But it wouldn't be engineering optimal. So that's the question -- is something closer to engineering optimal cheap enough to be feasible, which is what my numbers were all about.
 

name99

Senior member
Sep 11, 2010
404
303
136
It's a desktop graphics hardware feature, and AMD/Intel/Nvidia are committed to supporting DX12 Ultimate, so mesh shaders are bound to appear in Vulkan as an extension even if only desktop IHVs support it. MoltenVK is just a translation layer, so if Metal doesn't have a feature natively it can't be mapped onto Metal from other APIs, unless the maintainers want to emulate it, which they wouldn't be too happy about ...

If Apple are serious about competing with desktop-level graphics then they'll stop making tilers and start adding features like geometry shaders and mesh shading as well. Metal also can't support transform feedback, just like every other tiler GPU out there ...

"If Apple are serious about competing with desktop level graphics"

This is like demanding that Apple support some or other Windows or Linux API.
Good luck with that. But the history of these things is that the feature
- will be added to Apple Silicon if it's the best way of solving a problem, not on the grounds of "being serious about competing with desktop level graphics"
- if it's not the best way of solving a problem that Apple, nonetheless, wants to solve, an alternative will be added, and MoltenVK etc can write an adapter to the alternative or pound sand.

You continue to live in the same sort of mindspace as the people who, a month ago, were insisting that Apple would not move off Intel because "what about Boot Camp?"
Your priorities (things like "run existing PC games well" and "make it easy to port Windows or Linux code") are not Apple's priorities. That's just a fact of life; nothing is gained by pretending otherwise; or by scolding Apple (and those of us who agree with Apple on this) by demanding that we take your priorities more seriously.
You think Tim Cook or Johny Srouji are saying to themselves "well, we could take Metal in this interesting alternative direction. But that would break MoltenVK, so, no, we'd better not"? Has Apple ever shown the slightest interest in MoltenVK? Have they ever even acknowledged its presence via a mention or a demo at some public event?
 

LightningZ71

Golden Member
Mar 10, 2017
1,628
1,898
136
I don't see Apple going too crazy with all this. Why not:

One SoC for phone and tablet: big.LITTLE 2+4
One SoC for Pro tablet, iMac and MacBook Air: big.LITTLE 4+4
One SoC for MacBook Pro, iMac Pro, Mac Pro: all big cores, 8 (4+4), glue logic for 2P, expanded I/O.

While that may be fewer cores on the top end, with how "massively superior" Apple's A series cores are to x86 cores, end users will be completely blown away with the power that their systems have!
 
  • Haha
Reactions: Tlh97 and lobz

jpiniero

Lifer
Oct 1, 2010
14,629
5,247
136
Probably more:

2+4 for iPhone and the lower end iPads
4+4 for iPad, Mac Mini and Macbook/Air
8+4 for MBP and iMac

There might be some overlap. Apple could, for instance, make an OSX 2-in-1 that sort of replaces both the iPad Pro and the Air.
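The speculated lineup above can be written down as a small table. To be clear, every number here is the post's guess, not an announced Apple product:

```python
# Speculative SoC tiers from the post above, as (big, little) core counts.
# These are forum guesses, not confirmed parts.
LINEUP = {
    "iPhone / lower-end iPad": (2, 4),
    "iPad / Mac Mini / MacBook / Air": (4, 4),
    "MacBook Pro / iMac": (8, 4),
}

for product, (big, little) in LINEUP.items():
    print(f"{product}: {big}+{little} = {big + little} cores")
```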
 
  • Like
Reactions: Tlh97

DrMrLordX

Lifer
Apr 27, 2000
21,640
10,858
136
@LightningZ71

What kind of "glue logic" do you think Apple needs, and why do you think they would go multi-socket? DynamIQ alone gives them a lot of core cluster interconnect options. They likely have developed alternatives in-house assuming DynamIQ doesn't meet their needs.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
"If Apple are serious about competing with desktop level graphics"

This is like demanding that Apple support some or other Windows or Linux API.
Good luck with that. But the history of these things is that the feature
- will be added to Apple Silicon if it's the best way of solving a problem, not on the grounds of "being serious about competing with desktop level graphics"
- if it's not the best way of solving a problem that Apple, nonetheless, wants to solve, an alternative will be added, and MoltenVK etc can write an adapter to the alternative or pound sand.

OK, I guess Apple GPUs aren't competition for the other desktop graphics vendors then since Apple wants them to be nothing more than toys ...

You continue to live in the same sort of mindspace as the people who, a month ago, were insisting that Apple would not move off Intel because "what about Boot Camp?"
Your priorities (things like "run existing PC games well" and "make it easy to port Windows or Linux code") are not Apple's priorities. That's just a fact of life; nothing is gained by pretending otherwise, or by scolding Apple (and those of us who agree with Apple on this) by demanding that we take your priorities more seriously.
You think Tim Cook or Johny Srouji are saying to themselves "well, we could take Metal in this interesting alternative direction. But that would break MoltenVK, so, no, we'd better not." Has Apple ever shown the slightest interest in MoltenVK? Have they ever even acknowledged its presence via a mention or a demo at some public event?

I wasn't even all that active here a month ago so it's not my responsibility to defend what others here said before ...

I still think Apple is making a stupid move by moving off of x86. It just seals the deal for other developers to stop maintaining the Mac versions of their software, since the platform was hard enough to maintain before ...

If you really believe that Apple has different priorities, then we may as well believe they're not a threat at all to AMD/Intel/Microsoft/Nvidia, since they obviously don't want to have the same software as them ...
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
I still think Apple is making a stupid move by moving off of x86. It just seals the deal for other developers to stop maintaining the Mac versions of their software, since the platform was hard enough to maintain before ...

Maintaining software is not going to be any harder than before.
 

LightningZ71

Golden Member
Mar 10, 2017
1,628
1,898
136
@LightningZ71

What kind of "glue logic" do you think Apple needs, and why do you think they would go multi-socket? DynamIQ alone gives them a lot of core cluster interconnect options. They likely have developed alternatives in-house assuming DynamIQ doesn't meet their needs.

I, of course, have no inside knowledge about what sort of glue logic they would, or could, use. Several cache-coherent high-speed interconnects have been introduced recently, and any one of them could work. I don't think the choice is important, as long as it meets their needs.

As for why? It’s all a matter of volume. Apple doesn’t move anywhere near enough high-end (greater than 8-core) processors to justify a “large” die design restricted to those machines. However, to remain relevant in that space, Apple needs a decent product. I have previously suggested that Apple is going to offer cloud processing power as a service through extensive cloud integration in a future version of Mac OS. However, if they don’t go that route, they need something.

The premise of these threads is that one A-series core is dramatically faster than an x86 core. If we accept that, then it’s reasonable to suppose that future Mac Pro products can be better with fewer cores, but that 8 isn’t enough. If that’s a reasonable assumption, then Apple needs more cores, but still wants to make money, and therefore won’t make a chip at a loss for a low-volume product. A reasonable assumption, then, is that their highest-end A-series processor will have the ability to be used in a 2P configuration.
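The core-count arithmetic in that argument can be made concrete with toy numbers. The ~1.3x per-core advantage below is just the thread's premise (the "+30% IPC over A77" rumor), not a measured figure:

```python
# Back-of-the-envelope check of the 2P argument above. Assume, purely
# for illustration, that one A-series core ~= 1.3x an x86 core.
PER_CORE_ADVANTAGE = 1.3

def x86_core_equivalent(a_series_cores, advantage=PER_CORE_ADVANTAGE):
    """Roughly how many x86-class cores a given A-series count matches."""
    return a_series_cores * advantage

single_socket = x86_core_equivalent(8)   # 8 big cores -> ~10.4 x86-core equivalents
dual_socket = x86_core_equivalent(16)    # 2P of the same die -> ~20.8

print(single_socket, dual_socket)
```

Under those assumed numbers, a single 8-core die lands only a little above an 8-core x86 part, which is why the post argues a 2P option is needed at the top end.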

16 super fast A series cores should be more computational power than anyone could ever need! /s