Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from the number of GPU cores). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from maybe slight clock speed differences occasionally).

EDIT:


M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


Second Generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K h.264, h.265 (HEVC), and ProRes

M3 Family discussion here:

 

Mopetar

Diamond Member
Jan 31, 2011
One could argue that they stand to lose money they would otherwise have made from the sale of MacOS software and the Linux effort prevented such sales from being realized. I think Sony similarly blocked Linux on the PS3 because it was preventing thousands of their hardware units from being used for the intended purpose of boosting their software sales. Yes, I'm aware that preventing jailbreak was likely a much bigger factor in Sony's decision.

That argument doesn't make sense. They can't sell software to a person who never buys their hardware. If a Linux user can't run Linux on the hardware they won't buy it in the first place. It only works if there are a lot of Linux users who buy hardware that would have otherwise gone to customers that would buy Apple software for their Macs because there aren't enough Macs to go around.

Sony didn't like people running Linux for the reason you point out, but Sony was also heavily subsidizing the cost of the PS3 with the intent of making it up on software sales. The PS3's Cell processor was actually a really cheap way to do certain tasks it was good at for what Sony was charging.

Apple doesn't have that problem and their hardware margins are pretty significant. The number of hardware sales to people who want to run Linux on a Mac is so small it's probably not going to matter.
 

Heartbreaker

Diamond Member
Apr 3, 2006
Apple doesn't have that problem and their hardware margins are pretty significant. The number of hardware sales to people who want to run Linux on a Mac is so small it's probably not going to matter.

The main point, though, is that it's so small as to be irrelevant, so Apple isn't going to spend any money or effort on supporting it. They aren't going to actively block it, but expect ZERO support.

A LOT more people would like to run Windows as a Native OS, but they aren't going to spend money on that either.

Apple's official solution to any third-party OS on M1 Macs is a VM.
 

Doug S

Platinum Member
Feb 8, 2020
I was under the impression that companies ensure that they have licenses for the tech used in their products to prevent patent litigation. The license could be from someone other than a major player and might implement things just a little bit differently to be deemed as a non violation of an existing patent. But yeah, Nvidia could do that in a strategic sense, just to settle with Apple with one of the stipulations being that Apple has to use their GPUs at a price convenient to Nvidia.


If you know you are using patented tech, that's one thing. But there are many cases where you are unaware you may be using patented tech. If you add up all the currently valid tech industry patents, then multiply them by the number of claims in those patents, then multiply by the number of countries in which those are filed where judges/juries may have a different ruling than elsewhere, there are millions of potential patent landmines out there.

No company goes looking around to see which patents they may be infringing upon. Not only would that run up unfathomable legal and engineering bills, but knowingly infringing patents will put you in a much worse situation legally. Ignorance is the safest course - that's why engineers are always told not to read patents.

So given that you are releasing products without knowing how many patents you may be infringing upon, you don't want to make it any easier than you have to for the owners of those patents to find out. For some stuff there's no way around that - the operation of the IR dot projector for Face ID is pretty "in your face" so to speak, so if someone owns a patent there you can expect a lawsuit in the mail (and knowing how easy it would be to check on something like that, it may be something you choose to have your lawyers check on first)

On the other hand, the algorithm used by Face ID to turn that dot information into a hash of your face to compare with the stored hash of your face, that's not something you can figure out how it works just from playing around with an iPhone. Perhaps someone who owns a patent there might be able to tease out the information by subjecting it to a battery of tests and see if it reacts in the same way their patented tech does, but unless Apple documented the algorithm you wouldn't know for sure.

The more information Apple provides about the inner workings of their GPU, the more chance that someone with a patent might be able to determine whether or not it is infringing. If providing that information would in some way increase their sales by enough to compensate for the risk of a patent lawsuit then maybe they do that. In this case though, I'm pretty sure Apple's legal team is going to advise them to leave this alone.
 

Mopetar

Diamond Member
Jan 31, 2011
It doesn't really matter as long as Apple themselves get a patent on their technology. Once the USPTO grants one, it's much harder for another company to turn around and sue Apple, because Apple has a piece of paper from the government saying their implementation is different enough to not infringe on that other patent.

That's why you always see these companies patent every little thing. It's not really about them trying to get one up on their other largest competitors, but rather to prevent the very expensive lawsuits that some patent troll could try to bring forward against them. Having your own patent stops that kind of thing before it even starts.
 

Hitman928

Diamond Member
Apr 15, 2012
It doesn't really matter as long as Apple themselves get a patent on their technology. Once the USPTO grants one, it's much harder for another company to turn around and sue Apple, because Apple has a piece of paper from the government saying their implementation is different enough to not infringe on that other patent.

That's why you always see these companies patent every little thing. It's not really about them trying to get one up on their other largest competitors, but rather to prevent the very expensive lawsuits that some patent troll could try to bring forward against them. Having your own patent stops that kind of thing before it even starts.

It does make it harder, but I don't know how much harder. Patents get revoked all the time for violating earlier patents. There are too many patents out there for anyone to actually keep track of, so when you file a patent you have to list pre-existing patents for the patent office to review. I don't know how much the patent office looks beyond the patents listed on the application, but I doubt it's all that much compared to how many patents there are in each category. Long story short, getting a patent granted is great and valuable, but most of the time a patent isn't considered solid until it gets challenged and held up in court.

Edit: Of course if the patent in question already listed the potentially suing company's patent as a relevant patent then you would feel a lot better about being able to defend your patent against that particular lawsuit.
 
  • Like
Reactions: BorisTheBlade82

Eug

Lifer
Mar 11, 2000
There have been no benchmark leaks for M1X yet. Thus, I will have to guess.

It seems a top end score for M1 on Geekbench 5 is roughly 1750/7800, although more typically it's closer to 1750/7700.

That's with 4 performance cores and 4 efficiency cores for a total of 8 cores, AFAIK all being utilized for the bench.

Thus, if we were to extrapolate to a 10-core variant (8/2), I'm GUESSING the Geekbench 5 score would be around 1750/13000 to 1750/14000 or so. That puts it into the workstation/server/high end gaming class CPU speed tier for Intel and AMD.
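The back-of-the-envelope scaling above can be sketched out. The per-core point splits below are my own illustrative guesses (chosen so 4+4 lands on the observed ~7700), not measured figures:

```python
# Naive Geekbench 5 multi-core extrapolation. Assumes the multi-core score
# is roughly additive per core, with guessed contributions of ~1650 points
# per performance core and ~275 per efficiency core (4*1650 + 4*275 = 7700,
# matching typical M1 results). These per-core splits are assumptions.
def gb5_multicore(p_cores, e_cores, p_pts=1650, e_pts=275):
    return p_cores * p_pts + e_cores * e_pts

print(gb5_multicore(4, 4))  # 7700 (M1: 4P + 4E)
print(gb5_multicore(8, 2))  # 13750 (hypothetical 8P + 2E "M1X")
```

Real scaling won't be perfectly linear (shared caches, memory bandwidth, thermals), so treat this as an upper-bound sketch.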

That score would be ahead of Ryzen 9 5900, but not as fast as Ryzen 9 5900X, at least according to this artificial benchmark. However, it would be at a power envelope well below Ryzen 9 5900. As such, these are basically laptop chips. Apple's true workstation chips are rumoured to be Jade 2C-Die and Jade 4C-Die, 20-core (16/4) and 40-core (32/8) respectively, which would be absolutely monstrous performance since they appear effectively to be M1X x 2 and M1X x 4 respectively.

BTW, if the variants of 10-core Jade being the "Die" version with 32 GPU cores and the "Chop" version with 16 GPU cores turn out to be true, are we all in agreement that they would be identical chips but binned for GPU cores?
 

Heartbreaker

Diamond Member
Apr 3, 2006
BTW, if the variants of 10-core Jade being the "Die" version with 32 GPU cores and the "Chop" version with 16 GPU cores turn out to be true, are we all in agreement that they would be identical chips but binned for GPU cores?

Chances of a 32 Core GPU in the SoC seem negligible. It seems overkill in the midrange, and makes for a huge (expensive, lower supply) SoC.
 

Eug

Lifer
Mar 11, 2000
Chances of a 32 Core GPU in the SoC seem negligible. It seems overkill in the midrange, and makes for a huge (expensive, lower supply) SoC.
Separate die but still a single 16/32 binned part then? Or are you suggesting 16 and 32 will be different parts?
 

Eug

Lifer
Mar 11, 2000
(M1 die shot)



That M1 is about 121 mm2.

Doing some rough measurements:

It appears the 8-core GPU is about 25.6 mm2.
16-cores would be +25.6 mm2.
32-cores would be +77 mm2.

4 Firestorm cores is about 18 mm2.
8 Firestorm cores is another 18 mm2.

That means, not counting the size reduction from losing 2 Icestorm cores, and not accounting for other stuff like additional cache, the additional space needed for an 8/2 + 16 M1X is +44 mm2, and the space needed for an 8/2 + 32 M1X is +95 mm2.

That works out to die sizes of 165 mm2 and 216 mm2 for 16-core GPU M1X and 32-core GPU M1X respectively.

It should be noted that the die size of the i9-9980HK in the 2020 MacBook Pro is 149 mm2, but that doesn't include the GPU. The Navi 10 GPU in those MacBook Pros is 251 mm2 (!), for a total of 400 mm2.

A9X in the 2015 iPad Pro is 147 mm2.
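The rough arithmetic above can be written out as a sketch. All areas are eyeballed estimates from the die shot, not official figures:

```python
# Die-size estimate for hypothetical 16- and 32-GPU-core "M1X" variants,
# built from the rough block measurements above (all values in mm^2).
M1_DIE       = 121.0  # full M1 die
EXTRA_CPU    = 18.0   # +4 Firestorm cores (4 -> 8 performance cores)
EXTRA_GPU_16 = 25.6   # +8 GPU cores (8 -> 16)
EXTRA_GPU_32 = 77.0   # +24 GPU cores (8 -> 32)

die_16 = M1_DIE + EXTRA_CPU + EXTRA_GPU_16  # ~165 mm^2
die_32 = M1_DIE + EXTRA_CPU + EXTRA_GPU_32  # ~216 mm^2
print(round(die_16), round(die_32))  # 165 216
```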
 

Eug

Lifer
Mar 11, 2000
That works out to die sizes of 165 mm2 and 216 mm2 for 16-core GPU M1X and 32-core GPU M1X respectively.
It seems a number of different people have done the calculations already, and with more detail than me.


(Screenshots of other posters' die-size estimates)

So, my takeaways:

1. I believe (as a n00b) there is a very strong chance that M1X will indeed have 32 cores on-die, although I'm not sure if all 32 would be active if yields aren't perfect. I might expect Apple to charge say $400 for the GPU upgrade from 16-core to 32-core.

2. For the 16-core variant, I'm guessing that could be due to binning as opposed to a different chip.

3. Yes, that's a big die, but it's not so ginormous as to be unmanageable, especially when you consider the monstrous performance vs. the comparatively low power. That 147 mm2 iPad Pro SoC I mentioned earlier is already as big as those AMD APUs, but it's in a fanless device and sips power.
 

Heartbreaker

Diamond Member
Apr 3, 2006
For cost, it's utterly pointless to compare die sizes across different silicon processes.

The more advanced the process, the more expensive it gets, to the point that price per transistor has nearly flat-lined.

It's more appropriate to compare transistor counts. The M1 is already large and expensive at 16 billion transistors, already exceeding the Xbox Series X SoC. If you double the performance cores and double the GPU, it's getting obscenely large and expensive, to the point that quadrupling the GPU is unlikely in the extreme.

You also have to consider the memory bandwidth to feed that GPU. You would need 4 (64-bit) memory channels to feed 16 GPU cores (up from 2 in the M1), and at least 6 if not 8 memory channels to feed a 32-core GPU. NOT going to happen anytime soon, especially for a laptop SoC.
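The channel-scaling argument can be sketched as follows. The 4-GPU-cores-per-channel ratio is inferred from the M1 (2 channels feeding 8 cores), not an Apple spec:

```python
import math

# Estimate how many 64-bit LPDDR channels are needed to feed a given GPU
# core count, assuming bandwidth demand scales linearly from the M1's
# 2 channels per 8 GPU cores. The ratio is an inference, not a spec.
def channels_needed(gpu_cores, cores_per_channel=4):
    return math.ceil(gpu_cores / cores_per_channel)

print(channels_needed(8))   # 2 (M1)
print(channels_needed(16))  # 4
print(channels_needed(32))  # 8
```

Faster memory (e.g. LPDDR5) shifts the ratio, which is one way Apple could feed more GPU cores per channel than this linear sketch assumes.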
 

Doug S

Platinum Member
Feb 8, 2020
I think the 32 GPU core part is necessary for the Mac Pro, so it can have 128 GPU cores in its high end and be competitive with top of the line workstation GPUs from Nvidia/AMD. The question is, will the Mac Pro use the same SoC as next week's Macs? That seems unlikely to me (other than beta hardware)

I did my own rough calculations about a year ago and came out with 225 mm^2 for a "M1x" or whatever you want to call it with a 32 core GPU, and it seems everyone else is in the same ballpark.

That's totally fine for a die size, and since they won't need all those GPU cores in most of those chips the added size does not impact yield at all. 225 mm^2 comes out to 256 chips per wafer, which is less than $70 per chip at TSMC's N5 wafer pricing. Once you add in yield, test, and packaging you are probably looking at around $100.

Having "too many" GPU cores is fine, even if they don't plan to offer a high end Macbook Pro with 32 cores maybe they offer one with 24. It will be used in the higher end iMac and maybe the higher end Mini will get some real power this time - its cooling was designed for a 65W Intel CPU so it could easily handle 8 big cores and 32 GPU cores.

I don't know if Jade-C/M1X really will have 32 GPU cores, but it certainly could have without impacting cost. We're talking less than $15 per chip, in hardware that will sell for $1000-$2500!
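The cost estimate above can be roughly reproduced with the standard die-per-wafer approximation. The ~$17,000 N5 wafer price used here is a widely reported industry estimate, not a TSMC figure, and yield is ignored:

```python
import math

# Gross dies per 300 mm wafer via the common approximation:
# DPW = pi*(d/2)^2 / A  -  pi*d / sqrt(2*A)
# The second term accounts for partial dies lost at the wafer edge.
def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    r = wafer_diameter_mm / 2
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

dpw = dies_per_wafer(225)       # ~270 gross dies (the post assumes 256)
print(dpw, round(17000 / dpw))  # per-die cost at an assumed $17k N5 wafer
```

Different edge-loss and aspect-ratio assumptions explain the small gap between this formula's ~270 and the 256 used above; either way the raw die cost lands in the $60-70 range before yield, test, and packaging.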
 

Eug

Lifer
Mar 11, 2000
Would it be easier marketing wise? ... since Apple wants people to know the name of the chips (but less than 5% of the buying public, only the people who care about speeds and feeds and comparison between Apple and non Apple)

To use M1 "Big" instead of M1X?

( I am just throwing stuff at the wall, not making a case of true advocacy of such a name. )
I didn’t think they would do that but now I’m wondering if they just might:

M1 Pro and M1 Max


Also, if this is accurate, it does indicate two different versions of the chip, which I suspect will be differentiated by the number of active GPU cores.
 

Commodus

Diamond Member
Oct 9, 2004
I didn’t think they would do that but now I’m wondering if they just might:

M1 Pro and M1 Max


Also, if this is accurate, it does indicate two different versions of the chip, which I suspect will be differentiated by the number of active GPU cores.

Makes sense to me. Apple has been using "X" and "Z" in part because the chip line has never been particularly diverse, and these were usually kludges meant to extract more performance. Now, Apple is going to have a whole range of computers that need a range of chips with consistent names between generations. This would make shopping relatively easy.
 

BorisTheBlade82

Senior member
May 1, 2020
I don't believe there will be a 32 core GPU part at all. Not this year in laptops.
IMHO Jade-C is not only for the Notebook market. With it they will kick some serious ass in the HPC market - and with serious I mean 5950x-ish.
And they will want to pair it with some serious GPU. Bandwidth might be provided by HBM, and maybe they will even use TSMC's InFO-LSI as an interconnect. And from there on to chiplets and multiple Jades - but I digress 😉
 

eek2121

Platinum Member
Aug 2, 2005
IMHO Jade-C is not only for the Notebook market. With it they will kick some serious ass in the HPC market - and with serious I mean 5950x-ish.
And they will want to pair it with some serious GPU - bandwidth might be provided by HBM - maybe they will even use TSMCs Info-LSI as an interconnect. And from there on to chiplets and multiple Jades - but I digress 😉

Not in this reality. At least not before the 5950X gets replaced.
 

Eug

Lifer
Mar 11, 2000
Makes sense to me. Apple has been using "X" and "Z" in part because the chip line has never been particularly diverse, and these were usually kludges meant to extract more performance. Now, Apple is going to have a whole range of computers that need a range of chips with consistent names between generations. This would make shopping relatively easy.
Well, it's a little inconsistent.

Now it seems there will likely be:

M1 + 7-core GPU
M1 + 8-core GPU
M1 Pro
M1 Max

My apparently incorrect assumption was:

M1 + 7-core GPU
M1 + 8-core GPU
M1X + 16-core GPU (or else M1 Pro + 16-core GPU)
M1X + 32*-core GPU (or else M1 Pro + 32-core GPU)

BTW, I suspect they will not bin by CPU clock speed, just GPU core count, so ignoring the Mac Pro coming in the future, there are only 4 chips. I suspect the same would be true for the M2 series, with 3 to 4 chips.

The advantage here, though, is that this completely eliminates the need to mention the number of cores at all, which might be important when M2 or M3 rolls around, if one of them uses fewer cores to achieve the same or better performance.

As for the Mac Pro, what's that going to be? M1 Extreme?
 

jpiniero

Lifer
Oct 1, 2010
As for the Mac Pro, what's that going to be? M1 Extreme?

There were rumors that the Mac Pro is going to be refreshed with Ice Lake-W. They can release that whenever, or it could even be cancelled. There's also an ARM product that could be called Mac Pro but would really be something like a Mac mini, only bigger.
 

Commodus

Diamond Member
Oct 9, 2004
Well, it's a little inconsistent.

Now it seems there will likely be:

M1 + 7-core GPU
M1 + 8-core GPU
M1 Pro
M1 Max

My apparently incorrect assumption was:

M1 + 7-core GPU
M1 + 8-core GPU
M1X + 16-core GPU (or else M1 Pro + 16-core GPU)
M1X + 32*-core GPU (or else M1 Pro + 32-core GPU)

BTW, I suspect they will not bin by CPU clock speed, just GPU core count, so ignoring the Mac Pro coming in the future, there are only 4 chips. I suspect the same would be true for the M2 series, with 3 to 4 chips.

The advantage here, though, is that this completely eliminates the need to mention the number of cores at all, which might be important when M2 or M3 rolls around, if one of them uses fewer cores to achieve the same or better performance.

As for the Mac Pro, what's that going to be? M1 Extreme?

Sounds like the gap between the 16- and 32-core M1X chips may be enough that Apple doesn't feel comfortable using... well, M1X.

As for the Mac Pro and other high-end desktops, what if they're not M-series chips at all? I can imagine Apple introducing a D chip series that's reserved solely for the Mac Pro and systems like it.
 

jpiniero

Lifer
Oct 1, 2010
As for the Mac Pro and other high-end desktops, what if they're not M-series chips at all? I can imagine Apple introducing a D chip series that's reserved solely for the Mac Pro and systems like it.

Don't think you will see the real Mac Pro replacement until the chiplets are ready.
 

Eug

Lifer
Mar 11, 2000
There were rumors that the Mac Pro is going to be refreshed with Ice Lake-W. They can release that whenever, or it could even be cancelled. There's also an ARM product that could be called Mac Pro but would really be something like a Mac mini, only bigger.
Since you mentioned the Mac mini... FWIW, the pundits are doubling down on a Mac mini refresh tomorrow. Mac mini Pro? Or at least Mac mini with M1 Pro.

However, since it's not a "real" Mac Pro, I suspect it will be smaller than the M1 Mac mini. The current M1 Mac mini is largely just empty space.
 

jpiniero

Lifer
Oct 1, 2010
Makes sense if they offer the M1 Pro and M1 Max in the 16" MBP, the Mini (even if it's a Pro Mini), and the iMac. They can dump the client Intel models or sell them while supplies last.
 
Jul 27, 2020
Less than 24 hours left until the event. Here's what I'm hoping to see:

LPDDR5 (please, please let it be max 128GB)
Double the amount of cores (both CPU and GPU)
4K HDR mini-LED display in MBP16
PCIe Gen 4 SSD
Already available games running in Windows ARM edition at 3050 Ti performance level

Don't you dare disappoint me, Apple!