Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
23,583
996
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 gigatexels/s
41 gigapixels/s

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from occasional slight clock speed differences).

EDIT:


M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


M2
Second-generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, H.265 (HEVC), and ProRes

M3 Family discussion here:

 

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
Must I remind you there are up to sixteen of those cores on a single die, and also it's on an older process node. It won't be a completely fair comparison until we see what AMD does on 5nm.

Just a reminder, echoing what SarahKerrigan is saying: the 5950X is a 16-core chip, but it actually uses two dies to get there. So either we compare a 5nm chip with 4 big cores and 4 small cores against a 7nm die with 8 big cores (if we care about die sizes and cut the multithreaded benchmarks in half), or we compare 4+4 vs 16 if we don't care about die sizes and only compare "final products" however they're put together, in which case the 5950X is a 105W desktop chip going up against a laptop chip whose TDP we don't even know yet.

What Zen3 part has sixteen cores on a die?
 

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
Speaking of numbers of cores, how far up do you think they'll scale?

It's a lock they'll create 12 core chips for MacBook Pros and iMacs, but presumably that will consist of 8 performance cores and 4 efficiency cores. I'm also thinking they'll have a Mac Pro chip with 12 performance cores, but would it make sense to remove the efficiency cores? And what about beyond that? Would it make sense to create a dual-CPU Mac Pro with 2 x 12 performance cores, for a total of 24 performance cores and no efficiency cores?
Why stop there? Why not just take lots of Apple A12Z (or A12X) cores, put them on an add-in card with some RAM, and sell it as an "accelerator card" for the Mac Pro?

I pick the A12Z instead of the much better A14 or M1 because those are 5nm devices and those wafers are supply constrained; the 7nm wafers, on the other hand, should not be.

While the 5nm A14 or M1 would be even better, if we are talking about a 100 or 200 W TDP for an add-in card with dozens of cores, it makes sense to just throw cores at the problem, underclock them (i.e. keep them in the ideal part of the performance/voltage curve), and sell this device to happy developers, Pixar, etc.

Yes, eventually there is a point where more cores do not scale for many workloads (something we already reached with some workloads on the Mac Pro and iMac Pro). But it just makes sense to mix and match high and low performance cores, where the real constraint is what is on 5nm versus 7nm.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Then you heard wrong.

You don't have to be an engineer to know that increasing IPC is much more difficult than putting more cores on a piece of silicon. Companies with a negligible history of CPU design (Amazon) have placed 64 or more ARM cores on a server chip and done a credible job of it. Multiple cores are a well-known and well-solved issue.

I'm not a semiconductor engineer (and I don't even work in the IT industry), so I can't say with any degree of finality whether scaling up to high-core-count designs is as trivial as you claim. But regarding Amazon, it's an assumption on your part that they hired a bunch of engineers with a "negligible history of CPU design." Knowing Amazon and how much money they have, it's doubtful that they did.

Also, the Graviton2 CPU got pretty much destroyed by Zen 2 if I recall, winning just 9% of all the benchmarks they ran on Phoronix.

Rome vs Graviton.

There is absolutely nothing stopping Apple from putting any reasonable number of cores on a chip to power Mac Pros. Though they may choose a multi-chip option when going beyond 16 cores for the Mac Pro.

A theoretical 16-core chip with 8 high-performance cores and 8 energy-efficient cores would have 32 MB of L2 cache (scaling the M1 linearly: two 12 MB P-core clusters plus two 4 MB E-core clusters). I'm not even sure how that would work, as from what I've heard, SRAM takes up a lot of power and space.

Scaling up a design would probably affect the cache hierarchy.

OTOH, despite all the companies selling ARM-based designs, it's really only Apple that has succeeded in producing designs with significantly higher IPC than the designs that ARM licenses. Adding more cores is trivial compared to actually designing new cores that advance the state of the art on IPC.

Or did you ever consider that pursuing very wide, high-IPC designs is what works well for Apple, given their platform and the relatively exorbitant prices of their products? It's not that the other licensees can't design such chips; it's that they don't want to.
 

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
How would you connect it?
Currently there is a Mac Pro based around Intel; it has PCI Express, and Apple already sells an accelerator card for it, the Afterburner card. Likewise, they could always make an Apple Silicon accelerator card.

I assume there will eventually be an Apple Silicon Mac Pro device, even if it may just be something smaller, like a Mac mini or something sized between the Mac mini and the Mac Pro.

But my point is that if Apple is going to go with chiplets, they could always use older silicon, since it is still competitive and the newer silicon is tied up on the leading-edge node. There is a point where having 8 or even 16 fast cores on the latest process hits diminishing returns; beyond that, you are fine with cores that are 50% or 75% as fast individually, and you just want as many of them as possible. Thus, in a "Pro situation" that is not a laptop or tablet, it could make sense to pack in as many cores as possible and downclock them to the best performance-per-watt point (a toy model of why that works is sketched below).
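(To make the downclocking argument concrete, here is a back-of-the-envelope sketch. The cubic power relation is just the standard first-order CMOS approximation, and every number is illustrative, not a measured figure.)

```swift
import Foundation

// Toy model: dynamic power scales roughly with f * V^2, and voltage
// must rise roughly with frequency, so power grows ~cubically with
// clock while throughput grows only linearly.
func relativePower(clock f: Double) -> Double { f * f * f }

let full = relativePower(clock: 1.0)   // 1.00x power, 1.00x throughput
let slow = relativePower(clock: 0.75)  // ~0.42x power, 0.75x throughput
print(String(format: "75%% clock -> %.0f%% power", slow / full * 100))
// Two cores at 75% clock give 1.5x the throughput of one full-speed
// core for ~0.84x the power: the "throw cores at it" case.
```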
 

SarahKerrigan

Senior member
Oct 12, 2014
339
468
136
I'm not a semiconductor engineer (and I don't even work in the IT industry), so I can't say with any degree of finality whether scaling up to high-core-count designs is as trivial as you claim. But regarding Amazon, it's an assumption on your part that they hired a bunch of engineers with a "negligible history of CPU design." Knowing Amazon and how much money they have, it's doubtful that they did.

Also, the Graviton2 CPU got pretty much destroyed by Zen 2 if I recall, winning just 9% of all the benchmarks they ran on Phoronix.

Rome vs Graviton.

This is a somewhat overblown point IMO; the use of a 32MB L3 in Graviton2 is likely much of why it scales relatively badly, and that looks to me like a cost choice rather than "Amazon is incapable of using the 64MB-128MB that the ARM hyperscale reference design would suggest they should." The economics are very different when you're building CPUs for yourself rather than building merchant silicon, and this looks to me like it came down to "well, we can get x% higher performance per socket at so-and-so workloads, or we can get some extra chips per wafer."

Whether it was the right choice is beyond me, but I suspect that's basically what the explanation is.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
I wanted to circle back to this, because someone said you're wrong. You are right. Process node does not increase IPC. It just gives you more real estate to add more logic/cache, the liberty to clock higher, the ability to use less power. But it doesn't directly affect IPC (cf. Zen -> Zen+, on which there was a 3% IPC gain from revised cache, but nothing from process).

I concede this point (especially when you think of Zen 3), but I still think they're tangentially related. Could Apple have achieved their rate of execution and performance uplifts without access to node advancements?

Same for Intel. Intel's domination in manufacturing is a major part of why they were so successful over the years. Not coincidentally, when they lost their process leadership with the 10nm disaster, they lost their performance crown as well.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
This is a somewhat overblown point IMO; the use of a 32MB L3 in Graviton2 is likely much of why it scales relatively badly, and that looks to me like a cost choice rather than "Amazon is incapable of using the 64MB-128MB that the ARM hyperscale reference design would suggest they should." The economics are very different when you're building CPUs for yourself rather than building merchant silicon, and this looks to me like it came down to "well, we can get x% higher performance per socket at so-and-so workloads, or we can get some extra chips per wafer."

Whether it was the right choice is beyond me, but I suspect that's basically what the explanation is.

This is similar to the point I was trying to make: scaling up core counts is not trivial, because the uncore needs to be able to support all of those cores in terms of bandwidth and cache coherency, which is not easy.

We've already discussed this in other threads, but you probably can't just scale up an M1 CPU to 64 cores without reworking the cache hierarchy. The large unified L2 caches that the M1 has would need to go, replaced with much smaller private L2 caches plus a big shared L3. In effect, doing this would probably dramatically lower the single-thread performance of the cores while boosting their multicore performance.
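(Rough numbers to show why the shared-L2 scheme gets awkward at high core counts. Everything below except the M1's own 12 MB cluster cache is a hypothetical assumption, sketched for concreteness; nobody outside Apple knows what a 64-core part would actually look like.)

```swift
// Back-of-the-envelope SRAM comparison: scaling the M1's shared-L2
// clusters linearly to 64 P-cores vs. a server-style layout with
// small private L2s plus one big shared L3.
let m1L2PerPCluster = 12.0   // MB shared by each 4-core P cluster (M1)
let clusters = 64 / 4        // 16 clusters of 4 cores
let scaledSharedL2 = Double(clusters) * m1L2PerPCluster  // 192 MB of L2

let privateL2PerCore = 1.0   // MB, assumed server-class private L2
let sharedL3 = 64.0          // MB, assumed shared last-level cache
let serverStyleTotal = 64 * privateL2PerCore + sharedL3  // 128 MB total

print("M1 scheme scaled to 64 cores: \(scaledSharedL2) MB of L2 alone")
print("Private L2 + shared L3:       \(serverStyleTotal) MB total")
```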
 

thunng8

Member
Jan 8, 2013
152
61
101
There are some very curious results in that thread for vector results.

M1: ST 504 / MT 2032
3700X: ST 284 / MT 1567
10600: ST 294 / MT 1578
9900K: ST 315 / MT 1837
3900X: ST 295 / MT 2079

We know the 10900K is at most 15-25% behind an A14 in single-threaded workloads (SPEC 2006, GB5). Yeah, the 9900K is slower than the 10900K, but not by that much. For the M1 to lead the ST vector score of a 9900K (overclocked!) by 60% makes little sense. It also makes little sense that a 3900X with 12 cores and 24 threads sees an MT score only 7 times its ST score.

I'll be waiting for other benchmarks. This one seems like quite the outlier, or at least too specialized/niche to extrapolate to anything meaningful for general comparison.
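(A quick sanity check on those quoted scores: the sketch below just recomputes the MT/ST scaling each result implies, using only the numbers listed above.)

```swift
import Foundation

// MT/ST scaling implied by the quoted vector scores.
let scores: [(chip: String, st: Double, mt: Double)] = [
    ("M1 (4+4)",        504, 2032),
    ("3700X (8c/16t)",  284, 1567),
    ("10600 (6c/12t)",  294, 1578),
    ("9900K (8c/16t)",  315, 1837),
    ("3900X (12c/24t)", 295, 2079),
]
for s in scores {
    print(s.chip, String(format: "-> %.1fx MT scaling", s.mt / s.st))
}
// The 3900X lands near 7x despite 24 threads, which is the anomaly
// called out above.
```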
You might get a shock when the results for image-editing applications come out. I'm not sure why Apple processors seem so fast in these sorts of applications, but my iPad Pro 2018 runs circles around my 6-core Mac mini 2018 in Adobe Lightroom in common tasks. Theoretically the Mac mini should be significantly faster.

This looks more like AMD advocacy, and the opposite of reality.

The 5950X isn't designed for more; it does significantly less than an A14.

No GPU.
No Media encoder.
No AI engine.
No Flash controller.

The A14 SoC includes all of that and does WAY more than the 5950X, which is just a CPU.
When real application benchmarks come out, I believe the M1 will outperform, by a significant margin, what the SoC's Geekbench or SPEC results would suggest. And it is all because of how easy it is for third-party developers to leverage the extra units in the SoC: they just need to call the appropriate system-level frameworks, like Accelerate, and the work automatically uses the appropriate accelerators (see the sketch below). Another example is DaVinci Resolve. They just did a press release saying their optimised build running on M1 is 5x faster than on the previous-generation Intel MacBook Pro. That figure is far more than what Geekbench or SPEC results would suggest.
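(For a sense of what "just call the framework" looks like: a minimal sketch using Accelerate's real vDSP overlay; the arrays are made-up sample data.)

```swift
import Accelerate

// Element-wise multiply of two Float arrays via vDSP. The developer
// never targets a specific execution unit; the framework picks the
// best vectorized path available on the host silicon.
let a: [Float] = [1, 2, 3, 4]
let b: [Float] = [10, 20, 30, 40]
let product = vDSP.multiply(a, b)
print(product)   // [10.0, 40.0, 90.0, 160.0]
```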

There are already examples where the iPad Pro can edit video faster than a desktop system: LumaFusion and Adobe Premiere Rush scream on the iPad Pro. The M1 will be able to edit multiple 4K streams without proxies. The M1 can also play back 8K video flawlessly, which is just not possible on any PC CPU without adding a high-end discrete GPU.
 

name99

Senior member
Sep 11, 2010
404
303
136
Not arguing anything here. I am just stating facts and expressing my discontent with how so many people are picking sides and denigrating others based on dreams that aren't yet realized. I get being excited about things, speculating, anticipating. But this idea of calling AMD or Intel less smart, or holding up Apple as some paragon of the mastery of computer engineering, is a bit out of hand.


I fully agree that the M1 and Apple's work on core design will permit them to move over to Arm with a nice performance/efficiency gain for the workloads most people use their Mac minis, iMacs and Macbooks for. The A12Z was more than enough for this already. But remember that Apple is already a walled-off world in many ways, and moving over to Arm will only exacerbate that.

Here's the thing. The rest of us are discussing the M1 as a technology; you're more interested in your political opinions about Apple, and that slants the way you view M1. Whatever.
But try to get it through your head that the rest of us are interested in the technology. Every time you make a statement about technology that is flat out crazy -- but serves to further your political agenda -- you are losing credibility among the technologists.
Perhaps you would be happier finding a group of similar-minded individuals who prioritize political opinions over truth?
 

Eug

Lifer
Mar 11, 2000
23,583
996
126
Rosetta 2 on M1 is doing pretty well in Geekbench single-core. :D

[Image: Geekbench single-core results for M1 under Rosetta 2]

This is about 75% of native. I would presume this is a translated-at-install score.
 

name99

Senior member
Sep 11, 2010
404
303
136
Who said Apple will scale clock speeds fast?

Also, who said that Apple will scale core counts fast?

Personally I think Apple won't raise clock speeds that much, and I also personally think that Apple will only scale core counts up to a certain level that fits their business model. Apple is not in the server or HPC business.

Apple is not in the server or HPC business YET.

There are many ways to "be in a business". I agree that it makes little sense for Apple to sell traditional-type servers (which prioritize endless backward compatibility over novelty and innovation) or data warehouse racks (which prioritize lowest TCO).
But Apple needs data warehouse racks for its internal use, and there will likely come a point where they can make them cheaper than they can buy/rent them.

Beyond that some Apple customers (specifically developers) require the same sort of functionality that's provided by Azure and AWS, and it makes sense for Apple to move towards providing that. You can't argue that Apple wants to grow services without acknowledging the importance of this particular service.
As always, Apple will not be selling this based on cheapness or flexibility; what they sell will be based on tight integration with the rest of the Apple experience -- familiar APIs, easy ways for Apple users to authenticate and pay, stuff like that.

At some point I expect Apple One to start expanding out to more types of apps (i.e. not just games), perhaps with the model being that developers get paid based on how much users use their apps. Think, for example, of utility-type apps: pay $1 per month extra, and every time you need a specialty app that you only use once a year, you can just use it. Once again, if you're hosting on Apple Web Services, this sort of functionality just comes along for the ride.

At some point all this extra convenience and value (and it's also user convenience! god knows I am SO SICK of every app that requires yet another fscking login, yet another entering of my address and payment info) is just worth it, even if Apple charges more for superficially similar functionality. (And don't forget, that data warehouse functionality may actually be a lot better once you go below the superficial level: faster CPU, easy access to GPU and NPU, better security, ...)

These things take time. Look at history. The first two iPhones used the same (lousy) CPU. The A8 was not much of a performance bump over the A7, which was disappointing if you didn't realize the primary design target (achieved) was to halve the energy usage of the A7.
It took Series 0, Series 1/2, and Series 3 before the Apple Watch became non-maddeningly slow with the Series 4.
People are looking at the M1 and making predictions for all time. Give it a year for the A15 and the Mac Pro-targeted chips to come out. Give it another year for Apple to pull together the various strands of the data center plan. The date of interest is 2025, not January 1, 2021!
 

name99

Senior member
Sep 11, 2010
404
303
136
Speaking of numbers of cores, how far up do you think they'll scale?

It's a lock they'll create 12 core chips for MacBook Pros and iMacs, but presumably that will consist of 8 performance cores and 4 efficiency cores. I'm also thinking they'll have a Mac Pro chip with 12 performance cores, but would it make sense to remove the efficiency cores? And what about beyond that? Would it make sense to create a dual-CPU Mac Pro with 2 x 12 performance cores, for a total of 24 performance cores and no efficiency cores?

Call it an 8-core chip, Eug, or an 8+4!
The fact that the idiots (on both sides) will call it a 12-core chip doesn't mean you have to. You're better than that.
 

Eug

Lifer
Mar 11, 2000
23,583
996
126
Call it an 8-core chip, Eug, or an 8+4!
The fact that the idiots (on both sides) will call it a 12-core chip doesn't mean you have to. You're better than that.
Apple calls their 4+4 chips 8-core. But I'll stick with the x+x nomenclature from now on for clarity's sake.
 

name99

Senior member
Sep 11, 2010
404
303
136
100% agree there. Hopefully we'll have answers soon.

For what it's worth, I too am skeptical that the M1 cores can be scaled up to 5950X type MT workloads.

Based on what? What EXACTLY do you imagine is the limiting constraint in converting a 4 core SoC into an 8 core SoC?

I honestly do not understand 90% of what goes on in this thread. People claim to know and understand technology, but their "understanding" appears to be of cargo-cult nature.
That insanity about vectors in Affinity Photo to me epitomizes everything about this forum. Never in my life would I have imagined that someone who actually claims to understand these things would confuse vector graphics with the utilization of an on-core vector unit. But this is what we are seeing constantly -- people who don't have a clue about the difference between large and small cores giving us opinions about scaling. People who call storage "memory" giving us opinions about DRAM performance.
Unbelievable.
 

Eug

Lifer
Mar 11, 2000
23,583
996
126
Apple is not in the server or HPC business YET.

There are many ways to "be in a business". I agree that it makes little sense for Apple to sell traditional-type servers (which prioritize endless backward compatibility over novelty and innovation) or data warehouse racks (which prioritize lowest TCO).
But Apple needs data warehouse racks for its internal use, and there will likely come a point where they can make them cheaper than they can buy/rent them.

Beyond that some Apple customers (specifically developers) require the same sort of functionality that's provided by Azure and AWS, and it makes sense for Apple to move towards providing that. You can't argue that Apple wants to grow services without acknowledging the importance of this particular service.
As always, Apple will not be selling this based on cheapness or flexibility; what they sell will be based on tight integration with the rest of the Apple experience -- familiar APIs, easy ways for Apple users to authenticate and pay, stuff like that.
I don't buy this. But even if hypothetically they do decide to do their own server farms or whatever, that's gonna be pretty low volume for quite some time to come. That wouldn't be enough justification to come up with a series of dedicated server chips just for this. Instead, they would just repurpose their Mac Pro chips or even iMac chips for this purpose in the near to mid term.
 

name99

Senior member
Sep 11, 2010
404
303
136
I'm not a semiconductor engineer (and I don't even work in the IT industry), so I can't say with any degree of finality whether scaling up to high-core-count designs is as trivial as you claim. But regarding Amazon, it's an assumption on your part that they hired a bunch of engineers with a "negligible history of CPU design." Knowing Amazon and how much money they have, it's doubtful that they did.

Also, the Graviton2 CPU got pretty much destroyed by Zen 2 if I recall, winning just 9% of all the benchmarks they ran on Phoronix.

I have told you before: Graviton is not targeting the sorts of use cases that Larabel is benchmarking. Phoronix benchmarking is the equivalent of benchmarking an iPhone's A7 against an i9 and then laughing that one is much faster than the other, clearly showing that:
- you don't understand the problem the A7 is solving
- you don't understand the quality of the A7 solution, and what it implies for the future.

Do you want to be part of the (not very) in crowd of Phoronix and Larabel? Or do you want to learn something? Because Phoronix is not a place to learn the future of technology; it's a place for a bunch of technology has-beens to get together to reminisce about their glory days, when x86 was all that mattered and people were impressed that you knew how to write a web page.
 

name99

Senior member
Sep 11, 2010
404
303
136
I don't buy this. But even if hypothetically they do decide to do their own server farms or whatever, that's gonna be pretty low volume for quite some time to come. That wouldn't be enough justification to come up with a series of dedicated server chips just for this. Instead, they would just repurpose their Mac Pro chips or even iMac chips for this purpose in the near to mid term.

Well duh. Of course they will use the same SoC for the Mac Pro, iMac Pro, and data warehouse!
That's just obvious.

The issue is not "will they make a special 'server' chip?" (I've no idea what would even define such a thing); it's "will they use their high end chips to provide server/data warehouse functionality?"
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
Meh, I'm gonna call it: everything released in the last 3 months other than Tiger Lake has been vaporware. For the average person not willing to constantly refresh in-stock listings, $499 Nvidia 3070s don't exist and the $299 Zen 3 5600X doesn't exist. Looking at OEMs like Dell / Alienware, Acer, and HP, even on their sites the new stuff (Zen 3 / 30x0 cards) doesn't exist, not even as an option.

I'll be surprised if M1 stock lasts past the first 6 hours after release. Let's see what happens.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
You might get a shock when the results for image-editing applications come out. I'm not sure why Apple processors seem so fast in these sorts of applications, but my iPad Pro 2018 runs circles around my 6-core Mac mini 2018 in Adobe Lightroom in common tasks. Theoretically the Mac mini should be significantly faster.

Probably due to machine learning running on the neural engine. Reviewers should disable hardware acceleration where applicable when testing raw CPU performance.

The M1 can also play back 8K video flawlessly. That is just not possible on any PC CPU without adding a high-end discrete GPU.

What do you think the M1 uses to play back 8K video? It's being hardware accelerated, just like with a discrete GPU that supports it.

Also, you can definitely play back 8K video in software mode, especially if you have a newer hexacore or better CPU.

I know my old 6900K is capable of 8K60fps playback.
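(For what it's worth, on the software side an app can probe for exactly this before choosing a decode path. A minimal sketch using the real VideoToolbox call; note it only answers "is there a hardware HEVC decoder?", and whether 8K specifically fits within the decoder's level limits is a separate question.)

```swift
import VideoToolbox

// Ask the OS whether a hardware HEVC decoder is present. A true result
// means decode can be offloaded from the CPU cores; resolution/level
// limits are a separate property of the decoder itself.
let hevcInHardware = VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC)
print("Hardware HEVC decode available: \(hevcInHardware)")
```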
 

Eug

Lifer
Mar 11, 2000
23,583
996
126
Well duh. Of course they will use the same SoC for the Mac Pro, iMac Pro, and data warehouse!
That's just obvious.

The issue is not "will they make a special 'server' chip?" (I've no idea what would even define such a thing); it's "will they use their high end chips to provide server/data warehouse functionality?"
The discussion earlier had touched on uber-high-core-count chips, like 32 or 64 cores, and my post was partially addressing that. I said Apple wouldn't do this.


What do you think the M1 uses to playback 8K video? It's being hardware accelerated, just like with a discrete GPU that supports it.

Also, you can definitely playback 8K video in software mode, especially if you have a newer hexcore or better CPU.

I know my old 6900K is capable of 8K60fps playback.
Well, yeah, but that's missing the point. EVERY single Arm Mac includes this hardware accelerator, even their sub-$1000 entry-level student Mac, because Apple deems it important.

And the ability to play back 8K video in software is also missing the point. Just try editing that 8K video on the same machine without proxies. Ugh.
 

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
Quotes from your last three posts:

1:
I have told you before.
Do you want to be part of the (not very) in crowd of Phoronix and Larabel? Or do you want to learn something? Because Phoronix is not a place to learn the future of technology; it's a place for a bunch of technology has-beens to get together to reminisce about their glory days, when x86 was all that mattered and people were impressed that you knew how to write a web page.
Patronizing and insulting instead of talking about the technology.

2:
Well duh.
More patronizing.

3:
Based on what? What EXACTLY do you imagine is the limiting constraint in converting a 4 core SoC into an 8 core SoC?

I honestly do not understand 90% of what goes on in this thread. People claim to know and understand technology, but their "understanding" appears to be of cargo-cult nature.
That insanity about vectors in Affinity Photo to me epitomizes everything about this forum. Never in my life would I have imagined that someone who actually claims to understand these things would confuse vector graphics with the utilization of an on-core vector unit. But this is what we are seeing constantly -- people who don't have a clue about the difference between large and small cores giving us opinions about scaling. People who call storage "memory" giving us opinions about DRAM performance.
Unbelievable.
Again, patronizing, no discussion. Just attacking people and insulting them instead of discussing and teaching.



Here's the thing. The rest of us are discussing the M1 as a technology; you're more interested in your political opinions about Apple, and that slants the way you view M1. Whatever.
But try to get it through your head that the rest of us are interested in the technology. Every time you make a statement about technology that is flat out crazy -- but serves to further your political agenda -- you are losing credibility among the technologists.
Perhaps you would be happier finding a group of similar-minded individuals who prioritize political opinions over truth?
It seems like you're the one not all that interested in talking about the technology. Let's get this straight first: you're the one who called Apple "smarter" than AMD and then failed to back that up and discuss why. That's not a factual statement, it's a "political" one (though I think you fail to understand the definition of political...). In response to me, you threw out some statement about better branch predictors, etc., and then went off and cited features of the Arm ISA that have nothing to do with whether Apple is smart or not. You can't just say "because their branch predictors are better" and have us all believe it. Can you explain why? Give references for why Apple's branch prediction in their uarch on Arm is better than AMD's neural or TAGE branch prediction in their uarch on x86.

Again, I'm not the one who brought up these derailing statements about one side being better than the other, you are. I was just responding to it. I've talked plenty about the tech and how it might translate. So we can get back to that. I'm open to listening to what knowledge you have, but when you continue to just attack people (from me to other posters to other sites to entire groups of users to companies/computer engineers) instead of actually discussing the tech, you're not doing anyone, or the discussion, any service.

What statements have I made that are flat-out crazy regarding the technology? I've talked about A12's F/V curve and what that might mean for A14/M1 and I've talked plenty about the marketplace. But if there's something about the technology that I've been flat-out crazy on, please respond to that instead of ad hominem attacks.
 

thunng8

Member
Jan 8, 2013
152
61
101
Probably due to machine learning running on the neural engine. Reviewers should disable hardware acceleration where applicable when testing raw CPU performance.



What do you think the M1 uses to play back 8K video? It's being hardware accelerated, just like with a discrete GPU that supports it.

Also, you can definitely play back 8K video in software mode, especially if you have a newer hexacore or better CPU.

I know my old 6900K is capable of 8K60fps playback.
Most reviewers will be interested in application performance, not pure CPU performance. Does it matter why the application is faster? Most people will care that their particular application is faster, and Apple makes it relatively easy for developers to tap into the extra accelerators available in the M1 SoC.

This is where the M1 will shine. Although its core is arguably tied with desktop Zen 3 as the fastest core on the market, when it comes to real application performance I suspect even the passively cooled MacBook Air will surpass the Zen 3 5950X in many applications.
 