Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
23,586
1,000
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from maybe slight clock speed differences occasionally).

EDIT:

Screen-Shot-2021-10-18-at-1.20.47-PM.jpg

M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


Second Generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K h.264, HEVC (h.265), and ProRes

M3 Family discussion here:

 
Last edited:

DrMrLordX

Lifer
Apr 27, 2000
21,617
10,826
136
But yes, in general you just want to use the provided SIMD intrinsics and leave the rest to the compiler.

Yup. I remember people fawning all over Future Crew and their demos being able to run on what was then not-exactly-great hardware in . . . I don't know, the early 1990s, owing most of their performance to "hand-tuned assembler" versus the kludgey mess that compilers would spit out back then. Of course that was before SIMD and intrinsics. Anyway, times have changed.

optimized in Assembly.

Then how did Phoronix even manage to compile a binary native to M1? If the only available codebase is x86 assembler, the only way to run it would be Rosetta 2. And you still aren't looking at the bigger picture: why does it being "optimized in Assembly" make it somehow faster? I'll tell you why: in terms of raw performance, that's the best way to make use of SIMD. The second-best way being use of intrinsics, and the worst way would be autovectorization via something like OpenMP.

There's likely an alternate codepath for ARM with no assembly or intrinsics providing SIMD support for the application, and that's likely what Phoronix used to compile their native binary for testing.
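To make that concrete, here's a rough sketch (my own illustration, with made-up function names - not code from FLAC or anything Phoronix tested) of what an intrinsics codepath with a plain-C alternative looks like: a NEON path where the intrinsics are available, and a portable loop the compiler is left to autovectorize everywhere else.

```c
/* Hypothetical sketch: summing two float arrays with NEON intrinsics
 * when available, otherwise plain C the compiler may autovectorize. */
#include <stddef.h>

#if defined(__ARM_NEON)
#include <arm_neon.h>

void add_arrays(const float *a, const float *b, float *out, size_t n)
{
    size_t i = 0;
    /* Process 4 floats per iteration using 128-bit NEON registers. */
    for (; i + 4 <= n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);
        float32x4_t vb = vld1q_f32(b + i);
        vst1q_f32(out + i, vaddq_f32(va, vb));
    }
    /* Scalar tail for the remaining 0-3 elements. */
    for (; i < n; i++)
        out[i] = a[i] + b[i];
}

#else

/* Portable fallback: a modern compiler will usually autovectorize this. */
void add_arrays(const float *a, const float *b, float *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

#endif
```

The same source compiles everywhere; only ARM builds pick up the hand-placed SIMD, and no assembly is involved.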
 
  • Like
Reactions: Carfax83 and Tlh97

Eug

Lifer
Mar 11, 2000
23,586
1,000
126
To be honest, it's more of an ego thing on this forum. A lot of people here have Ryzen systems or are AMD fans. The minute their PC master race systems get blown up by a tiny Macbook Air in common applications that most people use, they start to find ways to boost their ego.

For example, people here will point to the Cinebench numbers and say "see, I told you M1 isn't as fast as mobile Ryzen", despite knowing full well that 99.99% of people will never use Cinebench on the Macbook or Ryzen systems. Cinebench is just an AMD-friendly tool to boost the ego of AMD buyers.

Meanwhile, the M1 runs circles around Ryzen in the most commonly used applications including web browsing, hardware-accelerated video editing, AI acceleration, etc. And somehow, saying that the M1 chip is the "fastest laptop CPU" or the "fastest overall laptop chip(SoC)" is somehow controversial here.
Nah, we just called you on your incorrect statements, and you got all butthurt. And apparently, after all this time, you're STILL all butthurt about it.

BTW, I'm an all Mac and iDevice household, aside from one decade old Phenom II slim desktop. That's my AMD "master race system" as you say... with integrated nVidia 9400 graphics and total system 220 Watt power supply.
 
Last edited:

mikegg

Golden Member
Jan 30, 2010
1,755
411
136
Nah, we just called you on your incorrect statements, and you got all butthurt. And apparently, after all this time, you're STILL all butthurt about it.

BTW, I'm an all Mac and iDevice household, aside from one decade old Phenom II slim desktop. That's my AMD "master race system" as you say... with integrated nVidia 9400 graphics and total system 220 Watt power supply.
But I didn't actually make any incorrect statements since no one has actually proved them wrong.

The M1 is the fastest laptop CPU you can buy, period. The M1 is the fastest laptop chip (meaning SoC/APU) you can buy, period.

Nothing I've said is controversial.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,226
5,228
136
Then how did Phoronix even manage to compile a binary native to M1? If the only available codebase is x86 assembler, the only way to run it would be Rosetta 2. And you still aren't looking at the bigger picture: why does it being "optimized in Assembly" make it somehow faster? I'll tell you why: in terms of raw performance, that's the best way to make use of SIMD. The second-best way being use of intrinsics, and the worst way would be autovectorization via something like OpenMP.

It says "Written in C", "optimized in Assembler". I would expect they have the original C code, and use compiler directives to override it with the Assembler code only on appropriate architecture.

But going back to the FLAC example that uses AVX, it makes no sense at all.

Rosetta 2 can't translate AVX:
"Rosetta translates all x86_64 instructions, but it doesn’t support the execution of some newer instruction sets and processor features, such as AVX, AVX2, and AVX512 vector instructions. If you include these newer instructions in your code, execute them only after verifying that they are available."

So it must be using the non-AVX, fallback code, and translating that... And it's still much faster than the native... This makes no sense at all.

Anyway. Phoronix results are "curious" to say the least. But I certainly wouldn't base my decision process on them.
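For reference, the guard Apple is describing usually looks something like the sketch below (hypothetical routine names, not actual FLAC code): check the CPU feature bits at runtime and only then take the AVX path, otherwise fall back to the portable code - which is exactly the branch a Rosetta-translated binary would end up in.

```c
#include <stdio.h>

/* Hypothetical stand-ins for an AVX-optimized routine and its fallback. */
static void encode_avx(void)      { puts("AVX path"); }
static void encode_portable(void) { puts("portable C path"); }

int main(void)
{
#if defined(__x86_64__) && defined(__GNUC__)
    /* __builtin_cpu_supports() (GCC/Clang) checks CPUID at runtime.
     * Rosetta 2 does not report AVX support, so a translated binary
     * takes the fallback branch instead of faulting on AVX code. */
    if (__builtin_cpu_supports("avx")) {
        encode_avx();
        return 0;
    }
#endif
    encode_portable();
    return 0;
}
```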
 

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
But I didn't actually make any incorrect statements since no one has actually proved them wrong.

The M1 is the fastest laptop CPU you can buy, period. The M1 is the fastest laptop chip (meaning SoC/APU) you can buy, period.

Nothing I've said is controversial.
I mean, each sentence in this post is incorrect.

You did make incorrect statements and there are benchmarks to prove those statements wrong. The 2nd and 3rd sentences are false because, at the very least, the 4800U and 4900HS beat it in CB23 MT, and the MBP 16" with an i9 beats it in Final Cut. So whether you label it a CPU or chip or whatever, your 2nd and 3rd sentences are wrong, and there is ample evidence to suggest that for heavier rendering/creative workloads the M1 is not the fastest laptop CPU/chip.

Your final statement is wrong, and hilarious, because clearly it IS controversial.

In any case, the M1 is already a damn good SoC. It doesn't need people lying about it and making premature and exaggerated claims.
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
To be honest, it's more of an ego thing on this forum. A lot of people here have Ryzen systems or are AMD fans. The minute their PC master race systems get blown up by a tiny Macbook Air in common applications that most people use, they start to find ways to boost their ego.

For example, people here will point to the Cinebench numbers and say "see, I told you M1 isn't as fast as mobile Ryzen", despite knowing full well that 99.99% of people will never use Cinebench on the Macbook or Ryzen systems. Cinebench is just an AMD-friendly tool to boost the ego of AMD buyers.

Meanwhile, the M1 runs circles around Ryzen in the most commonly used applications including web browsing, hardware-accelerated video editing, AI acceleration, etc. And somehow, saying that the M1 chip is the "fastest laptop CPU" or the "fastest overall laptop chip(SoC)" is somehow controversial here.
Honestly, it's kinda funny to watch how you fail to see that among almost everyone here it's actually you who's sounding the most fanboy-ish :) It's to the extent that it's really not worth picking apart your nonsense sentence by sentence, I've seen enough of that in the past 15 pages.
 

software_engineer

Junior Member
Jul 26, 2020
8
11
41
It says "Written in C", "optimized in Assembler". I would expect they have the original C code, and use compiler directives to override it with the Assembler code only on appropriate architecture.

But going back to the FLAC example that uses AVX, it makes no sense at all.

Rosetta 2 can't translate AVX:
"Rosetta translates all x86_64 instructions, but it doesn’t support the execution of some newer instruction sets and processor features, such as AVX, AVX2, and AVX512 vector instructions. If you include these newer instructions in your code, execute them only after verifying that they are available."

So it must be using the non-AVX, fallback code, and translating that... And it's still much faster than the native... This makes no sense at all.

Anyway. Phoronix results are "curious" to say the least. But I certainly wouldn't base my decision process on them.

Whilst Rosetta does not support translating the AVX family of SIMD instructions, it does support translating the SSE family of SIMD instructions. FLAC has codepaths that make use of SSE intrinsics in addition to codepaths that make use of AVX intrinsics. The reason that the FLAC benchmark shows greater performance when run via Rosetta is most likely due to the fact that SSE SIMD instructions are translated to NEON SIMD instructions and achieve a greater throughput than can be achieved by compiling the fallback FLAC C codebase to ARM.
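As a rough illustration of that structure (hypothetical code, not the actual FLAC source): an SSE-intrinsics path selected at compile time on x86, which Rosetta 2 can translate to NEON, next to the generic C path a native arm64 build would fall back to if no NEON port exists.

```c
/* Hand-wavy sketch of the structure being described (not FLAC source):
 * an SSE path that Rosetta 2 translates to NEON, plus the generic C
 * fallback a native arm64 build would use. */
#include <stddef.h>

#if defined(__SSE__)
#include <xmmintrin.h>

/* 4-wide SSE: this is what gets translated to NEON under Rosetta 2. */
float dot(const float *a, const float *b, size_t n)
{
    __m128 acc = _mm_setzero_ps();
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        acc = _mm_add_ps(acc, _mm_mul_ps(_mm_loadu_ps(a + i),
                                         _mm_loadu_ps(b + i)));
    float tmp[4];
    _mm_storeu_ps(tmp, acc);
    float sum = tmp[0] + tmp[1] + tmp[2] + tmp[3];
    for (; i < n; i++)           /* scalar tail */
        sum += a[i] * b[i];
    return sum;
}

#else

/* Generic C fallback: correct everywhere, but it leaves SIMD throughput
 * on the table unless the compiler autovectorizes it well. */
float dot(const float *a, const float *b, size_t n)
{
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}

#endif
```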
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
There's likely an alternate codepath for ARM with no assembly or intrinsics providing SIMD support for the application, and that's likely what Phoronix used to compile their native binary for testing.

Of course that is the case. If you "optimize" something in a target-specific way, you never delete the original C implementation. Instead you implement the optimization as an alternative.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
I mean, each sentence in this post is incorrect.

In order to be the fastest, you do not have to win every single benchmark. It is no exaggeration to say that the M1 is by far the fastest CPU in its TDP range (e.g. up to 15 W) if some metric like the geometric mean over a larger set of benchmarks is used.
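For anyone unfamiliar with the metric, a tiny sketch (made-up numbers, just to show the computation): the geometric mean averages the logs of the per-benchmark scores, so a single outlier benchmark cannot dominate the result the way it can with an arithmetic mean.

```c
/* Quick sketch of the suggested metric: geometric mean of relative
 * benchmark scores (hypothetical numbers, not real results). */
#include <math.h>
#include <stddef.h>
#include <stdio.h>

/* Geometric mean = exp(mean of logs). */
static double geomean(const double *scores, size_t n)
{
    double log_sum = 0.0;
    for (size_t i = 0; i < n; i++)
        log_sum += log(scores[i]);
    return exp(log_sum / (double)n);
}

int main(void)
{
    /* Made-up scores relative to some baseline, across 5 benchmarks. */
    double scores[] = { 1.30, 0.92, 1.45, 1.10, 1.05 };
    printf("geomean: %.3f\n", geomean(scores, 5));
    return 0;
}
```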
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,764
3,131
136
In order to be the fastest, you do not have to win every single benchmark. It is no exaggeration to say that the M1 is by far the fastest CPU in its TDP range (e.g. up to 15 W) if some metric like the geometric mean over a larger set of benchmarks is used.
No, you are wrong in this regard. Go and read his initial post that started all of this; he set the frame of reference, and in the context of that frame of reference he is wrong. That is why people like @Eug, who are very excited by the M1 and pro-Apple in general, are disagreeing with him. You can try to move the goalposts on his behalf all you want, but that's the problem with being definitive, wrong, and stubborn: you leave yourself no place to go :).
 

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
In order to be the fastest, you do not have to win every single benchmark. It is no exaggeration to say that the M1 is by far the fastest CPU in its TDP range (e.g. up to 15 W) if some metric like the geometric mean over a larger set of benchmarks is used.
That wasn't his argument...

As for a larger set of benchmarks, surely one could select a set of benchmarks, heavily skewed toward whatever point one wants to make. We could go all day saying, yes, the M1 is the fastest mid-high range processor for most people's needs - that is fully true (though it may not be the best value for most people). Or you could say that the MBA is the fastest among laptops that draw at most x watts at peak power draw, where x is some arbitrary number that excludes more competitive chips, and so on. Making blanket statements is almost always fraught with assumption and fallacy, and his case is no exception.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
That wasn't his argument...

Not sure who is "he" or "his" or what the argument was.
I was explicitly replying to you...where you tried to use examples to challenge that M1 is the fastest. I believe that just using random examples to make a point is not a particularly valid argument. I was also not talking about carefully choosing the benchmark in order to skew results. Sure one can do this - but then the method of determining the fastest would be questionable to say the least.
 

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
It has been 13 days since the thread started, with 42 pages of comments. Trying to recreate the initial gestalt of intent and points is a mess.

This is my opinion after doing the reading.

  • A) The M1 is fast.
  • B) It succeeded at its goals, those being Apple management's goals: Apple is convinced it can sell a lot of these. Whether it makes consumers happy is unfalsifiable at this time; we will know in, let's say, 2 years.
  • C) It did not succeed at all of my goals, "me personally." I would want more than 2 USB-C / Thunderbolt ports: either some USB-A ports or more USB-C, with one Thunderbolt port on the left and one on the right. The ports need not all be Thunderbolt. But that is my problem!
  • D) This is going to be the slowest Apple Silicon Mac ever to be sold. Any future chips (X, Z, or subsequent numbers) will likely be faster.
  • E) Compatibility is good but not perfect. Compatibility is better than almost any previous major emulation project.
  • F) This TSMC 5nm chip is faster than any Intel 14nm laptop chip. It can be in the same "range" as other TSMC 7nm chips in some tasks, and in other tasks Apple Silicon is much faster. (I personally do not care whether one chip is faster than another; I want to know whether they are trading blows on a task, +/-20%, vs. a blowout of greater than 20%.)
  • G) Battery Life is great.
  • H) Price is good even though I always like cheaper stuff.
I may be forgetting things.
 
Last edited:

Heartbreaker

Diamond Member
Apr 3, 2006
4,226
5,228
136
If anyone remembers the early tests where 8K video was a bit rough on the 8 GB M1 Mac (still MUCH worse on comparable Intel machines with significantly more RAM), there was some speculation that it was GPU limited, but it might have just been RAM limited. Here are some M1 8 GB vs M1 16 GB benchmarks; the biggest change is in the 8K-to-4K video export, but that's an outlier, and most results are close:

8k-raw-to-4k-export-m1-macbook.jpg
 

Thunder 57

Platinum Member
Aug 19, 2007
2,674
3,796
136
But I didn't actually make any incorrect statements since no one has actually proved them wrong.

The M1 is the fastest laptop CPU you can buy, period. The M1 is the fastest laptop chip (meaning SoC/APU) you can buy, period.

Nothing I've said is controversial.

I have a bit of constructive criticism for you. After reading many pages, and indeed this post, please understand that the "." is known as a period and is sufficient for ending a sentence. You do not need to "double period" a sentence. Yea, I get that you are trying to bring extra emphasis to your point, but when you do it all the time, it loses its effect and is annoying to read. Period. By the way, you have made false statements. Period. (Saying "period" also doesn't mean the debate is over and what you just said is true.)
 

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
Not sure who is "he" or "his" or what the argument was.
I was explicitly replying to you...where you tried to use examples to challenge that M1 is the fastest. I believe that just using random examples to make a point is not a particularly valid argument. I was also not talking about carefully choosing the benchmark in order to skew results. Sure one can do this - but then the method of determining the fastest would be questionable to say the least.
Agree!

Let me re-state my point I guess for that person's sake, and to clarify what I meant.

We have very limited benchmarks right now. We can't just take the limited benchmarks, for example, from Anandtech, Puget Systems, or The Verge - and say it's the fastest. Using just the benchmarks in AT's review is fraught with issues, as one example, since they ran only two relatively unconstrained, somewhat real-world CPU tests against other laptop chips, CB23 MT and Speedometer *** (please see below), where two AMD chips, including one at 15W TDP, beat it (and by "it" I mean the Mac mini with M1, not the MBP or MBA) pretty handily in one, and the M1 won pretty handily in the other. But we didn't even get a GB5 MT score against other AMD laptop chips. And in SPEC 2017 rate-N, the M1 in the Mac mini loses to the 4900HS.

My point being that right now, it's damn impressive at its power consumption, but probably due to sheer core/thread disadvantage, it still loses to Zen2-based laptop APUs in 2 of the 3 non-ST tests where it was compared to laptop chips in AT's review. Calling it the fastest laptop CPU or fastest laptop chip is at best premature, and most likely wrong for the types of benchmarks that are typically run by tech sites on CPUs/chips.

*** I am not sure what the setup of AT's Speedometer test was, but its results are off... I suspect they ran it in Safari on all computers. Speedometer on my Ryzen 3600 scores 140 in Chrome and 142 in Edge. On my iPhone it's 146 in Chrome. That they only got 140 for a 5950X tells you that this test is a bit fishy. I mean, if they want to use Apple's browser optimized for macOS/iOS and Arm on x86 Windows PCs as the comparison... whatever. But I think it must be pointed out that this test is a bit of a stacked deck.
 
Last edited:
  • Like
Reactions: Carfax83 and Tlh97

Eug

Lifer
Mar 11, 2000
23,586
1,000
126
If anyone remembers the early tests where 8K video was a bit rough on the 8 GB M1 Mac (still MUCH worse on comparable Intel machines with significantly more RAM), there was some speculation that it was GPU limited, but it might have just been RAM limited. Here are some M1 8 GB vs M1 16 GB benchmarks; the biggest change is in the 8K-to-4K video export, but that's an outlier, and most results are close:

8k-raw-to-4k-export-m1-macbook.jpg
Interesting.

As I suspected, M1 is not magic, and memory still really matters. There are claims out there that M1 just doesn't need 16 GB, but obviously that doesn't make much sense.* The other factor to note was that the swap was bigger on the 8 GB model. However, it just so happens that the SSD is very fast at >3 GB/s so people will notice the swap less.

BTW, the reason I mentioned GPU for 8K earlier was because the guy doing the review had a CPU and GPU monitor active at the same time, and the times his M1 ran into issue with various 8K clips, the monitor indicated that the GPU was pegged at 100%.

But then again, I don't know how that software determines the GPU workload, and the GPU and CPU have access to the same memory, so...

*Maybe I'm biased, cuz I have 16 GB in my 12" MacBook. :p
 
Last edited:
  • Like
Reactions: Tlh97

Heartbreaker

Diamond Member
Apr 3, 2006
4,226
5,228
136
We have very limited benchmarks right now. We can't just take the limited benchmarks, for example, from Anandtech, Puget Systems, or The Verge - and say it's the fastest. Using just the benchmarks in AT's review is fraught with issues, as one example, since they ran only two relatively unconstrained, somewhat real-world CPU tests against other laptop chips, CB23 MT and Speedometer *** (please see below), where two AMD chips, including one at 15W TDP, beat it (and by "it" I mean the Mac mini with M1, not the MBP or MBA) pretty handily in one, and the M1 won pretty handily in the other. But we didn't even get a GB5 MT score against other AMD laptop chips. And in SPEC 2017 rate-N, the M1 in the Mac mini loses to the 4900HS.

Seems a little tilted; let's try an alternate viewpoint.

First, let's consider single-threaded performance.

I would bet if you averaged all the tests, this was the overall winner, and if it wasn't, the ONLY challenger would be 5950x. Over notebook brethren, single thread win is decisive.

Against 4 Core parts, the win is also decisive.

Now for full MT with the best laptop competition beyond 4 cores.
4900HS wins CB MT.
4900HS wins Spec2017 Int MT.
4900HS loses Spec2017 fp MT.
You can check GB MT for the 4900HS loses there as well.

Overall I would call that a win for the M1. The only chance the 4900HS has is in fully parallel workloads, where it trades blows with the M1 and might have a slight edge, but fully parallel workloads aren't going to be the most typical on laptops.

And that is just looking at only the CPU performance, and ignoring all the extras, like the fastest integrated GPU, significant ML cores, and impressive media encoders, that all improve the user experience even more.

IMO, right now this is the overall best laptop chip you can get. Sure it doesn't win every benchmark, but it's mostly dominating outside edge cases.
 

coercitiv

Diamond Member
Jan 24, 2014
6,187
11,853
136
Over notebook brethren, single thread win is decisive.

And that is just looking at only the CPU performance, and ignoring all the extras, like the fastest integrated GPU, significant ML cores, and impressive media encoders, that all improve the user experience even more.
FYI this was the exact marketing speech Intel had with the launch of Tiger Lake: ST performance wins in notebooks, GPU lead is paramount, ML features improve user experience even more. This has not slowed down Renoir even a bit.
 
  • Like
Reactions: Tlh97

jeanlain

Member
Oct 26, 2020
149
122
86
FYI this was the exact marketing speech Intel had with the launch of Tiger Lake: ST performance wins in notebooks, GPU lead is paramount, ML features improve user experience even more. This has not slowed down Renoir even a bit.
But the M1 also has an exceptional perf/W ratio, which results in silent operation and long battery life. And it's available in large quantities now. I think TGL laptops are not that common, are they?
 

mikegg

Golden Member
Jan 30, 2010
1,755
411
136
Seems a little tilted; let's try an alternate viewpoint.

First, let's consider single-threaded performance.

I would bet if you averaged all the tests, this was the overall winner, and if it wasn't, the ONLY challenger would be 5950x. Over notebook brethren, single thread win is decisive.

Against 4 Core parts, the win is also decisive.

Now for full MT with the best laptop competition beyond 4 cores.
4900HS wins CB MT.
4900HS wins Spec2017 Int MT.
4900HS loses Spec2017 fp MT.
You can check GB MT for the 4900HS loses there as well.

Overall I would call that a win for the M1. The only chance the 4900HS has is in fully parallel workloads, where it trades blows with the M1 and might have a slight edge, but fully parallel workloads aren't going to be the most typical on laptops.

And that is just looking at only the CPU performance, and ignoring all the extras, like the fastest integrated GPU, significant ML cores, and impressive media encoders, that all improve the user experience even more.

IMO, right now this is the overall best laptop chip you can get. Sure it doesn't win every benchmark, but it's mostly dominating outside edge cases.
This nails it. Why aren't people seeing what you're seeing? Are they reading completely different benchmarks?

Saying that the M1 is the best overall laptop chip you can get is not controversial. Saying the M1 has the highest performing CPU in a laptop chip is not controversial.



The 2nd and 3rd sentences are false because, at the very least, the 4800U and 4900HS beat it in CB23 MT, and the MBP 16" with an i9 beats it in Final Cut. So whether you label it a CPU or chip or whatever, your 2nd and 3rd sentences are wrong, and there is ample evidence to suggest that for heavier rendering/creative workloads the M1 is not the fastest laptop CPU/chip.
[Attached: benchmark screenshots]
 

jeanlain

Member
Oct 26, 2020
149
122
86
About the 3D performance of the M1...
There aren't many GPU benchmark tools that are native to ARM. There's GFXBench, which is rather basic, and some iOS 3DMark tests used by Anandtech. The latter rely on the ancient OpenGL ES 2, which is deprecated on Apple platforms and probably runs on top of Metal through a translation layer.
But 3DMark Wild Life uses Metal/Vulkan and is designed to be cross-platform. Unfortunately, it's not available on the macOS App Store, for unclear reasons (considering that the older iOS OpenGL ES benchmarks are).
But someone at MacRumors managed to get it running on the M1 Mac mini using a trick. The score: 18031. That is more than 2X the A14 score and 40% better than the Intel Xe with 96 EUs (and more than 2.3 times better than whatever Vega is in the Ryzen 4800U).
The quality of Intel and AMD Vulkan drivers may be responsible for some of the difference, but I don't think it can explain much. There are notable Vulkan games that AMD likes to showcase; they can't be neglecting their Vulkan drivers.

Some may say it's a cherry-picked result. I say it's probably more representative of the GPU performance than results based on commercial games. Basically every Mac game is ported from Windows using some translation tool to convert DX calls to Metal calls. Even Feral Interactive, which is known for its high-quality ports, uses such a tool (called "IndirectX"). Porting will generally incur a performance penalty, which is confirmed by the fact that nearly every game runs better under Windows via Boot Camp than under macOS on the same Mac. Sometimes the performance difference is staggering, with the Mac version running at half the speed, or even worse.
Baldur's Gate III may be different (even though the Mac version is handled by a porting house), in the sense that they apparently took great care to optimise it for the M1. But the M1 version is not available yet.
 

Gideon

Golden Member
Nov 27, 2007
1,619
3,645
136
Seems a little tilted; let's try an alternate viewpoint.

First, let's consider single-threaded performance.

I would bet if you averaged all the tests, this was the overall winner, and if it wasn't, the ONLY challenger would be 5950x. Over notebook brethren, single thread win is decisive.

Against 4 Core parts, the win is also decisive.

Now for full MT with the best laptop competition beyond 4 cores.
4900HS wins CB MT.
4900HS wins Spec2017 Int MT.
4900HS loses Spec2017 fp MT.
You can check GB MT for the 4900HS loses there as well.

Overall I would call that a win for the M1. The only chance the 4900HS has is in fully parallel workloads, where it trades blows with the M1 and might have a slight edge, but fully parallel workloads aren't going to be the most typical on laptops.

And that is just looking at only the CPU performance, and ignoring all the extras, like the fastest integrated GPU, significant ML cores, and impressive media encoders, that all improve the user experience even more.

IMO, right now this is the overall best laptop chip you can get. Sure it doesn't win every benchmark, but it's mostly dominating outside edge cases.

I also don't get the obsession with highly MT tests where 8-core chips win very slightly. Once 6 or 8 big-core M chips start rolling out, they will be very competitive even with AMD's 12- and 16-core desktop chips.
 

coercitiv

Diamond Member
Jan 24, 2014
6,187
11,853
136
But the M1 also has exceptional perf/W ratio, which result in silent operation and long battery life. And it's available in large quantities now. I think TGL laptops are not that common, are they?
Whether the M1 is a far better product than TGL was not the subject of my observation, but rather the arguments themselves, which cannot stand on their own two feet without proper context - which is the complete experience the product offers.