Question x86 and ARM architectures comparison thread.


mikegg

Golden Member
Jan 30, 2010
1,918
528
136
You are very right that the 395 boosts further out of its optimal efficiency range for single core than the M4 does to maximize performance (which they need, because Apple cores are very fast). But IMO it's fairer for architectural comparisons (core vs core only) to run both cores at a similar point on their efficiency curves, which is what happens in multicore loads on most sanely configured processors, including these two. That way you're not just testing how much power the SKU is allowed to pull (which can make any processor arbitrarily inefficient).
Sure, you can run any chip at its most efficient point. Then AMD chips will be significantly behind in performance - even more so than right now, where the M4 has a giant lead in ST over Strix Halo.


AMD 395: 116.8 points in CB24 ST at 44.24 W, or about 2.64 points/watt.

M4 Pro: 178 points in CB24 ST at 8.9 W, or about 20.0 points/watt.

So the M4 Pro is 52.4% faster in ST while also being roughly 7.6x as power efficient.
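
For anyone who wants to check the math, here's a quick sketch using the quoted figures (the labels are mine; the scores and watts are as reported above):

```python
# Sanity-checking the CB24 ST figures quoted above.
amd_score, amd_watts = 116.8, 44.24   # the "395" (Strix Halo), as quoted
m4_score, m4_watts = 178.0, 8.9       # M4 Pro, as quoted

amd_eff = amd_score / amd_watts       # ~2.64 points/watt
m4_eff = m4_score / m4_watts          # ~20.0 points/watt

print(f"ST speedup:     {m4_score / amd_score - 1:.1%}")  # ~52.4%
print(f"Efficiency gap: {m4_eff / amd_eff:.1f}x")         # ~7.6x
```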


0.1 GHz higher clock speeds on one core and 30 MB of L3 cache aren't going to increase power consumption by 16 W, lol.
Sure they can. 16 W is not a lot.
 

DavidC1

Golden Member
Dec 29, 2023
1,711
2,778
96
It's still ~16W system power for both R23 and R24 ST on the MacBook.

On R23 ST the system power is:
16W for M4 Pro
20W for Lunarlake
30W for Ryzen 365
 

johnsonwax

Senior member
Jun 27, 2024
267
432
96
I don't know where society collectively decided that protection at all costs, even to the point of denial, is for the good of society.

The faster they realize the x86 camp is *that* behind, the faster they'll get off their fat, lazy asses and do something about it.

I love PCs. I hope they do well. But the fact that I have to fight them about this suggests that it's going to take a LOT more pressure to wake them up.
In all fairness, Factorio hardly touches the GPU and is mainly constrained by single-thread performance and memory speed. The X3D does give a meaningful benefit to the game, but their 4080s, or whatever the top-of-the-line GPU was 18 months ago, did nothing (not that they expected it to). So it's pretty much the ideal candidate game, one that people actually care about max performance on, to run on this laptop. For most other things their gaming PCs would clobber this. And Apple has a real problem if they want to make a serious entry into gaming, because they don't really have a solution for a proper GPU. Getting CP2077 ported is nice and all, but Apple is not really in the running when it comes to games like that.

So I don't really think it's something that x86 needs to worry about. I do wonder what an Apple Steam Deck might do, though, even if it relied on Rosetta.
 

poke01

Diamond Member
Mar 8, 2022
3,908
5,225
106
Yea, I updated my post. It's R23 vs R24.
[Attached chart: M4 Max wall power for R23 and CB24]

This is for the M4 Max and includes both R23 and CB24. Yeah, these are wall-meter readings that include board power. It makes sense why they do this since these are laptop reviews, but for the SoC article it more than confirms the readings were software-based.

Not only that, many others report 46-50 watts as peak load power for the 14-core M4 Pro, which I can only assume is through powermetrics.
 

DavidC1

Golden Member
Dec 29, 2023
1,711
2,778
96
This is for the M4 Max and includes both R23 and CB24. Yeah, these are wall-meter readings that include board power. It makes sense why they do this since these are laptop reviews, but for the SoC article it more than confirms the readings were software-based.

Not only that, many others report 46-50 watts as peak load power for the 14-core M4 Pro, which I can only assume is through powermetrics.
Ok, I think 46W is CPU power, because they say this about Ryzen:
The only exception in this case is the Ryzen AI 9 HX370 inside the Asus TUF A14, which can permanently consume 80 watts and performed slightly better in the CB-R23 test.
In the Asus TUF review it says this:
When running Prime95, for example, the CPU would boost to 3.8 GHz and 80 W and then maintain those targets indefinitely.
 

511

Diamond Member
Jul 12, 2024
3,296
3,214
106
The problem with powermetrics is that its CPU reading is akin to IA core power, not package power.
powermetrics reports CPU+GPU+ANE; the rest of the package, i.e. the uncore, is missing.
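
For reference, here's roughly how you'd pull those domains out of powermetrics yourself - a minimal sketch, assuming a recent Apple Silicon Mac; it needs sudo, and the exact field labels vary by macOS version, so treat the parsing patterns as assumptions:

```python
import re
import subprocess

# One-shot powermetrics sample (Apple Silicon, needs sudo).
# Caveat per the post above: these are the on-die CPU/GPU/ANE rails only.
# DRAM, fabric, and the rest of the package/board are NOT included,
# which is why wall-meter readings come out noticeably higher.
out = subprocess.run(
    ["sudo", "powermetrics", "--samplers", "cpu_power", "-i", "1000", "-n", "1"],
    capture_output=True, text=True, check=True,
).stdout

# Field labels differ across macOS versions; adjust as needed.
for label in ("CPU Power", "GPU Power", "ANE Power", "Combined Power"):
    match = re.search(rf"{label}[^:]*:\s*(\d+)\s*mW", out)
    if match:
        print(f"{label}: {int(match.group(1)) / 1000:.2f} W")
```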
 

DavidC1

Golden Member
Dec 29, 2023
1,711
2,778
96
So I don't really think it's something that x86 needs to worry about. I do wonder what an Apple Steam Deck might do, though, even if it relied on Rosetta.
It's a worry because they have a far superior architecture. Yes, desktops are more resilient thanks to factors like modularity and easy upgradability. Their notebooks, however, have a dimmer future.

You know the battery degradation inherent in most laptops? Ever since I got a fanless device, I've realized it's better in that regard as well. My Kaby Lake-Y is only 20% degraded in battery, and I bought it used.

The hardest part of CPU design is the uarch. MT and server are hard too, but not as hard. Woodcrest and Clovertown caught flak for supposedly sucking against AMD's Opteron, even though the core was good. Once Intel got a proper point-to-point interconnect and an integrated memory controller, it was game over for AMD, because the core's performance got uncorked. Servers and MT are the easier part.

x86 is already affected by the rise of ARM. Look at how poorly Intel is doing.
 

OneEng2

Senior member
Sep 19, 2022
736
982
106
You miss one really important point: Some are faster than others, smarter than others, and more capable than others. And when you have a group of people, that effect is magnified, because the smart people might be stressed, worried, or apathetic.
I guess, having interacted with lots of very fine engineers from a great number of companies, I find it impossible to believe that AMD and Intel do not have incredible teams of engineers working on their products - and they have been doing it for quite a bit longer than Apple - so I'm not buying the idea that Apple is better because they are smarter.

Seems like this thread has been claimed by the ARM advocates, with total disregard for the fact that server and workstation are OWNED by x86 at this time, and that the few ARM comparisons I have linked to are getting pummeled in DC and workstation applications.

If ARM is really a "superior" design, how is it that it gets decimated in DC and workstation?

I'm not even saying that ARM is bad, just that it isn't great at everything and that there are reasons that this is true.
 

poke01

Diamond Member
Mar 8, 2022
3,908
5,225
106
The problem with powermetrics is that its CPU reading is akin to IA core power, not package power.
powermetrics reports CPU+GPU+ANE; the rest of the package, i.e. the uncore, is missing.
How big will the difference be? 2-3 watts extra for the M4 package?
 

poke01

Diamond Member
Mar 8, 2022
3,908
5,225
106
If ARM is really a "superior" design, how is it that it gets decimated in DC and workstation?
Tell me one company that is offering X925-based Neoverse cores in DC or a workstation, and there is your answer. Plus, a lot of SIMD benchmarks are optimised more for x86. No one is arguing that AMD's 5th-gen EPYC is weaker than some Neoverse V2 or Grace CPU. The current ARM servers suck because they are bad in terms of perf, not because the ISA itself sucks for DC.

It's like comparing Zen 2 to Apple M1 and asking why Zen 2 gets beaten in client workloads. Does that mean x86 is bad for client? No, it doesn't.
 

johnsonwax

Senior member
Jun 27, 2024
267
432
96
If ARM is really a "superior" design, how is it that it gets decimated in DC and workstation?

I'm not even saying that ARM is bad, just that it isn't great at everything and that there are reasons that this is true.
Because engineering isn't free. And the idea that ARM could even scale into that space is a rather new one. Apple releasing a 64-bit ARM mobile processor was 12 years ago, and everyone lost their damn minds over it. Be honest: how many people here thought, when Apple announced the move to Apple Silicon, that it would even be competitive with x86 on the desktop? I bet not many. That was 5 years ago.

You're arguing that ARM isn't superior, so why does AWS deploy it? Because it's cheaper and more flexible for what they do. You're kind of suggesting that Graviton is less performant because it can't be more, rather than because Amazon doesn't need it to be. We know there's a non-linear relationship between performance and power, so what's the Pareto-optimal design for Amazon profitability? I'm willing to bet it's not maximum performance, because the power bill, rack density (or whatever they do at that scale), and so on are all variables in that equation.

What we aren't sure of is whether anyone has actually tried to compete with x86 on DC performance. That may be a lack of software/demand to target ARM at performance applications, or a lack of need to design that product because AMD is right there but isn't capable of hitting the perf/watt needed to maximize profits. (AWS used to promote ARM for fractional compute like Lambda, where performance really doesn't matter.)
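
To make the non-linear point concrete, here's a toy model - the cubic power law and the constants are purely illustrative assumptions, not measurements of any real core:

```python
# Toy model of the perf-vs-power tradeoff: performance scales roughly
# linearly with frequency, while power grows roughly with frequency
# cubed once voltage has to rise along with the clocks.
def perf(f_ghz: float) -> float:
    return f_ghz                  # arbitrary units

def power(f_ghz: float) -> float:
    return 0.5 * f_ghz ** 3       # watts; the 0.5 is an arbitrary constant

for f in (2.0, 3.0, 4.0, 5.0):
    p, w = perf(f), power(f)
    print(f"{f:.1f} GHz: perf {p:.1f}, power {w:5.1f} W, perf/W {p / w:.3f}")

# perf/W falls as 1/f^2: going from 4 to 5 GHz buys 25% more perf for
# ~2x the power, which is why a hyperscaler tuning for perf/W and rack
# power has little reason to chase peak ST clocks.
```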
 

Doug S

Diamond Member
Feb 8, 2020
3,369
5,916
136
I don't know where society collectively decided that protection at all costs, even to the point of denial, is for the good of society.

The faster they realize the x86 camp is *that* behind, the faster they'll get off their fat, lazy asses and do something about it.

I love PCs. I hope they do well. But the fact that I have to fight them about this suggests that it's going to take a LOT more pressure to wake them up.

Uh, what are THEY going to do about it? Unless he's gaming with Intel or AMD employees, they don't have much influence. It isn't like they would switch to a Mac regardless of what they believed about its performance, because no doubt they play other games that simply aren't available on the Mac.
 

Doug S

Diamond Member
Feb 8, 2020
3,369
5,916
136
If ARM is really a "superior" design, how is it that it gets decimated in DC and workstation?

I'm not even saying that ARM is bad, just that it isn't great at everything and that there are reasons that this is true.

Just to clarify, I'm definitely not arguing that ARM is "superior" - I don't think either ISA has any real advantage or disadvantage over the other, neither x86 in mobile nor ARM in DC.

As for why ARM gets decimated in DC/workstation: because the best cores have not been used in anything targeted at the DC or workstation market. Apple, for obvious reasons - they care about selling products to consumers, and consumers don't buy servers or Threadripper-level workstations. Qualcomm, who may soon join the "best cores" club, because they haven't had time to get there yet. But it sounds like servers are in their plans somewhere, so we'll have to see. That is supposedly one of the reasons GW III left Apple in the first place, so you'd think that if he can't do servers at Qualcomm he'd leave and play the startup game again.

The same is true for x86 in mobile. If Intel had been SERIOUS about their attempt, and had put their best teams on Atom and given it access to leading-edge processes, they could have made it a success in mobile. But it was a half-hearted effort with B- or C-team designers and one-or-two-generation-old processes, so it was doomed to failure. That core could have grown up like Apple's did, and maybe would eventually have beaten out their P core.
 

adroc_thurston

Diamond Member
Jul 2, 2023
6,204
8,716
106
Apple, for obvious reasons - they care about selling products to consumers, and consumers don't buy servers or Threadripper level workstations.
That's cope; Apple tried to make 4T setups work and failed horribly.
That's why the Mac Pro (yes, it's a thing; yes, it's a real market) is orphaned.
That is supposedly one of the reasons why GW III left Apple in the first place, so you'd think if he can't do servers at Qualcomm he'd leave and play the startup game again.
I really-really want to see QC ship a server SoC.
AMD killed every merchant ARM Si vendor to date, and another head on the trophy rack would be good fun for Mr. Norrod.
 

Doug S

Diamond Member
Feb 8, 2020
3,369
5,916
136
That's cope, Apple tried to make 4T setups work and failed horribly.
That's why Mac Pro (yes it's a thing. yes it's a real market) is orphaned.

What is a "4T setup"?

The Mac Pro isn't playing in the same workstation market as Threadripper and dual-socket Xeon systems are. It isn't even trying to. It's probably about the highest-end system a consumer might buy, and maybe too high for their liking (I'm sure more of them are sold to small businesses and universities than to consumers). If they went even higher end, there would no longer be even an illusion that it is a consumer product, just as Threadripper and Xeon workstations aren't.

It might be fun for us to speculate on the performance of something crazy, say 8 M4 Max SoCs linked together, but Apple's product managers don't make decisions on "what would be cool to talk about in Anandtech forums"; they decide based on whether they can sell enough of something to recoup development and ongoing support costs. And it can't be a product targeted solely at business, because Apple does not sell ANY products targeted solely at business - or even mostly at business, other than perhaps the Mac Pro.
 

adroc_thurston

Diamond Member
Jul 2, 2023
6,204
8,716
106
What is a "4T setup"?
4 tiles.
Ultra is 2T.
The Mac Pro isn't playing in the same workstation market as Threadripper and Xeon based dual socket systems are
Yeah it does; there quite literally were dual-socket Mac Pros (Westmere-based ones, I think).
TR workstations are single-socket anyway.
It might be fun for us to speculate on the performance of something crazy with say 8 M4 Max SoCs linked together, but Apple's product managers don't make decisions on "what would be cool to talk about in Anandtech forums" but on whether they can sell enough of something to recoup development and ongoing support costs.
Well, that's not how any of that works.
With 4T, the NRE is already mostly spent on Max/Ultra, you see.
Only it didn't work. Too bad!
 

poke01

Diamond Member
Mar 8, 2022
3,908
5,225
106
There never was a CPID for a 4-tile M chip. It's just ramblings from Mark Gurman; he's the one that constantly makes things up. Lately he said Hidra was the code name for the Mac Pro chip, and it turned out to be the code name for the base M5.

If Apple had ever tested a 4-tile chip, it would have shown up in macOS, but it never did; not even prototype devices have references to it.
 

mikegg

Golden Member
Jan 30, 2010
1,918
528
136
If ARM is really a "superior" design, how is it that it gets decimated in DC and workstation?
DC - see my post on the first page. ARM is already 50%+ of all hyperscaler deployments - much higher than 50% on AWS, actually. So no, it's not getting decimated; it's already winning in datacenters. Also, Nvidia has gone all-in on ARM CPUs, and they're the biggest DC company in the world by far.

Workstation - very small market. Very very small. Almost negligible. So small that sometimes AMD "forgets" to release new Threadripper chips. ARM hasn't competed here seriously with the exception of Mac Studio and Mac Pro.
 

adroc_thurston

Diamond Member
Jul 2, 2023
6,204
8,716
106
DC - see my post on the first page. ARM is already 50%+ of all hyperscaler deployments
This is not a sustainable business model for ARM.
It's just Soviet third-worldism for hyperscalers, and you know well enough how that ended.
Workstation - very small market. Very very small. Almost negligible
yes I know.
Things Where I Lose don't matter.
So small that sometimes AMD "forgets" to release new Threadripper chips
They haven't forgotten in 8 years.