> But it's only testing Cinebench in that chart
Yea, I updated my post. It's R23 vs R24.
> You are very right that the 395 boosts further out of its optimal efficiency range for single core than the M4 does to maximize performance (which they need, because Apple's cores are very fast), but IMO it's fairer for architectural comparisons (core vs. core only) to run both cores at a similar point on their efficiency curves, which is what happens in multicore loads on most sanely configured processors, including these two. That way you're not just testing how much power the SKU is allowed to pull (which can make any processor arbitrarily inefficient).
Sure, you can run any chip at its most efficient point. Then AMD chips will be significantly behind in performance - even more so than right now, where the M4 has a giant lead in ST over Strix Halo.
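To put numbers on the "arbitrarily inefficient" point - a minimal sketch where the curve shape and constants are made up for illustration, not measured from either chip:

```python
# Model a core's efficiency curve as perf ~ k * power**alpha with alpha < 1,
# shorthand for the last few hundred MHz costing disproportionate power.
# k and alpha are made-up illustrative constants, not vendor data.
def perf(power_w, k=4.0, alpha=0.5):
    return k * power_w ** alpha

for p in (5, 10, 20, 40):  # hypothetical per-core power points in watts
    print(f"{p:>2} W -> perf {perf(p):4.1f}, perf/W {perf(p) / p:.2f}")
# perf/W falls as the allowed power rises, so comparing two cores at very
# different operating points mostly measures the power limit, not the core.
```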
> 0.1 GHz higher clock speeds on one core and 30 MB of L3 cache aren't going to increase power consumption by 16 W, lol.
Sure it can. 16 W is not a lot.
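Back-of-envelope, assuming dynamic power scales roughly with f*V^2 and a made-up 20 W single-core baseline, just to show what a 0.1 GHz bump alone actually buys:

```python
# Dynamic power ~ C * V^2 * f, so a small frequency bump only costs real power
# if it also rides a meaningful voltage increase. All numbers are hypothetical.
def dyn_power(base_w, f_ratio, v_ratio):
    """Scale a baseline dynamic power by frequency and voltage ratios."""
    return base_w * f_ratio * v_ratio ** 2

base = 20.0                              # assumed single-core boost power in watts
bump = dyn_power(base, 5.1 / 5.0, 1.02)  # +0.1 GHz on a 5 GHz clock, +2% voltage
print(f"{bump - base:.1f} W extra")      # ~1.2 W from the clock bump alone; a 16 W
                                         # gap would have to come mostly from voltage,
                                         # leakage, or other SKU differences
```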
> I don't know where society collectively decided that protection at all costs, even to the point of denial, is for the good of society.
In all fairness, Factorio hardly touches the GPU and is mainly constrained by single-thread performance and memory speed. The X3D does give a meaningful benefit in that game, but their 4080s or whatever the top-of-the-line GPU was 18 months ago did nothing (not that they expected it to). So it's pretty much the ideal candidate, among games anyone actually cares about max performance in, to run on this laptop. For most other things their gaming PCs would clobber this. And Apple has a real problem if they want to make a serious entry into gaming, because they don't really have a solution for a proper GPU. Getting CP2077 ported is nice and all, but Apple is not really in the running when it comes to games like that.
The faster they realize the x86 camp is *that* behind, the faster they'll get off their fat, lazy asses and do something about it.
I love PCs. I hope they do well. But the fact that I have to fight them about this suggests that it's going to take a LOT more pressure to wake them up.
> Factorio
Do you mind sharing your factory? Would be a good 1t benchmark for us here.
> This is for M4 Max which includes both R23 and CB24. Yeah, these are wall meter readings that include board power. I mean, it makes sense why they do this since these are laptop reviews, but for the SoC article it more than confirms it was software based.
Ok, I think 46W is CPU power, because they say this about Ryzen:
Not only that, many others report 46-50 watts as peak load power for the 14-core M4 Pro, which I can only assume is through powermetrics.
In the Asus TUF review it says this:
> The only exception in this case is the Ryzen AI 9 HX370 inside the Asus TUF A14, which can permanently consume 80 watts and performed slightly better in the CB-R23 test.
> When running Prime95, for example, the CPU would boost to 3.8 GHz and 80 W and then maintain those targets indefinitely.
> So I don't really think it's something that x86 needs to worry about. I do wonder what an Apple Steam Deck might do, though, even if it relied on Rosetta.
It's a worry because they have a far superior architecture. Yes, desktops are more resilient due to factors like modularity and easy upgradability. However, their notebooks have a dimmer future.
> You miss one really important point: some are faster than others, smarter than others, and more capable than others. And when you have a group of people, that effect is magnified, because the smart people might be stressed, worried, or apathetic.
I guess, having interacted with lots of very fine engineers from a great number of companies, I find it impossible to believe that AMD and Intel do not have incredible teams of engineers working on their products... and they have been doing it for quite a bit longer than Apple... so I'm not buying into the idea that Apple is better because they are smarter.
> The problem with powermetrics is its CPU reading is akin to IA core power, not package power.
How big will the difference be? 2-3 watts extra for the M4 package?
powermetrics -> CPU + GPU + ANE; the rest of the package, aka the uncore, is missing.
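For reference, this is roughly how you'd pull those software readings yourself - a minimal sketch around the macOS powermetrics CLI (needs sudo; the exact field names in the text output are from memory and can vary by macOS version, so treat the parsing as an assumption):

```python
# Run powermetrics for one 1-second sample and print the power lines.
# Note it reports CPU/GPU/ANE power, not the full package (uncore is missing).
import re
import subprocess

out = subprocess.run(
    ["sudo", "powermetrics", "--samplers", "cpu_power", "-i", "1000", "-n", "1"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    if re.search(r"(CPU|GPU|ANE|Combined) Power", line):
        print(line.strip())  # e.g. "CPU Power: 4321 mW"
```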
> If ARM is really a "superior" design, how is it that it gets decimated in DC and workstation?
Tell me one company that is offering X925-based Neoverse cores in DC or a workstation and there is your answer. Plus, a lot of SIMD benchmarks are optimised more for x86. No one is arguing that AMD's 5th-gen EPYC is weaker than some Neoverse V2 or Grace CPU. The current ARM servers suck because they are bad in terms of perf, not because the ISA itself sucks for DC.
> Do you mind sharing your factory? Would be a good 1t benchmark for us here.
I'll see if I still have it. The blueprints weren't exactly useful in other contexts.
> How big will the difference be? 2-3 watts extra for the M4 package?
At such a low power level, 2-3 W extra can affect the numbers. Say the M4 Pro scores 13 in SPEC at 10 W: that's 1.3 points/W, but with 2 W extra it drops to about 1.08 points/W.
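Same arithmetic spelled out (the 13-point SPEC score at 10 W is the hypothetical from the post above, not a measurement):

```python
# A 2 W difference in what gets counted moves the efficiency figure by ~17%.
score = 13.0
for watts in (10.0, 12.0):
    print(f"{watts:.0f} W -> {score / watts:.2f} points/W")
# 10 W -> 1.30 points/W
# 12 W -> 1.08 points/W
```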
> If ARM is really a "superior" design, how is it that it gets decimated in DC and workstation?
Because engineering isn't free. And the idea that ARM could even scale into that space is a rather new one. Apple releasing a 64-bit ARM mobile processor was 12 years ago, and everyone lost their damn mind over that. Be honest: how many people here thought, when Apple announced the move to Apple Silicon, that it would even be competitive with x86 on desktop? I bet not many. That was 5 years ago.
> Do you mind sharing your factory? Would be a good 1t benchmark for us here.
I should note there is a whole benchmarking scene in Factorio. It's a bit more complicated with 2.0, and I don't have a good sense of where things stand under 2.0 (I just haven't focused much on that version), but there is no Mac version of the benchmark tool.
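For anyone who wants a rough 1t number without the community tool, the game binary itself has a headless benchmark mode - a sketch, with flags as I remember them from the Factorio wiki (binary path and save name are placeholders; double-check against your install and version, 2.0 included):

```python
# Replay a save for a fixed number of ticks and print Factorio's timing stats.
import subprocess

result = subprocess.run(
    ["./factorio",                   # path to the Factorio executable (placeholder)
     "--benchmark", "my-map.zip",    # save file to replay (placeholder)
     "--benchmark-ticks", "1000",    # number of game updates to simulate
     "--disable-audio"],
    capture_output=True, text=True,
)
print(result.stdout)                 # look for the "Performed N updates in X ms" line
```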
I don't know where society collectively decided that protection at all costs, even to the point of denial, is for the good of society.
If ARM is really a "superior" design, how is it that it gets decimated in DC and workstation?
I'm not even saying that ARM is bad, just that it isn't great at everything and that there are reasons that this is true.
> Apple, for obvious reasons - they care about selling products to consumers, and consumers don't buy servers or Threadripper-level workstations.
That's cope; Apple tried to make 4T setups work and failed horribly. That's why the Mac Pro (yes, it's a thing; yes, it's a real market) is orphaned.
> That is supposedly one of the reasons why GW III left Apple in the first place, so you'd think if he can't do servers at Qualcomm he'd leave and play the startup game again.
I really, really want to see QC ship a server SoC.
> What is a "4T setup"?
4 tiles.
> The Mac Pro isn't playing in the same workstation market as Threadripper and Xeon based dual socket systems are.
Yeah it does; there were quite literally dual-socket Mac Pros (I think Westmere-based ones).
> It might be fun for us to speculate on the performance of something crazy with, say, 8 M4 Max SoCs linked together, but Apple's product managers don't make decisions on "what would be cool to talk about in Anandtech forums" but on whether they can sell enough of something to recoup development and ongoing support costs.
Well, that's not how any of that works.
> 4 tiles.
There never was a CPID for a 4-tile M chip. It's just ramblings of Mark Gurman. He's the one that constantly makes it up. Lately he said Hidra was the code name for the Mac Pro chip, and it turned out to be the code name for the base M5.
> There never was a CPID for a 4 tile M chip.
Oh no, it was a thing.
> If Apple ever tested a 4 tile chip it would have showed up in macOS.
Wrong.
> If ARM is really a "superior" design, how is it that it gets decimated in DC and workstation?
DC - see my post on the first page. ARM is already 50%+ of all hyperscaler deployments - much higher than 50% on AWS, actually. So no, it's not getting decimated; it's already winning in datacenters. Also, Nvidia has gone all-in on ARM CPUs, and they're the biggest DC company in the world by far.
> DC - see my post on first page. ARM is already 50%+ of all hyperscaler deployments.
This is not a sustainable business model for ARM.
> Workstation - very small market. Very, very small. Almost negligible.
Yes, I know.
> So small that sometimes AMD "forgets" to release new Threadripper chips.
They haven't forgotten in 8 years.