Question: x86 and ARM architectures comparison thread


mikegg

Golden Member
Jan 30, 2010
1,925
532
136
Let's be clear: they aren't doing ARM because it is better, they are doing it because it is cheap. They also have the resources to write code close to the metal (CTM) to enhance performance in a hardware-specific way (like a game console can).
You said "let's be clear," but you're not even clear about what counts as "better."

ARM loses out in raw server speed but wins on perf/watt, perf/$, and control, which matter far more in the datacenter than raw speed alone.
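As a toy illustration with completely made-up numbers (no real SKUs), the arithmetic looks roughly like this:

```swift
import Foundation

// Hypothetical numbers only, to show why perf/watt and perf/$ can beat raw speed.
struct ServerChip {
    let name: String
    let perf: Double   // arbitrary throughput units per socket
    let watts: Double  // sustained power per socket
    let price: Double  // USD per socket
}

let chips = [
    ServerChip(name: "fast x86 (hypothetical)", perf: 100, watts: 400, price: 12_000),
    ServerChip(name: "ARM server (hypothetical)", perf: 85, watts: 250, price: 7_000),
]

for chip in chips {
    let perfPerWatt = chip.perf / chip.watts
    let perfPerDollar = chip.perf / chip.price * 1_000  // perf per $1k of capex
    print(chip.name, String(format: "perf/W %.2f, perf/$1k %.1f", perfPerWatt, perfPerDollar))
}
// The chip that loses on raw speed still wins both ratios, which is what matters
// once power, cooling, and capex dominate the bill at datacenter scale.
```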
 

511

Diamond Member
Jul 12, 2024
3,430
3,315
106
So why did you link me to that website?
I thought Phoronix would have the data, since they usually do, but as it turns out it was vCPU testing, so my bad.
Do I need to write a public letter like Lip-Bu Tan for you to forgive me? 🤣
 

mikegg

Golden Member
Jan 30, 2010
1,925
532
136
I thought Phoronix would have the data, since they usually do, but as it turns out it was vCPU testing, so my bad.
Do I need to write a public letter like Lip-Bu Tan for you to forgive me? 🤣
Don't need an apology letter. Just need you to come with more firepower.
 

Hitman928

Diamond Member
Apr 15, 2012
6,663
12,301
136
Don't need an apology letter. Just need you to come with more firepower.

It's hard, because independent testing of Arm server hardware is difficult to come by now, thanks to a mix of AMD's competitiveness and the hyperscalers rolling their own ARM processors. This is the best comparison I know of:


[attached: server CPU benchmark comparison charts]

So, not that far off in a general sense, but Zen5c would be a significant improvement over everything shown here. With Ampere being bought out by SoftBank, it will be interesting to see if they can get more traction for third-party ARM solutions and improved release cycles.
 

johnsonwax

Senior member
Jun 27, 2024
286
447
96
This is most likely the benchmark itself?
Someone more knowledgeable might have a definitive answer
So let's assume the benchmark is coded to use both P and E cores. To what extent is Apple Silicon designed to run all cores at full tilt, and does that break the thermal budget and start throttling? After all, the expectation is that one process isn't driving them all at once: you might be running all the P cores flat out, but the E cores would be reserved for system tasks and the like, which are decoupled from the P-core process and probably less common precisely because the user is doing something substantial enough to need all the P cores. Again, these are consumer machines that effectively always have someone sitting at the keyboard, and P-core use is a direct response to what that user is doing; these aren't servers grinding away at a problem in a closet somewhere.

Note that QoS is typically assigned at the process level, not the thread level, and Apple's thinking is that if you're advanced enough to set QoS on threads for the E cores, you're doing it either for power saving or for responsiveness, say for a main control loop where you want to stay responsive to user input, or for I/O, and so on, not to juice performance, because again, these are consumer products, not HPC (there's a rough sketch of that QoS API at the end of this post).

So I wouldn't be surprised if the SoC in an hour-long test exceeded its thermal envelope after a few moments and just started to throttle. Run that test again under liquid nitrogen and I wonder if you'd get the same result. I would expect more than a 20% drop-off in performance if it were accidentally running on E cores, but 20% seems pretty reasonable if it's dialing the P cores back and keeping the E cores running. After all, if you are throttling, you're not going to throttle the E cores; they're not the problem. I don't know how granular their power management is, but my x86 MBP would fall off a cliff the moment it started to throttle.
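To put the thread-level QoS bit in concrete terms, here's a minimal Swift sketch of how work gets tagged so the scheduler can steer it toward E or P cores; the queue labels are invented, and actual core placement is always up to the OS:

```swift
import Foundation

// .utility / .background QoS hints that this work is throughput-insensitive,
// so the scheduler is free to park it on E cores at low clocks.
let maintenanceQueue = DispatchQueue(label: "com.example.maintenance", qos: .utility)

// .userInteractive asks for maximum responsiveness, i.e. the P cores.
let uiWorkQueue = DispatchQueue(label: "com.example.ui-work", qos: .userInteractive)

maintenanceQueue.async {
    // long-running cleanup, indexing, sync, etc.
}

uiWorkQueue.async {
    // latency-sensitive work tied to what the user is doing right now
}

// QoS can also be set per thread, which is the "advanced" case mentioned above:
let worker = Thread {
    // heavy background number crunching
}
worker.qualityOfService = .background
worker.start()
```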
 

johnsonwax

Senior member
Jun 27, 2024
286
447
96
Note that Amazon doesn't specifically care about perf/watt; they care about total cost. Last I read, they can acquire a given level of compute for about ⅓ of what it costs with AMD. That's a fixed cost, but a meaningful sum. They also have to pay for the electricity and the land, so compute per rack is a factor, and there are other factors in terms of their ability to control and monetize the platform. One of their big applications for ARM is Lambda, which is a very heterogeneous compute environment that isn't even well mapped to individual cores and carries all kinds of additional transaction cost to isolate each process.

So I wouldn't even expect Graviton to have particularly similar design constraints to AMD. The whole point of AWS was to rethink the cost model of traditional compute, and that should apply not just to the datacenter and the racks but also to the die itself. To them the superiority of ARM isn't that it runs a bigger Crysis; it's that it enables that different cost model of compute. Apple is doing something similar, but by rethinking how compute is delivered. Every x86 benchmark is run on a desktop, but you can switch most Apple Silicon benchmarks to a laptop with a fan and get basically the same number. FFS, an iPad sat atop the leaderboard for half the year. A big part of Apple's market is that you can take your 9950X3D performance to an airplane seat, to the hotel room, to the conference. And if you can live with slightly less performance, you can put it in your pocket. So are they trying to win on desktop? Not really. They don't want to fall behind, but that's not where their money comes from. Their money comes from continuing to eat away at the desktop PC market, which peaked in 2012 right when the iPad launched, and from knowing they can easily win in portables and below.
 

DavidC1

Golden Member
Dec 29, 2023
1,718
2,782
96
Again, where's the evidence that Apple ever invested a single dollar in such an effort? This is another thing that maybe came from some unfounded rumor somewhere, which you decided to believe means Apple tried and failed at it, because I guess it comforts you to think that Apple can't succeed at something as simple as putting a GPU chip on a PCIe board.
The butt-kicking Apple does in CPUs doesn't carry over as well to GPUs. You can see it simply in comparisons: once a GPU architecture gets into the competitive ballpark, progress is mostly dependent on Moore's Law, while CPUs are highly dependent on engineering smarts and ideas.

Also, while it may seem simple, making it work flawlessly in games across the many different hardware and software configurations you find on the PC is a different story. And Apple doesn't care about this, which is the more important part.
Why would they be hard? It would be the same drivers they already have!
Uhh.... no.
 

johnsonwax

Senior member
Jun 27, 2024
286
447
96
The butt-kicking Apple does in CPUs doesn't carry over as well to GPUs. You can see it simply in comparisons: once a GPU architecture gets into the competitive ballpark, progress is mostly dependent on Moore's Law, while CPUs are highly dependent on engineering smarts and ideas.

Also, while it may seem simple, making it work flawlessly in games across the many different hardware and software configurations you find on the PC is a different story. And Apple doesn't care about this, which is the more important part.
The weakness of PC GPUs is that they hang off a PCIe spec that is about to be two versions ahead of what actually ships to customers. Shared memory is Apple's opportunity, as is Metal being a better environment to develop in, and a much less fragmented hardware/software/driver space. It's a cost-effective space to operate in if you can get gamers to buy in. But it's not so much a technological problem as a really nasty go-to-market problem, and the best Apple can really do is keep erasing technical debt and be ready when an opportunity arises.
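For what it's worth, here's a minimal Swift sketch of what the shared-memory model looks like from the developer side; my own illustration, nothing beyond the public Metal API:

```swift
import Metal

// On Apple Silicon the CPU and GPU share one pool of physical memory, so a
// .storageModeShared buffer is directly visible to both sides.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

var input: [Float] = (0..<1024).map { Float($0) }
let byteCount = input.count * MemoryLayout<Float>.stride

// The CPU's writes land in the same allocation the GPU will read: no PCIe
// transfer, no staging buffer, no explicit upload step.
let buffer = device.makeBuffer(bytes: &input, length: byteCount,
                               options: .storageModeShared)!

// After GPU work completes, the CPU reads results straight out of the buffer.
let results = buffer.contents().bindMemory(to: Float.self, capacity: input.count)
print(results[0])
```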
 

Jan Olšan

Senior member
Jan 12, 2017
553
1,090
136
Metal being a better environment to develop in,
How? Metal is an NIH Apple product, making it an API that game devs have to support on top of the mainstream APIs that already cover most of the market. What about that makes targeting that API a better environment to develop in?

and a much less fragmented hardware/software/driver space.
What do you base this on? Do you think Apple has more mature tools and a more mature platform for game development than what AMD/Nvidia + Microsoft's DirectX SDKs provide? (And that Apple's GPU drivers are a more mature stack for games to target?)
 

poke01

Diamond Member
Mar 8, 2022
3,961
5,280
106
What do you base this on? Do you think Apple has more mature tools and a more mature platform for game development than what AMD/Nvidia + Microsoft's DirectX SDKs provide? (And that Apple's GPU drivers are a more mature stack for games to target?)
Unless Apple makes a video game console and it's popular, I don't ever see Metal or Apple's GPU drivers being tuned for gaming the way DirectX, Radeon, and GeForce drivers are.
 

Thunder 57

Diamond Member
Aug 19, 2007
3,862
6,497
136
The weakness of PC GPUs is that they hang off a PCIe spec that is about to be two versions ahead of what actually ships to customers. Shared memory is Apple's opportunity, as is Metal being a better environment to develop in, and a much less fragmented hardware/software/driver space. It's a cost-effective space to operate in if you can get gamers to buy in. But it's not so much a technological problem as a really nasty go-to-market problem, and the best Apple can really do is keep erasing technical debt and be ready when an opportunity arises.

GPUs don't need PCIe 6 or 7. PCIe 6 won't be coming to the desktop for years; it is expensive and not necessary for that market.
 

Doug S

Diamond Member
Feb 8, 2020
3,383
5,976
136
GPUs don't need PCIe 6 or 7. PCIe 6 won't be coming to the desktop for years; it is expensive and not necessary for that market.

Really the only case I could see being made for it is if Apple tried to do a dGPU (which they would never do), because they'd want to preserve the same unified memory model and memory-map the GPU's local memory over PCIe. They wouldn't need the speed of the new PCIe versions so much (though it wouldn't hurt), but the latency improvements made to support CXL most certainly would matter.
 

johnsonwax

Senior member
Jun 27, 2024
286
447
96
How? Metal is a NIH Apple product, making it an API that game devs have to support on top of the mainstream APIs that already cover most of the market. What about that makes targeting the API better environment to develop in?
Because it's objectively easier to develop in Metal than say Vulkan. Nobody disputes this. And to the extent that most developers use an existing game engine, you really just need to get the engine ported to get most of the benefit. UE5 is ported to Metal. Unity, Red, Decima are all ported.

There has been a feature gap between Apple GPUs and Nvidia beyond just overall performance, and that gap is mostly getting closed down with Metal 4.
What do you base this on? Do you think Apple has more mature tools and a more mature platform for game development than what AMD/Nvidia + Microsoft's DirectX SDKs provide? (And that Apple's GPU drivers are a more mature stack for games to target?)
I think Apple has largely the same tools. That's not the issue so much. Apple's GPU drivers aren't more mature so much as more stable, due to having a vastly smaller hardware profile to support. I mean, have you ever read a gaming board?
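And on the "easier to develop in" point, the host-side code to dispatch a compute kernel really is only a handful of lines. A rough Swift sketch; the kernel name "scale" and the precompiled .metallib in the app bundle are my assumptions:

```swift
import Metal

// Force-unwraps and try! keep the sketch short; real code would handle errors.
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!
let library = try! device.makeDefaultLibrary(bundle: .main)  // assumes a compiled .metallib in the bundle
let scaleFn = library.makeFunction(name: "scale")!           // hypothetical kernel name
let pipeline = try! device.makeComputePipelineState(function: scaleFn)

// Shared-storage buffer the kernel will read and write in place.
let data = device.makeBuffer(length: 4096 * MemoryLayout<Float>.stride,
                             options: .storageModeShared)!

let commands = queue.makeCommandBuffer()!
let encoder = commands.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(data, offset: 0, index: 0)
encoder.dispatchThreads(MTLSize(width: 4096, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
encoder.endEncoding()
commands.commit()
commands.waitUntilCompleted()
```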
 

LightningZ71

Platinum Member
Mar 10, 2017
2,387
3,033
136
Unless Apple makes a video game console and it's popular, I don't ever see Metal or Apple's GPU drivers being tuned for gaming the way DirectX, Radeon, and GeForce drivers are.
The 2022 Apple TV is competitive with the original PS5 and Xbox One in single-core performance and not too far behind in usable MT performance. It has about half the graphics performance, but much faster storage tech.

If the next Apple TV is based on any of their current-generation chips, it will be as good as or better than those consoles in every way. They have a game console that's far better than the volume leader (the Switch) and on par with last-gen home models.

They just lack the will...
 

Jan Olšan

Senior member
Jan 12, 2017
553
1,090
136
Because it's objectively easier to develop in Metal than say Vulkan. Nobody disputes this.
Is that so?

And to the extent that most developers use an existing game engine, you really just need to get the engine ported to get most of the benefit. UE5 is ported to Metal. Unity, Red, Decima are all ported.

There has been a feature gap between Apple GPUs and Nvidia beyond just overall performance, and that gap is mostly getting closed down with Metal 4.

I think Apple has largely the same tools. That's not the issue so much. Apple's GPU drivers aren't more mature so much as more stable, due to having a vastly smaller hardware profile to support. I mean, have you ever read a gaming board?
I'm feeling somewhat skeptical...