Question x86 and ARM architectures comparison thread.


adroc_thurston

Diamond Member
Jul 2, 2023
6,237
8,777
106
Must have been well hidden
Never left the validation bench.
Do you think the prototypes will ever be seen in public, like AirPower eventually came to be known?
AirPower had actual pre-prod attempts.
Real-real AirPower prototypes were far more dingy.
ARM will become a merchant Si vendor because that's the only way to make money in this business.
IP licensing sucks. It's why pretty much all smaller licensing players (both CPU and graphics) died out over the past 10 years.

Neoverse was just a free crack sample for hyperscalers. Soon, they'll have to pay Actual Real Money for their server Si.
 

mikegg

Golden Member
Jan 30, 2010
1,918
529
136
ARM will become a merchant Si vendor because that's the only way to make money in this business.
Neoverse was just a free crack sample for hyperscalers.
Even if they do, which I doubt, it doesn't change the fact that ARM was already more popular on AWS than AMD in 2020.
 

OneEng2

Senior member
Sep 19, 2022
740
987
106
That's cope, Apple tried to make 4T setups work and failed horribly.
That's why the Mac Pro (yes, it's a thing; yes, it's a real market) is orphaned.
I think it is common for people to believe that if Company A is doing it, it is EASY for Company B to do it too.

To your point, obviously, this is not true.
DC - see my post on the first page. ARM is already 50%+ of all hyperscaler deployments. Much higher than 50% on AWS, actually. So no, it's not getting decimated.
Let's be clear: they aren't doing ARM because it is better, they are doing it because it is cheap. They also have the resources to write code close to the metal (CTM) to enhance performance in a hardware-specific way (like a game console can).

This has nothing to do with architecture and everything to do with economies of scale in a massively large company. This is pure vertical integration in action. "I'm big enough to make my own computer, processor, and OS, so why buy it from someone?".
 
Jul 27, 2020
26,575
18,280
146
"I'm big enough to make my own computer, processor, and OS, so why buy it from someone?".
This is what M$ is dreaming about too. Their QC partnership plan was actually to put hardware out there and get their new OS into the hands of consumers. Once the wrinkles have been ironed out, they can release their own CPU and get closer to being Apple 2.0. Having a partner is just a way to make sure their employees can learn (or steal, depending on how you see it) the secrets of controlling all aspects of the hardware platform.

I think this also explains QC's Linux enablement efforts because they would be stupid to fully trust M$. The whole industry knows how M$ burned Sega and Nokia.
 

Doug S

Diamond Member
Feb 8, 2020
3,372
5,925
136
Oh no it was a thing.

No it was not. They filed a patent showing how they'd do it, but there's no evidence they ever put three sets of I/O pads on any Max die. You're just seeing rumors, speculation, and patent claims, and somehow concluding that Apple invested a bunch of NRE making a product but "couldn't make it work". That's your Apple-hating projection, which has nothing to do with reality.


Apple also tried dGPUs.

Again, where's the evidence that Apple ever invested a single dollar in such an effort? This is another thing that maybe had some unfounded rumors somewhere, which you decided to believe Apple tried and failed at, because I guess it comforts you to think that Apple can't succeed in something as simple as putting a GPU chip on a PCIe board.
 

adroc_thurston

Diamond Member
Jul 2, 2023
6,237
8,777
106
No it was not
Yeah it was.
That's your Apple-hating projection, which has nothing to do with reality.
I don't hate them. Big systems are just hard.
Again, where's the evidence that Apple ever invested a single dollar in such an effort?
You know it even had a proper on-roadmap codename, right?
because I guess it comforts you to think that Apple can't succeed in something as simple as putting a GPU chip on a PCIe board
Well yeah that's the hard part.
dGPU drivers alone are a nightmare; see Intel's efforts so far.
 

Jan Olšan

Senior member
Jan 12, 2017
551
1,089
136
It's a worry because they have a far superior architecture. Yes, desktops are more resilient due to factors like modularity and easy upgradability. However, their notebooks have a dimmer future.

You know the battery degradation inherent in most laptops? Well, ever since I got a fanless device, I've realized that it's better in that regard as well. My Kaby Lake-Y is only 20% degraded in battery, and I bought it used.
It has nothing to do with being fanless but with lowering the current loads put on the battery, so it's the lower TDP and lower PL2s/PPTs that help. High-wattage processors and dGPUs do harm.
The higher temperature associated with loading a fanless device may actually harm the battery's longevity.
 

poke01

Diamond Member
Mar 8, 2022
3,944
5,253
106
The higher temperature associated with loading a fanless device may actually harm the battery's longevity.
Depends WHICH SoC is being used. Lunar Lake isn't as efficient as the base M3/M4 under load. Forget Intel CPUs prior to Lunar Lake being used in a fanless device, because while the TDP was low, they made the devices hot and were slower than snails.

Look at the Core M series from Intel: low TDP, but it made fanless laptops warm even under moderate load. So yes, it's important to know that devices with a fan can suffer faster battery degradation than fanless devices; it just depends on the SoC being used.
 

adroc_thurston

Diamond Member
Jul 2, 2023
6,237
8,777
106
Look at the Core M series from Intel: low TDP, but it made fanless laptops warm even under moderate load
That's a function of board/case/etc. design though.
Modern phone SoCs have higher peaks and sustained power than the infamous SD810, but phone thermal management just got a lot better.
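A rough sketch of that point: steady-state case temperature is roughly T_case = T_ambient + P * R_th, so the chassis' case-to-ambient thermal resistance matters as much as the SoC's power. The R_th and power values below are illustrative placeholders, not measurements of any real device.

```python
# Minimal sketch: the same SoC power produces very different case
# temperatures depending on chassis design. R_th values are
# illustrative placeholders, not measured figures.

def case_temp(ambient_c: float, soc_power_w: float, r_case_to_amb: float) -> float:
    """Steady-state case temperature: T_case = T_ambient + P * R_th."""
    return ambient_c + soc_power_w * r_case_to_amb

AMBIENT_C = 25.0
SOC_POWER_W = 7.0  # hypothetical sustained SoC power

# Hypothetical chassis designs: a good heat spreader lowers R_th (in K/W).
for label, r_th in [("well-spread chassis", 2.0), ("poorly spread chassis", 5.0)]:
    print(f"{label}: {case_temp(AMBIENT_C, SOC_POWER_W, r_th):.1f} C")
```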
 

poke01

Diamond Member
Mar 8, 2022
3,944
5,253
106
That's a function of board/case/etc. design though.
Modern phone SoCs have higher peaks and sustained power than the infamous SD810, but phone thermal management just got a lot better.
That's the point: you can't assume that just because a device is fanless it will suffer worse battery degradation down the line than a device with fans. There are a lot of factors involved, like you said.
 

adroc_thurston

Diamond Member
Jul 2, 2023
6,237
8,777
106
That's the point: you can't assume that just because a device is fanless it will suffer worse battery degradation down the line than a device with fans
Yes you can; on average, fanless devices have higher Tcase.
But I don't think it would matter much.
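For what it's worth, the usual back-of-envelope model here is the Arrhenius-style rule of thumb that calendar aging roughly doubles for every ~10 °C rise in cell temperature. A minimal sketch, with hypothetical pack temperatures (real cells vary, so the numbers are illustrative only):

```python
# Rule-of-thumb sketch of why Tcase matters for battery longevity:
# calendar aging roughly doubles per ~10 C rise (Arrhenius-style
# approximation; treat the outputs as illustrative, not calibrated).

def aging_factor(temp_c: float, ref_c: float = 25.0) -> float:
    """Relative calendar-aging rate vs. a reference temperature."""
    return 2.0 ** ((temp_c - ref_c) / 10.0)

for temp in (25, 35, 45):  # hypothetical average pack temperatures
    print(f"{temp} C -> {aging_factor(temp):.1f}x aging rate")
```

By that rough model, a pack sitting 10 °C warmer ages about twice as fast, which is why a higher Tcase could matter even at identical current draw.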
 

Geddagod

Golden Member
Dec 28, 2021
1,445
1,552
106
Something I thought was interesting: Qualcomm's Oryon-L and Oryon-M both use 2-2 N3E libs, while the X925 in the MediaTek 9400 uses 3-2 cells.
Despite this, the X925 P-core without L2 arrays isn't all that much larger than the Oryon-L, and without the L2 block it should by all means be a good bit smaller.
The 8 Elite chip also has 2 more metal layers than the MediaTek chip, but the same number as the XRing chip.
This is speculation, but I wonder if the Oryon-L couldn't have been pushed even further, with Qualcomm not doing so for area/power reasons. The X925, meanwhile, looks like it has been pushed pretty far, though maybe better floorplanning, via partitioning the core more, could have resulted in additional fmax. The X925 also has an additional partition for the L2 cache control logic, and yet it still appears to have fewer obvious "blocks" than Oryon-L has.
It also appears as if Oryon-L scales better at high power, and its curve could keep extending better with higher power past what has already been plotted, compared to both X925s (esp. the D9400, which seems very flat compared to the XRing and SD8).
[attached chart: big-core performance vs. power curves]
Also, Oryon-M just looks bad.
[attached chart: little-core (Oryon-M vs. X4) performance vs. power curves]
It appears they sacrificed a lot to hit a 0.5 W minimum power consumption, lower than the MediaTek X4 competition. It's a bit weird: they used 2-2 on this while MediaTek only used 2-1 for the X4, yet the X4 is architecturally way larger than Oryon-M. So you see this weird thing where Oryon-M is a small arch that still has to have decent perf at the high end of the power curve, since they need it for MT perf, while the X4 is an architecturally large core trying to be decently small physically. And it appears it achieves that goal too; IIRC, the X4 without the L2 block is as large as Oryon-M is.
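One way to make "scales better at high power" concrete is to fit each core's points to a power law, perf ~ a * P^b, and compare the exponents; a flatter curve shows up as a smaller b. A minimal sketch; the (power, perf) samples below are placeholders to swap for real measured points:

```python
# Sketch: compare perf/power curves by fitting perf = a * P^b on
# log-log axes. Sample points are hypothetical placeholders;
# substitute real measured (watts, score) pairs per core.
import numpy as np

def fit_power_law(power_w, score):
    """Least-squares fit of score = a * power^b in log-log space."""
    b, log_a = np.polyfit(np.log(power_w), np.log(score), 1)
    return np.exp(log_a), b

power_w = np.array([2.0, 4.0, 8.0])  # hypothetical per-core power (W)
score = np.array([1.0, 1.4, 1.8])    # hypothetical normalized perf

a, b = fit_power_law(power_w, score)
print(f"perf ~ {a:.2f} * P^{b:.2f}; extrapolated at 12 W: {a * 12**b:.2f}")
```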
 

DavidC1

Golden Member
Dec 29, 2023
1,714
2,779
96
It has nothing to do with being fanless but with lowering the current loads put on the battery, so it's the lower TDP and lower PL2s/PPTs that help. High-wattage processors and dGPUs do harm.
The higher temperature associated with loading a fanless device may actually harm the battery's longevity.
The current output is not that high for a LiPo battery. They are on average 50 Wh, those laptops are rarely stressed that much, and current-wise they can easily handle it. Yet the battery degrades substantially in most laptops over just 3-4 years.

My current fanless device runs cooler than the fan-equipped XPS I had before. My friend's gaming XPS batteries didn't last long either. Of all the devices I have encountered, my fanless one is the only one with small degradation; the others degraded to the point of uselessness. While the CPU might arguably be cooler (debatable) on a fanned device, all other components will be hotter.
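Quick arithmetic behind the "current isn't that high" point, using the ~50 Wh pack size from above; the load wattages are hypothetical examples:

```python
# Discharge rate in C for a ~50 Wh laptop pack (figure from the post
# above). Load wattages are hypothetical light vs. heavy battery loads.

PACK_WH = 50.0
for load_w in (15.0, 45.0):
    c_rate = load_w / PACK_WH  # C-rate = power draw / energy capacity
    print(f"{load_w:.0f} W from a {PACK_WH:.0f} Wh pack = {c_rate:.2f}C")
```

Even a sustained 45 W draw is only ~0.9C, which consumer Li-ion cells typically handle comfortably, so discharge current alone doesn't explain the degradation.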