Question Qualcomm's first Nuvia-based SoC - Hamoa


controlflow

Member
Feb 17, 2015
115
168
116
Yep, Intel is going to be in trouble and I doubt MTL is going to fix it.

It certainly looks like an interesting and promising product, but relax with all the hyperbole. You always need to take marketing slides with a grain of salt; they are going to be cherry-picked to show the product in the absolute best light.

FYI, MTL releases in a few weeks...
This will need to compete with Zen 5, and eventually ARL/LNL soon after, and it also has to deal with the SW ecosystem disadvantage of being on Windows on ARM. It is more than premature to call the end of x86 in client.
 

H433x0n

Senior member
Mar 15, 2023
915
992
96
True. But it signals the beginning of a persistent headache for the two main x86 players. Hopefully it leads to more innovation and performance for us in the long run.
For a desktop PC enthusiast, I doubt it'll ever be relevant in our market.

As much as I'd like to buy an Nvidia ARM desktop CPU, I don't see it ever happening. I also doubt Qualcomm will ever become a player in our market.
 
  • Like
Reactions: Henry swagger

ikjadoon

Member
Sep 4, 2006
118
167
126
So how and when did they settle those ARM lawsuits? I'm surprised that this chip has a release date
To be fair, Oryon only has a rough timeline of late Q2 / early Q3 ("mid-year 2024"). Arm v Qualcomm's trial won't start until September 2024, so perhaps just after Qualcomm's intended timeline.

The lawsuit is still very much ongoing; you can follow along here:


Interesting tidbit #1: AMD, Apple, Ampere, MediaTek, TSMC, NVIDIA, Cadence, Google, Synopsys, Intel, etc. are all involved in the trial now. Lots of people giving depositions / receiving subpoenas to testify in court.

Interesting tidbit #2: So far with discovery & depositions, the judge is mostly siding with Arm and against Qualcomm.
  1. ALAs (architecture license agreements) from Arm: Judge says Qualcomm's motion is partly granted, partly denied. Can't see the details.
  2. Qualcomm tried to get a deposition from Masayoshi Son. Judge rules against Qualcomm here.
  3. Qualcomm wanted discovery of Arm's IPO. Judge rules against Qualcomm here.
  4. Qualcomm wanted docs from Antonio Viana at Arm. Judge rules against Qualcomm here.
  5. Qualcomm wanted Apple's & Ampere's specific ALAs. Judge rules against Qualcomm here.
The last update is October 25, so literally yesterday haha.
 

Doug S

Platinum Member
Feb 8, 2020
2,302
3,605
136
So how and when did they settle those ARM lawsuits? I'm surprised that this chip has a release date

It won't matter unless ARM gets an injunction barring their sale. I imagine they would ask for one as part of the pretrial motions once a firm release date is known, but the chance of getting one is probably near zero. To get an injunction in a case like this, you would have to convince a judge that you are more likely than not to win the case on its merits, AND that you would suffer considerable harm (or maybe irreparable harm, I'm not sure of the standard) if the injunction is not granted.

It is hard to see how ARM could claim they would suffer considerable (let alone irreparable) harm should Qualcomm sell these products and a later ruling find they were not properly licensed by ARM. If ARM won the case, they would ask again for an injunction, and that would stand a very good chance of being granted. If it stood up to appeals, Qualcomm would owe damages, and that would cure any harm from the time of their sale.

I saw mention elsewhere that their CPU is ARMv8.7, which seems a little out of date for something shipping in 2024 (and I don't know if that's been confirmed, so it may not even be true). One possible explanation mentioned was that the disagreement with Qualcomm may have to do with an architectural license covering ARMv9 only. We don't know the details of the agreement(s) in question, so it is all speculation, but that seemed an interesting theory.
 

Tup3x

Senior member
Dec 31, 2016
969
954
136
For a desktop PC enthusiast, I doubt it'll ever be relevant in our market.

As much as I'd like to buy an Nvidia ARM desktop CPU, I don't see it ever happening. I also doubt Qualcomm will ever become a player in our market.
We'll see about that. The thing is, if you end up alone, you lose. If other companies flood the market with ARM CPUs for Windows, and if those have benefits (performance/efficiency)... things could change rather quickly. Currently the reason the PC CPU market is (or has been) a duopoly is that Windows doesn't run on anything else. Once other companies are free to start competing, things get interesting.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,231
5,240
136
For a desktop PC enthusiast, I doubt it'll ever be relevant in our market.

As much as I'd like to buy an Nvidia ARM desktop CPU, I don't see it ever happening. I also doubt Qualcomm will ever become a player in our market.

"Ever" is a long time.

The biggest benefit of ARM is going to be felt in laptops. That's where the improved efficiency, and potentially better iGPUs, will be most noticeable.

If by enthusiast desktop you mean a big GPU card and CPU, then the onboard iGPU becomes something of a waste, so this won't be an initial target. Though some may produce efficient desktops using the laptop chips and their iGPUs.

Further down the line, when ARM has much more penetration of the Windows market, some designs may start to offer more high-end performance than x86, and then we could see them target enthusiast desktops as well, but that will be later.
 

Glo.

Diamond Member
Apr 25, 2015
5,726
4,604
136
For a desktop PC enthusiast, I doubt it'll ever be relevant in our market.

As much as I'd like to buy an Nvidia ARM desktop CPU, I don't see it ever happening. I also doubt Qualcomm will ever become a player in our market.
For desktop PC enthusiasts, the DIY market, it will NEVER be relevant.

ARM and x86 SoCs, however, will make more people turn from DIY to devices, pre-builts in all kinds of forms.

What the Snapdragon SoC brings is the iPadisation of the personal computing experience for the masses. At least from a technological point of view. Every product, however, has its price, and that is the most important thing for the adoption of an idea.
 

beginner99

Diamond Member
Jun 2, 2009
5,211
1,582
136
Or maybe optimizing for power first, IPC second-ish, and frequency last-ish just tends to get you to a better place than when you try to do it the other way around.


ELI5 - Is there something inherent to ARM designs that prevents them from clocking to 5 GHz? I don't think I've ever seen a modern ARM core running at the clocks seen in Intel/AMD cores.

x86 optimizes for 2 things:
- servers
- die space

because they cover a market from top-end servers at 300W to mobile devices at 10W which use the same core.

You can't sell a chip in a $300 craptop if it uses a ton of die space. Hence you need to clock it high, and your efficiency starts to suffer. The M2 and these Nuvia chips are made for a single specific purpose, mobile client devices, so no compromises are needed in the core design and efficiency is a core aspect. So they go wide.

Both the M2 and Nuvia get their performance at the cost of die size. We know it for the M2 for sure, and probably the same for the Nuvia core. You can only do that if you target expensive devices, which makes me think this SoC will not be a huge success due to pricing.
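
To put rough numbers on that tradeoff, here is a back-of-the-envelope sketch using the textbook dynamic-power approximation P ≈ C·V²·f; every IPC, voltage, and capacitance figure below is invented for illustration and doesn't describe any real core:

```python
# "Go wide and slow" vs "go narrow and fast", using the textbook dynamic-power
# model P ~ C * V^2 * f. All numbers are hypothetical illustrations, not
# measurements of the M2, Nuvia, or any Zen core.

def perf(ipc: float, ghz: float) -> float:
    """Throughput proxy: instructions per nanosecond."""
    return ipc * ghz

def dyn_power(c: float, volts: float, ghz: float) -> float:
    """Dynamic power, arbitrary units: capacitance * voltage^2 * frequency."""
    return c * volts ** 2 * ghz

# A wide, low-clocked core is bigger (higher C) but runs at lower voltage;
# a narrow core chasing clocks needs more voltage to hit its frequency.
designs = {
    "wide/slow":   (perf(8.0, 3.2), dyn_power(1.0, 0.80, 3.2)),
    "narrow/fast": (perf(4.5, 5.7), dyn_power(0.7, 1.20, 5.7)),
}

for name, (p, w) in designs.items():
    print(f"{name}: perf {p:.1f}, power {w:.2f}, perf/W {p / w:.1f}")
# Similar raw performance, but the wide/slow design wins perf/W by roughly 2.8x.
```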
 
  • Like
Reactions: Tlh97 and moinmoin

eek2121

Platinum Member
Aug 2, 2005
2,930
4,027
136
1) True, but no one cares about desktop nuclear-reactor ST.
You are so completely wrong about that. nT performance is derived from 1T performance. The quicker a single task is performed, the quicker the chip can go back to sleep.

AMD chips also scale rather well from a power standpoint. A 7950X at 65W is one of the most efficient chips out there.
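
A minimal sketch of that race-to-idle arithmetic, with invented power and timing numbers just to show the mechanism:

```python
# Race-to-idle: over a fixed window, a faster core can draw more power while
# active yet use less total energy, because it spends longer in a low-power
# idle state. All numbers are hypothetical, not measurements of any chip.

def window_energy(active_w: float, task_s: float, idle_w: float, window_s: float) -> float:
    """Joules spent running the task, then idling for the rest of the window."""
    assert task_s <= window_s
    return active_w * task_s + idle_w * (window_s - task_s)

slow = window_energy(active_w=5.0, task_s=2.0, idle_w=0.2, window_s=4.0)
fast = window_energy(active_w=8.0, task_s=1.0, idle_w=0.2, window_s=4.0)

print(f"slow core: {slow:.1f} J, fast core: {fast:.1f} J")
# slow core: 10.4 J, fast core: 8.6 J -> faster 1T finishes first and sleeps.
```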
Nonsense. GB5 is a useless benchmark and vendors loved cheating in it by padding scores with irrelevant stuff like vector AES instructions, as if every user is going to stream AES at multiple gigabytes per second straight to /dev/null. Some other parts were not great either.

The only virtue it had was good scaling with core count and HT, and it was loved by the "my Threadripper is larger than your Xeon" crowd, even if said Threadripper had no memory controller in half of its CCXs and mediocre performance everywhere except in CB23-style tasks.

I value the GB6 score much higher for desktop and laptop use than GB5 because it is much harder to cheat. But surely I can understand why the whole "we have a heterogeneous chip with a bunch of useless Arm cores with triple-number names" crowd hates it: GB6 exposes just how useless and incompetent those are vs proper chips. They'd scale soooo well in GB5.
Crypto was limited to 5% of the score in GB5 (with INT/FP making up the other 95%), and good crypto performance is important. Your PC uses those instructions quite heavily under the hood.
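
As a quick illustration of why that 5% cap blunts any AES padding, here is a sketch of a GB5-style composite score; the 5% crypto weight is from the point above, while the 65/30 integer/FP split and the weighted-geometric-mean aggregation are assumptions made for the example:

```python
# Sketch of a GB5-style composite score as a weighted geometric mean.
# The 5% crypto weight matches the cap described above; the 65/30
# integer/FP split and the aggregation method are assumptions, not
# official Geekbench figures.
from math import prod

def composite(subscores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted geometric mean of the subsection scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return prod(subscores[k] ** w for k, w in weights.items())

weights = {"crypto": 0.05, "integer": 0.65, "fp": 0.30}

baseline = composite({"crypto": 1000, "integer": 1000, "fp": 1000}, weights)
padded = composite({"crypto": 4000, "integer": 1000, "fp": 1000}, weights)

print(f"baseline {baseline:.0f}, 4x crypto subscore {padded:.0f}")
# baseline 1000, 4x crypto subscore 1072 -> quadrupling crypto adds only ~7%.
```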

Regarding GB5 vs GB6, industry veterans disagree with you, including a certain former author of some really amazing articles right here on AT. They have a lot more credibility than you, and I have used both SPEC and GB5. Both do a great job of estimating performance.

GB5 = Good tool for estimating CPU performance.
GB6 = Okay tool for estimating typical non-pro user performance scenarios.

GB6 does not properly measure highly threaded workloads, which makes it a poor tool for measuring total CPU performance (a 32-core TR would scale to over twice what my 7950X achieves for my development workload, for example). Also, since crypto is no longer measured, a CPU can score well on INT/FP workloads while being complete garbage at all things crypto.

Now get off my lawn. 🤣
 

eek2121

Platinum Member
Aug 2, 2005
2,930
4,027
136
x86 optimizes for 2 things:
- servers
- die space

because they cover a market from top-end servers at 300W to mobile devices at 10W which use the same core.

You can't sell a chip in a $300 craptop if it uses a ton of die space. Hence you need to clock it high, and your efficiency starts to suffer. The M2 and these Nuvia chips are made for a single specific purpose, mobile client devices, so no compromises are needed in the core design and efficiency is a core aspect. So they go wide.

Both the M2 and Nuvia get their performance at the cost of die size. We know it for the M2 for sure, and probably the same for the Nuvia core. You can only do that if you target expensive devices, which makes me think this SoC will not be a huge success due to pricing.
This. Specifically, I don't think most folks understand that x86 targets performance above efficiency. AMD/Intel could release a competitive, hyper-efficient x86 design with 3-4 GHz clocks if they wanted. They don't because end users don't want that. x86 chips aren't used in phones; they are used in desktops and laptops.

Every design has tradeoffs. You cannot have your cake and eat it when it comes to chip design.
 

naukkis

Senior member
Jun 5, 2002
718
595
136
You can't sell a chip in a $300 craptop if it uses a ton of die space. Hence you need to clock it high, and your efficiency starts to suffer. The M2 and these Nuvia chips are made for a single specific purpose, mobile client devices, so no compromises are needed in the core design and efficiency is a core aspect. So they go wide.
It's the other way around: making a core design clock high makes it big in silicon. Apple's cores are something like 1.5mm², where Zen 4 is over twice that and Intel's performance cores are even bigger, though on a different process node.
 

naukkis

Senior member
Jun 5, 2002
718
595
136
This. Specifically, I don't think most folks understand that x86 targets performance above efficiency. AMD/Intel could release a competitive, hyper-efficient x86 design with 3-4 GHz clocks if they wanted. They don't because end users don't want that. x86 chips aren't used in phones; they are used in desktops and laptops.

Every design has tradeoffs. You cannot have your cake and eat it when it comes to chip design.

High clock speed is irrelevant; performance is the target. The Qualcomm X offers as much single-core performance for laptops as the best x86 desktops can offer, and nothing prevents Qualcomm from tweaking their design to use more power and offer more performance for desktops if they see those market segments as attractive enough.
 

Hitman928

Diamond Member
Apr 15, 2012
5,372
8,209
136
It's other way around - making core design to clock high makes it big in silicon. Apple's cores are something like 1.5mm2 where Zen4 is over twice of that and Intel performance cores are even bigger - though different process node.

Which cores are you comparing? Not sure your sizes are very accurate...

Edit: As a note, I'm not disagreeing with you that designing for higher frequencies increases the die size, I'm just not sure your ratio is correct. If memory serves, the M2 core area is slightly larger than Zen 4's, with the Zen 4c core area being roughly half of the M2 core area.
 
Last edited:
  • Like
Reactions: Tlh97 and Geddagod

Tup3x

Senior member
Dec 31, 2016
969
954
136
This. Specifically, I don't think most folks understand that x86 targets performance above efficiency. AMD/Intel could release a competitive, hyper-efficient x86 design with 3-4 GHz clocks if they wanted. They don't because end users don't want that. x86 chips aren't used in phones; they are used in desktops and laptops.

Every design has tradeoffs. You cannot have your cake and eat it when it comes to chip design.
Sure. Performance is another thing entirely though.
 

naukkis

Senior member
Jun 5, 2002
718
595
136
Which cores are you comparing? Not sure your sizes are very accurate...

Edit: As a note, I'm not disagreeing with you that designing for higher frequencies increases the die size, I'm just not sure your ratio is correct. If memory serves, the M2 core area is slightly larger than Zen 4's, with the Zen 4c core area being roughly half of the M2 core area.

I remembered Apple's core area a little bit wrong: the A15 core is 2.55mm², where Zen 4 is 3.84mm² and Zen 4c is 2.48mm² on that same TSMC N5 node. Even excluding the core-private L2 from Zen 4 doesn't make it smaller than the A15 core. Chasing high clock speed makes silicon parts large.
 

Hitman928

Diamond Member
Apr 15, 2012
5,372
8,209
136
I remembered Apple's core area a little bit wrong: the A15 core is 2.55mm², where Zen 4 is 3.84mm² and Zen 4c is 2.48mm² on that same TSMC N5 node. Even excluding the core-private L2 from Zen 4 doesn't make it smaller than the A15 core. Chasing high clock speed makes silicon parts large.

Your Zen 4 numbers are with L2, not just core area...
 
  • Like
Reactions: Tlh97 and gdansk

HurleyBird

Platinum Member
Apr 22, 2003
2,690
1,278
136
You can't sell a chip in a $300 craptop if it uses a ton of die space. Hence you need to clock it high, and your efficiency starts to suffer. The M2 and these Nuvia chips are made for a single specific purpose, mobile client devices, so no compromises are needed in the core design and efficiency is a core aspect. So they go wide.

Chasing high clocks costs a huge amount of die space. Look at Zen 4 vs. Zen 4c.
 

poke01

Senior member
Mar 8, 2022
797
798
106
I remembered Apple's core area a little bit wrong: the A15 core is 2.55mm², where Zen 4 is 3.84mm² and Zen 4c is 2.48mm² on that same TSMC N5 node. Even excluding the core-private L2 from Zen 4 doesn't make it smaller than the A15 core. Chasing high clock speed makes silicon parts large.
And now the A17 P-core is the smallest, even when removing the L2 cache from Zen 4.
 
  • Like
Reactions: Orfosaurio