
Qualcomm's first Nuvia-based SoC - Hamoa

Performance seems similar to Zen 4 and M2? All three of them are on similar (if not the same) nodes.

Kudos to Qualcomm for this milestone. I do not know if this core will make it into smartphones, but I do hope future revisions will. Maybe I will stay with Android instead of jumping ship to iOS.
They explicitly said it will. Specifically, the 8 Gen 4 will have Oryon cores.
 
I think I'll start calling GB6's "MT" score "CT" instead, as in "Consumer-limited Test". GB6 does not contain a classical MT test, the kind people intuitively expect to let them fairly compare chips with vastly different core counts. So any Cinebench version is better than GB6's "MT" any way, any day.
The combined MT score of GB6 is indeed not enough to draw a conclusion. But there are two things to note:
  1. In real life, many workloads don't scale well.
  2. One should always look at the subtests of a benchmark.
Regarding point 2, the rendering subtest of GB6 scales the same way as Cinebench. See this post on RealWorldTech.
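Point 1 can be illustrated with Amdahl's law. A quick sketch (the 90% parallel fraction is a made-up illustrative figure, not a measurement of any real workload):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Upper bound on speedup when only part of a workload parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Illustrative only: a workload that is 90% parallelizable.
for cores in (4, 8, 16, 32):
    print(f"{cores:>2} cores: {amdahl_speedup(0.90, cores):.2f}x")
```

Even at 32 cores, the 90%-parallel workload tops out near 7.8x, which is why a combined "MT" score that rewards raw core count can mislead.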

EDIT: wrote this before reading @roger_k's post above.
 
Yes, this is one possibility regarding the cache: that it's 36 MB of L2 plus 512 KB x 12 of L1. Apple's cores are at 320 KB of L1, so it's not impossible, and it certainly won't be the size of the Cortex-X4's.

But I wouldn't count on it.
Yeah I have been thinking, how could it be otherwise?

The total cache certainly includes L1 and L2. We know the total L2 is 36 MB. So even if we are conservative and assume 256 KB L1 per core, that's a total of 3 MB L1.

42 MB - (36 MB + 3 MB) = 3 MB

That leaves only 3 MB of cache for the SLC, which is unbelievable. Even the SD8G2 had an 8 MB SLC.

So as I see it, the more likely reality is:

6 MB L1 + 36 MB L2 = 42 MB 'total cache'.

That's 12 MB of L2 per cluster across 3 clusters, and a whopping 512 KB of L1 per core.
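The arithmetic above can be written out as a quick sanity check. The 42 MB / 36 MB / 12-core / 3-cluster figures come from this thread; the breakdown itself is speculation:

```python
# Speculative cache breakdown for a 12-core, 3-cluster part with
# 42 MB of advertised "total cache" and 36 MB of known L2.
TOTAL_CACHE_MB = 42
L2_TOTAL_MB = 36
CORES = 12
CLUSTERS = 3

l1_total_mb = TOTAL_CACHE_MB - L2_TOTAL_MB    # 6 MB left over for L1
l1_per_core_kb = l1_total_mb * 1024 / CORES   # 512 KB per core
l2_per_cluster_mb = L2_TOTAL_MB / CLUSTERS    # 12 MB per cluster

print(f"L1 per core: {l1_per_core_kb:.0f} KB")        # 512 KB
print(f"L2 per cluster: {l2_per_cluster_mb:.0f} MB")  # 12 MB
```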
 
It's that, or they aren't counting the L1 and the 6 MB is the L3 or SLC.
 
Now that M3 is here, we can make the IPC comparison.

Geekbench 6 ST (score / peak clock = points per GHz):

M3:                3076 / 4.05 GHz = 759
X Elite (Windows): 2971 / 4.3 GHz  = 690
X Elite (Linux):   3236 / 4.3 GHz  = 752
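The "IPC" figures above are really points per GHz (the Geekbench 6 ST score divided by peak clock, truncated to an integer). The same arithmetic in a few lines:

```python
# Geekbench 6 ST score divided by peak clock, as in the comparison above.
results = {
    "M3":                (3076, 4.05),
    "X Elite (Windows)": (2971, 4.3),
    "X Elite (Linux)":   (3236, 4.3),
}

for chip, (score, ghz) in results.items():
    print(f"{chip}: {int(score / ghz)} points/GHz")
```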
 
Hey, wait-

So the reason the Linux numbers for the X Elite are higher is that the reference device running them had its fans maxed out?

I thought it was due to an OS difference in scheduling/overhead.
 
So the reason the Linux numbers for the X Elite are higher is that the reference device running them had its fans maxed out?

As for the Linux Geekbench results: because Qualcomm does not yet have fan control working under Linux, these systems were running with their fans at full blast, whereas the Windows systems were running with more typical fan ramp curves and thus didn't enjoy the Linux laptops' effectively unlimited thermal environment.
We may not see that level of performance in shipping devices.
 
Eh, I am not sure about that. I have a feeling the X Elite at 4.3 GHz is outside its efficiency range.

The 4.0 GHz/3.4 GHz reference device had a 23 W TDP, and the 4.3 GHz/3.8 GHz reference device had an 80 W TDP.
From the article that @John Bruno posted:
These 23W and 80W numbers also represent the reference design device thermal envelopes, not the SoCs alone. Actual SoC TDPs were not disclosed.
I'm not really sure what to make of that quote. It's reasonable to assume the TDPs are in the ballpark of the reference design thermal envelopes, but I think it's inaccurate to claim the tested SoCs had TDPs of 23 W and 80 W.

Nitpicks aside, I'm excited to see what one of these SoCs can do in a passively cooled laptop. I'm definitely interested in a fanless laptop that isn't made by Apple. Although I have to admit I'm a bit worried that when devices with these SoCs hit the market, the pricing might be a kick to the groin.
 
System thermal capacity (in our language) indicates what the SoC+DDR+conversion loss power is at saturation while maintaining maximum specified Tj. Conceptually, in our designs, these sub-components (SoC, DDR, PMICs) are bonded to the thermal solution and are all transferring energy into it.

This definition is different from some others in the market and should not be confused with them.

The only way to really understand how much power any particular workload consumes is to measure it, hence the various performance/W curves that were provided.
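A performance/W curve of that sort is just measured performance plotted against measured power. A minimal sketch, with entirely made-up sample points (not real X Elite data):

```python
# Hypothetical (watts, score) measurements; as the post above says,
# real numbers have to come from actually measuring the workload.
samples = [(10, 1800), (23, 2600), (50, 2950), (80, 3100)]

for watts, score in samples:
    print(f"{watts:>3} W: {score / watts:6.1f} points/W")
```

The shape is the point: efficiency (points/W) typically falls as you push further up the power curve.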
 
So I am guessing laptops with the Snapdragon X Elite will use on-package memory like Apple Silicon MacBooks, or like Intel's Meteor Lake demo: https://www.tomshardware.com/news/intel-demos-meteor-lake-cpu-with-on-package-lpddr5x

Aside from the obvious hit to upgradeability, there are substantial benefits in space and cost savings. DIMMs take up a lot of space, and according to Linus from LTT, a DDR5 DIMM socket costs as much as $8 per part. That's substantial savings for OEMs.
 
Snapdragon X Elite uses LPDDR5X, but I didn't get the impression that it was on-package. If that's the case, I reckon most implementations will be memory down (i.e. packages soldered to the logic board), but there's probably nothing stopping device manufacturers from rolling out a design with LPCAMM modules if they want to.
 