Question: Qualcomm's first Nuvia-based SoC - Hamoa


soresu

Diamond Member
Dec 19, 2014
I hope this means laptops with the Snapdragon X Elite (and even Meteor Lake chips that have an NPU) will come with a baseline 16 GB of RAM.

DRAM manufacturers are already starting to churn out 32 Gbit DDR5 dies - and while that's not the LPDDR5 these chips likely need, it probably won't be long before we see the same density there.

At 32 Gbit it's just as easy to build 16 GB (4x 4 GB) as it was to build 8 GB (4x 2 GB), and 8 GB has already been the industry baseline for at least three years.
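
Back-of-the-envelope version of that die math (a minimal sketch; it's just unit conversion, nothing vendor-specific):

```python
# A 32 Gbit die is 4 GB, so four dies reach 16 GB with the same
# package count that four 16 Gbit (2 GB) dies needed for 8 GB.
def package_capacity_gb(die_gbit: int, num_dies: int) -> int:
    """Total DRAM capacity in GB for num_dies identical dies."""
    return die_gbit // 8 * num_dies  # 8 Gbit per GB

print(package_capacity_gb(16, 4))  # 8  -- today's 4x 2 GB baseline
print(package_capacity_gb(32, 4))  # 16 -- same die count at 32 Gbit
```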
 

hemedans

Senior member
Jan 31, 2015
GB6

X Elite:
• 15,226 multi core
• 2,956 single core

M3 Pro:
• 15,171 multi core
• 3,035 single core

I wonder if there are some deficiencies in the Oryon uncore.

How is a 6P+6E configuration (Apple M3 Pro) matching a 12P one (SD X Elite)?

Fwiw both are 12-core, so the argument that GB6 MT scales poorly with higher core counts does not apply here.
It does apply, because GB6 favours the first 6 cores, if I'm not mistaken. The 8 Gen 3 has a better multi-core score than the D9300 in GB6, but in GB5 it's the reverse: the D9300 has a better multi-core score than the 8 Gen 3.
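
To put a number on that claim: Geekbench publishes no scaling formula, but a toy model of MT score = ST x N^alpha (purely my illustration, using the X Elite figures quoted above) shows how flat the observed scaling is:

```python
import math

# X Elite GB6 figures from above; alpha < 1 models sub-linear MT scaling.
st, cores, observed_mt = 2956, 12, 15226

implied_alpha = math.log(observed_mt / st) / math.log(cores)
print(f"implied scaling exponent: {implied_alpha:.2f}")  # ~0.66

# Ideal linear scaling (alpha = 1) would have predicted:
print(f"ideal 12-core MT: {st * cores:,}")  # 35,472
```

With an exponent that low, a 6P+6E chip with slightly better ST can land right next to a 12P one.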
 

poke01

Diamond Member
Mar 8, 2022
You and I are neither the first people, nor will we be the last, to entertain the idea that Geekbench is biased toward Apple.

There is a widespread belief, especially in the smartphone community on Twitter, that GB is biased toward Apple. When the Snapdragon 8 Gen 3's multi-threaded scores came out showing that it rivalled the A17 Pro, some users sarcastically commented that GB7 would be released soon.
The smartphone fan community knows Jack s**t about CPU architecture and scaling.

The point is that Apple's CPU cores are ahead of Arm's Cortex cores, and Geekbench 6 reflects that in ST. They only complain when Apple is ahead...

SPEC is also in line with Geekbench 6. We really should be using more than Geekbench when it comes to desktop/laptop testing, which is why Qualcomm also uses Cinebench 2024, to check the CPU's rendering capabilities.

----

As for why the M3 Pro is ahead in MT: well, it's not. That's one of the problems with MT on GB6.
In Cinebench 2024 the 12 Elite should beat it easily.
 

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,762
106
As for why the M3 Pro is ahead in MT: well, it's not. That's one of the problems with MT on GB6.

In Cinebench 2024 the 12 Elite should beat it easily.

Indeed the issue is with MT, not ST.
The what? LOL.
Cinebench 2024 arrived in the nick of time with all its improvements, just when GB6 changed the MT test and it went to the dogs.

Cinebench 2024 is now great as it's truly cross-platform, supporting ARM and x86.
 

roger_k

Member
Sep 23, 2021
Now that we have M3 scores, I have to reiterate my confusion about Qualcomm's claims regarding Oryon's efficiency. They said that Oryon can match peak M2 Max performance while consuming 30% lower power. Since an M2 Max core consumes about 5.5 watts running at 3.7 GHz, that would put Oryon at roughly 4 watts for the same performance.

With this in mind, let's look at the Cinebench 2024 scores. Oryon gets 1230 points, which is a few hundred points below what I'd expect from 12x M2 Max-like cores. At any rate, based on the power efficiency claims each Oryon core should draw under 4 watts in this scenario, right? So we should be looking at 50 watts plus a few additional watts for uncore/RAM, but Qualcomm's own slides show power consumption closer to 80 watts. How does that work out? Meanwhile the M3 Max, which on paper is not any more efficient than Oryon (and I know for a fact that Apple's 3 nm CPU core is not 30% more efficient than the M2 core at 3.7 GHz, since I measured it myself), manages significantly higher performance while running at considerably lower power. Something is off here: either my math, or Qualcomm's claims.
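
Spelling out the arithmetic here (a sketch; the inputs are the claims being questioned, not measurements):

```python
m2_max_core_w = 5.5                        # quoted per-core draw at 3.7 GHz
oryon_core_w = m2_max_core_w * (1 - 0.30)  # the "30% lower power" claim
cores = 12

print(f"implied per-core draw: {oryon_core_w:.2f} W")          # 3.85 W
print(f"12-core total:         {oryon_core_w * cores:.1f} W")  # 46.2 W
# ~46 W plus a few watts of uncore/RAM, versus the ~80 W on the
# slides -- the gap being questioned.
```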
 

FlameTail

Diamond Member
Dec 15, 2021
Now that we have M3 scores, I have to reiterate my confusion about Qualcomm's claims regarding Oryon's efficiency. They said that Oryon can match peak M2 Max performance while consuming 30% lower power. Since an M2 Max core consumes about 5.5 watts running at 3.7 GHz, that would put Oryon at roughly 4 watts for the same performance.

With this in mind, let's look at the Cinebench 2024 scores. Oryon gets 1230 points, which is a few hundred points below what I'd expect from 12x M2 Max-like cores. At any rate, based on the power efficiency claims each Oryon core should draw under 4 watts in this scenario, right? So we should be looking at 50 watts plus a few additional watts for uncore/RAM, but Qualcomm's own slides show power consumption closer to 80 watts. How does that work out? Meanwhile the M3 Max, which on paper is not any more efficient than Oryon (and I know for a fact that Apple's 3 nm CPU core is not 30% more efficient than the M2 core at 3.7 GHz, since I measured it myself), manages significantly higher performance while running at considerably lower power. Something is off here: either my math, or Qualcomm's claims.
Where did you get the "80W" slide?

80 W is the thermal capacity of the reference device, IIRC. John Bruno clarified it in a comment on this thread.
 

Hitman928

Diamond Member
Apr 15, 2012
Where did you get the "80W" slide?

80 W is the thermal capacity of the reference device, IIRC. John Bruno clarified it in a comment on this thread.

Andrei stated that it was the measured power of the device under load minus the measured power of the device while idle. We were also told it is the measured power of the SoC plus memory. So I'm not really sure what to make of their presentation, tbh. Unless I'm mistaken about the different explanations.
 
Mar 11, 2004
I just realized that one of the biggest questions for this chip will be its sustained performance in a device. I think that's actually the most impressive aspect of Apple's chips (even if the performance is good enough that for many tasks it'd probably be bursty). It should be possible to outdo Apple's cooling, but we'll see what the chip can actually handle, as a lot of mobile chips really struggle with sustained performance even in form factors where that shouldn't be nearly as much of a problem. If it does well though, I could see those gaming handhelds getting very interesting, especially after a few years with better software support and better eGPU options.

I saw they did provide scores for different form factors, which is interesting and seems like it will impact things quite a bit (we'd probably need a comparison of the M chips in iPad Pros, MacBook Airs, and MacBook Pros).
 

Doug S

Diamond Member
Feb 8, 2020
Now that we have M3 scores, I have to reiterate my confusion about Qualcomm's claims regarding Oryon's efficiency. They said that Oryon can match peak M2 Max performance while consuming 30% lower power. Since an M2 Max core consumes about 5.5 watts running at 3.7 GHz, that would put Oryon at roughly 4 watts for the same performance.

Surely they meant peak MT performance, so the power consumption of the M2 Max in an MT test would be what mattered, not the power consumption of a single core. The performance of the M2 Max = performance of the M2 Pro = performance of the M2 in single-threaded tests, since they are the same cores clocked the same, so why would they have specified M2 Max if they were talking ST?
 

SpudLobby

Golden Member
May 18, 2022
Andrei stated that it was the measured power of the device under load minus the measured power of the device while idle. We were also told it is the measured power of the SoC plus memory. So I'm not really sure what to make of their presentation, tbh. Unless I'm mistaken about the different explanations.
Guys:

The graphs of the actual workloads are indeed that. The device (aka the SoC plus memory) minus idle/statics.

The "23W" and "80W" devices in the demos are not 1:1 representative of the actual power draw of those chips at the performance levels indicated; the 23 W and 80 W figures just describe the long-term thermal capability of those devices.
I'll paste what he said:
"what those 23/80W are supposed to mean is "thermal envelope of the given test devices""

"it's only correlated in 30+ minute workloads, in which case TDP == power consumption, but that's only valid for Qualcomm, as that correlation doesn't exist for Intel/AMD"

Again, the actual perf/watt curves are representative and good data for the actual draw of the SoC + power delivery + its DRAM - which is what you actually want to know for a chip, not just "core power" or some other myopic figure. But the 23W and 80W indications for those devices are really just showing how the chip can perform at peak in a given thermal enclosure.
 

SpudLobby

Golden Member
May 18, 2022
Qualcomm’s presentation from day two of their event

Yeah, that is the actual 80 W power of the SoC + DRAM. That's all a valid, real power measurement. But the "23W" and "80W" device tests are just about the thermal constraints of the device.

Unfortunately, the fact that the graphs still peak around 80 W will make this more confusing, because people will mix up these two separate figures.
 

SpudLobby

Golden Member
May 18, 2022
I just realized that one of the biggest questions for this chip will be its sustained performance in a device. I think that's actually the most impressive aspect of Apple's chips (even if the performance is good enough that for many tasks it'd probably be bursty). It should be possible to outdo Apple's cooling, but we'll see what the chip can actually handle, as a lot of mobile chips really struggle with sustained performance even in form factors where that shouldn't be nearly as much of a problem. If it does well though, I could see those gaming handhelds getting very interesting, especially after a few years with better software support and better eGPU options.

I saw they did provide scores for different form factors, which is interesting and seems like it will impact things quite a bit (we'd probably need a comparison of the M chips in iPad Pros, MacBook Airs, and MacBook Pros).
I think if you want to see the sustained performance, the best indicator is just the MT graphs, with wattage on the X axis and performance on the Y axis. Matching 13th-gen 10-12 core Intel parts' 50-65 W performance at around 20-30 W is pretty good.
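
Rough arithmetic on those ranges, taking mid-points (my numbers, purely illustrative):

```python
intel_w = (50 + 65) / 2    # 13th-gen 10-12 core Intel at ~50-65 W
x_elite_w = (20 + 30) / 2  # X Elite matching that at ~20-30 W
print(f"~{intel_w / x_elite_w:.1f}x perf/W at iso-performance")  # ~2.3x
```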

The other thing will be idle power draw and very-low-power draw, and what that looks like for battery life. It's not just about the dynamic draw but about what the chip is sapping when you're doing mixed or lighter loads. It won't be as good as Apple's, but can they beat a 7840U and get close enough? My bet is yeah.
 

SpudLobby

Golden Member
May 18, 2022
Surely they meant peak MT performance, so the power consumption of the M2 Max in an MT test would be what mattered, not the power consumption of a single core.
They did not mean peak MT performance, because it would likely lose by a bit, or wouldn't look as impressive, I guess. More importantly, it won't cost as much as an M3 Max system with a similar RAM/SSD configuration, and a lot of people just want something with great battery life and great peak CPU performance.

To further expound: I doubt an X Elite system will cost as much as an M3 14" MBP (which starts at $1,599 with 8 GB of RAM...) at similar specifications - and there are in fact people who have lately gone with Apple hardware solely for the hardware and battery life. It'll compete more with AMD's Phoenix, though, contra initial predictions of a huge die size and cost. See footnote 1 below for more on all this.

The performance of the M2 Max = performance of the M2 Pro = performance of the M2 in single-threaded tests, since they are the same cores clocked the same
Nope, this is not the case.

The M2 Max is clocked at about 3.7 GHz and will score into the 2050+ range on GB5 ST. The M2 and M2 Pro are clocked at about 3.5 GHz for ST and are usually about 100-200 points behind. That's enough that the base M2 -> M2 Max gap is almost 8-9% when you look at the best scores, which checks out based on the clocks plus probably some scheduling/ramping differentiation.

It's odd, I know, but it's what they did.
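
Quick sanity check on those numbers: the clock delta alone covers most, but not all, of that gap.

```python
# Clock ratio vs. the quoted 8-9% ST gap, from the figures above.
m2_max_ghz, m2_ghz = 3.7, 3.5
print(f"clock alone: +{m2_max_ghz / m2_ghz - 1:.1%}")  # +5.7%
# The remaining point or two would come from the scheduling/ramping
# differences mentioned above.
```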

They used the M2 Max's ST probably because they knew they could opportunistically compete with it, and because it represented Apple's absolute best ST on benchmarks by 5-10%. Plausibly also because it has a bit more power overhead (just slightly), which makes an iso-performance efficiency comparison more favorable. Their point is that they can indeed match Apple's best and beefiest on ST CPU performance and efficiency. This graphic was paired with another ST comparison against Intel's 13980HK (I forget exactly which one it was, but a 13th-generation 5.6 GHz chip), where they matched it with 65% less power iso-performance. Those were the two ST perf/W comparisons they gave.

ST performance per watt is a big deal for mobile products even in this vein, and the comparison made sense.


1: The die on the X Elite is around 170 mm^2 per Charlie and SemiAccurate (smaller even than the 178 mm^2 7840U/HS). It's an M2- or 7840U-class chip on the cost-structure front in terms of the 128-bit bus and die size, but its CPU performance can substantially beat both while still stacking up well on efficiency, and most likely idle power too vs AMD.

First and foremost it's a competitor for AMD's and Intel's mainstream monolithic dies and the M2/M3 - and yes, Apple actually does sell the M3 in a beefier, better-cooled profile, as shown with the 14" MBP M3 for $1,599, and soon the Mac Mini - before someone says "but that's only for iPads/Airs" (no, not exclusively).

So people saying "it loses to the M3 Pro/Max by a hair or a lot" are kind of missing the plot, I think, and I understand why they paired an M2 Max ST comparison with an M2 MT comparison. They are just opportunistically saying they can compete with the best that was out at the time of the presentation on at least one thing. And to the extent someone says: hey, the M2 Max wins by a hair on MT? Or the M3 Max blows it out now?

So what - X Elite systems won't cost as much, and they aren't weighed down by the bus and GPU area Apple has; it's not going for that market. It's more like a vastly better version of AMD's Phoenix, tbh.
 

SpudLobby

Golden Member
May 18, 2022
Now that we have M3 scores, I have to reiterate my confusion about Qualcomm's claims regarding Oryon's efficiency. They said that Oryon can match peak M2 Max performance while consuming 30% lower power. Since an M2 Max core consumes about 5.5 watts running at 3.7 GHz, that would put Oryon at roughly 4 watts for the same performance.

With this in mind, let's look at the Cinebench 2024 scores. Oryon gets 1230 points, which is a few hundred points below what I'd expect from 12x M2 Max-like cores. At any rate, based on the power efficiency claims each Oryon core should draw under 4 watts in this scenario, right? So we should be looking at 50 watts plus a few additional watts for uncore/RAM, but Qualcomm's own slides show power consumption closer to 80 watts. How does that work out? Meanwhile the M3 Max, which on paper is not any more efficient than Oryon (and I know for a fact that Apple's 3 nm CPU core is not 30% more efficient than the M2 core at 3.7 GHz, since I measured it myself), manages significantly higher performance while running at considerably lower power. Something is off here: either my math, or Qualcomm's claims.

They said that Oryon can match peak M2 Max performance while consuming 30% lower power. Since an M2 Max core consumes about 5.5 watts running at 3.7 GHz, that would put Oryon at roughly 4 watts for the same performance.
Did you measure that 5.5 W with powermetrics in the terminal? Two things, assuming so.

I) That's only somewhat accurate, and you'd need a proper sampling rate over the course of a workload.

II) You want the whole package power + the DRAM + the power-delivery stuff, and this doesn't measure that. When Andrei measured the X Elite vs the M2 Max and did all these power measurements, he measured the entire platform (not the screen, but the package incl. SoC + DRAM + power delivery) minus the idle draw. This is what you actually want to know about an SoC. Not just core power, or even just package power (sans DRAM), and you want at least a decent measurement tool and methodology.

So this isn't really helpful: you're not dealing with especially accurate measurements, and on top of that you're not including other contributions to power draw that are dependent variables of the SoC design and final platform.

His quotes on the measurements and the matter:

"all figures here are btw complete platform power minus statics"

"your 35 package will do 50w platform" I believe referring to again the whole thing with the RAM + power delivery stuff.
 

qmech

Member
Jan 29, 2022
The other thing will be idle power draw and very-low-power draw, and what that looks like for battery life. It's not just about the dynamic draw but about what the chip is sapping when you're doing mixed or lighter loads. It won't be as good as Apple's, but can they beat a 7840U and get close enough? My bet is yeah.

There seems to be a problem with PCIe Active State Power Management on some (most?) Zen 4-based laptops, which increases idle power draw quite dramatically. With this fixed, idle power draw is reported to be below 3 W with the display off (for the whole notebook), with some (e.g. the HP EliteBook 845 G10) very close to 2 W. Comparing similar Lenovo Yoga Slim notebooks gives AMD-based models a slight edge over Intel-based models.

It should also be noted that Qualcomm's offering won't actually be available in notebooks that you can buy until mid-2024 at the earliest. That puts it close to the next generations of Apple/AMD/Intel.