Discussion Qualcomm Snapdragon Thread


FlameTail

Diamond Member
Dec 15, 2021
Taking a look back at the leaks of Hamoa

First leak (November 2022), 20 months before the X Elite's release:
In other news: Qualcomm's working on a 2024 desktop chip codename "Hamoa" with up to 12 (8P+4E) in-house cores (based on the Nuvia Phoenix design), similar mem/cache config as M1, explicit support for dGPUs and performance that is "extremely promising", according to my sources.
Second leak (January 2023), 17 months before the X Elite's release:
First of all - the CPU.
As I previously leaked, the highest model of Hamoa has 8 performance cores and 4 power efficient ones. Qualcomm is testing the chip at ~3.4GHz (performance cores) and ~2.5GHz (efficient cores)
The efficiency cores bit is wrong. The X Elite has only performance cores, 12 of them, and yes, the all-core clock is 3.4 GHz (except in the X1E-84, which runs at 3.8 GHz).
Each block of 4 cores has 12MB of shared L2 cache. There is also 8MB of L3 cache.
Additionally there is 12MB of system-level cache, as well as 4MB of memory for graphics use cases.
The L2 cache amount is correct. The L3 cache is wrong: there is no L3 cache. The system-level cache is 6 MB.
As for RAM, the integrated controller supports up to 64GB of 8-channel LPDDR5x with optional low-power features at up to 4.2GHz
Correct.
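As a sanity check, those numbers line up with the shipping part. A rough back-of-the-envelope, assuming 16-bit LPDDR5X channels and the X Elite's 8448 MT/s transfer rate (roughly the leak's "4.2 GHz" doubled):

```python
# Back-of-the-envelope LPDDR5X bandwidth for the leaked Hamoa config.
# Assumed (not in the leak): 16-bit channels, 8448 MT/s transfer rate.
channels = 8              # leaked channel count
channel_width_bits = 16   # typical LPDDR5X channel width
transfer_rate_mts = 8448  # MT/s; ~2x the leak's "4.2 GHz" clock

bus_width_bytes = channels * channel_width_bits // 8        # 16 bytes
bandwidth_gbs = bus_width_bytes * transfer_rate_mts / 1000  # GB/s

print(f"{bus_width_bytes}-byte bus -> {bandwidth_gbs:.1f} GB/s")
# -> 16-byte bus -> 135.2 GB/s, matching the X Elite's advertised ~135 GB/s
```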
The integrated GPU is Adreno 740 - the same one as in Snapdragon 8 Gen 2. Qualcomm will be providing support for DirectX 12, Vulkan 1.3, OpenCL as well as DirectML
Kinda correct, I suppose? The GPU in the X Elite identifies itself as the Adreno 741. It's based on the Adreno 740: same number of ALUs, but running at 1.5 GHz (twice the 740's clock speed).
The 12 core SKU also provides support for discrete GPUs over its 8 lanes of PCIe 4.0.

There are also 4 lanes (configurable as 2x2) of PCIe 4.0 for NVMe drives, as well as some PCIe 3.0 lanes for the WiFi card and the modem.
Correct.
The WiFi subsystem will support WiFi 7.

As for the modem, Qualcomm seems to be recommending (external) X65.
Correct.
For integrators not wanting to use NVMe for the boot drive, Qualcomm included a 2-lane UFS 4.0 controller with support of up to 1TB parts
Correct. The Samsung Galaxy Books are the only laptops to feature UFS storage so far.
Qualcomm has also updated their Hexagon Tensor Processor to provide up to 45 TOPS (INT 8) of AI performance
Absolutely correct!
The chip also provides ample user-accessible IO: two USB 3.1 10Gbps ports, as well as three USB 4 (Thunderbolt 4) ports with DisplayPort 1.4a.
Correct.
The video encode/decode block has also seen great improvements:
The chip can decode up to 4K120 and encode up to 4K60, including AV1 in both cases
Correct!

Overall, I am surprised by how many things the leak got right (although there were several misses)!

Why did I write this? Because we just got the first leak of the next generation of Snapdragon X. We are 16-22 months away from its release (if the leak's claim of a 2026H1 launch is to be believed).
 

POWER4

Member
May 25, 2024
Or perhaps it's not as simple as that. Perhaps they are rebalancing between CPU and GPU.

If you look at the X Elite, this is what you'll see:
The CPU is extremely strong (M Max class), combined with a weak GPU (base M class).

So if we look at it CPU-wise:

Purwa -> Mahua
8L -> 6L+6M

Hamoa -> Glymur
12L -> 12L+6M

It is reasonable to say that Mahua is the Purwa successor, and Glymur is the Hamoa successor.

So CPU-wise, Mahua would vie with the M5, and Glymur would vie with the M5 Max.

Now if we look at it GPU-wise, Mahua would indeed appear to be the Hamoa successor. Both have 128-bit memory buses, and the GPU performance would be base-M-chip class.

Glymur has a 50% wider memory bus, and presumably a meatier GPU to go along with it. I'd expect that GPU to graduate to M Pro class.

GPU-wise, Mahua would vie with the M5, and Glymur with the M5 Pro.
It is all very clear.

Regarding Glymur, what is the biggest difference between M Pro and Max? I bet Tigerick's source would have more to say about the other blocks.
 

FlameTail

Diamond Member
Dec 15, 2021
AFAIK, Canim is dead.
Well if that is the case, I would not be surprised. I always found Canim to be quite strange, considering the release timing and all.

Ming-Chi Kuo:
Additionally, Qualcomm plans to launch a low-cost WOA processor codenamed Canim for mainstream models (priced between $599–799) in 4Q25. This low-cost chip, manufactured on TSMC’s N4 node, will retain the same AI processing power as the X Elite and X Plus (40 TOPS).
Considering that Hamoa-based Snapdragon X Plus laptops are selling for as low as $900, I see no reason why Purwa shouldn't serve the $599-$799 segment.

Then where would Canim fit in this picture? The most likely possibility is that it would've been an entry-level part for <$500 Chromebooks and WoA laptops.
 

FlameTail

Diamond Member
Dec 15, 2021
Improvements that X Elite G2 might/could/should bring:

- 3nm process node
- LPDDR5X-10700/LPDDR6
- ARMv9 / SVE2 / SME
- E-cores on CPU
- Adreno 800 series GPU
- 100+ TOPS NPU
- PCIe Gen 5
- USB 80 Gbps
- 8K external display support
- Stronger VPU (8K encode/decode)
How many of those are coming?

Leak:
- LPDDR5X (but speed not specified).
- 'E cores'

Lawsuit document hinted:
- ARMv9 (but dunno about SVE2/SME support)

Can be taken for granted:
- 3nm
- Adreno 800 series based GPU
- 100+ TOPS NPU

Unsure:
- PCIe G5
- USB 80 Gbps
- 8K external display support
- 8K encode/decode (rough pixel-rate math below)
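For scale on that last item, here's the rough pixel-rate math (resolution x frame rate only; it ignores codec, bit depth and chroma format, so treat it as a floor):

```python
# Pixel-rate comparison: confirmed X Elite VPU limits vs a speculative
# 8K60 target for G2. Resolution x frame rate only.
def mpix_per_s(w, h, fps):
    return w * h * fps / 1e6

decode_now = mpix_per_s(3840, 2160, 120)  # 4K120 decode (confirmed)
encode_now = mpix_per_s(3840, 2160, 60)   # 4K60 encode (confirmed)
target = mpix_per_s(7680, 4320, 60)       # 8K60 (speculative)

print(f"8K60 vs 4K120 decode: {target / decode_now:.1f}x")  # 2.0x
print(f"8K60 vs 4K60 encode:  {target / encode_now:.1f}x")  # 4.0x
```

So 8K60 would mean roughly doubling decode throughput and quadrupling encode throughput over the current block.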
 

FlameTail

Diamond Member
Dec 15, 2021
I wonder if the Oryon Team considered using 8-core clusters for Mahua and Glymur.

It could have been:
Mahua : 8P+4E
Glymur : 8P+8P+4E

I have a feeling that Apple might upgrade to 8P clusters in the M5 generation, and it would put Glymur on a stronger footing against Strix Halo/Fire Range and Intel's HX parts.
 

poke01

Platinum Member
Mar 8, 2022
AFAIK, NV and Intel support 4:4:4 for H265 for at least the last few gens, but AMD does not. Not sure about Apple.
Apple supports 4:4:4 as well, for 8-bit and 10-bit.

Edit: I'll have to look into it, not too sure.
 

FlameTail

Diamond Member
Dec 15, 2021
I wonder what the GPU size difference between Mahua and Glymur will be. A 50% wider memory bus means the GPU could be 50% larger, but it could also be more than that. Case in point:

M3 -> M3 Pro
128b -> 192b
10 core GPU -> 18 core GPU.
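Quick math on those Apple numbers:

```python
# Apple M3 -> M3 Pro: the GPU grew faster than the memory bus
# (figures from Apple's public specs).
bus_bits = {"M3": 128, "M3 Pro": 192}
gpu_cores = {"M3": 10, "M3 Pro": 18}

bus_growth = bus_bits["M3 Pro"] / bus_bits["M3"] - 1    # +50%
gpu_growth = gpu_cores["M3 Pro"] / gpu_cores["M3"] - 1  # +80%

print(f"bus: +{bus_growth:.0%}, GPU cores: +{gpu_growth:.0%}")
```

So a 50% wider bus on Glymur could plausibly feed a GPU that is more than 50% larger.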
 

FlameTail

Diamond Member
Dec 15, 2021
Does it make sense for Qualcomm to scale the NPU for their next generation of Snapdragon X chips?

This question was asked in the Apple thread, and name99 said this notable thing:
The other alternative is that people are starting to use the NPU for genuine, performance-sensitive work (eg as part of their standard Adobe or Premier type workflows) in which case it makes sense that you can buy a faster workflow. I don't *think* we are there yet (the workflows are starting to exist, but for whatever reasons they mostly still run on GPU). But maybe in a few years?
In the Windows ecosystem, we are already seeing some applications take advantage of the NPU (examples: the DaVinci Resolve video editor, the djay Pro DJ software...).
 

FlameTail

Diamond Member
Dec 15, 2021
I wonder if the Oryon Team considered using 8-core clusters for Mahua and Glymur.

It could have been:
Mahua : 8P+4E
Glymur : 8P+8P+4E

I have a feeling that Apple might upgrade to 8P clusters in the M5 generation, and it would put Glymur on a stronger footing against Strix Halo/Fire Range and Intel's HX parts.
Also, wouldn't using 8P clusters be beneficial for application performance? Most programs scale up to 8 cores and few go beyond that, so it helps to keep those 8 cores within a single cluster.

This argument is used with Intel/AMD chips all the time:
• Used to justify why it makes perfect sense for Intel's desktop CPUs to have only 8 P-cores and to use E-cores to scale MT performance beyond that.
• Used to argue that an 8-core Ryzen is enough for most people, including gamers.
• Used as a point of criticism against Strix Point, which has a cluster of only 4 Zen 5 cores, meaning applications that use 8 cores will spill over into the Zen 5c cluster.

Surely, this argument applies to ARM CPUs too?
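To make the shape of that argument concrete, here's a toy model: assume linear scaling inside a cluster and a fixed efficiency penalty for threads that spill into a second cluster. The 15% penalty is invented purely for illustration:

```python
# Toy model of the "cluster spill" argument (penalty value is made up).
def relative_throughput(threads, cluster_size, spill_penalty=0.85):
    in_cluster = min(threads, cluster_size)
    spilled = max(threads - cluster_size, 0)
    return in_cluster + spilled * spill_penalty

for n in (4, 6, 8):
    four_wide = relative_throughput(n, cluster_size=4)   # Strix Point-like
    eight_wide = relative_throughput(n, cluster_size=8)  # hypothetical 8P cluster
    print(f"{n} threads: 4-core cluster {four_wide:.2f} vs 8-core cluster {eight_wide:.2f}")
# Under these assumptions the 8-core cluster only pulls ahead once an app
# uses more than 4 threads, which is exactly the common 8-thread case.
```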
 

soresu

Diamond Member
Dec 19, 2014
Should bolstering the decode/encode capabilities be on Qualcomm's priority list for X Elite G2?
Short answer: no.

That's a problem for another day, when there are more stable video editing apps on WoA.

At the end of the day, all of this can be done in software; it's just not going to be as power efficient as running it on an ASIC.

As noted previously, Qualcomm are still targeting that lowest common denominator in the market to get the ball rolling in WoA's direction.

8 and 10 bpc for HEVC/H.265 + AV1, and 8 bpc for AVC/H.264, still cover the vast majority of video content out there (not counting VP9, which isn't even mentioned on that graph despite still being the high-res codec for the majority of YouTube videos).

Comprehensive video ASIC support will come eventually for DCC use; I just wouldn't hold my breath on it coming immediately after SDXE gen 1.

The other Twitter guy mentioned niche workflows, while Apple ProRes remains unsupported by most APU SoC ODMs other than Apple themselves.

Given the amount of camera hardware out there and the video editing or compositing software that supports ProRes, the format can't even be classified as truly niche, which makes the codec ASIC situation generally bad across the industry.
 

Doug S

Platinum Member
Feb 8, 2020
Does it make sense for Qualcomm to scale the NPU for their next generation of Snapdragon X chips?

This question was asked in the Apple thread, and name99 said this notable thing:

In the Windows ecosystem, we are already seeing some applications take advantage of the NPU (examples: the DaVinci Resolve video editor, the djay Pro DJ software...).

Windows has different priorities. There isn't "a GPU" like there is on Apple Silicon. They might have Intel, might have AMD, might have Nvidia. So they are already having to write multiple paths. It makes sense to add another path for the NPU, especially since the systems that (currently) have an NPU are the ones more likely to have an integrated GPU that may perform fairly poorly at NPU-like tasks.
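For a concrete example of those "paths": ONNX Runtime on Windows lets an app rank execution providers and fall through to whatever is present. A minimal sketch ("model.onnx" is a placeholder, and in practice the QNN provider also needs Qualcomm's backend libraries and provider options):

```python
import onnxruntime as ort

preferred = [
    "QNNExecutionProvider",  # Qualcomm Hexagon NPU path
    "DmlExecutionProvider",  # DirectML path: any DX12 GPU (Intel/AMD/NVIDIA)
    "CPUExecutionProvider",  # last-resort fallback
]
# Only request providers this onnxruntime build actually ships with.
available = ort.get_available_providers()
session = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=[p for p in preferred if p in available],
)
print(session.get_providers())  # shows which path was actually picked
```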

For Apple Silicon, every Mac has an NPU, but it also has an Apple GPU that's likely higher performance than Apple's NPU, since the latter has (at least so far) been designed more for power savings than for performance. Moreover, as you step up the Apple Silicon line you get more GPU at every step, but the NPU has (again, at least so far) stayed the same size. So it seems like it would be a waste to code a special path for the NPU on Macs today: even if it were slightly faster on, say, the base model, that's the only place where that code path would make sense.
 
Jul 27, 2020
Windows has different priorities. There isn't "a GPU" like there is on Apple Silicon. They might have Intel, might have AMD, might have Nvidia. So they are already having to write multiple paths. It makes sense to add another path for the NPU, especially since the systems that (currently) have an NPU are the ones more likely to have an integrated GPU that may perform fairly poorly at NPU-like tasks.
Not the case at least for Meteor Lake:

[attached benchmark: Meteor Lake NPU vs GPU, same framework]

GPU is better when using the same framework, though it will obviously eat more power.

Not sure what you mean by "paths".