
Discussion Qualcomm Snapdragon Thread

Page 114
The probability is non-zero.

Snapdragon 8G5 will supposedly be dual sourced between Samsung Foundry (N2) and TSMC (N3P). We can guess that the IP of 8G5 and X Elite G2 are related...
 
I have no idea if it's actually real; I just saw someone post it on Discord claiming Snapdragon X, hence why I put "allegedly".
Interestingly this odd main memory latency was also previously reported by Notebookcheck.

I guess AIDA64 results on Snapdragon have to be considered with a huge grain of salt until this is corrected.
 
Okay I came across this leak about 8G4 from a while ago. I am not going to say their name, but they were a credible leaker.
Compared to 8G3, AI/DSP compute performance has almost doubled. The GPU model name is Adreno 830.
The 8G3 NPU is 45 TOPS according to the same source.

So if 8G4 is almost double... are we looking at ~90 TOPS?

That's a remarkable increase, but what I find even more remarkable is that they are doing this within presumably the same memory bandwidth envelope. Because if 8G4 isn't upgrading to LPDDR6, then it will have to stick to the same LPDDR5X-9600 as 8G3.

8G3 : LPDDR5X-9600 -> 76 GB/s

I suppose the 45 TOPS NPU does not saturate the memory bus fully?
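
The 76 GB/s figure above follows directly from the interface width. A quick sketch, assuming a 64-bit bus as is typical for phone SoCs (the exact channel configuration is my assumption, not confirmed by the leak):

```python
# Back-of-the-envelope check: theoretical peak bandwidth of a
# 64-bit LPDDR5X-9600 interface (assumed 8G3-like configuration).
def peak_bandwidth_gbs(transfer_rate_mts: int, bus_width_bits: int) -> float:
    """Peak DRAM bandwidth in GB/s: transfers per second * bytes per transfer."""
    return transfer_rate_mts * 1e6 * (bus_width_bits / 8) / 1e9

print(peak_bandwidth_gbs(9600, 64))  # -> 76.8, matching the ~76 GB/s quoted above
```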

Wish Chips&Cheese would test this, but they said they can't.
 
I'm not sure, but I don't think an NPU eats that much bandwidth. We don't even know if those TOPS are measured with INT4, INT8, or INT16.
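
The precision question matters a lot: the same MAC array can typically issue twice as many ops per cycle at half the width, so vendors often quote TOPS at the narrowest supported precision. A toy illustration with simple linear scaling (real hardware varies, and the 45 TOPS baseline precision is not stated in the leak):

```python
# Illustrative only: scale a quoted TOPS figure between integer precisions,
# assuming ideal 2x throughput per halving of operand width.
def tops_at_precision(base_tops_int8: float, bits: int) -> float:
    """Scale an INT8 TOPS figure to another integer precision."""
    return base_tops_int8 * (8 / bits)

for bits in (16, 8, 4):
    print(f"INT{bits}: {tops_at_precision(45, bits)} TOPS")
```

Notably, under this naive scaling a 45 TOPS INT8 figure becomes 90 TOPS at INT4, which is one mundane way the "almost double" claim could come about without any actual hardware uplift.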
 
As per leaks, the Snapdragon 8 Gen 4, which will be released in a few months, will have a more powerful GPU/NPU than the X Elite.

Funny 😛
 
X Elite + LPDDR5X-7500???
Well, yeah, cheaper PCBs.
Why not? The iGP is useless and nothing else needs the bandwidth.
As per leaks, the Snapdragon 8 Gen 4, which will be released in a few months, will have a more powerful GPU/NPU than the X Elite.
You need the membw for either of those things, and unless QC bumps the SLC to AAPL levels, no bueno.
 
As per leaks, the Snapdragon 8 Gen 4, which will be released in a few months, will have a more powerful GPU/NPU than the X Elite.

Funny 😛
Peak performance really doesn’t matter for this case. What matters is the efficiency and feature set, e.g. frame interpolation.

It's pathetic, really, but that's because the 740 is too small and ill-suited for PC games, along with bad drivers. And Qualcomm is not going to do an insanely beefy GPU, which I know you keep hammering on about lol. We'll just see a mostly higher-clocked 830 for X2.
 
And Qualcomm is not going to do an insanely beefy GPU, which I know you keep hammering on about lol. We'll just see a mostly higher-clocked 830 for X2.
Yeah, and it would be a wasted effort: even if they did something grand, the crappy drivers would never allow it to reach its full potential.
 
Interestingly, a former AMD Radeon/RDNA engineer has joined Qualcomm's Adreno team:


It looks like Qualcomm is working on adding desktop-class features such as work graphs, etc., which is cool.

All those changes being made to make it a proper desktop-class architecture will be wasted if Qualcomm doesn't put a fat GPU in their X Elite G2.

By 'fat GPU', I am asking for them to simply put in double the GPU that is in the Snapdragon 8G4 or 8G5 mobile SoC. Not something absurdly beefy like the M3 Max. Even Apple does it for the base M chips:

A14 (4-core GPU) -> M1 (8-core GPU)
A15 (5-core GPU) -> M2 (10-core GPU)
A17 (6-core GPU) -> M3 (12-core GPU).
 
Interestingly, a former AMD Radeon/RDNA engineer has joined Qualcomm's Adreno team:


It looks like Qualcomm is working on adding desktop-class features such as work graphs, etc., which is cool.

All those changes being made to make it a proper desktop-class architecture will be wasted if Qualcomm doesn't put a fat GPU in their X Elite G2.

By 'fat GPU', I am asking for them to simply put in double the GPU that is in the Snapdragon 8G4 or 8G5 mobile SoC. Not something absurdly beefy like the M3 Max. Even Apple does it for the base M chips:

A14 (4-core GPU) -> M1 (8-core GPU)
A15 (5-core GPU) -> M2 (10-core GPU)
A17 (6-core GPU) -> M3 (12-core GPU).
Late 2024/early 2025 should be a fun time for iGPUs. Hopefully the QC GPU in X Elite 2 is on par with Lunar Lake.
 
A comment with some supply chain info + speculation:
Yeah, the part about "hurting" AMD/Intel with Purwa is pure speculation. Depends on:

How quickly the NPU becomes a necessity (it needs some killer software that even grandma wants to use).

AMD already HAS an NPU with 50 TOPS. If need be, they could release a cheap chiplet-based 4-core CPU. Not sure how quickly Intel can adapt to a sudden change in market dynamics, unless they have a high-volume, low-cost LNL-M part that doesn't sacrifice NPU TOPS.
 
Yeah, the part about "hurting" AMD/Intel with Purwa is pure speculation. Depends on:

How quickly the NPU becomes a necessity (it needs some killer software that even grandma wants to use).

AMD already HAS an NPU with 50 TOPS. If need be, they could release a cheap chiplet-based 4-core CPU. Not sure how quickly Intel can adapt to a sudden change in market dynamics, unless they have a high-volume, low-cost LNL-M part that doesn't sacrifice NPU TOPS.
It also raises the question of when Canim will be on the market. It was already a "late to the party" part back when the high-end parts were still expected to launch much earlier. With the delayed launch/mass volume of Hamoa, very little noise regarding Purwa, and now the heat coming from AMD and Intel, I can't see this coming soon enough to be worth prioritizing over other stuff.
 
Interestingly, a former AMD Radeon/RDNA engineer has joined Qualcomm's Adreno team:
Not that interesting really.

Like the µArch engineers who design the hardware itself, the software engineers working on gfx drivers tend to drift around the industry.

In the case of AMD/Qualcomm, it's practically keeping it in the family, considering Adreno started out as ATi/AMD's Imageon during the VLIW µArch era.

The Adreno brand name itself is an anagram of Radeon.
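
For anyone curious, it's a one-liner to verify:

```python
# Quick sanity check of the anagram claim: same letters, rearranged.
assert sorted("adreno") == sorted("radeon")
print("anagram confirmed")
```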
 