Discussion Qualcomm Snapdragon Thread

Page 192

StinkyPinky

Diamond Member
Jul 6, 2002
6,985
1,283
126
Really need real-world stuff. Yeah, the Geekbench is very impressive, the Cinebench reasonably good. The GPU looks much improved.

But these are all synthetic benchmarks. How does it do in Photoshop, playing a game, editing a video/photo, or opening large spreadsheets? The stuff people actually use their PCs for.

Also, since that's their top-tier chip, you would feel like it should at least be compared to an M4 Pro if not higher....
 

naukkis

Golden Member
Jun 5, 2002
1,030
854
136
Damn, that X2 is good. So now the best-performing CPUs are ARM, from two different vendors, and the gap to the best-performing x86 competitors is huge. How long will x86 survive?
 

Doug S

Diamond Member
Feb 8, 2020
3,708
6,554
136
naukkis said:
Damn, that X2 is good. So now the best-performing CPUs are ARM, from two different vendors, and the gap to the best-performing x86 competitors is huge. How long will x86 survive?

x86 will be fine, because that's what the Windows world is standardized on. How many people do you think buy laptops based purely on CPU performance, without regard to any other factors such as price or software/OS compatibility?
 

naukkis

Golden Member
Jun 5, 2002
1,030
854
136
Doug S said:
x86 will be fine, because that's what the Windows world is standardized on. How many people do you think buy laptops based purely on CPU performance, without regard to any other factors such as price or software/OS compatibility?

People need speed, good battery life, and for their stuff to just work. Everything that needs speed is already ARM-native, and emulation works well enough that the rest runs fine too. There's pretty much no need to restrict yourself to native x86 CPUs; just get the best one for your current use case. And if the finest CPUs aren't x86, there's pretty soon no point in wasting money designing high-performance x86 mobile/desktop products.
 

Doug S

Diamond Member
Feb 8, 2020
3,708
6,554
136
naukkis said:
People need speed, good battery life, and for their stuff to just work. Everything that needs speed is already ARM-native, and emulation works well enough that the rest runs fine too. There's pretty much no need to restrict yourself to native x86 CPUs; just get the best one for your current use case. And if the finest CPUs aren't x86, there's pretty soon no point in wasting money designing high-performance x86 mobile/desktop products.

Are you basing this on single-core or multi-core? Qualcomm didn't release any TDP info (does no one else find that suspicious?), and that article was written based on benchmarks chosen by Qualcomm, in laptops built by Qualcomm (well, by some OEM/ODM to Qualcomm's specs) that would have been designed with the best possible cooling.

It's pretty clear Qualcomm is pushing the edge of single-core power, so how much power do you think running 18 cores is going to draw? For all we know, the right MT comparison might not even be HX laptops but a DTR laptop.
 

naukkis

Golden Member
Jun 5, 2002
1,030
854
136
Doug S said:
Are you basing this on single-core or multi-core? Qualcomm didn't release any TDP info (does no one else find that suspicious?), and that article was written based on benchmarks chosen by Qualcomm, in laptops built by Qualcomm (well, by some OEM/ODM to Qualcomm's specs) that would have been designed with the best possible cooling.

It's pretty clear Qualcomm is pushing the edge of single-core power, so how much power do you think running 18 cores is going to draw? For all we know, the right MT comparison might not even be HX laptops but a DTR laptop.

So we might have the first ARM CPU to fully exploit turbo the way every x86 CPU does. But that doesn't matter at all; performance is so much better than unrestricted x86 that I do want an X2 Elite in my Windows desktop for best performance.
 

Raqia

Member
Nov 19, 2008
123
86
101
Doug S said:
Are you basing this on single-core or multi-core? Qualcomm didn't release any TDP info (does no one else find that suspicious?), and that article was written based on benchmarks chosen by Qualcomm, in laptops built by Qualcomm (well, by some OEM/ODM to Qualcomm's specs) that would have been designed with the best possible cooling.

It's pretty clear Qualcomm is pushing the edge of single-core power, so how much power do you think running 18 cores is going to draw? For all we know, the right MT comparison might not even be HX laptops but a DTR laptop.
~55W for MT, ~18W for ST (power/performance charts attached).
 

DZero

Golden Member
Jun 20, 2024
1,765
671
96
This leaves us with...
- How badly Intel is doing
- How AMD carries x86
- And even GPU-wise, AMD has a very STRONG competitor that won't need any dGPU.
- How strong Apple is with macOS
- How badly Microsoft is handling ARM on WoA. And also, knowing that this helps Intel more than AMD shows how bad Microsoft is.
 

Meteor Late

Senior member
Dec 15, 2023
343
379
96
The reality is, compatibility will always be the most important factor. There is a point where performance is already enough for the vast majority of tasks, and yes, people do care about battery life, but not at the cost of compatibility. Again, if x86 can be good enough on that front - Lunar Lake is, and I suspect Panther Lake or Ryzen on 3nm or 2nm will also reach the good-enough category - then the difference in performance or even battery life just isn't big enough to justify worrying about whether my app will run, whether drivers will be an issue, etc. It's just the harsh reality.
The only way it could take off would be a combination of:
- The gap between ARM and x86 gets even bigger than it is now
- Microsoft gets more serious about ARM, translation gets better (especially so that almost any app translates without errors), more apps than expected get ported to ARM, GPU drivers improve more than expected, etc.
Still, the biggest issue is probably not apps but drivers, for things like printers and the like.
 

hemedans

Senior member
Jan 31, 2015
292
165
116
DZero said:
This leaves us with...
- How badly Intel is doing
- How AMD carries x86
- And even GPU-wise, AMD has a very STRONG competitor that won't need any dGPU.
- How strong Apple is with macOS
- How badly Microsoft is handling ARM on WoA. And also, knowing that this helps Intel more than AMD shows how bad Microsoft is.
Intel has its own issues, but at least in this market they carry x86. Lunar Lake is a good low-power system, with battery life comparable to ARM and good performance even below 10W.
 

soresu

Diamond Member
Dec 19, 2014
4,168
3,643
136
DZero said:
How badly Microsoft is handling ARM on WoA. And also, knowing that this helps Intel more than AMD
How does that help Intel more than AMD?

At the end of the day, if more people are yeeting Windows PCs for ARMacs, then it doesn't matter which half of the x86 duopoly is doing better.
 

naukkis

Golden Member
Jun 5, 2002
1,030
854
136
How much has Qualcomm invested in their CPU program? Is it time to finally say that ISA matters more for performance and CPU development than Intel and AMD want to acknowledge, or are both Intel and AMD just piss-poor CPU designers?
 

StinkyPinky

Diamond Member
Jul 6, 2002
6,985
1,283
126
naukkis said:
How much has Qualcomm invested in their CPU program? Is it time to finally say that ISA matters more for performance and CPU development than Intel and AMD want to acknowledge, or are both Intel and AMD just piss-poor CPU designers?

I honestly think ARM is just a superior architecture for this level of mobile computing. I think Qualcomm is heavily invested because they want to be to Windows what the M series is to macOS.

It would not surprise me at all if by 2035 all retail-level computers are ARM-based.
 

Joe NYC

Diamond Member
Jun 26, 2021
3,864
5,399
136
naukkis said:
How much has Qualcomm invested in their CPU program? Is it time to finally say that ISA matters more for performance and CPU development than Intel and AMD want to acknowledge, or are both Intel and AMD just piss-poor CPU designers?

Qualcomm paid $1.4 billion for Nuvia. Nuvia spent a bunch of money developing IP, and may have stolen some IP from Apple.

It's not like Qualcomm created all this in a year or two...
 

naukkis

Golden Member
Jun 5, 2002
1,030
854
136
Joe NYC said:
Qualcomm paid $1.4 billion for Nuvia. Nuvia spent a bunch of money developing IP, and may have stolen some IP from Apple.

It's not like Qualcomm created all this in a year or two...

But they did. Why can't AMD or Intel do the same, if the problem isn't x86 itself?
 

Tigerick

Senior member
Apr 1, 2022
900
828
106
The M4 in the iPad is more thermally limited than the one in MacBooks, where it can get close to 4000 in GB6 ST so just over 10% more than A18 Pro. And Apple tends to keep single threaded power consumption lower than competitors even in its laptops.

But yeah, that'd still have X Elite 2 land far from 5000 (maybe topping out at 4500, maybe lower). Just this should be the first time we see Oryon on equal nodes (and you'd assume equal arch) on both the phone and laptop platform, so we can't quite predict what kinda gap there will be between the two yet.
[Charts: X2 Elite Extreme Geekbench 6 ST and MT power/performance curves]

Now I hope you realize how stupid your argument was. As with any SoC, Qualcomm has to decide how to distribute power across all CPU cores: the more cores, the more the power has to be divided among them. To sustain the MT max frequency (4.4GHz), the X2-EE has to draw up to 50W, as shown in the charts above. Apple's M4 Pro has about a 40W TDP, so clearly Qualcomm aims to scale higher with more cores (18 vs 14) and higher clock speeds at the expense of power.

To sustain the 5GHz boost clock, Qualcomm has to use up to 18W, not 50W. The relative performance curve is almost flat by 18W; at that point the core has hit its limit for the design and node.
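For a rough sense of those numbers, here's a quick back-of-the-envelope sketch using only the figures quoted in this post (~50W sustained across 18 cores at 4.4GHz, ~18W for a single core at the 5GHz boost point). The per-core split ignores uncore/fabric power, so treat it as illustrative only, not a measurement.

```python
# Back-of-the-envelope per-core power split, using only the figures quoted above.
# Ignores uncore/fabric power (which the ~50W package number also includes),
# so these are illustrative estimates, not measurements.

mt_package_w = 50.0   # ~50W to sustain the 4.4 GHz all-core frequency
cores = 18            # X2 Elite Extreme core count
st_core_w = 18.0      # ~18W for a single core at the 5 GHz boost point

per_core_mt_w = mt_package_w / cores
ratio = st_core_w / per_core_mt_w
clock_gain_pct = (5.0 - 4.4) / 4.4 * 100

print(f"~{per_core_mt_w:.1f} W per core at the 4.4 GHz all-core point")
print(f"~{st_core_w:.0f} W for one core at the 5 GHz boost point")
print(f"The ST boost point spends ~{ratio:.0f}x the per-core MT budget "
      f"for ~{clock_gain_pct:.0f}% more clock")
```

Which is the flat-curve point in a nutshell: the last few hundred MHz of boost cost several times the per-core budget that the all-core frequency gets.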
 

Joe NYC

Diamond Member
Jun 26, 2021
3,864
5,399
136
Qualcomm announced their plans for AI inference servers using their Hexagon NPU architecture today. Details are light, but it was enough to juice the stock by >10% :rolleyes:


Qualcomm announces an NPU on PCIe card, calls it AI, uses a picture of a rack (instead of the PCIe card), stock jumps 20% this morning, or $50 billion in market cap.

Apparently, the current version of the same PCIe card kind of sucks, and QCOM is giving it away for free to some potential customers.

Definitely feels like a bubble...
 

Raqia

Member
Nov 19, 2008
123
86
101
Joe NYC said:
Qualcomm announces an NPU on PCIe card, calls it AI, uses a picture of a rack (instead of the PCIe card), stock jumps 20% this morning, or $50 billion in market cap.

Apparently, the current version of the same PCIe card kind of sucks, and QCOM is giving it away for free to some potential customers.

Definitely feels like a bubble...

They have a differentiated offering that makes a lot of sense for inferencing and TCO. Efficient inferencing is different from training: training requires higher-precision compute and is latency-tolerant, while deployed production models are used continuously on the host inferencing hardware and are more latency-sensitive.

Using cheaper, commodity LPDDR instead of HBM for 4x higher memory capacity per card makes sense both for latency with large models and for multi-model tenancy. An architecture that isn't graphics-oriented saves die space and allows more flexibility for data flows than a graphics pipeline (with its ROPs and TMUs), reducing the energy used in data movement.
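To make the capacity-versus-bandwidth tradeoff concrete, here's a tiny sketch of the usual memory-bound decode estimate (tokens/s per stream is roughly memory bandwidth divided by the bytes of weights streamed per token). All of the card and model numbers below are placeholder assumptions for illustration, not Qualcomm or competitor specs.

```python
# Rough memory-bound estimate for LLM decode: with a dense model at batch 1,
# every generated token streams the full weight set once, so
#   tokens/s per stream ~ memory bandwidth / model size in bytes.
# All numbers below are hypothetical, for illustration only.

def decode_tokens_per_s(mem_bw_gb_s: float, model_gb: float) -> float:
    """Upper bound on batch-1 decode rate for a dense, memory-bound model."""
    return mem_bw_gb_s / model_gb

cards = {
    # name: (capacity in GB, bandwidth in GB/s) -- made-up example figures
    "LPDDR-heavy card": (768, 500),
    "HBM card": (192, 4000),
}

model_gb = 140  # e.g. a ~70B-parameter model at FP16, purely illustrative

for name, (capacity_gb, bw_gb_s) in cards.items():
    copies = capacity_gb // model_gb
    tps = decode_tokens_per_s(bw_gb_s, model_gb)
    print(f"{name}: holds {copies} copies of the {model_gb} GB model, "
          f"~{tps:.1f} tok/s per stream at the bandwidth limit")
```

The point is just that capacity buys tenancy and fewer cross-card splits, while bandwidth buys per-stream decode speed; batching, quantization, and sparsity shift where the balance lands.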
 

adroc_thurston

Diamond Member
Jul 2, 2023
7,637
10,387
106
Raqia said:
Using cheaper, commodity LPDDR instead of HBM for 4x higher memory capacity per card makes sense both for latency with large models and for multi-model tenancy.
Those want memory bandwidth that LPDDR can never offer.
Raqia said:
An architecture that isn't graphics-oriented saves die space and allows more flexibility for data flows than a graphics pipeline (with its ROPs and TMUs), reducing the energy used in data movement.
Datacenter GPUs have long since had their fixed-function graphics hardware stripped out.