Discussion Qualcomm Snapdragon Thread


Tup3x

Golden Member
Dec 31, 2016
1,230
1,337
136
Not necessarily.

The 13800H score that the Snapdragon matches in CB24 at about 32W total power (65% less) is apparently about 976. Qualcomm's peak is about 1229 in CB24 for the "80W TDP" config, which here should at least be similar to the actual peak. And it's 1022 in the "23W TDP" config (which tells us nothing about the power, though we do know the MT boost clocks there should be 3.4GHz).

Anyway, with some margin of error, it's about 1/4-1/3 more performance from the 32W point (where they match the 13800H) up to QC's peak, which would make sense and puts you around the 1229 range more or less (or, alternatively, the 32W point is lower than 976).

So say the MT score is 950 @ 32W. Yeah, that's worse than the M2 Max, but not by much; take it down to 20W where the M3 sits, and the drop-off is steep but should still be within 10-15%. That's all assuming the calculations here are even roughly accurate and the graph is representative.
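A quick sanity check of that ratio, using only the estimated scores above (numbers read off the chart and claims, not measurements):

```python
# Estimated CB24 MT scores discussed above.
score_at_32w = 976   # point where the Snapdragon matches the 13800H
score_peak = 1229    # Qualcomm's "80W TDP" config

print(f"peak vs 32W point: +{score_peak / score_at_32w - 1:.0%}")       # ~+26%
print(f"if the 32W point is really ~950: +{score_peak / 950 - 1:.0%}")  # ~+29%
```

Either way it lands in that roughly 1/4-1/3 window.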

I’m not that worried tbh.
I'm not worried either.
 

SpudLobby

Golden Member
May 18, 2022
1,041
701
106
I'm not worried either.
View attachment 90719
Yeah, though FWIW, I think the M2 Max at peak draws something like ~20W measured from the wall in single-core Cinebench R23, per Notebookcheck (and that shouldn't include the display, since it was on an external monitor). This is only one measurement though. But FWIW the M2 and M3 are like 8-12W by the same reports.



It’s also 30% less power @ the M2 Max’s ST performance, not 30% less power *and* more performance simultaneously. Still, the M2 Max is usually about 7+% faster than the M2 Pro or M2 thanks to a 0.2GHz clock boost and probably more headroom on bandwidth and background tasks.



If Qualcomm is matching the M2 Max at 30% less power, and we say the platform power consumption for ST stressors is 20W, then you’re looking at a ~14W draw for the whole stack for what is roughly a GB6 2840, *if* we take Notebookcheck’s numbers seriously and if CB23 has rough power similarity to GB6.
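FWIW, the arithmetic behind that ~14W figure, taking the ~20W Notebookcheck wall measurement as the assumed baseline:

```python
# Assumptions from this post: M2 Max whole-platform ST draw of ~20W (one
# Notebookcheck wall measurement) and Qualcomm's claim of matching its ST
# performance at 30% less power.
m2_max_platform_w = 20.0
qc_power_saving = 0.30

x_elite_platform_w = m2_max_platform_w * (1 - qc_power_saving)
print(f"implied X Elite ST platform draw: ~{x_elite_platform_w:.0f} W")  # ~14 W
```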

In practice I expect the power consumption is a bit better than that; something like 10W for the whole platform when one core is gunned to 3.8GHz would make sense. Keep in mind also that Windows performance isn’t as good as Linux for now, AFAICT, probably due to scheduling. So the “23W” version with a 4.0GHz boost scores more in the 2700-2800 GB6 range on Windows, and since roughly 3.9GHz is where it’s M2 Max-tier on performance, probably take a hair off that GB6 figure. Then you have matched performance on Windows at 30% less than the M2 Max’s power draw.
 
  • Like
Reactions: Tlh97 and ikjadoon

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,756
106
I wonder if Windows 12 will contain scheduler and other optimisations for the X Elite chip.


Getting ready for the Snapdragon X Elite
One thing that's interesting is that Hudson Valley will be based on the Germanium platform release, and that platform update is set to hit RTM in April. However, Hudson Valley itself won't RTM until August, with a general release in September or October. The platform changes have more to do with the underlying tech, and it looks like Microsoft wants to have it ready earlier so that Arm devices powered by the Snapdragon X Elite can ship with it preinstalled.
Indeed, it's said that the Germanium platform has important changes for the Snapdragon X Elite and these PCs can't be shipped with the current version of Windows 11, but manufacturers want to ship them in June 2024. As such, these Arm-based PCs will ship with the Germanium platform release, but they won't have all the Hudson Valley features out of the box. They'll have to wait for the update coming a few months later, but it will simply be a cumulative update. For everyone else, Hudson Valley will release alongside the Germanium platform release as one big feature update, like Windows 11 was to Windows 10.

X Elite will debut with Windows 12.
 

SpudLobby

Golden Member
May 18, 2022
1,041
701
106
One other thing though. Of course they beat Intel, and we can reasonably assume the whole-platform draw (with the package DRAM and power ICs) while matching the M2 Max is pretty good, and that’s about a 2600-2700 GB6 on Windows. At worst I’d bet 12-14W. Probably lower.

Well check out what AMD is drawing for this stuff.


This post uses TDP caps to gradually ascertain single-core scaling in GB6.2 for a 7840HS.

Mind you, this is a software estimate; it will be fairly accurate for AMD, but it doesn’t include other losses or DRAM, i.e. the full platform the way Andrei measures it, so these figures are all understated relative to the QC numbers we’re discussing.

5W - 776
10W - 2240
15W - 2594
17.5W - 2654
20W - 2716
(After that, no change in ST performance.)


Basically, comparing at the same performance, like that 2594 or 2654, I think the X Elite would draw much less total power. And it would probably get worse for AMD the lower we go — e.g. I bet at the 2240 level the X Elite’s perf/W would look even better than it already does at 2600-2700 vs AMD.
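To make the same-performance comparison concrete, here’s a rough sketch putting those 7840HS points next to the X Elite estimate from my earlier post. The X Elite operating point is an assumption from this thread, not a measurement, and the AMD caps understate full-platform power:

```python
# GB6.2 ST score vs software-reported TDP cap for the 7840HS (linked post).
# These are package-level caps, so AMD's full-platform draw is higher.
amd_7840hs = {5: 776, 10: 2240, 15: 2594, 17.5: 2654, 20: 2716}

# Assumed X Elite point: ~2650 GB6 ST at ~14 W whole-platform draw.
x_elite_score, x_elite_w = 2650, 14.0

for cap_w, score in amd_7840hs.items():
    note = "  <- ~X Elite-level perf" if abs(score - x_elite_score) < 60 else ""
    print(f"{cap_w:>4} W cap -> GB6 ST {score}{note}")

print(f"X Elite (assumed): {x_elite_w:.0f} W -> GB6 ST {x_elite_score}")
# At matched scores (2594-2654) AMD needs a 15-17.5 W package cap alone,
# before DRAM and board losses, vs ~14 W all-in assumed for the X Elite.
```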
 
Last edited:
  • Like
Reactions: Tlh97 and ikjadoon

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,756
106
I wonder how many transistors Qualcomm invested in things like the USB4 controllers, VPU and Display Engines.

It is well known that Apple spends a lot of transistors on those things compared to Intel/AMD, with a Thunderbolt controller being the size of an entire CPU P-core, for instance.
 

gdansk

Diamond Member
Feb 8, 2011
4,074
6,749
136
From the information revealed so far, the ancillary functions are not skimped on: the GPU is big, the NPU is big (about 45 TOPS), it even supports AV1 encoding, plus dual 5K external display support and 4K120Hz internal display support. If anything, based on the announced features, I presume an even higher portion of the SoC is dedicated to things other than CPU cores compared to M2/M3.
 
Last edited:
  • Like
Reactions: Tlh97 and ikjadoon

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,756
106
Can the NPU be used for DLSS-like upscaling?

Or do tensor cores necessarily need to be inside the GPU itself to do it?
 

soresu

Diamond Member
Dec 19, 2014
3,710
3,037
136
Can the NPU be used for DLSS-like upscaling?

Or do tensor cores necessarily need to be inside the GPU itself to do it?
The 'NPU' is already a combo of QC's homegrown Hexagon DSP µArch and a dedicated ML inference unit.

While they are technically separate from the GPU, the entire assembly in a modern smartphone SoC is pretty integrated.

Hypothetically, keeping the NPU external means it can be used for certain functions, like always-on audio listening, while drawing as little power as possible relative to running them through the GPU.
 

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,756
106
The 'NPU' is already a combo of QC's homegrown Hexagon DSP µArch and a dedicated ML inference unit.

While they are technically separate from the GPU, the entire assembly in a modern smartphone SoC is pretty integrated.

Hypothetically, keeping the NPU external means it can be used for certain functions, like always-on audio listening, while drawing as little power as possible relative to running them through the GPU.
Very good, but I was asking whether it would be possible to run something like DLSS (which requires matrix processing units) through the NPU only.

Right, DLSS and XeSS use the matrix units that are embedded within the GPU itself.
 

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,756
106
Hey!

So, is anybody here actually looking to buy a laptop with the Snapdragon X Elite when it does arrive?
 
  • Like
Reactions: ikjadoon

ikjadoon

Senior member
Sep 4, 2006
241
519
146
From the information revealed so far, the ancillary functions are not skimped on: the GPU is big, the NPU is big (about 45 TOPS), it even supports AV1 encoding, plus dual 5K external display support and 4K120Hz internal display support. If anything, based on the announced features, I presume an even higher portion of the SoC is dedicated to things other than CPU cores compared to M2/M3.

(1) Some people would argue the X Elite should be compared to the M3 Pro, not M3. However, I argue otherwise. If you look at the die size, M3 (TSMC N3B) is 146 mm² and X Elite (TSMC N4P) is about 172 mm².

The M3 has a smaller die but it's on the more expensive N3B node, which also has lower yields. So I guess when it comes to actual cost, both are similar and the M3 might potentially be more expensive than the X Elite.

(2) These numbers of the X Elite are better than Meteor Lake and Hawk Point. Arrow Lake and Strix Point will beat it, but they are coming in late 2024. So I guess the X Elite will still have some thunder to boast about when it arrives in mid-2024.

I'd agree here on the die sizes. Qualcomm allocated differently: 12x P-core, vs Apple's 4+4, and they squeezed a lot into a relatively tiny die.

Maybe Qualcomm is doing what MediaTek 9300, Intel MTL, and AMD's Zen4c do: only two P-cores get the full die size for peak frequency.

SXE (80W device type): just 2x P-cores can clock to 4.3 GHz, the other 10x can only hit 3.8 GHz?

At first I thought 600 MHz isn't much compared to AMD's Zen4c. Can Qualcomm really save much die space? But then MediaTek's design gave me pause: if even Arm's relatively small X4 cores can achieve a noticeable shrink with just a 400 MHz reduction, Oryon may well be the same.

The Dimensity 9300 still uses a three-tier CPU approach here of sorts. One larger Cortex-X4 core is clocked up to 3.25GHz, while the other three run at 2.85GHz. MediaTek notes there’s no difference to these cores in terms of cache, only that the higher clocked core is laid out with a larger silicon area to enable the higher frequency.

//

My only quibble on comparing SXE with M3 / M3 Pro: per-die cost could depend on volume ordered. Apple reliably sells tens of millions of Macs, so Apple may get larger TSMC volume discounts than Qualcomm, who may be more cautious for this first-gen Windows on Arm SoC.

So an M3 Pro, even with a presumably larger area, may still be cheaper than Qualcomm's SXE, so they could occupy the same price segment.
 
  • Like
Reactions: Tlh97 and FlameTail

SpudLobby

Golden Member
May 18, 2022
1,041
701
106

Qualcomm's dominance in the smartphone sector is slipping. I am not surprised they want to expand into PC. Diversification is good.
I think some of that was inevitable as MediaTek got better and entered premium. And while this isn’t a perfect analogy, something similar can be expected for Intel and AMD as Qualcomm enters PCs — except there I think Qualcomm actually has the better product on some important metrics, whereas that’s not quite true of Dimensity vs Snapdragon.

Fwiw the market share here misses the premium share where I think Qualcomm still bests MediaTek by a significant margin. Arguably QC’s issue in mobile isn’t MediaTek but:

- Samsung and its Exynos: if Samsung’s phones are more competitive and they can continue to use Exynos, that’s fewer sales worldwide for alternative Android phones that may use Qualcomm SoCs.
Alternatively, if future Galaxy phones are competitive but Exynos is canned in favor of Snapdragons like last year, that’s ideal for QC.

- Android premium competitiveness versus Apple. I think Android has lost some ground vs Apple in recent years, and among youth in the West, or even in Korea and Japan, it’s much worse.

- Pixels (?): depending on how successful these are, that’s not $ for Qualcomm either. But Pixels might eventually use Qualcomm modems, apparently, so if they grow that’s not too bad, though that’s not a full SoC + modem/RFFE win.

Qualcomm is in a weird position I think. They have MediaTek nipping at their heels, they have Apple maintaining ground vs the premium Android stuff, Samsung’s Exynos will still be around and takes a minimum of 50% of Galaxy sales, and China is a bit of a question mark.
 
  • Like
Reactions: Tlh97 and FlameTail

SpudLobby

Golden Member
May 18, 2022
1,041
701
106
I'd agree here on the die sizes. Qualcomm allocated differently: 12x P-core, vs Apple's 4+4, and they squeezed a lot into a relatively tiny die.

Maybe Qualcomm is doing what MediaTek 9300, Intel MTL, and AMD's Zen4c do: only two P-cores get the full die size for peak frequency.

SXE (80W device type): just 2x P-cores can clock to 4.3 GHz, the other 10x can only hit 3.8 GHz?
My guess is that it’s a parametric yield (quality binning) issue. All the cores in two clusters in particular (they said this, indicating the third cluster is different) are probably physically designed to hit 4.3GHz — maybe a bit more — but with yield you get variance, so only one of the four in each cluster is capable of 4.3GHz. Though it’s also possible one core in each cluster is physically a bit bigger, using more performant cells or whatever. They’ve done this with A7x cores on phones before, so.

> “Meanwhile in lighter workloads, the chip supports turboing up to 4.3GHz on 2 cores. Qualcomm’s slide on this matter shows a core from each cluster, but it’s unclear whether this is some kind of prime/favored core in action (where only certain cores are designed/validated for those speeds) or if it’s simply a stylistic choice.“

So it could go either way: it could be two cores per die that are capable on a yield basis (chosen via firmware or whatever, different ones for each part), or the two could each be physically distinct. But FWIW, they are probably not that much bigger even if they are different.

Unless Qualcomm is keeping a ton of clockspeed in the tank, 4.3GHz vs 3.8GHz isn’t that huge. These are still going to be relatively dense cores.

I also want to point out that L2 is a part of this, and these cores share L2. So if even one of the cores is clocked higher, any density savings in the cache would be destroyed, because the cache needs to be able to support the fastest core.

Most important: AMD targets up to 5.7GHz on the same process, and 5.2GHz on the mobile versions; when a part clocks lower, that’s just yields. Zen 4c is apparently capable of about low-to-mid 4GHz, FWIW. That’s a much bigger range of variation for AMD, and they’re also pushing a much higher clock range relative to the process. Qualcomm, even if they do have prime-core designs (which they don’t seem to disclose, leading me to believe this is just a yield thing and each die’s 4.3GHz cores will differ), is not going to save like 30-40% on area here.
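One way to see why per-die favored cores are plausible without any physical difference: if each core independently makes the 4.3GHz bin with some probability, a 4-core cluster very likely contains at least one that does. The probabilities below are made-up illustrations, not yield figures:

```python
# If each core hits 4.3 GHz with independent probability p, the chance a
# 4-core cluster has at least one such core is 1 - (1 - p)^4.
for p in (0.3, 0.5, 0.7):
    cluster_ok = 1 - (1 - p) ** 4
    print(f"p = {p:.0%} per core -> {cluster_ok:.1%} of clusters have one")
```

So even mediocre per-core odds give you a "one fast core per cluster" story on most dies.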
At first I thought 600 MHz isn't much compared to AMD's Zen4c. Can Qualcomm really save much die space? But then MediaTek's design gave me pause: if even Arm's relatively small X4 cores can achieve a noticeable shrink with just a 400 MHz reduction, Oryon may well be the same.
They are absolutely smaller, but you have to separate the architectural and L2 difference from the physical design necessary to support x y z clocks.

Those smaller X4s have 512KB of L2, not 1MB like the one big X4 — Arm allows this as long as you have one full X4. Second, those smaller X4s that tag along have 2x128-bit vector/SIMD units instead of the regular X4’s 4x128-bit. Another new Arm thing.

EDIT: I see MediaTek mentioned they’re laid out with larger silicon too.

Yeah, FWIW I think you hit threshold effects for a given process node where you have to use larger silicon even if it’s still dense cells or whatever. As a probabilistic thing, if you want most chips’ X cores to be able to hit some frequency, at some point you probably have to just change the physical design of the core(s) so that a high proportion of them can do so.

It could also be about timing for this particular design too, idk.
 
Last edited:
  • Like
Reactions: Tlh97 and FlameTail

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,756
106
Also in the Apple M3, only one core can hit 4.05 GHz, and the others top out at 3.6 GHz iirc?
 

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,756
106
Fwiw the market share here misses the premium share where I think Qualcomm still bests MediaTek by a significant margin. Arguably QC’s issue in mobile isn’t MediaTek but:
Indeed. The bulk of Mediatek's share is the budget segment. Qualcomm still holds the flag in the premium segment, but who knows for how long?
- Samsung and its Exynos: if Samsung’s phones are more competitive and they can continue to use Exynos, that’s fewer sales worldwide for alternative Android phones that may use Qualcomm SoCs.
Alternatively, if future Galaxy phones are competitive but Exynos is canned in favor of Snapdragons like last year, that’s ideal for QC.
Idk about Exynos' prospects. Samsung has been fudging around for the past several years.
2020: E990 was a flamin' disaster.
2021: E2100 was decent but still bested by the 888 on the same node.
2022: Initially, both the E2200 and its rival 8Gen1 were flamin' cookers. But then the 8+Gen1 came along and smashed them both.
2023: E2300 was shelved and Samsung shipped the 8Gen2 worldwide.
2024: Exynos is coming back to the S series. But preliminary leaked benchmark scores show it's worse than the 8Gen3 and D9300, despite being a 10-core CPU.

Still, it seems Samsung will not give up on their own SoCs. It has been widely rumoured that Samsung is working on a "Dream Chip" for the S25 series in 2025. In the meantime they have replaced the Snapdragons with Exynoses in their midrange and budget lineups, which sell in huge volumes.

- Android premium competitiveness versus Apple. I think Android has lost some ground vs Apple in recent years, and among youth in the West, or even in Korea and Japan, it’s much worse.
Apple's modem has been delayed again, it seems, so Qualcomm will be happy to keep selling them modems.

- Pixels (?): depending on how successful these are, that’s not $ for Qualcomm either. But Pixels might eventually use Qualcomm modems, apparently, so if they grow that’s not too bad, though that’s not a full SoC + modem/RFFE win.

Qualcomm is in a weird position I think. They have MediaTek nipping at their heels, they have Apple maintaining ground vs the premium Android stuff, Samsung’s Exynos will still be around and takes a minimum of 50% of Galaxy sales, and China is a bit of a question mark.
Qualcomm can't sell modems to Apple forever. It is rumoured Apple is going full throttle on their 6G modem, and it looks like they don't want to rely on Qualcomm at all for 6G.

Mediatek and Exynos will continue to grind away at Snapdragon's dominance in mobile.

Hence, Qualcomm should diversify into more markets with the money they have now from their dominance in mobile. Diversify into PC and datacenter while the time-and-money window is still open.
 

SpudLobby

Golden Member
May 18, 2022
1,041
701
106
//

My only quibble on comparing SXE with M3 / M3 Pro: per-die cost could depend on volume ordered. Apple reliably sells tens of millions of Macs, so Apple may get larger TSMC volume discounts than Qualcomm, who may be more cautious for this first-gen Windows on Arm SoC.

So an M3 Pro, even with a presumably larger area, may still be cheaper than Qualcomm's SXE, so they could occupy the same price segment.
Well two things.
I think cost probably differs more with total order volume than with the specific designs. From everything we know about TSMC, and about Apple's pricing and silicon BOM, this is basically the case. The volume in these ranges isn’t really the issue; it’s almost entirely about bulk agreements with TSMC.

And even then, I am seriously skeptical that’s what’s going on here to the extent that an M3 Pro would be cheaper than this. I mean, N3 alone is just expensive, like ~25% more, so what you really mean is the M2 Pro. And there, no, I doubt it.

Apple likely does have a discount for their bulk orders and for effectively fronting TSMC’s R&D, FWIW.

But the M1 -> M2 was an area increase of like 20-25%, and the M1 Pro base model was 240 mm^2.
The M2 Pro is supposedly around 288mm^2.

So even with the M3 Pro, if we assume ~15-30% (actual) density gains and that these canceled the ~25% price increase over N5, and then apply Apple’s rumored 25-30% discount, you still end up with an N4/N5-equivalent die cost of something like a 200-210mm^2 part.
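Spelling that arithmetic out (every input is a rumor or assumption from this thread, not a disclosed figure):

```python
# M2 Pro die area (supposedly ~288 mm^2) and the adjustments assumed above.
m2_pro_area_mm2 = 288

# Assume the M3 Pro's N3 density gain roughly cancels N3's ~25% wafer price
# premium over N5, so it costs about like a 288 mm^2 N4/N5 die before
# Apple's rumored 25-30% bulk discount:
for discount in (0.25, 0.30):
    equiv = m2_pro_area_mm2 * (1 - discount)
    print(f"{discount:.0%} discount -> ~{equiv:.0f} mm^2 N4/5-equivalent")
# ~216 and ~202 mm^2: still well above the X Elite's ~172 mm^2 on N4P.
```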


And then the M3 Pro also has a 192-bit bus which adds some cost, and in practice N3B yields are not as good as N4P.

Unless TSMC wafer cost is volume-adjusted for a given design alone, independent of a firm’s overall orders, and that discount is tremendous, I feel very comfortable saying the X Elite is less expensive than the M3 Pro or M2 Pro.


More interesting is AMD’s Phoenix 7840U at 178mm^2 on N4 — AMD probably has similar volume on N4/5 specifically, if not more than Qualcomm, but it’s an open question whether this gives them some advantage. My guess is the per-wafer cost is fairly similar for those two.
 

SpudLobby

Golden Member
May 18, 2022
1,041
701
106
Indeed. The bulk of Mediatek's share is the budget segment. Qualcomm still holds the flag in the premium segment, but who knows for how long?

Idk about Exynos' prospects. Samsung has been fudging around for the past several years.
2020: E990 was a flamin' disaster.
2021: E2100 was decent but still bested by the 888 on the same node.
2022: Initially, both the E2200 and its rival 8Gen1 were flamin' cookers. But then the 8+Gen1 came along and smashed them both.
2023: E2300 was shelved and Samsung shipped the 8Gen2 worldwide.
2024: Exynos is coming back to the S series. But preliminary leaked benchmark scores show it's worse than the 8Gen3 and D9300, despite being a 10-core CPU.


Apple's modem has been delayed again, it seems, so Qualcomm will be happy to keep selling them modems.


Qualcomm can't sell modems to Apple forever. It is rumoured Apple is going full throttle on their 6G modem, and it looks like they don't want to rely on Qualcomm at all for 6G.
1) I agree, and it’s a problem for Qualcomm. Apple is still going to shift to their own modem for 5G, FWIW. That rumor doesn’t mean they’re done with 5G; it’s just delayed. 2025/2026 should see at least one phone with their modem, and I bet by 2027/2028 they’ve switched over entirely.

Mediatek and Exynos will continue to grind away at Snapdragon's dominance in mobile.

Hence, Qualcomm should diversify into more markets with the money they have now from their dominance in mobile. Diversify into PC and datacenter while the time-and-money window is still open.
Exynos isn’t really grinding away though.

Yes they are obviously trying to diversify. But the datacenter is a lost cause mostly. They canceled the server part. Their AI DC stuff is a joke.

PC, auto, and XR + wearables is where they’ll be.

One other thing: Pixels. I think they might actually get that back on modems; I strongly suspect Google will go back to QC when they head to TSMC, so if Pixels grow that’s not a bad gig. But really it would be better for QC if Google scrapped Tensor and just went Snapdragon with some lower-priced deal.


They’re definitely in a weird spot.
 

SpudLobby

Golden Member
May 18, 2022
1,041
701
106
Also in the Apple M3, only one core can hit 4.05 GHz, and the others top out at 3.6 GHz iirc?
No, for 4.05GHz it’s only one core at a time, but that’s standard. Apple has never done the prime/non-prime thing where one core has a bigger physical design than the others for frequency. All of them can hit the peak; it’s just that their all-core frequency is lower than the peak.

I could be wrong, and maybe for the first time they did it differently now, but I doubt it.
 

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,756
106
No, for 4.05GHz it’s only one core at a time, but that’s standard. Apple has never done the prime/non-prime thing where one core has a bigger physical design than the others for frequency. All of them can hit the peak; it’s just that their all-core frequency is lower than the peak.
Then Qualcomm might be doing the same thing, no?
[Images: Snapdragon X Elite pre-briefing deck slides 5 and 7]
I could be wrong, and maybe for the first time they did it differently now, but I doubt it.
Just take a peek at the die shot:
[Image: Snapdragon X Elite die shot]
 

FlameTail

Diamond Member
Dec 15, 2021
4,384
2,756
106
Hypothetical Question

Let's take 2 CPUs. They both use the identical core architecture/caches. Overall multicore performance is also identical (4.4×10=44, 4.0×11=44).

(A) 10 cores @4.4 GHz using HP library

(B) 11 cores @4.0 GHz using HD library

Which solution is better in terms of PPA? I am curious which will take up more die area and which will be more efficient at multicore performance.
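Not an answer, but a toy first-order model of the power half, where every coefficient is an assumption: dynamic power scaling as n·C·V²·f, voltage rising roughly linearly with frequency near the top of the curve, and the HD library assumed to switch ~10% less capacitance. Real PPA depends on the actual libraries and V/f curves:

```python
# Toy model: MT power ~ n_cores * rel_capacitance * V(f)^2 * f.
def mt_power(n_cores, f_ghz, rel_cap, v_per_ghz=0.25):
    v = v_per_ghz * f_ghz  # crude linear V(f) assumption
    return n_cores * rel_cap * v ** 2 * f_ghz

power_a = mt_power(10, 4.4, rel_cap=1.0)  # (A) 10 cores @ 4.4 GHz, HP cells
power_b = mt_power(11, 4.0, rel_cap=0.9)  # (B) 11 cores @ 4.0 GHz, HD cells

print(f"(B) / (A) MT power: {power_b / power_a:.2f}")  # ~0.74 here
```

Under those assumptions (B) wins on MT efficiency, and since HD cells are denser, 11 HD cores wouldn't necessarily take more area than 10 HP cores either; the HP option mainly buys the higher ST clock.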
 

SpudLobby

Golden Member
May 18, 2022
1,041
701
106
Then Qualcomm might be doing the same thing, no?
View attachment 90914 View attachment 90915
Yeah, they absolutely might, except they also seem to highlight that it’s one core in each cluster, in a way that makes me think it’s really just whichever core makes the cut. Again, one core doesn’t necessarily mean they’re physically different: they can assess which core can actually hit that frequency at assembly/packaging, or the firmware can know, similar to how Intel has had preferred cores in the past. But yeah, it might also just be that only one core at a time is capable.

Just take a peek at the die shot:
View attachment 90913
Yep, that doesn’t show any unique P-core.
 

Doug S

Diamond Member
Feb 8, 2020
3,123
5,368
136
Qualcomm can't sell modems to Apple forever. It is rumoured Apple is going full throttle on their 6G modem, and it looks like they don't want to rely on Qualcomm at all for 6G.

If the rumor that Apple is throwing in the towel on a 5G modem is true, what good is a 6G modem going to do? (On a side note, I'm skeptical 6G will even be used in phones.) A 6G modem still has to support 5G/LTE, and it isn't as though 6G will be easier (if their issues are partly hardware-related). On the baseband side, LTE/5G/6G doesn't matter, because you retain all the complexities of interfacing with thousands of carriers over hundreds of bands in hundreds of countries.

Those rumors may be false, as it turns out Apple did produce some iPhone 15 test models using their own modem. Now, I imagine things went badly enough when those test phones left Apple HQ (probably mostly when they left the US) that Apple felt they had no choice but to take the option to extend the Qualcomm deal. But it is hard to imagine things went SO badly that they canceled the entire project!

But if Apple really, truly can't get their own modem working, they should call up Mediatek and offer to buy their modem. Not to buy modems from them, but to buy their modem: give Apple all the chip designs, baseband software, and licenses to all of Mediatek's cellular-related patents (or, better yet for Apple's purposes, make Apple 50% owner of those patents so they are in a better licensing position vis-à-vis Qualcomm), and Apple can use Mediatek's designs as a starting point for their own modem. Mediatek would continue development as before, and the two projects would slowly diverge, sort of like forking an open-source software project. Surely Mediatek would be willing to take a billion or two off Apple's hands at no real cost to themselves, and even ignoring the cash it would be a win for them, as it would hurt their biggest competitor, Qualcomm.
 
  • Like
Reactions: SpudLobby