Question: Qualcomm's first Nuvia-based SoC - Hamoa


adroc_thurston

Diamond Member
Jul 2, 2023
5,874
8,224
96
The money is in mobile and in servers, more so than 10 years ago.
You've said the market is shrinking...
...which it's not. Just not growing.
The "I know x y and z" routine isn't going to pass go because you've worn your *** for a hat too often in the last two years, man
rent free
Yikes. Going backwards for them.
No, it's how it should be.
Sophomoric. At least evict Andrei before slinging.
rent free
 

Doug S

Diamond Member
Feb 8, 2020
3,216
5,542
136
There are changes that can be made to cache beyond size and latency. Stuff like the number of ways, number of read and write ports, prefetch and eviction strategies, even changing its basic design (number of transistors per bit) to affect its power efficiency. Trying to compare the caches of different CPU designs based on only two numbers is missing a lot of information.

Everyone doing CPU design has access to simulation tools that can model the effect of various changes in all the figures of merit. They won't necessarily all converge on a single solution because there are other things about their CPU designs that are different, e.g. the target clock rate, power consumption, chip size/cost for their market, and so forth. It is quite possible that Apple's solution is the best one for their needs and Intel's solution is the best one for theirs even though they are not alike. It is also possible everyone will eventually converge around a similar cache plan, but I doubt it.
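To make that concrete, here's a toy sketch (nothing like real design tooling; the capacities and access pattern are invented for illustration) of two caches with identical capacity, line size, and notional latency that behave very differently once only the way count changes:

```cpp
#include <cstdint>
#include <cstdio>
#include <list>
#include <vector>

// Toy set-associative cache with LRU replacement. Same capacity and
// line size, different way counts -> different behaviour on one stream.
constexpr int kLine = 64; // bytes per cache line

struct Cache {
    int sets, ways;
    std::vector<std::list<uint64_t>> lru; // per-set tags, front = MRU
    long hits = 0, misses = 0;

    Cache(int bytes, int ways_)
        : sets(bytes / kLine / ways_), ways(ways_), lru(sets) {}

    void access(uint64_t addr) {
        uint64_t tag = addr / kLine;
        auto& set = lru[tag % sets];
        for (auto it = set.begin(); it != set.end(); ++it)
            if (*it == tag) {            // hit: move to MRU position
                set.erase(it);
                set.push_front(tag);
                ++hits;
                return;
            }
        ++misses;                        // miss: fill, evict LRU if full
        if ((int)set.size() == ways) set.pop_back();
        set.push_front(tag);
    }
};

int main() {
    Cache direct(32 * 1024, 1), assoc(32 * 1024, 8); // same 32 KiB
    // Pathological stride: eight hot lines that all alias to one set.
    for (int rep = 0; rep < 100; ++rep)
        for (uint64_t a = 0; a < 8; ++a) {
            direct.access(a * 32 * 1024);
            assoc.access(a * 32 * 1024);
        }
    printf("1-way: %ld hits / %ld misses\n", direct.hits, direct.misses);
    printf("8-way: %ld hits / %ld misses\n", assoc.hits, assoc.misses);
}
```

Real methodologies sweep dozens of such knobs against traces of real workloads, which is part of why two competent teams can land on different answers.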
 
  • Like
Reactions: Tlh97 and SpudLobby

adroc_thurston

Diamond Member
Jul 2, 2023
5,874
8,224
96
It is quite possible that Apple's solution is the best one for their needs and Intel's solution is the best one for theirs even though they are not alike.
Duh.
Chongus shared L2 is a really, really bad fit for servers, which is why everyone but Apple moved to private L2 + shared L3.
It is also possible everyone will eventually converge around a similar cache plan, but I doubt it.
Everyone already did.
It's 32/64K L1s, 1/2/3/whatever megs of private L2, and then a pool of shared victim L3.
Zen, coves, Cortex-A/X, Neoverse N/V, pretty much all RISC-V designs, weird chinesium: they all do that.
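The same convergence in rough code form (ballpark figures from public disclosures, quoted from memory, so treat every number as approximate):

```cpp
#include <cstdio>

// The "converged" layout: private L1 + private L2 + shared victim pool.
struct CacheLevels {
    const char* design;
    int l1d_kib;    // private L1D per core
    int l2_kib;     // private L2 per core
    const char* l3; // shared victim / last-level pool
};

const CacheLevels kDesigns[] = {
    {"AMD Zen 4",         32, 1024, "32 MiB shared victim L3 per CCD"},
    {"Intel Raptor Cove", 48, 2048, "up to 36 MiB shared ring L3"},
    {"Arm Cortex-X3",     64, 1024, "shared DSU L3, size configurable"},
    {"Arm Neoverse V2",   64, 2048, "shared mesh SLC"},
};

int main() {
    for (const auto& d : kDesigns)
        printf("%-18s L1D %3d KiB | L2 %4d KiB private | %s\n",
               d.design, d.l1d_kib, d.l2_kib, d.l3);
    // Apple is the odd one out: a big shared L2 per cluster (~16 MiB on
    // M2 P-cores) plus a system-level cache, instead of private L2 + L3.
}
```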
 

Henry swagger

Senior member
Feb 9, 2022
509
312
106
There are changes that can be made to cache beyond size and latency. Stuff like the number of ways, number of read and write ports, prefetch and eviction strategies, even changing its basic design (number of transistors per bit) to affect its power efficiency. Trying to compare the caches of different CPU designs based on only two numbers is missing a lot of information.

Everyone doing CPU design has access to simulation tools that can model the effect of various changes in all the figures of merit. They won't necessarily all converge on a single solution because there are other things about their CPU designs that are different, e.g. the target clock rate, power consumption, chip size/cost for their market, and so forth. It is quite possible that Apple's solution is the best one for their needs and Intel's solution is the best one for theirs even though they are not alike. It is also possible everyone will eventually converge around a similar cache plan, but I doubt it.
What simulation tools do they use? Different sim software?
 

SpudLobby

Golden Member
May 18, 2022
1,041
701
106
Still very much a viable, relevant market.
Just a fairly high barrier to entry, given OEM inertia tied to market maturity.
Oh it’s viable. Esp for someone like AMD going from zip to something, capable of ruthless cost cutting to maximize margins and survive. But even they are more focused on mobile at this point. You can slap a slightly modified mobile part onto a mainstream desktop lineup and sell more of it than you would the other way around, for very obvious reasons. I’m sure AMD is considering something similar; it’s a widely known rumor at this point, which is telling. But I never said it was a *dead* market.

f
r e n t
e
e
I like tributaries, I’m flexible — this is your rent. Keep the chimping tame though. Don’t want you to get a suspension again.
 

SpudLobby

Golden Member
May 18, 2022
1,041
701
106
One may or may not like adroc's attitude, but here he's not the one acting like a jerk.
It’s ok lol. Adroc chimps out every other post. It’s a core part of his somewhat uncivilized reputation here, on Twitter, other chat rooms. He’ll get over it. He just wants you to tell him he’s right.
 

Thibsie

Golden Member
Apr 25, 2017
1,077
1,253
136
It’s ok lol. Adroc chimps out every other post. It’s a core part of his somewhat uncivilized reputation here, on Twitter, other chat rooms. He’ll get over it. He just wants you to tell him he’s right.
Who's acting uncivilized? Adroc?
I don't like your tone, and I don't like the tone this thread is taking.
If you think this is fun, you're the only one.
 
  • Like
Reactions: CouncilorIrissa

SpudLobby

Golden Member
May 18, 2022
1,041
701
106
Anyways, back to the topic at hand — I wonder how Strix Point will do against Hamoa/X Elite. Could get interesting. I don’t think AMD will do well with the uncore elements though. Iirc Andrei mentioned he doesn’t see Intel and AMD ever realizing all-day battery life on parts like this.
 

adroc_thurston

Diamond Member
Jul 2, 2023
5,874
8,224
96
meds. now.
 

SpudLobby

Golden Member
May 18, 2022
1,041
701
106
meds. now.
He's off his medication I assume.
Nah this is stale. Every accusation is a confession etc. Save your chimpstincts for Russian conscripts. Good use there — I’ll support you. Seriously.

Anyways, on Strix I’d grant the GPU. That’s obvious. As long as they don’t **** the bed like with RDNA3, which was very ugly for you & AMD.

Hamoa is a serious SoC and everyone knows it. Apple’s software and hardware choices are comically annoying if you’re in the market for the whole “caring about power efficiency and performance” thing, and Hamoa is a possible alternative. I thought you were on board with this; you should be happy we have more competition.
 

adroc_thurston

Diamond Member
Jul 2, 2023
5,874
8,224
96
Nah this is stale. Every accusation is a confession etc. Save your chimpstincts for Russian conscripts. Good use there — I’ll support you. Seriously.
You really need to tone down the voices-in-your-head gimmick.
It's not funny.
Anyways on Strix I’d grant the GPU
Everything besides hopefully the fabric is a win for STX1.
Hamoa is obviously a serious SoC and everyone knows it
It's an overly expensive joke that has no business being a 2024 part.
Horrific platform costs and no power/perf benefits to suffer WoA torture.
I hope Phoenix/Oryon cores at least have a TSO switch to make emulation perf a bit less miserable (rough sketch of why below).
I thought you were on board with this; you should be happy we have more competition.
It's not a competition, just like 835/850/8cx g1/2/3 weren't a competition.
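For context on the TSO point: x86 guarantees total store order while Arm is weakly ordered, so without hardware help an emulator has to strengthen nearly every guest memory access. A sketch below, using C++ atomics as a stand-in for the instructions an emulator's JIT would emit; Apple's cores expose such a TSO mode bit, which Rosetta 2 uses:

```cpp
#include <atomic>

// x86 code assumes total store order; a faithful emulator on
// weakly-ordered Arm must recreate that ordering one way or another.
std::atomic<int> data{0}, flag{0};

// Without hardware TSO: each guest store/load gets acquire/release
// semantics (stlr/ldar, or explicit barriers), paid on every access.
void store_no_tso(int v) {
    data.store(v, std::memory_order_release); // stlr
    flag.store(1, std::memory_order_release); // stlr
}
int load_no_tso() {
    while (flag.load(std::memory_order_acquire) == 0) {} // ldar
    return data.load(std::memory_order_acquire);         // ldar
}

// With a TSO mode bit set, plain ldr/str already observe x86 ordering,
// so the JIT can emit relaxed accesses and skip the barriers entirely.
void store_tso(int v) {
    data.store(v, std::memory_order_relaxed); // plain str
    flag.store(1, std::memory_order_relaxed); // plain str
}
int load_tso() {
    while (flag.load(std::memory_order_relaxed) == 0) {} // plain ldr
    return data.load(std::memory_order_relaxed);         // plain ldr
}

int main() { store_tso(42); return load_tso(); }
```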
 

soresu

Diamond Member
Dec 19, 2014
3,871
3,289
136
Anyways, back to the topic at hand — I wonder how Strix Point will do against Hamoa/X Elite. Could get interesting. I don’t think AMD will do well with the uncore elements though. Iirc Andrei mentioned he doesn’t see Intel and AMD ever realizing all-day battery life on parts like this.
"all day battery life" is basically a meme if you are doing any kind of serious work no matter what the platform or vendor, and will ramain so even when solid state lithium sulphur batteries become mainstream.

More efficient display tech could make that less of an issue though - so much power is lost simply getting pixels to your retina that conventional display hardware has become a major bottleneck to user experience.

One of the great advantages of VR/AR is that you can be a lot more efficient with lighting pixels when you don't have to blast the information across half a meter to your eyes.
 

SpudLobby

Golden Member
May 18, 2022
1,041
701
106
You really need to tone down the voices-in-your-head gimmick.
It's not funny.
Just me here. You have someone else on the line with us?
Everything besides hopefully the fabric is a win for STX1.

It's an overly expensive joke that has no business being a 2024 part.
Realistically, Strix won’t actually hit devices until 2025. The fabric is a serious concern though, yes. Even your fellow AMD fans are negative about that.
Horrific platform costs and no power/perf benefits to suffer WoA torture.
No power benefits? Doubt. Peak perf arguably yes.
I hope Phoenix/Oryon cores at least have a TSO switch to make emulation perf a bit less miserable.

It's not a competition, just like 835/850/8cx g1/2/3 weren't a competition.
Gaps are much smaller now and the performance is there; WoA has improved and will continue to. But look, if you want to be civilized for a moment and drop the browbeating agita reminiscent of barracks trash talk: I think it’s possible Strix ends up a good part.

I don’t know though. Andrei seems pretty pessimistic. I’ll grab his quote later. I’m inclined to believe him over your oracles or whatever.