> Remember when AMD said Intel was using glue for Core 2 Duo?

I remember it for Pentium D, but not Core 2. Link?
> I remember it for Pentium D, but not Core 2. Link?

Perhaps you're right, it was so long ago.
Honestly, Kraken feels more like an heir to the 5600G/5700G than a PHX heir.
It's not about gaming on an APU; it's about sort of, kind of, being able to game on an APU: the lowest-cost system that is still gaming-capable-ish, if you're OK with regularly dropping to low settings, 720p, and/or 30 fps.
> Strix Point vs Kraken having only a 36% higher TS score despite a 2x bigger iGPU?

Same memory bandwidth on both sides, with Strix Point getting 50% more CPU cores that consume memory bandwidth.
Hardly true. It's just speculation.
> From the looks of it, Sonoma is the actual low-cost successor of Phoenix2 and the older Mendocino.

Phoenix2 and Mendocino aren't in the same price bracket; Mendocino is the cheaper part. Sonoma Valley is a direct replacement for the same socket as Mendocino, so it should be considerably cheaper than PHX2.
> 8 cores were deemed not enough for a high-premium part.

That's what I think is silly.
> Same memory bandwidth on both sides, with Strix Point getting 50% more CPU cores that consume memory bandwidth.

Same memory bandwidth? Only the controller width is the same, but that doesn't mean they will use the same memory.
> I was hoping for Strix Point to either have a bit of Infinity Cache or at least LPDDR5T. Unless something's changed, AMD's designs don't let the iGPU access the CPU L3 either, so it looks like Strix Point is going to have a massive memory bandwidth bottleneck.

Do you really think AMD would put a 16 CU iGPU inside just to be massively bandwidth starved?
> Maybe Strix Point secretly has a huge iGPU L2 cache which makes up for the lack of Infinity Cache and limited memory bandwidth.

Doesn't make sense. L2 is less dense than Infinity Cache, if I am not mistaken.

> Doesn't make sense.

I think you misunderstood what I meant by "huge L2".
> Phoenix2 and Mendocino aren't in the same price bracket; Mendocino is the cheaper part. Sonoma Valley is a direct replacement for the same socket as Mendocino, so it should be considerably cheaper than PHX2.

I'm aware that Mendocino and Phoenix2 are in different price brackets, but Sonoma is probably in the middle between the two, and Kraken is well above Phoenix2. Depending on Zen 5's IPC upgrades, Sonoma might actually be close to Phoenix2 in CPU performance.
> Edit: I have to say I don't get the point of a 16-core Strix Halo.
> I get that it's a Halo part, but in no console, portable or not, is 16 cores even close to useful. It's all locked at 8, and everyone that tried the 8/12/16-core Zen 4 X3Ds came away saying the 8-core was just the best deal overall. AMD even dropped trying to sell 2 CCDs with the X3D because it was just a waste of cache.
> I really don't get it; this could've been an easy 8-core part and still fully be a Halo product.

That's certainly valid for gaming tasks, but it looks like Strix Halo is a direct competitor to Apple's M3 Max.
> That is exactly because it is a "halo" part in both the CPU and GPU departments, probably for a light workstation with enough power to face almost all tasks. I admit it is a little unbalanced towards the CPU side, but it was probably easier than beefing up the GPU more, and 8 cores were deemed not enough for a high-premium part.

Which also makes me wonder what AMD's expectations are for this chip's ASP and margins. They could put this in competitors to ~$1200 gaming laptops with Core i7 / Ryzen 7 CPUs and RTX 4060-class dGPUs, and it would probably be cheaper to make. However, the "Halo" part of it tells me they're only putting this into $2000 products or more, for brand value.
> Same memory bandwidth? Only the controller width is the same, but that doesn't mean they will use the same memory.
> BTW, what does it have to do with TimeSpy and the iGPU?

They're all probably using the same LPDDR5X PHYs and controllers from Synopsys, and most likely they're using the same (fastest possible) memory for these benchmarks (assuming they're legit).
> Do you really think AMD would put a 16 CU iGPU inside just to be massively bandwidth starved?

Phoenix with 12 CUs is massively bandwidth starved with 6400 MT/s LPDDR5 (the ROG Ally, for example), so yes.
If it was as starved as you make it out to be, then they should have kept only 12 CUs.
> 16 CUs in Strix Point will be +33% compute units compared to Phoenix. If Strix Point uses 8533 MT/s LPDDR5X, then it has +33% more bandwidth than 6400 MT/s LPDDR5.

But Strix Point's theoretical GPU performance uplift will be higher than 33%, considering that it's also being upgraded from RDNA3 to RDNA3.5 and will probably run the CUs at higher clocks as well.
That, and the fact that Strix Point has 12 cores, means the CPU MT performance is >50% greater than Phoenix's.
The RAM bandwidth has to feed all of that. I don't think it can. Memory bandwidth starvation...

> That's certainly valid for gaming tasks, but it looks like Strix Halo is a direct competitor to Apple's M3 Max.
> That's why it has 16 CPU cores like the M3 Max and a probably comparable iGPU doing 10-15 TFLOPS (20-30 theoretical from the double-pumped ALUs).
> The only thing AMD isn't keeping up with is the massive 512-bit LPDDR5 vs Halo's 256-bit, but that difference is probably watered down by AMD going with 8533 Mbps LPDDR5X (Apple is using 6400 Mbps), twice the L3 cache, and, on the GPU side, the 32 MB Infinity Cache.
I don't think Strix Halo iGPU performance is going to rival the M3 Max. If you look at the benchmarks...

View attachment 96995

It would certainly beat the M3 Pro's iGPU, though...

Apple still has a memory bandwidth advantage. Look at this monstrosity:

View attachment 96996

Look at the massive LPDDR memory controllers and SLC slices flanking the GPU like rockets.

M3 Max vs Strix Halo:
512-bit vs 256-bit
LPDDR5-6400 vs LPDDR5X-8533
400 GB/s vs 273 GB/s
48 MB SLC vs 32 MB Infinity Cache

BTW, if Gurman is correct, the M4 Max is coming at the end of this year, so Strix Halo will have to compete with that.

Strix Halo isn't an M3 Max; it's more like an M3 Pro. The M3 Max is very expensive to fab, which is why the full die is only in $3500 MacBooks for now. AMD would be using Strix Halo for gaming, not for content creation, Blender, etc. Different use cases, IMO.
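For anyone who wants to sanity-check the bandwidth figures traded above: peak DRAM bandwidth is just bus width times transfer rate. Here is a minimal sketch in Python; the 512-bit/256-bit widths and the 6400/8533 speeds come from the posts above, while the 128-bit bus for Phoenix and Strix Point is my assumption.

```python
# Peak DRAM bandwidth = (bus width in bits / 8) bytes per transfer * transfers per second.
# Bus widths for Phoenix and Strix Point are assumed (128-bit); the rest comes from the thread.
configs = {
    "Phoenix (assumed 128-bit LPDDR5-6400)":      (128, 6400),
    "Strix Point (assumed 128-bit LPDDR5X-8533)": (128, 8533),
    "Strix Halo (256-bit LPDDR5X-8533)":          (256, 8533),
    "M3 Max (512-bit LPDDR5-6400)":               (512, 6400),
}

for name, (bus_bits, mt_per_s) in configs.items():
    gb_per_s = bus_bits / 8 * mt_per_s * 1e6 / 1e9
    print(f"{name}: {gb_per_s:.1f} GB/s")
```

That reproduces the ~400 GB/s vs ~273 GB/s comparison in the list above, and the "+33% more bandwidth" figure (about 137 vs 102 GB/s) for 8533 MT/s over 6400 MT/s on an unchanged bus width.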
> Strix Halo isn't an M3 Max; it's more like an M3 Pro.

Indeed. I forgot to write the conclusion that Strix Halo is an M Pro-class part, and not an M Max-class part.
> Edit: I have to say I don't get the point of a 16-core Strix Halo.
> I get that it's a Halo part, but in no console, portable or not, is 16 cores even close to useful. It's all locked at 8, and everyone that tried the 8/12/16-core Zen 4 X3Ds came away saying the 8-core was just the best deal overall. AMD even dropped trying to sell 2 CCDs with the X3D because it was just a waste of cache.
> I really don't get it; this could've been an easy 8-core part and still fully be a Halo product.

A 256-bit bus, a 40 CU GPU, a large NPU, a large RAM pool available to the GPU, and 16 powerful CPU cores.
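On the "10-15 TFLOPs (20-30 theoretical from double pumped ALUs)" figure quoted earlier for that 40 CU iGPU, here is the usual back-of-the-envelope math, as a rough sketch; the clock speeds are my assumptions, not announced specs.

```python
# Back-of-the-envelope FP32 peak for a 40 CU RDNA3-class iGPU:
# CUs * 64 shader ALUs * 2 FLOPs per FMA * clock, doubled again if the
# "double-pumped" dual-issue path is counted. Clock speeds are assumptions.
cus = 40
alus_per_cu = 64
flops_per_fma = 2

for clock_ghz in (2.0, 2.5, 2.9):
    base = cus * alus_per_cu * flops_per_fma * clock_ghz / 1000  # TFLOPS
    print(f"{clock_ghz} GHz: {base:.1f} TFLOPS single-issue, {base * 2:.1f} TFLOPS dual-issue")
```

The 2x dual-issue factor is rarely reached in real shaders, which is why the practical estimate is roughly half of the theoretical one.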
> Look at the massive LPDDR memory controllers and SLC slices flanking the GPU like rockets.
> M3 Max vs Strix Halo:
> 512-bit vs 256-bit
> LPDDR5-6400 vs LPDDR5X-8533
> 400 GB/s vs 273 GB/s
> 48 MB SLC vs 32 MB Infinity Cache

The big factor you're missing here is that the M3 uses a "System Level Cache" that serves the CPU and the GPU, but has no L3 for the CPU.
> They're all probably using the same LPDDR5X PHYs and controllers from Synopsys, and most likely they're using the same (fastest possible) memory for these benchmarks (assuming they're legit).
> And of course memory bandwidth plays a very important role in GPU performance.

Who said it's legit and not just speculation?
> Phoenix with 12 CUs is massively bandwidth starved with 6400 MT/s LPDDR5 (the ROG Ally, for example), so yes.

I am not interested in your words; give me graphs or numbers.
> The big factor you're missing here is that the M3 uses a "System Level Cache" that serves the CPU and the GPU, but has no L3 for the CPU.
> For "last level cache" it's actually:
> CPU: 48 MB SLC (shared with the GPU) vs. 64 MB exclusive L3
> GPU: 48 MB SLC (shared with the CPU) vs. 40 MB exclusive Infinity Cache
> Though this doesn't tell the full story either, because there's, e.g., more L2 cache in Apple's CPU cores than in Zen 5's.

Ah, but you are misunderstanding Apple's CPU design philosophy. They have no need for an L3, by virtue of the fact that their L1/L2 is so huge.
> I think the M3 GPU architecture (which I assume is still pretty much a fork of PowerVR Rogue)
> "While we don't have much insight into Apple's latest GPU designs, it's understood that these custom microarchitecture designs are based upon Imagination's GPU architecture IP, which makes them unique in the GPU world, as we don't see any other such GPU architecture license in the market. Features such as tile-based deferred rendering and PVRTC are Imagination-patented technologies which Apple currently publicly exposes as features of its GPUs, so it's evident that the current designs still very much use the British company's IP. The GPU's block structure is also very similar to that of Imagination's, further pointing to a close relationship between the designs."

Apple's GPU architecture traces its lineage back to Imagination GPUs, not PowerVR.
> I think you misunderstood what I meant by "huge L2".
> The Radeon 780M has 2 MB of L2.
> What if Strix Point has something like 6 or 8 MB? That's a substantial 3x to 4x increase, while not blowing up the die size by much (?).

I didn't misunderstand anything.
You brought up Strix Point having 50% more cores as some proof why the difference in TS is so small.
> I am not interested in your words; give me graphs or numbers.

Graphs or numbers of what? 16 CUs being 33.(3)% more CUs than 12 CUs? 😐
> Maybe you should question why 8 CU RDNA3.5 has basically the same performance as 12 CU RDNA3.

For the same reason the cut-down 760M with 8 RDNA3 CUs gets similar performance to the full 780M in so many games.
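One way to picture that point: when the memory system is the limit, the per-CU bandwidth budget is what the extra CUs dilute. A small sketch, again assuming a 128-bit bus as above:

```python
# If the iGPU is memory-bound, what matters is the bandwidth budget per CU,
# and that budget shrinks as CUs are added. 128-bit bus width is assumed.
def bandwidth_gbs(bus_bits, mt_per_s):
    return bus_bits / 8 * mt_per_s * 1e6 / 1e9

for mem_name, mt_per_s in (("LPDDR5-6400", 6400), ("LPDDR5X-8533", 8533)):
    total = bandwidth_gbs(128, mt_per_s)
    for cus in (8, 12, 16):
        print(f"{mem_name}, {cus} CUs: {total / cus:.1f} GB/s per CU")
```

By this accounting, a 16 CU Strix Point on LPDDR5X-8533 ends up with about the same bandwidth per CU (~8.5 GB/s) as a 12 CU Phoenix on LPDDR5-6400, which is essentially the "+33% CUs, +33% bandwidth" argument from earlier in the thread.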
> Also, Strix Halo has 32 MB Infinity Cache, not 40 MB?

Yup, my bad.
> Apple's GPU architecture traces its lineage back to Imagination GPUs, not PowerVR.

PowerVR is the graphics division of Imagination.
> [...] fundamentally it is the same basic building blocks.

They may be made up of roughly the same building blocks. But the cores, caches, and memory controllers of a) Pentium D, b) Core 2 Quad, c) Zen 1 Ryzen, d) Zen 1 Epyc, and e) Zen 2/3/4/5 Ryzen and Epyc are arranged in five different topologies. What the respective competitors' marketing departments had to say about these various solutions at the time may have been entertaining at best, but it lacked analytical depth. ;-)
> I have to say I don't get the point of a 16-core Strix Halo.
> I get that it's a Halo part, but in no console, portable or not, is 16 cores even close to useful. It's all locked at 8, and everyone that tried the 8/12/16-core Zen 4 X3Ds came away saying the 8-core was just the best deal overall. AMD even dropped trying to sell 2 CCDs with the X3D because it was just a waste of cache.
> I really don't get it; this could've been an easy 8-core part and still fully be a Halo product.

Actually, the dual-CCX parts are even at a certain (and sometimes grave) disadvantage compared to single-CCX parts in certain applications, which include many games but also some computational tasks, due to their lack of a unified last-level cache. The additional Infinity Cache of Strix Halo (if there is one) does not look like it would have any bearing on this particular disadvantage, or does it?
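The unified-vs-split last-level cache difference is easy to see on a Linux box: sysfs reports, for every logical CPU, which siblings share its L3. A minimal sketch (Python, Linux-only); a single-CCX part prints one group, a dual-CCX part prints two:

```python
# Group logical CPUs by the L3 they share (Linux sysfs). A single-CCX part
# reports one group; a dual-CCX part reports two disjoint groups, i.e. there
# is no last-level cache that is unified across all cores.
import glob
import os
from collections import defaultdict

groups = defaultdict(list)
for cpu_dir in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*")):
    for index_dir in glob.glob(os.path.join(cpu_dir, "cache/index*")):
        with open(os.path.join(index_dir, "level")) as f:
            if f.read().strip() != "3":
                continue
        with open(os.path.join(index_dir, "shared_cpu_list")) as f:
            groups[f.read().strip()].append(os.path.basename(cpu_dir))

for shared, cpus in groups.items():
    print(f"L3 shared by CPUs {shared}: {len(cpus)} logical CPUs")
```

Any workload that bounces shared data between threads sitting in different groups pays the cross-CCX round trip instead of hitting a common L3, which is the disadvantage described above.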