
Zen 6 Speculation Thread

Page 409
I don't write assembly anymore, intrinsics made that a thing of the past, but I do look at the disassembly of my code. You can basically write assembly-level code with tight C/C++.
I once wrote a small sample in asm, then in C, and ran them against each other. The C was FASTER than the asm. When I disassembled it, I found that the C compiler used the registers better than I did. I quit writing asm for anything other than inline asm in C functions for very specific uses after that 🙂.
9% is not that much. But do you know if Zen 6 is better than 9%?
Let's say 10% conservatively. On top of that, the latest leaks I hear put clock speeds topping out at ~6.4 GHz (+12%). I think the bigger effect (and likely not measured well by SPEC) will be the improved memory controller, faster memory, and larger L3 cache: gains that aren't necessarily "IPC".

In fact, I am wondering how important a "pure" IPC measurement is (say the algorithm fits into L1 to measure the MAX throughput possible).

So I am guessing that in most desktop applications we will see a 20-30% improvement over Zen 5.

I think in DC applications we will see the 70% AMD is claiming... and this is really a bigger deal than desktop anyway, at least as far as AMD's profit and revenue are concerned.
 
Yeah.

It doesn't.
Again: two shrinks, a ton more area, and on average like 30% faster.
Embarrassing.

At 10W in gaming workloads, Hawk and Strix Point are also embarrassing compared to Van Gogh, considering their node generations as well. That doesn't mean RDNA3's gfx IP is terrible.


At 20-30W, Intel's Xe3 is surpassing RDNA 3/3.5 by 50 to 80%, or more when ray tracing is involved and/or XeSS is used for a lower base render resolution with better IQ than FSR3.
 
Intel will get Nvidia iGPUs soon. Nvidia will certainly be open to getting into the x86 handheld console market; they did it at a cut price for Nintendo, who must be the cheapest company on this planet.
Cut price on a two-year-old SoC that was already outdated by the time it was announced the first time, let alone by the time it was announced for Switch and then released.

(it was effectively making money back on unused stock like with the AMD Steam Deck SoC deal)

The SoC for Switch 2 is even further out of date on the CPU side and was already 2 gens out of date on the GPU side - not to mention it's sporting a GPU cluster count inferior to most flagship phones, so it's not like they are stretching the budget from any side.
 
Those weren't made for that.
Alas, LNL and PTL both were.
None of them, AMD nor Intel, were designed for gaming workloads at 10W.

Van Gogh OTOH does seem to be much more gaming-oriented, or at least much better balanced for gaming (only 4 CPU cores with big 8CU GPU, large GPU L2 per-CU, large RAM bandwidth per-CU, etc.).


> A lot less, but that's hardly relevant.
It's very relevant considering the fact that, like you keep saying, AMD doesn't care.
 
> None of them, AMD nor Intel, were designed for gaming workloads at 10W.
LNL was!
> Van Gogh OTOH does seem to be much more gaming-oriented, or at least much better balanced for gaming (only 4 CPU cores with big 8CU GPU, large GPU L2 per-CU, large RAM bandwidth per-CU, etc.).
No, it was made for a meme Win10X tablet.
> It's very relevant considering the fact that, like you keep saying, AMD doesn't care.
?
Dayum, brother. Just prep the cash for Olympic Ridge.
 