
Question Zen 6 Speculation Thread

3% extra isn't worth the risks associated with dropping even 16-bit stuff, never mind 32-bit, which is still very actively used. MAYBE 30% would have been justifiable, but there is no way dropping that stuff will gain more than very little - internally it must all be mapping to existing hardware anyway, and a simple remap can't be very expensive.

The main thing you'd gain is a bunch of short opcodes that can be repurposed for new stuff, but is decoding really the biggest problem right now?

Anyway, it (dropping 32 bit stuff) is all academic - this ain't going to happen in x86 like EVER.
I would take that 3% in a heartbeat. Any 16 or 32 bit software I have I'd jettison or if really necessary now and then run in compatibility mode. It's been hanging on long enough.

I understand it's really only the front end, and decode in particular, that would be affected, but there are a lot of inefficiencies in many of those 32-bit and especially 16-bit instructions that the decoder has to deal with and the backend then has to execute.

If you're still stuck with 16/32 bit baggage there is always emulation. But again, this is just my preference.

Of course this assumes any 16/32 bit legacy instructions built into the OS would be immediately jettisoned, like booting would have to immediately be fully 64 bit.

So if that was the path then yes, I'd take the 3% to move on and make life eventually easier for hardware and software developers.
 
I understand it's really only the front end, and decode in particular, that would be affected, but there are a lot of inefficiencies in many of those 32-bit and especially 16-bit instructions that the decoder has to deal with and the backend then has to execute.
No, because the 64-bit instructions use the same encodings. All the inefficiencies remain even after old instructions are dropped. The only win for dropping backwards compatibility on x86 is in validation. There are no performance gains to be had for dropping a few legacy decoder paths.
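The encoding overlap is easy to see concretely: the 64-bit forms are the legacy forms with an optional REX prefix byte in front, so the decoder paths are shared. A minimal illustrative sketch in Python (the byte values are the standard encodings of `add eax, ebx` and `add rax, rbx`; the decoder function is a toy model, not real hardware):

```python
# Illustrative sketch: 64-bit x86 instructions reuse the legacy 32-bit
# encodings, differing only by an optional REX prefix byte (0x40-0x4F).
# "add eax, ebx" encodes as 01 D8; "add rax, rbx" is the same bytes
# with a REX.W prefix in front: 48 01 D8.

def decode_rex(byte):
    """Return the REX fields if `byte` is a REX prefix, else None."""
    if 0x40 <= byte <= 0x4F:
        return {
            "W": (byte >> 3) & 1,  # 1 = 64-bit operand size
            "R": (byte >> 2) & 1,  # extends ModRM.reg
            "X": (byte >> 1) & 1,  # extends SIB.index
            "B": byte & 1,         # extends ModRM.rm / SIB.base
        }
    return None

legacy = bytes([0x01, 0xD8])        # add eax, ebx
long64 = bytes([0x48, 0x01, 0xD8])  # add rax, rbx (REX.W + same opcode)

rex = decode_rex(long64[0])
assert rex == {"W": 1, "R": 0, "X": 0, "B": 0}
assert long64[1:] == legacy  # opcode and ModRM bytes are identical
```

So dropping 32-bit mode doesn't remove those decode paths; the same prefix-plus-opcode machinery still has to exist for 64-bit code.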
 
That's not an easy thing and I definitely would be one of those who would've quit. Time is too precious to waste on Microsoft stuff. I haven't even tried Office 21 and 24 to see if it's better or faster. No point. The moment they introduced their subscription based Office 365, that was the end of any interest in Office. Not interested in paying a yearly tax and not interested in paying for the perpetual licensed product that will definitely be inferior to the subscription based one.
When working on corporate stuff, MS is the standard though. Painful, but true 🙁
Yeah, AMD already confirmed 70%+.
... and really, for AMD's profit purposes, this is ALL that really matters. Also, this next generation seems like Intel is just giving up on DC completely (ironically, their laptop and desktop offerings might be pretty convincing).

It seems like a very strange freaky Friday. Intel is going "more cores" and "value choice" and AMD is all DC and HEDT/Gaming.

I think AMD has their priorities correct.
 
No, because the 64-bit instructions use the same encodings. All the inefficiencies remain even after old instructions are dropped. The only win for dropping backwards compatibility on x86 is in validation. There are no performance gains to be had for dropping a few legacy decoder paths.
Wouldn't some transistor space be freed up that is no longer needed?
 
Wouldn't some transistor space be freed up that is no longer needed?
Yes, but a few tens of thousands of transistors are entirely insignificant in a core that consists of more than half a billion transistors. Because again, all features that actually require a complex implementation still exist in 64-bit mode. You can still use random prefixes, you can still use the AL register which is aliased into RAX, etc.
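The AL/RAX aliasing mentioned above can be sketched in a few lines. This is a toy software model of the architectural behavior only (an 8-bit write merges, a 32-bit write zero-extends), not how hardware renaming actually implements it:

```python
# Toy model of the partial-register aliasing described above: AL is the
# low byte of RAX, so an 8-bit write must merge into the full register,
# while in 64-bit mode a 32-bit write zero-extends. Illustrative only.

MASK64 = (1 << 64) - 1

def write_al(rax, value):
    """Write an 8-bit value to AL, preserving the upper 56 bits of RAX."""
    return (rax & ~0xFF & MASK64) | (value & 0xFF)

def write_eax(rax, value):
    """A 32-bit write to EAX zero-extends into the full 64-bit RAX."""
    return value & 0xFFFFFFFF

rax = 0x1122334455667788
rax = write_al(rax, 0xAB)
assert rax == 0x11223344556677AB   # only the low byte changed

rax = write_eax(rax, 0xDEADBEEF)
assert rax == 0x00000000DEADBEEF   # upper 32 bits zeroed
```

The point being: this merge behavior is used by 64-bit code too, so the renamer complexity stays even without legacy modes.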
 
Another freaky Friday situation is that Intel can't stop saying what is coming and how it is going to be great, and we can't seem to get a peep out of AMD.

What a bizarro world we live in 🙂.
 
No, because the 64-bit instructions use the same encodings. All the inefficiencies remain even after old instructions are dropped. The only win for dropping backwards compatibility on x86 is in validation. There are no performance gains to be had for dropping a few legacy decoder paths.

Why should they? If you're tossing out 16 and 32 bit encodings that frees up a lot of prefixes. Why wouldn't you want to use them for all the crap using those EREX or whatever the heck ridiculous 8 and 10 byte instructions some of the 64 bit stuff is saddled with?

The CPU could still ALSO handle the current 64 bit encodings (to avoid needing a translator for those) but code using the new encodings would be smaller and make more efficient use of L1I and L2 cache.

That's where I get the small single digit boost - better use of icache. Not worth all the hassle, but you would get SOMETHING out of it if you went through all that.
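A back-of-envelope version of that icache argument, with invented numbers (the real savings would depend on the actual instruction mix and whatever the hypothetical shorter encodings looked like):

```python
# Back-of-envelope sketch of the icache argument above. The average
# instruction lengths are assumptions for illustration, not measured
# values for any real encoding scheme.

def icache_working_set(n_instructions, avg_bytes_per_insn):
    """Approximate code footprint in bytes."""
    return n_instructions * avg_bytes_per_insn

old = icache_working_set(100_000, 4.5)   # assumed current x86-64 average
new = icache_working_set(100_000, 4.2)   # assumed with reclaimed short opcodes

shrink = 1 - new / old
print(f"code footprint shrinks by {shrink:.1%}")  # ~6.7% smaller
```

A mid-single-digit footprint reduction would only translate into a small fraction of that in actual performance, via slightly fewer icache and L2 misses - which is consistent with the "small single digit boost, not worth the hassle" framing above.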
 
Remember the "omg Zen 4 SUX" time after AMD unveiled it at Computex 2022? And it was a 1T inter-generational jump monster.
Well, for context:
  • Zen 4 launch was seen as delayed as it started the ~2y cadence
  • Zen 4 development was not on a shoestring budget
  • Gains expectations were set by Zen 3
The IPC gain of Zen 4 sucked. The TDP hike sucked - Zen was no longer "the cool chip".

Back then AMD just did not tell us they aimed at Intel's frequency ceiling.
They have some geography nerds at AMD. How high up does it go? We'll never know.
Still they can never beat Magny Cours as a geographic codename.
Geography? IIRC Magny-Cours and Interlagos are racing circuits.
 
192core (dense cores?) at 4GHz? Or is that misdetected 2S board with 2x classic processors?
That might be the boost clock. 9965 would already boost to 3.7 on N3E.
The 256c SKU also shows 4.0, with 2.0 being the base clock.
Code:
BIOS Model name:                         AMD Eng Sample: 100-000001056-03                To be filled by O.E.M. CPU @ 2.0GHz

The NUMA nodes also correspond to the 32c CCD:
Code:
NUMA node0 CPU(s):                       0-63,256-319
NUMA node1 CPU(s):                       64-127,320-383
NUMA node2 CPU(s):                       128-191,384-447
NUMA node3 CPU(s):                       192-255,448-511
edit: nvm, it shows 128 cores per socket, so looks like these are 96c and 128c SKUs. https://openbenchmarking.org/system/2603100-NE-STRESSTES12/stress_test/lscpu
Code:
Core(s) per socket:                      128
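The NUMA ranges quoted above can be unpacked to sanity-check the topology. A small helper for lscpu-style CPU lists (the arithmetic assumes 2-way SMT, as the output implies):

```python
# Expand lscpu-style CPU lists like "0-63,256-319" and check the
# topology implied by the quoted output: each NUMA node shows 128
# hardware threads, i.e. 64 physical cores with 2-way SMT.

def expand_cpus(spec):
    """Expand '0-63,256-319' into a list of CPU ids."""
    cpus = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            cpus.extend(range(lo, hi + 1))
        else:
            cpus.append(int(part))
    return cpus

node0 = expand_cpus("0-63,256-319")
assert len(node0) == 128      # 128 hardware threads per NUMA node
assert len(node0) // 2 == 64  # 64 physical cores per node with SMT2
```

Four such nodes at 64 cores each gives 256 cores total, which across two sockets matches the "Core(s) per socket: 128" line.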
 
Seems like "Intel is back", and it always surprises me how much power it consumes with so many cores.
I imagine that starting with Zen 6, AMD will also be forced to pair large cores with a lot of compact cores to be able to keep up with the competition. But will AMD also be able to keep power consumption under control? And will the current design of an IOD fabbed on an older process with CCDs continue to work?
 