Pity there appears to be no 5900X3D, which would be a logical replacement for my 3900X. Did AMD give a reason? Was it just cost? Limited supply? I guess I'll just keep an eye out for a heavy price cut on a 5900X.
There are a few 5900X3Ds, but only as prototypes in the lab (one was shown at Computex 2021). There are plenty of reasons why they didn't release one, but the main ones are that AMD is ramping up Milan-X, which sells for $1,340 per CCD, because that is where the money is, and they deem the 5800X3D enough to recapture the gaming crown and hold off Intel until Zen 4 drops.
Dude, premium CPUs command premium prices. The 1800X launched at $499 and wasn't even the top dog in gaming. Intel has always charged a pretty penny for top-of-the-line parts (see 11700K vs. 11900K).
The 5800X3D will launch at $500 or more; it's a halo product for extreme gamers. It's not for people worrying about budgets, like many of you do.
If they had released such an SKU, they would have had to scale the single-core frequency down to 4.6 GHz at best, from its usual 4.9 GHz.
The 3D V-Cache die can be turned off when not in use to save power (how the OS would deem that necessary is something we don't know yet).
I don't think it's all down to thermals; aside from just dissipating heat, the additional cache is going to be consuming power. That may also have something to do with it, on top of binning decisions, since powering all of that additional cache means there's less power to go around. If that's the case, then people willing to go beyond the default limits might be able to get some additional OC headroom.
4.9 GHz was what they were comfortable binning at six quarters before the 5800X3D, and you're speaking very confidently about what must be complete speculation.
Again, if the limit is thermals, why did base clocks drop by 400 MHz while boost clocks only dropped by 200 MHz? (I mistakenly said earlier that it was the same amount.) That's a 10.5% reduction at base vs. a 4.3% reduction at boost.
Obviously, there's an element of thermals and an element of binning, but any thermal bottleneck is going to be more pronounced at high frequencies and voltages. The massive reduction in base clockspeed compared with a much smaller reduction in boost clocks doesn't make much sense if the primary consideration is thermals, but it may if it's binning.
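Just to put the arithmetic on the table: those percentages come from the stock 5800X's 3.8/4.7 GHz base/boost against the 5800X3D's 3.4/4.5 GHz. A quick sketch:

```python
# Clock reductions from the 5800X (3.8 GHz base, 4.7 GHz boost)
# to the 5800X3D (3.4 GHz base, 4.5 GHz boost).
specs = {"base": (3.8, 3.4), "boost": (4.7, 4.5)}
for kind, (old_ghz, new_ghz) in specs.items():
    drop_mhz = (old_ghz - new_ghz) * 1000
    drop_pct = (old_ghz - new_ghz) / old_ghz * 100
    print(f"{kind}: -{drop_mhz:.0f} MHz ({drop_pct:.1f}%)")
# base: -400 MHz (10.5%), boost: -200 MHz (4.3%)
```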
The 3D V-Cache die can be turned off when not in use to save power (how the OS would deem that necessary is something we don't know yet).
Why turn it off when we know that the performance is better with the additional cache even though the clock speed is lower? Furthermore, if you're considering the total system power, then having more cache is better for lowering the total power use. Having to load something from main memory is more expensive in terms of power use. Unless you're going to put the computer to sleep, there aren't too many cases I can think of where you'd see an advantage from turning the additional cache off.
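An order-of-magnitude sketch of that "more cache saves total power" argument: a hit served from L3 costs far less energy than going out to DRAM, so a higher hit rate from the bigger cache lowers total memory-access energy. The per-access energies and hit rates below are illustrative assumptions, not Zen 3 measurements.

```python
E_L3_PJ = 50       # assumed energy per L3 hit, in picojoules
E_DRAM_PJ = 2000   # assumed energy per DRAM access, in picojoules

def access_energy_pj(n_accesses, hit_rate):
    # Total memory-access energy: hits stay in L3, misses go to DRAM.
    hits = n_accesses * hit_rate
    misses = n_accesses - hits
    return hits * E_L3_PJ + misses * E_DRAM_PJ

plain = access_energy_pj(1_000_000, 0.80)   # assumed hit rate with 32 MB L3
vcache = access_energy_pj(1_000_000, 0.95)  # assumed hit rate with 96 MB L3
print(f"32 MB: {plain/1e6:.0f} uJ, 96 MB: {vcache/1e6:.0f} uJ")
```

Even with modest hit-rate gains, the DRAM term dominates, which is why keeping the cache on can be a net power win while the machine is doing work.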

An all-core boost produces more heat, so the silicon temperature is higher, and they had to dial clocks down to stay within the TDP.
For a single-core boost, the assumption is that the neighboring cores aren't boosting much, so the nearby silicon acts as a heatsink, which allows for higher frequencies.
You aren't explaining the reduction to base clocks. In a vanilla 5800X, there isn't a huge difference between single core and all core boost. The official specified boost frequency is closer to all core than single core. The difference between single and all core boost is significantly less than the 400 MHz reduction to base clocks.
AMD Ryzen 7 5800X Review (www.techpowerup.com): "The AMD Ryzen 7 5800X is built using just one CCD, which eliminates a lot of latencies and bottlenecks in the multi-core topology. We also saw it boost close to 5 GHz regularly, out of the box, without any overclocking. This one-two-punch combination helped it beat the 5900X in gaming and several..."
It's possible that the 5800X3D will change this behavior so that there is a larger gradient between single and all core boost, but that would be pure speculation.
It doesn't get disabled, just gated off so it isn't drawing power, and turned back on when needed; essentially it gets put to sleep. I imagine this all happens at the hardware level, so the OS/software doesn't have visibility into whether the extra cache is on or off.
If that weren't the case, we wouldn't see a 15% performance improvement despite a ~5% reduction in clock speed.
It's also 15% at ISO clock speed (at least on the 5900X3D prototype shown at Computex 2021).
Perhaps my wording didn't make it clear, but in what situations would the hardware want to do that? The larger cache improves performance and reduces the time the CPU needs to run, because it doesn't need to wait on as many memory accesses. If that weren't the case, we wouldn't see a 15% performance improvement despite a ~5% reduction in clock speed.
Really the only time you'd want to turn it off is if the system is idle, at which point the cores don't need additional power to boost either. Otherwise, as long as the system is operating, you'd always want the additional cache active because it ultimately saves power.
My original point was that AMD has kept the same TDP for the 5800X3D as it had for the 5800X. Because the cache requires power, there's less that can be supplied to the rest of the chip, so the clock speeds naturally decrease. Of course, if you don't care about that and are willing to bypass those limits, it's entirely possible you could still achieve the same clock speeds on a 5800X3D as on a 5800X, assuming the limits aren't also due to binning changes. Even if there were binning changes, that just means greater variability in silicon quality; some parts could still reach those clocks, but there's no guarantee.
A 5800X uses up to 142 W of power. The V-Cache die itself uses a slice of that, which lowers the power budget for the cores. The reason the base clocks drop more than boost is that the base clock is a baseline rating with all cores active, while the boost clock is the peak for a single core. The slightly lower boost clock is actually down to binning; it has absolutely nothing to do with thermals. You can enable PBO or overclock to get the performance back.
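A toy model of that budget argument: whatever the stacked cache draws comes out of the cores' share of the package limit. It assumes dynamic power roughly follows P = C·f·V², with V scaling with f, so P ∝ f³. Only the 142 W PPT is a real 5800X figure; the reference clock and cache power draw are made-up illustrations.

```python
PPT_W = 142.0      # package power limit of a stock 5800X
F_REF_GHZ = 4.5    # assumed all-core clock when cores get the full budget
CACHE_W = 8.0      # assumed power draw of the V-Cache die (illustrative)

# With the cache taking its slice, the cores' budget shrinks, and under
# the P ~ f^3 approximation the sustainable all-core clock drops by the
# cube root of the budget ratio.
core_budget_w = PPT_W - CACHE_W
f_new_ghz = F_REF_GHZ * (core_budget_w / PPT_W) ** (1 / 3)
print(f"all-core clock with cache powered: {f_new_ghz:.2f} GHz")
```

Under these assumptions the all-core clock only drops a few percent, so a power-budget effect alone is mild; it's one ingredient alongside binning and thermals, not the whole story.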
So, one guy is saying it's all thermals. You think it's not thermals at all, but rather all about power. In actuality there's a bit of both, but even together they can't explain a 400 MHz drop in base clocks.
A 5950X also has a 3.4 GHz base clock with the same 105 W TDP, even accounting for better binning on the 5950X (and again, six quarters passed between that and the 5800X3D; average silicon quality has obviously gone up). You wouldn't argue that the V-Cache die consumes anywhere near as much as a compute die, because that would be silly. Yet here we are.
You don't always need such a massive cache. Many programs fit just fine inside Zen 3's existing 32 MB of L3 and won't see any uplift from the extra cache. If the extra cache isn't being used, it doesn't make sense to waste power by continually refreshing it.
The base clocks will be set based upon max core utilization, meaning the full L3 cache is engaged. Potentially, in real-world clocks, if the V-Cache is powered off you would still see the same base clocks as the standard 5800X, all else being equal.
What general workload is going to fit in 32 MB of cache? Sure, the instructions aren't necessarily going to use that much, but it's rare that anything operates on less than 32 MB of data.
That also doesn't account for other programs that are running. For benchmarking purposes everything else is turned off and background processes are eliminated to the greatest extent possible, but most people are going to run multiple applications in the background, which will make good use of that extra cache space.
I suppose if you get the magic workload that doesn't need the additional cache, and it's run in something more akin to a server environment where only that program is running, you could just get the regular performance by not utilizing the V-Cache. But if you know that, why even bother buying a 5800X3D instead of a 5800X, or something else like a 5900X?
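To make the "does it fit" question concrete, here is a quick size check against the stock 32 MB L3 and the 5800X3D's 96 MB. The workload sizes are made up for illustration, not taken from any benchmark.

```python
L3_PLAIN = 32 * 2**20   # stock Zen 3 L3, bytes
L3_VCACHE = 96 * 2**20  # 5800X3D L3 with stacked V-Cache, bytes

# Hypothetical working sets (sizes chosen for illustration only).
workloads = {
    "2048x2048 float32 matrix": 2048 * 2048 * 4,     # 16 MiB
    "game scene working set (assumed)": 60 * 2**20,  # 60 MiB
    "8192x8192 float32 matrix": 8192 * 8192 * 4,     # 256 MiB
}
for name, size in workloads.items():
    print(f"{name}: fits in 32 MB: {size <= L3_PLAIN}, "
          f"fits in 96 MB: {size <= L3_VCACHE}")
```

The middle case is the interesting one: anything between 32 MB and 96 MB fits only with V-Cache, which is roughly the band where the extra cache pays off; much smaller or much larger working sets behave the same on both chips.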
Does the L3 cache clock at the same speed as the cores in Zen 3? Anyone know?
As far as I am aware, only the L1 is as fast as the core. L2 is half of that, and L3 is even slower.