Speculation: Ryzen 4000 series/Zen 3

A///

Diamond Member
Feb 24, 2017
4,352
3,154
136
And it gets beat by the 5900X, maybe even the 5800X; we have to wait on the benchmarks. Who cares about 5 GHz when you have 20% better IPC? Are you still living in the '90s?

There is a wild theory on the internet that the 5600X matches the 10900K in performance. If true... wow. 4 fewer cores, but still...
 
  • Wow
Reactions: lightmanek

moonbogg

Lifer
Jan 8, 2011
10,635
3,095
136
I wonder what the overall performance will be versus Intel chips, OC for OC. Also, do the 12 core chips have a CCX latency issue? I'd be getting the 8 core version myself, but I'm curious about the 8 core dies affecting the 12 core version's latency.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
We need independent reviews, and while AT is legendary for its deep dives into cores and IPC investigations with SPEC2006, in gaming Anandtech is sadly irrelevant for enthusiasts because of the DDR4-2666 C24 memory setups they use for Intel.

Anandtech really only stands out for Apple reviews nowadays.

10 to 15 years ago were amazing years for CPU performance. Maybe quantum-type stuff will get us back to those days?

Last I heard, quantum is even harder, and requires a bunch of regular computers to correct its errors.
 
  • Like
Reactions: lightmanek

jamescox

Senior member
Nov 11, 2009
637
1,103
136
I really don't understand people defending the price increases. It's +$50 across the board, which impacts the lower-end parts a lot more. Plus, with the removal of the stock HSF (which, yes, many people didn't end up using, but it would still cool the CPU adequately), the effective price is really more like +$80 across the board compared to last-gen CPUs. A lot of the value is gone.

It performs significantly better, so it is worth more money to most people. If it were a much smaller performance increase, like Zen to Zen+, then that would be a little different. They put a lot of R&D money into a new architecture and you don't want to pay them for that? They basically had to make Zen and Zen 2 without really getting paid back for their R&D. AMD didn't make that much profit last year. If AMD wants to have the money to continue competing, then they need to raise ASPs.

It seems well worth the cost to me, and it is very far from the nearly complete stagnation and ridiculous prices of the Intel monopoly era. They will come down in price, so you can wait, or just don't buy it if you don't think it is worth it. They are, as far as I am concerned, selling almost ridiculously high performance for the price. Even something like Threadripper, which is expensive, still delivers an amazing and unprecedented level of performance.
 

jamescox

Senior member
Nov 11, 2009
637
1,103
136
What is this "something better"? And don't say it's GPUs, since they still can't even run standard C++ ...

It's bad enough that some or all parts of the silicon can't even run standard C++ code but it sucks even more for a programmer that they'll need to deal with 2 different compilers. 1 for the CPU/host code and 1 for the GPU/device code ...

We can barely convince AAA game developers to adopt stupid concepts like shading languages or shader IRs, so can you imagine how well it'll go down with many other smaller developers needing to use another proprietary vendor compiler?
What is something better? No idea really; I have just heard Linus Torvalds' opinion, and I also think that trying to make a CPU act like a GPU is probably a bad idea. It seems like Intel's kludge before they finally gave up and decided to design a GPU. Also, if they were going to support it, I would have expected it with the new architecture (Zen 3). I guess it may be plausible that they are waiting for Zen 4 (5 nm and chip stacking to deliver more bandwidth).

It is still the case that large numbers of server CPUs do not really require any FPU at all, so it is often wasted silicon. FP units take a huge amount of die area. SMT allows for sharing the FPU to some extent. I look forward to some SMT scaling tests with Zen 3. Keeping the units narrower allows you to make SKUs with differing numbers of units to cover different market segments while still having universal support. I don't know if we will see such a split with Zen eventually. AMD split their GPU development since compute and graphics don't necessarily have the same requirements. We could see some different variants from AMD eventually to serve specific markets, although chip stacking allows them to mix and match things.

There would be some special programming required to make use of a GPU-style compute unit, but I have to wonder how much AVX-512 code is actually just compiled with auto-vectorization from basic C++. I would expect most AVX-512 use is through hand-tuned low-level libraries, or it is HPC code that is also hand-tuned. Making use of a cache-coherent GPU compute unit wouldn't be much different as long as you supply the necessary libraries. There are probably some other possibilities with a chiplet GPU compute unit that is cache coherent; there is no unnecessary copying, and latency could be very low.
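
A minimal sketch of that contrast (function names are illustrative, not from any library): the first loop is the kind of plain C++ an auto-vectorizer can often turn into AVX-512 given the right flags, while the second is the hand-tuned intrinsics style that tends to live in low-level libraries.

```cpp
#include <immintrin.h>  // AVX-512 intrinsics
#include <cstddef>

// Plain C++: with e.g. -O3 -march=skylake-avx512, gcc/clang can often
// auto-vectorize this loop to AVX-512 without any intrinsics at all.
void saxpy_plain(float a, const float* x, float* y, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// Hand-tuned AVX-512 version of the same kernel, the style usually
// buried in tuned libraries rather than written in application code.
void saxpy_avx512(float a, const float* x, float* y, std::size_t n) {
    const __m512 va = _mm512_set1_ps(a);
    std::size_t i = 0;
    for (; i + 16 <= n; i += 16) {          // 16 floats per 512-bit vector
        __m512 vx = _mm512_loadu_ps(x + i);
        __m512 vy = _mm512_loadu_ps(y + i);
        _mm512_storeu_ps(y + i, _mm512_fmadd_ps(va, vx, vy));
    }
    for (; i < n; ++i)                      // scalar tail
        y[i] = a * x[i] + y[i];
}
```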
 

coercitiv

Diamond Member
Jan 24, 2014
6,198
11,891
136
I wonder what the overall performance will be versus Intel chips, OC for OC. Also, do the 12 core chips have a CCX latency issue? I'd be getting the 8 core version myself, but I'm curious about the 8 core dies affecting the 12 core version's latency.
This is the cherry on top: the 5800X will likely look even more consistent in games, if not faster. You can bet there's a latency price to pay for dual chiplets, but I'd wager the large cache pools and some scheduler optimization will compensate for some or most of it.
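
For what it's worth, that cross-chiplet latency price is easy to probe yourself once hardware is in hand. Below is a minimal ping-pong sketch (Linux-only; the core numbers are assumptions you'd adjust so the two threads land on the same vs. different chiplets for your topology):

```cpp
// Core-to-core "ping-pong" latency sketch. Build: g++ -O2 -pthread pingpong.cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <pthread.h>
#include <sched.h>

static void pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main() {
    // Assumption: core 0 and core 1; re-run with a core on the other
    // chiplet (e.g. core 8 on a 2x6 part) to see the cross-CCX penalty.
    constexpr int CORE_A = 0, CORE_B = 1;
    constexpr int iters = 1'000'000;
    std::atomic<int> flag{0};

    std::thread t([&] {
        pin_to_core(CORE_B);
        for (int i = 0; i < iters; ++i) {
            while (flag.load(std::memory_order_acquire) != 1) {}
            flag.store(0, std::memory_order_release);
        }
    });

    pin_to_core(CORE_A);
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) {
        flag.store(1, std::memory_order_release);
        while (flag.load(std::memory_order_acquire) != 0) {}
    }
    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                  std::chrono::steady_clock::now() - start).count();
    t.join();
    std::printf("avg round trip: %.1f ns\n", double(ns) / iters);
}
```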
 
  • Like
Reactions: lightmanek

Thibsie

Senior member
Apr 25, 2017
749
801
136
I mean, no? I don't have anything against the letters 'XT' being next to one another. Once they leak Warhol, yes, I will be raging against that. And you should too.

No need to jump the gun IMO.
If Warhol comes, let's see when, how it performs, and at what price. Then we may or may not rage against it.
 

dnavas

Senior member
Feb 25, 2017
355
190
116
@DrMrLordX is quietly raging at that.
...quietly? :)
Once they leak Warhol, yes, I will be raging against that. And you should too.

Honestly, AMD is going to do what they're going to do. At some point during cycles, products are undervalued and no one will buy them -- this is the time to get the best deals. At another point, rebranded products see price hikes and everyone is throwing in their wallets and their first-born to get them. I'm not sure the rebranded, more expensive respin will happen this upcoming round, but it WILL happen. Don't rage, it isn't worth the heart disease. It's sad, but it's also human nature. And for goodness sake, don't pre-rage! Wait for it to happen first :>

It's bad enough that some or all parts of the silicon can't even run standard C++ code but it sucks even more for a programmer that they'll need to deal with 2 different compilers. 1 for the CPU/host code and 1 for the GPU/device code ...

It's not THAT bad. Although C++ suffers from hydra-syndrome and an overstressed toolchain, LLVM allows for some interesting post-packaging optimization. Define a half dozen embeddable DSLs or use a fixed set of instructions ala AVX512, but support off-board accelerators (8087)?

There are now (at least?) three companies with both CPUs and GPUs. One of them will get away with defining the x86-64 equivalent of a combined CPU and vector/bit-blit/big-data "GPU"-style set of operations; the others will be stuck in the land of Itanium or get caught flat-footed. It's going to be amazing to watch, but I'm glad I'm not playing that game!
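
One existing hint of that single-toolchain direction is C++17's parallel algorithms: one source file, one compiler, and where it runs is the toolchain's call. A minimal sketch (nvc++'s -stdpar is one example of a compiler that can map this onto a GPU; with g++ you'd link TBB):

```cpp
#include <algorithm>
#include <execution>
#include <vector>

int main() {
    std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
    // The parallel/vectorized policy leaves the "where" to the compiler:
    // CPU threads, SIMD lanes, or an offload target, all from one source.
    std::transform(std::execution::par_unseq,
                   x.begin(), x.end(), y.begin(), y.begin(),
                   [](float a, float b) { return 2.0f * a + b; });
}
```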

We can barely convince AAA game developers to adopt stupid concepts like shading languages or shader IRs, so can you imagine how well it'll go down with many other smaller developers needing to use another proprietary vendor compiler?

You convince developers to do something new by providing a step-function's worth of increased capabilities -- performance, ability, maintenance, market or some other axis of improvement.
 
Apr 30, 2020
68
170
76
It performs significantly better, so it is worth more money to most people. If it were a much smaller performance increase, like Zen to Zen+, then that would be a little different. They put a lot of R&D money into a new architecture and you don't want to pay them for that? They basically had to make Zen and Zen 2 without really getting paid back for their R&D. AMD didn't make that much profit last year. If AMD wants to have the money to continue competing, then they need to raise ASPs.

It seems well worth the cost to me, and it is very far from the nearly complete stagnation and ridiculous prices of the Intel monopoly era. They will come down in price, so you can wait, or just don't buy it if you don't think it is worth it. They are, as far as I am concerned, selling almost ridiculously high performance for the price. Even something like Threadripper, which is expensive, still delivers an amazing and unprecedented level of performance.
Come on, do you really think AMD was losing money on Zen, Zen+ and Zen 2 chips - including R&D? Absolutely not. Yes, it's normal for new chips to cost more than the ASP of 1.5-year-old chips. That's standard. What's not standard is the new chips commanding a significantly higher price than those old chips made their debut at. For the lower-end chips, it's nearly a 20% increase over the original MSRP - even more so when you factor in that none of them comes with an HSF anymore except the 6c chip.
 
  • Like
Reactions: Lodix and KompuKare

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Come on, do you really think AMD was losing money on Zen, Zen+ and Zen 2 chips - including R&D? Absolutely not. Yes, it's normal for new chips to cost more than the ASP of 1.5-year-old chips. That's standard. What's not standard is the new chips commanding a significantly higher price than those old chips made their debut at. For the lower-end chips, it's nearly a 20% increase over the original MSRP - even more so when you factor in that none of them comes with an HSF anymore except the 6c chip.

I'm all for low prices, but lets be realistic. Intel and NVidia are the proverbial 1,000lb gorillas. For AMD to continue their forward momentum and remain competitive, they need to NOT be relegated to the budget price bracket that we've all been accustomed to over the years.

Personally, I have no problem paying higher prices for AMD CPUs provided they can demonstrate premium performance. I have paid a lot more in the past for a lot less to be honest.
 

leoneazzurro

Senior member
Jul 26, 2016
927
1,452
136
Come on, do you really think AMD was losing money on Zen, Zen+ and Zen 2 chips - including R&D? Absolutely not. Yes, it's normal for new chips to cost more than the ASP of 1.5-year-old chips. That's standard. What's not standard is the new chips commanding a significantly higher price than those old chips made their debut at. For the lower-end chips, it's nearly a 20% increase over the original MSRP - even more so when you factor in that none of them comes with an HSF anymore except the 6c chip.

They are not losing money anymore, but their gross margins are below what the industry considers sustainable for a long-term business, so lowering their ASPs is not the best way to go, especially on a new product whose development is not yet amortized. Moreover, they are pricing based on performance level. A 5600X is likely to be slightly behind a 3700X in multithreading and significantly ahead in lightly threaded applications and gaming. It is likely to be above the 10600K in every aspect, while being priced only slightly higher AND coming with a bundled cooler. The other parts are priced so as not to cannibalize internal sales (there is still 3000-series stock to be sold) and against the competitor's parts, with a price premium reflecting the performance uplift, plus the added benefit that many users can reuse their motherboard, which is not at all a given with the competition.

Edit: the lineup has one oddball though, and that is the 5800X, which is the lowest-value offering in the 5000 series.
 

Karnak

Senior member
Jan 5, 2017
399
767
136
What's not standard is the new chips commanding a significantly higher price than those old chips made their debut at.
While I agree that a 5600 and 5700X are missing, and because of that the pricing is a bit off compared to Zen 2 "down there", there are still the 5900X and 5950X. They are only $50 more expensive than the 3900X/3950X, which means a +10%/+7% increase (going by the $499/$749 launch prices versus $549/$799). That's just not a "significantly higher price". The performance increase is something like double or triple the price increase. Probably even a bit more.
 
Feb 4, 2009
34,564
15,777
136
Come on, do you really think AMD was losing money on Zen, Zen+ and Zen 2 chips - including R&D? Absolutely not. Yes, it's normal for new chips to cost more than the ASP of 1.5-year-old chips. That's standard. What's not standard is the new chips commanding a significantly higher price than those old chips made their debut at. For the lower-end chips, it's nearly a 20% increase over the original MSRP - even more so when you factor in that none of them comes with an HSF anymore except the 6c chip.

While I'm not super excited about the prices, it is to be expected.
Does the moderate-quality included heatsink carry value? How many builders have actually used the included heatsink? I didn't, but it is nice to have a backup just in case.
A 20% price increase for a 10% performance gain is pretty typical in computing.
I think you are being a bit overly dramatic about it.
 

kurosaki

Senior member
Feb 7, 2019
258
250
86
While I'm not super excited about the prices, it is to be expected.
Does the moderate-quality included heatsink carry value? How many builders have actually used the included heatsink? I didn't, but it is nice to have a backup just in case.
A 20% price increase for a 10% performance gain is pretty typical in computing.
I think you are being a bit overly dramatic about it.
Typical for the last couple of years, maybe. Back in my day, we had at least 100% gains gen to gen, with pricing often going down. This, this is stagnation.
 
  • Like
Reactions: KompuKare

eek2121

Platinum Member
Aug 2, 2005
2,930
4,026
136
While I'm not super excited about the prices, it is to be expected.
Does the moderate-quality included heatsink carry value? How many builders have actually used the included heatsink? I didn't, but it is nice to have a backup just in case.
A 20% price increase for a 10% performance gain is pretty typical in computing.
I think you are being a bit overly dramatic about it.

I used the cooler included with my 3900X on my wife’s 3600XT.
 
  • Like
Reactions: lightmanek
Feb 4, 2009
34,564
15,777
136
Typical for the last couple of years, maybe. Back in my day, we had at least 100% gains gen to gen, with pricing often going down. This, this is stagnation.

Oh, I 100% agree with that. Sadly, regarding CPUs & GPUs, it appears all the easy performance gains have been used up and none are left.
We need something radically different to move forward at those rates again. Maybe some kind of light/fiber-optic computing instead of electricity; maybe (but unlikely for the foreseeable future) it will be quantum computing. Who knows. The only thing that is known is that what we have will likely be here for a long time.
 

nicalandia

Diamond Member
Jan 10, 2019
3,330
5,281
136
There is a wild theory on the internet that the 5600X matches the 10900K in performance. If true... wow. 4 fewer cores, but still...
The 5600X should be able to trash a 6950X, a 10-core from Intel, but the 10900K? In single-threaded work, yes; in multi-threaded work the 5600X should be a few percentage points (single digits) lower.
 

nicalandia

Diamond Member
Jan 10, 2019
3,330
5,281
136
I have no issues with pricing. Intel doesn’t have anything comparable, and the 5600X is likely to perform as well as a 3800X.
Rocket Lake has been rendered EOL by Zen 3. It will be a backport of Willow Cove (from 10nm, with roughly 6% less IPC than Zen 3) that is highly inefficient, runs hot, and will max out at 5.3 GHz and 8C/16T. So it will have 6-10% lower IPC than Zen 3 and be capped at 8 cores.
 

dnavas

Senior member
Feb 25, 2017
355
190
116
The only thing that is known is that what we have will likely be here for a long time.

Necessity is the mother of invention. PCIe 5.0 is likely shipping next year, and the PCIe 6.0 spec should settle as well. Will vendors really adopt PAM4 and absorb those costs, or are we headed elsewhere? Unclear to me. It may never hit desktops. But what happens when your PCIe connectors have "more" bandwidth than your memory? We're headed into terra bizarro. If the industry does what it usually does, that bandwidth will get used, but if it does, it'll be used for something completely alien IMHO. I'll be very disappointed if what we have now is here for a long time. Could happen, but I sincerely hope not.
 

D283W

Junior Member
Dec 11, 2017
8
18
81
This is the cherry on top, the 5800X will likely look even more consistent in games, if not faster. You can bet there's a latency price to pay for dual chiplets, but I'd wager the large cache pools and some scheduler optimization will compensate for some or most of it.
Are you guaranteed to get a single chiplet with a 5800X though? I didn't think that was necessarily the case, but I could be mistaken. Also, this was posted on YouTube by RGT earlier, purporting to show that a 5800X ties a 10700K in 1080p gaming: [attached benchmark screenshot]
 
Last edited:

naukkis

Senior member
Jun 5, 2002
706
578
136
Are you guaranteed to have a single chiplet with a 5800X though? I didn't think that was necessarily the case, but I could be mistaken. Also, this was posted on YouTube by RGT earlier purporting to show that a 5800X ties a 10700K in 1080p gaming: View attachment 31479

No, it's about performance per dollar. As the 5800X is about 10% more expensive, supposedly its 1080p gaming performance is also 10% faster.
 

ModEl4

Member
Oct 14, 2019
71
33
61
Although Zen 3 execution was extremely good and delivered better-than-expected performance, especially in games, the pricing is not exactly consumer-friendly.
It is one thing to say that, from a business perspective and relative to what the competition has to offer, it can be justified, and another to claim that AMD won't be like Intel and won't try to raise prices whenever it has the opening. It took Intel 4.5 years to raise the price of the 2500K ($216) to the level of the 6600K ($242):
$26 in 4.5 years, versus AMD's $50 in 3.5 years. (Although AMD increased performance a little more in 3.5 years than Intel did in 4.5, Intel had virtually no competition in that era; above $200 there was no desirable competitive AMD CPU.) On the other hand, if you check gaming performance, below is a pessimistic scenario for the 720p difference in modern engines, which is possibly more indicative of the future than 1080p CS:GO/LOL/DOTA2 (Intel % diff based on TechPowerUp 720p results):
5900X 100
5800X 98
5600X 95
10900K 95 (-5% vs 5900X)
10850K 94.4
10700K 91.8
10600K 88.2
The 5900X is the easy choice for saying the +$50 is justified, since it is a 12-core with no competition among Intel's consumer-grade CPUs for at least a year. But look also at the 5800X: it will be better than the 10900K both in gaming and (slightly) in multithreaded applications like rendering, and possibly the 8-core Rocket Lake-S will not be able to match it in multithreading (in gaming the odds are with Intel, I think), so why not $449 now? On the other hand, $300 for a 6-core is a hard pill to swallow, but if you check the inevitable 5600, having 10700K gaming performance and +20% multithreaded performance in applications like rendering vs. the 10600K at possibly $249 would be justified too. Another optimistic scenario involves the possibility that prices gradually drop below the SRPs. Anyway, if AMD uses the extra margins to strengthen the GPU team I wouldn't mind; increased competition in the GPU space is badly needed and will help consumers too (especially after the mini preview, lol, what a bad judgment call not to specify the RX 6000 model positioning in the upcoming Big Navi lineup 🙄)
 
  • Like
Reactions: Tlh97

kurosaki

Senior member
Feb 7, 2019
258
250
86
Oh, I 100% agree with that. Sadly, regarding CPUs & GPUs, it appears all the easy performance gains have been used up and none are left.
We need something radically different to move forward at those rates again. Maybe some kind of light/fiber-optic computing instead of electricity; maybe (but unlikely for the foreseeable future) it will be quantum computing. Who knows. The only thing that is known is that what we have will likely be here for a long time.
One clear answer is ARM. The X1 totally smashes even Zen 3 in IPC, and that's for the <10W X1. Imagine what a 100 W part would be able to produce. The new ARM Firestorm cores developed by Apple will probably be able to emulate x86 applications faster than x86 processors can run them natively. The first actor to release an ARM CPU for desktop, with support for SATA, PCIe, DDR5 memory and so forth, will be able to grab a huge market share overnight. OK, might be a bit dramatic, but with Firestorm we have what, 60% higher absolute performance than Intel's best desktop chips, at 5 watts? Now we're talking again! The only thing left is for Windows to do the same as Apple: get great AArch64 support, and get it done fast. I want my next gaming rig to be 300% faster than the former, not 26%.