[Discussion] Speculation: Zen 4 (EPYC 4 "Genoa", Ryzen 7000, etc.)


Vattila

Senior member
Oct 22, 2004
807
1,411
136
Except for the details of the microarchitectural improvements, we now know pretty well what to expect from Zen 3.

The leaked presentation by AMD Senior Manager Martin Hilgeman shows that EPYC 3 "Milan" will, as promised and expected, reuse the current platform (SP3), and the system architecture and packaging look to be the same, with the same 9-die chiplet design and the same maximum core and thread count (no SMT-4, contrary to rumour). The biggest change revealed so far is the enlargement of the compute complex from 4 cores to 8 cores, all sharing a larger L3 cache ("32+ MB", which I think is likely to double to 64 MB).
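To spell out what the unified complex means for cache, here is a quick back-of-envelope (the Zen 2 figures are the known layout; 32 MB is from the slide, and the doubling to 64 MB is only my speculation):

```python
# L3 reachable by one core: Zen 2 CCD (2 CCXs, 4 cores + 16 MB each)
# vs the Milan slide's single 8-core CCX sharing "32+ MB".
zen2_cores, zen2_l3_mb = 4, 16     # per CCX, known Zen 2 layout
zen3_cores, zen3_l3_mb = 8, 32     # per CCX, from the leaked slide

print(zen2_l3_mb / zen2_cores)     # 4.0 MB of L3 per core
print(zen3_l3_mb / zen3_cores)     # still 4.0 MB per core...
print(zen3_l3_mb / zen2_l3_mb)     # ...but each core can reach 2x the L3
# If the total doubles to 64 MB as speculated above, both ratios double again.
```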

Hilgeman's slides also showed that EPYC 4 "Genoa" is in the definition phase (or was at the time of the presentation in September, at least) and will come with a new platform (SP5) and new memory support (likely DDR5).


What else do you think we will see with Zen 4? PCI-Express 5 support? Increased core-count? 4-way SMT? New packaging (interposer, 2.5D, 3D)? Integrated memory on package (HBM)?

Vote in the poll and share your thoughts! :)
 
  • Like
Reactions: richardllewis_01

Ajay

Lifer
Jan 8, 2001
16,094
8,109
136
Can't see such a launch from Apple happening. If TSMC pushes Apple to such an unavoidable choice, I expect Apple to think twice about future cooperation with TSMC.

Whatever happens, we should expect TSMC's close customers (as at least Apple, AMD and MediaTek are) to be privy to any potential internal issues as they crop up, and to work together on possible workarounds and alternate solutions accordingly.
I should have said 'low initial volume' with regard to the next iPhone. As I said, I'm sure TSMC is pushing as hard as humanly possible to get yields up. Apple won't move to another foundry over one problematic node. They have too good a relationship with TSMC to make such a drastic change right now, IMHO. If it happens again, well, all bets are off.
 

Doug S

Platinum Member
Feb 8, 2020
2,751
4,685
136
Can't see such a launch from Apple happening. If TSMC pushes Apple to such an unavoidable choice, I expect Apple to think twice about future cooperation with TSMC.

Whatever happens, we should expect TSMC's close customers (as at least Apple, AMD and MediaTek are) to be privy to any potential internal issues as they crop up, and to work together on possible workarounds and alternate solutions accordingly.

Yep, 0% chance that Apple launches with a "more limited supply" due to their own SoC, when they work so closely with TSMC that they help define each new process. They would know far in advance if there were any problems, and would be able to retarget their process. Heck, it's possible they have a dual-track design process for EVERY new A-series SoC, with a fallback option available just in case. Considering how many A-series SoCs Apple ships, doing such a backup design would cost them something like 10 or 20 cents per iPhone. At the very least it would make sense when jumping to a new process, e.g. designing a fallback A14 on an N7 variant and abandoning it only once they knew N5 was fine.
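Quick back-of-envelope on that per-iPhone figure (the design cost and shipment volume below are my guesses for illustration, not disclosed numbers):

```python
# Hypothetical numbers: amortizing a one-off fallback SoC design
# over roughly a year of iPhone shipments.
design_cost = 40e6        # assumed cost of a backup port/tape-out, in USD
units_shipped = 220e6     # assumed annual iPhone volume

cost_per_phone = design_cost / units_shipped
print(f"~${cost_per_phone:.2f} per iPhone")  # ~$0.18, i.e. in the 10-20 cent range
```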

Anyway, it has been public for a long time that N3 would not be ready in time for Apple, so is the claim now that N4 will somehow have a "limited supply"? Or that, despite working alongside TSMC and knowing the N3 schedule long before it became public, they stubbornly designed the A16 on N3 and only now are saying "oh crap, it isn't going to be ready in time"?

There seem to be a lot of people pushing this idea that TSMC has suddenly turned into Intel and is unable to get a process working, while taking Intel's roadmap claims as gospel. I guess there are some here who are heavily invested in INTC, or perhaps are holding a lot of underwater options, and really need that stock to make a comeback, which requires a big TSMC stumble to cut the legs out from under AMD.
 

jpiniero

Lifer
Oct 1, 2010
15,173
5,708
136
Anyway, it has been public for a long time that N3 would not be ready in time for Apple, so is the claim now that N4 will somehow have a "limited supply"?

We are talking about the 2023 iPhone, not the 2022 one. I agree that if it were an unplanned change to N4P, AMD etc. would get bumped.
 
Jul 27, 2020
19,840
13,606
146
There seem to be a lot of people pushing this idea that TSMC has suddenly turned into Intel and is unable to get a process working, while taking Intel's roadmap claims as gospel. I guess there are some here who are heavily invested in INTC, or perhaps are holding a lot of underwater options, and really need that stock to make a comeback, which requires a big TSMC stumble to cut the legs out from under AMD.
Gelsinger is shouting at the top of his lungs while Lisa Su is keeping mum. If they were in the running for POTUS, who do you think the majority would vote for?
 
  • Like
Reactions: Drazick

DrMrLordX

Lifer
Apr 27, 2000
22,031
11,614
136
N4 and friends use the same production lines, right? That's not the case with N6.

That's only the case 'cuz N6 uses EUV while N7 is purely DUV. From the noise TSMC has made about moving customers to N6 from N7, it appears as though many of the same production steps are used between N6 and N7.

If they were in the running for POTUS

ehhhh

. . .

not here, please.
 

Joe NYC

Platinum Member
Jun 26, 2021
2,481
3,382
106
Why not go with your simplest components on N3, like the stacked cache? Having faster cache, or very low heat-producing cache, when cache will dominate your product, sounds like a good fit. Get experience with the process, then migrate the CPU architecture over to it.

Because you would be multiplying the cost and getting extremely limited benefit.

I think AMD will be using N6 for cache for years and years to come.
 
  • Like
Reactions: Tlh97 and yuri69

Saylick

Diamond Member
Sep 10, 2012
3,513
7,773
136
That's only the case 'cuz N6 uses EUV while N7 is purely DUV. From the noise TSMC has made about moving customers to N6 from N7, it appears as though many of the same production steps are used between N6 and N7.
Correct, N6 uses EUV for some layers to simplify what would have been DUV multi-patterning down to a single EUV pass. If I'm not mistaken, you can totally transfer your N7 design and have it fabbed on N6 without any increase in density, but what you gain is potential cost savings from the smaller number of masks required and/or faster fab time. Alternatively, you can design for N6 from scratch and use EUV on a few select critical layers where you want slightly more density/PPW.
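Purely illustrative sketch of where the mask savings come from (the layer and mask counts below are made-up assumptions, not TSMC figures):

```python
# Toy comparison of patterning cost for critical layers.
# On N7, a tight-pitch layer may need several DUV exposures (multi-patterning);
# on N6, one EUV exposure can replace that stack of masks.
duv_masks_per_layer = 4    # assumed multi-patterning depth on N7
euv_masks_per_layer = 1    # single EUV pass on N6
converted_layers = 5       # assumed number of layers moved to EUV

masks_saved = converted_layers * (duv_masks_per_layer - euv_masks_per_layer)
print(f"{masks_saved} fewer masks across {converted_layers} converted layers")
```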
 

jamescox

Senior member
Nov 11, 2009
644
1,105
136
Eh, they will drop it once Zen 4 drops. I can't see them spending time and money building the chips, testing them, and then not releasing them unless sales numbers for the previous gen were way down.
They may not even have made any Zen 3 based Threadripper processors. After Zen 4 comes out, how attractive will a Zen 3 based Threadripper actually be? It would likely be significantly outperformed by Zen 4 parts for anything that doesn't take full advantage of the extra cores available on Threadripper. It may even be outperformed by Zen 3 with V-Cache for games and a lot of other applications. If Zen 4 has significantly improved FP throughput, some applications may perform better on a 16-core Zen 4 part than on a 32-core Zen 3.
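To illustrate that last point with a toy model (every figure below is made up for the sake of argument, not a real Zen 3 or Zen 4 number):

```python
# Hypothetical aggregate FP throughput: cores x clock x per-core FP rate.
# All numbers are assumptions, not measured Zen 3 / Zen 4 figures.
zen3_cores, zen3_clock_ghz, zen3_fp = 32, 3.6, 1.0   # baseline per-core FP
zen4_cores, zen4_clock_ghz, zen4_fp = 16, 4.5, 2.0   # assumes 2x FP per core

zen3_total = zen3_cores * zen3_clock_ghz * zen3_fp   # 115.2 (relative units)
zen4_total = zen4_cores * zen4_clock_ghz * zen4_fp   # 144.0 (relative units)
print(f"32C Zen 3: {zen3_total:.1f}, 16C Zen 4: {zen4_total:.1f}")
```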

I hope that there will be a new socket in between AM5 and SP5. I have wondered whether the supply of Zen 4 dies will be significantly better than Zen 3's. The 2 vs. 12 gap (in both memory channels and CPU dies) is massive. I always thought they should have at least 3 sockets anyway.
 

soresu

Diamond Member
Dec 19, 2014
3,212
2,483
136
If Zen 4 has significantly improved FP throughput, some applications may perform better on a 16-core Zen 4 part than on a 32-core Zen 3.
The same was true of Zen 2 vs. Zen 1/+. Considering that the 16C parts will probably clock higher than the 32C ones, it's fairly likely to be the case for SIMD-heavy workloads.
 

soresu

Diamond Member
Dec 19, 2014
3,212
2,483
136
Gelsinger is shouting at the top of his lungs while Lisa Su is keeping mum. If they were in the running for POTUS, who do you think the majority would vote for?
The majority don't even follow what the individual component makers/designers are doing; they just buy pre-built systems, and often nothing more than the whim of the salesman determines which brands they go for.
 

MadRat

Lifer
Oct 14, 1999
11,943
264
126
Because you would be multiplying the cost and getting extremely limited benefit.

I think AMD will be using N6 for cache for years and years to come.
I don't think getting 1.35x as much cache going from N7 to N5, or 1.2x as much going from N5 to N3, is a limited benefit when you can also cut power by 30% or bump speed by 15% with each successive move. Those are significant improvements. I'm thinking that stacked cache is going to require leaning on these improvements to help with the overall thermal conditions of being packed in so close to the cores.
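Compounding those per-node factors (treating them as multiplicative, which is my assumption) shows the cumulative N7-to-N3 gain:

```python
# Cumulative SRAM scaling implied by the per-node factors quoted above.
density_n7_to_n5 = 1.35
density_n5_to_n3 = 1.20
power_per_step = 0.70          # "cut power by 30%" per node, at iso-speed

total_density = density_n7_to_n5 * density_n5_to_n3   # ~1.62x vs N7
total_power = power_per_step ** 2                     # ~0.49x vs N7
print(f"N7 -> N3: ~{total_density:.2f}x density at ~{total_power:.2f}x power")
```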
 
  • Like
Reactions: Vattila

CakeMonster

Golden Member
Nov 22, 2012
1,497
658
136
I don't like the Intel bragging much, but I know there are cultural differences worldwide in how to talk about your business, so I will absolutely not try to analyze it with the goal of figuring out the actual shape of Intel's processes and products.

My impression is that we were much more in the dark about future architectures 10 or 20 years ago than we are today. I might be wrong, but we have been given some details about both the architectures releasing this year and the next two. That should be a good sign for design progress, even if there are problems with nodes forcing some of them to be backported, as alluded to above.
 

uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146
At least the good thing to come out of Intel's 10nm troubles is that they are now much more forthcoming about their future plans. They have to be to keep their share price from tanking.

maddie

Diamond Member
Jul 18, 2010
4,881
4,951
136
At least the good thing to come out of Intel's 10nm troubles is that they are now much more forthcoming about their future plans. They have to be to keep their share price from tanking.
Why do you believe they are fully truthful now? It wouldn't be the first time they've misled; witness the multi-year 10nm fiasco.
 

Saylick

Diamond Member
Sep 10, 2012
3,513
7,773
136
Why do you believe they are fully truthful now? It wouldn't be the first time they've misled; witness the multi-year 10nm fiasco.
Truth is decided in hindsight. His/her point is still valid in that Intel has been more forthcoming about their plans (emphasis on the word "plan") than ever before. Whether or not Intel can execute and achieve those plans in a timely manner is an entirely different story. Intel has a legal obligation, being a publicly traded company, to inform investors if there are any setbacks in their plans. When they figure out there is a setback and when they announce it is subjective. All I know is that it would be illegal to sit on that kind of news for a long period of time.
 
Jul 27, 2020
19,840
13,606
146
Why do you believe they are fully truthful now? It wouldn't be the first time they've misled; witness the multi-year 10nm fiasco.
You could say that I'm hopelessly optimistic. But also, if Intel stumbles and misses some target, Gelsinger risks getting fired by the board. He acts like an idiot, but he is privy to internal milestones being met, and that may explain his overly enthusiastic demeanor.

By the way, if Gelsinger gets fired, we may get to enjoy Raja Koduri's deceptions as CEO for a few years :D
 
  • Haha
Reactions: Kaluan and Thibsie

Mopetar

Diamond Member
Jan 31, 2011
8,107
6,746
136
If it's 4 CUs, then I'm a happy guy. 6 CUs @ 1.9 GHz is enough to beat last-gen Vega iGPUs after all. Provided OC support isn't locked down, it'll be fun to squeeze out as much as we can.

Frankly I don't care either way. The main benefit is being able to run a Ryzen CPU without needing to buy a dedicated GPU. Normally that wouldn't be a big deal because you could always just buy some cheap used low-end GPU for under $100, but those don't exist anymore.

I don't think most people will care about the actual performance. I would imagine that if they chose a 4 CU design, it's for better redundancy, or because they had extra space on a pad-limited IO die that may as well be used for something or it would just sit empty.
 

uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146
Frankly I don't care either way. The main benefit is being able to run a Ryzen CPU without needing to buy a dedicated GPU. Normally that wouldn't be a big deal because you could always just buy some cheap used low-end GPU for under $100, but those don't exist anymore.

I don't think most people will care about the actual performance. I would imagine that if they chose a 4 CU design, it's for better redundancy, or because they had extra space on a pad-limited IO die that may as well be used for something or it would just sit empty.
Oh of course, and that's why the CUs are clocked so low. They don't need to be clocked high to do their job at all. Clocking them lower leaves more power headroom for the CPU cores at the end of the day, and these parts are focused on CPU performance.
 

DisEnchantment

Golden Member
Mar 3, 2017
1,747
6,598
136
Frankly I don't care either way. The main benefit is being able to run a Ryzen CPU without needing to buy a dedicated GPU. Normally that wouldn't be a big deal because you could always just buy some cheap used low-end GPU for under $100, but those don't exist anymore.

I don't think most people will care about the actual performance. I would imagine that if they chose a 4 CU design, it's for better redundancy, or because they had extra space on a pad-limited IO die that may as well be used for something or it would just sit empty.
It is going to be useful for a lot of folks.
For instance, I use an RX 5700 XT with a 3900X just to show the screen during boot if I need to do something, but most of the time my box is sitting in a corner and I connect to it remotely.
It is such a waste to put a GPU in a box like that when all you do is check the screen once in a while, and most of the time the screen is not even turned on because you are connecting remotely or using it as a compile machine or file server.

I wish I had an integrated GPU on my R9 3900X. Once I get a Zen 4 machine, my 5950X will take over the role of remote build machine, but sadly I have to use an RX 6600 XT for it instead of an integrated GPU.
Well, at least when Zen 5 comes out, I will use that Zen 4 machine as the remote build machine, with its integrated GPU and no more dGPU.
 
  • Like
Reactions: scineram

Thibsie

Senior member
Apr 25, 2017
860
967
136
Truth is decided in hindsight. His/her point is still valid in that Intel has been more forthcoming about their plans (emphasis on the word "plan") than ever before. Whether or not Intel can execute and achieve those plans in a timely manner is an entirely different story. Intel has a legal obligation, being a publicly traded company, to inform investors if there are any setbacks in their plans. When they figure out there is a setback and when they announce it is subjective. All I know is that it would be illegal to sit on that kind of news for a long period of time.

Well, Intel DID clearly mislead their investors about the 10nm process.
Nvidia has at times done so too in the past, btw.
They were not punished for this.

Regulations may be the same for all, but that doesn't mean they will be applied the same way.
 

Asterox

Golden Member
May 15, 2012
1,038
1,821
136
Frankly I don't care either way. The main benefit is being able to run a Ryzen CPU without needing to buy a dedicated GPU. Normally that wouldn't be a big deal because you could always just buy some cheap used low-end GPU for under $100, but those don't exist anymore.

I don't think most people will care about the actual performance. I would imagine that if they chose a 4 CU design, it's for better redundancy, or because they had extra space on a pad-limited IO die that may as well be used for something or it would just sit empty.

Yes, but we must not forget the AMD VCN hardware found on every Ryzen APU.


A hardware decoder/encoder is very useful in a variety of situations. One example from my usage with an R5 4650G (rough speedups worked out below):

Handbrake, 2:33 1080p video, original 23 Mbit/s, encoded to a 9 Mbit/s 1080p video:

- CPU only, H264: 1:22

- CPU+VCN/VCE, H264: 0:40

- CPU+VCN/VCE, H265: 0:30

- CPU only, H265: 2:30
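Reading those times as minutes:seconds, a quick sketch of the speedup factors implied by the timings above:

```python
# Speedups implied by the Handbrake timings above (converted to seconds).
runs = {
    "CPU only, H264": 82,    # 1:22
    "CPU+VCN, H264": 40,     # 0:40
    "CPU+VCN, H265": 30,     # 0:30
    "CPU only, H265": 150,   # 2:30
}
h264_speedup = runs["CPU only, H264"] / runs["CPU+VCN, H264"]  # about 2x
h265_speedup = runs["CPU only, H265"] / runs["CPU+VCN, H265"]  # 5.0x
print(f"H264: {h264_speedup:.1f}x faster with VCN")
print(f"H265: {h265_speedup:.1f}x faster with VCN")
```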
 