
Speculation: Zen 4 (EPYC 4 "Genoa", Ryzen 6000)


What do you expect with Zen 4?



uzzi38

Golden Member
Oct 16, 2019
1,817
3,654
116
I could see it playing a role in DirectStorage access from M.2 SSDs to the GPU when UE5 drives ever more extreme bandwidth requirements for virtualised geometry and textures in future games.

What's comfortable for a highly optimised platform like PS5 and XSX/S could require more BW for heavier scenes on Windows 10 PCs.
Except the Series X's SSD isn't all that different bandwidth-wise from a PCIe 3.0 one; iirc 2.4GB/s uncompressed? Given that, I can't imagine games will be developed in such a way that PCIe Gen 5 can be fully utilised.
 

CakeMonster

Golden Member
Nov 22, 2012
1,046
111
106
Are there games now or in development that could use 2.4 (3.0), 4.8 (4.0), or 9.6 GB/s (5.0) for SSD->GPU in specific scenarios? What are those scenarios, and how would they otherwise be handled? I guess what I'm asking is: how 'bad' will PCIe 3.0 DirectStorage be, since at least some people seem to be implying it would not work properly?
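For a rough sanity check of those per-generation figures, here's a back-of-the-envelope calculation (assuming ~0.985 GB/s per PCIe 3.0 lane after 128b/130b encoding, doubling each generation; note the 2.4 GB/s number in the question is the Series X SSD's rated speed, which sits below the x4 link maximum):

```python
# Rough per-lane throughput in GB/s: PCIe 3.0 runs 8 GT/s with 128b/130b
# encoding (~0.985 GB/s per lane); each later generation roughly doubles it.
PCIE3_LANE_GBPS = 0.985

def link_bandwidth(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth of a PCIe link in GB/s."""
    return PCIE3_LANE_GBPS * (2 ** (gen - 3)) * lanes

for gen in (3, 4, 5):
    print(f"PCIe {gen}.0 x4: ~{link_bandwidth(gen, 4):.1f} GB/s")
```

So an x4 NVMe link tops out around 3.9 / 7.9 / 15.8 GB/s across the three generations, before any SSD-side limits.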
 

eek2121

Golden Member
Aug 2, 2005
1,287
1,368
136
Are there games now or in development that could use 2.4 (3.0), 4.8 (4.0), or 9.6 GB/s (5.0) for SSD->GPU in specific scenarios? What are those scenarios, and how would they otherwise be handled? I guess what I'm asking is: how 'bad' will PCIe 3.0 DirectStorage be, since at least some people seem to be implying it would not work properly?
An RTX 3090 consumes most of the bandwidth of a PCIe 3.0 x16 slot. With PCIe 4.0, less than half the bandwidth is used. Games can do a lot of innovative stuff with this.

EDIT: I haven’t played it, but I hear Ratchet and Clank on PS5 gives us an early hint of what we can expect to see.
 

Gideon

Golden Member
Nov 27, 2007
1,442
2,902
136
Except the Series X's SSD isn't all that different bandwidth-wise from a PCIe 3.0 one; iirc 2.4GB/s uncompressed? Given that, I can't imagine games will be developed in such a way that PCIe Gen 5 can be fully utilised.
And as I stated previously, PCIe 5.0 would only be useful in this case if it also scales to both the M.2 slot and the primary GPU slot (which it seems it will not on all platforms released this year), rendering the difference largely academic.
 

uzzi38

Golden Member
Oct 16, 2019
1,817
3,654
116
And as I stated previously, PCIe 5.0 would only be useful in this case if it also scales to both the M.2 slot and the primary GPU slot (which it seems it will not on all platforms released this year), rendering the difference largely academic.
But I see no reason to think Gen 5 for GPUs would even be useful for gaming. For compute/AI workloads that's not the case, but for gaming you only need so much PCIe bandwidth.


You can quarter the PCIe bandwidth and still get 96% of the performance. That really shows how little bandwidth we need right now, and likely for anywhere close to the near future as well.
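One way to see why quartering the link barely hurts: even a cut-down link can move far more data per frame than games typically stream. A rough illustration (the ~15.75 GB/s figure for a PCIe 3.0 x16 link is an approximation after encoding overhead):

```python
# Illustrative only: how much data can cross the PCIe link during one frame.
# Assumed link rate (GB/s, approximate): PCIe 3.0 x16 ~15.75.
def per_frame_budget(link_gbps: float, fps: int) -> float:
    """Data in MB that can cross the link during a single frame."""
    return link_gbps * 1024 / fps

for label, gbps in [("PCIe 3.0 x16", 15.75), ("PCIe 3.0 x4 (quartered)", 15.75 / 4)]:
    print(f"{label}: ~{per_frame_budget(gbps, 60):.0f} MB per frame at 60fps")
```

Even the quartered link leaves roughly 67 MB of transfer headroom per 60fps frame, well above typical per-frame streaming needs, which is consistent with the small performance delta in those scaling tests.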
 
  • Like
Reactions: Makaveli and Tlh97

moinmoin

Platinum Member
Jun 1, 2017
2,767
3,666
136
Since AM5 doesn't appear to really change I/O over AM4, aside from the new socket and support for newer standards, it would be really great if AMD found a way to bifurcate PCIe lanes down the gen versions. Like using 8x PCIe 4 to offer 16x PCIe 3. This would double the amount of available (PCIe 3) lanes, and double it again once PCIe 5 is supported. Though since this hasn't been done already, I guess it's not really feasible.
 

tomatosummit

Member
Mar 21, 2019
68
46
61
Like using 8x PCIe 4 to offer 16x PCIe 3.
This is not a simple or cheap task, so it's rarely done on consumer motherboards.
8 PCIe lanes are still only 8 PCIe lanes. Plug a PCIe 3 x16 device into a PCIe 4 x8 slot and it will only run at 3.0 x8 speeds. To split the lanes you need a switch with 24 lanes for your example.
What AMD should have done is expose all 28 PCIe lanes on the CPUs that have them on the higher-end motherboards. Having what we have now plus 8 or 4+4 lanes would make the current platforms far more versatile and give a real selling point for the Xx70 motherboards.
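The negotiation rule behind the "only runs at 3.0 x8 speeds" point can be sketched like this (a toy model, not real link-training code; ~0.985 GB/s per PCIe 3.0 lane assumed):

```python
# Toy model: a PCIe link negotiates to the narrower width and the lower
# generation of the two endpoints; a slot can't trade generation for lanes
# without a switch chip in between.
def negotiated_bandwidth(dev_gen, dev_lanes, slot_gen, slot_lanes, lane3_gbps=0.985):
    gen = min(dev_gen, slot_gen)      # link falls back to the lower generation
    lanes = min(dev_lanes, slot_lanes)  # and the narrower width
    return lane3_gbps * (2 ** (gen - 3)) * lanes

# PCIe 3.0 x16 GPU in a PCIe 4.0 x8 slot: runs at 3.0 x8, half of 3.0 x16.
print(f"~{negotiated_bandwidth(3, 16, 4, 8):.1f} GB/s")
```

The x8 slot has the same raw throughput at gen 4 as a gen 3 x16 link, but the gen 3 device can't use it; only a 24-lane switch (8 upstream at gen 4, 16 downstream at gen 3) could convert one into the other.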
 

moinmoin

Platinum Member
Jun 1, 2017
2,767
3,666
136
This is not a simple or cheap task, so it's rarely done on consumer motherboards.
8 PCIe lanes are still only 8 PCIe lanes. Plug a PCIe 3 x16 device into a PCIe 4 x8 slot and it will only run at 3.0 x8 speeds. To split the lanes you need a switch with 24 lanes for your example.
What AMD should have done is expose all 28 PCIe lanes on the CPUs that have them on the higher-end motherboards. Having what we have now plus 8 or 4+4 lanes would make the current platforms far more versatile and give a real selling point for the Xx70 motherboards.
This could happen at the uncore/IOD SerDes PHY level, since those PHYs are already configurable to deliver different PCIe lane configs, USB 3, SATA etc. But the increase in lanes would obviously take more pins, which AMD on AM4, as you write, didn't even use to expose all the already existing lanes.
 

scannall

Golden Member
Jan 1, 2012
1,808
1,307
136
But I see no reason to think Gen 5 for GPUs would even be useful for gaming. For compute/AI workloads that's not the case, but for gaming you only need so much PCIe bandwidth.


You can quarter the PCIe bandwidth and still get 96% of the performance. That really shows how little bandwidth we need right now, and likely for anywhere close to the near future as well.
One example would be fitting a PCIe 5 SSD, say 1 TB, to a graphics card. With that much bandwidth you would likely see some very nice performance gains on professional graphics cards. It might take a while for that kind of thing to filter down to game engines, but it could be revolutionary.
 
  • Like
Reactions: Tlh97


Shivansps

Diamond Member
Sep 11, 2013
3,365
978
136
Assets could quickly load from SSD directly to VRAM. Loading screens would practically disappear. Everything could be streamed in and out seamlessly.
If I'm going to be honest, it just looks like a workaround for the lack of available memory on a console. With 16GB shared between GPU and CPU, and I'm pretty sure the OS is reserving some of it, that looks really bad going forward. So being able to load assets directly from very fast storage without having to go through RAM first is really useful in those cases. BUUUUUT here is the thing: RAM is still A LOT FASTER. I don't see a good reason to stop using RAM as another cache in the middle, unless you are on a console and it is either RAM or VRAM... and you want it to be VRAM for assets.

On PC, I don't think it will be very useful, and the fact that you need PCIe 4.0, with a PCIe 4.0 GPU and a PCIe 4.0 NVMe... yeah, no.
 

LightningZ71

Senior member
Mar 10, 2017
934
902
136
Bifurcation is a good thing done the traditional way. I wish it were more standardized, with boards supporting bifurcating the first x16 slot down to at least 4x x4 channels, instead of the poorly documented roll of the dice that it currently is.
 

Rigg

Senior member
May 6, 2020
218
354
96
If I'm going to be honest, it just looks like a workaround for the lack of available memory on a console. With 16GB shared between GPU and CPU, and I'm pretty sure the OS is reserving some of it, that looks really bad going forward. So being able to load assets directly from very fast storage without having to go through RAM first is really useful in those cases. BUUUUUT here is the thing: RAM is still A LOT FASTER. I don't see a good reason to stop using RAM as another cache in the middle, unless you are on a console and it is either RAM or VRAM... and you want it to be VRAM for assets.
The initial PC implementation does use the system RAM as a cache.


On PC, I don't think it will be very useful, and the fact that you need PCIe 4.0, with a PCIe 4.0 GPU and a PCIe 4.0 NVMe... yeah, no.
Unless I missed an updated announcement in the last month, this is not a fact. They haven't really said much about hardware requirements, but Microsoft did say in a recent official presentation that PCIe 3.0 SSDs are supported.
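As a rough illustration of that staged path (SSD into a system-RAM cache, then on to VRAM), here's a toy sketch; all the names are hypothetical, not the actual DirectStorage API:

```python
# Hypothetical sketch of an SSD -> system RAM -> VRAM staging pipeline, as on
# the initial DirectStorage-for-PC path described above. Names are
# illustrative only.
from collections import OrderedDict

class StagingCache:
    """Tiny LRU cache standing in for the system-RAM staging buffer."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._store = OrderedDict()

    def load_asset(self, name: str, read_from_ssd) -> bytes:
        if name in self._store:           # RAM hit: skip the SSD entirely
            self._store.move_to_end(name)
            return self._store[name]
        data = read_from_ssd(name)        # miss: pull from NVMe into RAM...
        self._store[name] = data
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least-recently-used
        return data                       # ...then the caller uploads to VRAM

cache = StagingCache(capacity=2)
reads = []
fake_ssd = lambda n: reads.append(n) or f"<{n}>".encode()
cache.load_asset("rock.tex", fake_ssd)
cache.load_asset("rock.tex", fake_ssd)  # second call is served from RAM
print(reads)  # only one SSD read happened
```

The console shortcut in the quote above is effectively skipping this middle layer, which is only a clear win when RAM and VRAM are the same pool.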

 

exquisitechar

Senior member
Apr 18, 2017
491
565
136
Given all the talk of lasagne on Twitter, as you all may or may not have seen, I just wanted to post this transcript:


AMD should be talking more about their 2.5D and 3D roadmap in the coming months, so stay tuned :)
Excited for new roadmaps. All the different rumors with little official news from AMD are getting tiresome.
 

moinmoin

Platinum Member
Jun 1, 2017
2,767
3,666
136
Might Milan-X be the variant intended for Frontier? We know that, unlike El Capitan, Frontier won't use "stock" Epyc chips.
 

Gideon

Golden Member
Nov 27, 2007
1,442
2,902
136
Given all the talk of lasagne on Twitter, as you all may or may not have seen, I just wanted to post this transcript:


AMD should be talking more about their 2.5D and 3D roadmap in the coming months, so stay tuned :)
Really great interview, thanks for sharing!

It has many interesting tidbits. One is that AMD might actually end up doing some ARM solutions when it's a good fit for certain customers (which was probably quite obvious anyhow).
If you squint really hard (or "want to believe" :D), they are also hinting at having different archs for different sweet spots (e.g. how Arm has the X1 for maximum performance at a cost of area, and the A78 for optimized perf/watt per transistor).
Lisa Su said:
I think we have deep relationships with all of the data center customers, and we're talking to them about what do they need over the next 5 years. You'll see AMD add more points to our compute road map as well to optimize for some of these different workloads. And I think the value is really in how you put things together.
And so from an AMD standpoint, we consider ourselves sort of the high-performance computing solution working with our customers. And that is certainly the way we look at this. And if it means ARM for certain customers, we would certainly consider something in that realm as well. But we look at it as really, let's talk about what problem you're trying to solve. And then we'll work with you with the best components to address the customers' needs.
 

eek2121

Golden Member
Aug 2, 2005
1,287
1,368
136
So it does exist after all:

Patrick Schur on Twitter: "AMD is working on a new CPU (codename Milan-X) that will use stacked dies. 😏" / Twitter

The Milan-X codename showed up on a forum board a couple of weeks ago, and I figured it was what all the lasagna posts were about, but I was never really certain.

Happy to see it surface
Curious. I am wondering when this rolls out, and if we are also going to see a Ryzen version of this. Milan-X with DDR5 would be a beast, especially if they do like Intel is doing and offer a SKU with HBM.

The increased 120W TDP is long overdue. I actually wouldn't mind the 170W TDP either, since an AIO can handle it just fine.

The PCIe lane boost is surprising.
 
