Speculation: Ryzen 4000 series/Zen 3


DownTheSky

Senior member
Apr 7, 2013
787
156
106
They already said they'll split the compute GPU and gaming GPU lines. Next-gen Instinct MI100 cards are GCN with 8096 compute units. Can't find the link with all the specs; it's on Reddit.
 

misuspita

Senior member
Jul 15, 2006
388
417
136
3) I'd like to see some basic GPU integrated in the IO chip, something like the AMD 780G northbridge from the Phenom era. For 2D desktop and office work it would be OK, and it would take few transistors on 12nm.
This!

Something basic of the most basic, just 2D graphics output and rudimentary 3D, for somebody who just needs CPU grunt and doesn't want a GPU, for example with an i9-9900K. I need CPU grunt for my audio DAW but don't need any GPU power at all beyond basic graphics output. Intel offers that in (almost) all their CPUs. From AMD all I can get is a 3400G at most, which is not Zen2 and doesn't have the performance of even a 3600... My GPU needs are zero on my DAW PC. I could just use an ASRock A300 and a 3600/3700X, if it had a graphics output, and have a powerful portable PC.
 
  • Like
Reactions: HurleyBird

gorobei

Diamond Member
Jan 7, 2007
3,654
980
136
[Image: TSMC CoWoS slide - http://media.bestofmicro.com/L/O/848364/original/tsmc-cowos.JPG]

If TSMC is showing this now, it's possible AMD is pushing for an active interposer on the next Epyc. After that, the concept of an architecture generation will change from something they update every few years to changing one of the chiplets (CPU, IO/interposer, GPU, specialty ASIC, etc.) every year or less, i.e. just updating one of the sub-chiplets whenever it's ready instead of all of them at once.
https://www.tomshardware.com/news/tsmc-interposer-processor-hbm-moores-law-not-dead,40171.html
 
  • Like
Reactions: DarthKyrie

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
Who cares about iGPU.

Budget builders, OEMs. Those sorts of people.

1) AMD is developing APUs for the Sony and MS consoles based on Zen2 + custom 2nd-gen RDNA, and all of these are to be launched next year. Why would AMD waste workforce developing something very different?

Yeah, good point. Whole thing seems a bit weird. My only thought is that if Renoir isn't going to be monolithic, then AMD can just swap out dice for the different products. Still no idea why they would go to the effort of creating a Vega20-based iGPU just for Renoir when they're making a Vega-based one to put into the Xbox2/PS5 packages via IF. If, in fact, that's how they're going to do it.
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
Is that to reduce IF power usage?
Yes. IF communication between dies consumes much more power than on-die communication.
So for the mobile market segment a monolithic die is the optimal solution due to the tight power restrictions.
For consoles it is not strictly necessary due to the much higher TDP; however, there is still the benefit of more performance from the same TDP. And for high-volume production over many years it is worth doing it monolithic too, especially when AMD is already working on the single-die APU Renoir. They will use Renoir as the base for the console chips IMHO.
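
To put a rough number on the monolithic-vs-chiplet power argument, here is a back-of-envelope sketch. The pJ/bit figures and the traffic level are ballpark assumptions (on-package serial links are commonly quoted around a couple of pJ/bit, on-die wires roughly an order of magnitude less), not AMD-published IF numbers:

```python
# Interconnect power = energy per bit moved * sustained bandwidth.
# The energy-per-bit figures below are illustrative assumptions, not AMD specs.

def link_power_watts(bandwidth_gbytes_per_s, energy_pj_per_bit):
    bits_per_s = bandwidth_gbytes_per_s * 1e9 * 8
    return bits_per_s * energy_pj_per_bit * 1e-12  # pJ -> J

traffic = 50  # GB/s of sustained CPU <-> IO-die traffic (illustrative)
print(link_power_watts(traffic, 2.0))  # ~0.8 W over an on-package, IF-style link
print(link_power_watts(traffic, 0.1))  # ~0.04 W over on-die wiring
```

A difference on the order of a watt is noise in a console's 150+ W budget, but it is a meaningful chunk of a 15 W mobile envelope, which is the point being made above.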

Guys, what about Zen3 aka Ryzen 4000?
  1. Is it going to be Zen2 plus features they couldn't finish within the Zen2 timeline? Something like Intel's yearly small-improvement "new generation" kind of thing.
  2. Or is it going to be a new architecture like Zen1 was? I.e. a big architecture step every 3-4 years (Zen1), and then every year polishing the diamond (Zen+, Zen2).
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
An interesting prospect, given that most of the leaks point to PS5/Xbox2 using 8c chips.
They will use two CCX blocks, each CCX has 4 cores, so 8 cores total. CPU cores are the easier part of an APU; they can add as many CCXs as they need.
No, console SoCs have absolutely nothing to do with Renoir (or any other AMD product).
Absolutely nothing... such strong words. We know they will use Zen2 or Zen3 CPU cores and a custom RDNA GPU. IMHO this looks like it has a lot to do with other AMD products.
 

moinmoin

Diamond Member
Jun 1, 2017
4,934
7,619
136
Absolutely nothing... such strong words. We know they will use Zen2 or Zen3 CPU cores and a custom RDNA GPU. IMHO this looks like it has a lot to do with other AMD products.
R&D in the semi-custom business usually feeds back into AMD consumer products, but not the other way around (e.g. the 2x 4-core topology so prominent in Zen originates from the consoles). The consumer APUs so far are low-budget, cut-down designs of existing silicon that use parts of next-gen microcode as the most exciting novelty.
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
R&D in the semi-custom business usually feeds back into AMD consumer products, but not the other way around (e.g. the 2x 4-core topology so prominent in Zen originates from the consoles).
Wasn't the first AMD 2x4c design the CPU called Interlagos (the 16c server Bulldozer CPU) introduced in 2011? The PS4 console is a 2013 product (and the PS4 CPU core, Bobcat, is derived from Bulldozer). It just looks to me that the console and custom business is based on other AMD products, not the other way around, IMHO. And it makes sense that the custom department is using already developed products from the CPU and GPU departments and combining them together into a custom chip.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,324
1,462
136
Wasn't the first AMD 2x4c design the CPU called Interlagos (the 16c server Bulldozer CPU) introduced in 2011? The PS4 console is a 2013 product (and the PS4 CPU core, Bobcat, is derived from Bulldozer). It just looks to me that the console and custom business is based on other AMD products, not the other way around, IMHO. And it makes sense that the custom department is using already developed products from the CPU and GPU departments and combining them together into a custom chip.

The PS4 doesn't have a Bobcat but a Jaguar, and Bobcat is not derived from Bulldozer. Bobcat was a second design effort made at the same time by a different team (AMD Austin) than the Bulldozer cores. It was a very bare-bones, done-on-the-cheap design, sort of meant for the cheap netbook/third-world markets, built by a tiny team in a short time with little engineering resources, and designed for the TSMC 40nm process that was inferior to the 32nm SOI the BD cores were built for. It was much more successful than expected: it was much faster than anticipated for such a bare-bones design, and notably the A0 silicon had no showstopping bugs and shipped to customers, something almost unheard of in CPU design.

Its unexpected success, at a time when AMD was otherwise doing very badly, was also sort of the immediate cause of why it went wrong. AMD's stock options turned worthless and they could not afford to compensate the second design team enough, so Samsung poached the lead designer (Brad Burgess), who then proceeded to pick the best people who had previously worked under him at AMD and formed the core of the Samsung Austin Research Center (SARC) from them. SARC would go on to design and implement the Samsung Exynos M* Arm cores.

This was really bad for the immediate future and plans of the core, as AMD mostly had interns and unqualified lifers left on the team, and follow-on cores lagged behind schedule. Also, soon after Jaguar finally shipped, Intel finally got Atom right with Silvermont, which they were willing to flood the market with not just at cost but at a negative price (Intel sold Silvermont + network chips for less than what the network chips cost alone, so long as the Silvermonts were not scrapped but were sold in actual products. This was meant to help Silvermont push into the Android tablet market, and maybe later into phones, but no one actually wanted x86 Android at the time, and so it just crashed the netbook market.)

Being synthesisable for the TSMC processes meant that the Bobcat/Jaguar design was in the right place when the console refreshes came around, and AMD was probably saved from bankruptcy by being able to sell complete solutions to the console makers, winning both the big bids.

As a final note, the cat cores basically never borrowed anything from the construction cores, but the reverse is not true. The cat cores were the first to implement the perceptron branch predictor, an idea that had been bounced around in academia for a while before that. It was probably chosen for the cat cores because the existing predictors were too big and power-hungry for such a small core, and if they had to implement a new one with limited resources, they wanted to pick one that was conceptually simple and easy to implement, which the perceptron predictor is. It was among the things that worked better than expected -- compared to the finely tuned large monstrosities found in larger mainline cores, the predictor is supposed to be quite stupid, but it is also very fast, uses little power and is tiny. In practice, it actually often outperformed the predictor in BD, was thus introduced into Piledriver, and is still the first stage of branch prediction in Zen 2.
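
For the curious, the perceptron scheme is simple enough to sketch in a few lines. This is a minimal illustration in the spirit of the Jimenez/Lin proposal the cat cores drew on; the table size, history length and threshold below are illustrative textbook values, not anything AMD actually shipped:

```python
# Minimal perceptron branch predictor sketch (Jimenez/Lin style).
# One small integer weight vector per table entry; the prediction is the sign
# of a dot product between the weights and the global history bits.

HISTORY_LEN = 16                            # global history bits (illustrative)
NUM_ENTRIES = 1024                          # perceptron table size (illustrative)
THRESHOLD = int(1.93 * HISTORY_LEN + 14)    # training threshold heuristic from the paper

table = [[0] * (HISTORY_LEN + 1) for _ in range(NUM_ENTRIES)]  # [bias, w1..wN]
history = [1] * HISTORY_LEN                 # +1 = taken, -1 = not taken

def predict(pc):
    w = table[pc % NUM_ENTRIES]
    y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
    return y, y >= 0                        # predict taken if the output is non-negative

def update(pc, y, taken):
    w = table[pc % NUM_ENTRIES]
    t = 1 if taken else -1
    # train only on a misprediction or when the output was not confident enough
    if (y >= 0) != taken or abs(y) <= THRESHOLD:
        w[0] += t
        for i, hi in enumerate(history):
            w[i + 1] += t * hi              # hardware would saturate these weights
    history.pop()                           # shift the actual outcome into history
    history.insert(0, t)
```

Because the history bits are just ±1, the "dot product" in hardware reduces to an adder tree over small counters, which is why the predictor can be so small, fast and low-power compared to the big table-based predictors of the era.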
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
Great and very interesting stuff! Thanks for this!!
However, Bulldozer and its upgrade Piledriver are the same crap to me (the 2xALU design is silly). Jaguar is an upgraded Bobcat, not a major design change.

I agree that the Bobcat team did a great job. Their exodus from AMD was just a reaction to how bad the management was at that time. Very sad.
 
Mar 11, 2004
23,031
5,495
146
It could be a timing issue. Vega 20 puts it ahead of Navi on 7nm by months, so if they were wanting to get a Zen 2 APU out, it might simply have been that they needed to go with 7nm Vega. There could be other factors as well (I mention some below). Maybe the plan was for the monolithic APU to go Vega, and then for a chiplet APU to use Navi (the latter possibly being more compact, or maybe they'll be able to use a Navi chip that also goes into a dGPU so they can use it in both places, or there being some bits in Navi to work as a chiplet, even if it's mostly for their own R&D and not customer/commercial use outside of possibly the APU).

I remember that in the AnandTech interview where AMD commented on the GloFo 7nm cancellation, they explicitly said they only had one product there, which was supposed to be released at the end of this year.

Considering that Rome and Matisse use the exact same chiplets, it would seem really odd to abandon all the possible savings by fabbing them in multiple places. Let's also not forget that Matisse wasn't supposed to come out at year's end.

My bet is that the planned product on GloFo was the 7nm APU.

It's been a while, but I seem to remember them saying the chip was supposed to be going into production at the end of last year. I think that it was Navi. There were rumors, supposedly from the AIBs, that AMD had planned for Navi to be an end-of-Q1 release. I think moving it to TSMC delayed it a few months. It also might explain the prices (since rumors kept saying Navi was supposed to be ~$250): they had to spend more to port it to TSMC on short notice, which meant they had to increase the prices. They had room to, though, due to the performance and Nvidia's high pricing and large chips.

Good news for Radeon VII owners (sort of) wrt longevity of the platform's driver support. Bad news for actual Renoir buyers.

Unless those customers want Vega's compute capability. I seem to recall one of AMD's APUs being popular because it had their highest double precision performance outside of their professional GPU (since they locked that down on their lower GPUs).

In a bandwidth-limited and power-limited scenario Navi is far more efficient than Vega.

Due to the WSA, AMD needs to continue selling 12nm APUs, so what they need is something to complement those on the mid and high end, something that can maintain their lead in GPU performance. Navi can do that. Vega won't. The Zen+ APU is fine as is and good at soaking up 12nm capacity and fulfilling the WSA.

From a brand and broad-portfolio perspective a Zen2+Vega APU is just bad, and another defensive, old-style-AMD-thinking decision. I hope they waited the few extra months to get it right the first time.

You're basing that on what?

I doubt AMD is struggling to meet the WSA. I really don't know why you think improved Vega wouldn't keep them sitting quite well in iGPU performance compared to Intel. They were already memory bandwidth bound more than anything.

I don't agree. For gaming it's more than fine, and it's very possible that the major customers buying APUs actually benefit more from the compute capability. I believe the embedded market is one of the primary markets for AMD's APUs, and I think most of them would choose Vega's compute over the possibly improved 3D graphics rendering that Navi might offer (which I have a hunch wouldn't be nearly as much as you expect due to memory limitations, and I don't think Navi is drastically improved in that area to make a meaningful difference).

1) AMD is developing APUs for the Sony and MS consoles based on Zen2 + custom 2nd-gen RDNA, and all of these are to be launched next year. Why would AMD waste workforce developing something very different?

2) They could make small trims to the console chip: use a DDR4 memory controller instead of GDDR6, cut off half of the L3 cache, scale the vanilla 2nd-gen RDNA proportionally to the memory bandwidth... and voilà, Renoir.

3) I'd like to see some basic GPU integrated in the IO chip, something like the AMD 780G northbridge from the Phenom era. For 2D desktop and office work it would be OK, and it would take few transistors on 12nm.

Because they might legally not be able to use those APUs elsewhere. Also different markets have different needs.

That would be ridiculously bandwidth starved. I also would guess it wouldn't be able to work in the AM4 socket, and most of their non-game-console embedded market has much stricter power requirements. They'd only be able to make such a thing by making a console themselves, which I would love, but I have a strong hunch they either have agreements not to do that or there's some other reason. That's why I've been advocating for a high-end console that is quite a bit different: they'd get to leverage a lot of the development that goes into the consoles, but making it different enough (and the pricing) means it wouldn't step on their console partners' toes, while enabling them to leverage advantages that the consoles' budget and power requirements prevent. I personally really wish that we'd get the console companies to open up to other OSes, even if they delayed it a generation (meaning only doing that after the new one releases, so once the new Xbox or PS5 comes out, then open up the One/PS4).
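
To put rough numbers behind the "bandwidth starved" point, here is a quick sketch comparing a typical desktop DDR4 setup against the kind of GDDR6 bus the console chips are expected to be built around. The specific speeds are just common configurations, not leaked console specs:

```python
# Peak memory bandwidth comparison (illustrative configurations).

def ddr_bandwidth_gbps(channels, bus_bits, mt_per_s):
    # channels * bytes per transfer * transfers per second
    return channels * (bus_bits / 8) * mt_per_s / 1000   # -> GB/s

def gddr_bandwidth_gbps(bus_bits, gbit_per_pin):
    return bus_bits * gbit_per_pin / 8                   # -> GB/s

print(ddr_bandwidth_gbps(2, 64, 3200))   # dual-channel DDR4-3200: ~51 GB/s, shared with the CPU
print(gddr_bandwidth_gbps(256, 14))      # 256-bit GDDR6 @ 14 Gbps: 448 GB/s
```

A GPU sized around hundreds of GB/s of GDDR6 would be left with well under an eighth of that on a desktop DDR4 controller, which is why simply trimming the console chip down to DDR4 doesn't give you a sensible Renoir.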

I would too.
 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
Unless those customers want Vega's compute capability. I seem to recall one of AMD's APUs being popular because it had their highest double precision performance outside of their professional GPU (since they locked that down on their lower GPUs).

Carrizo. It somehow got 1/2 FP64.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,683
1,218
136
I'll be shocked if Renoir wasn't briefly canned like the original Kaveri. That way they can skip to Zen3+RDNA2 with a Renoir v2, and its launch would line up with the Zen3+RDNA design that is Dali.

DDR5, AV1, etc. are all important enough that waiting for them beats launching without them. Low-spec DDR5 arrives at the end of the year: Q4 2019, DDR5-3200 to DDR5-4400.
 
  • Like
Reactions: Drazick

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,248
136
Because they might legally not be able to use those APUs elsewhere. Also different markets have different needs.

I don't think it's set in stone that a console has to be an APU anyway.

A new Sony patent popped up recently which, if it is the PS5, looks like they could go CPU on one side and GPU on the other. Works for laptops, why not consoles?

https://www.techradar.com/news/this...es-us-our-best-look-at-the-console-design-yet

https://nl.letsgodigital.org/spelcomputers-games/sony-playstation-5-console/
 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
Wasn't there a plan, or even an implementation, of Carrizo in some custom system that specifically had a use for that FP64 capability? I think I remember reading that some place.

Probably as a Pro APU. I don't remember the specifics.