Discussion: The beauty of AMD chiplet design

Kocicak

Senior member
Jan 17, 2019
982
973
136
I have not seen this discussed before. Besides all the obvious advantages of the chiplet design, there is one more thing: the current 8-core chiplet will be relevant and usable for two or even three years. I believe that even that far in the future it will be usable in some low-end processors or other applications. The same little universal 8-core chiplet is produced in such high volume that the development cost is diluted so much that overall it will be extremely cheap to produce.

Why not split the consumer processor line into two parts: a higher-end part, which would get new-generation chiplets every time they are released, and lower-end processors, which would have the compute chiplet updated, for example, every second year? I believe it would be a very cost-effective approach and it would really allow consumers to enjoy the benefits of the design and of high-volume (and therefore low-cost) production.
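To make the "development cost dissolved by volume" point concrete, here is a rough back-of-the-envelope sketch; every figure in it is an assumption picked purely for illustration, not an actual AMD number:

```python
# Back-of-the-envelope: how the per-unit share of design cost shrinks with volume.
# All figures are assumed purely for illustration, not actual AMD numbers.
design_cost = 300e6   # one-time design + mask cost for the chiplet, USD (assumed)
unit_cost = 15.0      # marginal cost to fab and test one chiplet, USD (assumed)

for volume in (1e6, 10e6, 50e6, 100e6):
    amortised = design_cost / volume
    print(f"{volume/1e6:5.0f}M chiplets -> ${amortised + unit_cost:7.2f} each "
          f"(${amortised:6.2f} of that is amortised design cost)")
```

The longer the same chiplet stays in production, the smaller the amortised slice of the design cost becomes per unit.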
 

whm1974

Diamond Member
Jul 24, 2016
9,460
1,570
96
I don't know, maybe things will change in the not-so-near future, but currently the vast majority of folks are well served by quad cores and will be for some time. For my next system I'm planning on going to 8 cores, but then again I am not part of the vast majority of users.
 

DominionSeraph

Diamond Member
Jul 22, 2009
8,391
31
91
What the frack are you talking about? You don't see the 2990WX selling for $5 because it's a bunch of parts glued together, do you?

A high-end 8 core CPU may still be usable in 2 to 3 years? What a bold statement to make when a Sandy Bridge quad is still more than most people need, and that's looking to be the case for the next 5 years or more. Doesn't mean Intel is still making Sandy or that there would be some amazing savings to be had over the current quad cores if they had kept up with it.
 

amd6502

Senior member
Apr 21, 2017
971
360
136
The Piledriver FX (e.g. the FX-8300 series) was a very long-lived product (2013-2018), and in its later half (2016-2019) prices were able to drop significantly thanks to the cost savings you mentioned. (Amazingly, on sale the FX retail cost dipped as low as $10/core.)

Pinnacle Ridge is another classic and well-optimized design, and I also see it having around a 6-year lifespan with roughly 5 years of production. The design savings will probably be passed on to the consumer in a few years, as it heads into midlife in 2021.

I see the Ryzen 3000 chiplet parts as a very short-lived (1-2 year) product. AMD has made clear the pattern of optimization that follows a new core or node, and this makes a lot of sense. Since 7nm was both a new node and a new core, I actually see two consecutive years of optimization as likely.

The 2019-vintage chiplets themselves could be reused for a portion of the 4000-series (2020) products, namely Navi APUs with HBM for high-end AM4. In my guesstimate, this consumer line will also be complemented with a monolithic 7nm product, as well as Picasso (2019) and Pinnacle Ridge (2018).
 

Kocicak

Senior member
Jan 17, 2019
982
973
136
I do not think that the longevity and current low prices of Piledriver processors are the result of cost savings from high-volume production. I believe that demand for these has been quite limited, their production ended years ago, and now only remaining stock is being sold, probably at a loss. I am actually quite surprised that they are still sold; I would just scrap the remaining stock of these shadows of the past... :D
 

whm1974

Diamond Member
Jul 24, 2016
9,460
1,570
96
Well, there is a market for FX CPUs as a CPU upgrade for existing systems. For such rigs it is cheaper to upgrade the processor than to replace the entire system.
 

Kocicak

Senior member
Jan 17, 2019
982
973
136
I just checked the performance of the FX-8370, and it has multi-thread performance comparable to a 4c/4t Zen processor while single-thread is significantly weaker. All that at several times higher power consumption. It is an environmental disaster. I would just scrap them.

Back on topic:

What is better: to have the whole consumer CPU lineup regularly updated with new compute chiplets, or to split the lineup, with the lower end updated less often, so that lower-end processors can be cheaper?
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
What the frack are you talking about? You don't see the 2990WX selling for $5 because it's a bunch of parts glued together, do you?

A high-end 8 core CPU may still be usable in 2 to 3 years? What a bold statement to make when a Sandy Bridge quad is still more than most people need, and that's looking to be the case for the next 5 years or more. Doesn't mean Intel is still making Sandy or that there would be some amazing savings to be had over the current quad cores if they had kept up with it.

Not that I disagree with your statement at all, but it does bring up an interesting observation.

If Intel were willing to make a 6C/12T or 8C/16T "9850K" or whatever that literally didn't have an iGPU, it could easily have double the cache and still be a decently smaller die, I believe. The IGP got bigger and bigger over time :/

I much prefer AMD's clear separation: models with both (APUs), and then CPUs with no compromise.
 

Attachments

  • 650px-kaby_lake_(quad_core)_(annotated).png (annotated Kaby Lake quad-core die shot)

Abwx

Lifer
Apr 2, 2011
10,940
3,441
136
I just checked the performance of the FX-8370, and it has multi-thread performance comparable to a 4c/4t Zen processor while single-thread is significantly weaker. All that at several times higher power consumption. It is an environmental disaster. I would just scrap them.

The R3 1200 does 480 pts in Cinebench and the FX-8370 does 640 pts, so are you sure that you checked anything?

If downclocked to R3 1200 performance, the FX would consume about 50W, somewhat more than the R3 1200's 30W, but nothing like the several times higher that you wrongly state.
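Taking the numbers in this exchange at face value, a quick points-per-watt comparison looks roughly like this; the Cinebench scores and the 30W/50W figures are the posters' estimates above, and only the 125W stock TDP of the FX-8370 is a published spec:

```python
# Rough points-per-watt comparison using the figures quoted in this thread.
# The Cinebench scores and the 30 W / 50 W power figures are the posters'
# estimates above; 125 W is the FX-8370's published stock TDP.
parts = {
    "Ryzen 3 1200 (stock)":       (480, 30),
    "FX-8370 (downclocked est.)": (640, 50),
    "FX-8370 (stock TDP)":        (640, 125),
}
for name, (score, watts) in parts.items():
    print(f"{name:28s} {score:4d} pts / {watts:3d} W = {score / watts:5.1f} pts/W")
```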
 

Mopetar

Diamond Member
Jan 31, 2011
7,831
5,981
136
I much prefer AMD's clear separation: models with both (APUs), and then CPUs with no compromise.

I don't know if it's a long-term plan for AMD or not, but they could apply the chiplet strategy to graphics as well. The only major impediment, from reports, is that it creates a situation akin to CrossFire or SLI, which tend to be a pain to get working well for games, so a monolithic GPU is better.

However, that doesn't matter as much for the APU market where a single chiplet probably delivers enough graphics power to be useful or the high-end computational market where there doesn't seem to be as much performance loss if you split the chip up into modules.

If you can get Infinity Fabric to work with your graphics chiplet, there's no reason why you can't build the rest of the memory interface into the IO die. About the only way it gets any more flexible than that is if they were able to execute on the rumor that the large IO die for EPYC could be cut into usable quarter parts for Ryzen desktop. That's probably harder than just having separate dies, but assuming you could do it, you're really driving down costs and getting the greatest amount of flexibility possible.
 

Kocicak

Senior member
Jan 17, 2019
982
973
136
A high-end 8 core CPU may still be usable in 2 to 3 years? What a bold statement to make when a Sandy Bridge quad is still more than most people need.
When I said usable, I meant sellable as a new processor. This first compute chiplet may be usable for a few years to come, and producing it in large quantities and using it widely simply makes a lot of sense.

That is why I thought that lower-end processors do not actually need to be updated every time a new compute chiplet comes out. They may need the IO chiplet updated, but the compute chiplet may stay for longer.

It raises the question of whether concurrent production of new and old compute chiplets is practical. I have no idea how semiconductor production is organised. Perhaps it is possible to first produce a gazillion compute chiplets and update the lower-end processors only when those run out, while producing only one kind of chiplet at a time.
 

PotatoWithEarsOnSide

Senior member
Feb 23, 2017
664
701
106
I guess the benefit that you are referring to, in a roundabout way, is that the chiplet design allows for flexibility in IO dies, including short lead times for new technology standards. The implication is that AMD wouldn't necessarily optimize the chiplets each year, only the IO. This may be feasible if the next two gens are more about providing DDR5 and PCIe 5 than improved performance. However, it would certainly come with negative feedback: Intel got criticised for marginal generational gains, whereas this would be performance stagnation by design.
 

Mopetar

Diamond Member
Jan 31, 2011
7,831
5,981
136
That depends more on how it's sold though. If a year from now AMD updated the IO die and sold Zen 2 parts as a new Ryzen 3X50 or used some other naming scheme like that, it would just seem like a refresh of third generation parts.

I'd probably liken it to the GPU side of things where mobile parts were frequently rebadges. Hell, we already kind of see that, as the Ryzen 3000 mobile parts are old Zen+ 12nm parts.

Normally I would think the IO die is what undergoes the least change. New memory or other IO enhancements come out rather infrequently. Realistically you could probably keep using the same IO die for several years once DDR5 and PCIe 4 are included.
 

moinmoin

Diamond Member
Jun 1, 2017
4,944
7,656
136
Why not split the consumer processor line into two parts: a higher-end part, which would get new-generation chiplets every time they are released, and lower-end processors, which would have the compute chiplet updated, for example, every second year? I believe it would be a very cost-effective approach and it would really allow consumers to enjoy the benefits of the design and of high-volume (and therefore low-cost) production.
In a way this is what AMD is doing with its GPUs already: their GPUs are assembled from numerous IP blocks which are all separately improved and updated, it's just that the result is still a monolithic chip. I can imagine that in the long run they'll want to turn the IP blocks into chiplets, and do the same with the CPUs beyond just separating core and uncore. The most important part is feasibility and cost, though.
 

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
Now that I think about it, another advantage of the chiplet design is that the core designers can completely ignore new external tech (PCIe, memory, ...). All they need to match is the internal, more predictable IF version that the chiplet must be compatible with. This means you can plan and react faster, as you don't need to worry about such things.

EDIT: And if you run into issues with your chiplet, you could still offer the old version with a new IO die and new features. This is another issue Intel had with their 10nm: their low-power chips kept using LPDDR3 instead of LPDDR4 simply because they lacked the needed controller. (Although I don't get why they didn't back-port stuff like that.)
 

Atari2600

Golden Member
Nov 22, 2016
1,409
1,655
136
Why not split the consumer processor line into two parts: a higher-end part, which would get new-generation chiplets every time they are released, and lower-end processors, which would have the compute chiplet updated, for example, every second year?

That doesn't make sense to me.

The beauty in the design is that AMD have one chiplet serving as much of the market as possible. So they have:
(i) as large a selection as possible for harvesting.
(ii) as large a selection as possible for binning.
(iii) design costs amortised over a much larger number of chips.
(iv) a simpler contract with their foundry partner(s).
(v) a flexible means of adapting to changing demand. Market needs can be adjusted at the packaging stage, not the wafer start stage.

It would make no sense to split manufacturing effort - unless the newer chip is on a newer process which does not take resources from the older process. (A toy sketch of the harvesting effect in (i) and (ii) is below.)
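To illustrate the harvesting point, here is a toy Monte Carlo sketch: one 8-core chiplet design with random per-core defects, where imperfect dies are salvaged as 6-core parts. The 3% per-core defect probability is made up purely for illustration:

```python
import random

# Toy harvesting model: one 8-core chiplet design, independent per-core defects.
# The 3% per-core defect probability is made up purely to illustrate the idea.
random.seed(0)
CORES, P_DEFECT, N = 8, 0.03, 100_000

full = harvested = scrap = 0
for _ in range(N):
    good_cores = sum(random.random() > P_DEFECT for _ in range(CORES))
    if good_cores == 8:
        full += 1        # sell as an 8-core SKU
    elif good_cores >= 6:
        harvested += 1   # fuse off bad cores, sell as a 6-core SKU
    else:
        scrap += 1       # bin further down or scrap

print(f"8-core SKUs: {full/N:.1%}  6-core SKUs: {harvested/N:.1%}  rest: {scrap/N:.1%}")
```

With one universal die, a large fraction of imperfect chiplets still ends up as a sellable lower SKU rather than scrap, which is exactly why a single chiplet serving the whole market is so attractive.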


Considering benefits in the longer-term; the chiplet(s) + I/O should allow for quicker integration of the more recent GPUs into the package. Whenever AMD define a common communication architecture for Infinity Fabric, the CPU chiplet and the GPU chiplet - then they should be able to (relatively) quickly package up solutions as they see fit. Vega doesn't follow this - but the High Bandwidth Cache Controller is definitely a step toward it. Navi may or may not - the inclusion of XGMI in more recent Vega chips would point toward progression to that end.
 

Despoiler

Golden Member
Nov 10, 2007
1,966
770
136
We've been talking about chiplet design pros and cons for months now in the speculation thread. Why does this need its own thread? The beauty of a chiplet design is that you can reuse your chiplets top to bottom with only minor differences, if any. The I/O being separate is how you segment based on workload/use case. If you make one new chiplet and one older chiplet, you've thrown the entire economy of scale out of the window, so it's not cost effective at all. Also, how is it any different from releasing a new-generation chiplet and having backstock of your previous generation that hasn't sold yet?
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
This is the exact opposite of what AMD is trying to accomplish, and the exact opposite of, let's say, down-stepping last-gen video cards into a new lower price tier. AMD wants to be more flexible moving forward. We have given AMD a hard time because new APUs come out about 6 months after the later of the core arch or video arch is finished. We gave AMD a hard time because they knew BD was a performance dead end, yet it stopped getting updated after PD. If AMD developed their TR and EPYC models as their own monolithic dies, they wouldn't come out until a year or more after a new arch.

The chiplet arch of Zen 2 means that, in the future, AMD doesn't have to wait until they finish the next graphics chip. Let's say Navi was available in chiplet form: the Zen 2 chiplet could just be thrown onto a package with an IO die and a Navi chiplet. Then, as soon as they finish the next-gen video architecture, they can pair that with the Zen 2 chiplet. Then they can toss a Zen 3 chiplet on there when that's ready, without waiting for the next gen's refresh. They don't need to design a new die for server, then one for HEDT; heck, considering they will be pushing 64 cores, they would otherwise probably need 3 or so dies to cover the market. Now, as soon as an architecture is finished being designed, AMD can implement it immediately across all product ranges, specifically dodging the "this CPU is old news" feeling you get from rebranding the same die for years. The fact that they are fabless also helps this approach: they don't have to worry about tooling and such as much. They can just go to TSMC and say "I don't want any more of those, and I want a lot more of this."
 

hojnikb

Senior member
Sep 18, 2014
562
45
91
Chiplets could also mean a longer life for the AM4 socket... Use the same IO die but upgrade the CPU and GPU dies. You don't need to redesign the DRAM controller and everything uncore, just slap on an older IO die.

Similar to how things worked with the FSB; that thing was so flexible that it could run DDR1 with pretty much the newest Core 2 Quad at the time.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
Chiplets could also mean a longer life for the AM4 socket... Use the same IO die but upgrade the CPU and GPU dies. You don't need to redesign the DRAM controller and everything uncore, just slap on an older IO die.

Similar to how things worked with the FSB; that thing was so flexible that it could run DDR1 with pretty much the newest Core 2 Quad at the time.

That I could see. AMD might have some kind of grace period where they could use even newer chiplets with the older IO die for older platforms. So they could go DDR5 with Zen 3 but still offer Zen 3 on AM4 with an old IO die.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
How do the manufacturing costs compare to traditional chips? Seems like you're just trading some fixed cost for variable cost.

Not sure I understand this statement. Where are the fixed costs coming from? The only fixed cost (though for Intel it's a bit more variable, but not much) is the wafers themselves. What AMD did was drive up yields on the most expensive part, minimize its overall size, and put the rest of the chip on an older, cheaper, more predictable process. It also eliminated the need for shared dies across Ryzen, TR, and EPYC, which no longer have to carry functionality that may not be needed on a given platform.
 

Mopetar

Diamond Member
Jan 31, 2011
7,831
5,981
136
How do the manufacturing costs compare to traditional chips? Seems like you're just trading some fixed cost for variable cost.

Chiplets are less expensive to manufacture. Reducing the size of the die that you're producing means that you're more likely to get more functional parts back.

They also don't have to design multiple different chips, which saves on additional work and means that you don't need to create separate masks for those other chips.

Both the fixed and per-unit costs are going to decrease on the whole. Packaging might be slightly more expensive, but not enough to offset the other cost savings.
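A minimal sketch of the yield argument, using a simple Poisson defect model; the defect density and die areas are assumptions chosen only to show the trend, not real foundry or AMD figures:

```python
import math

# Minimal Poisson yield model: yield = exp(-defect_density * die_area).
# The 0.2 defects/cm^2 density and the die areas are assumptions chosen
# only to show the trend, not real foundry or AMD figures.
D0 = 0.2  # defects per cm^2 (assumed)

def die_yield(area_mm2: float) -> float:
    return math.exp(-D0 * area_mm2 / 100.0)  # convert mm^2 -> cm^2

for label, area in [("~80 mm^2 chiplet", 80),
                    ("~200 mm^2 monolithic die", 200),
                    ("~400 mm^2 big monolithic die", 400)]:
    print(f"{label:30s} yield ~ {die_yield(area):.1%}")
```

The smaller the die, the less likely any single defect kills it, which is why many small chiplets return more working silicon per wafer than a few large monolithic dies.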
 

Atari2600

Golden Member
Nov 22, 2016
1,409
1,655
136
How do the manufacturing costs compare to traditional chips? Seems like you're just trading some fixed cost for variable cost.

See this for an explanation of how not all is as it seems.

The take-home is that - even at the point the stuff leaves the foundry - splitting the floor plan across nodes is likely to improve recurring costs, not increase them.

Once you consider the benefits of flexible packaging to meet market demand, plus both harvesting and binning, then it's fairly clear that splitting across two nodes is a big winner.
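A rough sketch of that recurring-cost point, combining the yield model above with assumed wafer prices for a leading-edge and a mature node; every number here is a placeholder, and only the direction of the comparison matters:

```python
import math

# Compare cost per good die: one large monolithic 7nm die vs a small 7nm
# compute chiplet plus a 14nm IO die. Wafer prices, defect densities and
# die areas are all placeholder assumptions; only the direction matters.
def dies_per_wafer(area_mm2, wafer_diam_mm=300):
    return int(math.pi * (wafer_diam_mm / 2) ** 2 / area_mm2)  # ignores edge loss

def cost_per_good_die(area_mm2, wafer_cost, d0):
    y = math.exp(-d0 * area_mm2 / 100.0)  # Poisson yield, d0 in defects/cm^2
    return wafer_cost / (dies_per_wafer(area_mm2) * y)

mono    = cost_per_good_die(280, wafer_cost=10000, d0=0.3)  # big 7nm die (assumed)
chiplet = cost_per_good_die(80,  wafer_cost=10000, d0=0.3)  # 7nm compute chiplet
io_die  = cost_per_good_die(125, wafer_cost=4000,  d0=0.1)  # mature-node IO die

print(f"monolithic 7nm:        ${mono:6.2f}")
print(f"7nm chiplet + 14nm IO: ${chiplet + io_die:6.2f}  ({chiplet:.2f} + {io_die:.2f})")
```

Even before any packaging-flexibility or binning benefits, the small leading-edge die plus cheap mature-node IO die comes out well ahead of one big leading-edge die under these assumptions.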