The beauty of AMD chiplet design

Jan 17, 2019
#1
I have not seen this discussed before. Besides all the obvious advantages of the chiplet design, there is one more thing: the current 8 core chiplet will remain relevant and usable for two or even three years. I believe that even that far in the future it will be usable in some low end processors or other applications. The same little universal 8 core chiplet, produced in such high volume, allows the development cost to be diluted so much that overall it will be extremely cheap to produce.
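As a back-of-the-envelope illustration of that dilution effect (all figures here are hypothetical, not actual AMD numbers):

```python
# Toy model of amortizing a one-time chiplet design cost over production volume.
# The $300M design cost and the volumes are invented, for illustration only.

DESIGN_COST = 300e6  # assumed one-time design + mask cost, in dollars

for units in (10e6, 50e6, 200e6):
    per_unit = DESIGN_COST / units
    print(f"{units / 1e6:5.0f}M chiplets -> ${per_unit:5.2f} design cost per chiplet")
```

The more products reuse the same chiplet, the further that fixed cost dilutes.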

Why not split the consumer processor line in two parts: a higher end part, which would get new generation chiplets every time they are released, and lower end processors, which would have the computing unit updated, for example, every second year? I believe it would be a very cost effective approach, and it would really let consumers enjoy the benefits of the design and of high volume (and therefore low cost) production.
 

whm1974

Diamond Member
Jul 24, 2016
#2
I don't know, maybe things will change in the not so near future, but currently the vast majority of folks are well served by quad cores and will be for some time. For my next system I'm planning on going to 8 cores, but then again I am not part of the vast majority of users.
 

DominionSeraph

Diamond Member
Jul 22, 2009
#3
What the frack are you talking about? You don't see the 2990WX selling for $5 because it's a bunch of parts glued together, do you?

A high-end 8 core CPU may still be usable in 2 to 3 years? What a bold statement to make when a Sandy Bridge quad is still more than most people need, and that's looking to be the case for the next 5 years or more. Doesn't mean Intel is still making Sandy or that there would be some amazing savings to be had over the current quad cores if they had kept up with it.
 

amd6502

Senior member
Apr 21, 2017
#4
The Piledriver FX (e.g. the FX-8300 series) was a very long lived (2013–2018) product, and in its later half (2016–2019) prices were able to drop significantly thanks to the cost savings you mentioned. (Amazingly, on sale the FX retail cost dipped as low as $10/core.)

Pinnacle Ridge is another classic and well optimized design and I also see it having around a 6 year lifespan and production of ~ 5 years. The design savings will probably be passed on to the consumer in a few years, as it heads into midlife in 2021.

I see the Ryzen 3000 chiplet parts as a very short lived (1–2 yr) product. AMD has made clear its pattern of an optimization generation following a new core or node, and this makes a lot of sense. Since 7nm was both a new node and a new core, I actually see two consecutive years of optimization as likely.

The 2019 vintage chiplets themselves could be reused for a portion of the 4000 series (2020) products, namely Navi APUs with HBM for the high end AM4. My guesstimate is that this consumer line will also be complemented by a monolithic 7nm product, as well as Picasso (2019) and Pinnacle Ridge (2018).
 
Jan 17, 2019
#5
I do not think that the longevity and current low prices of Piledriver processors are the result of cost savings from high volume production. I believe demand for these has been quite limited, their production ended years ago, and now only remaining stock is being sold, probably at a loss. I am actually quite surprised that they are still sold; I would just scrap the remaining stock of these shadows of the past... :D
 

whm1974

Diamond Member
Jul 24, 2016
#6
Well, there is a market for FX CPUs as a CPU upgrade for existing systems. For such rigs it is cheaper to upgrade the processor than it is to replace the entire system.
 
Jan 17, 2019
#7
I just checked the performance of the FX-8370: it has multi-thread performance comparable to a 4c/4t Zen processor, while its single thread performance is significantly weaker. All that at several times higher power consumption. It is an environmental disaster. I would just scrap them.

Back on topic:

What is better: to have the whole consumer CPU lineup regularly updated with new processing units, or to have the lineup split, with the lower end updated less often, making the lower end processors cheaper?
 
Oct 27, 2006
#9
What the frack are you talking about? You don't see the 2990WX selling for $5 because it's a bunch of parts glued together, do you?

A high-end 8 core CPU may still be usable in 2 to 3 years? What a bold statement to make when a Sandy Bridge quad is still more than most people need, and that's looking to be the case for the next 5 years or more. Doesn't mean Intel is still making Sandy or that there would be some amazing savings to be had over the current quad cores if they had kept up with it.
Not that I disagree with your statement at all, but it does bring up an interesting observation.

If Intel were willing to make a 6C/12T or 8C/16T "9850K" or whatever that literally didn't have an iGPU, it could easily have had double the cache and still a decently smaller die size, I believe. The IGP got bigger and bigger over time :/

I much prefer AMD's clear separation: models with both (APU), and then CPUs with no compromise.
 


Abwx

Diamond Member
Apr 2, 2011
#10
I just checked performance of FX-8370 and it has multi-thread performace comparable to 4c/4t Zen processor while single thread is significantly weaker. All that at x-times higher power consumption. It is an enviromental disaster. I would just scrap them.
The R3 1200 does 480 pts in Cinebench and the FX-8370 does 640 pts, so are you sure that you checked anything?

If downclocked to R3 1200 performance the FX would consume 50W, somewhat more than the R3 1200's 30W, but nothing like the several times you wrongly state.
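A quick check of the arithmetic from the figures quoted above (30 W for the R3 1200 vs. 50 W for the FX at equal performance):

```python
# Power comparison at equal performance, using the numbers quoted in this post.
r3_1200_watts = 30         # R3 1200 at 480 Cinebench pts
fx_downclocked_watts = 50  # FX-8370 throttled to the same 480 pts, as claimed above

ratio = fx_downclocked_watts / r3_1200_watts  # about 1.67x, not "several times"
print(f"FX draws about {ratio:.2f}x the power at equal performance")
```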
 

Mopetar

Diamond Member
Jan 31, 2011
#11
I much prefer AMD's clear separation: models with both (APU), and then CPUs with no compromise.
I don't know if it's a long term plan for AMD or not, but they could apply the chiplet strategy to graphics as well. The only major impediment from reports is that it creates a situation akin to crossfire or SLI which tend to be a pain to get working well for games, so a monolithic GPU is better.

However, that doesn't matter as much for the APU market where a single chiplet probably delivers enough graphics power to be useful or the high-end computational market where there doesn't seem to be as much performance loss if you split the chip up into modules.

If you can get infinity fabric to work with your graphics chiplet, there's no reason why you can't build the rest of the memory interface into the IO die. About the only way it gets any more flexible than that is if they were able to execute on the rumor where the large IO die for EPYC could be cut into usable quarter parts for Ryzen desktop. That's probably harder than just having separate dies, but assuming you could do that, you're really driving down costs and having the greatest amount of flexibility possible.
 
Jan 17, 2019
#12
A high-end 8 core CPU may still be usable in 2 to 3 years? What a bold statement to make when a Sandy Bridge quad is still more than most people need.
When I said usable, I meant sellable as a new processor. This first computing chiplet may be usable for a few years to come, and producing it in large quantities and using it widely simply makes a lot of sense.

That is why I thought that lower end processors do not actually need to be updated every time a new computing chiplet comes out. They may need the IO chiplet updated, but the computing chiplet may stay for longer.

This raises the question of whether concurrent production of new and old computing chiplets is practical. I have no idea how semiconductor production is organised. Perhaps it is possible to first produce a gazillion computing chiplets, and update the lower end processors only when these run out, while producing only one kind of chiplet at a time.
 
Feb 23, 2017
#13
I guess the benefit you are referring to, in a roundabout way, is that the chiplet design allows for flexibility of IO dies, including short lead times for new technology standards. The implication is that AMD wouldn't necessarily optimize the chiplets each year, only the IO. This may be feasible if the next two gens are more about providing DDR5 and PCIe 5 than improved performance. However, it would certainly come with negative feedback: Intel got criticised for marginal generational gains, whereas this would be performance stagnation by design.
 

Mopetar

Diamond Member
Jan 31, 2011
4,417
349
126
#14
That depends more on how it's sold though. If a year from now AMD updated the IO die and sold Zen 2 parts as a new Ryzen 3X50 or used some other naming scheme like that, it would just seem like a refresh of third generation parts.

I'd probably liken it to the GPU side of things where mobile parts were frequently rebadges. Hell, we already kind of see that, as the Ryzen 3000 mobile parts are old Zen+ 12nm parts.

Normally I would think the IO die is what undergoes the least change. New memory or other IO enhancements come out rather infrequently. Realistically you could probably keep using the same IO die for several years once DDR5 and PCIe 4 are included.
 

moinmoin

Senior member
Jun 1, 2017
#15
Why not to split the consumer processor line in two parts: one higher end part, which would be getting new generation chiplets every time they are released, and lower end processors, which would have the computing unit updated for example every second year? I believe it would be very cost effective way and it would really allow the consumers to enjoy the benefits of the design and high volume (and therefore low cost) production.
In a way this is what AMD is doing with its GPUs already: their GPUs are assembled from numerous IP blocks which are all separately improved and updated, just that the result is still a monolithic chip. I can imagine that in the long run they'll want to turn the IP blocks into chiplets, and do the same with the CPUs beyond just separating core and uncore. The most important factors are feasibility and cost, though.
 

beginner99

Diamond Member
Jun 2, 2009
#16
Now that I think about it, another advantage of the chiplet design is that the designers can completely ignore new tech (PCIe, memory, ...). All they need to match is their internal, more predictable IF version it must be compatible with. This means you can plan and react faster, as you don't need to worry about such things.

EDIT: And if you run into issues with your new chiplet, you could still offer the old version with a new IO die and new features. This was another issue Intel had with their 10nm: their low power chips kept using LPDDR3 instead of LPDDR4 simply because they lacked the needed controller. (Albeit I don't get why they didn't back-port stuff like that.)
 

Atari2600

Senior member
Nov 22, 2016
#17
Why not to split the consumer processor line in two parts: one higher end part, which would be getting new generation chiplets every time they are released, and lower end processors, which would have the computing unit updated for example every second year?
That doesn't make sense to me.

The beauty in the design is that AMD have one chiplet serving as much of the market as possible. So they have:
(i) as large a selection as possible for harvesting.
(ii) as large a selection as possible for binning.
(iii) design costs amortised over a much larger number of chips.
(iv) a simpler contract with their foundry partner(s).
(v) a flexible means of adapting to changing demand. Market needs can be adjusted at the packaging stage, not the wafer start stage.

It would make no sense to split manufacturing effort - unless the newer chip is on a newer process which does not take resource from the older process.
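Points (i) and (ii) can be sketched as a toy harvesting/binning model: every die comes from the same chiplet design, and test results decide which SKU it becomes (the core counts, clock thresholds, and SKU names below are invented for illustration, not actual AMD specs):

```python
# Toy binning model: one chiplet design harvested into several SKUs.
# All thresholds and SKU names are hypothetical.

def bin_chiplet(good_cores: int, max_clock_ghz: float) -> str:
    if good_cores >= 8 and max_clock_ghz >= 4.4:
        return "flagship 8-core SKU"
    if good_cores >= 8:
        return "standard 8-core SKU"
    if good_cores >= 6:
        return "6-core SKU (defective cores fused off)"
    return "salvage / low-end part"

for cores, clock in [(8, 4.5), (8, 4.1), (6, 4.3), (4, 4.0)]:
    print(f"{cores} good cores @ {clock} GHz -> {bin_chiplet(cores, clock)}")
```

Because every die feeds the same pipeline, a partially defective or slow chiplet is still sellable somewhere in the stack.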


Considering benefits in the longer-term; the chiplet(s) + I/O should allow for quicker integration of the more recent GPUs into the package. Whenever AMD define a common communication architecture for Infinity Fabric, the CPU chiplet and the GPU chiplet - then they should be able to (relatively) quickly package up solutions as they see fit. Vega doesn't follow this - but the High Bandwidth Cache Controller is definitely a step toward it. Navi may or may not - the inclusion of XGMI in more recent Vega chips would point toward progression to that end.
 

Despoiler

Golden Member
Nov 10, 2007
#18
We've been talking about chiplet design pros and cons for months now in the speculation thread. Why does this need its own thread? The beauty of a chiplet design is that you can reuse your chiplets top to bottom with only minor differences, if any. The I/O being separate is how you segment based on workload/use case. If you make one new chiplet and one older chiplet, you've thrown the entire economy of scale out of the window, so it's not cost effective at all. Also, how is it any different from releasing a new generation chiplet and having backstock of your previous generation that hasn't sold yet?
 

Topweasel

Diamond Member
Oct 19, 2000
#19
This is the exact opposite of what AMD is trying to accomplish, and the exact opposite of, let's say, down-stepping last gen video cards into a new lower price tier. AMD wants to be flexible moving forward. We have given AMD a hard time because new APUs come out about 6 months after either the core arch or the video arch is finished. We gave AMD a hard time because they knew BD was a performance dead end and stopped updating it after PD. If AMD developed their TR and EPYC models as their own monolithic dies, those wouldn't come out until a year or more after a new arch.

The chiplet arch of Zen 2 means that in the future AMD doesn't have to wait until they finish the next graphics chip. Let's say Navi were available as a chiplet: it could just be placed on a package with an IO die alongside the Zen 2 chiplet. Then, as soon as they finish the next gen video, they can pair that with the Zen 2 chiplet. Then they can put a Zen 3 chiplet on there when that's ready, without waiting for the next gen's refresh. They don't need to design a new die for server, then one for HEDT; heck, considering they will be pushing 64 cores, they would otherwise probably need 3 or so dies to cover the market. Now, as soon as an architecture is finished being designed, AMD can implement it immediately across all product ranges, specifically dodging the "this CPU is old news" feeling you get with rebranding the same die for years. The fact they are fabless also helps this move: they don't have to worry about tooling and such as much. They can just go to TSMC and say "I don't want any more of those, and I want a lot more of this."
 

hojnikb

Senior member
Sep 18, 2014
#20
Chiplets could also mean a longer life for the AM4 socket... Use the same IO die but upgrade the CPU and GPU dies. You don't need to redesign the DRAM controller and everything uncore, just slap on the older IO die.

Similar to how things worked with the FSB. That thing was so flexible that you could run DDR1 on pretty much the newest Core 2 Quad at the time.
 

Topweasel

Diamond Member
Oct 19, 2000
#21
Chiplets could also mean a longer life for the AM4 socket... Use the same IO die but upgrade the CPU and GPU dies. You don't need to redesign the DRAM controller and everything uncore, just slap on the older IO die.

Similar to how things worked with the FSB. That thing was so flexible that you could run DDR1 on pretty much the newest Core 2 Quad at the time.
That I could see. AMD might have some kind of grace period where they could use even new chiplets with older IO for older platforms. So they could go DDR5 with Zen 3, but offer Zen 3 on AM4 with an old IO die.
 

deathBOB

Senior member
Dec 2, 2007
#22
How do the manufacturing costs compare to traditional chips? It seems like you're just trading some fixed cost for variable cost.
 

Topweasel

Diamond Member
Oct 19, 2000
#23
How do the manufacturing costs compare to traditional chips? It seems like you're just trading some fixed cost for variable cost.
Not sure I understand this statement. Where are the fixed costs coming from? The only fixed cost (though for Intel it's a bit more variable, but not much) is the wafers themselves. What AMD did was drive up the yields on the most expensive part, minimize its overall size, and put the rest of the chip on an older, cheaper, more predictable process. It also eliminated the need for shared dies across Ryzen, TR, and EPYC, so no die has to carry functionality that may not be needed on its platform.
 

Mopetar

Diamond Member
Jan 31, 2011
#24
How do the manufacturing costs compare to traditional chips? It seems like you're just trading some fixed cost for variable cost.
Chiplets are less expensive to manufacture. Reducing the size of the die that you're producing means that you're more likely to get more functional parts back.

They also don't have to design multiple different chips, which saves on additional work and means that you don't need to create separate masks for those other chips.

Both the fixed and per unit costs are going to decrease on the whole. Packaging might be slightly more expensive, but not enough to offset the other cost savings.
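The yield argument can be sketched with a standard Poisson defect model (the defect density and die sizes below are assumptions for illustration, not actual figures for any AMD product):

```python
import math

def die_yield(area_mm2: float, defects_per_mm2: float = 0.001) -> float:
    # Poisson model: probability that a die of this area contains zero defects.
    return math.exp(-defects_per_mm2 * area_mm2)

# Small chiplet vs. a larger monolithic die (assumed sizes).
for area in (75, 200):
    print(f"{area:3d} mm^2 die -> {die_yield(area):.1%} yield")
```

Smaller dies lose exponentially less of the wafer to defects, which is the core of the "more functional parts back" point above.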
 

Atari2600

Senior member
Nov 22, 2016
#25
How do the manufacturing costs compare to traditional chips? It seems like you're just trading some fixed cost for variable cost.
See this for an explanation of how not all is as it seems.

The take-home is that, even at the point the product leaves the foundry, splitting the floor plan across nodes is likely to reduce recurring costs, not increase them.

Once you consider the benefits of flexible packaging to meet market demand, plus harvesting and binning, then it's fairly clear that splitting across two nodes is a big winner.
 
