AMD "Next Horizon Event" Thread


Mopetar

Diamond Member
Jan 31, 2011
7,831
5,980
136
That reasoning sucks, and is the exact kind of thinking that single-handedly lost AMD the GTX 280-HD 4870 generation despite enjoying a massive architectural advantage. If you have the capability to go for your opponent's throat versus leaving them enough breathing room to compete against you, it's almost always the right decision to do the former.

I'm reminded of a quote from Jobs when he came back to Apple. He basically said that people needed to let go of the notion that for Apple to win, Microsoft needed to lose. AMD needs to make a good product and ensure that their company is profitable so that they can invest more into R&D. What good is slitting your opponent's throat if it leaves you in the same position?

If the lowest of AMD's consumer processors can trade blows with Intel's highest end while consuming less power, then that's R300/G80/Conroe all over again. That situation never, ever, not in a million years, lowers profits. It's only ever the opposite. Demand for the low end products may go up faster than the high end, but the absolute demand for high end products is still significantly greater than it would be otherwise.

It doesn't matter what the demand is if you can't supply it, and if it's anything like the mining craze, it's the middlemen who profit while AMD doesn't get an extra dime. I think you're both forgetting that AMD is launching on a new node which they have to share with several other companies and are also using for their GPU parts. What this means is that they're going to be heavily supply constrained in terms of how many dies they have available. There isn't a lot of reason to start off with 2-chiplet R3/R5 parts when they'll be constrained by how much supply they can push.

Imagine that they have 1 million chiplets binned for R3. Suppose that they can get $120 for a CPU using 1 chiplet, or $160 for a CPU using 2 chiplets (because it slots up against a better chip). That's $120 million in revenue vs. $80 million in revenue. They're better off waiting for yields to improve and supply constraints to ease before considering using more than 1 chiplet for their consumer parts. You're trying to apply long-term reasoning (which is very sound) while ignoring the constraints that exist in reality and make those choices less reasonable.
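The arithmetic in that hypothetical checks out; here's a quick sketch (the $120/$160 prices and the 1 million chiplet supply are the post's hypothetical numbers, not real AMD figures):

```python
# Hypothetical: 1M chiplets binned for R3, sold either as
# 1-chiplet CPUs at $120 or 2-chiplet CPUs at $160.
chiplets = 1_000_000

single_chiplet_revenue = chiplets * 120        # 1M CPUs at $120 each
dual_chiplet_revenue = (chiplets // 2) * 160   # only 500k CPUs at $160 each

print(single_chiplet_revenue)  # 120000000 -> $120M
print(dual_chiplet_revenue)    # 80000000  -> $80M
```

So under a hard supply constraint, the single-chiplet SKU brings in 50% more revenue from the same silicon.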
 

Mopetar

Diamond Member
Jan 31, 2011
7,831
5,980
136
Yeah, the APU is the one I am most convinced will be monolithic. There have been some leaks suggesting we are going to see a process refresh of Raven Ridge on GloFo "12nm" first, which does indicate a much longer wait for a 7nm APU :(.

I'm not sure they'll make the APU monolithic, but if they do, it won't be released on 7nm for a very long time. Making it monolithic means the yields would be worse and the dies couldn't be used for other market segments. The margins would be much lower on such a part than on anything else AMD could produce, so unless there are free wafers at TSMC, AMD is always better off financially producing something else.

What would have been interesting is if 7nm Vega was a much smaller chip. AMD had said that using the MCM approach of Zen on their GPUs didn't cause much of an issue as far as compute, deep learning, etc. went, but that having multiple chips specifically didn't work for gaming. What would be interesting (and maybe they're eventually working to get there, but just didn't have the R&D at the time to do it this early) is to produce a GPU chiplet that can be combined with multiple others to create something like Epyc that is targeted at the datacenter, but have an individual chiplet be roughly good enough to put into an APU.

That doesn't really solve the problem of what to do with their mainstream gaming parts and perhaps those just have to continue to be monolithic.
 
  • Like
Reactions: amd6502

Glo.

Diamond Member
Apr 25, 2015
5,705
4,549
136
It's funny: first, people didn't believe that Rome is a 64-core CPU with 9 dies. Then people didn't believe that we would see chiplets on the AM4 platform. And now people don't believe we will see dual chiplets on AM4, despite the fact that chiplets are 25-40W TDP depending on core clocks, and there is no need to redesign any power delivery on B450 and X470 MoBos ;).

Rome is a 225-250W TDP, 64-core CPU. That means each chiplet has to be 25-30W TDP :)
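For what it's worth, the division behind that figure, as a sketch (the 8-chiplet count matches the Rome reveal; the IO-die power share is a guess, since its budget isn't public):

```python
# Back-of-envelope: split the Rome package TDP across 8 chiplets.
# io_share_w is an assumed power budget for the IO die (unknown).
def chiplet_tdp(package_tdp_w, n_chiplets=8, io_share_w=0.0):
    return (package_tdp_w - io_share_w) / n_chiplets

print(chiplet_tdp(225))                 # 28.125 W, ignoring the IO die
print(chiplet_tdp(250))                 # 31.25 W
print(chiplet_tdp(225, io_share_w=25))  # 25.0 W with a 25 W IO budget
```

Ignoring the IO die gives ~28-31W per chiplet; carving out an IO budget pulls it down toward the 25W end of the quoted range.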
 
  • Like
Reactions: DarthKyrie

teejee

Senior member
Jul 4, 2013
361
199
116
It's funny: first, people didn't believe that Rome is a 64-core CPU with 9 dies. Then people didn't believe that we would see chiplets on the AM4 platform. And now people don't believe we will see dual chiplets on AM4, despite the fact that chiplets are 25-40W TDP depending on core clocks, and there is no need to redesign any power delivery on B450 and X470 MoBos ;).

Rome is a 225-250W TDP, 64-core CPU. That means each chiplet has to be 25-30W TDP :)

Rome is a server CPU. The top-of-the-line 32-core 7601 has an all-core boost frequency of 2.7 GHz and the same TDP as the 16-core 1950X Threadripper.
So your figure of 25-30W is not valid for a 4 GHz+ chiplet (off by a factor of two).
 

Glo.

Diamond Member
Apr 25, 2015
5,705
4,549
136
Rome is a server CPU. The top-of-the-line 32-core 7601 has an all-core boost frequency of 2.7 GHz and the same TDP as the 16-core 1950X Threadripper.
So your figure of 25-30W is not valid for a 4 GHz+ chiplet (off by a factor of two).
<facepalm>

And where in my post did I talk about clocks?

Secondly, we do not know how good or bad the 7 nm process is, or how well it will clock in low-power envelopes.

I'm pretty sure that, if that +25% performance figure is accurate, an 8C/16T 45W TDP CPU will have a 3.6 GHz base clock and up to 4.5 GHz turbo.
 

Despoiler

Golden Member
Nov 10, 2007
1,966
770
136
It's funny: first, people didn't believe that Rome is a 64-core CPU with 9 dies. Then people didn't believe that we would see chiplets on the AM4 platform. And now people don't believe we will see dual chiplets on AM4, despite the fact that chiplets are 25-40W TDP depending on core clocks, and there is no need to redesign any power delivery on B450 and X470 MoBos ;).

Rome is a 225-250W TDP, 64-core CPU. That means each chiplet has to be 25-30W TDP :)

Yup, and the size and complexity of the I/O die decreases with part complexity: Enterprise > Professional > Consumer. The I/O die will shrink as the memory channels and L4 decrease.

Rome is a server CPU. The top-of-the-line 32-core 7601 has an all-core boost frequency of 2.7 GHz and the same TDP as the 16-core 1950X Threadripper.
So your figure of 25-30W is not valid for a 4 GHz+ chiplet (off by a factor of two).

He isn't saying the consumer CPUs will be 25-30W. What it means is that when the clock speed rises, they will still fit in the same power envelope as the existing generation of chips, or less. That's what the node shrink gives you.
 

maddie

Diamond Member
Jul 18, 2010
4,738
4,667
136
It's funny: first, people didn't believe that Rome is a 64-core CPU with 9 dies. Then people didn't believe that we would see chiplets on the AM4 platform. And now people don't believe we will see dual chiplets on AM4, despite the fact that chiplets are 25-40W TDP depending on core clocks, and there is no need to redesign any power delivery on B450 and X470 MoBos ;).

Rome is a 225-250W TDP, 64-core CPU. That means each chiplet has to be 25-30W TDP :)
Where is Rome a 225-250W CPU? Have I missed that?
 

teejee

Senior member
Jul 4, 2013
361
199
116
<facepalm>

And where in my post did I talk about clocks?

Secondly, we do not know how good or bad the 7 nm process is, or how well it will clock in low-power envelopes.

I'm pretty sure that, if that +25% performance figure is accurate, an 8C/16T 45W TDP CPU will have a 3.6 GHz base clock and up to 4.5 GHz turbo.

Maybe I missed your point then. But 4+ GHz was obviously inferred from the fact that you were talking about AM4.
 

Arzachel

Senior member
Apr 7, 2011
903
76
91
It doesn't matter what the demand is if you can't supply it, and if it's anything like the mining craze, it's the middlemen who profit while AMD doesn't get an extra dime. I think you're both forgetting that AMD is launching on a new node which they have to share with several other companies and are also using for their GPU parts. What this means is that they're going to be heavily supply constrained in terms of how many dies they have available. There isn't a lot of reason to start off with 2-chiplet R3/R5 parts when they'll be constrained by how much supply they can push.

You can have low-end parts using a single chiplet AND high-end parts using two chiplets without incurring heavy costs. That's the whole point of the design, folks!
 

amd6502

Senior member
Apr 21, 2017
971
360
136
I don't think we'll even see R3 7nm products, at least not for a very long time. 4c/8t 7nm will clearly be R5. If yields are bad enough or the number of chiplets huge enough, then maybe they harvest some ≤ 3c to stockpile and eventually put out an R3, if we're lucky.

Most R3s are RR and PR. 12nm/14nm is a very long-lived and cost-effective way to make transistors.

Their 12nm is still the bread and butter node for PC and mobile until then. I'm waiting to see what 12nm APU they come out with soon.

Where do you think the smaller IO Hub, GPU, Chiplets come from? The chiplet fairy? They need masks/designs too.

The problem is some people see a neat solution for a specific problem space, and quickly leap to thinking that it's the best solution for ALL problem spaces. It isn't.

For a large number of CCXs, the central IO hub (even if on another die) may be good for latencies. When they launch 7nm Epyc we'll find out. Soon, I hope.

They may go monolithic, but maybe only every other generation, so they'll probably wait for at least Navi and likely longer, and go 7nm monolithic with Zen3.

There's a big difference between dollar volume and unit volume for ultra-premium bleeding edge. It's okay for unit production costs to be higher if they really are limited-edition, high-margin products, especially with yet another generation update (Zen3) not being so far out in the future.
 

Glo.

Diamond Member
Apr 25, 2015
5,705
4,549
136
A 300 mm 7 nm wafer with 72 mm² dies gives 813 working dies.

That means each chiplet will cost AMD under $13.

Speaking of manufacturing costs...

Which means Rome's chiplets will cost AMD around $100 per CPU.

The manufacturing costs for this design may be even lower than they are for the 14 nm Zeppelin design O_O.
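A rough sketch of that math. The wafer price here is an assumption (7 nm wafer pricing isn't public; a later guess in this thread is ~$15k), chosen at $10k so the per-chiplet figure lands near the quoted $13:

```python
# Implied chiplet cost from good dies per wafer.
# wafer_cost_usd is an assumption, not a known TSMC price.
wafer_cost_usd = 10_000
good_dies = 813

cost_per_chiplet = wafer_cost_usd / good_dies   # ~$12.30, i.e. "under $13"
rome_chiplet_cost = 8 * cost_per_chiplet        # ~$98, "around $100 per CPU"
print(round(cost_per_chiplet, 2), round(rome_chiplet_cost, 2))
```

Note the $100 Rome figure only covers the eight 7 nm chiplets; the 14 nm IO die and packaging come on top.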
 

DrMrLordX

Lifer
Apr 27, 2000
21,620
10,829
136
Probably channeling Nosta here, but just think of migrating the IOC die to an FDX process in the future. For all we know, GloFo going big with SOI might be related to this possibility.

12FDX has been pushed back to something like 2020, if it ever sees the light of day.
 

jpiniero

Lifer
Oct 1, 2010
14,584
5,206
136
300 mm 7 nm wafer, with dies of 72 mm2 size give out 813 working dies.

That's assuming you would be able to use 100% of the dies. That's obviously not going to happen, but you could certainly use the ones with some defects (but that still have 4 or 6 good cores) for Ryzen and leave the fully working ones for Epyc.

I don't know what AMD is paying for a wafer, but I reckon it's at least 15 grand a wafer at 7 nm.

You'd also have to account for the cost of the IO chip.
 

Glo.

Diamond Member
Apr 25, 2015
5,705
4,549
136
That's assuming you would be able to use 100% of the dies. That's obviously not going to happen, but you could certainly use the ones with some defects (but that still have 4 or 6 good cores) for Ryzen and leave the fully working ones for Epyc.

I don't know what AMD is paying for a wafer, but I reckon it's at least 15 grand a wafer at 7 nm.

You'd also have to account for the cost of the IO chip.
Those 813 dies are working dies; according to wafer die calculators, that's a 93% overall yield, with 56 defective (dead) dies.

The cost of the IO chip will be pennies on the 14 nm process, with wafers costing 50% less than 7 nm ones.
 
  • Like
Reactions: DarthKyrie

Mopetar

Diamond Member
Jan 31, 2011
7,831
5,980
136
You can have low-end parts using a single chiplet AND high-end parts using two chiplets without incurring heavy costs. That's the whole point of the design, folks!

I'm not saying that you can't. My reasoning is based on the fact that AMD will have limited supply. In that situation it makes more sense to sell CPUs with a single chiplet because it means that you can sell more of them. Pretend that Zen 2 is good enough in terms of clock bumps and IPC gains where it's essentially identical to a 9900k or trades blows back and forth across different workloads. You can easily sell a single chiplet CPU at existing price levels and make a good deal of money doing so. Adding another chiplet might mean that you crush the 9900k on any parallel workloads, but unless you charge roughly twice as much for such a CPU, you're better off selling two CPUs using a single chiplet.

It's not that AMD's costs increase significantly from using two chiplets instead of one. Maybe it's only $20 more in parts and labor. The problem is the opportunity cost of not selling another CPU since they needed to use one extra chiplet. People have been pointing out that Intel is supply constrained because they've had to manufacture everything on their 14 nm node because 10 nm isn't ready, but AMD has an even worse problem because they have to share the 7 nm node with Apple and other companies, which means they're even more supply limited.

In the future, it will make more sense for them to sell CPUs with two chiplets, because they won't be supply constrained and the cost of using one extra chiplet is small compared to the additional revenue (and profit) that can be gained by selling that more powerful chip. It's not that the idea itself is bad, just that it doesn't make economic sense this early in the product's life cycle.
 

Mopetar

Diamond Member
Jan 31, 2011
7,831
5,980
136
Those 813 dies are working dies; according to wafer die calculators, that's a 93% overall yield, with 56 defective (dead) dies.

The cost of the IO chip will be pennies on the 14 nm process, with wafers costing 50% less than 7 nm ones.

The IO chip is what interests me. Do you think AMD will only use one design and just disable parts of it (or just bin partially functional ones) for their other products like Threadripper and Ryzen? The other alternative is to have a different IO chip that doesn't connect as many chiplets, but is also a lot smaller. I could easily see them making a separate IO chip designed to serve those other product lines and include the ability to support a GPU for APUs as well.
 

Abwx

Lifer
Apr 2, 2011
10,939
3,440
136
Where is Rome a 225-250W CPU? Have I missed that?


If it's to be compatible with current motherboards, as AMD stated, then it's 180W, likely at 2.2 GHz, so that's about 22.5W dedicated to each chiplet and its corresponding power footprint in the I/O device.
 
  • Like
Reactions: lightmanek

HurleyBird

Platinum Member
Apr 22, 2003
2,684
1,267
136
My reasoning is based on the fact that AMD will have limited supply.

The fact, you say? It's difficult to predict with much certainty what the supply situation will look like at TSMC, let alone to claim factual knowledge. Assuming consumer is still chiplet + IO, you think that "massive" ~72 mm² die (or two) is going to create supply issues? On what basis?
 

maddie

Diamond Member
Jul 18, 2010
4,738
4,667
136
12FDX has been pushed back to something like 2020, if it ever sees the light of day.
Read this on the GloFo page. Has it changed?

"GF Fab 1 in Dresden, Germany is currently putting the conditions in place to enable the site's 12FDX development activities and subsequent manufacturing. Customer product tape-outs are expected to begin in the first half of 2019."
 

H T C

Senior member
Nov 7, 2018
549
395
136
A 300 mm 7 nm wafer with 72 mm² dies gives 813 working dies.

That means each chiplet will cost AMD under $13.

Speaking of manufacturing costs...

Which means Rome's chiplets will cost AMD around $100 per CPU.

The manufacturing costs for this design may be even lower than they are for the 14 nm Zeppelin design O_O.

According to this:

[image: Die Per Wafer Calculator screenshot]

I chose a defect density of 0.4 because it's a new process, and with these parameters, AMD would get 612 good dies. Note the die dimensions are rounded up to make the calculations a tad easier (the square root of 72 is 8.485281374).

Here's the link for the Die Per Wafer Calculator: https://caly-technologies.com/die-yield-calculator/
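For anyone curious, the calculator's numbers can be roughly reproduced with a simple Poisson yield model. This is only a sketch (square dies, no edge exclusion or scribe lines, so the gross count comes out a bit higher than the calculator's, and it shows ~677 good dies rather than 612):

```python
import math

def poisson_yield(defect_density_cm2, die_area_mm2):
    """Fraction of dies expected defect-free under a simple Poisson model."""
    return math.exp(-defect_density_cm2 * die_area_mm2 / 100.0)

def gross_dies(wafer_diameter_mm, die_area_mm2):
    """Classic dies-per-wafer approximation (square dies, no edge exclusion)."""
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

y = poisson_yield(0.4, 72)   # ~0.75 for a new process at D0 = 0.4/cm²
g = gross_dies(300, 72)      # ~903; calculators with edge exclusion count fewer
print(round(y, 2), g, int(g * y))
```

The ~75% yield matches the screenshot; the difference in good dies comes entirely from how many gross dies each method fits on the wafer.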
 
  • Like
Reactions: lightmanek

PotatoWithEarsOnSide

Senior member
Feb 23, 2017
664
701
106
How new is the process really if Apple have invested heavily in it for high volume production?
Assuming each wafer costs $15k, I suppose it doesn't massively matter if it works out at $20 or $25 per die, which is the difference between the 75% and 93% yields.
What do you suppose an 8C die costs Intel?
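Taking the $15k wafer as a given (it's an assumption, not a confirmed price), the per-die figures at the two yield scenarios discussed above fall out directly:

```python
# Per-die cost at the two good-die counts from this thread:
# 813 (the ~93% yield claim) and 612 (the 0.4 defect density case).
wafer_cost_usd = 15_000  # assumed 7 nm wafer price

per_die_93 = wafer_cost_usd / 813   # ~$18.5 per die
per_die_75 = wafer_cost_usd / 612   # ~$24.5 per die
print(round(per_die_93, 2), round(per_die_75, 2))
```

So "roughly $20 or $25 per die" brackets the two cases fairly well.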
 
  • Like
Reactions: lightmanek

NostaSeronx

Diamond Member
Sep 18, 2011
3,686
1,221
136
Read this on the GloFlo page. Has it changed?
12FDX has been delayed as the performance targets were not high enough. Also, most GlobalFoundries customers want to get used to 22FDX before jumping to 12FDX. Customers utilize 22FDX, give recommendations for 12FDX, 12FDX becomes that much better, etc.

R&D for 12FDX aimed at digital and automotive/industrial optimization started this quarter and will finish in 2H 2021/2022. Whereas RF R&D with 12FDX will start in 2H 2019 and end in 1H 2023. So, digital HVM might not be till 2022 and RF-integrated HVM won't be till 2023.

For example: 22FDX started in 2017, but HVM ramp isn't till between Q1-Q2 2019.
Those optimizations matter more than being first to the node.
 
Last edited: