AMD "Next Horizon Event" Thread


H T C

Senior member
Nov 7, 2018
555
395
136
How new is the process really if Apple has invested heavily in it for high-volume production?
Assuming each wafer costs $15k, I suppose it doesn't massively matter if it works out at $20 or $25 per die, which is the difference between the 75% and 93% yields.
What do you suppose an 8C die costs Intel?
Dunno the size of Apple's chip(s): if the size is similar, then you are correct, in which case there should be many more good dies. With 0.1 defect density, there would be 756 good dies vs 612 for 0.4 defect density.
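For what it's worth, those yield and cost figures are consistent with a simple Poisson defect model. A quick sketch (the $15k wafer cost is from above; the ~72 mm² die area and ~815 gross dies per 300 mm wafer are my assumptions, so the counts land close to, not exactly on, the figures quoted):

```python
import math

def poisson_yield(die_area_cm2: float, defect_density: float) -> float:
    """Fraction of defect-free dies under a simple Poisson defect model."""
    return math.exp(-die_area_cm2 * defect_density)

WAFER_COST = 15_000   # $ per 7nm wafer (from the post above)
GROSS_DIES = 815      # assumed whole ~72 mm² dies on a 300 mm wafer
DIE_AREA = 0.72       # cm²

for d0 in (0.1, 0.4):  # defects per cm²
    y = poisson_yield(DIE_AREA, d0)
    good = GROSS_DIES * y
    print(f"D0={d0}: yield {y:.0%}, ~{good:.0f} good dies, ${WAFER_COST / good:.2f} per good die")
```

With these assumptions it prints ~93%/758 dies (~$19.78 each) and ~75%/611 dies (~$24.55 each), right next to the 93%/756 and 75%/612 figures above; the small gap is just the assumed gross die count.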
 

Gideon

Golden Member
Nov 27, 2007
1,629
3,658
136
Dunno the size of Apple's chip(s): if the size is similar, then you are correct, in which case there should be many more good dies. With 0.1 defect density, there would be 756 good dies vs 612 for 0.4 defect density.
Apple chips are waaay waaay bigger AFAIK
 

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
Fact, you say? It's difficult to predict with much certainty what the supply situation will look like at TSMC, let alone to claim factual knowledge. Assuming consumer is still chiplet + IO, you think that "massive" ~72mm² die (or two) is going to create supply issues? On what basis?

It's a new node that's being shared with other companies, and what AMD is getting is being split with their GPUs, and then what is being used for Zen 2 has to be split across every product segment. Those are all facts, so it's not difficult to believe that chiplet supply will be a constraint for AMD.

We don't know how much of a clock bump they'll get, but we can see from 7nm Vega that 15% isn't unreasonable, and even 20% is possible. The IPC gains are less certain, but 10% should be doable given the amount of time they'd had to work on everything. Add all of that together and AMD is probably at parity with Intel, maybe even winning when you factor in power draw. Unless they start charging Intel level prices, the demand is going to outstrip supply.
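Compounding those two estimates (both are guesses from the post above, not confirmed numbers) gives a rough idea of the combined single-thread gain:

```python
clock_gain = 0.15  # assumed clock bump, extrapolated from 7nm Vega
ipc_gain = 0.10    # assumed IPC gain

# Gains multiply rather than add: (1 + 0.15) * (1 + 0.10) - 1
combined = (1 + clock_gain) * (1 + ipc_gain) - 1
print(f"combined single-thread gain: {combined:.1%}")
```

That's ~26.5%, a bit more than the naive 25% you'd get from simply adding the two.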

We also know that AMD is starting with the server chips first this time around. Each one of those is going to use eight chiplets, and it's not inconceivable that they'll release variants to make use of binned chiplets, such as a CPU with 48 functional cores (each chiplet having two cores disabled), leaving few chiplets available for consumer parts. As long as demand for Epyc is strong, AMD has every reason to produce those, since the margins are significantly higher than in the consumer markets.

Put it this way, if there weren't supply issues, why wouldn't AMD launch Ryzen 3000 series CPUs alongside Epyc? This is even more the case if they're extremely competitive with Intel's best chip, which itself seems to be in limited supply at the moment.
 

Jackie60

Member
Aug 11, 2006
118
46
101
So does all this mean an 8-core/16-thread 3000X at 7nm has a decent chance of beating a 9900K in gaming, and would it be a worthy upgrade from a 5960X at 4.5 GHz?
 

H T C

Senior member
Nov 7, 2018
555
395
136
Apple chips are waaay waaay bigger AFAIK
Was not aware.

That said, how long ago did Apple start having these chips manufactured for them? If it's recent, then the process is not mature @ all. I'm not familiar with chips from Apple's products.
 

Gideon

Golden Member
Nov 27, 2007
1,629
3,658
136
Was not aware.

That said, how long ago did Apple start having these chips manufactured for them? If it's recent, then the process is not mature @ all. I'm not familiar with chips from Apple's products.
"Way bigger" was somewhat of an overstatement, but they are certainly bigger:

The A12 is 83.27 mm² and is in the latest highest-end iPhones, which have already been on sale for a month (which means quite a buildup before that).
The A12X is 122 mm² and is in the latest iPad, which was released about a week ago.

Both will be sold in large quantities before AMD ramps up production (in Q1, I believe). The 7nm HP process AMD uses is a bit different, though.
 

H T C

Senior member
Nov 7, 2018
555
395
136
"Way bigger" was somewhat of an overstatement, but they are certainly bigger:

The A12 is 83.27 mm² and is in the latest highest-end iPhones, which have already been on sale for a month (which means quite a buildup before that).
The A12X is 122 mm² and is in the latest iPad, which was released about a week ago.

Both will be sold in large quantities before AMD ramps up production (in Q1, I believe). The 7nm HP process AMD uses is a bit different, though.

So you're saying they will start being sold before AMD's Zen 2 but haven't yet: is this correct?

If so, then the process is being heavily invested in but it's not mature @ all, as I suspected. This being the case, a 0.4 (or close to it) defect density is to be expected. However, depending on the defect, it's entirely possible that some (most???) of these defective dies can still be used for lower-end chips, such as the R5 or R3 families.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,684
1,268
136
It's a new node that's being shared with other companies, and what AMD is getting is being split with their GPUs, and then what is being used for Zen 2 has to be split across every product segment. Those are all facts, so it's not difficult to believe that chiplet supply will be a constraint for AMD.

To put things in perspective, TSMC's 7nm capacity is estimated to be around 1.1M wafers in 2019, and Apple will take less than half of that. AMD would need around a quarter of that capacity to service the entire PC market (i.e. 100% market share) with a single-chiplet design. So yes, there's quite a lot of theoretical wiggle room depending on how things have been negotiated.
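A back-of-the-envelope check of that claim, with every input an assumption on my part (annual PC CPU shipments and usable chiplets per wafer are round guesses; only the 1.1M wafer estimate comes from the post):

```python
PC_MARKET_UNITS = 260e6        # assumed annual PC CPU shipments, units
GOOD_CHIPLETS_PER_WAFER = 750  # assumed, in line with the yield estimates upthread
TSMC_7NM_WAFERS_2019 = 1.1e6   # estimated capacity quoted above

# Single-chiplet design, one chiplet per CPU, 100% market share:
wafers_needed = PC_MARKET_UNITS / GOOD_CHIPLETS_PER_WAFER
share = wafers_needed / TSMC_7NM_WAFERS_2019
print(f"~{wafers_needed:,.0f} wafers, ~{share:.0%} of estimated 2019 7nm capacity")
```

That comes out to roughly a third of capacity, the same ballpark as the "around a quarter" figure, and the gap shrinks with more optimistic yield assumptions.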

Unless they start charging Intel level prices, the demand is going to outstrip supply.

If they can create the mindset that their budget parts == Intel's best parts, then they can charge greater than Intel prices.

Put it this way, if there weren't supply issues, why wouldn't AMD launch Ryzen 3000 series CPUs alongside Epyc?

There could be a million reasons, and we don't know the exact timing of server versus consumer anyway.
 

maddie

Diamond Member
Jul 18, 2010
4,740
4,674
136
It's a new node that's being shared with other companies, and what AMD is getting is being split with their GPUs, and then what is being used for Zen 2 has to be split across every product segment. Those are all facts, so it's not difficult to believe that chiplet supply will be a constraint for AMD.
[...]
Put it this way, if there weren't supply issues, why wouldn't AMD launch Ryzen 3000 series CPUs alongside Epyc? This is even more the case if they're extremely competitive with Intel's best chip, which itself seems to be in limited supply at the moment.
AMD is a small company relative to Intel.

Do you really think they have the resources to do an across the board release of many 2019 products at once?

Also, it's good to keep the enthusiasm at an elevated level through regular releases, as they have done since Ryzen originally launched. Stoking the stock.

Regarding Vega 20: a high-priced professional GPU will not sell in the millions.
 

jpiniero

Lifer
Oct 1, 2010
14,591
5,214
136
Put it this way, if there weren't supply issues, why wouldn't AMD launch Ryzen 3000 series CPUs alongside Epyc? This is even more the case if they're extremely competitive with Intel's best chip, which itself seems to be in limited supply at the moment.

I would say one reason is that AMD might not yet know what actual Epyc 2 demand is going to be, for which models, and what they'll get from yields. I imagine they would rather sell lower-end Epyc than even Ryzen 7.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Y'know, and manufacturing costs from being able to minimize the die size on a new process. And being able to bin and reuse those dies across the whole product stack. And being better able to optimize for frequency with most IO spun off to a separate die. And being able to provide semi-custom designs at a dramatically lower price point.

So again, it's a tradeoff. You get flexibility and pay in terms of cost, performance, and power. They are *only* doing this because of the slowdown in scaling.

I see a nice benefit for servers, especially if they want to get it out quickly. And that's what it looks like AMD did. But for the mass client market, where the dies sold are in the 100mm² or smaller range, it's all losses and no wins.

Yes, and I do realize the chiplets are only ~70mm². That's why I can see them making a chiplet + I/O part that replaces Pinnacle Ridge. But for an iGPU part?

That's why it makes more sense to do it this way:
-iGPU-less Desk-Ridge and EPYC = chiplets on MCM
-Raven Ridge market = monolithic.

I'm not sold on memory latency being a big deal, in gaming or in general; otherwise we would see much better scaling with memory speed on Intel systems.

Go open up your computer and look at the board. See where the CPU is. Then look at where the DRAM is. That distance = latency. Say in an imaginary world you make the DRAM zero latency and infinite bandwidth, but there's still the latency of having to travel the physical distance.

Integration achieves the lowest physical distance between the CPU and the DRAM. That's how you reduce the latency. Of course other things matter like how the memory controller deals with things but all things equal, integration is faster because of that.

For larger Server chips sure. For Desktop/Laptop parts? I don't think so.

I agree. And it probably makes sense on Kabylake-G successors. Everything else its better having it monolithic.

AMD is a small company relative to Intel.

Do you really think they have the resources to do an across the board release of many 2019 products at once?

Even Intel doesn't do this, but that has more to do with needing time to cater to different markets than anything else. Also, just because you have more money on hand doesn't mean you go wasting it.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I wonder if they can make an HBM/DDR6 chiplet and have the entire CPU/iGPU working off that. That may be the case for next-gen consoles.

Sounds like an AMD Fenghuang successor. Move to the newest CPU/GPU architectures and use HBM. But HBM will need silicon interposers of some sort.
 

french toast

Senior member
Feb 22, 2017
988
825
136
Looks like 7nm yields are pretty decent, and the node will mature quickly due to the amount of use it is getting from various customers.

The only possible issue I can see for 7nm and AMD in general is the rumoured clocking potential missing its targets, but we don't know for sure yet. Also, AMD has made the correct decision in going wider and presumably enlarging the caches, which should mitigate that to some extent.

All in all it looks very exciting for AMD on the CPU front, and worrying for Intel in the short term.
 

PotatoWithEarsOnSide

Senior member
Feb 23, 2017
664
701
106
I wonder how many dies per wafer could be salvaged as 4/6C parts...
AMD could potentially yield 95% usable dies per wafer. Certainly so if IPC and clocks get chunky bumps and they're happy for the defective dies to go into the consumer R3/R5 space. Conceptually, those defective dies have a $0 cost per die, with the effective cost per die based only upon fully functional dies. IMO, I don't see a 2-chiplet CPU hitting the consumer market though, except as an intentionally low-volume halo R9 product.
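That 95% figure is consistent with a simple salvage model. A sketch, assuming Poisson defects on a ~72 mm² die at a pessimistic 0.4/cm² defect density, and assuming 80% of defective dies are still sellable as cut-down 4C/6C parts (both numbers are my assumptions):

```python
import math

DIE_AREA = 0.72     # cm², assumed ~72 mm² chiplet
D0 = 0.4            # defects per cm², pessimistic early-node assumption
SALVAGE_RATE = 0.8  # assumed share of defective dies still usable as 4C/6C parts

full_yield = math.exp(-DIE_AREA * D0)                  # fully working 8C dies
usable = full_yield + SALVAGE_RATE * (1 - full_yield)  # 8C plus salvaged parts
print(f"8C yield: {full_yield:.0%}; total usable dies: {usable:.0%}")
```

That prints 75% and 95% respectively, so the 95%-usable claim only requires that most defects land in a core rather than in shared logic.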
 

krumme

Diamond Member
Oct 9, 2009
5,952
1,585
136
The yield on salvaged 6C parts must be relatively much higher than on the full 8C Zen die, due to not having all that IO stuff.

What's the estimated CPU-core-to-mm² ratio for the new chiplets vs the old Zen die?
 

Arzachel

Senior member
Apr 7, 2011
903
76
91
That's why it makes more sense to do it this way:
-iGPU-less Desk-Ridge and EPYC = chiplets on MCM
-Raven Ridge market = monolithic.
I would be inclined to agree if not for a single detail. There's a reasonably good chance that the MCM layout could let AMD ditch the interposer requirement and hook up a 1Hi HBM chip to the iGPU through two IF links.
Go open up your computer and look at the board. See where the CPU is. Then look at where the DRAM is. That distance = latency. Say in an imaginary world you make the DRAM zero latency and infinite bandwidth, but there's still the latency of having to travel the physical distance.

Integration achieves the lowest physical distance between the CPU and the DRAM. That's how you reduce the latency. Of course other things matter like how the memory controller deals with things but all things equal, integration is faster because of that.
Alright, let me put it this way: I am skeptical that lower main memory access latencies past a certain point translate into significant performance gains in desktop workloads including gaming.
 

PotatoWithEarsOnSide

Senior member
Feb 23, 2017
664
701
106
I agree. I don't think that a hypothetical 20% reduction in latency would yield anywhere near a 20% improvement in performance. The gains are clearly more marginal as latencies improve.

I would add that just because something was clearly the better option according to theory, it doesn't necessarily hold that respective implementations strictly perform in accordance with the theory; tech advancements may result in the lesser option being implemented in a superior way, resulting in better than expected results.
 

Veradun

Senior member
Jul 29, 2016
564
780
136
Put it this way, if there weren't supply issues, why wouldn't AMD launch Ryzen 3000 series CPUs alongside Epyc? This is even more the case if they're extremely competitive with Intel's best chip, which itself seems to be in limited supply at the moment.

They could very well be different designs (i.e. Ryzen still a SoC, not a chiplet design), or the desktop part may need a respin for better speeds.
 

H T C

Senior member
Nov 7, 2018
555
395
136
What IS on a mature process is the IO chiplet.

This is the biggest of all the Epyc chiplets and the one that's supposed to have the lowest defect density. Any word on its dimensions yet, so we can use the Die Per Wafer Calculator to make some estimates?

Because of its size, the yields will not be as good as with smaller dies, and, on top of that, the chances of a defect killing the chip entirely should be higher, as opposed to the CCX chiplets, where a defect may kill part of the die while the rest remains fully active. However, the defect density should be far smaller, and that should mitigate the yield issue.

Had AMD made the IO chiplet in 7nm as well, it could have ended in disaster because of a much worse defect density combined with the higher chance that a defect would kill the entire IO chiplet. Smart decision IMO to go with a very mature process for such a critical component.
 

CatMerc

Golden Member
Jul 16, 2016
1,114
1,149
136
We'll see I guess. Beside price, which we can argue is still ultimately flexible as long as there's proper value at the $200 and $300 price points, there's one more thing: whatever Ryzen 3000 can do at 16c/32t, Threadripper will do much better.

My bet is this gen will focus on maximizing 8c/16t performance on AM4. I'm not 100% convinced about this; maybe 12c/24t still makes sense, and maybe I'm missing an important part of the picture considering this is just an entertaining hobby for me (no professional background, just enough engineering education to understand the basics), but I would rather entertain the idea of Ryzen 3000 being all APUs than Threadrippers with bandwidth issues.

One thing is for certain though, I think I may have to buy one.
Bandwidth need is down to architecture just as much as core count. Saying dual channel can't feed 16 cores because the previous gen fed 8 cores isn't really an argument. Rome is 16c per dual channel lol

That said, I don't expect price per core to drop significantly with Ryzen 3000. What I expect is AM4's top end to go up. So 8c/16t might go down to $250-$300, as it is right now, but there will be a 16c for $499-$599 (like the 1950X on sale right now).
This keeps the sweet spot at the same price point, but ASP will go up by virtue of high-end products being available for those who want them.

At this point I'd expect AMD to set aggressive prices whenever Intel sets aggressive prices, as they have the advantage here.
 

Atari2600

Golden Member
Nov 22, 2016
1,409
1,655
136
Go open up your computer and look at the board. See where the CPU is. Then look at where the DRAM is. That distance = latency. Say in an imaginary world you make the DRAM zero latency and infinite bandwidth, but there's still the latency of having to travel the physical distance.

Integration achieves the lowest physical distance between the CPU and the DRAM. That's how you reduce the latency. Of course other things matter like how the memory controller deals with things but all things equal, integration is faster because of that.

Please.

Physical transmission distances make up a very, very small fraction of overall latency. It's virtually all in the signal processing, not the signal transmission.

You can beat on the I/O Controller for possibly increasing latency, but don't use propagation delay as the reason.

(Appreciable) Delays will be the result of the clock rate of the infinity fabric and the internal speed of the memory controller.


You have a transmission delay of around 1 ns for every 15 cm travelled. So the extra latency added (due to transmission route length) by going from the core through an on-package memory controller, rather than directly from a core-located memory controller, will be measured in picoseconds, not nanoseconds.
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
This interconnect setup is pure, blissful insanity.
I genuinely wonder how they managed to make it not melt the socket.
 

DrMrLordX

Lifer
Apr 27, 2000
21,629
10,841
136
Read this on the GloFo page. Has it changed?

"GF Fab 1 in Dresden, Germany is currently putting the conditions in place to enable the site's 12FDX development activities and subsequent manufacturing. Customer product tape-outs are expected to begin in the first half of 2019."

@NostaSeronx has already weighed in on the subject, but I'll add this as well:

https://forums.anandtech.com/thread...ies-stops-all-7nm-development.2553184/page-13

Read to the bottom of that page. Nosta thinks 2022-2023, while @Dayman1225 says 2020. Make of it what you will.
 