Question 7nm I/O die ready, what about GlobalFoundries?


RetroZombie

Senior member
Nov 5, 2019
464
382
96
I see many reasons not to do the I/O die on 7nm:
  • the process is expensive
  • it yields worse
  • the chip can't be salvaged or even binned
  • zero performance improvement
  • very low power savings
  • no integrated GPU to benefit from it
  • low foundry capacity
  • …..
 

RetroZombie

Senior member
Nov 5, 2019
464
382
96
I still dream about a successor to Puma+.
I used to think the same, but the Atom is total crap. Why would AMD keep creating its own version of a crappy CPU to compete with something that shouldn't even exist?

I just don't like that they keep releasing mediocre CPUs that damage their brand, like the new Athlon Silver 3050U.
That CPU should have kept the 2C/4T configuration like the Gold 3150U.

If they really needed to have two models, they should have done one of three things to cut down the 3050U:
  • cut the L3 cache in half
  • lower the clock speed by 400MHz
  • disable boost and keep a fixed clock around 2.6GHz
And get rid of the Gold and Silver names ASAP; Intel's marketing these days is badly run, and following them is really not a good idea.
 

RetroZombie

Senior member
Nov 5, 2019
464
382
96
Wait. What? Remember we're talking wafer demand also. AMD made a TON of Polaris GPUs, and those GPUs took up a lot of wafer space compared to Picasso/Raven2 dies. I find it hard to believe that Picasso and Raven2 are taking up more wafer space than Polaris did.
The X570 chipset is made by GF?
 

RetroZombie

Senior member
Nov 5, 2019
464
382
96
I'm surprised they didn't release (and won't release) a GF 12nm I/O die with an integrated Vega with 4 CUs (up to 8 CUs) to pair with the Zen 2 chiplet.

Keeping the 6- to 16-core desktop line GPU-less is very bad. If AMD really wants to make inroads in the OEM market, they need to do it, even if it's OEM exclusive. Also, Renoir is high-end/premium in mobile, but in desktop it's just midrange to low-end...

If they don't want to do it, fine, but where is the <50W GPU line at less than $100 to pair with their GPU-less CPUs? They now even have a bigger market to address thanks to the Intel F CPU lineup. They could even do them on 28nm TSMC (GCN3 based).

Not everyone needs an RX 570 and up; they need to resolve that.
Sometimes I think they need a product manager who glues everyone together: AMD, the OEMs, and the OEM customers.
I don't believe that HP, Dell, ... don't tell them that; if they really don't, then they are clueless about what they need to sell and what the market demands.

And an added note: if Intel keeps having problems, AMD will need capacity, and even an eight-core Zen 1 chiplet done at GF 12nm to pair with the current I/O die would still be interesting and sellable, but just for OEMs of course.
 

ao_ika_red

Golden Member
Aug 11, 2016
1,634
678
106
I used to think the same, but the Atom is total crap. Why would AMD keep creating its own version of a crappy CPU to compete with something that shouldn't even exist?
I won't call the N5000 crap, because for new users (which I believe is the reason Atom exists) it's more than enough. It's the crappy 5400rpm HDD and 2GB of RAM that usually grind everything to a halt. And remember, this Gemini Lake Atom is supposed to offer near-desktop Q6600 performance in a 6W package, which for me is quite interesting.
I just don't like that they keep releasing mediocre CPUs that damage their brand, like the new Athlon Silver 3050U.
That CPU should have kept the 2C/4T configuration like the Gold 3150U.
Well, I certainly missed this announcement. Dumb move, AMD. 2 threads in 2020 is just stupid.
And get rid of the Gold and Silver names ASAP; Intel's marketing these days is badly run, and following them is really not a good idea.
Ditto this.
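As a rough sanity check of the near-Q6600-in-6W claim above, a quick comparison of TDPs (which are not measured power draw, so this is an order-of-magnitude illustration only):

```python
# Crude perf-per-watt reading of the claim: roughly Q6600-class
# performance in a 6W package. The Q6600 (G0 stepping) was rated ~95W TDP.
q6600_tdp_w = 95
gemini_lake_tdp_w = 6
ratio = q6600_tdp_w / gemini_lake_tdp_w
print(round(ratio, 1))  # ~15.8x the perf/W, if performance really were equal
```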
 

RetroZombie

Senior member
Nov 5, 2019
464
382
96
I won't call the N5000 crap, because for new users (which I believe is the reason Atom exists) it's more than enough. It's the crappy 5400rpm HDD and 2GB of RAM that usually grind everything to a halt. And remember, this Gemini Lake Atom is supposed to offer near-desktop Q6600 performance in a 6W package, which for me is quite interesting.
I admit I never saw an N5000 to know how it performs. The only Atoms that were more or less OK were the N2800/N3500 line; I had very bad experiences with the predecessors and successors. But you're right, they are paired with very low-end parts and little RAM, which makes things even worse.

But with AMD being so small compared to Intel, concentrating their resources on the cat cores would be a very bad idea, unless the console guys demanded them due to the very small die space they used.
 

RetroZombie

Senior member
Nov 5, 2019
464
382
96
If 1 CU is enough for driving the onboard display, so be it. It will be a much less painful experience when troubleshooting display-related issues.
Not only that, but also something I forgot to mention: what about the HUGE number of motherboards being sold with multiple monitor outputs that will never get any use?
 

ao_ika_red

Golden Member
Aug 11, 2016
1,634
678
106
Why would AMD keep playing that game, when they can take the high-margin, high-value markets away from Intel instead?
Brand awareness for new users who usually can't afford a $500 laptop. And in the third world, $300 to $400 is the sweet spot.
But with AMD being so small compared to Intel, concentrating their resources on the cat cores would be a very bad idea, unless the console guys demanded them due to the very small die space they used.
I didn't say they have to develop a new uarch, instead...
So, as all mainstream and high-end mobile parts move towards 7nm and beyond, I hope they will optimise their mobile Athlon lineup and keep it on the matured 12nm. By doing so, I reckon an Athlon laptop could be had for less than $300, which would give Atom laptops a run for their money.
Not only that, but also something I forgot to mention: what about the HUGE number of motherboards being sold with multiple monitor outputs that will never get any use?
I think it's part and parcel of the AM4 ecosystem, because they also sell APUs and AMD feels they have to accommodate them.
 

maddie

Diamond Member
Jul 18, 2010
3,159
1,832
136
Don't forget the new 12LP+ node.

“Derived from GF’s existing 12nm Leading Performance (12LP) platform, GF’s new 12LP+ provides either a 20% increase in performance or a 40% reduction in power requirements over the base 12LP platform, plus a 15% improvement in logic area scaling. A key feature is a high-speed, low-power 0.5V SRAM bit cell that supports the fast, power-efficient shuttling of data between processors and memory, an important requirement for AI applications in the computing and wired infrastructure markets.”
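A quick sanity check of the quoted 12LP+ numbers, applied to a normalized 12LP baseline (the baseline of 1.0 is arbitrary; only the ratios come from the press release):

```python
# Applying GF's quoted 12LP+ gains to a normalized 12LP baseline of 1.0.
base_perf, base_power, base_area = 1.0, 1.0, 1.0

# GF quotes EITHER +20% performance OR -40% power, plus -15% logic area.
perf_option = (base_perf * 1.20, base_power, base_area * 0.85)
power_option = (base_perf, base_power * 0.60, base_area * 0.85)

print(perf_option)   # (1.2, 1.0, 0.85)
print(power_option)  # (1.0, 0.6, 0.85)
```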
 
  • Like
Reactions: Olikan and krumme

Adonisds

Member
Oct 27, 2019
90
32
51
The I/O die derived from 7nm IP might be put on 6nm EUV, since it allows for retapeouts (RTO)/transferred IP with NTO: ~80-something masks down to upper-60s (65-69) masks. That should put it on par, cost-wise (time and total price), with 14LPP/12LP/12LP+. N7+ doesn't have mixed-signal improvements, but N6 has some of N5's AMS boosters.

GlobalFoundries doesn't have any plans for 7nm FinFET or beyond (3nm next-gen transistor). The I/O die is definitely moving to TSMC. N6 uses the same fabs as N7/N7+, and the N5 node will have its own fab. Thus, both products can run at TSMC.
What exactly does that N7 to N6 IP compatibility mean? Why do people say it will be cheap to move to N6? Won't N6 require new, expensive EUV masks? Plus, since the density is improved, maybe some new DUV masks too. Isn't making new masks what makes moving to a new process really expensive?
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,031
612
136
What exactly does that N7 to N6 IP compatibility mean?
Metal pitch is preserved between N7 and N6, while N7+ forces a new metal pitch density, requiring a new design for the metal shrink; all logic, SRAM, and analog/mixed-signal blocks have to be recreated.

If something is created in N7 and then ported to N6, it has the same CPP/Mx, so the design can be output 1:1 even if the input is different. In fact, the EUV model can probably be closer to the software model than DUV, making EUV a more accurate representation of the planned output than DUV.
Why do people say it will be cheap to move to N6? Won't N6 require new, expensive EUV masks? Plus, since the density is improved, maybe some new DUV masks too. Isn't making new masks what makes moving to a new process really expensive?
193i mask counts are increasing to the point where EUV masks can reduce the total mask count, which offsets the price of EUV.

~88 masks, then moving to EUV:
N6 has 5 EUV layers, which can lead to up to 20 masks being removed, such that an 88-mask N7 design becomes a 68-mask N6 design.

Mask production (masks per day) favors lower mask counts, which means customers will get N6 designs ~10+ days sooner than N7 designs.
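The ~88 → ~68 arithmetic above can be sketched as a toy model; the assumption that each converted layer previously needed ~5 DUV masks is illustrative, not TSMC's actual layer breakdown.

```python
# Toy model of the N7 -> N6 mask-count reduction described above.
# Assumption (illustrative): each of the 5 EUV layers replaces one DUV
# layer that previously needed ~5 masks (SAQP-style multi-patterning),
# i.e. a net saving of 4 masks per converted layer.
n7_masks = 88
euv_layers = 5
duv_masks_per_converted_layer = 5                        # assumed multi-patterning cost
net_saved_per_layer = duv_masks_per_converted_layer - 1  # one EUV mask remains

n6_masks = n7_masks - euv_layers * net_saved_per_layer
print(n6_masks)  # 68, matching the ~88 -> ~68 figure above
```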
 
Last edited:
  • Like
Reactions: Adonisds

Adonisds

Member
Oct 27, 2019
90
32
51
Metal pitch is preserved between N7 and N6, while N7+ forces a new metal pitch density, requiring a new design for the metal shrink; all logic, SRAM, and analog/mixed-signal blocks have to be recreated.

If something is created in N7 and then ported to N6, it has the same CPP/Mx, so the design can be output 1:1 even if the input is different. In fact, the EUV model can probably be closer to the software model than DUV, making EUV a more accurate representation of the planned output than DUV.
193i mask counts are increasing to the point where EUV masks can reduce the total mask count, which offsets the price of EUV.

~88 masks, then moving to EUV:
N6 has 5 EUV layers, which can lead to up to 20 masks being removed, such that an 88-mask N7 design becomes a 68-mask N6 design.

Mask production (masks per day) favors lower mask counts, which means customers will get N6 designs ~10+ days sooner than N7 designs.
If I understood correctly, designing the masks will be easy when moving to N6. But the masks will be different and will have to be remanufactured, right? Isn't manufacturing the costly part? Designing for N6 might be cheaper than for N7 because there are fewer masks, but if you already did the N7 design and have all the masks, why do it all again for N6?
 

Adonisds

Member
Oct 27, 2019
90
32
51
Metal pitch is preserved between N7 and N6, while N7+ forces a new metal pitch density, requiring a new design for the metal shrink; all logic, SRAM, and analog/mixed-signal blocks have to be recreated.

If something is created in N7 and then ported to N6, it has the same CPP/Mx, so the design can be output 1:1 even if the input is different. In fact, the EUV model can probably be closer to the software model than DUV, making EUV a more accurate representation of the planned output than DUV.
193i mask counts are increasing to the point where EUV masks can reduce the total mask count, which offsets the price of EUV.

~88 masks, then moving to EUV:
N6 has 5 EUV layers, which can lead to up to 20 masks being removed, such that an 88-mask N7 design becomes a 68-mask N6 design.

Mask production (masks per day) favors lower mask counts, which means customers will get N6 designs ~10+ days sooner than N7 designs.
Also, if metal pitch and CPP are maintained, how is the higher density achieved?
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,031
612
136
Also, if metal pitch and CPP are maintained, how is the higher density achieved?
7.5T-DUV and 6T-DUV densities are supported, which are retapeout-able (direct DUV (~88 masks) to EUV (~68 masks)). However, they also have denser 7.5T-EUV/6T-EUV logic libraries, which require a new tapeout.

DUV has an additional [contact poly pitch and area] penalty from lower routing effectiveness [LELE/SADP/SAQP/1D], which EUV [2D/LE/SE/etc.] negates, reducing the need for a longer CPP and allowing better routing at smaller cell heights (track heights).

N6 RTO (retapeout path) => No density gain
N6 NTO (new tapeout path) => Density gain
If I understood correctly, designing the masks will be easy when moving to N6. But the masks will be different and will have to be remanufactured, right? Isn't manufacturing the costly part? Designing for N6 might be cheaper than for N7 because there are fewer masks, but if you already did the N7 design and have all the masks, why do it all again for N6?
The masks are EUV, but there are fewer total masks. Masks expire anyway; they have a limited lot life. There is no reason not to go to a node that is cheaper long-term.
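The "masks expire anyway" point can be illustrated with a toy amortization; all figures here (cost per mask, wafers per lot life) are made-up placeholders, the point being only that the recurring replacement cost scales with mask count.

```python
# Toy amortization of a mask set's cost over its lot life.
def mask_cost_per_wafer(mask_count, cost_per_mask, wafers_per_lot_life):
    """Mask-set cost spread over the wafers run before the set expires."""
    return mask_count * cost_per_mask / wafers_per_lot_life

n7_cost = mask_cost_per_wafer(88, 1.0, 10_000)  # normalized cost units
n6_cost = mask_cost_per_wafer(68, 1.0, 10_000)
print(round(n6_cost / n7_cost, 2))  # 0.77: ~23% cheaper each time the set is remade
```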
 
Last edited:
  • Like
Reactions: Adonisds

teejee

Senior member
Jul 4, 2013
299
92
101
I'm surprised they didn't release (and won't release) a GF 12nm I/O die with an integrated Vega with 4 CUs (up to 8 CUs) to pair with the Zen 2 chiplet.

Keeping the 6- to 16-core desktop line GPU-less is very bad. If AMD really wants to make inroads in the OEM market, they need to do it, even if it's OEM exclusive. Also, Renoir is high-end/premium in mobile, but in desktop it's just midrange to low-end...


They will launch up-to-8C/16T desktop APUs in a few months, based on the mobile 4000-series dies of course (Zen 2); those will be great for powerful OEM/office computers.

The GPU-less desktops will continue, of course, and will be first with new CPU designs; they share dies with the server lineup, which is a very efficient way for AMD to stay competitive with limited resources.

So AMD is, in my opinion, doing an excellent balancing act between what the market wants and what they can manage to develop.
Their product planning has been great the past few years, IMHO.
 
  • Like
Reactions: RetroZombie

Adonisds

Member
Oct 27, 2019
90
32
51
7.5T-DUV and 6T-DUV densities are supported, which are retapeout-able (direct DUV (~88 masks) to EUV (~68 masks)). However, they also have denser 7.5T-EUV/6T-EUV logic libraries, which require a new tapeout.

DUV has an additional [contact poly pitch and area] penalty from lower routing effectiveness [LELE/SADP/SAQP/1D], which EUV [2D/LE/SE/etc.] negates, reducing the need for a longer CPP and allowing better routing at smaller cell heights (track heights).

N6 RTO (retapeout path) => No density gain
N6 NTO (new tapeout path) => Density gain
The masks are EUV, but there are fewer total masks. Masks expire anyway; they have a limited lot life. There is no reason not to go to a node that is cheaper long-term.
How long do masks last?
 

scannall

Golden Member
Jan 1, 2012
1,675
1,041
136
If 1 CU is enough for driving the onboard display, so be it. It will be a much less painful experience when troubleshooting display-related issues.
I agree. A bare-bones 1 CU worth of video in the chipset would be a huge improvement, and would make it an easier sell to OEMs.
 

moinmoin

Golden Member
Jun 1, 2017
1,661
1,603
106
I agree. A bare-bones 1 CU worth of video in the chipset would be a huge improvement, and would make it an easier sell to OEMs.
1 CU by itself is completely worthless without a display engine, all the I/O, codec support, etc. Those are all fixed costs that decent graphics support will always need. The 3 CUs of the lowest-budget Zen APU die, Raven2, are probably where AMD sees the tipping point of the balance. And Raven2 is still a pretty big die, all things considered.
 
  • Like
Reactions: NTMBK

beginner99

Diamond Member
Jun 2, 2009
4,541
886
126
So do you think they change the I/O die to 7nm for the next products, or will they keep hanging on to the GlobalFoundries orders?
I expect it will stay on 14/12nm at GF even for Zen 3, maybe also Zen 4. There are many reasons for that:

1. WSA
2. I/O doesn't scale that much anyway, and power use (of I/O) is less important in desktop and server
3. 14/12nm should be cheap by now and yield near 100%
4. Supply. AMD can't get enough 7nm wafers already. Imagine if all I/O dies also had to be on 7nm at TSMC. With the I/O die at GF, AMD can make and sell more chips in total.

Also, if we believe some posters here (huge grain of salt) and GF really invests in low-power processes (12FDX), then maybe that process would actually be better for the I/O die than 7nm. Otherwise I only see the I/O die at 7nm once the chiplets and GPU have moved to 5nm. I expect AMD to stay on 7nm (or its derivatives) for quite a while.
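Point 4 can be illustrated with rough numbers; the wafer count and usable area below are arbitrary placeholders, and the die areas are only ballpark public estimates for Matisse, not AMD's actual figures.

```python
# Toy model: with a fixed TSMC 7nm wafer budget, keeping the I/O die at GF
# frees that 7nm area for CPU chiplets (~74 mm^2 chiplet, ~125 mm^2 client
# I/O die are rough Matisse estimates).
usable_mm2_per_wafer = 55_000   # ~300 mm wafer, ignoring edge loss and yield
wafers_7nm = 1_000

chiplet_mm2, io_die_mm2 = 74, 125

# I/O die on GF 12/14nm: all 7nm area goes to chiplets.
chiplets_if_io_at_gf = wafers_7nm * usable_mm2_per_wafer // chiplet_mm2

# Hypothetical 7nm I/O die: each CPU consumes chiplet + I/O die area on 7nm.
cpus_if_io_at_tsmc = wafers_7nm * usable_mm2_per_wafer // (chiplet_mm2 + io_die_mm2)

print(chiplets_if_io_at_gf, cpus_if_io_at_tsmc)  # ~2.7x more parts from the same wafer budget
```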
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,561
41
91
This thread makes no sense.
The 4000-series mobile APUs have no I/O die.

So where did the "7nm I/O die is ready" idea come from?

And if AMD wanted a 7nm I/O die, they would have had one for their Zen 2 based products last year.

It was an economic choice, not a "we need to go with 14nm and 12nm because we can't do a 7nm I/O die just yet".

Them moving the I/O die to 7nm for newer products will happen if it makes economic sense and/or is required by other considerations like power budget.
 

eek2121

Senior member
Aug 2, 2005
445
333
136
If I'm not mistaken, they will still be making the "Athlon" branded "value" dies, which I think are still 12 and 14nm?


...Kinda sucks that Athlon is used for their crap tier, but whatever.
Indeed. My brain still associates Athlon with performance. I stopped buying the chips when Intel made a comeback.

I think we will see a move from 14nm to 12nm on EPYC 3, and 12nm will stay the same for Ryzen 4. EPYC 4 and Ryzen 5 will likely get 12nm+. The I/O can't be shrunk very well, so it doesn't make sense to spend a lot of money and time trying to make it on a cutting-edge process.
I expect Zen 3 will be on 7nm or 6nm. I am predicting a monolithic 7nm EUV die for Zen 3, but absent that, I don't see them using 14nm or 12nm for it.

AMD seem to be done with being the cheap alternative. Intel can always win the budget markets by pouring millions of dollars into "contra-revenue" products, and basically giving away Atoms. Why would AMD keep playing that game, when they can take the high-margin, high-value markets away from Intel instead?
A lot of people don't realize this and it is something that everyone should keep in mind: AMD does not want the low margin value-oriented business. They will play that game for a while, but when the time is up, they will no longer do so. AMD is aggressively pursuing high margins. (Basically, they are becoming the next Apple)

There are pros and cons to this.

Also, if for some odd reason you don't believe this, watch/listen to their investor meetings/earnings reports, or simply ask yourself: what happened to the Ryzen budget parts?
 

burninatortech4

Senior member
Jan 29, 2014
345
88
101
If 1 CU is enough for driving the onboard display, so be it. It will be a much less painful experience when troubleshooting display-related issues.
I think we've discussed this on here before, but I'll take my own shot at it:

My impression from past products (the RX 550 and RX 560 being prime examples) is that there are diminishing returns in die-space savings from removing CUs. The display, PCIe, encode/decode, and other I/O portions of the die can't be substantially shrunk without completely removing features.

So let's say they include [on the chipset silicon] a dumbed-down display engine with 1 CU that just serves for display out and lacks a media decode block and extraneous I/O. I can't help but question why AMD would want to use the die space for that.

In addition, wouldn't there have to be display traces from the chipset to the motherboard I/O? That would substantially increase PCB complexity.

I would guess the easiest thing would be to include some kind of neutered BMC right next to the display out to keep the PCB complexity down. But that's now a separate part that motherboard OEMs have to include in their BoM.
 
Last edited:
