GlobalFoundries changes plans for 20-nm process

Wreckage

Banned
Jul 1, 2005
5,529
0
0
http://techreport.com/discussions.x/20261

It looks like GlobalFoundries, IBM, and other Common Platform members have done a little soul searching and decided to abandon the gate-first manufacturing approach they've been championing for some time. AMD and GlobalFoundries have made it clear that they're committed to gate-first manufacturing through the 28-nm half-node, but as Real World Technologies' David Kanter reports, GlobalFoundries' 20-nm fab process will be a gate-last one.

It sounds like something they should have done a while ago.
 

Martimus

Diamond Member
Apr 24, 2007
4,490
157
106
This is by no means a surprise. Every fab has jumped off the gate-first bandwagon as soon as they could. Gate-last just has too much of a performance advantage over gate-first, although gate-first should be much cheaper.
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
Yep, this is great news. Products are nearly on the market, so the worst is behind them. Fusion products are at TSMC, so there's no effect on their most important line of products.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Gate-last has been extremely good to Intel, so this doesn't sound like a big surprise.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
This is by no means a surprise. Every fab has jumped off the gate-first bandwagon as soon as they could. Gate-last just has too much of a performance advantage over gate-first, although gate-first should be much cheaper.

Therein lies the critical difference in mentality between market winners and losers in this industry.

One team looks to spend an extra $0.50 per chip to ensure it out-performs the competition (and thus commands out-sized ASPs and gross margins); the other looks to save $0.50 per chip in production costs at the expense of delivering an uncompetitive product that must be sold for tens if not hundreds of dollars less per chip in ASP to keep inventories in check.

It never surprised me that AMD/GloFo and IBM went with gate-first because of the IP landscape situation as well as the time-to-develop/implement issue, but it was their half-baked PR spin on why going gate-first was superior to gate-last that made my sides hurt from all the induced laughter.

And now they got themselves a one-node solution (which in this industry IS the most expensive way to do development and manufacturing) that does little to build confidence in their customers that they are technology leaders compared to TSMC.

Some discussion on the topic from a while back: http://forums.anandtech.com/showthread.php?p=29032366
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Therein lies the critical difference in mentality between market winners and losers in this industry.

One team looks to spend an extra $0.50 per chip to ensure it out-performs the competition (and thus commands out-sized ASPs and gross margins); the other looks to save $0.50 per chip in production costs at the expense of delivering an uncompetitive product that must be sold for tens if not hundreds of dollars less per chip in ASP to keep inventories in check.

It never surprised me that AMD/GloFo and IBM went with gate-first because of the IP landscape situation as well as the time-to-develop/implement issue, but it was their half-baked PR spin on why going gate-first was superior to gate-last that made my sides hurt from all the induced laughter.

And now they got themselves a one-node solution (which in this industry IS the most expensive way to do development and manufacturing) that does little to build confidence in their customers that they are technology leaders compared to TSMC.

Some discussion on the topic from a while back: http://forums.anandtech.com/showthread.php?p=29032366

Great post, and thanks for the link to the prior discussion (interesting read). :)

I wonder what the licensing restrictions will be for other companies using this tech now that Intel has pioneered much of this work.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Great post, and thanks for the link to the prior discussion (interesting read). :)

In my prior life I actually worked on HK/MG first-hand. It is/was every bit as sexy in real-life as it probably seems over the internet. Loved it.

I wonder what the licensing restrictions will be for other companies using this tech now that Intel has pioneered much of this work.

Can you say "prohibitively so" ten times, fast? Intel guards their process-tech like Apple does their next-gen product lineup. Only you won't see some dude leave a process-tech behind in a bar :D
 

acx

Senior member
Jan 26, 2001
364
0
71
In my prior life I actually worked on HK/MG first-hand. It is/was every bit as sexy in real-life as it probably seems over the internet. Loved it.

Can you say "prohibitively so" ten times, fast? Intel guards their process-tech like Apple does their next-gen product lineup. Only you won't see some dude leave a process-tech behind in a bar :D

What if some dude left the blueprints in a pdf file on a prototype iphone 5 at a bar?
 

Medu

Member
Mar 9, 2010
149
0
76
Therein lies the critical difference in mentality between market winners and losers in this industry.

One team looks to spend an extra $0.50 per chip to ensure it out-performs the competition (and thus commands out-sized ASPs and gross margins); the other looks to save $0.50 per chip in production costs at the expense of delivering an uncompetitive product that must be sold for tens if not hundreds of dollars less per chip in ASP to keep inventories in check.

It never surprised me that AMD/GloFo and IBM went with gate-first because of the IP landscape situation as well as the time-to-develop/implement issue, but it was their half-baked PR spin on why going gate-first was superior to gate-last that made my sides hurt from all the induced laughter.

And now they got themselves a one-node solution (which in this industry IS the most expensive way to do development and manufacturing) that does little to build confidence in their customers that they are technology leaders compared to TSMC.

Some discussion on the topic from a while back: http://forums.anandtech.com/showthread.php?p=29032366

Hasn't Intel stuck to bulk CMOS, even though SOI is the better tech, due to cost benefits?

I wonder what the licensing restrictions will be for other companies using this tech now that Intel has pioneered much of this work.

Well, there are constant rumours that Intel will switch to SOI at node Xnm. If there is any truth to them, I am sure IBM and Intel will come to some agreement over sharing process tech!
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Well, there are constant rumours that Intel will switch to SOI at node Xnm. If there is any truth to them, I am sure IBM and Intel will come to some agreement over sharing process tech!

The SOI Intel was/is considering is the fully depleted version, while the one they skipped (and the one IBM/AMD uses) is the partially depleted version. There's a big difference between the two, but I'll leave that for IDC to explain. Though it looks like SOI Tech has the license for the fully depleted version as well?

I'm not sure what Intel's plans are now. If they can get through the 22nm transition sticking with whatever technology they have, all the better it will be for them.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Hasn't Intel stuck to bulk CMOS, even though SOI is the better tech, due to cost benefits?

It's all cost-benefit analysis, but what you have to incorporate in the analysis is that node development is a fixed cost irrespective of the volume of product generated from it.

From a project management perspective you have a choice: spend $1B developing a leading-edge bulk-CMOS node that delivers X microamps/micron of drive current at Y nanoamps/micron of leakage, or spend $0.8B developing a leading-edge SOI node that delivers roughly the same X:Y ratio, albeit at lower X and lower Y.

SOI lowers the upfront node development cost at the expense of increasing the backend production cost.

So Intel spent $1B, AMD spent $800M...but Intel uses their $1B creation to make ~8x more wafers than AMD. SOI wafers cost about $200-$250 more than bulk Si wafers. It doesn't take much math to realize that both AMD and Intel made the optimal choices given their product volumes.

In this hypothetical analysis, if Intel produces more than 1M wafers with their $1B node, then they save money in the long run for having stuck with bulk CMOS, even though they spent an extra $200M up-front in R&D expenses to create the node.

AMD saved $200M up-front by going SOI and lowering their node's development cost, and provided they don't produce more than 1M wafers over the life of the node, they also saved money in the long run even though the wafers cost more to use.
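To put that break-even point in concrete terms, here's a minimal back-of-the-envelope sketch in Python using only the hypothetical figures from this post ($1B bulk node, $0.8B SOI node, roughly $200 extra per SOI wafer); the example volumes are illustrative, not real Intel/AMD numbers:

Code:
# Back-of-the-envelope comparison of the hypothetical bulk vs. SOI numbers above.
# All figures are illustrative, taken from this post, not real Intel/AMD costs.

def total_cost(node_dev_cost, wafers, per_wafer_premium=0.0):
    # Fixed node-development cost plus any per-wafer substrate premium.
    return node_dev_cost + wafers * per_wafer_premium

bulk_dev = 1_000_000_000   # $1B bulk-CMOS node development
soi_dev = 800_000_000      # $0.8B SOI node development
soi_premium = 200          # ~$200 extra per SOI wafer (low end of the $200-$250 range)

# Break-even: where SOI's $200M development savings is consumed by the wafer premium.
break_even = (bulk_dev - soi_dev) / soi_premium
print(f"Break-even volume: {break_even:,.0f} wafers")   # 1,000,000 wafers

for wafers in (250_000, 1_000_000, 8_000_000):
    bulk = total_cost(bulk_dev, wafers)
    soi = total_cost(soi_dev, wafers, soi_premium)
    cheaper = "bulk" if bulk < soi else "SOI" if soi < bulk else "tie"
    print(f"{wafers:>9,} wafers: bulk ${bulk/1e9:.2f}B vs SOI ${soi/1e9:.2f}B -> {cheaper}")

Below ~1M wafers the cheaper SOI development wins; above it the bulk node's higher up-front cost is amortized away and the per-wafer premium dominates, which is exactly the volume argument above.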

We (TI) internally evaluated SOI just like Intel did, as did many other IDMs at the time, and the simple fact of the matter is that it came down to economics for everyone. IBM and AMD did not have the product volumes necessary to amortize the node development expense of sticking with bulk while engineering it to deliver competitive performance; Intel and others did.

(I'm reducing a complicated subject to a mere forum post; you'll have to excuse me if I failed to elucidate the reality of the differences that bulk versus SOI brings to bear on us process development engineers and project managers.)
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
IDC, do device designers have to be re-trained in design rules when switching between bulk / SOI, or do the tools take care of that?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
IDC, do device designers have to be re-trained in design rules when switching between bulk / SOI, or do the tools take care of that?

Yes and yes.

Basically it's new training no matter what for every node, as the caveats grow in number, lessons learned from prior nodes are accumulated and broadcast, design rule restrictions become ever more numerous, and design-for-manufacturing complexity/opportunity grows exponentially.

Although "re-trained" might be a tad strong, its not like they go back and hit the books for another year or two of college. More like "refresher courses" only they aren't so much being refreshed but more like brought up to speed.

But a device engineer who has had no practical experience with SOI, regardless of their experience with bulk, is definitely going to climb a bit of a learning curve in the beginning. And of course proficiency won't come from training; that comes from iterative, experiential learning, so even the best-trained green crew can't be expected to deliver superbly optimized ICs when switching from SOI to bulk or vice-versa.

I'm curious to see if GloFo offers SOI beyond 32nm. The development expense is pretty high and cannot be offset by expanding the foundry business to include more customers. They are trying to get ARM onto SOI but other than licensing there are no takers so far (that I know of).

It is possible that the economics of node development coupled with the foundry model of GloFo will drive AMD back to bulk for high-performance. Wouldn't be the end of the world, look at what an awesome job the Bobcat team did getting x86 cores implemented in TSMC's 40nm bulk-Si and the performance/watt of the product.

AMD has options.
 

theeedude

Lifer
Feb 5, 2006
35,787
6,198
126
Interesting stuff. I have the James Plummer book from 2000 and it doesn't mention "gate last." Time for an update :)