Intel 22nm - Multigate FETs: A Risky Proposition

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
After years of R&D, multi-gate devices (MuGFETs) with vertical structures still fall into the "high risk" category, according to Intel Fellow Kelin Kuhn, who assessed the risks of 22 nm options during a presentation at IEDM on Sunday.

Challenges with parasitic resistance, parasitic capacitance and topology will make vertical devices difficult to implement. Strain techniques, so successful with planar transistors, may not work as well, she said.

http://www.semiconductor.net/article/CA6622435.html

See the table at the bottom. 22nm is looking more and more likely to be an evolutionary continuation of the 45nm/32nm HK/MG planar CMOS transistors.

Not exactly unexpected, but it is intriguing to see Intel continuing this new trend of public disclosure of their internal risk vs. reward assessments on technology options (aka pathfinding).
 

Martimus

Diamond Member
Apr 24, 2007
4,488
153
106
Originally posted by: Idontcare
After years of R&D, multi-gate devices (MuGFETs) with vertical structures still fall into the "high risk" category, according to Intel Fellow Kelin Kuhn, who assessed the risks of 22 nm options during a presentation at IEDM on Sunday.

Challenges with parasitic resistance, parasitic capacitance and topology will make vertical devices difficult to implement. Strain techniques, so successful with planar transistors, may not work as well, she said.

http://www.semiconductor.net/article/CA6622435.html

See the table at the bottom. 22nm is looking more and more likely to be an evolutionary continuation of the 45nm/32nm HK/MG planar CMOS transistors.

Not exactly unexpected, but it is intriguing to see Intel continuing this new trend of public disclosure of their internal risk vs. reward assessments on technology options (aka pathfinding).

I think that Nemesis was the only one around here who actually expected them to move to MuGFETs at 22nm. This isn't surprising at all, although I could see AMD doing this as it is the kind of high risk/high reward type move they need to get a leg up.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: Martimus
I could see AMD doing this as it is the kind of high risk/high reward type move they need to get a leg up.

That would be up to TFC at this point. The challenge isn't coming up with zany ways to do something other than planar CMOS (FINFET, MuGFET, CNT, etc.); it's coming up with something that actually provides superior transistor electrical characteristics (Ion/Ioff, Iddq, capacitance, etc.) on some cost-normalized metric.

Intel is basically saying that at 22nm node dimensions their MuGFET integration scheme is not expected to deliver superior transistor metrics over scaling their planar CMOS, once the costs are considered and the risk of timeline slip is comprehended. Typical pathfinding homework in action, from my outsider perspective.

But even if TFC found that their MuGFET integration approach provided superior xtor metrics, it would still need to fit the cost model or it would not be adopted for manufacturing. A 10GHz PhIII would be a game-changer performance-wise, but not if AMD has to sell them for $10k each just to break even.

Sometimes this simple perspective gets lost when folks have visions of sugar-plums and FinFETs dancing in their heads ;) It's not increased performance at any cost, it's increased performance at approximately equivalent cost per wafer. The most luxurious of cost increase budgets I have seen (industry-wide) for commodity CMOS nodes is 30% for node-over-node cost increase per wafer. 20% is more the norm/target.
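To make the cost-per-wafer versus cost-per-transistor distinction concrete, here's a minimal sketch with purely illustrative numbers (the $2k wafer price and the idealized 2x density gain per node are assumptions for the example, not published figures):

```python
# Hypothetical sketch: a 20% node-over-node wafer cost increase can still
# cut cost per transistor, assuming an idealized ~2x density gain per node.
def cost_per_transistor(wafer_cost, transistors_per_wafer):
    return wafer_cost / transistors_per_wafer

# Old node: illustrative $2000/wafer, 1e12 transistors printed per wafer.
old = cost_per_transistor(2000.0, 1e12)
# New node: wafer cost up 20%, density doubled (both numbers assumed).
new = cost_per_transistor(2000.0 * 1.20, 2e12)

print(f"old node: {old:.2e} $/transistor")
print(f"new node: {new:.2e} $/transistor")
print(f"cost-per-transistor reduction: {1 - new / old:.0%}")  # prints 40%
```

So even at the luxurious end of the 30% wafer-cost budget, the per-transistor cost still falls, which is the whole point of taking the node.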
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,199
126
Originally posted by: Idontcare
Sometimes this simple perspective gets lost when folks have visions of sugar-plums and FinFETs dancing in their heads ;) It's not increased performance at any cost, it's increased performance at approximately equivalent cost per wafer. The most luxurious of cost increase budgets I have seen (industry-wide) for commodity CMOS nodes is 30% for node-over-node cost increase per wafer. 20% is more the norm/target.
Help me understand that last statement. 20% cost increase, for new process nodes? I thought that shrinks/new nodes were supposed to decrease cost, by allowing more dice per wafer. Is that not true?

 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: VirtualLarry
Originally posted by: Idontcare
Sometimes this simple perspective gets lost when folks have visions of sugar-plums and FinFETs dancing in their heads ;) It's not increased performance at any cost, it's increased performance at approximately equivalent cost per wafer. The most luxurious of cost increase budgets I have seen (industry-wide) for commodity CMOS nodes is 30% for node-over-node cost increase per wafer. 20% is more the norm/target.
Help me understand that last statement. 20% cost increase, for new process nodes? I thought that shrinks/new nodes were supposed to decrease cost, by allowing more dice per wafer. Is that not true?

Cost per wafer always goes up for new nodes (sure, there will be a couple of exceptions to this rule), if for no other reason than new nodes require more recent equipment, which once purchased requires another 4+ years to fully depreciate relative to yesteryear's node.

The purpose of a node is to increase density so you get more die per wafer. This is where you get a cost reduction per nub (net unit built).

To be sure though, a 300 mm^2 chip built on the 90nm node is going to be cheaper than a 300 mm^2 chip built on the 45nm node. This is reality. One just hopes that the more expensive 45nm chip fetches higher ASPs to net out equivalent (or superior) gross margins; otherwise why make the chip on the 45nm process?

More dice/wafer means more revenue per wafer, not lower cost per wafer.

Foundries, for example, sell contracts by the wafer. They don't care (within reasonable bounds) whether you want them to produce 20 behemoth dies on that wafer or 2000 super-tiny dies; in either case you are going to pay the same ~$2k per wafer (pre-test and packaging). It's up to the client to decide how they want to manage the wafer cost in their overall business model and product group.
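A rough sketch of that "same wafer price, very different die counts" point, using the standard die-per-wafer approximation (yield, scribe lines, and edge exclusion ignored; the ~$2k wafer price comes from the post above, and the die areas are purely illustrative):

```python
import math

# Standard die-per-wafer approximation for a round wafer:
# dies ~= pi*(d/2)^2 / A  -  pi*d / sqrt(2*A)
# where d is wafer diameter and A is die area (edge losses approximated).
def dies_per_wafer(wafer_diameter_mm=300.0, die_area_mm2=100.0):
    d, a = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / a - math.pi * d / math.sqrt(2 * a))

WAFER_PRICE = 2000.0  # ~$2k per wafer, pre-test and packaging (from the post)

# Same wafer price, wildly different per-die costs depending on die size.
for area in (400.0, 100.0, 25.0):
    n = dies_per_wafer(die_area_mm2=area)
    print(f"{area:5.0f} mm^2 die: {n:4d} dies/wafer, ${WAFER_PRICE / n:.2f}/die")
```

Which is exactly why the client, not the foundry, owns the cost-per-die problem.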
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,674
146
106
www.neftastic.com
The first approach will always be "if it ain't broke..."

However, investing in vertical chips isn't a bad idea. They will inherently be more expensive forever and a day, which is why planar will always be the first choice for production. But there will come a time when physics makes planar chips either exponentially bigger to keep up with Moore, or simply unable to continue down the path. That's when the shift will happen. Until then, if there's a profit in going planar, so be it.
 

Martimus

Diamond Member
Apr 24, 2007
4,488
153
106
Originally posted by: Idontcare
Sometimes this simple perspective gets lost when folks have visions of sugar-plums and FinFETs dancing in their heads ;) It's not increased performance at any cost, it's increased performance at approximately equivalent cost per wafer.

This is true even in my field, although the contractor will often try to get us to pay for more performance than we need (and we often fall for it :( ). We probably spend a whole lot more than most (or really anyone), so I won't compare my line of work to a foundry any more.

I do wonder what AMD will focus on from here forward. They do seem to be getting beaten in just about every arena except the HTPC market, with the launch of i7 servers. They should be able to do well in a few markets if they truly focus on those markets. They only have one competitor in the x86 market, so they should be able to carve out a sustainable market niche that enables them to be profitable.

At least that is my opinion on the matter. No one can be the best at everything; they stretch themselves too thin that way. The only way AMD wouldn't better Intel in some markets is if they also try to stretch themselves across the same number of projects with far fewer resources to do so.
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
First finfets, then tri-gate transistors and now they're called mugfets. (unless someone can tell me that there is actually a difference besides the name)
 

JackyP

Member
Nov 2, 2008
66
0
0
Originally posted by: SunnyD
The first approach will always be "if it ain't broke..."

However, investing in vertical chips isn't a bad idea. They will inherently be more expensive forever and a day, which is why planar will always be the first choice for production. But there will come a time when physics makes planar chips either exponentially bigger to keep up with Moore, or simply unable to continue down the path. That's when the shift will happen. Until then, if there's a profit in going planar, so be it.

I'm just wondering: where does die-stacking come into play? Or is it completely out of the equation now? Or was it just left out of this report because it focused on process features, whereas die-stacking is not dependent on process but a way to assemble chips?

It doesn't seem to be mentioned in Intel's presentation either. http://www.his.com/~iedm/program/courses.html
 

pm

Elite Member Mobile Devices
Jan 25, 2000
7,419
22
81
Originally posted by: TuxDave
First finfets, then tri-gate transistors and now they're called mugfets. (unless someone can tell me that there is actually a difference besides the name)

I'm no expert, but... I thought MuGFET was the generic term for any FET that is flipped on its side and has more than one gate (as opposed to the dual-gate technology that I remember IBM working on, which had a planar design but was tough for lithography alignment).

The original FinFETs had a dual-gate, not a tri-gate. See this paper:
http://www-device.eecs.berkeley.edu/~viveks/Papers/118ISSCC01.pdf
and this one as well:
http://www-device.eecs.berkeley.edu/~viveks/Papers/67IEDM99.pdf

These are dual-gate designs - it's even in the title. Intel extended this concept by changing a couple of litho steps and extended it into a "wrap-around" tri-gate design that added another gate in the Z-axis.

Wikipedia seems to reinforce the view that FinFETs are dual-gate devices and tri-gates are three-gate devices. http://en.wikipedia.org/wiki/Multigate_device

Now, the obvious question that I have occasionally wondered about is: if FinFETs use a wrap-around gate similar to tri-gate devices, then why aren't FinFETs tri-gates? The answer is obvious in the diagrams - they are using a silicon nitride spacer (see the papers above) - but why? Why not get rid of the spacer and make it a tri-gate? There's got to be a good reason for the spacer, but I haven't figured it out. I presume it's related to self-alignment, but the second paper mentions the silicon nitride "hard mask" several times without explaining - at least to my pseudo-layman's ears - why they need it.

There's a discussion of the three types by Intel using fully-depleted SOI in this paper:
http://download.intel.com/tech...ri-gate-paper-0902.pdf

* Not a spokesperson for Intel Corp.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: pm
Now, the obvious question that I have occasionally wondered about is: if FinFETs use a wrap-around gate similar to tri-gate devices, then why aren't FinFETs tri-gates? The answer is obvious in the diagrams - they are using a silicon nitride spacer (see the papers above) - but why? Why not get rid of the spacer and make it a tri-gate? There's got to be a good reason for the spacer, but I haven't figured it out. I presume it's related to self-alignment, but the second paper mentions the silicon nitride "hard mask" several times without explaining - at least to my pseudo-layman's ears - why they need it.

There's a discussion of the three types by Intel using fully-depleted SOI in this paper:
http://download.intel.com/tech...ri-gate-paper-0902.pdf

* Not a spokesperson for Intel Corp.

It's not a spacer in the traditional use of the word, where you are offsetting the dopant implant region for S/D formation and extension from the channel. The nitride is there as a necessary evil in the particular integration scheme used to form the Fin and subsequently the gate.

It served as a hardmask for etching the Fin in the first place. Then it served as an etchstop when etching the gate so as to protect the underlying Si channel.

Naturally there are integration approaches that get around this, just as planar CMOS integration with HK/MG has two very different ways to skin the same cat (gate-first versus replacement gate).

As for the myriad of acronyms... this is what people do when trying to capitalize on self-promotion in western culture. Every Tom, Dick, and Harry is going to name their pet university xtor something new and unique in an attempt to differentiate it when they get their 15 seconds of EETimes headline fame. I've seen it for so many years now it doesn't even register with me anymore.

How many names do we need for cars? One for every car product manager is the answer.
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
Originally posted by: pm
Originally posted by: TuxDave
First finfets, then tri-gate transistors and now they're called mugfets. (unless someone can tell me that there is actually a difference besides the name)

I'm no expert, but... I thought MuGFET was the generic term for any FET that is flipped on its side and has more than one gate (as opposed to the dual-gate technology that I remember IBM working on, which had a planar design but was tough for lithography alignment).

The original FinFETs had a dual-gate, not a tri-gate. See this paper:
http://www-device.eecs.berkeley.edu/~viveks/Papers/118ISSCC01.pdf
and this one as well:
http://www-device.eecs.berkeley.edu/~viveks/Papers/67IEDM99.pdf

These are dual-gate designs - it's even in the title. Intel extended this concept by changing a couple of litho steps and extended it into a "wrap-around" tri-gate design that added another gate in the Z-axis.

Wikipedia seems to reinforce the view that FinFETs are dual-gate devices and tri-gates are three-gate devices. http://en.wikipedia.org/wiki/Multigate_device

Now, the obvious question that I have occasionally wondered about is: if FinFETs use a wrap-around gate similar to tri-gate devices, then why aren't FinFETs tri-gates? The answer is obvious in the diagrams - they are using a silicon nitride spacer (see the papers above) - but why? Why not get rid of the spacer and make it a tri-gate? There's got to be a good reason for the spacer, but I haven't figured it out. I presume it's related to self-alignment, but the second paper mentions the silicon nitride "hard mask" several times without explaining - at least to my pseudo-layman's ears - why they need it.

There's a discussion of the three types by Intel using fully-depleted SOI in this paper:
http://download.intel.com/tech...ri-gate-paper-0902.pdf

* Not a spokesperson for Intel Corp.

So essentially Intel found a way to not have a nitride layer between the Si fin and the top portion of the gate to allow slightly better channel control?
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Chew on this for a while.

http://www.amdzone.com/index.p...finfet-sram-cell-with-

Then there is this. http://www.tomshardware.com/ne...-transistors,2927.html

Then this one.
http://www.semiconductor.net/a....html?industryid=47298

I really don't know what the Intel fellow is talking about, but it looks like smoke and mirrors to me.

From Toms

Intel is going to great lengths to get a better handle on the power consumption of its semiconductors.

One step closer to reality is the "tri-gate" transistor, which enables the company to gain more flexibility in adjusting processor performance and power consumption. The technology could be available by 2009 and drop a chip's total power by 35%, Intel said.

The tri-gate transistor isn't entirely a new announcement, as the company has been talking about the technology at various events since September of 2002. Presenting at the 2006 Symposia on VLSI Technology and Circuits in Honolulu, Hawaii, Intel followed up with more details and first test results, which indicated that the tri-gate transistor, often also referred to as "3D transistor" may in fact be a technology that will make it into production one day.

Intel's approach to enhance the gate technology in transistors tackles one of the major concerns especially in micro processors. Transistor gates consist of a gate electrode and gate dielectric. Both components control the flow of electrons between the source and drain by switching on and off the main current. Shrinking the structures of transistors has created several challenges such as increasing current leakage in "off" states of a transistor - causing the overall power consumption of a semiconductor device to climb.

While Intel has found solutions to keep the traditional gate architecture alive through the introduction of strained silicon to enhance electron flow and a so called "high-K" gate material in place of silicon dioxide to reduce current leakage, additional shrinks may require completely new approaches to control leakage and improve transistor performance. One possible solution could be tri-gate transistors, Intel believes.

Compared to today's planar transistors, tri-gate transistors use three gates instead of only one. According to Mike Mayberry, vice president and director of component research at Intel, the addition of two gates allows the company to increase the amount of current running through the transistor and decrease leakage since current can be routed into three different channels. First tri-gate transistors apparently have been manufactured and Mayberry claimed that 65 nm versions offer a 45% increase in speed or 50x reduction in "off"-current when compared to regular planar transistors.

If Intel can transfer these numbers to greater numbers of transistors, it isn't hard to imagine that tri-gate technology will become more and more important over time: with the number of transistors doubling every 18 - 24 months and a growing flexibility to turn more parts of a microchip on and off, a 50% reduction of off-current could deliver a whole new range of mobile devices and bring us closer to those notebooks with 8 hours of battery time.

An even better solution than the tri-gate transistor would be a pipe structure, with a single gate completely surrounding the electron flow from source to drain. However, this would require Intel to create a "tunnel" for the main current, which - according to Mayberry - is still science fiction. "At least today we cannot manufacture such a model," he mentioned in a conference call. However, he did not rule out that future technologies could provide an opportunity to develop such a transistor.

The executive stressed that tri-gate technology is "only one" approach to control performance and leakage in production processes down the road. But even if tri-gate is barely more than a research project - Mayberry said that Intel is "nowhere near" being able to build 1 billion tri-gate transistors onto one chip - it appears to make steady progress and not to be one of Intel's countless research projects that don't make it past a scientific paper presentation.

Mayberry indicated that a mass-production of the tri-gate transistor would be possible "with the 32nm chip generation," which is expected to debut at the end of 2009. If adapted, he expects the technology to move across all semiconductor products built by Intel.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: Nemesis 1
I really don't know what the Intel fellow is talking about, but it looks like smoke and mirrors to me.

It's not smoke and mirrors, but it is standard pathfinding homework: long-range R&D, not really expected to become viable for near-term manufacturing. Think EUV. If Intel pathfinding had not been pushing on it for the last 6 yrs then it would simply never have become a reality (ever), but at least we are 6 yrs closer to it becoming a reality thanks to Intel.

Same with this tri-gate and FinFET stuff. It's all pathfinding, due diligence, generating options (and characterizing them) perchance they become necessary down the road.

All these non-planar xtor integration schemes raise the cost to produce a device, so the preference is to not have to go there. But it's not acceptable to not know what performance and cost you are leaving on the table by not going there. So you (AMD, Intel, etc.) must do the due diligence of creating and characterizing these structures for decision-making purposes.

Intel did PD-SOI too; that did not mean at any point in the pathfinding project that they were definitely going to convert from bulk Si to some variant of SOI.

From my perspective the era of planar CMOS is here to stay for at least another two nodes (32nm and 22nm), until Lg gets down around 15nm. Beyond that I foresee a plausible situation for needing to migrate to a non-planar transistor integration.

Now if AMD actually pulled FinFET out of its hat and mass-produced it in an MPU at, say, the 22nm node, then that would truly make my jaw drop. I'd be flabbergasted and truly amazed/humbled by the AMD/IBM engineers if they accomplished that. (I worked on FinFET for a bit, but that is nothing special really, as just about everyone in FEOL process development at a top-10 revenue IDM, the foundries, or consortia like IMEC and Sematech has worked on FinFET.)

Intel on the other hand could do it, and do it with ease, if they wanted to. That's the luxury a monster R&D budget gives you. But they have to have a reason for doing it; it has to be cost-justified, as they are spending shareholder equity. If they convince themselves that their 22nm planar CMOS shrink with HK/MG will be competitive with AMD/IBM's 22nm CMOS, then they can't just be willy-nilly about increasing R&D costs now and production costs later to jump on FinFET just because they can do it if they spend enough money. It has to make fiscal sense.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Well, if AMD's 45nm is as good as it appears to be, Intel best look over its shoulder real hard.

AMD's 45nm process moves down to 32nm with little cost, just as Intel's 32nm will move down to 22nm with little cost.

If Intel falls asleep again, it's unforgivable.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
Well, if AMD's 45nm is as good as it appears to be, Intel best look over its shoulder real hard. AMD's 45nm process moves down to 32nm with little cost, just as Intel's 32nm will move down to 22nm with little cost. If Intel falls asleep again, it's unforgivable.

Intel hands down has the best process technology. There are multiple parameters determining transistor quality, but one of them is Ion, the performance of the transistor when it's on.

They are also the only ones doing 45nm without immersion lithography, while all the other companies are using it in one form or another. The reward for finding a way to avoid it is reduced mass-production cost.

Intel 45nm Hi-K:
- PFET IDSAT = 1.08 mA/µm
- NFET IDSAT = 1.36 mA/µm

Ever since 0.13 micron, Intel has held onto the lead with the highest-performance transistors and the fastest delivery, often without using more exotic techniques like SOI and immersion.

It's not the process technology team that falls asleep within Intel; it's the processor design teams. But we are starting to see with Nehalem that the x86 CPUs within Intel might have been purposely crippled for political reasons rather than technical ones. One of them may have been the IA64 project, which has been influencing Intel since the early 90s.

We are starting to see what unfolds when they lift the "political" limit.


Interesting article about the AMD 45nm Shanghai:

http://www.eetimes.com/news/la...le=true&printable=true

"The pFET performance improvement is more dramatic with drive current up to 660 microamps/micrometer compared to 510 microamps/micrometer on 65-nm transistors."

Intel's pFET performance is 64% better than AMD's, but they note AMD has lower leakage current.

"Our transistor benchmarking indicates that leakage current is less than one-third of AMD's 65-nm process. It's also significantly lower than the Intel 45-nm HKMG process. In fact, the Ion/Ioff ratio for AMD's pFET is nearly 10 times better than the Intel pFET."

Major conclusion for the article:
"With a major architectural overhaul, Intel will be pushing server performance with its Nehalem chips, but AMD can maintain its "negawatt" lead with its power-saving transistor design."