
Pressure Builds on Gate First High-k

Idontcare

Elite Member
Oct 10, 1999
Pressure Builds on Gate First High-k

Problems with the gate-first approach to high-k/metal gate deposition may force IBM to switch to the gate-last approach pioneered by Intel, technologists said this week at the International Electron Devices Meeting (IEDM) in Baltimore. GlobalFoundries and other members of the Fishkill Alliance are putting pressure on IBM to reconsider its gate-first approach, which technologists said has problems with yields, threshold voltage stability, and mobilities.

At IEDM, a knowledgeable source said GlobalFoundries and nearly all the other members of the Fishkill Alliance will force a shift by IBM to the gate-last approach at the 22 nm node. GlobalFoundries is mulling a switch even earlier, at the 28 nm node coming to market in about a year, he added.

Mukesh Khare, high-k program manager at IBM, said IBM "understands the fundamentals of high-k very well." He said the gate-first approach provides fewer design rule restrictions, and is simpler to implement, than the gate-last approach used by Intel. The IBM high-k technology is working "very well," he said, offering as proof an IBM 28 nm low-power process described Wednesday at IEDM with an equivalent oxide thickness (EOT) of ~5 Å. "Nobody has such a low EOT for a 28 nm LP process," Khare said.

Asked about a switch to a gate-last approach at the 22 nm node, Khare said, "I am surprised at that kind of talk. Every technology has challenges, which is why we continue to work at it and develop solutions. We take it one node at a time." He added, "At this point, no one knows what will happen at the 15 nm node. It could be finFETs that come in by that time." The fully depleted, extremely thin SOI (FD-ETSOI) approach that IBM is pursuing would provide greater electrostatic control, and take some of the burden off of the oxide layer.

http://www.semiconductor.net/article/439276-Pressure_Builds_on_Gate_First_High_k-full.php
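As a sanity check on the ~5 Å EOT figure quoted above: equivalent oxide thickness just rescales the physical high-k thickness by the ratio of dielectric constants. The κ ≈ 20 and ~26 Å numbers below are typical values for Hf-based films that I'm assuming for illustration, not anything IBM disclosed:

```latex
\mathrm{EOT} \;=\; t_{\mathrm{hk}}\,\frac{\kappa_{\mathrm{SiO_2}}}{\kappa_{\mathrm{hk}}}
\qquad\Longrightarrow\qquad
\mathrm{EOT} \;\approx\; 26~\text{\AA} \times \frac{3.9}{20} \;\approx\; 5~\text{\AA}
```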

I wonder how much of the IP space Intel has already mapped out and laid claim to for the replacement gate integration flow.

Gate-first may be inferior performance-wise, but it may be cost-ineffective to go with gate-last if the licensing fees for Intel's patents are prohibitively expensive.
 

Hyperlite

Diamond Member
May 25, 2004
I wonder how much of the IP space Intel has already mapped out and laid claim to for the replacement gate integration flow.

Gate-first may be inferior performance-wise, but it may be cost-ineffective to go with gate-last if the licensing fees for Intel's patents are prohibitively expensive.

That's probably the ticket, and something they'll never admit, aside from "implementation cost."
 

IntelUser2000

Elite Member
Oct 14, 2003
Isn't it possible for them to create their own version of a gate-last high-k process? They don't have to use the exact same materials and approach, I'm sure. To say there is only one best way to do anything is, to say the least, no fun. :p
 

Idontcare

Elite Member
Oct 10, 1999
IntelUser, have you had much opportunity to file patents, or to be on the receiving end of litigation from one? :p

It doesn't matter what Intel's process of record is for the replacement-gate process they use in manufacturing; what matters is how wide and encompassing their patents are across the HK/MG IP space, and the price they are requiring to license that IP.

Sure, there are going to be other ways to skin the cat; every process-development team spends a certain amount of time dealing with less-than-optimal integration issues owing to IP obstacles. But developing those alternatives takes more time and money, and they still might not deliver comparable device-level results. That's why cross-licensing happens.

At any rate, I'm simply musing on this because everyone but the IBM integration manager clearly wants gate-last, and he's all like "huh!? Why would anyone want that? First time I've heard it mentioned, ever!" Which is just soooo IBM.
 

IntelUser2000

Elite Member
Oct 14, 2003
BTW, do you still work at TI in process technology development? I think I heard before that you don't. Just wondering...

I thought they went gate-first because of their belief that it was superior in performance to the gate-last approach. Their 32nm numbers were pretty good, to be sure.
 

Idontcare

Elite Member
Oct 10, 1999
I don't; 32nm was aborted along with 45nm. I was among the lucky few retained, but I subsequently chose to part ways nevertheless.

There are no specific performance-enabling advantages to gate-first versus replacement gate (a.k.a. disposable gate, the so-called "gate-last" integration); the advantages of gate-first all basically boil down to cost reduction: streamlined integration, fewer process steps, lower cycle time, higher yields, etc.

Now, if you could get the same performance out of gate-first as you can out of gate-last, then the primary difference between the two integration schemes would be cost and manufacturing complexity, and you'd clearly choose gate-first on that reasoning. But there is growing evidence in the public domain that gate-last enables performance you simply can't arrive at with a gate-first integration scheme (the PMOS drive currents, owing to the stress gained from removing the dummy gate).
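To put a first-order number behind that mobility/drive link, here is the textbook long-channel saturation expression (just an illustration, not anyone's actual 32nm compact model):

```latex
I_{D,\mathrm{sat}} \;\approx\; \tfrac{1}{2}\,\mu_{\mathrm{eff}}\,C_{\mathrm{ox}}\,\frac{W}{L}\,\left(V_{GS}-V_{T}\right)^{2}
```

Mobility enters linearly, so to first order a strain-induced hole-mobility gain shows up roughly one-for-one in PMOS drive current (less in practice once velocity saturation and series resistance get their say).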
 

KingstonU

Golden Member
Dec 26, 2006
Is this foreshadowing that AMD is going to have to face Intel with an inferior product again? If gate-first turns out to underperform relative to Intel's gate-last, it would be yet another disadvantage they'll have to make up for in other aspects.
 

Idontcare

Elite Member
Oct 10, 1999
Isn't the last part of the first paragraph saying otherwise in terms of yield/cost/etc.?

I'm not seeing that; can you quote it to point it out to me more clearly? Are you talking about the paragraph in the OP or the Chartered JDA comments?

Your leaving is related to TI becoming fabless, right? As the article below says:
http://www.eetimes.com/news/semi/showArticle.jhtml?articleID=197000041

Yep. That was the turning point, in a manner of speaking.

What that EETimes article discusses is the reason my involvement with 32nm and 45nm CMOS development came to an early end and why I briefly became a 130nm analog node development engineer.

My personal reason for leaving TI altogether, though, was actually related to this http://www.metaquotes.net/en/metatrader4/automated_trading and my two young kids, combined with discovering that my wife has more career potential in her pinkie finger than I had in my whole body. The best career choice I made in all my life was getting out of her way and letting her be in the driver's seat these past years. (She too was a TI development engineer and still works in the industry.) If only fate had forced that opportunity sooner. :p
 

Idontcare

Elite Member
Oct 10, 1999
Is this foreshadowing that AMD is going to have to face Intel with an inferior product again? If gate-first turns out to underperform relative to Intel's gate-last, it would be yet another disadvantage they'll have to make up for in other aspects.

I don't look at it that way, or interpret that to be a predestined, unavoidable outcome.

It just means GloFo and IBM will have had to work all the harder to get their transistor performance and yield entitlement up to the level that Intel has with their gate-last integration.

Given that the GloFo process won't be in production until a year after Intel's, that extra development time might be all that is needed to close those gaps. Which means that when it does debut, it has the potential to bring all the performance of a gate-last integration scheme without the associated costs and manufacturing complexity.

Very similar to the double-pattern dry litho versus single-pattern immersion litho delta between the two at 45nm.
 

IntelUser2000

Elite Member
Oct 14, 2003
I'm not seeing that; can you quote it to point it out to me more clearly? Are you talking about the paragraph in the OP or the Chartered JDA comments?

Oh sorry, I was typing on my S5 MID and had problems highlighting the text. :p

GlobalFoundries and other members of the Fishkill Alliance are putting pressure on IBM to reconsider its gate-first approach, which technologists said has problems with yields, threshold voltage stability, and mobilities.
 

Idontcare

Elite Member
Oct 10, 1999
Oh sorry, I was typing on my S5 MID and had problems highlighting the text. :p

Ah yes, those are more indications of deficient R&D investment than of intrinsic (unavoidable) deltas once the technology is put into manufacturing (things like higher cycle time leading to higher defectivity leading to lower yield, etc.).

If they don't put in even the most basic of the required effort to get the gate-first integration manufacturing-worthy, then for sure they are going to have yield issues as well as process-variability issues. That is true of every step in the entire process flow.

That said, to be sure, the working assumption here is that the reduced engineering space represented by the limited thermal budget imposed by the gate-first integration approach does not in itself represent an intrinsic performance deficit.

(For example, high-k was originally pursued as a replacement for SiON gate oxides on the assumption that it could be implemented within the framework of traditional doped-poly gates. That turned out to be a non-starter of an integration approach and actually did more harm than good, owing to some misunderstood device physics at the time; this was circa 2002.)

Intel has functioning devices built with gate-last in the market now; gate-first has yet to prove itself as a manufacturable integration scheme, regardless of the proposed cost benefits. As such, any list of advantages of gate-first integration is going to be purely hypothetical, bound only by expectation and reasoning.

IBM's gate-first efforts could very well end up just like their SiLK efforts.
 

Nemesis 1

Lifer
Dec 30, 2006
Damn, IDC, I didn't see this thread till I replied to the Intel 32nm thread. You're hot lately. But gate-first, I laughed out loud.
 

Nemesis 1

Lifer
Dec 30, 2006
Also, I thought when Intel introduced their 45nm, IBM/AMD said "we have that ready to go too"... More lies from IBM/AMD. Is that not the truth now, as I said two years ago?
 

Nemesis 1

Lifer
Dec 30, 2006
For those who don't know, Intel uses gate-last because the annealing process damages the metal gates; the heat causes big problems. I leave it to IDC to clean up what I am saying.
 

Idontcare

Elite Member
Oct 10, 1999
Also, I thought when Intel introduced their 45nm, IBM/AMD said "we have that ready to go too"... More lies from IBM/AMD. Is that not the truth now, as I said two years ago?

Yeah they even dedicated a whole presentation slot at IEDM 2008 to it:

http://www.realworldtech.com/page.cfm?ArticleID=RWT072109003617&p=3

I'm sure it worked, just not well enough to justify the cost adder at the time.

For those who don't know, Intel uses gate-last because the annealing process damages the metal gates; the heat causes big problems. I leave it to IDC to clean up what I am saying.

No need to clean it up, that about sums it up.

That, and with gate-last they can select metal gates specifically tuned for PMOS versus NMOS (gate-first requires using the same metal for both FETs), and they get an enhanced strain component on the channel by removing the dummy gate (which was itself under strain, counteracting the strain on the channel), which puts all the more strain on the channel.

For anyone not competing with Intel in a product space that depends on advanced process tech, the advantages of gate-last might not be big enough to warrant the cost adders of implementing it. But if you are competing with Intel, then you are asking a lot of yourself to produce a competitive process-tech node while tying your hands behind your back by not allowing yourself to pursue the gate-last integration scheme.
 

Idontcare

Elite Member
Oct 10, 1999

With gate-first you have to deposit your gate-stack materials (gate oxide, buffer, metals, cap, hardmask, etc.) prior to patterning and etch, which means you don't know where you are going to have PMOS or NMOS at the time the blanket films are deposited.

With gate-last you can actually "pick out" your NMOS vs. PMOS transistors, using masks specific to each, to deposit the materials of interest into the gates after the dummy gates are removed.
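To make the ordering concrete, here is a crude, purely illustrative step list of the two flows (my own simplification, not any foundry's process of record):

```python
# Illustrative, heavily simplified front-end flows -- a sketch, not any foundry's
# process of record. The point: gate-first deposits one blanket gate stack before it
# "knows" which gates become NMOS vs PMOS, while replacement gate (gate-last) refills
# the N and P gates separately, damascene-style, at the end.

GATE_FIRST = [
    "deposit blanket high-k + single work-function metal + poly cap (same stack everywhere)",
    "pattern and etch gates",
    "N/P implants (mask-selected), spacers, source/drain, activation anneal (metal sees full thermal budget)",
    "silicide, contacts, BEOL",
]

REPLACEMENT_GATE = [
    "deposit high-k + dummy poly gate",
    "pattern and etch gates",
    "N/P implants (mask-selected), spacers, source/drain, activation anneal (dummy gate absorbs thermal budget)",
    "deposit ILD and CMP down to expose the dummy gates (planar surface enables damascene-style fill)",
    "remove dummy poly",
    "mask NMOS open, fill with N work-function metal, CMP off overburden",
    "mask PMOS open, fill with P work-function metal, CMP off overburden",
    "contacts, BEOL",
]

if __name__ == "__main__":
    # Print the two flows side by side for comparison.
    for name, flow in (("gate-first", GATE_FIRST), ("replacement gate", REPLACEMENT_GATE)):
        print(f"{name}:")
        for i, step in enumerate(flow, 1):
            print(f"  {i}. {step}")
```

The blanket deposition in step 1 of the gate-first list is exactly why the N and P gates end up with the same metal; the two separate fill/CMP passes near the end of the replacement-gate list are where the per-polarity work-function metals get picked out.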
 

CTho9305

Elite Member
Jul 26, 2000
With gate-first you have to deposit your gate-stack materials (gate oxide, buffer, metals, cap, hardmask, etc.) prior to patterning and etch, which means you don't know where you are going to have PMOS or NMOS at the time the blanket films are deposited.

With gate-last you can actually "pick out" your NMOS vs. PMOS transistors, using masks specific to each, to deposit the materials of interest into the gates after the dummy gates are removed.

I don't understand why you can't use separate masks for N/P with gate first. At every stage in the process, you know where you'll eventually end up with whatever structure you're worrying about (ignoring cases like metal revs where you might move around a metal 4 shape after you already built the design up to contact or metal 1).
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
I don't understand why you can't use separate masks for N/P with gate first. At every stage in the process, you know where you'll eventually end up with whatever structure you're worrying about (ignoring cases like metal revs where you might move around a metal 4 shape after you already built the design up to contact or metal 1).

You can, and do, have separate masks for N/P in gate-first; that is how you make the implants selective to create N and P in the first place.

The issue with gate-first is that you have to start with a blanket wafer of uniformly deposited films prior to patterning and etching your gates. Those blanket films go on to become the gates, and since they don't "know" ahead of time whether they are going to be PMOS or NMOS transistors, they all have the same materials in the stack.

The way you selectively remove/fill/CMP and repeat for gate-last is simply not compatible with gate-first integration.

Think of it this way: are you familiar with damascene copper integration for the BEOL? You first deposit a blanket film stack of your dielectric insulating layers, then you pattern/etch where you eventually want copper to be, then you cover the whole wafer with copper (not just filling the vias and trenches; you purposefully create an overburden that must be removed), and then you CMP off the excess. That is damascene: you add your metal to a pre-existing mold and remove the excess.

Gate-last, a.k.a. replacement gate, is a damascene process. You can't do that with gate-first and have CMP be the method for removing your excess overburden, because there is nothing making the wafer a blanket surface at that point. The features (topography) on the wafer are your gates, the very things you don't want removed.

So a planarizing process is not going to help you remove the metal-stack material you deposited to form your second set of gates when that material is lying in between the gates you formed first.

Let me see if I can make some crude pictures to aid the eye; text alone is not all that insightful. I will update this post in a few minutes...
 

CTho9305

Elite Member
Jul 26, 2000
So the problem is that you can't just put down one (N or P) first and then protect it with e.g. nitride because you need to CMP while forming the other gate type?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
So the problem is that you can't just put down one (N or P) first and then protect it with e.g. nitride because you need to CMP while forming the other gate type?

What do you mean by "put down one first"... you mean go through the whole process of depositing/patterning/etching the gates for one type and then capping them with a nitride?

Of course you can do it, but now your second gate stack is going to be deposited on top of a nitride layer, so your second set of transistors isn't going to function.

If I am understanding your proposal correctly.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Build your N's, put down a nitride, remove the nitride where you want to build the P's, build the P's, and remove the remaining nitride.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Here is a quick work-up I did to put into pictures the salient points for gate-last integration (just an example):

[Image: Gatelastintegrationexamplesmall.jpg — gate-last integration example]


(Note: I used the same purple pen for all instances of gate oxide, but in reality the original gate oxide under the dummy gate can be different from the gate oxide used for the subsequent N and P transistors, and the N and P transistors can use different gate oxides, just as they are shown above with different metals, denoted green and red.)

Build your N's, put down a nitride, remove the nitride where you want to build the P's, build the P's, and remove the remaining nitride.

When you deposit the films to build your P's, you deposit those blanket layers over the top of your N's as well (which are under the nitride); the wafer topography remains, and you do not have a flat wafer surface to send into litho for resist coat/pattern/etc.

Your DOF is shot and you won't be able to print your P's. It's a proximity-effects nightmare for controlling resist thickness and printing margin.
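For a sense of scale on the DOF point, take the Rayleigh criterion with representative numbers (k2 ≈ 0.5 and NA ≈ 1.2 are just assumed values for 193 nm litho, not any particular scanner):

```latex
\mathrm{DOF} \;\approx\; k_{2}\,\frac{\lambda}{\mathrm{NA}^{2}}
\;\approx\; 0.5 \times \frac{193~\mathrm{nm}}{(1.2)^{2}}
\;\approx\; 67~\mathrm{nm}
```

With gate-stack topography of comparable height still sitting on the wafer, essentially the whole focus budget is gone before resist non-uniformity even enters the picture.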
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
UMC Takes Hybrid Approach to 28 nm High-k

At IEDM, G.H. (Mike) Ma, a researcher at UMC's R&D center in Tainan, Taiwan, said the gate-first approach for PMOS transistors tends to result in a reduction (roll-off) of the flatband voltage as well as threshold voltage (Vt) levels exceeding 0.4 V, which he said are "unacceptable, certainly for high-performance applications." The flatband voltage (Vfb) roll-off is related to the thermal budget of the work-function metal, he said.

The gate-last method, first developed by Intel Corp., resolves many of the performance issues, Ma said, but the approach is fairly expensive. The UMC hybrid approach "will not get to the performance level of the pure gate-last method," he said, but it is likely to be less expensive. The hybrid approach integrates "the high compatibility of the gate-first scheme for the NFET and the better thermal budget control of the gate-last scheme for the PFET."

By using a gate-last approach for the PMOS transistor, and a gate-first approach for the NMOS transistor, UMC seeks to avoid "the high process complexity" of the pure gate-last approach, which Ma said remains "a challenging process."

[Image: 229983-The_hybrid_PMOS.jpg]

The hybrid PMOS after ILD CMP (a), dummy poly removal (b), and low-resistance metal CMP (c). (Source: UMC, IEDM 2009)


http://www.semiconductor.net/article/440060-UMC_Takes_Hybrid_Approach_to_28_nm_High_k-full.php

Pretty cool, the hybrid approach: removing just the PMOS gates as a gate-last effort.
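As a footnote on the flatband-voltage roll-off Ma mentions: in the textbook MOS picture (a simplification, not UMC's model),

```latex
V_{\mathrm{fb}} \;=\; \Phi_{\mathrm{m,eff}} - \Phi_{\mathrm{s}} \;-\; \frac{Q_{\mathrm{ox}}}{C_{\mathrm{ox}}}
```

so anything the thermal budget does to the gate's effective work function or to the charge sitting in the dielectric moves Vfb, and the threshold voltage along with it. That is the mechanism behind the gate-first PMOS Vt problem being discussed.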