Intel announces Tri-gate "3-D" transistors for upcoming Ivy Bridge based processors


Khato

Golden Member
Jul 15, 2001
1,378
487
136
And if you compare that gray line at the 1V mark to the blue line at the 1V mark, you see what it would be for Intel's high end with planar. What is it, .03 or so?

Except for the fact that the gray line is pretty much purely theoretical - it could well be that it wouldn't have scaled as well in reality.

Also keep in mind that the comparison is to Intel's 32nm process, which was not only available and shipping products well over a year before anyone else, but markedly outperforms all the latecomers - http://www.realworldtech.com/includes/images/articles/iedm10-10.png

Makes me wonder how long it'll be before anyone else even ships product on a 2x-nm node, much less actual 22nm. Really seems like the manufacturing gap is only widening as time goes on... Heh, has anyone else even shipped product yet with HKMG? Or is it the case that the first products using it will be the upcoming Bulldozer/Llano... over 3 years after Intel introduced it on their 45nm process with Penryn.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
They already said Atom at the end of 2012, and server chips first for IB as well. The whole "IB is coming in a few months to beat Bulldozer on desktop" argument has been shattered in one day basically.

And it IS late, albeit not hugely. Intel is 3 months behind schedule with this already. Otellini has changed from "shipping 2nd half 11" to "beginning production before the end of the year".

Can you tell me the difference between what he said first vs. what he said last? How is "begin production before the end of the year" any different from 2H shipping, i.e. shipping in the second half with production beginning before the end of the year? What DATE changed? I see none. The thing he said was that we will see 22nm chips in 2011.
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
I am no expert - I usually defer on this stuff to Idontcare - but it was my understanding that the original finFET design used a thicker layer of oxide across the top as part of the self-aligned process, and so it really was a dual gate even though the poly wrapped around on all three sides, because the oxide etch stop on the top was too thick.

In this paper from UC Berkeley about the finFET:
www.eecs.berkeley.edu/~hu/PUBLICATIONS/PAPERS/700.pdf

They mention the SiO2 "cover layer" several times which is on the top of the "fin". For example on page 2321, fourth paragraph, they say "the SiO2 spacer on the sides of the Si-fin is completely removed by sufficient over etching of SiO2 while the cover layer protects the Si-fin" and then at several points they go on to mention that the finFET is a double-gate device.

Idontcare, you mentioned you worked on finFETs.... are they dual-gate or tri-gate?

There were so many different integration schemes and approaches that I'd have to give a serious look at my notes to see which ones were looking most plausible at the time.

FinFET was just the jargon used to describe any non-planar xtor scheme that involved the channel being a vertical "fin" projecting perpendicular from the substrate.

The delineation of gates, counting them and so on, was not the defining difference in the integration schemes that I worked on. (Not surprisingly, this was all concept and feasibility stuff at the time; there was no value in labeling each experiment.)

Regardless of the semantics, the point here is that Intel has managed to implement it in a fully production-worthy embodiment, which just gobsmacks me. We were nowhere, absolutely nowhere, close to having 3D xtor technology on a timeline for 2012 deployment. We were looking to 2011 for our first HKMG (still planar though) production implementation.

Intel has absolutely "conroed" the process technology world here.

I tip my hat to all the engineers and project managers involved in this achievement!
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
YOU knew I would be one who believes transistor tech fell from the sky. An AT&T employee got credit for it in 1946 or '47. My recall is falling off pretty badly, but I believe that's correct. Right around the time UFOs were crashing in Roswell.

The silliness here is that no transistor tech of the 1940's was advanced enough to power the electronics of a spacecraft that was traveling between stars (UFO-derived or terrestrial), and the analytical technology (metrology) that existed in the 1940's was simply incapable of being used to evaluate/reverse-engineer the technology of a modern-day transistor.

So you have a dilemma here. Either the UFO's traveled across the cosmos in 1940's xtor technology (less advanced than what we used to take us to the moon) or they came here with super-advanced transistors that we would not have even been able to look at, let alone do basic analysis of, until the mid 90's (in which case how did we know what to build in the 40's?).
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Makes me wonder how long it'll be before anyone else even ships product on a 2x-nm node, much less actual 22nm. Really seems like the manufacturing gap is only widening as time goes on... Heh, has anyone else even shipped product yet with HKMG? Or is it the case that the first products using it will be the upcoming Bulldozer/Llano... over 3 years after Intel introduced it on their 45nm process with Penryn.

What I'm reading says Intel has basically increased their lead in fabrication technology to three years with this announcement.

No wonder there's so much talk about Intel entering the contract manufacturing market.

"Hey Paul, how about you build me 60 million or so A6's? - Steve"
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
AMD doesn't have any FABs. They are at the mercy - for lack of a better word - of their contract manufacturers.

Basically a significant part of their destiny is out of their hands.

Well, if Intel is going to make processors for others, NV would be crazy not to make a deal with Intel to fab their chips.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
One other thing to point out - at least for us business types... This is going into production on existing tools.

It adds a couple more production steps, which Intel says adds 2 to 3 percent in additional processing costs, but they don't have to outlay millions of dollars on new tooling.
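To put that 2 to 3 percent in per-chip terms, here is a rough back-of-the-envelope sketch in Python. The wafer cost and yielded die count below are made-up placeholders, not Intel figures; only the 2-3% number comes from Intel's statement.

```python
# Back-of-the-envelope: what a 2-3% wafer processing cost increase means per die.
# The wafer cost and yielded die count are illustrative assumptions, not Intel figures.

def per_die_cost(wafer_cost, good_dies, cost_increase=0.0):
    """Processed-wafer cost spread over good dies, with an optional fractional increase."""
    return wafer_cost * (1.0 + cost_increase) / good_dies

wafer_cost = 5000.0      # assumed cost of a processed 300mm wafer, USD
good_dies = 400          # assumed yielded dies per wafer

planar = per_die_cost(wafer_cost, good_dies)
tri_lo = per_die_cost(wafer_cost, good_dies, 0.02)
tri_hi = per_die_cost(wafer_cost, good_dies, 0.03)

print(f"planar baseline: ${planar:.2f} per die")
print(f"tri-gate:        ${tri_lo:.2f} to ${tri_hi:.2f} per die (+2-3%)")
```

With those placeholder numbers the added wafer steps work out to pennies per die, which is why avoiding new tooling is the bigger deal for the business case.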
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
The silliness here is that no transistor tech of the 1940's was advanced enough to power the electronics of a spacecraft that was traveling between stars (UFO-derived or terrestrial), and the analytical technology (metrology) that existed in the 1940's was simply incapable of being used to evaluate/reverse-engineer the technology of a modern-day transistor.

So you have a dilemma here. Either the UFO's traveled across the cosmos in 1940's xtor technology (less advanced than what we used to take us to the moon) or they came here with super-advanced transistors that we would not have even been able to look at, let alone do basic analysis of, until the mid 90's (in which case how did we know what to build in the 40's?).

The on/off switch transistor, though simple, had to be developed in very small baby steps here on Earth. Unless the alien tech was so advanced as to make 14nm look huge, we could recover the tech and examine the transistors through electron microscopes. I believe a German invented it in the early '30s, if I recall correctly. We could have seen the tech and started developing toward it years before it bore fruit.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Well, if Intel is going to make processors for others, NV would be crazy not to make a deal with Intel to fab their chips.

I'm thinking of Intel as the luxury manufacturer - that's why I mentioned Apple. Apple has the $$$ to have Intel manufacture parts for them that nobody can match. Lowest power and fastest ARM cores on the planet. And Apple pays a LOT for that.

For example, recently Apple locked up 50% of AUO's display capacity by offering them 400% of the current price for the displays. They could very likely do the same thing with Intel.

I don't see Nvidia doing that. Seems they would price themselves out of the market.
 
Last edited:

jimbo75

Senior member
Mar 29, 2011
223
0
0
Except for the fact that the gray line is pretty much purely theoretical - it could well be that it wouldn't have scaled as well in reality.

Yeah, it might have been better too. What a fun game of make-believe we can play with this.

Makes me wonder how long it'll be before anyone else even ships product on a 2x-nm node, much less actual 22nm. Really seems like the manufacturing gap is only widening as time goes on... Heh, has anyone else even shipped product yet with HKMG? Or is it the case that the first products using it will be the upcoming Bulldozer/Llano... over 3 years after Intel introduced it on their 45nm process with Penryn.

I could have said the same thing about immersion litho just over a year ago, or about SOI still. We'll see soon enough what extra that brings.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
When I saw this today I really thought it was about Intel's SOI that I read about not long ago. I like this news way better; Intel can try the SOI on Haswell at 22nm.
 

jimbo75

Senior member
Mar 29, 2011
223
0
0
Well, SOI is nearing the end of its life anyway. I'm just pointing out that AMD had stuff that Intel didn't have, not that it made much difference. Then again, how much of that was actual process and how much was architecture?

We won't know if SOI was actually worth much until we see Llano and BD, as that will be the first time both companies have had HKMG and Immersion at the same time. Maybe SOI is worth another 10% to BD, maybe it's not, we shall see soon.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Well, the news is it's just beginning for Intel's SOI, according to a topic we had here not that long ago. It's not IBM's SOI but Intel's. Intel didn't buy the license for IBM's SOI, and from what I have seen they didn't need to add the extra cost of SOI to their chips. Intel likes margins.
 
Last edited:

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Why do people think immersion lithography provides a performance benefit?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
The on/off switch transistor, though simple, had to be developed in very small baby steps here on Earth. Unless the alien tech was so advanced as to make 14nm look huge, we could recover the tech and examine the transistors through electron microscopes. I believe a German invented it in the early '30s, if I recall correctly. We could have seen the tech and started developing toward it years before it bore fruit.
With that thinking you can easily argue that every trivially simple technology was born from alien tech.

The fork? Alien tech. We pitiful humans are way too stupid to have ever developed it on our own.

Xtors though, thankfully I worked on them in first person, so I do know different. There is nothing alien about them or our understanding of the physics that underlie them.

It was a natural transition from vacuum tubes (the market existed), and discovering their semiconductor properties was no more difficult or improbable than the discovery of vulcanization of rubber. (i.e. both were accidents, but easy ones to make that would have happened sooner or later)

It's too bad though that aliens traveled all the way here and all we got out of it was the internet and the ability to twitter. We should have taken the FTL drive.

Ivy Bridge alien tech still isn't going to get you to the nearest star in anything less than decades while traveling at the speed of light :p
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Oh, IDC, I recall the vacuum tubes and one of their uses during the big war. You know what I mean. Binary language and transistors: which came first, the chicken or the egg? The vacuum tube was a neat idea for implementing binary, but the switching speed was a NIGHTMARE. I take it, IDC, that you believe we're alone in the universe; kind of short-sighted, I would say. As for it taking 10 years to get to the nearest star, I could debate that with ya and probably out-point ya. Ever hear of Nemesis? It's a brown dwarf that is a companion to our Sun. It's getting closer and closer. Many scientists are beginning to examine our system more closely. Many say it's a real possibility that our Sun has a companion. The proof will likely kill ya, tho.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
With that thinking you can easily argue that every trivially simple technology was born from alien tech.

The fork? Alien tech. We pitiful humans are way too stupid to have ever developed it on our own.

Xtors though, thankfully I worked on them in first person, so I do know different. There is nothing alien about them or our understanding of the physics that underlie them.

It was a natural transition from vacuum tubes (the market existed), and discovering their semiconductor properties was no more difficult or improbable than the discovery of vulcanization of rubber. (i.e. both were accidents, but easy ones to make that would have happened sooner or later)

It's too bad though that aliens traveled all the way here and all we got out of it was the internet and the ability to twitter. We should have taken the FTL drive.

Ivy Bridge alien tech still isn't going to get you to the nearest star in anything less than decades while traveling at the speed of light :p

Removed post as it went in directions not suited to this topic. It wasn't me who brought up transistors from UFOs.
 
Last edited:

Martimus

Diamond Member
Apr 24, 2007
4,490
157
106
With that thinking you can easily argue that every trivially simple technology was born from alien tech.

The fork? Alien tech. We pitiful humans are way too stupid to have ever developed it on our own.

Xtors though, thankfully I worked on them in first person, so I do know different. There is nothing alien about them or our understanding of the physics that underlie them.

It was a natural transition from vacuum tubes (the market existed), and discovering their semiconductor properties was no more difficult or improbable than the discovery of vulcanization of rubber. (i.e. both were accidents, but easy ones to make that would have happened sooner or later)

It's too bad though that aliens traveled all the way here and all we got out of it was the internet and the ability to twitter. We should have taken the FTL drive.

Ivy Bridge alien tech still isn't going to get you to the nearest star in anything less than decades while traveling at the speed of light :p


I think we could get to the nearest star in 8 minutes if we traveled at the speed of light, since it is 149 million kilometers (93 million miles) away. Of course, I don't expect us to travel at the speed of light to get there, so at our more conventional speeds it probably would take decades to get there.
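For what it's worth, the 8-minute figure checks out if "nearest star" means the Sun; the nearest star beyond the Sun, Proxima Centauri, is a multi-year trip even at light speed. A quick sanity check in Python with rounded textbook values:

```python
# Light travel times, using rounded textbook values.
c_km_s = 299_792                  # speed of light, km/s
au_km = 149_600_000               # Earth-Sun distance (~1 AU), km
proxima_ly = 4.24                 # distance to Proxima Centauri, light-years

seconds_per_year = 365.25 * 24 * 3600
ly_km = c_km_s * seconds_per_year           # one light-year in km (~9.46 trillion)

print(f"Sun: {au_km / c_km_s / 60:.1f} light-minutes away")        # ~8.3 minutes
print(f"1 light-year ~= {ly_km / 1.609:.2e} miles")                # ~5.9e12 miles
print(f"Proxima Centauri: {proxima_ly} years one-way at light speed")
```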
 

pm

Elite Member Mobile Devices
Jan 25, 2000
7,419
22
81
The on/off switch transistor, though simple, had to be developed in very small baby steps here on Earth. Unless the alien tech was so advanced as to make 14nm look huge, we could recover the tech and examine the transistors through electron microscopes. I believe a German invented it in the early '30s, if I recall correctly. We could have seen the tech and started developing toward it years before it bore fruit.

I remember back when I worked on the Pentium microprocessor (so like 15 years ago), one of the running jokes in the debug labs was that whenever we'd be stuck on a problem and we'd be thinking hard about a solution someone would say "you know, we just need to go check the alien schematics and see how we are supposed to be doing this."

If only it was that easy...

How did we manage to get this far off topic anyway?


* Not speaking for Intel Corp. *
 
Last edited:

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
I think we could get to the nearest star in 8 minutes if we traveled at the speed of light, since it is 149 million kilometers (93 million miles) away. Of course, I don't expect us to travel at the speed of light to get there, so at our more conventional speeds it probably would take decades to get there.

Ya, in terms of space 1 AU isn't a long distance, nor is 1 light-year (about 6,000,000,000,000 miles). I don't know, do space and time fold? Kinda like a 3-D transistor, in a manner.
 

Tsavo

Platinum Member
Sep 29, 2009
2,645
37
91
With that thinking you can easily argue that every trivially simple technology was born from alien tech.

The fork? Alien tech. We pitiful humans are way too stupid to have ever developed it on our own.

Xtors though, thankfully I worked on them in first person, so I do know different. There is nothing alien about them or our understanding of the physics that underlie them.

It was a natural transition from vacuum tubes (the market existed), and discovering their semiconductor properties was no more difficult or improbable than the discovery of vulcanization of rubber. (i.e. both were accidents, but easy ones to make that would have happened sooner or later)

It's too bad though that aliens traveled all the way here and all we got out of it was the internet and the ability to twitter. We should have taken the FTL drive.

Ivy Bridge alien tech still isn't going to get you to the nearest star in anything less than decades while traveling at the speed of light :p

Actually.

I used to work for the government as an engineer. Type and place are irrelevant.

In 2006, I was lead engineer on a team tasked with reverse engineering three UFOs that crashed in Vermont. Actual From-Another-Planet-UFOs and not weather balloons gone astray type of UFOs.

I can't talk about much of the project...not because of some red classified stamps across everything, but because, quite simply, there isn't much to talk about. Their propulsion systems were largely destroyed, and what's left makes absolutely no sense. Their power system is fusion, something we'd have 20 years from now if we spent money on that instead of tanks and F-22's, but that's another discussion.

Much of what they have is made from alloys of metals that simply don't exist here on Earth... at least not yet. Much of their technology is sitting on shelves until we have the technology and science to understand it. The irony is, once we have the technology and science, the alien spacecraft won't be of much use to us. We can't replicate it today, and when we can, we won't need to.

At any rate.

The processors in the alien craft are ... from Earth. Intel Pentium 4 EE editions, circa 2004. More than 8000 of em, too. Humans, it seems, are the most adept EE's in the galaxy. Aliens that can get here, get here to steal our CPUs and not stick LED's up some redneck's ass.

Pressing on.

We got a decent enough idea of their power system, and learned that they couldn't power both their nav and propulsion system while keeping the many banks of CPUs cooking. They pre-plotted their route, shut down 75% of their CPU banks, then lit the fires.

In this case, they lit the fires right into Pico Peak.

Due to the extreme nature of the crash, we learned absolutely nothing about alien anatomy. Want to know what an alien looks like? Got a jar of raspberry jam in the fridge? That's what we have, and we got that by scraping it off the inner windshields of their spacecraft. Well, obviously raspberry jam didn't fly three spacecraft into Pico Peak, but you get my idea.

If they'd just waited a year, they could have upgraded all their P4 gear to Athlon 64 X2s, nearly halved their power budget, and been able to run their nav while in flight.

We're all pretty sure they wet themselves when they found out about AVX in Sandy Bridge.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Dozens of multigate transistor variants may be found in the literature. In general, these variants may be differentiated and classified in terms of architecture (planar vs. non-planar design) and number of channels/gates (2, 3, or 4).

Planar double-gate transistors

Planar double-gate transistors employ conventional planar (layer by layer) manufacturing processes to create double-gate devices, avoiding more stringent lithography requirements associated with non-planar, vertical transistor structures. In planar double-gate transistors the channel is sandwiched between two independently fabricated gate/gate oxide stacks. The primary challenge in fabricating such structures is achieving satisfactory self-alignment between the upper and lower gates.[3]

Flexfet

Flexfet is a planar, independently double-gated transistor with a damascene metal top-gate MOSFET and an implanted JFET bottom gate that are self-aligned in a gate trench. This device is highly scalable due to its sub-lithographic channel length; non-implanted ultra-shallow source and drain extensions; non-epi raised source and drain regions; and gate-last flow. Flexfet is a true double-gate transistor in that (1) both the top and bottom gates provide transistor operation, and (2) the operation of the gates is coupled such that the top gate operation affects the bottom gate operation and vice versa.[4] Flexfet was developed, and is manufactured, by American Semiconductor, Inc.

FinFETs

[Figure: A double-gate FinFET device.]

The term FinFET was coined by University of California, Berkeley researchers (Profs. Chenming Hu, Tsu-Jae King-Liu and Jeffrey Bokor) to describe a nonplanar, double-gate transistor built on an SOI substrate,[5] based on the earlier DELTA (single-gate) transistor design.[6] The distinguishing characteristic of the FinFET is that the conducting channel is wrapped by a thin silicon "fin", which forms the body of the device. The thickness of the fin (measured in the direction from source to drain) determines the effective channel length of the device.

In current usage the term FinFET has a less precise definition. Among microprocessor manufacturers, AMD, IBM, and Motorola describe their double-gate development efforts as FinFET development whereas Intel avoids using the term to describe their closely related tri-gate [1] architecture. In the technical literature, FinFET is used somewhat generically to describe any fin-based, multigate transistor architecture regardless of number of gates.

A 25-nm transistor operating on just 0.7 Volt was demonstrated in December 2002 by Taiwan Semiconductor Manufacturing Company. The "Omega FinFET" design, named after the similarity between the Greek letter "Omega" and the shape in which the gate wraps around the source/drain structure, has a gate delay of just 0.39 picosecond (ps) for the N-type transistor and 0.88 ps for the P-type.

Tri-gate transistors (Intel)

[Figure: Schematic view (left) and SEM view (right) of Intel tri-gate transistors.]

Tri-gate or 3-D are terms used by Intel Corporation to describe their nonplanar transistor architecture planned for use in future microprocessor technologies. These transistors employ a single gate stacked on top of two vertical gates, allowing for essentially three times the surface area for electrons to travel. Intel reports that their tri-gate transistors reduce leakage and consume far less power than current transistors.

In the technical literature, the term tri-gate is sometimes used generically to denote any multigate FET with three effective gates or channels.

Intel currently plans on releasing a new line of CPUs, termed Ivy Bridge, which feature tri-gate transistors. [7]

Gate-all-around (GAA) FETs

Gate-all-around FETs are similar in concept to FinFETs except that the gate material surrounds the channel region on all sides. Depending on design, gate-all-around FETs can have two or four effective gates. Gate-all-around FETs have been successfully built around silicon nanowires.
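As a rough illustration of the "three times the surface area" point above: the gate wraps two sidewalls plus the top of the fin, so for a fin of height H and width W the effective channel width is roughly 2H + W, versus W for a planar device in the same footprint. Here is a minimal Python sketch; the fin dimensions are made-up assumptions, not Intel's published 22nm geometry.

```python
# Effective channel width of a tri-gate fin vs. a planar device with the same footprint.
# Fin dimensions are illustrative assumptions, not Intel's published 22nm geometry.

def effective_width(fin_height_nm, fin_width_nm, num_fins=1):
    """Tri-gate: the gate covers two sidewalls plus the top, so W_eff = n * (2H + W)."""
    return num_fins * (2 * fin_height_nm + fin_width_nm)

# A roughly square fin (H == W) recovers the "about three times" figure quoted above.
h, w = 20.0, 20.0                                   # assumed dimensions in nm
print(effective_width(h, w) / w)                    # -> 3.0

# Taller, narrower fins buy even more drive width per unit of layout footprint.
print(effective_width(30.0, 10.0) / 10.0)           # -> 7.0
```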
 

drizek

Golden Member
Jul 7, 2005
1,410
0
71
The processors in the alien craft are ... from Earth. Intel Pentium 4 EE editions, circa 2004. More than 8000 of em, too. Humans, it seems, are the most adept EE's in the galaxy. Aliens that can get here, get here to steal our CPUs and not stick LED's up some redneck's ass.

It's worse than we thought. It's bad enough that Intel was paying off Dell and HP to get them to use their CPUs; I wonder what they did to the aliens to prevent them from getting Opterons...