Semimd: Deep Inside Intel (An Interview with Mark Bohr)

cbn

Lifer
http://semimd.com/blog/2012/10/18/deep-inside-intel/


Deep Inside Intel

By Ed Sperling
Semiconductor Manufacturing & Design sat down with Mark Bohr, senior fellow at Intel, to talk about a wide range of manufacturing and design issues Intel is wrestling with at advanced nodes—and just how far the road map now extends.

SMD: Will EUV make 10nm? And if it doesn’t, what effect will that have on Intel?
Bohr: For a process module as critical as lithography, Intel always has more than one option we pursue. In this era, the options are either EUV or 193nm immersion with multi-patterning.

SMD: How about directed self-assembly?
Bohr: That’s not a universally usable approach. You still need to define some layers with direct patterning, not a self-assembly technique. That’s a niche direction that will not replace these mainstream lithography techniques, but there may be some layers where it can complement the normal patterning techniques.

SMD: What’s your opinion about the future of the foundry business?
Bohr: The traditional foundry model is running into problems. In order to survive, the foundries will have to become more like an integrated device manufacturer. Even some of the chief spokespeople for the foundries have said something similar. The foundry model worked well when traditional scaling was being followed and everybody knew where we were headed. In this era, where you continually have to invent new materials and new structures, it’s a lot tougher being a separate foundry and fabless design house. Being an IDM, we have design and process development under one roof. That’s really a significant advantage.

SMD: Can even Intel afford to be an independent IDM? The cost of building state-of-the-art fabs at future nodes is astronomical.
Bohr: Yes. We have the volume and the products that can fill multiple fabs.

SMD: But you’ve also opened up your fabs to at least a couple of customers. Are you planning on extending that?
Bohr: Our motivation is that we know we have great process technology, and partnering with other strategic companies can be a win-win situation. We can sell our technology and make more money off what we’ve developed, and they can have some very compelling products. It’s not Intel’s goal to be a general-purpose foundry, but we will be partnering where it makes strategic sense.

SMD: Is Intel sticking to bulk CMOS or will it move to new materials such as fully depleted SOI?
Bohr: We see more advantages in bulk than SOI. I won’t say SOI won’t be in the future. There may be some device structure that is better done in SOI than bulk. But I don’t see that happening right now. When we first announced that we were making TriGate or finFET devices at 22nm, we said we could make these devices on SOI as well. But we think there are cost advantages to doing TriGate on bulk rather than SOI. That’s our plan for the foreseeable future.

SMD: What comes after the current finFET?
Bohr: The finFET is scalable to 14nm.


SMD: But if you’re at 22nm, 14nm isn’t very far away, so you’ve got to be working on the next step.
Bohr: For Intel, you’re right. For other companies, it’s many years away. For 10nm, which is where I’m spending most of my time these days, I know we have a solution. I can’t elaborate at this point.

SMD: At 10nm aren’t you running into quantum effects?
Bohr: Everything gets different and tougher, but the problems are solvable—at least at that generation.

SMD: How far ahead can you see?
Bohr: I know we can get to 10nm. Beyond that, our research group is working on solutions for 7nm and 5nm. I have confidence we’ll have solutions for those. But by the time we’re down to 5nm we’ll be looking at unfamiliar devices and device structures. That’s what we’ll have to do to get down to that level.

SMD: Where do stacked die fit into your roadmap?
Bohr: 3D stacked die have advantages, but only for certain market segments. You have to be very clear about what problem and what market segment you’re trying to serve. For a small handheld application where a small footprint and form factor are key and power levels are low, it probably makes good sense to use 3D stacking. For desktop, laptop and server applications where form factor isn’t as valuable and power levels are higher, 3D stacking has some problems that make it not an ideal solution.

SMD: Along those lines, does Intel see the smart phone and small mobile device market as a key direction?
Bohr: Intel is very serious about getting into the smart phone and tablet markets. We are a very different company from what we were five or six years ago. We are developing process technologies, and also products, that span a much wider range of performance and power than ever before in our history. We’re not just after the high-performance desktop. We’re developing products that span 100-watt server chips down to sub-1-watt smart phone chips.

SMD: There are a number of interesting techniques Intel is working with, such as near-threshold computing. How will power management start changing inside these chips?
Bohr: When you’re talking about developing a smart phone chip that is ultra low power but also provides the improved performance features the market expects, you have to pull every trick out of the bag. You need great transistor technology, great package technology, great CPU architectures, the ability to turn off parts of the chip when you don’t need them so you’re saving power, and software that links with the chip design so it knows when to throttle power down. You need the transistors, the CPU architecture and the software all to be effective in that space.
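To make that "turn parts off and throttle down" idea concrete, here is a minimal, purely illustrative sketch of a DVFS-style governor. The P-state table and the C·V²·f power model are textbook approximations of my own invention, not any real Intel interface:

```python
# Hypothetical sketch of the software/hardware power link Bohr describes:
# pick the lowest frequency/voltage state that covers current demand, so
# dynamic power (~ C * V^2 * f) is only spent when the workload needs it.
from dataclasses import dataclass

@dataclass
class PState:
    freq_ghz: float
    volt: float

# Assumed DVFS table, ordered slowest to fastest (illustrative numbers).
PSTATES = [PState(0.6, 0.7), PState(1.2, 0.85), PState(2.0, 1.0), PState(2.5, 1.1)]

def dynamic_power(s: PState, switched_cap: float = 1.0) -> float:
    """Classic CMOS switching-power approximation: P ~ C * V^2 * f."""
    return switched_cap * s.volt ** 2 * s.freq_ghz

def pick_pstate(utilization: float) -> PState:
    """Choose the slowest state whose speed covers the demanded fraction
    of peak throughput; fully idle blocks would be power-gated entirely."""
    peak = PSTATES[-1].freq_ghz
    for s in PSTATES:
        if utilization <= s.freq_ghz / peak:
            return s
    return PSTATES[-1]

for util in (0.1, 0.4, 0.7, 1.0):
    s = pick_pstate(util)
    print(f"util={util:.1f} -> {s.freq_ghz:.1f} GHz @ {s.volt:.2f} V, "
          f"P~{dynamic_power(s):.2f} (arbitrary units)")
```

Running the loop shows the quadratic payoff being alluded to: dropping from 2.5 GHz at 1.1 V to 0.6 GHz at 0.7 V cuts the modeled dynamic power by roughly 10x, not merely the ~4x you would get from frequency alone.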

SMD: How many cores will be required in the future?
Bohr: It depends on the market. In the server market, the more cores you can pack on the better. But in desktops, laptops and smart phones, there’s probably a limit to how many cores are practical. It’s not one. It’s probably several.

SMD: But less than eight?
Bohr: Yes, probably less than eight. But when you talk about the number of cores and computing engines, it depends on whether you’re dealing with traditional computing tasks where four cores are better than two cores. If you’re talking about execution engines in a graphics processor, clearly you want more cores.

SMD: What does this do for Intel’s platform strategy, particularly as you go after many markets with very specific needs?
Bohr: Even for Intel there is probably an optimal number of chip designs. It’s not like in the past, where we tried to make one size fit all or have one chip serve multiple markets. But on the other hand, trying to design and manufacture dozens of very different designs in a generation is also impractical. There’s an optimal number of designs, although I don’t know what that number is, that can best meet the market requirements. You want to make as few iterations between the different designs as you can, or re-use the cores or some of the circuit blocks between the different chips so you’re not completely redesigning each one.

SMD: Are there other materials being considered for transistors?
Bohr: Our research group has been publishing papers about using III-V materials (http://en.wikipedia.org/wiki/List_of_semiconductor_materials) for the channels. You deposit an indium phosphide or gallium arsenide layer on top of silicon to make a transistor on the surface. It’s still a silicon wafer, but you’re looking at depositing more exotic materials. That’s new and different, and it may happen, but it’s not yet fully resolved how good that approach may be.

SMD: Has the priority for what you’re designing into a chip changed? Is it still all about performance, or has power overtaken that?
Bohr: Ten or 15 years ago, performance was the main goal in developing a new process technology. That really has gone away as the No. 1 priority. We still strive to provide a performance boost with each new technology, but there’s much more emphasis on improving power or efficiency with each new generation. We do that by reducing active power for the work a chip does. That’s a much more important goal for us today. Part of the reason is that the market has shifted from desktop applications to more mobile products. The first transition was from desktops to laptops. Now the move is to put things into smart phones. Today’s consumer wants computing power he can hold in his hand in the form factor of a smart phone and a tiny battery. He wants the performance he had on his laptop only three or four years ago. That’s what we shoot for.

SMD: That shifts the biggest challenge to the architecture, right?
Bohr: Yes. Whether it’s low-power, low-leakage transistors or a more efficient core architecture—or linking that with more efficient software.

SMD: What becomes the next big bottleneck?
Bohr: We have lots of challenges. Lithography is the key challenge in making transistors smaller: whether EUV will happen on time, or we will have to extend immersion using multiple patterning. But when you make transistors smaller they don’t become less leaky. In fact, the opposite is true. You have to continually invent new structures and materials to allow feature-size scaling, which is critical for active power reduction and for cost.

SMD: But wires don’t scale well. How do you deal with that?
Bohr: RC delay gets worse as you scale, compared with transistors, which tend to get faster as you scale down. The industry has had 20-plus years of struggling with that problem. One way we’ve addressed it is that we’re no longer striving for very high operating frequency, especially in the phone market, where 2 or 2.5GHz would probably be sufficient. That’s one advantage. The other advantage is that the average size of the chips is smaller in these laptop and cell phone applications, so you don’t have interconnects traveling a long distance across a large chip. Instead, it’s a more compact chip, so the signals don’t have to go as far. But even with those chips, we still have a performance challenge from the interconnect. We have to be clever about what pitches we choose. Some of the lower layers are dense pitch, where density is important. Some of the upper layers are coarser pitch, where performance is important. We’re also continuing to drive down interconnect capacitance by employing lower-k dielectrics.
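A rough back-of-the-envelope view of why RC delay worsens with scaling (my gloss on the standard wire-scaling argument, not something Bohr states):

```latex
R \approx \frac{\rho L}{W\,T} \approx \frac{\rho L}{p^{2}},
\qquad
C \approx \varepsilon L\left(\frac{T}{S} + \frac{W}{H}\right) \approx 2\,\varepsilon L,
\qquad
\tau_{RC} = R\,C \;\propto\; \frac{\rho\,\varepsilon\,L^{2}}{p^{2}}
```

Here ρ is the wire resistivity, ε the dielectric permittivity, L the wire length, and the width W, thickness T, spacing S and dielectric height H are all taken to be on the order of the pitch p. A wire that must still span the same distance L sees its delay roughly quadruple when the pitch is halved, which maps directly onto the three levers above: smaller dies (smaller L), relaxed pitches on upper layers (larger p where speed matters), and lower-k dielectrics (smaller ε).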

SMD: Is the interconnect becoming more problematic?
Bohr: If you talked to a designer 10 years ago you would have heard the same thing. Maybe now they’re saying, ‘This time we’re really serious.’

SMD: How about new interconnect technology?
Bohr: It’s hard to replace copper and low-k other than by making lower k. But at least in the low-power cell phone market, stacking chips does help to minimize some of the interconnect issues, particularly between the logic and the memory chips.

SMD: You’re referring to through-silicon vias?
Bohr: Yes.

SMD: So if Intel is planning to get into that market, the company is experimenting with that technology right now?
Bohr: Yes, and we’ve been public about exploring TSV and 3D technology for a couple of years. Although there are some challenging technology aspects, the real issue is cost. Doing TSVs and stacking chips—especially these custom Wide I/O chips—is expensive. So this might be a better engineering solution in terms of density, performance and power, but will the market bear the added cost? Not all markets will bear the higher cost.

Great Interview!! (Lots of really interesting points besides the stuff I bolded!)

However, the discussion of increasing leakage as xtors shrink, coupled with the increasing RC delay of the interconnect wires, really got me thinking about what kind of new xtor structures Intel will use for 10nm and beyond.

For a phone, it appears the ideal configuration is DRAM stacked on top of the CPU cores (this helps reduce some interconnect delay). But at the same time it sounds like Intel might be aiming for up to 2.5GHz operation for the CPU cores.

2.5GHz CPU cores with stacked DRAM (for 10nm and beyond)... to me that implies a very low leakage xtor design needs to be used, if only to control heat density in such a constrained environment.
 




Great Interview!! (Lots of really interesting points besides the stuff I bolded!)

However, the discussion of increasing leakage as xtors shrink, coupled with the increasing RC delay of the interconnect wires, really got me thinking about what kind of new xtor structures Intel will use for 10nm and beyond. (particularly for mobile)

Considering the relatively minimal gains in the shrink from SB to IVB, and the problems with higher temperatures, I wonder how much longer Intel's process advantage is going to carry over into practical advantages for the end user. I guess what I am trying to say is that I have no doubt Intel will continue to have a node advantage over its competitors; I just wonder if they are not getting to the point of diminishing returns from having a smaller process.
 

inf64

Diamond Member
Mark Bohr is a true genius, period. I have no doubt that Intel will transition to 10nm without much trouble, but we have to wait and see what happens after 10nm. They do have a lot of money to burn on R&D, but as he said, the added costs of the new technologies (such as TSVs, for example) are going to hurt the pricing structure of the chips that employ them.
 

Idontcare

Elite Member
Mark Bohr is a true genius, period.

Agreed, you can really tell by the clarity and focus of his answers that the guy is on top of his game. He is not simply talking the talk.

It does seem to be a general consensus - be it from Intel, Samsung, or TSMC - that shrinking runs into what seem to be insurmountable challenges around 5nm.

They are doing such amazingly complicated things to the xtor now just to make sub-32nm work - going with HKMG and then FinFETs - that the mind boggles at just how much more complicated and foreign the xtor is going to become just to make sub-10nm work.
 

Idontcare

Elite Member
Doesn't matter, since Intel is going to die because of ARM processors, amirite?

Intel has an ARM license. If it made financial sense to them to fab ARM chips in their 14nm fabs instead of fabbing Atom chips in the same fabs, there can be no doubt that they would do it.

Intel has no problem doing ARM, as they showed with XScale. But they aren't going to take the lower-profit road if they can see a way to stick to the higher-profit road.

ARM vendors are going to have to deal with the same thing AMD had to deal with: a growing process-node gap between their products and those offered by Intel.
 

krumme

Diamond Member
I think the most important point is:

"we have design and process development under one roof. That’s really a significant advantage."

That advantage Intel will keep for many years, even when we reach the point where the market cannot carry the expensive nodes, because that competence - the knowledge that comes from the feedback between design and process development (production) - can make it cheaper to address certain OEM and application-specific needs.

But notice there is an elephant in the room: Samsung. What they have been extremely good at is connecting market needs to production competences. And besides the fabs, they have all kinds of other technologies for making excellent products, in areas where Intel has nothing.

Can a CPU process advantage make up for that?

American engineers and technicians are going to have to fight against Korean salaries. And you need more than a genius here - you need a brute force of thousands of engineers working like mad :)
 
Intel has an ARM license. If it made financial sense to them to fab ARM chips in their 14nm fabs instead of fabbing Atom chips in the same fabs, there can be no doubt that they would do it.

Intel has no problem doing ARM, as they showed with XScale. But they aren't going to take the lower-profit road if they can see a way to stick to the higher-profit road.

ARM vendors are going to have to deal with the same thing AMD had to deal with: a growing process-node gap between their products and those offered by Intel.

Oh, you should know that I'm a super sarcastic arse when it comes to the ARM vs. x86 thing. The notion that Intel "can't" compete in low power is patently ludicrous.
 

Idontcare

Elite Member
Oh, you should know that I'm a super sarcastic arse when it comes to the ARM vs. x86 thing. The notion that Intel "can't" compete in low power is patently ludicrous.

I'm with ya bro, I was adding more fuel to the fire, wasn't meaning to come across as debating you or that I thought you were "of that persuasion" in the topic per se ;)

The piece Intel is missing, and they know this, is the analog part. But they are aggressively working on that. They know they need to nail the analog aspects of low-power SOC design before they can convince the markets to buy their digital CMOS products in bulk.

It is one of those things that I think is just an unavoidable consequence of the math when you start looking at what Intel must be planning if they intend to have three or four 450mm 10nm fabs loaded. They won't care whether they are shoveling x86-based chips or ARM-based chips out of the fab for 6 billion smartphones; chips is chips at that point.
 

thilanliyan

Lifer
Any relation to Niels Bohr? :p

It is really interesting how the whole process node thing develops. Is it a necessity to go below 5nm? What drives this whole machine? The need for higher performance? Or the need to cut die costs? Both?
 

Idontcare

Elite Member
Any relation to Niels Bohr? :p

It is really interesting how the whole process node thing develops. Is it a necessity to go below 5nm? What drives this whole machine? The need for higher performance? Or the need to cut die costs? Both?

It really is purely about stabilizing gross margins by lowering the cost of production and/or elevating the ASP of the product.

[Attached images: Graph1.png, Graph3.png, Figure5.png, Figure6.png]


Shrinking reduces the cost of production, but it also pushes the minimum of the cost curve out to the right, meaning you continually get bigger cost reductions if you not only shrink your device but also increase the total number of components ever so slightly (xtor count goes up).

Companies use this slight increase in xtor budget, while hitting the optimal minimum from shrinking, to increase performance so they can protect against ASP erosion, or possibly gain an increase in ASP if the performance and competitive situation enable it.

Moore's law has always been about economics, maximizing profits and gross margins by enabling higher ASP and lowering production costs in a timely fashion that was easily manageable by existing business management metrics.
 

thilanliyan

Lifer
Moore's law has always been about economics, maximizing profits and gross margins by enabling higher ASP and lowering production costs in a timely fashion that was easily manageable by existing business management metrics.

Thanks for the great info!

Now what happens when they get to 5nm or whatever the limit on "current" tech is? Is there another method for them to reduce production cost without switching to a new node? What if some "new" tech turns out to be more expensive than sticking to 5nm (at least in the short term)?
 

ShintaiDK

Lifer
Thanks for the great info!

Now what happens when they get to 5nm or whatever the limit on "current" tech is? Is there another method for them to reduce production cost without switching to a new node? What if some "new" tech turns out to be more expensive than sticking to 5nm (at least in the short term)?

You can increase wafer size to reduce cost. But beyond that, assume the economics or the physics simply don't allow anything below, say, 5nm. And let's say photonics, quantum computing and whatever else also fail. Then we simply end at 5nm and only the designs can improve, but without an increasing transistor budget. In that case you're going to start seeing some serious redesigns and trims.
 

Idontcare

Elite Member
Thanks for the great info!

Now what happens when they get to 5nm or whatever the limit on "current" tech is? Is there another method for them to reduce production cost without switching to a new node? What if some "new" tech turns out to be more expensive than sticking to 5nm (at least in the short term)?

When traditional scaling runs into an economic brick wall that is when you will begin to see unconventional compute methods coming to the fore.

What do I mean by "unconventional compute methods"? I mean things like virtual scaling: side-stepping physical scaling and instead scaling the effective electronic structures in k-space (aka the reciprocal lattice).

In physics, the reciprocal lattice of a lattice (usually a Bravais lattice) is the lattice in which the Fourier transform of the spatial wavefunction of the original lattice (or direct lattice) is represented.
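Concretely, the textbook construction (standard solid-state physics, added here for reference rather than taken from the post itself) is:

```latex
\mathbf{b}_1 = 2\pi\,\frac{\mathbf{a}_2 \times \mathbf{a}_3}{\mathbf{a}_1 \cdot (\mathbf{a}_2 \times \mathbf{a}_3)},
\qquad
\mathbf{a}_i \cdot \mathbf{b}_j = 2\pi\,\delta_{ij}
```

with b_2 and b_3 given by cyclic permutation of the direct-lattice vectors a_1, a_2, a_3. Note the inversion: a direct lattice of spacing a has a reciprocal lattice of spacing 2π/a, so features that are small in real space are large in k-space - which is what makes "scaling in k-space" a conceptually different axis from physical shrinking.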

[Attached image: comb2d.png]


See this cool tutorial, complete with a Java applet for interacting with the direct lattice and seeing the effect on the reciprocal lattice.

That is for scaling traditional compute circuits into a virtual space in which the physical components don't scale but the electronic versions of them do.

Another way to continue scaling compute density is to increase the complexity of the compute itself - go beyond binary to ternary and so forth, or quantum.

This kind of stuff will be avoided at nearly all costs because the managerial complexity associated with attempting to navigate such a project is extremely risky.

Even if the science is sound that doesn't mean the people managing the project, or the people at the directorship level (managers of project managers) will know up from down when it comes to reducing these sciences to practice and create a shipping/functioning product.

But someone will figure it out. Consider the stone-age computational tools that the likes of Fermi and Einstein had at their disposal while solving all the physics equations and engineering challenges necessary to create the atom bomb. Where there is a will there is a way; someone will find a way to scale compute density (performance and cost) below the 5nm-equivalent threshold.
 

thilanliyan

Lifer
Thanks for all the info IDC. I'll have to read more about it when I get a chance. I took a semiconductor materials course in uni, but it didn't go that in-depth in terms of what is or could be coming in the future. Plus, my Materials degree hasn't taken me into the semiconductor realm (I've been mostly in "traditional" metallurgical work), but I like to keep abreast of tech news when I can. :)
 

cbn

Lifer
SMD: What comes after the current finFET?
Bohr: The finFET is scalable to 14nm.

Some more information on transistor possibilities:

http://semimd.com/blog/2012/08/15/what-comes-after-finfets/

What Comes After FinFETs?

By Mark LaPedus
The semiconductor industry is currently making a major transition from conventional planar transistors to finFETs starting at 22nm.

The question is: what’s next? In the lab, IBM, Intel and others have demonstrated the ability to scale finFETs down to 5nm or so. If or when finFETs run out of steam, there are no fewer than 18 next-generation candidates that could one day replace today’s CMOS-based finFET transistors.

But even the large companies with deep pockets don’t have the time or resources to work on all technologies. “We can’t pick 18,” said Mike Mayberry, vice president and director of components research in the Technology and Manufacturing Group at Intel Corp. “We will develop only a few of them.”

Mayberry said the eventual winners and losers in the next-generation transistor race will be determined by cost, manufacturability and functionality. “The best device is the one you can manufacture,” he said.

In fact, the IC industry is already weeding out the candidates. In 2005, the Semiconductor Research Corp. (SRC), a chip R&D consortium, launched the Nanoelectronics Research Initiative (NRI), a group that is researching futuristic devices capable of replacing the CMOS transistor in the 2020 timeframe. NRI member companies include GlobalFoundries, IBM, Intel, Micron and TI.

So far, the NRI has narrowed down and identified a handful of serious contenders: gate-all-around, silicon nanowires, tunnel field-effect transistors (TFETs), carbon nanotubes, graphene devices, and bilayer pseudo-spin field-effect transistors (BiSFETs).

It’s still too early to determine which future transistor candidate will prevail, said Steven Hillenius, executive vice president of the SRC. “There is still no consensus,” Hillenius said, “but we’ve gone from 20 or so potential devices down to less than 10.”

The finFET and beyond
For now, the industry is banking on the finFET transistor to enable IC scaling for the foreseeable future. The current thinking is that today’s finFET will likely scale at least two generations down to 10nm, said Subramani Kengeri, head of advanced technology architecture at GlobalFoundries. Then, at 7nm, the industry is looking at next-generation finFETs based on III-V or other materials to provide a mobility boost, Kengeri said. It’s too early to predict a winner, as “nothing has been settled,” he added.

Indeed, the future is cloudy at and beyond 10nm. According to the 2011 ITRS roadmap, there are a dizzying array of next-generation transistor options on the table: III-V channel replacement finFETs, carbon nanotube FETs, graphene nanoribbon FETs, nanowire FETs, tunnel FETs, spin FETs, IMOS, negative gate capacitance FETs, NEMS switches, atomic switches, MOTT FETs, spin wave devices, nanomagnetic logic, excitonic FETs, BiSFETs, spin torque majority logic gate and all spin logic.

The futuristic candidates likely will require new materials, manufacturing flows and design methodologies. At the SRC, there is one basic criterion to help narrow down the playing field: “The promising new structures are the ones you can put in the current manufacturing flow. The new materials would be used in conjunction with what we are using now,” said SRC’s Hillenius.

For that reason, one transistor candidate has emerged as the favorite in the race. “At this point, the tunnel FET looks like the best option,” said Chenming Calvin Hu, professor of microelectronics at the University of California at Berkeley. Using III-V materials for the channels, TFETs potentially could extend CMOS. Claiming eight times the performance of today’s MOSFETs, TFETs enable a steeper sub-threshold slope of less than 60 mV/decade. In a TFET, a tunnel barrier is created at the source-channel contact in order to increase the drive current of the transistor.

“It’s likely that the industry will stay with finFETs or tri-gates for the 22nm and 14nm nodes. The earliest introduction of III-V MOSFETs is likely at the 10nm node. This implies that III-V TFETs will appear no sooner than the 7nm technology node,” said Suman Datta, professor of electrical engineering at Pennsylvania State University.

In the lab, Intel has shown TFETs based on III-V materials like InGaAs. “Penn St. and Notre Dame have been able to use staggered and broken-gap tunnel junctions in In(Ga)As/Ga(As)Sb TFETs to demonstrate competitive on-current in experimental devices. These TFETs have all been n-channel demonstrations. Very little work has been done toward p-channel TFETs, and the next challenge would be the demonstration of a steep-switching p-channel TFET for complementary TFET logic,” Datta said.

“The biggest barrier is the introduction of III-V compound semiconductors within a state-of-the-art silicon fab. III-V islands need to be grown selectively on 300mm, or by that time on 450mm substrates, with low defect count using a high volume manufacturing technique,” Datta said.

Besides TFETs, silicon nanowires also could be classified as “an extension to the finFET,’’ said Gary Patton, vice president of the Semiconductor Research and Development Center at IBM. Silicon nanowire field-effect transistors (FETs) are structures in which the conventional channel is replaced with tiny nanowires.

Nanowires also enable what’s considered to be the ultimate solution in the IC industry: gate-all-around (GAA) finFETs. In a GAA FET, the gate wraps entirely around one or more nanowire channels. In a recent paper, Harvard University and Purdue University demonstrated a gate-all-around III-V MOSFET. The device itself boasts 1, 4, 9 or 19 nanowire channels. One of the key fabrication steps is a controlled release process, which is used to form the InGaAs nanowire channels.

“We would likely see GAA devices two to three generations after tri-gate/finFET technology,” said Jiangjiang Gu, a Ph.D. candidate at the Department of Electrical and Computer Engineering at Purdue. “The biggest challenge for GAA devices with III-V channels is how to fabricate ultra-small nanowires with high mobility surfaces and low interface trap densities by a top-down technology. Other challenges include how to form low resistance contacts to these nanowires and how to reduce variations of the GAA devices.”

Carbon nanotubes and graphene
TFETs, nanowire FETs and GAA are arguably the most straightforward extensions to CMOS. Two other options, carbon nanotubes and graphene-based devices, are promising but more exotic approaches. Carbon nanotubes are grown on full wafers and aligned in one direction. They are subsequently transferred to a target substrate multiple times. IBM, for one, has demonstrated sub-10nm carbon nanotubes.

Carbon nanotube FETs (CNFETs) are “the only FET that is projected to outperform the 11nm node ITRS target,” said H. S. Philip Wong, professor of electrical engineering at Stanford University, in a recent paper. CNFETs, according to Wong, face three major challenges: aligned density; stable p- and n-type doping on the same wafer; and low resistance metal to contact at short contact lengths.

In contrast, graphene consists of one-atom-thick planar sheets, which are packed in honeycomb crystal lattice structures. The technology is expensive and difficult to put into manufacturing. And it doesn’t have a band gap, meaning it can’t be turned off in a system.

Still, there is interest in using graphene as a channel-replacement material. IBM, for one, is looking at analog and RF applications for graphene FETs (GFETs). The company has demonstrated a GFET running at 155 GHz with 40nm channel lengths.

In another approach, the University of Texas at Austin has been developing the BiSFET, which is said to have 1,000 times lower power consumption than CMOS. In this device, a p- and an n-type layer of graphene are separated by a dielectric tunnel barrier. Each graphene layer has a metallic contact and is electrostatically coupled to a gate electrode.

“The device is still in an R&D phase. While we have theoretically shown that it should work, we are still struggling to demonstrate functionality in the lab. So at this point, it is premature to think of large scale production,” said Sanjay Banerjee, professor of electrical and computer engineering and director of the Microelectronics Research Center at the University of Texas at Austin.

Researchers are also looking at other technologies. For example, all spin logic (ASL) is gaining interest. ASL uses magnets to represent non-volatile binary data, while the communication between magnets is achieved using spin currents.

Despite the promising research for spin logic and other futuristic devices, the industry faces many challenges to find the right candidate. “Predicting what lies ahead is fraught with peril as our ability to see is dependent on where and how we look,” Intel’s Mayberry said.
 

cbn

Lifer
Bohr said:
But when you make transistors smaller they don’t become less leaky. In fact, the opposite is true. You have to continually invent new structures and materials to allow feature-size scaling, which is critical for active power reduction and for cost.

Some simple explanations of what Intel's Mark Bohr mentioned above:

http://www.imec.be/ScientificReport/SR2011/1414174.html

Downscaling of CMOS technology is currently facing major challenges, mostly due to the non-scalability of the subthreshold slope (SS) of metal-oxide-semiconductor (MOS) devices. This has resulted in a continuous increase of the static power due to the reduction of the threshold voltage along with the supply voltage in order to maintain circuit speed.

Another article ---> http://ambienthardware.com/courses/tfe01/pdfs/Roy1.pdf

I. INTRODUCTION
To achieve higher density and performance and lower power consumption, CMOS devices have been scaled for more than 30 years. Transistor delay times decrease by more than 30% per technology generation, resulting in doubling of microprocessor performance every two years. Supply voltage (VDD) has been scaled down in order to keep the power consumption under control. Hence, the transistor threshold voltage (VTH) has to be commensurately scaled to maintain a high drive current and achieve performance improvement. However, the threshold voltage scaling results in the substantial increase of the subthreshold leakage current [1].

http://www.eetimes.com/electronics-news/4213661/Intel-s-Gargini-sees-tunnel-FET-as-transistor-option (An old 2011 entry from EE Times, but I really liked the comments from Gargini)

Previously the challenge had been perceived to be how to design for the maximum on-current while tolerating or reducing the leakage current, Gargini said. Now – perhaps infused by a new awareness of power efficiency at Intel – Gargini proposed the goal to be to minimize the leakage current and in particular the sub-threshold impact on device performance.

In order to reduce power consumption with each successive node, Vdd (the supply voltage) needs to be lowered. But in order to keep drive current up, the threshold voltage also needs to be lowered. Lower the threshold voltage too much, though, and "standby" power consumption increases (due to leakage).

So the new transistor designs must in some way address this basic problem (along with other problems) associated with traditional CMOS scaling.
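The arithmetic behind that trade-off, in standard device-physics form (my addition for clarity, not taken from the quoted articles):

```latex
SS = \ln(10)\,\frac{kT}{q}\left(1 + \frac{C_{dep}}{C_{ox}}\right) \;\geq\; \sim 60\ \text{mV/decade at } T = 300\,\text{K},
\qquad
I_{\mathrm{off}} \;\propto\; 10^{-V_{TH}/SS}
```

Because the subthreshold slope of a conventional MOSFET cannot drop below about 60 mV/decade at room temperature, every ~60-100 mV shaved off V_TH to preserve drive current at a lower Vdd buys roughly another decade of off-state leakage. That is exactly why sub-60 mV/decade devices such as the tunnel FET, discussed in the article above, are attractive.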
 

cbn

Lifer
I think the most important point is:

"we have design and process development under one roof. That’s really a significant advantage."

That advantage Intel will keep for many years, even when we reach the point where the market cannot carry the expensive nodes, because that competence - the knowledge that comes from the feedback between design and process development (production) - can make it cheaper to address certain OEM and application-specific needs.

But notice there is an elephant in the room: Samsung. What they have been extremely good at is connecting market needs to production competences. And besides the fabs, they have all kinds of other technologies for making excellent products, in areas where Intel has nothing.

Can a CPU process advantage make up for that?

American engineers and technicians are going to have to fight against Korean salaries. And you need more than a genius here - you need a brute force of thousands of engineers working like mad :)

Great post. Yes, Intel vs. Samsung is a great question.

Regarding CPU process advantage, I think it depends on how effective Intel gets at manufacturing these new xtor structures. If Intel nails it... then maybe not having all the other technologies Samsung has won't matter so much?

With that said, I think it will be very interesting to see how Intel integrates its smartphone SoCs. If they can add sufficient logic to their dies (instead of just making them smaller with every node), then maybe they can make great strides in the design of the rest of the phone (to offset the Samsung advantage in components).

For example, take the Lava Xolo mainboard shown below (Intel Penwell SoC, Medfield platform, under the Elpida DRAM) --> http://www.chipworks.com/blog/recentteardowns/2012/05/15/inside-the-lava-xolo-intel-penwell-inside/

[Attached images: full-board-front.jpg, full-board-side1.jpg]


What needs to happen in order for Intel to integrate all those ICs into one or two major chips?
 

cbn

Lifer
What needs to happen in order for Intel to integrate all those ICs into one or two major chips?

http://intelstudios.edgesuite.net/121113_sc12/archive/archive.htm

Anand asks a great question (at 39:25 into the above video) regarding future post-NAND non-volatile memory replacing parts of the existing memory hierarchy and leading to convergence.

The answer was yes: at some point, post-NAND non-volatile memory could be disruptive to DRAM and NAND storage in the memory hierarchy.

P.S. Great webcast! (Lots of complex info made easy to digest. I feel like it really helped my understanding of SSDs.)
 

OVerLoRDI

Diamond Member
Great posts and a really nifty article. I sent it off to some of my EE friends to take a look at :)