Beyond the .05 micron process

beyoku

Golden Member
Aug 20, 2003
1,568
1
71
We are already at .09, and I'm told that once we get to .05, "because of the laws of physics," the electrons will simply jump off the wire and computer chips cannot be made at this level. What is beyond this, and what is in the roadmaps for AMD/Intel from this point? Dual cores? Is this why we jumped to 64-bit on the desktop? Quantum computing, maybe? Any suggestions?
 

BigPoppa

Golden Member
Oct 9, 1999
1,930
0
0
Every time they say we can't go beyond a given process, some new development comes along that lets us. I haven't read up on the hardware side of things for a long while; the last big things I remember were silicon-on-insulator and a new lithography technique.
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
There are several things that are problematic:

*Tunneling processes are a problem, but they are mainly related to the thickness of the insulating layer in the transistors and not (directly) to the minimum linewidth. It is still a problem, but one with a few possible solutions.

*The MAIN factor is actually cost. Linewidths of about 20 nm are not really a problem if you use e-beam lithography (it has been routine for at least 15 years), but you cannot use e-beam for mass production; normally you use an e-beam to make a mask and then use that mask to pattern the wafers with deep-UV light.
There are two problems. First, the cost of making masks has skyrocketed, though I don't think that is a big problem for AMD and Intel. Second, the minimum linewidth is limited by the wavelength of the light used, and so far there are no practical light sources beyond deep-UV (see the sketch below).
There are a few "tricks" you can use to make even narrower structures, immersion lithography for example, though from what I can tell there are problems with using it in mass production, and there are still limits.
The main technological issue right now seems to be heat: as you make things smaller, the problem of how to cool your device grows.
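
To put a rough number on that wavelength limit, here is a minimal sketch using the standard Rayleigh resolution criterion, CD = k1 * lambda / NA. The k1 and NA values are my own illustrative assumptions, not figures from this thread.

```python
# Back-of-the-envelope Rayleigh criterion: the smallest printable
# feature scales with the exposure wavelength.
# CD = k1 * lambda / NA. All parameter values are assumed for illustration.

wavelength_nm = 193.0  # deep-UV ArF excimer laser
na = 0.75              # assumed numerical aperture of a dry projection lens
k1 = 0.4               # assumed process factor (theoretical floor is 0.25)

cd = k1 * wavelength_nm / na
print(f"Minimum linewidth at 193 nm (dry): ~{cd:.0f} nm")  # roughly 100 nm
```

Printing features much smaller than this with the same wavelength is what forces the "tricks" mentioned above.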

My guess is that the "next big thing" will be 3D structures in Si. There are many other materials that could be used, but the whole industry is so focused on silicon that it is unlikely other materials have a chance unless something really spectacular happens (the invention of a cheap way to mass-produce carbon nanotube transistors in a controllable way, for example).

Quantum computers are unfortunately not very fast unless you are dealing with very specific problems, so they will probably never be an option.
 

Cogman

Lifer
Sep 19, 2000
10,286
145
106
One thing, though: why use electrons when we can use light? If we get to a point where electricity is unfeasible, I believe we will turn to light for more speed.
 

EarthwormJim

Diamond Member
Oct 15, 2003
3,239
0
76
Actually, light-emitting transistors have already been created, granted mainly in labs; researchers have been working on them since the '70s, I believe. In fact, the main premise behind the book Congo, published in 1980, was the future of optical computing. Here are a couple of links describing it: link 1 link 2

Oh, and one more thing: we're actually at a 90 nanometer process, a.k.a. a .09 micron process.
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
The main problem with light is that you need some form of memory for registers, etc.; AFAIK no one has figured out how to do that yet.
 

idgaf13

Senior member
Oct 31, 2000
453
0
0
The main problem is not jumping electrons but the condition of the surface on which they travel.
If the surface (the metal layer) is not polished and free of defects, "pits," the electrons will go into a pit
and dig it deeper, eventually breaking through into the next layer.

IBM dealt with this problem using a "dual-damascene" process for polishing.
As you polish the surface you do not want the metal to be ripped out,
so it is a sensitive process.
 

Eskimo

Member
Jun 18, 2000
134
0
0
Originally posted by: idgaf13
The main problem is not jumping electrons but the condition of the surface on which they travel.
If the surface (the metal layer) is not polished and free of defects, "pits," the electrons will go into a pit
and dig it deeper, eventually breaking through into the next layer.

IBM dealt with this problem using a "dual-damascene" process for polishing.
As you polish the surface you do not want the metal to be ripped out,
so it is a sensitive process.

Actually, this really isn't much of a problem from a processing standpoint. Chemical mechanical polishing (CMP) was introduced to planarize the metal layers to accommodate the reduced depth of focus of today's high numerical aperture steppers. There are of course issues in any process technology, but most companies have been able to deal with simple metal planarization. More challenges face CMP with the move to porous low-k dielectric materials between the metal layers, which are more easily damaged by CMP's abrasive nature.
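
For context on the depth-of-focus point: the usual first-order scaling is DOF = k2 * lambda / NA^2, so the high-NA lenses that buy resolution cost focus budget quadratically, which is what pushed the planarization requirement onto CMP. A minimal sketch, with an assumed k2:

```python
# First-order depth-of-focus scaling: DOF = k2 * lambda / NA**2.
# Raising NA improves resolution linearly but shrinks the usable
# focus window quadratically. k2 is an assumed factor for illustration.

wavelength_nm = 193.0  # deep-UV exposure
k2 = 0.5               # assumed process-dependent factor

for na in (0.5, 0.75, 0.93):
    dof = k2 * wavelength_nm / na**2
    print(f"NA = {na:.2f}: DOF ~ {dof:.0f} nm")
# The wafer topography must stay inside this window across the exposure
# field, hence CMP planarization of each metal layer.
```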

We moved to copper for its lower resistivity and higher resistance to electromigration. Because of this we can pack copper lines much closer together than we could with aluminum while maintaining similar RC values. Since there weren't any manufacturable dry-etch solutions for copper, IBM developed the dual-damascene method, which was actually a cost savings as it incorporates both the vias and the interconnects in the same process.
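
To make the resistivity comparison concrete, here is a toy RC estimate using textbook bulk resistivities for aluminum and copper; the line geometry and capacitance are arbitrary assumptions of mine.

```python
# Toy interconnect RC comparison, Cu vs Al, using textbook bulk
# resistivities. Geometry and capacitance are assumed example values.

RESISTIVITY = {"Al": 2.7e-8, "Cu": 1.7e-8}  # ohm*m (bulk, room temp)

length = 1e-3             # 1 mm line, assumed
width = height = 0.2e-6   # 0.2 um square cross-section, assumed
capacitance = 2e-13       # 0.2 pF total line capacitance, assumed

for metal, rho in RESISTIVITY.items():
    resistance = rho * length / (width * height)
    delay_ps = resistance * capacitance * 1e12
    print(f"{metal}: R = {resistance:.0f} ohm, RC = {delay_ps:.0f} ps")
# Copper's ~37% lower resistivity gives the same RC at a tighter pitch,
# or a lower delay at the same pitch, as described above.
```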
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
I read the paper today. Unfortunately, the results look promising. I say unfortunately because photonics was one of the few areas where materials other than Si actually had a chance of making it commercially.

Now everything will be done in Si even though there are better materials (the first rule of materials science: if it is not made of Si, it is not interesting, even if it is better and cheaper :( ).


 

djNickb

Senior member
Oct 16, 2003
529
0
0
Wasn't there something in Wired a little while back about synthetic diamonds being "the future" of computing?
 

pm

Elite Member Mobile Devices
Jan 25, 2000
7,419
22
81
There is growing optimism about immersion lithography. Industry suppliers are now starting to talk about using 193nm litho at process nodes as small as 32nm [EETimes, Feb. 2nd, cover story]. The search is on for a liquid that has the right properties and a refractive index of 1.6. If they can find one (and it's likely that they will), then lithography cost estimates should drop dramatically for future process nodes down to 32nm and beyond. It's remarkable that a few years ago everyone was talking about moving beyond 193nm to 157nm litho, and now there is talk of using 193nm and immersion techniques well past the end of the decade.
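
As a rough sanity check on those numbers, here is a sketch of what a fluid with refractive index 1.6 buys over dry 193nm optics. The NA and k1 values are limit-case assumptions of mine, not figures from the EETimes story.

```python
# Immersion lithography: the fluid's refractive index n multiplies the
# effective numerical aperture, so CD = k1 * lambda / (n * NA).
# NA and k1 below are assumed limit-case values, for illustration only.

wavelength_nm = 193.0  # ArF deep-UV source
na_dry = 0.93          # assumed near-maximum dry lens NA
k1 = 0.25              # theoretical minimum process factor

for n, medium in [(1.0, "dry"), (1.44, "water"), (1.6, "sought-after fluid")]:
    cd = k1 * wavelength_nm / (n * na_dry)
    print(f"{medium:18s}: CD ~ {cd:.0f} nm")
# A fluid with n = 1.6 brings the same 193 nm optics to ~32 nm,
# which is why finding one matters so much for the 32 nm node.
```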

I'm not sure where I'd guess the limit is. I would probably peg it somewhere around 18nm based on materials properties. But shifts toward using silicon only as the base substrate, with more elaborate materials layered on top, might extend that further.

One thing that is clear to me is that "an end" is already happening. Design is getting dramatically harder than it was back in the "golden days" of 0.5um, 0.35um, and 0.25um. DRC rules are getting very hard, and packing densities are getting looser due to OPC and, potentially, phase shifting. Designers are already having to deal with non-linearities in some situations due to quantum effects. Timing is trickier; the EDA shift toward statistical timing is a reflection of this. Leakage and power add two more relatively new issues to an already complex design process. Things are noticeably harder and more complex than they were less than a decade ago. I personally would peg this shift in complexity somewhere around 0.25um or 0.18um.
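
To illustrate why leakage in particular became a first-order concern, here is a toy sketch of subthreshold leakage scaling. The 100 mV/decade swing and the starting current are assumed illustrative numbers, not figures from this post.

```python
# Toy model of subthreshold leakage: below threshold, drain current
# falls off roughly one decade per "subthreshold swing" S of gate drive,
# so off-current grows exponentially as Vth is lowered for speed.
# All numbers are assumed, for illustration only.

s_mv_per_decade = 100.0  # assumed subthreshold swing
i_off_ref_na = 1.0       # assumed off-current at the reference Vth, in nA

for dvth_mv in (0, 100, 200, 300):
    i_off = i_off_ref_na * 10 ** (dvth_mv / s_mv_per_decade)
    print(f"Vth lowered by {dvth_mv:3d} mV: I_off ~ {i_off:7.0f} nA/transistor")
# Lowering Vth by 300 mV for speed costs ~1000x the leakage, which is
# why leakage now has to be budgeted alongside switching power.
```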

I don't foresee the semiconductor industry coming to a halt, as some suggest when they liken the problem to a wall... Rather than crashing into an immovable obstacle, it's more as if the industry has started climbing the side of a mountain that is gradually getting steeper. That said, there is a lot of money tied up in the semiconductor industry and in silicon technology. Money and ingenuity provide a lot of momentum to carry us forward as things get harder.
 

AyashiKaibutsu

Diamond Member
Jan 24, 2004
9,306
4
81
I predict that within the next five years computers will become so large and expensive that only the five richest kings of Europe will be able to afford them! Hehe, sorry, I couldn't resist... This all sounds like the old claims about the limits of human speed. Did you know that in some circles it was believed a human couldn't breathe while moving faster than 55 mph?
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
I think this is a bit different. No one seriously believes that development will come to a halt, but that is not the issue.
I think a better comparison would be the car industry: the development over the first 30-40 years or so was really amazing, but after that things slowed down. Of course cars are still evolving, but the cost of developing a new, better type of car (e.g. fuel cells) is so high that it takes a lot of time and is a slow process. We also have an infrastructure based on what is now old technology, which would cost a lot of money to replace. There are, of course, also a lot of economic interests involved.

My point is that even though we can make faster computers, cost is starting to become an issue. Many of the problems could be solved by moving to new technologies, but the cost is simply too high right now. Another problem is that technologies that used to be relatively cheap, have been used for a long time, and have not evolved much (for example the e-beam lithography used to make the masks) are also reaching their limits because of the complexity of modern circuits; they can still be used, but the cost has increased a lot.

There are already extremely fast computing technologies (for example superconducting electronics, RSFQ) that we know work, but no one (with the possible exception of the NSA) can afford them.


 

Eskimo

Member
Jun 18, 2000
134
0
0
Originally posted by: f95toli


Another problem is that technologies that used to be relatively cheap, have been used for a long time, and have not evolved much (for example the e-beam lithography used to make the masks) are also reaching their limits because of the complexity of modern circuits; they can still be used, but the cost has increased a lot.

Having worked for a short time in the mask industry, I can speak to this. There is an array of technical and economic challenges facing the mask industry. From a technical standpoint the problem has been twofold.

1) The mask business was a demanding one in the early days, when designs were made at 1x the printed feature size. However, the advent of reduction steppers and the move to 5x masks circa 1985 began what some in the industry refer to as the "maskmaker's holiday." Basically, it became very easy to make masks. Feature sizes were huge (Xbox huge ;)) compared to those faced by the silicon fabrication facilities. This led to almost no investment in reticle processing techniques or reticle process equipment. Masks were treated as a commodity item, and anybody could make their own or buy them from a number of merchant mask shops.

Enter the mid-'90s, and all of a sudden scanners came onto the scene using only 4x reduction. Couple that with an acceleration of the semiconductor industry roadmap, which began to press lithographers to print features smaller than the wavelength of light used in their exposure systems, and thus began the era of RET (resolution enhancement technology). Since the scanner/stepper makers were struggling to increase the numerical aperture of their lens systems, which in turn decreased their depth of focus, there was suddenly a shift of burden off the shoulders of the lithography system makers and onto the mask makers. Mask shops were largely outdated, using process equipment crudely adapted from the silicon wafer world and techniques a decade old; the write tool of choice (ETEC MEBES) hadn't changed much in the past 15 years.

You had the introduction of phase-shift technology, which required some mask shops to spin on their own resist for the first time ever; multiple layers, which were unheard of in the old chrome-and-quartz days; defects which had to be repaired with phase shift and intensity taken into account; and the know-how and equipment to dry-etch the quartz itself for alternating-aperture masks, just to name a few.

2) Equipment vendors were absent from the mask scene until the mid-'90s. There were hardly any companies that specialized in mask equipment; most simply adapted handling equipment to use square quartz instead of round silicon. Very little R&D was put into the equipment. When times got tough and the mask companies suddenly needed technical expertise and research to keep succeeding, you had mass consolidation in the industry. Many companies that had been producing their own masks internally were unable or unwilling to invest enough to continue and started ordering from merchant shops. Meanwhile, not all of the merchant shops could keep up, and there were many mergers until you were left with the four remaining today: Dai Nippon Printing (DNP) and Toppan in Japan, and DuPont Photomasks and Photronics in the US.

The reason this consolidation is important is that it reduced the overall market for mask-making equipment. Applied Materials bought Etec and its mask-writing equipment only to find that the worldwide market for mask write tools is around 6-8 a year! With volume that low, the only way to recoup the R&D needed to develop new and better systems would be to charge obscene amounts of money, money the mask makers can't afford since the wafer fabs don't want to pay that much for reticles. The only other options are not to do the R&D or to partner with others.

There have been advancements, don't get me wrong. Now you can get masks with phase shifting (alternating or embedded) and OPC. There are vector-based electron-beam write tools that are faster than their raster predecessors. And the industry is really close to finally committing to a new mask design file format to replace the aging GDSII, which can't handle the gigabyte/terabyte size of today's files.

So that's my long-winded response to your comment; maybe you already knew all that, I dunno.
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
Thanks for the info, great post!

I am at "the other end" of the problem since I work at a university and do not have to care about mass production (I am Ph.D student), we only make samples for research so the total "production" is a few wafers at most (in my case a typical sample is 5x5 mm and takes about two weeks to make). The structures we make are usually relatively simple but small (typical sizes are below 50 nm in many cases) which means that they are made using a combination of deep-UV (for contact pads etc) and direct-write using one of our two e-beams. we have two JEOL EBL. One is almost fifteen years old and the other is brand new but the performance is more or less similar (the new one is skughtoly faster which is only important if you are making large structures).

The few things I know about commercial semiconductor processing I have picked up at various conferences (there is usually at least one plenary lecture dealing with the future of nanoelectronics), so I am hardly an expert.