How much smaller can chips go? [article]

khon

Golden Member
Jun 8, 2010
1,318
124
106
Well, we'll know in a few years, but right now we're fast approaching the end of DUV, and moving from there to EUV is going to be very challenging.
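For a rough sense of why that wavelength jump matters, the usual scaling rule is the Rayleigh criterion (the NA and k1 numbers below are typical ballpark figures I'm assuming, not anything from the article):

\[
\mathrm{CD} \approx k_1 \frac{\lambda}{\mathrm{NA}}
\]

With 193 nm immersion DUV (NA around 1.35, k1 around 0.27) that works out to roughly a 40 nm half-pitch per exposure, while EUV at 13.5 nm (NA around 0.33, k1 around 0.4) gets you to roughly 16 nm in a single exposure, which is why everyone wants EUV to work despite the difficulty.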

I would not be at all surprised if the progress beyond 22nm is much slower than it has been until now.

And that's from a purely imaging standpoint; there are also material limitations to consider, which may force a major shift in fabrication technology before long.
 

aphorism

Member
Jun 26, 2010
41
0
0
I think I could have done a better job in a single post, no offense.

Yes, lithography is a big issue, but it's not the only limitation. Breakthroughs must occur in almost every area: doping, lithography, interconnect delay, leakage, etc. Cost is probably going to be the largest issue of all. In ten years, with 15 layers and hundreds of billions of transistors, taping out a chip will cost well over 100 million dollars.
 

khon

Golden Member
Jun 8, 2010
1,318
124
106
On the positive side, there may not be a need to go that much smaller.

What really drives the need for faster computing is the vast amount of data processing needed for higher-resolution video and images, and we're nearing the point where the eye can't distinguish it anyway, which is a natural limit we can't really go beyond.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Great Article.

I thought this was interesting:

The PC philosophy of piling everything through a CPU – instead of creating dedicated processors for specific tasks, as ARM does with smartphones – makes the PC particularly susceptible to the dark silicon problem.

“The PC architecture has taken any intelligence out of peripheral devices and runs it on the processor,” he claimed. “Something like an Ethernet controller has been dumbed down. For a low-power architecture, that’s the wrong approach. That leads you to having one big, hot processor.”
 

maniac5999

Senior member
Dec 30, 2009
505
14
81
On the positive side, there may not be a need to go that much smaller.

What really drives the need for faster computing is the vast amount of data processing needed for higher-resolution video and images, and we're nearing the point where the eye can't distinguish it anyway, which is a natural limit we can't really go beyond.
Very true. At a certain point almost everything becomes either a fashion statement or a commodity, and I suspect that desktop PCs are almost there. I have an 18-month-old PC that I bought for $600, and it can run most games at settings that most people would have a tough time distinguishing from maximum. Like an oven or a refrigerator, there are things a more expensive model would do better (encode that video in 12 minutes instead of 20, etc.), but for 99% of tasks the difference is already academic.

I think laptops still have a ways to go to hit that point, however, as efficiency is a key issue. If an inferior design gets 80% of the performance and 80% of the efficiency (battery life) of a better design, or 100% of the performance and only 60% of the efficiency, that doesn't mean much on a desktop; but on a laptop it can be a big deal, and on smartphones and whatnot it's an even bigger deal, IMO.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
I don't understand the point of this article...it says nothing that hasn't already been written a hundred times over across the web and discussed in every professional and academic circle the world over.
 

aphorism

Member
Jun 26, 2010
41
0
0
On the positive side, there may not be a need to go that much smaller.

What really drives the need for faster computing is the vast amount of data processing needed for higher-resolution video and images, and we're nearing the point where the eye can't distinguish it anyway, which is a natural limit we can't really go beyond.

Yeah, computers would destroy us if they kept doubling in speed; we are already dependent on them. We could eventually make a processor faster than our own brain in every aspect, making humans useless, especially the ones that were once considered smart.

Also, I would like to add that there are physical limits to scaling from a physics perspective. Our intuition tells us that distance is continuous, but it actually comes in discrete amounts, and the same is true of energy. You cannot make a transistor consume as little energy as you want.
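For a rough sense of scale, the floor usually cited is the Landauer limit, the minimum energy to irreversibly switch one bit (a textbook bound, not anything specific to a given process):

\[
E_{\min} = k_B T \ln 2 \approx (1.38 \times 10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693) \approx 2.9 \times 10^{-21}\,\mathrm{J} \approx 0.018\,\mathrm{eV}
\]

Today's CMOS gates dissipate something on the order of a femtojoule per switch, so there are still several orders of magnitude of headroom, but the floor is real.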
 

epidemis

Senior member
Jun 6, 2007
794
0
0
- To reduce the power needed to flip a transistor (the article states they have hit a snag, but that happened a while ago, and they came up with solutions like strained silicon and HKMG)
- Scientific computing (the sky is the limit for these guys)
- Savings in material and capital cost, since you can fit the same CPU on a smaller piece of silicon (more expensive machinery, but more chips per wafer; see the rough numbers below)
- Glasses-free 3D, which requires 1.5 images for every angle of the scene (if glasses-required 3D survives the hype wave and evolves into glasses-free, it has the potential to scale up the number of viewable angles, requiring exponentially greater storage and processing)
- Right now we're at a point where software lags hardware, but just wait :)
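A quick sketch of the cost point above, using the standard dies-per-wafer approximation with made-up die sizes purely for illustration:

\[
\text{dies per wafer} \approx \frac{\pi (d/2)^2}{S} - \frac{\pi d}{\sqrt{2S}}
\]

For a 300 mm wafer, a 100 mm² die gives roughly 707 - 67 ≈ 640 dies, while shrinking the same design to 50 mm² gives roughly 1414 - 94 ≈ 1320, nearly double the chips from one wafer, which is where most of the per-unit saving comes from.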
 

pcgeek11

Lifer
Jun 12, 2005
22,123
4,901
136
I remember when they were saying they could never get the CPU over 1 GHz. Funny.
 

SHAQ

Senior member
Aug 5, 2002
738
0
76
So after EUV, can they go to X-rays and gamma rays? LOL. My personal guess is 8 nm will be the limit. Will CPUs or GPUs reach the limit faster, though? I assume GPUs will hit it faster because they are larger and more complex. In fact, GPUs may only reach 12-16 nm while CPUs get to 8 nm. I wonder what the next avenue for increasing performance will be after the limit is reached. Will Intel need to branch out into other areas as we get beyond 2020? Or will prices go up to compensate? Stay tuned.
 

epidemis

Senior member
Jun 6, 2007
794
0
0
So after EUV, can they go to X-rays and gamma rays? LOL. My personal guess is 8 nm will be the limit. Will CPUs or GPUs reach the limit faster, though? I assume GPUs will hit it faster because they are larger and more complex. In fact, GPUs may only reach 12-16 nm while CPUs get to 8 nm. I wonder what the next avenue for increasing performance will be after the limit is reached. Will Intel need to branch out into other areas as we get beyond 2020? Or will prices go up to compensate? Stay tuned.
8 nm being the limit? Yes for scaling every two years, but no for the ultimate level; that will be a single-atom transistor. Check back with me in 50 years.
 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
8 nm being the limit? Yes for scaling every two years, but no for the ultimate level; that will be a single-atom transistor. Check back with me in 50 years.

There will never be a single-atom transistor, and you can quote me on that. There are few places in engineering where I can say never; this is one of them.

To have a transistor as we use them, AT LEAST three atoms are necessary (base, collector, emitter). And frankly, it is highly unlikely that we will ever get to a three-atom transistor (unless we somehow create a chemical compound that transfers electrons like a transistor does).
 

epidemis

Senior member
Jun 6, 2007
794
0
0
There will never be a single-atom transistor, and you can quote me on that. There are few places in engineering where I can say never; this is one of them.

To have a transistor as we use them, AT LEAST three atoms are necessary (base, collector, emitter). And frankly, it is highly unlikely that we will ever get to a three-atom transistor (unless we somehow create a chemical compound that transfers electrons like a transistor does).

Not a transistor I suppose, but it's plausible to do computation with a single molecule. My mistake.
 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
Not a transistor I suppose, but it's plausible to do computation with a single molecule. My mistake.

And that enters the realm of quantum computing. The problem with it, though, is that while one atom may be doing the computation, you need very large and complex structures to retrieve that information from the atom.

The other issue is that while quantum computing can solve some problems very fast (cryptography, for example), it isn't necessarily good at general computing problems. From everything I've read, you basically have to have a background in quantum mechanics to be able to program a quantum computer... which doesn't exactly lend itself to being a general computing solution.
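To give a flavor of why it's so different from ordinary programming, here's a minimal single-qubit sketch in Python/NumPy (purely illustrative; no real quantum hardware or quantum library, just the linear algebra behind one qubit):

```python
import numpy as np

# A qubit is a 2-vector of complex amplitudes, not a 0-or-1 bit.
state = np.array([1.0, 0.0], dtype=complex)      # the |0> state

# "Gates" are unitary matrices; a Hadamard gate puts the qubit
# into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ state                                 # now (|0> + |1>)/sqrt(2)

# Measurement is probabilistic and destroys the superposition:
# you read 0 or 1, each with probability |amplitude|^2.
p0 = abs(state[0]) ** 2
outcome = np.random.choice([0, 1], p=[p0, 1.0 - p0])
print("measured:", outcome)
```

Even this toy example forces you to think in amplitudes, unitaries, and measurement probabilities instead of plain bits and branches, which is exactly the "need a background in it" problem.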
 

ehume

Golden Member
Nov 6, 2009
1,511
73
91
So, when they run up against the physical limits, we shift from bits to qubits, and the acceleration continues.

As long as I can custom build these things, I'll stay in the game.
 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
So, when they run up against the physical limits, we shift from bits to qubits, and the acceleration continues.

As long as I can custom build these things, I'll stay in the game.

Doesn't work like that. You can't just "switch" from a bit to a qubit, and the tech to miniaturize quantum machines just isn't there. Arguably, we are closer to three-atom transistors than to personal quantum computers. There are huge barriers to overcome, such as the fact that, to date, all quantum computers have had to run at cryogenic temperatures near absolute zero.

No, my bet is that we will see huge architectural changes before we see something exotic like quantum computing. Heck, the future may well be in an FPGA-like technology.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
There will never be a single-atom transistor, and you can quote me on that. There are few places in engineering where I can say never; this is one of them.

To have a transistor as we use them, AT LEAST three atoms are necessary (base, collector, emitter). And frankly, it is highly unlikely that we will ever get to a three-atom transistor (unless we somehow create a chemical compound that transfers electrons like a transistor does).

Why does it have to stop with atoms? Why not subatomic particles? Really, the current theoretical limit is the Planck length. It might take us a few thousand (or million) years to get there, but it's at least theoretically possible.
 
Dec 30, 2004
12,553
2
76
Why does it have to stop with atoms? Why not subatomic particles? Really, the current theoretical limit is the Planck length. It might take us a few thousand (or million) years to get there, but it's at least theoretically possible.

Because the physical properties of silicon are what allow us to use it to make a transistor. Hence you need atoms.
 

Borealis7

Platinum Member
Oct 19, 2006
2,901
205
106
“The number of people predicting the end of Moore’s Law doubles every two years,”

I loled. So true.
 

extra

Golden Member
Dec 18, 1999
1,947
7
81
We could have 10 GHz CPUs now if Intel had continued down their P4 design path. They'd come with some weird-ass stock water-cooling systems, take tons of power, and be slower than an i7, but it certainly could be done.