Chip-shrinking may be nearing its limits

JustaGeek

Platinum Member
Jan 27, 2007
2,827
0
71
Tomorrow is the 60th anniversary of the invention of the first transistor.

Here is the article from... AP.

http://news.yahoo.com/s/ap/200...ansistor_anniversary_1

"The devices ? whose miniaturization over time set in motion the race for faster, smaller and cheaper electronics ? have been shrunk so much that the day is approaching when it will be physically impossible to make them even tinier.

Once chip makers can't squeeze any more into the same-sized slice of silicon, the dramatic performance gains and cost reductions in computing over the years could suddenly slow. And the engine that's driven the digital revolution ? and modern economy ? could grind to a halt.

Even Gordon Moore, the Intel Corp. co-founder who famously predicted in 1965 that the number of transistors on a chip should double every two years, sees that the end is fast approaching ? an outcome the chip industry is scrambling to avoid.

"I can see (it lasting) another decade or so," he said of the axiom now known as Moore's Law. "Beyond that, things look tough. But that's been the case many times in the past."




"Intel, the world's largest semiconductor company, predicts that a number of "highly speculative" alternative technologies, such as quantum computing, optical switches and other methods, will be needed to continue Moore's Law beyond 2020.

"Things are changing much faster now, in this current period, than they did for many decades," said Intel Chief Technology Officer Justin Rattner. "The pace of change is accelerating because we're approaching a number of different physical limits at the same time. We're really working overtime to make sure we can continue to follow Moore's Law."

 

jonmcc33

Banned
Feb 24, 2002
1,504
0
0
This has been well known for a long time. That's why there's the push for multiple cores per die and also for making the processor more efficient.
 

JustaGeek

Platinum Member
Jan 27, 2007
2,827
0
71
Originally posted by: jonmcc33
This has been well known for a long time. That's why there's the push for multiple cores per die and also for making the processor more efficient.

???

They are talking about a brand new technology required.

There is only so much you can squeeze on that piece of silicon.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: jonmcc33
This has been well known for a long time. That's why there's the push for multiple cores per die and also for making the processor more efficient.

I think you are conflating the idea of Moore's Law (doubling the number of functions on a single IC every so many years) and the idea of technology nodes (reducing the surface area consumed by a function on an IC).

The push for multi-cores was driven by the fact that CPU designers had more transistors (thanks to Moore's Law) in their budget and they simply started running out of ideas of what to do with those transistors in order to make competitive-performing CPUs.

Give an Intel or AMD design team a technology node capable of putting 1 trillion transistors on an economically manufacturable CPU (say <200 mm^2), and at this point in time all those design engineers would know to do with those transistors is turn them into cache cells and/or hundreds of parallel cores. They have no "bright(er)" ideas at this time.

Had the CPU designers run out of ideas on improving IPC back in the 1980's instead of the 21st century, we would have seen the 1990's filled with generations of multi-core 486 CPU products. As it turns out, they did not run out of ideas until just recently.

Running out of room to continue shrinking the transistor, thereby enabling more and more of them to be economically produced on the same 200 mm^2 piece of silicon, has been expected since the 60's (or so I was told by Jack Kilby when I worked at TI).

It is not the end of the industry, nor will it be the end of the doubling of performance gains every so many years, but until the next thing comes along there will always be dozens of starving journalists who need to publish headlines about the death of Moore's Law so they can get a paycheck and pay the month's rent.
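The distinction between Moore's Law (transistor budget) and node scaling (feature size) can be sketched numerically. A minimal illustration, where the starting count, doubling period, and ~0.7x-per-node linear shrink are all assumed round numbers, not figures from any fab:

```python
# Illustrative sketch only: numbers are rough assumptions, not
# manufacturer data.

def transistors_after(years, start=1_000_000, doubling_period=2.0):
    """Moore's Law: transistor count doubles every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

def feature_after(shrinks, start_nm=45.0, shrink_factor=0.7):
    """Node scaling: each full node shrink cuts linear feature size
    by ~0.7x, roughly halving the area a transistor consumes."""
    return start_nm * shrink_factor ** shrinks

# After 10 years at a 2-year doubling period: 32x the transistors.
print(transistors_after(10))
# Three node shrinks from 45nm: ~15.4nm features.
print(round(feature_after(3), 1))
```

The two curves are linked (shrinks are what make the extra transistors affordable) but they are not the same statement, which is the point of the post above.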
 

KingstonU

Golden Member
Dec 26, 2006
1,405
16
81
The smallest transistor node theoretically possible before data transfer is no longer reliable is 18nm. The fastest frequency theoretically possible (using particle accelerators and intense cooling) is ~3,000 GHz, before electrons start interfering with each other and errors develop in data transfer. It will obviously be a long time before we reach 3,000 GHz (outside of scientific use, that is). Considering that 32nm is coming out in 2009 and 22nm is scheduled for 2011, that roadblock is coming rather quickly.
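Taking the roadmap figures above at face value (22nm in 2011) and assuming the historical ~0.7x linear shrink every two years (my assumption, not a published roadmap), a quick loop shows how soon an 18nm limit would be crossed:

```python
# Extrapolate node size from the post's figure of 22nm in 2011,
# assuming an ~0.7x linear shrink every two years (an assumption,
# not a published roadmap).
node_nm, year = 22.0, 2011
while node_nm >= 18.0:
    node_nm *= 0.7
    year += 2
print(year, round(node_nm, 1))   # first node smaller than 18nm
```

Under those assumptions the very next full node after 22nm (~15.4nm, around 2013) would already be past the claimed 18nm threshold, which is why the "rather quickly" remark is not an exaggeration.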
 

Comdrpopnfresh

Golden Member
Jul 25, 2006
1,202
2
81
Those are just Si transistors... there are still plenty of other natural and engineered semiconductors out there to play around with. There are also optical, diamond, and quantum approaches. Plus, size isn't everything: doping, insulation layers, configurations, and other proprietary goodies can still make modern electronics better, independently of fabrication size.
And BTW: that's a misinterpretation/misquote of Moore. You don't have to change the fab size to double the number of transistors. For example, when Intel moves the memory controller on-die, they'll open up slews of room for functional transistors: same die, more functional transistors, because less cache is needed.
What Moore originally said was that the density of transistors per unit area would double every 18 months. He later revised his statement (some people say it was misperceived in the first place) to a doubling of density every 24 months.
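The gap between the 18-month and 24-month formulations compounds quickly. A small sketch of relative density growth, starting from a normalized 1x baseline (the baseline and horizon are arbitrary illustration choices):

```python
# Relative density growth under the two commonly quoted doubling
# periods (illustrative -- starts from a normalized 1x baseline).
def density_multiple(years, months_per_doubling):
    return 2 ** (years * 12 / months_per_doubling)

for months in (18, 24):
    print(months, "months:", round(density_multiple(10, months)), "x after 10 years")
```

Over a decade the two readings differ by more than a factor of three (roughly 102x vs. 32x), so which period Moore actually meant is not a pedantic detail.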
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Intel went from 65nm to 45nm *very* quickly. If they were hitting the wall, they would be having a helluva time making the node shrinks, but they haven't.

With carbon nanotubes and such, I have a feeling they'll make advances that will keep them going for the next 10-20 years, and really that's a LONG time in the semiconductor world. Think about what we were using in 1987. A Commodore 64 maybe?
 

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
Originally posted by: Comdrpopnfresh
Plus- size isn't everything.

Just because your girlfriend tells you that doesn't mean it's true. ;) Ask her if she feels the same way about diamonds, if you don't believe me.

Originally posted by: SickBeast
Think about what we were using in 1987. A Commodore 64 maybe?

The Commodore 64 debuted in 1982, which is crazy, because I still remember the first time I saw one as if it were yesterday. Okay, I remember it a lot better than yesterday, but that's another story.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Moore's Law has never been "exactly accurate"... it's more of "Moore's notion that computers are getting faster over time, duh." He tried to figure out the exact rate and made some adjustments, but over time the rate of change goes up and down... eventually we might hit an impassable wall, but that is still many, many years to come. And the "there is a need for some serious changes" statement is superfluous; serious changes are brought about all the time in order to gain more performance: multi-core and multi-chip parallelism, high-k/metal-gate transistors, doping techniques, more PCB layers... new innovations arrive even when there isn't a "wall" to overcome, just because performance is being pushed. If we end up with an 18nm process that cannot be shrunk further, then we will just have to use other methods to improve performance (and cut costs).
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Interestingly enough, for now there are roadmaps up to the 22nm node, but further out than that is iffy at this point. The actual physical limit is about 0.3nm, as that is the diameter of the silicon atom itself, so we're "only" 150 times bigger than the limit now with the 45nm process. Of course, within our lifetimes (assuming you're in your teens or early 20's) we will hit "some" shrinkage barrier for silicon.
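The arithmetic above checks out, and it can be extended to count how many full shrinks fit before features approach a single atom. A quick sketch (the 0.3nm atom diameter and 45nm node are the post's figures; the ~0.7x per-node shrink is my assumption):

```python
import math

SILICON_ATOM_NM = 0.3     # the post's figure for a silicon atom
CURRENT_NODE_NM = 45.0    # the 45nm process

# Ratio to the atomic limit: 150x, as the post says.
print(CURRENT_NODE_NM / SILICON_ATOM_NM)

# Number of ~0.7x node shrinks before reaching atomic scale.
shrinks = math.log(SILICON_ATOM_NM / CURRENT_NODE_NM) / math.log(0.7)
print(math.floor(shrinks))   # 14
```

A 150x linear headroom sounds like a lot, but at ~0.7x per step it is only about 14 full node shrinks, which is why "within our lifetimes" is a plausible horizon.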
 

TheOtherRizzo

Member
Jun 4, 2007
69
0
0
What I've never understood is: what is the underlying fundamental physical/engineering reason for Moore's Law? It's always stated as if it were a real law and not just a statistical observation. But why is it that chip performance increases much, much faster than any other technology out there, be it materials, automotive, aviation, etc.? If chips can improve by orders of magnitude in just a couple of years, then what stopped them from just taking bigger steps in the first place? Is there a simple fundamental reason, or is it "just the way it is"?
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: Idontcare
The push for multi-cores was driven by the fact that CPU designers had more transistors (thanks to Moore's Law) in their budget and they simply started running out of ideas of what to do with those transistors in order to make competitive-performing CPUs.

I don't think it's really that. I think there are 2 issues:
1) Power. You can't sell really high-power parts into the server market any more, and you sure can't sell them into the laptop market. Designing a product that'll only sell into the desktop market is probably financial suicide.
2) Wire delays. Getting a signal across a chip is taking longer and longer (relative to the delay of logic gates)... if you come up with an idea that uses 1 MB of storage, actually getting signals across that storage would take too many cycles. If you wanted to double the number of execution units, you wouldn't be able to shuffle data across them fast enough due to the wire delay. Global routing is a real limiting factor.
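The wire-delay point can be illustrated with rough numbers: gate delay improves with each shrink, but a signal crossing the whole die covers roughly the same physical distance, so its cost measured in gate delays keeps climbing. Both delay figures below are invented round numbers purely for illustration:

```python
# Rough illustration: gate delay shrinks ~30% per node, but the time
# to cross a fixed-size die stays roughly constant, so the crossing
# costs ever more gate delays. All numbers are invented.
gate_delay_ps = 30.0     # assumed gate delay at the first node
die_cross_ps = 600.0     # assumed delay to cross the die

for node in ("90nm", "65nm", "45nm"):
    ratio = die_cross_ps / gate_delay_ps
    print(f"{node}: {ratio:.1f} gate delays to cross the die")
    gate_delay_ps *= 0.7  # gates get ~30% faster each shrink
```

Under these toy numbers a die-crossing signal goes from ~20 to ~41 gate delays in just two shrinks, which is why global routing, not raw logic speed, becomes the limiting factor.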
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: TheOtherRizzo
What I've never understood is: what is the underlying fundamental physical/engineering reason for Moore's Law? It's always stated as if it were a real law and not just a statistical observation. But why is it that chip performance increases much, much faster than any other technology out there, be it materials, automotive, aviation, etc.? If chips can improve by orders of magnitude in just a couple of years, then what stopped them from just taking bigger steps in the first place? Is there a simple fundamental reason, or is it "just the way it is"?

It's been stated wrong, then... there is no physical engineering reason. That's just generally how fast technology has progressed under natural market forces. Do a Manhattan Project kind of deal and you will get a much faster progression at a huge cost. Let the market stagnate (communism?) and people will be using the exact same speed of computer for 50 years...

That is why Moore's Law had to be revised.
 

TheOtherRizzo

Member
Jun 4, 2007
69
0
0
Yeah, I understand that Moore's law isn't a law of physics in itself. But what I was getting at is what the physical/engineering reason is that computer chips are able to develop at such a pace.

Put another way: Why can a chip designer say "in two years we will give you a chip that is about three times as fast as the one you currently have" with a straight face while an aircraft manufacturer that claims "in two years you'll be able to go from Europe to America at Mach 3" can only be a nutcase?

It can't be market forces alone; for some reason, computers can develop at a pace that is unthinkable for any other field of technology. I wonder if there is a simple way to describe the reason for this.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Because further miniaturizing a chip, creating more efficient commands, and tweaking its design is a LOT simpler than going from Europe to the US at Mach 3... Simply put, all it takes to improve a computer chip is to pay engineers and programmers. There are dozens of ways to improve it greatly, and all of them are rather simple and inexpensive (as far as manufacturing goes).

That's why I said market forces...
If it were Soviet Russia, they would use the exact same chips until the big boss decides he wants something faster... and then you would have a whole fleet of engineers working on it concurrently and getting amazing results...

The reason the chips don't get developed faster than an 18-month cycle is that market forces make it inefficient for a business to do so. The best money is had when there are a few teams of engineers working concurrently, at most. Ideally only one or two teams.
They typically sell a chip for twice what it costs to produce, then use the gross profit to pay for further research and administrative costs. If they wanted to research at an insane rate, they would go bankrupt instead of turning a profit.

That's why each generation is characterized by a different improvement... They usually cycle a die shrink with a "feature"...

Examples of features:
1. 32-bit to 64-bit transition (previously, the 16-bit to 32-bit transition)
2. a new instruction set (MMX, SSE, SSE2, SSE3, SSE4, virtualization, etc)
3. integrated memory controller.
4. multi core die
5. multi die multi core modules.
6. more cache
7. Thread optimization
8. Hyperthreading.
9. etc etc etc....

Imagine the rate of improvement if you had a Manhattan Project kind of approach, where the world's top experts in the field are put together in one town... and perform all the different research aspects concurrently!...

I mean, we would get chips that are 5 times faster every year...