
Why no CPU over 5Ghz?

Can't they just have more than one chip for different tasks and processes? I mean, this stuff isn't only useful for workstations and gaming rigs anyway.
 
About "nanotech": It is just a buzzword; the only time it is used in "real" science is when naming departments and writing grant applications. I have been to a few "nanotech" conferences and they cover just about everything that you can make in a clean room: silicon, III-Vs (GaAs, GaN, InP), MEMS, carbon nanotubes, optics, scanning probe techniques, polymers, etc.

The concept of "nano technologies" is used so often that it has lost its meaning.
 
So usually the definition for nanotech implies that the dimensions are under 100 nm. This appears in the NSF definition and directive on nanotech I believe.

Regarding materials, there are many issues as to why it's harder to go faster. First, at dimensions as small as those found in the 90nm and 65nm processes, the classical idea of mobility isn't valid. Mobility depends on some probability of an electron scattering, but the feature dimensions are now shorter than the mean free path between scattering events. Electrons act more ballistically than like random-walk particles drifting across a channel. As a result, using a material like GaAs doesn't drastically increase speed the way one would hope.

Another problem is doping. In a transistor, certain areas of Si are doped with either donor or acceptor atoms (usually arsenic or boron). Traditionally in solid-state theory we talk about dopant concentrations like 10^18 to 10^20 per cm^3, but a 50 nm feature is only a couple hundred atoms across. Silicon contains about 5x10^22 atoms per cm^3, so even 10^21 per cm^3 is only about one dopant per fifty silicon atoms, and at typical channel doping levels the smallest features contain just a handful of dopant atoms. The physics of what the random placement of those few atoms really does to a device is incredibly complex and is a modern topic of study.

I guess in summary, these are just a few of the materials issues involved in making faster transistors. Shrinking dies while improving speed is a very nontrivial undertaking now, and the transition to the 65nm node is going to take even longer and require a great deal more science before it becomes a reality.
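To put rough numbers on the dopant-count point above, here is a back-of-the-envelope sketch; the concentrations and the 10 nm cube are illustrative choices, not values from the post:

```python
import math

SI_ATOMS_PER_CM3 = 5.0e22  # approximate atomic density of crystalline silicon

def dopants_in_feature(doping_per_cm3, side_nm):
    """Expected number of dopant atoms in a cubic feature of the given side length."""
    volume_cm3 = (side_nm * 1e-7) ** 3  # 1 nm = 1e-7 cm
    return doping_per_cm3 * volume_cm3

# Heavily doped region (1e20 per cm^3), 10 nm cube:
n = dopants_in_feature(1e20, 10)
print(f"expected dopants: {n:.0f}")               # ~100 atoms
# Counting statistics: relative fluctuation ~ 1/sqrt(N) (Poisson)
print(f"fluctuation: {100 / math.sqrt(n):.0f}%")  # ~10%
```

At lighter channel-type doping (~1e18 per cm^3), the same 10 nm cube holds only about one dopant atom, which is exactly the random-dopant-fluctuation problem the post describes.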
 
Part of the solution is a Beowulf cluster of slower cheap machines effectively adding up to a supercomputer.
128 computers of 2 GHz each = _______________ effective cluster speed. Similarly, multiple cores in one CPU.
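The blank above can't simply be 128 x 2 GHz, because of Amdahl's law: any serial fraction of the workload caps the achievable speedup. A quick sketch (the 5% serial fraction is an assumed example, not a measured number):

```python
def amdahl_speedup(n_nodes, serial_fraction):
    """Maximum speedup of a workload with the given serial fraction on n nodes."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_nodes)

# 128 nodes, with 5% of the work inherently serial:
print(f"{amdahl_speedup(128, 0.05):.1f}x")  # ~17.4x, nowhere near 128x
```

The same math applies to multiple cores in one CPU: the serial fraction, not the core count, dominates once the node count gets large.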
 
A lightbulb with a CPU as the filament, enclosed in a vacuum, with the CPU made of carbon nanotubes. This is what you might see in the future.
 
Originally posted by: dudeguy
Can't they just have more than one chip for different tasks and processes? I mean, this stuff isn't only useful for workstations and gaming rigs anyway.

Maybe in the future we won't have CPUs....

we'll have our math proc, a few logic procs, a memory proc, graphics proc (or just leave it on the board)....

Would sure make for an interesting motherboard.
 
bacon333 sounds like he really knows what he's talking about, but I would like to supplement the HEAT discussion just in case some people aren't clear.

Logic gates are made from transistors, and transistors are made from semiconductors. (Structurally like two diode junctions back-to-back, though two actual diodes wired together won't work as a transistor.) Any time you switch from a 1 to a 0 and back, you draw a little extra current and dissipate power, which in a semiconductor shows up as heat. But semiconductors are also how we get our accurate 1's and 0's. The more times per second you switch between 1's and 0's, the more heat you generate. This is why RAM and processors run cooler when idle: far fewer nodes switch each cycle, so far less dynamic power is dissipated.
If I'm not mistaken, old TTL treated roughly 0 through 0.8 volts as a logic 0 and 2.0 through 5 volts as a logic 1, with everything in the middle undefined. The newer CMOS convention splits the range into roughly equal thirds for 0, undefined, and 1. Of course, modern CPUs run on about 1.4 volts across the whole chip, usually drawing between 50 and 100 amps of current.

This is also (one of many reasons) why AMDs run much hotter than a similarly clocked Intel: they perform more operations per clock cycle, on average.

Please don't flame me on the AMD vs. Intel issue; I'm only using them as examples for heat.
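The heat argument above is usually written as the CMOS dynamic-power relation P ≈ α·C·V²·f: power grows linearly with clock frequency and switching activity, and quadratically with supply voltage. A sketch with illustrative numbers (the capacitance and activity factor are assumed, not measured from any real chip):

```python
def dynamic_power(activity, capacitance_f, vdd, freq_hz):
    """CMOS dynamic power: P = alpha * C * V^2 * f (watts)."""
    return activity * capacitance_f * vdd ** 2 * freq_hz

# Illustrative chip-level numbers: 100 nF total switched capacitance,
# 1.4 V supply, 15% average activity factor.
for ghz in (2.0, 3.0, 5.0):
    p = dynamic_power(0.15, 100e-9, 1.4, ghz * 1e9)
    print(f"{ghz} GHz -> {p:.0f} W")
```

This also shows why idle chips run cool: when little is switching, the activity factor α collapses and dynamic power drops with it, exactly as the post describes.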
 
Also, we're moving to scales that allow quantum effects to play a bigger and bigger role in governing how the electrons interact. I worked over the summer in a lab studying quantum dots that are only about 10 times smaller than the smallest transistors. Transistors just can't be made that much smaller before too much of the signal is lost to electron tunneling.
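The tunneling loss mentioned above is exponentially sensitive to barrier thickness: for a simple rectangular barrier, the transmission probability goes roughly as exp(-2κd) with κ = sqrt(2mΦ)/ħ. A sketch (the 1 eV barrier height is an assumed round number, not a real oxide's value):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # joules per eV

def tunneling_probability(barrier_ev, width_nm):
    """Approximate transmission through a rectangular barrier: T ~ exp(-2*kappa*d)."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

# Halving a barrier from 2 nm to 1 nm raises the leak-through by orders of magnitude:
print(f"2 nm barrier: {tunneling_probability(1.0, 2.0):.2e}")
print(f"1 nm barrier: {tunneling_probability(1.0, 1.0):.2e}")
```

That exponential is why each shrink of the gate oxide costs so much more leakage than the last.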
 
Are you referring to the work done by Likharev's group (military funding; I think it was called the "Teraflop" project) on RSFQ logic, or something else?
 
Well, I'm running two water-cooled Koolance systems at home. Please allow me to break in a hot 5 GHz CPU.
 
With the heat and current-leakage problems of current 90nm CPUs, why wouldn't the next logical step be to lower the CPU voltage, in addition to adding more power-delivery points on the CPU?
 
Originally posted by: sao123
With the heat and current-leakage problems of current 90nm CPUs, why wouldn't the next logical step be to lower the CPU voltage, in addition to adding more power-delivery points on the CPU?

Sure, but as you lower voltage, you reduce the drive current of the transistors, and since the capacitance on each node is (mostly) independent of voltage, you end up slowing the circuit down.
 
Originally posted by: CTho9305
Originally posted by: sao123
With the heat and current-leakage problems of current 90nm CPUs, why wouldn't the next logical step be to lower the CPU voltage, in addition to adding more power-delivery points on the CPU?

Sure, but as you lower voltage, you reduce the drive current of the transistors, and since the capacitance on each node is (mostly) independent of voltage, you end up slowing the circuit down.

Exactly. If we scale down voltages and still want some gain in performance, we'd have to reduce the threshold voltages of the transistors, which would lead to more leakage. I believe in today's optimal high-performance processor designs, the total leakage power is comparable to the total active power.
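The voltage/speed/leakage tradeoff described in the last two posts can be sketched with the alpha-power model: gate delay scales like Vdd/(Vdd - Vt)^α, while subthreshold leakage grows exponentially as Vt drops. This is a toy model; the exponent, subthreshold slope, and voltages are illustrative assumptions, not figures from any real process:

```python
def relative_delay(vdd, vt, alpha=1.3):
    """Gate delay up to a constant: d ~ Vdd / (Vdd - Vt)^alpha (alpha-power model)."""
    return vdd / (vdd - vt) ** alpha

def relative_leakage(vt, subthreshold_slope_mv=100.0):
    """Subthreshold leakage up to a constant: I ~ 10^(-Vt / S)."""
    return 10 ** (-vt * 1000.0 / subthreshold_slope_mv)

base_d, base_l = relative_delay(1.4, 0.35), relative_leakage(0.35)
# Drop Vdd from 1.4 V to 1.0 V at fixed Vt: the gate slows down...
print(f"delay x{relative_delay(1.0, 0.35) / base_d:.2f}")
# ...and recovering speed by dropping Vt to 0.25 V costs ~10x the leakage:
print(f"delay x{relative_delay(1.0, 0.25) / base_d:.2f}, "
      f"leakage x{relative_leakage(0.25) / base_l:.1f}")
```

With a 100 mV/decade slope, every 100 mV shaved off Vt buys back speed at the price of 10x more leakage current, which is how leakage power ends up rivaling active power.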
 
I think it's the government that is limiting the speed for processors. I read 3 years ago that Intel had the technology to put out a 5 Ghz chip, but the government keeps a cap on how fast personal computers can be. They will always have a hand in how far PCs can go.
 
Originally posted by: Deleted member 139972
I think it's the government that is limiting the speed for processors. I read 3 years ago that Intel had the technology to put out a 5 Ghz chip, but the government keeps a cap on how fast personal computers can be. They will always have a hand in how far PCs can go.

Got anything to support any of that?
 
Ctho is right for the most part.

The next breakthrough will have to be quantum computing, probably using spin states. That will be really amazing.
 
As I have written many times before: no, quantum computers cannot replace ordinary computers and are not "fast" in the normal meaning of the word; for ordinary tasks, a classical computer is faster 99 times out of 100.

Search the forum for a longer explanation.
 
Hi,

I don't know too much about LCR electronics per se, but I thought the fundamental practical limit (what do I mean by that!) is the capacitance of the waveguides. Copper waveguides operating around 100 GHz are no doubt highly unlikely in bulk components, but I have no idea how this scales down to IC dimensions?

Cheers,

Andy
 
Originally posted by: Fencer128
Hi,

I don't know too much about LCR electronics per se, but I thought the fundamental practical limit (what do I mean by that!) is the capacitance of the waveguides. Copper waveguides operating around 100 GHz are no doubt highly unlikely in bulk components, but I have no idea how this scales down to IC dimensions?

Cheers,

Andy

LCR... inductance, capacitance, and resistance? Capacitance of waveguides = capacitance of the wires? Well, if that's what you're talking about, then that is part of the issue. The performance gain we get from scaling is now being limited by the wires, since wire capacitance does not scale down as well as the transistors do.
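The wire-scaling point can be made concrete with the RC delay of an interconnect: shrinking a wire's cross-section raises its resistance per unit length, while capacitance per unit length stays roughly constant, so delay gets worse, not better. The copper resistivity below is real; the geometry and per-length capacitance are illustrative assumptions:

```python
RHO_CU = 1.7e-8   # copper resistivity, ohm*m
C_PER_M = 2e-10   # wire capacitance per meter (~0.2 fF/um, a typical order of magnitude)

def wire_rc_delay(length_um, width_nm, thickness_nm):
    """Distributed RC delay ~ 0.5 * R * C of a straight wire, in seconds."""
    length = length_um * 1e-6
    r = RHO_CU * length / (width_nm * 1e-9 * thickness_nm * 1e-9)
    c = C_PER_M * length
    return 0.5 * r * c

# Same 1 mm wire, two metal geometries:
print(f"{wire_rc_delay(1000, 200, 400) * 1e12:.0f} ps")  # wide, thick wire
print(f"{wire_rc_delay(1000, 100, 200) * 1e12:.0f} ps")  # half-scaled: 4x the delay
```

Halving both width and thickness quarters the cross-section, quadrupling resistance and hence delay for the same length, which is why long wires stopped scaling with the transistors.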
 
Originally posted by: TuxDave
Originally posted by: Fencer128
Hi,

I don't know too much about LCR electronics per se, but I thought the fundamental practical limit (what do I mean by that!) is the capacitance of the waveguides. Copper waveguides operating around 100 GHz are no doubt highly unlikely in bulk components, but I have no idea how this scales down to IC dimensions?

Cheers,

Andy

LCR... inductance, capacitance, and resistance? Capacitance of waveguides = capacitance of the wires? Well, if that's what you're talking about, then that is part of the issue. The performance gain we get from scaling is now being limited by the wires, since wire capacitance does not scale down as well as the transistors do.

Yep. That's what I was getting at.

Cheers,

Andy
 
The capacitances of the transistors have an effect on how high you can clock them. I'm not sure about the upper limit these capacitances impose; come to think of it, I think it's in the hundreds of gigahertz, so I don't think we have to worry about that for a while.
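As a sanity check on that "hundreds of gigahertz" figure, the intrinsic limit of a single transistor stage is roughly the single-pole RC cutoff, f ≈ 1/(2π·R·C). The drive resistance and gate capacitance below are illustrative small-transistor orders of magnitude, not values for any particular process:

```python
import math

def rc_cutoff_hz(resistance_ohm, capacitance_f):
    """Single-pole RC cutoff frequency: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2 * math.pi * resistance_ohm * capacitance_f)

# Illustrative: ~1 kOhm effective drive resistance, ~1 fF gate load.
f = rc_cutoff_hz(1e3, 1e-15)
print(f"{f / 1e9:.0f} GHz")  # ~159 GHz for a single gate
```

A real pipeline stage chains tens of gates plus wire delay, which is why whole chips clock well below what one transistor could manage.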
 