Why no CPU over 5GHz?

Oreo

Senior member
Oct 11, 1999
755
0
0
Now that Intel has postponed the 4GHz Pentium 4, and both AMD and IBM seem to be having trouble moving to the 0.09 micron process, we seem to be hitting a wall in CPU speed. So, what are the main reasons for this? Heat? Electromigration? The wavelength of the signals disturbing the CPU? If somebody could explain this in simple terms I'd be glad ;)
 
Jul 5, 2004
56
0
0
Heat. A few more watts and CPUs will produce as much heat as the surface of the sun (relative to their size, of course).
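For what it's worth, the sun comparison can be sanity-checked with rough numbers. The CPU figures below are assumptions (~100 W over a ~112 mm^2, roughly Prescott-sized, die), not specs:

```python
# Rough sanity check of the "surface of the sun" comparison.
# Assumed figures: ~100 W dissipated over a ~112 mm^2 die.
SUN_LUMINOSITY_W = 3.846e26   # total radiated power of the sun
SUN_RADIUS_M = 6.96e8

sun_area_m2 = 4 * 3.14159265 * SUN_RADIUS_M ** 2
sun_flux = SUN_LUMINOSITY_W / sun_area_m2 / 1e4   # W/m^2 -> W/cm^2

cpu_flux = 100 / 1.12                             # W/cm^2 (112 mm^2 = 1.12 cm^2)

print(f"sun's surface: ~{sun_flux:.0f} W/cm^2")
print(f"CPU die:       ~{cpu_flux:.0f} W/cm^2")
```

So a hot CPU is still well below the sun's surface flux; the famous comparison was about the power-density *trend*, not the absolute number.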
 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
Heat's a big one. So instead of focusing on pushing the GHz higher, they are starting to focus on making the processor use its clock cycles more effectively. Great fun it is :).
 

Oreo

Senior member
Oct 11, 1999
755
0
0
Yeah, I know heat is a big issue nowadays. But let's say heat wasn't an issue, and a 4GHz Prescott only emitted 5 watts. What would be the limit then? Are there any physical problems with speeds up to, say, 10-20GHz other than heat?
 
Jul 5, 2004
56
0
0
Well, there is a physical problem: electricity only travels so fast, so if a piece of data can't get from one side of the CPU to the other before the next clock edge when it's needed, things could get ugly.

I'm sure if we get that far (3 terahertz or so?) we'll come up with some kind of latency scheme to compensate.
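A rough sketch of that limit, assuming a signal moves at about half the speed of light on a well-driven wire (that 0.5c factor is a rule-of-thumb guess, not a measured value):

```python
# How far can a signal get in one clock cycle? On-chip signals travel
# slower than light in vacuum; 0.5c below is an assumed rule of thumb.
C = 299_792_458  # speed of light, m/s

for freq_ghz in (4, 100, 3000):
    period_s = 1 / (freq_ghz * 1e9)
    dist_mm = 0.5 * C * period_s * 1e3   # metres -> millimetres
    print(f"{freq_ghz:5d} GHz: {period_s * 1e12:7.3f} ps per cycle, "
          f"~{dist_mm:.2f} mm per cycle")
```

A modern die is very roughly 10-15 mm on a side, so at 100 GHz a signal couldn't even cross the chip in a single cycle.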
 

Kondik

Member
Aug 6, 2004
53
0
0
I think a 5 GHz CPU could be used as the heart of an Alaska heating system :) People should think about making 2-3 GHz CPUs coolable with a plain piece of iron :) rather than making multi-GHz or THz CPUs.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: thelordemperor
Well, there is a physical problem: electricity only travels so fast, so if a piece of data can't get from one side of the CPU to the other before the next clock edge when it's needed, things could get ugly.

I'm sure if we get that far (3 terahertz or so?) we'll come up with some kind of latency scheme to compensate.

For what it's worth, you already can't drive a signal the whole way across a CPU in one cycle.
 

Oreo

Senior member
Oct 11, 1999
755
0
0
CTho9305, so is that a problem already? What would be the result if you had a 100GHz CPU (that did not have heat issues)?
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: Oreo
CTho9305, so is that a problem already? What would be the result if you had a 100GHz CPU (that did not have heat issues)?

You can't, realistically. The current way to get a signal from one side of the chip to the other is to give it more than one cycle to arrive (you might put flip flops at intervals along a long wire to break it into cycles). I believe that's what the Drive stages in the P4 pipeline are for.

At 100GHz, your cycle time is 10 picoseconds, and a fast inverter (the simplest logic gate there is) takes a bit over 10 picoseconds on a modern manufacturing process. The thing is, between every pipeline stage you need flip flops (they store the data), and flip flops cost you around 2 NAND gate delays (NANDs are slower than inverters - about half the speed). You can't do much logic with just inverters, though.

If you wanted, you could do a CPU where every pipeline stage had one NAND gate, but there are a LOT of reasons this is a bad idea:
1. Flip flops are big - you'd have a giant flip flop, a tiny gate, then another giant flip flop
2. The pipeline would be impossible to fill. Given that adding two 32-bit numbers takes over 8 NAND delays, code like this presents a problem:
ADD a, b, c ;;;a = b+c
ADD d, a, a ;;;d = a+a
You couldn't start the second operation for at least 8 cycles after the first one starts. Out-of-order execution can mitigate some of these dependencies, but there are limits (and in a real CPU, there's a lot more to executing an A+B operation than just an addition).
3. Branches would kill performance. Whenever the CPU hits an "if" instruction, it has to guess the result (~95% accuracy is near the current max, I think). About 1 in 5 instructions are "if", so you can see that the longer your pipeline, the more branches are "in flight" and could be wrong. If you have, say, 5 branches in flight (maybe a 25-stage pipeline, between Northwood and Prescott?), with 95% accuracy on each, there's only a 77% chance you predicted all of them correctly. You can see that you'd be spending a LOT of time executing the wrong path of a branch - work which gets thrown away.
4. There is a certain amount of delay involved in getting the clock signal routed across the chip (clock skew). I think in most designs, you sacrifice more than one gate delay to account for skew - basically the flip flop at the start of a pipeline stage could start at time 0+skew and the flip flop at the end of the stage could fire at time Tcycle-skew, so you can only use Tcycle-2*skew if you want the design to be robust.
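Two of the numbers above are easy to check, e.g.:

```python
# Checking the cycle time at 100 GHz, and the odds that every
# in-flight branch was predicted correctly.
cycle_ps = 1 / 100e9 * 1e12
print(f"cycle time at 100 GHz: {cycle_ps:.0f} ps")

ACCURACY = 0.95  # per-branch prediction accuracy from the post above
for in_flight in (1, 5, 20):
    p_all = ACCURACY ** in_flight   # all predictions must be right at once
    print(f"{in_flight:2d} branches in flight: {p_all:.0%} chance all correct")
```

At 5 branches in flight that's the ~77% figure from point 3; at 20 in flight you'd be on the wrong path most of the time.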
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
Originally posted by: Oreo
CTho9305, so is that a problem already? What would be the result if you had a 100GHz CPU (that did not have heat issues)?

If there were no heat or energy issues, I think our CPUs would be kicking so much ass today. The only things left to get rid of would be the uncertainty of clock jitter, transistor switching speeds, and other sources of randomness. Sprinkle in a little GaAs to get really fast transistors and then we'd have an uber uber fast CPU.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: TuxDave
Originally posted by: Oreo
CTho9305, so is that a problem already? What would be the result if you had a 100GHz CPU (that did not have heat issues)?

If there were no heat or energy issues, I think our CPUs would be kicking so much ass today. The only things left to get rid of would be the uncertainty of clock jitter, transistor switching speeds, and other sources of randomness. Sprinkle in a little GaAs to get really fast transistors and then we'd have an uber uber fast CPU.

The low hole mobility of GaAs makes the PFETs really suck (or so pm once told me).
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
Originally posted by: CTho9305
Originally posted by: TuxDave
Originally posted by: Oreo
CTho9305, so is that a problem already? What would be the result if you had a 100GHz CPU (that did not have heat issues)?

If there were no heat or energy issues, I think our CPUs would be kicking so much ass today. The only things left to get rid of would be the uncertainty of clock jitter, transistor switching speeds, and other sources of randomness. Sprinkle in a little GaAs to get really fast transistors and then we'd have an uber uber fast CPU.

The low hole mobility of GaAs makes the PFETs really suck (or so pm once told me).

'tis ok. Since we're given that energy is not a problem, we can just stick to using mostly dynamic gates, which are NFET-driven.
 

beansbaxter

Senior member
Sep 28, 2001
290
0
0
Intel has been investing a lot of money in research and development on fiber optics... don't ya think that is where the future wave of processors will eventually go? For terahertz, petahertz, exahertz, the cycles will go through fiber optics...
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
The speed of light = 299,792,458 meters/second.
In 1 picosecond, light travels 0.3 millimeters (300 microns).

Even with fiber optics, a signal can't go any further in a cycle. Maybe someone can find numbers on how wide a modern CPU's datapath is in microns.
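That figure checks out:

```python
# Verifying the number above: distance light covers in one picosecond.
C = 299_792_458              # m/s, speed of light in vacuum
d_microns = C * 1e-12 * 1e6  # metres travelled in 1 ps, converted to microns
print(f"~{d_microns:.0f} microns")  # ~300
```

And light in an actual fiber moves at roughly 2/3 of c, so the real per-cycle distance would be even shorter.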
 

beansbaxter

Senior member
Sep 28, 2001
290
0
0
Well, the fiber optic thing is just conjecture anyway...

It's probably just heat, line width, device layout, line routing, and overall architecture. Too many jacka$$es/attitudes working at the Ronler Acres R&D site.

Just wait till they perfect the diamond CVD process; at certain process speeds the heat would liquefy silicon.
 

sao123

Lifer
May 27, 2002
12,653
205
106
Another problem with shrinking circuit sizes is electrical current leakage.
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
CPUs have been using "nano tech" for several years; the "definition" is, after all, that the circuit has features smaller than 1 micron, and linewidths smaller than 100 nm have been around for some time now.

So no, nano tech won't save us.

That said, 5 GHz is actually quite slow: as long as you do not need complex circuits, clock frequencies of tens of GHz are standard in III-V technology (GaAs, InP and so on). These materials are mostly used in telecom (switches, multiplexers etc). But AFAIK it is not possible to build whole CPUs in, for example, GaAs.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: f95toli
CPUs have been using "nano tech" for several years; the "definition" is, after all, that the circuit has features smaller than 1 micron, and linewidths smaller than 100 nm have been around for some time now.

So no, nano tech won't save us.

That said, 5 GHz is actually quite slow: as long as you do not need complex circuits, clock frequencies of tens of GHz are standard in III-V technology (GaAs, InP and so on). These materials are mostly used in telecom (switches, multiplexers etc). But AFAIK it is not possible to build whole CPUs in, for example, GaAs.

Don't those circuits often use BJTs instead of FETs, too? BJTs are a lot faster, if you can afford the power consumption.
 

ColdFusion718

Diamond Member
Mar 4, 2000
3,496
9
81
Originally posted by: beansbaxter
Well, the fiber optic thing is just conjecture anyway...

It's probably just heat, line width, device layout, line routing, and overall architecture. Too many jacka$$es/attitudes working at the Ronler Acres R&D site.

Just wait till they perfect the diamond CVD process; at certain process speeds the heat would liquefy silicon.

Actually, current research is more interested in silicon carbide, according to one of my device physics professors. Silicon carbide's properties are very similar to diamond's when it comes to robustness against heat.

Please correct me if I'm wrong.
 

bacon333

Senior member
Mar 12, 2003
524
0
0
As you get into submicron die sizes there will be about three issues:

- clocking is a bit more challenging, as previously described in the other posts
- interconnect effects play a much more significant role (interconnects are the wires that connect the different parts of the transistors to make up the chip)
- power dissipation could be a HUGE factor (along with drops in the supply rails)

You're compacting millions of transistors into a small area. There are going to be a lot of resistance and capacitance problems within the interconnects and a lot of wasted power (heat). If we ran submicron chips at the supply voltage of our current chips, there would be a lot of power dissipated. If we can run the supply rails at a lower voltage, these transistor-based chips will not require as much power, hence lower-wattage power supplies. Obviously, if heat weren't an issue, then you could hit 5 GHz easily, even with the chips of today (tomshardware). I hope that answers your question.
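The supply-voltage point in numbers: dynamic switching power scales roughly as P = a*C*V^2*f. The capacitance and activity values below are made up purely for illustration, not taken from any real chip:

```python
# Dynamic switching power: P = a * C * V^2 * f
# (a = activity factor, C = switched capacitance, f = clock frequency).
def dynamic_power(cap_f, vdd, freq_hz, activity=0.1):
    """Power burned charging and discharging gate/wire capacitance."""
    return activity * cap_f * vdd ** 2 * freq_hz

C_EFF = 40e-9  # assumed effective switched capacitance, farads
F = 3e9        # 3 GHz clock

for vdd in (1.5, 1.2, 1.0):
    print(f"Vdd = {vdd} V -> ~{dynamic_power(C_EFF, vdd, F):.1f} W")
```

Because of the V^2 term, even a modest drop in supply voltage cuts power much faster than the same fractional drop in clock frequency would.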
 

dudeguy

Banned
Aug 11, 2004
219
0
0
When I say nanotech I mean new materials and designs, not just miniaturisation using a few new composites and metals.

Like conducting polymers, and carbon nanotubes for optics etc.
 

imported_tss4

Golden Member
Jun 30, 2004
1,607
0
0
One possible way to overcome the maximum distance a signal can travel in one cycle is to use more layers, minimizing the distance between points on the chip. This adds thermal management problems, but there are ways to compensate for that.