Maximum possible clock frequency for a CPU

Brian23

Banned
Dec 28, 1999
1,655
1
0
Today's CPUs run at about 3 GHz. The speed of light is approximately 3x10^8 m/s, so in one clock cycle light could travel 10 cm. Electrons travel through different materials at different speeds, but always slower than the speed of light. Once the clock frequency gets high enough, it will be impossible for a signal to go from one side of the CPU to the other in one clock cycle. At 30 GHz, light could only travel 1 cm in a clock cycle.

Therefore 30 GHz is about the highest clock speed we will ever see.
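A quick sketch of the arithmetic above, assuming light in a vacuum (real on-chip signals are slower):

```python
# Distance light travels per clock cycle, assuming propagation at c in vacuum.
C = 3.0e8  # speed of light, m/s (approximate)

for freq_ghz in (1, 3, 10, 30, 100):
    period_s = 1.0 / (freq_ghz * 1e9)   # one clock cycle in seconds
    distance_cm = C * period_s * 100    # metres -> centimetres
    print(f"{freq_ghz:>4} GHz: cycle = {period_s * 1e12:6.1f} ps, "
          f"light travels {distance_cm:5.2f} cm")
```

At 3 GHz this gives the 10 cm figure above; at 30 GHz it drops to 1 cm.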

Comments?
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
1. At 3 GHz it is already impossible for signals to go across the die in one cycle, even using the highest metal layers and full shielding.

2. No signal in a CPU needs to go across the die in one cycle anyway; that's just bad design. There are very few global signals on a CPU, and entire protocols are built around them to allow a multicycle response.
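To put rough numbers on point 1, here is a minimal sketch. The wire-delay and die-size figures are assumptions chosen purely for illustration (repeater-inserted global wires are commonly quoted in the tens of ps per mm), not numbers from any particular process:

```python
# Rough illustration: how far a repeated global wire can reach in one cycle,
# compared to an assumed die edge. Both constants below are illustrative only.
WIRE_DELAY_PS_PER_MM = 70.0   # assumed delay of a repeated top-metal wire
DIE_EDGE_MM = 15.0            # assumed die edge length

for freq_ghz in (1, 3, 5, 10):
    cycle_ps = 1e3 / freq_ghz                   # clock period in ps
    reach_mm = cycle_ps / WIRE_DELAY_PS_PER_MM  # distance covered in one cycle
    cycles_across = DIE_EDGE_MM / reach_mm      # cycles to cross the assumed die
    print(f"{freq_ghz:>4} GHz: {cycle_ps:6.1f} ps/cycle, reach ~{reach_mm:4.1f} mm, "
          f"~{cycles_across:3.1f} cycles to cross a {DIE_EDGE_MM:.0f} mm die")
```

Even under these generous assumptions, the per-cycle reach at 3 GHz is a few millimetres, well short of a full die edge.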
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
Circuits with a few thousand elements and clock frequencies of a couple of hundred GHz have already been demonstrated.
However, there are applications where you need "real-time" response and the signal must travel through the whole circuit in only a couple of cycles; in that case you have no choice but to make the chip as small as possible.
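A rough size bound for that "real-time" case, assuming (purely for illustration) that on-chip signals propagate at about half the speed of light:

```python
# How small must the circuit be if a signal has to cross it within a couple
# of cycles at a few hundred GHz? The signal velocity is an assumed figure.
C = 3.0e8                 # speed of light in vacuum, m/s
V_SIGNAL = 0.5 * C        # assumed on-chip signal velocity (illustrative)
CYCLES_ALLOWED = 2        # "a couple of cycles" from the post

for freq_ghz in (100, 200, 300):
    period_s = 1.0 / (freq_ghz * 1e9)
    max_size_mm = V_SIGNAL * period_s * CYCLES_ALLOWED * 1e3
    print(f"{freq_ghz} GHz: circuit must be under ~{max_size_mm:.1f} mm across")
```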


 

BitByBit

Senior member
Jan 2, 2005
474
2
81
Reaching those clock speeds will simply require die shrinks, along with better transistors.
The signal only has to reach the next pipeline stage in one clock cycle - it doesn't have to traverse the entire pipeline.
Processor cores will continue to shrink as clock speed scaling requires, until, of course, transistors become so small that they end up on the same scale as the atoms that comprise them!

Interesting article
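A toy model of that pipelining point, with made-up stage names, just to show that data advances one stage per clock and nothing ever has to cross the whole pipeline in a single cycle:

```python
# Toy pipeline: values move exactly one stage per cycle through the registers.
STAGES = ["fetch", "decode", "execute", "writeback"]

def run_pipeline(values):
    """Clock each value through the stage registers, one stage per cycle."""
    regs = [None] * len(STAGES)                   # pipeline registers
    results = []
    stream = list(values) + [None] * len(STAGES)  # trailing bubbles flush the pipe
    for cycle, v in enumerate(stream):
        if regs[-1] is not None:
            results.append(regs[-1])              # value leaving the last stage
        regs = [v] + regs[:-1]                    # shift forward by one stage
        print(f"cycle {cycle:2d}: " +
              " | ".join(f"{s}={r}" for s, r in zip(STAGES, regs)))
    return results

print("retired:", run_pipeline(["A", "B", "C"]))
```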
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
This is the second time I've heard this speed-of-light argument. Who in the world started this madness? Why would a designer limit his design by requiring signals to traverse the entire core in one clock cycle?
 

MAW1082

Senior member
Jun 17, 2003
510
7
81
Yeah, I don't think there's a reason the signal would need to go across the die in one clock cycle. If you really needed to use the output of a calculation for the very next instruction, you could just insert an appropriate number of "no operation" instructions in the code.
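A minimal sketch of that idea, assuming a made-up three-cycle result latency and a toy instruction format (nothing here corresponds to a real ISA):

```python
# Toy scheduler: pad a dependent instruction with NOPs until the result it
# needs has had RESULT_LATENCY cycles to come back. Latency is illustrative.
RESULT_LATENCY = 3  # assumed cycles before a result can be consumed

def pad_with_nops(program):
    """program: list of (dest_reg, src_regs) tuples; returns padded list."""
    padded = []
    last_write = {}  # reg -> index in `padded` where it was last written
    for dest, srcs in program:
        for src in srcs:
            if src in last_write:
                gap = len(padded) - last_write[src] - 1
                for _ in range(max(0, RESULT_LATENCY - 1 - gap)):
                    padded.append(("nop", ()))
        padded.append((dest, srcs))
        last_write[dest] = len(padded) - 1
    return padded

prog = [("r1", ("r2", "r3")),   # r1 = r2 op r3
        ("r4", ("r1", "r5"))]   # uses r1 immediately -> needs padding
for instr in pad_with_nops(prog):
    print(instr)
```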
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
There are certain applications where this is necessary; the digital part of high-resolution, high-speed (>10 GHz, 14-16 bit) A/D converters is one example. Some designs require real-time feedback, so the internal logic basically needs to process data at least 2x faster than the bandwidth of the incoming signal.
Of course, this is an application where you would use a DSP or (more likely) an ASIC, not an ordinary CPU.
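The 2x figure is just the Nyquist criterion; plugging in the >10 GHz bandwidth from the post:

```python
# Nyquist sampling for the >10 GHz bandwidth mentioned above.
bandwidth_hz = 10e9                  # signal bandwidth (from the post)
min_sample_rate = 2 * bandwidth_hz   # Nyquist rate
print(f"Minimum sample rate: {min_sample_rate / 1e9:.0f} GS/s")
print(f"Real-time feedback logic has to keep up with "
      f"{min_sample_rate / 1e9:.0f} GHz worth of samples")
```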

 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
Originally posted by: f95toli
There are certain applications where this is necessary; the digital part of high-resolution, high-speed (>10 GHz, 14-16 bit) A/D converters is one example. Some designs require real-time feedback, so the internal logic basically needs to process data at least 2x faster than the bandwidth of the incoming signal.
Of course, this is an application where you would use a DSP or (more likely) an ASIC, not an ordinary CPU.

Then there are sigma-delta (or delta-sigma by some schools) ADCs, where you need to clock much more than twice the incoming frequency.
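A minimal first-order delta-sigma sketch (the oversampling ratio, amplitude, and signal frequency are arbitrary illustrative choices) showing why the modulator clock runs at many times the signal frequency:

```python
# First-order delta-sigma modulator: a 1-bit quantiser inside a feedback loop,
# clocked far above the input's Nyquist rate. Averaging the bitstream over a
# block of OSR samples recovers the slow input.
import math

OSR = 64                        # assumed oversampling ratio
N = 4096                        # number of 1-bit output samples
signal_freq = 1.0 / (OSR * 32)  # slow input relative to the modulator clock

integrator = 0.0
feedback = 0.0
bits = []
for n in range(N):
    x = 0.5 * math.sin(2 * math.pi * signal_freq * n)  # input sample
    integrator += x - feedback                         # accumulate the error
    feedback = 1.0 if integrator >= 0 else -1.0        # 1-bit quantiser
    bits.append(feedback)

# Crude decimation: each block average of OSR bits tracks the input shape.
for start in range(0, N, OSR * 8):
    block = bits[start:start + OSR]
    print(f"n={start:5d}: block average = {sum(block) / len(block):+.3f}")
```

The averaged bitstream only tracks the input because the modulator is clocked at many times the input's Nyquist rate, which is exactly the "much more than twice" point above.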