What stops immensely high overclocks?

willfreund

Senior member
May 25, 2004
I was testing this today with one unlocked FX chip.

Factors removed:
Cache disabled
AGP locked
PCI locked
SATA controllers locked
USB locked

The rest of the setup:
660 W power supply
Memory at 200 MHz (slow), using DDR560 G.Skill in one slot with maximally lax timings
Liquid nitrogen setup exactly the same as in the picture of the kid who got to 6.00 GHz
CPU voltage modded to 1.9 V (the chip was stable through a 3-hour torture test at stock FSB with this, though I probably slashed a few years off its life)
Memory voltage at 3.1 V, with water cooling added to the RAM
FSB manipulated so the memory stayed in a 190-210 MHz range (equivalent, e.g. 11x260 lands the memory near 200 MHz because of the clock divider; arithmetic sketched at the end of this post)

I got it to 3.16 GHz (it wouldn't get past Windows, but it would POST). After that, a 20 MHz increase in CPU clock would crash it.

What would be causing this? Does the resistance get so high that nothing gets through?
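The clock-divider arithmetic from the FSB line above, sketched in Python. This assumes the integer-divider behavior commonly described for the Athlon 64 memory controller, so treat the divider-selection rule as an assumption, not a spec:

import math

# Sketch of A64-style memory clocking: the memory runs at the CPU clock
# divided by a whole number, picked so the result does not exceed the
# target. The exact divider-selection rule here is an assumption.
def mem_clock_mhz(multiplier, htt_mhz, target_mhz=200):
    cpu_mhz = multiplier * htt_mhz              # e.g. 11 x 260 = 2860 MHz
    divider = math.ceil(cpu_mhz / target_mhz)   # integer divider keeping memory <= target
    return cpu_mhz / divider

print(mem_clock_mhz(11, 260))   # ~190.7 MHz, inside the 190-210 band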
 

Shenkoa

Golden Member
Jul 27, 2004
Disabling the cache causes a major performance hit, so even at 3.16 GHz it would run like an FX at 2.4 GHz. What's causing it? Bad luck and a bad stepping. The core just won't do any more. I wish I understood the logic behind why it won't go a single bit over that frequency, but I don't work for AMD.
 

CycloWizard

Lifer
Sep 10, 2001
I'm no CPU expert, but most materials can only withstand certain frequencies before they break down structurally. The smaller the transistor, the higher this critical frequency, because there will be fewer flaws. Strength and similar properties are flaw-driven until you get into the nano range (below the 0.09 micron (90 nm) feature size currently used in most transistors).

It's like taking a stick and shaking it really fast, a few hundred times per second. Now take a stick of the same diameter but half the length: it will withstand much higher frequencies before it breaks. Probably not a great analogy, but hopefully you can visualize it.

Edit: I guess this doesn't really address overclocking so much as clock speeds in general, but I believe the two are related. I'm also not sure how a processor decides whether or not it will POST at a given clock speed, if that's what you're asking.
 

TuxDave

Lifer
Oct 8, 2002
Let's put it this way: to successfully overclock, you must satisfy two conditions. One is that the transistors will switch fast enough; the other is that the transistors don't end up killing themselves with excess heat or overdrive voltage. Now, assuming the transistors are fine, what limits an overclock?

Sometimes you get a little lucky, and the batch your core comes from happens to have transistors that give a little more drive than usual and present less load than usual. Intel won't sell your processor binned at the fastest speed of the bunch, since they'd have headaches with all the returns. Instead, they sell your processor well within a boundary of probable performance. In other words, chances are your processor will run at some speed higher than its rated one.

Picture one logic block leading to another. You up the core voltage a little, so your transistors are driving as hard as they can. But that drive strength is finite, so it takes some period of time to charge a node up to the point where the next logic gate will detect that something has happened. As you increase the clock frequency, you're giving each gate less time to operate, and eventually you hit a glitch where, even though the gate was trying to charge the node, it wasn't charged enough for the following gate to detect it. You can't overclock much further than that.

That's the simplified version. If you want it as a one-sentence technical statement: overclocking reduces your noise margin and increases the probability of logic errors.
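If you want to see the shape of that failure, here's a toy model in Python. Everything in it is invented (a single RC time constant, a fixed detection threshold); real gates aren't characterized this way, but it produces the same kind of cliff the OP is hitting:

import math

# Toy model of the node-charging race described above. A driver charges a
# node through an effective RC; the next gate only "sees" a 1 if the node
# crosses its detection threshold before the clock ticks again.
# All numbers are invented for illustration.
VDD = 1.5          # core voltage, volts
THRESHOLD = 0.75   # voltage the next gate needs in order to detect a '1'
TAU = 0.45e-9      # effective RC time constant of the node, seconds

def node_voltage(t):
    """Node voltage t seconds after the driver starts charging it."""
    return VDD * (1 - math.exp(-t / TAU))

for freq_ghz in (2.0, 2.6, 3.0, 3.2, 3.4):
    period = 1.0 / (freq_ghz * 1e9)   # time each gate gets per cycle
    v = node_voltage(period)
    verdict = "detected" if v >= THRESHOLD else "LOGIC ERROR"
    print(f"{freq_ghz:.1f} GHz: node reaches {v:.2f} V -> {verdict}")

Raising VDD in this toy model pushes the node over the (fixed) threshold sooner, which is the voltage-mod effect in a nutshell.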
 

Matthias99

Diamond Member
Oct 7, 2003
Originally posted by: CTho9305
What exactly do you mean by "Overclocking will reduce your noise margin"?

Despite being 'digital' devices, at some level every component in your computer is analog -- transistors being no exception. We talk about digital devices working with ones and zeroes (on and off states for transistors), but in reality what they're measuring is voltage levels. For a chip running at a VCC of 3.3V, for instance, you might define '0' as 0 <= Vin <= 1.4V and '1' as 2.0V <= Vin <= 3.3V. These ranges provide 'noise margins' -- you might only be able to produce a voltage that's stable to ±0.1V even in the best conditions, so allowing anything over 2.0V to count as 'on' takes this into account.

When an input or output in a circuit goes from, let's say, 0 to 1 (say, 0.1V to 3.2V), it doesn't do so instantly. It takes some (very, very small) amount of time for the change to propagate from one transistor/logic gate to the next. Increasing the frequency decreases the amount of time available for this propagation to occur, and at some point it just doesn't make it (or doesn't make it consistently). When the clock ticks, the voltage may still be at, say, 1.6V (which is neither 1 nor 0, and could fall either way depending on the transistors in question), or still down in the '0' range. This would produce a logic error, and would probably cause your system to crash or malfunction badly.

The rise/fall time of a transistor depends mostly on capacitance and voltage -- you can't do much about capacitance (it's a function of the process, materials, and gate size), but this is why increasing chip voltage makes higher overclocks more stable: it drives the voltages up and down more rapidly.
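Those threshold bands, spelled out as a trivial Python sketch (the voltages are just the illustrative numbers from this post, not any chip's datasheet):

# The example logic levels from above: '0' is 0..1.4 V, '1' is 2.0..3.3 V,
# and anything in between is undefined -- it could be read either way.
V_LOW_MAX = 1.4    # highest voltage that still counts as '0'
V_HIGH_MIN = 2.0   # lowest voltage that counts as '1'

def read_bit(vin):
    if vin <= V_LOW_MAX:
        return "0"
    if vin >= V_HIGH_MIN:
        return "1"
    return "undefined (could fall either way)"

# A rising edge sampled mid-transition lands in the forbidden band:
for vin in (0.1, 1.6, 3.2):
    print(f"{vin:.1f} V reads as {read_bit(vin)}")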
 

CTho9305

Elite Member
Jul 26, 2000
Originally posted by: Matthias99
Despite being 'digital' devices, at some level every component in your computer is analog -- transistors being no exception. We talk about digital devices working with ones and zeroes (on and off states for transistors), but in reality what they're measuring is voltage levels. For a chip running at a VCC of 3.3V, for instance, you might define '0' as 0 <= Vin <= 1.4V and '1' as 2.0V <= Vin <= 3.3V. These ranges provide 'noise margins' -- you might only be able to produce a voltage that's stable to ±0.1V even in the best conditions, so allowing anything over 2.0V to count as 'on' takes this into account.

When an input or output in a circuit goes from, let's say, 0 to 1 (say, 0.1V to 3.2V), it doesn't do so instantly. It takes some (very, very small) amount of time for the change to propagate from one transistor/logic gate to the next. Increasing the frequency decreases the amount of time available for this propagation to occur, and at some point it just doesn't make it (or doesn't make it consistently). When the clock ticks, the voltage may still be at, say, 1.6V (which is neither 1 nor 0, and could fall either way depending on the transistors in question), or still down in the '0' range. This would produce a logic error, and would probably cause your system to crash or malfunction badly.

The rise/fall time of a transistor depends mostly on capacitance and voltage -- you can't do much about capacitance (it's a function of the process, materials, and gate size), but this is why increasing chip voltage makes higher overclocks more stable: it drives the voltages up and down more rapidly.

Ok, I knew all of that, but I'm not sure I'd call that "noise margin". It's really just a setup-time violation on the receiving flip-flop (assuming the last node before the flip-flop is transitioning when the clock ticks), or just a general timing violation (if you overclock further than that).

When people say overclocking reduces noise margin / makes signals noisier, it implies (to me, at least) that if you monitored a given net, its signal quality would get worse and worse as you overclock. However, regardless of clock frequency, most of the nets in a circuit would not be affected.

If you imagine a puddle where you create ripples at one side every n seconds and sample (check to see whether the water is wavy) at the other side n seconds later, then switch to sampling after n/2 seconds, the ripples move exactly the same way - you're just going to be checking the other side of the puddle before they arrive.
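Here's that framing as a toy setup-time check in Python. The delay numbers are invented; the point is that the arrival time is fixed by the logic (like the ripple's travel time across the puddle) while only the clock edge moves:

# Setup-time view of overclocking: the data's arrival time at the flip-flop
# is fixed by the logic depth; raising the clock only moves the sampling
# moment earlier. Delay numbers are invented for illustration.
T_LOGIC = 0.290e-9   # worst-case propagation delay through the logic, s
T_SETUP = 0.020e-9   # how long data must be stable before the clock edge, s

for freq_ghz in (3.0, 3.16, 3.3):
    period = 1.0 / (freq_ghz * 1e9)
    slack = period - (T_LOGIC + T_SETUP)
    verdict = "meets setup" if slack >= 0 else "SETUP VIOLATION"
    print(f"{freq_ghz:.2f} GHz: slack {slack * 1e12:+6.1f} ps -> {verdict}")

# With these numbers the wall sits at 1 / (T_LOGIC + T_SETUP), about 3.23 GHz.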
 

Matthias99

Diamond Member
Oct 7, 2003
Originally posted by: CTho9305
Ok, I knew all of that, but I'm not sure I'd call that "noise margin". It's really just a setup-time violation on the receiving flip-flop (assuming the last node before the flip-flop is transitioning when the clock ticks), or just a general timing violation (if you overclock further than that).

When people say overclocking reduces noise margin / makes signals noisier, it implies (to me, at least) that if you monitored a given net, its signal quality would get worse and worse as you overclock. However, regardless of clock frequency, most of the nets in a circuit would not be affected.

If you imagine a puddle where you create ripples at one side every n seconds and sample (check to see whether the water is wavy) at the other side n seconds later, then switch to sampling after n/2 seconds, the ripples move exactly the same way - you're just going to be checking the other side of the puddle before they arrive.

If you held everything else constant (particularly voltage and temperature), you're right: signal quality should (theoretically) stay the same, and your only constraints would be timing-related. In practice, though, they all tend to be interrelated. You could certainly push a circuit to a point where you theoretically have enough time on average for the signal to arrive, but the connection is noisy enough that it doesn't always make it. I guess it's a matter of perspective. :)
 

TuxDave

Lifer
Oct 8, 2002
Originally posted by: CTho9305

Ok, I knew all of that, but I'm not sure I'd call that "noise margin". It's really just a setup-time violation on the receiving flip-flop (assuming the last node before the flip-flop is transitioning when the clock ticks), or just a general timing violation (if you overclock further than that).

When people say overclocking reduces noise margin / makes signals noisier, it implies (to me, at least) that if you monitored a given net, its signal quality would get worse and worse as you overclock. However, regardless of clock frequency, most of the nets in a circuit would not be affected.

If you imagine a puddle where you create ripples at one side every n seconds and sample (check to see whether the water is wavy) at the other side n seconds later, then switch to sampling after n/2 seconds, the ripples move exactly the same way - you're just going to be checking the other side of the puddle before they arrive.

Noise margin is how much noise you can sustain on a line before it triggers an error. Take Matthias99's example of '0' being 0 to 1.4V, with a flip-flop receiving that signal. If you're clocking very slowly, the net will most likely have discharged all the way to 0V, so you have a 1.4V noise margin. If you start prematurely clocking the flip-flop and the node has only discharged down to 0.5V, you have a 0.9V noise margin. Since noise is probabilistic, decreasing your average noise margin increases the probability of error.
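Here's that example as a toy RC discharge in Python (the time constant is invented; it just shows the margin shrinking as the clock edge moves earlier):

import math

# The net starts at '1' (3.3 V here) and discharges toward 0 V; the flip-flop
# reads a clean '0' only if the node is at or below 1.4 V when the clock
# ticks. Noise margin = how far below 1.4 V the node actually got.
# All numbers are invented for illustration.
V_START = 3.3      # volts, node fully charged
V_LOW_MAX = 1.4    # the '0' threshold from the example above
TAU = 0.2e-9       # effective RC discharge time constant, seconds

for period_ns in (1.0, 0.4, 0.2, 0.15):
    v = V_START * math.exp(-(period_ns * 1e-9) / TAU)
    margin = V_LOW_MAX - v
    print(f"clock period {period_ns:4.2f} ns: node at {v:4.2f} V, "
          f"noise margin {margin:+5.2f} V")

# A negative margin means the node never even made it into the '0' band.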
 

blahblah99

Platinum Member
Oct 10, 2000
I'm surprised no one mentioned timing margins as the limit to how fast a chip can "overclock".

In digital electronics you need to worry about setup and hold times, as well as clock-to-data-out: timing in general.

If a receiver requires a setup time of X ns and you overclock the driver, you are shrinking the timing margin at the receiver: each step up in clock speed makes the period shorter and shorter. Eventually you reach the point where the data arriving at the receiver no longer meets the setup time, and an error or an unknown output occurs.
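At the board level the budget really is just arithmetic. A Python sketch with invented numbers in the ballpark of a slow external bus:

# Board-level timing budget: driver clock-to-out + trace flight time +
# receiver setup must all fit inside one clock period. Numbers invented.
T_CLK_TO_OUT = 2.0   # ns, driver's clock-to-data-out
T_FLIGHT     = 1.2   # ns, propagation along the board trace
T_SETUP      = 0.8   # ns, receiver's setup requirement
BUDGET = T_CLK_TO_OUT + T_FLIGHT + T_SETUP   # 4.0 ns total

for freq_mhz in (200, 250, 266):
    period_ns = 1000.0 / freq_mhz
    margin = period_ns - BUDGET
    verdict = "ok" if margin >= 0 else "fails setup"
    print(f"{freq_mhz} MHz: period {period_ns:5.2f} ns, "
          f"margin {margin:+5.2f} ns ({verdict})")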
 

Sahakiel

Golden Member
Oct 19, 2001
Originally posted by: blahblah99
I'm surprised no one mentioned timing margins as the limit to how fast a chip can "overclock".

Read the 5th and 6th posts.
 

blahblah99

Platinum Member
Oct 10, 2000
Originally posted by: Sahakiel
Read the 5th and 6th posts.

They're talking specifically about propagation delay at the transistor level. I'm talking more on a general scale, both at the chip level and at the board level, with other timing requirements taken into account.
 

Matthias99

Diamond Member
Oct 7, 2003
Originally posted by: blahblah99
They're talking specifically about propagation delay at the transistor level. I'm talking more on a general scale, both at the chip level and at the board level, with other timing requirements taken into account.

And where, pray tell, do you think chip-level and board-level timing delays come from? Inter-chip delays are a result of intra-chip (transistor-level) delays, although if the chips are fast enough the capacitance of the connection starts to become a limiting factor as well. The principles are the same no matter what level you're working at.