
The Physics behind cores

BoomSlangx

Member
Feb 16, 2018
Hey guys,

Does anyone know enough about microprocessors to explain what causes CPUs and GPUs to crash, throw errors, or lock up when overclocking, when it's not heat related?

You know what I mean: for years I thought that what limits a card's core from being overclocked further is simply not being able to keep it cool enough.
But I've been seeing a lot lately that the 1080 Tis in particular don't like it one little bit when you try to push their core over the 2100 MHz threshold.

I've seen a few people getting their cards to around 2088 MHz with no problems and temps only in the mid 40s, but as soon as they push to maybe 2102 MHz the card locks up or crashes.
This surprises me: why would Nvidia release a chip that runs so close to the edge of failing?
And if the limit of the architecture is roughly 2100 MHz, why are so many manufacturers like Aorus making these super-tanky copper heatsink / heatpipe / copper-backplated monstrosities if it makes no difference?

I mean, with a cleaning and a fresh application of some liquid-metal compound on the core of a Founders Edition card, I'm sure even those could reach the card's limit quite easily, yeah?

So my real question is: why all the complicated cooling solutions, ranging from massive triple-slot copper designs, to high-RPM triple- and quad-fan arrangements, to waterblocks and even open-bench LN2, if it all just maxes out around 2100 MHz?

Thanks guys!!!
 

IllogicalGlory

Senior member
Mar 8, 2013
It has to do with the propagation delay of the electrical signals in the chip. The longer the clock period (the lower the frequency), the more time the signals have to propagate through the chip's logic networks between clock switches. If the clock switches before all the circuit's signals have settled, some data could be lost, which results in an error.

To mitigate this, one can increase the voltage of their chip, which reduces propagation delay. Of course, voltage increases also massively increase the heat emission. When you put your chip on LN2, you gain a whole lot of heat dissipation capability, which means you can increase the voltage of the chip a whole lot, which in turn means you can achieve really high clock frequencies without error (for long enough to complete a benchmark, for example).
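To put rough numbers on that, here's a quick Python sketch. Everything in it is illustrative: the path delay, the alpha-power-law constants, and the threshold voltage are invented for the example, not real Pascal figures.

```python
# Illustrative sketch of the timing argument above. All delay numbers
# are made up for illustration; they are not real GPU figures.

def max_stable_frequency_hz(critical_path_delay_s):
    """The clock period can't be shorter than the worst-case path delay."""
    return 1.0 / critical_path_delay_s

# Suppose the slowest logic path settles in ~476 ps at stock voltage:
delay = 476e-12
print(max_stable_frequency_hz(delay) / 1e6)   # ~2100 MHz

# Clocking faster means latching signals before they have settled:
period_at_2200mhz = 1.0 / 2200e6
print(period_at_2200mhz < delay)              # True -> timing violation

# Raising voltage shrinks gate delay (alpha-power-law model, with
# hypothetical constants), which is why overvolting buys clock headroom:
def gate_delay(vdd, vth=0.3, alpha=1.3, k=1.0):
    return k * vdd / (vdd - vth) ** alpha

print(gate_delay(1.0) > gate_delay(1.1))      # True: more volts, less delay
```

So a chip that settles in ~476 ps tops out near 2100 MHz, and pushing the clock past that means the flip-flops capture signals mid-flight, which is exactly the crash/lockup behavior described above.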
 

BoomSlangx

Member
Feb 16, 2018
IllogicalGlory said:
It has to do with the propagation delay of the electrical signals in the chip. The longer the clock period (the lower the frequency), the more time the signals have to propagate through the chip's logic networks between clock switches. If the clock switches before all the circuit's signals have settled, some data could be lost, which results in an error.

To mitigate this, one can increase the voltage of their chip, which reduces propagation delay. Of course, voltage increases also massively increase the heat emission. When you put your chip on LN2, you gain a whole lot of heat dissipation capability, which means you can increase the voltage of the chip a whole lot, which in turn means you can achieve really high clock frequencies without error (for long enough to complete a benchmark, for example).
That's a fantastically well-informed reply; it answered exactly what I asked.
My only follow-up question: how come we can't get any of these cores over 2102 MHz even when dropping their temps by 30°C on LN2? Is it just a physical limit? I mean, it wouldn't be anything as sinister as "we'll make it overclockable, but we'll add in a limit at each stepping, because if we don't, this core could be overclocked indefinitely depending on how cool it can be kept, meaning they may never have to upgrade"... if you get me.
 

Red Squirrel

No Lifer
May 24, 2003
From my understanding, the traces on a PCB, and even plain wires, act as capacitors and inductors. No matter what, you will always have some capacitance and inductance in any circuit. Inductance "delays" current and capacitance delays voltage. So imagine you have a signal that is 0 V for a 0 and 5 V for a 1, and you are flipping it on and off very fast to create a stream of bits. When the 5 V is applied, it has to charge the "capacitor" created by the traces, and the inductance affects it similarly, so there is a delay before the whole trace can reach 5 V. The same happens when you apply 0 V: there will still be some charge left in the parasitic capacitor-inductor network.

The capacitance and inductance are tiny, but they're there, and at GHz frequencies they become rather significant. The chips and PCBs are laid out to be optimized for a certain frequency; going past that gives less predictable results.

At least that's my understanding anyway. I could be off.
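That RC picture can be sketched numerically. The R and C values below are invented round numbers for illustration, not measurements of any real board, and the 70% logic-high threshold is an assumption:

```python
import math

# Sketch of the RC charging described above: a trace with parasitic
# resistance R and capacitance C charges toward 5 V, and the receiver
# only registers a "1" once the voltage crosses a logic-high threshold.
# R, C, and the threshold are illustrative assumptions.

R = 50.0        # ohms, parasitic resistance
C = 2e-12       # farads, parasitic capacitance
V_SUPPLY = 5.0
V_THRESHOLD = 0.7 * V_SUPPLY   # assumed logic-high threshold

def time_to_threshold(r, c, v_th, v_supply):
    """Solve v_supply * (1 - exp(-t / (r*c))) = v_th for t."""
    return -r * c * math.log(1.0 - v_th / v_supply)

t = time_to_threshold(R, C, V_THRESHOLD, V_SUPPLY)
print(t)              # ~1.2e-10 s before the receiver sees a "1"

# The trace can't toggle meaningfully faster than about one transition
# per t seconds, so the parasitics cap the usable signalling rate:
print(1.0 / t / 1e9)  # rough ceiling, in GHz
```

With these made-up parasitics the trace needs roughly 120 ps to charge past the threshold, which is why pushing the clock past the frequency a layout was designed for starts producing garbage bits rather than just more heat.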
 
