Processors and Microchips generate heat.

The Sly Syl

Senior member
Jun 3, 2005
277
0
0
It's horribly obvious and well known these days that just about anything with a chip in it is going to generate heat: almost all parts of the motherboard, the graphics card, etc.

Why is this? I know they have energy going through them, but what causes more heat output when you overclock, and why are some things hotter than others? (Example: why does a processor require a massive heatsink while the processor in a cellphone or PSP is completely passive?)

What is the physical reason for chips to generate so much heat? Why does overclocking them generate more? (How come Prescotts generate so much more heat than Winchesters while having less power?)

Also, is it true that a higher temperature is actually bad for them?
 

MobiusPizza

Platinum Member
Apr 23, 2004
2,001
0
0
The resistance of the circuit is the reason heat is generated. It's like a light bulb or a heater: they have high resistance, so they glow red hot.
Overclocking means a higher working frequency, which means more operations per second. In each operation, current (electrons) goes through gates and metal interconnects, and this in turn means more heat generated per second.

Also, most overclocking requires stepping up the voltage. Higher voltage means higher current, and higher current means more heat generated.
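To put rough numbers on that, here's a toy sketch treating the chip as a simple resistive load (the resistance value is made up for illustration; real CMOS power behaves more like the C*V^2*F equation discussed further down the thread):

```python
# Toy resistive model: I = V / R, so P = I * V = V^2 / R.
# The resistance value is invented purely for illustration.
R = 10.0  # ohms (hypothetical effective load)

for v in (1.0, 1.1, 1.2):
    i = v / R  # Ohm's law: current rises with voltage
    p = v * i  # dissipated power: P = V^2 / R
    print(f"V={v:.1f} V -> I={i:.3f} A, P={p:.3f} W")

# A 10% voltage bump raises power by ~21% here, since P scales
# with the square of the voltage in this model.
```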

Cellphone and PSP processors use little power and low voltage, so the current is smaller. Their complexity is also limited: fewer transistors inside them means less heat generated.

Prescott's hot because of its inefficient design. Its high transistor count and long pipeline structure (built to allow higher working frequencies) mean much more heat.

High temperature can destroy or wear down the semiconductor.
 

AluminumStudios

Senior member
Sep 7, 2001
628
0
0
Basically it's electricity. Think of the MHz or GHz of a device as how many waves of electrons per second are sent through it. If each wave of electrons heats it up by a certain amount (from the electrons racing through and banging against the molecules of the processor), then more waves/sec (GHz) = more heat.

In order to reduce the resistance and current leakage (two physical factors that cause problems when you try to increase a chip's GHz) CPU manufacturers are shrinking the transistor sizes. While that solves some physical issues it creates another - the transistors are so thin that they heat up fast and can't dissipate the heat that well. Imagine using a very thin extension cord on an electric oven or big air conditioner - that cord would heat up (and possibly melt and cause a fire in the house).

Processors run at 2-3.8 GHz (that's 2-3.8 billion pulses of electricity being sent through their millions of transistors per second). That's a LOT more heat generation than in the much smaller, much slower chips in cell phones or the PSP. Portable devices also sacrifice performance in order to run at lower voltages, which means the electrons aren't as energetic and aren't heating the chips up as much; they give up the performance that higher speeds would bring (there's always a trade-off).

Chips are rated to run up to a certain temperature. Beyond that they can have errors or burn out (if the barrier between transistors gets too hot and a few electrons are allowed to short between them, more will follow as a result, and they will damage the chip by creating new, unwanted paths for other electrons to follow).

I don't know what current AMD and Intel chips are rated at, but I remember some Athlon XPs being rated as able to operate up to 85 or 90 degrees. It's generally not good to push a chip that far, though.

 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
In order to reduce the resistance and current leakage (two physical factors that cause problems when you try to increase a chip's GHz) CPU manufacturers are shrinking the transistor sizes.
Actually, smaller transistors (as in, 65nm vs 90nm vs 130nm) leak more for a given width (though hopefully when you use smaller processes, you can shrink device widths).

Overclocking means a higher working frequency, which means more operations per second. In each operation, current (electrons) goes through gates and metal interconnects, and this in turn means more heat generated per second.
To elaborate on that... ideal CMOS circuits (the circuit style used for most of a CPU) only dissipate power when they're switching. On each transistor's input (the gate) there is effectively a capacitor. In order to switch the transistor from on to off (or back on), whatever is driving the input of this transistor needs to either charge or discharge that capacitor. This is one source of power consumption. Power from switching capacitances like this follows the equation P=C*V^2*F: capacitance switched, times voltage squared, times frequency. As you can see, this depends directly on the clock frequency, so when you overclock by 10%, the frequency component goes up by 10%. Of course, to get the chip to work, you might have to raise the voltage by 10%, which adds another factor of 1.1^2 = 1.21... so the total power is 1.21*1.1 = 1.331, or about 33% more power than when it's not overclocked.
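A quick sanity check of that arithmetic (a minimal sketch; the capacitance is normalized to 1 since only the ratio between stock and overclocked matters):

```python
# Dynamic CMOS power: P = C * V^2 * F.
# C cancels when comparing stock vs. overclocked, so it's normalized to 1.
def dynamic_power(c, v, f):
    return c * v**2 * f

stock = dynamic_power(1.0, 1.0, 1.0)        # normalized baseline
overclocked = dynamic_power(1.0, 1.1, 1.1)  # +10% voltage, +10% frequency

print(f"relative power: {overclocked / stock:.3f}")  # 1.331, i.e. ~33% more
```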

So, why do you have to run at a higher voltage when overclocking? Well, a certain number of gates need to switch within the cycle time of the processor. You could model the gates as resistors and capacitors. Recall that V=IR, or alternately, I=V/R. The current a transistor can drive is related to the voltage over its resistance. The capacitances the transistor drives (from other gates' inputs) need to be switched within a certain amount of time, and if you overclock you leave less time for this to happen. To get the capacitances all charged in time, you need to increase the current, and you do this by increasing the voltage. (Note that V=IR is a really poor model for a transistor, but it conveys the point.)
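To see how extra voltage buys back switching time, here's a sketch using the alpha-power law delay model (a standard textbook approximation, not something from this thread; the threshold voltage and exponent below are assumed values):

```python
# Alpha-power law sketch: gate delay ~ C * V / (V - Vt)^alpha.
# Vt and alpha are assumed, textbook-ish values, not measured ones.
VT = 0.4     # threshold voltage, volts (assumed)
ALPHA = 1.5  # velocity-saturation exponent (assumed, between 1 and 2)

def relative_delay(v):
    return v / (v - VT) ** ALPHA  # C is constant, so it drops out

d_stock = relative_delay(1.4)   # stock voltage
d_boost = relative_delay(1.54)  # +10% voltage

print(f"delay change: {d_boost / d_stock:.3f}x")
# ~0.90x: gates switch ~10% faster, roughly the headroom a 10% overclock needs.
```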

Now, if you have an inverter (the simplest gate... it's just the easiest to describe, but this all applies to other gates), there are two transistors - a pmos device which can connect the output to the high voltage source (vdd), and an nmos device which can connect the output to the low voltage source (ground). Because there is some capacitance at the gates of the two transistors, their shared input can't be switched instantaneously; rather, it swings over a few picoseconds from high to low (or low to high). While the input voltage is not all the way at high or low, both the nmos and the pmos drive some current, so there is a direct path from vdd to ground. This "short-circuit" current (also called crowbar current) doesn't itself depend on frequency, but its energy is dissipated every time a gate switches, and how often gates switch does depend on frequency.
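A toy model of that overlap window (the supply and threshold voltages below are assumptions picked for illustration): both devices conduct while the input sits between the nmos threshold and vdd minus the pmos threshold, so the slower the input swings, the longer the short-circuit path exists.

```python
# Toy crowbar-current model for an inverter with a linearly ramping input.
# Both transistors conduct while VTN < Vin < VDD - VTP_ABS.
# All voltages are assumed values, for illustration only.
VDD = 1.4      # supply voltage (assumed)
VTN = 0.4      # nmos threshold (assumed)
VTP_ABS = 0.4  # pmos threshold magnitude (assumed)

overlap = (VDD - VTN - VTP_ABS) / VDD  # fraction of the swing with both on

for ramp_ps in (5, 10, 20):  # input rise time in picoseconds
    print(f"{ramp_ps} ps ramp -> both devices on for ~{overlap * ramp_ps:.1f} ps")

# Slower edges widen the window, so more short-circuit energy per transition;
# how often transitions happen is what scales with frequency.
```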

A huge part of the power dissipated in modern processors comes from the chip's clock... about 1/3rd of the power actually results from just switching the clock each cycle. The reason the clock requires so much power is that it's a signal that goes pretty much everywhere across the chip (so there are long wires, which have high capacitance), and it has to switch twice every cycle. You can generate fast clocks that require less power, but the tradeoffs are beyond the scope of this explanation ;).
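For a sense of scale, a back-of-the-envelope estimate (every number below is assumed for illustration; the post itself only claims the roughly one-third figure): the clock makes one full charge/discharge per cycle, so P = C*V^2*F applies with an activity factor of 1.

```python
# Back-of-envelope clock-tree power: P = C * V^2 * F (activity factor 1,
# since the clock charges and discharges fully every cycle).
# All three numbers are assumed for illustration, not measured values.
C_CLOCK = 5e-9  # total clock-tree capacitance, farads (assumed)
VDD = 1.4       # supply voltage, volts (assumed)
FREQ = 3e9      # clock frequency, hertz

print(f"clock power ~ {C_CLOCK * VDD**2 * FREQ:.1f} W")
# ~29 W, i.e. about a third of a ~90 W chip, consistent with the 1/3 claim.
```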

Non-ideal (i.e. real) transistors leak, as mentioned above. This adds a static component to power, meaning one that doesn't depend on frequency.

Coming back to your question about why modern processors use so much more power than everything else... there are lots of factors at work. For one thing, your desktop PC is a LOT more complicated than the chips in a cell phone - in a single clock cycle, your desktop can do a huge amount of work, while the chip in your phone might be able to process one instruction (or even take multiple cycles for each instruction). Another thing is that the chip in your desktop operates at a very high frequency, whereas a cell phone chip probably runs much slower. A Pentium 4 might run at 3GHz, with the ability to do a peak of something like 2 integer instructions and 2 floating point each cycle (I forget the exact number) while the cell phone might run at 100MHz and take 3 cycles to finish a single instruction. Additionally, there are different types of transistors - there are transistors that can switch very very fast, but they leak a lot, and transistors that switch slowly but don't leak much. If you need high-performance computing (like a desktop CPU), you're going to use the fast transistors, at the expense of power. However, if you're designing a cell phone, you'd use the slow transistors to improve battery life.
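Using the ballpark figures above (a rough sketch, not real benchmark numbers), the raw throughput gap works out like this:

```python
# Rough throughput comparison using the ballpark figures from the post.
desktop_ops = 3e9 * 4  # 3 GHz, ~4 instructions/cycle peak (2 int + 2 fp)
phone_ops = 100e6 / 3  # 100 MHz, 3 cycles per instruction

print(f"desktop: {desktop_ops:.2e} instructions/s")
print(f"phone:   {phone_ops:.2e} instructions/s")
print(f"ratio:   ~{desktop_ops / phone_ops:.0f}x")  # ~360x on these numbers
```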

If you look back at the overclocking discussion, you can see there was a 33% power increase for a 10% frequency gain. This also works the other way: if you underclock, you can save a LOT of power. If your 100W @ 1.4V, 3GHz P4 is run at 1.5GHz, and you decrease the voltage to 1V, your power is going to be about 100W * 0.5 * 0.51 = 25 watts (ignoring leakage). Since your cell phone can be slow, they can run it at a low frequency and voltage to save power.
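Checking that arithmetic (a minimal sketch; the wattage, voltages, and frequencies are the ones quoted in the post):

```python
# Verify the underclocking estimate: P scales with V^2 * F (leakage ignored).
P_STOCK = 100.0  # watts at 1.4 V and 3 GHz (figures quoted in the post)

freq_scale = 1.5e9 / 3e9       # 3 GHz -> 1.5 GHz
volt_scale = (1.0 / 1.4) ** 2  # 1.4 V -> 1.0 V, squared

print(f"{P_STOCK * freq_scale * volt_scale:.1f} W")  # ~25.5 W
```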

(How come Prescotts generate so much more heat than Winchesters while having less power?)
This question doesn't really make sense. The temperature should be related to the power, so if you have a lower power chip at a higher temperature, there's something different in the cooling.