Originally posted by: mgambrell
There is a certain theoretical upper limit to the amount of computing you can get done per unit of energy or mass (look up Bremermann's limit). But I don't think the computing counts as work at all. I am no expert, but I think all the energy turns to heat eventually no matter what.
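To put a rough number on it: Bremermann's limit works out to about c²/h bits per second per kilogram of matter. A quick back-of-the-envelope sketch in Python (just the standard physical constants, nothing specific to any real machine):

# Bremermann's limit: roughly c^2 / h bits per second per kilogram
c = 2.998e8        # speed of light, m/s
h = 6.626e-34      # Planck's constant, J*s
limit = c**2 / h   # ~1.36e50 bits/s per kg
print(f"Bremermann's limit: {limit:.2e} bits per second per kilogram")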
But that's not as strange as it sounds. A fire burns. Your body metabolises. Each just turns chemicals into heat, but one way creates a system of ongoing, marvelous complexity and the other does not. It's the same with different processors: each creates a different level of complexity from the same amount of heat. A 3 GHz chip using 100W produces roughly twice as much complexity as a 1.5 GHz chip using 100W, and each of them produces 100W of heat. A human runs on about 100W and is far more complex than either CPU: he can reason, and he can move himself a mile.
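To make the chip comparison concrete, here's a tiny Python sketch under the same simplifying assumption I'm making above, namely that "complexity" scales with clock cycles (real chips obviously differ in how much work they do per cycle):

# If useful work scaled purely with clock cycles, work-per-joule
# would just be frequency divided by power draw.
def cycles_per_joule(freq_hz, power_w):
    return freq_hz / power_w   # (cycles/s) / (J/s) = cycles/J

fast = cycles_per_joule(3.0e9, 100)   # 3 GHz chip at 100W
slow = cycles_per_joule(1.5e9, 100)   # 1.5 GHz chip at 100W
print(fast / slow)                    # 2.0 -- twice the work for the same heat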
Does anyone judge the efficiency of a car by miles per degree? No! It is miles per gallon. The gallons do not turn into miles; they turn into heat. It is just that, along the way, more useful work was done than in an uncontrolled explosion. In the case of the CPU, we are clever enough to funnel the energy and heat through a series of "tubes" that create useful patterns.
Whoa, that's the best first post I've ever seen. Welcome to AnandTech.
What I'm surprised hasn't been mentioned so far is the process size used to build the processors. This is by far the biggest factor in how much work gets done per watt of energy consumed. Using your reference to fire, here's an example: I build a fire with 5 sticks, and so do you, yet mine produces 50x as much heat. How is that possible? It's very easy: you used twigs with an average diameter of ¼", while I used "sticks" with an average diameter of 6".
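The rough physics behind that is the CMOS dynamic power relation, P ≈ C·V²·f: a smaller process lowers both the switched capacitance and the voltage, and the voltage counts twice. Here's a toy Python sketch; the capacitance and voltage numbers are made up purely for illustration, not real figures for any particular chip:

# Toy model of CMOS dynamic (switching) power: P ~ C * V^2 * f
def dynamic_power(cap_farads, volts, freq_hz):
    return cap_farads * volts**2 * freq_hz

older_node = dynamic_power(1.0e-9, 1.75, 1.53e9)  # invented "180nm-ish" values
newer_node = dynamic_power(0.6e-9, 1.40, 2.00e9)  # invented "130nm-ish" values
print(older_node, newer_node)  # smaller node: higher clock, yet less power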
Here's an example using processors. At one time, I had an Athlon XP 1800+. It was a 1.53 GHz Palomino core, built on a 180nm process, with 256KB of L2 cache. It idled @ 48-49°C with a better-than-stock heatsink; I'm not sure exactly how hot it got under 100% load, but it definitely got quite warm, even though it was only running @ 1.53 GHz. I replaced it with an Athlon XP-M 2600+, a Barton core on a 130nm process with 512KB of L2 cache. Its stock speed was 2.0 GHz; running at 2.5 GHz with a vcore considerably above stock, it idled @ 37°C and stayed below 45°C under 100% load.
Now, skip ahead a year or two: the last processor I owned, which I just replaced, was a dual-core Opteron 170. It has 2x1MB of L2 cache, is built on a 90nm process, and runs at 2.0 GHz per core at stock speed. Overclocked to 2.8 GHz, it idled @ 34°C, with a 100% load temp of ~50°C. It was replaced with a 65nm Q6600, a 2.4 GHz quad-core with 2x4MB of L2 cache. Even with only the stock heatsink, it idles @ 31°C.
Now, the reason I noted each processor's amount of cache is that the cache has to be cooled too, since it's on-die. And the performance gain from each processor upgrade (I had others in between, but just used these as examples) was very dramatic, because each was getting considerably more work done per second/hour/whatever than its predecessor, even with single-threaded apps. With SMP-enabled apps, the dual-core Opteron was getting upwards of 10x as much work done per minute/hour as the Palomino core, and the Q6600 does more than twice as much work per minute/hour as the overclocked Athlon, as long as the software I'm using can take advantage of all 4 cores.
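For what it's worth, a crude way to sanity-check those ratios is cores × clock × a per-clock efficiency factor; the efficiency factors in this Python sketch are pure guesses, just to show that the speedup beyond raw clock and core count has to come from things like better IPC, bigger caches, and faster memory:

# Crude relative-throughput model: cores * clock * per-clock efficiency.
# The per-clock factors are guesses for illustration only.
def relative_throughput(cores, clock_ghz, per_clock_factor):
    return cores * clock_ghz * per_clock_factor

palomino = relative_throughput(1, 1.53, 1.0)   # baseline
opteron  = relative_throughput(2, 2.80, 2.0)   # guessed ~2x per-clock gain
q6600    = relative_throughput(4, 2.40, 2.5)   # guessed ~2.5x per-clock gain
print(opteron / palomino)   # ~7.3x vs. the Palomino
print(q6600 / opteron)      # ~2.1x vs. the Opteron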