Good point. At least in my mind, the gains from overclocking a high-end CPU are far better than those from overclocking a high-end GPU. A 3.3GHz SNB can regularly hit 4.5GHz with a bit more voltage and better cooling. Even after factoring out Turbo, that's a huge performance increase. And because the TDP was only 95W in the first place, power consumption hasn't necessarily gone through the roof, and it's still practical to move that much heat quietly.
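To put rough numbers on that (assuming a max single-core Turbo of around 3.7GHz, as on the i5-2500K; your exact chip may differ):

\[
\frac{4.5}{3.3} \approx 1.36 \;\text{(vs. base clock)}, \qquad \frac{4.5}{3.7} \approx 1.22 \;\text{(vs. Turbo)}
\]

So even measured against Turbo you're looking at a 20%+ clock increase, all within an envelope a good air cooler can handle quietly.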
Compare that to high-end GPUs, which regularly have TDPs in the 200-300W range. GPUs (at least those that aren't 2nd-tier parts like the 670/7950) don't overclock nearly as well as a CPU, and if you overvolt it's easy to add a lot of power consumption in the process. The gains aren't nearly as great for the effort involved. I'll chase more performance, but the heat generated means I do have a limit.
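The reason overvolting hurts so much is the standard dynamic power relationship (a first-order approximation; real chips add leakage on top):

\[
P_{\text{dyn}} \approx C \cdot V^2 \cdot f
\]

Power scales with the square of voltage and only linearly with frequency, so a 10% voltage bump plus a 20% clock bump works out to roughly \(1.1^2 \times 1.2 \approx 1.45\), i.e. ~45% more dynamic power for at best a 20% performance gain.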
Indeed. I'm only paying something like $0.07/kWh, which is incredibly cheap. So at least here the problem with power consumption isn't the cost of the electricity (or even of running the A/C), but rather the overall heat and acoustics. Even though I have central A/C, I still have to run a secondary A/C in my office on hot days because of my equipment. Portable A/Cs are neither quiet nor cheap, so while the purchase price is a sunk cost, if I can keep heat generation down and avoid listening to the A/C I'm all for it.
It's the difference between graphics and serial workloads. Rendering is embarrassingly parallel - add more functional units (shaders, ROPs, etc.) and performance increases in a fairly linear fashion. So every node shrink should bring with it something like a 50% performance improvement at the same die size/power.
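As a toy illustration of that linear scaling (not a real renderer; the throughput figure and names are made up for the example):

```python
# Toy model of an embarrassingly parallel workload: pixels are independent,
# so frame time drops ~linearly as functional units are added.
# pixels_per_unit_per_ms is an invented throughput figure, not a real spec.

def frame_time_ms(pixels: int, units: int, pixels_per_unit_per_ms: float) -> float:
    """Time to render one frame when work divides evenly with no coordination cost."""
    per_unit_work = pixels / units
    return per_unit_work / pixels_per_unit_per_ms

for units in (512, 1024, 2048):  # e.g. shader counts doubling with node shrinks
    print(f"{units} units -> {frame_time_ms(1920 * 1080, units, 50.0):.1f} ms/frame")
```

Each doubling of units halves the frame time, which is why GPUs can keep converting transistor-density gains from new nodes straight into performance.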
Serial workloads, by comparison, can't be distributed across additional functional units like that. The only solutions are higher IPC and higher clockspeeds, and we've reached a point where both are difficult to increase significantly right now.
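That's basically Amdahl's law taken to the extreme: with parallel fraction \(p\) and \(n\) units,

\[
S(n) = \frac{1}{(1 - p) + \frac{p}{n}}
\]

and for a fully serial workload \(p \approx 0\), so \(S(n) \approx 1\) no matter how many units you throw at it. That leaves per-core performance, i.e. IPC \(\times\) clock, as the only lever.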