Why are GPUs so hot and energy hungry?

dawks

Diamond Member
Simple question, just wondering why they're such power hogs compared to modern CPUs. Is it just the nature of the design and the type of work they do, or are the graphics chip companies more concerned with performance than efficiency? Could the chips be designed better without an extravagant amount of work?
 
Didn't we go over this recently?

I think TecHNooB is right to some degree about transistor count and density. Also, a graphics card is not just the GPU drawing power and creating heat: the board also carries VRMs and memory, so it's more comparable to a CPU plus motherboard and RAM.
 
It's a small PCB that does the work of a full motherboard with CPU, memory, controllers, MOSFETs, etc.
 
It is doing a hell of a lot of work compared to a CPU as well. For what it does, it is quite a bit more efficient than a CPU for the same workload. In HPC applications, CPUs can't even sniff the performance/watt of Tesla.
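
For a rough sense of that gap, here's a back-of-envelope sketch. The figures are approximate peak double-precision specs for a Fermi-era Tesla C2050 and a six-core Xeon X5670 (my own assumption, not from this thread), and peak numbers flatter both sides since real workloads never reach them:

```python
# Back-of-envelope performance-per-watt comparison.
# Approximate peak double-precision specs, assumed for illustration;
# real HPC workloads land well below peak on both chips.
systems = {
    "Tesla C2050 (GPU)":   (515.0, 238.0),  # peak DP GFLOPS, TDP in watts
    "Xeon X5670 (6C CPU)": (70.3, 95.0),
}

for name, (gflops, watts) in systems.items():
    print(f"{name}: {gflops / watts:.2f} GFLOPS per watt")
# Even on paper specs, the GPU comes out roughly 3x ahead.
```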
 
Higher transistor count, bigger die. For example, the GTX 580 has about 3 billion transistors and a 520mm^2 die on TSMC's 40nm process. AMD's Thuban CPU, by comparison, has only 904 million transistors and a 346mm^2 die on 45nm GloFo SOI. And that's one of the larger CPU dies out there; quad cores and dual cores are even smaller, and Intel's hex-core is also smaller because it's built on a 32nm process.
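
A quick sketch with the numbers above shows the GPU isn't just a bigger die, it's also packed denser (partly the 40nm vs 45nm process):

```python
# Transistor density computed from the figures quoted above.
chips = {
    "GTX 580 (40nm TSMC)":     (3_000_000_000, 520.0),  # transistors, die area mm^2
    "Thuban (45nm GloFo SOI)": (904_000_000, 346.0),
}

for name, (transistors, area_mm2) in chips.items():
    print(f"{name}: {transistors / area_mm2 / 1e6:.1f}M transistors per mm^2")
# ~5.8M/mm^2 vs ~2.6M/mm^2: a bigger die AND over twice the density.
```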

Plus, video cards also use very high-speed memory, which can be pretty power hungry. Memory on a top-of-the-line video card has about 15x the bandwidth of system memory.
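
To see where a multiple like that comes from, here's a sketch assuming a GTX 580 (384-bit GDDR5 at 4008 MT/s effective) against DDR3-1333 system memory; the exact ratio depends on whether you compare single- or dual-channel, and the ~15x figure falls between the two:

```python
# Peak memory bandwidth = transfer rate (MT/s) * bus width (bits) / 8 bits per byte.
def bandwidth_gb_s(mega_transfers, bus_bits):
    return mega_transfers * bus_bits / 8 / 1000  # MB/s -> GB/s

gpu = bandwidth_gb_s(4008, 384)   # GTX 580: GDDR5 effective rate, 384-bit bus
ddr3 = bandwidth_gb_s(1333, 128)  # dual-channel DDR3-1333 (2 x 64-bit)

print(f"GTX 580: {gpu:.1f} GB/s")     # ~192 GB/s
print(f"DDR3:    {ddr3:.1f} GB/s")    # ~21 GB/s
print(f"ratio:   {gpu / ddr3:.1f}x")  # ~9x dual-channel, ~18x single-channel
```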

And something interesting to keep in mind is that GPUs also perform far, far better than CPUs in some applications. So yes, they can be pretty power hungry, but they also have a mind-blowing amount of computational potential. What you really need to look at is performance per watt, and for a lot of applications, GPUs will wipe the floor with CPUs on this metric. For example, check out this link:

http://fastra2.ua.ac.be/

As you might have guessed, the FASTRA II is a power-hungry beast. The 2850 watts' worth of power supplies might seem a little overkill, though. However, we didn't measure the actual current drawn from the individual rails, so we have no idea how wide our margin actually is.

The FASTRAs might be power-hungry machines, but their power consumption is nothing compared to that of a cluster system. Our local Opteron cluster uses approximately 90 kilowatts under full load. The newest incarnation of our desktop supercomputer consumes only 1/75th of the power of the Opteron cluster.

To put these numbers into perspective, we have to take actual performance into account as well. We therefore computed an energy-efficiency measure that tells us how many slices we can reconstruct with a given amount of energy (i.e. 1 watt-hour = 3600 joules).

[Image: graph_efficiency.png, energy efficiency (slices reconstructed per watt-hour) for each system]
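
As a sketch of that metric (the actual slice throughputs only appear in the graph, so the throughput used below is a hypothetical placeholder):

```python
# Energy-efficiency measure from the quote: slices reconstructed per
# watt-hour, where 1 Wh = 3600 J.
def slices_per_wh(slices, watts, seconds):
    energy_wh = watts * seconds / 3600.0  # W * s -> Wh
    return slices / energy_wh

# Power arithmetic from the quoted text:
cluster_w = 90_000.0          # Opteron cluster at full load
fastra2_w = cluster_w / 75.0  # "1/75th of the power" -> 1200 W
print(f"FASTRA II draw: ~{fastra2_w:.0f} W")

# Hypothetical throughput (placeholder!) just to show how the metric reads:
print(f"{slices_per_wh(100, fastra2_w, 3600):.3f} slices/Wh at 100 slices/hour")
```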
 
Wow, they have a FASTRA II now. Time for me to upgrade my quad-GPU F@H rig. (When Sparkle releases their single-slot GTS 450 cards, probably.)
 
So the question becomes... why doesn't Nvidia make CPUs?
 
CPUs and GPUs aren't the same thing. See how Intel failed at making a GPU. Also, nV doesn't have an x86 license. Someone else could explain it better.

Basically, without a license Nvidia can't sell x86 CPUs, and if they were to try, Intel/AMD could sue them for damages and get an injunction preventing Nvidia from selling any more x86 CPUs.
 
Energy hungry? I always equate energy to liquid, so I would have said "energy thirsty."

Somehow I have contributed to this discussion.
 