Energy Efficiency of H264 GPU decoding

tokie

Golden Member
Jun 1, 2006
So by now basically every video card / integrated chipset being sold supports GPU H.264 acceleration.

What I'm wondering is just how power-efficient it is. For example, on my old laptop I can get 4 hours of battery life watching DivX movies. If one were to use GPU acceleration to watch H.264-encoded HD movies in MPC-HC, does this still worsen battery life noticeably? Obviously the GPU is a more efficient place than the CPU to perform this operation, but by how much in terms of power or battery life? I've never really seen this covered anywhere, since everything seems to be geared towards HTPCs/desktops where battery life doesn't matter.
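
To put some numbers on what I mean, here's the rough back-of-the-envelope I have in mind. The 48 Wh battery and the extra-wattage figures are just made-up examples, not measurements:

```python
# Runtime is roughly battery capacity (Wh) divided by average platform draw (W).
# The 48 Wh pack and the extra-draw values are hypothetical, for illustration only.
BATTERY_WH = 48.0

divx_draw_w = BATTERY_WH / 4.0  # ~12 W implied by my 4-hour DivX runtime

for extra_w in (1, 5, 10):      # hypothetical extra draw while the decode block is active
    hours = BATTERY_WH / (divx_draw_w + extra_w)
    print(f"+{extra_w:2d} W -> {hours:.1f} h of battery")
```

Even a couple of extra watts would visibly cut into those 4 hours on a low-power platform, which is why I'm asking.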
 

Fox5

Diamond Member
Jan 31, 2005
I'd guess the GPU is more efficient, but a slow CPU may still carry a heavy load during playback. In that case it might actually be less power-efficient, since you'd have a CPU near max load plus a powered-up GPU.
 

taltamir

Lifer
Mar 21, 2004
AFAIK this uses specialized hardware, not the GPU's general compute shaders. As a result, a high-end and a low-end GPU will have the exact same decode performance. The reason the CPU still has a load is that some things (like the DRM decryption) cannot be done by the GPU, so they are done on the CPU and then passed on to the GPU. The energy efficiency is better doing it on the GPU, but with some FILES (e.g. a Blu-ray rip to an MKV) that don't exactly match what is expected on a Blu-ray disc, I have encountered stutter, on both low-end and high-end cards, and on both NVIDIA and AMD. So I disable GPU acceleration and use CPU only (I have a strong enough one) when playing x264 FILES.

But if you are watching Blu-ray discs, there is no reason in the world to disable it. Oh, and just make sure the whole chain is HDCP-compliant.
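
If you want to see what the decode block is actually buying you on one of those rips, a quick sketch like this compares software vs. hardware-assisted decode time. This assumes an ffmpeg build with DXVA2 support on the PATH; "rip.mkv" is just a placeholder name:

```python
# Compare wall-clock time for a pure-CPU decode vs. a DXVA2-assisted decode of the same file.
# Assumes an ffmpeg build with DXVA2 support is on the PATH; "rip.mkv" is a placeholder.
import subprocess
import time

def decode_seconds(extra_args):
    start = time.time()
    subprocess.run(
        ["ffmpeg", "-v", "error", *extra_args, "-i", "rip.mkv", "-f", "null", "-"],
        check=True,
    )
    return time.time() - start

sw = decode_seconds([])                     # pure software decode on the CPU
hw = decode_seconds(["-hwaccel", "dxva2"])  # hand the bitstream to the GPU's decode block
print(f"software: {sw:.1f}s  dxva2: {hw:.1f}s")
```

It's only a crude wall-clock comparison; watch Task Manager during each run if you want to see the CPU-load difference as well.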
 

tokie

Golden Member
Jun 1, 2006
I know it uses specialized hardware and not general shaders, but I'm wondering more specifically about the additional power usage on a laptop.

Does this specialized PureVideo/Avivo block draw 1 watt? 20 watts? How significantly does it affect battery life?
 

yh125d

Diamond Member
Dec 23, 2006
Yeah, it can utilize the video card's shaders via the shaders/filters in the player, like denoise, sharpen, sharpen complex, deinterlace, etc. The general decode acceleration, though, is handled by the dedicated decode block, I think. IGPs like the 780G, GeForce 9300 and others have the power for the base acceleration part, but if you're going to use the extra shaders it's recommended to get a faster, dedicated card.
 

Fox5

Diamond Member
Jan 31, 2005
Originally posted by: taltamir
AFAIK this uses specialized hardware, not the GPU's general compute shaders. As a result, a high-end and a low-end GPU will have the exact same decode performance. The reason the CPU still has a load is that some things (like the DRM decryption) cannot be done by the GPU, so they are done on the CPU and then passed on to the GPU. The energy efficiency is better doing it on the GPU, but with some FILES (e.g. a Blu-ray rip to an MKV) that don't exactly match what is expected on a Blu-ray disc, I have encountered stutter, on both low-end and high-end cards, and on both NVIDIA and AMD. So I disable GPU acceleration and use CPU only (I have a strong enough one) when playing x264 FILES.

But if you are watching Blu-ray discs, there is no reason in the world to disable it. Oh, and just make sure the whole chain is HDCP-compliant.

From what I've read, high-end cards handle decoding better than integrated graphics. It could just be a generational-gap thing, but it would seem that more shaders do allow for better post-processing of the video (not straight-up decoding, but artifact elimination, sharpening, etc.).
 

lopri

Elite Member
Jul 27, 2002
IGPs suffer from the lack of a dedicated frame buffer as well. But for all intents and purposes, I suspect it's the drivers rather than the hardware. I tried to get hardware acceleration working on a 780G, and it works with some clips and not with others. When it works, it works just as well as any discrete GPU, which to me proves the hardware is capable.

This topic is an interesting one, indeed. For me, GPU acceleration is more about freeing up the CPU so that I can do something else at the same time, so I haven't really thought about the actual power-usage difference between CPU and GPU.
 

lopri

Elite Member
Jul 27, 2002
I haven't gotten a chance to test it out, but here is some related data. According to GPU-Z:

GTX 280 at idle (300/100): VDDC current 7.3 A / VDDC 1.0375 V
GTX 280 while semi-offloading VC-1 playback (400/300): VDDC current 12.0 A / VDDC 1.0625 V
GTX 280 while 100% offloading H.264 playback (650/1200): VDDC current 22.0 A / VDDC 1.1875 V

The values are approximations; the current obviously fluctuates as scenes change. I need to find my Kill A Watt to get an idea of whether, and how much, power is saved compared to software playback. It is an interesting subject indeed. We know that GPU-assisted HD playback frees up the CPU, often enabling weak CPUs to handle otherwise difficult movies, but I don't know whether there is a power saving.
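
If the GPU-Z readings are in the right ballpark, the core-rail power is roughly VDDC voltage times VDDC current. A quick sketch of that arithmetic (this is only the GPU core rail, not memory, fan or board losses):

```python
# Rough GPU core power from the GPU-Z readings above: P ~= VDDC voltage * VDDC current.
# Core rail only; memory, fan and board losses are not included.
readings = {
    "idle (300/100)":                (1.0375, 7.3),
    "VC-1 semi-offload (400/300)":   (1.0625, 12.0),
    "H.264 full offload (650/1200)": (1.1875, 22.0),
}

for state, (volts, amps) in readings.items():
    print(f"{state:32s} ~{volts * amps:5.1f} W on the core rail")
```

So, if GPU-Z is to be believed, full H.264 offload takes the core rail from roughly 8 W to roughly 26 W, which still says nothing about what the CPU saves in the meantime.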
 

tokie

Golden Member
Jun 1, 2006
Interesting how the GPU clocks had to increase to decode the H.264/VC-1 streams. I was under the impression that this wouldn't be necessary, but I guess not. I would check it out myself, but unfortunately I don't have a Kill A Watt.

I dug around a bit more and remembered Anand's ION review, which shows that an Atom platform at 100% CPU usage drew 28.2 W, while decoding an H.264 video on the same platform with GPU offloading drew 28 W, which suggests only a ~0.2 W advantage for GPU decoding. I don't think this is really a fair comparison, however, because the Atom can't even decode a 720p H.264/VC-1 video without GPU offloading, so it doesn't show the full picture. A beefier CPU that could fully decode the video in software would be needed for a proper comparison, I think.
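
Just to show how little a 0.2 W platform-level difference matters for runtime, here's a quick sketch. The 56 Wh battery is a made-up example; the wattages are the ones from the review:

```python
# Translate the ~0.2 W platform-level difference into battery runtime.
# The 56 Wh pack is a hypothetical example; the wattages are from Anand's ION review.
BATTERY_WH = 56.0

scenarios = {
    "CPU pegged at 100%":          28.2,
    "GPU-offloaded H.264 decode":  28.0,
}

for label, watts in scenarios.items():
    print(f"{label}: {BATTERY_WH / watts:.2f} h")
```

That works out to about a minute of runtime either way, which is why I think you need a CPU that can actually do the decode in software to see the real difference.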
 

TheRyuu

Diamond Member
Dec 3, 2005
Originally posted by: tokie
Obviously the GPU is a more efficient place than the CPU to perform this operation

Haven't posted in a LONG TIME. weee, here we go...

I doubt this. Video encoding/decoding is mostly integer math.

That being said, I suppose decoding through the driver with a separate hardware decoder is a different story (and possibly more efficient). However, you only get support up to H.264 level 4.1 (which covers Blu-ray, etc.).
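
If you're not sure whether a given file sits inside that level 4.1 envelope, something along these lines will tell you. It assumes ffprobe is installed; "sample.mkv" is just a placeholder name:

```python
# Read an H.264 stream's profile/level with ffprobe; hardware decoders of this era
# typically top out at level 4.1. Assumes ffprobe is on the PATH; "sample.mkv" is a placeholder.
import json
import subprocess

out = subprocess.run(
    ["ffprobe", "-v", "quiet", "-select_streams", "v:0",
     "-show_entries", "stream=codec_name,profile,level",
     "-of", "json", "sample.mkv"],
    capture_output=True, text=True, check=True,
).stdout

stream = json.loads(out)["streams"][0]
level = stream["level"] / 10.0   # ffprobe reports H.264 level 4.1 as 41
print(f"{stream['codec_name']} {stream['profile']} @ level {level:.1f}")
print("within the hardware decoder's limits" if level <= 4.1
      else "above level 4.1, likely falls back to the CPU")
```

Files outside that envelope are probably the same ones taltamir sees stuttering, since they tend to fall back to software or misbehave.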

CUDA decoding through CoreAVC also uses the GPU, but that's probably off topic here.

Just my $.02