Does anyone review video decoding quality any more?

evilspoons

Senior member
Oct 17, 2005
321
0
76
I know AnandTech used to review video decoding quality on various cards, especially as new architectures came out.

I'm currently using the iGPU on a Core i3-2100, feeding a Pioneer receiver and then a Samsung TV, typically playing video through XBMC Helix 14.0 beta 3 (whoops, sorry, it's called "Kodi" now...).

I used to have a half-height Asus Radeon 6570 in the same PC that had (to my eyes) better decoding quality, but the card crapped out on me. For some reason half-height cards are hard to come by where I live, and the prices are jacked up like 50% in CAD vs. US prices (this is not the case with "gaming" cards).

My best options appear to be a half-height AMD R7 250 and the three (!!) variants of the Nvidia GT 730: the GF108 + DDR3 version (yes, Fermi, LOL) that is basically a GT 420 with a new sticker on it, the GK208 + DDR3 version, and the GK208 + GDDR5 version that is naturally the most powerful (at games) but also the most expensive.

Does the GDDR5 version have any decoding advantage? Heck if I can find out... haha. If anyone has any information, I'd love to know.
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
I will start off by saying I am an Nvidia fanboy. Their years of awesome Linux support have basically bought my loyalty for a lifetime.

With all that said: your eyes were correct. NOTHING on the market cranks out better video than an AMD 6000-series GPU or newer. The reason is that's when AMD's Unified Video Decoder hit maturity with UVD 3, and since they got there after Nvidia, they did it better. UVD 3 is also when AMD's decoding robustness got close to Nvidia and CPU decoding levels.

I personally think Nvidia cards come close on picture quality, or at least close enough in Linux. Obviously this is all stock usage (i.e. what you get from XBMC); if you use something like MadVR and Media Player Classic, it is a different ballgame.

Regarding the different Nvidia cards: there is no decoding advantage to the GDDR5 version. I have the DDR3 version of that card (or what they called it the first time around: a GT 630 V2) and it plays every single file I have, with the exception of H.265, which it can't handle at all. It is my go-to HTPC recommendation right now.

As to what I would recommend... if you will always use Windows, the R7 is hard to beat. If this is a dedicated XBMC box that might run XBMCbuntu one day, stick with that DDR3 GK208.

Decoding quality goes in this order for XBMC/Kodi:

1. AMD 6000 series or better on Windows
2. Nvidia GT 400 series or better in Linux
3. Nvidia GT 400 series or better in Windows
4. (tie) CPU decoding (i.e. DXVA, VAAPI, and VDPAU all turned off)
4. (tie) Intel GPU Ivy Bridge or better
6. Intel GPU Sandy Bridge or older
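If you ever want a quick sanity check of which of those acceleration APIs your box even exposes, here's a lazy little Python sketch (this assumes ffmpeg is on your PATH; Kodi has its own decode paths, so treat the output as a rough hint, not gospel):

Code:
# Rough sketch: ask ffmpeg which hardware acceleration methods it was built with
# (dxva2, vaapi, vdpau, etc.). Assumes ffmpeg is installed and on the PATH.
import subprocess

def list_hwaccels():
    out = subprocess.run(
        ["ffmpeg", "-hide_banner", "-hwaccels"],
        capture_output=True, text=True, check=True,
    ).stdout
    # First line is the "Hardware acceleration methods:" header; the rest are names.
    return [line.strip() for line in out.splitlines()[1:] if line.strip()]

if __name__ == "__main__":
    print("Available hwaccels:", ", ".join(list_hwaccels()) or "(none)")

On Linux you would typically expect vdpau and/or vaapi to show up; on Windows, dxva2.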
 

evilspoons

Senior member
Oct 17, 2005
321
0
76
Thanks for the great post! I've basically had Nvidia gaming cards forever (well, since 3dfx blew up), but I read great things about the 6570 for video so I picked it up. I even got an expensive ASUS model and was therefore very annoyed when it stopped working on me, as I've had great reliability with ASUS hardware over many, many years. Since the field of low-profile cards available to me when the 6570 died looked pathetic, I just switched to the HD 2000 on my i3-2100.

I just did a bit of reading on PureVideo and UVD, and they're really quiet about admitting which version of PureVideo/UVD they include in various cards (because sometimes it's embarrassing, like with the GF108-based GT 730). For instance, figuring out what was in the half-height R7 250 was a pain in the butt.

As near as I can tell, my best bets will be:

AMD, now: something fanless that won't blow up, with UVD 3.
AMD, wait: hope for a UVD 4+ low-profile card (might be waiting a long time!).
Nvidia, now: something fanless, reliable, and quiet with PureVideo VP5 + Feature Set D (comparable to your GT 630 V2, like the GT 730 with GK208).
Nvidia, wait: hope for a Maxwell low-profile card to get PureVideo VP6 + Feature Set E.

(If I got that GT 730 with GF108 I'd be all the way back to VP4 + Feature Set C! Yeesh.)
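To keep it all straight I jotted the candidates down as a little Python dict (the decode-engine values are just my own notes pieced together from that reading, so don't take them as gospel):

Code:
# My scratch notes on which decode engine each candidate card carries.
# The values are my best guesses from reading up on UVD / PureVideo,
# not an official spec sheet.
candidates = {
    "GT 730 (GF108, DDR3)": "PureVideo VP4 / Feature Set C",   # Fermi rebadge
    "GT 730 (GK208, DDR3)": "PureVideo VP5 / Feature Set D",
    "GT 730 (GK208, GDDR5)": "PureVideo VP5 / Feature Set D",  # same decoder, faster memory
    "R7 250 (half-height)": "UVD 3? (a pain to pin down)",
    "Future low-profile Maxwell": "PureVideo VP6 / Feature Set E",
}

for card, engine in candidates.items():
    print(f"{card:28} -> {engine}")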

Pure CPU decoding is no good because the i3-2100 is also responsible for other stuff, like doing parity checks on incoming backups and automatically decompressing downloads. I also have a number of reasons to stay with Windows, although I did consider a Linux XBMC distro for a while.

Here's hoping a fanless AMD card (anything) or a low-profile Maxwell card goes on sale in the near future!
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
You are welcome; I love this stuff. Honestly, Intel's "good enough" GPUs combined with NUC form factors mean this conversation happens less often these days, so I appreciate it.

As far as info on this stuff goes? This is one of those weird times when Wikipedia is canon:

http://en.wikipedia.org/wiki/Unified_Video_Decoder

http://en.wikipedia.org/wiki/Nvidia_PureVideo

As far as waiting for stuff goes? Probably not worth it.

On the Nvidia side, PureVideo Feature Set E is mostly about 4K decoding. If you don't have a 4K TV then it's pretty useless to wait, as I wouldn't expect a huge increase in video quality. On the AMD side, UVD 4 adds hardware interpolation (aka "the soap opera effect"), which I personally don't think is worth having. I would wait for a deal, not for tech. A fanless GK208 runs so cool it is amazing; the HTPC GPUs I dreamed of in 2008 exist today!

Good luck!
 

evilspoons

Senior member
Oct 17, 2005
321
0
76
Blargh, I hate interpolation. High frame rate video is one thing (I loved The Hobbit at 48 fps), but making frames up is crap.

A 4K TV is a long way off for me too, so I guess I'll just wait to see if any of these cards go on sale in the near future. As Canadians we traditionally get Boxing Day sales (Dec 26), but as of a couple of years ago Black Friday stuff has been creeping up from the States.

EDIT: Looked at some more specs. Those GeForce GT 720s (not 730s) with GK208 and a 19 W TDP are looking awfully nice as a passive option. Then again, the 730 with twice as many cores (not really part of the video experience, I know) but half as much RAM (possibly relevant if I went 4K?) is only 23 W... and the R7 240 sits all the way up at 30 W, which is still half the TDP of my dead HD 6570.
 

Maiyr

Member
Sep 3, 2008
117
1
81
Decoding quality goes in this order for XBMC/Kodi:

1. AMD 6000 series or better on Windows
2. Nvidia GT 400 series or better in Linux
3. Nvidia GT 400 series or better in Windows
4. (tie) CPU decoding (i.e. DXVA, VAAPI, and VDPAU all turned off)
4. (tie) Intel GPU Ivy Bridge or better
6. Intel GPU Sandy Bridge or older

This is of interest to me as I am soon to embark upon an HTPC build (first one). Is this saying that you will get better video quality output, in XBMC, with a discrete video card (as referenced above) than you would by using the onboard Intel i5 GPU for output?

Thanks,

Maiyr
 

tential

Diamond Member
May 13, 2008
7,355
642
121
I will start off by saying I am an Nvidia fanboy. Their years of awesome Linux support have basically bought my loyalty for a lifetime.

With all that said: your eyes were correct. NOTHING on the market cranks out better video than an AMD 6000-series GPU or newer. The reason is that's when AMD's Unified Video Decoder hit maturity with UVD 3, and since they got there after Nvidia, they did it better. UVD 3 is also when AMD's decoding robustness got close to Nvidia and CPU decoding levels.

I personally think Nvidia cards come close on picture quality, or at least close enough in Linux. Obviously this is all stock usage (i.e. what you get from XBMC); if you use something like MadVR and Media Player Classic, it is a different ballgame.

Regarding the different Nvidia cards: there is no decoding advantage to the GDDR5 version. I have the DDR3 version of that card (or what they called it the first time around: a GT 630 V2) and it plays every single file I have, with the exception of H.265, which it can't handle at all. It is my go-to HTPC recommendation right now.

As to what I would recommend... if you will always use Windows, the R7 is hard to beat. If this is a dedicated XBMC box that might run XBMCbuntu one day, stick with that DDR3 GK208.

Decoding quality goes in this order for XBMC/Kodi:

1. AMD 6000 series or better on Windows
2. Nvidia GT 400 series or better in Linux
3. Nvidia GT 400 series or better in Windows
4. (tie) CPU decoding (i.e. DXVA, VAAPI, and VDPAU all turned off)
4. (tie) Intel GPU Ivy Bridge or better
6. Intel GPU Sandy Bridge or older

Where are you finding this decoding quality information?

I have access to most of those and I really don't see much of a difference, but then again I'm not extremely picky.
And I haven't used non-GPU decoding in a long time.
My next PC was going to rely on the iGPU, but I'm so used to discrete GPU decoding quality that I was always worried about it. I'll have to see if I can notice a difference on my NUC vs. my HD 7950 vs. my 9800M GTS.
 

evilspoons

Senior member
Oct 17, 2005
321
0
76
Yes - the decode engine (called UVD on an AMD card, or PureVideo on an Nvidia card) does a better job than Intel's. You'll see a sharper image, fewer odd artifacts in tricky cases, and so on. The picture on ANY video card will be "fine" to 99% of people, but it is better on these cards.

Look around on AnandTech; I think there's an old GT 420 (or maybe 430) review where video decode quality is analyzed. There have been newer cards since then, but the same basic principles apply.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
Yes - the decode engine (called UVD on an AMD card, or PureVideo on an Nvidia card) does a better job than Intel's. You'll see a sharper image, fewer odd artifacts in tricky cases, and so on. The picture on ANY video card will be "fine" to 99% of people, but it is better on these cards.

Look around on AnandTech; I think there's an old GT 420 (or maybe 430) review where video decode quality is analyzed. There have been newer cards since then, but the same basic principles apply.

Will do. It's just something I've never even thought about. I typically watch on my HD 7950 or my 9800M GTS. If I watch on an iGPU it's a YouTube video or something, and I rarely watch for more than 10-15 minutes, versus hours on my HD 7950.

I always saw people pick up cheap GPUs for XBMC machines; I guess that's a reason why.
I wonder if video decoding will improve with Broadwell or Skylake?
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
This is of interest to me as I am soon to embark upon an HTPC build (first one). Is this saying that you will get better video quality output, in XBMC, with a discrete video card (as referenced above) than you would by using the onboard Intel i5 GPU for output?

Thanks,

Maiyr

If configured correctly, yes. In fact I can confidently say that a nice GPU + MadVR + Media Player Classic Home Cinema + proper configuration = the best picture quality possible, period. Intel GPUs can do a pretty good job running MadVR, but a midrange gaming GPU will smoke them and deliver the only 4K video experience outside of Netflix worth having, IMHO.

But even I don't do that, as MPC-HC is about as wife-friendly as a mistress and I only need 1080p. So instead I use XBMC/Kodi, because it can be configured to be the prettiest, easiest media center ever. But whenever you use Kodi (or many other media center apps), for the most part you are stuck with the default decoding that the GPU maker provides. You can enable post-processing and you have a few deinterlace settings, but with one exception (Nvidia on Linux, which is why I stick to it) you don't get a lot of fine-tuning for the hardware decoding outside of the GPU control panel. Which is fine; that is all 99% of people need. They just want it to play, and that is kinda the point of Kodi.

In that case, then, you are kinda beholden to the output the GPU maker provides by default when it comes to picture quality. That, plus actual differences in hardware capability and cadence compatibility, is the small difference in picture quality between GPUs that I am referring to in that list. It seriously doesn't matter to most people, as 99% of the picture quality is dependent on the output device.
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
Where are you finding this decoding quality information?

evilspoons is 100% right on; much of it is here in old AnandTech reviews. GT 430 vs. AMD 6450 vs. Intel Ivy Bridge tells the whole story. The rest comes from experience trying all this stuff, plus research online and in the open-source drivers, because basically I have more fun building the HTPCs than I ever do watching the content. Also I am "super picky" for the most part, enough that with my plasmas I can tell which features are working and which ones aren't.

The BIG difference between AMD/Nvidia and everything else is hardware color correction. That is a big part of what people see with AMD + Windows, and it is what you get rock solid from Linux + Nvidia. If I toggle that VDPAU color correction setting on and off, it can make a big difference, seemingly at random, on different content and in different scenes within the same content. And by big I mean like 15-30% better, so not night and day (except on maybe 5% of stuff where it is night and day).
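To give a concrete idea of the kind of thing a color-correction pass can involve, here's an illustration of one common piece of it: expanding limited-range "studio level" video (16-235) to full range (0-255). This is just a sketch of the concept, not a description of what any particular driver actually runs:

Code:
# Illustration only: one common part of a color-correction pass is expanding
# limited-range ("studio level", 16-235) luma to full range (0-255) before display.
# Getting this wrong is what gives you washed-out blacks or crushed shadows.
import numpy as np

def expand_studio_levels(luma_plane: np.ndarray) -> np.ndarray:
    """Map 8-bit limited-range luma (16-235) to full-range (0-255)."""
    y = luma_plane.astype(np.float32)
    full = (y - 16.0) * (255.0 / (235.0 - 16.0))
    return np.clip(full, 0.0, 255.0).astype(np.uint8)

# Tiny demo on a fake 2x4 luma plane
frame = np.array([[16, 64, 128, 235],
                  [20, 100, 180, 230]], dtype=np.uint8)
print(expand_studio_levels(frame))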

The rest is AMD doing somewhat more aggressive post-processing than Nvidia, because they came second. The whole "magic" of the AMD/Nvidia solutions is that they do the decoding on the shader hardware, CUDA-style, very similar to CPU decoding, which is why it is the second most robust decoding method after CPU decoding. That also means they can throw in hardware effects like color correction, deinterlacing, post-processing, sharpening, denoising, interpolation, etc. for "free," as in no extra die space. Nvidia did this first, when they went to general shaders back in the 8000 series. AMD kept its (crappy, IMHO) dedicated decoding hardware around until the 6000 series. Now they rule.

Intel, on the other hand, uses dedicated silicon as far as I know, but they try to be all badass about it and use it for "bonus" stuff like Quick Sync. Personally I think Intel's solution isn't that great and I never use it, because if you've got Intel inside you've probably got enough power to just CPU decode anyway, which is considerably more robust than Intel's solution (which in turn is less robust than Nvidia's and AMD's way of doing it, but still way better than stuff like standalone decoder chips).
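If you ever want to see what I mean by "robust," a crude test is to decode the same file once in pure software and once through a hwaccel path, and compare which run spits out decode errors. Rough Python wrapper for the idea (assumes ffmpeg is on the PATH; the file name and the hwaccel name are just placeholders):

Code:
# Crude decode-robustness smoke test: run the same file through software
# decoding and a hardware-accelerated path, then compare the error output.
# Assumes ffmpeg is installed; "clip.mkv" and "vdpau" are placeholders.
import subprocess

def decode_errors(path, hwaccel=None):
    cmd = ["ffmpeg", "-hide_banner", "-v", "error"]
    if hwaccel:
        cmd += ["-hwaccel", hwaccel]        # e.g. "dxva2", "vaapi", "vdpau"
    cmd += ["-i", path, "-f", "null", "-"]  # decode everything, discard output
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stderr.strip()

if __name__ == "__main__":
    clip = "clip.mkv"  # placeholder file
    print("Software decode errors:", decode_errors(clip) or "(none)")
    print("VDPAU decode errors:", decode_errors(clip, "vdpau") or "(none)")

A file that decodes cleanly in software but throws errors with the hwaccel enabled is exactly the kind of robustness gap I'm talking about.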
 