Sure, since it is available through DXVA. OpenCL solutions never match a true hardware implementation in terms of performance or power efficiency, so a true decoding capability is highly preferred. VP9 is an inferior format and it needs to die, the sooner the better. Unfortunately it was chosen by Google, and their muscle guarantees that its departure won't be as swift as one could hope. If your CPU cannot decode VP9 at 1080p regardless of the bitrate, it is pretty much useless for anything else too.
I'd hope so. VP9 is a competitor to H.264; it's not really meant to go head to head with HEVC. The competitor to H.265 (HEVC) is AV1, which is a collaboration of Mozilla's Daala, Cisco's Thor, and Google's VP10, all rolled into one royalty-free codec called AV1. It should get a finished standard at some point in the first half of 2017.
As for the next generation of video codecs, there's one thing that people may not have realized about HEVC/H.265: the licensing costs are much, much higher. This may push people to alternative codecs such as AV1.
Superior technology always comes at a cost. Anyone who has ever tried to encode VP9 using the reference encoder / library soon realizes what kind of a joke it is. Not to mention the codec itself is inferior to HEVC in terms of compression efficiency. It is pretty hard for a new contender to put up a serious challenge to HEVC, since HEVC is largely based on the current industry standard (AVC, H.264).
I wonder if the HEVC patent pool holders are starting to panic. If not, perhaps they should be.
Considering AV1 also has hardware partners out the ass, they should be. If it's anywhere close to HEVC I don't see why anyone would bother paying the royalties.
Adobe
Amazon
AMD
ARM
Ateme
Cisco
Intel
Ittiam
Microsoft
Mozilla
Netflix
NVIDIA
Vidyo
VeriSilicon
Had a look at the much-hyped AV1. Based on what I saw, I cannot really see what all the fuss is about. AV1 can match or even exceed HEVC in terms of quality, but outside of being encoded using a dedicated, fixed-function encoder I cannot see any serious use for it (for consumers). It shares the exact same issues as VP9: it is extremely slow and clumsy. Anyone who has tried encoding HEVC knows that it is extremely slow compared to x264, for example. Based on my tests using the most recent encoder versions (AOM 0.10 & x265 2.1), x265 is 11.3x - 13.6x faster at the same bitrate and at settings resulting in comparable (within 0.005) SSIM. Since HEVC is already extremely slow to encode, this means AV1 is practically impossible to encode on a CPU.
I used a raw 4:2:0 1080p sample video ("Cobra" by Harmonic) with the following settings:
AV1 = --passes=2 --good --i420 --threads=18 --width=1920 --height=1080 --target-bitrate=* (*1000 / 5000) --end-usage=cbr --cpu-used=2
X265 = --preset medium --input-res 1920x1080 --fps 30 --bitrate * (*1000 / 5000) --pools "18" --pass * (* 1 / 2)
AV1 - 1000kbps = 3.71fps (2nd pass), SSIM ~ 0.967, 5000kbps = 1.92fps (2nd pass), SSIM ~ 0.975
HEVC - 1000kbps = 41.71fps (2nd pass), SSIM ~ 0.9675, 5000kbps = 26.15fps (2nd pass), SSIM ~ 0.973
Tested on Haswell-EP HCC (18C/36T); both binaries compiled with x86-64 GCC 6.2.0 & YASM. AOM compiled with CPU runtime detection / "fat binary" (dispatcher) enabled.
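The SSIM figures above are what the encoders themselves report. The metric itself is simple enough to sketch; here is a minimal single-window variant in pure Python, purely illustrative. Real tools (e.g. x265's --ssim reporting) compute it over small sliding windows and average, but the per-window formula is the same.

```python
def ssim_global(x, y, data_range=255.0):
    # Single-window SSIM between two equally sized grayscale frames,
    # given as flat lists of pixel values in [0, data_range].
    n = len(x)
    c1 = (0.01 * data_range) ** 2  # standard stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((px - mx) ** 2 for px in x) / n
    vy = sum((py - my) ** 2 for py in y) / n
    cov = sum((px - mx) * (py - my) for px, py in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

frame = [p % 256 for p in range(4096)]       # toy 64x64 gradient "frame"
shifted = [min(p + 10, 255) for p in frame]  # mild brightness distortion
print(ssim_global(frame, frame))    # identical frames score 1.0
print(ssim_global(frame, shifted))  # distortion pulls the score below 1.0
```

An SSIM around 0.97, as in the runs above, means the encoded frames are structurally very close to the source; differences of ~0.005 between two encoders are small, which is why the speed gap is the headline number here.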
I get the impression that Netflix will continue to use HEVC for a long time even after AV1 lands. The problem is that HEVC will be too firmly entrenched in TVs and stuff like that, and those can't be updated to support AV1. At best, Netflix will start supporting both AV1 and HEVC, the former for computer browsers.
Perhaps that installed base in TVs, etc. is what the patent pool is counting on but in my undereducated opinion that is very risky for them since some manufacturers could easily switch to support AV1 in the future. Sony won't but as time goes on they are becoming less important as a driver of these types of technologies.
If it's a smart TV, then pushing an updated FFmpeg build for the Android OS running it shouldn't be too difficult, unless the SoC in that smart TV is old rubbish, which it shouldn't be for 2015-era smart TVs.
Sweet, but I see that page is for Windows 10. What about macOS? (I'm guessing they can't really say because Kaby Lake Macs don't actually exist yet.)
https://bugzilla.mozilla.org/show_bug.cgi?id=1292374
Firefox 52.0 when released will have VP9 hardware decoding support on Intel Kaby Lake, Nvidia Maxwell GM206 & Pascal family GPUs.
Both have hardware encoding and decoding, but Intel ==> 4K VP9 while the APUs ==> 1080p VP9. So APUs are out of the question for 4K VP9. Also, 1080p at what fps? Is 1080p@60 possible?
Like Intel's recently-released Kaby Lake CPUs, Bristol Ridge offers hardware acceleration for the HEVC and VP9 video codecs. It's worth noting that these APUs can only perform hardware encoding and decoding of 4K content for HEVC, however; VP9 support is limited to 1080p content. Kaby Lake, on the other hand, can decode 4K VP9 content in hardware. Given that the next-generation codec wars still appear to be in full swing, Bristol's limited VP9 support may not be that important in the grand scheme of things.
I guess I don't quite see how hardware video encoders can be both two-pass and real-time.
It's not real two-pass. QuickSync (Haswell/Broadwell/Kaby Lake; Skylake not 100% sure), NVENC (Maxwell+?), and VCE 3.x (Bristol Ridge, Stoney Ridge, Polaris 12) support this fake 2-pass. NVENC does have a "vbr_2pass" in its options, but what it means is that it will look ahead a number of frames for the rate control to make better decisions. True two-pass encoding means doing two full passes over the data. Although what NVENC does improves the quality, it is far from the real thing. The naming of the options is confusing and misleading...
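The difference can be sketched with a toy rate-control model (purely illustrative, not any vendor's actual algorithm): true two-pass sees every frame's complexity before allocating bits, while a real-time lookahead encoder can only balance bits within the short window it can see.

```python
def two_pass(complexity, budget):
    # True two-pass: pass 1 measures every frame's complexity, so pass 2
    # can allocate the whole bit budget proportionally across the clip.
    total = sum(complexity)
    return [budget * c / total for c in complexity]

def lookahead(complexity, budget, window=4):
    # Single real-time pass: rate control only ever sees `window` frames
    # ahead, so each window must live within its own share of the budget.
    per_frame = budget / len(complexity)
    out = []
    for i in range(0, len(complexity), window):
        chunk = complexity[i:i + window]
        chunk_budget = per_frame * len(chunk)
        out.extend(chunk_budget * c / sum(chunk) for c in chunk)
    return out

clip = [1, 1, 1, 1, 8, 8, 8, 8]  # easy first half, hard second half
print(two_pass(clip, 800))   # hard frames get ~178 bits each
print(lookahead(clip, 800))  # every frame gets 100: hard frames starve
```

Both allocators spend the same total budget, but the lookahead version wastes bits on the easy scene because it cannot know a harder one is coming, which is the quality gap the post above is describing.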