Question: At what point do AMD CPUs outperform Intel CPUs in Adobe Premiere using H.264?

Bavor

Member
Nov 11, 2001
82
18
81
Right now I have an i7-6700K at 4.6 GHz. Most of my video editing is in Adobe Premiere Pro for YouTube at 2K and 4K resolution. My i7-6700K was significantly faster for video encoding with the YouTube 4K preset than the R7 1700X system I tried. Depending on the video, the 1700X took 30% to 50% more time to encode the same video.

If I wanted a Ryzen 3000 series CPU for my new system, does anyone know which model would be fast enough that I won't be encoding video slower than the 6700K, which uses its iGPU to speed up encoding?
 

Snarf Snarf

Senior member
Feb 19, 2015
399
327
136
Not sure about the YouTube 4K preset, but Puget Systems recently did a roundup, and Intel seems to be ahead still if you're using H.264 or H.265, thanks to Quick Sync acceleration.

Ryzen 3000 vs Intel 9000 Premiere Pro Roundup

Do you work with H.264 or H.265 media?
In this case, the Intel 9th Gen CPUs have a pretty commanding lead (especially with 10-bit footage) due to the fact that Premiere Pro supports hardware acceleration of H.264 and H.265 (HEVC) media via Intel Quick Sync. This feature isn't available on AMD processors (or on the Intel X-series), which makes CPUs like the Intel Core i9 9900K simply the best option for this type of media.
Do you work with RED footage?
Here, the higher raw performance of the new AMD Ryzen CPUs allows it to take a significant lead over the Intel 9th gen CPUs - although they can't quite catch up with the higher-end (and much more expensive) AMD Threadripper or Intel X-series CPUs. However, this is one area that is highly subject to change since RED is working on moving more of the processing of RED media from the CPU to the GPU. This is expected to be bundled into one of the next few Premiere Pro releases, at which point we may find that the CPU no longer makes more than a minor impact on performance when working with RED footage.
Do you use Neat Video for noise reduction?
If you use Neat Video, there is simply no contest: use an AMD Ryzen 3rd generation processor. Performance is up to 60% faster when using only the CPU, but even with Neat Video set to use both the CPU and GPU (with a NVIDIA GeForce RTX 2080 Ti) we still saw a 20-30% performance gain with Ryzen over a comparably priced Intel CPU.
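
For anyone who wants to see the same hardware/software split outside of Premiere, here's a minimal sketch using ffmpeg from Python. This is just an illustration, not Premiere's pipeline: it assumes an Intel iGPU with working Quick Sync drivers and an ffmpeg build with QSV support, and the file names and 40M bitrate are made-up placeholders.

```python
# Illustration only: the Quick Sync vs. software-encode split outside
# Premiere, via ffmpeg. Assumes an Intel iGPU with QSV drivers and an
# ffmpeg build with QSV support; names and bitrate are placeholders.
import subprocess

SRC = "gameplay_4k60.mp4"

# Quick Sync: runs on the iGPU's fixed-function encode block, so its
# speed barely depends on CPU cores.
subprocess.run(["ffmpeg", "-y", "-i", SRC,
                "-c:v", "h264_qsv", "-b:v", "40M", "out_qsv.mp4"],
               check=True)

# Software x264: pure CPU, scales with cores and clocks instead.
subprocess.run(["ffmpeg", "-y", "-i", SRC,
                "-c:v", "libx264", "-preset", "medium", "-b:v", "40M",
                "out_x264.mp4"],
               check=True)
```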
 
  • Like
Reactions: Mopetar and Bavor

Hitman928

Diamond Member
Apr 15, 2012
5,177
7,628
136
For final encoding you'll have to get up to HEDT-level CPUs to beat a Quick Sync-enabled Intel CPU. Maybe the upcoming 16-core AM4 CPU from AMD will do it. Many people don't use Quick Sync (or other hardware encoders) because the quality is visibly worse, but the latest encoders from all three companies (Intel, NVIDIA, AMD) look fine to me if you're not trying to do professional-grade video.

GN did some benchmarking back when Zen+ came out. They make an interesting observation that the 8+ core CPUs from AMD can overtake even Quick Sync if you do multiple encodes at the same time. The Quick Sync hardware can really only do one encode at a time, whereas a single encode doesn't max out the threads on an 8+ core AMD CPU, so running two encodes at once lets the CPU be fully utilized, and it becomes faster than Quick Sync. So one can be faster than the other depending on how many videos you are working on and what your quality standards need to be.
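
A rough sketch of what "two encodes at once" means in practice, using ffmpeg's software encoder from Python, since I can't reproduce the GN test setup here. It assumes ffmpeg with libx264 is installed; file names and bitrate are hypothetical.

```python
# Minimal sketch of the "two parallel software encodes" idea from the GN
# observation above. Assumes ffmpeg with libx264 is installed; file names
# and the 40M bitrate are hypothetical placeholders.
import subprocess

def start_encode(src: str, dst: str) -> subprocess.Popen:
    """Launch one software (libx264) encode as a background process."""
    return subprocess.Popen([
        "ffmpeg", "-y", "-i", src,
        "-c:v", "libx264", "-preset", "medium", "-b:v", "40M",
        "-c:a", "copy", dst,
    ])

# A single encode leaves an 8+ core CPU partly idle; two at once can
# saturate it, which is how the CPU overtakes the one-at-a-time
# Quick Sync hardware block in total throughput.
jobs = [
    start_encode("clip_a.mp4", "clip_a_x264.mp4"),
    start_encode("clip_b.mp4", "clip_b_x264.mp4"),
]
for job in jobs:
    job.wait()
```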



There's also the factor of timeline performance, which again favors high-core-count CPUs: they can encode one video while you work on another much more smoothly. So again, it depends on your use case.

 
Last edited:

Bavor

Member
Nov 11, 2001
82
18
81
I usually only work on one video at a time. Even at 4K on YouTube, most people don't seem to notice the difference between H.264 and much higher-quality encoding options for game videos.
 

Bavor

Member
Nov 11, 2001
82
18
81
Not sure about the YouTube 4K preset, but Puget Systems recently did a roundup, and Intel seems to be ahead still if you're using H.264 or H.265, thanks to Quick Sync acceleration.

The YouTube preset is H.264.

People on another forum told me that if 4K YouTube editing in Premiere Pro was my primary concern, the i9-9900K would be better than the R9 3900X if I was looking to upgrade.
 

Hitman928

Diamond Member
Apr 15, 2012
5,177
7,628
136
The YouTube preset is H.264.

People on another forum told me that if 4K YouTube editing in Premiere Pro was my primary concern, the i9-9900K would be better than the R9 3900X if I was looking to upgrade.

Yeah, if you're using Quick Sync and not doing multiple videos at a time, the 9900K is the clear winner.
 

Bavor

Member
Nov 11, 2001
82
18
81
Today I had a chance to compare my i7-6700K system to an R7 3800X system using H.264 encoding with the same project in Adobe Premiere Pro CC. Both systems had the latest drivers and the latest version of Windows 10 Pro. All the unnecessary startup items were disabled and the systems were rebooted prior to testing.

i7-6700K @ 4.6 GHz
Noctua NH-D14 CPU cooler
Asus Sabertooth Z170 S Motherboard
32 GB DDR4 3000 MHz C15 Corsair RAM
1 TB Samsung 960 EVO NVMe SSD
EVGA RTX 2080 XC Hybrid
Windows 10 Pro

R7 3800X PBO Enabled and 1usmus power plan used
Stock Wraith Prism CPU cooler
Asus TUF Gaming X570-Plus (Wi-Fi) ATX Motherboard
32 GB DDR4 3600 MHz C16 G.Skill RAM
1TB HP EX950 NVMe SSD
EVGA RTX 2080 XC Hybrid
Windows 10 Pro

The source project was a 4K 60 FPS gameplay video, 8 minutes and 25 seconds long. It was exported to an H.264 4K 60 FPS video with the YouTube preset in Premiere Pro CC.

i7-6700K: 15 minutes 27 seconds
R7 3800X: 14 minutes 37 seconds
The difference was 50 seconds. Twice the cores/threads at a lower clock speed is a little faster at encoding.

I know it's not a true apples-to-apples comparison. The CPU coolers were different, the RAM speed was different, and the HP EX950 SSD has faster read and write speeds than the 960 EVO in synthetic testing.

If I can find the AMD brackets for the NH-D14, I'll retest the 3800X with it. I don't think it will make a huge difference: during a second encoding run of the same source file, with CPUID HWMonitor and Task Manager's performance tab open, the 3800X sustained a 4.1+ GHz all-core speed at a maximum CPU temperature of 75C, and not all cores were at 100% usage.

Also, I may be able to test the i7-6700K system with the 3600 MHz RAM to see if it makes any difference in encoding time.

Prior to uninstalling the AMD drivers with DDU on the R7 3800X system, I ran the same test with an RX 580 8GB GPU and the latest drivers, and the time was 14 minutes 44 seconds. I assume CUDA acceleration doesn't help much with the 4K YouTube preset.

The results make me wonder how well the CPU core/thread scaling will work with the 3950X in Premiere Pro CC.
 
  • Like
Reactions: lightmanek

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,478
14,433
136
What was the CPU utilization according to Task Manager on each? It sounds like anything more than 4 cores is wasted, since I think at single-core the 3800X @ 4.1 GHz ~= the 6700K @ 4.6 GHz.
 
  • Like
Reactions: Drazick

Bavor

Member
Nov 11, 2001
82
18
81
What was the CPU utilization according to Task Manager on each? It sounds like anything more than 4 cores is wasted, since I think at single-core the 3800X @ 4.1 GHz ~= the 6700K @ 4.6 GHz.

[Image: 6700K CPU usage (kHO2sGp.jpg)]

[Image: 6700K iGPU usage (ZMxc4XB.jpg)]

[Image: R7 3800X CPU usage (jMazCKo.jpg)]


Watching the R7 3800X, the usage varies, but there are always several cores around 90% to 100%, and the rest bounce between 50% and 100% or 70% and 100% during the encode.

Considering the meager increase from the 6700K to the 3800X, I highly doubt the scaling will be any better with the 3950X.

Premiere Pro can use the integrated graphics on Intel CPUs to speed up encoding with certain video codecs. I'm not sure it's an issue of more cores being a waste, based on the Puget Systems results linked above. It's an issue of finding which AMD CPU I would need so that an upgrade doesn't come with a significant decrease in performance with the codecs I commonly use.

With several cores staying in that 90-100% range and the others bouncing between 50% and 100% or 70% and 100% during the encode, I wonder whether a 3900X or 3950X would perform better, because it would allow more cores to hit 100% during those short stretches when most or all of the cores are in the 90-100% range. I've read before that Premiere Pro doesn't spread its load over higher core counts well, but that doesn't seem to be the case here or in some of Puget Systems' tests. In the roundup linked above, the 3800X did better than the 3900X in one type of H.264 export and worse in the other. I also noticed that in some of the H.264 tests the 16-core Threadripper CPUs beat the 12-core ones, while in others the 12-core Threadrippers had the better export scores.

At this point, without a higher-core-count R9 CPU to test, I don't know what the performance increase would be in my case. Given how little of the encode most cores spend in the 90-100% range, I don't think it would be huge, but maybe it would help with longer videos, where those stretches of all or most cores in the 90-100% range happen more frequently.
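
One way to reason about this without the hardware: a back-of-the-envelope Amdahl's law estimate. To be clear, this is not Premiere's actual scaling model, and the parallel fraction below is a pure guess; it just illustrates why cores that aren't pegged at 100% translate into modest gains from adding more of them.

```python
# Back-of-the-envelope core-scaling estimate using Amdahl's law. This is
# NOT Premiere Pro's measured behavior; the parallel fraction p is a
# hypothetical guess used only to illustrate the shape of the curve.
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup on n cores if fraction p of the work is parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.85  # hypothetical: 85% of the encode parallelizes well
for cores in (8, 12, 16):
    rel = amdahl_speedup(p, cores) / amdahl_speedup(p, 8)
    print(f"{cores} cores: {rel:.2f}x vs. an 8-core 3800X")
# With p = 0.85, a 16-core part comes out only ~1.26x an 8-core one,
# which would fit the modest gains seen when cores sit below 100%.
```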

Also, would better cooling, PBO, AutoOC, and raised power limits yield a significant increase in performance? I've seen mixed results from PBO, AutoOC, and raised power limits in synthetic benchmarks.
 

moinmoin

Diamond Member
Jun 1, 2017
4,933
7,618
136
Premiere Pro can use the integrated graphics on Intel CPUs to speed up encoding with certain video codecs.
And going by your iGPU usage screenshot, it did use it on the 6700K. Previously I assumed the 6700K and 3800X were this close due to Premiere Pro CC's lack of scaling, but the involvement of the iGPU in the former changes the whole picture. The 3900X/3950X may fare significantly better, then.
 
  • Like
Reactions: lightmanek

Abwx

Lifer
Apr 2, 2011
10,847
3,296
136
And going by your iGPU usage screenshot, it did use it on the 6700K. Previously I assumed the 6700K and 3800X were this close due to Premiere Pro CC's lack of scaling, but the involvement of the iGPU in the former changes the whole picture. The 3900X/3950X may fare significantly better, then.


Using the iGPU is faster but at the expense of image quality; at the same quality, there's no gain in encoding time with Quick Sync...

 

Bavor

Member
Nov 11, 2001
82
18
81
Using the iGPU is faster but at the expense of image quality; at the same quality, there's no gain in encoding time with Quick Sync...


With YouTube's bandwidth limitations for video, including 4K video, nobody can tell the difference unless they pause the video and advance it frame by frame. Even then, the differences are usually small and hard to find.
 

coercitiv

Diamond Member
Jan 24, 2014
6,150
11,670
136
With YouTube's bandwidth limitations for video, including 4K video, nobody can tell the difference unless they pause the video and advance it frame by frame. Even then, the differences are usually small and hard to find.
Without iso encoding quality, there's little point in comparing encoding performance. What happens if you lower the software encoding settings on the 3800X and the difference is still "small and hard to find"? Does that make the 3800X inherently faster for your work?

For those who did not have the time to browse the Puget Systems review posted above, here's the loss in detail going from "pure" software to hardware accelerated encoding:

[Image: software vs. hardware-accelerated encode, crop 1 (rmCtge1.gif)]

[Image: software vs. hardware-accelerated encode, crop 2 (5QnuGd7.gif)]


Please note these are not 100% crops, images were already shrunk by half.
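
If anyone wants to put a number on "small and hard to find" instead of eyeballing paused frames, something like ffmpeg's SSIM filter can score each export against the source. A minimal sketch, assuming ffmpeg is installed; the file names are placeholders.

```python
# Score each export against the source with ffmpeg's SSIM filter rather
# than comparing paused frames by eye. Assumes ffmpeg is installed;
# file names are hypothetical placeholders.
import subprocess

def ssim_vs_source(encoded: str, source: str) -> str:
    """Return ffmpeg's SSIM summary line for encoded vs. the source."""
    result = subprocess.run(
        ["ffmpeg", "-i", encoded, "-i", source,
         "-lavfi", "ssim", "-f", "null", "-"],
        capture_output=True, text=True,
    )
    # ffmpeg prints the summary on stderr, e.g. "SSIM Y:0.98 ... All:0.97"
    lines = [ln for ln in result.stderr.splitlines() if "SSIM" in ln]
    return lines[-1] if lines else result.stderr

print(ssim_vs_source("export_software_x264.mp4", "source.mp4"))
print(ssim_vs_source("export_quicksync.mp4", "source.mp4"))
```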
 

Bavor

Member
Nov 11, 2001
82
18
81
Wait, you were using Intel Quicksync, and the 3800X was still 50 seconds faster?

That's impressive to say the least...

It's definitely a noticeable improvement over how my previous 1700X system compared to my 6700K system. Premiere Pro CC encoding responds well to CPU speed.

I sold my 1700X system and I don't have the original projects and files that I used to compare the 1700X and 6700K, so I can't do an equal comparison between the 6700K, 1700X, and 3800X, or see what the actual difference between the 1700X and 3800X is.

When you look at the actual percentage difference it seems smaller.
6700K: 927 seconds
3800X: 877 seconds

50/927 = 5.4%
50/877 = 5.7%

Also, the 6700K has an older version of the iGPU when compared to newer Intel CPUs. The results may be different on newer Intel CPUs with their faster and newer iGPUs. The 6700K has Intel HD Graphics 530, while the newer Intel CPUs have Intel UHD Graphics 630, which supposedly performs better in Premiere than older Intel graphics.
 

TheELF

Diamond Member
Dec 22, 2012
3,967
720
126
For those who did not have the time to browse the Puget Systems review posted above, here's the loss in detail going from "pure" software to hardware accelerated encoding:


Please note these are not 100% crops, images were already shrunk by half.
The devil is in the details...
They tell you from the get-go that there is a bug that causes lower quality, but then they go ahead and treat it as a normal test, like what the heck!?
Also, does this bug still exist, or did they patch it out, which would make the whole thing pointless?
We have found that much of the quality issue with Hardware Accelerated encoding is due to it being limited to ~60mbps. If you try to set it higher than that (which the "H.264 High Quality 2160p 4K" preset does), Premiere Pro and Media Encoder don't follow the target bitrate and revert back to a very low bitrate. This explains both the low quality and faster performance found in our testing.

We plan to do a follow-up to this testing in the future.
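
A quick way to check whether that bug is biting a given export is to read back the actual bitrate with ffprobe and compare it to the preset's target. A sketch, assuming ffprobe is installed; the file name is a placeholder.

```python
# Check whether an export actually hit its target bitrate (relevant to
# the ~60 Mbps hardware-encode limit quoted above). Assumes ffprobe is
# installed; the file name is a hypothetical placeholder.
import json
import subprocess

def video_bitrate_mbps(path: str) -> float:
    """Read the container-reported overall bitrate via ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return int(json.loads(out)["format"]["bit_rate"]) / 1e6

# If a "High Quality 2160p 4K" export comes back far below its preset
# target, the encoder silently reverted, and any speed win is suspect.
print(f"{video_bitrate_mbps('export_quicksync.mp4'):.1f} Mbps")
```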
 

coercitiv

Diamond Member
Jan 24, 2014
6,150
11,670
136
They tell you from the get-go that there is a bug that causes lower quality, but then they go ahead and treat it as a normal test, like what the heck!?
I did not use image crops from the bugged export files; instead, I used the first batch of export files with equal bitrate (under 60 Mbps).

The bugged Adobe implementation was still present in July 2019 as reported by a Puget reviewer in the comment section of a newer video CPU benchmark.
 

TheGiant

Senior member
Jun 12, 2017
748
353
106
Without iso encoding quality, there's little point in comparing encoding performance. What happens if you lower the software encoding settings on the 3800X and the difference is still "small and hard to find"? Does that make the 3800X inherently faster for your work?

For those who did not have the time to browse the Puget Systems review posted above, here's the loss in detail going from "pure" software to hardware accelerated encoding:

[Image: software vs. hardware-accelerated encode, crop 1 (rmCtge1.gif)]

[Image: software vs. hardware-accelerated encode, crop 2 (5QnuGd7.gif)]


Please note these are not 100% crops, images were already shrunk by half.
From a user's POV, there is little point in the iso-quality method. The user defines what is good enough, and the OP decided that the quality from the old Q3 2015 iGPU is acceptable, so it's OK. Of course we can't expect the 6700K to perform like the 3800X in pure CPU encoding, but that is another use case.

It's definitely a noticeable improvement over how my previous 1700X system compared to my 6700K system. Premiere Pro CC encoding responds well to CPU speed.

I sold my 1700X system and I don't have the original projects and files that I used to compare the 1700X and 6700K, so I can't do an equal comparison between the 6700K, 1700X, and 3800X, or see what the actual difference between the 1700X and 3800X is.

When you look at the actual percentage difference it seems smaller.
6700K: 927 seconds
3800X: 877 seconds

50/927 = 5.4%
50/877 = 5.7%

Also, the 6700K has an older version of the iGPU when compared to newer Intel CPUs. The results may be different on newer Intel CPUs with their faster and newer iGPUs. The 6700K has Intel HD Graphics 530, while the newer Intel CPUs have Intel UHD Graphics 630, which supposedly performs better in Premiere than older Intel graphics.
If you can, try Intel's SVT encoders; after a long time, finally a non-niche AVX-512 use case (see benchmarks like Phoronix's).
CPUs with strong AVX2 see very good improvement too; Intel or AMD doesn't matter.
It does what I hoped for in 2019: the AVX-512 25W Ice Lake U laptops will have the encoding performance of a 9900K, and that is a big deal.
 

Bavor

Member
Nov 11, 2001
82
18
81
I did some further testing. I reran the test on the i7-6700K with Google Drive sync paused, Dropbox exited, and Microsoft OneDrive stopped in startup. I noticed the 3800X system did not have those installed or enabled, and I forgot to disable them when I ran the initial 6700K test. The result was 14 minutes 59 seconds, an improvement of 28 seconds.

Next, I encoded the same project on the 6700K with Hyper-Threading off and all those cloud drive programs paused or shut down. The result was 14 minutes 47 seconds. I didn't expect turning Hyper-Threading off to have that effect. Maybe the Spectre/Meltdown patches have a bigger effect on Hyper-Threading in Premiere than I thought they would. I'll have to try the 3800X with SMT off to see what happens, just for comparison.

I'll also try some encoding with the iGPU acceleration disabled on the 6700K to see what difference it makes.

i7-6700K: 15 minutes 27 seconds
i7-6700K Cloud Drives Paused/Off/Stopped: 14 minutes 59 seconds
i7-6700K HT off: 14 minutes 47 seconds
R7 3800X: 14 minutes 37 seconds

Here is the 6700K task manager Performance tab while encoding with HT off:
[Image: 6700K Task Manager performance tab with HT off (P2TWMg3.jpg)]
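
Given that background apps alone were worth 28 seconds here, a repeatable timing harness helps separate real differences from run-to-run noise. A sketch using an ffmpeg encode as a stand-in for the Premiere export; paths and settings are hypothetical.

```python
# Repeatable timing of the same encode, taking the median over several
# runs to damp run-to-run noise (background apps, thermals, etc.).
# ffmpeg is a stand-in for the Premiere export; paths are placeholders.
import statistics
import subprocess
import time

def time_encode(cmd: list[str], runs: int = 3) -> float:
    """Median wall-clock seconds over several runs of the same encode."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

cmd = ["ffmpeg", "-y", "-i", "source.mp4",
       "-c:v", "libx264", "-preset", "medium", "-b:v", "40M", "out.mp4"]
print(f"median: {time_encode(cmd):.0f} s")
```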
 

TheELF

Diamond Member
Dec 22, 2012
3,967
720
126
Next, I encoded the same project on the 6700K with Hyper-Threading off and all those cloud drive programs paused or shut down. The result was 14 minutes 47 seconds. I didn't expect turning Hyper-Threading off to have that effect.
Have a look at the clocks while doing this; at default settings, a K CPU will use the same TDP whether HT is on or off, which means a slight difference in clock speeds.
 

richaron

Golden Member
Mar 27, 2012
1,357
329
136
I'll also try some encoding with the iGPU acceleration disabled on the 6700K to see what difference it makes.
Yeah, I kinda speed-read through the thread, but this seems to be the point.

At the least, what you were doing is an apples-to-oranges comparison. You posed the OP as a CPU vs. CPU question, but it seems GPU acceleration was the difference the whole time. There's nothing wrong with taking advantage of GPU acceleration, but Intel's isn't the only product with an integrated GPU, and a dedicated GPU (from any vendor) can also provide acceleration. And as has been stated many times, GPU-accelerated encoding is generally considered inferior to pure CPU encoding, even if it's fine for most people. In fact, if you're looking for the fastest GPU acceleration, even a low-end CPU or GPU (with the newest encode hardware) might be up there with the fastest.

I do appreciate the effort you've gone to running benchmarks and providing data to the community, but the CPU vs. CPU benchmark without GPU acceleration should have been the fundamental starting point. Then you can consider GPU acceleration and the convenience/quality of an integrated or dedicated GPU. And again: iGPU acceleration is super convenient, fast, and good enough for most people, but it's not apples to apples compared to pure CPU.
 
Last edited: