Discussion: Which desktop CPU is the most computationally efficient at 65W?


greencpu

Junior Member
Mar 14, 2023
12
3
41
Are there any test tables comparing CPU computing efficiency versus power consumption? I can't find any.
I use Windows 10 for video editing, where a single encoding task may take 5-15 hours, which in itself isn't a problem.
When upgrading the computer I'd choose a more contemporary CPU that doesn't require massive cooling but is still computationally effective.
The question might be: which desktop CPU is the most computationally efficient at 65W?
Are there any such comparison tests?
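One way to frame the question, if no ready-made table exists, is energy per finished encode rather than raw power draw: a faster chip at a higher wattage can still use less electricity per job. A minimal sketch of that arithmetic; the CPU labels, power figures and encode times below are made-up placeholders, not measurements:

```python
# Hypothetical comparison: energy per encode, not watts alone.
# Both the power figures and the encode times are made-up placeholders.
jobs = {
    "CPU A, capped at 65 W": (65, 14.0),   # (average package watts, hours per encode)
    "CPU B, stock at 125 W": (125, 8.0),
}

for name, (watts, hours) in jobs.items():
    kwh = watts * hours / 1000             # energy used by one encode, in kWh
    print(f"{name}: {kwh:.2f} kWh per encode")

# The more efficient chip is the one with the lower kWh per finished job,
# which is not necessarily the one with the lower power limit.
```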
 

TheELF

Diamond Member
Dec 22, 2012
3,973
730
126
Dedicated HW doesn't come close to CPU in terms of quality/bitrate.
So if you care about storage, you have to go with CPU encoding.
Any comparison I have ever seen on this topic uses bitrates that are completely laughably low and that nobody who is serious about transcoding would ever use.
And even then the differences are barely noticeable, even in still frames.
Do you have an article on this that you are basing your opinion on that shows something different?!
 

Geegeeoh

Member
Oct 16, 2011
145
126
116
It doesn't take much to test yourself.
Take a video (or a couple of different kinds) and try the same settings with CPU and GPU.
That's what I did.
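If you want to script that test, here is a minimal sketch driving ffmpeg from Python. It assumes an ffmpeg build with libx265, hevc_nvenc and libvmaf available; input.mp4 and the quality values are placeholders, and note that x265's CRF and NVENC's CQ scales are not directly comparable, so for a strict test match the output bitrates instead.

```python
import subprocess

SRC = "input.mp4"  # placeholder source clip

# Same nominal quality target, CPU (libx265) vs NVIDIA hardware (hevc_nvenc).
encodes = {
    "cpu_x265.mp4":  ["-c:v", "libx265",    "-preset", "slow", "-crf", "22"],
    "gpu_nvenc.mp4": ["-c:v", "hevc_nvenc", "-preset", "p7",   "-cq",  "22"],
}

for out, args in encodes.items():
    subprocess.run(["ffmpeg", "-y", "-i", SRC, *args, "-an", out], check=True)
    # Score the encode against the source with VMAF (needs libvmaf in ffmpeg).
    subprocess.run(["ffmpeg", "-i", out, "-i", SRC,
                    "-lavfi", "libvmaf", "-f", "null", "-"], check=True)

# Then compare file sizes, encode times and VMAF scores between the two outputs.
```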
 

mmaenpaa

Member
Aug 4, 2009
78
138
106
Well, if you set them at the same wattage they will be much slower, and therefore much less efficient.

Examples? Sure: browsing the internet while working on Excel sheets with some videos playing in the background, Intel does that at 8 to 15 W, AMD takes 50 W.

The whole Adobe suite, like Premiere, Photoshop and the like: those tasks use roughly 3 to 6 cores, and there Intel is king in efficiency. Technotice, a YouTube channel that specifically makes videos testing content-creator tasks, made a comparison video, and Intel wiped the floor with AMD in those tasks.

Basically, anything that doesn't max out your cores - even idle - Intel >>> AMD.
OK,

I now see what you mean. Based on his test results on creator workflows, the 7950X has worse efficiency than the 13900K.

The columns indicate yearly electricity usage running the test workflows, in kWh.

So one could say AMD (7950X) sucks at mixed workload efficiency and Intel (13900K) sucks at multicore workload efficiency :laughing:


I believe the OP is looking for multicore efficiency, so the 7950X seems the logical choice.

Workload     13900K (kWh/yr)   7950X (kWh/yr)   7950X % worse
Premiere     59.68             76.41            28.03%
Photoshop    25.56             32.47            27.03%
Lightroom    42.25             61.08            44.57%
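The "% worse" column is just the ratio of the two yearly kWh figures; a quick check of the values (taken from the table above, not re-measured):

```python
# Yearly energy per workflow in kWh, copied from the table above: (13900K, 7950X).
usage = {
    "Premiere":  (59.68, 76.41),
    "Photoshop": (25.56, 32.47),
    "Lightroom": (42.25, 61.08),
}

for workload, (intel_kwh, amd_kwh) in usage.items():
    pct_worse = (amd_kwh / intel_kwh - 1) * 100
    print(f"{workload}: the 7950X uses {pct_worse:.2f}% more energy per year")
```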
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,560
14,513
136
Well, if you set them at the same wattage they will be much slower, and therefore much less efficient.

Examples? Sure: browsing the internet while working on Excel sheets with some videos playing in the background, Intel does that at 8 to 15 W, AMD takes 50 W.

The whole Adobe suite, like Premiere, Photoshop and the like: those tasks use roughly 3 to 6 cores, and there Intel is king in efficiency. Technotice, a YouTube channel that specifically makes videos testing content-creator tasks, made a comparison video, and Intel wiped the floor with AMD in those tasks.

Basically, anything that doesn't max out your cores - even idle - Intel >>> AMD.
The OP is talking about encoding, not browsing the internet. For what he does, AMD is more efficient, and I proved it. You are just trolling again.
 

greencpu

Junior Member
Mar 14, 2023
12
3
41
I wondered if there exists some magic mid-range CPU where power consumption is still low and computing performance is acceptable.
But I understand there may be a conflict: when manufacturers move to the next, smaller silicon process node, they use all of that benefit to increase frequencies and to pack more cores into a single CPU. The opposite would be an i7 on the smaller node with lower power consumption.

I have a computer with a 3-4 year old i5 CPU.
Browsing the internet and working with Excel, it runs smoothly. Only photo editing causes slowdowns, and the worst, most visible and time-consuming task is the occasional H.265 encode. During a 12-hour encode I can use the computer for other common tasks, so that isn't a problem in itself.
But quite often it turns out that the encoding settings are not optimal and several runs are required, and then the computer's "slowness" starts to make me "nervous".
Also, when browsing a 24-megapixel photo album in Adobe Lightroom in full view, it may take several seconds to open the next photo. So I choose to generate previews for the whole album before viewing to make browsing faster, which may take 10-15 minutes depending on how many photos there are.

When I decided to upgrade the hardware and there was a chance to buy an i9-12900K at a reasonable price, I didn't hesitate.
Questions started when I discovered the i9's enormous cooling requirements and realised that most of this i9's performance comes from the huge amount of electrical power it consumes. Obviously I'm not in this CPU's target group at all. :-(
I also have a spare Intel i7-11700 with a Gigabyte B460 motherboard on the shelf. Not as fast a setup as the i9-12900K, but I expect it to still be 2x better than my present i5, except that this i7 is already 3-year-old technology.
So I asked for suggestions.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
Dedicated HW doesn't come close to CPU in terms of quality/bitrate.
So if you care about storage, you have to go with CPU encoding.

Which is why I was hoping they had another mode. Doing HW encoding is like using the "Superfast" preset, but there is no HW option to do "Very Slow" type encoding, trading extra encoding time for better size/quality.

I see no reason why this MUST be the case, but it does seem like the HW encoders focus on speed and not size/quality.

My only experience is with Quick Sync of H265.

It does sound like AV1 HW encoding is doing a much better job.

It's also hard to find decent quality comparisons:

This one finds NVenc easily coming out on top, but they don't test software H265 (Grrrrr!):
 

TheELF

Diamond Member
Dec 22, 2012
3,973
730
126
Which is why I was hoping they had another mode. Doing HW encoding is like using the "Superfast" preset, but there is no HW option to do "Very Slow" type encoding, trading extra encoding time for better size/quality.

I see no reason why this MUST be the case, but it does seem like the HW encoders focus on speed and not size/quality.

My only experience is with Quick Sync of H265.

It does sound like AV1 HW encoding is doing a much better job.

It's also hard to find decent quality comparisons:

This one finds NVenc easily coming out on top, but they don't test software H265 (Grrrrr!):
With software, each quality setting can be a completely different algorithm; with hardware, each quality setting would have to be a completely different circuit. They would have to implement the hardware multiple times, and that just won't happen. You can adjust the bitrate, and that's how you set the quality.
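The software side of that is easy to see for yourself: with libx265 the preset switch swaps in slower, more thorough search algorithms at the same CRF. A minimal sketch, again assuming an ffmpeg build with libx265 and using input.mp4 as a placeholder clip:

```python
import os, subprocess, time

SRC = "input.mp4"  # placeholder source clip

# Same CRF, two x265 presets: the preset changes the encoder's search effort,
# so expect very different encode times and somewhat different file sizes.
for preset in ("superfast", "veryslow"):
    out = f"x265_{preset}.mp4"
    t0 = time.time()
    subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:v", "libx265",
                    "-preset", preset, "-crf", "22", "-an", out], check=True)
    size_mb = os.path.getsize(out) / 1e6
    print(f"{preset}: {time.time() - t0:.0f} s, {size_mb:.1f} MB")
```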
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
Here is another H265 encoder comparison where NVenc again wins among the HW encoders. The author considers it close enough to software encoding to use for his own videos:

Has HEVC hardware encoding caught up to the quality of software encoding?
No.

It seems that the three titans of the GPU industry still haven’t figured out how to build encoding hardware pipelines that are both fast and high quality.

Would I use hardware encoders for my own videos?
Yes.

The two Samsung tests were done at the extreme end of compression, down to 1/20th of the original size. The Glass Blowing Demo VMAF deep dive gives a better idea of what to expect from a re-encode going from 15 Mb/s to 10 Mb/s.