Discussion What is the best CPU benchmark?


FlameTail

Platinum Member
Dec 15, 2021
2,356
1,273
106
Title says it all. Need I elaborate?

In my opinion the best benchmark is the one that gives a good all round overview of a CPU's performance across a wide range of use scenarios. Of course, you are free to have your own definition of what the best CPU benchmark is, and if so, please post below!

So, what is the best CPU benchmark?
 

DrMrLordX

Lifer
Apr 27, 2000
21,637
10,854
136
This thread is supposed to be in a grave, though. It's only active due to thread necromancy.

[Image: "thread necromancer" card]


Apparently the Ubuntu forums have this problem as well.
 

FlameTail

Platinum Member
Dec 15, 2021
2,356
1,273
106
In my mind the only correct answer is: The one that demonstrates the CPU's performance for YOUR intended use.

In my view, many of the all-in-one suites are great as an all-around score, but that score is NOT going to provide the information I'd use to make a recommendation to someone, assuming I understand exactly what they primarily intend to do with the machine.

If someone says they will primarily be doing transcodes and encodes, with maybe some weekend gaming, and you understand that they make money from those transcodes and encodes, you don't recommend the same CPU as you would to the next person who primarily games, surfs the net, and occasionally (a couple of times a month) does a transcode of 1 to 2 hours of source material.

And with that, it's easy to see how people end up with two completely different rigs for two different intended goals.
But let's say we want to make an objective assessment of which of two CPUs has better performance. This is not for customer buying advice. No, we are tech enthusiasts, and we want to objectively know which one is the more powerful CPU.

In this scenario, what benchmark would we use? The general-purpose ones (Geekbench, SPEC) or the specific-purpose ones (Cinebench, Blender, etc.)?
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,628
158
106
But let's say we want to make an objective assessment of which of two CPUs has better performance. This is not for customer buying advice. No, we are tech enthusiasts, and we want to objectively know which one is the more powerful CPU.

In this scenario, what benchmark would we use? The general-purpose ones (Geekbench, SPEC) or the specific-purpose ones (Cinebench, Blender, etc.)?
Better performance at what?
You choose your task and then run the CPUs. The one that does it in the least time, renders more frames, achieves higher quality, etc., is the most powerful.

Any other method is just a rough guideline to inform a purchasing decision for general-purpose machines, most likely personal machines.
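A rough sketch of that approach, assuming a Linux box with xz installed; `sample.bin` is just a stand-in for whatever data your real workload (transcode source, render scene, etc.) operates on:

```shell
# Sketch: "choose your task and time it" rather than trusting a composite score.
# Generate a stand-in input file (replace with your real workload data).
dd if=/dev/urandom of=sample.bin bs=1M count=16 2>/dev/null

best=999999999
for i in 1 2 3; do                 # a few runs; keep the best wall time
  start=$(date +%s%N)
  xz -6 -T1 -k -f sample.bin       # the actual task you care about
  end=$(date +%s%N)
  ms=$(( (end - start) / 1000000 ))
  if [ "$ms" -lt "$best" ]; then best=$ms; fi
done
echo "best wall time: ${best} ms"
```

Run the same script on each machine with the same input and tool versions; the lower wall time wins for that particular task, which is the whole point.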
 

moinmoin

Diamond Member
Jun 1, 2017
4,952
7,666
136
But let's say we want to make an objective assessment to find out which of 2 CPUs has better performance?
The concrete workload @HutchinsonJC talked about is actually the only objective assessment you can make. As soon as you make a mix of different workloads it becomes a subjective assessment since the selection of workloads itself is already subjective.
 

deasd

Senior member
Dec 31, 2013
520
761
136
I don't know whether the below is worth a new topic or not, but it's shocking.


SPEC says it will no longer be publishing SPEC CPU 2017 results for Intel CPUs running a specific version of the Intel compiler, citing displeasure over an apparent targeted optimization for a specific workload (via ServeTheHome and Phoronix) that essentially amounts to cheating. A note has been added to the more than 2,600 benchmark results published with the offending compiler, effectively invalidating those results, mostly from machines running 4th Gen Xeon Sapphire Rapids CPUs.
 

Nothingness

Platinum Member
Jul 3, 2013
2,421
753
136
I don't know whether the below is worth a new topic or not, but it's shocking.

What surprises me is that they keep on cheating with their compiler despite having been caught several times (SPEC, AnTuTu, etc.). Every benchmark result with icc should have been banned from SPEC more than 10 years ago.

Note they’re not alone to have been caught. Sun did that too.
 
  • Like
Reactions: Tlh97 and moinmoin

SarahKerrigan

Senior member
Oct 12, 2014
372
536
136
What surprises me is that they keep on cheating with their compiler despite having been caught several times (SPEC, AnTuTu, etc.). Every benchmark result with icc should have been banned from SPEC more than 10 years ago.

Note they’re not alone to have been caught. Sun did that too.

Vendor SPEC submissions are all kinda crap. When at all possible, I run my own.

(On a semi-related note, adding OpenMP to the xz speed subtest was an abomination. Absolutely incomprehensible choice by SPEC.)
 
  • Like
Reactions: Tlh97 and Doug S

Doug S

Platinum Member
Feb 8, 2020
2,267
3,519
136
Vendor SPEC submissions are all kinda crap. When at all possible, I run my own.

(On a semi-related note, adding OpenMP to the xz speed subtest was an abomination. Absolutely incomprehensible choice by SPEC.)

Between vendors having too much incentive to cheat with their in-house compilers and SPEC's abominable decision, as you note, to allow autopar in speed results (so they aren't necessarily true single-core results), vendor results are completely useless.

I don't want to dig through Intel or AMD submissions to figure out what games they've played; better to rely on a neutral third party like AnandTech (in the good old days *sigh*) for results. Those are more applicable to the real world, since using an ordinary compiler like LLVM or GCC is more realistic. As long as all the CPUs you are comparing used the same or a similar compiler version, those results are 10x more useful than looking up whatever fantasy numbers a vendor submitted for its own CPUs.
 

SarahKerrigan

Senior member
Oct 12, 2014
372
536
136
Between vendors having too much incentive to cheat with their in-house compilers and SPEC's abominable decision, as you note, to allow autopar in speed results (so they aren't necessarily true single-core results), vendor results are completely useless.

I don't want to dig through Intel or AMD submissions to figure out what games they've played; better to rely on a neutral third party like AnandTech (in the good old days *sigh*) for results. Those are more applicable to the real world, since using an ordinary compiler like LLVM or GCC is more realistic. As long as all the CPUs you are comparing used the same or a similar compiler version, those results are 10x more useful than looking up whatever fantasy numbers a vendor submitted for its own CPUs.

xz isn't just autopar, but straight-up multithreaded with OpenMP. Autopar was and is stupid to allow, but putting a multithreaded-by-default subtest into the intspeed suite is just an utterly surreal choice. It makes that subtest almost entirely useless for comparison.

When I run SPEC, it's always gcc or clang at -O3 with a couple of normal optimization options, and with xz's OpenMP suppressed. No trick compilers, no SmartHeap, no autopar. It often paints a very different picture than vendor submissions.
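For anyone who hasn't set one up, that kind of run maps onto just a few lines of a SPEC CPU 2017 config file. The sketch below is modeled loosely on SPEC's stock gcc example config; the label is arbitrary, and pinning OpenMP to one thread via `preENV_OMP_NUM_THREADS` is one plausible way to suppress the xz threading, not necessarily how Sarah does it:

```
# Sketch of a plain-gcc SPEC CPU 2017 config in the spirit described above.
# Field names follow SPEC's stock gcc example config; the label is arbitrary
# and the OpenMP handling is an assumption, not a tested recipe.
label     = plain-gcc-O3
tune      = base

default:
   CC       = gcc
   CXX      = g++
   FC       = gfortran
   OPTIMIZE = -O3

intspeed:
   # Pin the OpenMP-enabled speed tests (e.g. 657.xz_s) to one thread
   preENV_OMP_NUM_THREADS = 1
```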
 

Doug S

Platinum Member
Feb 8, 2020
2,267
3,519
136
xz isn't just autopar, but straight-up multithreaded with OpenMP. Autopar was and is stupid to allow, but putting a multithreaded-by-default subtest into the intspeed suite is just an utterly surreal choice. It makes that subtest almost entirely useless for comparison.

When I run SPEC, it's always gcc or clang at -O3 with a couple of normal optimization options, and with xz's OpenMP suppressed. No trick compilers, no SmartHeap, no autopar. It often paints a very different picture than vendor submissions.

Ugh I forgot all about that Quill SmartHeap BS :mad:

If you're running SPEC yourself are you posting your results anywhere, or is this part of your job so they are considered the IP of your employer? Because with the gaping hole left by Andrei and Ian's departures, there are precious few third party SPEC results available from someone who knows how to run the darn thing in a way that reports useful results!
 

SarahKerrigan

Senior member
Oct 12, 2014
372
536
136
Ugh I forgot all about that Quill SmartHeap BS :mad:

If you're running SPEC yourself are you posting your results anywhere, or is this part of your job so they are considered the IP of your employer? Because with the gaping hole left by Andrei and Ian's departures, there are precious few third party SPEC results available from someone who knows how to run the darn thing in a way that reports useful results!

I'm running them myself. I have results for some miscellaneous x86/ARM/PPC/Itanium/SPARC hardware, mainly server, across twenty or so systems. I haven't published them anywhere because I'm not convinced they're useful to anyone but me (they have some variation in compiler version, though all the same settings; usually only one run, rather than the SPEC-required three; no SPECFP and no Rate because neither is useful to me) and because I'd have to make sure to weed out any systems I'm not supposed to release results for.
 

Doug S

Platinum Member
Feb 8, 2020
2,267
3,519
136
I'm running them myself. I have results for some miscellaneous x86/ARM/PPC/Itanium/SPARC hardware, mainly server, across twenty or so systems. I haven't published them anywhere because I'm not convinced they're useful to anyone but me (they have some variation in compiler version, though all the same settings; usually only one run, rather than the SPEC-required three; no SPECFP and no Rate because neither is useful to me) and because I'd have to make sure to weed out any systems I'm not supposed to release results for.

If you have time to do the weeding (or if not just for future runs) I'm sure people here would find them interesting. I care most about single core int so they'd be useful to me at least. Maybe you could start a thread here and update as you do new runs?

You might get offers from people willing to give you access to their hardware if there are systems you'd like to benchmark but don't have available. There will always be someone here buying the latest Intel, AMD, or Apple hardware, and I'm sure Qualcomm's Elite X when it hits the market. It'd be nice to see how they really compare, rather than looking at their submitted results or believing their marketing droids when they shower us with graphs containing unlabeled axes, unknown benchmarks, and deliberately non-optimal scenarios for the "competition".
 

SarahKerrigan

Senior member
Oct 12, 2014
372
536
136
If you have time to do the weeding (or if not just for future runs) I'm sure people here would find them interesting. I care most about single core int so they'd be useful to me at least. Maybe you could start a thread here and update as you do new runs?

You might get offers from people willing to give you access to their hardware if there are systems you'd like to benchmark but don't have available. There will always be someone here buying the latest Intel, AMD, or Apple hardware, and I'm sure Qualcomm's Elite X when it hits the market. It'd be nice to see how they really compare, rather than looking at their submitted results or believing their marketing droids when they shower us with graphs containing unlabeled axes, unknown benchmarks, and deliberately non-optimal scenarios for the "competition".

I'll think on it. I'd certainly be open to doing it for new runs, if permissible.

One thing I'm really not excited about is the prospect of needing to run SPEC on Android or iPhone OS, or for that matter NT. I know Anandtech, among others, have done it, but I'm lazy and like being able to just write up a config and go. Sticking to server hardware (whether Linux or commercial UNIX flavored) has made that easy.