
News PassMark (cpubenchmark.net) pulls a UserBenchmark and downgrades all AMD chips

moinmoin

Platinum Member
Jun 1, 2017
After UserBenchmark (see the previous thread on here), PassMark is the next benchmarking site to make all its numbers useless by boosting some obscure parameters in favor of Intel chips:
The first AMD chip in the single thread performance listing is now the AMD Ryzen 3 4300U, at position 35...

Yesterday there were still a few more AMD chips:
http://web.archive.org/web/20200312201105/https://www.cpubenchmark.net/singleThread.html
Until a week ago AMD chips dominated the top:
http://web.archive.org/web/20200309064753/https://www.cpubenchmark.net/singleThread.html

That's quite the strong push for some alternate reality.

According to a Twitter response, there was a change in how the testing is done on or before March 6th. But as the above web.archive.org links show, the numbers have still been heavily tweaked after that point.


Going by the dates in the changelog, either build 1002 or 1003 made these changes: https://www.passmark.com/products/performancetest/history.php
The only hint in the notes that may fit is:
"CPU tests, Single Threaded, started scaling single threaded score down to be closer to PT9 for better comparability with older results."
Yeah, makes sense... not.
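For what it's worth, a uniform "scale down toward PT9" adjustment would look something like this (the factor and scores are invented for illustration; PassMark hasn't published its actual formula):

```python
# Hypothetical sketch of "scaling single threaded scores down toward PT9".
# The 0.78 factor and the scores below are made up; PassMark has not
# published the actual adjustment.

def rescale_to_pt9(pt10_score: float, factor: float = 0.78) -> float:
    """Scale a PT10 single-thread score by a fixed factor so it lands
    closer to the old PT9 score range."""
    return pt10_score * factor

pt10_scores = {"CPU A": 3100.0, "CPU B": 2600.0}
adjusted = {cpu: rescale_to_pt9(s) for cpu, s in pt10_scores.items()}
print(adjusted)  # both scores shrink, but "CPU A" stays ahead of "CPU B"
```

The point being: a single uniform factor cannot change the ranking. For AMD chips to drop 30+ positions, the adjustment has to differ per test mix or per architecture, which is exactly what the changelog note doesn't explain.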
 

lobz

Golden Member
Feb 10, 2017
After UserBenchmark (see the previous thread on here), PassMark is the next benchmarking site to make all its numbers useless by boosting some obscure parameters in favor of Intel chips:
The first AMD chip in the single thread performance listing is now the AMD Ryzen 3 4300U, at position 35...

Yesterday there were still a few more AMD chips:
http://web.archive.org/web/20200312201105/https://www.cpubenchmark.net/singleThread.html
Until a week ago AMD chips dominated the top:
http://web.archive.org/web/20200309064753/https://www.cpubenchmark.net/singleThread.html

That's quite the strong push for some alternate reality.

According to a Twitter response, there was a change in how the testing is done on or before March 6th. But as the above web.archive.org links show, the numbers have still been heavily tweaked after that point.


Going by the dates in the changelog, either build 1002 or 1003 made these changes: https://www.passmark.com/products/performancetest/history.php
The only hint in the notes that may fit is:
"CPU tests, Single Threaded, started scaling single threaded score down to be closer to PT9 for better comparability with older results."
Yeah, makes sense... not.
This is hilarious :D
 

Panino Manino

Senior member
Jan 28, 2017
Some Intel CPUs climbing the ladder... OK, no problem, they are indeed strong in single thread, with all the extra megahertz. But the first AMD chip on the list being a mobile part? This is ridiculous.
 

moinmoin

Platinum Member
Jun 1, 2017
There's a thread on the PassMark forum where the results are being discussed with one of the developers: https://www.passmark.com/forum/pc-hardware-and-benchmarks/46757-single-thread-score-rating

Looks like the ratings for Zen and Zen+ are actually more accurate now, but Zen 2 chips are off. Will be interesting to see how far they go into fixing that discrepancy, though the developer's transparency on display so far is at least promising.

Other than that, the listing is a mess of old and new data; the AMD Ryzen 3 4300U is too rare (it's effectively not yet released, after all), so its rating is still calculated using old benchmark data. This change could certainly have been handled much better.
 

RetroZombie

Senior member
Nov 5, 2019
I still say that it's WAY too coincidental that the two biggest benchmark sites have recently made changes that make AMD look bad.
Have you also noticed that those changes normally happen right around AMD's new product releases?

Like now, with Renoir soon to be released...
 

Det0x

Senior member
Sep 11, 2014
Another useless benchmark not to be run and included in CPU reviews then, I guess.

Hope they got a big payout from Intel, because they have pretty much killed their own "benchmark".
 

SAAA

Senior member
May 14, 2014
I don't want to be that guy, but I pointed out a few months ago, even here on the forums, that Zen 2 results were insane: +40% IPC over Zen+ for all single-threaded results, unmatched in any other bench. Looks like they noticed too and changed the bench to better reflect that arch's capabilities, among other changes. Besides this, the results are messy, as for some odd reason they are displaying both old and new tests mixed up; that's wrong on their part IMHO. But it certainly doesn't look like a downgrade, more like scaling back an odd outlier.
 

lobz

Golden Member
Feb 10, 2017
I don't want to be that guy, but I pointed out a few months ago, even here on the forums, that Zen 2 results were insane: +40% IPC over Zen+ for all single-threaded results, unmatched in any other bench. Looks like they noticed too and changed the bench to better reflect that arch's capabilities, among other changes. Besides this, the results are messy, as for some odd reason they are displaying both old and new tests mixed up; that's wrong on their part IMHO. But it certainly doesn't look like a downgrade, more like scaling back an odd outlier.
I don't want to be that guy either, but I'll point out to you right now that if that was flawed back then, what's there now is just utter nonsense. It doesn't take a lot of manpower to take ONE SINGLE SUPERFICIAL LOOK at your own charts before locking in the changes or making them public. It takes literally 20 seconds. Therefore there is no way it's not intentional that they're leaving it that way until they decide to do something about it.
 

RetroZombie

Senior member
Nov 5, 2019
+40% IPC over Zen+ for all single-threaded results, unmatched in any other bench.
Maybe this is a bad example:


There's a 42% performance increase over the 2700X there. And it's not just the CPU being tested here, but also the GPU.
Now, should we discard the result? Is it fake? Is it not the norm? Better to trust some random synthetic test?

I like the CPU-Z tests just to check whether a CPU is getting the correct result or performing correctly.
But using CPU-Z to compare some random Intel CPU vs. some random AMD CPU and, based on the result the program gives, ending up saying that CPU A is better than CPU B because CPU-Z said so? Just NO.
 

SAAA

Senior member
May 14, 2014
I don't want to be that guy either, but I'll point out to you right now that if that was flawed back then, what's there now is just utter nonsense. It doesn't take a lot of manpower to take ONE SINGLE SUPERFICIAL LOOK at your own charts before locking in the changes or making them public. It takes literally 20 seconds. Therefore there is no way it's not intentional that they're leaving it that way until they decide to do something about it.
The nonsense bench charts you can look at are of course the result of their own mess-up: they include over-ten-year-old results and only a few new ones, if any at all for many rare CPUs. Basically this turns their benchmark into a useless chart for the next 2-3 years, until there are enough results to get a worthy statistical average. They should have done it like Geekbench, with different generations of the test on different pages (not that Geekbench does much better, with its subvariants and operating systems affecting the scores...).
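To put the statistical point in concrete terms, here's a toy simulation (all numbers invented, not PassMark data): the fewer submitted results a chart entry averages, the noisier its position on the chart.

```python
# Toy model: a chart entry is the mean of user-submitted results that
# scatter randomly around a CPU's "true" score. Numbers are invented.
import random
import statistics

random.seed(42)

def chart_entry(true_score: float, n_samples: int, noise: float = 0.08) -> float:
    """Average n submitted results with ~8% run-to-run scatter."""
    samples = [true_score * random.gauss(1.0, noise) for _ in range(n_samples)]
    return statistics.mean(samples)

few = [chart_entry(2500, 3) for _ in range(1000)]     # rare, just-released CPU
many = [chart_entry(2500, 200) for _ in range(1000)]  # popular, established CPU
# Entry-to-entry spread shrinks roughly with sqrt(n_samples).
print(round(statistics.stdev(few)), round(statistics.stdev(many)))
```

Which is why mixing a handful of fresh results for rare chips into a chart full of heavily averaged old ones produces garbage rankings for a while.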
 

Kuiva maa

Member
May 1, 2014
I don't want to be that guy, but I pointed out a few months ago, even here on the forums, that Zen 2 results were insane: +40% IPC over Zen+ for all single-threaded results, unmatched in any other bench. Looks like they noticed too and changed the bench to better reflect that arch's capabilities, among other changes. Besides this, the results are messy, as for some odd reason they are displaying both old and new tests mixed up; that's wrong on their part IMHO. But it certainly doesn't look like a downgrade, more like scaling back an odd outlier.
Well, something similar happened with the CPU-Z bench and first-gen Zen CPUs: they rewrote their bench after it became an outlier for Ryzen 1000 series CPUs. While I understand the reasons behind this policy, I have to comment that a bench that fails to scale properly with new tech (in comparison to other suites that do their job fine) perhaps wasn't that good a judge of hardware in general, after all. All benchmarks do become obsolete eventually, especially when lots of new instructions cause a paradigm shift in how applications are executed, but having a single arch force your hand into changing the way you measure things takes away your credibility. You are supposed to be in the business of quantifying the performance delta between hardware. If a piece of hardware does exceptionally well in the tests you put it through, then so be it.
 

SAAA

Senior member
May 14, 2014
Maybe this is a bad example:


There's a 42% performance increase over the 2700X there. And it's not just the CPU being tested here, but also the GPU.
Now, should we discard the result? Is it fake? Is it not the norm? Better to trust some random synthetic test?

I like the CPU-Z tests just to check whether a CPU is getting the correct result or performing correctly.
But using CPU-Z to compare some random Intel CPU vs. some random AMD CPU and, based on the result the program gives, ending up saying that CPU A is better than CPU B because CPU-Z said so? Just NO.
That's definitely a good example of a real-world result being very different from averages. Unfortunately, if we want a more universal picture, single outliers can't count as a generic bench result. Yes, some subtest might be 100% faster, but if the average IPC uplift is said even by AMD themselves to be around 10-15%, there's no "good" bench that should point to 2.5x that.

As a sidenote: whatever new algorithms they implemented displayed 3x single-threaded scores on most CPUs at first, which tells you something about how poorly optimized most software is... If this were a game and one update made everyone get 3x the frames per second, no one would complain about some mixing up in the tables.
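Rough back-of-the-envelope with invented subtest ratios: even if one cache-bound subtest really does show 2.5x, a geometric mean over otherwise typical 10-15% uplifts lands nowhere near +40% across the board.

```python
import math

def geomean(ratios):
    """Geometric mean, the usual way benchmark suites aggregate subtest ratios."""
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical per-subtest Zen+ -> Zen 2 uplift ratios (illustrative only):
uplifts = [1.12, 1.10, 1.15, 1.13, 1.11, 1.14, 1.12, 2.50]  # one 2.5x outlier
print(round(geomean(uplifts), 2))  # -> 1.24, i.e. about +24%, not +40%
```

So one extreme outlier drags the aggregate up a bit, but a bench claiming the outlier as the norm is measuring something else entirely.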
 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
That's definitely a good example of a real-world result being very different from averages. Unfortunately, if we want a more universal picture, single outliers can't count as a generic bench result. Yes, some subtest might be 100% faster, but if the average IPC uplift is said even by AMD themselves to be around 10-15%, there's no "good" bench that should point to 2.5x that.

As a sidenote: whatever new algorithms they implemented displayed 3x single-threaded scores on most CPUs at first, which tells you something about how poorly optimized most software is... If this were a game and one update made everyone get 3x the frames per second, no one would complain about some mixing up in the tables.
We all know (those of us that read this forum a lot) that CS:GO is an outlier, as it must love the big L3 cache. But as you say, it IS 10-15% faster than Zen 1 in everything due to design improvements. Their tests are just crap.
 

RetroZombie

Senior member
Nov 5, 2019
We all know (those of us that read this forum a lot) that CS:GO is an outlier, as it must love the big L3 cache.
Yes Mark, correct. But should the result be discarded because of that?

I also remember, in the Core 2 Duo era, that xbitlabs tested the Intel Core 2 Duo with only 512KB of L2 cache for single-core tests instead of the full 4MB/6MB, and its performance tanked to AMD K8 512KB-L2 levels, if not slower. So were the Core 2 Duo results somehow misleading because of that (huge L2 cache)?

The cache is part of the CPU design and it's there to improve performance. Because of that, I'm very curious about Renoir's performance on mobile and desktop vs. Matisse.
 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
Yes Mark, correct. But should the result be discarded because of that?

I also remember, in the Core 2 Duo era, that xbitlabs tested the Intel Core 2 Duo with only 512KB of L2 cache for single-core tests instead of the full 4MB/6MB, and its performance tanked to AMD K8 512KB-L2 levels, if not slower. So were the Core 2 Duo results somehow misleading because of that (huge L2 cache)?

The cache is part of the CPU design and it's there to improve performance. Because of that, I'm very curious about Renoir's performance on mobile and desktop vs. Matisse.
Oh no, the result is real. I only brought it up because it seemed like some did not believe the result, since it was so good.
 

coercitiv

Diamond Member
Jan 24, 2014
Let's have some fun:

Over a month ago I posted a PassMark comparison between the Ryzen 3 4300U and ICL/CML i7 CPUs. I was obviously favoring Zen 2 somewhat, but as I said back then, it was fun to watch:



Fast forward today, let's take another look at the... "updated" results from the same link used to create the comparison:
[Screenshot: AMD Ryzen 3 4300U vs Intel Core i7-1065G7 1.30GHz vs Intel Core i7-10710... comparison]

Ice Lake vs. Comet Lake situation is downright comical.
 

Arkaign

Lifer
Oct 27, 2006
Man, it went from bad to outright broken, lol. They simply haven't been anything to take seriously in a very, very long time.
 

eek2121

Senior member
Aug 2, 2005
Who can trust any benchmark that changes things as a result of new CPUs being rolled out? The idea behind a benchmark is that it represents a single specific workload. That workload should be independent of the architecture it runs on.

The only reason the scores would change is if the benchmark itself was flawed in some way. If that is the case, they need to be transparent about it, and about why.
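That's really all a benchmark is supposed to be, conceptually something like this (a toy workload, obviously not what PassMark actually runs):

```python
# Sketch of the principle: the workload is pinned; only the hardware's
# speed at running it may vary. The workload itself is arbitrary.
import time

def fixed_workload(n: int = 200_000) -> int:
    """A deterministic integer loop: the same instructions on every CPU."""
    acc = 0
    for i in range(1, n):
        acc = (acc * 31 + i) % 1_000_003
    return acc

def score(runs: int = 5) -> float:
    """Best wall-clock time over a few runs; lower is faster."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fixed_workload()
        times.append(time.perf_counter() - start)
    return min(times)

# Changing fixed_workload() between versions changes *what* is measured,
# which is why silently rewriting or rescaling tests destroys comparability.
```

Once the workload itself changes between versions, old and new scores measure different things and shouldn't share a chart.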
 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
Who can trust any benchmark that changes things as a result of new CPUs being rolled out? The idea behind a benchmark is that it represents a single specific workload. That workload should be independent of the architecture it runs on.

The only reason the scores would change is if the benchmark itself was flawed in some way. If that is the case, they need to be transparent about it, and about why.
It's flawed because it didn't show Intel in a good enough light. :)
 
