Linus Torvalds: "Geekbench is SH*T"

#1
http://www.realworldtech.com/forum/?threadid=136526&curpostid=136666

Geekbench is SH*T.

It actually seems to have gotten worse with version 3, which you should be aware of. On ARM64, that SHA1 performance is hardware-assisted. I don't know if SHA2 is too, but AArch64 does apparently do SHA256 in the crypto unit, so it might be fully or partially so.

And on both ARM and x86, the AES numbers are similarly just about the crypto unit.

So basically a quarter to a third of the "integer" workloads are just utter BS. They are not comparable across architectures due to the crypto units, and even within one architecture the numbers just don't mean much of anything.
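The hardware assist Linus is describing shows up as CPU feature flags on Linux; here is a minimal sketch of checking for it by parsing `/proc/cpuinfo` text (the flag names `aes`, `sha1`, `sha2`, `pmull` on ARMv8 and `aes`, `sha_ni` on x86 are the standard Linux names; the sample input would come from reading `/proc/cpuinfo` on a real machine):

```python
# Sketch: detect whether a CPU advertises hardware crypto support by parsing
# the "flags" (x86) or "Features" (ARM) line of Linux's /proc/cpuinfo.
CRYPTO_FLAGS = {"aes", "sha_ni", "sha1", "sha2", "pmull"}

def hw_crypto_flags(cpuinfo_text):
    """Return the set of hardware-crypto feature flags found in cpuinfo text."""
    found = set()
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() in ("flags", "features"):
            found |= CRYPTO_FLAGS & set(value.split())
    return found

# Typical usage on Linux:
#   with open("/proc/cpuinfo") as f:
#       print(hw_crypto_flags(f.read()))
```

If this returns a non-empty set, the GB3 AES/SHA subtests on that machine are exercising the crypto instructions rather than plain integer pipelines.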

And quite frankly, it's not even just the crypto ones. Looking at the other GB3 "benchmarks", they are mainly small kernels: not really much different from dhrystone. I suspect most of them have a code footprint that basically fits in a L1I cache.

Linus
 

Exophase

Diamond Member
#3
For Linus, anything that warrants any kind of criticism is worthless garbage. His hyperbole is way over the top; he's a great example of a really smart person who says some really stupid things. My favorite was his quip about how no one sane uses NEON because it's not on Tegra 2 (he said this mere weeks ago, mind you). Or if you want to go more classic, his "arguments" about why C++ is so much worse than C (in my own stuff I prefer C over C++, but none of his arguments make any sense).

He has his opinions, that's fair, and really he makes a lot of good points, but most of the time when he makes a sweeping generalization about how awful something is he's completely off his rocker. In this case, the argument of how much crypto should affect the scores is subjective - but the good thing is that you can ignore them if you don't like it; most reports of Geekbench are NOT just aggregate scores. Lots of real world stuff has a smallish icache footprint; saying that puts it on the level of Dhrystone (which also has a very small dcache footprint and a ton of other problems) is way over the top.
 
Last edited:
#4
For Linus anything that warrants any kind of criticism is worthless garbage. His hyperbole is way over the top, great example of a really smart person who says some really stupid things. My favorite was his quip about how no one sane uses NEON because it's not on Tegra 2 (he made this WEEKS ago, mind you)
Do you disagree with his characterization of GB3? You are a software expert, so I'd love to hear your thoughts on the matter.
 

Nothingness

Golden Member
#5
Linus makes mistakes like everyone else. Benchmark-wise, the only thing I agree with him on is that SPEC is a pile of sh*t except for gcc.

My favorite one is the <deleted expletive> nvidia, when nvidia is the only company to have correct drivers for Linux (Intel can't be counted given their IGP is useless for any form of serious gaming).


No profanity here
Markfw900
Anandtech Moderator.
 
Last edited by a moderator:

Nothingness

Golden Member
#6
Do you disagree with his characterization of GB3? You are a software expert, so I'd love to hear your thoughts on the matter.
I'll give my thoughts about Linus's remarks:
1. Most SPEC 2006 benchmarks have close to zero I-cache misses.
2. The Lua test in GB3 is rather large code-wise.
3. There are too many crypto and compression tests in GB3.

That being said, what is the alternative?
 
#7
So what some of us assumed all along is true. The Apple hype is...hype.
 

Exophase

Diamond Member
#8
Do you disagree with his characterization of GB3? You are a software expert, so I'd love to hear your thoughts on the matter.
Sorry, my post was updated ;p

Geekbench is far from perfect and the guys are learning; I've given my own suggestions to them. It's definitely not an "everything" benchmark, with a big emphasis on crypto and compression/decompression in the integer section - but those ARE real world things that have real world impact. Not every benchmark can, or should, be SPEC. As far as mobile benches go it offers a lot of things that many others fail to: using similar compilers when on the same OS, having actual native code that isn't totally browser dependent, not being hugely dependent on external library stuff that can be tuned a lot, and being way less about pointless toy benches than Sunspider or AnTuTu.

Just so it's clear, in no way am I saying my stature in programming is anything like Linus's, he's way more experienced and talented than me in all sorts of crap. But when talking on something like this he quickly goes into this mode where he calls the other person a blithering idiot and doesn't listen to a word they have to say, then starts hand waving around a lot and only gradually backs down a little (or at least he did that with me). I really think that all of the recognition he's gotten and how much people are willing to defer to him on anything has gone to his head.
 

Vesku

Diamond Member
#9
The crypto tests wouldn't be bad if they were put under their own heading of "Crypto Performance", with some additional tests added to probe the robustness of any dedicated hardware (bcrypt, for example). But you still have the overall benchmark issue: a single-number final result doesn't convey much useful information other than "this one runs this benchmark better".
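The skew a single aggregate number hides is easy to see with a toy example. The subscores below are invented for illustration, and a geometric mean is assumed here as the aggregation rule (the usual choice for benchmark suites, not necessarily exactly what Geekbench does):

```python
from math import prod

def geomean(scores):
    """Geometric mean: the usual way suites combine per-test subscores."""
    return prod(scores) ** (1.0 / len(scores))

# Invented subscores: ten ordinary integer kernels scoring 1000, plus three
# crypto kernels that a dedicated hardware unit speeds up 5x.
base_kernels = [1000] * 10
crypto_plain = base_kernels + [1000] * 3
crypto_accelerated = base_kernels + [5000] * 3

print(round(geomean(crypto_plain)))        # 1000: no hardware assist
print(round(geomean(crypto_accelerated)))  # ~1450: three subtests lift the whole aggregate
```

Three accelerated subtests out of thirteen raise the headline number by roughly 45% even though ten of the kernels are identical, which is why a separate "Crypto Performance" heading would be more informative than folding it into the integer score.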
 
#10
Every processor is a bundle of thousands of shortcuts for commonly used math functions. Should we disable the hardware square root function to keep things level? Barrel shifter? Out-of-order execution? Cherry picking optimizations is a way to make things biased real quick.

If you want to compare a specific benchmark, look at that one.
 

Nothingness

Golden Member
#11
But when talking on something like this he quickly goes into this mode where he calls the other person a blithering idiot and doesn't listen to a word they have to say, then starts hand waving around a lot and only gradually backs down a little (or at least he did that with me).
I corresponded with him by e-mail after an insulting comment he made about benchmarking some years ago. He was very civil and certainly less vocal ;)
 

Ajay

Diamond Member
#12
Linus is a bully, and has been for some time, at least on forums and whatnot. I remember not too long ago when he ripped the head off of some guy contributing to the kernel code. It's not that Linus wasn't correct, it's just that there was no need to pound the guy into the sand like that. Even if I had some expertise in OS kernels, I wouldn't contribute to the Linux kernel for fear that Linus, the Lord God of the Linux kernel, would be having a bad day the same day I made a mistake.
 
#14
Ugh, why exactly? Not that I'm an Apple drone, but why the h8?
Why the assumption of hate if you don't fall for the hype? That seems to be a common stereotype these days. :\
 

Vesku

Diamond Member
#16
Every processor is a bundle of thousands of shortcuts for commonly used math functions. Should we disable the hardware square root function to keep things level? Barrel shifter? Out-of-order execution? Cherry picking optimizations is a way to make things biased real quick.

If you want to compare a specific benchmark, look at that one.
Which is why I stated I'd like to see more Crypto functions tested by Geekbench in their own subheading.
 

pm

Elite Member Mobile Devices
Super Moderator
#18
I can see where Mr. Torvalds is coming from - the benchmark isn't a pure test of CPU performance because they are offloading a substantial portion of the crypto operations to a dedicated unit - but I don't necessarily see it as a bad thing, except for the amount of weighting given to the crypto results. If you want to do crypto operations, you'd use a dedicated crypto unit if it was available, and you'd want to know the performance of that operation on that dedicated unit... if other SoCs suffer because they don't have a dedicated crypto unit, then they will justifiably suffer in their scores. Crypto performance is a pretty big deal nowadays. But when the benchmark is so heavily skewed towards crypto ops - 1/3 of the score comes from one dedicated hardware unit - it does start to swamp out other differences. I wouldn't say that GB3 is bad - or even excrement - but it's good to highlight the inherent skew of the results due to the focus on dedicated silicon.

I will say also that I used to post at RealWorldTech but I've always found that site intimidating. Despite the fact that I have almost 20 years as a CPU designer and I've met David Kanter in person (at ISSCC... he may not remember it, but I do), I do feel like I'm playing in the big leagues every time that I post there and someone slamming me with some verbal putdown does tend to ruin my day (I'm a thin-skinned geek). So I read RWT, but I don't post... or at least not in a long long time.
 
Last edited:

Nothingness

Golden Member
#19
I can see where Mr. Torvalds is coming from - the benchmark isn't a pure test of CPU performance because they are offloading it to a dedicated unit - but I don't necessarily see it as a bad thing, except for the amount of weighting given to the crypto results. If you want to do crypto operations, you'd use a dedicated crypto unit if it was available, and you'd want to know the performance of that operation on that dedicated unit... if other SoCs suffer because they don't have a dedicated crypto unit, then they will justifiably suffer in their scores. Crypto performance is a pretty big deal nowadays. But when the benchmark is so heavily skewed towards crypto ops - 1/3 of the score comes from one dedicated hardware unit - it does start to swamp out other differences.
On both Haswell and ARMv8 chips these are not really crypto units, but dedicated instructions that do part of the work. That's different from what VIA or TI do; they have dedicated copros for that.

I will say also that I used to post at RealWorldTech but I've always found that site intimidating. Despite the fact that I have almost 20 years as a CPU designer and I've met David Kanter in person (at ISSCC... he may not remember it, but I do), I do feel like I'm playing in the big leagues every time that I post there and someone slamming me with some verbal putdown does tend to ruin my day (I'm a thin-skinned geek). So I read, but I don't post.
There are no more people slamming others on RWT than you find here. Avoid "someone" (Paul de Mone), Alberto (an Intel fanboy who seems to have lost his brain somewhere, assuming he ever had one), Wilco (an ARM fanboy but at least with a brain), and Linus when he's having a bad day, and the discussions are very civil and enlightening.

I certainly feel better on RWT than I do here ;)
 

Ajay

Diamond Member
#20
Wow, avoid Paul de Mone? I used to think he was a damn smart guy back when he posted on comp.arch (1990s). Has he gone nuts or something?
 

Nothingness

Golden Member
#21
Wow, avoid Paul de Mone? I used to think he was a damn smart guy back when he posted on comp.arch (1990s). Has he gone nuts or something?
Yes, I also liked his articles on RWT. But it seems his love for Intel's Itanium turned him into an angry and embittered man. He now keeps bashing people who don't agree with him. Note that he still makes some interesting posts, but he somehow reminds me of people who were once great, are no longer, and keep talking about better times, failing to notice that things have changed. He recently basically wrote that everything had been done in compiler technology; how can a smart guy say that unless he's blinded by his hate or love for a company?
 
#22
I can see where Mr. Torvalds is coming from - the benchmark isn't a pure test of CPU performance because they are offloading a substantial portion of the crypto operations to a dedicated unit - but I don't necessarily see it as a bad thing, except for the amount of weighting given to the crypto results. If you want to do crypto operations, you'd use a dedicated crypto unit if it was available, and you'd want to know the performance of that operation on that dedicated unit... if other SoCs suffer because they don't have a dedicated crypto unit, then they will justifiably suffer in their scores. Crypto performance is a pretty big deal nowadays. But when the benchmark is so heavily skewed towards crypto ops - 1/3 of the score comes from one dedicated hardware unit - it does start to swamp out other differences. I wouldn't say that GB3 is bad - or even excrement - but it's good to highlight the inherent skew of the results due to the focus on dedicated silicon.

I will say also that I used to post at RealWorldTech but I've always found that site intimidating. Despite the fact that I have almost 20 years as a CPU designer and I've met David Kanter in person (at ISSCC... he may not remember it, but I do), I do feel like I'm playing in the big leagues every time that I post there and someone slamming me with some verbal putdown does tend to ruin my day (I'm a thin-skinned geek). So I read RWT, but I don't post... or at least not in a long long time.
As an experienced CPU designer, what's your opinion of PA Semi's PA6T back in 2006? How about perf/MHz? PA Semi was merged into Apple, so does Apple now have the ability to challenge Intel in CPU design?

There was an article by David Kanter on Real World Tech:

http://www.realworldtech.com/pa-semi/
 
#24
As an experienced CPU designer, what's your opinion of PA Semi's PA6T back in 2006? How about perf/MHz? PA Semi was merged into Apple, so does Apple now have the ability to challenge Intel in CPU design?

There was an article by David Kanter on Real World Tech:

http://www.realworldtech.com/pa-semi/
Apple really got a talented bunch of folks with PASemi.
 