Apple A12 benchmarks


scannall

Golden Member
Jan 1, 2012
1,946
1,638
136
The validity of Geekbench as a cross-platform benchmark, for me, goes down the toilet when the simple act of switching to Linux gains you thousands of points of GB score on Ryzen. Seriously, go to the database and search 2700X by highest score. Everything Linux as far as the eye can see, and macOS on Hackintosh machines sometimes, before you see Windows.

The OS and compiler play such a huge role that you cannot use it as a valid comparison between architectures. It's only a valid comparison of the entire chain of software and hardware that gets it running.
It isn't a bad ballpark, though, and it's good for showing improvements from generation to generation in the same product line and OS. Some super-accurate thing across all platforms? No, not really. But it is still useful, and the data is valid provided you keep the shortcomings in mind.
 

Nothingness

Platinum Member
Jul 3, 2013
2,409
739
136
The validity of Geekbench as a cross-platform benchmark, for me, goes down the toilet when the simple act of switching to Linux gains you thousands of points of GB score on Ryzen. Seriously, go to the database and search 2700X by highest score. Everything Linux as far as the eye can see, and macOS on Hackintosh machines sometimes, before you see Windows.
All of these Linux scores were from the same machine, likely an overclocked one. Also, you're likely referring to the multi-core score, in which case the OS scheduler can play an important role.

In my experience the difference between compilers/libs for the single-core score is less than 10%.
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,769
3,144
136
I think the problem Ryzen 2000 exposes for Geekbench is that it shows there are a whole bunch of workloads not captured by the int/FP scores, workloads where latency and throughput together are important. The only place its improved latencies really show up is in the memory score, yet there are things like games that are seeing big per-clock performance uplifts.

I don't think it's bad; it's way better than v3. But clock rate detection sux, and as noted before, Android/Linux score a fair amount higher than Windows.
 

ksec

Senior member
Mar 5, 2010
420
117
116
I don't think Linus ever praised GB. He might have said that GB4 is not as bad as GB3, which was utter garbage; that is true. But trying to spin that as praise or even approval is a stretch. IIRC he still doesn't consider it good, though I can't find a more recent post quickly.

He didn't, because he understands precisely that you can't have simple magic numbers to represent an ISA or an implementation of an ISA, be it x86, ARMv8, POWER, or whatever. He considers it good enough; the thread was on the mailing list as well as on RealWorldTech. And he is not the only one with a decent opinion.

And no, a good GB4 number doesn't mean it will translate to real-world application performance. But if we even have to prove or argue over whether CPUs are designed around a specific set of user benchmarks, then there is no point discussing it further.
 

CatMerc

Golden Member
Jul 16, 2016
1,114
1,149
136
All of these Linux scores were from the same machine, likely an overclocked one. Also, you're likely referring to the multi-core score, in which case the OS scheduler can play an important role.

In my experience the difference between compilers/libs for the single-core score is less than 10%.
You assume no one overclocked and ran geekbench on Windows?

I've done comparisons, there's a pretty large difference clock for clock.

Edit: Looked around for scores with similar memory scores to minimize differences: https://browser.geekbench.com/v4/cpu/compare/8214774?baseline=8190535
 

Eug

Lifer
Mar 11, 2000
23,586
1,000
126
You assume no one overclocked and ran geekbench on Windows?

I've done comparisons, there's a pretty large difference clock for clock.

Edit: Looked around for scores with similar memory scores to minimize differences: https://browser.geekbench.com/v4/cpu/compare/8214774?baseline=8190535
I should point out that with GB4, there is variability of up to a couple hundred points even on the same machine.

The differences are even greater between machines if not all the background processes are turned off.

BTW, in your comparison, the memory bandwidth is 7% higher in the higher-scoring Linux bench.
 

Nothingness

Platinum Member
Jul 3, 2013
2,409
739
136
You assume no one overclocked and ran geekbench on Windows?
Don't put words in my mouth. I didn't say that. I said that all of the top scores were seemingly from an OC machine, nothing else.

I've done comparisons, there's a pretty large difference clock for clock.

Edit: Looked around for scores with similar memory scores to minimize differences: https://browser.geekbench.com/v4/cpu/compare/8214774?baseline=8190535
Within the 10% I mentioned for single thread.

Anyway, that the Windows vs Linux comparison favors Linux is a given. VS is not as good as it used to be. Comparing OS X vs iOS vs Linux vs Android is more interesting, as they all use a similar compiler.
 

CatMerc

Golden Member
Jul 16, 2016
1,114
1,149
136
10% is 3 generations of Intel IPC gains :p

The subscores are extremely telling. Look at LLVM, PDF Rendering, Lua: the difference is large. There's also a large swing in favor of Windows in HTML5 in the multithreaded test.

I should point out that with GB4, there is variability of up to a couple hundred points even on the same machine.

The differences are even greater between machines if not all the background processes are turned off.

BTW, in your comparison, the memory bandwidth is 7% higher in the higher-scoring Linux bench.
The difference in bandwidth is minor, and as close as you're going to get. Even run to run, the memory scores I've had on the exact same machine aren't exactly identical.

Don't get me wrong, Geekbench is a nice single data point for mobile devices, but don't get fooled into thinking they're anywhere near the desktop x86 behemoths.
 

french toast

Senior member
Feb 22, 2017
988
825
136
So, can we get a consensus on how to interpret cross-OS Geekbench comparisons?
10-15% favouring Linux/Android/iOS?
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Don't put words in my mouth. I didn't say that. I said that all of the top scores were seemingly from an OC machine, nothing else.
Within the 10% I mentioned for single thread.

Anyway, that the Windows vs Linux comparison favors Linux is a given. VS is not as good as it used to be. Comparing OS X vs iOS vs Linux vs Android is more interesting, as they all use a similar compiler.

The variation between different runs is much smaller than 10% unless the conditions are drastically different. I consistently get results in the 1% range (e.g. +/-50 points out of 5000).
Also, variation is not a property of Geekbench, but of every piece of software you run on your machine.

So, can we get a consensus on how to interpret cross-OS Geekbench comparisons?

You mean compiler, as the OS has almost no impact?
I assume Clang and Xcode produce around 5-10% faster code than MSVC. I wonder why there is no move to Clang under Windows - in particular since Clang/LLVM has been able to produce Windows PE code since version 3.8 or so.
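
For reference, a minimal sketch of what that looks like: the target triple is real, but the assumption that lld and the MSVC/Windows SDK headers and libraries are available on the build machine is mine.

Code:
/* hello.c -- minimal sketch of emitting a Windows PE binary with clang.
 * Build from any host (assuming lld and the Windows SDK are installed):
 *   clang --target=x86_64-pc-windows-msvc -fuse-ld=lld hello.c -o hello.exe
 */
#include <stdio.h>

int main(void)
{
    printf("Hello from a clang-built PE executable\n");
    return 0;
}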
 

dark zero

Platinum Member
Jun 2, 2015
2,655
138
106
The validity of Geekbench as a cross-platform benchmark, for me, goes down the toilet when the simple act of switching to Linux gains you thousands of points of GB score on Ryzen. Seriously, go to the database and search 2700X by highest score. Everything Linux as far as the eye can see, and macOS on Hackintosh machines sometimes, before you see Windows.

The OS and compiler play such a huge role that you cannot use it as a valid comparison between architectures. It's only a valid comparison of the entire chain of software and hardware that gets it running.
In fact, Windows is not a great OS to benchmark on, since it is too Intel-biased.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
3DMark is also a suite of tests, yet real-life gaming results do not conform to 3DMark. Why would it be any different for Geekbench?
 

FIVR

Diamond Member
Jun 1, 2016
3,753
911
106
So, can we get a consensus on how to interpret cross-OS Geekbench comparisons?
10-15% favouring Linux/Android/iOS?

The rule is: However much you need to subtract from the score to make Intel x86 look as good or better, you subtract that much.


If we set a goalpost percentage of 10-15% now, it will have to be moved when Apple is 20-25% ahead. It's much easier to use the above rule.
 

BeepBeep2

Member
Dec 14, 2016
86
44
61
I am interested in what the power consumption of Apple's SoCs is under load. I am under the impression that Apple is fine using up all of its available TDP for single-threaded performance.

The i7-8650U scores really high in GB4 under Kali Linux for WSL: 5.7k/18.7k. I assume this is at the 25 W TDP-up, but still, this is far from the "100 W" number being thrown around. These likely do 90% of that at the 15 W TDP.

If the A12 is using 4.5-7.5 W for those scores on TSMC 7 nm, it is impressive, but I'm not sure we can say they are blowing Intel's or AMD's doors off yet. I have a feeling Zen 2 is going to be outstanding at 7 nm, good enough that Intel needs to figure out how to raise the performance of its 10 nm process quickly, or redesign their microarchitectures.
 

french toast

Senior member
Feb 22, 2017
988
825
136
The rule is: However much you need to subtract from the score to make Intel x86 look as good or better, you subtract that much.


If we set a goalpost percentage of 10-15% now, it will have to be moved when Apple is 20-25% ahead. It's much easier to use the above rule.
I see what you are saying, but there is clearly a difference, is there not?
 

StinkyPinky

Diamond Member
Jul 6, 2002
6,765
783
126
I am interested in what the power consumption of Apple's SoCs is under load. I am under the impression that Apple is fine using up all of its available TDP for single-threaded performance.

The i7-8650U scores really high in GB4 under Kali Linux for WSL: 5.7k/18.7k. I assume this is at the 25 W TDP-up, but still, this is far from the "100 W" number being thrown around. These likely do 90% of that at the 15 W TDP.

If the A12 is using 4.5-7.5 W for those scores on TSMC 7 nm, it is impressive, but I'm not sure we can say they are blowing Intel's or AMD's doors off yet. I have a feeling Zen 2 is going to be outstanding at 7 nm, good enough that Intel needs to figure out how to raise the performance of its 10 nm process quickly, or redesign their microarchitectures.

I'm already excited about Zen 2 and it's still a year away. I think Intel will be very nervous.
 

ksec

Senior member
Mar 5, 2010
420
117
116
I am interested in what the power consumption of Apple's SoCs is under load. I am under the impression that Apple is fine using up all of its available TDP for single-threaded performance.

The i7-8650U scores really high in GB4 under Kali Linux for WSL: 5.7k/18.7k. I assume this is at the 25 W TDP-up, but still, this is far from the "100 W" number being thrown around. These likely do 90% of that at the 15 W TDP.

The 8650U is actually only a 15 W chip, assuming TDP-up doesn't change its max Turbo frequency.
 
Aug 11, 2008
10,451
642
126
All you have to do is look at the Geekbench scores between Intel processors to see that the 8700K is the worst possible case for an efficiency comparison. The 8650U has essentially the same single-threaded score, while the 8700K gets only a 50% higher multi-threaded score at SIX TIMES THE TDP. So obviously the Geekbench score does not scale with TDP for an x86 processor, making the 8700K a worst-case comparison. If one instead compares against the 8650U, at about 3x the TDP, and accepts this leak for the A12, the difference is only 3x instead of the 10x-plus when comparing to the 8700K.
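
To make the arithmetic explicit, here is a rough sketch using the numbers quoted in this thread (about 18,700 GB4 multi-core for the 8650U at a 15 W TDP, and roughly 1.5x that for the 8700K at 95 W). As the reply below points out, TDP is not measured power, so treat the result as illustrative only.

Code:
/* Illustrative only: GB4 multi-core score per *rated TDP* watt, using the
 * rough numbers quoted in this thread. TDP is not measured power draw. */
#include <stdio.h>

int main(void)
{
    const double score_8650u = 18700.0, tdp_8650u = 15.0; /* i7-8650U, per this thread */
    const double score_8700k = 28000.0, tdp_8700k = 95.0; /* ~1.5x the MT score at ~6x the TDP */

    const double eff_u = score_8650u / tdp_8650u; /* points per TDP-watt */
    const double eff_k = score_8700k / tdp_8700k;

    printf("8650U: %.0f pts/W, 8700K: %.0f pts/W -> %.1fx apart\n",
           eff_u, eff_k, eff_u / eff_k);
    return 0;
}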
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
All you have to do is look at the Geekbench scores between Intel processors to see that the 8700K is the worst possible case for an efficiency comparison. The 8650U has essentially the same single-threaded score, while the 8700K gets only a 50% higher multi-threaded score at SIX TIMES THE TDP. So obviously the Geekbench score does not scale with TDP for an x86 processor, making the 8700K a worst-case comparison. If one instead compares against the 8650U, at about 3x the TDP, and accepts this leak for the A12, the difference is only 3x instead of the 10x-plus when comparing to the 8700K.

The fallacy here is equating TDP with actual power usage. Having 6 times the TDP does not mean a chip is actually drawing 6 times the power for the workload in question. Likewise, if the cooling solution supports it, the 8650U can operate outside its TDP limits, so actual power draw can be higher than what the TDP suggests.

In summary, you cannot derive from TDP that the 8700K is using 6 times the power for a score that is only 50% higher. Nor can you use TDP to compare CPUs and draw conclusions about their efficiency; you always need the actual power at a given workload.
 
Aug 11, 2008
10,451
642
126
The fallacy here is equating TDP with actual power usage. Having 6 times the TDP does not mean a chip is actually drawing 6 times the power for the workload in question. Likewise, if the cooling solution supports it, the 8650U can operate outside its TDP limits, so actual power draw can be higher than what the TDP suggests.

In summary, you cannot derive from TDP that the 8700K is using 6 times the power for a score that is only 50% higher. Nor can you use TDP to compare CPUs and draw conclusions about their efficiency; you always need the actual power at a given workload.

I never said the 8700K was using six times the power of the 8650U. I said it had six times the TDP, which is correct. But this whole thread is based on TDP comparisons, so if those are not valid, then the A12 benchmarks are no more valid. I have not seen one post of actual measured power usage in this thread.

In any case, if Intel has these horrible CPU designers who are so inept, AMD's must be even worse, if you accept Geekbench as the sole benchmark of CPU performance. The 2700U has Geekbench scores of 3500/10,000 (single/multi, rough average) while the 8650U has scores of 5500/18,000. So even comparing within the same ISA, at the same TDP, Intel has an 80% advantage over AMD based on multithreaded Geekbench. Does anyone believe it is actually that much faster? I certainly don't.
 

Eug

Lifer
Mar 11, 2000
23,586
1,000
126
It would actually be pretty easy to do this test: load up Intel Power Gadget and run the benchmarks on the 8700K and 8650U.

FWIW though, according to reviews with other benchmarks, desktop flagship chips like the 8700K do often run relatively close to their TDP (95 W) under heavy load, whereas some non-flagship desktop chips often run well under their TDP, even when that TDP is significantly lower to begin with (e.g. 65 W).
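
For anyone who wants measured power rather than TDP, here is a minimal sketch for Linux using the standard intel_rapl powercap counters (a rough alternative to Intel Power Gadget). The sysfs path is the real kernel interface; the fixed 10-second window and the lack of counter-wraparound handling are simplifications, and reading the counter may require root.

Code:
/* Average CPU package power over a fixed window, read from the Linux
 * intel_rapl powercap interface. energy_uj wraps at max_energy_range_uj;
 * this sketch ignores that, which is fine for short runs. */
#include <stdio.h>
#include <unistd.h>

static long long read_energy_uj(void)
{
    long long uj = -1;
    FILE *f = fopen("/sys/class/powercap/intel-rapl:0/energy_uj", "r");
    if (f) {
        if (fscanf(f, "%lld", &uj) != 1)
            uj = -1;
        fclose(f);
    }
    return uj;
}

int main(void)
{
    const int window_s = 10; /* start the benchmark, then sample this window */
    long long start = read_energy_uj();
    sleep(window_s);
    long long end = read_energy_uj();

    if (start < 0 || end < start) {
        fprintf(stderr, "RAPL counter unavailable (or wrapped)\n");
        return 1;
    }
    printf("Average package power: %.2f W\n",
           (double)(end - start) / 1e6 / window_s);
    return 0;
}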
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
The interesting question is when Apple decides to get back into the server market. Server products have very high margins, and are Intel's bread and butter; if Apple can come up with a chip that has better perf/watt than Intel's best offerings, and scale it up to massive multicore levels with the expected professional features, then there's a good chance they can bite off some market share even without x86 support. An advantage is that server parts generally care less about super-high per-core clock speeds than desktop/workstation parts (and this is the one area left where Intel still has a sizable edge on the competition).

Most other ARM-based server chips have fallen flat, but Apple's hardware (and reputation) is good enough that they could make it work.