Apple A12 benchmarks


plopke

Senior member
Jan 26, 2010
238
74
101
I am always impressed by Apple's ability to push mobile low-power CPUs, but that does not mean it will scale to workstation levels. On the other hand, since they are now fully in the high-end CPU design business, the story of them dumping Intel in the coming years is making more and more sense. Imagine Apple laptop CPUs outperforming Intel/AMD by a big margin; it would be a weird day to be Microsoft :p.
 

Eug

Lifer
Mar 11, 2000
23,586
1,000
126
I am always impressed by Apple's ability to push mobile low-power CPUs, but that does not mean it will scale to workstation levels. On the other hand, since they are now fully in the high-end CPU design business, the story of them dumping Intel in the coming years is making more and more sense. Imagine Apple laptop CPUs outperforming Intel/AMD by a big margin; it would be a weird day to be Microsoft :p.
Well, Microsoft is already selling Windows on ARM, and has been for years in various iterations.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
So their single core is matching the 8700K? I struggle to believe that. I think this just shows the flaws of Geekbench as benchmark software.

It is within spitting distance (read: 10-15%) of Intel's best ST performance chip. And we already know from earlier Apple chips in GB4 that the score is completely legit - the chip seems to have an awesome ALU setup and memory hierarchy. Combine that with proper accelerators for crypto and hashing plus very competent vector math unit(s) and you arrive at a great CPU.

The only thing it lacked in the past was clocking ability; critics (including me) were claiming that it is easy to build a fast cache, TLB, etc. at lower clocks, and here we are looking at what is likely a 3 GHz chip. With scores like that and clocks being what they seem to be, Apple takes the ST IPC crown. One would have to be a fanboy to imagine that they would need LN2 to reach the 3.5 GHz they need to beat 5 GHz Intel.
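To put rough numbers on that claim, here is a minimal sketch using the figures floated in this thread: the ~5200 GB4 ST score and ~3.0 GHz clock for the A12 are assumptions discussed here, not confirmed specs, and 5928 is the GB4 chart score for the 8700K.

```python
# Rough points-per-GHz comparison behind the "ST IPC crown" claim.
# The ~5200 score and ~3.0 GHz clock for the A12 are thread assumptions,
# not confirmed specs; 5928 is the GB4 chart score for the 8700K.
a12_score, a12_ghz = 5200, 3.0
i8700k_score, i8700k_ghz = 5928, 4.7   # single-core Turbo clock

a12_per_ghz = a12_score / a12_ghz          # ~1733 points/GHz
intel_per_ghz = i8700k_score / i8700k_ghz  # ~1261 points/GHz

# If the per-GHz throughput held, ~3.5 GHz would already be enough to pass the 8700K.
print(f"A12: {a12_per_ghz:.0f} pts/GHz, 8700K: {intel_per_ghz:.0f} pts/GHz")
print(f"A12 at 3.5 GHz (same IPC): ~{a12_per_ghz * 3.5:.0f} vs 8700K {i8700k_score}")
```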
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
The only thing it lacked in the past was clocking ability; critics (including me) were claiming that it is easy to build a fast cache, TLB, etc. at lower clocks, and here we are looking at what is likely a 3 GHz chip.

We do not know at all that it is lacking clocking ability. Why don't we know? Because the only conclusion we can possibly draw is about its maximum clocks in a low-power, low-voltage design (i.e. clocks and voltage that make the architecture fit into a smartphone power envelope).

That's like taking the Core m3-7Y32, a design for fanless laptops rather than smartphones, as a reference and concluding that the core architecture maxes out at 3 GHz.

The fallacy when comparing frequencies of different architectures is the wrong assumption that everything else in the actual designs (low/high-track cell versions, voltage, buffers, slow/fast corner, thermal limit) is the same.
 
Last edited:
  • Like
Reactions: name99

Eug

Lifer
Mar 11, 2000
23,586
1,000
126
He actually mentions somewhere in the comments that it is closer to 25%. Making it 25% would be 3 GHz. The latter is what I calculate to be around 3 GHz, although he did mention the performance coming from better branch prediction, and made no mention of clock speed.
I was just going by his posted scores. They are 21% higher than the top A11 scores, but closer to 25% if you compare against more typical A11 scores.

So going by clock speed alone it’s 2.9 GHz (+21%) to 3.0 GHz (+25%).

However there could be a further IPC increase. For all we know it could be 2.7 - 2.8 GHz, plus higher IPC.
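For what it's worth, here is the arithmetic behind those clock-speed-alone estimates as a minimal sketch, assuming the commonly reported ~2.39 GHz A11 big-core peak clock and zero IPC change (both are assumptions, not figures from the leak):

```python
# Back-of-the-envelope clock estimate from the score uplift, assuming no IPC
# change from the A11. The 2.39 GHz A11 peak clock is the commonly reported
# figure and is treated here as an assumption.
a11_clock_ghz = 2.39

for uplift in (0.21, 0.25):   # +21% vs. top A11 scores, +25% vs. more typical scores
    print(f"+{uplift:.0%} -> ~{a11_clock_ghz * (1 + uplift):.2f} GHz if it is all clock")
```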
 

Eug

Lifer
Mar 11, 2000
23,586
1,000
126
Here are the Geekbench 4 chip scores.

https://browser.geekbench.com/processor-benchmarks

The 8700K comes in at 5928, or 14% higher than this supposed A12.

5200 would put it somewhere in between the 6600K and 6700K for single core, and higher than my i5-7600 which I just bought last year.

...but this is for a chip that would go into a phone. If they actually manage to pull this off, some of the chip enthusiast sites will go into meltdown mode.

BTW, I wonder what the A11X will be like. That is also supposed to be 7 nm.
 
Last edited:

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Here are the Geekbench 4 chip scores.

https://browser.geekbench.com/processor-benchmarks

The 8700K comes in at 5928, or 14% higher than this supposed A12.

Scores in the Geekbench browser include extreme overclocks, so this should not be taken as a reference. A stock 8700K should come in closer to 5300, I assume?

Update: I did cross-check with my i7-6700K. It looks like the extremes have somehow been removed from the average, as the linked table is only slightly higher than my results (5230 vs. 5330).
So 5900 for the 8700K looks like a reasonable estimate.
 
Last edited:

Nothingness

Platinum Member
Jul 3, 2013
2,405
735
136
Scores in the Geekbench browser include extreme overclocks, so this should not be taken as a reference. A stock 8700K should come in closer to 5300, I assume?
@Eug's link shows official results with no overclocking. It looks like extreme OCs on the 8700K get more than 9000.
 

Eug

Lifer
Mar 11, 2000
23,586
1,000
126
I'm not sure how they do the filtering, but Geekbench does record both base frequency and max frequency, so it would be easy for them to remove any score with a max frequency over 4.7 GHz. The 8700K has a Turbo frequency of up to 4.7 GHz.

I do note that there are entries listed with a seemingly legit single-core 8700K score over 6000.

Also, I have been able to raise my 7600 and 7700K scores by several hundred points or more simply by turning off all background processes before running the bench. I will point out that that processor list has the 7700K down at only 18767 multi-core and 5703 single-core. I personally have been able to get over 20000 multi-core at stock speed (iMac). For single-core, some people have gotten well over 5800 legit. Again, that's for the 7700K, not the 8700K.
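As a rough illustration of the kind of frequency filter being described, here is a minimal sketch over a few hypothetical result records; the field names are made up for illustration and are not Geekbench's actual export format.

```python
# Sketch of the filter described above: drop any result whose reported max
# frequency exceeds the 8700K's rated 4.7 GHz single-core Turbo.
# The record layout below is hypothetical, not Geekbench's actual schema.
results = [
    {"max_freq_ghz": 4.7, "st_score": 5920},
    {"max_freq_ghz": 5.2, "st_score": 6900},   # overclocked run, would be excluded
    {"max_freq_ghz": 4.7, "st_score": 5950},
]

STOCK_TURBO_GHZ = 4.7
stock = [r for r in results if r["max_freq_ghz"] <= STOCK_TURBO_GHZ]
print(f"stock-only average ST score: {sum(r['st_score'] for r in stock) / len(stock):.0f}")
```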
 

ksec

Senior member
Mar 5, 2010
420
117
116
I don't think it matches the 8700K. Here are the scores for my i7-8700 under Windows, macOS and Linux. Results seem to vary quite a bit between different Geekbench versions.

Maybe power management issues? After all, these CPUs now have thermal and TDP limits.

GB, especially since version 4, is a pretty nice benchmark. But you have to look into the actual individual results instead of just looking at the final scores.

And people need to realise there is a difference between being able to run the test once and get a decent score, and being able to keep running it for 100 minutes and get the same results, i.e. burst vs. sustained workload.

For example, the A11 is very fast, but you can't have its peak performance all the time, and you can tell if you game on it.
 

ksec

Senior member
Mar 5, 2010
420
117
116
I was just going by his posted scores. They are 21% higher than the top A11 scores, but closer to 25% if you compare against more typical A11 scores.

So going by clock speed alone it’s 2.9 GHz (+21%) to 3.0 GHz (+25%).

However there could be a further IPC increase. For all we know it could be 2.7 - 2.8 GHz, plus higher IPC.

I am simply guessing there is no IPC improvement. Since performance scales (mostly, assuming no memory bottleneck) linearly with clock speed, I am simply guessing this is just a pure clock speed improvement. (And maybe Apple wants to tell you they can now do 3 GHz in a phone.)

I would be surprised if they actually have an IPC improvement again. They have been doing this YoY for 5 years now. I remember reading Linus saying that most of the low-hanging fruit for IPC was already picked with the A10, and the A11 still gaining 20%+ performance the next year came as a surprise to most.

It also kind of makes sense from an "Intel" tick-tock perspective: new uArch, then new node.

I also hope he is completely off the mark and we end up with an 8-core SoC.
And we still don't know if we get more GPU power, more memory, faster memory, and maybe a new ISP.

Note: Turns out word was already out in the wild before my post.

https://www.cultofmac.com/543115/a12-iphone-processor-7nm-tsmc-faster-efficient/
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I'm not sure how they do the filtering, but Geekbench does record both base frequency and max frequency, so it would be easy for them to remove any score with a max frequency over 4.7 GHz. The 8700K has a Turbo frequency of up to 4.7 GHz.

The top 8550U result gets 5400 points. Notebookcheck's top results get 4900, and the average is 4800. That's with a peak Turbo clock of 4 GHz.

Based on that, one thing that's sure is there's no single-thread gain to be had going from 15W to 95W. The TDP difference is used for the 2 extra cores and to keep clocks high when running all 6 cores. I'd also assume most of the TDP is used up between 4.2 and 4.7 GHz.
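A quick points-per-GHz comparison using the figures quoted in this thread supports that reading; treat these as rough numbers pulled from the posts above, not controlled measurements.

```python
# Points-per-GHz behind the "no ST gain from 15W to 95W" observation.
# Scores are the figures quoted in this thread (Notebookcheck's top 8550U
# result and the GB4 chart score for the 8700K).
chips = {
    "i7-8550U (15W)": (4900, 4.0),   # (ST score, peak Turbo GHz)
    "i7-8700K (95W)": (5928, 4.7),
}

for name, (score, ghz) in chips.items():
    print(f"{name}: ~{score / ghz:.0f} points/GHz")   # comes out nearly identical
```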
 

Eug

Lifer
Mar 11, 2000
23,586
1,000
126
The A11 is 88 mm2. A pure 7 nm shrink would be about 55 mm2.

The other thing is that the A11 is a 6-core design with 2 high-performance cores and 4 low-performance cores. How would that translate to a desktop part? Get rid of most or all of the low-performance cores?
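For reference, a minimal sketch of where a ~55 mm2 figure comes from, assuming TSMC's publicly quoted ~1.6x logic density gain from 10 nm to 7 nm; the exact scaling factor is an assumption on my part, not something stated in this thread.

```python
# Pure-shrink estimate for the A11 die on 7 nm, assuming ~1.6x logic density
# (i.e. ~0.625x area) going from 10FF to 7FF. The density factor is an assumption.
a11_area_mm2 = 88
density_gain = 1.6

print(f"pure-shrink estimate: ~{a11_area_mm2 / density_gain:.0f} mm^2")   # ~55 mm^2
```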
 

oak8292

Member
Sep 14, 2016
82
67
91
How much performance could Intel achieve with the Core architecture on Intel's SoC process? Am I wrong in thinking that the Y processors are still made with 'tall transistors'?

How much power or IPC improvement could Intel make if they ditched 32-bit compatibility? How much of a drag is the ability to run 10-year-old legacy software that is a requirement for so many Windows users?
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
You can make all the excuses you want, but in the end, if the test was "stupid" or had no relation to performance, Intel would be able to game it and score just as well. Unless you are suggesting that Apple can "game" these tests but Intel is simply "too honest" to do the same.


The simple explanation is that something about Intel's architecture is inherently less efficient than Apple's.

Uh, no. Not at all. The simple explanation is that you're talking about an entirely different technology stack top to bottom: ARM ISA -> Apple BIOS/mobo firmware -> Apple iOS -> app in Objective-C, versus x86 ISA -> many vendors' firmware -> Microsoft Windows -> app in who knows what language on the PC, maybe C++? Maybe a port of the Objective-C?

I would bet money that there are thousands of tiny or not-so-tiny discrepancies from porting an app that is designed to measure performance to such a completely different tech stack. It is a massive engineering challenge to attempt to perfectly port an app where the exact assembly/binaries can make a huge difference in the uniformity of the result. There are hundreds of system calls, libraries, firmwares, etc. along the way that can all achieve a result in differing ways, very little of which will be common between the platforms. I doubt it's even using the same programming language.

To say Geekbench results are ironclad and it must be the hardware is so myopic as to be an absurd position.
 
Last edited:

HurleyBird

Platinum Member
Apr 22, 2003
2,684
1,267
136
- Geekbench is crap and these scores mean nothing in the real world, i.e. an Ax-series desktop chip wouldn't run games and apps nearly as fast as the numbers suggest.

- The scores are somehow valid, thus all the genius CPU architects must have gone to Apple and are working on fantastic tech that unfortunately will never power our PCs.

The third option is that Geekbench isn't crap in and of itself, but when one benchmark becomes overridingly important people will, of course, optimise for it. Geekbench isn't an important metric at all when it comes to marketing the speed of PCs, but is by far the biggest metric when it comes to marketing the speed of mobile devices.

You want a variety of benchmarks not just so that you get a varied picture of things, but so that this kind of thing never happens. Right now 3DMark counts for maybe, at most, 5% of the perceived speed of a GPU, with the performance of real games accounting for the remaining 95%. Imagine if this were like the mobile device world, where a synthetic benchmark like 3DMark became all that anyone cared about. GPUs would quickly become much better at 3DMark, and would not advance as quickly in actual games as they otherwise would.

That said, the wide variance you see in GB based on OS does point towards the benchmark not being ideal in any case.
 
  • Like
Reactions: Zucker2k

french toast

Senior member
Feb 22, 2017
988
825
136
So their single core is matching the 8700K? I struggle to believe that. I think this just shows the flaws of Geekbench as benchmark software.

As for the MacBook, the one port makes it useless as far as I'm concerned.
I think mobile vendors optimise their chips for Geekbench, whereas x86 vendors don't bother, so I would deduct 20-30% off the ARM (Apple) score and then compare to x86.
The third option is that Geekbench isn't crap in and of itself, but when one benchmark becomes overridingly important people will, of course, optimise for it. Geekbench isn't an important metric at all when it comes to marketing the speed of PCs, but is by far the biggest metric when it comes to marketing the speed of mobile devices.

You want a variety of benchmarks not just so that you get a varied picture of things, but so that this kind of thing never happens. Right now 3DMark counts for maybe, at most, 5% of the perceived speed of a GPU, with the performance of real games accounting for the remaining 95%. Imagine if this were like the mobile device world, where a synthetic benchmark like 3DMark became all that anyone cared about. GPUs would quickly become much better at 3DMark, and would not advance as quickly in actual games as they otherwise would.

That said, the wide variance you see in GB based on OS does point towards the benchmark not being ideal in any case.
^^ This I agree with 100%.
 

Eug

Lifer
Mar 11, 2000
23,586
1,000
126
I think mobile vendors optimise their chips for Geekbench, whereas x86 vendors don't bother, so I would deduct 20-30% off the ARM (Apple) score and then compare to x86.
^^ This I agree with 100%.
So far, as far as the press has surmised, just about no chip manufacturer optimizes their chips for Geekbench, but a few manufacturers of mobile phones have optimized for specific benchmarks. I.e. Qualcomm doesn't optimize for the benchmarks, but a phone manufacturer using Qualcomm's chips may. The only time I recall the chip company itself doing that was Samsung, but it wasn't Samsung's semiconductor group doing it in the design; it was the phone division doing it in software. Interestingly though, Samsung wasn't cheating just with its own SoCs, but also with Qualcomm SoCs, again supporting the notion that it's the phone division doing this on its own.

Of note though, Apple has never been implicated in this with Geekbench. Does Apple even report Geekbench numbers?
 

ksec

Senior member
Mar 5, 2010
420
117
116
I think mobile vendors optimise their chips for Geekbench, whereas x86 vendors don't bother, so I would deduct 20-30% off the ARM (Apple) score and then compare to x86.

Sigh.

I think you should go and read up on what each GB test does before you make a judgement.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
GB is a useful data point, but it's far from enough data to draw such sweeping conclusions from. Before anyone tries to place a firm "gap" between Apple and Intel processors, we're gonna need more tests from more sources.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Before anyone tries to place a firm "gap" between Apple and Intel processors, we're gonna need more tests from more sources.

GB4 is as good as they get. It might not have perfect clock/platform detection, etc., but the benchmarks are well designed and cross-platform (and also documented). Sure, there is a compiler difference, but that is more of a plus for Apple then, because real-world code is also compiled with that toolset, just like VS is used to compile on Windows?

What else is there? Those horrible browser benchmarks that test vendors' JS engine academic-paper acrobatics? SPEC, which is legendary for getting cheated in ridiculous ways? So GB4 is as good a proxy as there is. If one sees some subtest numbers, they are what they are.

There is a reason the junk from the Samsung/QC camp is scoring 2-2.5k in GB4 ST tests. That same reason makes them get destroyed in real-world testing.