Apple CPUs "just margins off" desktop CPUs - Anandtech


Spartak

Senior member
Jul 4, 2015
353
266
136
Can we stop the bullshit TDP comparison figures for Intel/desktops? That 95W figure is when all cores are loaded. If I run two demanding threads on the A12, it'll also go up to 10W before it throttles.


I never claimed that was an exact apples-to-apples comparison, but it's the only TDP info we can go by for the moment since you haven't published any figures. But even at 10W versus maybe 30W for a comparably performing Intel mobile part, that's a massive performance/watt gap.
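
To make the arithmetic explicit, a minimal sketch using the same rough numbers (both the 10W and 30W figures are estimates from this thread, not measurements):

```python
# Back-of-the-envelope perf/W comparison with the (estimated) numbers above.
a12_power_w = 10.0     # rough peak power guessed for the A12 in the post above
intel_power_w = 30.0   # rough power guessed for a comparably performing Intel mobile part
performance = 1.0      # normalized: assume both parts deliver about the same performance

advantage = (performance / a12_power_w) / (performance / intel_power_w)
print(f"A12 perf/W advantage under these assumptions: {advantage:.1f}x")  # -> 3.0x
```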
 

naukkis

Golden Member
Jun 5, 2002
1,030
854
136
Single-threaded performance? The thing we thought was limited by what's physically possible? No more low-hanging fruit?
I am still sceptical.

IPC means instructions per clock. On that metric, AArch64 designs are starting to be a class above x86, and there's probably nothing x86 can do to overcome that deficit. It's ridiculous that a CPU meant for a phone is starting to be faster than desktop-class x86. AMD found that out too: K12 was an AArch64 design, but its side project Zen got prioritized instead because the competition from Intel was so weak.
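
For anyone following along, IPC comparisons like this are usually estimated by dividing a benchmark score by the clock the chip sustained during the run; a minimal sketch with placeholder numbers (not actual SPEC results):

```python
# IPC proxy: benchmark score per GHz of sustained clock.
# Inputs below are placeholders for illustration, not real measurements.
def score_per_ghz(score: float, sustained_clock_ghz: float) -> float:
    """Approximate relative IPC by normalizing a score to the sustained clock."""
    return score / sustained_clock_ghz

a12_proxy = score_per_ghz(score=45.0, sustained_clock_ghz=2.5)       # hypothetical
skylake_proxy = score_per_ghz(score=47.0, sustained_clock_ghz=4.3)   # hypothetical

print(f"A12 IPC-proxy advantage: {(a12_proxy / skylake_proxy - 1) * 100:.0f}%")
```

Note that this proxy also mixes in memory-system effects and throttling, so it only roughly tracks architectural IPC.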
 

tempestglen

Member
Dec 5, 2012
88
17
71
A12 IPC seems to be well over 50% better than Skylake in SPECint2006. It seems plainly obvious that x86's time as the performance CPU architecture is coming to an end.

[Attached charts: SPECint2006 subtest comparisons, A12 vs Skylake]
 

naukkis

Golden Member
Jun 5, 2002
1,030
854
136
If people thought Zen's IPC improvement over Dozer was great, what about the A12's IPC over Skylake? It's plainly obvious that Intel's next-gen CPU architecture can't be x86-compatible anymore.
 

Spartak

Senior member
Jul 4, 2015
353
266
136
I was responding to the comment that a higher-TDP A12 CPU would "smash x86 processors completely if you package it like a desktop CPU". I am not the one who needs to support any claim; the burden is usually on the person making the claim in the first place (it wasn't you who made it, I am aware of that). I wasn't getting into the debate about which architecture is better or which one has more of a future.

I am simply pointing out that we know nothing about A-series CPUs and how they perform when given more juice to play with. It was also mentioned that if the A series cannot get over 3GHz, then Apple could just "add more cores", but we all know it isn't that easy. We only need to look at the Threadripper CPUs to see that adding more cores isn't a magical solution to everything.

In short, we know *nothing* about how these CPUs will perform when given a higher TDP. If it is anything like x86, there will be diminishing returns.

You were actually making the claim that the A11/12 didn't scale, and that this was common knowledge. You then used the fact that the Mac hasn't switched to ARM by now as proof of that. As others also mentioned, this is a flawed argument.

On the rest of the post, I'm glad we finally found common ground. But Apple's figures do look really good, with almost double the IPC compared to Skylake and double the performance/watt compared to other ARM chips (not counting the A12, as this one has a node advantage resulting in an even higher perf/W).
 

naukkis

Golden Member
Jun 5, 2002
1,030
854
136
Huh? Intel and AMD will continue to make x86 products. Software demands it.

x86 has lasted because rival CPU architectures haven't had any meaningful performance advantage. Now it seems there will be a twofold performance advantage for AArch64, so x86 can't hold its place anymore, and even Intel has to acknowledge that.
 

Spartak

Senior member
Jul 4, 2015
353
266
136
If people thought Zen's IPC improvement over Dozer was great, what about the A12's IPC over Skylake? It's plainly obvious that Intel's next-gen CPU architecture can't be x86-compatible anymore.

This might explain why we haven't seen any meaningful improvement since Skylake. It's an abandoned architecture. If it's not... Intel had better have some unicorns in store for dedicated workloads.
 
  • Like
Reactions: french toast

Arkaign

Lifer
Oct 27, 2006
20,736
1,379
126
This might explain why we haven't seen any meaningful improvement since Skylake. It's an abandoned architecture. If it's not... Intel had better have some unicorns in store for dedicated workloads.

So AMD and Intel are obsolete? o_O;):tearsofjoy:

Goodbye x86. Guess we won't see any more Ryzen or Core CPUs in 2019+.
 
  • Like
Reactions: USER8000

Spartak

Senior member
Jul 4, 2015
353
266
136
So AMD and Intel are obsolete? o_O;):tearsofjoy:

Goodbye x86. Guess we won't see any more Ryzen or Core CPUs in 2019+.

It's quite ironic that you make fun of a remark just a few cm below a graph that shows how the A12 completely destroys x86 on IPC. On average +71% in SPECint, and in Geekbench the gap is even wider. Even AMD was never that far behind on IPC during the dark Dozer days.

Also, how do you characterise the notion that Intel hasn't meaningfully improved their core architecture in 5 years? I guess some people will remain in denial no matter what.
 
Last edited:
  • Like
Reactions: Lodix

cytg111

Lifer
Mar 17, 2008
26,356
15,749
136
It's quite ironic that you make fun of a remark just a few cm below a graph that shows how the A12 completely destroys x86 on IPC. On average +71% in SPECint, and in Geekbench the gap is even wider. Even AMD was never that far behind on IPC during the dark Dozer days.

Also, how do you characterise the notion that Intel hasn't meaningfully improved their core architecture in 5 years? I guess some people will remain in denial no matter what.

I would love a rundown of how/why x86 is inferior to ... whatever... in terms of perf and wattage..
 
  • Like
Reactions: ryan20fun

naukkis

Golden Member
Jun 5, 2002
1,030
854
136
A 28-core Xeon is pretty much the worst possible current Intel CPU to showcase single-threaded Intel IPC. A better comparison would be something like the Xeon 1220 v6:

http://spec.org/cpu2006/results/res2017q3/cpu2006-20170822-48480.html
http://spec.org/cpu2006/results/res2017q3/cpu2006-20170808-48211.html

IPC is about the same for every CPU with the same arch. But SPEC results from the Intel compiler should be excluded; the Intel compiler is hand-tuned so aggressively for the SPEC workloads that it's pure cheating.
 

Accord99

Platinum Member
Jul 2, 2001
2,259
172
106
IPC is about the same for every CPU with the same arch.
No it's not; it changes for the same CPU just by changing the clock speed, since little software scales perfectly with clock speed. Various parts of SPEC 2006 are also very memory-sensitive, so changing the speed of the main memory would also affect scores even if the CPU stayed exactly the same.

[Attached chart: SPEC2006 score sensitivity to memory latency]
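
A toy model of that effect, assuming each instruction occasionally stalls on DRAM whose latency is fixed in nanoseconds regardless of the core clock; all constants are made up for illustration:

```python
# Toy model: measured IPC falls as clock rises when memory latency stays fixed.
# All constants are illustrative, not taken from any real CPU.
CORE_IPC = 4.0             # IPC if the core never waited on memory
MISSES_PER_INSTR = 0.001   # long-latency memory misses per instruction
MEM_LATENCY_NS = 80.0      # DRAM latency, roughly constant in wall-clock time

def effective_ipc(clock_ghz: float) -> float:
    # Each miss costs more core cycles at higher clocks: latency_ns * GHz = cycles.
    miss_cycles_per_instr = MISSES_PER_INSTR * MEM_LATENCY_NS * clock_ghz
    cpi = 1.0 / CORE_IPC + miss_cycles_per_instr
    return 1.0 / cpi

for f_ghz in (2.5, 3.5, 4.5):
    print(f"{f_ghz} GHz -> effective IPC ~ {effective_ipc(f_ghz):.2f}")
```

In this sketch, faster main memory lowers MEM_LATENCY_NS and lifts the measured IPC even though the core itself is unchanged, which is the point being made above.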
 

Andrei.

Senior member
Jan 26, 2015
316
386
136
A 28-core Xeon is pretty much the worst possible current Intel CPU to showcase single-threaded Intel IPC. A better comparison would be something like the Xeon 1220 v6:

http://spec.org/cpu2006/results/res2017q3/cpu2006-20170822-48480.html
http://spec.org/cpu2006/results/res2017q3/cpu2006-20170808-48211.html
ICC is a bit dubious in what it does to SPEC; people have been saying there's code morphing specific to the SPEC functions. The libquantum scores are obviously broken.

https://www.ct.nl/achtergrond/amd-ryzen-en-windows-compilers-prestaties/17735/

[Attached charts from c't: SPEC CPU2006 compiler comparisons on Ryzen]


c't is another third-party source on this - they avoid libquantum in the score - for comparison, my figure for the A12 is 40.95 without it. The FP score is directly comparable with what I published (54.84). I expect GCC/LLVM to be faster than MSVC.

As I said in the piece, it's a whole other topic to be revisited in the future.
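
For anyone who wants to redo that kind of comparison from published subtest numbers, a minimal sketch of recomputing a SPECint2006-style overall score as a geometric mean with 462.libquantum excluded; the ratios below are placeholders, not Andrei's or c't's data:

```python
# Recompute a SPEC-style overall score (geometric mean of subtest ratios),
# optionally dropping 462.libquantum. Ratios are placeholders for illustration.
from math import prod

ratios = {
    "400.perlbench": 30.0, "401.bzip2": 25.0, "403.gcc": 35.0,
    "429.mcf": 40.0, "445.gobmk": 28.0, "456.hmmer": 33.0,
    "458.sjeng": 29.0, "462.libquantum": 120.0, "464.h264ref": 50.0,
    "471.omnetpp": 27.0, "473.astar": 26.0, "483.xalancbmk": 38.0,
}

def geomean(values):
    vals = list(values)
    return prod(vals) ** (1.0 / len(vals))

overall = geomean(ratios.values())
without_libq = geomean(v for name, v in ratios.items() if name != "462.libquantum")
print(f"overall: {overall:.1f}, excluding libquantum: {without_libq:.1f}")
```

Because compiler-assisted libquantum results tend to be outliers, dropping that subtest usually pulls the geomean down noticeably.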
 
Last edited:
  • Like
Reactions: CatMerc

Ventanni

Golden Member
Jul 25, 2011
1,432
142
106
I dunno, something seems off here. While I don't keep up with CPU architectures as much as I used to, I do remember having a discussion regarding scaling. It was one of the major reasons Intel had such a hard time breaking into the phone market a few years back (and ultimately failed): you can't just take an architecture and raise/lower the clock speeds and, abracadabra, you have a contender. The A12 is a very, very dense and wide architecture purpose-built for a given power envelope. In order to scale up to 4.5-5GHz like Intel and AMD, they'd have to redesign the chip, and that's no easy feat. Intel's advantage here may not be raw IPC anymore, but it is a scalable architecture that can be used from 5W to 100W+, and from 1.1GHz to as high as 5GHz. And it can do it in an economically viable package. That's the key. Who knows what a redesigned A12-like chip would look like in terms of cost and performance, and most importantly, whether it could be produced in quantities to meet demand?

I wouldn't be surprised if Apple begins using these chips in its laptops, but they will stick to their own ecosystem. They're not a company that has any interest in breaking into existing ecosystems. They'll just recreate their own.
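
The scaling point above can be made concrete with the usual first-order dynamic-power relation, P ≈ C·V²·f, where reaching a higher frequency also requires a higher voltage; the coefficients below are purely illustrative and not a model of the A12 or any other chip:

```python
# First-order dynamic power: P ~ Ceff * V^2 * f, with V rising alongside f.
# Purely illustrative coefficients; not fitted to any real silicon.
def dynamic_power(freq_ghz: float,
                  base_freq_ghz: float = 2.5,
                  base_voltage: float = 0.75,
                  volts_per_ghz: float = 0.10,
                  ceff: float = 3.0) -> float:
    voltage = base_voltage + volts_per_ghz * (freq_ghz - base_freq_ghz)
    return ceff * voltage ** 2 * freq_ghz   # arbitrary power units

baseline = dynamic_power(2.5)
for f_ghz in (2.5, 3.5, 4.5):
    print(f"{f_ghz} GHz -> ~{dynamic_power(f_ghz) / baseline:.1f}x the power of 2.5 GHz")
```

Frequency alone scales power linearly, but the extra voltage needed to reach it makes the growth closer to cubic, which is why a core tuned for a phone's power envelope can't simply be clocked to 5GHz.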
 

Entropyq3

Junior Member
Jan 24, 2005
22
22
81
It bears remembering that Apple cannot speed-bin their iPhone chips. There is no lower tier to put them in, so what we are seeing in the phones is essentially the lowest performance tier. (No, Apple isn't going to throw away a lot of otherwise perfectly fine chips that don't perform well enough on the frequency/power curve. They will adjust their targets to optimize yields.)
It is anybody's guess how the best, say, 10% of the cores would perform. Or, for that matter, how they would do with five times the power budget per core.
While the article data can make for some interesting technical discussion (or mud slinging), the x86 market is maintained by software rather than absolute hardware performance. The health of the x86 CPU market has little to nothing to do with A12 benchmark results, so any doomsaying is more a statement about prejudice than anything else.
 

french toast

Senior member
Feb 22, 2017
988
825
136
I dunno, something seems off here. While I don't keep up with CPU architectures as much as I used to, I do remember having a discussion regarding scaling. It was one of the major reasons Intel had such a hard time breaking into the phone market a few years back (and ultimately failed): you can't just take an architecture and raise/lower the clock speeds and, abracadabra, you have a contender. The A12 is a very, very dense and wide architecture purpose-built for a given power envelope. In order to scale up to 4.5-5GHz like Intel and AMD, they'd have to redesign the chip, and that's no easy feat. Intel's advantage here may not be raw IPC anymore, but it is a scalable architecture that can be used from 5W to 100W+, and from 1.1GHz to as high as 5GHz. And it can do it in an economically viable package. That's the key. Who knows what a redesigned A12-like chip would look like in terms of cost and performance, and most importantly, whether it could be produced in quantities to meet demand?

I wouldn't be surprised if Apple begins using these chips in its laptops, but they will stick to their own ecosystem. They're not a company that has any interest in breaking into existing ecosystems. They'll just recreate their own.
This is the crazy thing... Apple wouldn't need to clock its big cores anywhere near 5GHz to best an i9-9900K...
If they made it for desktop, for instance... they could throw 10x the power at the problem (if not more)... have access to DDR4 memory, and the ability to properly bin the chips for higher performance...
Most likely they would design a bespoke core for the job, using much of the same tech from their mobile core but maybe lengthening the pipelines and adjusting the caches to hit higher frequencies.

I have no doubt whatsoever that Apple could design a desktop chip that would crush any x86 chip of 2019, if they had reason to, which they don't... for now.
Let's have a look at the A12X when it arrives and compare it to Intel and Ryzen APUs... it could well have an 8-core CPU + 4×32-bit LPDDR5 and 2-3x the A12's GPU performance. Frightening!
 
  • Like
Reactions: Spartak and cytg111

Entropyq3

Junior Member
Jan 24, 2005
22
22
81
ICC shouldn't be used for SPEC scores; libquantum isn't the only subtest "corrupted".
It is a philosophical question whether ICC constitutes cheating or not, but it definitely breaks the spirit of SPEC if not the rules at the time. There has been quite the discussion about it, but see the changes made for the latest version of SPEC.
In short, ICC data is useless for comparisons other than between Intel processors.
 

pepone1234

Member
Jun 20, 2014
36
8
81
It's quite ironic that you make fun of a remark just a few cm below a graph that shows how the A12 completely destroys x86 on IPC. On average +71% in SPECint, and in Geekbench the gap is even wider. Even AMD was never that far behind on IPC during the dark Dozer days.

Also, how do you characterise the notion that Intel hasn't meaningfully improved their core architecture in 5 years? I guess some people will remain in denial no matter what.

But how do we know that the problem is x86 and not Intel?

When Intel hit a wall with the Pentium 4, I am sure there were people saying back then that x86 was dead too. And then Intel showed up with the new Core arch, with huge gains in every metric.
The problem I see here is not x86. The problem now is that Intel doesn't have an Israeli side project with a brand-new arch, like ten years ago, that they can use to save the situation. AMD, with vastly fewer resources, showed us that you can be up there with x86 if you execute well enough, and I am sure that if Apple poured the money they are pouring into their ARM64 designs into a hypothetical x86 core, they could have achieved the same levels of performance and efficiency.