[AppleInsider] Apple may abandon Intel for its Macs starting with post-Broadwell


monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Looks great! Now I can get a device that's not only quiet and efficient, but has decent performance too! I'll go pick up one of those iPad Airs today and use it as my primary system. Just gotta load Windows on it for a few legacy programs though-- oh wait... D:

But seriously, the performance disparity is big.

There aren't many benchmarks the two devices share (because of OS again...), but in the ones they do:

A8x:

http://www.notebookcheck.net/Apple-A8X-iPad-SoC.128403.0.html
http://gyazo.com/bcf6ea68020e3b57dc753999add6f417

i5:

http://www.notebookcheck.net/Intel-Core-i5-4250U-Notebook-Processor.93564.0.html
http://gyazo.com/43c86ed90a39d04512f33053cc53163f

Ouch.

Don't mention perf/W; power and performance don't scale linearly, and the two devices aren't even in the same league of performance, unless something performing almost 3x worse than something else counts as being in the same league.

Not that far off. Think about it: ~2.6x more performance for the Intel part for nearly 4x more power draw and heat output. That was using the stress-test numbers from notebookcheck, which might not be representative of real-world workloads.
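
As a quick sketch of the perf/W arithmetic being argued here, using only the rough factors quoted in this post (~2.6x the performance for ~4x the power); these are illustrative numbers, not measurements:

```python
# Rough perf/W comparison using the factors quoted above (illustrative only).
a8x_perf, a8x_power = 1.0, 1.0   # normalize the A8X to 1.0 on both axes
i5_perf, i5_power = 2.6, 4.0     # ~2.6x performance for ~4x power/heat

a8x_perf_per_watt = a8x_perf / a8x_power
i5_perf_per_watt = i5_perf / i5_power

print(f"A8X perf/W (normalized):      {a8x_perf_per_watt:.2f}")
print(f"i5-4250U perf/W (normalized): {i5_perf_per_watt:.2f}")  # ~0.65, i.e. lower perf/W
```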
 
Last edited:

RampantAndroid

Diamond Member
Jun 27, 2004
6,591
3
81
Not that far off. Think about it: ~1.6x more performance for the Intel part for nearly 4x more power draw and heat output. That was using the stress-test numbers from notebookcheck, which might not be representative of real-world workloads.

Where do you get 1.6x? In 3DMark, the Intel part (using the physics bench) is 2.5x more powerful... and that's just a physics benchmark; we're not even talking about using AVX and such.
 

elemein

Member
Jan 13, 2015
114
0
0
Not that far off. Think about it: ~2.6x more performance for the Intel part for nearly 4x more power draw and heat output. That was using the stress-test numbers from notebookcheck, which might not be representative of real-world workloads.

I literally just said don't bring up performance/watt. Like right away.

Power draw does not scale linearly with performance/design/clock. At all. In any way. Period.

I'm settling the argument there. It's a fact you're gonna have to accept, and until you do I'm not going further with this particular argument/discussion/debate. Doubling the clock rate of a design doesn't double power draw; it roughly quadruples it. Sure, we're not talking clock rates, but I'm using it as an analogy for overarching performance. Performance isn't linear with power draw.
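
A minimal sketch of the dynamic-power relation this argument rests on, P ≈ C·V²·f: at fixed voltage, doubling the clock only doubles power, but real designs have to raise voltage to reach higher clocks, which is what makes the scaling superlinear. The voltage figures below are made-up illustrative values, not measurements of any real chip.

```python
# Dynamic CPU power is roughly P = C * V^2 * f (switched capacitance * voltage^2 * frequency).
def dynamic_power(c, v, f):
    return c * v**2 * f

C = 1.0                                            # effective capacitance (arbitrary units)
base = dynamic_power(C, v=1.0, f=2.0e9)            # 2 GHz at 1.0 V (illustrative)
same_v = dynamic_power(C, v=1.0, f=4.0e9)          # 4 GHz, voltage unchanged -> 2x power
higher_v = dynamic_power(C, v=1.3, f=4.0e9)        # 4 GHz needing 1.3 V -> ~3.4x power

print(f"2x clock, same voltage: {same_v / base:.1f}x power")
print(f"2x clock, +30% voltage: {higher_v / base:.1f}x power")
# The exact multiple depends on how much voltage has to rise; the point stands
# that power grows faster than performance.
```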
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
I literally just said don't bring up performance/watt. Like right away.

Power draw does not scale linearly with performance/design/clock. At all. In any way. Period.

I'm settling the argument there. It's a fact you're gonna have to accept, and until you do I'm not going further with this particular argument/discussion/debate. Doubling the clock rate of a design doesn't double power draw; it roughly quadruples it. Sure, we're not talking clock rates, but I'm using it as an analogy for overarching performance. Performance isn't linear with power draw.

ok, because that is how you think forums work. goodnight.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
While I'm not hitting ignore (yet :sneaky:) you are making broad statements without doing any homework on them. It takes a few seconds of searching to find an IDC report on market share in the consumer space. Your statements are so far off reality that I wasn't sure if you actually believed that, or were just wearing the jester hat.

Did his ban end or did he just take a voluntary vacation?
I did not miss him when he stopped posting.
 

imported_ats

Senior member
Mar 21, 2008
422
64
86
Devil's Canyon isn't the top dog in the desktop arena, it's the 5960X; surely you must know that :whistle:

Except a 5960X is generally slower in most workloads. The only workloads where a 5960X does better are highly parallel ones, which makes sense because it's not a desktop part and not designed to be a desktop part. It's an extremely cut-down server part that still contains all the core functionality needed to make a 15-18 core part work, which is why it has significantly higher latency than a Devil's Canyon part and hence generally slower performance on workloads that matter on the desktop.
 

imported_ats

Senior member
Mar 21, 2008
422
64
86
Somewhat differently, it's not like there's a night-and-day gap between the two; besides, if you look at the A8X, it already has Bay Trail beat in quite a few benches with one fewer core.

Except there is a night-and-day gap between how they scale. The scaling factors for GPUs are an order of magnitude higher than for CPUs. Double the number of ALUs on a GPU by effectively putting 2x on a chip sharing a memory interface and you'll get ~2x the performance. Do the same with CPUs and you'll be lucky to get more than 5% in most cases. For a typical GPU workload, there is on the order of 2-8 million pixels of parallelism to exploit. Most GPUs have on the order of 1-2k hardware contexts, giving each context a queue of 1-16k pixels to work on. CPUs, on the other hand, have between 2-8 hardware contexts on average, with a realistic parallel workload of <=4.
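
A small sketch of the parallelism arithmetic in this post: divide the available parallel work items by the number of hardware contexts. The specific counts are the rough figures from the post, used only for illustration.

```python
# Work items available per hardware context, using the rough figures from the post.
def work_per_context(parallel_items, hw_contexts):
    return parallel_items / hw_contexts

# Typical GPU frame: ~2-8 million pixels spread over ~1-2k hardware contexts.
gpu = work_per_context(parallel_items=4_000_000, hw_contexts=1_500)
# Typical desktop CPU workload: <=4 runnable threads on 2-8 hardware contexts.
cpu = work_per_context(parallel_items=4, hw_contexts=8)

print(f"GPU: ~{gpu:.0f} work items queued per context")  # thousands -> adding units keeps helping
print(f"CPU: ~{cpu:.1f} work items per context")          # under 1:1 -> adding cores doesn't
```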
 

imported_ats

Senior member
Mar 21, 2008
422
64
86
BK is on patrol defending Intel's share price. It looks like Intel is going the "deny until the last minute" route, just like how Skylake is "not delayed" and how, until the last minute, Broadwell was "not delayed". Until one day it was!

All evidence points to Skylake hitting exactly when it was supposed to.

Get out now, Intel is done. What happens when Apple abandons you as a CPU supplier? Let's ask IBM!

Intel loses maybe 5-8% of revenue short term and gains it back as people abandon Macs in droves... Apple was close to 90% of IBM's volume. Apple is at best 10% of Intel's volume and less in revenue.
 

imported_ats

Senior member
Mar 21, 2008
422
64
86
Would you classify this as a mixed metaphor? I mean, there were swimming dinosaurs. And I assume if they were alive today they'd be very confused. On the other hand, the discovery of a living dinosaur would seem awfully relevant to me.

Um, there are living dinosaurs today. And they do just fine in the oceans. As an example, sturgeons are over 200 million years old and were around in the same time frame as many of the extinct dinosaurs.
 

R0H1T

Platinum Member
Jan 12, 2013
2,583
164
106
Except a 5960X is generally slower in most workloads. The only workloads where a 5960X does better are highly parallel ones, which makes sense because it's not a desktop part and not designed to be a desktop part. It's an extremely cut-down server part that still contains all the core functionality needed to make a 15-18 core part work, which is why it has significantly higher latency than a Devil's Canyon part and hence generally slower performance on workloads that matter on the desktop.
Single-threaded, which is where Apple is catching up fast. Intel will still lead with more cores (+HT) for the foreseeable future, but the needs of 99% of Apple's consumers could be met by a more powerful Ax SoC some 4 or 5 years down the line.

You could say the same about every other desktop chip, since it's essentially the same design (without extensive validation), just with less cache & fewer cores, OR that some of the notebook chips are highly binned desktop parts hand-picked just for minimal power consumption. Where do you stop? Besides, it's not like the HSW-E is a die-harvested server part, is it?
Except there is a night-and-day gap between how they scale. The scaling factors for GPUs are an order of magnitude higher than for CPUs. Double the number of ALUs on a GPU by effectively putting 2x on a chip sharing a memory interface and you'll get ~2x the performance. Do the same with CPUs and you'll be lucky to get more than 5% in most cases. For a typical GPU workload, there is on the order of 2-8 million pixels of parallelism to exploit. Most GPUs have on the order of 1-2k hardware contexts, giving each context a queue of 1-16k pixels to work on. CPUs, on the other hand, have between 2-8 hardware contexts on average, with a realistic parallel workload of <=4.
No there isn't, you just do what most others have done before Apple, i.e. add more cores. The same reason why the early quads weren't natively quad-core, OR why the Jaguar SoC powering the consoles, not being a true octa-core, still worked just fine. You just have to tweak caches, interconnect & the core placement on the actual die; it isn't as bad as you're making it sound. You don't double the ALUs or FPUs to double performance, you generally double the cores &/or increase clock speeds. Sure this'll take time, but that's a luxury Apple can afford aplenty.
 
Last edited:

imported_ats

Senior member
Mar 21, 2008
422
64
86
You could say the same about every other desktop chip, since it's essentially the same design (without extensive validation), just with less cache & fewer cores, OR that some of the notebook chips are highly binned desktop parts hand-picked just for minimal power consumption. Where do you stop? Besides, it's not like the HSW-E is a die-harvested server part, is it?

No, you can't say the same thing. The design used for the -E parts isn't designed for the -E parts at all; it is designed for the -EP and -EX parts. AKA, they are designed as 15-18 core parts (going by the last two generations' core counts), period. And, YES, HSW-E is a chopped and harvested server part. Every EE part since the initial release of EE parts has been a harvested server part.

OTOH, the notebook and desktop parts are designed for notebooks and desktops. There are at least 3 distinct designs being used for BDW notebook/desktop parts.

No there isn't, you just do what most others have done before Apple, i.e. add more cores. The same reason why the early quads weren't natively quad-core, OR why the Jaguar SoC powering the consoles, not being a true octa-core, still worked just fine. You just have to tweak caches, interconnect & the core placement on the actual die; it isn't as bad as you're making it sound. You don't double the ALUs or FPUs to double performance, you generally double the cores &/or increase clock speeds. Sure this'll take time, but that's a luxury Apple can afford aplenty.

Yes there IS. You are somehow completely ignoring the performance implications. And no, doubling cores in CPUs doesn't necessarily increase performance. In many cases, especially when thermally constrained, it results in lower performance. GPUs can just double stuff willy-nilly because they have on the order of 1k-10k parallelism differentials between workloads and hardware. CPUs are already under 1:1 in parallelism differentials on the desktop. AKA, in general adding cores buys you zero to negative performance. I would say that's pretty much a night-and-day difference between GPUs and CPUs.
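
A short Amdahl's-law sketch (an illustrative addition, not from the post) of why doubling CPU cores often buys little: with a realistically small parallel fraction the speedup flattens quickly, whereas an almost fully parallel GPU-style workload keeps scaling. The parallel fractions are assumptions chosen to illustrate the point.

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the parallel fraction.
def amdahl_speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (2, 4, 8, 16):
    cpu_like = amdahl_speedup(0.50, cores)   # typical desktop app: maybe half the work is parallel
    gpu_like = amdahl_speedup(0.999, cores)  # pixel shading: essentially all parallel
    print(f"{cores:2d} cores: desktop-app speedup {cpu_like:.2f}x, GPU-style speedup {gpu_like:.2f}x")
# Going from 8 to 16 cores barely moves the desktop-app number, and that's before
# any thermal throttling from powering the extra cores.
```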
 

R0H1T

Platinum Member
Jan 12, 2013
2,583
164
106
No, you can't say the same thing. The design used for the -E parts isn't designed for the -E parts at all; it is designed for the -EP and -EX parts. AKA, they are designed as 15-18 core parts (going by the last two generations' core counts), period. And, YES, HSW-E is a chopped and harvested server part. Every EE part since the initial release of EE parts has been a harvested server part.

OTOH, the notebook and desktop parts are designed for notebooks and desktops. There are at least 3 distinct designs being used for BDW notebook/desktop parts.
You don't get my point, do you? You take a Haswell core & then add more Haswell cores & cache, depending on the workload. That's why most mobile chips generally have 2 cores, desktops 4 cores & server chips more. The fact is you can always build around a good design & you'll pretty much always do better with (more) good cores.

There is a reason why the Metal API was released just for iOS & why the A8X CPU cores occupy such a vast majority of that die when pretty much everyone else, including Intel, is moving in the opposite direction with more die space for the IGP.
Of this die space, the GXA6850 occupies 30% of A8X's die, putting the GPU size at roughly 38 mm²
http://www.anandtech.com/show/8716/apple-a8xs-gpu-gxa6850-even-better-than-i-thought
Yes there IS. You are somehow completely ignoring the performance implications. And no, doubling cores in CPUs doesn't necessarily increase performance. In many cases, especially when thermally constrained, it results in lower performance. GPUs can just double stuff willy-nilly because they have on the order of 1k-10k parallelism differentials between workloads and hardware. CPUs are already under 1:1 in parallelism differentials on the desktop. AKA, in general adding cores buys you zero to negative performance. I would say that's pretty much a night-and-day difference between GPUs and CPUs.
Tell me it ain't so & that what I'm proposing is simply not feasible, or better yet that it's outside the realm of possibility.
[attached benchmark images: 3rR0m77.png, YXEWmDU.png]

My only point is that it can be done so long as Apple's gunning for it; the rest is just a technical limitation & not a hard physical one.
 

III-V

Senior member
Oct 12, 2014
678
1
41
You don't get my point, do you? You take a Haswell core & then add more Haswell cores & cache, depending on the workload. That's why most mobile chips generally have 2 cores, desktops 4 cores & server chips more. The fact is you can always build around a good design & you'll pretty much always do better with (more) good cores.
Keep in mind that unless Apple's still amassing engineers, any work they'd be doing to build SoCs for their Mac line would be detracting from their iDevice line. Apple certainly has the resources to acquire more teams, but they have to make a conscious decision to go that route. They don't get to simply copy and paste cores with their existing hardware guys.

As I've already pointed out, they don't get to save a bunch of money by making their own chips... economics do not work that way. The only way they'd really be able to save money is if Intel was charging extortionate pricing, and given the buying power Apple has, it is extremely unlikely that such a scenario is occurring.

So the only reason Apple would want to go this route is if they could truly offer something Intel cannot, or put another way, if Apple wanted something from Intel that they aren't getting.
 

R0H1T

Platinum Member
Jan 12, 2013
2,583
164
106
Keep in mind that unless Apple's still amassing engineers, any work they'd be doing to build SoCs for their Mac line would be detracting from their iDevice line. Apple certainly has the resources to acquire more teams, but they have to make a conscious decision to go that route. They don't get to simply copy and paste cores with their existing hardware guys.

As I've already pointed out, they don't get to save a bunch of money by making their own chips... economics do not work that way. The only way they'd really be able to save money is if Intel was charging extortionate pricing, and given the buying power Apple has, it is extremely unlikely that such a scenario is occurring.

So the only reason Apple would want to go this route is if they could truly offer something Intel cannot, or put another way, if Apple wanted something from Intel that they aren't getting.
Of course not; like I said, they'll have to do some tweaking before the Ax can even feature in an MBP. I also said that Apple's focus must be such; maybe it's just me, but if you look at my last post I'd say Apple is certainly hinting towards such a path by constantly upgrading their CPU, dedicating less space to the GPU as well, & releasing something like the Metal API for iOS. Tbh I'd be really surprised if Apple didn't at least attempt such a thing; whether they succeed is an entirely different tale.

Interesting, but isn't there another possibility that they might look to converge their entire ecosystem into a common platform running on ARM?
 

elemein

Member
Jan 13, 2015
114
0
0
Of course not; like I said, they'll have to do some tweaking before the Ax can even feature in an MBP. I also said that Apple's focus must be such; maybe it's just me, but if you look at my last post I'd say Apple is certainly hinting towards such a path by constantly upgrading their CPU, dedicating less space to the GPU as well, & releasing something like the Metal API for iOS. Tbh I'd be really surprised if Apple didn't at least attempt such a thing; whether they succeed is an entirely different tale.

Interesting, but isn't there another possibility that they might look to converge their entire ecosystem into a common platform running on ARM?

Just to be clear, if they did make the ARM switch, any third party software made for Mac would have to be recompiled, right?

Meaning, the devs would have to be actively supporting their software AND support Apple's decision, right? :\
 

R0H1T

Platinum Member
Jan 12, 2013
2,583
164
106
Just to be clear, if they did make the ARM switch, any third party software made for Mac would have to be recompiled, right?

Meaning, the devs would have to be actively supporting their software AND support Apple's decision, right? :\
That's right; it's not the first time this has happened either. Knowing the Apple of today, I suspect they'd work with a majority of devs to port their software, possibly even funding such an endeavor (that is, if they make the switch).
 

elemein

Member
Jan 13, 2015
114
0
0
That's right; it's not the first time this has happened either. Knowing the Apple of today, I suspect they'd work with a majority of devs to port their software, possibly even funding such an endeavor (that is, if they make the switch).

It's possible, but that'd be an even larger cost to Apple, and even then not all devs would hop over.
 

NTMBK

Lifer
Nov 14, 2011
10,522
6,041
136
Just to be clear, if they did make the ARM switch, any third party software made for Mac would have to be recompiled, right?

Meaning, the devs would have to be actively supporting their software AND support Apple's decision, right? :\

Or alternatively they could perform binary translation, the same as Intel does when running ARM Android binaries for NDK applications (or like Apple did when transitioning from PPC to x86).
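
To make the binary-translation idea concrete, here is a deliberately tiny toy sketch: it translates a made-up "guest" instruction list into host callables ahead of time and then runs them. It only illustrates the translate-then-execute concept; it is nothing like Rosetta's or Intel's actual translators, and the instruction set is invented for the example.

```python
# Toy ahead-of-time binary translation: map "guest" instructions to host operations,
# translate the whole program once, then execute the translated form natively.
# Purely illustrative; real translators work on machine code, registers and memory.

GUEST_PROGRAM = [
    ("MOV", "r0", 5),
    ("MOV", "r1", 7),
    ("ADD", "r0", "r1"),   # r0 <- r0 + r1
    ("PRINT", "r0"),
]

def translate(program):
    """Translate each guest instruction into a host-level closure once, up front."""
    host_ops = []
    for op, dst, *rest in program:
        if op == "MOV":
            val = rest[0]
            host_ops.append(lambda regs, d=dst, v=val: regs.__setitem__(d, v))
        elif op == "ADD":
            src = rest[0]
            host_ops.append(lambda regs, d=dst, s=src: regs.__setitem__(d, regs[d] + regs[s]))
        elif op == "PRINT":
            host_ops.append(lambda regs, d=dst: print(regs[d]))
        else:
            raise ValueError(f"unknown guest opcode: {op}")
    return host_ops

def run(host_ops):
    regs = {}
    for op in host_ops:
        op(regs)            # execute the already-translated operations

run(translate(GUEST_PROGRAM))   # prints 12
```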
 
Mar 10, 2006
11,715
2,012
126
Or alternatively they could perform binary translation, same as Intel running ARM Android binaries for NDK applications (or like Apple did when transitioning from PPC to x86).

Binary translation would have been an acceptable solution for Apple in the PPC days when Macs were a niche, but given how popular MacOS is today among "laymen" and how many MacOS users like the ability to run Windows, this would be a disaster, IMHO.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
The cost to benefit ratio of this proposition is so absurd I can't believe we're seriously discussing this.
We'd have seen some indication of Apple starting to get the pieces together to make this happen if it was actually going to happen. One does not simply switch their whole line over to their own processors, and we just don't see any of the assets being acquired to make it happen...
 

jpiniero

Lifer
Oct 1, 2010
17,167
7,544
136
We'd have seen some indication of Apple starting to get the pieces together to make this happen if it was actually going to happen. One does not simply switch their whole line over to their own processors, and we just don't see any of the assets being acquired to make it happen...

What assets would they need to make it happen? They have all they need.
 

RampantAndroid

Diamond Member
Jun 27, 2004
6,591
3
81
Binary translation would have been an acceptable solution for Apple in the PPC days when Macs were a niche, but given how popular MacOS is today among "laymen" and how many MacOS users like the ability to run Windows, this would be a disaster, IMHO.

Agreed. Also, running code built for one arch on another arch never achieves the same performance as running it natively. Unless you're telling me Apple has an ARM CPU 2x more powerful than their current Intel offerings, it wouldn't be very viable.
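
A back-of-the-envelope version of that break-even argument, assuming a hypothetical translation overhead; the 50% figure is an assumption for illustration, not a measured number.

```python
# If translated code runs at some fraction of native speed, how fast does the ARM
# chip need to be natively just to match the Intel part under translation?
# The efficiency figure below is an assumption for illustration only.
intel_native_perf = 1.0
translation_efficiency = 0.5   # assume translated x86 code runs at ~50% of native speed

required_arm_native_perf = intel_native_perf / translation_efficiency
print(f"ARM part would need ~{required_arm_native_perf:.1f}x the Intel part's performance "
      f"just to break even on translated apps")   # ~2x, matching the figure above
```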
 

jpiniero

Lifer
Oct 1, 2010
17,167
7,544
136
Agreed. Also, running code built for one arch on another arch never achieves the same performance as running it natively. Unless you're telling me Apple has an ARM CPU 2x more powerful than their current Intel offerings, it wouldn't be very viable.

Oh yeah, there wouldn't be any kind of binary translation; it'd have to be a straight cutover. But that's fine, since Objective-C is very portable. This isn't like Windows, where switching architectures would be a pain for most of the apps.