Apple CPUs "just margins off" desktop CPUs - Anandtech


CZroe

Lifer
Jun 24, 2001
24,195
857
126
The only effect it would have is on power consumption. That's significantly more I/O to power than mobile devices usually have. I wouldn't be surprised if the overall power consumption exceeded the A12's consumption in the iPhone.

It wouldn’t be the CPU’s power consumption. On any X86 system, that’s the motherboard, chipsets, and supporting devices. It’s really no different from using a powered USB hub to multiply the USB ports on your phone. Yes, you are using more power. No, it doesn’t mean the CPU is any less efficient.
 

Hitman928

Diamond Member
Apr 15, 2012
6,737
12,457
136
It wouldn’t be the CPU’s power consumption. On any X86 system, that’s the motherboard, chipsets, and supporting devices. It’s really no different from using a powered USB hub to multiply the USB ports on your phone. Yes, you are using more power. No, it doesn’t mean the CPU is any less efficient.

These are SOCs, where the functionality mentioned is integrated into the CPU itself. So yes, power consumption is affected.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
If Andrei and Johan teamed up, and had the time and money for a proper, dead-objective, no-BS x86 vs. ARM battle, it would be the big bang of this decade for the CPU tech community, extending to the entire industry. Now is a good time. Regardless of the results, it would light a fire.
 
  • Like
Reactions: french toast

Andrei.

Senior member
Jan 26, 2015
316
386
136
If Andrei and Johan teamed up, and had the time and money for a proper, dead-objective, no-BS x86 vs. ARM battle, it would be the big bang of this decade for the CPU tech community, extending to the entire industry. Now is a good time. Regardless of the results, it would light a fire.
I'll be handling things in the coming months. Bunch of phones to get out of the way first.
 

CZroe

Lifer
Jun 24, 2001
24,195
857
126
The only effect it would have is on power consumption. That's significantly more I/O to power than mobile devices usually have. I wouldn't be surprised if the overall power consumption exceeded the A12's consumption in the iPhone.

It wouldn’t be the CPU’s power consumption. On any X86 system, that’s the motherboard, chipsets, and supporting devices. It’s really no different from using a powered USB hub to multiply the USB ports on your phone. Yes, you are using more power. No, it doesn’t mean the CPU is any less efficient.
These are SOCs, where the functionality mentioned is integrated into the CPU itself. So yes, power consumption is affected.
Wow. This is ridiculously obtuse.

No, the power supply is not built into the SOC. No, adding a USB hub with a power supply does not make the CPU less efficient no matter how inefficient you make the system as a whole. Also, what makes you think that integrating hardware that’s normally discrete or integrated into another chip on an X86 system would be any less efficient on an SOC?

Yes, power is affected when you add stuff. No, that doesn’t mean you lose your efficiency gains over a comparable X86 CPU/GPU. You are now comparing system to system, where particular subsystems like the CPU/GPU are already significantly more efficient on the ARM SOC, and that remains true even if there are ZERO gains for the rest of the equipment needed to make a traditional notebook/desktop with it.

If anything, it gets more efficient as you integrate more. Distinguishing one as an SOC ignores that X86 has been getting more and more integrated over the years as well. We now have cache, memory controllers, GPUs, and a ton of I/O that was once handled by chipsets or even discrete cards. The entire point of this comparison is that X86, despite all of its advances, is nowhere near the efficiency of something like an A12 at that level of performance, and future Apple CPUs could conceivably replace X86 for desktop and notebook use.
 

Jan Olšan

Senior member
Jan 12, 2017
588
1,156
136
For example Nehalem's core change is minimal, the focus was on the I/O and uncore. Single thread improvement was in the 0-5% range if you ignore Turbo.
I think you are rather mistaken there. Nehalem brought huge core changes. You are aware it implemented SMT (HT) for a rather great MT performance boost? And yeah, that means changing the core from beginning to end.
Also, SIMD performance jumped up a lot; the units were definitely overhauled. Lots of throughput/latency improvements happened there, IIRC.
Another thing I recall: the performance problems Intel cores used to have since forever when handling data accesses straddling a cacheline boundary (or something like that); those were addressed in Nehalem.

If you think Nehalem was Conroe/Penryn's core with an integrated memory controller bolted on, you are very wrong.

Now consider that Tiger Lake was supposed to ALREADY be out, and it's quite clear to me that it's a problem of execution, not of architecture.

I don't think that is accurate. Tiger Lake is the "O" in the (Kung)PAO model, which means it was added to the roadmap when Intel realised they needed 3-year cycles. In other words, its planning already counted on the Kaby Lake delay taking place. Which means that whatever the "original mythical plan before 10nm turned out to be like nuclear fusion" may have been, Tiger Lake could never have been planned to come out sooner than late 2019.

Kaby Lake: 2016/2017 turn of the year
Cannon Lake (replaced by Coffee Lake): 2017/2018 turn of the year
Ice Lake (replaced(?) by whisky/amber/coffee reheat): late 2018
[projected in the plan] Tiger Lake: late 2019 - early 2020

Sure, your point about the future architectures being delayed is valid. But you mustn't overvalue it.
Claiming that if Intel's 10nm were fine, they would have released Superman Lake with 16 cores, a 1W TDP and whatnot two years ago already, that's just another form of those "OMG Apple magic cores" memes, IMHO :) /Once Ocean Cove is mass-produced...!!!/

BTW, I think we should ponder the possibility that the extra time the design teams received when the manufacturing process slipped into the future might have been used to improve the architecture beyond what it would have been in its original form, had it appeared on time. Not saying that's what happened, but it is possible.
 
Last edited:
  • Like
Reactions: ryan20fun

jpiniero

Lifer
Oct 1, 2010
16,948
7,369
136
Kaby Lake was Cannonlake's replacement, not Coffee Lake.

BTW, I think we should ponder the possibility that the extra time the design teams received when the manufacturing process slipped into the future might have been used to improve the architecture beyond what it would have been in its original form, had it appeared on time. Not saying that's what happened, but it is possible.

Rumor is that Icelake (well, the one that is actually going to be released) will include Tigerlake's CPU changes.
 
  • Like
Reactions: CatMerc

Hitman928

Diamond Member
Apr 15, 2012
6,737
12,457
136
Wow. This is ridiculously obtuse.

No, the power supply is not built into the SOC.
Maybe I'm just too obtuse, but I believe this is called a straw man.

No, adding a USB hub with a power supply does not make the CPU less efficient no matter how inefficient you make the system as a whole.
Another straw man.
Also, what makes you think that integrating hardware that’s normally discrete or integrated into another chip on an X86 system would be any less efficient on an SOC?
And another.

Yes, power is affected when you add stuff.

So we're in agreement.

No, that doesn’t mean you lose your efficiency gains over a comparable X86 CPU/GPU.
There's that straw man again.

You are now comparing system to system, where particular subsystems like the CPU/GPU are already significantly more efficient on the ARM SOC, and that remains true even if there are ZERO gains for the rest of the equipment needed to make a traditional notebook/desktop with it.

Clarification: we are comparing SoC to SoC. Also, it depends on how you define efficiency. Efficiency is workload- and target-performance-dependent.

If anything, it gets more efficient as you integrate more.

Again, depends on how you define efficiency.

Distinguishing one as an SOC

They are both SoCs.

ignores that X86 has been getting more and more integrated over the years as well. We now have cache, memory controllers, GPUs, and a ton of I/O that was once handled by chipsets or even discrete cards.

Welcome back to the original argument. Modern x86 cores have significantly more IO functionality integrated into the SoC than modern ARM cores. If ARM cores want to add the same amount of functionality, it will increase the power consumption of the ARM SoC. Not sure why this simple statement of fact caused such a fuss.

The entire point of this comparison is that X86, despite all of its advances, is nowhere near the efficiency of something like an A12 at that level of performance, and future Apple CPUs could conceivably replace X86 for desktop and notebook use.

For mobile loads, ARM has been proven over and over again to be more efficient than competing x86 solutions. For server environments, the results we have show x86 solutions to be significantly more efficient, though in the case of servers, efficiency is even more platform-dependent than in mobile or desktop environments. Desktop is unproven at this point, as there aren't really any ARM desktop solutions for comparison. I look forward to seeing how it develops.
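To make the "it depends" concrete, here is a toy sketch; the two chips, the two tasks, and every joule figure below are invented purely for illustration:

```c
/* Toy sketch, invented numbers only: "which chip is more efficient"
 * can flip depending on the workload being measured. */
#include <stdio.h>

int main(void) {
    /* Hypothetical energy-to-completion figures, in joules. */
    double chip_a[2] = { 40.0, 90.0 };  /* chip A: scalar task, SIMD task */
    double chip_b[2] = { 55.0, 60.0 };  /* chip B: scalar task, SIMD task */
    const char *task[2] = { "scalar", "SIMD" };

    for (int w = 0; w < 2; w++)
        printf("%-6s: A = %4.1f J, B = %4.1f J -> %s is more efficient\n",
               task[w], chip_a[w], chip_b[w],
               chip_a[w] < chip_b[w] ? "A" : "B");
    return 0;
}
```

The same flip happens with target performance: a core that is efficient at its design point can look much worse when pushed up the frequency/voltage curve to hit a higher target.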
 
Last edited:

WelshBloke

Lifer
Jan 12, 2005
33,263
11,400
136
If Andrei and Johan teamed up, and had the time and money for a proper, dead-objective, no-BS x86 vs. ARM battle, it would be the big bang of this decade for the CPU tech community, extending to the entire industry. Now is a good time. Regardless of the results, it would light a fire.
That would be awesome! I guess the problem would be getting performance figures for something that bears some relationship to the real world and managing to do it in an equal way on each platform.
 

Jan Olšan

Senior member
Jan 12, 2017
588
1,156
136
Kaby Lake was Cannonlake's replacement, not Coffee Lake.

They both are, actually. First, the cycle got prolonged by inserting Kaby between Sky and Cannon. But after that, the thus-delayed Cannon Lake was again partially or virtually completely replaced by Coffee Lake chips, because even in the time that Kaby bought for it, the 10nm process was not ready.

And as I said, Tiger Lake is similar to Kaby Lake: it is an addition that padded the tick-tock cycle out into the 3-year PAO. Cannon Lake getting replaced by Coffee Lake is a separate/further roadmap change after that, which is why I put it in the same time slot as Coffee.
Originally, in an even older plan, Cannon Lake was supposed to come right after Skylake, so 2016. But in a timeline with Kaby and Tiger existing, that was no longer true. (Did I explain that properly?)

Rumor is that Icelake (well the one that is going to get released) will include Tigerlake's CPU changes.
Is there any source for that? Or where did the rumour appear?
 

jpiniero

Lifer
Oct 1, 2010
16,948
7,369
136
Is there any source for that? Or where did the rumour appear?

Don't remember, but given the more recent samples that have popped up... the L2 has increased to 512 KB/core, so it's probably true.
 
Mar 10, 2006
11,715
2,012
126
Don't remember, but given the more recent samples that have popped up... the L2 has increased to 512 KB/core, so it's probably true.

How do you know that the move from 256KB/core to 512KB/core wasn't already planned for the ICL core? You can't use the fact that cache size changed between SKL and current ICL to conclude that the ICL that'll be released is really a TGL core.
 

jpiniero

Lifer
Oct 1, 2010
16,948
7,369
136
How do you know that the move from 256KB/core to 512KB/core wasn't already planned for the ICL core? You can't use the fact that cache size changed between SKL and current ICL to conclude that the ICL that'll be released is really a TGL core.

There were earlier ones that were 256 KB/core.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
I think you are rather mistaken there. Nehalem brought huge core changes. You are aware it implemented SMT (HT) for rather great MT performance boost? And yeah, that means changing the core from beginning to end.

Yes, I know there are a lot of little details that changed, and those things are very important, especially for highly threaded, high-memory-bandwidth, or vector-load applications.

In the big picture, however, they don't improve the thing that matters for consumers, which is single-threaded performance. And even in servers, if we were still getting huge boosts from process, there would be a greater focus on getting perf/clock/thread.

Adding SMT, or improved vector units, or a ridiculously fast memory controller is more like adding specialized function units. It becomes harder and harder to improve the overall flow of the architecture, so you address the things that are easier and more cost-effective to do.

Like Nvidia's RTX cards: yeah, they improved rasterization performance, but the real focus is the Tensor and RT cores. Or how mobile chips are adding DL/NN units (or whatever the flavor-of-the-day term for big data is).
 

CZroe

Lifer
Jun 24, 2001
24,195
857
126
Maybe I'm just too obtuse, but I believe this is called a straw man.


Another straw man.

And another.



So we're in agreement.


There's that straw man again.



Clarification: we are comparing SoC to SoC. Also, it depends on how you define efficiency. Efficiency is workload- and target-performance-dependent.



Again, depends on how you define efficiency.



They are both SoCs.



Welcome back to the original argument. Modern x86 cores have significantly more IO functionality integrated into the SoC than modern ARM cores. If ARM cores want to add the same amount of functionality, it will increase the power consumption of the ARM SoC. Not sure why this simple statement of fact caused such a fuss.



For mobile loads, ARM has been proven over and over again to be more efficient than competing x86 solutions. For server environments, the results we have show x86 solutions to be significantly more efficient, though in the case of servers, efficiency is even more platform-dependent than in mobile or desktop environments. Desktop is unproven at this point, as there aren't really any ARM desktop solutions for comparison. I look forward to seeing how it develops.
LOL! It gets even MORE ridiculous: I was biting my tongue to keep myself from calling out your strawman only to see you turn around and throw that same word at me.

Ever see the guy with notoriously “stank breaf” tell someone else “yo breaf stanks” when he gets mad? Yeah: there’s a reason for that.

Strawman: an intentionally misrepresented proposition that is set up because it is easier to defeat than an opponent's real argument

This started when Denly asked...
I only read till page 4, forgive me if this was asked. What will happen to the A12 if it needs to support things like an x86 CPU does? 6 SATA, 6-10 USBs, 2-3 screens and a few PCIe?
...and CatMerc answered:
The only effect it would have is on power consumption. That's significantly more I/O to power than mobile devices usually have. I wouldn't be surprised if the overall power consumption exceeded the A12's consumption in the iPhone.
I *correctly* responded to say that there is no reason why adding a bunch of I/O, like USB and PCIe, would be any more costly for the A12 with an external chip than it would be on an X86 chip with an external chip, because X86 still adds them via an external bus (DMI for the latest Intel CPUs). They would BOTH make up for the lack of general I/O with external devices on the rest of the board while retaining their inherent performance and efficiency advantages/disadvantages for comparable functionality at a given performance level.

...then you show up with some baseless and irrelevant distinction (“SOC”) which you use to try and dismiss the truth I just told you:
These are SOCs, where the functionality mentioned is integrated into the CPU itself. So yes, power consumption is affected.
*facepalm*
“The functionality mentioned” was, and I quote, “6 SATA, 6-10 USBs, 2-3 screens and a few PCIe.” For X86, this I/O is all on the motherboard chipset:
[image: Intel desktop CPU and chipset block diagram]

...not the SOC. The rest of your posturing about “strawman” this and “strawman” that boils down to you using “SOC” to refer to the X86 CPU only, where I more logically assumed you were talking about Apple’s chip.

About the only place where you might think you have a point is that the latest X86 CPUs have PCIe lanes for graphics without going through the chipset, which is irrelevant for a theoretical A12-successor-based iMac or MacBook that is relying on the integrated GPU specifically to maintain an efficiency advantage (power to performance). Considering that their current GPUs already outclass any other iGPUs, they could easily integrate a GPU with enough performance for “2-3 screens.” That isn’t a big consideration for selling iMacs and MacBooks, but who knows? Maybe it will be for future iMac Pros or something.

I think it’s weird that you specifically try to distinguish X86 CPUs as SOCs when, traditionally, ARM-based SOCs have been significantly more highly integrated. Heck, some even throw RAM into the package. If anything, an Apple SOC that rivals X86 in future iMacs and MacBooks would likely be even more highly integrated than a similar-performing X86 system.
 

DrMrLordX

Lifer
Apr 27, 2000
23,037
13,133
136
That would be awesome! I guess the problem would be getting performance figures for something that bears some relationship to the real world and managing to do it in an equal way on each platform.

My only concern is that the best ARM design out there right now - Apple's A12 - is locked into a hardware/software platform that is resistant to independent testing at the hardware level. Can you install Linux on an A12-based iPhone? It's pretty easy to install Linux on machines using other ARM SoCs.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
My only concern is that the best ARM design out there right now - Apple's A12 - is locked into a hardware/software platform that is resistant to independent testing at the hardware level. Can you install Linux on an A12-based iPhone? It's pretty easy to install Linux on machines using other ARM SoCs.

You do not need Linux to evaluate the efficiency of a CPU. Just make sure, first, that the code you intend to run actually runs on the core, and second, that you have a means to measure power and runtime. Neither requires Linux nor any other fancy environment.
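A sketch of how little is needed (the kernel below is a stand-in for whatever code you actually want to evaluate, and the 4.2 W figure is a hypothetical reading from an external power meter):

```c
/* Minimal timing harness: no Linux, no OS perf counters.
 * Energy (J) = average power (W, read off an external meter) x runtime (s). */
#include <stdio.h>
#include <time.h>

/* Stand-in compute kernel; substitute the code you intend to evaluate. */
static double workload(void) {
    double acc = 0.0;
    for (long i = 1; i <= 50000000L; i++)
        acc += 1.0 / ((double)i * (double)i);
    return acc;  /* using the result keeps the loop from being optimized away */
}

int main(void) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);  /* available on iOS, macOS, Linux */
    double result = workload();
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (double)(t1.tv_sec - t0.tv_sec)
                + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
    double meter_watts = 4.2;  /* hypothetical external meter reading */
    printf("result=%.6f runtime=%.3f s -> energy ~ %.1f J\n",
           result, secs, meter_watts * secs);
    return 0;
}
```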
 
Last edited:

NTMBK

Lifer
Nov 14, 2011
10,480
5,897
136
LOL! It gets even MORE ridiculous: I was biting my tongue to keep myself from calling out your strawman only to see you turn around and throw that same word at me.

Ever see the guy with notoriously “stank breaf” tell someone else “yo breaf stanks” when he gets mad? Yeah: there’s a reason for that.

Strawman: an intentionally misrepresented proposition that is set up because it is easier to defeat than an opponent's real argument

This started when Denly asked...

...and CatMerc answered:

I *correctly* responded to say that there is no reason why adding a bunch of I/O, like USB and PCIe, would be any more costly for the A12 with an external chip than it would be on an X86 chip with an external chip, because X86 still adds them via an external bus (DMI for the latest Intel CPUs). They would BOTH make up for the lack of general I/O with external devices on the rest of the board while retaining their inherent performance and efficiency advantages/disadvantages for comparable functionality at a given performance level.

...then you show up with some baseless and irrelevant distinction (“SOC”) which you use to try and dismiss the truth I just told you:

*facepalm*
“The functionality mentioned” was, and I quote, “6 SATA, 6-10 USBs, 2-3 screens and a few PCIe.” For X86, this I/O is all on the motherboard chipset:
[image: Intel desktop CPU and chipset block diagram]

...not the SOC. The rest of your posturing about “strawman” this and “strawman” that boils down to you using “SOC” to refer to the X86 CPU only, where I more logically assumed you were talking about Apple’s chip.

About the only place where you might think you have a point is that the latest X86 CPUs have PCIe lanes for graphics without going through the chipset, which is irrelevant for a theoretical A12-successor-based iMac or MacBook that is relying on the integrated GPU specifically to maintain an efficiency advantage (power to performance). Considering that their current GPUs already outclass any other iGPUs, they could easily integrate a GPU with enough performance for “2-3 screens.” That isn’t a big consideration for selling iMacs and MacBooks, but who knows? Maybe it will be for future iMac Pros or something.

I think it’s weird that you specifically try to distinguish X86 CPUs as SOCs when, traditionally, ARM-based SOCs have been significantly more highly integrated. Heck, some even throw RAM into the package. If anything, an Apple SOC that rivals X86 in future iMacs and MacBooks would likely be even more highly integrated than a similar-performing X86 system.

x86 SoCs exist, and are sold very widely.

[image: Intel Apollo Lake SoC block diagram]
 

Nothingness

Diamond Member
Jul 3, 2013
3,334
2,418
136
You do not need Linux to evaluate the efficiency of a CPU. Just make sure, first, that the code you intend to run actually runs on the core, and second, that you have a means to measure power and runtime. Neither requires Linux nor any other fancy environment.
iOS has some restrictions, such as not allowing runtime code generation. This will limit the benchmarks you can run.
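For example, the one operation every JIT needs is a page that is both writable and executable, and that request is denied to ordinary third-party apps on iOS. A minimal probe (it should succeed on Linux or macOS):

```c
/* Probe for runtime code generation: request a writable+executable page.
 * On stock iOS, code signing denies RWX mappings to normal apps, which is
 * why JIT-based benchmarks (JS engines, emulators) can't run there as-is. */
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                      MAP_PRIVATE | MAP_ANON, -1, 0);
    if (page == MAP_FAILED) {
        perror("mmap RWX");  /* the expected outcome on iOS */
        return 1;
    }
    puts("got RWX memory; a JIT could emit and run code here");
    munmap(page, 4096);
    return 0;
}
```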
 

CZroe

Lifer
Jun 24, 2001
24,195
857
126
x86 SoCs exist, and are sold very widely.

[image: Intel Apollo Lake SoC block diagram]
Right. I’m calling him out for assuming that SOC means X86 any more than it means ARM, as if “SOC” were more synonymous with X86 than with the various ARM SOCs out there. To be properly understood, his usage required me to share that incorrect assumption.
 

Hitman928

Diamond Member
Apr 15, 2012
6,737
12,457
136
LOL! It gets even MORE ridiculous: I was biting my tongue to keep myself from calling out your strawman only to see you turn around and throw that same word at me.

Ever see the guy with notoriously “stank breaf” tell someone else “yo breaf stanks” when he gets mad? Yeah: there’s a reason for that.

Ok then...

This started when Denly asked...
I only read till page 4, forgive me if this was asked. What will happen to the A12 if it needs to support things like an x86 CPU does? 6 SATA, 6-10 USBs, 2-3 screens and a few PCIe?

...and CatMerc answered:
The only effect it would have is on power consumption. That's significantly more I/O to power than mobile devices usually have. I wouldn't be surprised if the overall power consumption exceeded the A12's consumption in the iPhone.

I *correctly* responded to say that there is no reason why adding a bunch of I/O, like USB and PCIe, would be any more costly for the A12 with an external chip than it would be on an X86 chip
But who made the argument that adding this stuff would be more costly for the A12 than for x86? I don't see that argument anywhere in your quoted posts or in this thread...

because X86 still adds them via an external bus (DMI for the latest Intel CPUs). They would BOTH make up for the lack of general I/O with external devices on the rest of the board while retaining their inherent performance and efficiency advantages/disadvantages for comparable functionality at a given performance level.
x86 covers quite a few different processors with quite a few different levels of integration. The most modern x86 SoCs integrate much more IO than the most modern ARM SoCs. The question was about what happens if the A12 adds functionality to match.


...then you show up with some baseless and irrelevant distinction (“SOC”) which you use to try and dismiss the truth I just told you:

SoC is just the general industry name for a processor that incorporates IO functionality. Not sure why using that term would be baseless in a discussion about integrating more IO into a processor. Either way it's just a label.

*facepalm*
“The functionality mentioned” was, and I quote, “6 SATA, 6-10 USBs, 2-3 screens and a few PCIe.” For X86, this I/O is all on the motherboard chipset:
a9968cdb6127571553accc338cf6a4aa.png

...not the SOC. The rest of your posturing about “strawman” this and “strawman” boils down to you using “SOC” to refer to the X86 CPU only where I more logically assumed you were talking about Apple’s chip.

You're giving one example of one x86 processor line. AMD has more IO integrated, as do some Intel processors. Either way, I didn't take the enumerated list as a strict requirement but rather as a general statement of the IO found in the most modern x86 processors, which includes things like a few PCIe lanes, etc. If we want to take it as a strict list, there still exist mobile versions of Intel processors which include the listed IO functionality, or nearly all of it. Even if we only use the latest Intel desktop processors, which is what your block diagram is for, you still have many of the listed elements integrated into the processor itself, and those that aren't go over a DMI link. That DMI link still takes additional power to support on the processor itself. You still need to add IO drivers, buffers, clocking elements, and most likely some muxing and decoding. All of that takes power. Not as much as full integration, but it still takes power to drive and process an IO bus. Again, it was a simple statement that adding additional IO to the A12 processor (or any processor) will increase the power needed by that processor. That's it. I still don't know why it got blown up into such a big deal.
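As a back-of-envelope sketch of how that adds up against a phone-class power budget (every per-link wattage below is an invented placeholder, not a datasheet value):

```c
/* Back-of-envelope only: all per-link wattages are invented placeholders.
 * The point is just that the PHYs for "6 SATA, 6-10 USBs, 2-3 screens and
 * a few PCIe" are a non-trivial addition to a phone-class SoC power budget. */
#include <stdio.h>

int main(void) {
    struct { const char *io; int count; double watts_each; } phy[] = {
        { "SATA link",      6, 0.10 },  /* placeholder W per active link */
        { "USB 3 port",     8, 0.15 },
        { "display output", 3, 0.20 },
        { "PCIe lane",      8, 0.08 },
    };
    double total = 0.0;
    for (size_t i = 0; i < sizeof phy / sizeof phy[0]; i++) {
        double w = phy[i].count * phy[i].watts_each;
        printf("%-15s x%d ~ %.2f W\n", phy[i].io, phy[i].count, w);
        total += w;
    }
    printf("total added IO (if actively driven): ~%.2f W\n", total);
    return 0;
}
```

Real PHYs can idle far lower with link power management, but the drive and clocking overhead is never zero, which was the whole point.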

About the only place where you might think you have a point is where the latest X86 CPUs would have PCIe lanes for graphics without going through the chipset, which is irrelevant for a theoretical A12 successor-based iMac or MacBook that is relying on the integrated GPU specifically to maintain an efficiency advantage (power to performance).

PCIe lanes are used for more than graphics. Either way, you're going away from the question again.

Considering that their current GPUs already outclass any other iGPUs, they could easily integrate a GPU with enough performance for “2-3 screens.” That isn’t a big consideration for selling iMacs and MacBooks, but who knows? Maybe it will be for future iMac Pros or something.

Not sure Apple's latest GPU outclasses Vega 11 or the Volta GPU in Xavier, not in outright performance anyway. Possibly in efficiency, but I haven't really seen any such comparisons. Either way, adding support for 2-3 screens isn't so much about how powerful the GPU is (anymore), but again about the IO circuitry needed to drive them. If you need to add that circuitry, you will increase the power required to drive it.

I think it’s weird that you specifically try to distinguish X86 CPUs as SOCs when, traditionally, ARM-based SOCs have been significantly more highly integrated.
Never did any such thing.
 

Hitman928

Diamond Member
Apr 15, 2012
6,737
12,457
136
Right. I’m calling him out for assuming that SOC means X86 any more than it means ARM, as if “SOC” were more synonymous with X86 than with the various ARM SOCs out there. To be properly understood, his usage required me to share that incorrect assumption.

Please show where I did this. In fact, in my second post I explicitly stated that modern x86 cores and modern ARM cores are both SoCs.

They are both SoCs.

They have different levels of integration, but they are both SoCs.

The original question was very simple. As I read it, boiled down but kept in context, it was: what would happen if Apple integrated all the IO that regular x86 desktop systems typically provide?

Answer: A12 would need more power to support the additional circuitry to drive / process the added IO.

I think everyone agrees with this statement, so I don't understand why it didn't just end there. Instead you went off about efficiency versus x86, how it's not needed anyway, and "stank breaf".
 
Last edited:

CZroe

Lifer
Jun 24, 2001
24,195
857
126
Please show where I did this. In fact, in my second post I explicitly stated that both modern x86 cores and modern ARM cores are both SoCs.

It sure sounded like you were saying it right here:
These are SOCs, where the functionality mentioned is integrated into the CPU itself. So yes, power consumption is affected.
I read it as “Intel CPUs are SOCs where the functionality mentioned is integrated into the CPU itself,” because, obviously, you are saying in the next line that it has to be added to an Apple chip, which I read as “So yes, power consumption is affected when you add these to an Apple CPU.”

Sure sounded like you were trying to create a distinction without a difference to apply “SOC” to Intel and imply that Apple will have to add I/O features to become an SOC themselves. I guess you meant to say that it has to add such features to become a comparable SOC.

I look at it the exact opposite way: an existing Intel CPU of comparable performance would have a less-capable GPU and be far less efficient, whereas the Apple chip could gain most of the required I/O through something like adding USB hubs. If it needed to add a second on-die GPU or something to meet those requirements, then there is no reason to believe that the efficiency wouldn’t scale with the performance.

After all, Apple isn’t using Atom and Celeron in their Mac Pros and servers. Comparing apples to apples, Apple would be looking at how X86 fits their current product line to determine if a switch to a desktop/notebook/server-oriented A12 (or A12 successor) is prudent. That means we need to be looking at the current-gen i5 and i7 CPUs and the like, comparing them to the A12, and asking what it would take to make the A12 even more comparable. Last I checked, iPads and iPhones supported USB and driving external displays without adding any I/O, so they already have external busses and display controllers connected to their SOC. Sure, it’s no DMI bus, but any efficiency loss from adding ports over that bus/those busses would not erase the efficiency advantage the A12 already enjoys.

Speaking of efficiency and performance, you asked me to define these for reference. I keep referring to the A12 CPU we have, compared to the most similar-performing Intel CPUs (as CPUs, not as GPUs), since the thought experiment is specifically about the possibility of Apple using this as a desktop, notebook, or server CPU in their other products. The comparable Intel CPUs invariably have significantly higher power requirements and significantly lower GPU performance, while carrying excessive and power-hungry I/O that could be totally unneeded in an iMac or MacBook. If Apple ramps this up with an actively cooled A12 with more cores and GPUs, then the gulf will likely widen. The fact that adding additional I/O draws more power is essentially irrelevant if they weren’t going to add it for an iMac or MacBook anyway. Their CPUs don’t need to be nearly as flexible or general-purpose when they are only used in their own devices.

If Apple does this, it will be shifting their desktop and notebook platforms to be even more iPad-like than they already are. We will likely lose discrete GPUs and modular internal storage altogether and use USB-C for everything (not even Thunderbolt). Heck, we already lost modular internal storage on several of their Intel-based systems. The 2015 MacBook already uses USB-C for everything else. Methinks they’ve already shifted so far that the move to ARM will be less painful than the move from PowerPC to X86 was, especially with the success of the App Store and its ability to enforce that all accepted applications run on both platforms (exactly what Microsoft tried to do with the failed Windows Store and Windows RT).
 
Last edited:
  • Like
Reactions: Etain05

Nothingness

Diamond Member
Jul 3, 2013
3,334
2,418
136
Answer: A12 would need more power to support the additional circuitry to drive / process the added IO.

I think everyone agrees with this statement, so I don't understand why it didn't just end there. Instead you went off about efficiency versus x86, how it's not needed anyway, and "stank breaf".
I disagree: if that IO is not used, a properly designed SoC won't consume more just because the handling of the IO is on-chip. And if you start using the IO, then you're not measuring CPU efficiency alone anymore.