Does anyone think we'll see ARM replace x86 in desktops?

Page 4 - AnandTech Forums

Desktop processors in near future: ARM or x86?

  • x86

  • ARM

  • See my comment



Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
I actually see Atom SoCs overtaking ARM in the mobile space given time. Intel still has tons of room to optimize x86 processors into <4.5 W TDPs

They couldn't make enough inroads into mobile when they had a quite competitive CPU design, a bigger process advantage, and a huge contra-revenue program.

Since then they've provided one lacklustre CPU iteration while their competitors have pushed forward much more aggressively. Their 14nm Atom with integrated wireless was supposed to be out forever ago. Their 28nm Rockchip SoC looks pretty miserable. They've lost any hope of going anywhere with partners like Nokia who have themselves tanked, and Windows' phone prospects are the worst they've ever been.

It's only going to get harder from here.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
For Itanium code at least, icc does generally generate better code than gcc.

Well that's good news for all three of its remaining users ;)

Itanium is very hard to compile for; Intel must have invested a tremendous amount in ICC to make the processor viable at all. GCC and LLVM, on the other hand, have very little incentive to go through the same effort.

I can say something similar about TI's C6x DSPs. It's another VLIW that needs very special care in the compiler to get good results. GCC does much, much worse than TI's compiler. But that doesn't mean TI has special compiler technology that can give amazing increases across the board for x86 or ARM. Same thing for ICC.

I'm sure there are places where ICC does legitimately do better but from everything I've seen it's just not the magic compiler many make it out to be unless you're the maintainer of libquantum.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
If any of the ARM players is able to field a good but cheap octa-core, I think there is a chance.

Reason: games and NAS workloads scale well to eight cores, and Intel has been weak on performance per dollar over the last few years.

See below for examples of the lowest-end desktop Pentiums from Sandy Bridge onwards:

G620 (SB 2.6 GHz, $64, Q2 2011) http://ark.intel.com/products/53480/Intel-Pentium-Processor-G620-3M-Cache-2_60-GHz
G2010 (IV 2.8 GHz, $64, Q1 2013) http://ark.intel.com/products/71071/Intel-Pentium-Processor-G2010-3M-Cache-2_80-GHz
G3220 (HSW 3.0 GHz, $64, Q3 2013) http://ark.intel.com/products/77773/Intel-Pentium-Processor-G3220-3M-Cache-3_00-GHz
G4400 (SKL 3.3 GHz, $64, Q3 2015) http://ark.intel.com/products/88179/Intel-Pentium-Processor-G4400-3M-Cache-3_30-GHz

So for the same $64, a person 4 years and 1 quarter later gained 700 MHz and ~25% higher IPC, along with an iGPU that doubled the number of EUs from 6 to 12 (accompanied by architectural improvements). AES-NI and VT-d were added with Skylake. This is a weaker performance-per-dollar improvement compared to what we have seen in other technology sectors (SSDs, memory... even video cards).
 
Last edited:
Mar 10, 2006
11,715
2,012
126
They couldn't make enough inroads into mobile when they had a quite competitive CPU design, a bigger process advantage, and a huge contra-revenue program.

Since then they've provided one lacklustre CPU iteration while their competitors have pushed forward much more aggressively. Their 14nm Atom with integrated wireless was supposed to be out forever ago. Their 28nm Rockchip SoC looks pretty miserable. They've lost any hope of going anywhere with partners like Nokia who have themselves tanked, and Windows' phone prospects are the worst they've ever been.

It's only going to get harder from here.

Damn this is a good post. Spot on!
 

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
If any of the ARM players is able to field a good but cheap octa-core, I think there is a chance.

Reason: games and NAS workloads scale well to eight cores, and Intel has been weak on performance per dollar over the last few years.

Game engines really don't scale well at all to 8 cores. Maybe Crysis 3. That's really about it.

Neither smbd nor nmbd scale to multiple cores for a single connection either, and mostly with good reasons. They scale well to multiple cores for multiple connection...is that what you mean?
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Game engines really don't scale well at all to 8 cores. Maybe Crysis 3. That's really about it.

Battlefield 4 with 64 players scales well to eight cores, and so do the following games:

[Image links: gamegpu CPU scaling charts for Fallout 4, Dragon Age: Inquisition (DX11 and Mantle) and Assassin's Creed Syndicate, showing FPS by processor]


With that mentioned, the FX-8350 does have a higher clock speed than the FX-6300, so there will be some FPS gain from that alone.
 
Last edited:

Shivansps

Diamond Member
Sep 11, 2013
3,918
1,570
136
Developers can do this.. but will all of them? Supporting Win 10 Universal is kind of a pain for a lot of developers (I know I'm not that thrilled about having to compile in Visual Studio), and no matter how much it might seem like something is portable serious commercial developers can't really afford to release a binary for a platform without properly testing for it. Little things that are undefined in the language can end up biting you, for example I've seen discrepancies due to different shift behavior going from x86 to ARM. And even though it's often spoken of as obsolete today there is still a non-negligible amount of hand-written assembly or intrinsics floating around, including in some popular middleware.

Fat binaries have been the standard on Android for about as long as the NDK was around, and x86 got first class support here before they really became viable in the market. Despite that, a few years later a lot of the most popular apps still lacked compile targets for x86.

The fact remains that today devs can target Windows 10 Universal apps, and you only have to pick ARM, x86 or x64 when compiling... if you pick ARM it can run on Windows 10 Phone (ARM) and Windows 10 IoT (ARM, e.g. Raspberry Pi); if you pick x86 it can run on the desktop, Windows 10 Phone (are there any?) and Windows 10 IoT (e.g. MinnowBoard MAX)...

That's a major advantage of Windows 10 Universal apps, and it allows for Windows 10 on ARM.

Yes, it still has a cost because you must do testing, I know.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
The fact remains that today devs can target Windows 10 Universal apps, and you only have to pick ARM, x86 or x64 when compiling... if you pick ARM it can run on Windows 10 Phone (ARM) and Windows 10 IoT (ARM, e.g. Raspberry Pi); if you pick x86 it can run on the desktop, Windows 10 Phone (are there any?) and Windows 10 IoT (e.g. MinnowBoard MAX)...

That's a major advantage of Windows 10 Universal apps, and it allows for Windows 10 on ARM.

I understand, but CPU architecture is not the only thing that gets in the way of portability. Right now ARM on Windows is a minority platform; most of the people interested in developing for mobile devices are doing so on Android and iOS.

If I want to write a program that runs on both Android and iOS I can do so by making the meat of it in C/C++ that compiles on both. I can use similar build systems to do so, and the code will for the most part compile for both without any surprises. Same thing for compiling on standard Linux, or on Windows under something like MinGW or Cygwin.

But if I now want to add support for Windows 10 Universal, it means I have to use Visual Studio. That means I have to port the build system to work with it; it means various compiler language extensions and newer standard features that are widely supported in other compilers won't be there. It means different formats for inline assembly and separate assembly files. And sometimes other surprises.

Like usual, Microsoft has to defy anything resembling a de facto standard anywhere else and make you conform to all their ways of doing things differently. But unlike usual, they are the underdog, so that just gives people another reason not to start using them.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
But if I now want to add support for Windows 10 Universal, it means I have to use Visual Studio. That means I have to port the build system to work with it; it means various compiler language extensions and newer standard features that are widely supported in other compilers won't be there. It means different formats for inline assembly and separate assembly files. And sometimes other surprises.

It is not that bad. We compile most of our sources with ARMCC, ICC (GCC compatibility mode) and MSVC without major issues. We typically use GNU make in conjunction with all compilers, which means Cygwin when building under Windows.
Your best bet is to use standard language features and not any proprietary GCC or MSVC extensions. The most useful parts of the C11/C++11 standards are supported by both compilers.
When targeting ARM, the syntax of assembly files and the pseudo-opcodes of ARMCC and MSVC are very similar.
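A minimal sketch of the kind of compiler-agnostic GNU make setup described here (all names are illustrative, and the MSVC/ARMCC flags shown are just common defaults, not a tested configuration):

```make
# Pick the toolchain on the command line: make TOOLCHAIN=msvc, etc.
TOOLCHAIN ?= gcc

ifeq ($(TOOLCHAIN),msvc)
  CC     = cl
  CFLAGS = /nologo /W3
  OUT    = /Fe:
else ifeq ($(TOOLCHAIN),armcc)
  CC     = armcc
  CFLAGS = --c99 -O2
  OUT    = -o
else                       # gcc, clang and icc all take the same basic flags
  CC     = $(TOOLCHAIN)
  CFLAGS = -std=c11 -Wall -O2
  OUT    = -o
endif

app: main.c
	$(CC) $(CFLAGS) main.c $(OUT)app
```

The point is that only the variables change per compiler; the rules and dependency graph stay identical across all three toolchains.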
 
May 11, 2008
22,566
1,472
126
No, x86(-64) is impressively powerful. And with all the extensions to the instruction set like SSE and AVX, I do not think x86 will be replaced. x86 will just evolve into a more streamlined instruction set if ARM comes to the desktop. But for ARM to be relevant on the desktop, it has to emulate x86 instructions, because there is a whole ecosystem of x86 applications. For that to happen, ARM will have to outperform x86 cores, which is difficult since x86 cores also internally use a mixed CISC/RISC execution model. And Intel especially will not give up that easily. When ARM and x86 go head to head, x86 manufacturers have more experience solving problems at high clock speeds.
 
Aug 11, 2008
10,451
642
126
If it could sell the next iPhone, sure :)

I don't doubt for a second that Apple would sacrifice its own semi division any day if it could sell more phones. We already know Samsung does it.

But at this stage it's purely theoretical, since Intel would have to make a much better product for Apple to switch to it from ARM. And the other ARM companies aren't likely to overtake Apple there either. But make no mistake, the goal is to sell the phones, not the chips.

Yea, but this is like saying Ferrari is going to replace their 12 cylinder, quad overhead cam, 4 valve per cylinder fuel injected engines with a six cylinder Buick engine from 1965 so they can sell more cars.
 

HeXen

Diamond Member
Dec 13, 2009
7,837
38
91
I'm not afraid to post my ignorance, but just want to say that most of the words used here outside of ARM and x86 were completely foreign to me. All I care to know is two things: which platform is going to yield 60 fps or higher in my games, and which platform is going to yield the most battery life.

Since I know the answer to each, it's why I own each processing platform. That's likely all consumers really give a shit about.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
It is not that bad. We compile most of our sources with ARMCC, ICC (GCC compatibility mode) and MSVC without major issues. We typically use GNU make in conjunction with all compilers, which means Cygwin when building under Windows.
Your best bet is to use standard language features and not any proprietary GCC or MSVC extensions. The most useful parts of the C11/C++11 standards are supported by both compilers.
When targeting ARM, the syntax of assembly files and the pseudo-opcodes of ARMCC and MSVC are very similar.

We've always used GCC/GAS for assembly files, in part because it was the only real assembler that supported both x86 and ARM before MS started doing it. Obviously you can't have the same files assembled for two different architectures, but it's still a burden to write for multiple syntaxes and bring in different assemblers depending on what you're building for.

And there's a lot that has to be changed to get our stuff assembled by MASM now, especially since I didn't use .intel_syntax (which I don't think was around until fairly recently). And I use the C preprocessor on the assembly (a standard option with GCC .S files); even with a custom pass in a Visual Studio build the preprocessor expansion doesn't work out the same, so we'd probably need to run our own preprocessor outright to make this work.

No idea about ARMCC; even ARM themselves seem to have moved away from promoting it these days... There has never been a precedent for using it on Android; this is not something I'm aware of anyone doing.

Sometimes compiler extensions are important for performance or other issues, and while they're technically proprietary, most other compilers (including ICC) make an effort to support them. It's really just MSVC that's taken a strong stance against this, because of course they don't want to be compatible; they don't want to make it easier to migrate away from them. But with ARM they should really be thinking about the other direction.
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
Intel syntax was always the standard for DOS/Windows assembly. AT&T syntax always made more sense to me.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
With regard to the game scaling mentioned earlier, it does appear Ashes of the Singularity scales well to 16 threads (pretty even distribution of work, looking at the CPU utilization chart for the i7-5960X):

[Image links: gamegpu Ashes of the Singularity charts showing per-core CPU utilization on the i7-5960X and FPS by Intel processor]
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Again you forget thread jumping. And it certainly doesn't scale well if that's the case. See 4330/4670/4770. It's rather case and simple clock.
 
Last edited:

cbn

Lifer
Mar 27, 2009
12,968
221
106
Again you forget thread jumping. And it certainly doesn't scale well if that's the case. See 4330/4670/4770. It's rather case and simple clock.

Perhaps a better test would be to take 8C/16T and some other core configurations (i.e., 4C/8T, 4C/4T, 2C/4T) and reduce clocks to make the single thread a bottleneck. (A downclocked i7-5960X could be used for this purpose.)

Then, with single-thread processing power as a bottleneck (as it would be in an ARM chip), test again to see how well the FPS scales.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Case should be cache.

And yes, take a 5960X. Run from 1 to 8 cores, HT disabled.

For scaling up to eight threads, the FX-8350 scales 31% faster in DX11 than the FX-6300 in Ashes of the Singularity.

This is accompanied by a 14% faster clock on the FX-8350.

Assuming perfect scaling, an 8C/8T chip should be 33% faster than a 6C/6T chip at the same clocks. So the gain is not bad.

With that mentioned, what I'm really interested in is 16T scaling. And for that, perhaps Assassin's Creed is an even better game to use, assuming it can really distribute work evenly, as indicated by the following CPU utilization chart:

[Image link: gamegpu Assassin's Creed Syndicate per-core CPU utilization chart for Intel processors]
 
Last edited:

B-Riz

Golden Member
Feb 15, 2011
1,595
765
136
Hooray for another one of *these* threads.

ARM and x86 are instruction sets. Given that x86 has a massive head start, and that most of the back-end infrastructure serving dumb devices like ARM-powered internet phones runs on x86 hardware and software, x86 will never be replaced.

Also, ARM hardware designed to perform like x86 hardware will use about the same amount of power.

Physical chips are designed for a use case; you cannot extrapolate that a chip designed for low power will stay low-power while performing like an i7.

Read the link OP.

http://research.cs.wisc.edu/vertical/papers/2013/isa-power-struggles-tr.pdf

Key Finding 8: The choice of power or performance optimized core designs impacts core power use more than ISA.

Key Finding 9: Since power and performance are both primarily design choices, energy use is also primarily impacted by design choice. ISA's impact on energy is insignificant.

Key Finding 10: Regardless of ISA or energy-efficiency, high-performance processors require more power than lower performance processors. They follow well established cubic power/performance trade-offs.

Finding T5: Balancing power and performance leads to energy-efficient cores, regardless of the ISA: A9 and Atom processor energy requirements are within 24% of each other and use up to 50% less energy than other cores.

Key Finding 11: It is the micro-architecture and design methodologies that really matter.