Why doesn't Intel use its superior manufacturing process and make an ARM SoC?

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
But would it? I haven't looked super closely at the x86 architecture. Aren't there a lot of inefficiencies going on, or... no? Something about it being made of RISC now, or something.

That's just marketing getting to you. I believe x86 is regularly described as CISC, but when you look at what the processor is actually doing, it takes a lot of the goodness of RISC into its design. It's not a 100% pure CISC processor anymore.
 
Dec 30, 2004
12,553
2
76
The x86 ISA is, for all practical purposes, decoupled from the architecture of the chips themselves. Intel and AMD simply use a decoder front-end to combine and break up operations as necessary. Intel can change the backend practically at will, because it just decodes instructions for whatever new architecture it uses, and indeed the rumors are that Haswell will be that kind of leap. In any case, a decoder combined with out-of-order execution means they can deal well with even terrible x86 code.

The only place the ISA matters is for parallel operations (SSE, AVX), as those operations have to be explicitly bundled together. And no one is complaining about those instruction sets anyhow.
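To make that concrete, here's a toy sketch of the idea. The instruction forms and micro-op names are entirely made up (a real decoder works on binary encodings, not strings), but it shows how a front-end can crack a CISC-style read-modify-write instruction into RISC-like micro-ops:

from dataclasses import dataclass

@dataclass
class MicroOp:
    op: str       # "load", "store", or an ALU op like "add"
    dest: str     # destination register or memory address
    srcs: tuple   # source operands

def decode(instr: str) -> list:
    """Crack one x86-style instruction into micro-ops.
    Handles just two illustrative forms:
      "add [addr], reg" -> load / add / store (read-modify-write memory op)
      "add reg1, reg2"  -> a single add uop   (already RISC-like)
    """
    mnemonic, operands = instr.split(maxsplit=1)
    dst, src = [s.strip() for s in operands.split(",")]
    if dst.startswith("["):                              # memory destination: crack it
        addr = dst.strip("[]")
        return [MicroOp("load", "tmp", (addr,)),         # read memory into a temp
                MicroOp(mnemonic, "tmp", ("tmp", src)),  # do the ALU work
                MicroOp("store", addr, ("tmp",))]        # write the result back
    return [MicroOp(mnemonic, dst, (dst, src))]          # simple reg-reg op

print(decode("add [0x1000], eax"))   # cracks into three uops
print(decode("add ebx, eax"))        # stays a single uop

Past this point the backend only ever sees the simple uops, which is why the backend can be redesigned without touching the ISA.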

Sounds like it really is all hype, then.

Good to know. So we just need to look at IPC and clock speed, and we can figure out whether the latest ARM chips are encroaching on Intel's territory.
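As a rough sanity check, single-thread performance scales roughly as IPC times clock. The figures below are invented for illustration, not benchmarks, and comparing IPC across different ISAs is only meaningful if the chips execute similar instruction counts for the same work:

intel_ipc, intel_ghz = 2.0, 3.4   # assumed desktop-class chip, hypothetical
arm_ipc, arm_ghz     = 1.0, 2.0   # assumed phone-class chip, hypothetical

intel_perf = intel_ipc * intel_ghz   # 6.8 "performance units"
arm_perf   = arm_ipc * arm_ghz       # 2.0
print(arm_perf / intel_perf)         # ~0.29x per thread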

I still think a quad-core 2 GHz ARM chip with OoOE and a GPU that can handle desktop resolutions is all we need to unseat Intel from the desktop market. Just plug the phone into a monitor and go.
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
x86 outperforms or competes with almost all other high-performance designs, even on a performance-per-watt basis. It's a bit of a jack of all trades, master of none. Even in terms of low-power or small-die-size designs, Atom and Zacate don't compare too poorly to other architectures. (I wouldn't be surprised to see Atom's successor actually make a play for a large portion of the laptop market as well.)
Intel even made a decent showing of extending x86 to graphics with Larrabee and Knights Corner. It's not great, but considering how poor the x86 ISA SHOULD be for that workload, it still lands at a third to a half of NVIDIA's performance. And that's pretty much a worst-case scenario for Intel: 33% to 50% of the performance of a comparable design built on an ISA better suited to the task.
Intel's vastly superior fabrication processes pretty much mean it could force x86 into almost any market. Hypothetically, x86 could have 20% lower code density, 20% lower performance per clock, and 20% worse power consumption, yet with Intel's manufacturing advantage they could give it 50% more cache, clock it 50% higher, and spend transistors on all sorts of power gating to reduce typical power consumption.
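To put numbers on that hypothetical (using exactly the figures above, nothing measured):

perf_per_clock = 0.80  # assume x86 pays a 20% per-clock penalty
clock_boost    = 1.50  # assume a 50% clock boost from the process lead
print(perf_per_clock * clock_boost)  # 1.2 -> a net 20% win for x86,
                                     # despite the assumed ISA penalty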
An excellent point. It's why Intel is often called a foundry with a chip design addiction.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
That's just marketing getting to you. I believe x86 is regularly described as CISC, but when you look at what the processor is actually doing, it takes a lot of the goodness of RISC into its design. It's not a 100% pure CISC processor anymore.

IMO, RISC is basically dead. (it's all semantics anyway)

The idea of RISC, a Reduced Instruction Set Computer, is to focus on doing a few operations fast and optimizing them well. It made sense when hardware was expensive, but we've now reached the point where it's possible to throw almost any complex function you want onto a CPU without sacrificing much of anything.

ARM is certainly not RISC; it has something like four different variants of every instruction and loads of complicated functionality. What ARM is, is a well-designed architecture for low-memory, low-power, low-memory-bandwidth systems. It has fantastic code density (I think only x86 is comparable, and x86 has really good code density too), which reduces memory traffic. The instruction set is designed to minimize branch misses through conditional execution (at the expense of slightly more processing time than a correctly predicted branch would take), so it doesn't need as much complicated branch-prediction hardware to perform well. It can also interpret Java bytecode natively, short-circuiting some processing (although I don't think Android uses this). And it has a ton of registers.

x86 (and other high-performance architectures) tackles the same problems with large caches and fast memory buses, because hardware IS cheap, plus all sorts of complicated back-end processing (which is pretty much necessary as you make a design more general-purpose). And even x86 has tons of registers; they're just abstracted away behind register renaming.

The ARM architecture was optimized for a world where memory is slow and small, embedded flash is expensive, and power budgets are extremely tight, so its primary goal is small code size.
The x86 architecture was also optimized for a world not too different from ARM's. It was originally designed in the '70s, when memory was slow, small, and expensive. However, power consumption was never a factor, only raw performance within the constraints of limited memory, so it developed a bit differently. It's also older and has more bolted-on functionality than ARM, but it fares well despite its age.
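For a feel of what the code-density argument means in practice, here's a back-of-the-envelope comparison. The average bytes-per-instruction figures are rough assumptions (real averages vary widely by workload and compiler), not measurements:

# Assumed average instruction sizes, in bytes:
avg_bytes = {
    "ARM (fixed 32-bit)":          4.0,
    "Thumb-2 (mixed 16/32-bit)":   2.6,
    "x86 (variable, 1-15 bytes)":  3.5,
}
n_instructions = 100_000  # made-up program size

for isa, size in avg_bytes.items():
    print(f"{isa}: ~{n_instructions * size / 1024:.0f} KiB of code")

Fewer bytes of code means less pressure on caches, flash, and memory bandwidth, which is exactly the constraint both ISAs grew up with.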
 

greenhawk

Platinum Member
Feb 23, 2011
2,007
1
71
You're assuming ARM is best for the customer. ;)

LOL.

Given the overhead of x86 versus, say, five other technologies, x86 is nearly always going to be at the bottom of the pile in terms of efficiency. But anything newer that is worse overall doesn't get off the ground, as x86 is too heavily entrenched.

Of course, "better" covers a lot of ground, including having a suitably sized developer community at the software level, cost at the hardware level, and applications at the user level.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
The x86 ISA is, for all practical purposes, decoupled from the architecture of the chips themselves. Intel and AMD simply use a decoder front-end to combine and break up operations as necessary. Intel can change the backend practically at will, because it just decodes instructions for whatever new architecture it uses, and indeed the rumors are that Haswell will be that kind of leap. In any case, a decoder combined with out-of-order execution means they can deal well with even terrible x86 code.

The only place the ISA matters is for parallel operations (SSE, AVX), as those operations have to be explicitly bundled together. And no one is complaining about those instruction sets anyhow.

Perhaps the most perfect example in support of this, and an extreme one at that, is the Transmeta Crusoe.

The Crusoe is a family of x86-compatible microprocessors developed by Transmeta, notable for its method of achieving x86 compatibility: instead of the instruction set architecture being implemented in hardware, or translated by specialized hardware, the Crusoe runs a software abstraction layer, or virtual machine, known as the Code Morphing Software (CMS).
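Reduced to a caricature, the approach looks like this: translate guest instructions into host operations in software the first time a block is seen, cache the translation, and reuse it, so hot code pays the translation cost only once. A minimal Python sketch; the "ISAs" here are made-up strings and bear no resemblance to Crusoe's actual VLIW target:

translation_cache = {}  # translated blocks, keyed by the guest code itself

def translate(guest_instr):
    """Fake 'translation' of one guest instruction into host ops."""
    op, *args = guest_instr.replace(",", "").split()
    return ["host_{}({})".format(op, ", ".join(args))]

def run_block(guest_block):
    key = tuple(guest_block)
    if key not in translation_cache:           # first execution: translate...
        host_ops = []
        for instr in guest_block:
            host_ops.extend(translate(instr))
        translation_cache[key] = host_ops      # ...and cache the result
    for host_op in translation_cache[key]:     # reruns skip translation entirely
        print("exec", host_op)

run_block(["mov eax, 1", "add eax, 2"])  # translated, cached, executed
run_block(["mov eax, 1", "add eax, 2"])  # served straight from the cache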
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
LOL.

Given the overhead of x86 versus, say, five other technologies, x86 is nearly always going to be at the bottom of the pile in terms of efficiency. But anything newer that is worse overall doesn't get off the ground, as x86 is too heavily entrenched.

Of course, "better" covers a lot of ground, including having a suitably sized developer community at the software level, cost at the hardware level, and applications at the user level.

My bicycle is more energy efficient than my car, but I'm not about to try and force my bicycle to go 75 mph on the highway.

Nor am I going to take my car onto the velodrome, even if the A/C would be nice.
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
102,393
8,552
126
It's going to be interesting to see what nVidia can come up with. They're going to be a threat to Intel going forward IMO. They have so many connections with not only game developers, but software developers, that it's not even funny. If they can convince the people who make software like AutoCAD and all the film industry's software to port their stuff over to ARM and Linux, it could be game over for Intel in a lot of respects.

NVIDIA burned bridges with MS on the original Xbox. As long as AMD can offer competitive graphics at a competitive price, MS is going to pick them.

PowerPC (a relatively high-performance RISC arch with power consumption on par with typical x86 offerings) is also heavily entrenched in the gaming market.


The real threat to Intel from ARM isn't that ARM will ever match the performance of a full-bore 95-watt desktop part. It's that ARM SoCs now have "good enough" performance to handle 95% of regular users' computing tasks.


Though Intel will also reach the point where it has an x86 part with "good enough" performance within a battery-friendly power budget. Maybe the Haswell generation.

I have to wonder where MIPS went in all this.
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
My bicycle is more energy efficient than my car, but I'm not about to try and force my bicycle to go 75 mph on the highway.

Nor am I going to take my car onto the velodrome, even if the A/C would be nice.

This.

It's built to sip power and perform as well as can be expected considering the power sipping (which isn't that good in some areas, and totally atrocious in others). Why some people take that as an indication of the ability to scale to something beyond miniature computing (phones and limited-feature-set tablets; and yes, an iPad has a rather limited feature set) is somewhat baffling. The reason these phones and such seem to work well is that they really aren't doing much, and they're pretty much dedicated to a single task at a time. What we would traditionally consider multitasking just doesn't happen on these devices; you can do it on Android if you want to, and you can make your phone nearly unusable if you do much of it.

Still baffled. Perhaps someone should make one of those Memebase things with some ARM-centric hype and that little misshapen man saying, "Y U NO UNDERSTAND PROCESSOR ARCHITECTURE?"
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
The idea of RISC, a Reduced Instruction Set Computer, is to focus on doing a few operations fast and optimizing them well. It made sense when hardware was expensive, but we've now reached the point where it's possible to throw almost any complex function you want onto a CPU without sacrificing much of anything.

I can agree that hardware is cheap-ER than it was before, but it's still definitely not cheap. Dedicated complex hardware adds area and power, and may slow down the chip because of the extra loading it adds or the extra distance it puts between other blocks because of the area it consumes. So we can't willy-nilly add all sorts of random hardware to the chip just because it lets you do some really complicated, specialized instruction.

We've got to find the right tradeoff: add the hardware that does the instruction fastest but takes a ton of area and power, or something that is ALMOST as fast but can reuse a lot of existing hardware.
 

Joseph F

Diamond Member
Jul 12, 2010
3,522
2
0
Let's say Intel makes an ARM SoC and helps promote ARM's popularity to the point that the majority of its business is ARM. It's now stuck in a place where it:

1) Has to pay license fees to ARM
2) Is in a market with a lower barrier to entry (anyone can get an ARM license)

vs

1) Owns its own license
2) The barrier to entry is very high, since no one except AMD can license x86


To me, it still seems to be in Intel's best interest to make sure x86 is the dominant instruction set, by focusing on making x86 uber.

VIA also has an x86 license.

nvm
 

Davegod

Platinum Member
Nov 26, 2001
2,874
0
76
Sure, it might be profitable, but the margins would suck compared to what they're used to. More profit isn't always better: it would hurt their return-on-investment-type financial ratios, so it could actually be bad for shareholders.

The most important thing for a company financially (particularly a publicly traded one) is the amount of profit in proportion to the amount of money invested to make it. It's broadly similar in principle to its not being worth investing in a 5% bond if you'd need to borrow the money and pay 6% interest. For a public company, every cent they have is essentially borrowed from their shareholders.
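A toy illustration of that ratio argument (all numbers invented):

core_profit, core_capital = 10.0, 40.0  # existing business: a 25% return
arm_profit,  arm_capital  =  2.0, 20.0  # hypothetical ARM line: a 10% return

print(core_profit / core_capital)       # 0.25 before
print((core_profit + arm_profit)
      / (core_capital + arm_capital))   # 0.20 after: profit is up,
                                        # but return on capital is down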

On top of that, in the long term it makes ARM more competitive with Intel's core (sorry) business.
 

RampantAndroid

Diamond Member
Jun 27, 2004
6,591
3
81
Intel is dumb for not pasting a cookie-cutter ARM core into their designs. Tap it into the ring bus and let Windows use it for background idle tasks, allowing the bigger cores more sleepy time. I said the same thing about AMD. These guys are way behind on what appears to be common-sense stuff.

You compile code for ARM or for x86. How would you propose to switch between ARM and x86 binaries?
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Intel is dumb for not pasting a cookie-cutter ARM core into their designs. Tap it into the ring bus and let Windows use it for background idle tasks, allowing the bigger cores more sleepy time. I said the same thing about AMD. These guys are way behind on what appears to be common-sense stuff.

Background idle tasks don't take any CPU. That's why they are called "idle tasks".

BTW, what you said won't work. Read a bit more about computers before you continue with that random train of thought.
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
Background idle tasks don't take any CPU. That's why they are called "idle tasks".

It takes some number of microseconds to wake a CPU core up, do a bare minimum amount of work, and then power back down. When all you need to do consumes only a few nanoseconds of CPU time, spending hundreds of microseconds or even milliseconds to do it is highly inefficient. If you have a simple timer hitting the CPU every 10 milliseconds, and all that timer does is increment a variable, then it would be an utter shame to have to spend 100 microseconds waking the CPU, 10 nanoseconds doing work, and then another 100 microseconds putting it back to sleep. That is 2% CPU awake time. So if you take 2% of a typical idle power of 10 W, you get 200 mW average power. All to do what could be done by a tiny "5th core" for only 10 mW average power. It really is a no-brainer, and I bet even Intel will do it within a few years. Most likely it will be built right into the power control unit.
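Reproducing that back-of-the-envelope math (the same assumed numbers as above, nothing measured):

period     = 10e-3    # timer fires every 10 ms
t_wake     = 100e-6   # assumed time to wake the big core
t_work     = 10e-9    # the actual useful work
t_sleep    = 100e-6   # assumed time to go back to sleep
idle_watts = 10.0     # assumed power while awake but near-idle

awake_fraction = (t_wake + t_work + t_sleep) / period
print(awake_fraction)                 # ~0.02 -> about 2% awake time
print(awake_fraction * idle_watts)    # ~0.2 W = 200 mW average power,
                                      # vs the ~10 mW claimed for a tiny core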

BTW, what you said won't work. Read a bit more about computers before you continue with that random train of thought.

You mean software designers are too lazy to think outside the box. Just like AMD is too lazy to write a proper APU graphics driver that treats the CPU and GPU as one piece of silicon. Laziness shouldn't be what holds back innovation.