The ARM vs. Intel Thing - Let's Discuss It

Page 8

Khato

Golden Member
Jul 15, 2001
1,288
367
136
There's no silver bullet to it. It's just that for a few decades, ARM has cared a lot more about idle power than Intel. It's a million small design decisions in the cores that accumulate.

Depends upon which 'idle power' you're talking about though. The one place where ARM wins in that review is the 'ideal' system idle... and that's simply a result of whatever logic the respective designs have in place to wake the core on activity, no? The actual processor core logic is likely power gated.

If that is the case then yeah, it is a result of Intel just getting into the realm of low power and still needing to iterate on their wakeup logic. But that has no real bearing upon other design decisions.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Dang IDC, you are unusually snarky in that post.

:oops:

I generally "suffer the fool" better, true, but this is one of those cases where the arrogance of the member I am responding to appears to be so deeply entrenched in a pool of ignorance that I figured a shock-and-awe response was the only viable method for opening their eyes.

I have, no doubt, failed to accomplish anything here except making myself the fool, but at least I know I tried and my intentions came from a good place. The implementation was disastrous though, agreed.

If I ever see someone trying to do "real work" on a tablet then quite honestly I'll laugh and point at them before feeling sorry for them.

About the only good thing a tablet is for at work is showing people pictures or taking tick-box surveys. They are less useful than a pen and notepad.

^ ignorance fueled arrogance.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
I'm much, much worse at "suffering the fool," as you put it. In fact, I'm actually banned from OCN for that exact reason. My attitude changes wildly depending on who I'm addressing and in what community... despite my infraction count here, I'm actually better behaved on this forum.

Anyways, I just thought it was out of character for you, so I thought I should point that out. :p
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,224
589
126
I don't agree with Fx1's statement that tablets are useless. But I do think that tablets are more aimed at consuming content than producing it. That does not make them useless, but they definitely have limitations to which areas they are useful in.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
IDC has a good point in that tablets can function well with certain types of work - but for some, they're a little limiting. I really hope MS hits a home run with the Surface Pro; I'm pretty excited about it. The way I see it, it has all the features that make a tablet great while doubling as a system with ultrabook functionality.
 
Mar 10, 2006
11,715
2,012
126
IDC has a good point in that tablets can function well with certain types of work - but for some, they're a little limiting. I really hope MS hits a home run with the Surface Pro; I'm pretty excited about it. The way I see it, it has all the features that make a tablet great while doubling as a system with ultrabook functionality.

Tried a Samsung Ativ SmartPC Pro today at the Microsoft store. Who needs a Surface Pro?
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Ha, look what happens when you don't get the "power optimized" Samsung Atom tablet:

[image: hdbattlife.png]

[image: wifi200.png]

http://www.tomshardware.com/reviews/ativ-smart-pc-500t-windows-8-atom,3360-11.html

:eek:
 
Last edited:

Tuna-Fish

Golden Member
Mar 4, 2011
1,669
2,541
136
I'd love to see you substantiate that claim with real evidence and information.

Byte-aligned instructions mean that either you limit yourself to one, or at most two, decoded instructions per clock, or you pay a high toll in power as your frontend becomes an ungodly mess of muxes. Just keeping all the present semantics of the instruction set, dropping the prefix bytes, and redesigning the opcodes into a 2-byte-aligned instruction set would make x86 much more power-efficient, with minimal code-size penalties. And you could claw back the code size by losing some of the old instructions that no one has used for decades. None of that is controversial in the slightest.
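Tuna-Fish's decode argument can be illustrated with a toy model (hypothetical, not real x86 encodings): a speculative wide decoder has to treat every allowed alignment boundary in the fetch window as a possible instruction start, so halving the candidate offsets roughly halves the mux fan-in the frontend needs.

```python
# Toy model of frontend decode cost for variable-length instruction sets.
# With byte alignment, instruction N's start depends on the lengths of
# instructions 0..N-1, so a wide decoder must either decode serially or
# speculatively decode at EVERY byte offset in the fetch window. With
# 2-byte alignment, only half the offsets are candidates.

def candidate_starts(window_bytes, alignment):
    """Offsets a speculative decoder must consider in one fetch window."""
    return [off for off in range(window_bytes) if off % alignment == 0]

# For a 16-byte fetch window:
print(len(candidate_starts(16, 1)))  # byte-aligned: 16 candidate offsets
print(len(candidate_starts(16, 2)))  # 2-byte-aligned: 8 candidate offsets
```

This is of course only a counting argument; the real hardware cost is in the length decoders and the muxes that steer bytes to each decode slot, but the candidate count is what drives that fan-in.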

ARM is great for battery life, but in terms of performance ARM chips will never compare to intel offerings.
For the foreseeable future, I completely agree. The point is, it's not about the instruction set. In terms of speed and power, x86 is a liability. It's just that its main asset, backwards compatibility spanning more than 3 decades, has bought Intel the resources to do better despite its drawbacks.

x86 was never designed to be a high-performance ISA. It was originally meant for pocket calculators, running traffic signals, and the like. Ever since, the primary consideration has been adding useful features while preserving backwards compatibility.

For an ISA that was originally designed for speed, look at Alpha and POWER. Their design makes it easier to build wider and faster CPUs. When they were designed, a lot of people believed they would win out over x86 because of their inherent advantages in speed. Instead, Intel showed us that when you have the most money to pour into the fabs, and the most money to hire all the best designers and engineers, the differences between ISAs are not that significant. Having a lead in process tech and a better uarch buys Intel the opportunity to keep using a crappy ISA and still have faster CPUs than the competition.
 

SPBHM

Diamond Member
Sep 12, 2012
5,066
418
126
Ha, look what happened when you don't get the "power optimized" Samsung Atom tablet:


:eek:


also the GPU is seriously slow
[image: wowlow.png]


I think the GMA 3150 was also quite bad, considerably slower than even the GMA 950, and most ARM SoCs also have better GPUs.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
For an ISA that was originally designed for speed, look at Alpha and POWER. Their design makes it easier to build wider and faster CPUs. When they were designed, a lot of people believed they would win out over x86 because of their inherent advantages in speed. Instead, Intel showed us that when you have the most money to pour into the fabs, and the most money to hire all the best designers and engineers, the differences between ISAs are not that significant. Having a lead in process tech and a better uarch buys Intel the opportunity to keep using a crappy ISA and still have faster CPUs than the competition.

I was all ready to buy myself an Alpha 21164PC to run WinNT and FX!32, but then DEC went and imploded into bankruptcy.

The K7 (and its successors) would not have existed if DEC had not imploded, and had the Alpha continued evolving and been the beneficiary of billions in R&D for iterative improvements every process node, it would surely have been quite the contender in today's MPU markets.

Power has done alright for itself, but I do suspect Alpha would have been even more potent.
 

CHADBOGA

Platinum Member
Mar 31, 2009
2,135
833
136
I was all ready to buy myself an Alpha 21164PC to run WinNT and FX!32, but then DEC went and imploded into bankruptcy.

The K7 (and its successors) would not have existed if DEC had not imploded, and had the Alpha continued evolving and been the beneficiary of billions in R&D for iterative improvements every process node, it would surely have been quite the contender in today's MPU markets.

Power has done alright for itself, but I do suspect Alpha would have been even more potent.

I always wonder what would have happened if Intel had taken over the development of Alpha instead of pushing Itanium.
 
Mar 10, 2006
11,715
2,012
126
Anandtech has a review of the Samsung Ativ tablet with DualCore Krait:
[image: 52169.png]

http://www.anandtech.com/show/6528/samsung-ativ-tab-review-qualcomms-first-windows-rt-tablet/3

So, a 28nm ARM SoC is beating the hell out of a 32nm Atom SoC.

I would imagine that Krait (3-issue, OoO, with a lot of goodies) would actually be faster than the in-order, 2-issue POS that Atom is. The fact that they're anywhere near comparable on performance is pretty astounding, and very likely due to the fact that Intel's caches are second to none.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Anandtech has a review of the Samsung Ativ tablet with DualCore Krait:
[image: 52169.png]

http://www.anandtech.com/show/6528/samsung-ativ-tab-review-qualcomms-first-windows-rt-tablet/3

So, a 28nm ARM SoC is beating the hell out of a 32nm Atom SoC.

Where did you see that? That's not at all true, Intel wins. Nice cherry pick, by the way. 3 hours of video is a lot. But what happens when it's 28nm ARM vs Intel's Silvermont, which is soon to be shown? Merrifield looks like it will destroy these ARM CPUs in all metrics, and that's the single-core version. The dual core will more than double the performance difference and be more power efficient at the same time. Next week puts all this BS to rest.
 
Last edited:

Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
For an ISA that was originally designed for speed, look at Alpha and POWER. Their design makes it easier to build wider and faster CPUs. When they were designed, a lot of people believed they would win out over x86 because of their inherent advantages in speed. Instead, Intel showed us that when you have the most money to pour into the fabs, and the most money to hire all the best designers and engineers, the differences between ISAs are not that significant. Having a lead in process tech and a better uarch buys Intel the opportunity to keep using a crappy ISA and still have faster CPUs than the competition.

Well, Sequoia (aka POWER7) is one of the best supercomputers out there :p
 
Last edited:

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
I would imagine that a Krait (3 issue, OoO, with a lot of goodies) would actually be faster than the in order, 2-issue POS that the Atom is. The fact that they're anywhere near comparable on performance is pretty astounding, and very likely due to the fact that Intel's caches are second to none.

Having a 20% frequency advantage and hyperthreading doesn't hurt either. It'd be great to see some ARM cores adopt turbo boost and push the high end of frequency harder. big.LITTLE is also pushing in this direction - how effective it is will depend on how capable the little cores are. I think the differences in both straight cache performance and memory subsystem are exaggerated; latency and bandwidth are typically similar (A15 has much better L2 bandwidth, though worse latency) and even main memory latency is similar in the best cases. Check out the charts on 7-cpu to see what I mean. Intel probably does have better prefetchers, but ARM and the rest have been catching up here, improving far beyond the old Cortex-A8, which had no hardware prefetching at all. This can of course make a huge difference on some benches.
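For anyone who wants to eyeball latency numbers like the 7-cpu charts themselves, the standard technique is a pointer chase: each load's address depends on the previous load, which defeats the prefetchers the post mentions. A rough Python sketch of the idea (a real measurement would use C over raw memory, since Python object overhead dwarfs the actual cache effects):

```python
import random
import time

def chase_latency(size_bytes, stride=64, iters=100_000):
    """Approximate per-access latency via a dependent pointer chase.

    Builds one random cycle over size_bytes/stride nodes; randomizing the
    order defeats stride prefetchers, and the load-to-load dependency
    serializes the accesses so time/iters approximates access latency."""
    n = max(2, size_bytes // stride)
    perm = list(range(n))
    random.shuffle(perm)
    nxt = [0] * n
    for i in range(n):
        # Link the shuffled nodes into a single cycle covering all of them.
        nxt[perm[i]] = perm[(i + 1) % n]
    idx = 0
    t0 = time.perf_counter()
    for _ in range(iters):
        idx = nxt[idx]          # each access depends on the previous one
    return (time.perf_counter() - t0) / iters

# Growing the working set past each cache level makes the latency step up:
# chase_latency(32 * 1024), chase_latency(256 * 1024), chase_latency(8 << 20)
```

In C you would chase real pointers spaced a cache line (64 B) apart; the structure of the benchmark is the same.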

Atom may flat out be better tuned for higher frequency, plus Intel is probably better at leveraging voltage vs frequency than Qualcomm or other ARM vendors are (look at how low voltage can go on Sandy Bridge for instance). No doubt they bring engineering advantages to the table, although it'll be interesting to see if these advantages grow or decrease over more time, as more talent consolidates in other places in the SoC world.

What I see most in this review is that once again Intel's advantage in Javascript is bigger than its advantage in native apps, although at least they've proven it to be consistent across multiple JS engines. I don't know yet how much of this is because JS engines have been optimized for x86 longer, or because JS JIT techniques naturally lend themselves to ISA or uarch advantages Intel has, but it should be clear that JS performance is NOT a good benchmark for everything. Remember, when you run a JS program most of the time is spent in overhead that's not part of the actual calculations your program performs.
 
Last edited:

Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
I don't know yet how much of this is because JS engines have been optimized for x86 longer, or because JS JIT techniques naturally lend themselves to ISA or uarch advantages Intel has, but it should be clear that JS performance is NOT a good benchmark for everything

If you see Charlie D. as a good source of info, x86 just has better tools for developers to work with.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
If you see Charlie D. as a good source of info, x86 just has better tools for developers to work with.

I'll spare you what I think of Charlie's info :p

Intel does have a better compiler in ICC. Problem is, I don't think it's going to get a lot of usage on Windows 8 Metro apps. And its usage on Android is going to be close to zero.

The impact on Javascript engine JIT quality is also going to be zero.
 

Khato

Golden Member
Jul 15, 2001
1,288
367
136
So, a 28nm ARM SoC is beating the hell out of a 32nm Atom SoC.

As mentioned in the article, you have to take into account the difference in screen size. Both of the Samsung ATIV tablets have 30 Wh batteries, but the Atom-based model is 11.6 inches versus 10.1 inches. That difference means the ARM-based tablet has basically 75% of the screen area, and given that luminosity is kept constant it should be using roughly 75% of the power for the backlight, which likely means a 0.25 W advantage.

Now, 13.3 hours on a 30 Wh battery equates to an average draw of about 2.26 watts, while the 10.82 hours of the Atom tablet works out to about 2.77 watts. So once you account for the roughly quarter-watt difference in backlight, you only have a quarter of a watt of difference left. That's still a decent advantage for the ARM tablet, but not quite as impressive otherwise.
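Khato's arithmetic checks out to within rounding; as a quick sanity check (assuming both screens share the same aspect ratio, so area scales with the diagonal squared):

```python
# Back-of-the-envelope check of the battery-life numbers above:
# both tablets have 30 Wh batteries; runtimes are from the review.
battery_wh = 30.0

arm_avg_w = battery_wh / 13.3     # ARM tablet average draw, ~2.26 W
atom_avg_w = battery_wh / 10.82   # Atom tablet average draw, ~2.77 W

# Same aspect ratio assumed, so screen area scales with diagonal^2:
area_ratio = (10.1 / 11.6) ** 2   # ~0.76, i.e. roughly 75% of the area

print(round(arm_avg_w, 2), round(atom_avg_w, 2), round(area_ratio, 2))
# → 2.26 2.77 0.76
```

So of the ~0.5 W gap in average draw, roughly half can plausibly be attributed to the smaller backlight, leaving about a quarter watt for the rest of the platform.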
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
That difference results in the ARM based tablet having basically 75% the screen area, and given that luminosity is kept constant it should be using roughly 75% the power for the backlight, which likely means a 0.25W advantage.

How do you know luminosity is kept constant? Nowhere does the review indicate that luminosity was normalized. Maybe you're thinking of that THG article.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
How do you know luminosity is kept constant? Nowhere does the review indicate that luminosity was normalized. Maybe you're thinking of that THG article.

If you read earlier AT reviews, they keep all the test devices at 200 nits, or as close as they can get them to that number.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
It's also fun to see people completely discount Intel. I certainly think they can put up a fight with their massive R&D budget and fabs - maybe not immediately, but it's not an "if" but a "when", because they're more focused on mobile than ever.
 
Last edited: