Comparing ARM performance with x86

makken

Golden Member
Aug 28, 2004
1,476
0
76
Ever since the rumor that Apple would switch its laptops from Intel processors to ARM (yes, yes, I know it's a rumor, and won't happen for a whole list of reasons, but it did get me wondering), I've been curious as to how the ARM processors in smartphones and tablets compare to desktop processors.

So are there any benchmarks that can give me a relative idea of where they stand? The only thing I've seen that compares the two is that slide from nVidia's press release that puts Kal-El above a 2.0GHz C2D. But given that it came from an nVidia press release, I'll take that with a large helping of salt.

Yes, I know they won't touch SB i7s; I'm more interested in how they stack up to Atom, Brazos, low-voltage C2Ds and the like.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Kal-El is a quad core, so it's using twice as many cores to compare to a core 2 duo. Clock for clock, I could see it being comparable to Brazos though.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,558
2,223
136
Given this rigidly set, unlevel playing field, we deployed a battery of benchmarks that run primarily within the CPU’s caches. In other words, we made an attempt to only measure CPU-bound performance.

^ is the only major problem with the article -- they deliberately chose not to benchmark the memory subsystems, but at least in my experience, the memory subsystems are very relevant to the performance differences between these platforms.

When I had an OMAP3430 system (Nokia N900) in my hands, I ran some spot benchmarks, and my conclusion was that the device was memory-bandwidth limited in more or less all code that had any memory references at all. Doubling the amount of bit twiddling in any real benchmark rarely had any effect on execution time. That's what you get with a slow 32-bit RAM interface that's shared with the display, I guess.
 

Schmide

Diamond Member
Mar 7, 2002
5,612
786
126
^ is the only major problem with the article -- they deliberately chose not to benchmark the memory subsystems, but at least in my experience, the memory subsystems are very relevant to the performance differences between these platforms.

???

Page 3

Nevertheless, memory bandwidth results are important because they underscore a handicap that ARM must eventually address. ARM systems have typically been optimized for extreme low-power environments while x86 systems have been aggressively optimized for performance. A sacrifice made in the Freescale i.MX515 is memory speed exchanged for low power usage, but this absolutely destroys performance on many types of tasks as exemplified by our STREAM results.

As can be seen in the graph above, the ARM Cortex-A8 as part of the Freescale i.MX515 struggles against even the ancient AMD Athlon and is creamed by the VIA Nano and the Intel Atom. While part of the problem is its pokey memory, another component is the ARM chip’s meager 32-bit memory interface, half the width used for single-channel memory access by x86 chips. If the Cortex-A8 were equipped to access DDR2-800 memory through a 64-bit interface, it might very well keep up with its x86 rivals in terms of memory bandwidth.
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0


Core vs. core? From that review, it looks like:

Intel Atom is 2-3 times faster than the ARM processor at the same clock speed.
Intel Atom is 2-3 times more power-hungry than the ARM processor.

Performance per watt? From those benchmarks, it looks pretty equal.
I was kind of expecting the ARM CPU to walk all over x86 in this regard.

Looks like it'll be a closer battle than I thought... and ARM could be on the losing end; others have tried to take on x86 and lost before.
 

Dark Shroud

Golden Member
Mar 26, 2010
1,576
1
0
There is a reason Intel just shrunk Atom to 32nm and has openly talked about Atom's roadmap all the way to 14nm. Throw in Intel's new Trigate transistors and their fab power and we'll see that Atom has serious advantages over ARM. AMD's Bobcat cores have a big advantage over Atom as well since they use even less power than Atom while getting better performance.

I just don't see ARM surviving in most tablets or getting a real foothold in ultra portables or notebooks.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
A8 v. Atom: meh. Need an A9 SoC to test.

I would personally run some media encoding tests as well, as those tend to stress main memory, caches, indirect branch prediction, etc., all at once. Any such test will be biased towards x86, but not by more than 10-20%.

For graphics tests, I would think the one to test right now would be a Tegra 2.

If Intel can find a novel way to tackle idle power, however, ARM will be stuck on the low end, no matter what they try.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91

That is a great article :thumbsup:

I'd also like to mention that much of the analysis of ARM's capabilities is based on its performance in traditional bulk-Si process tech.

ARM undertook a project to explore the advantages of going SOI and came to some surprising results:

ARM reports 45-nm SOI test chip with 40% power-saving

The results show that 45-nm high-performance SOI technology can provide up to 40 percent power savings and a 7 percent circuit area reduction compared to bulk CMOS low-power technology, operating at the same speed.

This same implementation also demonstrated 20 percent higher operating frequency capability over bulk while saving 30 percent in total power in specific test applications.

Source: EETimes
ARM on SOI (an expense that would surely be justified when competing in a market where ASPs run to hundreds of dollars, versus the few dollars common in the mobile phone markets) would be even more competitive against x86-based devices.
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
I bet if you took the latest ARM core, beefed up the FPU by a factor of 5, widened the memory bus by a factor of 2, and increased memory clocks by a factor of 3, you'd have something very similar to Atom in both performance AND power consumption. There is no free lunch with ARM. ARM has a better reputation because the software written for it has been more memory-constrained, which forced more efficiency, which helps in the long run. If smartphones had had gigabytes to play with for the last 10 years, their software would be just as bad as Windows software.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
There is a reason Intel just shrunk Atom to 32nm and has openly talked about Atom's roadmap all the way to 14nm. Throw in Intel's new Trigate transistors and their fab power and we'll see that Atom has serious advantages over ARM. AMD's Bobcat cores have a big advantage over Atom as well since they use even less power than Atom while getting better performance.

I just don't see ARM surviving in most tablets or getting a real foothold in ultra portables or notebooks.

Atom uses less power than Bobcat; it depends on the Atom you're referring to. Monday you will be seeing an Atom CPU that Bobcat can't touch power-wise. I suspect in tablets Intel may even have the better IGP; we shall see next week. Even though Oak Trail isn't anything like I was expecting, it's still interesting. Fact is, Bobcat is supposed to go into tablets this year. I want to see that.
 
Last edited:

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Ever since the rumor that Apple would switch its laptops from Intel processors to ARM (yes, yes, I know it's a rumor, and won't happen for a whole list of reasons, but it did get me wondering), I've been curious as to how the ARM processors in smartphones and tablets compare to desktop processors.

So are there any benchmarks that can give me a relative idea of where they stand? The only thing I've seen that compares the two is that slide from nVidia's press release that puts Kal-El above a 2.0GHz C2D. But given that it came from an nVidia press release, I'll take that with a large helping of salt.

Yes, I know they won't touch SB i7s; I'm more interested in how they stack up to Atom, Brazos, low-voltage C2Ds and the like.

Just hook up a 26" monitor and you will find out everything you need to know. No reason to talk C2Ds, as this time next year Intel will have 2-core IB running in the 10-watt range. ARM has nothing coming out in the foreseeable future that can match that. Nothing. Think about it: 2-core SB, 17 watts; 2-core IB, 8 watts. That's getting close to the lowest present Bobcat core, you know, the 8-watt unit. I would say whatever Intel is cooking up for the 22nm Atom will not disappoint.
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Atom uses less power than Bobcat; it depends on the Atom you're referring to. Monday you will be seeing an Atom CPU that Bobcat can't touch power-wise. I suspect in tablets Intel may even have the better IGP; we shall see next week. Even though Oak Trail isn't anything like I was expecting, it's still interesting. Fact is, Bobcat is supposed to go into tablets this year. I want to see that.

One would hope the folks at Intel could find a way to use their super-duper 2nd-gen gate-last HKMG 32nm process tech to best the performance/watt capabilities of a 40nm bulk-Si standard gate foundry process.

Bravo Intel.

Just hook up a 26" monitor and you will find out everything you need to know. No reason to talk C2Ds, as this time next year Intel will have 2-core IB running in the 10-watt range. ARM has nothing coming out in the foreseeable future that can match that. Nothing. Think about it: 2-core SB, 17 watts; 2-core IB, 8 watts. That's getting close to the lowest present Bobcat core, you know, the 8-watt unit. I would say whatever Intel is cooking up for the 22nm Atom will not disappoint.

And in the meantime, ARM's partners and AMD aren't sitting still either. They too are busy working on 28nm HKMG parts, as well as 20nm.

So long as Intel's mantra is "wait till you see what we plan to roll out eventually" then AMD and the competition have every right to play the same tune.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Ya, but they don't have Intel's super-duper 22nm coming out in 6 months. I don't care what ARM CPUs are made from, just as long as Intel doesn't make them for someone else.

Intel is getting ready to roll out 22nm 3D tri-gate transistors. This isn't some cheap IBM FinFET design; this is Intel flexing its muscle. If the other fabs are having problems at 32nm and 28nm now, it will be 2 years before they are close to 20nm. Fact is, Intel will likely be at 14nm before anyone else reaches 20nm. So using Intel's 32nm against others' 40nm is something new? I think not. It's not even a good dig.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
So IDC, I take it you believe SOI at 32nm and 28nm will be business as usual. SOI has less impact the smaller the process becomes, plus it adds 10% to the cost of fabbing chips, whereas Intel's super-duper 22nm 3D tri-gate transistors only raise cost by 2-3%. That's a 7-8% advantage for Intel before the chips leave the fabs. Intel will still be leading the market and still controlling price. I will eat at Intel's table, thank you very much.
 

georgec84

Senior member
May 9, 2011
234
0
71
Intel delaying Ivy Bridge by a few months might mean one of the following:

- they want to space out their chip generations to maximize profits as PC sales dwindle
- they are having trouble with their 22 nm fab (3D transistors)
- they want to screw with their opponents' heads

If Intel starts strong, they may well defeat ARM. If their initial showing is weak, then they face a losing battle, as they are already a couple of years late to the fray.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
There is no proof anywhere that you can link that shows Intel will not ship in the first quarter of 2012. I expect a January 2012 release, but even if it's a March 30, 2012 release, they are still not late. According to XS, AMD has an embargo until the 17th of June. Here I thought we would see them next week.

Ya, Intel fell asleep with Atom. They probably should have had a new design ready for 32nm. But Intel likely knew they would move to 3D tri-gate at 22nm, so designing a new arch for that node was likely the correct move. The only thing I hope for on the 22nm 3D tri-gate node is that the Israel team does the work. Those guys are the best of the best.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Intel Atom is 2-3 times faster than the ARM processor at the same clock speed.
Intel Atom is 2-3 times more power-hungry than the ARM processor.

Performance per watt? From those benchmarks, it looks pretty equal.
Well, you're assuming a linear correlation between power and performance, which most of the time doesn't hold true; usually power climbs much faster than performance. But then, especially for ARM chips at the moment, they're changing quite a lot from each generation to the next, so it's not as if extrapolating from an A8 will get us anywhere.

But it seems that Intel has finally started to take Atom seriously (better late than never...) and stopped treating it like the unloved stepchild, so we'll see how their next iterations turn out.

@Nemesis: I extremely, extremely doubt that Intel will try a new architecture on a new process with lots of unknowns. Their tick-tock strategy has worked quite well for them, and not doing that is one of its cornerstones.
So either we'll see a new Atom arch on an older process, or they'll let the 22nm process mature a bit before we get a new Atom, or we'll see the old Atom with light changes on the new process. But with Atom being what it is, even the best process in the world won't do much good, IMO.
 

Soleron

Senior member
May 10, 2009
337
0
71
Nemesis, Intel will not have parts below Bobcat's power range on the first day of the 22nm process's launch. It takes them a year or more to roll out to all segments. Do you know where Atom or 10W IBs are in the priority queue? Not first, I'd bet.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Fact is, Intel will likely be at 14nm before anyone else reaches 20nm.

I believe this will be the case as well.

So IDC, I take it you believe SOI at 32nm and 28nm will be business as usual. SOI has less impact the smaller the process becomes, plus it adds 10% to the cost of fabbing chips, whereas Intel's super-duper 22nm 3D tri-gate transistors only raise cost by 2-3%. That's a 7-8% advantage for Intel before the chips leave the fabs. Intel will still be leading the market and still controlling price. I will eat at Intel's table, thank you very much.

Production cost itself is not the only contributor to the equation. Sure, SOI raises production costs, but it lowers R&D cost. If you are not doing hundreds of thousands of wafers per month, it doesn't make sense to worry about a 7% wafer cost delta.
 

Mopetar

Diamond Member
Jan 31, 2011
8,201
7,029
136
The only thing I've seen that compares the two is that slide from nVidia's press release that puts Kal-El above a 2.0GHz C2D. But given that it came from an nVidia press release, I'll take that with a large helping of salt.

After that came out, a few people pointed out that Nvidia had artificially manipulated the numbers by using an older compiler version for the C2D run, along with some other optimization tricks. When someone re-ran the benchmark with an up-to-date compiler, the C2D easily outperformed the Tegra 3.

[Benchmark charts: "Nvidia's Kal-El Quad-Core ARM Chip Is Actually Slower Than Intel's Core 2 Duo T7200", images 4 and 5]


That being said, that level of performance is probably more than sufficient for a large number of consumers who just use their notebooks to check email and do light web browsing. Normally one would say that this would be huge, but Intel hasn't been resting on its laurels since the C2D was released. Eventually it just comes down to economics, and which company/architecture is able to provide that minimum level of performance at the lowest cost.
 

Soulkeeper

Diamond Member
Nov 23, 2001
6,731
155
106
Here are some interesting comments from Via's management on just this subject. Apparently Via holds licenses for both ARM and x86, yet they continue to push x86:

"When an OEM manufacturer looks at their BOM [bill of materials] for comparably equipped platforms, there is not a huge spread between the cost of the Via low-power x86 platform and the ARM-based platform. They still require a CPU, system memory, data storage, and graphics capabilities, while display costs are nearly a mirror image of each other. So you will not see a 20% - 30% spread in the cost of manufacturing comparable platforms," said the vice president of Via Technologies.
 
Last edited: