The ARM vs. Intel Thing - Let's Discuss It


Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
Remember the Samsung "8 core" is actually 4+4. Only 4 can be active.

That's not actually true, big.LITTLE doesn't mandate anything about deactivating cores. Migrating everything between the big and little clusters and only keeping one powered most of the time is just one use model.

Running all 8 cores isn't often going to be a practical decision, but it could make sense to mix and match some portion of each cluster. For example 2 Cortex-A15 + 3 Cortex-A7 active.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
That's not actually true, big.LITTLE doesn't mandate anything about deactivating cores. Migrating everything between the big and little clusters and only keeping one powered most of the time is just one use model.

Running all 8 cores isn't often going to be a practical decision, but it could make sense to mix and match some portion of each cluster. For example 2 Cortex-A15 + 3 Cortex-A7 active.

I thought the news from Samsung was that only one type of core could be active at a time.

I think it's important to note there are two different use models: big.LITTLE and big.LITTLE MP.

In the big.LITTLE task-migration use model, the OS and applications only ever execute on Cortex-A15 or Cortex-A7, never on both processors at the same time.
This use model is a natural extension of the Dynamic Voltage and Frequency Scaling (DVFS) operating points provided by current mobile platforms with a single application processor, allowing the OS to match the performance of the platform to the performance required by the application.

Since a big.LITTLE system containing Cortex-A15 and Cortex-A7 is fully coherent through CCI-400, another logical use model is to allow both Cortex-A15 and Cortex-A7 to be powered on and simultaneously executing code. This is termed big.LITTLE MP, which is essentially Heterogeneous Multi-Processing. Note that in this use model Cortex-A15 only needs to be powered on and simultaneously executing next to Cortex-A7 if there are threads that need that level of processing performance. If not, only Cortex-A7 needs to be powered on.

http://www.arm.com/files/downloads/big_LITTLE_Final_Final.pdf
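For what it's worth, the task-migration model described in that quote is easy to picture as one extended DVFS ladder. Here is a minimal, purely illustrative Python sketch of such a policy (not ARM's or Samsung's actual switcher code); the clusters, frequencies, and relative-performance numbers in the table are invented for the example.

[CODE]
# Illustrative sketch of the big.LITTLE task-migration ("cluster switching")
# use model: treat both clusters as one extended DVFS ladder and migrate all
# work to whichever cluster owns the chosen operating point.
# Clusters, frequencies, and relative-performance figures are invented.

OPERATING_POINTS = [
    # (cluster, frequency_mhz, relative_performance 0.0..1.0)
    ("A7",   600, 0.15),
    ("A7",  1000, 0.25),
    ("A7",  1200, 0.30),
    ("A15",  800, 0.45),
    ("A15", 1200, 0.70),
    ("A15", 1600, 1.00),
]

def pick_operating_point(demanded_perf):
    """Return the lowest point that meets the demanded performance,
    or the fastest point if nothing is sufficient."""
    for cluster, mhz, perf in OPERATING_POINTS:
        if perf >= demanded_perf:
            return cluster, mhz
    cluster, mhz, _ = OPERATING_POINTS[-1]
    return cluster, mhz

current_cluster = "A7"
for load in (0.10, 0.28, 0.60, 0.95, 0.20):
    cluster, mhz = pick_operating_point(load)
    if cluster != current_cluster:
        # In the real use model this is where state migrates between the
        # clusters, helped by cache coherency through CCI-400.
        print(f"load {load:.2f}: switching {current_cluster} -> {cluster}")
        current_cluster = cluster
    print(f"load {load:.2f}: run everything on {cluster} @ {mhz} MHz")
[/CODE]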
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
A use model is just that. It's not a hardware design choice. It's not a configuration option provided by ARM. It's a policy implemented by software, either in an OS or hypervisor/firmware.

big.LITTLE actually needs both clusters active simultaneously in order to achieve the switch latency Samsung has claimed. Look specifically for the parts where they talk about direct communication paths between the two clusters to speed up L2 migration on demand. This makes no sense if the hardware can't power both simultaneously.

The real intention of the design is flexibility. You want to be able to run both clusters with only some processors enabled in order to better fit an asynchronously balanced load.
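To make the mix-and-match idea concrete: on a Linux-based big.LITTLE system, a policy like that can be expressed with nothing more exotic than ordinary CPU hotplug. A rough sketch in Python, assuming purely for illustration that cpu0-cpu3 are the Cortex-A7s and cpu4-cpu7 are the Cortex-A15s (the real numbering depends on the SoC and kernel, it needs root, and a real system would drive this from a governor or the scheduler rather than a script):

[CODE]
# Hypothetical sketch: enable 3 little cores + 2 big cores on a Linux
# big.LITTLE system via the standard CPU hotplug sysfs interface.
# The CPU numbering (0-3 = Cortex-A7, 4-7 = Cortex-A15) is an assumption
# made purely for illustration; it varies by SoC and kernel.

LITTLE = [0, 1, 2, 3]   # assumed Cortex-A7 logical CPUs
BIG    = [4, 5, 6, 7]   # assumed Cortex-A15 logical CPUs

def set_online(cpu, online):
    if cpu == 0:
        return  # cpu0 typically cannot be offlined
    with open(f"/sys/devices/system/cpu/cpu{cpu}/online", "w") as f:
        f.write("1" if online else "0")

def apply_mix(n_little, n_big):
    """Bring up the first n_little little cores and n_big big cores."""
    for i, cpu in enumerate(LITTLE):
        set_online(cpu, i < n_little)
    for i, cpu in enumerate(BIG):
        set_online(cpu, i < n_big)

if __name__ == "__main__":
    # The "2 Cortex-A15 + 3 Cortex-A7" mix mentioned earlier in the thread.
    apply_mix(n_little=3, n_big=2)
[/CODE]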
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
I have seen SGX544 mentioned on the Z2580. Not SGX544MP2. That's why I asked for proof.

EDIT: I see it does have the SGX544MP2 now. Even with that, it's still outdated and outclassed by what's already out there.

You don't know what GPU is on the Z2580, or whether it's headed for a Windows tablet like the Surface, for instance. It could be that a single-core HD product will pack a punch in that space. I don't expect it to beat Temash. You're still not getting it: it's a stopgap, nothing more, while Silvermont makes its way here, same as the 7 W SDP IVB. The SGX544MP2@533 is complete overkill for phones; I suspect it will play nicely in Android tablets. Intel is covering all the markets.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
The SGX544MP2@533 is complete overkill for phones

An SGX544MP2 provides better perf/W than an SGX544MP1, since performance scales almost as well with core count as it does with frequency, while power consumption scales worse than linearly with frequency. So while it does cost die area, it's not overkill; improving perf/W for graphics on phones is a good thing.
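A back-of-the-envelope version of that argument: dynamic power goes roughly as C*V^2*f, and voltage has to rise with clock speed, so power grows worse than linearly with frequency, while a second core roughly doubles switched capacitance at an unchanged or lower voltage. The sketch below uses invented clocks and voltages, not measured SGX544 figures, purely to show the shape of the trade-off.

[CODE]
# Back-of-the-envelope (made-up numbers): one GPU core at 2x clock vs. two
# cores at 1x clock for the same nominal throughput, using the usual dynamic
# power approximation P ~ C * V^2 * f, with voltage rising with frequency.

def dynamic_power(cores, freq_ghz, volts, c_per_core=1.0):
    return cores * c_per_core * volts**2 * freq_ghz

# Assume near-linear scaling with core count, so these deliver similar work:
mp1 = dynamic_power(cores=1, freq_ghz=1.066, volts=1.20)  # higher f needs higher V
mp2 = dynamic_power(cores=2, freq_ghz=0.533, volts=1.00)  # lower f allows lower V

print(f"MP1-style config, relative power: {mp1:.2f}")
print(f"MP2-style config, relative power: {mp2:.2f}")
print(f"MP2 uses roughly {100 * (1 - mp2 / mp1):.0f}% less power for the same work")
[/CODE]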
 

Sable

Golden Member
Jan 7, 2006
1,130
105
106
I thought the news from Samsung was that only one type of core could be active at a time.

I think it's important to note there are two different use models: big.LITTLE and big.LITTLE MP.

http://www.arm.com/files/downloads/big_LITTLE_Final_Final.pdf
http://arstechnica.com/gadgets/2013...t-core-exynos-5-octa-soc-promises-efficiency/

The A15 and A7 cores can work in concert with one another, making it theoretically possible to create a device that can set new speed records without devouring your battery (though of course we'll need to get actual devices in for testing to see how well the technology works in practice.)
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
An SGX544MP2 provides better perf/W than SGX544MP1 since it scales almost as well with core count as it does with frequency, and power consumption scales with frequency more poorly than linearly. So while of course it is an area cost it's not overkill, improving perf/W for graphics on phones is a good thing.

NV with Tegra 3 wouldn't agree that they have great GPU performance for this market; it only hurts them in phones at this time. Read Anand's article on the subject. The truth is that today Tegra 3 is old and outdated, and it still has too much GPU power for the phone space. They need that power in the tablet space, but it doesn't do well there either because it's a power hog. I like NV, but the truth is the truth. The Z2580 is a phone part, though it will work well in Android tablets too. Intel is covering two markets here: Windows and Android.
 

Hulk

Diamond Member
Oct 9, 1999
5,203
3,834
136
We'll never know the truth because we aren't in any of the rooms where the big decisions are being made, but we can guess at the topics they will have had to wrestle with on the journey to get where they are.

Is x86 superior to ARM when designed to be manufactured on the same process node? Or is x86's advantage merely a masked disadvantage that has been more than compensated for by the billions spent on developing a process node far superior to what other ARM designers can get from the foundries?

And what of the design itself? Is x86 superior to ARM because it has benefited from decades of outsized R&D budgets to enhance, optimize, and tweak the design for maximal benefit on any given node, at the expense of billions upon billions in R&D dollars, whereas ARM is more like a scattered collection of nomads wandering the wilderness, using stone tools to eke out a living year over year?

If Intel threw as many dollars into ARM as they have invested in their existing mobile x86 effort, how would such an ARM chip fare against the current x86 offerings? If Nvidia had access to Intel's leading-edge process tech, would Tegra 3 be a much more potent competitor?

We can't hope to ever know the answers to these questions. And as consumers, investors, competitors, or OEMs, the answers are irrelevant anyway.

All that is relevant is what reality has made available, sans the resource and investment normalization efforts which are at best topics of relevance for the academically inclined.

Is there an "x86 tax"? Perhaps, or perhaps there is an "ARM tax" instead.



Now this post really cuts to the core of the issue, doesn't it?
Yes, we will never know the answers to these questions, but history can give us some ideas.

If you are old enough to have lived through it, you'll remember that Apple made an enormous push to RISC when it moved from the outdated Motorola processors to the PowerPC RISC architecture. It was a big gamble because it meant a total rewrite of the operating system and all applications. That's an enormous thing to ask of a user base, and I doubt any base except the absolutely loyal Apple following would have gone along with it. They did, though even the most stalwart were somewhat disgruntled, as quite a few of my friends were.
Anyway, Apple with its Motorola-driven computers was falling farther and farther behind Intel by the time the 486 came along. They moved to RISC and for a short time enjoyed a performance lead in many applications such as Photoshop. We heard from Apple and others in the computer community that x86 was a dead end and RISC was the future. TV commercials, radio ads, slides...
Being a lifelong Intel user, I wondered if this was the end of x86.
But then Intel came back with the Pentium line and, with the Pentium Pro, the RISC-like approach of decoding x86 into micro-ops. Intel instantly became viable again, and ten years or so later Apple waved the white flag and moved to the Intel camp.

AMD caught Intel with its pants down, and in 2006 Intel again came roaring back with C2D.

The point is that Intel has been challenged before and has always fended off such challenges. They have enormously deep pockets. They have the best and the brightest, and a wealth of x86 and process knowledge we can only dream about.

If this battle is to be fought over who will supply the processors that run Windows in ultramobile devices, then I think it is safe to predict Intel won't be caught. They might be pushed a bit, which would be a very good thing for us.

If they must compete in the ARM space, then it is more of a fair fight, but even then I think they still win the war, if not every battle.

Whatever happens, I love that there are other heavyweight contenders such as Samsung getting into the ring with Intel.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
NV with Tegra 3 wouldn't agree that they have great GPU performance for this market; it only hurts them in phones at this time. Read Anand's article on the subject. The truth is that today Tegra 3 is old and outdated, and it still has too much GPU power for the phone space. They need that power in the tablet space, but it doesn't do well there either because it's a power hog. I like NV, but the truth is the truth. The Z2580 is a phone part, though it will work well in Android tablets too. Intel is covering two markets here: Windows and Android.

This has no relevance whatsoever to anything I said. A wider GPU gets better power efficiency than a narrower version of the same GPU technology. It's an area vs. efficiency/performance trade-off. I'm strictly addressing your claim that an SGX544MP2 is overkill in a phone, because better efficiency is never wasteful in a phone.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
It kills batteries needlessly, which is very important to something like 99% of users. Anand has the data; go debate him. The world has turned upside down. 1.3 years ago everyone laughed at Intel in a phone: x86, never in phones. Not only did Medfield break into the ARM space, it beat everything current in performance and was good on efficiency. Now all of a sudden there's an army of ARM fanboys, plus some NV and AMD guys, denying everything Intel accomplishes. I expect the Z2580 will beat every metric Medfield set for Intel in the phone space.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
It's MORE efficient: for the same level of performance it uses LESS power. If you want to limit power consumption to some maximum, you cap the clock speed, like with everything else.
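The same made-up power model as in the earlier sketch also shows the capping direction: hold the power budget fixed and the two-core configuration still ends up with more throughput. The voltage/frequency relation here is an assumption for illustration only.

[CODE]
# Illustrative: hold the power budget fixed and see what clock, and therefore
# throughput, each configuration can reach. Same made-up model as above:
# P ~ cores * V^2 * f, with voltage assumed to rise linearly with frequency.

def volts_for(freq_ghz, v0=0.9, slope=0.3):
    return v0 + slope * freq_ghz  # assumed V/f relation, illustration only

def power(cores, freq_ghz):
    return cores * volts_for(freq_ghz)**2 * freq_ghz

def max_freq_under_budget(cores, budget, step=0.001):
    f = 0.0
    while power(cores, f + step) <= budget:
        f += step
    return f

BUDGET = 1.0  # arbitrary power units
for cores in (1, 2):
    f = max_freq_under_budget(cores, BUDGET)
    throughput = cores * f  # assuming near-linear scaling with core count
    print(f"{cores} core(s): {f:.2f} GHz within budget, relative throughput {throughput:.2f}")
[/CODE]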

Please stop saying "go read anand" as your argument to everything without even trying to understand what I'm actually saying. Maybe I'm the one who needs to stop trying with you.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Please do stop. First, let's take the K900: it's 2 cores, 4 threads, but it will use only one core in a phone, period, until better apps come on the scene, and it has S0ix. It has 6x better graphics than Medfield, which has a very weak GPU, but it does the job. These higher-end GPUs aren't for phones, and if the manufacturers push them out of spec for tablets, they lose battery fast. In a tablet these small CPUs/GPUs work hard. We can talk till we're blue in the face; read Anand's article, look at the pretty graphs, and pay attention to what a smart man has to say to his readers. Nothing could be simpler than that. I don't care to debate for the sake of debating.
 

MightyMalus

Senior member
Jan 3, 2013
292
0
0
It's like a broken record.

I'm not a fan of the Exynos yet. I think custom chips are more interesting.
Those are the ones I want to see benchmarked.