big.LITTLE: the death of Medfield

tempestglen

Member
Dec 5, 2012
88
17
71
[Six slide images attached]


2013 will be a hard time for Intel.
 

itsmydamnation

Diamond Member
Feb 6, 2011
3,091
3,931
136
This is what's so wrong about this forum. The title is completely stupid. How about we evaluate technical products for what they are? The "market" will decide what's "best".
 

Puppies04

Diamond Member
Apr 25, 2011
5,909
17
76
This is what's so wrong about this forum. The title is completely stupid. How about we evaluate technical products for what they are? The "market" will decide what's "best".

Judging by the "highly technical" comments left by the OP I wouldn't hold your breath.
 

itsmydamnation

Diamond Member
Feb 6, 2011
3,091
3,931
136
I find it interesting that the big and LITTLE cores have their own separate L2. I wonder, if you're using the A15, does the A7's L2 get flushed and shut down, and vice versa?
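From what ARM has published about its cluster switcher, the answer seems to be yes: the outbound cluster's L2 is cleaned back to memory and the cluster (L2 included) is powered off, with the interconnect snooping the old cache while the new cluster warms up. A toy sketch of the sequence, where the function names are descriptive placeholders and not a real firmware API:

```python
# Toy sketch of a big.LITTLE cluster switch. The step names are
# invented for illustration; real switchers live in firmware/kernel
# code and use nothing resembling this API.

log = []

def step(name):
    log.append(name)

def switch_cluster(outbound, inbound):
    step(f"power_on({inbound})")         # wake the target cluster
    step(f"save_context({outbound})")    # snapshot architectural state
    step(f"restore_context({inbound})")  # replay it on the inbound cores
    step(f"enable_snoops({inbound})")    # inbound cores can pull warm data
                                         # from the outbound L2 via the CCI
    step(f"clean_L2({outbound})")        # write dirty lines back to DRAM
    step(f"power_off({outbound})")       # outbound cluster (and its L2) off

switch_cluster("A15", "A7")
print(log[-2:])  # → ['clean_L2(A15)', 'power_off(A15)']
```

So the A7's L2 isn't kept alive once the switch completes; its useful contents either get snooped across during the handover or written back before power-off.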
 

Khato

Golden Member
Jul 15, 2001
1,319
391
136
2013 will be a hard time for Intel.

Quite possibly, but not so much because of the competition. The performance/power 'data' of the last slide is typical marketing: the 'performance' charts are basically just saying that using big.LITTLE doesn't hurt performance, while the 'power' charts merely show that the A7s can do trivial tasks very efficiently whereas the A15s keep sucking down the juice. Now if they'd included A9 results for reference we'd have an idea of where things stand, but as is, these slides don't tell us anything we didn't already know: A7s are designed for efficiency, A15s are designed for performance.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
Marketing slides are such an objective, accurate source of data.
 
Last edited:

Fjodor2001

Diamond Member
Feb 6, 2010
4,377
651
126
Quite possibly, but not so much because of the competition. The performance/power 'data' of the last slide is typical marketing: the 'performance' charts are basically just saying that using big.LITTLE doesn't hurt performance, while the 'power' charts merely show that the A7s can do trivial tasks very efficiently whereas the A15s keep sucking down the juice. Now if they'd included A9 results for reference we'd have an idea of where things stand, but as is, these slides don't tell us anything we didn't already know: A7s are designed for efficiency, A15s are designed for performance.

http://www.androidcentral.com/samsung-announces-8-core-exynos-5-mobile-processor

"Samsung says this tech will allow the chip to use up to 70-percent less power than a traditional quad-core A15 SoC"

So why do you need the Cortex-A9 results specifically for comparison?

Note that I too would like to see some more detailed benchmarks of course. But the numbers above at least give some indication, don't they?
 
Last edited:

Khato

Golden Member
Jul 15, 2001
1,319
391
136
http://www.androidcentral.com/samsung-announces-8-core-exynos-5-mobile-processor

"Samsung says this tech will allow the chip to use up to 70-percent less power than a traditional quad-core A15 SoC"

So why do you need the Cortex-A9 results specifically for comparison?

Note that I too would like to see some more detailed benchmarks of course. But the numbers above at least give some indication, don't they?

I'm not doubting that the big.LITTLE approach can allow for excellent power savings in low-load situations. But since we have power data for neither the A15 nor the A7 under low load, there's no good basis for comparison. The performance charts show no change for big.LITTLE on anything except the bbench + audio metric, which likely means that the A15 cores are doing all the work and the A7s are idling, whereas all the power charts except that same one just tell us that the A7s are more efficient when running a low load.

The problem is that we don't know whether the A15 keeps sucking down power even in low-load situations, and whether the A7 is merely comparable to the A9 there or quite a bit better. All this material tells me is that ARM couldn't figure out a way to make an efficient, high-performance core, so instead they decided to make an inefficient high-performance core and couple it with a slow, efficient core. Which is definitely a valid approach, but it does cost in terms of both die size and design time.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,377
651
126
All this material tells me is that ARM couldn't figure out a way to make an efficient, high-performance core, so instead they decided to make an inefficient high-performance core and couple it with a slow, efficient core. Which is definitely a valid approach, but it does cost in terms of both die size and design time.

Or the reason ARM decided to go for the big.LITTLE solution is this:

http://forums.anandtech.com/showpost.php?p=34489166&postcount=23
 

LogOver

Member
May 29, 2011
198
0
0
I would say that the A15 cores are for benchmarks while the A7 cores are for all other work (which is not that demanding on phones).
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
:eek:

mmm... is Samsung known to lie like Nvidia, or are they serious about their PRs?

Everybody in this space has been playing games with words. Samsung did make some pretty notoriously ridiculous claims in the past, like the one about S5PC110's SGX540 achieving 90 MTri/s.

Their claims about power consumption for the dual-core 45nm vs quad-core 32nm Exynos 4s ended up being about right, though.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
yeah, why would ARM make an SDK for their chips... it's not like they are trying or anything
 

djgandy

Member
Nov 2, 2012
78
0
0
So Medfield, a design that has been hanging around in various forms for two to three years now, is doomed because something newer has come out, and not because it was being retired due to old age anyway?

What excellent analysis.
 
Last edited:

djgandy

Member
Nov 2, 2012
78
0
0
Everybody in this space has been playing games with words. Samsung did make some pretty notoriously ridiculous claims in the past, like the one about S5PC110's SGX540 achieving 90 MTri/s.

Their claims about power consumption for the dual-core 45nm vs quad-core 32nm Exynos 4s ended up being about right, though.

They said SGX540 could do 90 MTri/s? Really?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Hi,
I know you have some experience with CPU design. Could you please explain how this big.LITTLE thing is supposed to work? When a thread starts running, how does the system or CPU determine whether it is a light or heavy task?

Thanks.

I'm not sure how it will ultimately play out in reality once products hit the market TBH.

Right now it is being spun as all sunshine and roses, but so too was Intel's hyperthreading when it debuted, as was AMD's CMT... and I wouldn't call either of those approaches all that great.

Fjodor2001's links will tell you how it is supposed to work. But you know what they say - "In theory there is no difference between theory and practice, but in practice there is ;)"

We'll see how borked big.LITTLE is once it is reduced to practice. The last time I saw something as complicated as big.LITTLE it was AMD's original ambitions for Cool'n'Quiet on 65nm Phenom - individual cores could have specific voltages applied and so forth, only it was completely broken in practice and resulted in dire performance losses so the best advice was to just disable it (which AMD elected to do in their own 45nm PhenomII shrink, rather than waste more resources trying to get it right).

If you can't tell, I am rather the doubting Thomas on the big.LITTLE concept. Not because I think it is a bad idea; I don't, I really think it is great. But it critically depends on humans getting a lot of stuff right, and as we see with the vastly simpler core/thread topologies of HT and CMT, the human element involved in crafting the OS, the scheduler, and the apps is a fatal flaw in this heterogeneous-core approach.

The hardware guys have outsmarted themselves and it shows, they failed at KISS (keep it stupid simple) and history will repeat itself IMO.
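To put the "humans getting a lot of stuff right" problem in concrete terms, here's a toy model of the kind of load-tracking heuristic a big.LITTLE-aware scheduler has to get right. The thresholds, decay factor, and names are invented for illustration, not taken from any real kernel:

```python
# Toy model of load-tracked task placement on a big.LITTLE system.
# All constants are made up; real schedulers track utilization with
# far more elaborate per-entity metrics.

UP_THRESHOLD = 0.8    # tracked load above this -> migrate to a big core
DOWN_THRESHOLD = 0.3  # tracked load below this -> migrate to a LITTLE core
DECAY = 0.5           # weight given to history vs. the newest sample

class Task:
    def __init__(self):
        self.load = 0.0          # exponentially decayed utilization, 0..1
        self.cluster = "LITTLE"  # start everything on the efficient cores

    def tick(self, busy_fraction):
        """Feed one scheduling-period utilization sample (0..1)."""
        self.load = DECAY * self.load + (1 - DECAY) * busy_fraction
        # Hysteresis: two thresholds so tasks don't ping-pong between clusters.
        if self.cluster == "LITTLE" and self.load > UP_THRESHOLD:
            self.cluster = "big"
        elif self.cluster == "big" and self.load < DOWN_THRESHOLD:
            self.cluster = "LITTLE"

t = Task()
for sample in [1.0, 1.0, 1.0]:  # a burst of heavy work...
    t.tick(sample)
print(t.cluster)                # ...promotes the task to the big cluster
for sample in [0.0] * 5:        # a long idle stretch...
    t.tick(sample)
print(t.cluster)                # ...demotes it back to LITTLE
```

Every constant here is a tuning knob a human has to pick, and picking them badly either leaves heavy tasks stranded on slow cores or burns the big cores on trivial work, which is exactly the failure mode being worried about above.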
 
Jan 8, 2013
59
0
0
I'm not sure how it will ultimately play out in reality once products hit the market TBH.

Right now it is being spun as all sunshine and roses, but so too was Intel's hyperthreading when it debuted, as was AMD's CMT... and I wouldn't call either of those approaches all that great.

Fjodor2001's links will tell you how it is supposed to work. But you know what they say - "In theory there is no difference between theory and practice, but in practice there is ;)"

We'll see how borked big.LITTLE is once it is reduced to practice. The last time I saw something as complicated as big.LITTLE it was AMD's original ambitions for Cool'n'Quiet on 65nm Phenom - individual cores could have specific voltages applied and so forth, only it was completely broken in practice and resulted in dire performance losses so the best advice was to just disable it (which AMD elected to do in their own 45nm PhenomII shrink, rather than waste more resources trying to get it right).

If you can't tell, I am rather the doubting Thomas on the big.LITTLE concept. Not because I think it is a bad idea; I don't, I really think it is great. But it critically depends on humans getting a lot of stuff right, and as we see with the vastly simpler core/thread topologies of HT and CMT, the human element involved in crafting the OS, the scheduler, and the apps is a fatal flaw in this heterogeneous-core approach.

The hardware guys have outsmarted themselves and it shows, they failed at KISS (keep it stupid simple) and history will repeat itself IMO.

I share your opinion on this. Sounds kind of silly. Samsung is going one step further and putting 4 of those little cores on their upcoming chips. What sense could it make except marketing? I can assure the OP that Intel's doom will not come from this trick.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,377
651
126
I share your opinion on this. Sounds kind of silly. Samsung is going one step further and putting 4 of those little cores on their upcoming chips. What sense could it make except marketing? I can assure the OP that Intel's doom will not come from this trick.

"What sense could it make except marketing?"

Perhaps power saving combined with high performance when needed? I.e. the whole purpose of big.LITTLE.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
"What sense could it make except marketing?"

Perhaps power saving combined with high performance when needed? I.e. the whole purpose of big.LITTLE.

Seriously. Doesn't make sense to waste die real estate just for a marketing gimmick, especially when the competition is so fierce.