Time to shift direction towards heterogeneous cores?

her34

Senior member
Dec 4, 2004
581
1
81
Intel and AMD have both always stated that they aren't going to enter a multi-core race. Yet as quad cores start to become common and 8- and 16-core CPUs approach, we have to ask whether that's the right path to head down.

What if Intel/AMD instead spent those transistors on a mix of different cores rather than many identical ones?

given the choice of:

a) an 8-core homogeneous CPU

or

b) a heterogeneous CPU (a rough scheduling sketch follows below):
*low-power core, similar to an XScale. All other cores turn off when the desktop is idle; its purpose is to save power.
*2 high-performance cores, i.e. today's Core 2 Duo.
*parallel/stream processing core: basically a GPU integrated into the CPU.
*micro cores: transistors dedicated to a specific app, similar to AMD's UVD for video decoding. Another likely app would be encryption.
*translating/legacy core: allows CPUs to be ISA-independent.
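
Very roughly, and with every name below made up for illustration, the kind of routing decision the OS scheduler or the chip's own control logic would have to make for option b might look something like this C++ sketch:

// A minimal sketch (all names hypothetical) of routing work to the core
// classes listed above on a heterogeneous CPU.
#include <cstdio>

// Hypothetical core classes matching the list in option (b).
enum class CoreType { LowPower, HighPerf, Stream, MicroCrypto, Legacy };

// Hypothetical description of a unit of work.
struct Task {
    bool parallel_data;     // lots of independent data -> stream/GPU-style core
    bool crypto;            // maps onto a fixed-function micro core
    bool latency_critical;  // single-thread heavy -> big core
};

// Pick a core class for a task; the real policy would live in the OS
// scheduler or in the CPU's own control logic.
CoreType pick_core(const Task& t, bool system_idle) {
    if (system_idle)        return CoreType::LowPower;   // everything else powered down
    if (t.crypto)           return CoreType::MicroCrypto;
    if (t.parallel_data)    return CoreType::Stream;
    if (t.latency_critical) return CoreType::HighPerf;
    return CoreType::LowPower;
}

int main() {
    Task video_decode{true, false, false};
    Task game_ai{false, false, true};
    std::printf("video decode -> core class %d\n", static_cast<int>(pick_core(video_decode, false)));
    std::printf("game AI      -> core class %d\n", static_cast<int>(pick_core(game_ai, false)));
}

The hard part is that policy: something (the compiler, the OS, or the hardware itself) has to decide which core class each piece of work belongs on.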


Some would argue it's a chicken-and-egg problem for software, so we need mass adoption of 8-core CPUs before developers start to come up with good uses for them. And not just reworking current types of programs for 8 cores, but coming up with new types of programs.

Others would argue the average consumer will never need 8 cores, so a heterogeneous CPU with a mix of cores would be of greater benefit.

Between choice a and b, which would you prefer?
 

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
I see heterogeneous cores hitting the desktop sometime between 2010 & 2011. But seriously, couldn't you find anything further into the future for us to talk about? How about that retinal interface nVidia has hinted at for 2018, or that direct neurological interface I'm always dreaming about, which will happen sooner or later? :D
 

hans007

Lifer
Feb 1, 2000
20,212
17
81
It would be insanely hard to program something like that, or else the translation/control unit on the CPU would have to be crazy complex to know which cores were on, which were not, and whether to send instructions there.

I haven't really touched CE stuff since college, but this idea is in a way already implemented as is. A low-processing core would basically have to support any instruction that might be used, so if you had one for just 2D apps and such, I suppose you wouldn't need SSE or FPU units; all the hard-core processing is FPU, SSE, MMX, etc. as it is. I'm not sure if Intel has put it into all their desktop CPUs, but on the laptop ones they can shut down units not being used, so if you aren't doing any floating-point math or SSE work it can turn those off and wake them up only when needed. I know the mobile cores can even turn parts of the cache off.
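
That "support any instruction that would be used" point is basically what runtime feature dispatch already handles in software today. A rough sketch, assuming GCC/Clang's __builtin_cpu_supports (the function names are mine): check that the unit exists, take the SSE path if it does, fall back to plain scalar code if it doesn't.

// Rough sketch of runtime feature dispatch: only take the SSE code path
// if the unit is actually there, otherwise fall back to scalar code.
// Uses GCC/Clang's __builtin_cpu_supports; function names are made up.
#include <cstdio>
#include <xmmintrin.h>  // SSE intrinsics

float sum_scalar(const float* v, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; ++i) s += v[i];
    return s;
}

float sum_sse(const float* v, int n) {
    __m128 acc = _mm_setzero_ps();
    int i = 0;
    for (; i + 4 <= n; i += 4)            // add four floats per iteration
        acc = _mm_add_ps(acc, _mm_loadu_ps(v + i));
    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    float s = lanes[0] + lanes[1] + lanes[2] + lanes[3];
    for (; i < n; ++i) s += v[i];         // leftover elements
    return s;
}

int main() {
    float data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float (*sum)(const float*, int) =
        __builtin_cpu_supports("sse") ? sum_sse : sum_scalar;
    std::printf("sum = %f\n", sum(data, 8));  // 36.0 either way
}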

So in effect this is already done, just on one chip. There isn't much duplication of the floating-point and integer hardware either, especially since the Core Duo came out, since it isn't actually 2 full CPUs stuck together on one die (like, say, the Pentium D or a Core 2 Quad), where there is some duplication of units (for example, a Core 2 Quad has 2 L2 cache controllers...).

That said, this is sort of already there, as CPUs have always been divided into logical units.
 

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
Originally posted by: heyheybooboo
Some would argue it's a chicken-and-egg problem for software

It always has been that way. The hardware is the chicken, though. Without a user base, the software guys absolutely refuse to spend the extra week or two (or month or two, or whatever it happens to be) to even make their software dual-threaded. Just imagine how it would be for homogeneous cores.
 

heyheybooboo

Diamond Member
Jun 29, 2007
6,278
0
0
Originally posted by: myocardia
Originally posted by: heyheybooboo
Some would argue it's a chicken-and-egg problem for software

It always has been that way. The hardware is the chicken, though. Without a user base, the software guys absolutely refuse to spend the extra week or two (or month or two, or whatever it happens to be) to even make their software dual-threaded. Just imagine how it would be for homogeneous cores.

Yup.

Yah want to increase CPU efficiency 100% at no cost?? :shocked:

Parallelise threads.
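
Something like this toy C++ example (all names mine), splitting an independent loop across two worker threads:

#include <cstdio>
#include <thread>
#include <vector>

// Square every element in [begin, end); the two halves of the vector are
// independent, so two threads can work without stepping on each other.
void square_range(std::vector<int>& v, int begin, int end) {
    for (int i = begin; i < end; ++i) v[i] *= v[i];
}

int main() {
    std::vector<int> data(1000);
    for (int i = 0; i < 1000; ++i) data[i] = i;

    std::thread t1(square_range, std::ref(data), 0, 500);     // first half
    std::thread t2(square_range, std::ref(data), 500, 1000);  // second half
    t1.join();
    t2.join();

    std::printf("data[999] = %d\n", data[999]);  // 999*999 = 998001
}

In the ideal case the two halves finish in roughly half the time, which is where the "100%" comes from.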
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
Originally posted by: heyheybooboo
Originally posted by: myocardia
Originally posted by: heyheybooboo
Some would argue it's a chicken-and-egg problem for software

It always has been that way. The hardware is the chicken, though. Without a user base, the software guys absolutely refuse to spend the extra week or two (or month or two, or whatever it happens to be) to even make their software dual-threaded. Just imagine how it would be for homogeneous cores.

Yup.

Yah want to increase CPU efficiency 100% at no cost?? :shocked:

Parallelise threads.

No cost for whom? It definitely costs the developer more in increased development time and complexity.
 

hans007

Lifer
Feb 1, 2000
20,212
17
81
Originally posted by: myocardia
Originally posted by: heyheybooboo
Some would argue it's a chicken-and-egg problem for software

It always has been that way. The hardware is the chicken, though. Without a user base, the software guys absolutely refuse to spend the extra week or two (or month or two, or whatever it happens to be) to even make their software dual-threaded. Just imagine how it would be for homogeneous cores.





It is not really a software issue; it's not like software guys refused to use the FPU, even though FPUs weren't really commonplace until the 486.


Shutting down and waking high-power cores and such, and making the software engineers handle controlling and turning them on, is just stupid. The CPU's control logic should do that on demand; I suppose it would be like traction control on a car. That said, it's already more or less there in mobile CPUs that have multiple cores on a single die.

The OP's post really seems to be a general dual-die MCM vs. monolithic die argument, and this topic, phrased differently, has been beaten to death in the AMD Barcelona vs. Intel arguments and such.

And it's not that easy to port code from single-threaded to multithreaded. I just had to do something like that for something relatively simple, and there are a lot of weird little issues; it's hard to debug, anyway. Honestly I don't mind doing it, it just takes a long time, and sales people etc. do not understand this. That said, it's doable, and I think the reason software guys complain so much is that it's generally difficult and biz dev / program manager types do not understand these things. Oh well...
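
To give a concrete idea of the kind of weird little issue I mean, here's a toy C++ sketch (names made up): two threads bumping a shared counter without a lock silently lose increments, and it only shows up intermittently.

#include <cstdio>
#include <mutex>
#include <thread>

long counter = 0;
std::mutex counter_lock;

// Each thread bumps the shared counter 100000 times. Without the lock the
// read-modify-write races and increments get lost; with it the total is
// always exactly 200000.
void bump(bool use_lock) {
    for (int i = 0; i < 100000; ++i) {
        if (use_lock) {
            std::lock_guard<std::mutex> guard(counter_lock);
            ++counter;
        } else {
            ++counter;  // data race
        }
    }
}

void run(bool use_lock) {
    counter = 0;
    std::thread a(bump, use_lock);
    std::thread b(bump, use_lock);
    a.join();
    b.join();
    std::printf("%s: counter = %ld (expected 200000)\n",
                use_lock ? "with lock" : "no lock", counter);
}

int main() {
    run(false);  // usually prints something short of 200000
    run(true);   // always 200000
}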
 

firewolfsm

Golden Member
Oct 16, 2005
1,848
29
91
The GPU makes sense, but beyond that, not really. Most apps can already use at least one core at 100%; just threading the processes brings better efficiency.

In the case of separate, specialized processors, they're a waste when not being used. Sure, they can go to idle mode and not eat power, but that's still extra die space that could have gone to something else.
 

cputeq

Member
Sep 2, 2007
154
0
0
This, in essence, already exists in the Cell processor, though in a simplified form compared to what you're describing.

And we all see / hear how easy that thing is to program for. What you're proposing, while interesting, would be a total nightmare for software to implement. The amount of software optimization needed would probably be an order of magnitude greater than with conventional multi-core approaches.