- Oct 16, 2005
After thinking it over, I realize this idea could hurt the performance of multi-threaded applications, but it might be worth it anyway, assuming it's even architecturally possible, and I don't know whether it is.
I don't know much about CPU architecture, but I know enough for this idea to come to me, and I'd like your opinions on it.
What if we had a single thread of instructions go through a complex branch predictor focused on out-of-order execution as far into the future as possible (I have no idea how many instructions ahead that would be), and then had it deal out instructions from that same thread to separate cores? The results would then converge at a more powerful core for error checking and correction. This would need L1 cache shared between the worker cores, and L2 cache shared by all of them. Couldn't this effectively spread a single-threaded workload like SuperPi over many cores?
Eh, my idea is very crude and underdeveloped, but I think I have something here.
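To make it a bit more concrete, here is a rough software-only sketch of roughly what I'm picturing, written in Python rather than hardware. Everything specific here is an illustrative choice of mine, not how any real chip works: the pi series standing in for SuperPi, the chunk size, and the validate-by-recompute step. "Worker cores" each speculatively execute a chunk of one logical thread's work, and a "checker core" validates and commits each chunk in program order.

```python
# A minimal software sketch of the idea, under my own assumptions:
# "worker cores" each run a speculative chunk of one logical thread's work,
# and a "checker core" validates each chunk in program order before
# committing it. The pi series, chunk size, and validation-by-recompute
# scheme are illustrative only, not anything real hardware does.
from concurrent.futures import ProcessPoolExecutor

CHUNK = 100_000          # amount of work handed to each "worker core"
NUM_CHUNKS = 8           # how many chunks we speculate ahead

def leibniz_chunk(start: int, count: int) -> float:
    """Speculatively execute one chunk of the pi/4 series on a worker core."""
    return sum((-1.0) ** k / (2 * k + 1) for k in range(start, start + count))

def checker_validate(start: int, count: int, speculative: float) -> float:
    """The 'more powerful core': recompute the chunk, then accept or correct it."""
    reference = leibniz_chunk(start, count)
    return speculative if abs(speculative - reference) < 1e-12 else reference

if __name__ == "__main__":
    with ProcessPoolExecutor() as workers:
        # Deal out future chunks of the same logical thread to separate cores.
        futures = [workers.submit(leibniz_chunk, i * CHUNK, CHUNK)
                   for i in range(NUM_CHUNKS)]
        # Results converge at the checker core, which commits them in program order.
        pi_over_4 = 0.0
        for i, fut in enumerate(futures):
            pi_over_4 += checker_validate(i * CHUNK, CHUNK, fut.result())
    print("pi is approximately", 4 * pi_over_4)
```

Of course, the sketch glosses over exactly the hard part: these chunks are independent by construction, whereas real single-threaded code has data dependencies between instructions that the dispatcher would have to predict around, which is where the branch-prediction-far-into-the-future piece would have to earn its keep.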