So, here’s a theory on big.LITTLE - maybe someone more knowledgeable could say whether there’s anything to it?
I wonder if part of the idea is that using little cores frees up space on the chip to make the big cores even bigger. So Intel would still have a 16 core CPU, but thanks to that die space allocation, the first 8-10 cores that actually get the hardest use today would be bigger and more powerful than the cores in a competing 16 core chip. And the remaining little cores would handle all the smaller tasks that otherwise needlessly eat up an under-utilized big core.
Thus, this might not just be about power efficiency. It could be about power and die space allocation, making sure the main stars (the 8-10 big cores) are oversized - even bigger and more powerful than the competition - all while not losing in total core count. I mean, there’s only so much die space - why waste too much of it on the cores beyond 10 that mostly run background tasks?
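To make the trade-off concrete, here’s a tiny back-of-envelope sketch in Python. The 16-unit area budget, the "little core = 1/4 of a big core" ratio, and the leftover_per_big_core helper are all my own made-up numbers and names, purely to illustrate the idea:

```python
# Back-of-envelope sketch with made-up numbers, just to illustrate the trade-off.
# Assume the total core-area budget is 16 "units" (enough for 16 ordinary big cores),
# and that a little core costs ~1/4 the area of a big core (hypothetical ratio).

TOTAL_AREA = 16.0      # area budget, measured in "ordinary big core" units
LITTLE_RATIO = 0.25    # hypothetical: one little core = 1/4 of a big core's area

def leftover_per_big_core(big_cores: int, little_cores: int) -> float:
    """Area left for each big core after carving out the little cores."""
    remaining = TOTAL_AREA - little_cores * LITTLE_RATIO
    return remaining / big_cores

# 16 ordinary big cores: each gets exactly 1.0 unit of area.
print(leftover_per_big_core(16, 0))   # 1.0

# 8 big + 8 little: still "16 cores" on the box, but each big core now has
# 1.75 units of area to spend on bigger caches, wider execution units, etc.
print(leftover_per_big_core(8, 8))    # 1.75
```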
Most apps today see diminishing returns beyond around 8 cores, but extra cores are still handy for background tasks and multitasking. So this approach would let multiple apps and tasks run at once, it would push the main 8 cores PAST the competition thanks to the extra die space available to them, and over time Intel would of course add more big and more little cores as software and parallelization mature.
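On the diminishing-returns point, here’s a quick Amdahl’s-law sketch - assuming, just for the sake of argument, that ~90% of a typical app’s work parallelizes (that fraction is my assumption, not a measured figure):

```python
# Rough Amdahl's-law illustration of why returns flatten out past ~8 cores
# for software where only part of the work scales across cores.

def amdahl_speedup(cores: int, parallel_fraction: float = 0.90) -> float:
    """Theoretical speedup on `cores` cores if `parallel_fraction` of the work scales."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for n in (1, 2, 4, 8, 16, 32):
    print(f"{n:2d} cores -> {amdahl_speedup(n):.2f}x")
# 1 -> 1.00x, 2 -> 1.82x, 4 -> 3.08x, 8 -> 4.71x, 16 -> 6.40x, 32 -> 7.80x
```

Going from 8 to 16 cores only adds about 1.7x of theoretical speedup under that assumption, which is why spending that area on fatter big cores plus cheap little cores for the background stuff doesn’t seem crazy to me.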
Thoughts?