dullard
Elite Member
- May 21, 2001
This just sounds like your own fanboy hysteria flaring up because you heard things you don't like.
Also, what new core?
Are you saying that Golden Cove is the same core?
I said, that part was originally meant to be a from-scratch design (i.e. totally new), and it has now been scaled back to whatever incrementalist design it is now. The original goal is what I would consider "new".
What's your definition of "new"?
Given that Intel has been selling the same core for 5 years now, changing anything is considered new.
As it relates to this thread, I could take definitions 1, 2b, 4a, 5, or 6 in the link below. It seems that you only accept item 6?
Definition of NEW: having recently come into existence : recent, modern; having been seen, used, or known for a short time : novel; unfamiliar. See the full definition at www.merriam-webster.com
> Do you mind posting any proof that you actually worked at Intel? Otherwise why would we choose to believe anything you say?

Well, it is not my fault some people align with Intel's dismal track record. That's part of the reason I quit.
> So, here's a theory on Big.Little - maybe someone more knowledgeable could share if there's anything to it?
>
> I wonder if part of the idea here is that little cores free up space on the chip to make the big cores even bigger. So Intel would have a 16-core CPU, but thanks to die-space allocation, the first 8-10 cores that actually get the hardest use today are bigger and more powerful than those in a competing 16-core part. And then the remaining little cores are set to handle all the smaller tasks that sometimes needlessly eat up an under-utilized big core.
>
> Thus, this might not just be about power efficiency. It could be about power/space allocation to make sure the main stars (the 8-10 big cores) are oversized - even bigger and more powerful than the competition - all while not losing in total core count. I mean, there's only so much die space - why waste too much of it on cores above 10 that are running background tasks?
>
> Most apps today have diminishing returns after around 8 cores, but extra cores are handy for background tasks etc. So this would allow multiple apps and tasks to run, it would power up the main 8 cores past the competition thanks to the extra die space available for them, and over time Intel would of course add more big and more little cores as software parallelization matures.
>
> Thoughts?

It goes further than that. Intel plans to use this concept to add AI, more graphics, more IO, etc. as needed for the specific customer. The little cores handle the basics; then mix and match as needed for the rest. You need X number of big cores? You got it. You need lots of graphics capability? You got it. You need AI? You got it. You need a bit of everything? You got it. You need just the cheapest possible thing? You got it. Media accelerators, ray tracing, on-chip memory, crypto? Yes to all.
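The "diminishing returns after around 8 cores" point can be sanity-checked with Amdahl's law. A minimal back-of-envelope sketch - the 90%-parallel figure is an assumed illustration, not a measurement of any real app:

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
# parallel fraction of the workload. p = 0.90 below is an assumed
# illustrative figure, not a measurement.

def amdahl_speedup(p, n):
    """Ideal speedup of a workload that is fraction p parallel, on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.90
for n in (4, 8, 16, 32):
    print(f"{n:2d} cores -> {amdahl_speedup(p, n):.2f}x")
# Doubling from 8 to 16 cores adds only ~36% here, so the second
# batch of cores is a natural candidate for cheap little cores.
```

Under that assumption, cores 9-16 buy far less per-core speedup than cores 1-8, which is the whole argument for spending less area on them.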
So this means that Intel did nothing with the Willow Cove arch? Despite them flat-out mentioning they decided not to chase IPC in order to get the clocks up?
> Do you mind posting any proof that you actually worked at Intel? Otherwise why would we choose to believe anything you say?

I can say I was convinced he worked at Intel years ago, from what he said and how he said it. Just the fact that he quit because he hates what was going on reinforces that.
> Except that I'm sure it is against the rules for former tech employees, if that is indeed what you are, to use AnandTech forums as a staging ground for attacks against your former employer. If you indeed resigned, then why the haterade?

Hah, I happily handed in my blue badge the day I quit that place. I suppose I could post a picture of some COBRA health insurance or 401k documents they spammed me with afterwards, but frankly, the opinion of internet dwellers is not worth that effort.
So you can choose to believe whatever you want. If you choose to believe I did not work at Intel for 14 years as a CPU designer, then everything I post has as much veracity to you as every other poster on this thread. LOL.
> I wonder if part of the idea here is with little cores, it frees up space on the chip to make the big cores even bigger. So, Intel would have a 16 core CPU, but thanks to die space allocation, the first 8-10 cores that actually get the hardest use today are bigger and more powerful than a competing 16 core.
For example, the logic required for out-of-order issue scales quadratically with issue width [22] and would eventually dominate the chip area and power consumption. This situation was summarised by Pollack [31], who stated that the performance of a single core increases with the square root of its number of transistors.
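Pollack's observation above can be put in rough numbers. A minimal sketch - the 16-billion-transistor budget and the 1-vs-4 split are assumptions chosen for illustration, not real die figures:

```python
import math

def pollack_perf(transistors):
    """Pollack's rule: single-core performance scales roughly with the
    square root of the transistors spent on the core."""
    return math.sqrt(transistors)

BUDGET = 16e9  # assumed transistor budget for the core area (illustrative)

# Option A: one huge core using the whole budget.
one_big = pollack_perf(BUDGET)

# Option B: four smaller cores splitting the same budget.
small = pollack_perf(BUDGET / 4)
four_small_throughput = 4 * small

# Each small core has half the single-thread performance of the big core,
# but together the four deliver twice the big core's aggregate throughput.
print(f"1 big core, single-thread : {one_big:.3e}")
print(f"1 small core, single-thread: {small:.3e}")
print(f"4 small cores, aggregate  : {four_small_throughput:.3e}")
```

That square-root scaling is why small cores win on throughput per area, and why a hybrid design keeps a few big cores only for the threads that actually need single-thread speed.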
Except that I'm sure it is against the rules for former tech employees, if that is indeed what you are, to use anandtech forums as a staging ground for attacks against your former employer. If you indeed resigned, then why the haterade?
Also, I always thought you were credible, with good sources, only to hear you say you "heard stuff" even while you were working as a CPU design architect? Seriously? You must have been really low on the rung then. Even interns seem to have better inside knowledge than you, and your "technical expertise" is questionable given some of the things you've revealed today.
I don't see how 8 big cores and 8 little cores beats 16 big cores. Could happen, I guess. I wouldn't count on it.
Here's the rub with this theory: the mainstream consumer platform does not need more than 8-10 big "fat" cores at the moment. If space allocation were an issue, they could just make 8 huge cores and be done with it. If background tasks were an issue, they could just make 10 big cores instead. Anyone looking for 12+ cores for a blend of consumer and semiprofessional workloads could always be guided towards HEDT.

You've got the gist. Fundamentally, small cores exist because they provide better PPA than their bigger counterparts, and on a consumer system you will always have low-priority threads that will run happily on them. Thus hybrid gives you better efficiency and lower cost for the same "user experience". That balance is delicate, however. The ARM ecosystem has been refining QoS for years, but it remains to be seen how x86 will fare.
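The QoS idea - low-priority threads land on little cores, big cores stay free for latency-sensitive work - can be sketched as a toy placement policy. This is my own illustration of the concept, not how any real OS scheduler or Thread Director works:

```python
from dataclasses import dataclass

@dataclass
class Thread:
    name: str
    background: bool  # True for low-priority housekeeping work

def assign(threads, big_cores=8, little_cores=8):
    """Toy policy: foreground threads prefer big cores, background threads
    prefer little cores, each spilling to the other pool only when full."""
    placement = {}
    big_free, little_free = big_cores, little_cores
    for t in threads:
        if t.background and little_free:
            placement[t.name], little_free = "little", little_free - 1
        elif not t.background and big_free:
            placement[t.name], big_free = "big", big_free - 1
        elif little_free:  # preferred pool full: spill over
            placement[t.name], little_free = "little", little_free - 1
        elif big_free:
            placement[t.name], big_free = "big", big_free - 1
        else:
            placement[t.name] = "waiting"  # every core is busy
    return placement

# Hypothetical workload mix for illustration.
workload = [Thread("game_render", False), Thread("indexer", True),
            Thread("game_audio", False), Thread("telemetry", True)]
print(assign(workload, big_cores=2, little_cores=2))
```

The delicate part the post alludes to is exactly the spill-over and priority-inference logic: misclassify one foreground thread onto a little core and the user feels it.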
> Here's the rub with this theory: the mainstream consumer platform does not need more than 8-10 big "fat" cores at the moment.
Yeah, 8+8 may compete with 12-core Zen, but Intel has no answer for 16 cores. Right now 16 cores is pretty much a niche for the consumer desktop, so it might not hurt them too much unless 16-core consumer CPUs become much more standard. I do think hybrid for the desktop is being done just because they still don't have a good-yielding process and will need a chiplet to get 12 or more cores.

Intel certainly needs more than 8C if they want to compete with Zen 3 12-16C in MT workloads, and HEDT is basically dead, at least for Intel.