
Discussion: Intel current and future Lakes & Rapids thread


Exist50

Member
Aug 18, 2016
126
165
116
This just sounds like your own fanboy hysteria getting riled up because you heard things you don't like.

Also, what new core?
I listed things we know are new. If anything is incremental, it's not in any way that's externally visible.

And as for the new core: it uses Golden Cove. Writing in 2020, I would indeed call that new.
 
  • Like
Reactions: jj109 and uzzi38

dmens

Golden Member
Mar 18, 2005
1,984
322
126
Are you saying that Golden Cove is the same core?
I said, that part was originally meant to be a from-scratch design (i.e. totally new), and it has now been scaled back to whatever incrementalist design it is now. The original goal is what I would consider "new".

What's your definition of "new"?
 

jpiniero

Diamond Member
Oct 1, 2010
8,397
1,426
126
I said, that part was originally meant to be a from-scratch design (i.e. totally new), and it has now been scaled back to whatever incrementalist design it is now. The original goal is what I would consider "new".

What's your definition of "new"?
Given that Intel has been selling the same core for 5 years now, changing anything is considered new.
 

dullard

Elite Member
May 21, 2001
22,480
807
126
I said, that part was originally meant to be a from-scratch design (i.e. totally new), and it has now been scaled back to whatever incrementalist design it is now. The original goal is what I would consider "new".

What's your definition of "new"?
As it relates to this thread, I could take definitions 1, 2b, 4a, 5, or 6 in the link below. It seems that you only accept item 6?

 
  • Like
Reactions: jj109

dmens

Golden Member
Mar 18, 2005
1,984
322
126
Given that Intel has been selling the same core for 5 years now, changing anything is considered new.
Well it is not my fault some people align to Intel's dismal track record. That's part of the reason I quit.
 

yuri69

Member
Jul 16, 2013
71
64
91
Mind you, Golden Cove aims at a 20% IPC gain.

AMD treats their Zen 3 as an entirely new architecture. Quoting Mark Papermaster: "Zen 3 is not a derivative design".

So why wouldn't Golden Cove be a new design? Besides, there are no truly from-scratch designs...
 
  • Like
Reactions: mikk

dmens

Golden Member
Mar 18, 2005
1,984
322
126
As it relates to this thread, I could take definitions 1, 2b, 4a, 5, or 6 in the link below. It seems that you only accept item 6?

By your dictionary definition #1, every single stepping of the same CPU family can be considered new. So no, I don't accept your contrived examples.
 
  • Like
Reactions: lightmanek

Dayman1225

Senior member
Aug 14, 2017
998
572
106
Well it is not my fault some people align to Intel's dismal track record. That's part of the reason I quit.
Do you mind posting any proof that you actually worked at Intel? Otherwise why would we choose to believe anything you say?
 

dmens

Golden Member
Mar 18, 2005
1,984
322
126
Do you mind posting any proof that you actually worked at Intel? Otherwise why would we choose to believe anything you say?
Hah, I happily handed in my blue badge the day I quit that place. I suppose I can post a picture of some COBRA health insurance or 401k documents they spammed me afterwards, but frankly, the opinion of internet dwellers is not worth that effort.

So you can choose to believe whatever you want. If you choose to believe I did not work at Intel for 14 years as a CPU designer, then everything I post has as much veracity to you as every other poster on this thread. LOL.
 

dsc106

Senior member
May 31, 2012
302
8
81
So, here’s a theory on Big.Little - maybe someone more knowledgeable could share if there’s anything to it?


I wonder if part of the idea here is that with little cores, it frees up space on the chip to make the big cores even bigger. So, Intel would have a 16 core CPU, but thanks to die space allocation, the first 8-10 cores that actually get the hardest use today are bigger and more powerful than a competing 16 core. And then the remaining little cores are set to handle all the smaller tasks that sometimes needlessly eat up an underutilized big core.

Thus, this might not just be about power efficiency. It could be about power / space allocation to make sure the main stars (the 8-10 big cores) are oversized - even bigger and more powerful than the competition - all while not losing in total core count. I mean, there’s only so much die space - why waste too much on cores above 10 that are running background tasks?

Most apps today have diminishing returns after around 8 cores, but extra cores are handy for background tasks etc. So this would allow multiple apps and tasks to run, it would power up the main 8 cores PAST the competition thanks to more die space available for them, and over time Intel would add more big and more little cores of course as software and parallelization matures.

Thoughts?
 

dullard

Elite Member
May 21, 2001
22,480
807
126
So, here’s a theory on Big.Little - maybe someone more knowledgeable could share if there’s anything to it?


I wonder if part of the idea here is that with little cores, it frees up space on the chip to make the big cores even bigger. So, Intel would have a 16 core CPU, but thanks to die space allocation, the first 8-10 cores that actually get the hardest use today are bigger and more powerful than a competing 16 core. And then the remaining little cores are set to handle all the smaller tasks that sometimes needlessly eat up an underutilized big core.

Thus, this might not just be about power efficiency. It could be about power / space allocation to make sure the main stars (the 8-10 big cores) are oversized - even bigger and more powerful than the competition - all while not losing in total core count. I mean, there’s only so much die space - why waste too much on cores above 10 that are running background tasks?

Most apps today have diminishing returns after around 8 cores, but extra cores are handy for background tasks etc. So this would allow multiple apps and tasks to run, it would power up the main 8 cores PAST the competition thanks to more die space available for them, and over time Intel would add more big and more little cores of course as software and parallelization matures.

Thoughts?
It goes further than that. Intel plans to use this concept to add AI, more graphics, more IO, etc. as needed for the specific customer need. The little cores handle the basics. Then mix-and-match as needed for the rest. You need X number of big cores? You got it. You need lots of graphics capability? You got it. You need an AI accelerator? You got it. You need a bit of everything? You got it. You need just the cheapest possible thing? You got it. Media accelerators, ray tracing, on-chip memory, crypto? Yes to all.

This will take several years to implement fully. But that is their end-goal.
 
Last edited:

jj109

Senior member
Dec 17, 2013
381
33
91
So this means that intel did nothing with Willow Cove arch? Despite them flat out mentioning they decided not to chase IPC to get the clocks up.
It has new instructions, CET, and a major high-bandwidth cache redesign.
10SF is a metal stack change, if their Architecture Day graphic is anything to go by; there are more metal layers than in the previously disclosed 10nm. It's practically a new process at that point.
Therefore even if the core architecture did not change, the entire backend design process would have to be redone.

I also disagree that IPC has nothing to do with clock speed. IPC depends heavily on how well the core predicts branches and how well the cache and memory system keep the core fed to avoid stalled cycles.
A processor running at a higher clock speed will naturally show lower IPC, since fixed memory latencies cost more cycles. The program being tested will also affect IPC.
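That clock/IPC interaction can be sketched with a back-of-the-envelope model (all numbers here are invented for illustration, not measurements of any real core): a memory miss costs a fixed number of nanoseconds, so at a higher clock the same miss burns more cycles and measured IPC drops.

```python
# Toy model: effective IPC for the same workload at two clock speeds.
# All parameters (core IPC, miss count, latency) are made-up examples.
def effective_ipc(freq_ghz: float,
                  instructions: int = 1_000_000,
                  core_ipc: float = 4.0,        # IPC when never stalled
                  misses: int = 10_000,         # accesses that stall the core
                  mem_latency_ns: float = 80.0) -> float:
    compute_cycles = instructions / core_ipc
    stall_cycles = misses * mem_latency_ns * freq_ghz  # ns -> core cycles
    return instructions / (compute_cycles + stall_cycles)

ipc_3ghz = effective_ipc(3.0)
ipc_5ghz = effective_ipc(5.0)
assert ipc_5ghz < ipc_3ghz  # faster clock, lower measured IPC
```

The fixed compute work stays the same in cycles, but the stall component scales linearly with frequency, which is why the same binary reports lower IPC at a higher clock.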
 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
20,810
8,970
136
Do you mind posting any proof that you actually worked at Intel? Otherwise why would we choose to believe anything you say?
I can say I was convinced he worked at Intel years ago, from what he said and how he said it. Just the fact that he quit because he hates what was going on reinforces that.

Here is one of his posts from 2014..

 
Last edited:

Zucker2k

Golden Member
Feb 15, 2006
1,233
593
136
Hah, I happily handed in my blue badge the day I quit that place. I suppose I can post a picture of some COBRA health insurance or 401k documents they spammed me afterwards, but frankly, the opinion of internet dwellers is not worth that effort.

So you can choose to believe whatever you want. If you choose to believe I did not work at Intel for 14 years as a CPU designer, then everything I post has as much veracity to you as every other poster on this thread. LOL.
Except that I'm sure it is against the rules for former tech employees, if that is indeed what you are, to use anandtech forums as a staging ground for attacks against your former employer. If you indeed resigned, then why the haterade?

Also, I always thought you were credible, with good sources, only to hear you say you merely "heard stuff" even while you were working as a CPU design architect? Seriously? You must have been really low-rung then. Even interns seem to have better inside knowledge than you, and your "technical expertise" is questionable given some of the things you've revealed today.
 
  • Like
Reactions: mikk

jj109

Senior member
Dec 17, 2013
381
33
91
I wonder if part of the idea here is that with little cores, it frees up space on the chip to make the big cores even bigger. So, Intel would have a 16 core CPU, but thanks to die space allocation, the first 8-10 cores that actually get the hardest use today are bigger and more powerful than a competing 16 core. And then the remaining little cores are set to handle all the smaller tasks that sometimes needlessly eat up an underutilized big core.
The issue is that the cost of single-thread performance doesn't scale linearly.

For example, the logic required for out-of-order issue scales quadratically with issue width [22] and would eventually dominate the chip area and power consumption. This situation had been summarised by Pollack [31] by stating that the performance of a single core increases with the square root of its number of transistors
So given a particular transistor budget, a big array of small cores will beat out a small array of big cores in nT tasks... like Cinebench.
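The Pollack's-rule argument quoted above can be sketched numerically (the budget figures are arbitrary units, not real transistor counts): if single-core performance goes as the square root of per-core transistors, aggregate throughput for a fixed budget grows with core count.

```python
# Toy comparison under Pollack's rule: perf per core ~ sqrt(transistors per core).
# TOTAL_BUDGET is an invented unit-less chip budget, purely for illustration.
import math

TOTAL_BUDGET = 16.0

def throughput(n_cores: int, budget: float = TOTAL_BUDGET) -> float:
    """Aggregate nT throughput: n cores, each sqrt(budget / n) fast."""
    per_core = budget / n_cores
    return n_cores * math.sqrt(per_core)

few_big = throughput(4)      # 4 big cores from the budget -> 8.0
many_small = throughput(16)  # 16 small cores from the same budget -> 16.0
assert many_small > few_big
```

Since `n * sqrt(budget / n) = sqrt(n * budget)`, throughput grows as the square root of core count for a fixed budget, which is why the many-small configuration wins in embarrassingly parallel workloads (at the cost of per-thread speed).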
 

Exist50

Member
Aug 18, 2016
126
165
116
I said, that part was originally meant to be a from-scratch design (i.e. totally new), and it has now been scaled back to whatever incrementalist design it is now. The original goal is what I would consider "new".
If you were talking about Golden Cove alone, then I can maybe see what you mean. Irrespective of IPC increases, Zen 3 deserves to be described as a "from scratch" architecture, while Golden Cove does not. Note that that says nothing about gen/gen improvements from either.

From the perspective of Sapphire Rapids, however, the combined changes are significant enough that I don't think anyone looking at the product as a whole would call it incremental.
 

dmens

Golden Member
Mar 18, 2005
1,984
322
126

Except that I'm sure it is against the rules for former tech employees, if that is indeed what you are, to use anandtech forums as a staging ground for attacks against your former employer. If you indeed resigned, then why the haterade?

Also, I always thought you were credible, with good sources, only to hear you say you heard stuff even while you were working as a cpu design architect? Seriously? You must have been really low rung then. Even interns seem to have better inside knowledge than you, and your "technical expertise" is questionable with some of the things you've revealed today.
Against the rules? Whose rules?

Gotta applaud your attempt at a diss. Because my posts, which are based on actually having worked there and on talking to my friends and coworkers after I left, do not match your preferred internet rumors, I must have been a low-ranking and/or non-technical employee! LOL!

Anyways, it's Friday and my compilations are done, so I have to go back to work at a company that actually produces quality chip designs. Later.
 

Exist50

Member
Aug 18, 2016
126
165
116
So, here’s a theory on Big.Little - maybe someone more knowledgeable could share if there’s anything to it?


I wonder if part of the idea here is that with little cores, it frees up space on the chip to make the big cores even bigger. So, Intel would have a 16 core CPU, but thanks to die space allocation, the first 8-10 cores that actually get the hardest use today are bigger and more powerful than a competing 16 core. And then the remaining little cores are set to handle all the smaller tasks that sometimes needlessly eat up an underutilized big core.

Thus, this might not just be about power efficiency. It could be about power / space allocation to make sure the main stars (the 8-10 big cores) are oversized - even bigger and more powerful than the competition - all while not losing in total core count. I mean, there’s only so much die space - why waste too much on cores above 10 that are running background tasks?

Most apps today have diminishing returns after around 8 cores, but extra cores are handy for background tasks etc. So this would allow multiple apps and tasks to run, it would power up the main 8 cores PAST the competition thanks to more die space available for them, and over time Intel would add more big and more little cores of course as software and parallelization matures.

Thoughts?
You've got the gist. Fundamentally, small cores exist because they provide better PPA than their bigger counterparts, and on a consumer system, you will always have low priority threads that will run happily on them. Thus hybrid gives you better efficiency and lower cost for the same "user experience". That balance is delicate, however. The ARM ecosystem has been refining QoS for years, but it remains to be seen how x86 will fare.
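The low-priority-threads-on-small-cores idea can be sketched as a toy placement policy (thread names, priorities, and the threshold are all invented; real OS schedulers use far richer QoS hints than a single integer):

```python
# Toy sketch: route latency-sensitive threads to big cores,
# background threads to small cores. Purely illustrative.
from dataclasses import dataclass

@dataclass
class Thread:
    name: str
    priority: int  # higher = more latency-sensitive

def assign_core_class(t: Thread, fg_threshold: int = 5) -> str:
    """Foreground work goes to a big core, the rest to small ones."""
    return "big" if t.priority >= fg_threshold else "small"

threads = [Thread("game_render", 9), Thread("indexer", 2), Thread("updater", 1)]
placement = {t.name: assign_core_class(t) for t in threads}
# The render thread lands on a big core; the background tasks share small cores.
```

The delicate part mentioned above is exactly this classification step: misjudge a thread's priority and you either waste a big core on background work or stall foreground work on a small one.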
 
  • Like
Reactions: Tlh97

jpiniero

Diamond Member
Oct 1, 2010
8,397
1,426
126
I don't see how 8 big cores and 8 little cores beat 16 big cores. Could happen, I guess. I wouldn't count on it.
Going back to this: for sure Alder Lake's not going to beat 16 big Zen 3 cores in MT. But it should do just fine in lightly threaded tasks like games. The real competitive part would be something like Sapphire Rapids-X, but we have no idea of the timeframe or whether it's even going to happen.
 
  • Like
Reactions: Tlh97

coercitiv

Diamond Member
Jan 24, 2014
3,936
4,229
136
I wonder if part of the idea here is that with little cores, it frees up space on the chip to make the big cores even bigger. So, Intel would have a 16 core CPU, but thanks to die space allocation, the first 8-10 cores that actually get the hardest use today are bigger and more powerful than a competing 16 core. And then the remaining little cores are set to handle all the smaller tasks that sometimes needlessly eat up an underutilized big core.

Thus, this might not just be about power efficiency. It could be about power / space allocation to make sure the main stars (the 8-10 big cores) are oversized - even bigger and more powerful than the competition - all while not losing in total core count. I mean, there’s only so much die space - why waste too much on cores above 10 that are running background tasks?
You've got the gist. Fundamentally, small cores exist because they provide better PPA than their bigger counterparts, and on a consumer system, you will always have low priority threads that will run happily on them. Thus hybrid gives you better efficiency and lower cost for the same "user experience". That balance is delicate, however. The ARM ecosystem has been refining QoS for years, but it remains to be seen how x86 will fare.
Here's the rub with this theory: the mainstream consumer platform does not need more than 8-10 big "fat" cores at the moment. If space allocation were an issue, they could just make 8 huge cores and be done with it. If background tasks were an issue, they could just go to 10 big cores instead. Anyone looking for 12+ cores for a blend of consumer and semiprofessional workloads could always be guided towards HEDT.

There is no reason for Intel to go hybrid on desktops except them - currently - lacking a complete and modern offering for both consumer and server platforms.
 

mikk

Platinum Member
May 15, 2012
2,909
729
136
Here's the rub with this theory: the mainstream consumer platform does not need more than 8-10 big "fat" cores at the moment.

Intel certainly needs more than 8C if they want to compete with Zen 3 12-16C in MT workloads and HEDT is basically dead, at least for Intel.
 

ondma

Golden Member
Mar 18, 2018
1,567
445
106
Intel certainly needs more than 8C if they want to compete with Zen 3 12-16C in MT workloads and HEDT is basically dead, at least for Intel.
Yeah, 8+8 may compete with 12-core Zen, but Intel has no answer for 16 cores. Right now 16 cores is pretty much a niche for consumer desktop, so it might not hurt them too much unless 16-core consumer CPUs become much more standard. I do think hybrid on the desktop is being done just because they still don't have a good-yielding process and will need a chiplet design to get to 12 or more cores.
 

inf64

Diamond Member
Mar 11, 2011
3,060
1,769
136
Intel certainly needs more than 8C if they want to compete with Zen 3 12-16C in MT workloads and HEDT is basically dead, at least for Intel.
It's obvious that Intel completely conceded the 12C+ market to AMD; they don't even talk about it anymore (as they have nothing to show for it). The first good bet would be the Sapphire Rapids HEDT platform, if/when it arrives. The bad news is that Intel will not be able to command the highest prices for "high" core counts any more; they face extremely difficult competition from even the previous-gen TR 3000. This is why we don't hear anything about TR 4000: there is no point, really.
 
  • Like
Reactions: Tlh97 and Drazick
