Discussion Intel current and future Lakes & Rapids thread


A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
No hyperthreading isn't an issue to me. Unless, of course, you agree with FangBlade that hyperthreading is a headache for developers since those threads are so much slower. Hyperthreading slows down so much important software that I usually just leave it turned off. I'd much rather just spam more E-cores instead.
This is a personal preference you're presenting. You're making a blanket statement about what should be done according to your own whims. It doesn't make sense.
The no-AVX-512 decision was a terrible mistake by Intel. They shot themselves in more than just the foot. But that is just a side distraction from my comments about why every company is going to Big/Little. We just can't have 40+ big cores each using 20 W or more. In order to go to large numbers of cores, each core needs to use less and less power. Thus, might as well bite the bullet and use cores that are optimal for that small power budget. Then toss in 4 big cores for a snappy user experience on single-threaded work (Intel's other mistake was using 8 big cores in Big/Little).
Yes, but idk where you're getting 20 W or more per core. I suspect power-hungry Intel processors. I expect per-core wattage to shrink over the next few years, not hold steady or grow. There's a rumour of Intel dumping HT, so if that isn't a big old lie we'll see how it competes with a "traditional" processor. As it stands now, the current Intel setup is imperfect. It's 95% there; the remaining 5% needs to get reimplemented somehow. It's my firm belief that had Intel not run into these self-imposed walls, Pat's statement about AMD being in the rear-view mirror would have been true. Power use would still have been high, but justified.
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
Threadripper CPUs have had 64-cores for a while. Obviously grandma doesn't need one, but professional users can use that kind of processing power.
TR isn't quite client, is it? It's watered-down Epyc. How many normal consumers are dumping 5 grand on a processor? Find me a 40-core processor in a <$2000 laptop or a <$1500 prebuilt.
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
The most exciting thing about Intel's e-cores is that it gave them a way back into HEDT where they basically gave up trying to compete because they couldn't make a monolithic die that had even a quarter as many cores.
What fantasy world do you live in, mate?
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
The first limit you hit and easiest to address is you can't realistically have hundreds of big cores each using 10 W, 20 W, or more for typical customers. You either have to severely frequency limit the big cores which no longer makes them act big. Or you have to go to cores that operate well with lower power (i.e. little cores).
Not quite. Intel found their first limit in simulation testing: performance wasn't linear. Much like their 10 GHz goal, they presumed performance would scale linearly. I think at the time most of the C2D lineup was a 2-core setup with HT. The performance was incredible compared to the Pentium D or the outgoing Pentium 4. They assumed that more cores, or a doubling of cores, would produce a linear performance slope. It didn't quite turn out like that. Even now you won't get linear performance.
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
I am not making claims about any specific CPUs that no one has heard of. I am talking about the future of computing.
Right, the future is the future. We could all die from an asteroid next week, but until it comes out it's a bit futile to discuss a future processor that doesn't exist and likely isn't on anyone's drawing board. The rumoured 8+32 setup wouldn't have intrigued me, but if team red pulls it off with their compact core design, it would. If Intel can figure it out, then Intel's design will also intrigue me. Suffice to say I'll be waiting to see what Z5 and ARL are like before pulling my wallet out. The wait is obnoxious, but when it comes to replacing 5 computers at once it matters a lot.
 

dullard

Elite Member
May 21, 2001
26,024
4,646
126
Not quite. Intel found their first limit in simulation testing: performance wasn't linear. Much like their 10 GHz goal, they presumed performance would scale linearly. I think at the time most of the C2D lineup was a 2-core setup with HT. The performance was incredible compared to the Pentium D or the outgoing Pentium 4. They assumed that more cores, or a doubling of cores, would produce a linear performance slope. It didn't quite turn out like that. Even now you won't get linear performance.
Any computer person should know Amdahl's law: scaling with more cores isn't linear (https://en.wikipedia.org/wiki/Amdahl's_law). The law has been around for over 50 years. No simulation is necessary for that.
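
Here's the formula in a few lines of Python for anyone who wants to plug in numbers themselves (p is the fraction of the work that can be parallelized):

```python
# Amdahl's law: ideal speedup on n cores when a fraction p of the work
# parallelizes and the rest (1 - p) stays serial.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even a 95%-parallel workload falls off a cliff well before 40 cores:
for n in (2, 4, 8, 40):
    print(f"{n:>2} cores -> {amdahl_speedup(0.95, n):.2f}x")
# 2 -> 1.90x, 4 -> 3.48x, 8 -> 5.93x, 40 -> 13.56x
```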
 

dullard

Elite Member
May 21, 2001
26,024
4,646
126
Right the future is the future. We could all die from an asteroid next week but until it comes out it's a bit futile to discuss a future processor that doesn't exist and likely isn't on anyone's drawing board.
So, we can't discuss where computers are going here? Unless there is a confirmed rumor by a person approved by you, we can't talk about it?

We simply cannot keep adding big cores without making them run like crap. We need cores that can run well on little power as we add more cores.
 

Abwx

Lifer
Apr 2, 2011
11,885
4,873
136
The first limit you hit and easiest to address is you can't realistically have hundreds of big cores each using 10 W, 20 W, or more for typical customers. You either have to severely frequency limit the big cores which no longer makes them act big. Or you have to go to cores that operate well with lower power (i.e. little cores).

Say a big core has 30% higher IPC than a small core: at the same throughput it will run at 0.77x the small core's frequency, and at roughly 0.5x the power it would draw at the small core's frequency. So the efficiency argument for small cores is moot, since a big core will likely have better efficiency at any given throughput.
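
A rough sanity check of that math, assuming (my assumptions, not measured figures) that throughput = IPC × frequency and that dynamic power scales roughly with f³ because voltage tracks frequency:

```python
# Back-of-the-envelope for the claim above. Assumptions: throughput scales
# as IPC * frequency, and dynamic power ~ f^3 (voltage tracking frequency).
big_ipc_ratio = 1.30            # big core: +30% IPC over the small core
freq_ratio = 1 / big_ipc_ratio  # frequency needed for equal throughput
power_ratio = freq_ratio ** 3   # relative dynamic power at that frequency

print(f"frequency: {freq_ratio:.2f}x")   # ~0.77x
print(f"power:     {power_ratio:.2f}x")  # ~0.46x, i.e. about half
```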

Actually, even in laptops small cores are useless from an efficiency POV; the only advantages are cost and being able to advertise more cores to lure the gullible into buying a half-baked bunch of cores.
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
So, we can't discuss where computers are going here? Unless there is a confirmed rumor by a person approved by you, we can't talk about it?

We simply cannot keep adding big cores without making them run like crap. We need cores that can run well on little power as we add more cores.
We can, but the idea of a 40-core regular consumer processor that doesn't cost several paycheques for most people is ridiculous. From my own perusals of the web, most serious gamers on 12th or 13th gen disable their E-cores for gaming, and for video editing if they do media stuff, because the "e cores are useless" to them.
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
Link other than MLID?
MLID wasn't born when that news came out. Try not to live up to your username here. Gillespie of Intel wrote a white paper at the time Intel was considering this, which used examples of Amdahl's law and Gustafson's law but attempted to whitewash them by saying the theory doesn't necessarily apply to real-world performance. At the time Intel saw a 1.9-2x speedup in performance with 2 cores versus one, and nearly 3.8x with 4 cores versus one. The running theory at Intel at the time was that they could bend those laws with modern reality: more cores plus more frequency for a near-linear performance improvement. Except, as we all know, you run into more problems.
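
For what it's worth, plugging those quoted speedups back into Amdahl's law shows the parallel fraction they imply; this is my own arithmetic, not anything from the white paper:

```python
# Inverting Amdahl's law to estimate the parallel fraction p implied by
# the speedups quoted above (numbers from the post, not a primary source).
def implied_parallel_fraction(speedup: float, n: int) -> float:
    return (1 / speedup - 1) / (1 / n - 1)

print(implied_parallel_fraction(1.9, 2))  # ~0.95 for 1.9x on 2 cores
print(implied_parallel_fraction(3.8, 4))  # ~0.98 for 3.8x on 4 cores
```

Even p = 0.98 caps out at 1/(1-p) = 50x no matter how many cores you add, which is why near-linear results at 2-4 cores said nothing about scaling to dozens.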

Around the same time there was a European paper, which one of Intel's French engineers (I forget his name) commented on, expressing the belief that they could possibly achieve it in the near future on a better lithography process, alongside more careful design. I'm sure Intel did try at some point in their labs, but it obviously didn't pan out. It wouldn't be until years later that we got better core designs, more cores, and better I/O handling in the processor. Still no linear performance, but damn good.
 

dullard

Elite Member
May 21, 2001
26,024
4,646
126
We can, but the idea of a 40-core regular consumer processor that doesn't cost several paycheques for most people is ridiculous.
2003 wants its P4 Extreme Edition post back (aka the Extremely Expensive Edition: https://www.theregister.com/2003/10/31/intel_pentium_4_extremely_expensive/ ). Or is it 2013 that wants its Enthusiast Edition post back? Your type of comment has been made every decade about processors with more cores, more cache, more frequency, etc.
From my own perusals of the web, most serious gamers on 12th or 13th gen disable their E-cores for gaming, and for video editing if they do media stuff, because the "e cores are useless" to them.
Check recent games and come back.
 
Jul 27, 2020
28,110
19,175
146
We can, but the idea of a 40-core regular consumer processor that doesn't cost several paycheques for most people is ridiculous. From my own perusals of the web, most serious gamers on 12th or 13th gen disable their E-cores for gaming, and for video editing if they do media stuff, because the "e cores are useless" to them.
There should be a "core blacklist" where users can specify which cores a process must NEVER touch. Then users won't have to disable E-cores. Let Windows and associated system processes cuddle with those cores, but the user can rest easy knowing none of their critical processes will get molested.
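
Something close to this already exists per-process via affinity masks. A minimal sketch with psutil; the core numbering is made up (it assumes logical CPUs 0-7 are P-cores and 8-15 are E-cores, so check your own topology first):

```python
# Rough sketch of the "core blacklist" idea using psutil (pip install psutil).
# The CPU IDs below are hypothetical -- verify your machine's P/E layout.
import psutil

P_CORES = list(range(0, 8))   # assumed P-core logical CPU IDs
E_CORES = list(range(8, 16))  # assumed E-core logical CPU IDs (the blacklist)

def pin_to_p_cores(pid: int) -> None:
    """Restrict a process so the scheduler never places it on E-cores."""
    psutil.Process(pid).cpu_affinity(P_CORES)

# e.g. pin_to_p_cores(game_pid) keeps a game off the E-cores without
# disabling them system-wide in the BIOS.
```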
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
2003 wants it's P4 Extreme Edition post back (aka Extremely Expensive Edition https://www.theregister.com/2003/10/31/intel_pentium_4_extremely_expensive/ ). Or is it that 2013 wants its Enthusiast edition post back? Your type of comment has been said every decade for years about processors with more cores or more cache or more frequency, etc.
No, Herb Sutter's "The Free Lunch Is Over" write-up from 2005 killed that concept.

Check recent games and come back.
I've seen it, but people don't like the E-cores. What can I tell you? If you love 'em, that's fine, mate. To me they're useless.
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
There should be a "core blacklist" where users can specify which cores a process must NEVER touch. Then users won't have to disable E-cores. Let Windows and associated system processes cuddle with those cores, but the user can rest easy knowing none of their critical processes will get molested.
You can pick cores with software you install. Some Intel boards have an option where, if you enable Scroll Lock (I think it is), it disables the E-cores without having to jump into the BIOS every time. Intel really needs software like AMD's, what do you call it, on desktop. Hopefully it'd be better and less buggy, and as polished-looking.
 

Khato

Golden Member
Jul 15, 2001
1,279
361
136
Intel really needs software like AMD's what do you call it in desktop. It'll be better and less buggy, and hopefully as polished looking.
Intel's variant of Ryzen Master is called Intel XTU. It's been around since 2007.
 

Dayman1225

Golden Member
Aug 14, 2017
1,160
996
146
I keep seeing this, but I've never seen any concrete proof beyond the initial rumour about the setup, which goes back several years now, before Alder Lake even came out. I simply don't see it. Otherwise it feeds into that weird game Intel loves to play against themselves, thinking they can add cores for linear performance. When Core came out they theorised they could add cores ad infinitum. Sounded incredible, but modern logic tells us that doesn't work. You'll soon hit limits you need to address. Though we are now on the verge of getting hundreds of cores on datacentre processors.
[Attached slide image]
This slide is obviously pretty old considering it mentions MTL-S, but 8+32 ARL certainly existed.