
Discussion Intel current and future Lakes & Rapids thread


LightningZ71

Golden Member
Mar 10, 2017
1,000
974
136
You're assuming that Intel is as good as Qualcomm at making mobile SoCs... Intel's penetration into the mobile phone and tablet market with its Atom products shows how well they are doing at that...
 

Abwx

Diamond Member
Apr 2, 2011
9,307
1,224
126
How would the chip become less reliable? Is Intel really that incompetent?
Not the chip but the MB: two VRs are less reliable than a single one.


And what if they require radically less voltage due to being built and optimized for 3.7GHz compared to the P monster?
Those are the same transistors in both P and E cores; they'll switch at the same speed at the same voltage. The difference will be in the delays within the pipelines: surely the E cores have a shorter pipeline, which doesn't allow as high frequencies, but otherwise energy per transistor is the same.

That being said, I wouldn't be surprised if they go to 16 P cores, either on the next node or eventually by using chiplets on the current one.
 

JoeRambo

Golden Member
Jun 13, 2013
1,333
1,299
136
Those are the same transistors in both P and E cores; they'll switch at the same speed at the same voltage. The difference will be in the delays within the pipelines: surely the E cores have a shorter pipeline, which doesn't allow as high frequencies, but otherwise energy per transistor is the same.
Doubt it. I think we understand the same things and we are really splitting MHz here. In practice an E core should require less voltage, in the same way old processors required less voltage for a non-Prime95 workload versus a Prime95 workload, or with HT enabled vs. disabled.
It is all pure speculation without hard data, but at 3.7GHz the E cores should require less voltage to function reliably than a P core. Someone will surely test it by varying VCore and dialing down the P-core frequency to find the X + 3.7GHz E-core point?
 

IntelUser2000

Elite Member
Oct 14, 2003
7,627
2,519
136
If Qualcomm could implement separate voltage planes on mobile SoCs at least 4 years ago, I severely doubt that "cost/complexity" or "reliability" would be an issue here.
Qualcomm doesn't have a 5GHz+ P core that pushes voltage levels rivaling chips from the early 2000s.

Also, cost becomes really important in the value motherboards that Alderlake may end up in. You do want the 12900K to at least run at stock settings in a $50 motherboard. That's the whole point of catering to the DIY market. Who cares about a little extra on the high-end motherboards? But that's not the only segment they have to consider.

The amount of things you have to do to sell a board as complex as a modern motherboard for $50 to the end user is quite amazing.
 

mikk

Diamond Member
May 15, 2012
3,290
1,093
136
Oh I see we both were seeking the same thing. :D


It's cute that anyone is even looking at the power consumption for all-core 5+ GHz overclocks. I bash on Intel for power usage issues all the time, but if you are going to compare Intel, let's compare performance based on a similar power envelope. That is, if you can't get Zen 3 to hit 5.3 GHz all-core, don't look at Intel products that DO hit that point and claim they are inefficient. Take the max Zen 3 all-core clocks and compare them to Intel equivalents, and see who gets more work done. Then we will have a starting point to see who is more efficient.

So, anyone with a 5+ GHz Zen 3 to compare? Based on the posts of some people it must be super efficient at this clock speed.
 

insertcarehere

Senior member
Jan 17, 2013
462
371
136
Qualcomm doesn't have a 5GHz+ P core that pushes voltage levels rivaling chips from the early 2000s.

Also, cost becomes really important in the value motherboards that Alderlake may end up in. You do want the 12900K to at least run at stock settings in a $50 motherboard. That's the whole point of catering to the DIY market. Who cares about a little extra on the high-end motherboards? But that's not the only segment they have to consider.

The amount of things you have to do to sell a board as complex as a modern motherboard for $50 to the end user is quite amazing.
Qualcomm has far tighter cost margins: for the devices they cater to, $50 might be the BoM for the entire SoC, modem, and motherboard combined. Any extra components would also have to be tiny and efficient, which wouldn't help budgets at all, and yet for them (and almost certainly Apple/Mediatek) this was evidently done ages ago. If implementing separate voltage planes were in any way costly, complex, or unreliable, we'd have heard about problems with them long before Alder Lake became a thing.
 

Doug S

Senior member
Feb 8, 2020
851
1,257
96
You're assuming that Intel is as good as Qualcomm at making mobile SoCs... Intel's penetration into the mobile phone and tablet market with its Atom products shows how well they are doing at that...

Intel's failure in the mobile market had little to do with technical capability. It was mostly a market driven failure. Mobile SoCs sell for a lot less than PC CPUs, so in order to maintain their margins they offered crappy products on older processes.

Intel was unwilling to compromise their margins by producing the best SoCs they could - which would also have hit revenue by massively cannibalizing sales of < $100 PC CPUs that OEMs would refuse to buy if $50 mobile SoCs cost less and performed better. Had Intel taken a long-term view and been willing to accept those hits to own the Android SoC market and basically standardize it on x86, things might have been different. But for all the years they would have had to live with lower margins and less revenue before eliminating Qualcomm as a player in Android, the stock price would have been depressed and Intel's execs would have made less money. Who has the patience for that? Give me my fat bonus check now!

Intel's lack of a good modem to integrate was another problem. They could compete outside the US with the Infineon modem they acquired (and took over development of, and later sold to Apple) which yeah wasn't as good as Qualcomm's for LTE but was good enough. In the US though the lack of CDMA was a big problem for half the country's mobile subscribers. Maybe they could have made a deal with Qualcomm for discrete modems like Apple did, but if Qualcomm realized what that would mean for the Android market they'd refuse such a deal at any price.

All this taken together is why you saw Intel devoting C team design resources to their mobile SoCs and fabbing them on N+2 (not even N+1, that's how much they disrespected them) processes. I can only guess that enough of Intel's management believed that x86 and/or "Intel Inside" would somehow be enough to overcome those hurdles. Or perhaps more likely believed the PC market with its triple digit ASPs deserved the best and mobile with its crappy sub-$50 ASPs could sink or swim on the scraps left over. They weren't smart enough to see that the PC market had started terminal decline after 2010 (pandemic excepted) and the Android market would grow to a billion plus units a year within a decade.

Had Windows Phone been the iPhone alternative instead of Android, Intel selling mobile SoCs running x86 might have had enough of an advantage to stave off ARM. But Microsoft had its own market-driven failure in mobile - i.e. wanting to charge for licenses like it did in the PC market, and enforcing minimum standards that kept it out of the low-end market - thus leaving it all to Android to dominate and build up an installed base that quickly made Windows Phone a doomed also-ran.
 

Abwx

Diamond Member
Apr 2, 2011
9,307
1,224
126
Qualcomm has far tighter cost margins: for the devices they cater to, $50 might be the BoM for the entire SoC, modem, and motherboard combined. Any extra components would also have to be tiny and efficient, which wouldn't help budgets at all, and yet for them (and almost certainly Apple/Mediatek) this was evidently done ages ago. If implementing separate voltage planes were in any way costly, complex, or unreliable, we'd have heard about problems with them long before Alder Lake became a thing.
You are talking about a 5W TDP chip with power planes provided by basic switching, akin to linear regulation with lower efficiency than a genuine SMPS, which costs about nothing to implement - like AMD's own on-chip controlled per-core voltage using such a solution.

On a DT motherboard that's purely external; it would require adding a 4- or 5-phase VR for the small-core complex.
 

Joe NYC

Senior member
Jun 26, 2021
514
453
96
Intel was unwilling to compromise their margins by producing the best SoCs they could - which would also have hit revenue by massively cannibalizing sales of < $100 PC CPUs that OEMs would refuse to buy if $50 mobile SoCs cost less and performed better. Had Intel taken a long-term view and been willing to accept those hits to own the Android SoC market and basically standardize it on x86, things might have been different. But for all the years they would have had to live with lower margins and less revenue before eliminating Qualcomm as a player in Android, the stock price would have been depressed and Intel's execs would have made less money. Who has the patience for that? Give me my fat bonus check now!
This.

And also the dumbass Wall Street analysts, worshipers of the Gross Margin.

Intel's divestiture from many promising areas came after Intel executives realized that a 50% margin could not be achieved or maintained - without thinking strategically about what giving up on those markets would mean.
 

Hitman928

Diamond Member
Apr 15, 2012
3,753
4,286
136
I'm curious as well.
You'll probably need sub-ambient cooling to get there.

der8auer did get a 5950x up to 5.8 GHz on LN2 and it drew about 450W in Cinebench.

Gamer's Nexus got their 5950x up to 4.7 GHz and it drew about 250W in Blender.

A very rough first-order estimate for the 5950X at 5 GHz would put it at about 300W, probably a little higher. This is of course 16 cores at 5 GHz versus 8 cores at 5 GHz and 8 at 3.7 GHz. I don't know what software the Twitter person used, but Blender usually pushes Ryzens about as hard as anything. Cinebench is a little lighter, but not by much.

Edit:

If I assume the 5950X would consume 10% more running Blender, then my interpolated data points show 310W at 5 GHz for the 5950X and 375W at 5.3 GHz. That's probably decently close to the real numbers. BTW, der8auer's 5950X scored 12526 pts in Cinebench R20 at 5 GHz and 14543 pts at 5.8 GHz.
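The back-of-envelope interpolation above can be sketched in a few lines. The two anchor points are the ones quoted in this post (Gamers Nexus: ~250W at 4.7 GHz in Blender; der8auer: ~450W at 5.8 GHz in Cinebench on LN2); mixing two different workloads and cooling setups is the same rough assumption the post makes, so treat the output as a first-order estimate only:

```python
# Linear interpolation between two measured (frequency, power) points for a 5950X:
# 4.7 GHz @ ~250 W (Blender, GN) and 5.8 GHz @ ~450 W (Cinebench, LN2, der8auer).
def interp_power(f_ghz, f0=4.7, p0=250.0, f1=5.8, p1=450.0):
    """First-order (linear) estimate of package power at f_ghz."""
    slope = (p1 - p0) / (f1 - f0)  # ~182 W per GHz between the two points
    return p0 + slope * (f_ghz - f0)

print(round(interp_power(5.0)))  # 305 -> in line with the ~300 W estimate above
print(round(interp_power(5.3)))  # 359
```

Scaling the 450W Cinebench point up by the post's assumed 10% Blender uplift before interpolating is what lands near the 310W / 375W figures quoted.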
 

Hulk

Diamond Member
Oct 9, 1999
3,205
710
136
You'll probably need sub-ambient cooling to get there.

der8auer did get a 5950x up to 5.8 GHz on LN2 and it drew about 450W in Cinebench.

Gamer's Nexus got their 5950x up to 4.7 GHz and it drew about 250W in Blender.

A very rough first order estimation for 5950x at 5 GHz would put it at about 300W, probably a little higher. This is of course 16 cores at 5 GHz versus 8 cores at 5 GHz and 8 at 3.7 GHz. I don't know what software the twitter person used but Blender usually pushes Ryzens about as hard as anything. Cinebench is a little lighter but not too much.

Edit:

If I assume the 5950X would consume 10% more running Blender, then my interpolated data points show 310W at 5 GHz for the 5950X and 375W at 5.3 GHz. That's probably decently close to the real numbers. BTW, der8auer's 5950X scored 12526 pts in Cinebench R20 at 5 GHz and 14543 pts at 5.8 GHz.
Good info. Thanks.
I think many people see high power numbers and don't fully understand the exponential relationship between frequency and power. While moving from 4.5GHz to 5.0GHz is only an 11.1% increase in frequency, there could be a 50% increase in power. This isn't really a deficiency, other than that the part is being pushed far beyond the linear range for voltage and frequency.
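The arithmetic behind that claim can be sketched with the usual dynamic-power model, P ∝ f·V². The voltage figure below is an illustrative assumption (not a measurement): if the last few hundred MHz demand roughly a 16% voltage bump, an 11% clock increase turns into about a 50% power increase:

```python
# Dynamic power model: P is proportional to f * V^2.
f_ratio = 5.0 / 4.5               # frequency ratio: +11.1%
v_ratio = 1.16                    # assumed voltage bump for the last step (illustrative)
p_ratio = f_ratio * v_ratio ** 2  # resulting power ratio

print(f"{(f_ratio - 1) * 100:.1f}% more frequency")  # 11.1% more frequency
print(f"{(p_ratio - 1) * 100:.1f}% more power")      # ~49.5% more power
```

Even with voltage held flat, power would still rise linearly with clocks; it's the voltage term, squared, that makes the top of the frequency curve so expensive.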

AMD has enjoyed amazing architectural efficiency (IPC) and process with Zen 3. The end result being that beating Skylake and even Rocket Lake was pretty easy without needing to push multi-core speeds into the ridiculous 4.5+GHz zone. The fact that Zen 3 is up against its third Intel desktop generation (Skylake, Rocket Lake, and now Alder Lake) and still may be top dog is incredibly impressive.

Golden Cove vs. Zen 3 at the same frequency will definitely be closer than Rocket Lake was, as far as performance and power efficiency go. Gracemont throws a curve in there, as it's hard to isolate Gracemont vs. Golden Cove performance from these nebulous leaks we've been reading into so far. Shoot, it's going to be hard for Ian to pull apart these numbers from ADL for us!
 

Hitman928

Diamond Member
Apr 15, 2012
3,753
4,286
136
Good info. Thanks.
I think many people see high power numbers and don't fully understand the exponential relationship between frequency and power. While moving from 4.5GHz to 5.0GHz is only an 11.1% increase in frequency, there could be a 50% increase in power. This isn't really a deficiency, other than that the part is being pushed far beyond the linear range for voltage and frequency.

AMD has enjoyed amazing architectural efficiency (IPC) and process with Zen 3. The end result being that beating Skylake and even Rocket Lake was pretty easy without needing to push multi-core speeds into the ridiculous 4.5+GHz zone. The fact that Zen 3 is up against its third Intel desktop generation (Skylake, Rocket Lake, and now Alder Lake) and still may be top dog is incredibly impressive.

Golden Cove vs. Zen 3 at the same frequency will definitely be closer than Rocket Lake was, as far as performance and power efficiency go. Gracemont throws a curve in there, as it's hard to isolate Gracemont vs. Golden Cove performance from these nebulous leaks we've been reading into so far. Shoot, it's going to be hard for Ian to pull apart these numbers from ADL for us!
Yeah, Alderlake is definitely a big step up for Intel; they will get back the single-thread performance crown and have a very competitive MT chip, albeit by having to push the P cores hard, which will translate to a significant perf/W advantage for AMD. The big potential pitfall for Intel is if there are any issues with how the OS/apps handle the hybrid architecture. Hopefully W11 and Linux are up to the task; we'll see.
 

IntelUser2000

Elite Member
Oct 14, 2003
7,627
2,519
136
Intel's failure in the mobile market is entirely attributable to Otellini. It's hard to beat the first-mover advantage - something Apple still has. They might have only something like a quarter of the market, but their revenue share is in the 80 percent range.

The only other way for them to have had any success would have been to get a mobile chip out in 1-2 years - clearly impossible considering the original Atom with the MID push came out a few months after it! They needed to get Medfield (the first platform to be power competitive) out in 2010; instead they got Moorestown, which was clearly behind in both board size and power use.

Once they lost the iPhone, and then lost the chance to become a major player in the leftover market, any chance they had of succeeding in the phone market was gone. Even with a platform of equal features, they would have been going up against incumbents, and translating the ISA at that! Having success in such a market would have been difficult even if Bay Trail had been half the power and twice the performance of the competition!

But forget the phone market. Saying PCs are dead has been repeated since 2008, and the PC still hasn't died; it's alive and well. Saying it's alive "because of X" is an excuse. Everyone could have got a tablet/phone, but they didn't, did they?
 

IntelUser2000

Elite Member
Oct 14, 2003
7,627
2,519
136
The big potential pitfall for Intel is if there are any issues with how the OS/Apps handle the hybrid architecture. Hopefully W11 and Linux are up to the task, we'll see.
Even if the Thread Director and Windows 11 are best-in-class at scheduling, you'll still fall into scenarios where the contention created by two very different cores will result in reduced performance.

Even in the scenario where they can work in harmony, it won't scale linearly because there still will exist contention and overhead. It'll never be 1+1, instead you'll end up with 0.95 + 0.95.

Alderlake's maximum throughput gain without contention and overhead will probably end up being 1+0.5, but in most positive cases it might be 0.95+0.45. As long as it's significantly above 1, like 1.2, the hybrid architecture will be justified.
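The scaling arithmetic above can be written out as a tiny model. The derate factors (~5% on the P cluster, ~10% on the E cluster) are this post's own illustrative figures, not measured data:

```python
# Hybrid throughput model: P cluster normalized to 1.0, E cluster ~0.5 of that,
# each derated by a contention/overhead factor (values are the post's guesses).
def hybrid_throughput(p=1.0, e=0.5, p_derate=0.95, e_derate=0.90):
    return p * p_derate + e * e_derate

print(round(hybrid_throughput(), 2))                        # 1.4 (0.95 + 0.45)
print(round(hybrid_throughput(p_derate=1.0, e_derate=1.0), 2))  # 1.5 (ideal, no contention)
```

Either way the total clears the "significantly above 1" bar the post sets, which is the justification offered for the hybrid layout.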

In the 15W space where it'll be 2+8, it's hard to lose, because it'll likely end up somewhat outperforming the 28W 4C Tigerlake chip. At similar TDP, it might end up being 40-50% faster. 4+8 might be where it ends up being double the performance.

What was MLiD saying? Twice the performance at somewhat lower power?


You can see despite the potential pitfalls, the possibilities with Alderlake's configuration when it comes to power limited devices.
 

Accord99

Platinum Member
Jul 2, 2001
2,237
134
106
A very rough first order estimation for 5950x at 5 GHz would put it at about 300W, probably a little higher. This is of course 16 cores at 5 GHz versus 8 cores at 5 GHz and 8 at 3.7 GHz. I don't know what software the twitter person used but Blender usually pushes Ryzens about as hard as anything. Cinebench is a little lighter but not too much.
I don't think the LN2 data point should be used, because the low temperature itself has a major impact on power consumption, by flattening the voltage/power vs. frequency curve as well as greatly reducing the impact of leakage on power usage. I don't think der8auer did a similar plot for the 5000 series, but for a 3900X he created a chart showing power and frequency scaling with temperature.



 

mikk

Diamond Member
May 15, 2012
3,290
1,093
136
Gamer's Nexus got their 5950x up to 4.7 GHz and it drew about 250W in Blender.

A very rough first order estimation for 5950x at 5 GHz would put it at about 300W, probably a little higher. This is of course 16 cores at 5 GHz versus 8 cores at 5 GHz and 8 at 3.7 GHz. I don't know what software the twitter person used but Blender usually pushes Ryzens about as hard as anything. Cinebench is a little lighter but not too much.

Edit:

If I assume the 5950x would consume 10% more running blender, than my interpolated data points show 310W at 5 GHz for the 5950x and 375W at 5.3 GHz. That's probably decently accurate to the real numbers. BTW, der8auer's 5950x scored 12526 pts in Cinebench r20 at 5 GHz and 14543 pts at 5.8 GHz.

I don't even think +50W is enough, because 4.7 to 5.0 GHz is a different world; the additional voltage required for this relatively small clock speed bump will be enormous. I think many people don't understand that clock speed and voltage have a big effect on the power efficiency of these cores when they are clocked close to the limit. They automatically assume Golden Cove must be inefficient in every range. And furthermore, on a per-core basis Golden Cove gets to clock lower than Zen 3 because of its better IPC. An i5-12400 with only a 4 GHz all-core clock speed could match or beat the 5600X running at 4.5 GHz. There is no chance Golden Cove running at 5 GHz on all cores can reach comparable efficiency.
 

mikk

Diamond Member
May 15, 2012
3,290
1,093
136
They appear to be inefficient in the range where Alder Lake-S can start to beat the competition in some MT workloads.
Because they have to rely on much higher clock speeds and are at a huge big-core count disadvantage to barely match 16C Zen 3 at 4 GHz. Do you understand this?
 

insertcarehere

Senior member
Jan 17, 2013
462
371
136
They appear to be inefficient in the range where Alder Lake-S can start to beat the competition in some MT workloads.
Correction: They appear to be inefficient in the range where the 12900k can start to beat the 5950x in some MT workloads.

How a 12700k compares vs a 5900x, or a 12600k compares vs a 5800x is still very much TBD. And as somebody with a limited budget, I am far more interested in the latter two comparisons.
 

DrMrLordX

Lifer
Apr 27, 2000
18,165
7,069
136
Because they have to rely on much higher clock speeds and are at a huge big-core count disadvantage to barely match 16C Zen 3 at 4 GHz. Do you understand this?
And whose fault is that? Hmmmmmmm?

(yes, I do understand that fact, which is why I commented in the first place)

Correction: They appear to be inefficient in the range where the 12900k can start to beat the 5950x in some MT workloads.

How a 12700k compares vs a 5900x, or a 12600k compares vs a 5800x is still very much TBD. And as somebody with a limited budget, I am far more interested in the latter two comparisons.
The 12900K is their flagship. It's not really gonna look good for them when it struggles against a CPU from a year ago, fabbed on N7.
 

Hitman928

Diamond Member
Apr 15, 2012
3,753
4,286
136
I don't think the LN2 data point should be used, because the low temperature itself has a major impact on power consumption, by flattening the voltage/power vs. frequency curve as well as greatly reducing the impact of leakage on power usage. I don't think der8auer did a similar plot for the 5000 series, but for a 3900X he created a chart showing power and frequency scaling with temperature.



As I mentioned, it was just meant to be a rough first-order estimate. But even if I add 150W to the 5.8 GHz data point, that only puts the 5 GHz interpolation at 325W, and 425W at 5.3 GHz. I don't think 150W is very realistic, but even still, it shines a positive light on Zen 3 comparatively. It will be interesting to see how sustainable the Twitter frequencies are as well. The reason you need sub-ambient cooling to reach 5GHz+ on Zen 3 is hotspotting, not the overall power consumption. I'm sure Golden Cove is probably more spread out and even has some dark silicon from not enabling AVX-512 to help out there, but we'll see if it's enough to sustain 5.3 GHz all-P-core without sub-ambient cooling.
 
