Discussion Intel current and future Lakes & Rapids thread


JasonLD

Senior member
Aug 22, 2017
485
445
136
It has nothing to do with HBM. Intel just came in cheaper. That wasn't a factor last time, when Nvidia picked Rome, because Intel didn't have a PCIe Gen 4 platform back then. This time around they have a competent platform and were actually an option.

One thing Sapphire Rapids does well against the competition is AI-related performance, and, well, the DGX is a system focused on AI.
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,743
3,075
136
One thing Sapphire Rapids does well against the competition is AI-related performance, and, well, the DGX is a system focused on AI.
Against Zen 3, yep; against Zen 4 it will probably lose, as both will probably have around the same per-core, per-clock AVX-512 performance.

This is obviously ignoring memory bandwidth, and thus HBM vs. no HBM.
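
To make "per-core, per-clock AVX-512" concrete, here's a minimal sketch of the kind of FMA-bound loop that comparison ultimately boils down to (plain C with AVX-512F intrinsics, built with something like gcc -O2 -mavx512f; the per-core, per-clock question is essentially how many of these 512-bit FMAs each core can retire per cycle):

```c
#include <immintrin.h>
#include <stdio.h>

/* Minimal AVX-512 dot product: one 512-bit FMA per 16 floats per iteration.
   n is assumed to be a multiple of 16 to keep the sketch short. */
static float dot_avx512(const float *a, const float *b, size_t n) {
    __m512 acc = _mm512_setzero_ps();
    for (size_t i = 0; i < n; i += 16)
        acc = _mm512_fmadd_ps(_mm512_loadu_ps(a + i),
                              _mm512_loadu_ps(b + i), acc);
    return _mm512_reduce_add_ps(acc);
}

int main(void) {
    float a[64], b[64];
    for (int i = 0; i < 64; i++) { a[i] = 1.0f; b[i] = 0.5f; }
    printf("dot = %f\n", dot_avx512(a, b, 64)); /* 64 * 1.0 * 0.5 = 32.0 */
    return 0;
}
```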

Stop lying... are you an AMD employee or something?
Do you not know how to internet?
Sure you're not an Intel sales/channel guy? (The ones I have dealt with 100% do not know how to internet.)
 

uzzi38

Platinum Member
Oct 16, 2019
2,565
5,574
146
One thing Sapphire Rapids does well against the competition is AI-related performance, and, well, the DGX is a system focused on AI.
That's not how Nvidia wants CPUs to be used. You can tell by how they advertise Grace. They don't particularly care about any of that stuff; what they care about is how easily they can access system memory via the CPU for large datasets that can't just sit in VRAM alone.
 
  • Like
Reactions: HurleyBird

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
Against Zen 3, yep; against Zen 4 it will probably lose, as both will probably have around the same per-core, per-clock AVX-512 performance.

This is obviously ignoring memory bandwidth, and thus HBM vs. no HBM.
There's also AMX to consider, but I'm inclined to agree with @uzzi38 that Nvidia doesn't want people focusing on that. Perhaps the Data Streaming Accelerator can help?
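
For anyone who hasn't looked at AMX yet: it's not wider vectors, it's dedicated tile registers doing int8 (or bf16) matrix multiplies into int32 (or fp32) accumulators. Below is a minimal sketch, assuming a Linux kernel new enough to grant the tile state via arch_prctl and a compiler with -mamx-tile -mamx-int8; not an SPR benchmark, just what one TDPBSSD tile multiply looks like:

```c
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

#define ARCH_REQ_XCOMP_PERM 0x1023   /* Linux arch_prctl: request extended state */
#define XFEATURE_XTILE_DATA 18       /* the AMX tile data feature bit */

/* 64-byte tile configuration block, laid out per the Intel SDM. */
struct tile_config {
    uint8_t  palette_id;
    uint8_t  start_row;
    uint8_t  reserved[14];
    uint16_t colsb[16];   /* bytes per row for each tile */
    uint8_t  rows[16];    /* rows for each tile */
};

int main(void) {
    /* Ask the kernel for permission to use the AMX tile registers. */
    if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILE_DATA)) {
        perror("AMX not available");
        return 1;
    }

    /* Three 16-row x 64-byte tiles: tmm0 = C (int32), tmm1 = A, tmm2 = B (int8). */
    struct tile_config cfg = {0};
    cfg.palette_id = 1;
    for (int t = 0; t < 3; t++) { cfg.rows[t] = 16; cfg.colsb[t] = 64; }
    _tile_loadconfig(&cfg);

    int8_t  a[16][64], b[16][64];
    int32_t c[16][16];
    memset(a, 1, sizeof a);   /* A = all 1s */
    memset(b, 2, sizeof b);   /* B = all 2s */
    memset(c, 0, sizeof c);   /* C = zeroes */

    _tile_loadd(0, c, 64);    /* load C accumulator tile      */
    _tile_loadd(1, a, 64);    /* load A tile (16x64 int8)     */
    _tile_loadd(2, b, 64);    /* load B tile (16x64 int8)     */
    _tile_dpbssd(0, 1, 2);    /* C += A * B, int8 dot-products into int32 */
    _tile_stored(0, c, 64);   /* write the accumulator back   */
    _tile_release();

    printf("c[0][0] = %d\n", c[0][0]);   /* 64 * (1 * 2) = 128 */
    return 0;
}
```

As I understand it, DSA is a separate question: it offloads bulk memory moves and fills, not the math itself.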
 

ashFTW

Senior member
Sep 21, 2020
303
225
96
It has nothing to do with HBM. Intel just came in cheaper. That wasn't a factor last time, when Nvidia picked Rome, because Intel didn't have a PCIe Gen 4 platform back then. This time around they have a competent platform and were actually an option.
Source?
 

moinmoin

Diamond Member
Jun 1, 2017
4,933
7,619
136

FWIW nVidia decided on using SPR for the x86 Hopper DGX.
It has nothing to do with HBM. Intel just came in cheaper. That wasn't a factor last time, when Nvidia picked Rome, because Intel didn't have a PCIe Gen 4 platform back then. This time around they have a competent platform and were actually an option.
Yeah, Nvidia doesn't really care about the CPU (only insofar as it isn't a bottleneck, and it can still be replaced by the upcoming Grace eventually); the H100 is the focus. What matters is the platform being up to speed. There SPR, with PCIe 5 and DDR5, may well come in first on near-term availability (at the least we can expect Intel gave Nvidia a guaranteed allocation, which AMD might not have been able to do with Genoa yet).

What makes this choice interesting to me though is the impact on energy efficiency.
 
  • Like
Reactions: Mopetar

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
They don't particularly care about any of that stuff; what they care about is how easily they can access system memory via the CPU for large datasets that can't just sit in VRAM alone.

This. People forget that a CPU is an order of magnitude slower and less efficient at AI work. The performance difference between SPR and Zen 4 probably fits within the run-to-run variability on the GPUs.
What really matters for this type of integration is device IO and access to memory. NV did that with the custom POWER8 stuff before, and they are moving to CXL this time. CXL.io and CXL.cache are what a GPU or other accelerator needs to speed up access to system memory in a coherent way.

So obviously Zen 3 is out, because it does not support it; the question is why Zen 4 was not chosen. It could be completely political, or it could have some technical merit.
 

mikk

Diamond Member
May 15, 2012
4,112
2,106
136
davidbepo claims there is no 3nm GPU chiplet for MTL-S; instead there are 64 EUs included in the SoC tile, which uses 10ESF. Maybe the chipset is still separate on the desktop board, in which case there is enough space for a 64EU 10ESF GPU. Not the best node for GPU efficiency, but who cares on desktop with only 64 EUs.


 
  • Like
Reactions: pcp7

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
davidbepo claims there is no 3nm GPU chiplet for MTL-S; instead there are 64 EUs included in the SoC tile, which uses 10ESF. Maybe the chipset is still separate on the desktop board, in which case there is enough space for a 64EU 10ESF GPU. Not the best node for GPU efficiency, but who cares on desktop with only 64 EUs.


Lmao, he thinks the SoC is 10ESF? And that that would be a good thing compared to N3? I question that it's integrated as well. Guy's delusional.
 

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
Because he has no idea what he's talking about.

It does actually make sense given that alleged screenshot of the socket that only had a CPU and SoC. Sounds like a lot of work to do a completely different SoC on a different node though.

Doing both Meteor and Arrow at the same time sounds like a nightmare from a SKU perspective. But if they are sharing the CPU tile with mobile, that would make it easier.
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
It does actually make sense given that alleged screenshot of the socket that only had a CPU and SoC. Sounds like a lot of work to do a completely different SoC on a different node though.

Doing both Meteor and Arrow at the same time sounds like a nightmare from a SKU perspective. But if they are sharing the CPU tile with mobile, that would make it easier.
The only thing remotely plausible about what he said would be the idea of an integrated GPU in the desktop SoC die. But it would not be on a different process from the mobile one (and thus not Intel anything), and would probably have to be smaller than 64EU. A distinct desktop SoC is required regardless, but the only real advantage to integrating the GPU would be from an IO shoreline perspective. Maybe a little cost savings.

Seems far more likely to me that they just reuse the mobile GPU dies, and maybe add another, smaller config for desktop.
 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
While it’s a fact that Intel has lost a lot of server market share to AMD, the above statement is patently false.

Say what?

"Enabled by our IDM advantage, Ice Lake servers shipped more than 1 million units, equal to the amount we had shipped in the prior 3 quarters combined."

3x crap volume = 3x crap volume.

Ice Lake-SP took forever and a day to get to market, and months and months to reach volume, and they're still only shipping 1 million per quarter?!?!?

AND WHERE IS SAPPHIRE RAPIDS WHICH WAS THE WHOLE POINT

Look at your own cited numbers: over 80% of that volume was still 14nm, probably selling at a discount since it's so old. And per their own earnings reports, Intel is getting hammered on margins with their 10nm products.

Stop drinking the Kool-Aid.

edit: I will add, I'm surprised people are still buying so much Cascade Lake. Nobody seems to be able to replace Intel's 14nm volume. Even Intel!
 
Last edited:

ashFTW

Senior member
Sep 21, 2020
303
225
96
Say what?



3x crap volume = 3x crap volume.

Ice Lake-SP took forever and a day to get to market, and months and months to reach volume, and they're still only shipping 1 million per quarter?!?!?

AND WHERE IS SAPPHIRE RAPIDS WHICH WAS THE WHOLE POINT

Look at your own cited numbers: over 80% of that volume was still 14nm, probably selling at a discount since it's so old. And per their own earnings reports, Intel is getting hammered on margins with their 10nm products.

Stop drinking the Kool-Aid.

edit: I will add, I'm surprised people are still buying so much Cascade Lake. Nobody seems to be able to replace Intel's 14nm volume. Even Intel!
I'm not here for the sake of meaningless arguing. Learn to read! IGNORED
 
  • Haha
Reactions: pcp7

itsmydamnation

Platinum Member
Feb 6, 2011
2,743
3,075
136
I'm not here for the sake of meaningless arguing. Learn to read! IGNORED
I went back and read the exchange

you just got slapped mate.....

Your position is that Intel's 10nm server delivery is fine. You think 80% of Intel's clients want to buy 5-year-old 14nm cores?

SPR doesn't look that much better for Intel: ~1600 mm² for at best 50-something cores vs. 630 mm² for 40.

Also, being 50-something cores sux for things like VMware licensing, so expect most SPR in enterprise to be 32 cores (just like all my Ice Lake servers are). How is Intel even going to service those... 4 tiles and 1600 mm² for 32 cores... Jesus.
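
Rough napkin math on those figures, just to put the per-core silicon cost side by side (the areas are the ones quoted above, and ~56 cores for SPR is an assumption, not a confirmed count):

```c
#include <stdio.h>

int main(void) {
    /* Figures quoted above: ~1600 mm^2 across four SPR tiles (assumed ~56 cores)
       vs. ~630 mm^2 of monolithic Ice Lake-SP silicon for 40 cores. */
    const double spr_mm2 = 1600.0, spr_cores = 56.0;
    const double icx_mm2 =  630.0, icx_cores = 40.0;

    printf("SPR: %.1f mm^2 per core\n", spr_mm2 / spr_cores);  /* ~28.6 */
    printf("ICX: %.1f mm^2 per core\n", icx_mm2 / icx_cores);  /* ~15.8 */
    return 0;
}
```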
 
  • Like
Reactions: Tlh97

itsmydamnation

Platinum Member
Feb 6, 2011
2,743
3,075
136
Hence the speculation of a monolithic die. 32 cores would be very doable if they so wanted. Would cut total silicon in half.
I vaguely remember being told by the Intel sales channel that the plan was for Ice Lake server and SPR to serve different markets and exist alongside each other, not as a direct generational replacement. So it will be interesting to see what happens. Ice Lake server at 32 cores in "enterprise workloads" isn't too bad against Zen 2/3 since it can sustain solid clocks, but that might be looking a bit average by the time Zen 4 comes around.