Discussion Intel current and future Lakes & Rapids thread

Page 740 - AnandTech community

Hitman928

Diamond Member
Apr 15, 2012
5,182
7,633
136
That HEDT SPR score is definitely low. I'm sure we'll see significantly higher scores as time goes on.
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
Its saving grace is its powerful AVX-512 and AMX accelerators, which give it a strong case in much of the HPC market. But I believe this impact will not be as strong for the HEDT parts, since most designers these will be pitched at already have powerful GPUs that do SIMD much better.
I think it's the opposite, really. What else heavily leverages CPU SIMD instructions if not workstation apps? And I think we've seen plenty of examples that don't offload well to a GPU.

Though that's just for AVX-512. AMX is still pretty much useless outside of cloud inference. I suppose there might be a small market for people who want to test their code locally, but that seems pretty negligible. If they added larger precision datatype support, then AMX would be interesting.
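(For reference, AMX works on 2D tile registers doing int8/bf16 matrix multiplies with int32/fp32 accumulation, which is exactly why it's limited to inference-style workloads. A minimal NumPy sketch of the math a single int8 tile-multiply performs; the function name and shapes here are illustrative, not Intel's actual API:)

```python
import numpy as np

# Emulate the core AMX tile operation: C(int32) += A(uint8) @ B(int8),
# i.e. low-precision multiply with wide accumulation.
# Tile shapes are illustrative; real AMX tiles are up to 16 rows x 64 bytes.
def tile_dot_int8(A: np.ndarray, B: np.ndarray, C: np.ndarray) -> np.ndarray:
    assert A.dtype == np.uint8 and B.dtype == np.int8 and C.dtype == np.int32
    # Widen before multiplying so 8-bit products don't overflow.
    return C + A.astype(np.int32) @ B.astype(np.int32)

rng = np.random.default_rng(0)
A = rng.integers(0, 255, size=(16, 64), dtype=np.uint8)
B = rng.integers(-128, 127, size=(64, 16), dtype=np.int8)
C = tile_dot_int8(A, B, np.zeros((16, 16), dtype=np.int32))
```

Note there is no FP64 (or even FP32 multiply) path in there, which is the datatype limitation being complained about above.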
 

nicalandia

Diamond Member
Jan 10, 2019
3,330
5,281
136
I think it's the opposite, really. What else heavily leverages CPU SIMD instructions if not workstation apps? And I think we've seen plenty of examples that don't offload well to a GPU.
Well, in the Phoronix Linux tests there are at least 20 benchmarks that take advantage of AVX-512. How many apps can you name on Windows that take advantage of AVX-512? Because of that, I believe the TR PRO 5975WX and 5995WX will be very competitive against it, until Genoa comes throwing its weight around with proper AVX-512.
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
Well, in the Phoronix Linux tests there are at least 20 benchmarks that take advantage of AVX-512. How many apps can you name on Windows that take advantage of AVX-512?
If you look at Phoronix's suite, the majority are either workstation tasks (e.g. rendering, parsing), or ML (which is also sometimes a workstation task), both of which should translate to Windows as well. Not that Linux workstations are remotely uncommon.

I'm definitely not saying that AVX-512 is a must-have, but it certainly has more value for workstations compared to almost any other category outside of HPC or CPU-based ML servers.
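(As a quick aside, on Linux you can see whether a given box even advertises AVX-512 by reading the feature flags in /proc/cpuinfo. A minimal sketch, using a made-up sample flags line so it runs anywhere:)

```python
def avx512_features(cpuinfo_text: str) -> set[str]:
    """Return the avx512* feature flags advertised in /proc/cpuinfo text."""
    feats = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            feats.update(tok for tok in line.split() if tok.startswith("avx512"))
    return feats

# Hypothetical flags line, as found on an AVX-512-capable part:
sample = "flags\t\t: fpu sse2 avx avx2 avx512f avx512dq avx512bw avx512vl"
print(sorted(avx512_features(sample)))
```

On a real machine you'd pass in `open("/proc/cpuinfo").read()` instead of the sample string.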
 
  • Like
Reactions: scannall

nicalandia

Diamond Member
Jan 10, 2019
3,330
5,281
136
I'm definitely not saying that AVX-512 is a must-have, but it certainly has more value for workstations compared to almost any other category outside of HPC or CPU-based ML servers.
Do you have a list of what those valuable workstation apps would be? Or perhaps you mean in-house proprietary apps?
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
Do you have a list of what those valuable workstation apps would be? Or perhaps you mean in-house proprietary apps?
Last I checked, the workstation market was something like CAD/Engineering > 3D Modeling > Scientific Compute > Other. Not sure who has good CAD benchmarks.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,483
14,434
136
Last I checked, the workstation market was something like CAD/Engineering > 3D Modeling > Scientific Compute > Other. Not sure who has good CAD benchmarks.
Honest question: do you know if "scientific compute" and the PrimeGrid application are at all used on those workstations? I ask because I know for a fact that PrimeGrid and several DC applications in BOINC use AVX-512, FMA3, AVX2, etc., and quite a few of us (Stefan especially) have quite a few benchmarks on Xeons and other CPUs. We also have users who buy and use A100, V100, etc. Nvidia compute cards for some types of work, and I know these are used in the cloud and on workstations. Here is an example:


Edit: I was trying to see if I could provide the desired benchmarks. If nobody sees any benefit in these, then I will not reply again.
 
Last edited:

diediealldie

Member
May 9, 2020
77
68
61
Intel's dual-issue AVX-512 indeed has its uses and shows real performance improvements. But that strength is somewhat diluted by the lack of memory channels, so the true potential of AVX-512 will only be visible on the HBM variants. This is partly a consequence of the tile design, since you cannot cram three memory channels and tons of UPI links together without sacrificing die area. Note that Intel needs the additional space to support the 4S and 8S server variants, which are still suffering on 14nm (Cooper Lake).
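To put rough numbers on the bandwidth point (all figures below are assumptions for illustration, not measurements): a streaming FP64 kernel is memory-bound, so the FLOP rate it can sustain is just bandwidth times arithmetic intensity, and the DDR5 ceiling sits far below what the AVX-512 units could do.

```python
def bw_bound_gflops(bandwidth_gbs: float, flops_per_byte: float) -> float:
    """Peak GFLOP/s sustainable by a kernel limited purely by memory bandwidth."""
    return bandwidth_gbs * flops_per_byte

# STREAM-triad-like FP64 kernel: a[i] = b[i] + s*c[i]
# -> 2 flops per 24 bytes moved (2 loads + 1 store, 8 bytes each).
intensity = 2 / 24
ddr5 = bw_bound_gflops(307.2, intensity)   # 8ch DDR5-4800 (assumed peak)
hbm  = bw_bound_gflops(1600.0, intensity)  # HBM variant (assumed peak)
print(f"DDR5-bound: {ddr5:.1f} GFLOP/s, HBM-bound: {hbm:.1f} GFLOP/s")
```

Higher-intensity kernels (blocked matrix math, for instance) escape this ceiling, which is why the penalty shows up mostly in streaming HPC codes.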

I'm curious how Intel's design will overcome this in future Rapids. Maybe UPI stays in the core tiles, with the memory channels moved out?
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
Honest question: do you know if "scientific compute" and the PrimeGrid application are at all used on those workstations? I ask because I know for a fact that PrimeGrid and several DC applications in BOINC use AVX-512, FMA3, AVX2, etc., and quite a few of us (Stefan especially) have quite a few benchmarks on Xeons and other CPUs. We also have users who buy and use A100, V100, etc. Nvidia compute cards for some types of work, and I know these are used in the cloud and on workstations. Here is an example:


Edit: I was trying to see if I could provide the desired benchmarks. If nobody sees any benefit in these, then I will not reply again.
PrimeGrid does seem closest to the "scientific computing" category, and a good fit for these kinds of CPUs. I'm not familiar with the actual algorithm, but if people are specifically using the A100, V100, etc., then it sounds like it uses a lot of FP64 compute, which is in line with many other scientific workloads, as well as a good amount of CAD, IIRC.
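(For what it's worth, PrimeGrid-style primality tests ultimately reduce to enormous modular exponentiations, which the real clients implement with FP64 FFT-based bignum multiplication; hence the FP64 appetite. A toy Python sketch of the underlying probable-prime idea, not PrimeGrid's actual LLR code:)

```python
def fermat_prp(n: int, base: int = 2) -> bool:
    """Fermat probable-prime test: n is a base-b PRP if b^(n-1) == 1 (mod n)."""
    if n < 3 or n % 2 == 0:
        return n == 2
    # pow() with three arguments is fast modular exponentiation; for the
    # million-digit candidates DC projects test, this multiply-heavy loop
    # is exactly where FP64 FFT throughput (and cache) gets burned.
    return pow(base, n - 1, n) == 1

print([p for p in range(2, 30) if fermat_prp(p)])  # primes below 30
```

(Pseudoprimes slip through a bare Fermat test, 341 being the first for base 2, so real projects use stronger variants; this is just the shape of the computation.)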
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,483
14,434
136
PrimeGrid does seem closest to the "scientific computing" category, and a good fit for these kinds of CPUs. I'm not familiar with the actual algorithm, but if people are specifically using the A100, V100, etc., then it sounds like it uses a lot of FP64 compute, which is in line with many other scientific workloads, as well as a good amount of CAD, IIRC.
Yes, Zen 4 and those cards, along with the Titan V (what I use), and EPYC Rome and Milan, are currently the leaders in hardware for these. The Broadwell Xeons were good thanks to their L3 cache size, but Zen 4 is taking over, and the X3D chips are going to take over completely.

So if you want any benchmarks on any of these, please post in the DC forum, as most of us do not post here (except me)

Edit: And I should add, if you think I have a lot of hardware, these guys dwarf me. A100s and V100s galore, not to mention Xeon, EPYC, etc. farms that make me look like a pauper.

Here is part of a post by one of our newest members: (Letin Noxe)

I use dual 2680 v2 (AVX), dual E5-2640 v4 (AVX2), dual Gold 6154 (AVX-512), and dual 5218R (AVX-512 x2/core) servers, plus an EPYC Rome 7H12 (AVX2), for (geo)physics computations. Some Xeon Phi (RIP), Titan, Titan V, V100, ... too. Used to dive into iDRAC and the pizza box. But I was never allowed to try DC, which is too bad. Thank you for sharing your experience. Indeed, it seems that decommissioned datacenter servers become available in numbers and are quite affordable (now Broadwell, and Haswell with AVX2; no more official support, so many second-hand units and spare parts). These servers are nice pets, a bit noisy though! But they don't bark, and they byte.
 
Last edited:

Geddagod

Golden Member
Dec 28, 2021
1,147
1,003
106
Intel Lunar Lake is due in 2024, likely the very end of 2024, so essentially 20A and 18A are supposed to launch at nearly the same time?
Intel 20A is supposed to have "breakthrough innovations" in 1H 2024, so assuming that means an end-of-2024 launch...
The time frame for Intel 18A would be 2H 2024 readiness for something (considering it was a 2025 node pulled in 6 months), and then a product launch in the same half???
Edit: maybe Lunar Lake uses an external node...
 

Tigerick

Senior member
Apr 1, 2022
577
448
96

Lunar Lake first stepping taped out.
I wonder why Intel is talking more about Lunar Lake instead of Arrow Lake, considering LNL is supposed to be later.
Because the Arrow Lake CPUs are mostly manufactured on TSMC's N3 process. It is embarrassing for IFS because they can't make them on the 20A process. Can you imagine Intel's new-gen CPUs being mostly made by TSMC? Yeah, Intel will supply the base tile and let the 4 compute tiles made by TSMC 'sit' on it.

As for GNR, SRF and LNL, we will see whether Intel can ship them before the end of 2024; come back next year :rolleyes:
 
  • Like
Reactions: lightmanek

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
Because the Arrow Lake CPUs are mostly manufactured on TSMC's N3 process. It is embarrassing for IFS because they can't make them on the 20A process. Can you imagine Intel's new-gen CPUs being mostly made by TSMC? Yeah, Intel will supply the base tile and let the 4 compute tiles made by TSMC 'sit' on it.

As for GNR, SRF and LNL, we will see whether Intel can ship them before the end of 2024; come back next year :rolleyes:
That can't be the reason. LNL is heavily suggested to be on N3. Probably just a more interesting/novel product.
 

Tigerick

Senior member
Apr 1, 2022
577
448
96
That can't be the reason. LNL is heavily suggested to be on N3. Probably just a more interesting/novel product.
Intel did hint that LNL is based on the 18A process, and almost ignored the 20A process during investor calls. Go figure...
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Intel is exiting the network switch business and the RISC-V Pathfinder program.

@Tigerick It's hard to see Lunar Lake as being on an Intel process when they don't mention anything about it, while in the same sentence they say Meteor Lake is Intel 4. They could be purposely hiding it, but so far there's no evidence.

Also, most of Arrow Lake will be on 20A, along with the -P mobile variant.
 
Last edited:
  • Wow
Reactions: ZGR

mikk

Diamond Member
May 15, 2012
4,112
2,108
136
“With MTL progressing well, it is now appropriate to look forward to Lunar Lake, which is on track for production readiness in 2024, having taped-out its first silicon,” said the head of Intel. “Lunar Lake is optimized for ultra-low power performance, which will enable more of our PC partners to create ultra-thin and light systems for mobile users.”


Why are they talking about MTL+LNL mobile and nothing about ARL mobile? Also, production readiness in 2024 almost certainly means 2025 before it hits the market.

An MTL ramp in the second half of the year doesn't sound great. It should ramp in H1 to hit the market in H2. There is a big delay from ramp to real product availability for a mobile chip.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,483
14,434
136
Kinda makes you wonder, how much of their enterprise volume is IceLake-SP and how much of it is still Cascade Lake-SP?
And how much was lost to Milan/Milan-X/Genoa? I think we will see in a few days.
 

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
Kinda makes you wonder, how much of their enterprise volume is IceLake-SP and how much of it is still Cascade Lake-SP?

I saw speculation that it's roughly 50-50. Maybe. Which is crazy when you think about it, given that Ice Lake is definitely a lot better than Cascade Lake. Sapphire sounds like it will be 10% tops.
 

DrMrLordX

Lifer
Apr 27, 2000
21,583
10,785
136
I saw speculation that it's roughly 50-50. Maybe. Which is crazy when you think about it, given that Ice Lake is definitely a lot better than Cascade Lake. Sapphire sounds like it will be 10% tops.

And it looks like AMD will have unfettered access to N5 wafers for as much Genoa as the market will bear.
 
  • Like
Reactions: Joe NYC and ftt