News: Intel GPUs - Intel launches A580


lobz

Platinum Member
Feb 10, 2017
I do have a serious point to this - Intel can roll out as many talking heads as they like, all promising the world "game-changing" performance...

But until they deliver a product that can actually put frames on screens quickly enough (without additional "interesting" artifacts), I'll assume they are full of crap.
I, for one, got it :) Reading Carmack's statement was a bit shocking to me; he never struck me as the sellout type.
 

jpiniero

Lifer
Oct 1, 2010
But don't make the mistake of believing Intel won't go higher with their Xe.

Sure looks like DG1 is the only 10 nm dGPU. I could see them going higher, but that's not going to happen until 7 nm. I will say that it'd be way too easy for them to give up and just focus on compute... or maybe only get serious several years from now if they manage to get 7 nm working well.
 

NTMBK

Lifer
Nov 14, 2011
Sure looks like DG1 is the only 10 nm dGPU. I could see them going higher, but that's not going to happen until 7 nm. I will say that it'd be way too easy for them to give up and just focus on compute... or maybe only get serious several years from now if they manage to get 7 nm working well.

Given the disaster that 10nm is, I'm not surprised. I half expect them to ship the next gen on TSMC.
 

IntelUser2000

Elite Member
Oct 14, 2003
Some rumors are going around that Xe dGPUs aren't going as well as expected. TGL seems to be going well, though.

Intel for some reason has trouble scaling up from the iGPU level. Iris Pro sucked because of that. Haswell's Gen 7.5 HD 4400 was pretty good, but the Iris Pro 5200 sucked power. Skylake's Gen 9 HD 520 was again pretty good in the efficiency department, but the Iris Pro 580 fell flat on its face.

The Iris Pro 580 needed 3x the EUs, 128MB of eDRAM, 3x the TDP, and twice the cores to be just 2x as fast as the HD 520 in Skylake-generation U processors.
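Putting rough numbers on that (a quick sketch, not a benchmark: the 24 EU / 15 W and 72 EU / 45 W figures are the published specs for the Skylake-U HD 520 and Skylake-H Iris Pro 580, and the 2x speedup is the claim above):

#include <iostream>

int main() {
    // Published specs: HD 520 = 24 EUs in a 15 W Skylake-U part;
    // Iris Pro 580 = 72 EUs + 128 MB eDRAM in a 45 W Skylake-H part.
    const double hd520_eus = 24.0,   hd520_tdp = 15.0;
    const double iris580_eus = 72.0, iris580_tdp = 45.0;
    const double claimed_speedup = 2.0;  // the "2x as fast" claim above

    std::cout << "EU ratio:  " << iris580_eus / hd520_eus << "x\n"   // 3x
              << "TDP ratio: " << iris580_tdp / hd520_tdp << "x\n"   // 3x
              << "Perf/watt vs HD 520: "
              << claimed_speedup / (iris580_tdp / hd520_tdp) << "x\n"; // ~0.67x
    // 2x the performance for 3x the power: scaling up made Gen *less*
    // efficient per watt, which is the "fell flat on its face" point.
    return 0;
}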
 

JasonLD

Senior member
Aug 22, 2017
Some rumors are going around that Xe dGPUs aren't going as well as expected. TGL seems to be going well, though.

Intel for some reason has trouble scaling up from the iGPU level. Iris Pro sucked because of that. Haswell's Gen 7.5 HD 4400 was pretty good, but the Iris Pro 5200 sucked power. Skylake's Gen 9 HD 520 was again pretty good in the efficiency department, but the Iris Pro 580 fell flat on its face.

The Iris Pro 580 needed 3x the EUs, 128MB of eDRAM, 3x the TDP, and twice the cores to be just 2x as fast as the HD 520 in Skylake-generation U processors.

Well, I don't think many were expecting Intel to hit a home run in their first attempt at a dGPU in 22 years, especially when AMD is still facing a long uphill battle against Nvidia even with a process advantage. Even if everything goes well for Intel, it will take at least 3-4 years to become relevant in the dGPU market.
 

IntelUser2000

Elite Member
Oct 14, 2003
Yep, wccftech now interprets this as DG1 being Tiger Lake's iGPU in a discrete form factor, with 96 EUs.

Yeah, it'll be an alternative in Comet Lake and AMD systems. Not having to share TDP or VRAM will make it faster than the one in Tiger Lake, at the cost of higher power use and larger area.
 

lobz

Platinum Member
Feb 10, 2017
Yeah, it'll be an alternative in Comet Lake and AMD systems. Not having to share TDP or VRAM will make it faster than the one in Tiger Lake, at the cost of higher power use and larger area.
Unless they price it at $99, I can only see this thing as a proof of concept, nothing more.
 

IntelUser2000

Elite Member
Oct 14, 2003
Unless they price it at $99, I can only see this thing as a proof of concept, nothing more.

You've seen systems that use dGPUs that seem barely better than the iGPUs, right?

Nvidia can't be selling them for much.
 

IntelUser2000

Elite Member
Oct 14, 2003
Eww. So much for Intel seriously entering the dGPU market on 10nm. Why, Raja, why?

Well, then you can hope for the Xe architecture to scale up in an efficient manner (unlike previous efforts) and the 512 EU version clocking at 1.5GHz+.

:)
 

jpiniero

Lifer
Oct 1, 2010
Well, then you can hope for the Xe architecture to scale up in an efficient manner (unlike previous efforts) and the 512 EU version clocking at 1.5GHz+.

:)

I'm assuming DG2 got canned due to bad 10 nm yields and its competitiveness in general.

I don't think Intel will give up on dGPUs, but only because I do think they have a scalable design capable of gaming coming. But my guess for now is that you won't see that until 7 nm. This one product might be the only thing they release until then.
 

JasonLD

Senior member
Aug 22, 2017
I'm assuming DG2 got canned due to bad 10 nm yields and its competitiveness in general.

I don't think Intel will give up on dGPUs, but only because I do think they have a scalable design capable of gaming coming. But my guess for now is that you won't see that until 7 nm. This one product might be the only thing they release until then.


Won't they at least be able to replace the Nvidia MX150~250? It surely won't shake up the dGPU scene, but it's probably going to be useful in those laptops. GPUs have more redundancy, so I would say yields should be better than for CPUs.
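On the redundancy point, a toy Poisson yield model shows why harvestable GPU dies fare better (all numbers here are illustrative assumptions, not Intel 10 nm data):

#include <cmath>
#include <iostream>

int main() {
    // Toy Poisson yield model. Assumed die area and defect density.
    const double area_cm2 = 1.2;
    const double defects_per_cm2 = 0.5;
    const double lambda = area_cm2 * defects_per_cm2;  // expected defects/die

    // P(k defects on a die) = e^-lambda * lambda^k / k!
    auto poisson = [lambda](int k) {
        double p = std::exp(-lambda);
        for (int i = 1; i <= k; ++i) p *= lambda / i;
        return p;
    };

    // A die with no redundancy must come out defect-free.
    double perfect_yield = poisson(0);                          // ~0.55

    // A GPU die that can fuse off a few bad EUs still ships with,
    // say, up to 3 defects (assuming each defect kills one EU).
    double harvested_yield = 0.0;
    for (int k = 0; k <= 3; ++k) harvested_yield += poisson(k); // ~0.997

    std::cout << "defect-free yield: " << perfect_yield   << "\n"
              << "harvested yield:   " << harvested_yield << "\n";
    return 0;
}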
 

NTMBK

Lifer
Nov 14, 2011
I just read the write-up of Xe on AnandTech, and I'm feeling... skeptical. Lots of implausible numbers thrown around, like "50x increase!". But not a single hard number for a TFLOPS target or power consumption. A new proprietary Intel programming language to develop in, with a software stack that claims to automagically target whatever hardware you need from their wildly disparate product line. And 7nm is doing just fine! Honest!

I really want to see a competitive Intel GPU. But I'm getting strong Cannonlake vibes.
 

IntelUser2000

Elite Member
Oct 14, 2003
I just read the write-up of Xe on AnandTech, and I'm feeling... skeptical. Lots of implausible numbers thrown around, like "50x increase!".

Hmm, maybe not though.

They didn't specify what they were comparing it to. They said 40x DP FP FLOPS per EU.

Intel Gen architectures have a 1:4 DP-to-SP ratio, which would make Ponte Vecchio 10x. That seems crazy, right? But nope.

It's because Gen 11 doesn't have DP FP hardware! That's part of their quest to make it power/area efficient for client and gaming.

So 40x could mean going back to 1:2, if Gen 11 performs like 1:80 due to emulation.

And Ponte Vecchio has 8 GPU dies per card, times two for two boards, making a total of 16 dies. If Ian is right about each Ponte Vecchio node being ~66 TFLOPS DP with 2,400 nodes, then each die has 4 TFLOPS of DP compute power.

Which, coincidentally, is equal to 512 EUs running at 1GHz with a 1:2 DP ratio. But with a real product, even Linpack isn't 100% efficient. Maybe it'll need 1.2GHz to get that performance. You'll have the Sapphire Rapids chips contribute some, too.

He also thinks it's possible it's only 1,200 nodes. Then you're talking about each GPU die needing to deliver double that, which is 8 TFLOPS DP. So 1,024 EUs at 1.2GHz or 512 EUs at 2.4GHz. The former seems likely.

It could be 1,200 nodes. That's based on the fact that the announcement talks about Aurora having 10PB of memory, which would mean each node with 2x Sapphire Rapids has 8TB of memory.

4TB per CPU makes sense when you consider 8 memory channels and 512GB, 3rd-generation Optane DC PMM devices. That makes slightly more sense than the 2TB per CPU using 256GB Optane modules that a total of 2,400 nodes would imply.
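For anyone who wants to check the arithmetic, here's the same estimate as a sketch. The 16 SP FLOPs/clock per EU (so 8 DP at 1:2) matches how Gen EUs have historically been counted; the node counts and per-node TFLOPS are Ian's guesses, as above:

#include <iostream>

int main() {
    // 2,400-node scenario: ~66 TFLOPS DP of GPU compute per node,
    // spread over 8 dies/card x 2 cards = 16 GPU dies per node.
    const double node_tflops = 66.0;
    const int dies_per_node = 8 * 2;
    std::cout << "DP TFLOPS per die: "
              << node_tflops / dies_per_node << "\n";       // ~4.1

    // A 512 EU die: 16 SP FLOPs/clock per EU -> 8 DP FLOPs/clock at 1:2.
    const double eus = 512, dp_flops_per_clk = 8, clock_ghz = 1.0;
    std::cout << "512 EU @ 1 GHz, 1:2 DP: "
              << eus * dp_flops_per_clk * clock_ghz / 1000.0
              << " TFLOPS\n";                               // 4.096

    // Memory cross-check for the 1,200-node scenario:
    // 10 PB / 1,200 nodes ~= 8 TB/node -> 4 TB per CPU with 2 CPUs,
    // i.e. 8 channels x 512 GB Optane DC PMM modules.
    std::cout << "TB per node at 1,200 nodes: "
              << 10000.0 / 1200.0 << "\n";                  // ~8.3
    return 0;
}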
 

lobz

Platinum Member
Feb 10, 2017
Hmm, maybe not though.

They didn't specify what they were comparing it to. They said 40x DP FP FLOPS per EU.

Intel Gen architectures have a 1:4 DP-to-SP ratio, which would make Ponte Vecchio 10x. That seems crazy, right? But nope.

It's because Gen 11 doesn't have DP FP hardware! That's part of their quest to make it power/area efficient for client and gaming.

So 40x could mean going back to 1:2, if Gen 11 performs like 1:80 due to emulation.

And Ponte Vecchio has 8 GPU dies per card, times two for two boards, making a total of 16 dies. If Ian is right about each Ponte Vecchio node being ~66 TFLOPS DP with 2,400 nodes, then each die has 4 TFLOPS of DP compute power.

Which, coincidentally, is equal to 512 EUs running at 1GHz with a 1:2 DP ratio. But with a real product, even Linpack isn't 100% efficient. Maybe it'll need 1.2GHz to get that performance. You'll have the Sapphire Rapids chips contribute some, too.

He also thinks it's possible it's only 1,200 nodes. Then you're talking about each GPU die needing to deliver double that, which is 8 TFLOPS DP. So 1,024 EUs at 1.2GHz or 512 EUs at 2.4GHz. The former seems likely.

It could be 1,200 nodes. That's based on the fact that the announcement talks about Aurora having 10PB of memory, which would mean each node with 2x Sapphire Rapids has 8TB of memory.

4TB per CPU makes sense when you consider 8 memory channels and 512GB, 3rd-generation Optane DC PMM devices. That makes slightly more sense than the 2TB per CPU using 256GB Optane modules that a total of 2,400 nodes would imply.
I still don't see them delivering this until the end of 2021. Contract or no contract, risk production or no risk production, it doesn't matter.
 

naukkis

Senior member
Jun 5, 2002
As Intel's recent execution has been so great, they will introduce something like a hard-core brand fan's dream list: a new CPU on a non-working node, a new GPU architecture on a new node, with (more than one) new interconnect to build something like ten times bigger than anything before, new programming APIs, and so on.

It probably isn't too far-fetched to predict that Intel's project Aurora will be an epic failure...
 

NTMBK

Lifer
Nov 14, 2011
Hmm, maybe not though.

They didn't specify what they were comparing it to. They said 40x DP FP FLOPS per EU.

Intel Gen architectures have a 1:4 DP-to-SP ratio, which would make Ponte Vecchio 10x. That seems crazy, right? But nope.

It's because Gen 11 doesn't have DP FP hardware! That's part of their quest to make it power/area efficient for client and gaming.

So 40x could mean going back to 1:2, if Gen 11 performs like 1:80 due to emulation.

And Ponte Vecchio has 8 GPU dies per card, times two for two boards, making a total of 16 dies. If Ian is right about each Ponte Vecchio node being ~66 TFLOPS DP with 2,400 nodes, then each die has 4 TFLOPS of DP compute power.

Which, coincidentally, is equal to 512 EUs running at 1GHz with a 1:2 DP ratio. But with a real product, even Linpack isn't 100% efficient. Maybe it'll need 1.2GHz to get that performance. You'll have the Sapphire Rapids chips contribute some, too.

He also thinks it's possible it's only 1,200 nodes. Then you're talking about each GPU die needing to deliver double that, which is 8 TFLOPS DP. So 1,024 EUs at 1.2GHz or 512 EUs at 2.4GHz. The former seems likely.

It could be 1,200 nodes. That's based on the fact that the announcement talks about Aurora having 10PB of memory, which would mean each node with 2x Sapphire Rapids has 8TB of memory.

4TB per CPU makes sense when you consider 8 memory channels and 512GB, 3rd-generation Optane DC PMM devices. That makes slightly more sense than the 2TB per CPU using 256GB Optane modules that a total of 2,400 nodes would imply.

That's the thing, though. Instead of making worthless comparisons to an architecture that nobody in their right mind would use for HPC, they could just give us the damn FLOPS number instead of making Ian jump through ridiculous hoops. Comparing it to crap so that they can put up a big, impressive number makes it look like they're scared of being directly compared to their real competition.
 

DrMrLordX

Lifer
Apr 27, 2000
A new proprietary Intel programming language to develop in, with a software stack that claims to automagically target whatever hardware you need from their wildly disparate product line.

If there is anything they can do right, it's that. Intel is (essentially) revisiting HSA: SVM and more. Though I'm skeptical about the hardware interface they'll use for their dGPUs, I think they'll do very well targeting the Gen11 and Gen12/Xe iGPUs.
 

NTMBK

Lifer
Nov 14, 2011
If there is anything they can do right, it's that. Intel is (essentially) revisiting HSA: SVM and more. Though I'm skeptical about the hardware interface they'll use for their dGPUs, I think they'll do very well targeting the Gen11 and Gen12/Xe iGPUs.

This isn't just a GPU API. It's meant to target CPUs, GPUs, FPGAs, and AI devices. Good luck to them...
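For context, the language in question is oneAPI's Data Parallel C++ (DPC++), which Intel is building on Khronos SYCL. A minimal SYCL-style vector add (written against the pre-2020 SYCL 1.2.1 buffer/accessor API; treat it as a sketch of the model, not Intel's final DPC++ syntax) shows how one kernel source is meant to retarget across devices:

#include <CL/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    namespace sycl = cl::sycl;
    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // default_selector picks whatever device the runtime prefers;
    // swap in cpu_selector / gpu_selector to retarget the same kernel.
    sycl::queue q{sycl::default_selector{}};
    {
        sycl::buffer<float, 1> ba(a.data(), sycl::range<1>(n));
        sycl::buffer<float, 1> bb(b.data(), sycl::range<1>(n));
        sycl::buffer<float, 1> bc(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            auto A = ba.get_access<sycl::access::mode::read>(h);
            auto B = bb.get_access<sycl::access::mode::read>(h);
            auto C = bc.get_access<sycl::access::mode::write>(h);
            h.parallel_for<class vadd>(sycl::range<1>(n),
                                       [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
        });
    }   // buffers go out of scope here, copying results back to the vectors

    std::cout << "c[0] = " << c[0] << "\n";  // prints 3
    return 0;
}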
 

DrMrLordX

Lifer
Apr 27, 2000
This isn't just a GPU API. It's meant to target CPUs, GPUs, FPGAs, and AI devices. Good luck to them...

Oh, I know. WRT dGPUs, FPGAs, and AI accelerators (Loihi?), they'll have to roll out CXL, which is going to be the main stumbling block. I expect them to reverse-engineer the hell out of NVLink. The low-hanging fruit will be their iGPUs. If they can get a solid programming interface that's actually usable and a working driver stack that can quickly and efficiently offload calculations to their iGPUs, that will be a big win for Intel WRT developer support, and it could cause AMD a few headaches.