News Intel GPUs - Intel launches A580

mikk

Diamond Member
May 15, 2012
4,133
2,136
136
They could clock Gen 12 LP higher on 10SF (as seen on DG1), but they would lose efficiency, which is very important for a ULV SoC. They also wouldn't gain much performance because it's bandwidth starved; with DDR5 on ADL-P this should change. DG1 has latency penalties over the iGPU version, and LPDDR4X isn't fast enough to compensate for this. And yes, the posted lineup doesn't make sense: in the driver leak back in 2019 only three options were listed, 128/256/512.
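
For a rough sense of the bandwidth gap, here's a back-of-the-envelope sketch (the 128-bit bus width and the exact memory speeds are my assumptions for illustration, not from any leak):

```python
# Peak memory bandwidth: transfers/s * bytes per transfer.
# The 128-bit bus width is an assumption for illustration.
def bandwidth_gbs(transfer_rate_mtps: int, bus_bits: int = 128) -> float:
    return transfer_rate_mtps * (bus_bits / 8) / 1000  # GB/s

print(bandwidth_gbs(4266))  # LPDDR4X-4266 (DG1/TGL-class): ~68.3 GB/s
print(bandwidth_gbs(4800))  # DDR5-4800 (ADL-P):            ~76.8 GB/s
print(bandwidth_gbs(5200))  # LPDDR5-5200 (ADL-P):          ~83.2 GB/s
```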
 

Tup3x

Senior member
Dec 31, 2016
959
942
136
Since it will most likely be made at TSMC, it will suffer the same fate as AMD and NV: lack of supply.
NVIDIA doesn't use TSMC for consumer Ampere. If they have a deal for a certain amount of chips, then they should get them. In any case, I think the biggest bottleneck is somewhere else.
 

mikk

Diamond Member
May 15, 2012
4,133
2,136
136
First and foremost, Intel confirmed DG2 will launch in three variants: 512 EU, 384 EU, and 128 EU.

The DG2 leak also confirms that GDDR6 memory speeds on this GPU will vary from 14 to 18 Gbps.
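
For reference, that per-pin range works out to the following peak bandwidth (the 256-bit bus for the top SKU is my assumption, not something the leak states):

```python
# Peak GDDR6 bandwidth over the leaked 14-18 Gbps per-pin range.
# The 256-bit bus width is an assumption, not from the leak.
BUS_BITS = 256
for gbps_per_pin in (14, 16, 18):
    total = gbps_per_pin * BUS_BITS / 8  # Gb/s per pin * pins / 8 = GB/s
    print(f"{gbps_per_pin} Gbps x {BUS_BITS}-bit = {total:.0f} GB/s")
# 14 -> 448 GB/s, 16 -> 512 GB/s, 18 -> 576 GB/s
```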
 
  • Like
Reactions: Tlh97 and Mopetar

Mopetar

Diamond Member
Jan 31, 2011
7,831
5,980
136
One of the charts they have is utterly bizarre. It doesn't copy/paste well and it's not an image, so I can't just link to it, but they list SKU1 as having 4,096 shaders and a TGP of up to 150W, and SKU5 as having only 768 shaders but up to 120W TGP. That frankly just doesn't make any sense.
 

andermans

Member
Sep 11, 2020
151
153
76
One of the charts they have is utterly bizarre. It doesn't copy/paste well and it's not an image, so I can't just link to it, but they list SKU1 as having 4,096 shaders and a TGP of up to 150W, and SKU5 as having only 768 shaders but up to 120W TGP. That frankly just doesn't make any sense.

Sounds like e.g. different clock speeds?
 

Mopetar

Diamond Member
Jan 31, 2011
7,831
5,980
136
Sounds like e.g. different clock speeds?

Can you think of any other situation where a GPU with 1/5 the hardware can have nearly the same TDP as another GPU just through clock speed differences?

To give some perspective, the only "GPUs" that AMD makes with that few shaders are in their APUs now. I can't imagine any of those GPUs drawing that much unless undergoing some extreme OC that probably ruins the silicon.

If we made a historical comparison, it would be like a Polaris 455 being able to clock high enough to get close to the TDP of a Vega 64, or it being clocked low enough to draw about a third of the power. The latter seems possible, but how much performance does the card lose as a result?

Like I said, it's just bizarre considering we've never seen anything quite like that. I went back and checked just in case it was a typo on their part that they fixed, but it's still the same as it was before.
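
Just to put numbers on why it looks so strange, here's the per-shader power the leaked table implies (a quick sketch using only the figures quoted above):

```python
# Watts per shader implied by the two leaked SKUs.
skus = {"SKU1": (4096, 150), "SKU5": (768, 120)}

for name, (shaders, tgp_w) in skus.items():
    print(f"{name}: {tgp_w / shaders * 1000:.0f} mW per shader")
# SKU1: ~37 mW/shader, SKU5: ~156 mW/shader -- a >4x gap that clock
# speed alone would struggle to explain, since power rises roughly
# with V^2 * f and voltage and frequency scale together.
```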
 
  • Like
Reactions: Tlh97 and Leeea

Heartbreaker

Diamond Member
Apr 3, 2006
4,226
5,228
136
Can you think of any other situation where a GPU with 1/5 the hardware can have nearly the same TDP as another GPU just through clock speed differences?

I think you are just reading too much into a second-hand interpretation on a rumor site. This one bit seems to be the source of all the TGP info:

[Image: Intel-DG2-mobile-variants.png]

Basically it doesn't differentiate by EUs/shaders at all. It just lists power at the end of a long list of SKUs of different EU sizes.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
600W is actually less than I expected. 100B transistors, 8 stacks of HBM with bandwidth probably well north of 2TB/s.

A100 is 400W @312TF/s FP16

If they can really do 1PF/s FP16 in just 600W without any "sparsity" tricks it would be awesome.

That's 312 TF/s in FP16 Tensor operations, though. If you go by FP16 doubling FP32, A100 should be at 40 TFlops.

They also call it "PetaOps", meaning they're using something like the sparsity/Tensor numbers to get there.

I also wouldn't call 1 PF/s FP16 Tensor impressive either. It'll be going against next-generation Nvidia parts. So 312 TF/s x 1.5 (for the TDP increase) x 2 (for the generational gain) roughly equals the 1 PF/s figure.

Remember they were also boasting 40 TFlops in FP32 using 4 tiles. Ponte Vecchio has 16, so that's 160 TFlops FP32. Another doubling gets us to 320 TFlops. Higher clocks and something like INT8 numbers get us to 1 PF/s?
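
As a quick sanity check of that estimate (all extrapolation mine, using only the figures cited above):

```python
# Rough sanity check: does ~1 PF/s line up with a next-gen Nvidia part?
a100_fp16_tensor_tfs = 312                     # TF/s, as cited above
next_gen_tfs = a100_fp16_tensor_tfs * 1.5 * 2  # 1.5x TDP, 2x per gen
print(next_gen_tfs)                            # 936 -> roughly 1 PF/s

# And the Ponte Vecchio side of the comparison:
pv_4_tile_fp32_tf = 40                             # Intel's 4-tile boast
pv_16_tile_fp32_tf = pv_4_tile_fp32_tf * (16 / 4)  # 160 TFlops FP32
print(pv_16_tile_fp32_tf * 2)                # 320 TFlops after a doubling
```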

Not really interested in Ponte Vecchio or any of the HP/HPC parts for that matter.

I'll probably get the DG2 512EU version unless it's really, really bad. Game compatibility on Xe-LP is acceptable. I feel as though it may arrive after EIP-1559, and then it'll need to sell on how it performs, not just because it's available.
 
Last edited:
  • Like
Reactions: Tlh97 and Leeea

mikk

Diamond Member
May 15, 2012
4,133
2,136
136
There is a new video from Moore's Law Is Dead about Xe HPG:



According to him, Intel upped the GPU clock speed and power consumption (I think this refers to the fastest 512EU variant).
 
  • Like
Reactions: Leeea

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I don't buy the claim that because Intel is using TSMC, they'll be as supply constrained as the rest. If it doesn't perform well, then it'll be available. If it doesn't mine well (or at all), it'll be available. Companies go through contracts and buy X amount over some period of time, so TSMC would have allocated some of the volume to Intel.

I doubt the Xe cards will mine well, at least initially. The AMD/Nvidia cards have robust software support because the people making the mining software have worked on it for years.

I don't think the Eth mining software even started up on the Intel integrated GPUs. Even back when the DAG size was small you could mine on the APUs: you could take an AMD Ryzen 2400G and get 2-3MH/s out of it. With Intel? Nothing.
 
  • Like
Reactions: Tlh97 and Leeea

Leeea

Diamond Member
Apr 3, 2020
3,617
5,363
136
There is a new video from Moore's Law Is Dead about Xe HPG:



According to him, Intel upped the GPU clock speed and power consumption (I think this refers to the fastest 512EU variant).

More performance than a 3070? Seems very unlikely.

The biggest problem will be the drivers. The Intel integrated drivers are shit. If their dGPU drivers are no better, it will fail.
 

jpiniero

Lifer
Oct 1, 2010
14,584
5,206
136
I doubt the Xe cards will mine well, at least initially. The AMD/Nvidia cards have robust software support because the people making the mining software have worked on it for years.

If anything, it's likely to mine well if it's hitting 3070+ FP32 numbers, since mining optimization is surely a lot easier than gaming drivers.
 

eek2121

Platinum Member
Aug 2, 2005
2,930
4,025
136
More performance than a 3070? Seems very unlikely.

The biggest problem will be the drivers. The Intel integrated drivers are shit. If their dGPU drivers are no better, it will fail.

They absolutely HAVE to invest in the driver game. As far as performance goes? The estimate is absolutely on the nose from what I understand. The top cards will be priced around $499.

EDIT: Intel originally planned to backfill the cheaper GPU segments en masse. I have no idea if that is still the case considering the global substrate shortage, but we will see. Both AMD and NVIDIA have lower-end models, but they aren't launching them due to the substrate shortage.
 

Tup3x

Senior member
Dec 31, 2016
959
942
136
More performance than a 3070? Seems very unlikely.

The biggest problem will be the drivers. The Intel integrated drivers are shit. If their dGPU drivers are no better, it will fail.
Why do you think it seems unlikely? Sounds about right to me.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
More performance than a 3070? Seems very unlikely.

Specs-wise it has more than enough.

The 96EU Xe-LP has 24 ROPs and 48 TMUs. If you scale that up to 512 EUs, that's 128 ROPs and 256 TMUs.

It's at RTX 3070 level for shader firepower and exceeds both it and the RX 6800 for texture throughput. Ampere has "too many" shaders compared to the rest of the subsystem, and RDNA2 leans on Infinity Cache.
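
The scaling arithmetic, spelled out (assuming ROPs and TMUs scale linearly with EU count, which is my assumption rather than anything Intel has confirmed):

```python
# Linear scale-up of the 96 EU Xe-LP front end to a 512 EU part.
# Assumes ROPs/TMUs grow 1:1 with EUs -- an assumption, not confirmed.
base_eus, base_rops, base_tmus = 96, 24, 48
target_eus = 512

scale = target_eus / base_eus                # ~5.33x
print(base_rops * scale, base_tmus * scale)  # 128.0 ROPs, 256.0 TMUs
```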
 

StinkyPinky

Diamond Member
Jul 6, 2002
6,763
783
126
It may match the 3070 on paper, perhaps, but I'll be super impressed if it matches it in real-world gaming performance. Nvidia/AMD have had years of experience tweaking their drivers.

Still, great news to see a third player in the market. If anyone can pull off entering the dedicated GPU market successfully, it would be Intel.
 
Feb 4, 2009
34,553
15,766
136
Couldn't Intel just throw a bunch of money at the suspected driver problem?
Intel certainly has money.
 

andermans

Member
Sep 11, 2020
151
153
76
Assuming Intel doesn't get scaling problems (like AMD did with the Fury and Vega cards, where they had trouble utilizing all CUs), I think baseline performance can be quite close to where you'd expect it to be given the specs. The entire driver situation (and having HW that people haven't optimized for) will probably mean more rough edges, not only in correctness but also in performance.

I really wonder how well Intel can catch up. They're definitely behind, but they also tend to be more of a software company than AMD is. If they can outmaneuver AMD on, e.g., a DLSS competitor, that could mean a lot for mindshare and sales.
 
  • Like
Reactions: Tlh97 and Leeea