News Intel GPUs - Intel launches A580


blckgrffn

Diamond Member
May 1, 2003
9,123
3,056
136
www.teamjuchems.com
You'd think that since everything is developed console-first, engine devs would at least spend more time concentrating on their console builds, and that this should trickle down to AMD optimising by default. Otherwise all those low margins and that huge share of TSMC 7nm wafers are doing AMD more harm than good.

Which might be the case anyhow. 10 million Radeon 6700 XTs sold instead of all those PS5s would mean developers could not afford to not optimise for AMD.

I feel like AMD needed to get these console platform wins. It's about sales, but it's also about being a big enough platform that all major players need to optimize for and support your architecture. Consoles are closed systems with reliable performance profiles, so when studios are aiming for 4K/30 or 1080p/RT/60 fps they are really going to take the time to make that work. This gives AMD huge relevance in game engine decisions, in how RT and other effects are implemented, and just in general mindshare that they would otherwise lack. I would imagine developers choose toolchains that feature robust "console" support, which can largely be interpreted as AMD optimizations.

Nvidia really has a big lead in enthusiast gamer PCs. They lean on this and pay developers all the time. I mean, does more need to be said? :) The biggest threat to them is likely APUs and the obsolescence of new GPUs in the sub-$200-$300 price range? The gradual erosion of market share that this might result in?

Intel, what do they have? Massive OEM relationships. Cash. Maybe, in future generations, a more complete supply chain. Years of drivers that mostly work. It seems like if they can get the Arc series up and running, especially in laptops, and really squeeze Nvidia out of high-end Intel-based gaming laptops, that would be a big win for them. That and all those Dell XPS-type PCs for ~$700-$800 shipping with 1660 Supers and the like. That's another great way to "bundle" in an Arc GPU, which at the very least is going to pressure Nvidia to probably/maybe get more competitive with their own OEM pricing. To me, this market is to Intel what the console market is to AMD. If they can get into enough PCs, they'll gain mindshare with software vendors through the size of their slice of the OEM pie.

Which is a long way of saying I think AMD stands to gain a lot more long term from getting those console wins than from pumping 10M more 6700 XTs into the hands of miners :D To me, each vendor needs a clear path to market and relevancy, and that's how I'm seeing it.
 

dr1337

Senior member
May 25, 2020
331
559
106
You genuinely think DG2-512 is going to be even weaker than that? Because I certainly don't.
Vega 64 had almost 2x the shaders of a GTX 1080 and would often lose to it in gaming performance. The clock speed difference was 100 MHz at best, too.

There is no precedent whatsoever indicating that Xe will be able to scale all the way up to 4096 shaders while maintaining performance scaling. For reference, the 6600 XT is actually faster than the Radeon VII despite having half the shaders (though with much better clocks). If DG2 clocks as high as RDNA2 then there might be some hope for Intel, but it still won't matter if gaming performance doesn't scale well.
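
Just as a rough back-of-the-envelope sketch (the boost clocks below are approximate retail figures, not numbers from this thread), paper FLOPS alone clearly don't tell you where these cards land in games:

Code:
# Rough theoretical FP32 throughput: shaders * 2 ops/clock (FMA) * clock.
# The boost clocks are approximate retail figures, not numbers from this thread.
cards = {
    "Vega 64":    (4096, 1.55),   # (shader count, approx. boost GHz)
    "GTX 1080":   (2560, 1.73),
    "Radeon VII": (3840, 1.75),
    "RX 6600 XT": (2048, 2.59),
}

for name, (shaders, ghz) in cards.items():
    tflops = shaders * 2 * ghz / 1000
    print(f"{name:11} ~{tflops:.1f} TFLOPS theoretical FP32")

On paper Vega 64 has roughly 40% more FLOPS than the GTX 1080 and the Radeon VII has roughly 25% more than the 6600 XT, yet the gaming results went the other way, so raw shader count alone says very little about where a 4096-shader Xe part would land.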
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Vega 64 had almost 2x the shaders of a GTX 1080 and would often lose to it in gaming performance. The clock speed difference was 100 MHz at best, too.

DG2 and Xe-HPG aren't just the XeLP in Tiger Lake scaled up.

Intel claims both 1.5x performance/watt and 1.5x clocks over the XeLP solution, which means it won't be surprising to see clocks close to RDNA2. And at 200W, it'll have 10x the power budget of XeLP.

Xe-HPG's rejigged EU configuration means it has 50% more load/store throughput, L1 and texture cache, geometry performance, and rasterizer performance per flop than XeLP.

While Vega had serious problems both in extracting bandwidth from its HBM memory and in taking advantage of the memory it had available, XeLP isn't out of the ordinary in that respect (meaning it's not bad).

The 6600 XT is nearly half the configuration of the top Arc part. It would take a serious failure for a 2.2GHz top-of-the-line Arc part to only be similar to it.
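
To put very rough numbers on that (the XeLP line is the 96 EU Tiger Lake iGPU, the DG2 clock and 200W figure are the rumors being discussed here, and the iGPU power is a ballpark assumption, so treat this purely as a paper sketch):

Code:
# Each Xe EU has 8 FP32 lanes, so "shaders" = EUs * 8. FP32 TFLOPS =
# shaders * 2 ops/clock (FMA) * clock. Clocks/power are approximate or rumored.
parts = {
    "XeLP (Tiger Lake, 96 EU)": (96 * 8,  1.35, 20),    # ~20 W iGPU budget is an assumption
    "DG2-512 (rumored)":        (512 * 8, 2.20, 200),
    "RX 6600 XT":               (2048,    2.59, 160),
}

for name, (shaders, ghz, watts) in parts.items():
    tflops = shaders * 2 * ghz / 1000
    print(f"{name:26} ~{tflops:.1f} TFLOPS @ ~{watts} W")

That works out to roughly 9x the XeLP throughput on paper, in line with the ~10x power budget, and well above the 6600 XT; whether the drivers and the scaling deliver it in games is the real question.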
 

DrMrLordX

Lifer
Apr 27, 2000
21,617
10,825
136
That's weird. Didn't they just announce a Q2 launch? Hey if they can get them out a month or two early then more power to em. Let's see what they've got!
 

jpiniero

Lifer
Oct 1, 2010
14,584
5,204
136
That's weird. Didn't they just announce a Q2 launch? Hey if they can get them out a month or two early then more power to em. Let's see what they've got!

There's enough wiggle room that Intel could be talking about mobile and not a desktop release.
 

mikk

Diamond Member
May 15, 2012
4,133
2,136
136
That's weird. Didn't they just announce a Q2 launch? Hey if they can get them out a month or two early then more power to em. Let's see what they've got!


No they didn't, they always said Q1. Moore's Law is Dead was the one who claimed it's launching in Q2. However, he mainly referred to desktop, and it's unclear if Intel means mobile only when they say Q1. On the other hand, mobile devices have a much slower ramp onto shelves.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Wait. There's gonna be a mobile dGPU version of Arc? It makes sense, since the competition does that too, but . . . still, first I'm hearing of it.

The leaks about Arc (such as from igorslab) showed TDP specs for the mobile variants.

Of course they won't skip out on mobile if it's possible for them to do so. It's a huge revenue opportunity.
 

DrMrLordX

Lifer
Apr 27, 2000
21,617
10,825
136
The leaks about Arc (such as from igorslab) showed TDP specs for the mobile variants.

I was too busy looking at the DG2-512 part to notice that. But yes, that would be a potential revenue source for them, as well as (maybe) giving them the flexibility to make their own -G parts without contracting with a competitor. Not that they've had a follow-up to Kaby-G recently.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Not that they've had a follow-up to Kaby-G recently.

You could argue Meteor Lake, with its separate GPU tile, is a successor.

The problem with Kaby Lake-G was that it didn't bring any of the advantages of having the GPU integrated. So why would a sane manufacturer use it? We knew it wasn't going to be cheaper. But no extra battery life either?

The whole thing with Nvidia blocking them may have some merit, but Kaby-G died on its own.

Theoretically Foveros should improve on this, even though Lakefield failed on the battery life part too. Let's see if we get -G parts again.
 

mikk

Diamond Member
May 15, 2012
4,133
2,136
136
DG2 and Xe-HPG aren't just the XeLP in Tiger Lake scaled up.

Intel claims both 1.5x performance/watt and 1.5x clocks over the XeLP solution, which means it won't be surprising to see clocks close to RDNA2. And at 200W, it'll have 10x the power budget of XeLP.

Xe-HPG's rejigged EU configuration means it has 50% more load/store throughput, L1 and texture cache, geometry performance, and rasterizer performance per flop than XeLP.

While Vega had serious problems both in extracting bandwidth from its HBM memory and in taking advantage of the memory it had available, XeLP isn't out of the ordinary in that respect (meaning it's not bad).

The 6600 XT is nearly half the configuration of the top Arc part. It would take a serious failure for a 2.2GHz top-of-the-line Arc part to only be similar to it.

There is a nice paper spec compilation from Locuza, and yes, DG2 looks competitive on paper with Navi 22 and GA104.
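
For anyone skimming, the headline paper specs being compared look roughly like this (recalled from the public spec sheets and the DG2 leaks rather than taken from Locuza's compilation, so treat the numbers as approximate):

Code:
# Approximate paper specs; the DG2-512 figures are the leaked/rumored ones.
specs = [
    # (chip,               FP32 lanes, memory bus,       big on-die cache)
    ("DG2-512 (leaked)",   4096,       "256-bit GDDR6",  "16 MB L2 (rumored)"),
    ("Navi 22 (6700 XT)",  2560,       "192-bit GDDR6",  "96 MB Infinity Cache"),
    ("GA104 (3070 Ti)",    6144,       "256-bit GDDR6X", "4 MB L2"),
]

for chip, fp32, bus, cache in specs:
    print(f"{chip:18} {fp32:5} FP32 | {bus:15} | {cache}")

On raw shader count and memory setup, DG2-512 really does slot between Navi 22 and GA104, which is why the scaling and driver questions above are what will actually decide where it lands.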


 

DrMrLordX

Lifer
Apr 27, 2000
21,617
10,825
136
You could argue Meteor Lake, with its separate GPU tile, is a successor.

The problem with Kaby Lake-G was that it didn't bring any of the advantages of having the GPU integrated. So why would a sane manufacturer use it? We knew it wasn't going to be cheaper. But no extra battery life either?

The whole thing with Nvidia blocking them may have some merit, but Kaby-G died on its own.

Theoretically Foveros should improve on this, even though Lakefield failed on the battery life part too. Let's see if we get -G parts again.

I remember Kaby-G as being pretty good overall. The main advantage was that you got everything in one package, which saved space and theoretically made cooling easier. It didn't bring much in the way of power savings, though. Regardless, a mobile dGPU will give Intel the flexibility to try it again someday.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Yea, specs look pretty good. Nice breakdown by Locuza.

@DrMrLordX It made laptops more portable, sure, but it was not cheap at all for the performance. Combine that with no advantage in battery life, and there was little reason to use it.
 

NTMBK

Lifer
Nov 14, 2011
10,232
5,012
136
I remember Kaby-G as being pretty good overall. The main advantage was that you got everything in one package, which saved space and theoretically made cooling easier. It didn't bring much in the way of power savings, though. Regardless, a mobile dGPU will give Intel the flexibility to try it again someday.

Didn't Kaby-G just use PCIe, but routed over the package instead of going to the motherboard? I can see why that would not save much power.
 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
Yes, PCIe via EMIB. It also used what was essentially a lightly modified Polaris architecture that was just advertised as Vega. It suffered from the same power draw issues as the rest of the Polaris line.
 

moinmoin

Diamond Member
Jun 1, 2017
4,944
7,656
136
Yes, PCIe via EMIB.
It was the HBM connected with EMIB, not the PCIe link.

[Hot Chips 30 Kaby Lake-G slide: EMIB links the GPU to its HBM2, while the CPU-to-GPU link is PCIe routed over the package]


 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
Argh, a hole in my memory!

Still, with the short length and only 8 lanes, it shouldn't have been that much of a factor in power draw. Remember, the Intel iGPU was still there as well for low power situations.
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,131
1,088
136
Am I the only person here who thinks Intel should release their discrete GPUs immediately? Look at the market. How long does it take for Intel to make a GPU market-ready? Didn't they hire the AMD GPU guru 3 or 4 years ago? All the secret tests were well over a year ago. How long does it take them to make video drivers? They make their own CPUs. Opening a line for GPUs should not be too difficult for Intel.
 

jpiniero

Lifer
Oct 1, 2010
14,584
5,204
136
Am I the only person here who thinks Intel should release their discrete GPUs immediately? Look at the market. How long does it take for Intel to make a GPU market-ready? Didn't they hire the AMD GPU guru 3 or 4 years ago? All the secret tests were well over a year ago. How long does it take them to make video drivers? They make their own CPUs. Opening a line for GPUs should not be too difficult for Intel.

The order with TSMC was for the first batch to be delivered at some point in Q4. It also looks like it is intended to be mobile-first.
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,131
1,088
136
I guess my point is the crazy GPU market. Intel has already demonstrated a discrete GPU for PCs, so how long does it take for them to bring something to market? With GPU prices this high, you would think they wouldn't want to miss out on the GPU market. By the time Intel gets their GPU to market, both AMD and Nvidia will be onto their next generation of GPUs, making whatever Intel has now obsolete.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Am I the only person here who thinks Intel should release their discrete GPUs immediately? Look at the market. How long does it take for Intel to make a GPU market-ready?

They are not just sitting there twiddling their thumbs. It's like with everything: polishing is the part that takes a long time. Testing for stability, getting the clocks 5-10% higher, making sure it works rock solid with all platforms and configurations. Testing for greater reliability is pretty much the only reason server chips take an additional 1-1.5 years to come out.

DG1 doesn't even have a BIOS chip, meaning they have to find a team to get that part sorted out, including programming the firmware, which is among the rarest and most difficult kinds of programming.

Do you want them out now, but with nothing polished? Maybe the performance is erratic, maybe it causes crashes on older platforms, or has compatibility issues with certain PCI Express generations. Maybe the performance is just a bit lower than expected for the price point.

4 years is not a long time for a brand new project. Remember, they only have experience with integrated graphics. Discrete graphics is a whole other business.

And yes, they might end up being too late and competing with next-gen AMD/Nvidia parts. That's how business (and life) is.
 