
News Intel to develop discrete GPUs


ModEl4

Member
Oct 14, 2019
49
24
41
The prediction that the 512EU part at 2.2GHz will reach 3070 Ti performance levels will die a miserable death imo. Sure, with 6nm EUV it is possible to achieve at most +10% vs 7nm, so if AMD achieved 2GHz in their first real 7nm tryout (Vega VII doesn't really count) with the highly OC'd 5700 XT parts, then a 275W stressed-out N6 part could maybe reach 2.2GHz. But it will need 16Gbps-rated GDDR6 in a 256-bit memory config just to match the bandwidth-per-unit ratio of the already bandwidth-starved DG1 (24 ROPs, 1.65GHz turbo, 68GB/s). If the turbo clock is 2.2GHz, a 128 ROP/256 TMU/512 EU design with 512GB/s of bandwidth will only reach 3060 Ti level (something like -2.5% in 4K, +1.5% in QHD, +2.5% in FHD) in actual game tests, not synthetics (like the TPU relative performance results), and the deltas will likely be big: big wins and big losses depending on the game. To reach higher performance levels the key is the drivers, but drivers need time (years) to mature; it's too early now.
My prediction is calculated with what I think Intel will be able to achieve in the driver department. For those who think this seems too slow (3060 Ti performance level), look at it this way: a 3060 Ti has an actual clock of 1.86GHz and 4,864 CUDA cores, while the 512EU DG2 has 4,096 cores and a 2.2GHz turbo clock, although this comparison is of course really meaningless...
Regarding prices, a 2.2GHz 96 ROP/192 TMU/384 EU part with a 192-bit memory bus will only have 3060-level performance, so the 12GB version should be $299 imo, and you can extrapolate the rest from there...
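The 3060 Ti comparison above can be sanity-checked in a few lines. This is a rough sketch: the 2 ops/clock (FMA) factor and the 8 FP32 lanes per Xe EU are standard assumptions for these architectures, not figures from the post.

```python
# Rough FP32 throughput check for the 3060 Ti vs 512EU DG2 comparison.
# Assumes 2 ops/clock (FMA) per FP32 lane and 8 FP32 lanes per Xe EU.
def tflops(fp32_lanes, clock_ghz, ops_per_clock=2):
    """Peak single-precision throughput in TFLOPs."""
    return fp32_lanes * ops_per_clock * clock_ghz / 1000

rtx_3060ti = tflops(4864, 1.86)    # 4,864 CUDA cores at ~1.86 GHz actual clock
dg2_512 = tflops(512 * 8, 2.2)     # 512 EUs x 8 lanes at the rumored 2.2 GHz

print(f"3060 Ti: {rtx_3060ti:.1f} TFLOPs, DG2-512: {dg2_512:.1f} TFLOPs")
```

Both come out around 18 TFLOPs, which is why the raw-spec comparison puts the two cards in the same class even though, as noted, paper flops say little about game performance.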
 
  • Like
Reactions: Tlh97 and Leeea

dr1337

Member
May 25, 2020
133
193
76
It's at RTX 3070 level for shader firepower
At 4,096 shaders it's almost 25% behind the 3070 in terms of ALU count. They could go all the way up to 128 ROPs and 256 TMUs, but it's far more likely that their HPG is going to be its own design and not just Xe LP scaled way up. Either way, there's no way their top card mines any faster than a 3060 Ti. If the clocks are real then maybe they have some hope, but unless Intel has some magic driver devs, it's not going to run any faster than a Vega 64.
 
  • Like
Reactions: Tlh97 and Leeea

IntelUser2000

Elite Member
Oct 14, 2003
7,412
2,099
136
at 4096 shaders its almost 25% behind the 3070 in terms of alu count.
Not at 2.2GHz.

Also remember that when Ampere first came out, people were wondering why performance was a bit poor relative to the shader count.

That's because Ampere did not scale up the rest (TMUs/ROPs/memory) in accordance with the massive shader increase.

Mining performance pretty much only matters for Ethereum, and that's basically all memory bandwidth. I don't think it'll mine popular algorithms for a while though.
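The "it's basically all memory bandwidth" point can be made concrete with a common rule of thumb for Ethash (an assumption of mine, not something from the thread): each hash performs about 64 random 128-byte DAG reads, so one hash costs roughly 8 KiB of bandwidth.

```python
# Rough Ethereum (Ethash) hashrate bound from memory bandwidth alone.
# Rule-of-thumb assumption: ~64 random 128-byte DAG reads per hash,
# so each hash consumes about 8192 bytes of memory bandwidth.
BYTES_PER_HASH = 64 * 128  # 8192 bytes

def ethash_mhs(bandwidth_gbs):
    """Approximate bandwidth-limited hashrate in MH/s."""
    return bandwidth_gbs * 1e9 / BYTES_PER_HASH / 1e6

print(f"448 GB/s (3060 Ti class): ~{ethash_mhs(448):.0f} MH/s")
print(f"512 GB/s (256-bit 16Gbps GDDR6): ~{ethash_mhs(512):.0f} MH/s")
```

The 448 GB/s estimate lands around 55 MH/s, close to what non-limited 3060 Ti cards actually achieved, which is why shader count barely matters for this algorithm.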
 

Leeea

Senior member
Apr 3, 2020
611
699
96
Couldn’t intel just throw a bunch of money at the suspected driver problem?
intel certainly has money.
Intel just does not care. Their GPU driver division is the neglected hinterland of the company. They did not even bother releasing the iGPU driver* for their latest 11th gen CPUs until over a month after said CPUs were available at retail**:
*https://www.techspot.com/news/89168-intel-forgot-release-graphics-drivers-their-11th-gen.html
**( intel launched early in france, ex: ) https://www.youtube.com/watch?v=3n0_UcBxnpk

A company cannot just throw money at this kind of problem. This level of neglect comes from the top down. They need to hire the expertise from a competitor, or develop it in house the hard way over years. Even if they hire Nvidia's or AMD's driver personnel, those people still need to write that software from the ground up.

The problem is really bad. Everyone says Intel iGPUs are awful. But how awful is awful? Freezing up in games, crashes on game launch, vicious stuttering, and massive FPS swings. Pick a major game, like DOTA, search for "guide to play X with Intel integrated GPU", and you will see the hoops that Intel graphics users jump through.

For DOTA, the instructions are to use the command line to revert the game to OpenGL, turn triple buffering on, and keep reducing the max FPS until the jerking mostly goes away.

The Fortnite guide has the user going into the game's files and editing them for playability, mainly by turning down the internal resolution to 384x216...

GOG general instructions for getting around iGPU crash on launch:
https://support.gog.com/hc/en-us/articles/213039525-Screen-freezes-after-launch-Intel-HD-graphics-issue-?product=gog
(A tutorial on how to create a virtual second monitor... what?! This is actually clever: the virtual monitor is likely purely software and probably does not use the Intel GPU hardware.)

With Overwatch, Blizzard's devs designed it from the ground up to be Intel iGPU compatible. So it works. Kind of. Yay, a win for the Intel GPU! ...but what about all the games not designed for Intel GPUs?
 
Last edited:

IntelUser2000

Elite Member
Oct 14, 2003
7,412
2,099
136
Intel just does not care. Their GPU driver division is the neglected hinterlands of the company.
You do need explicit support sometimes, but the architecture has had driver support since August last year, since the UHD 730/750 uses the same architecture as the Tiger Lake mobile Iris Xe GPUs.

That part is inexplicable and a massive screwup on their part.

The problem is really bad. Everyone says Intel iGPU is awful. But how awful is awful? Freezing up in games, crash on game launch, vicious stuttering, and massive FPS swings.
Actually, in DotA what you are saying is true for really old Intel GPUs, but once you get to the Skylake generation (now six years old), at least for that game, all it's lacking is performance.
 
  • Like
Reactions: Leeea

IntelUser2000

Elite Member
Oct 14, 2003
7,412
2,099
136
So let's do some comparisons with available parts to see how realistic the claims are.

Iris Xe G7 is 20-30% faster than Vega 8 in mobile.

Iris Xe G7: 96 EUs @ 1.3GHz = 2TFlops
Vega 8: 512 SPs @ 1.75GHz = 1.8TFlops

The G7 performs 10-15% better per flop than Vega 8. Computerbase's comparisons show a big per-CU gain from GCN to RDNA, but a negligible one from RDNA to RDNA2.


If you assume RDNA2 is 24% better per flop than Xe, then it'll end up at 6700 XT level, as it exceeds the specs of the 6800 by a bit.

Driver optimizations and Xe HPG-specific optimizations might be responsible for plus or minus 10%. If it ends up being 10% better, 6800-level performance is very much possible.
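The arithmetic behind that estimate can be laid out explicitly. This is a sketch: the AMD game clocks (2.4 and 2.1 GHz) and the lane/SP counts are my own assumptions for these parts; the 24% per-flop factor is the one assumed above.

```python
# Sketch of the per-flop reasoning: discount Xe's paper flops by an
# assumed 24% RDNA2 per-flop advantage, then compare to Radeon parts.
def tflops(fp32_lanes, clock_ghz):
    return fp32_lanes * 2 * clock_ghz / 1000  # 2 ops/clock (FMA)

dg2_512 = tflops(512 * 8, 2.2)    # ~18.0 TFLOPs (8 lanes per EU assumed)
rx6700xt = tflops(40 * 64, 2.4)   # ~12.3 TFLOPs (40 CUs x 64 SPs)
rx6800 = tflops(60 * 64, 2.1)     # ~16.1 TFLOPs (60 CUs, ~2.1 GHz game clock)

# Express Xe's throughput in "RDNA2-equivalent" terms:
xe_in_rdna2_terms = dg2_512 / 1.24  # ~14.5 TFLOPs-equivalent
```

The discounted figure lands between the 6700 XT and the 6800, which is the range the post argues for: roughly 6700 XT class, with the 6800 in reach if drivers or HPG-specific tuning add another ~10%.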

They could go all the way up to 128 rops and 256 TMUs, but its way way way way way more likely that they're HPG is going to be its own design and not just XeLP scaled way up.
That's true, but the die size of the 384 EU HPG is said to be only 190mm2, while Tiger Lake's iGPU is something like 45mm2. Not everything has to be replicated (media blocks, for example), but additional units have to be added, such as the GDDR6 controller and ray tracing blocks. Also, TSMC 6nm is probably a bit denser than Intel 10nm.
 

blckgrffn

Diamond Member
May 1, 2003
7,715
931
126
www.teamjuchems.com
You do need explicit support sometimes, but they had support since August last year since the UHD 730/750 uses the same architecture as the Tigerlake mobile Iris Xe GPUs.

That part is inexplicable and a massive screwup on their part.



Actually in DoTA what you are saying is true for really old Intel GPUs but once you get the Skylake generation(now 6 years old) at least for that game all it's lacking is in performance.
Also...

If drivers are rough out of the gate, we can expect Intel to wear that label for a loooong time. How many Nvidia fans curse (and AMD faithful suspect) AMD GPUs because of “bad drivers”?

At some point, mining won’t be soaking up every Nvidia and AMD GPU made, and you can’t win just by showing up once options that people trust are available.

If only they were available now with reasonable drivers. If they had a 14nm GPU that could easily take on a 1050 Ti at $150, they could be blasting them out right now. If they had the team ready, people would be willing to put up with driver issues just to have something, as long as the quality kept improving on a deliberate cadence. I think their 14nm process is mature enough to compete with similar processes from TSMC and Samsung.

It feels like a missed opportunity, especially as Xe has really been out for a bit now.
 

Tup3x

Senior member
Dec 31, 2016
507
375
136
It may match the 3070 on paper perhaps, but I'll be super impressed if it matches it in the real world gaming performance. Nvidia/AMD have had years of experience tweaking their drivers.

Still, great news to see a third player in the market. If anyone can pull off entering the dedicated GPU market successfully, it would be Intel.
Xe is a new architecture and they might have hit the reset button on driver development, but they have been making graphics processors (in some form) and drivers for years, no, decades. They have been tweaking their drivers for quite some time and preparing for this. They are definitely aiming for feature parity, stability, and bug removal.

They have been involved with developers before.
 
  • Like
Reactions: IntelUser2000

andermans

Member
Sep 11, 2020
62
55
51
If you assume RDNA2 is 24% better per Flops than Xe, then it'll end up at 6700XT level, as it exceeds the specs of 6800 by a bit.
IMO this is the real question here. Vega had severe issues scaling to the number of CUs it had, which means the Vega-to-RDNA2 IPC numbers are likely an overestimate of the perf improvements when you apply them to a mobile Vega baseline. The big unknown here is the magnitude of this effect.

However, with 512 EUs at 2.2GHz, that is +46% flops compared to the RX 6700 XT (40 CUs at 2.4GHz). That is a significant advantage, and given the decent results of the Xe iGPU, I think it is unlikely to underperform the RX 6700 XT unless they screw up the scaling and introduce a new bottleneck somewhere. (Not unheard of: the memory subsystem or geometry engines can easily become a problem when scaling up. But at this point we just don't have any info indicating one way or the other.)
 

ultimatebob

Lifer
Jul 1, 2001
23,675
1,539
126
Why do you think it seems unlikely? Sounds about right to me.
Yeah, I'm not sure if matching the performance of a mid-range card released a full year earlier should be considered to be a great achievement. The sheer existence of the card should help with the supply situation and bring down prices, though.

I wouldn't worry about the graphics drivers, either. Intel has been making graphics drivers for years now for their integrated graphics. It's actually somewhat impressive what kind of performance they can wring out of those underperforming iGPU's.

Maybe I'll get one as an upgrade for my GeForce 2060 Super in my Ryzen gaming rig some day, although the idea of putting an Intel graphics card in an AMD gaming rig sounds like blasphemy.
 

blckgrffn

Diamond Member
May 1, 2003
7,715
931
126
www.teamjuchems.com
Yeah, I'm not sure if matching the performance of a mid-range card released a full year earlier should be considered to be a great achievement. The sheer existence of the card should help with the supply situation and bring down prices, though.

I wouldn't worry about the graphics drivers, either. Intel has been making graphics drivers for years now for their integrated graphics. It's actually somewhat impressive what kind of performance they can wring out of those underperforming iGPU's.

Maybe I'll get one as an upgrade for my GeForce 2060 Super in my Ryzen gaming rig some day, although the idea of putting an Intel graphics card in an AMD gaming rig sounds like blasphemy.
If their drivers were great, Xe wouldn't be flat-out broken for so many games right now. The iGPUs have been shipping for months. If they had dedicated cards out now in this rough a shape, it would be a walk of shame.

Trust is earned.
 

IntelUser2000

Elite Member
Oct 14, 2003
7,412
2,099
136
If only they were available now and reasonable drivers. If they had a 14nm GPU that could easily take on a 1050ti at $150 they could be blasting them out right now.
At 14nm it'll lose clocks and power efficiency. Rocketlake's 32EU iGPU is at 50mm2. Granted there's empty space.

Since the 1050 Ti is like 80% faster than the Iris Xe with 96EUs, they'll need a 192EU version with lowered clocks.

You'll end up with similar performance but a die size over 200mm2 and power efficiency far behind the 1050 Ti.

No, 14nm needs to retire.

Maybe I'll get one as an upgrade for my GeForce 2060 Super in my Ryzen gaming rig some day, although the idea of putting an Intel graphics card in an AMD gaming rig sounds like blasphemy.
I'd do this too if they had a Ryzen 3 with an iGPU (I use it for testing, and that is very important) out in retail. I don't want to have to scour eBay to get one!
 

blckgrffn

Diamond Member
May 1, 2003
7,715
931
126
www.teamjuchems.com
At 14nm it'll lose clocks and power efficiency. Rocketlake's 32EU iGPU is at 50mm2. Granted there's empty space.

Since the 1050 Ti is like 80% faster than the Iris Xe with 96EUs, they'll need a 192EU version with lowered clocks.

You'll end up with similar performance but a die size over 200mm2 and power efficiency far behind the 1050 Ti.

No, 14nm needs to retire.
The 1050 Ti is on Samsung 14nm. If they can’t compete with that, what are they doing?

And why do we think they’ll do better going toe to toe on similar smaller processes if they can’t compete on their has-to-be best-in-class 14nm?

I would also say that I would be willing to spot Intel a fair bit of extra power usage; no one cares as long as one 6-pin does the job. But they should have the superior process and integration insight on 14nm, even if Pascal is some unbeatable masterpiece of engineering.
 
Last edited:

IntelUser2000

Elite Member
Oct 14, 2003
7,412
2,099
136
1050ti is on Samsung 14nm. If they can’t compete with that what are they doing?
That is what the numbers say on Intel 14nm. Rocketlake's 32EU iGPU is something like 50mm2.

The 96EU Iris Xe on Intel 10nm is 45mm2, and is probably close to 100mm2 on 14nm. I may have been slightly pessimistic but 14nm is NOT the solution.
 
  • Like
Reactions: blckgrffn
Feb 4, 2009
29,485
10,004
136
I personally think Intel will have a competent mid-range offering, think 1660-type performance.
I don’t think drivers will be perfect at first, but as I’ve said, they have ample money to pay plenty of people to write drivers; they’ll get fixed fast.
I predict they won’t do ray tracing and other advanced stuff well.
I think for its performance it will be fabulously priced.
Hopefully we see them soon.
 

blckgrffn

Diamond Member
May 1, 2003
7,715
931
126
www.teamjuchems.com
That is what the numbers say on Intel 14nm. Rocketlake's 32EU iGPU is something like 50mm2.

The 96EU Iris Xe on Intel 10nm is 45mm2, and is probably close to 100mm2 on 14nm. I may have been slightly pessimistic but 14nm is NOT the solution.
It certainly isn't now. You are right.

Even ~154 EUs (math? it's hard...) might have been really powerful with ~100W+ for clocks and a 128-bit or 192-bit GDDR5 bus to feed it. 🤷‍♂️ That maybe could have been ~150mm2 vs the 133mm2 Pascal die. Seems... doable? Maybe.

If they had started 2-3 years ago with some purpose, maybe we would have been surprised and they'd have a ~1050 and a ~1070 part available in the flesh now. Given the current market situation, consumers would have put up with both driver issues and lower efficiency if there were just something to buy.

I still find it interesting that they didn't do it as a way to get some margin out of the 14nm fabs once they realized those fabs would be around for quite some time. They wouldn't have been Xeons, but they would have been something, and great for mindshare, if Intel had been willing to make that kind of investment. I guess in their hubris they didn't seriously consider that competition would force them to move on?
 

mikk

Diamond Member
May 15, 2012
3,120
938
136
If their drivers were great, Xe wouldn't be flat out broken for so many games right now. The iGPUs have been shipping for months now. If they had dedicated cards out now in this rough of shape it would be a walk of shame.

Trust is earned.

Out of curiosity, did you try it? What games are flat out broken right now?

Xe HPG seems to support AV1 encoding, which no current GPU can do; I'm looking forward to this. Their HEVC encoder on Iris Xe is also a big improvement over Gen9.5, with much higher quality.
 
  • Like
Reactions: Leeea

blckgrffn

Diamond Member
May 1, 2003
7,715
931
126
www.teamjuchems.com
Out of curiosity, did you try it? What games are flat out broken right now?

Xe HPG seems to support AV1 encoding which no current GPU can do, I'm looking forward to this. Their HEVC encoder on Iris Xe is a big improvement over Gen9.5, much higher quality.
There is another thread where this is discussed. A poster there couldn’t get Batman Arkham Knight to work and the official Intel response was “that game is not supported.” That’s a specific example.

Ticket closed, works as designed 😂. There is a list of supported games and it seems if your title isn’t on it, GLHF.

Encoding and decoding engines don’t play games, I guess. I was considering an RKL CPU for a Plex box, but I decided to use a 1060 and an i5-8400 instead. I want to be able to use that machine to power my once-or-twice-weekly biking simulations, and the iGPU just doesn’t have the juice for it.

And there went my only half hearted excuse to have Xe testbed. I was way more excited before the reviews came out.

And let me say that the iGPUs on the Skylake through Comet Lake CPUs have likely logged waaay more hours of gaming than any of us on a hardware forum would like to admit. But I think people trying to make their integrated Intel graphics work have way lower expectations than they will when Intel starts selling add-in cards for hundreds of dollars. Also, Xe is new, different, etc. We've seen AMD and Nvidia fumble new rollouts before.

When I see regular drops of whatever the Xe equivalent of "Game Ready Drivers" is (the kind that install across platforms, regardless of which Xe part you have), and they start shipping regularly with fixes for games, then the trust will start being built. Right now, both AMD and Nvidia are constantly rolling out new drivers, and it's not just for fun, which makes me believe it is non-trivial.

Finally, I do want Intel to succeed. I hope they are all in on this.
 
Last edited:
Feb 4, 2009
29,485
10,004
136
There is another thread where this is discussed. A poster there couldn’t get Batman Arkham Knight to work and the official Intel response was “that game is not supported.” That’s a specific example.

Ticket closed, works as designed 😂. There is a list of supported games and it seems if your title isn’t on it, GLHF.

Encoding and decoding engines don’t play games, I guess. I was considering a RKL cpu for a Plex box but I just decided to use a 1060 and an 8400 i5 instead. I want to be able to use it as machine to power my once/twice weekly biking simulations and it just doesn’t have the iGPU juice for it.
This is a 12- or 13-year-old game with no support for Windows 10.
I don't blame Intel for this.
 

IntelUser2000

Elite Member
Oct 14, 2003
7,412
2,099
136
Even ~154 EUs (math? it's hard...) might have been really powerful with ~100W+ for clocks and 128bit or 196bit GDDR5 to power it. 🤷‍♂️ That maybe could have been ~150 mm vs the Pascal 133 mm die. Seems... doable? Maybe.
I agree that in the hypothetical scenario where Intel didn't screw up their 14nm (and yes, everyone forgot about that, but that's where they first screwed up), maybe the GPU would have made sense.

But the reality is that without the Xe/Gen12 architecture, they have no chance. Imagine doing the same math with the Gen11 Ice Lake GPU: same die area and power use, but nearly half the performance of Xe.

Time Spy GPU
-GTX 1050 Ti: 2500
-Iris Xe G7: 1600
-Hypothetical Iris Xe G7 on desktop with no power limit: 1800-1900
-Iris Plus G7 Icelake: 850
-UHD 750 Rocketlake: 670
-UHD 730 Cometlake: 470

Batman: I just saw that from @tamz_msc

Honestly, they probably just did a search on gameplay.intel.com, and since it didn't show explicit support, they simply said "it's not compatible". They could have had a monkey behind the support line with the same result.

You can see the game running on the UHD 630 on i3 8100. So it probably has intermittent issues.

The driver team isn't doing a very good job right now. It's been more than a month since the last driver release.

But they've been fairly regular: https://downloadcenter.intel.com/download/30266/Intel-Graphics-Windows-10-DCH-Drivers

If you look at the left side, it shows previous versions. Every 2-3 weeks there is a new update, and about every other release is a game-highlight driver. Of course, they still have a ways to go.
 
Last edited:
  • Like
Reactions: mikk and Leeea

mikk

Diamond Member
May 15, 2012
3,120
938
136
There is another thread where this is discussed. A poster there couldn’t get Batman Arkham Knight to work and the official Intel response was “that game is not supported.” That’s a specific example.

This answer was expected, because the pattern is a repeat from many posters: someone with no experience of their own cries the loudest and exaggerates like hell. Because Batman wasn't running, suddenly everything is flat-out broken for so many games right now. There are gameplay videos on YouTube which imply Batman does run with newer drivers. It would be different if you had real experience and could name a few games; that would be a different story. But in this case it's business as usual, exaggerated fantasy. Also, not all game issues are driver related; sometimes the game devs are to blame.
 
  • Like
Reactions: ryan20fun

IntelUser2000

Elite Member
Oct 14, 2003
7,412
2,099
136
With vendors saying they expect shortages not to be resolved until as late as 2023, Intel may be in a decent position even if they only get it out by the end of this year.

For all the talk about how Intel is swimming in money and can throw endless resources at this, their actions actually point in the opposite direction: they give up on too many projects because they feel they are wasting money.
 

NTMBK

Diamond Member
Nov 14, 2011
9,226
2,561
136
With vendors saying they expect shortages not to be resolved as late as 2023, Intel may be in a decent position even if they get it out by end of this year.

For all the talk about how Intel is swimming in money so they can throw endless resources at it, their actions actually point at the opposite. They give up on too many projects because they feel they are wasting money.
Intel is building their GPUs on the same TSMC production lines AMD uses. These cards aren't going to make a difference to chip shortages.
 
  • Like
Reactions: Tlh97 and Leeea
