What can we expect from Intel's first foray into discrete graphics?

Nov 18, 2007
10,650
25
106
#1
My 970 will be due for an upgrade next year. It will be interesting to see a new player on the field. Are they expected to field something that will fight for the top spot, or will they play it safe and just release mainstream cards? I have a feeling Intel is more concerned with machine learning and similar GPU compute workloads than with gaming.
 

ZGR

Golden Member
Oct 26, 2012
1,827
42
126
#2
I think Intel can compete in the mid range. I found Crystalwell to perform just as well as my GT 650M, though it cost too much and its power consumption was pretty unremarkable.

Intel and AMD have already destroyed the low-end dGPU market with their integrated graphics. I can see Intel trying to erode it further at a faster rate. I can't see Intel going high end right away, but that would be awesome.

I saw a lot of MX150/MX130 laptops last year. Would that be a good performance target? Not too ambitious, but I don't know what to expect!

Edit: Whoever is first to market with HDMI 2.1 could have a real advantage. Maybe Intel can capitalize on that.
 

DeathReborn

Golden Member
Oct 11, 2005
1,994
13
91
#3
Given that this is their second (third, if Larrabee counts) foray into the discrete market, I am not expecting much more than low-to-mid-range performance. I do expect them to do better than they did the first time around (https://en.wikipedia.org/wiki/Intel740), but in the short to medium term AMD and Nvidia will have too much of a lead on Intel until the kickbacks take effect.
 
Oct 9, 1999
11,427
94
126
#4
If it's anything like last time, you can expect decent hardware that's completely unusable due to glitchy, half-assed drivers.
 

SPBHM

Diamond Member
Sep 12, 2012
4,844
71
126
#5
Their troubles with the 10nm process and the supply issues on the CPU side make me a little worried. Can they afford to make huge dies to compete with a less efficient architecture at this point?

They are not starting from nowhere, since they have their iGPU team and have improved a lot over the last decade, but it's certainly not going to be easy. For someone with a 970, I wouldn't be too confident that they will offer a clear upgrade, something better than a 1660, anytime soon.


If it's anything like last time, you can expect decent hardware that's completely unusable due to glitchy, half-assed drivers.
I think their drivers on the current-gen iGPUs are not THAT bad, and with a gaming-focused GPU they would put more work into them.
 

maddie

Platinum Member
Jul 18, 2010
2,697
642
136
#6
Anyone here good at crystal balls?
 
Apr 27, 2000
12,376
1,325
126
#8
Their troubles with the 10nm process and the supply issues on the CPU side make me a little worried. Can they afford to make huge dies to compete with a less efficient architecture at this point?
They won't be using any of their own processes for Xe. I would expect them to use TSMC. It'll be interesting if they keep building iGPUs into their CPUs as well. If Xe is successful, I could see them going the Kaby-G route on many of their CPUs eventually (using Xe instead of Vega or any other AMD chip).
 

NTMBK

Diamond Member
Nov 14, 2011
8,321
316
126
#9
Xe definitely isn't Intel's first discrete graphics card!

 

maddie

Platinum Member
Jul 18, 2010
2,697
642
136
#10
Xe definitely isn't Intel's first discrete graphics card!

Reminds me of my first dual-monitor build circa 1999: an Intel system with the i740 soldered onto the motherboard and a Voodoo3 2000 in the slot. Two-Trinitron-monitor heaven.
 
Aug 22, 2017
89
55
51
#11
They won't be using any of their own processes for Xe. I would expect them to use TSMC. It'll be interesting if they keep building iGPUs into their CPUs as well. If Xe is successful, I could see them going the Kaby-G route on many of their CPUs eventually (using Xe instead of Vega or any other AMD chip).
Or Samsung. Raja recently visited their fabs in Korea. I still expect Intel to use their own 10nm for Xe, but they should have a backup plan this time.
 
Apr 27, 2000
12,376
1,325
126
#14
Also a possibility.

I still expect Intel to use their own 10nm for Xe, but they should have a backup plan this time.
That seems less likely. 10nm is looking to go the way of Cannonlake, if any of the recently-leaked Intel roadmaps through 2020 are to be taken seriously. If Intel can't make anything other than 2c/4t U/Y chips on 10nm, in low volumes, with no hope for 10nm desktop or server CPUs ever, what makes you think they're going to do a full GPU on it? Even with EMIB/Foveros?

I would expect them to go to outside fabs as their primary plan until 7nm w/EUV is ready. Then some successor to Xe could be the pipecleaner project to dial in yields on that process.
 
Aug 22, 2017
89
55
51
#15
Also a possibility.



That seems less likely. 10nm is looking to go the way of Cannonlake, if any of the recently-leaked Intel roadmaps through 2020 are to be taken seriously. If Intel can't make anything other than 2c/4t U/Y chips on 10nm, in low volumes, with no hope for 10nm desktop or server CPUs ever, what makes you think they're going to do a full GPU on it? Even with EMIB/Foveros?

I would expect them to go to outside fabs as their primary plan until 7nm w/EUV is ready. Then some successor to Xe could be the pipecleaner project to dial in yields on that process.
Well, none of the leaked roadmaps covered servers, so I will say Ice Lake SP is still slated for 2020. If they can bring Ice Lake SP in 2020, then I think they will be able to bring a 10nm GPU in 2020 as well. It looks like they will probably stay with 14nm on desktop while using 10nm for low-power mobile and servers.
 
Apr 27, 2000
12,376
1,325
126
#16
Well, none of the leaked roadmaps covered servers, so I will say Ice Lake SP is still slated for 2020.
That seems highly optimistic. If they can't push out client/workstation Xeons on 10nm, it doesn't seem likely that they'll be pushing out MP-platform Xeons either. Cooper Lake looks like their 2020 Xeon product.
 

ozzy702

Senior member
Nov 1, 2011
983
203
136
#17
It's going to be a pile of crap.
^ This. Intel may eventually have a place in the market, but drivers are going to take a LONG time to mature, and as we've seen with NVIDIA, software makes a huge difference. I have extremely low expectations.
 
Aug 22, 2017
89
55
51
#18
That seems highly optimistic. If they can't push out client/workstation Xeons on 10nm, it doesn't seem likely that they'll be pushing out MP-platform Xeons either. Cooper Lake looks like their 2020 Xeon product.
It might have to do with clock speed. It could be that they can't reach clock speeds as high as on 14nm, so they might choose to stay with 14nm for client desktop CPUs. Since power and density are more important on mobile and servers, those can still move to 10nm because they don't need to clock as high.
 
Apr 27, 2000
12,376
1,325
126
#19
It might have to do with clock speed. It could be that they can't reach clock speeds as high as on 14nm, so they might choose to stay with 14nm for client desktop CPUs. Since power and density are more important on mobile and servers, those can still move to 10nm because they don't need to clock as high.
It's more likely a yield problem. Xeon dies have to be pretty big, with a lot of cores. Current roadmap leaks make it look like the largest die Intel wants to see is Ice Lake U/Y in 2c configurations. Anything larger than that would produce so many defective dies per wafer that it's not worth it.
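For anyone curious how fast that falls off, here's a rough back-of-the-envelope sketch using the simple Poisson yield model; the defect density and die areas are made-up illustrative numbers, not anything known about Intel's 10nm:

```python
# Rough sketch: Poisson die-yield model, Y = exp(-A * D0).
# D0 (defects per mm^2) and the die areas below are illustrative
# assumptions, not actual Intel 10nm figures.
import math

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Fraction of dies expected to have zero defects."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

D0 = 0.005  # assumed defect density, defects per mm^2

for name, area in [("small 2c mobile die", 70),
                   ("mid-size GPU die", 250),
                   ("large Xeon die", 600)]:
    print(f"{name:>20} ({area:>3} mm^2): ~{poisson_yield(area, D0):.0%} good dies")
```

With those made-up numbers, a ~70 mm^2 die yields around 70% good dies while a ~600 mm^2 die drops to roughly 5%, which is the gist of the "not worth it" argument.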
 
Oct 14, 2003
6,218
330
126
#20
It's more likely a yield problem.
It could be due to yield, but clock speed makes a lot of sense.

2004 "Presler" chips can reach 5.xGHz with non-air cooling. In fact, the absolute world record of 8.79GHz is set by AMD's FX chip. The top 10 is occupied by Bulldozers and Celeron Preslers.

It's interesting, because despite Coffee Lake being sold as a 5GHz stock part, tests show that reaching the 5.5GHz mark or above requires water cooling or something more exotic. And Coffee Lake doesn't hold the absolute world record either.

IBM's high-clocked POWER chips had to stop somewhere around the 5GHz mark too. So this is a fundamental limit that hits all manufacturers, regardless of architecture, implementation, process, ISA, and whatever else you can think of.

So while they are really good at reaching, at stock, the marks that used to require exotic overclocking (closing that gap), the fundamentals haven't changed. That kind of optimization is possible on an old, mature 14nm process. There's no way a first-generation 10nm process will achieve that.
 

daveybrat

Super Moderator
Super Moderator
Jan 31, 2000
5,063
112
126
#21
At this point in my life i want stability over speed. I wouldn't buy an Intel GPU to be a beta tester for their drivers.
 

Guru

Senior member
May 5, 2017
621
215
86
#22
I'm expecting them to focus mostly on enterprise, servers, compute, AI, that kind of stuff, so I'm expecting a professional GPU. I expect it to be a rather large GPU; in terms of performance it has to be really good at compute, which is different from gaming, and I believe they can be competitive there. These professional GPUs go for around $5,000, so at $3,500 to $5,000 or whatever they can have good margins even with their crappy 10nm process. Or maybe they'll release them on 14nm, who knows; either way, in that segment I expect them to make money on their GPUs.

In terms of gaming GPUs I'm not expecting too much. They'll probably compete up to around the mid range. I think the fact that they are sort of stuck on 14nm is going to hurt them: AMD is on 7nm right now, next year Nvidia will be on 7nm too, and by 2020 AMD will be on either 7nm+ or even TSMC's new 6nm, which might just be the original 7nm+ shrunk by a nominal 1nm.

So yeah, I don't expect them to be super competitive; they'll probably compete up to the mid range and mostly focus on professional GPUs.
 

Stuka87

Diamond Member
Dec 10, 2010
4,213
165
126
#23
The issue won't be their hardware. It will be the drivers. Even if they have great hardware that matches a 2080, they will have immature drivers. It may work OK on brand-new games, but they won't have the years of optimizations for older games that AMD and NVIDIA both have.
 