
News: Intel to develop discrete GPUs


IntelUser2000

Elite Member
Oct 14, 2003
7,095
1,628
136
The DG1 sample is performing 59% faster than the 15W Tigerlake APU sample.
Both are much lower than expectations. However, the Xe architecture relies on the compiler for scheduling, so its performance might be greatly impacted by immature drivers. For testing features, lower performance is probably not a big issue.

In other news about Xe, we can infer the following:

512EU: 75W and 150W versions
1024EU: 300W
2048EU: 500W

An Intel document says only the 1-tile (512EU) version is for client. That makes sense, since above that it's essentially SLI.

Not sure how competitive it will be, but it'll be going against Ampere. The server parts are stated to be 70% faster than their predecessors, so it'll be a huge upgrade. If we translate that literally, a hypothetical RTX 3080 might end up at 17 TFLOPs.

The 512EU version needs 2GHz to reach that, so it'll have to pull off everything to compete.
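For reference, the arithmetic behind those TFLOPs figures can be sketched in a few lines. It assumes each Xe EU executes an 8-wide FP32 FMA per clock, i.e. 16 FLOPs/clock; that per-EU rate is an assumption for the sketch, not an Intel-confirmed spec.

```python
def xe_peak_tflops(eus: int, clock_ghz: float, flops_per_eu_clock: int = 16) -> float:
    """Peak FP32 throughput: EUs x FLOPs-per-EU-per-clock x clock (GHz).

    Assumes each EU does an 8-wide FMA per clock (16 FLOPs/clock);
    dividing by 1000 converts GFLOP/s to TFLOPs.
    """
    return eus * flops_per_eu_clock * clock_ghz / 1000.0

print(xe_peak_tflops(512, 2.0))   # 16.384 -> roughly the ~17 TFLOPs figure
print(xe_peak_tflops(1024, 1.5))  # 24.576
```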
 

NTMBK

Diamond Member
Nov 14, 2011
8,809
1,855
136
Intel's put up a talk they originally intended to do at GDC, pushing multi-adapter (to make the most of integrated graphics alongside discrete): https://devmesh.intel.com/projects/multi-adapter-particles



This doesn't fill me with confidence. This sort of thing puts a lot of burden on developers, and you're susceptible to weird bottlenecks depending on the user's setup. Maybe the GT2 + modest GPU laptop you developed on saw a good speedup, but on a desktop system with GT1 IGP and a big GPU you might actually see a slowdown.
 

PingSpike

Lifer
Feb 25, 2004
21,261
196
106
This doesn't fill me with confidence. This sort of thing puts a lot of burden on developers, and you're susceptible to weird bottlenecks depending on the user's setup. Maybe the GT2 + modest GPU laptop you developed on saw a good speedup, but on a desktop system with GT1 IGP and a big GPU you might actually see a slowdown.
If it's dependent upon developer support, it won't ever be used, since no one (or almost no one) will bother to develop support for a weird niche low-performance solution. It strikes me as being like their Optane caching solution: it would be hard to get people on board with this even if it were the highest-performance solution.
 

Stuka87

Diamond Member
Dec 10, 2010
5,024
770
126
If it's dependent upon developer support, it won't ever be used, since no one (or almost no one) will bother to develop support for a weird niche low-performance solution. It strikes me as being like their Optane caching solution: it would be hard to get people on board with this even if it were the highest-performance solution.
There are games that support it now, and have for quite a while. As far as I know, it's not that much work for the developer. But with the tight timelines of games these days, any extra work can be a deal breaker.
 

Shivansps

Diamond Member
Sep 11, 2013
3,014
681
136
Multi-GPU is pretty much DEAD, since the low-level APIs require developers to do the extra work and DX11 is not getting updates. Intel making use of it is a really bad decision.

You guys know you can actually do this today by pairing a 3400G with something like an RX 570 and trying to use DX12 explicit multi-GPU in games that support it? Guess what, no one bothered.
 

Stuka87

Diamond Member
Dec 10, 2010
5,024
770
126
The Twitter post was removed. But from the photo above, I also think it looks like it's two dies on the same package.
 

IntelUser2000

Elite Member
Oct 14, 2003
7,095
1,628
136
Using the AA battery as a reference, I scaled the photo to 1:1. The package comes out to roughly 80mm x 52mm.

If we assume the location of the passive components reflects the dies, then we're looking at each being 450-500mm2.

Based on the 96EU Xe LP block of Tigerlake being around 45mm2, that suggests each die has 1024EUs, or that the addition of DP FP and other features (such as the systolic pipeline) greatly increases the area.
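The extrapolation above is just linear area scaling, which can be sketched as follows. The 45mm2 per 96 EUs density is the post's own rough figure, and linear scaling deliberately ignores uncore, memory PHYs and any extra logic:

```python
# Linear die-area extrapolation from Tigerlake's Xe LP block
# (~45 mm^2 for 96 EUs, per the rough estimate above).
MM2_PER_96_EUS = 45.0

def est_area_mm2(eus: int) -> float:
    """Naive linear scaling; real dies add uncore, PHYs, DP FP, etc."""
    return eus / 96 * MM2_PER_96_EUS

for eus in (512, 1024):
    print(f"{eus} EUs -> ~{est_area_mm2(eus):.0f} mm^2")
# 1024 EUs lands at ~480 mm^2, inside the 450-500 mm^2 estimate;
# a much larger measured die would point to extra logic beyond the EUs.
```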
 

mikk

Platinum Member
May 15, 2012
2,836
671
136
A 4GB GPU has appeared on SiSoftware; unsure of what it is...
The sample has 96 EUs, which means it's Gen12LP-based and therefore most likely a DG1 variant. Both samples have a Turbo of 1.5GHz, which is what we can expect (at least). I wonder if the 4GB sample is a newer ES, or if Intel is planning to release samples with different VRAM sizes.
 

SammichPG

Member
Aug 16, 2012
171
12
81
I wish them luck, but it will be a long time before they get professional software support. Even with the most ubiquitous GPUs on the market, I have the feeling that many vendors don't bother supporting Intel, and I also doubt Intel themselves ever tested full compliance with the OpenGL standards. Never mind the small detail of not having put a new GPU architecture on the market for 5 long years.

Let me give you a concrete example: a few weeks ago I was working with a 3D visualization software for microscopy data called Amira, where you load image slices of your sample taken at different depths and the program reconstructs a 3D model that you can use to get snazzy renderings or quantitative measurements.
I was running Amira on my laptop (Intel i7-8550U + Nvidia MX150) and I could not get a proper 3D model of my data to show, even though I was sure there was no error on my side. I eventually fixed the issue by assigning the program to the Nvidia GPU; sure, the MX150 is a dog, but the thing worked and I could work. If I had purchased an Intel-only laptop I wouldn't have been able to work during the quarantine.

This same version of the software, which was released sometime during 2014, runs flawlessly on a GCN 1.0 FirePro.
 

Tup3x

Senior member
Dec 31, 2016
395
217
86
The Xe 48EU version in Tiger Lake should be over 2x faster at the same power compared to the 48EU Ice Lake Gen11 GPU. The 96EU version @ 28W should be over 3.5x faster. Well, in 3DMark Fire Strike at least.
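Taking those two multipliers at face value, one can back out how much the extra EUs add; this is simple arithmetic on the claimed Fire Strike numbers, nothing more:

```python
# Claimed Fire Strike uplifts vs the 48EU Ice Lake Gen11 baseline:
tgl_48eu = 2.0   # 48 EU Tiger Lake, same power as the baseline
tgl_96eu = 3.5   # 96 EU Tiger Lake @ 28W (note: a higher power budget)

gain_from_doubling_eus = tgl_96eu / tgl_48eu
print(gain_from_doubling_eus)               # 1.75x for 2x the EUs
print(f"{gain_from_doubling_eus / 2:.1%}")  # 87.5% apparent scaling efficiency
```

Since the 96EU figure comes at 28W rather than matched power, the 87.5% is an upper-bound-style read, not a clean EU-scaling measurement.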
 

IntelUser2000

Elite Member
Oct 14, 2003
7,095
1,628
136
@Tup3x They updated the presentation. The 2x gain over Icelake hasn't changed. What's changed is that the 48EU TGL is now 2x the 48EU Icelake. The 48EU TGL is still faster than the 64EU Icelake, though.
 

IntelUser2000

Elite Member
Oct 14, 2003
7,095
1,628
136
According to a reddit post, a Lenovo PM has said the drivers are crap and Tigerlake is only stable in 3DMark. This explains the 25-35% gap it has compared to their initial target of 2x over Gen 11.

See, this is what I was worried about. Xe uses compiler-based scheduling; Nvidia moved to that a while ago. It enables savings in power and die area, but it moves the burden to the driver developers.

The results are going to look pretty binary: either they get this right, in which case it will look very good and raise the quality of Intel drivers significantly, or they won't, and it'll look pretty bad in both compatibility and performance.
 

Dribble

Golden Member
Aug 9, 2005
1,829
363
126
I was running Amira on my laptop (Intel i7-8550U + Nvidia MX150) and I could not get a proper 3D model of my data to show, even though I was sure there was no error on my side. I eventually fixed the issue by assigning the program to the Nvidia GPU; sure, the MX150 is a dog, but the thing worked and I could work. If I had purchased an Intel-only laptop I wouldn't have been able to work during the quarantine.

This same version of the software, which was released sometime during 2014, runs flawlessly on a GCN 1.0 FirePro.
In professional rendering, Nvidia are in a whole different class from Intel or AMD; they have by far the best drivers and support. It's a virtuous circle: all the devs use Nvidia because when there is a problem they actually respond and fix it, so their drivers keep getting better, and obviously being developed on Nvidia hardware means that's the most tested hardware.
 

NTMBK

Diamond Member
Nov 14, 2011
8,809
1,855
136
According to a reddit post, a Lenovo PM has said the drivers are crap and Tigerlake is only stable in 3DMark. This explains the 25-35% gap it has compared to their initial target of 2x over Gen 11.

See, this is what I was worried about. Xe uses compiler-based scheduling; Nvidia moved to that a while ago. It enables savings in power and die area, but it moves the burden to the driver developers.

The results are going to look pretty binary: either they get this right, in which case it will look very good and raise the quality of Intel drivers significantly, or they won't, and it'll look pretty bad in both compatibility and performance.
On the plus side, if the issue is in the drivers, they could improve those after launch; inefficient hardware is just inefficient forever. I guess we'll see if Intel is willing to spend the money on a huge driver team.
 

IntelUser2000

Elite Member
Oct 14, 2003
7,095
1,628
136
On the plus side, if the issue is in the drivers, they could improve those after launch- inefficient hardware is just inefficient forever. I guess we'll see if Intel is willing to spend the money on a huge driver team.
The issue with bad drivers at launch is that many systems will ship with them, and laptops especially aren't going to be updated beyond those initial drivers. It'll also impact the first impression of the hardware.
 
Mar 11, 2004
20,312
2,461
126
It's worse than that. Bad drivers could really hurt them with OEMs (who are not going to want to deal with that headache), right when AMD has competitive chips basically across the board.
 
