News Intel GPUs - Intel launches A580


NTMBK

Lifer
Nov 14, 2011
10,208
4,940
136
Intel has confirmed in their earnings call that DG1 is currently shipping to customers for revenue and that DG2 has powered on.
They also confirmed that the Xe Max branding is for the dGPU/DG1.

Jesus Christ... Intel Iris Xe Max Graphics, what a mouthful.

Glad that DG2 is powered on, but that implies they're still a long way from shipping.
 

Gideon

Golden Member
Nov 27, 2007
1,608
3,573
136
Jesus Christ... Intel Iris Xe Max Graphics, what a mouthful.

Glad that DG2 is powered on, but that implies they're still a long way from shipping.
Yeah, the X's on the pictures kinda reminded me of "ATI Radeon X1900 XTX", with the exception that the latter is slightly easier to pronounce. What's up with Intel's marketing department?

They designed new logos that are kinda nice. How could they fail so hard with model naming? IMO the change with Nehalem to "Core iX" and shorter product names was a really good change. Why can't they do it again?
 

mikk

Diamond Member
May 15, 2012
4,112
2,108
136
Previous reports mentioned that there could be a 960 EU SKU, but Tom's sources say that the top DG2 SKU only has 512 EUs with 4096 stream processors, all packed in one big tile.

No big surprise given that the Intel test driver from last year confirmed 512 EUs.
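
For reference, the "stream processor" figure is just the EU count times the 8 FP32 ALUs per EU on Xe-LP. A quick sanity check in Python (assuming DG2 keeps that EU layout, which isn't confirmed):

```python
# EU -> "stream processor" sanity check. Assumes 8 FP32 ALUs per EU,
# as on Xe-LP; the DG2 EU layout itself isn't confirmed yet.
def stream_processors(eu_count: int, alus_per_eu: int = 8) -> int:
    return eu_count * alus_per_eu

for eus in (96, 128, 512, 960):
    print(f"{eus:4d} EUs -> {stream_processors(eus):5d} stream processors")
# 512 EUs -> 4096 stream processors, matching the report
```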
 

blckgrffn

Diamond Member
May 1, 2003
9,110
3,029
136
www.teamjuchems.com
Yeah, the X's on the pictures kinda reminded me of "ATI Radeon X1900 XTX", with the exception that the latter is slightly easier to pronounce. What's up with Intel's marketing department?

They designed new logos that are kinda nice. How could they fail so hard with model naming? IMO the change with Nehalem to "Core iX" and shorter product names was a really good change. Why can't they do it again?

Intel g3/g5/g7/g9 or something would have been way, way too easy. Start out with some three-digit numbers after that and work your way up. It worked before :)
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
No big surprise given that the Intel test driver from last year confirmed 512 EUs.

I only half trust this guy.

He was claiming Tensor Memory Compression, or that the ray tracing core would be much, much faster and result in much lower RT losses.

Also Notebookcheck does very good reviews, but their news outlet regurgitates everything they can pull from the net.
 

DeathReborn

Platinum Member
Oct 11, 2005
2,743
734
136

Not pretty for Iris Xe Max DG1. Still, can't wait for reviews when they trot out the 1030, RX550 & the spectacular failure "1030 DDR4" (I know, I know, but it had to be mentioned) to see how it fares.

But hey, at least it can use either LPDDR4x or GDDR6, that's something alright.
 

Shivansps

Diamond Member
Sep 11, 2013
3,835
1,514
136

Not pretty for Iris Xe Max DG1. Still, can't wait for reviews when they trot out the 1030, RX550 & the spectacular failure "1030 DDR4" (I know, I know, but it had to be mentioned) to see how it fares.

But hey, at least it can use either LPDDR4x or GDDR6, that's something alright.

That's actually not bad when you consider it is LPDDR4X vs 64-bit GDDR5. The MX330 has about 14 GB/s more.

Think about this: if the DG1 can provide RX550 performance with just 128-bit DDR4, it would likely mean Intel has something faster and more memory-efficient than Polaris. It remains to be seen how well they can scale that up, but for a first attempt at a dGPU it's not bad at all.
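
Peak bandwidth is just bus width times data rate. A minimal sketch; the clocks below are illustrative placeholders, not confirmed board specs:

```python
# Peak memory bandwidth = (bus width in bytes) * (transfer rate).
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_mtps: float) -> float:
    return bus_width_bits / 8 * data_rate_mtps / 1000  # GB/s

# Illustrative numbers only -- shipping clocks vary by board:
print(peak_bandwidth_gbs(128, 4266))  # 128-bit LPDDR4X-4266 -> ~68.3 GB/s
print(peak_bandwidth_gbs(64, 6000))   # 64-bit GDDR5 @ 6 GT/s -> 48.0 GB/s
```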
 

stebler

Member
Sep 10, 2020
25
75
61
From what I've seen (not really sure), the first DG2 configuration that will appear in the OpenCL driver on GitHub is 2 slices x 4 subslices x 16 EUs.
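
Spelled out, that first configuration works out to 128 EUs; a trivial sketch (variable names are mine, not the driver's actual struct fields):

```python
# The slice/subslice/EU hierarchy spelled out; names are illustrative,
# not the driver's actual fields.
slices, subslices_per_slice, eus_per_subslice = 2, 4, 16
total_eus = slices * subslices_per_slice * eus_per_subslice
print(total_eus)  # 128
```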
 

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
AT has an article up about DG1 now. Intel says that it is indeed not salvaged Tiger Lake, despite there being no real difference other than clock speed (an extra 20%). To me it seems like a waste of 10 nm wafers if that really is the case.

AT says they are also planning on releasing DG1 for OEM desktops next year.
 

mikk

Diamond Member
May 15, 2012
4,112
2,108
136
So I was right, it's a dedicated GPU. Performance looks underwhelming based on the slide (only 2 benchmarks); in some cases the iGPU can be faster despite having a 300 MHz GPU clock disadvantage:

But in a surprising (and welcome) bit of transparency from Intel, the company admits that in some scenarios Tiger Lake’s iGPU may outperform Xe MAX, and as a result those games shouldn’t run on Xe MAX.


I guess the Tiger Lake iGPU with LPDDR5 next year will beat DG1.
 

mikk

Diamond Member
May 15, 2012
4,112
2,108
136
There will also be an OEM-only desktop card.

A desktop card wouldn't have the cooling issues; interesting for testing purposes.

Though still in the early stages, a hereto unnamed third party has reached an agreement with Intel to produce DG1-based desktop cards. These cards, in turn, will be going into OEM desktop systems, and they are expected to appear early next year.

In addition to mobile, Intel is working with its partners to bring Xe-LP-based discrete graphics to value desktops in the first half of 2021.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
What do people actually think happened here? Surely it would have been much more logical to double the iGPU or something.

Some weird internal politics? Roadmap confusion? Aiming at Ryzen OEM machines?!!?
 

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
What do people actually think happened here? Surely it would have been much more logical to double the iGPU or something.

Some weird internal politics? Roadmap confusion? Aiming at Ryzen OEM machines?!!?

Probably connected in a way to the seemingly cancelled Rocket Lake-U. Need more info as to how.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I guess the Tiger Lake iGPU with LPDDR5 next year will beat DG1.

Maybe, if the iGPU version is implemented well and it has a 25W TDP available for the SoC.

The dGPU can perform much more consistently across a wider range of devices than the iGPU version for this reason.

Other than the Acer Swift, the rest are underperforming by quite a bit, whether due to implementation, lack of power, or both.
 

mikk

Diamond Member
May 15, 2012
4,112
2,108
136
The MSI Prestige 14 seems to be a fast LPDDR4x device, but it gets hot based on some reports. The new XPS 2-in-1 should run well too; its cooling is better than the clamshell version, which struggles. From Ultrabookreview:

So, while the clamshell XPS 13 9310 goes for a more standard thermal design with two fans and a heatpipe in between over the CPU, the XPS 2-in-1 goes with two fans as well, but a larger vapor chamber in between them. There’s also a difference in how the airflow is designed for the two.


Unfortunately, the bigger devices with more thermal breathing room are usually DDR4-based.

Btw, the Xe Max benchmarks were made on the Acer Swift 3x; according to a hands-on video on YouTube it only has one fan for both the CPU and the Xe Max, so there is a thermal throttling risk with this solution. Ideally a CPU+dGPU device runs two fans.
 

mikk

Diamond Member
May 15, 2012
4,112
2,108
136
There is a quirk to note. According to Intel, some games might perform slightly worse on Intel Iris Xe Max graphics than on Intel Iris Xe. An Intel spokesperson said this is “typically due to latencies: Dual-rank versus single-rank, hybrid display copy, or traversing the PCIe bus.” The example given was DOTA 2, which was about 10 FPS slower on average with Iris Xe Max, though it did still exceed 70 FPS.

Interestingly, Intel revealed the Iris Xe Max may not necessarily render a PC game better than the integrated graphics on a Tiger Lake chip. It depends on the game.

“This is really because of preferences in games about how they like to access memory, or latencies across PCI Express buses,” said John Webb, Intel's director of client graphics marketing.


This is why it can be slower in certain cases: in some scenarios the bandwidth advantage simply isn't big enough to overcome the latency disadvantage versus the iGPU version.
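
As a toy model of that trade-off (every number below is made up purely for illustration), the per-frame cost is per-transfer latency plus bulk transfer time, and the extra PCIe latency can eat the bandwidth win:

```python
# Toy model only -- all numbers are invented for illustration.
# Per-frame cost = (transfers * per-transfer latency) + (bytes / bandwidth).
def frame_cost_ms(transfers: int, latency_us: float,
                  bytes_moved: float, bandwidth_gbs: float) -> float:
    return transfers * latency_us / 1000 + bytes_moved / (bandwidth_gbs * 1e6)

# iGPU: low latency, slower shared system memory.
igpu = frame_cost_ms(transfers=200, latency_us=2, bytes_moved=300e6, bandwidth_gbs=51)
# dGPU: higher latency across PCIe, faster dedicated memory.
dgpu = frame_cost_ms(transfers=200, latency_us=10, bytes_moved=300e6, bandwidth_gbs=68)
print(f"iGPU {igpu:.2f} ms vs dGPU {dgpu:.2f} ms per frame")
# The dGPU can come out slightly slower despite the bandwidth advantage.
```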
 

Panino Manino

Senior member
Jan 28, 2017
813
1,010
136
It's not "disappointing" IMO, it's a small GPU, we knew would be very low end, right?
Still, there are some advantages:

Hyper Encode compared to RTX 2080 NVENC:

But did the quality improve?
Last month I finally tried encoding a movie to reduce the file size using HandBrake on a Skylake notebook I have here. I was surprised by how fast the work was done, but unfortunately I was even more surprised by how poor the quality was. I tried again a few more times but ended up very unsatisfied with the quality and final size.
I remember having heard comments about poor quality or bugs, so is this QuickSync really worth it? For me it's pointless how fast it works if I end up with RMVB quality.
 

mikk

Diamond Member
May 15, 2012
4,112
2,108
136
This Reddit post is complete nonsense, with many, many errors in it; forget it. Even the subslice count is wrong, this guy is clearly confused.


But did the quality improve?
Last month I finally tried encoding a movie to reduce the file size using HandBrake on a Skylake notebook I have here. I was surprised by how fast the work was done, but unfortunately I was even more surprised by how poor the quality was.


Skylake is Gen9-based, which has worse quality than Gen11. I haven't seen Gen12 tests; I will check it out soon.
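
If anyone wants to try it themselves in the meantime, ffmpeg's QSV encoder in its quality-targeted (ICQ) mode is a more controllable test than HandBrake's defaults. A rough sketch, assuming an ffmpeg build with QSV support; the file names, quality value, and preset are placeholders to experiment with, not recommendations:

```python
# Rough QuickSync test via ffmpeg's QSV encoder in ICQ mode.
# Assumes an ffmpeg build with QSV/libmfx support; file names,
# quality value, and preset are placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-y",
    "-i", "input.mkv",         # placeholder source file
    "-c:v", "hevc_qsv",        # QuickSync HEVC encoder
    "-global_quality", "24",   # ICQ quality target (lower = better)
    "-preset", "veryslow",     # slowest/highest-quality QSV preset
    "-c:a", "copy",            # pass audio through untouched
    "output.mkv",
], check=True)
```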
 

stebler

Member
Sep 10, 2020
25
75
61
This Reddit post is complete nonsense, with many, many errors in it; forget it. Even the subslice count is wrong, this guy is clearly confused.
Just look at the driver if you don't believe me. I already linked data structure descriptions on Reddit. Download 100.8885, open iga64.dll, start at offset 0x22fab8 and subtract 0x180000c00 from every pointer. Shouldn't be that hard!
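
Something like this (the pointer count and flat-array layout are assumptions for illustration; see the Reddit write-up for the real structure):

```python
# Minimal sketch of the rebasing described above: read 64-bit
# little-endian pointers from iga64.dll at the given file offset
# and subtract the image delta. NUM_POINTERS is a hypothetical count.
import struct

IMAGE_DELTA = 0x180000C00
TABLE_OFFSET = 0x22FAB8
NUM_POINTERS = 16  # hypothetical, for illustration

with open("iga64.dll", "rb") as f:
    f.seek(TABLE_OFFSET)
    raw = f.read(8 * NUM_POINTERS)

for i, (ptr,) in enumerate(struct.iter_unpack("<Q", raw)):
    print(f"entry {i:2d}: 0x{ptr:016x} -> file offset 0x{ptr - IMAGE_DELTA:x}")
```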