News Intel GPUs - more reviews coming in!

Page 21 - AnandTech community forums

Maxima1

Diamond Member
Jan 15, 2013
3,475
741
146
I'm measuring at the wall with a kill-a-watt. It's certainly possible their laptop solutions work better than their desktop-oriented ones; I haven't tested those.

ZeroCore doesn't even work at all on Windows 10 and AMD has said it never will.

I'm certainly curious about DG1's numbers and performance in this area.
Neither does Nvidia for desktop. Mobile is not the same.


Ultimately ZeroCore Power isn’t a brand new concept, but this is the first time we’ve seen something quite like this on the desktop. Even AMD will tell you the idea is borrowed from their mobile graphics technology, where they need to be able to power down the GPU completely for power savings when using graphics switching capabilities. But unlike mobile graphics switching AMD isn’t fully cutting off the GPU, rather they’re using power islands to leave the GPU turned on in a minimal power state.
 

jpiniero

Lifer
Oct 1, 2010
12,577
3,976
136
I'm measuring at the wall with a kill-a-watt. It's certainly possible their laptop solutions work better than their desktop-oriented ones; I haven't tested those.
It does shut off basically on laptops. You use the IGP instead. You can't really do that on desktop.
 

NTMBK

Diamond Member
Nov 14, 2011
9,976
4,365
136
I never said it was good. I always thought the dGPUs that barely performed better than the integrated parts were stupid solutions for taking the load off of some i7 SKUs. But maybe it isn't like that this time. There is a rumor Xe will have multi-GPU support....
Multi-GPU sucks, and is going to continue to suck.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,518
3,567
136
Even if DG1 isn't faster than the one in TGL-U, it's fine, because it'll play the role of replacing its competitors' dGPUs.

With a dGPU you can use it all the way from a 15W Core i3 Tigerlake with a much weaker iGPU (say, a GT1 32EU part) to a 45W Cometlake part, or even on AMD systems.
 

Maxima1

Diamond Member
Jan 15, 2013
3,475
741
146
Multi-GPU sucks, and is going to continue to suck.
I agree that's been the history of multi-GPU. I was assuming in that post that they somehow manage to make it addressable as one unit with no scaling or stutter issues.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,518
3,567
136
I agree that's been the history of multi-GPU. I was assuming in that post that they somehow manage to make it addressable as one unit with no scaling or stutter issues.
The difficulty of making that possible is akin to making an 8-wide CPU by using two 4-wide dies. It doesn't work. You end up with a dual core 4-wide CPU.

CPUs had no choice but to go multi-core because ILP/MLP is limited. 3D workloads scale pretty much linearly with the amount of resources on the GPU "core".

Intel might be able to make it work better using EMIB connections, but you'll still end up with a "dual core" GPU, not a single GPU. And if the two aren't identical, it's even worse. An iGPU and a dGPU, even with the same specs, are very different: one shares TDP and memory, the other is entirely separate.

The compute GPUs such as Ponte Vecchio are different because they already use multi-GPU setups anyway. Actually top systems use tens of thousands of them.
 
Reactions: Tlh97 and lobz

PingSpike

Lifer
Feb 25, 2004
21,706
532
126
It does shut off basically on laptops. You use the IGP instead. You can't really do that on desktop.
I mean, AMD claimed they could, and I have to assume it worked in a lab somewhere at least once. I simply don't believe there is a technical reason it can't be done; it's just not a priority. Maybe it requires too much bespoke software to be implemented reliably across the board.

When playing around with my machine, it seems like the difference between iGPU off and iGPU on, idling at a Linux terminal, is only about 1 watt. Discrete can't really beat that, I'm assuming because they have their own set of VRAM that has to be powered up.
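For what it's worth, wall measurements can be cross-checked on Linux against the CPU's own energy counters. A minimal sketch, assuming an Intel machine exposing the RAPL powercap interface (the counter values in the example are made up for illustration):

```python
# Sketch: derive average package power from two readings of the cumulative
# RAPL energy counter (/sys/class/powercap/intel-rapl:0/energy_uj on Linux),
# which counts microjoules consumed by the package.

def avg_power_watts(energy_uj_start: int, energy_uj_end: int, interval_s: float) -> float:
    """Average power over the sampling interval, in watts (1 W = 1e6 uJ/s)."""
    return (energy_uj_end - energy_uj_start) / (interval_s * 1_000_000)

# Made-up counter values: 2,500,000 uJ consumed over 5 s -> 0.5 W
print(avg_power_watts(10_000_000, 12_500_000, 5.0))  # -> 0.5
```

In practice you'd read the counter twice with a sleep in between. Note RAPL only covers the package, so it won't show dGPU or VRAM draw the way the kill-a-watt does.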
 

IntelUser2000

Elite Member
Oct 14, 2003
8,518
3,567
136
When playing around with my machine, it seems like the difference between iGPU off and iGPU on, idling at a Linux terminal, is only about 1 watt.
1 watt is a lot for mobile. Actually it goes a lot lower than that: you can get the entire package to idle at 300mW, and with the GPU off the GPU's share is a fraction of that, at 10-30mW.

It's not just the VRAM that keeps power high. Having to communicate over the PCI Express bus means signals travel longer distances, which costs more power.

For most peripheral components that's not a big problem, but a GPU uses many times more bandwidth.

Realistically, it'll always be using higher power.
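To put those numbers in perspective, some quick back-of-the-envelope arithmetic (battery capacity and draw figures are assumed, not measured):

```python
# Idle runtime for an assumed 50 Wh laptop battery: a package idling at
# 300 mW versus the same package plus an extra 1 W of dGPU idle draw.

def idle_hours(battery_wh: float, draw_w: float) -> float:
    """Hours of idle runtime at a constant power draw."""
    return battery_wh / draw_w

print(round(idle_hours(50, 0.3), 1))  # 300 mW package idle -> ~166.7 h
print(round(idle_hours(50, 1.3), 1))  # plus 1 W of dGPU    -> ~38.5 h
```

An extra watt that looks negligible at the wall cuts idle runtime by roughly 4x, which is why it matters so much for mobile.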
 
Reactions: Tlh97

lobz

Platinum Member
Feb 10, 2017
2,051
2,833
136
1 watt is a lot for mobile. Actually it goes a lot lower than that: you can get the entire package to idle at 300mW, and with the GPU off the GPU's share is a fraction of that, at 10-30mW.

It's not just the VRAM that keeps power high. Having to communicate over the PCI Express bus means signals travel longer distances, which costs more power.

For most peripheral components that's not a big problem, but a GPU uses many times more bandwidth.

Realistically, it'll always be using higher power.
That's why you need 64 terabytes of HBM stacked on-die, so there's no need to communicate over the PCIe bus. I'm also sure it's April 1st somewhere in the world right now.

Oh wait...
 
Reactions: Tlh97

IntelUser2000

Elite Member
Oct 14, 2003
8,518
3,567
136
That's why you need 64 Terabytes of HBM stacked on-die, so no need for communicating through the PCIe bus. I'm also sure there's April 1st somewhere in the world right now.
You could have infinite VRAM, but CPU-GPU traffic has to go through somewhere, and that's the PCI Express bus. And VRAM isn't zero power at idle: you have to find a way to shut it down when idle and wake it up when it isn't.

Any fault in software can easily knock it out of the idle state, and that becomes easier the longer the signal has to travel.

dGPU setups also focus on performance, and too long a wakeup time can impact frame rates, so they don't really care about every "milliwatt".
 
Reactions: Tlh97 and lobz

IntelUser2000

Elite Member
Oct 14, 2003
8,518
3,567
136
So DG2 is called Gen 12.7 graphics. That means the bigger 256 and 512EU variants may not be seen until the next generation. I wonder if a 1024EU version exists, unless they are targeting the midrange and leaving high end for later?

Some examples of .5/.7 graphics.

Arrandale/Clarkdale on package GPU - Gen 5.75
Haswell - Gen 7.5
Kabylake - Gen 9.5
 
Reactions: Tlh97 and psolord

moonbogg

Lifer
Jan 8, 2011
10,550
2,847
136
Intel is going to sell computers without AMD or Nvidia stickers on them. This "GPU" will let them sell their own stickers. I see this "GPU" as a crappy excuse to sell Intel-branded computers. It's a sticker they are interested in selling, not a real GPU.
I think Nvidia is just way too far ahead of the game for anyone to realistically compete with. Intel can attack their branding by waging a sticker war, but that's the best they can hope for. Nvidia's GPUs are so good they can sell high-end for $1200+ and people can't wait to lap them up. They can release another 1080Ti bombshell anytime they want and can sell high-performance parts for around $300 with low power consumption. I hate their pricing, but they have a stranglehold on the GPU market. I don't see this as the kind of thing that can be beaten simply by throwing money at it.
I get the feeling this is an attempt by Intel to diversify and establish strength in another significant market, but I also expect their investment to go right down the drain. I can easily see the whole project just failing hard in every aspect. DOA.
 
Reactions: Tlh97 and lobz

PingSpike

Lifer
Feb 25, 2004
21,706
532
126
Honestly, taken together with all the F-series CPUs out there now, the purpose of the dGPU seems to be to work around yield issues with CPUs' iGPUs and to avoid giving that market wholesale to Nvidia and AMD.

It's hard to tell what OEMs are paying for non-F SKUs versus F SKUs, but the street price difference seems to be about $40 these days. At that point you may as well plug a GT1030/mxX50 in there and buy the F.

And Nvidia already has the much-maligned DDR4 1030. Even though everyone complains about those, they may be quite competitive complements to an iGPU-less SKU if they have a low price.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,518
3,567
136
@PingSpike Don't put stock in street pricing. Lower demand can easily explain the difference: lower demand = lower prices. That's because the portion of the population that can use the GPU is a multiple of those that can't.

Official ARK price difference is $25.

96EU Xe GPU looks to be under 50-55mm2 in size. Of course with the dGPU they'll need to add the GDDR controller and the connection to the outside world.
 

lobz

Platinum Member
Feb 10, 2017
2,051
2,833
136
@PingSpike Don't put stock in street pricing. Lower demand can easily explain the difference: lower demand = lower prices. That's because the portion of the population that can use the GPU is a multiple of those that can't.

Official ARK price difference is $25.

96EU Xe GPU looks to be under 50-55mm2 in size. Of course with the dGPU they'll need to add the GDDR controller and the connection to the outside world.
And that's not just die area but power consumption too. I'm not ready to bury it before it actually arrives, but I'm genuinely curious whether the rumors about Xe having much worse power efficiency than expected are true or not. If they are true, then it's a very bad pipe cleaner; if they are not, then it can also be used effectively as an own-branded dGPU in Whiskey Lake systems.
 

Thala

Golden Member
Nov 12, 2014
1,352
651
136
The difficulty of making that possible is akin to making an 8-wide CPU by using two 4-wide dies. It doesn't work. You end up with a dual core 4-wide CPU.
To be fair, it does not hurt much if you end up with "two 4-wide" dies as long as your interconnect is coherent - which rules out PCIe.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,518
3,567
136
If they are true, then it's a very bad pipe cleaner; if they are not, then it can also be used effectively as an own-branded dGPU in Whiskey Lake systems.
Historically they sucked scaling up the integrated solution, but the iGPU itself was pretty decent.

I assume we won't see products launch before summer, so they still have a lot of time to fix whatever issues they have, if any. Sometimes the driver and the hardware improve quite drastically.

Part of the issue may be that they used the GT2 iGPU as the base and scaled it up. That isn't necessarily a bad thing, but if it's not planned well and is more of a "let's see if we can make it bigger" effort, it can end up less than optimal.
 
Reactions: Tlh97 and lobz

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Woke up to a chime from my phone, "Intel announces first discrete graphics card", and then read through Anandtech's thingy only for it to be a bleep that didn't really announce anything worth even waking up for.


Guess putting pennies in the piggy bank will continue because who knows what the 3080 Ti is going to cost and AMD seems primed to match NV in price / performance.

firstworldproblems.jpg
 

IntelUser2000

Elite Member
Oct 14, 2003
8,518
3,567
136
So DG1 is a software development vehicle to optimize for Xe, not for selling devices.


Lisa from Intel's graphics group said they'll release more info over the year or something. I'll have to watch the presentation myself.
 

NTMBK

Diamond Member
Nov 14, 2011
9,976
4,365
136
So DG1 is a software development vehicle to optimize for Xe, not for selling devices.


Lisa from Intel's graphics group said they'll release more info over the year or something. I'll have to watch the presentation myself.
Smells like Larrabee all over again. The first device turned out to be a dud, so they turned it into a "development platform".
 

Tup3x

Senior member
Dec 31, 2016
714
612
136
Smells like Larrabee all over again. The first device turned out to be a dud, so they turned it into a "development platform".
I don't see this as even remotely similar. Xe graphics are coming. This will help with optimising for it without the need for a laptop.
 

Stuka87

Diamond Member
Dec 10, 2010
5,985
2,202
136
I don't see this as even remotely similar. Xe graphics are coming. This will help with optimising for it without the need for a laptop.
Larrabee was "coming" at one point in time too, until it, well, didn't.

That said, I don't think this will be a dead product, but I am curious as to which OEM wants something like this.
 
Reactions: Tlh97 and NTMBK

tamz_msc

Diamond Member
Jan 5, 2017
3,324
3,258
136
The Destiny 2 demo was running at 1080p low with FPS in the 40s, along with significant input delay:


Edit: According to Notebookcheck, the GeForce MX150 does 35 FPS at 1080p medium settings. So as of now, Xe is about the same as an MX150 in performance.
 
