News Intel GPUs - Battlemage rumoured cancelled (again)


uzzi38

Platinum Member
Oct 16, 2019
2,668
6,200
146
Intel's Xe-core also has matrix ALUs (XMX), which AMD's Navi 24 (6500M) doesn't. Intel also has video encoders and 8 lanes of PCIe Gen 4 vs. 4 lanes on AMD's Navi 24.
So comparing die sizes for gaming performance is not apples to apples here.

[Image: Intel Architecture Day 2021 press deck slide]

Even if they had feature parity in that sense, I would still expect N24 to be smaller than DG2-128. Not sure about Intel's side, but on Nvidia's, the tensor cores are roughly 2mm^2 per TPC (2 SMs). For Navi 24 the equivalent would be 2mm^2 per WGP, which leaves you with an extra 16mm^2 of die area.

DG2-128 is just over 40mm^2 larger than N24. Even if you add the encoder, the PCIe lanes and the matrix pipelines back into the mix, Navi 24 would still almost certainly be smaller.
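As a rough sanity check, here is that arithmetic spelled out in a minimal sketch, using only the figures from this post (the 2mm^2-per-WGP cost is the Nvidia-derived estimate above, and Navi 24's 8-WGP count is an assumption on my part):

```python
# Back-of-the-envelope die-size arithmetic from the figures quoted above.
# These are forum estimates, not measured numbers.

delta_mm2 = 40         # DG2-128 is "just over 40 mm^2" larger than Navi 24
xmx_cost_per_wgp = 2   # matrix-unit cost per WGP, carried over from Nvidia's ~2 mm^2 per TPC
n24_wgps = 8           # Navi 24: 16 CUs = 8 WGPs (assumed here)

# Area Navi 24 would hypothetically gain if it had matrix pipelines:
xmx_area = xmx_cost_per_wgp * n24_wgps
print(f"Hypothetical matrix-unit area on N24: {xmx_area} mm^2")   # 16 mm^2

# Even after crediting that area back to N24, DG2-128 stays larger:
print(f"Remaining size gap: {delta_mm2 - xmx_area} mm^2")         # ~24 mm^2
```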
 
  • Like
Reactions: Tlh97

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
Even if they had feature parity in that sense, I would still expect N24 to be smaller than DG2-128. Not sure about Intel's side, but on Nvidia's, the tensor cores are roughly 2mm^2 per TPC (2 SMs). For Navi 24 the equivalent would be 2mm^2 per WGP, which leaves you with an extra 16mm^2 of die area.

DG2-128 is just over 40mm^2 larger than N24. Even if you add the encoder, the PCIe lanes and the matrix pipelines back into the mix, Navi 24 would still almost certainly be smaller.

Perhaps, but then it wouldn't be 40% bigger like it is now, and performance/mm^2 would be completely different.
 
  • Like
Reactions: coercitiv

mikk

Diamond Member
May 15, 2012
4,152
2,164
136
The 2400 TS GPU score is in Default mode; it's Performance mode that gets 3100 points. Default being 25W makes a lot of sense, and Performance is probably 35W. Most high-end Iris Xe parts score around 1700 points, and that's with 30W TDP settings. They run at least a 1.1GHz clock at those settings.

1.15GHz isn't the base clock. They said it's a realistic frequency for gaming workloads. Refer back to HWUnboxed: "Typical average frequency across a wide range of workloads." You can think of this as Turbo in "average workloads", since Turbo is active in modern CPUs almost all the time in one form or another.

In Performance mode it's 80%+ faster, and with 96 EUs, clock can't be the only reason for it.
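A quick back-of-the-envelope check of those numbers (all figures are the ones quoted in this thread, not official specs):

```python
# Time Spy GPU scores as quoted in this thread (forum figures, not official).
iris_xe_score = 1700   # typical high-end Iris Xe (96 EU) at ~30W
arc_perf      = 3100   # Arc A350M, Performance mode

speedup = arc_perf / iris_xe_score
print(f"Performance mode vs Iris Xe: {speedup:.2f}x")   # ~1.82x, i.e. 80%+ faster

# If the score scaled purely with clock from Iris Xe's ~1.1 GHz,
# the Arc part would need roughly this frequency to explain the gap:
print(f"Clock implied by pure frequency scaling: ~{1.1 * speedup:.1f} GHz")
```

The implied ~2GHz is far above the listed 1150MHz base clock, which is exactly what the next few posts argue about.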


1150 MHz is the base clock speed for the lowest TDP; check out Intel ARK.

The score is too high for a clock as low as 1150 MHz; imho it must have clocked higher in Time Spy. Maybe not by much in Default mode, but in Performance mode for sure.

Seems like Intel has a highly dynamic operating frequency range; the A350M can go to 2200 MHz in certain non-gaming workloads.

 

Asterox

Golden Member
May 15, 2012
1,026
1,775
136
1150 MHz is the base clock speed for the lowest TDP; check out Intel ARK.

The score is too high for a clock as low as 1150 MHz; imho it must have clocked higher in Time Spy. Maybe not by much in Default mode, but in Performance mode for sure.

Seems like Intel has a highly dynamic operating frequency range; the A350M can go to 2200 MHz in certain non-gaming workloads.


Lol, non-gaming workloads, huh. :laughing:

That is the point of the whole story, and of my comparison with AMD and Nvidia mobile GPUs.

Intel can't guarantee specific operating GPU frequencies, so it doesn't put them into the standard mobile GPU specifications.

Intel produces fog: the specification only shows GPU frequencies for the lowest TDP.

It is very simple if we look at an AMD or Nvidia mobile GPU specification: everything is nicely listed, and there is no unnecessary fog.


 
Feb 4, 2009
34,630
15,824
136
I’ll say it again: I don’t know why people are crapping on Intel for this.
I welcome a third competitor to break up the Nvidia/AMD duopoly.
I am a big believer in the rule of three; in this case we need a third player to give a low(er)-cost option that does something different. Even if they are not the right cards for me (or us), they will be the right cards for someone.
 

jpiniero

Lifer
Oct 1, 2010
14,688
5,318
136
I’ll say it again: I don’t know why people are crapping on Intel for this.
I welcome a third competitor to break up the Nvidia/AMD duopoly.
I am a big believer in the rule of three; in this case we need a third player to give a low(er)-cost option that does something different. Even if they are not the right cards for me (or us), they will be the right cards for someone.

When Intel started this effort, their intent was to replace AMD as the alternative. It's taken them so long that AMD has gotten their act together, so that's obviously not going to happen.

If it were fabbed at Intel's own fabs, they could possibly make it work as a media/compute upgrade for older systems.
 
Feb 4, 2009
34,630
15,824
136
When Intel started this effort, their intent was to replace AMD as the alternative. It's taken them so long that AMD has gotten their act together, so that's obviously not going to happen.

If it were fabbed at Intel's own fabs, they could possibly make it work as a media/compute upgrade for older systems.

What you said just supports the rule of three.
You need three choices or competitors to have good options.
 

blckgrffn

Diamond Member
May 1, 2003
9,145
3,086
136
www.teamjuchems.com
Yeah, a 6500 XT in an actual laptop, where there's an Intel or AMD iGPU that handles all the encode/decode bits and supports PCIe 4, is obviously the best place for it. I'd not think twice if I saw a decent model of laptop with that dGPU, ideally with a new Zen 3+ CPU so it's just the AMD driver set.

The real issue for AMD isn't performance (thankfully!) or driver support (IMO, the last two years have been so much better after a rough OG RDNA launch, following a pretty solid Polaris/Hawaii run), but rather that Intel will be able to focus their N6 allocation. For AMD, these have to be among the worst investments in terms of silicon, and therefore they aren't really incentivized to cut deals with OEMs on them.

On the flip side, Intel wants to grow their market share and has much more limited internal N6 competition. No doubt OEMs are eager to single-source and reap higher discounts when it comes to their CPUs and the entry-level "gaming laptops" which seem to be everywhere. It seems like Intel could really push these.

Honestly, I'd consider anything with a 1080p screen and an Arc GPU a "gaming" laptop for the everyman. 60 FPS in esports titles at full resolution and decent/high settings? Good enough for that $600 Costco laptop! :) Now, just make it so those games don't crash or have weird graphical glitches, Intel. Also, it'd be nice to have laptops that can charge over USB-C and still have dGPUs; the number of laptops that move back to barrel chargers when the dGPU shows up is a bummer.
 

moinmoin

Diamond Member
Jun 1, 2017
4,975
7,736
136
Compute GPUs at this point are an entirely different line of business now.
At this point at Intel, already? I highly doubt that. Intel did iGPUs for years; only "now" are they trying to scale that up. Ponte Vecchio for the much-delayed Aurora seems to me a very messy first take on a compute GPU. No way is that already a business with settled procedures, fully independent from all other GPU-related endeavors.
 

beginner99

Diamond Member
Jun 2, 2009
5,211
1,582
136
AMD using the 6500 XT for comparisons is probably not the best move, as they will get ridiculed for its missing encoding support and just 4 PCIe lanes. Put this one in an older system and it will suddenly be worse than the comparison.

I wonder why Arc 3 has very consistent FPS across all the games while the 6500 XT is all over the place, especially in F1. Does Arc 3 have some fundamental limitation? What is its bus width?

Why Intel includes matrix units in this GPU, I don't know. I would think they could save a lot of space by not having them in the whole Arc line. Or how does it help gaming? For RT? Or upscaling?
 

Tup3x

Senior member
Dec 31, 2016
975
960
136
Lol, non-gaming workloads, huh. :laughing:

That is the point of the whole story, and of my comparison with AMD and Nvidia mobile GPUs.

Intel can't guarantee specific operating GPU frequencies, so it doesn't put them into the standard mobile GPU specifications.

Intel produces fog: the specification only shows GPU frequencies for the lowest TDP.

It is very simple if we look at an AMD or Nvidia mobile GPU specification: everything is nicely listed, and there is no unnecessary fog.


Intel decided to list the worst-case scenario. It's much more difficult to guarantee clocks for the high-TDP versions, because OEMs may have bad cooling and thus thermal throttling. Intel's listed clock speeds are completely in line with what NVIDIA has posted; actually, they look even better in comparison.
 

LightningZ71

Golden Member
Mar 10, 2017
1,631
1,901
136
AMD using the 6500 XT for comparisons is probably not the best move, as they will get ridiculed for its missing encoding support and just 4 PCIe lanes. Put this one in an older system and it will suddenly be worse than the comparison.

I wonder why Arc 3 has very consistent FPS across all the games while the 6500 XT is all over the place, especially in F1. Does Arc 3 have some fundamental limitation? What is its bus width?

Why Intel includes matrix units in this GPU, I don't know. I would think they could save a lot of space by not having them in the whole Arc line. Or how does it help gaming? For RT? Or upscaling?
One simple primary reason: Intel XeSS. It uses matrix math heavily in its calculations for upscaling and smoothing.
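For a sense of why that pushes Intel toward dedicated matrix hardware, here is a toy, purely illustrative sketch (not Intel's actual XeSS network; the layer sizes and weights are invented) showing that the hot loop of an ML upscaler is one big matrix multiply, which is exactly the operation XMX units accelerate:

```python
import numpy as np

# Toy stand-in for one layer of an ML upscaler: per-pixel feature vectors
# pushed through a learned dense layer. Real XeSS runs a trained network;
# this only illustrates where the heavy matrix math comes from.
rng = np.random.default_rng(0)
features = rng.standard_normal((1920 * 1080, 48)).astype(np.float32)  # one feature vector per pixel
weights  = rng.standard_normal((48, 64)).astype(np.float32)           # hypothetical learned weights

# The hot loop: a (2073600 x 48) @ (48 x 64) matrix multiply per layer,
# plus a ReLU -- the workload matrix ALUs like XMX are built to accelerate.
activations = np.maximum(features @ weights, 0.0)
print(activations.shape)  # (2073600, 64)
```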
 

blckgrffn

Diamond Member
May 1, 2003
9,145
3,086
136
www.teamjuchems.com
AMD using the 6500 XT for comparisons is probably not the best move, as they will get ridiculed for its missing encoding support and just 4 PCIe lanes. Put this one in an older system and it will suddenly be worse than the comparison.

That's not an issue in new-to-market laptops, and that's the only place Arc is launching right now. So all that extra silicon is redundant in every Intel laptop they ship, which is why they made a slide about how they can work in conjunction with the units on the iGPU.

In nearly all scenarios, I think you would prefer to keep all that acceleration on the primary SoC, to avoid spinning up the dGPU at all when it can be avoided, whether constrained by power on battery or by thermals when plugged in.
 
  • Like
Reactions: KompuKare and Tlh97

jpiniero

Lifer
Oct 1, 2010
14,688
5,318
136
At this point at Intel, already? I highly doubt that. Intel did iGPUs for years; only "now" are they trying to scale that up. Ponte Vecchio for the much-delayed Aurora seems to me a very messy first take on a compute GPU. No way is that already a business with settled procedures, fully independent from all other GPU-related endeavors.

PV got blown up because they thought they would be able to fab for Aurora on 7nm... and they still aren't even close to being able to do that.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,947
7,361
136
Seems a bit uncalled for, but AMD decided to take potshots at Intel for some reason or another.


- Because AMD knows they're the "other guys" in the GPU space and they have far fewer loyal customers than NV does. If I were a betting man, I'd say that most Radeon buyers would be happy to go with whatever company gave them the best price-to-performance, while most Nvidia buyers would only ever go with Nvidia (and why not? NV hasn't given anyone a reason to go elsewhere in recent memory).

Intel is going to hurt AMD's GPU marketshare much more than it will hurt NV's, at least initially.
 
  • Like
Reactions: Tlh97 and Saylick

moinmoin

Diamond Member
Jun 1, 2017
4,975
7,736
136
PV got blown up because they thought they would be able to fab for Aurora on 7nm... and they still aren't even close to being able to do that.
Not only that. Imo, its huge number of chiplets also shows they thought their packaging tech would let them brute-force scale up to a target performance they wouldn't have been able to achieve otherwise. Another case of biting off more than one can chew, it seems.
 
  • Like
Reactions: xpea
Feb 4, 2009
34,630
15,824
136
- Because AMD knows they're the "other guys" in the GPU space and they have far fewer loyal customers than NV does. If I were a betting man, I'd say that most Radeon buyers would be happy to go with whatever company gave them the best price-to-performance, while most Nvidia buyers would only ever go with Nvidia (and why not? NV hasn't given anyone a reason to go elsewhere in recent memory).

Intel is going to hurt AMD's GPU marketshare much more than it will hurt NV's, at least initially.

Yup, not intentional trolling, and I admit I am mainly an Nvidia guy.
I have owned far more ATI/AMD cards than Nvidia; I have traditionally wanted Nvidia cards, but I have liked the lower cost and similar performance ATI/AMD have offered.
The only card I was excited to own was an old-school ATI 9500 Pro AGP card.
 

DrMrLordX

Lifer
Apr 27, 2000
21,709
10,983
136
Intel needs competitive GPU-based compute for its DC business.

There's always Ponte Vecchio!

Oh wait.

PV got blown up because they thought they would be able to fab for Aurora on 7nm... and they still aren't even close to being able to do that.

7nm/Intel 4 is still vapor. It's kind of like how their 10nm woes killed Xeon Phi. Starting to notice a pattern?

Not only that. Imo, its huge number of chiplets also shows they thought their packaging tech would let them brute-force scale up to a target performance they wouldn't have been able to achieve otherwise. Another case of biting off more than one can chew, it seems.

Do we have any solid info on how their packaging tech is failing them in this instance?
 
  • Like
Reactions: NTMBK