
News Intel GPUs - we've given up on B770, where's Celestial already

The website has some gaming results.

Roughly 50% faster than the B60. That makes it clearer why they didn't make a B770: it's below 5060 Ti level. They would have had to sell it at $350 maximum, ideally $300. Even before the AI-driven memory-price inflation it wouldn't have made sense for Intel.
The hypothetical gaming version (B770?) with a bit more thermal headroom (and perhaps game-specific optimizations missing for G31?) would probably land around or somewhat above the 5060 Ti on average, especially in raster. It was never that exciting a prospect even from a gaming/consumer PoV (the B580 at least got 12GB of VRAM for its price range). It could only have made sense in early 2025, or by H1 2025 at the latest.

Then there are CPU-bottleneck issues with Battlemage, some games not being as optimized for some Intel platforms, and DLSS 4/4.5 being better than XeSS in both quality and in-game availability. It wouldn't have been as clear a choice over a 16GB 9060 XT/5060 Ti in many cases as the B580 was over 8GB cards like the RTX 4060/5050 or RX 7600.

Lemme think: they wasted their time with a refresh, and their dGPU is behind their iGPU, so optimistically they could have done a B770 in early 2025. Then a C770 could have come by mid-2026.
For over a year now it has seemed that the next dGPUs after the BMG series were going to use newer IP than Xe3, with real volume planned for somewhere around 2027 (or maybe end of 2026, but that's doubtful).

If memory pricing doesn't ease enough by 2028 then they might not do another gaming dGPU until then (assuming they want to do dGPUs at all), and if they do, it will use Xe3P or possibly newer IP.

It's hard for me to see how it would benefit Nvidia to stay in an extremely niche market, so if the partnership becomes long-term, I can't see an outcome other than the Xe team being replaced entirely.
The partnership might be primarily DC-side, though the scope might be far from final. Even if Intel hypothetically abandons dGPUs and AI accelerators, why would they necessarily need Nvidia for their iGPUs? In that scenario they could keep a smaller team for just iGPU and graphics IP.
Nvidia isn't abandoning its ARM CPU roadmaps in either DC (Vera->Rosa and so on) or consumer (N1/GB10 and successors) for this partnership, nor has it officially committed to using Intel's process nodes for any of its products yet. With Intel's management anything is possible, but abandoning GPU IP completely seems beyond stupid.

There are some rumors about Titan Lake using Nvidia "iGPUs", but that's most likely a misinterpretation of Serpent Lake (a halo APU with an Nvidia GPU) being moved out of the Titan Lake series into its own separate thing.

If Intel does well with their own IP, then the question is why not expand everywhere that niche Nvidia parts are being shoehorned in? If they have a class-leading part, then why not Xe in laptops and halo APUs? Being perf/W-uncompetitive is why they don't have a laptop variant anymore.
iGPUs are happening and even Halo (RZL-AX) could come. Laptop dGPUs took a heavy blow with the Alchemist mobile dGPUs, with too many things missing and broken. But laptop dGPUs have been Nvidia-only territory for some time now anyway.
 
Xe3P | Crescent Island (CRI) | 8 | 8 | 256-bit | TBD | 16 |

Apparently the same number of XVEs and the same SIMD16 ALU width on Xe3P (unless it's a placeholder entry). There is a bigger register file, if that matters.


Controls the register file size for allocation. Supported values vary by platform:

Pre-XeHP: 128 only
XeHP through Xe2: additionally 256 or auto
Xe3: additionally 32, 64, 96, 160 or 192
Xe3P: additionally 512
Xe3PLPG: additionally 320 or 448
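The list above can be sketched as a lookup table. A hedged sketch (not the real driver code): "additionally" is read as cumulative, so each generation inherits its predecessors' values, and the platform keys are made up for illustration.

```python
# Supported GRF allocation sizes per platform, per the list above.
PRE_XEHP = {128}                          # Pre-XeHP: 128 only
XEHP_TO_XE2 = PRE_XEHP | {256}            # additionally 256 ('auto' omitted here)
XE3 = XEHP_TO_XE2 | {32, 64, 96, 160, 192}
XE3P = XE3 | {512}
# Whether Xe3PLPG also inherits Xe3P's 512 is not clear from the list,
# so only its explicitly listed additions are layered on top of Xe3.
XE3P_LPG = XE3 | {320, 448}

def supported_grf_sizes(platform: str) -> set[int]:
    """Numeric register-file sizes a platform accepts ('auto' left out)."""
    return {
        "pre-xehp": PRE_XEHP,
        "xehp-xe2": XEHP_TO_XE2,
        "xe3": XE3,
        "xe3p": XE3P,
        "xe3plpg": XE3P_LPG,
    }[platform.lower()]

print(sorted(supported_grf_sizes("xe3p")))
# [32, 64, 96, 128, 160, 192, 256, 512]
```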

And here's a feature list:


The list is quite a bit bigger than Xe3's. I haven't fully checked whether there's anything interesting. One thing I noticed is that Xe3LPG has a feature called FeatureHasEfficientSIMD32, whereas Xe3/Xe3P does not.
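Differences like the one noted above fall out of a simple set comparison of the two feature lists. A sketch: FeatureHasEfficientSIMD32 is the flag observed in the driver, while the other feature names here are placeholders, not taken from the real lists.

```python
# Spotting per-platform feature differences with set arithmetic.
xe3lpg = {"FeatureHasEfficientSIMD32", "FeatureExampleA", "FeatureExampleB"}
xe3p = {"FeatureExampleA", "FeatureExampleB", "FeatureExampleC"}

print(sorted(xe3lpg - xe3p))  # only in Xe3LPG: ['FeatureHasEfficientSIMD32']
print(sorted(xe3p - xe3lpg))  # only in Xe3P:   ['FeatureExampleC']
```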
 
The partnership might be primarily DC-side, though the scope might be far from final. Even if Intel hypothetically abandons dGPUs and AI accelerators, why would they necessarily need Nvidia for their iGPUs? In that scenario they could keep a smaller team for just iGPU and graphics IP.
Any way you think of it, a long-term Nvidia partnership is a bad thing for Intel GPUs, and holistically, not just in niche segments.

The last sentence is addressed by the strategies Pat Gelsinger had, which made sense. They said: we're already spending money and time on the iGPU, why not monetize it? iGPUs aren't monetized directly, only indirectly through sales (instead of zero sales if they didn't have an iGPU). But people aren't buying iGPUs directly.

The solution was then dGPUs, to build on that development. In the long term, having only iGPUs is suboptimal. And we know that if it goes back to iGPUs only, they will "optimize" for iGPU, meaning drivers and support won't be as good as if they had dGPUs.
iGPUs are happening and even Halo (RZL-AX) could come. Laptop dGPUs took a heavy blow with the Alchemist mobile dGPUs, with too many things missing and broken. But laptop dGPUs have been Nvidia-only territory for some time now anyway.
If Intel has the most efficient uarch, there's no reason to keep Nvidia. So if you don't have a partnership already, then even if yours is slightly behind, it's still worth going with yours. If you are in one, though, then the long term is in doubt.

The only reason they don't have a mobile dGPU is because they are behind. It's the same reason Nvidia dominates mobile GPUs. On desktop you can compensate with a more power-hungry GPU; you can't do that in laptops. It's a contradictory strategy fueled by Intel's desperate need for cash because of their own bad choices.
 
Any way you think of it, a long-term Nvidia partnership is a bad thing for Intel GPUs, and holistically, not just in niche segments.

The last sentence is addressed by the strategies Pat Gelsinger had, which made sense. They said: we're already spending money and time on the iGPU, why not monetize it? iGPUs aren't monetized directly, only indirectly through sales (instead of zero sales if they didn't have an iGPU). But people aren't buying iGPUs directly.

The solution was then dGPUs, to build on that development. In the long term, having only iGPUs is suboptimal. And we know that if it goes back to iGPUs only, they will "optimize" for iGPU, meaning drivers and support won't be as good as if they had dGPUs.
Halo APUs might be a niche segment at present, but DC isn't, and maybe there's more focus on Xeon with NVLink than on Nvidia iGPUs for now? Nvidia doesn't have an entry into the x86 iGPU market without Intel or AMD, so ideally Intel shouldn't give up that privilege easily, but with Intel's management, who knows...

Yes, with dGPUs one angle was/is monetizing more from your current team, but they are also still mostly trying to play for DC/accelerators through broader adoption and software bring-up assistance via dGPUs. After the Xeon Phi exit they pushed the "Gen" team toward compute, HPC, and later AI.

Intel shouldn't quit dGPUs or Halo (-AX), but even if management hypothetically did, iGPUs could still survive. They probably won't go back to the previous model, which wasn't as focused on gaming or Pro workloads, but yes, without dGPUs some studios might not consider working on them.

Either way, you can't rely on Nvidia for IP, and for how long can you depend on them? What happens if, some years from now, the ARM ecosystem is mature enough for gaming? You can't be certain Nvidia would prioritize x86 APUs equally with its own ARM ones.
If Intel has the most efficient uarch, there's no reason to keep Nvidia. So if you don't have a partnership already, then even if yours is slightly behind, it's still worth going with yours. If you are in one, though, then the long term is in doubt.

The only reason they don't have a mobile dGPU is because they are behind. It's the same reason Nvidia dominates mobile GPUs. On desktop you can compensate with a more power-hungry GPU; you can't do that in laptops. It's a contradictory strategy fueled by Intel's desperate need for cash because of their own bad choices.
Even if in a few years they could be on par with Nvidia, which is a big leap, that alone won't be enough. Ecosystem, mindshare, Pro apps, etc. favor Nvidia, and Intel won't dare try laptop dGPUs anytime soon after the Alchemist debacle unless they achieve parity in those other areas as well, which is a long shot. Even AMD has not made laptop dGPUs of their current IP yet.

The Nvidia deal combined with Intel's opaqueness raises obvious concerns, but the complete death of the graphics team as a conclusion still needs evidence and time? Surely anything can happen.
 
oh ok, my bad.

PTL Intel Graphics (4 Xe3) compared with B390 and some others in 5 games
Yeah, it's pretty much last-gen performance even though there's half the number of compute units.

I want to see a desktop Core i3 with that 12-Xe3 GPU, but that's unlikely to happen. They could charge $100 over the regular Core i3 and it would be worth it with the latest API support, plus being integrated and all.
 
I want to see a desktop Core i3 with that 12-Xe3 GPU, but that's unlikely to happen. They could charge $100 over the regular Core i3 and it would be worth it with the latest API support, plus being integrated and all.

Lunar Lake i9s have 8... you want to increase the Xe core count by 50% to 12 on an i3? It would cost way more than $100.

I don't think at that point it's worth it over a dGPU option, since you even get faster GDDR RAM.
At that point you're doing more Xe computation, and you're probably better off getting an Intel dGPU, to keep energy draw and heat off a single source and spread it across devices.

Unless you need it as an ultra-mini, but then you can get a separate GPU.
 
Lunar Lake i9s have 8... you want to increase the Xe core count by 50% to 12 on an i3? It would cost way more than $100.
It won't cost anywhere near that much in silicon.
I don't think at that point it's worth it over a dGPU option, since you even get faster GDDR RAM.
At that point you're doing more Xe computation, and you're probably better off getting an Intel dGPU, to keep energy draw and heat off a single source and spread it across devices.
What dGPU can you get with this level of performance and features for $100? It doesn't exist. This is twice the performance of the Intel A380 and RX 6400, which are pretty much the cheapest modern cards you can get. Based on benchmark results they would need to cut it down to 4 Xe3 cores to be roughly equal to those, while still having a more recent uarch.

Xe3 doesn't need GDDR to outperform them. It's very bandwidth efficient.

And while the A380 and RX 6400 are on N6 and the Xe3 is on N3E, the A380 is 150mm2, the RX 6400 is 100mm2, and the Xe3 is only 50mm2 and won't need a complex board, power stages, connectors, or VRAM. Those dGPUs sell for well over $100.
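The die-size comparison above as a back-of-the-envelope sketch: the performance figures follow the post's claim that the 12-Xe3 iGPU is roughly twice an A380/RX 6400, and since the parts are on different nodes, perf per mm2 isn't strictly apples-to-apples.

```python
# Rough perf-per-area comparison from the numbers in the post.
parts = {
    # name:           (die_mm2, rel_perf with A380 = 1.0)
    "A380 (N6)":      (150, 1.0),
    "RX 6400 (N6)":   (100, 1.0),
    "12x Xe3 (N3E)":  (50,  2.0),
}
for name, (area, perf) in parts.items():
    print(f"{name}: {area} mm2 -> {perf / area * 100:.1f} perf per 100 mm2")
```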
 
So G31 is ~35% bigger than G21 for ~45% more performance.
I don't know if Level1Techs' B70 is a 230W part or what. The B60 is ~10% lower in gaming performance than the B580, so that's only ~1.35x over the B580. If that's done at 230W, maybe at 300W the gaming performance could be 45% over the B580. If it's 290W, then that's quite poor.
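The scaling arithmetic above as a quick sketch (the numbers are the post's rough estimates, not measurements):

```python
# Normalize gaming performance to the B580 and chain the two estimates.
b580 = 1.00
b60 = 0.90 * b580      # B60 games ~10% below the B580
b70_est = 1.50 * b60   # "roughly 50% faster than B60"
print(f"B70 estimate: {b70_est:.2f}x a B580")  # 1.35x
```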
 
I don't know if Level1Techs' B70 is a 230W part or what. The B60 is ~10% lower in gaming performance than the B580, so that's only ~1.35x over the B580. If that's done at 230W, maybe at 300W the gaming performance could be 45% over the B580. If it's 290W, then that's quite poor.
Is Intel's reference 230W? Also, a B770 would probably be faster than the B70 by a similar margin as the B580 is over the B60.
 
So B770 is a 368mm2 die.
Big, but that was expected.

Some factors that affect Intel dGPUs' die area:
  • Use of HP libraries (see B580)
  • Requiring a larger memory bus to compete
  • 2 full-featured media engines that take a decent amount of die area
Some of it can be mitigated with a better/more competitive uarch and/or a switch to HD libraries. Switching to internal fabs later on might help offset costs somewhat. If they continue with dGPUs, maybe they would go directly to Xe4 dGPUs in 2028, but that's too far out.
 
Intel shouldn't quit dGPUs or Halo (-AX), but even if management hypothetically did, iGPUs could still survive. They probably won't go back to the previous model, which wasn't as focused on gaming or Pro workloads, but yes, without dGPUs some studios might not consider working on them.
Intel themselves only focused on drivers when the dGPUs came out, because the mentality makes sense: with a dGPU you are selling directly and thus directly responsible to consumers, whereas with iGPUs you leave it up to system integrators.
Even if in a few years they could be on par with Nvidia, which is a big leap, that alone won't be enough. Ecosystem, mindshare, Pro apps, etc. favor Nvidia, and Intel won't dare try laptop dGPUs anytime soon after the Alchemist debacle unless they achieve parity in those other areas as well, which is a long shot. Even AMD has not made laptop dGPUs of their current IP yet.
Yes, but ecosystem and mindshare come first from having a leadership GPU and architecture basically forever. If they can do that (which is the doubtful part) for a decade, they will achieve the same. Winning once in a while, like AMD did with R300 or more recently RDNA2, isn't enough. There's such a thing as a one-hit wonder, where people forget about you afterward.
 
Intel themselves only focused on drivers when the dGPUs came out, because the mentality makes sense: with a dGPU you are selling directly and thus directly responsible to consumers, whereas with iGPUs you leave it up to system integrators.
Back then it was like that mainly because upper management didn't have any big gaming plans for the iGPU, the dGPU plans were yet to come, and they hadn't updated the iGPU much from 6th to 10th gen (at least 7th gen onwards got a much-needed media engine upgrade). They have already indicated a willingness to work on handhelds, which is still a relatively small market. Even Qualcomm would likely improve focus on their iGPUs if they could get significant share in PC, even though they are quite a bit behind on both software and hardware wrt graphics. Again, Intel should continue with dGPUs, but if in the worst case they don't, they can still keep focusing on games and other apps, even if somewhat limited. Overall, yes, without dGPUs this is a different game, and they shouldn't think of abandoning dGPUs, as that could set them back a lot of years.


Yes, but ecosystem and mindshare come first from having a leadership GPU and architecture basically forever. If they can do that (which is the doubtful part) for a decade, they will achieve the same. Winning once in a while, like AMD did with R300 or more recently RDNA2, isn't enough. There's such a thing as a one-hit wonder, where people forget about you afterward.
Yeah, one would need to be on par or better consistently and build the ecosystem as well.
Maybe with LPDDR5X/6 both Intel and AMD can make better/more creative use of their (i)GPU chiplets in a variety of combinations against entry/mid Nvidia dGPUs, and there's also the prospect of Halo-tier APUs? This year's Dell XPS line has so far gone the iGPU-only route.




Version 3.1.0 alpha: Supported Hardware

  • Intel Core Series 3 Processors with 12GB+ of system memory (WCL)
  • Intel Core Ultra Series 3 Processors (PTL)
  • Intel Core Ultra Series 2 (H) Processors (ARL-H)
  • Intel Core Ultra Series 2 (V) Processors (LNL)
  • Intel Core Ultra Series 1 (H) Processors (MTL-H)
  • Intel Arc B Series GPU Cards (BMG)
  • Intel Arc A Series GPU Cards with 8GB+ Memory (ACM)
  • Nvidia RTX GeForce GPUs
Intel's AI Playground added support for RTX GPUs.
 