News Intel GPUs - Battlemage officially announced, evidently not cancelled

Page 186

DavidC1

Senior member
Dec 29, 2023
B980 same as above except maybe 5120 shaders and 225W TBP at $399.

B580 may deliver the same performance as A770 at a lower power limit of 150W and priced at $199.

Baked at TSMC at an optic resolution of 4 nanomeetahs!

Launch and availability possibly sometime in November or afterwards.
From RGT to this guy, they're really confusing themselves on the specs.

Battlemage doesn't have "1.25x IPC". Battlemage has twice the Xe cores per the antiquated EU naming, so a seeming "512 EU" Battlemage has twice the compute capability. In fact, unlike RDNA3's dual issue, it is a literal doubling in capability, and it comes at the cost of extra transistors. RDNA3 failed to deliver 2x performance because it literally does not have enough transistors for it, and it was Angstronomics' reveal of the die size that sealed the deal.

Expecting large gains per core per clock from a GPU is silly. Unlike a CPU, where extractable parallelism is scarce, GPU workloads are the definition of an "embarrassingly parallel" workload, so adding more units is how you get a faster GPU.

So a 32 Xe core Battlemage will be at best 20% faster than an A770, and only if it comes with 25-30% faster clocks plus faster memory. Either that ends up being the anemic top-end part, or there's a 64 Xe core part with true next-generation performance.

You cannot have RTX 4070 Ti performance as he's projecting without the top end B980 card having 64 Xe cores.
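The scaling argument above can be put into a back-of-the-envelope sketch. For an embarrassingly parallel workload, perfect scaling is simply units × clock; real games land below that ceiling. The A770's 32 Xe cores and ~2.1 GHz clock are public specs, while the ~2.7 GHz Battlemage clock used here is just the rumored figure under discussion, not a confirmed number:

```python
# Naive throughput model for an embarrassingly parallel workload:
# relative performance ceiling ~ (units ratio) x (clock ratio).
def relative_perf(cores_new, clock_new_ghz, cores_old=32, clock_old_ghz=2.1):
    """Return the new/old throughput ratio under perfect scaling
    (ignores memory bandwidth and other real-world limits)."""
    return (cores_new / cores_old) * (clock_new_ghz / clock_old_ghz)

# 32 Xe cores at ~2.7 GHz (a ~28% clock bump): a ~1.29x ceiling,
# consistent with "at best 20% faster" after real-world scaling losses.
print(round(relative_perf(32, 2.7), 2))   # ~1.29

# 64 Xe cores at the same clock: a true generational jump.
print(round(relative_perf(64, 2.7), 2))   # ~2.57
```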
Having crud products affects it even more. If you can't make a great small die product, you cannot hope to make a large die product that's even good.
You are essentially saying Intel should have put out an "A480" part rather than the A770, something slotting between what is seen as the "video playback only" A380 and the mostly-ignored A580. They would have needed to price it at $140-150 for it to even make sense.

A 20% improvement on top of A770 in late 2024/early 2025 is "B480".

People put too much emphasis on the cost of die area. It is not as significant as you think. It only matters if the extra die area delivers near-zero performance, and thus hurts market positioning.

There's a reason silicon is called a 21st century gold mine. The cost of the silicon is a fraction of the final price. It's not real estate, where the cost of the final product is determined per mm².
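The "silicon is a fraction of the final cost" point can be illustrated with the standard dies-per-wafer approximation. The ~406 mm² figure for the A770's ACM-G10 die is widely reported; the ~$10,000 TSMC N6 wafer price is a rough public ballpark (an assumption, not a known figure), and yield is ignored:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Classic dies-per-wafer approximation: wafer area over die area,
    minus an edge-loss correction term (ignores yield and scribe lines)."""
    r = wafer_diameter_mm / 2
    return math.floor(math.pi * r**2 / die_area_mm2
                      - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# ACM-G10 (A770) is ~406 mm2; assume ~$10,000 per TSMC N6 wafer (rough guess).
dies = dies_per_wafer(406)
cost_per_die = 10_000 / dies
print(dies, round(cost_per_die))        # ~141 dies, ~$71 of silicon per die
print(round(cost_per_die / 349 * 100))  # silicon as % of a $349 card: ~20%
```

Even for a large die, the raw silicon ends up being on the order of a fifth of the card's retail price under these assumptions.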
 

DavidC1

Senior member
Dec 29, 2023
If this is true, then I'll buy one.
32 Xe cores at 3GHz with the rest of those specs is RTX 4060 Ti performance at best, but in 2025. So they'll still be behind the RTX 3070, which was the original expectation for the A770 based on die size, except years later, in 2025.

It's essentially fictional non-fiction, where the writers are making stories about the real world.
 

DavidC1

Senior member
Dec 29, 2023
I know it's early, but reports on Xe2 do not sound very good.

Intel is claiming only a 1.5x increase over Meteor Lake's iGPU with Xe2. This is not enough. The RTX 4060 Ti is already 20-30% faster than the A770 with a greater than 50% difference in power use, not at the chip level but at the card level! It might explain the strange quietness and the rumored absence of notebook parts.

They needed 2x to make a bang. I hope Xe2 is better, but this is an official sign of possible troubles. If it were really good they'd have said more than "the GPU's AI performance is awesome!"
 
Jul 27, 2020
They needed 2x to make a bang. I hope Xe2 is better, but this is an official sign of possible troubles. If it were really good they'd have said more than "the GPU's AI performance is awesome!"
It's a compute part first and foremost. I don't think Intel cares THAT much about gaming. If it's a passable third option behind AMD, I don't mind. Intel would need to set up a world class GPU team with zero interference from management to beat AMD in the GPU performance/W dept, let alone Nvidia. I don't think Intel has assigned as many engineers to their GPU effort as AMD. If they have or if their team is bigger, then wow. Mismanagement galore!
 

DavidC1

Senior member
Dec 29, 2023
It's a compute part first and foremost. I don't think Intel cares THAT much about gaming. If it's a passable third option behind AMD, I don't mind. Intel would need to set up a world class GPU team with zero interference from management to beat AMD in the GPU performance/W dept, let alone Nvidia. I don't think Intel has assigned as many engineers to their GPU effort as AMD. If they have or if their team is bigger, then wow. Mismanagement galore!
Sorry, that is cope. Gaming performance does matter. Besides, their GPU compute market share is a blip on the radar compared to even their dGPU gaming market share.
 

ToTTenTranz

Member
Feb 4, 2021
I know it's early, but reports on Xe2 do not sound very good.

Intel is claiming only a 1.5x increase over Meteor Lake's iGPU with Xe2. This is not enough.
1.5x over Meteorlake isn't that bad as it's clearly above Phoenix (when the drivers are working properly). Strix Point will be faster than Phoenix but it'll be even more bandwidth-starved, so the actual gaming performance upgrade from Phoenix to Strix Point could be miserable.


Is it because the GPUs aren't coming out at all, or because they are coming but those specs are wrong?
 

DavidC1

Senior member
Dec 29, 2023
1.5x over Meteorlake isn't that bad as it's clearly above Phoenix (when the drivers are working properly). Strix Point will be faster than Phoenix but it'll be even more bandwidth-starved, so the actual gaming performance upgrade from Phoenix to Strix Point could be miserable.
Meteor Lake at low power is significantly behind AMD. Yes, at 30-40W they are OK. At 15-20W they are not, in either CPU or GPU.

Also this is a dGPU thread. 1.5x over Alchemist in late 2024 is bad.
Is it because the GPUs aren't coming out at all, or because they are coming but those specs are wrong?
Obviously he's pointing to the specs. The Xe core and ALU counts make no sense.
 

Kepler_L2

Senior member
Sep 6, 2020
1.5x over Meteorlake isn't that bad as it's clearly above Phoenix (when the drivers are working properly). Strix Point will be faster than Phoenix but it'll be even more bandwidth-starved, so the actual gaming performance upgrade from Phoenix to Strix Point could be miserable.
It's 1.5x over the 64 EU version of Meteor Lake iGPU.
Is it because the GPUs aren't coming out at all, or because they are coming but those specs are wrong?
He's a clueless youtuber and doesn't know anything about specs, perf or price.
 

Hans Gruber

Platinum Member
Dec 23, 2006
People keep talking about the efficiency of the GPUs from AMD, Nvidia and Intel. This upcoming generation will be the first where all three companies are on some variant of TSMC 4nm. AMD's N5 really didn't cut it against Nvidia on a 4nm variant with the 40 series. Nvidia is going with whatever the hot-rod 4nm TSMC silicon is for Blackwell. From what I have read, AMD is going with N4P, and Intel is going with a variant of N4 as well. It will be as close to an apples-to-apples comparison as we've had in a GPU generation.

The reason ARC never reached its full potential was hardware design flaws in the architecture. They used software workarounds to make the GPUs work quite well. The A750 overperformed expectations. The A770 was supposed to be better than a 3070 on paper; it never came close to that performance.

Supposedly, Battlemage will fix the hardware design flaws that ARC had. One could argue that AMD had the same problem with RDNA3, never admitting that 20-25% of the GPU in the higher SKUs was disabled or non-functional. On the bright side for AMD, if RDNA4 is cooked right, it will overperform, and will do it on much better silicon than N5.

I will add that ARC was on N6 (7nm), which was a large step behind even N5 in efficiency. That means Battlemage should see huge efficiency gains simply from being on some variant of N4. If Intel is shooting for 4070 performance in the $200-300 price range, they would have a real mainstream winner, provided the architecture and drivers are sound.

If Nvidia knocks it out of the park with Blackwell on similar silicon, I wouldn't know what to think. As for AMD, they seem focused on margins rather than mass-producing good products at an affordable price. From a consumer standpoint, the correct way of thinking is to buy cheap and upgrade often. That is why the 1060/2060/3060 cards dominate the Steam survey. Before those it was the 970, because the 960 sucked.

The 4060 cranks out its performance with only 120W of power, maybe even a little less. The pricing was wrong with the 4060 from the beginning. If Battlemage has a card that is equal to or better than the 4060 at 120W for $200, we would know whether the 40 series owed more to the silicon than to the efficiency of the design. It's probably a little of both.
 

ToTTenTranz

Member
Feb 4, 2021
Also this is a dGPU thread. 1.5x over Alchemist in late 2024 is bad.
1.5x over what in Alchemist? 50% over the A770? At what price?

If we're looking at 1.5x A770 then it's around RX 7700XT performance at 1440p. If Intel offers that level of performance for e.g. $350-380 with 16GB VRAM then it's not a bad product at all, especially assuming it has raytracing performance between a 4060 Ti and a 4070.

Sure, the margins at that point with a large chip would be terrible when compared to Nvidia, but let Intel worry about that.
 
Jul 27, 2020
Sure, the margins at that point with a large chip would be terrible when compared to Nvidia, but let Intel worry about that.
Precisely. They've taken enough from the PC builder community over the last decade with their crappily stagnant Skylake and derivatives. Time for them to give something back by eating some loss on their GPUs :)
 

ToTTenTranz

Member
Feb 4, 2021
Precisely. They've taken enough from the PC builder community over the last decade with their crappily stagnant Skylake and derivatives. Time for them to give something back by eating some loss on their GPUs :)

I doubt they'd suffer any losses, to be honest. Truth is AMD and Nvidia got way too comfortable with the ridiculous margins they got from both crypto-booms and now they can't really tell their investors their margins and ASP for consumer GPUs are going way down.
Intel never really went through that, because they had no GPUs to sell during the height of the crypto-craziness, so it's perhaps easier for them to accept lower margins. Also, as newcomers, they need to invest to gain market share if they ever really want a chance here.
 

mikk

Diamond Member
May 15, 2012
The MSI Claw got another improvement with BIOS 1.09, on top of the already improved BIOS 1.06. Faster than the ROG Ally, MSI says.

 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
Having used ARC cards over the last year, the one thing that is certain is that whatever you think the performance is, there is more to be had in many titles. You can never look at a snapshot of ARC and have it stay the same. There will be games that change performance tiers over time as the team optimizes for them.

The bigger issue is that the Claw has QoL problems, and stiff competition keeping it from being worth what is being charged.
 

DavidC1

Senior member
Dec 29, 2023
Sure, the margins at that point with a large chip would be terrible when compared to Nvidia, but let Intel worry about that.
People like you constantly make this argument but it's a fallacy, especially for a mega corp like Intel. Their worries are eventually going to turn into cancellations.

1.5x over A770 in 2025 is a failure. That's RTX x50 territory.