poke01
Diamond Member
If this is true, then I'll buy one.

B770 16GB:
32 Xe cores for 4096 shaders
boost clock between 2.8 and 3 GHz
16 or 32 MB of L2 cache
16 GB GDDR6 @ 20-24 Gbps
256-bit bus
768 GB/s total bandwidth
TBP ~200W
$299 price
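For what it's worth, the rumored numbers are at least internally consistent: 32 Xe cores at Alchemist's 128 ALUs per core gives 4096 shaders, and a 256-bit bus at 24 Gbps per pin gives exactly the quoted 768 GB/s. A quick sanity check of the arithmetic (Python; the helper names are mine, not from any leak):

```python
def shaders(xe_cores: int, alus_per_core: int = 128) -> int:
    """Shader (ALU) count: Xe cores x ALUs per core (128 on Alchemist-style Xe cores)."""
    return xe_cores * alus_per_core

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width / 8 bits per byte) x per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(shaders(32))                # 4096, matching the rumored B770
print(bandwidth_gb_s(256, 24.0))  # 768.0 GB/s, matching the quoted total
print(bandwidth_gb_s(256, 20.0))  # 640.0 GB/s at the low end of the range
```

Note the 768 GB/s total only works out at the 24 Gbps end of the quoted 20-24 Gbps range.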
From RGT to this guy, they are really confusing themselves on the specs.

B980: same as above, except maybe 5120 shaders and 225W TBP, at $399.
B580: may deliver the same performance as the A770 at a lower power limit of 150W, priced at $199.
Baked at TSMC at an optic resolution of 4 nanomeetahs!
Launch and availability possibly sometime in November or afterwards.
> Having crud products affects it even more. If you can't make a great small-die product, you cannot hope to make a large-die product that's even good.

You are essentially saying Intel should have gotten an "A480" part out rather than the A770, something that slots between what is seen as the "video playback only" A380 and the mostly-ignored A580. They would have needed to price it at $140-150 to even make sense.
> If this is true, then I'll buy one.

It's not.
> If this is true, then I'll buy one.

32 Xe cores at 3 GHz with the rest of those specs is RTX 4060 Ti performance at best, but in 2025. So they'll still be behind the RTX 3070, which was the original expectation for the A770 based on die size, but 2 years later, in 2025.
> It's not.

🙁
> You cannot have RTX 4070 Ti performance as he's projecting without the top-end B980 card having 64 Xe cores.

I could see myself paying $500 for it, but if it's $399, where's the pre-order page????
> They needed 2x to make a bang. I hope Xe2 is better, but this is an official sign of possible troubles. If it was really good, they'd have said more than "the GPU's AI performance is awesome!"

It's a compute part first and foremost. I don't think Intel cares THAT much about gaming. If it's a passable third option behind AMD, I don't mind. Intel would need to set up a world-class GPU team with zero interference from management to beat AMD in the GPU performance/W department, let alone Nvidia. I don't think Intel has assigned as many engineers to their GPU effort as AMD has. If they have, or if their team is bigger, then wow. Mismanagement galore!
> It's a compute part first and foremost. I don't think Intel cares THAT much about gaming. If it's a passable third option behind AMD, I don't mind. Intel would need to set up a world-class GPU team with zero interference from management to beat AMD in the GPU performance/W department, let alone Nvidia. I don't think Intel has assigned as many engineers to their GPU effort as AMD has. If they have, or if their team is bigger, then wow. Mismanagement galore!

Sorry, that is cope. Gaming performance does matter. Besides, their compute market share for their GPUs is a blip on the radar compared to even their dGPU gaming market share.
> I know it's early, but reports on Xe2 do not sound very good. Intel is claiming only a 1.5x increase over Meteor Lake's iGPU with Xe2. This is not enough.

1.5x over Meteor Lake isn't that bad, as it's clearly above Phoenix (when the drivers are working properly). Strix Point will be faster than Phoenix, but it'll be even more bandwidth-starved, so the actual gaming performance upgrade from Phoenix to Strix Point could be miserable.
> It's not.

It's not because the GPUs aren't coming out at all, or because they are coming but those specs are wrong?
> 1.5x over Meteor Lake isn't that bad, as it's clearly above Phoenix (when the drivers are working properly). Strix Point will be faster than Phoenix, but it'll be even more bandwidth-starved, so the actual gaming performance upgrade from Phoenix to Strix Point could be miserable.

Meteor Lake at low power is significantly behind AMD. Yes, at 30-40W they are OK. At 15-20W they are not, neither CPU nor GPU.
> It's not because the GPUs aren't coming out at all, or because they are coming but those specs are wrong?

Obviously he's pointing to the specs. The Xe core and ALU counts make no sense.
> 1.5x over Meteor Lake isn't that bad, as it's clearly above Phoenix (when the drivers are working properly). Strix Point will be faster than Phoenix, but it'll be even more bandwidth-starved, so the actual gaming performance upgrade from Phoenix to Strix Point could be miserable.

It's 1.5x over the 64 EU version of the Meteor Lake iGPU.
> It's not because the GPUs aren't coming out at all, or because they are coming but those specs are wrong?

He's a clueless YouTuber and doesn't know anything about specs, performance, or price.
> Besides, their compute market share for their GPUs is a blip on the radar compared to even their dGPU gaming market share.

Then why are people bothering with oneAPI? https://github.com/oneapi-community/awesome-oneapi
> The A750 overperformed relative to what was expected.

I hope Intel doesn't make the same mistake with Battlemage and equips the B750 with 12GB of VRAM. I don't know why Intel thought having two 8GB cards (A580 and A750) in their line-up was a good idea.
> Also, this is a dGPU thread. 1.5x over Alchemist in late 2024 is bad.

1.5x over what in Alchemist? 50% over the A770? At what price?
> Sure, the margins at that point with a large chip would be terrible when compared to Nvidia, but let Intel worry about that.

Precisely. They've taken enough from the PC builder community over the last decade with their crappily stagnant Skylake and derivatives. Time for them to give something back by eating some loss on their GPUs 🙂
> Faster than the ROG Ally, MSI says.

Most likely compared to the Z1, not the Z1 Extreme.
> Sure, the margins at that point with a large chip would be terrible when compared to Nvidia, but let Intel worry about that.

People like you constantly make this argument, but it's a fallacy, especially for a mega-corp like Intel. Their worries are eventually going to turn into cancellations.