
News Intel GPUs - we've given up on B770, where's Celestial already

My biggest hope is that XeSS is able to do its magic on a native rendered resolution of 900p on Renoir/Cezanne Vega 8, upscaling to 1080p. Looking at what's written about FSR 2.0, it seems that its overhead is going to be too much for anything from a 1060 on down, which would make it next to useless for those APUs. XeSS, being able to use the DP4a matrix math that is available on Radeon VII and the 7nm APUs, could make them at least somewhat viable. As for Rembrandt, I suspect that it's going to love XeSS and FSR2 in the 6WGP setup and XeSS in the half config.
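For context on the DP4a path mentioned above: DP4a is a dot product of two packed 4× int8 vectors accumulated into a 32-bit integer, which is what lets XeSS run its matrix math on GPUs without dedicated XMX units. A rough Python emulation of the operation (function names are mine, not from any SDK):

```python
import numpy as np

def pack_i8x4(vals):
    """Pack four signed 8-bit ints into one 32-bit word (little-endian)."""
    return int.from_bytes(bytes(v & 0xFF for v in vals), "little", signed=True)

def dp4a(a_packed, b_packed, acc):
    """Emulate a DP4a instruction: treat each 32-bit input as four int8
    lanes, take their dot product, and accumulate into a 32-bit integer."""
    a = np.frombuffer(np.int32(a_packed).tobytes(), dtype=np.int8).astype(np.int32)
    b = np.frombuffer(np.int32(b_packed).tobytes(), dtype=np.int8).astype(np.int32)
    return int(a @ b) + acc
```

For example, `dp4a(pack_i8x4([1, 2, 3, 4]), pack_i8x4([5, 6, 7, 8]), 10)` computes 1·5 + 2·6 + 3·7 + 4·8 + 10 = 80. One such instruction does four multiply-accumulates per clock per lane, which is why the DP4a fallback is so much cheaper than doing the same math in fp32.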
 

I'm going to go out on a very short and sturdy limb here and say I don't think you're getting your wish. TAA upscaling works best when target resolution is well above the amount of detail in a given scene no matter the details of how it's done. Ironically this makes it most useful for scaling to 4k and least useful where it's needed most, scaling on lower end hardware to lower resolutions.

Not that it won't "work", just that the quality will be kinda shite and obviously so.
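The resolutions being discussed can be compared with a trivial helper that reports the linear scale factor and the pixel-count ratio of an upscale (this is just back-of-the-envelope arithmetic, not how any upscaler measures quality):

```python
def upscale_stats(src, dst):
    """Linear scale factor and pixel-count ratio for an upscale.
    src and dst are (width, height) tuples."""
    (sw, sh), (dw, dh) = src, dst
    return dh / sh, (dw * dh) / (sw * sh)

# 900p -> 1080p (the APU case) vs 1080p -> 4K (where TAA upscalers shine)
apu = upscale_stats((1600, 900), (1920, 1080))     # (1.2, 1.44)
fourk = upscale_stats((1920, 1080), (3840, 2160))  # (2.0, 4.0)
```

The 4K case invents three new pixels for every source pixel, but the 1440p/4K target leaves plenty of headroom above the scene detail; the APU case has a gentler ratio yet starts from a frame that already undersamples the scene, which is the point being made above.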
 
Ironically this makes it most useful for scaling to 4k and least useful where it's needed most, scaling on lower end hardware to lower resolutions.

The only reason why so many people are sticking with that very old hardware is the GPU crisis. If the GPU market normalizes somewhat and 1080p and 1440p cards become fairly affordable, then I see upscaling to 1440p, 4k and 4k+ (VR) as the normal use case. You can already get 4k/60Hz monitors for $200-300, so I can see a lot of people going for that, especially if they don't play FPS, but games like Elden Ring.
 
Slides and stuff:


Looks interesting. I still think that Nvidia has to be the potentially biggest loser here, as AMD and Intel will squeeze in the laptop space; I for one would prefer to have one vendor's video card running things. I am beyond tired of the hybrid graphics that my Lenovo workstation attempts to employ. And to be clear, I think AMD will be squeezing with their iGPUs and not with mobile dGPUs, but Intel can likely offer some awesome bundling deals and include their GPUs in Evo-type platform solutions.
 
I still think that nvidia has to be the potentially biggest loser here as AMD and Intel will squeeze in the laptop space
Definitely. Nvidia's attempt to buy Arm might have been its effort to preserve a similar market of its own. As it is, it does look like it makes perfect market sense for Nvidia to focus even more on the server and high-end graphics markets, with (actual) mobile graphics potentially relegated to whatever is sufficient(ly cheap) to keep Nintendo's SoCs fed.
 
HWUB has die sizes and transistor counts (TSMC N6):

G10 - 406 mm², 21.7B transistors

G11 - 157 mm², 7.2B transistors



G11 managed to keep all the extras vs the 141 mm² 6500 XT at a seemingly minimal die-size increase - but does it do 3D better? Should be interesting to find out!

The 6500 XT has "only" 5.4bn transistors too, and since that includes more cache (which, I'm under the impression, is about as dense as silicon gets?), G11 is somehow a lot denser as well. I am not going to try to do that math and show you all how bad I am at it 😀

(corrected several times below - the 6500 XT was really 107 mm², so G11 is essentially 50% larger but has only 1.8bn more transistors, meaning it's also less dense)
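With the corrected figure on the table, the density math skipped above is just division. Using the numbers quoted in this thread (areas in mm², transistor counts in millions):

```python
# Die area (mm^2) and transistor count (millions), as quoted in the thread.
dies = {
    "Arc G10": (406, 21_700),
    "Arc G11": (157, 7_200),
    "Navi 24": (107, 5_400),  # corrected 6500 XT die size
}

# Density in millions of transistors per mm^2.
density = {name: xtors / area for name, (area, xtors) in dies.items()}
# Arc G10 ~53.4, Arc G11 ~45.9, Navi 24 ~50.5
```

So at the corrected 107 mm², Navi 24 comes out denser than G11 after all, matching the correction above.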
 
^^Design looks quite nice.

It's a rather nice package. AV1 decoder and encoder, ML stuff, ray tracing (at least theoretically)... Not too bad. Suddenly Nvidia's MX garbage sounds even worse than what they did before.
 
Everything I see says the 6500 XT's Navi 24 die is 107 mm², not 141, so the Intel die is much larger.
You would be correct. It's 107 mm², not 141 mm².

Hilariously G11 is also not very far from the size of an entire Rembrandt. Which, you know, packs an entire Zen 3 CCX in there too, alongside some other bits and pieces.
 

My bad! Sorry for spreading the misinformation. I swear the number was *not* a complete pull out of thin air!

Google was just so confident that I didn't read further down, where all the results say 107.
 