Info [Digital Foundry] Xbox Series X Complete Specs + Ray Tracing/Gears 5/Back-Compat/Quick Resume Demo Showcase!


Det0x

Golden Member
Sep 11, 2014
1,027
2,953
136
Much more information can be found in the video.

[Attached: four spec-sheet screenshots from the Digital Foundry video]

For comparison: NVIDIA claims that the fastest Turing parts, based on the TU102 GPU, can handle upwards of 10 billion ray intersections per second (10 GigaRays/second) @ Anandtech

Not sure if this should be posted in the "Graphics Cards" or "CPUs and Overclocking" forum; admins can delete one of the threads.
 
Last edited:

Gideon

Golden Member
Nov 27, 2007
1,608
3,573
136
My big surprise with both consoles is the low RAM amount.

I was expecting more than 16GB:
- OSes are more memory hungry
- 4K requires 2x more RAM capacity
- RT requires more RAM
- Spatial audio requires more RAM
- Games should keep growing in complexity, so they need more RAM too
It seems the market needs a new type of intermediate memory: more capacity than GDDR6/GDDR5/DDR4, but slower and cheaper.
Probably something very similar to NVMe, but with less capacity and cheaper, without the need for full flash capability.

Yeah, it's just that any kind of RAM gets prohibitively expensive past 16GB, so they opted to optimise the crap out of I/O and use the SSD as a RAM extension (I really suggest watching the PS5 presentation about the SSD they use and how it differs from just using an m.2 drive in a current PC; it's very informative).

On PCs, a 10x faster SSD usually means only ~2x better loading times. Both new consoles have several fixed-function blocks that cut out a lot of the filesystem/OS/CPU overhead.

On PS5 they aimed for 100x faster loading (compared to an HDD) with a 100x faster SSD. Their target was 5.5 GB/s raw (which they sometimes beat significantly with compression). This means you can keep a lot less in video memory and stream assets in on the fly at quite impressive speeds (hundreds of milliseconds).

Xbox, for instance, has 100GB of SSD space set aside for this purpose. The Sampler Feedback feature of DX12 Ultimate means you can even keep parts of textures out of GPU memory (Microsoft estimates that often only ~50% of a large texture is actually sampled at a time).
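
To put those figures in perspective, here's a quick back-of-envelope calculation in C++. The 5.5 GB/s and ~50%-sampled figures come from the posts above; the 2 GB per-scene streaming budget and ~55 MB/s HDD rate are purely illustrative assumptions, not official numbers:

```cpp
#include <cstdio>

int main() {
    // Figures quoted in this thread (targets, not measured numbers):
    const double ssd_gbps     = 5.5;    // PS5 raw SSD target, GB/s
    const double hdd_gbps     = 0.055;  // ~55 MB/s, a typical last-gen console HDD
    const double sampled_frac = 0.5;    // MS estimate: ~50% of a large texture is sampled

    // Hypothetical per-scene streaming budget (illustrative assumption only).
    const double scene_gb = 2.0;

    printf("HDD refill of %.0f GB:  %.0f s\n",  scene_gb, scene_gb / hdd_gbps);
    printf("SSD refill of %.0f GB:  %.0f ms\n", scene_gb, scene_gb / ssd_gbps * 1000.0);

    // With sampler feedback, only the parts actually sampled need to be resident,
    // so the refill shrinks further:
    printf("SSD + sampler feedback: %.0f ms\n",
           scene_gb * sampled_frac / ssd_gbps * 1000.0);
    return 0;
}
```

With those assumptions the HDD refill takes ~36 s versus ~364 ms on the SSD (the ~100x figure), and ~182 ms once sampler feedback halves the resident set, i.e. the "hundreds of milliseconds" mentioned above.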
 

Spjut

Senior member
Apr 9, 2011
928
149
106
Well, actually you are quite wrong. If you look at most DX12 and Vulkan titles, Polaris clearly had, and still has, a big advantage over Pascal. Even today RDNA has a small but decent advantage over Turing in DX12 and Vulkan titles.

And if you look purely at MS titles, AMD is clearly heavily favored in those games.

The case for Nvidia looks even worse if you compare Kepler and Maxwell to their GCN competitors in modern games. Based on how GCN has aged, I'd also bet on RDNA2 if I had to get a new GPU this year. The new worst case must be Doom Eternal; Kepler and Maxwell are horrible compared to their GCN competitors.

I'd assume that many on these kinds of forums use the latest or second-latest architecture and miss how much worse the older architectures have aged.
 
  • Like
Reactions: RetroZombie

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Spjut said:
Well, actually you are quite wrong. If you look at most DX12 and Vulkan titles, Polaris clearly had, and still has, a big advantage over Pascal. Even today RDNA has a small but decent advantage over Turing in DX12 and Vulkan titles.

And if you look purely at MS titles, AMD is clearly heavily favored in those games.

OK, let's look at two of the most recent and best examples of what DX12 and Vulkan have to offer: Gears 5 and Doom Eternal. The former released last September, and Doom Eternal launched just a couple of days ago. Gears 5 is DX12-only, and Doom Eternal is Vulkan-only.

Here are the results for Gears 5, from one of Guru3D's most recent GPU reviews, dated 3/13/2020:

[Gears 5 benchmark charts from Guru3D]

Not really seeing the dominance you say AMD has in one of the most optimized DX12 titles out there. Now for Doom Eternal. Keep in mind the game just launched, so performance will likely improve further after several patches and driver updates. I have the game myself and can attest that it runs flawlessly on my Titan Xp. These benchmarks come from PCGH.de's performance review of Doom Eternal, dated 3/20/2020:

[Doom Eternal benchmark charts from PCGH.de]


Again, not really seeing any of the dominance you're speaking of in what is perhaps the best Vulkan title out there.
 

uzzi38

Platinum Member
Oct 16, 2019
2,565
5,574
146
Carfax83 said: [Gears 5 / Doom Eternal benchmark post, quoted in full above]
Depends on what you look at.

I can also see the 580 sitting between the 1660Ti and 1070 in those charts.

That's certainly a case of FineWine I don't mind personally.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,717
7,013
136
Carfax83 said: [Gears 5 / Doom Eternal benchmark post, quoted in full above]

- He just means that Polaris punches above its weight class in DX12 and Vulkan, and while that's not always true, in the examples above, where you have Polaris cards encroaching on the 1070, it does appear to be.
 
  • Like
Reactions: Krteq

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
Carfax83 said: [Gears 5 / Doom Eternal benchmark post, quoted in full above]
Well, I have a 1070, which was almost twice as expensive as the 480/580, and I can't really see why the supposedly waaaaaaaaay inferior card should catch up to mine over time in modern games and APIs, especially when NVIDIA's drivers are supposed to be so much more advanced, etc.

I mean, I see why it does, but from a simple customer standpoint I can't see why it should.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
It's fine to use Polaris as an example, and I would agree that it has aged remarkably well. Some GPUs are like that. But what I was originally responding to was the assertion that AMD gets an advantage over Nvidia because both consoles use AMD GPUs. If I'm not mistaken, PC developers generally don't optimize around specific architectures, because we obviously aren't all running the same hardware. They optimize for the graphics API, whether DX11, DX12, OpenGL or Vulkan.

Now, some GPUs are inherently better suited to some APIs than others. Maxwell was a great DX11 GPU but mediocre/passable in DX12/Vulkan titles. Pascal improved DX12/Vulkan performance considerably for Nvidia, and Turing much more. Basically, Nvidia has taken a gradual approach to optimizing their architectures for low-level API performance. Contrast that with AMD, who have been optimizing their architectures for low-level APIs much longer.

AMD had ACEs in their GPUs as far back as Tahiti (7970), and they sat unused for years for lack of a low-level API to access them. So my point is that Polaris aging well has more to do with its greater optimization for low-level APIs like Vulkan and DX12 than with AMD having a monopoly on console GPUs.
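
For anyone curious what "accessing the ACEs through a low-level API" actually looks like on the PC side: Vulkan exposes them as compute-only queue families. A minimal sketch (assumes the Vulkan SDK is installed; error handling mostly omitted, and nothing here is AMD-specific, it just shows the generic queue-family query):

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    // Minimal instance; no extensions needed just to inspect queue families.
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_0;
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;
    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> gpus(count);
    vkEnumeratePhysicalDevices(instance, &count, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        uint32_t qf = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &qf, nullptr);
        std::vector<VkQueueFamilyProperties> props(qf);
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &qf, props.data());

        for (uint32_t i = 0; i < qf; ++i) {
            bool compute  = props[i].queueFlags & VK_QUEUE_COMPUTE_BIT;
            bool graphics = props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT;
            // A compute-only family is where async-compute work gets submitted;
            // on GCN/RDNA these queues map onto the ACE hardware.
            if (compute && !graphics)
                printf("queue family %u: dedicated compute (%u queues)\n",
                       i, props[i].queueCount);
        }
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

Under DX11/OpenGL there was simply no equivalent way for an application to submit to those queues directly, which is the point being made above.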
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Carfax83 said: [post above on low-level APIs and the ACEs, quoted in full]

I think it has more to do with AMD building more future-proof GPUs. Possibly because, during this period, they knew they could not create a new generation of chips every year and wanted to stretch each one out.

But it's also that nVidia has shown they purposely don't want future-proof cards. They want new cards to look better than old ones, and people on older cards to feel like they need to upgrade sooner than they really should.

This was really apparent when Kepler cards fell on their faces at Witcher 3's launch. nVidia had all of their GameWorks APIs in that game, with tessellation levels set sky-high to push the 980 Ti hard, because it was the only card that could run them at full settings. Then AMD added a driver-side cap on maximum tessellation level and their cards got significantly faster, and CDPR later added the same option on their side. Even after these changes Kepler cards suffered; nVidia eventually released a driver update that helped, but did not entirely solve, their performance issues.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
I think it has more to do with AMD not being able to compete at writing high-performance drivers, which were required to extract maximum DX11 performance. With a low-level API much less depends on AMD's driver team, which was probably the biggest reason they introduced Mantle in the first place.
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
Dribble said:
I think it has more to do with AMD not being able to compete at writing high-performance drivers, which were required to extract maximum DX11 performance. With a low-level API much less depends on AMD's driver team, which was probably the biggest reason they introduced Mantle in the first place.
That has ca. 5-10% to do with drivers.
 

Guru

Senior member
May 5, 2017
830
361
106
Carfax83 said: [Gears 5 / Doom Eternal benchmark post, quoted in full above]
I mean, all I see is Polaris actually being faster than Pascal in general: the 570/580 over the 1060 3GB/6GB, Vega 56 faster than the 1070, etc. And again, Gears 5 is an Unreal Engine game, an engine that, without title-specific optimizations, outright favors Nvidia hardware.

In Doom Eternal I again see Polaris punching above its weight: Vega 56 on par with the 1080, Vega 64 on par with the RTX 2060, and the 590 almost on par with the 1070, a card from one class higher.

RDNA seems to have taken most of its DX12 cues from GCN 5 and improved on them. For example, I see the RX 5700 XT beating the 2070 Super: a $500 card, one tier higher than the 5700 XT, losing to a card that's $100 cheaper.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Stuka87 said: [first two paragraphs of the post above on future-proof GPUs]

That's a very cynical view of Nvidia. You're right that AMD made their GPUs more future-oriented, but what good was that when many of those features went unused for years for lack of a low-level API to access them? They just took up die space and increased the GPU's TDP.

Nvidia's more gradual approach, while conservative, has been more successful over the long term. I would argue Nvidia won the DX11/OpenGL era quite handily, and now that the DX12/Vulkan era is in full swing they still have the edge, although AMD is now much more competitive, to the benefit of all consumers. Can't wait to see AMD's high-end solution this year! :cool:
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Guru said:
RDNA seems to have taken most of its DX12 cues from GCN 5 and improved on them. For example, I see the RX 5700 XT beating the 2070 Super: a $500 card, one tier higher than the 5700 XT, losing to a card that's $100 cheaper.

At 4K, though, the 2070 Super's superior memory compression helps it pull ahead of the 5700 XT. That's not to say I think either card is really a 4K-capable GPU, just that Turing seems to hold up better at higher resolutions.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Carfax83 said: [reply above to Stuka87, quoted in full]
Well, the current RTX cards are not conservative, introducing both ray tracing and tensor cores. Both are future technologies that take up die space, and both are missing from current AMD cards. They are likely the ones that will work better in the future; look at DLSS 2: as the tech improves, they are pulling away from their AMD contemporaries.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,717
7,013
136
Carfax83 said: [reply above to Stuka87, quoted in full]

- Not my post, but the first half of that post is not cynical at all.

AMD was fighting a losing war on two fronts and made two very forward-looking archs in Bulldozer and GCN, both of which banked heavily on the market choosing advanced software and dev strategies (highly threaded software for BD, close-to-metal/compute software for GCN).

NV made archs that were very good at winning the benchmarks of the day. They understood, fundamentally, that the first impression really matters; it's what sticks around, since nobody goes back and re-reviews cards months or years after release. As the market leader they got to set the pace, and thanks to their substantially more involved developer outreach program they were able to keep things favorable to their ecosystem for longer.

Nothing wrong with that; it's a smart strategy and just the way it is.

The second half of that post definitely needs some substantiation, though (can we demonstrate performance regression on older cards with newer drivers?).
 
  • Like
Reactions: Stuka87

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
GodisanAtheist said:
The second half of that post definitely needs some substantiation, though (can we demonstrate performance regression on older cards with newer drivers?).

That was in reference to Kepler cards falling off badly. When Witcher 3 came out, there was a huge stink about the GTX 960 outperforming the GTX 780 even though the 780 had way more horsepower. A driver update that came later made Kepler a little better. nVidia even asked review sites to turn off GameWorks (nVidia's own technology) features to make Kepler cards look better.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,717
7,013
136
Stuka87 said:
That was in reference to Kepler cards falling off badly. When Witcher 3 came out, there was a huge stink about the GTX 960 outperforming the GTX 780 even though the 780 had way more horsepower. A driver update that came later made Kepler a little better. nVidia even asked review sites to turn off GameWorks (nVidia's own technology) features to make Kepler cards look better.

- That's fair. NV's prior generation has been NV's current generation's biggest competitor for a long time now.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Dribble said:
Well, the current RTX cards are not conservative, introducing both ray tracing and tensor cores. Both are future technologies that take up die space, and both are missing from current AMD cards. They are likely the ones that will work better in the future; look at DLSS 2: as the tech improves, they are pulling away from their AMD contemporaries.

And mesh shading, VRS, sampler feedback... every one of Turing's new features has been adopted by Microsoft for DX12 Ultimate. Even AMD Osborned Navi 1 three weeks ago.
 
  • Like
Reactions: Carfax83

Guru

Senior member
May 5, 2017
830
361
106
sontin said:
And mesh shading, VRS, sampler feedback... every one of Turing's new features has been adopted by Microsoft for DX12 Ultimate. Even AMD Osborned Navi 1 three weeks ago.
Those are NOT Nvidia features, those are ALL DX12 features. Nvidia implemented them in Turing, and now MS is advertising DX12 under a new banner, calling it DX12 Ultimate. Again, all those features are part of DX12; Nvidia could not have implemented them if there were no API support for them! They would have had to run conversion software and do multiple levels of hardware-to-software translation to output ray tracing in games.
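
This is easy to see from the API side: the DX12 Ultimate features are vendor-neutral capability bits that any engine queries through CheckFeatureSupport. A minimal sketch (assumes a recent Windows 10 SDK, 19041+, which is where the OPTIONS7 struct was added):

```cpp
#include <windows.h>
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    // Create a device on the default adapter (assumes a DX12-capable GPU/driver).
    ID3D12Device* dev = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&dev))))
        return 1;

    // DXR lives in OPTIONS5, VRS in OPTIONS6, mesh shaders and
    // sampler feedback in OPTIONS7 -- plain D3D12 caps, not NVAPI.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 o5{};
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 o6{};
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 o7{};
    dev->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &o5, sizeof(o5));
    dev->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &o6, sizeof(o6));
    dev->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7, &o7, sizeof(o7));

    printf("Raytracing tier:       %d\n", (int)o5.RaytracingTier);
    printf("VRS tier:              %d\n", (int)o6.VariableShadingRateTier);
    printf("Mesh shader tier:      %d\n", (int)o7.MeshShaderTier);
    printf("Sampler feedback tier: %d\n", (int)o7.SamplerFeedbackTier);

    dev->Release();
    return 0;
}
```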
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Mesh shading, sampler feedback and VRS were not available through D3D prior to last year; they were first introduced via nVidia's OpenGL/Vulkan extensions and NVAPI. DXR and DirectML are based on the work nVidia did with OptiX and CUDA for ML.

Fact is, for the last two years nVidia has been at the forefront of shaping graphics APIs, while AMD introduced a new generation of products that was killed 8 months after launch...
 
  • Like
Reactions: Carfax83

DisEnchantment

Golden Member
Mar 3, 2017
1,590
5,722
136
Well then... N7+ it is. I wish this info had come from AMD's own disclosure rather than the way it did.
So at best +4% perf vs N7P, but a significant boost in density, plus the gain in efficiency.
No wonder the SoC is only 360.4 mm² with 8 Zen 2 cores and 56 CUs.
My paper-napkin math works out the RDNA2 CU at ~88% the size of RDNA1's, granted the cache configuration is different.
Anyway, the big gains would come from physical design (clocks) and uarch. The perf/clock gain should be bigger than the GCN → RDNA1 jump.

Baseline 15 TF (56 CUs @ 2.1 GHz, ~310 mm²); at >15% perf/clock improvement, this chip would be close to 2x Navi10 performance.
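
For anyone checking the napkin math, the 15 TF baseline falls straight out of the usual CU arithmetic. The Navi10 reference clock and the +15% perf/clock multiplier below are just the assumptions from this post, not confirmed numbers:

```cpp
#include <cstdio>

int main() {
    // GCN/RDNA CUs have 64 shader ALUs; an FMA counts as 2 FLOPs.
    const int    cus       = 56;
    const double clock_ghz = 2.1;
    const double tflops    = cus * 64 * 2 * clock_ghz / 1000.0; // GFLOPS -> TFLOPS

    // Navi10 (RX 5700 XT) reference: 40 CUs at ~1.905 GHz boost (~9.75 TF).
    const double navi10_tf = 40 * 64 * 2 * 1.905 / 1000.0;
    const double ipc_gain  = 1.15; // the >15% perf/clock assumption above

    printf("56 CUs @ %.1f GHz = %.2f TF\n", clock_ghz, tflops);
    printf("vs Navi10 (%.2f TF): %.2fx raw, ~%.1fx with +15%% perf/clock\n",
           navi10_tf, tflops / navi10_tf, tflops * ipc_gain / navi10_tf);
    return 0;
}
```

That works out to 15.05 TF, ~1.5x Navi10 raw and ~1.8x once the perf/clock assumption is applied, which is where "close to 2x Navi10" comes from.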

 
Last edited:
  • Like
Reactions: lightmanek

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
Carfax83 said:
It's fine to use Polaris as an example, and I would agree that it has aged remarkably well. Some GPUs are like that. But what I was originally responding to was the assertion that AMD gets an advantage over Nvidia because both consoles use AMD GPUs.

If you were expecting AMD to top the charts, that would be pretty silly. They just don't have anything competing at the high end, so unless developers were intentionally tanking performance on NVidia cards for some stupid reason, I'm not sure what your argument actually is.

If you look at the results you posted, the 5700 XT hangs right in there with (or even occasionally surpasses) the 2070 Super a lot of the time. Across a wider set of games the 2070 Super has a ~10% edge, depending on what you're looking at, but it typically does have higher overall performance.

Considering that, it's pretty clear AMD does get some advantage in these titles. Is it earth-shattering? No, but I don't think it needs to be. There's only so much extra performance you can wring out of optimizations, and anything much larger is more likely the result of not even trying on the other architecture, or some kind of underhanded shenanigans.
 
  • Like
Reactions: Elfear

Gideon

Golden Member
Nov 27, 2007
1,608
3,573
136
DisEnchantment said:
Well then... N7+ it is. I wish this info had come from AMD's own disclosure rather than the way it did.
So at best +4% perf vs N7P, but a significant boost in density, plus the gain in efficiency.

Very interesting. Where did you get that? As far as I know, both Navi and the Xbox Series X are on N7P.