
Info [Digital Foundry] Xbox Series X Complete Specs + Ray Tracing/Gears 5/Back-Compat/Quick Resume Demo Showcase!

Det0x

Senior member
Sep 11, 2014
Much more information can be found in the video

[Attached: four screenshots of the Xbox Series X spec slides from the Digital Foundry video]

For comparison: NVIDIA claims that the fastest Turing parts, based on the TU102 GPU, can handle upwards of 10 billion ray intersections per second (10 GigaRays/second) @ Anandtech

Not sure if this should be posted in the "Graphics Cards" or "CPUs and Overclocking" forum, admins can delete one of the threads
 
Last edited:

Bouowmx

Senior member
Nov 13, 2016
As the Scarlett chip hints, AMD RDNA 2 (still) will not be using the full density of TSMC 7 nm.
15300 M transistors
360 mm^2
42.5 M/mm^2
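Spelling the division out (all figures from the post above):

```python
# Transistor density of the Series X SoC, from the figures quoted above.
transistors_millions = 15_300    # 15.3 billion transistors
die_area_mm2 = 360.45            # reported die size
density = transistors_millions / die_area_mm2
print(f"~{density:.1f} M transistors/mm^2")  # ~42.4
```

For reference, the often-quoted peak density for TSMC N7 mobile libraries is around 91 MTr/mm², which is the gap the post is pointing at.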
 

Gideon

Senior member
Nov 27, 2007
Full specs in table form:

CPU 8x Cores @ 3.8 GHz (3.6 GHz w/ SMT) Custom Zen 2 CPU
GPU 12 TFLOPS, 52 CUs @ 1.825 GHz Custom RDNA 2 GPU
Die Size 360.45 mm2
Process 7nm Enhanced
Memory 16 GB GDDR6 w/ 320-bit bus
Memory Bandwidth 10GB @ 560 GB/s, 6GB @ 336 GB/s
Internal Storage 1 TB Custom NVMe SSD
I/O Throughput 2.4 GB/s (Raw), 4.8 GB/s (Compressed, with custom hardware decompression block)
Expandable Storage 1 TB Expansion Card (matches internal storage exactly)
External Storage USB 3.2 External HDD Support
Optical Drive 4K UHD Blu-ray Drive
Performance Target 4K @ 60 FPS, Up to 120 FPS
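A quick sketch of where the two bandwidth figures come from, assuming the commonly reported chip layout (ten GDDR6 chips at 14 Gbps/pin on a 320-bit bus, with the slower 6 GB pool living on only six of the chips, i.e. 192 bits of effective width):

```python
# GDDR6 bandwidth = per-pin data rate * bus width / 8 bits-per-byte.
GBPS_PER_PIN = 14  # Gbps per pin, assumed from the 560 GB/s figure

def bandwidth_gb_s(bus_width_bits: int) -> float:
    return GBPS_PER_PIN * bus_width_bits / 8

print(bandwidth_gb_s(320))  # 560.0 -> the 10 GB "GPU-optimal" pool
print(bandwidth_gb_s(192))  # 336.0 -> the remaining 6 GB pool
```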

Bear in mind, these clocks are fixed (guaranteed performance, no fluctuation). They should be pretty close to the base clocks of comparable desktop GPUs and CPUs. According to the Digital Foundry video, they apparently verified that it can hold those clocks in a metal container in the desert.

This bodes quite well for Big Navi, as the TDP of this 15.1 x 15.1 x 30.1 cm box can't be all that crazy (probably ~200W total). That could mean excellent boost clocks for a 300W desktop GPU part.

I'm really excited by a lot of things (virtual memory on the SSD with compression, plus HBCC-like streaming for textures). I guess they can pull off some pretty amazing stuff with ray tracing, VRR and smart upscaling in the years to come.

The only considerable let-down is the memory. There is 10GB of 560 GB/s memory (recommended for GPU use) and 6GB @ 336 GB/s (OS + internals + the recommended pool for CPU code), which diminishes the value of "unified memory" considerably (you'll want to copy data between the GPU and CPU pools). But I guess this is still highly preferable to flat-out having less memory, and they said it was absolutely essential to ship the product at an acceptable cost.

The other minor nag is that the SSD seems to be PCIe 3.0 (considering it has the same max throughput as a 970 Evo).

Overall this thing is pretty insane and I'm afraid it won't be too cheap (in the PC world a similar GPU alone runs $800-1200). I guess they'll sell the external SSD separately, but we'll see. Worst case it's something like $499 for the console + $199 for the SSD. Hopefully it's still only ~$500-600 total, and if it isn't, it will be at some point for sure :D
 

blckgrffn

Diamond Member
May 1, 2003
The theoretical performance on this should actually be really good, and a far cry from the 7790/7850 caliber of performance offered by the One/PS4 at launch the last time around.

It's easy to get wrapped up in the new GPU power, I know it's the first set of numbers I really focused on, but the CPU power on tap is more of a multi-generational leap. Wow.

How am I going to time flipping my 5700 XT to get an RDNA 2 card? Especially when the driver woes are going to taint the 5600 XT/5700/5700 XT even after they are largely sorted out by further RDNA-specific enhancements and driver maturity...
 

DisEnchantment

Senior member
Mar 3, 2017
Basically AMD feature-matched Turing (VRS, mesh shading, HDR, RT, etc.) in a relatively smaller power envelope and in a GPU die of around 300 mm² (if we assume ~60 mm² goes to the eight Zen 2 cores).

For comparison: NVIDIA claims that the fastest Turing parts, based on the TU102 GPU, can handle upwards of 10 billion ray intersections per second (10 GigaRays/second) @ Anandtech
Indeed, this is a massive difference: 38x vs Turing. Imagine this on a full-blown high-end desktop RDNA2 GPU.

I'm really excited by a lot of things (Virtual Memory on the SSD with compression, also stuff similar to HBCC for textures).
Indeed the velocity Architecture sounds like good old HBCC from Vega but nicely implemented.

Also, considering the chip is fabricated on N7P and not on N7+, all those efficiency gains must come purely from architecture, since Navi is already on N7P.

RDNA2 will be a major leap forward for AMD.

Speculation follows :D...

If 56 CUs can fit in, let's say, 320 mm² (to be on the larger side), and assuming RT scales linearly with TF:

~320 mm2 would fit 56CUs, 56 CUs@2.20 GHz = ~15.77 TF, ~493 Giga Intersections/sec
~410 mm2 would fit 72CUs, 72 CUs@2.00 GHz = ~18.43 TF, ~576 Giga Intersections/sec
~460 mm2 would fit 80CUs, 80 CUs@1.90 GHz = ~19.46 TF, ~608 Giga Intersections/sec
~550 mm2 would fit 96CUs, 96 CUs@1.80 GHz = ~22.12 TF, ~691 Giga Intersections/sec
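The table above can be reproduced with the standard FP32 formula (TFLOPS = CUs × 64 lanes × 2 ops × clock in GHz) plus a linear extrapolation from the Series X's ~380 G intersections/s (the figure behind the 38x claim); the CU counts and clocks are pure speculation, as in the post. Note the 56 CU row needs ~2.20 GHz to hit the quoted ~15.77 TF.

```python
# Speculative scaling of FP32 TFLOPS and RT intersection rate for larger
# RDNA 2 parts, extrapolated linearly from the Series X (52 CUs @ 1.825 GHz).
XSX_TF = 52 * 64 * 2 * 1.825 / 1000   # ~12.15 TF
INTERSECT_PER_TF = 380 / XSX_TF       # ~31.3 G intersections/s per TF

def estimate(cus: int, ghz: float):
    tf = cus * 64 * 2 * ghz / 1000
    return tf, tf * INTERSECT_PER_TF

for cus, ghz in [(56, 2.20), (72, 2.00), (80, 1.90), (96, 1.80)]:
    tf, gi = estimate(cus, ghz)
    print(f"{cus} CUs @ {ghz:.2f} GHz -> ~{tf:.2f} TF, ~{gi:.0f} G/s")
```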

Mind boggling jump from current HW RT capabilities. I think the first two would be very likely.
 
  • Like
Reactions: Gideon and Det0x

psolord

Golden Member
Sep 16, 2009
So are we really going to assume that AMD managed 38x the RT performance on their first try, compared to what Nvidia managed with their first try? That Nvidia, who are the leaders in the field, are suddenly complete idiots?

Are they even comparing the same RT quality/resolution, or are they using something like 8-bit performance vs Nvidia's 64-bit performance? Or whatever unit RT performance is measured in.
 

Tuna-Fish

Golden Member
Mar 4, 2011
For comparison: NVIDIA claims that the fastest Turing parts, based on the TU102 GPU, can handle upwards of 10 billion ray intersections per second (10 GigaRays/second) @ Anandtech
Indeed this is a massive difference. 38x vs Turing. Imagine this on a full blown high end Desktop RDNA2 GPU.
So are we really going to assume that AMD managed 38x the RT performance on their first try, compared to what Nvidia managed with their first try? That Nvidia, who are the leaders in the field, are suddenly complete idiots?
I believe that this is comparing apples to oranges. nVidia is reporting 10 GRays/s; MS is reporting raw intersection tests. A single ray can take dozens of tests, so depending on specifics the XSX can be weaker or stronger than current nV hardware. I'd expect it to be a bit stronger, at most ~50% over a 2080. (So why didn't they report rays? Because the number of tests per ray depends on the scene. The 10 GRays/s number from nV is very non-specific.)
 

maddie

Diamond Member
Jul 18, 2010
So are we really going to assume that AMD managed 38x the RT performance on their first try, compared to what Nvidia managed with their first try? That Nvidia, who are the leaders in the field, are suddenly complete idiots?

Are they even comparing the same RT quality/resolution, or are they using something like 8-bit performance vs Nvidia's 64-bit performance? Or whatever unit RT performance is measured in.
What does 1st try have to do with anything? That's like saying you can't get 100% in an exam if doing it for the 1st time.
 

psolord

Golden Member
Sep 16, 2009
What does 1st try have to do with anything? That's like saying you can't get 100% in an exam if doing it for the 1st time.
When you are consistently the bad student, no, you cannot expect to just surpass the good student by a factor of 38x on your first try.

Somebody is lying.
 
  • Like
Reactions: n0x1ous and Tup3x

Tup3x

Senior member
Dec 31, 2016
I don't think it's possible to make any assumptions yet about RT performance compared to Turing. I doubt that there's that much difference between the two since both are first tries.
 

DisEnchantment

Senior member
Mar 3, 2017
I believe that this is comparing apples to oranges. nVidia is reporting 10 GRays/s; MS is reporting raw intersection tests. A single ray can take dozens of tests, so depending on specifics the XSX can be weaker or stronger than current nV hardware. I'd expect it to be a bit stronger, at most ~50% over a 2080. (So why didn't they report rays? Because the number of tests per ray depends on the scene. The 10 GRays/s number from nV is very non-specific.)
Thanks for the insightful answer. The AT article wasn't detailed enough, but with some luck MS will say something about it at the GDC online event this Thursday (or is it Wednesday?).
 

Guru

Senior member
May 5, 2017
RDNA2 seems to be an incredible improvement over RDNA1. Considering both use the same N7P process, RDNA2 seems to be 30% faster purely from architectural improvements, plus it adds hardware ray tracing support and all the features Nvidia was touting with Turing. And considering RDNA2 is in BOTH major consoles, it's safe to say all the PC ports are going to be optimized for RDNA2 and Zen 2, which includes AMD's hardware implementation of ray tracing.
 
  • Like
Reactions: Olikan and lobz

uzzi38

Senior member
Oct 16, 2019
A couple of interesting pieces of information haven't been brought up yet that I felt were worth highlighting.

Austin Evans in his video points out the PSU is a 300W one - well, it's actually a 315W PSU capable of outputting 255W and 60W separately.


From the board itself, it looks like the SoC is powered solely by the 255W rail.

If that's the case, the ENTIRE SoC - CPU and GPU - must be pulling a little less than that 255W to allow for a bit of headroom (the PS4 Pro and Xbox One had 250W and 245W PSUs despite maxing out power draw at around 180W). Let's say, for example, 225W for the SoC.

8 Zen 2 cores at a max of 3.6GHz (taking the SMT clock here) should pull around 40W on their own. Which means that 1.8GHz 52CU RDNA2 GPU is pulling around 180W, give or take (the exact power efficiency of the CPU compared to Renoir and Matisse CCDs is unknown, so my estimate could be a bit generous on the CPU side).

Just some food for thought.
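The arithmetic in the post, spelled out; every number here is the post's own assumption, not a measurement:

```python
# Power-budget guesswork for the Series X SoC (all figures assumed in the post).
soc_rail_w = 255                       # PSU rail feeding the SoC
headroom_w = 30                        # margin, as past consoles left
soc_budget_w = soc_rail_w - headroom_w # assumed ~225 W for the whole SoC
cpu_w = 40                             # ballpark for 8 Zen 2 cores @ 3.6 GHz
gpu_w = soc_budget_w - cpu_w
print(f"GPU budget: ~{gpu_w} W")       # ~185 W, i.e. "around 180 W, give or take"
```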
 

Saylick

Senior member
Sep 10, 2012
A couple of interesting pieces of information haven't been brought up yet that I felt were worth highlighting.

Austin Evans in his video points out the PSU is a 300W one - well, it's actually a 315W PSU capable of outputting 255W and 60W separately.


From the board itself, it looks like the SoC is powered solely by the 255W rail.

If that's the case, the ENTIRE SoC - CPU and GPU - must be pulling a little less than that 255W to allow for a bit of headroom (the PS4 Pro and Xbox One had 250W and 245W PSUs despite maxing out power draw at around 180W). Let's say, for example, 225W for the SoC.

8 Zen 2 cores at a max of 3.6GHz (taking the SMT clock here) should pull around 40W on their own. Which means that 1.8GHz 52CU RDNA2 GPU is pulling around 180W, give or take (the exact power efficiency of the CPU compared to Renoir and Matisse CCDs is unknown, so my estimate could be a bit generous on the CPU side).

Just some food for thought.
Sounds like 80 CUs @ 1900 MHz with a 300W TDP is totally within the realm of possibility for Big Navi. That ought to provide >19 TF of throughput, or about 45% more TF than the RTX 2080 Ti at similar IPC.
 

Bouowmx

Senior member
Nov 13, 2016
Was full-rate INT32 mentioned for RDNA 2? I watched the video in OP, but not another Digital Foundry video called "DF Direct".
 

alcoholbob

Diamond Member
May 24, 2005
So we are going to get ~2080 Super performance in the TDP of a 5700. Not bad. I hope we get something like this for the desktop platform. I imagine if you undervolt it you could get 1080 Ti performance for about 120 W.
 

lixlax

Member
Nov 6, 2014
A couple of interesting pieces of information haven't been brought up yet that I felt were worth highlighting.

Austin Evans in his video points out the PSU is a 300W one - well, it's actually a 315W PSU capable of outputting 255W and 60W separately.


From the board itself, it looks like the SoC is powered solely by the 255W rail.

If that's the case, the ENTIRE SoC - CPU and GPU - must be pulling a little less than that 255W to allow for a bit of headroom (the PS4 Pro and Xbox One had 250W and 245W PSUs despite maxing out power draw at around 180W). Let's say, for example, 225W for the SoC.

8 Zen 2 cores at a max of 3.6GHz (taking the SMT clock here) should pull around 40W on their own. Which means that 1.8GHz 52CU RDNA2 GPU is pulling around 180W, give or take (the exact power efficiency of the CPU compared to Renoir and Matisse CCDs is unknown, so my estimate could be a bit generous on the CPU side).

Just some food for thought.
Seems fair enough. But based on Renoir, 40W seems a little bit on the low side (although we'll need to see actual benchmarks and the typical clocks these things run at).

On the GPU side I took the Sapphire Pulse and Nitro as examples. The Pulse has a 1815MHz game clock (i.e. the typical clock while gaming) and a 241W TDP, while the Nitro has a 1905MHz game clock and a 265W TDP. That's with only 40 CUs; a hypothetical 52CU card with a 1825MHz game clock could easily be close to 300W using RDNA 1. As power consumption above 175-200W (for the GPU portion) seems highly unlikely, it basically means RDNA 2 indeed has some rather nice efficiency gains.
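As a rough check on that "close to 300W" figure, here is the Pulse's TDP scaled linearly with CU count at a near-identical clock (a crude assumption; GPU power doesn't scale perfectly linearly with CUs):

```python
# Naive RDNA 1 scale-up: Sapphire Pulse 5700 XT (40 CUs, 1815 MHz game clock,
# 241 W TDP) stretched to the Series X's 52 CU / 1825 MHz configuration.
pulse_tdp_w = 241
scaled_w = pulse_tdp_w * 52 / 40
print(f"~{scaled_w:.0f} W")  # ~313 W on RDNA 1, vs ~160-180 W implied above
```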
 
  • Like
Reactions: lightmanek

uzzi38

Senior member
Oct 16, 2019
Seems fair enough. But based on Renoir, 40W seems a little bit on the low side (although we'll need to see actual benchmarks and the typical clocks these things run at).

On the GPU side I took the Sapphire Pulse and Nitro as examples. The Pulse has a 1815MHz game clock (i.e. the typical clock while gaming) and a 241W TDP, while the Nitro has a 1905MHz game clock and a 265W TDP. That's with only 40 CUs; a hypothetical 52CU card with a 1825MHz game clock could easily be close to 300W using RDNA 1. As power consumption above 175-200W (for the GPU portion) seems highly unlikely, it basically means RDNA 2 indeed has some rather nice efficiency gains.
Actually, it seems I may have been a bit hard on the XSeX's SoC, because the 255W rail is also powering the memory, NVMe + expansion and HDMI 2.1, and like you said I may have been too positive on the CPU's power consumption, which means the GPU is actually more efficient than I first proposed.

This is from a Chinese review (https://zhuanlan.zhihu.com/p/113142604) of a 4800H laptop that appears to have been released yesterday: it showed that when Renoir downclocked itself to 3.5GHz in the middle of its CB runs, it was running at 39W. So yeah, I definitely was a bit too positive on the CPU side. Probably about 10W too high, given the V/f curve of Zen 2.

Knock off another 10W for the NVMe + expansion (4W each) and HDMI 2.1, and that leaves you closer to 160W for the GPU (including the GDDR6 memory; even less if you subtract that too).
 

Krteq

Senior member
May 22, 2015
A couple of interesting pieces of information haven't been brought up yet that I felt were worth highlighting.

Austin Evans in his video points out the PSU is a 300W one - well, it's actually a 315W PSU capable of outputting 255W and 60W separately.


From the board itself, it looks like the SoC is powered solely by the 255W rail.

If that's the case, the ENTIRE SoC - CPU and GPU - must be pulling a little less than that 255W to allow for a bit of headroom (the PS4 Pro and Xbox One had 250W and 245W PSUs despite maxing out power draw at around 180W). Let's say, for example, 225W for the SoC.

8 Zen 2 cores at a max of 3.6GHz (taking the SMT clock here) should pull around 40W on their own. Which means that 1.8GHz 52CU RDNA2 GPU is pulling around 180W, give or take (the exact power efficiency of the CPU compared to Renoir and Matisse CCDs is unknown, so my estimate could be a bit generous on the CPU side).

Just some food for thought.
That doesn't mean much. You know, the PS4 had a 250W PSU, the PS4 Pro 310W, and the One X 245W.

For example, the One X's max power draw was about 175W.

//nvm, you already stated that in your post, my bad
 

uzzi38

Senior member
Oct 16, 2019
That doesn't mean much. You know, the PS4 had a 250W PSU, the PS4 Pro 310W, and the One X 245W.

For example, the One X's max power draw was about 175W.

//nvm, you already stated that in your post, my bad
Which is why I'm assuming the worst-case scenario, and even then, in that post I mucked up and was too negative in my worst case. See the update.

From the information we have, there's reasonable backing for the claim that the SoC on the XSeX is pulling 225W or less, and the GPU + GDDR6 memory is pulling 160W or less.
 

Det0x

Senior member
Sep 11, 2014
96 CUs don't sound all that far-fetched for a 250-300W TDP part anymore, but imagine the memory system you'd need to feed a beast like that.
 

exquisitechar

Senior member
Apr 18, 2017
As the Scarlett chip hints, AMD RDNA 2 (still) will not be using the full density of TSMC 7 nm.
15300 M transistors
360 mm^2
42.5 M/mm^2
TSMC's density figures are clearly a pipe dream for GPUs.

Anyway, this is very impressive: what they packed into 360mm² is 16 more CUs than Navi 10, more features, a lot more uncore, and 8 Zen 2 cores, all in a die only 44% bigger.
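The 44% figure checks out if you take Navi 10 at its commonly reported ~251 mm² (a die-shot number, not from this thread):

```python
# Die-size comparison: Series X SoC vs Navi 10.
navi10_mm2 = 251.0    # widely reported Navi 10 die size (assumption)
xsx_mm2 = 360.45      # Series X SoC die size, per the spec table
growth_pct = (xsx_mm2 / navi10_mm2 - 1) * 100
print(f"~{growth_pct:.0f}% bigger")  # ~44%
```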
 
