Info [Digital Foundry] Xbox Series X Complete Specs + Ray Tracing/Gears 5/Back-Compat/Quick Resume Demo Showcase!


Det0x

Golden Member
Sep 11, 2014
1,028
2,953
136
Much more information can be found in the video

[Attached screenshots: Xbox one x.png, Xbox one xx.png, Xbox one xxx.png, Xbox one xxxx.png]

For comparison: NVIDIA claims that the fastest Turing parts, based on the TU102 GPU, can handle upwards of 10 billion ray intersections per second (10 GigaRays/second) @ Anandtech

Not sure if this should be posted in the "Graphics Cards" or "CPUs and Overclocking" forum, admins can delete one of the threads
 

uzzi38

Platinum Member
Oct 16, 2019
2,629
5,938
146
TSMC's density figures are clearly a pipe dream for GPUs.

Anyway, what they packed into 360mm^2 is very impressive: 16 more CUs than Navi 10 with more features, a lot more uncore, and 8 Zen 2 cores, all in a die only 44% bigger.

I find it hilarious that despite Renoir having I/O on die (which doesn't scale with the node and isn't dense) and less L3 cache than Matisse (cache is extremely dense), Renoir is actually the densest N7 product from AMD so far, at around 62 MTr/mm^2 IIRC.

But I'm going a bit off on a tangent here.
 

DisEnchantment

Golden Member
Mar 3, 2017
1,602
5,788
136
Also, since xCloud is going to run on the same XSX hardware, AMD will sell more than just console chips.
More details tomorrow from MS Online event.

96 CUs don't sound all that far-fetched for a 250-300W TDP part anymore, but imagine the memory system you'd need to feed a beast like that.

If AMD does nothing and just packages the chip as-is for PC, a 310mm^2 56 CU part @ 2.0GHz = ~14.34 TF / 250W; add a 15% perf/clock increase and you get ~1.7x the performance of Navi 10 at a very manageable 310mm^2, with a decent 320-bit wide bus.
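A quick sketch of that napkin math, assuming the usual 64 shaders per CU and 2 FP32 FLOPs per shader per clock (the Navi 10 reference clock below is just the RX 5700 XT boost clock, used here for illustration):

```python
# Napkin math for peak FP32 throughput.
# Assumes 64 shaders per CU and 2 FLOPs (FMA) per shader per clock.
def peak_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000  # GFLOPS -> TFLOPS

navi10 = peak_tflops(40, 1.905)        # ~9.75 TF (RX 5700 XT boost clock)
hypothetical = peak_tflops(56, 2.0)    # ~14.3 TF, the figure quoted above
print(f"{hypothetical:.2f} TF, {1.15 * hypothetical / navi10:.2f}x Navi 10 with a 15% perf/clock uplift")
```

That works out to roughly 1.7x Navi 10 on paper, matching the estimate above.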


One thing I wonder, though: if MS didn't really think their console was potent enough to handle RT, would they make a big deal about it in all these reveals? But we will learn soon enough.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,810
7,166
136
Is the quick response from Sony going to be damage control or steal MS' thunder?

I can't imagine the underlying tech is going to be all that different, I figure the mix of base components will change.

Let's see...
 

Saylick

Diamond Member
Sep 10, 2012
3,154
6,367
136
Is the quick response from Sony going to be damage control or steal MS' thunder?

I can't imagine the underlying tech is going to be all that different, I figure the mix of base components will change.

Let's see...
Apparently both consoles were going to have hardware unveils at GDC, but since it was canceled due to the coronavirus, MS and Sony made other plans. With that said, I imagine the underlying hardware has to be pretty much locked in at this point, with any remaining changes being software/firmware tweaks. Sony probably had the unveil ready, but if MS hadn't made the announcement, I'm guessing Sony wouldn't have either.
 

uzzi38

Platinum Member
Oct 16, 2019
2,629
5,938
146
Is the quick response from Sony going to be damage control or steal MS' thunder?

I can't imagine the underlying tech is going to be all that different, I figure the mix of base components will change.

Let's see...
I've seen some of where the leaks for the specs of Oberon and Arden came from (AMD, if you're reading this, keep putting your regression test results on Github, they make for some interesting reads, kthxbye), and I still think 36 CUs clocked at 2GHz is the most likely scenario for the PS5. As such, it will probably not be as performant as the Series X. I look forward to seeing whether the PS5 uses RDNA1 with RDNA2 features like VRS and HW RTRT (like how last-gen consoles had RPM from Vega), or whether it's actually RDNA2. I hope it's the latter, because in that case both consoles will likely be 2080 tier or higher, and that would be fantastic.
 

alcoholbob

Diamond Member
May 24, 2005
6,271
323
126
I've seen some of where the leaks for the specs of Oberon and Arden came from (AMD, if you're reading this, keep putting your regression test results on Github, they make for some interesting reads, kthxbye), and I still think 36 CUs clocked at 2GHz is the most likely scenario for the PS5. As such, it will probably not be as performant as the Series X. I look forward to seeing whether the PS5 uses RDNA1 with RDNA2 features like VRS and HW RTRT (like how last-gen consoles had RPM from Vega), or whether it's actually RDNA2. I hope it's the latter, because in that case both consoles will likely be 2080 tier or higher, and that would be fantastic.

Weren't the rumors that there was going to be both a base model and a pro model at launch? They could just unveil the pro model specs to match the XSX, after all even the XSX is supposed to have a lower end "Lockhart" model with only a 4TF GPU.
 

uzzi38

Platinum Member
Oct 16, 2019
2,629
5,938
146
Weren't the rumors that there was going to be both a base model and a pro model at launch? They could just unveil the pro model specs to match the XSX, after all even the XSX is supposed to have a lower end "Lockhart" model with only a 4TF GPU.
In the Github repos I've seen, I haven't seen any mention of a weaker console. So to be completely honest, I can't verify that rumour.

But Microsoft called the Series X what it is for a reason - because there will also be a Series S. Fairly confident it will show up, but absolutely no clue what the specs could actually be.
 

alcoholbob

Diamond Member
May 24, 2005
6,271
323
126
So it was 36 CUs, but they are quoting performance at 2.23GHz in order to close the gap with the XSX, while admitting the GPU may have to downclock. With HDMI 2.1 VRR the difference between these two may be academic either way, unless we get an actual 40% difference in performance in some games.
 

Saylick

Diamond Member
Sep 10, 2012
3,154
6,367
136
Eurogamer: PS5 Specs

CPU: 8x Zen 2 @ 3.5 GHz (Variable)
GPU: Custom RDNA2, 36 CUs @ 2.23 GHz (Variable)
Memory: 16 GB GDDR6 - 256 bit bus, 448 GB/s
Storage: Custom 825 GB SSD
IO Throughput: 5.5 GB/s RAW, 8-9 GB/s compressed
Expandable Storage: NVMe SSD slot
External Storage: USB HDD
Optical Drive: 4K Blu-ray

The design decision to use variable clocks is an interesting one. Here's how Eurogamer explained it:
It's really important to clarify the PlayStation 5's use of variable frequencies. It's called 'boost' but it should not be compared with similarly named technologies found in smartphones, or even PC components like CPUs and GPUs. There, peak performance is tied directly to thermal headroom, so in higher temperature environments, gaming frame-rates can be lower - sometimes a lot lower. This is entirely at odds with expectations from a console, where we expect all machines to deliver the exact same performance. To be abundantly clear from the outset, PlayStation 5 is not boosting clocks in this way. According to Sony, all PS5 consoles process the same workloads with the same performance level in any environment, no matter what the ambient temperature may be.

So how does boost work in this case? Put simply, the PlayStation 5 is given a set power budget tied to the thermal limits of the cooling assembly. "It's a completely different paradigm," says Cerny. "Rather than running at constant frequency and letting the power vary based on the workload, we run at essentially constant power and let the frequency vary based on the workload."

An internal monitor analyses workloads on both CPU and GPU and adjusts frequencies to match. While it's true that every piece of silicon has slightly different temperature and power characteristics, the monitor bases its determinations on the behaviour of what Cerny calls a 'model SoC' (system on chip) - a standard reference point for every PlayStation 5 that will be produced.

"Rather than look at the actual temperature of the silicon die, we look at the activities that the GPU and CPU are performing and set the frequencies on that basis - which makes everything deterministic and repeatable," Cerny explains in his presentation. "While we're at it, we also use AMD's SmartShift technology and send any unused power from the CPU to the GPU so it can squeeze out a few more pixels."

It's a fascinating idea - and entirely at odds with Microsoft's design decisions for Xbox Series X - and what this likely means is that developers will need to be mindful of potential power consumption spikes that could impact clocks and lower performance. However, for Sony this means that PlayStation 5 can hit GPU frequencies way, way higher than we expected. Those clocks are also significantly higher than anything seen from existing AMD parts in the PC space. It also means that, by extension, more can be extracted performance-wise from the 36 available RDNA 2 compute units.
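To make the constant-power idea concrete, here is a toy sketch of such a governor. Everything in it (the power budget, the activity-to-power model, the step size) is invented for illustration; it is not Sony's actual algorithm, just the "fix the power, solve for frequency" paradigm Cerny describes:

```python
# Toy constant-power, variable-frequency governor. All numbers are made up for illustration.
POWER_BUDGET_W = 200.0     # hypothetical fixed power budget for the GPU
F_MAX_GHZ = 2.23           # frequency cap reported for the PS5 GPU

def gpu_power(freq_ghz: float, activity: float) -> float:
    """Crude dynamic-power model: P grows with workload activity and ~f^3 in the DVFS range."""
    return 20.0 * activity * freq_ghz ** 3

def pick_frequency(activity: float) -> float:
    """Highest frequency (25 MHz steps) whose modeled power fits inside the fixed budget."""
    f = F_MAX_GHZ
    while f > 0.5 and gpu_power(f, activity) > POWER_BUDGET_W:
        f -= 0.025
    return round(f, 3)

# Same workload activity -> same frequency on every unit, regardless of temperature.
for load in (0.6, 0.8, 1.0):
    print(load, pick_frequency(load), "GHz")
```

Light workloads stay at the 2.23GHz cap, while a worst-case workload pushes the modeled power over budget and forces a small downclock, which is the behaviour Eurogamer describes.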
 
darkswordsman17

Mar 11, 2004
23,074
5,557
146
Interesting. I'm genuinely surprised that both systems seem to be going with roughly 1TB of NAND built in, plus expansion capability. I had a hunch we'd see a much smaller amount (like ~100GB) with an expansion.

Is the quick response from Sony going to be damage control or steal MS' thunder?

I can't imagine the underlying tech is going to be all that different, I figure the mix of base components will change.

Let's see...

I don't think it is a quick response, and I don't think there was a chance Sony would really steal thunder. I've posted this multiple times, but I think the Series X was actually targeted to be quite a bit more powerful (closer to 14-15TF), but after Microsoft found out the PS5 would be closer to 10TF they relaxed clock speeds since they didn't need to push as hard. I have a hunch (I might be wrong, but I think others, maybe Digital Foundry, posited it too) that they have been looking at reworking the streaming console that was supposed to launch alongside the Series X, beefing it up to match the PS5 sometime in 2021 at a significantly reduced price.

I've seen some of where the leaks for the specs of Oberon and Arden came from (AMD, if you're reading this, keep putting your regression test results on Github, they make for some interesting reads, kthxbye), and I still think 36 CUs clocked at 2GHz is the most likely scenario for the PS5. As such, it will probably not be as performant as the Series X. I look forward to seeing whether the PS5 uses RDNA1 with RDNA2 features like VRS and HW RTRT (like how last-gen consoles had RPM from Vega), or whether it's actually RDNA2. I hope it's the latter, because in that case both consoles will likely be 2080 tier or higher, and that would be fantastic.

Seems you were pretty on the mark (with regard to CUs and clock speeds, with Sony having a "boost" where it can go higher). I have a hunch the PS5 is more unique this time around. This looks like a reversal of the PS4/One era: this time Sony went more custom and less powerful, while Microsoft just went for strong specs across the board with fairly common components (so no weird stuff that developers have to specifically work around). I have a feeling it's more original Navi/RDNA1 with some extra features (that aren't necessarily the same as RDNA2's, even if they're for the same things, like the ray tracing), and if RDNA2 is a big leap over RDNA1, that might be extra problematic for Sony.

Sony's direction here is frankly a bit baffling; it's like they didn't learn anything from the previous gen. Then again, that seems to be a recurring problem for them (see their hubris after PS1 sales, then ignoring PS2 hardware complaints because of sales, which bit them pretty hard with the PS3). It reminds me of Microsoft with the One, where they tried to come up with novel solutions only for the technology to end up providing those solutions anyway. Take the fast SSD: Sony appears to have come up with a solution of their own when PCIe 4.0 can outdo it, while Microsoft is just rolling with standard NVMe, which is plenty capable while also enabling things like the external expansion (although it seems Sony is going pretty standard with their expansion slot, which is refreshing, especially considering there were rumors they were going to do their own proprietary thing). I get the same feeling about the GPU: Sony customized it, when RDNA2 is going to end up offering that capability plus possibly more. Microsoft just went with fairly standard parts and so seems to have the simpler, more common, and more powerful hardware.

One thing I'm not sure about is their audio solution. There was talk that it was based on path tracing, which perhaps it is, but this just sounds like yet another attempt at HRTF for the masses (which just about every company seems to have tried). I like that they're aiming to make it adapt to your speaker config (and out of the gate headphones seem to be the initial focus, mostly because they're the most controllable). I worry, though, that this will be half-baked and then canned (much like so many others have been), or, potentially even worse, that some patent spat will clip it (as happened with Aureal, and then, say, Doom 3). Plus, there's the issue of compatibility with other formats. But maybe it'll be different; maybe path tracing will be able to finally get things there.

Weren't the rumors that there was going to be both a base model and a pro model at launch? They could just unveil the pro model specs to match the XSX, after all even the XSX is supposed to have a lower end "Lockhart" model with only a 4TF GPU.

There have been rumors that there are two versions of the PS5, but from what I can gather that likely came from there being two dev kits (or dev kit modes) that clocked differently; it seems the final hardware will essentially offer both. My guess is Sony floated the idea but decided to go with one system instead. I've seen rumors that there's actually quite a bit of strife at Sony, with some demanding profitability while others make the case that some compromise on price/setup would be more palatable in the market (so one single system instead of two at launch). I'd guess the pressure of the more powerful Xbox made that argument almost moot, though; they couldn't come out with less than 10TF. I thought we might see a base system with some CUs disabled and lower clocks offering ~10TF, with binned, fully enabled chips in a more expensive model offering ~20% higher performance, but it seems the more powerful configuration is what will deliver the ~10TF. Supposedly price is a big issue right now, with Sony unable to hit the price point they think is right (which, for some reason, I have a hunch is $499 and not $399). That means we might see a $549 or even $599 launch price if the profit-first people win out, and I think the gaming-market people can see that going above $499 is likely to hurt sales, especially against a more powerful system.

I think Microsoft themselves have said Lockhart, at least in the form it was intended to be (a hybrid streaming-focused system), is cancelled. Not sure that was said publicly to gamers (I think it might've just been in an investors' conference call or something). I theorized that Microsoft might be retooling that console, though, to more closely match the PS5 (so lower spec). But back then I thought the Series X would be closer to 15TF; my thinking was that Microsoft was going big, then found out Sony was only at ~10TF with the PS5, so they felt they could rework Lockhart as a full console, just less powerful (closer to 10TF), and bring it out next year at a lower price. I think they probably decided to just roll with what they have, relax clock speeds a bit, and see how things go. They can probably match Sony's price (I won't be surprised if they undercut it either) while being significantly more powerful. If they need to, perhaps they can do an update that bumps clock speeds higher, or they could launch a cheaper, more affordable system (drop the optical drive, use a smaller amount of flash, or maybe skip internal flash entirely and rely on their new external expansion). Hit $499 with the Series X and $349 for the lower-end one. My guess is that for now they just wait. If the PS5 jumps out to a big sales lead, they greenlight the cheaper system. If they get a good lead themselves, they hold off and plan for 2-3 years down the road, where they can make the Series X the cheaper system (with some tweaks) and launch a higher-specced one. Or, if streaming becomes more viable, perhaps they go the opposite route and keep the Series X but offer a much smaller system that just streams for $199 or maybe even $99.

So it was 36 CUs, but they are quoting performance at 2.23GHz in order to close the gap with the XSX, while admitting the GPU may have to downclock. With HDMI 2.1 VRR the difference between these two may be academic either way, unless we get an actual 40% difference in performance in some games.

I don't think it'll be academic (it looks like the PS5 will have the more unique architecture, which generally isn't a good thing, especially if it's less powerful; I won't be surprised if in early games the Xbox has a sizable performance advantage, which could matter for people on older TVs stuck between 30 and 60 Hz), but I'm not sure it'll matter that much. Unless one of them gets a lot higher sales than the other, it probably won't matter a lot, as games are designed for a wider variety of hardware now, and most games will still be released on the previous-gen systems, so they'll just get scaled to what the hardware offers.
 

uzzi38

Platinum Member
Oct 16, 2019
2,629
5,938
146
@darkswordsman17

Sorry, I'm on mobile and that was a really long post to quote, so instead I'm just going to reply to one portion like this - about the PS5.

I believe it's RDNA2 but missing some features as opposed to RDNA1 with features tacked on. I was reading up on AMD's patents for RTRT, and my understanding is it would require some minor changes to the shaders and major changes to the TMUs, so if they've updated these portions I'd say it's quite likely the shaders themselves are improved also, and that's what makes this all interesting.

As things stand, I'm fairly confident that the PS5 is somewhere in between a 2070 Super and 2080 Super in performance, and the Series X is within 5% of a 2080Ti.

Though that's assuming that unlike Navi10, both consoles don't run into a bottleneck in their fixed-function hardware somewhere, something that's kind of likely given the Series X crams 7 WGPs into a single Shader Array.

(For those that don't know the terminology surrounding RDNA, open up AMD's official "die shot" of Navi10. What I mean is the Series X is of the same structure as Navi10, but each quadrant has an extra 2 Dual-Compute Units).
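A rough way to picture that hierarchy in numbers (Navi 10's layout from AMD's public material; the Series X layout as described above, with 52 of the 56 physical CUs enabled):

```python
# RDNA CU hierarchy: each WGP (dual-CU) holds 2 CUs, and WGPs sit inside shader arrays.
def total_cus(shader_arrays: int, wgps_per_array: int) -> int:
    return shader_arrays * wgps_per_array * 2

navi10 = total_cus(4, 5)     # 40 CUs (5 WGPs per shader array)
series_x = total_cus(4, 7)   # 56 physical CUs (7 WGPs per array); 52 enabled for yield
print(navi10, series_x)
```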
 

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
I don’t think the performance gap between the Xbox and PS5 will be as vast as people are imagining. Comparing raw FLOPs gives a rough idea but even with the same architecture (or close enough) it’s not easy to get perfect linear scaling when it comes to having more CUs. The Xbox should still have an edge, just not as much as one might think from the napkin math.
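For reference, here is that napkin math with the widely reported figures (52 CUs at a fixed 1.825 GHz for the Series X, the Eurogamer PS5 numbers quoted above, and the same 64 shaders/CU, 2 FLOPs/clock assumption as earlier in the thread):

```python
# Paper-spec FP32 comparison; real-world scaling with CU count is rarely this linear.
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

xsx = tflops(52, 1.825)   # ~12.15 TF at a fixed clock
ps5 = tflops(36, 2.23)    # ~10.28 TF at its peak variable clock
print(f"{xsx:.2f} vs {ps5:.2f} TF -> {100 * (xsx / ps5 - 1):.0f}% gap on paper")
```

That's roughly an 18% gap on paper, before any of the scaling or bottleneck caveats above.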

The two differ in other ways as well which ultimately makes it a question of where the bottlenecks are at in particular titles. The PS5 looks like it can stream massive amounts of data in from the SSD which for some games might be a bigger deal in terms of perception of performance. People will notice the half second extra load time a little more easily than ~15% better graphics.

It will also be interesting to see how first and third party titles differ. Third party games almost have to take a lowest common denominator approach, but thankfully the hardware is pretty similar so it won’t be too much of a pain for developers compared to the past. However first party titles get to play to the maximum strengths of each architecture. If some of what each company has worked on pans out that could lead to some unique opportunities.
 

psolord

Golden Member
Sep 16, 2009
1,916
1,194
136
So you're only speaking out of bias here. Thanks for clearing that up.

Why am I speaking out of bias here? When someone speaks the truth, is he immediately biased because he spoke against your team? Can you grow up, please? I have my AMD cards in my signature and I have uploaded thousands of Radeon videos to my channel (hobbyist, amateurish, non-monetized).


It's not my fault they have made a constant mess ever since Tahiti (ok, Navi is good). Let's see them provide something truly remarkable, I mean with good price/perf/power draw, and I will be the first to buy, mate. Navi ticks all these checkboxes, but RT is here to stay, so I will wait for something with RT.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,810
7,166
136
It will also be interesting to see how first and third party titles differ. Third party games almost have to take a lowest common denominator approach, but thankfully the hardware is pretty similar so it won’t be too much of a pain for developers compared to the past. However first party titles get to play to the maximum strengths of each architecture. If some of what each company has worked on pans out that could lead to some unique opportunities.

- This right here.

Having the more powerful console is worth squat if you cannot bring solid exclusive titles and in that regard Sony definitely has the leg up on MS.

At most, I can see cross-platform titles holding on to their FPS lock more consistently on the SeX than the PS5, but not necessarily looking much different on either.
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
Why am I speaking out of bias here? When someone speaks the truth, is he immediately biased because he spoke against your team? Can you grow up, please? I have my AMD cards in my signature and I have uploaded thousands of Radeon videos to my channel (hobbyist, amateurish, non-monetized).


It's not my fault they have made a constant mess ever since Tahiti (ok, Navi is good). Let's see them provide something truly remarkable, I mean with good price/perf/power draw, and I will be the first to buy, mate. Navi ticks all these checkboxes, but RT is here to stay, so I will wait for something with RT.
Whenever someone implies I should 'grow up please', I smile a bit quietly :) Especially because it's usually followed by such entitled delusions, as if you could determine for us all what the 'truth' is.
'My team' is the best joke of all though, as I run a 1070 in my main PC and I'll buy whatever's best for the budget I have at the time.
I've read your comment twice and I still have nothing more to say than what I already said originally. Maybe when I grow up even more, I'll have something else to say too, who knows :)
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
RDNA2 seems to be an incredible improvement over RDNA1. Considering both use the same 7nm process, RDNA2 appears to be 30% faster just from architectural improvements, plus ray tracing hardware support, plus all the features that Nvidia was touting with Turing. And considering RDNA2 is in BOTH major consoles, it's safe to say that all the PC ports are going to be optimized for RDNA2 and Zen 2, which includes AMD's hardware implementation of ray tracing.

People said the same thing about the Xbox One and PS4, and look how that turned out. PC games are different, because there are more abstraction layers between the hardware and software, and ultimately PC games are optimized around the use of DX11, DX12 or Vulkan unlike with consoles where the hardware is always consistent. I don't think Nvidia or Intel is going to be handicapped in gaming because both consoles use AMD hardware. In theory perhaps, but in practice, it doesn't matter; just like it didn't for the current generation.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
When the difference is 38x, it's just common sense to question whether it's the same metric being used. There is no standard in that space yet; everyone is making up figures that favor their architecture. If you wanna place bets on whether the FPS will come out 38x better on RDNA2 than on the equivalent Nvidia part, I'll take your bets on that. Please put down a lot of money, corona's made me take a hit in the market lately.
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
When the difference is 38x, it's just common sense to question whether it's the same metric being used. There is no standard in that space yet; everyone is making up figures that favor their architecture. If you wanna place bets on whether the FPS will come out 38x better on RDNA2 than on the equivalent Nvidia part, I'll take your bets on that. Please put down a lot of money, corona's made me take a hit in the market lately.
Nobody in their right mind wants to do that. I'm all for making sensible assumptions.
Dragging nonsense into this (AMD bad student, NVIDIA good student), on the other hand, is nothing but diverting bias. I mean, NVIDIA as a good student at bringing RTX into games? RTX is currently a piece of garbage in 95% of cases.
 

RetroZombie

Senior member
Nov 5, 2019
464
386
96
My big surprise with both consoles is the low RAM amount.

I was expecting more than 16GB:
- OSes are more memory hungry
- 4K requires 2x more RAM capacity
- RT requires more RAM
- Special audio requires more RAM
- Games should grow in complexity, so more RAM for them
It seems the market needs some new type of intermediate memory, one with more capacity than GDDR6/GDDR5/DDR4 but slower and cheaper.
Probably something very similar to NVMe but with less capacity and cheaper, without the need for flash.
 

gdansk

Platinum Member
Feb 8, 2011
2,085
2,578
136
My big surprise with both consoles is the low RAM amount.

I was expecting more than 16GB:
- OSes are more memory hungry
- 4K requires 2x more RAM capacity
- RT requires more RAM
- Special audio requires more RAM
- Games should grow in complexity, so more RAM for them
It seems the market needs some new type of intermediate memory, one with more capacity than GDDR6/GDDR5/DDR4 but slower and cheaper.
Probably something very similar to NVMe but with less capacity and cheaper, without the need for flash.
4K actually requires 4 times the memory in theory, e.g. for a frame buffer. But in practice it rarely requires 2x as much VRAM because of texture compression and many other things not scaling with resolution (like meshes).
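A minimal sketch of that frame-buffer arithmetic, assuming a simple 4-bytes-per-pixel RGBA target (real engines carry several buffers of varying formats, so this only covers the part that scales with resolution):

```python
# Frame-buffer size scaling with resolution (4 bytes per pixel RGBA assumed).
def framebuffer_mib(width: int, height: int, bytes_per_pixel: int = 4) -> float:
    return width * height * bytes_per_pixel / 2**20

fhd = framebuffer_mib(1920, 1080)   # ~7.9 MiB
uhd = framebuffer_mib(3840, 2160)   # ~31.6 MiB, exactly 4x
print(fhd, uhd, uhd / fhd)
```

Buffers like these quadruple with 4K, but textures and meshes largely don't, which is why total VRAM use grows far less than 4x in practice.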
 

Guru

Senior member
May 5, 2017
830
361
106
People said the same thing about the Xbox One and PS4, and look how that turned out. PC games are different, because there are more abstraction layers between the hardware and software, and ultimately PC games are optimized around the use of DX11, DX12 or Vulkan unlike with consoles where the hardware is always consistent. I don't think Nvidia or Intel is going to be handicapped in gaming because both consoles use AMD hardware. In theory perhaps, but in practice, it doesn't matter; just like it didn't for the current generation.
Well, actually, you are quite wrong. If you look at most DX12 and Vulkan titles, Polaris clearly had, and still has, a big advantage over Pascal. Even today RDNA has a small but decent advantage over Turing in DX12 and Vulkan titles.

And if you actually look purely at MS titles, clearly in those games AMD is heavily favored.