
The Next PlayStation's Details Revealed [WIRED]


IntelUser2000

Elite Member
Oct 14, 2003
6,282
906
126
They're sure hyping the SSD. Hopefully it lives up to it. I do like the suspend feature, and the talk of using the SSD to let you jump straight into a game (though that should be possible right now, at least the bits about skipping menu junk like the startup screens), and of being able to uninstall certain parts (so if you just play multiplayer you could ditch the single-player content, or vice versa). I have my doubts about how prevalent that will be, and it's not like they couldn't already do that stuff, so tying it to the SSD seems weird. That does indicate to me that its size might be a concern. It makes me guess it's more of an embedded NAND setup (which will be great for performance). They touch a bit on how it'll change games, where they can quickly load entire games, which to me suggests some tiered-storage plans; the speed would also let them stream in very high-quality assets quickly, and that will change how game developers design games.
They aren't doing a simple change like on PCs. It's going to change very dramatically to fully take advantage of having an SSD.

They demonstrated a scene loading 10-15x faster (on PCs, SSDs are 2x, and 4x at best). Or doing things previously impossible, like skipping loading screens disguised as slow door openings or rock climbing.

They were talking about how it's the storage that limits how fast the scenes update when you are swinging around in Spider-Man, and about moving to another world seamlessly, similar to the Doctor Strange movie.

There was a thread that went over the patents and supported the claims, which otherwise seemed ridiculous.

The differences basically boil down to consoles being a dedicated gaming platform and PCs being general purpose. PCs will get there, but progressively, over many years. Probably 10-15. Plus they might need Optane DIMMs or some other form of storage-class memory to be widespread.
 
Mar 11, 2004
19,214
1,750
126
They aren't doing a simple change like on PCs. It's going to change very dramatically to fully take advantage of having an SSD.

They demonstrated a scene loading 10-15x faster (on PCs, SSDs are 2x, and 4x at best). Or doing things previously impossible, like skipping loading screens disguised as slow door openings or rock climbing.

They were talking about how it's the storage that limits how fast the scenes update when you are swinging around in Spider-Man, and about moving to another world seamlessly, similar to the Doctor Strange movie.

There was a thread that went over the patents and supported the claims, which otherwise seemed ridiculous.

The differences basically boil down to consoles being a dedicated gaming platform and PCs being general purpose. PCs will get there, but progressively, over many years. Probably 10-15. Plus they might need Optane DIMMs or some other form of storage-class memory to be widespread.
Right, it's mostly about software optimization, and I don't think even most devs quite realize what system throughput like that would make possible.

I've talked about this type of thing before. Like using HBM for system memory and then optimizing the memory slots for NAND instead (while using the HBM as a buffer for the NAND), and then leveraging PCIe for larger storage (and SATA beyond that) to stream in assets.

The way Sony is talking, I'm banking on ~100GB of embedded NAND, with the system GDDR6 as the buffer and an optimized software stack that treats the NAND almost as cache (think of the system memory as L1/L2 cache, with the NAND as L3/L4). They could probably design an interface that would let them put the NAND on a separate board (so they could test it prior to assembly, and replace the NAND should it wear out; not saying they will, I'm actually guessing not, but it would be possible, as I think they could do close to 100GB/s through an optimized IF link using basically PCIe wiring now).
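To make the "treat NAND almost as cache" idea concrete, here's a minimal sketch in Python. Everything here is hypothetical (class name, sizes, the LRU policy): it only illustrates a fast tier (think system GDDR6) fronting a larger NAND pool, not any actual PlayStation design.

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier store: a small fast buffer in front of a larger
    embedded-NAND pool, managed like a CPU cache (LRU eviction)."""

    def __init__(self, buffer_capacity, nand):
        self.buffer_capacity = buffer_capacity  # entries the fast tier holds
        self.nand = nand                        # backing dict: asset -> data
        self.buffer = OrderedDict()             # LRU-ordered fast tier

    def read(self, asset):
        if asset in self.buffer:                # hit in the fast tier
            self.buffer.move_to_end(asset)
            return self.buffer[asset]
        data = self.nand[asset]                 # miss: stream from NAND
        self.buffer[asset] = data
        if len(self.buffer) > self.buffer_capacity:
            self.buffer.popitem(last=False)     # evict least-recently-used
        return data
```

The point of the analogy is that, as with L1/L2 vs. L3/L4, software sees one pool while hot assets migrate to the fast tier automatically.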
 

IntelUser2000

I've talked about this type of thing before. Like using HBM for system memory and then optimizing the memory slots for NAND instead (while using the HBM as a buffer for the NAND), and then leveraging PCIe for larger storage (and SATA beyond that) to stream in assets.
Optimizing it for the DRAM standard doesn't help when the media (NAND) is the bottleneck.

SSD transfer speeds are largely a result of per-chip bandwidth x number of channels, not the DRAM buffer. The DRAM buffers on NAND SSDs aren't for caching userspace data (like recently opened applications, or used texture data as with Optane Memory caching), but for the pointers that map logical addresses, so they aren't bandwidth-intensive at all. The ratio on the 970 Pro, for example, is 1000:1, meaning a 1TB SSD has a 1GB DRAM buffer. That's enough for 4KB pages with each entry taking 4 bytes or so.

 
Mar 11, 2004
I'm not talking about the PS5 doing that. You keep misunderstanding what I'm posting. I understand what the DRAM buffer on SSDs is for, and that's not what I'm talking about when I say use the system memory as the buffer, because I'm assuming the NAND they're using is not set up like a typical SSD. You seem to get what I actually mean for them to be doing (where the system memory would be caching the userspace stuff). For the console, I'm saying: leverage the bandwidth of the GDDR system memory to offer a wider bus for the NAND. Basically treat the combination of the two like how a CPU treats cache.

My point in comparing the console's GDDR+NAND with a theoretical PC using HBM as system memory and moving NAND into the memory slots (the point there being to give NAND a wide bus/channel setup like DRAM, to push performance to DRAM-like levels, since, as you said, that's the limitation right now) is that you'd have to drastically exceed what a unified, tailored system environment like a console offers in order to get similar results from PCs that are not optimized for that.

Further, I'd guess they'd be looking at treating it as a singular memory pool so as to deal with the logical addressing. You couldn't quite do that on PC, since software wouldn't be optimized for it, so you'd need to drastically exceed the typical performance at each step: you'd end up with a speedup roughly similar to what you'd see from a system optimized like a console, where on PC you lose most of the peak performance to the lack of optimization, but because the raw performance is so high it still brings a huge speedup to the overall throughput of the system. A closed, optimized platform like a console could achieve the same with a much lower spec.

I think we're seeing things in a similar way, but because I'm talking about other aspects, you're not quite getting what I mean.

For the consoles, I'm expecting a pool of embedded NAND, probably with a customized I/O chip between it and the system memory, where the I/O chip handles all the data coherency and has memory-bus-like channel routing to the NAND, so that it offers much higher performance than a typical standalone expansion SSD on PC would. Then from the I/O chip, it's funneled to the processors themselves (be they a unified APU or individual chiplets). Are you thinking something different? They'd likely need older, more expensive NAND with high write endurance, which to me would be fine, since the lower density would work well for leveraging a wider bus (for higher performance). It'd be more expensive, hence part of the reason they'd go with a relatively small size.

Which I don't know, would it be possible to chain the NAND chips to the GDDR6 chips? So the I/O chip would have the normal GDDR memory bus, but each of the GDDR chips would have a bus routing to a NAND chip beyond it. I'm assuming not, and the NAND would need its own bus, or they'd need some controllers between the GDDR6 and the NAND? Plus the GDDR6 would need two sets of routing (to/from the memory controllers, and to/from the NAND). Or even if it were possible, it might slow things down, since each chip could basically only route to the NAND or to the memory controller at any one time (although perhaps that wouldn't be a major issue)?

Then from there they could have a little bay, or a cover with an M.2 slot or SATA port, where you could install a drive purely for storage. It wouldn't play into how they're leveraging the embedded NAND for overall system performance (though having something faster would still be beneficial when you do need to swap to/from it). So you'd save your game library to that storage, where it'd be pretty quick to load (much quicker than previously, but nowhere near enough for the gameplay-impacting uses of the fast SSD they're talking about). When you play a game, it'd be installed to the embedded NAND, since you'll likely play a relatively limited set of games quite a lot; once you move on, it can be uninstalled from the NAND to make room for whatever new game you're playing.

So a new game comes out, you play it for a good week straight, during which it's installed on the fast NAND; then you can uninstall it to make room for a different game. Or maybe there are a few games you play online regularly: you could install just the multiplayer portions and have them all ready for quick jumping into a game/match (hence Sony talking about that type of thing as a feature). Because of the limited amount of NAND, it has to juggle what is installed and what isn't. It's possible that some things are never installed and just streamed in (pre-rendered cinematics, for instance); or perhaps it installs the base game engine and then swaps assets depending on where you are in the game, so when you start it moves over, say, the first quarter of the assets, and as you progress it moves over more while letting earlier ones go. Which is the thing: it's a good idea that will still likely run into limitations.

Some games are pushing 100GB these days, and that's likely to grow as they push for higher resolutions and higher-quality assets. Some stuff, like uncompressed audio (which I've seen claimed as one of the big reasons game sizes ballooned so much), could be streamed in off the disc or storage, and some possibly straight over the internet - things like video and audio, cinematics and the like (not things needed for gameplay processing).
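The install/evict juggling described above can be sketched in a few lines. This is purely illustrative (the class name, the GB sizes, and evict-oldest-first are all assumptions, not anything Sony has described):

```python
class FastNandPool:
    """Toy model of juggling game installs on a small fast-NAND pool.
    Sizes are in GB; evicted titles go back to bulk storage."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.installed = {}  # title -> size_gb; dict order = install age

    def install(self, title, size_gb):
        assert size_gb <= self.capacity_gb, "title can never fit"
        # Evict the longest-idle titles until the new one fits.
        while sum(self.installed.values()) + size_gb > self.capacity_gb:
            del self.installed[next(iter(self.installed))]
        self.installed[title] = size_gb
```

For example, on a hypothetical 100GB pool, installing an 80GB game, a 15GB multiplayer-only install, and then a 60GB new release would push the 80GB game back out to bulk storage.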

Things are still murky enough, though, that I think there's a decent chance Sony really is just talking about optimizing the system for a PCIe 4.0 SSD (maybe with a custom, even wider bus; like I said, it's absolutely possible for them to do that either via a proprietary connection or by leveraging IF through something like a PCIe x16 slot). This definitely wouldn't be the first time Sony drastically overhyped certain features or performance. I think it's interesting that they are talking more about relatively simple aspects of what an SSD enables, which leads me to believe it very well might not be the type of thing I'm talking about (which I think could change how games are designed).

Which, I just realized there's actually a pertinent analogy for what I was talking about with the GDDR6 and NAND tiering: the Xbox One with its ESRAM and DDR3. In the new consoles, the GDDR6 would be like the ESRAM, and the NAND like the DDR3. While the overall bandwidth increase might not be that impressive, it should mean a huge change in the amount of data actually being worked with (going from 32MB of ESRAM to 16GB of GDDR6 - yes, there's a latency penalty, but the orders-of-magnitude increase in the size of data you can handle more than makes up for it; and then going from 8GB of DDR3 to maybe similar bandwidth but 100GB of NAND - again, yes, latency, but because it's non-volatile there are scenarios that are big wins, i.e. suspend/startup of a game).
 

IntelUser2000

I'm not talking about the PS5 doing that. You keep misunderstanding what I'm posting.
Ok, I'm sorry about that. I probably rushed to a conclusion in my head. I deleted the older, lengthier reply and just put up the link as more of a summary, because I felt that you did understand.

I understand what the DRAM buffer on SSDs is for, and that's not what I'm talking about when I say use the system memory as the buffer, because I'm assuming the NAND they're using is not set up like a typical SSD. You seem to get what I actually mean for them to be doing (where the system memory would be caching the userspace stuff). For the console, I'm saying: leverage the bandwidth of the GDDR system memory to offer a wider bus for the NAND. Basically treat the combination of the two like how a CPU treats cache.
Keep this point in mind as I try to address your points.

My point in comparing the console's GDDR+NAND with a theoretical PC using HBM as system memory and moving NAND into the memory slots (the point there being to give NAND a wide bus/channel setup like DRAM, to push performance to DRAM-like levels, since, as you said, that's the limitation right now) is that you'd have to drastically exceed what a unified, tailored system environment like a console offers in order to get similar results from PCs that are not optimized for that.
The limitation isn't merely the bandwidth - otherwise SSDs on PCs would be much faster - but the way developers treat the ecosystem and code their applications.

Of course, the bandwidth helps, but the patent only talked about a max of 10GB/s, and some of the figures were lower, similar to the sequential speeds of the current early-generation PCIe 4.0 SSDs.

First, they are looking into replacing the DRAM buffer with SRAM. SRAM is much faster. Yes, it is more costly at the same size, but this is where being a machine dedicated to a single purpose helps.

Rather than using 4KB as the base addressing size, they are going to increase it substantially, which reduces the amount of buffer needed. They were talking about sizes of x megabytes. At 1MB, the buffer can be reduced to 4MB, for example (256KB with 16MB sizes), on the aforementioned 1TB SSD. This also helps NAND SSDs because performance is worst at the 4KB size and improves as the transfer size grows, raising bandwidth. Internally you can also use a method such as coalescing to improve bandwidth (with some latency impact).
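The arithmetic behind those buffer sizes is just "one entry per logical page, 4 bytes per entry". A quick check (using binary units, which is what the figures above imply):

```python
TiB, MiB, KiB = 2**40, 2**20, 2**10

def buffer_size_bytes(capacity_bytes, page_bytes, entry_bytes=4):
    """Lookup-buffer size: one entry per logical page."""
    return (capacity_bytes // page_bytes) * entry_bytes

# The same 1TB drive at the addressing sizes discussed above:
print(buffer_size_bytes(TiB, 4 * KiB) // MiB)   # 1024 -> ~1GB, needs DRAM
print(buffer_size_bytes(TiB, 1 * MiB) // MiB)   # 4    -> 4MB, small enough for SRAM
print(buffer_size_bytes(TiB, 16 * MiB) // KiB)  # 256  -> 256KB
```

So growing the page size from 4KB to 1MB shrinks the mapping structure by 256x, which is what makes on-die SRAM plausible in place of a DRAM chip.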

Costs can be lowered too, because the SRAM can be tiny. They also said it'll allow larger SSDs to scale better in price, since the controller can take up a smaller share of the BoM.

For the consoles, I'm expecting a pool of embedded NAND, probably with a customized I/O chip between it and the system memory, where the I/O chip handles all the data coherency and has memory-bus-like channel routing to the NAND, so that it offers much higher performance than a typical standalone expansion SSD on PC would. Then from the I/O chip, it's funneled to the processors themselves (be they a unified APU or individual chiplets). Are you thinking something different? They'd likely need older, more expensive NAND with high write endurance, which to me would be fine, since the lower density would work well for leveraging a wider bus (for higher performance). It'd be more expensive, hence part of the reason they'd go with a relatively small size.
Endurance is not an issue: consoles are mainly for gaming, and once a game is installed it's mostly a read-only workload. This is one reason they can boost speeds to crazy levels, unlike on a PC, where the manufacturer cannot assume what kind of access patterns software will use, and you'd need technology currently not practical in a console (such as Optane in a DIMM form factor). The file system is supposed to know this, from my understanding, and it's optimized for such read operations when it detects them.

If it were like on PCs, no NAND could survive. The highest-endurance NAND SSD ever is the Intel DC P3700, at 17 DWPD on select models. At 100MB/s of sustained writes, your 100GB drive would hit that daily limit in just five hours. Clearly unattainable with a console's low-cost requirements.
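That five-hour figure falls out of the DWPD arithmetic directly; a quick sketch (the function name is mine, the endurance rating is the P3700's as cited above):

```python
def hours_until_daily_budget(capacity_gb, dwpd, write_mb_per_s):
    """Hours of sustained writing to burn one day's rated write
    budget (capacity x drive-writes-per-day)."""
    budget_mb = capacity_gb * 1000 * dwpd        # one day's write budget, MB
    return budget_mb / (write_mb_per_s * 3600)   # seconds -> hours

# P3700-class endurance (17 DWPD) on a 100GB pool at a sustained 100MB/s:
print(round(hours_until_daily_budget(100, 17, 100), 1))  # 4.7
```

So even the most write-tolerant NAND shipping today would exhaust a day's budget in under five hours of PC-style sustained writing, which is why the mostly-read console workload matters so much.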

Which I don't know, would it be possible to chain the NAND chips to the GDDR6 chips? So the I/O chip would have the normal GDDR memory bus, but each of the GDDR chips would have a bus routing to a NAND chip beyond it. I'm assuming not, and the NAND would need its own bus, or they'd need some controllers between the GDDR6 and the NAND?
Yeah, you can't daisy-chain the two, if that's what you are asking. The NAND needs its own separate link, or a controller between the two, as you put it. If they go that route, they could either use something like the Host Memory Buffer standard, where system memory is used (in this case GDDR6), or have a tiny chip dedicated to the NAND, just like in most SSDs.

Then from there they could have a little bay, or a cover with an M.2 slot or SATA port, where you could install a drive purely for storage. It wouldn't play into how they're leveraging the embedded NAND for overall system performance...

Which is the thing: it's a good idea that will still likely run into limitations. Some games are pushing 100GB these days, and that's likely to grow as they push for higher resolutions and higher-quality assets. Some stuff, like uncompressed audio (which I've seen claimed as one of the big reasons game sizes ballooned so much), could be streamed in off the disc or storage, and some possibly straight over the internet - things like video and audio, cinematics and the like (not things needed for gameplay processing).
It sounds to me like you are saying they are going to have a slow-NAND + fast-NAND tier?

The problem with this is also cost, in addition to the issues you mentioned (such as deciding which drive to install what on, and game sizes ballooning over time). You'd need to account for two different sets of NAND, and it complicates system design.

...like I said, it's absolutely possible for them to do that either via a proprietary connection or by leveraging IF through something like a PCIe x16 slot). This definitely wouldn't be the first time Sony drastically overhyped certain features or performance. I think it's interesting that they are talking more about relatively simple aspects of what an SSD enables, which leads me to believe it very well might not be the type of thing I'm talking about (which I think could change how games are designed).
It absolutely could use a proprietary connection, though PCIe 4.0 sounds sufficient.

Which, I just realized there's actually a pertinent analogy for what I was talking about with the GDDR6 and NAND tiering.
Yes, you are definitely talking about tiering - sorry if I missed some of your points. Further tiering is something we'll see on PCs, but that's not what they are doing on the consoles, at least based on the patents.

Summarizing the changes:
-Larger addressing sizes mean much larger page sizes can be used. That reduces the size of the buffer significantly, not only reducing cost but allowing much faster memory, such as SRAM, to be used. It also lets the SSD run closer to peak speeds, since it operates in a more optimal range than the brutal 4KB size.
-The file system is being changed to optimize for the console's mostly-read nature, and to allow the addressing changes stated above. It can also recognize which applications are read-only. The vast majority of the time, the console will be used for gaming.
-WORM-like behavior on the consoles also means there's little worry about endurance on the console SSDs, because you install once and read many times. Things like saving game progress are a minimal load anyway.
-Oh, and one thing I didn't mention above is that it might also have a dedicated chip for decompression. This reduces the pressure on the CPU, so the I/O is less bottlenecked.
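The decompression point is easy to demonstrate: if assets ship compressed and something other than the game's CPU budget expands them on the way in, the effective read bandwidth is the raw NAND bandwidth times the compression ratio. A trivial sketch, with plain zlib standing in for whatever dedicated hardware unit they might use (the asset bytes are made up):

```python
import zlib

# A hypothetical repetitive game asset, shipped compressed.
asset = b"grass texture tile " * 4096
packed = zlib.compress(asset)

# The decompressor (here zlib on the CPU; on console, a dedicated
# chip) expands it as it streams in from NAND.
restored = zlib.decompress(packed)
assert restored == asset

ratio = len(asset) / len(packed)  # effective-bandwidth multiplier
```

Real game data won't compress anywhere near this well, but the principle is the same: every byte saved on the wire is bandwidth the NAND doesn't have to deliver.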

None of this is practical in a general-purpose PC: PCs are not read-only, addressing and file-system changes would introduce compatibility issues, and dedicated chips would be short-lived since no one would want them. Microsoft also mentioned similar gains for their Scorpio.
 
