I wrote a piece on PS4 and Xbox One. I tried to stick to the facts, along with as fair an interpretation as I could. Some have said I was too easy on Microsoft, some think I'm too easy on Sony. Whatever, this is a start. Focused mainly on the systems and launch data.
http://losthammer.wordpress.com/201...13-and-beyond-predictions-and-what-to-expect/
Let me know what you think! I'm also looking for other articles and info to add as time goes on.
at only 32MB, it's not even large enough to hold a full frame of a 1080p game in cache
Nice article, however, this part seems wrong:
I'm not sure what you mean - even at 32 bits/pixel, a 1080p frame is only around 8 MB.
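The arithmetic is simple enough to check directly (assuming a plain 32-bit color target with no multisampling):

```python
# Size of a single 1080p color buffer at 32 bits (4 bytes) per pixel.
width, height, bytes_per_pixel = 1920, 1080, 4
frame_bytes = width * height * bytes_per_pixel
frame_mb = frame_bytes / (1024 * 1024)
print(f"{frame_mb:.1f} MB")  # about 7.9 MB, comfortably under 32 MB
```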
What makes the embedded SRAM terrible is that it doesn't even reap the benefit of being on-die SRAM: its bandwidth isn't much higher than the external RAM in the XBone, and it's still only about half that of the PS4's external RAM.
Even if the memory controller is split and can access both the embedded and external RAM simultaneously (e.g. rendering threads have full access to the ESRAM without blocking the external bus), the total bandwidth of the ESRAM and external RAM used concurrently is still slightly less than the PS4's external RAM alone.
That in and of itself doesn't really mean much, but it puts extra memory-management burden on the programmer (deciding when to use which pool and what is mapped where), all for no real benefit over a single, easier-to-manage pool.
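For concreteness, the commonly cited launch bandwidth figures (assumed here, not official measurements; Microsoft later quoted higher theoretical ESRAM numbers) work out like this:

```python
# Commonly cited launch-era bandwidth figures, in GB/s (assumptions).
xbone_esram = 102.0   # Xbox One embedded ESRAM
xbone_ddr3 = 68.3     # Xbox One external DDR3
ps4_gddr5 = 176.0     # PS4 external GDDR5

combined = xbone_esram + xbone_ddr3
print(combined, ps4_gddr5)  # 170.3 vs 176.0: even concurrent use trails the PS4
```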
Don't get me wrong, I'm not saying the SRAM is great, I'm just objecting to the claim that a whole frame won't fit in it. The PS4 memory subsystem does appear to be much better.
Nice article, good job on staying neutral in it I'd say.
Appreciated. I really am pretty happy with the DRM fix on the XB1. I obviously still don't think it's perfect by any means, but now at least I can see myself owning one down the road and not feeling dirty about it haha.
The problem is, theoretical bandwidth means so little. Look at the benchmarks for a graphics card at PCIe x16, x8, and x4*. There is so little difference. We have trouble actually saturating the lanes; DDR3 vs GDDR5 won't make a huge impact on the games.
*This was a few years back and could have possibly changed, but that is doubtful. It is just to highlight that max theoretical bandwidth isn't a great metric for gaming power.
It means a lot when the CPU and GPU share the same memory and you are rendering multisampled, depth-buffered 1080p with depth of field, etc.
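As a rough sketch of why this matters for the 32 MB ESRAM in particular (sample counts and buffer formats here are assumptions):

```python
# Footprint of a 1080p render target with 4x MSAA:
# 32-bit color plus a 32-bit depth/stencil buffer, both multisampled.
width, height, samples = 1920, 1080, 4
color_bytes = width * height * samples * 4
depth_bytes = width * height * samples * 4
total_mb = (color_bytes + depth_bytes) / (1024 * 1024)
print(f"{total_mb:.0f} MB")  # ~63 MB, roughly double the 32 MB of ESRAM
```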
I am not experienced coding to the metal, as DOS was dead by the time I got into development and I have stuck with PC. I can still see faster memory being better, but does it actually translate to better graphics or increased FPS? I don't think so. Especially with games being locked at 60 FPS, I don't think the PS4 will have that big of an advantage. Possibly at the end of its lifecycle, but for most of it the games should be close enough that people won't notice.
This is one area where consoles are more efficient. On a PC, graphics resources are duplicated in main RAM and video RAM: either the API guarantees availability of a texture once loaded and keeps a private copy to re-upload to the graphics card, or the programmer must keep a copy ready in the event of a "surface lost" (textures evicted from graphics RAM and needing reloading). In a unified memory model this doesn't happen, because the graphics RAM is the main RAM and the programmer has more direct control over it.
True. Add GDDR5 and 33% more GPU and they'd be equals, not counting OS considerations, which may or may not be measurably large.