
News [Wired] PS5 confirmed to use 7nm Zen 2, Navi, SSD

Ha 🙂 This is a fun discussion. No pettiness or console warring. No PC supremacists throwing mud on the peasant box. Just a few mates round some pints for some conversation.

Weirdly, the least interesting thing is wondering how fast it will be for gaming. I'm sure it will be a great console. What is far more interesting is what the engineering will look like, what choices they make, and how they manage heat/power/space/modularity.

I'm concerned about some of the ideas floated around soldered-on storage, à la Apple. It would make for a situation where a single bad NAND cell could trash an entire $500 box, so I hope not. Perhaps more importantly, it would make replacement impossible, unless they also offer an M.2 port, which I would find an acceptable middle ground.

I'm expecting a few (hopefully front AND back) USB-C/3.x ports, you know, the good ones. USB naming has gone wacko lately, but the faster (20 Gbps?) port would be great for plopping in a $99 2TB external SSD around 2023ish.

I also don't have anything at all against an external power brick, so long as it's well designed. It leaves room for a more elegant box with less heat and more space for cooling. The OG PS4 was almost a masterpiece of design that managed to get the PSU inside as well, but it wasn't as quiet as Microsoft's hilarious VCR; a middle ground could have been achieved, I think. The X1S is definitely a nice design. I honestly don't care much about the style, since it's not like I'm ever going to stare at it for more than a few seconds, but something that isn't loud or failure-prone is a good thing. It seems like Sony and MS really, really improved things over gen7 there. I frequently donate time at a food bank and resale shop, and a LOT of defective PS3s and 360s get dropped off, but not one dead Xbox One or PS4 this gen. Bizarrely, I see a pretty solid stream of dead or half-dead Wii Us, and even a few dead Switches come through.
 
Ha 🙂 This is a fun discussion. No pettiness or console warring. No PC supremacists throwing mud on the peasant box.
Because people expect it to have 16 threads at 3.6 GHz and a GPU at least equal to a 1080... kind of hard to make fun of that.
Sounds ridiculous to me, because the performance jump from the PS4 would be way too big, and Sony doesn't need it to sell a new gen, but who knows.
 
Either Navi is vastly more efficient than previous AMD designs, or the specs are unrealistic for a ~200W System TDP.

I think Navi will be better than Vega and Polaris in that department, but ~14 TFLOPS for the PS5 also seems unreasonable. I know architectures aren't directly comparable, but that's slightly more than a 2080 Ti has. The PS4 Pro has a little over 4 TFLOPS for comparison.

I think it depends on whether Navi is capable of using more than 64 CUs. If it isn't, then 14 TFLOPS is probably too far a stretch for 64 CUs, as that requires a 1.8 GHz clock speed, and I doubt that will be cheap: you would need to bin the best silicon for the console so it can run at the lowest voltage.

OTOH, if Navi can support more than 64 CUs, it becomes entirely possible. An 80 CU chip with 76 active units (4 turned off to improve yields) running at 1.5 GHz would be ~14.6 TFLOPS, and I expect 1.5 GHz would be far easier to hit within the given power budget than 1.8 GHz, even with more CUs.
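For anyone checking the arithmetic: GCN-era FP32 throughput is CUs × 64 shaders × 2 FLOPs/clock × clock speed. A quick sketch of the figures above, assuming Navi keeps 64 shaders per CU (which is not confirmed):

```python
# FP32 throughput for a GCN-style GPU: CUs x 64 shaders x 2 FLOPs/cycle x clock.
# Assumes Navi keeps GCN's 64 shaders per CU, which is not confirmed at this point.
def tflops(cus: int, clock_ghz: float, shaders_per_cu: int = 64) -> float:
    return cus * shaders_per_cu * 2 * clock_ghz / 1000

print(tflops(64, 1.8))    # ~14.7 TFLOPS: 64 CUs need roughly 1.8 GHz
print(tflops(76, 1.5))    # ~14.6 TFLOPS: 76 active CUs at a gentler 1.5 GHz
print(tflops(36, 0.911))  # ~4.2 TFLOPS: PS4 Pro, for reference
```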
 
I 100% expect the PS5 to use dynamic boosting; no way will it just sit at one clock speed like the PS4. That way, if a game doesn't thread well and only uses, say, two threads, those can ramp way up while the others shut off. If a game needs all the available threads, then per-core speeds come down a bit.
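A toy model of that kind of budget-driven boosting, where every number is invented purely for illustration (consoles don't publish their V/f curves):

```python
# Toy model of per-core boost under a fixed power budget:
# fewer busy cores -> more watts each -> higher clocks.
# All constants are invented for illustration.
import math

POWER_BUDGET_W = 24.0  # hypothetical CPU slice of the SoC power budget
MAX_BOOST_GHZ = 3.6    # hypothetical single-core ceiling
K = 0.3                # hypothetical watts per GHz^2 per core (power ~ f^2)

def boost_clock(active_cores: int) -> float:
    per_core_w = POWER_BUDGET_W / active_cores
    return min(MAX_BOOST_GHZ, math.sqrt(per_core_w / K))

for n in (2, 4, 8):
    print(f"{n} busy cores -> ~{boost_clock(n):.1f} GHz each")
# 2 busy cores hit the 3.6 GHz cap; all 8 busy settles near ~3.2 GHz.
```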

The performance difference (on the CPU side at least) is going to be massive when comparing Jaguar in the PS4 to Ryzen 2 in the PS5. Even ignoring any higher clock speeds Ryzen may bring, the IPC gain in a clock-for-clock comparison is huge on its own.
 
Because people expect it to have 16 threads at 3.6 GHz and a GPU at least equal to a 1080... kind of hard to make fun of that.
Sounds ridiculous to me, because the performance jump from the PS4 would be way too big, and Sony doesn't need it to sell a new gen, but who knows.

Yeah it does seem like it should be pretty stout to start out with for sure.

It is just my feeling that this is them putting full trust in Cerny, with the directive that the PS5 make a pretty big statement about the value of their hardware versus just streaming things. Of course, PC tech will keep pushing the envelope further at various budgets, but I expect a period where it's not really possible to equal the PS5 dollar for dollar. My reasons for preferring PC aren't solely performance-based, but I still love spending time with some quality console exclusives. I waited a good while to jump into the PS4 with a Slim, and I may do the same this time around.

Gen8 was a great time for PC gaming; it seems third parties finally got their heads straight about not skipping the platform, and even Microsoft came in with its IPs. It should keep on improving with Gen9, it seems 🙂
The hardware will be fascinating: how much can they cram in, and what will their price strategies be? It's too early to tell.
 
The specs are on the higher end for now, but they will be a bit less impressive by a 2020 release. Even if we're pessimistic, though, eight Ryzen 2 cores at 1.6 GHz would be a lot faster than the current eight (seven confirmed for gaming?) Jaguar cores at 1.6 GHz.
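Back-of-the-envelope on that claim, treating performance as cores × clock × IPC and hedging Zen 2's IPC as roughly double Jaguar's (a rough guess, not a measured figure):

```python
# Crude clock-for-clock comparison: perf ~ cores x clock x IPC.
# The ~2x IPC ratio for Zen 2 over Jaguar is a rough guess, not a measurement.
def relative_perf(cores: int, clock_ghz: float, ipc: float) -> float:
    return cores * clock_ghz * ipc

jaguar = relative_perf(7, 1.6, 1.0)  # seven cores usable by games, IPC normalized to 1
zen2 = relative_perf(8, 1.6, 2.0)    # same clock, assumed ~2x IPC
print(f"Zen 2 @ 1.6 GHz ~ {zen2 / jaguar:.1f}x the Jaguar setup")  # ~2.3x before any clock bump
```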
 
It would be sad if, as a marketing gimmick, the rumoured TFLOPS figure were created by adding the CPU's theoretical maximum performance to the GPU's theoretical maximum, combined with the theoretical maximum of any other accelerator that might be inside the PS5.

Other companies have done this before.
 
I do wonder how much memory it will have. My guess is 256-bit GDDR6, which means either 16 or 32 GB... and 32 feels very high given current DRAM cartel pricing.
 
I do wonder how much memory it will have. My guess is 256-bit GDDR6, which means either 16 or 32 GB... and 32 feels very high given current DRAM cartel pricing.
I'm placing my bets on two stacks of Flashbolt-esque (2×410 GB/s @ 16 GB per cube) or Icebolt-esque (2×512 GB/s @ 16+ GB per cube) cubes: one for the CPU and one for the GPU, with the CPU one divided between system and gaming use and the GPU one entirely for gaming.
 
I've never heard of "Flashbolt" or "Icebolt" DRAM. Care to link? (This is newer and better than HBM2, no? Is it JEDEC-standardized, or something proprietary from "our friends" at RAMBUS?)
 
I've never heard of "Flashbolt" or "Icebolt" DRAM. Care to link? (This is newer and better than HBM2, no? Is it JEDEC-standardized, or something proprietary from "our friends" at RAMBUS?)
Flashbolt
https://www.anandtech.com/show/14110/samsung-introduces-hbm2e-flashbolt-memory-16-gb-32-gbps

^-- HBM2e Gen2

Icebolt is the HBM3 successor to HBM2;
Linkedin -> HBM3 16Gb Icebolt Planning
https://www.heise.de/newsticker/mel...peicher-mit-410-GByte-s-pro-Chip-4341299.html <-- In one of the images.

Three companies are doing HBM2e Gen2 (3.2 GHz): Samsung, SK Hynix, and Micron, and the same three are doing HBM3 in 2020 as well.
 
Flashbolt
https://www.anandtech.com/show/14110/samsung-introduces-hbm2e-flashbolt-memory-16-gb-32-gbps

^-- HBM2e Gen2

Icebolt is the HBM3 successor to HBM2;
Linkedin -> HBM3 16Gb Icebolt Planning
https://www.heise.de/newsticker/mel...peicher-mit-410-GByte-s-pro-Chip-4341299.html <-- In one of the images.

Three companies are doing HBM2e Gen2 (3.2 GHz): Samsung, SK Hynix, and Micron, and the same three are doing HBM3 in 2020 as well.

Ooh, interesting idea. Isn't Flashbolt the high-end device, though? Wasn't someone working on a reduced-cost version of HBM2?
 
I do wonder how much memory it will have. My guess is 256-bit GDDR6, which means either 16 or 32 GB... and 32 feels very high given current DRAM cartel pricing.

DRAM has crashed in price to some of its lowest levels... so this statement confuses me.
32GB does seem like a stretch, but who says they have to spec their RAM capacity in powers of two?
 
DRAM has crashed in price to some of its lowest levels... so this statement confuses me.
32GB does seem like a stretch, but who says they have to spec their RAM capacity in powers of two?

For optimal use of a binary-based system, powers of two work best.
The whole design has to be made in such a way that there are no bottlenecks, or it is a waste of effort and resources.

This is an interesting article about the old GTX 970 and what can happen when the underlying design does not have a powers-of-two memory architecture:
https://hexus.net/tech/news/graphics/79925-nvidia-explains-geforce-gtx-970s-memory-problems/



Nvidia and AMD make engineering decisions on how best to harvest the full-fat architecture of a particular class of GPU. The GM204's premier interpretation is the GeForce GTX 980. The GTX 970, meanwhile, appears to keep most of the GTX 980's goodness intact, save for the disabling of three of the 16 SMM units, thus pushing the number of cores down from 2,048 to 1,664, texture units from 128 to 104, and so forth. We praised Nvidia for keeping the heavy-lifting back-end the same as GTX 980, meaning 64 ROPs and 2,048KB of L2 cache. This, Jonah revealed, is not the case.
GTX 970 is not what you first thought it was

It turns out that the GeForce GTX 970 has 56 ROPs, not the 64 listed in reviewers' guides back in September, and such knowledge leads to interesting questions. Nvidia has clearly known about this inaccuracy for some time but has chosen not to contact the technical press with an update or explanation. Having fewer ROPs isn't a huge problem for the top-end of the GeForce GTX 970 - we'll explain why a little later on - but it does have important ramifications for the memory subsystem, and it answers the 3.5GB/0.5GB issue referred to above.


This Nvidia-provided slide gives brief insight into how the GTX 970 is constructed. The three disabled SMMs are shown at the top, with the 256KB L2 slices and pairs of 32-bit memory controllers on the bottom. Notice the greyed-out right-hand L2 for this GPU? Tied into the ROPs as they are, this is a direct consequence of reducing the overall ROP count. The GTX 970 has 1,792KB of L2 cache, not 2,048KB, but, as Alben points out, it still has a greater cache-to-SMM ratio than the GTX 980.

Historically, including up to the Kepler generation, cutting off the L2/ROP portion would require the entire right-hand quad section to be deactivated too. Now, with Maxwell, Nvidia is able to use some smarts and still tap into the 64-bit memory controllers and associated DRAM even though the final L2 is missing/disabled. In other words, compared to previous generations, it can preserve more of the performance architecture even though a key part of a quad is purposely left out. This is good engineering.

But while it's still accurate to say the GeForce GTX 970 has a 256-bit bus through to a 4GB framebuffer - the memory controllers are all active, remember - cutting out some of the L2 while keeping all the MCs intact causes other problems: there is no eighth L2 to access as usual, meaning the seventh L2 must be hit twice. The way the L2s work makes this a very undesirable exercise, Alben explains, because it forces all the other L2s to operate at half their normal speed.


It is all designed around fixed powers-of-two widths:

2^n-wide buses.
2^n-wide queues.
2^n-deep queues.
2^n-wide buffers.
2^n-deep buffers.
2^n-deep memory arrays.
etc.
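To put numbers on the 970's penalty described above (using its published 7 Gbps GDDR5 and standard 32-bit channels):

```python
# Why the GTX 970's 3.5 GB / 0.5 GB split hurts: with one L2 slice disabled,
# seven 32-bit channels form the fast stripe and the eighth is a slow leftover.
GDDR5_GBPS_PER_PIN = 7   # 7 Gbps GDDR5, as shipped on the GTX 970
CHANNEL_WIDTH_BITS = 32  # one GDDR5 chip per 32-bit channel

def bandwidth_gbs(channels: int) -> float:
    return channels * CHANNEL_WIDTH_BITS * GDDR5_GBPS_PER_PIN / 8

print(bandwidth_gbs(8))  # 224 GB/s: the headline spec across all 4 GB
print(bandwidth_gbs(7))  # 196 GB/s: the 3.5 GB fast segment
print(bandwidth_gbs(1))  # 28 GB/s: the orphaned 0.5 GB segment
```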
 
^ I think he meant that if it's not 16 GB, it doesn't necessarily have to be 32 GB of RAM (e.g., the Xbox One X has 12 GB of GDDR5).
 
It's based upon how many memory controllers it has... If it's 256-bit, that would be 8 controllers, which means multiples of 8. Wikipedia claims the X1X is 384-bit, which means multiples of 12.
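A quick sketch of that controller math, assuming standard 32-bit GDDR channels and today's 8 Gb (1 GB) and 16 Gb (2 GB) die densities:

```python
# Capacity options fall out of the channel count: bus_width / 32 channels,
# with one chip per channel (or two in clamshell mode).
# Assumes 32-bit GDDR channels and 1 GB / 2 GB chip densities.
def capacity_options_gb(bus_width_bits: int) -> list:
    channels = bus_width_bits // 32
    options = set()
    for gb_per_chip in (1, 2):            # 8 Gb and 16 Gb dies
        for chips_per_channel in (1, 2):  # normal vs clamshell
            options.add(channels * chips_per_channel * gb_per_chip)
    return sorted(options)

print(capacity_options_gb(256))  # [8, 16, 32]: the PS5 guesses above
print(capacity_options_gb(384))  # [12, 24, 48]: X1X's 12 GB is 12 x 1 GB chips
```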
 
Ooh, interesting idea. Isn't Flashbolt the high-end device, though? Wasn't someone working on a reduced-cost version of HBM2?
HBMx costs less than GDDRx in total end cost. Low-cost HBM is more of a successor to WIO2, a mobile/smartphone-focused HBM positioned against LPDDRx.
 
I'm under the impression that Zen 2's L3 cache might be severely cut, like 1 MB/core... so 8 MB total instead of the 32 MB in Matisse.
The L3 would act mostly as a buffer for thread communication within each CCX, saving area, yield, and power.
 
I'm under the impression that Zen 2's L3 cache might be severely cut, like 1 MB/core... so 8 MB total instead of the 32 MB in Matisse.
The L3 would act mostly as a buffer for thread communication within each CCX, saving area, yield, and power.

I can definitely see it being cut down; that would be the same ratio as Raven Ridge, and that saved a fair chunk of die area.
 
I can definitely see it being cut down; that would be the same ratio as Raven Ridge, and that saved a fair chunk of die area.

It makes sense to me as well. The copious L3 is great for lots of general-purpose multitasking loads and heavily loaded parallel OoO work, but it seems superfluous in a console with much greater purpose optimization, and with a presumably very fast shared RAM pool compared with typical DDR4-2133 to -3200 speeds.
 
I'm not liking this. How much money does AMD get out of these console APUs? I can't shake the feeling AMD is selling a super APU to Sony for less money than they get out of a 200GE, while we get an Intel-like refresh of RR.

And seriously, how much will this APU cost? Sony has to sell the console at $400, most likely with a 1TB SSD; AMD can't be getting more than $100 out of each CPU, and that's already too much.
 
Not sure what the GPU costs, and we don't know enough about it to make a good guess, but since the CPU is just a Zen chiplet, AMD's cost to manufacture it has been estimated at ~$20, and that was at current prices. As 7nm wafer prices fall, so does AMD's cost.

They're probably not getting rich on these deals, but the deals help fund Radeon R&D and ensure that games are developed for their hardware at a time when they aren't selling as many discrete cards.
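For the curious, per-die estimates like that ~$20 figure come from wafer price divided by yielded dies per wafer. A sketch with guessed numbers (wafer price and yield are not disclosed):

```python
# Rough per-die cost: wafer price / good dies per wafer.
# Wafer price and yield are guesses; die area is roughly a Zen 2 chiplet.
import math

WAFER_PRICE_USD = 10_000  # guessed early-7nm wafer price
WAFER_DIAMETER_MM = 300
DIE_AREA_MM2 = 74         # approximate Zen 2 chiplet size
YIELD = 0.70              # guessed; small dies yield well

def gross_dies(wafer_d_mm: float, die_area_mm2: float) -> int:
    r = wafer_d_mm / 2
    # Classic approximation: wafer area over die area, minus an edge-loss term.
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_d_mm / math.sqrt(2 * die_area_mm2))

good_dies = gross_dies(WAFER_DIAMETER_MM, DIE_AREA_MM2) * YIELD
print(f"~${WAFER_PRICE_USD / good_dies:.0f} per good die")  # ~$16 with these guesses
```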
 
I'm not liking this. How much money does AMD get out of these console APUs? I can't shake the feeling AMD is selling a super APU to Sony for less money than they get out of a 200GE, while we get an Intel-like refresh of RR.

And seriously, how much will this APU cost? Sony has to sell the console at $400, most likely with a 1TB SSD; AMD can't be getting more than $100 out of each CPU, and that's already too much.
This kind of thinking has caused once-great companies to disappear. The console department sells into a unique market and should be run as a distinct profit center. Once you get this 'but we're making more money over there' attitude, watch out.

Never forget Intel and the Apple iPhone as a recent example.
 