Discussion: PlayStation 6 speculation

marees

Golden Member
Apr 28, 2024
1,660
2,265
96
PS6 (AMD Orion APU) specs as leaked by MLID


Sony's next-gen PlayStation 6 full specs leak:
  • 280mm2
  • TSMC 3nm
  • (monolithic die)
  • 160W TDP
  • 54 x RDNA 5 CUs (2 disabled)
  • 8 x Zen 6c cores (1 disabled)
  • 2 x Zen 6 LP cores (for OS)
  • 160-bit 32Gbps GDDR7 memory


Read more: https://www.tweaktown.com/news/1076...-ready-for-next-gen-gaming-in-2027/index.html


PS6 specs (as claimed by MLID)

Memory bus size seems too small, imo; the narrow 160-bit bus is the oddest part of these leaked specs.
 

marees

Golden Member
Apr 28, 2024
1,660
2,265
96
Fwiw, I'd have expected 2nm for something releasing in 2028
 

MrMPFR

Member
Aug 9, 2025
73
122
61
PS6 (AMD Orion APU) specs as leaked by MLID


Sony's next-gen PlayStation 6 full specs leak:
  • 280mm2
  • TSMC 3nm
  • (monolithic die)
  • 160W TDP
  • 54 x RDNA 5 CUs (2 disabled)
  • 8 x Zen 6c cores (1 disabled)
  • 2 x Zen 6 LP cores (for OS)
  • 160-bit 32Gbps GDDR7 memory
Gonna expand on it a bit with some commentary and info.

A smaller SoC makes sense given TSMC's absurd wafer prices for post-N7 nodes.

N3 already confirmed by Kepler last year IIRC.

TDP seems absurdly low for an entire console with that GPU spec. MLID said they want to get back to roughly PS4 power draw to reduce console size, so it's not SoC TDP, it's power draw for the entire console. An 85-90% efficiency PSU (similar to PS5) brings that down to 135-145W. Then you need to factor in SoC VRM loss, CPU power draw, SSD, IO ASIC, fan, and other components. That probably only leaves ~100W power draw (TDP) for the GPU core, MC, and GDDR7 chips.
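As a quick sanity check, here's that power budget arithmetic in Python. The ~40W figure for everything besides the GPU and memory is my own rough assumption, not from the leak:

```python
# Rough PS6 power budget sketch based on MLID's claimed 160W whole-console draw.
wall_power = 160                  # W at the wall, per the claimed spec
psu_eff = (0.85, 0.90)            # assumed PSU efficiency range, similar to PS5
dc_power = [wall_power * e for e in psu_eff]
# Assume ~40W combined for CPU, SSD, IO ASIC, fan, VRM loss, misc (my guess).
other_components = 40
gpu_budget = [round(p - other_components) for p in dc_power]
print(dc_power)    # [136.0, 144.0] W available after the PSU
print(gpu_budget)  # [96, 104] W left for GPU core, MC, and GDDR7
```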

One disabled Zen 6c core seems unlikely; N3 yields are unprecedented, much better than N6's. Sure, PS4 and PS3 disabled one core, but PS5 didn't, so PS6 prob won't either.

If we bump power draw to PS5 level that should leave around 140-150W for the GPU, but even then it looks more like a 5070 than a 9070/9070 XT in raster. The 9070 XT runs ~2.9-3GHz. 19% fewer CUs, a 10% lower clockspeed to halve GPU power draw, and MLID's ~10% IPC increase estimate only work out to ~5070 raster perf for the PS6 (TPU GPU database relative perf). Even weaker if it has to stay within 160W power draw for the entire console.
So unless RDNA5 is extremely energy efficient and has a 15-20% IPC increase I can't match @Kepler_L2's ~9070XT estimate with these HW specs. And this performance at 100W GPU core and mem TDP is fantasy land territory.
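The raster estimate above can be checked with a naive multiplicative model (my own simplification; real scaling is never this clean):

```python
# Naive multiplicative raster scaling vs the 9070 XT, using the post's figures.
cu_factor  = 1 - 0.19   # 19% fewer CUs than the 9070 XT
clk_factor = 1 - 0.10   # 10% lower clockspeed to halve GPU power draw
ipc_factor = 1 + 0.10   # MLID's ~10% IPC uplift estimate
relative = cu_factor * clk_factor * ipc_factor
print(round(relative, 2))  # 0.8 -> roughly 5070-class per TPU's relative-perf chart
```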

A 160-bit MC looks reasonable. 32Gbps GDDR7 over 160 bit gives the same BW as the 9070 XT's 20Gbps 256-bit GDDR6 design. 10MB L2 seems low, but with big architectural changes it might be fine.
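The bandwidth parity is easy to verify (bus width in bits divided by 8, times per-pin speed in Gbps):

```python
def bandwidth_gb_s(bus_bits: int, pin_gbps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * pin_gbps

print(bandwidth_gb_s(160, 32))  # 640.0 -> leaked PS6: 160-bit GDDR7 @ 32Gbps
print(bandwidth_gb_s(256, 20))  # 640.0 -> 9070 XT: 256-bit GDDR6 @ 20Gbps
```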

Because AT0 is only ~7 PF or so, AMD clearly isn't sharing the same matrix cores between gaming dGPUs and DC GPUs the way NVIDIA is with Rubin.

5090 = 3.3PF NVFP4 sparse, 6000 Pro = 4PF. I hope PS6 has doubled FP8 perf and quadrupled FP4 perf vs RDNA4 like AT0, but I don't think that has been confirmed yet. If AMD has an alternative to NVFP4, that goes even further. IIRC the current DLSS TF and FSR4 run using FP8 and INT8; the same applies to the Cooperative Vectors API.
If PS6 uses an NVFP4-like format for FSR5 and doubles ML throughput vs RDNA 4, then the PS6 would have 2.5-3X higher perf for the FP part than the 5080 currently has with DLSS4.

The stuff about physics, unique assets, and AI-enhanced NPCs sounds interesting, but let's wait for the Road to PS6 presentation by Cerny. A 900-1080p internal res -> 4K with FSR4 for the 4K 60FPS mode leaves plenty of headroom for additional GPU compute per frame beyond raster. Probably plenty of additional features will be introduced, likely enabled by work graphs.

30-40GB, leaning towards 30GB. Assuming FP4-quantized models (an NVFP4-like format, not MXFP4) plus features like work graphs and neural texture and asset compression, 30GB of VRAM should be enough.

Fwiw, I'd have expected 2nm for something releasing in 2028
Late 2027 release, it seems; the PS4 and PS5 cadence for the third time.
 
  • Like
Reactions: marees

MrMPFR

Member
Aug 9, 2025
73
122
61

Proving the 5090 RT claim is nonsensical

MLID's 5090 RT perf claim looks like a flawed extrapolation from Sony's internal 6-12X RT uplift estimate. It could be based on a game selection ranging from medium RT to heavy RT and PT. It's not FPS; it's a frame-time (ms) cost comparison for RT-specific workloads. See Kepler_L2's reply (next comment) and ignore this entire post. TBH in this light the 6-12X RT uplift seems rather underwhelming, as the math here isn't pure RT/PT ms. Look at Doom TDA PT and any other game with a lot of RT or PT: the pure RT ms speedup from 7900 XTX -> 9070 XT and 9070 XT -> 5070 Ti is far greater, making it harder to catch up to NVIDIA within the 6-12X figure. So the estimate is probably not even real (misinfo to troll leakers?), since if it were true, RDNA5 PT perf would be a joke.

IGNORE

The PS5's anemic RT is already a joke.
Let's illustrate why the 5090 RT claim is not true or accurate. This is based on raw specs and clocks; the PS5 performs better in reality due to low-level (bare metal) optimizations and a leaner OS. I know this math is far from perfect, but it gives a pretty good idea. In TPU's GPU database the 6700 XT is roughly ~20% stronger in RT than a 6700, and the 6700 clocks ~10% higher than the PS5 in games, so let's add that. Now the RX 6700 XT is ~33% stronger than the PS5 GPU. RDNA3 didn't push RT vs RDNA2; its RT on-vs-off drop is only marginally better than RDNA 2's, which is why this comparison is possible based on raster numbers.

Let's see how the 6700 XT and 7900 XTX compare in RT across TPU's test game suite at 1440p (4K is useless due to 12GB VRAM): https://www.techpowerup.com/review/gpu-test-system-update-for-2025/3.html.
The RX 7900 XTX has a 2.44X advantage vs 6700XT. Multiply by 1.33 and it's 3.25X faster across these games.
The RX 9070 XT from this review https://www.techpowerup.com/review/sapphire-radeon-rx-9070-xt-pulse/37.html is +15% vs the 7900 XTX, or 3.7X faster than the PS5. If we assume the PS6 has excellent RT and, despite a weaker core, can match the 4080-5080 in RT, then we get a 4.5-4.8X gain over the PS5.
But remember that in TPU's RT game suite the 5090 is 1.92X ahead of the 9070 XT, or 7.17X faster than the PS5. Note that these workloads are RT games, not PT, so the raster/RT ratio will shift a lot for PT games. But you can't have a PS6 GPU with roughly half the raster perf (~9070 raster) of a 5090 match it in RT games where a ton of the rendering pipeline is raster. Again, flawed math.
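Chaining those TPU-derived ratios together (same numbers as above, just multiplied out):

```python
# PS5 -> 5090 RT chain from the TPU figures quoted above.
ps5_to_6700xt = 1.33    # ~1.2 (6700 XT vs 6700) x ~1.1 (clock delta vs PS5)
xtx_over_6700xt = 2.44  # 7900 XTX vs 6700 XT, TPU 1440p RT suite
x9070_over_xtx = 1.15   # 9070 XT vs 7900 XTX
x5090_over_9070 = 1.92  # 5090 vs 9070 XT, TPU RT suite

xtx_vs_ps5 = xtx_over_6700xt * ps5_to_6700xt      # ~3.25
x9070_vs_ps5 = xtx_vs_ps5 * x9070_over_xtx        # ~3.73
x5090_vs_ps5 = x9070_vs_ps5 * x5090_over_9070     # ~7.17
print(round(xtx_vs_ps5, 2), round(x9070_vs_ps5, 2), round(x5090_vs_ps5, 2))
```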

RDNA4 is different though and in PT games the RX 9070 XT manages a 30-50% performance lead over the RX 7900 XTX. That's ~4-5X ahead of the PS5. But remember that NVIDIA completely smokes AMD cards in PT games. In PT games the lead a 5070 TI has over a 9070XT can be anywhere from 1.5-2X as seen in HUB's 9070 XT review. The RTX 5080 is also 1.81X stronger than 9070 XT in Doom TDA with PT enabled. source: https://medium.com/@opinali/doom-path-tracing-and-bechmarks-d676939976e8. Extrapolating 9070 XT vs 5070 TI gives a ~1.5X NVIDIA advantage.

If we now compare the 5070 Ti with the 7900 XTX in Black Myth Wukong and Alan Wake 2 (4K Quality upscale = 1440p internal res) we get a 2.6X and 3.4X speedup. 3.25 x 2.6-3.4 = 8.45-11.05X faster than the PS5.
Add the ~1.2X differential up to the 5080 (the 1.81/1.5 from above) and we get 10.14-13.26X.
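And the PT chain, using the ~1.2X 5080-over-5070 Ti step implied by the 1.81/1.5 figures earlier (my multiplications of the post's numbers, not measurements):

```python
# PS5 -> 5070 Ti -> 5080 PT chain from the game numbers quoted above.
ps5_to_xtx = 3.25        # from the RT chain earlier in the post
ti_over_xtx = (2.6, 3.4) # Black Myth Wukong, Alan Wake 2 (1440p internal)
ti_vs_ps5 = [round(ps5_to_xtx * f, 2) for f in ti_over_xtx]
step_5080 = 1.2          # ~1.81/1.5, 5080 over 5070 Ti, rounded
x5080_vs_ps5 = [round(x * step_5080, 2) for x in ti_vs_ps5]
print(ti_vs_ps5)     # [8.45, 11.05]
print(x5080_vs_ps5)  # [10.14, 13.26]
```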

Based on this it sounds like the PS6 could exceed NVIDIA Blackwell on the RT front, but 6-12X doesn't result in 5090 PT perf; it results in 5070 Ti-5080 PT perf. If true that's still impressive, as it implies the PS6 will offset its overall weaker raster performance with stronger per-core RT performance. Then again, IIRC Kepler said RDNA5's RT feature set exceeds that of the 50 series, so this makes a lot of sense.

But please ignore MLID's sensational 5090 RT claim.
 
Last edited:

Kepler_L2

Senior member
Sep 6, 2020
982
4,141
136

Proving the 5090 RT claim is nonsensical

MLID's 5090 RT perf claim looks like a flawed extrapolation from Sony's internal 6-12X RT uplift estimate. It could be based on a game selection ranging from medium RT to heavy RT and PT. The PS5's anemic RT is already a joke.
Let's illustrate why the 5090 RT claim is not true or accurate. This is based on raw specs and clocks; the PS5 performs better in reality due to low-level (bare metal) optimizations and a leaner OS. I know this math is far from perfect, but it gives a pretty good idea. In TPU's GPU database the 6700 XT is roughly ~20% stronger in RT than a 6700, and the 6700 clocks ~10% higher than the PS5 in games, so let's add that. Now the RX 6700 XT is ~33% stronger than the PS5 GPU. RDNA3 didn't push RT vs RDNA2; its RT on-vs-off drop is only marginally better than RDNA 2's, which is why this comparison is possible based on raster numbers.

Let's see how the 6700 XT and 7900 XTX compare in RT across TPU's test game suite at 1440p (4K is useless due to 12GB VRAM): https://www.techpowerup.com/review/gpu-test-system-update-for-2025/3.html.
The RX 7900 XTX has a 2.44X advantage vs 6700XT. Multiply by 1.33 and it's 3.25X faster across these games.
The RX 9070 XT from this review https://www.techpowerup.com/review/sapphire-radeon-rx-9070-xt-pulse/37.html is +15% vs the 7900 XTX, or 3.7X faster than the PS5. If we assume the PS6 has excellent RT and, despite a weaker core, can match the 4080-5080 in RT, then we get a 4.5-4.8X gain over the PS5.
But remember that in TPU's RT game suite the 5090 is 1.92X ahead of the 9070 XT, or 7.17X faster than the PS5. Note that these workloads are RT games, not PT, so the raster/RT ratio will shift a lot for PT games. But you can't have a PS6 GPU with roughly half the raster perf (~9070 raster) of a 5090 match it in RT games where a ton of the rendering pipeline is raster. Again, flawed math.

RDNA4 is different though and in PT games the RX 9070 XT manages a 30-50% performance lead over the RX 7900 XTX. That's ~4-5X ahead of the PS5. But remember that NVIDIA completely smokes AMD cards in PT games. In PT games the lead a 5070 TI has over a 9070XT can be anywhere from 1.5-2X as seen in HUB's 9070 XT review. The RTX 5080 is also 1.81X stronger than 9070 XT in Doom TDA with PT enabled. source: https://medium.com/@opinali/doom-path-tracing-and-bechmarks-d676939976e8. Extrapolating 9070 XT vs 5070 TI gives a ~1.5X NVIDIA advantage.

If we now compare the 5070 Ti with the 7900 XTX in Black Myth Wukong and Alan Wake 2 (4K Quality upscale = 1440p internal res) we get a 2.6X and 3.4X speedup. 3.25 x 2.6-3.4 = 8.45-11.05X faster than the PS5.
Add the ~1.2X differential up to the 5080 (the 1.81/1.5 from above) and we get 10.14-13.26X.

Based on this it sounds like the PS6 could exceed NVIDIA Blackwell on the RT front, but 6-12X doesn't result in 5090 PT perf; it results in 5070 Ti-5080 PT perf. If true that's still impressive, as it implies the PS6 will offset its overall weaker raster performance with stronger per-core RT performance. Then again, IIRC Kepler said RDNA5's RT feature set exceeds that of the 50 series, so this makes a lot of sense.

But please ignore MLID's sensational 5090 RT claim.
MLID doesn't understand that any RT performance claims from AMD, NVIDIA, Intel, Sony, etc. always refers to RT-specific frametime comparisons, not FPS comparisons, otherwise in RT-light titles like Resident Evil enabling RT would increase performance which is obviously nonsense.
 

MrMPFR

Member
Aug 9, 2025
73
122
61
MLID doesn't understand that any RT performance claims from AMD, NVIDIA, Intel, Sony, etc. always refers to RT-specific frametime comparisons, not FPS comparisons, otherwise in RT-light titles like Resident Evil enabling RT would increase performance which is obviously nonsense.
Providing an update here.
I've tried to get specifics and be critical of the claim, but got multiple flat-out rejections from MLID and the community. Now they're acting like it was RT ms uplift all along. TBH this feels like gaslighting given the earlier comments made by MLID: watch the Die Shrink excerpt vid from ~1 month ago and the latest segment in the PS6 vid from a week ago with the extrapolated AW2 perf. That extrapolation was an oversimplification to make console peasants comprehend (heavily paraphrasing). Well then it shouldn't have been included!

I also found out that the flawed 12X extrapolation does indeed line up with the 5090, BTW. Well, at 1080p internal res, but that's not gonna tax the RT cores, reduce raster enough, or address the 5080 -> 5090 scaling bottleneck. If we move to 4K Quality upscale (1440p), the gain now roughly aligns with a 5080. At 4K native it's even worse, and the 5090 buries everything else to a much larger degree. PT compute scales really well.
But like I said, RT ms makes the 6-12X even more underwhelming, since it requires a larger RT-specific speedup than a whole-frame one. It's also mathematically impossible to fit the 5090 within that estimate in PT games for the entire frame (best case for MLID). This is painfully obvious the higher the resolution.
50 series is lightyears ahead of AMD in PT, it's a joke to suggest otherwise. Not having OMM, SER and proper BVH traversal processing in HW really hurts performance.

You might think all this is futile and you're probably right but I just couldn't resist.

One thing is for sure though: this 5090 RT claim won't age well. It's as memeable as 7GHz Zen 6.
 
Last edited:
  • Like
Reactions: marees