
Discussion: PlayStation 6 speculation

Page 5
Latest from MLID is PT def stronger than 5080, so it seems like he's pulling back from the PS6 5090 PT perf claim.

Does anyone think 9070 XT / 5070 Ti raster and >= 5080 PT is outlandish in current titles?

Assuming we'll see significant fine wine on both fronts if work graphs are leveraged.
I think a plausible path is that 9070XT=/+ raster and 4090 or a tad above in path tracing? More plausible than equalling a 5090 in PT.
 
I think a plausible path is that 9070XT=/+ raster and 4090 or a tad above in path tracing? More plausible than equalling a 5090 in PT.
No idea. Quantifying this stuff is very hard. For each game there are many questions and variables. A few:
- What is the ratio of traversal to shading time (ms)?
- What kinds of materials are used, and how much variety is there?
- What RT effects are employed?
- What is the game's scene composition, and how much does it vary?

Also I'm not sure how much the compiler can exploit RDNA 5's architectural changes without work graphs integration, but a full implementation for PT is certainly superior.

There's probably an incentive from all parties to do this moving forward, as the current EI pipeline is very inefficient. It will be interesting to see if NVIDIA deliberately declines to fund these efforts. They know RDNA 5 will annihilate them here unless the 60 series brings equally consequential changes to scheduling, the cache/memory hierarchy, and execution.
I'll remain skeptical because that would require a clean-slate design, but then again we haven't really seen one since Volta/Turing, so you could say it's about time. Then again, they didn't tout that for Rubin DC, so I'm more inclined to think we'll see it with Feynman.

MLID is on crack. Magnus should be 5080/4090 performance (if it's 300+ W power draw) but PS6, no way.
Raster =/= PT.

Last time I checked @Kepler_L2 said PS6 ~9070XT raster but IDK if that's still the case. I assume that's within existing game pipelines and not work graphs which should skew things heavily in RDNA 5's direction.
 
Sony isn't going to delay the launch just because of the launch price. A delay to switch to 4 GB memory chips and 20 GB total is something I could see happening.
I see a delay more likely than reducing RAM amount.

A delay is a problem for a year. Castrating the console is a problem for the whole generation. Sony won't repeat Microsoft's mistake with the Series S.


Though realistically, I don't think the console will have either. It'll launch in 2027 with the originally intended RAM amount. What can change is release price.



If they plan to launch at the end of 2027, then they'd have to secure RAM supplies right now, given current pricing trends, and manufacturers now want 3-5 year contracts.
Odds are the RAM supply contracts were made in the beginning of the year and production is already being allocated to ensure enough volume for launch window.
Sony and others deciding to do this so early is part of the reason why VRAM got so expensive throughout the last quarter.

I.e. the PS6 is probably part of what caused the VRAM price spikes, not a victim of them. The victim here is almost always the consumer, whether on pre-built PCs, laptops, or DIY builds.



The Pro is going to look real bad when GTA 6 is 30 fps on it and Magnus is 60.
There's no reason to believe the 8-core Zen2 can't run GTA6 at 60FPS. There's already a bunch of open-world games set in cities with pedestrians and cars running at 60FPS on the PS5 Pro: CP2077, Spider Man 2, even GTA5 with RT, etc.

IMO the videogame with the most complex and diverse set of NPCs, models, each with their own geometries and textures, activities, preferences, etc. is Watch Dogs Legion and that game runs at locked 60FPS on vanilla PS5, Series X and Series S.
 
I see a delay more likely than reducing RAM amount.

A delay is a problem for a year. Castrating the console is a problem for the whole generation. Sony won't repeat Microsoft's mistake with the Series S.

They are already doing the handheld, which is slower than the PS5 even. And for it to work, LPM support has to be mandated for all PS6 games.

That is unless it gets cancelled.
 
They are already doing the handheld, which is slower than the PS5 even.

Slower in rasterization, yes. Though it'll be faster in ML loads for sure, and there's a good chance it'll be faster at raytracing. Rasterization performance isn't the single most important metric for next-gen.


Regardless, we're talking about RAM amount, not compute throughput. It's easier to scale down compute than RAM requirements; the RAM limit is what most developers complained about with the Series S.
 
I see a delay more likely than reducing RAM amount.
40 GB won't happen, sadly.
PS6 is probably part of what caused VRAM price spikes, not a victim of of them
I think that's very unlikely; they have no chance of outbidding server customers, or even normal OEM PCs, which carry higher margins than the PS6 ever will.

Also, booking supply two years early would be unusual for this; I'd say there's zero chance they've secured enough supply on good terms to launch it.

Sony is sitting pretty there; just look at the announced exclusives coming soon: Uncharted 5, The Last of Us Part III.
 

Behold the PS6 leaker war!
In case anyone wonders what this is about: MLID and @Kepler_L2 disagree over what MLID's leaked 6-12x RT figure means. MLID claims it's total frametime for demanding RT games; Kepler thinks it's the improvement in RT ms cost.
MLID doubled down in the latest Broken Silicon.

Began on Xitter on Friday. I think this is where it originated from:
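To make the distinction concrete, here's a quick Amdahl's-law-style back-of-envelope in Python. The frame and RT timings are made-up illustrative numbers, not leaked figures; the point is just that a 12x cut to RT cost only translates into a 12x frametime gain if RT is essentially the whole frame.

```python
# Back-of-envelope: overall frametime gain when only the RT portion speeds up.
# All timing numbers below are hypothetical, for illustration only.

def frametime_speedup(total_ms, rt_ms, rt_speedup):
    """Overall speedup when the RT slice of the frame gets rt_speedup times faster."""
    new_total = (total_ms - rt_ms) + rt_ms / rt_speedup
    return total_ms / new_total

# Example: a 33 ms frame (30 FPS) where 20 ms is RT/PT work.
print(frametime_speedup(33.0, 20.0, 12.0))  # ~2.25x overall, not 12x
```

So "12x faster RT" (Kepler's reading) and "12x total frametime" (MLID's reading) are wildly different claims unless the frame is almost entirely RT work.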
 
Would be great if MLID just stuck to leaks and rumours instead of trying to speculate and analyse because he clearly doesn't know what he's talking about.
Not the first time with erroneous "analysis". IIRC he also claimed a PS5 is as fast as a 3060 Ti and that a PS5 Pro is the equivalent of a 4070.
Oh I know another one: 10 FPS x 12 = 120FPS in AW2 = 5090 path tracing for PS6.

Talking about professionalism at 42:48-43:25 while trash-talking Kepler over the RDNA 3 leaks, etc... The internet is such a shitshow.
/rant
 
I've listened to MLID on and off, and he has a frustrating lack of the technical knowledge needed to correctly understand and communicate details. He also doesn't seem interested in admitting or learning from his mistakes.

I'd love to find a weekly podcast that went over tech news and future hardware predictions at an enthusiast knowledge level. Tech Poutine is a little too focused on HPC for me, but something close to that.

Edit: For the PS6, I'm most looking forward to RT/PT in most titles and more subtle interactivity in the environment and characters: vegetation and clothing that move and respond in more realistic ways.
 
I've listened to MLID on and off, and he has a frustrating lack of the technical knowledge needed to correctly understand and communicate details. He also doesn't seem interested in admitting or learning from his mistakes.


This is a bit true. It looks like he saw some documents from Sony claiming "10x faster RT" and then he overestimated how much that would change performance in path traced games. Though the biggest problem isn't that, it's his reaction when called out.



Though even if he's not always able to properly interpret the data he's given, he's still the top leaker around.
 
Kepler says the limited core count (4x Zen6c) of the PS6 handheld will hold back next-gen games, just like the Series S did for the current gen.
 
Kepler says the limited core count (4x Zen6c) of the PS6 handheld will hold back next-gen games, just like the Series S did for the current gen.
Can't they just reduce the complexity of stuff? Scaling back number of NPCs etc...?

Worried about the work-graphs-driven procedural stuff and the non-graphics ML stuff. The ML cost is going to be flat.

Will Sony make Handheld support mandatory similar to XSS?
 
Can't they just reduce the complexity of stuff? Scaling back number of NPCs etc...?

Worried about the work-graphs-driven procedural stuff and the non-graphics ML stuff. The ML cost is going to be flat.

Will Sony make Handheld support mandatory similar to XSS?
Depends on the game, engine & effects being used, I guess.

Starfield, for example, was optimized for 30-60 fps.

So maybe path-traced games would target 30 fps for the handheld.
 
True, but for games only the 4x Zen6c are used; they will become the bottleneck.
I think Sony's reasoning is that 4x Zen6c at ~4GHz (assuming a similar Zen5->5c clock ratio) are substantially faster than 4x+2x Zen2 at 3.2GHz. Also, more CPU cores means more clients taxing the unified memory controller, and Canis doesn't have a whole lot of bandwidth available.
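A rough sketch of that reasoning in Python. The ~1.8x Zen2-to-Zen6c IPC figure is purely my assumption for illustration (no such number has leaked), as is the choice of 6 game-available Zen2 cores on PS5, but it shows how 4 faster cores could still out-throughput 6 slower ones:

```python
# Aggregate CPU throughput sketch: cores * clock (GHz) * relative IPC.
# The IPC multiplier is an assumed, illustrative value, not a leak.

def throughput(cores, ghz, ipc):
    return cores * ghz * ipc

ps5_games = throughput(6, 3.2, 1.0)   # assumed 6 Zen2 cores available to games
handheld = throughput(4, 4.0, 1.8)    # 4x Zen6c, assumed ~1.8x Zen2 IPC

print(handheld / ps5_games)  # ~1.5x despite fewer cores
```

Of course, per-core math says nothing about games that scale poorly past 4 threads, which is the actual worry.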
 
I think Sony's reasoning is that 4x Zen6c at ~4GHz (assuming a similar Zen5->5c clock ratio) are substantially faster than 4x+2x Zen2 at 3.2GHz. Also, more CPU cores means more clients taxing the unified memory controller, and Canis doesn't have a whole lot of bandwidth available.
They're not gonna run the CPU cluster at 4GHz; that would already be over the 15W budget.
 
Will Sony make Handheld support mandatory similar to XSS?
I'm wondering that as well. You either have to develop for a 4-core baseline, or you develop for the 8-core main console knowing the experience on the handheld will be flawed.

I mean, technically Sony could have an "optimized for PS handheld" badge for applicable games, but I have to think the plan is to have all PS6 titles run on both.

Edit: But then again, I don't know the core usage statistics for PS5 or the projected ones for PS6. It may be that 4 cores are enough most of the time.
 
The dynamic clocks in the PS5 are only about a 10% deviation, and 3.6 GHz is still way too high for a 15W budget. Canis will most likely be limited to ~2.4GHz.
If by "limited to 2.4GHz" you mean 2.4GHz boost or even average then I think that's way too low.

The Deck's 4c Zen2 cluster at 7nm pushes ~2.7GHz at around 4W, and when the TDP is set to 15W the cores will go up to >3GHz quite often.

(attached image: Steam Deck CPU clock data)


I find it hard to believe the 4c Zen6c cluster on N3(P? X?) would clock so much lower than the 4c Zen2 cluster on N7.

I do get that the Zen6LP cluster might clock around 2GHz or even lower, but the Zen6c cluster needs higher single-threaded performance, and there's little reason to believe it would clock lower than a predecessor from 4 architectures and ~2.5 process nodes ago.
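For what it's worth, a crude dynamic-power scaling model points the same way. Assuming P roughly proportional to f*V^2 with voltage scaling roughly with frequency (so P ~ f^3 over a narrow range), and taking the Deck-like datapoint above (~2.7 GHz at ~4 W for the 4-core cluster), the CPU-budget split is my guess:

```python
# Crude P ~ f^3 scaling sketch. Both the power model and the numbers
# are rough illustrative assumptions, not measurements of Canis.

def clock_at_power(base_ghz, base_w, target_w):
    """Estimated clock at target_w, assuming power scales with frequency cubed."""
    return base_ghz * (target_w / base_w) ** (1 / 3)

# If the CPU's share of a 15 W handheld budget were ~8 W:
print(round(clock_at_power(2.7, 4.0, 8.0), 2))  # prints 3.4
```

On this (very crude) model, even an older core on an older node lands well above 2.4GHz within a plausible power share, which is why the ~2.4GHz ceiling seems pessimistic to me.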


Handhelds usually run at 15W-25W, so asking them to handle path tracing is a bit of a stretch.
It depends on how hard path tracing is on the given architecture.
 