News [Wired] PS5 confirmed to use 7nm Zen 2, Navi, SSD


TheELF

Diamond Member
Dec 22, 2012
3,973
731
126
Where did you guys see 16T?
All the articles I saw state 8 cores. There are cores without SMT in Zen land; the PS5 could just be two 2200G (equivalents) stuck together to keep with tradition (actually to keep cost down and make production easy).
Of course we also know that game code often gets into situations where one or two cores are extremely loaded, while others are nearly idling. Or, you have some background OS/friends list/minor tasks that don't need full performance from a core/thread.

I'm not saying 100% that it will have modular power states such as per core turbo/dynamic clocks, but it would honestly surprise me if it didn't, for two reasons:
We know from way back that most cases of stutter in games can be negated by running all your cores at the same speed, so I doubt they will be willingly introducing even more stutter to the PS5.
While it would be nice if devs would finally address this problem in code, that's even more implausible.
 

NTMBK

Lifer
Nov 14, 2011
10,237
5,021
136
Where did you guys see 16T?
All the articles I saw state 8 cores. There are cores without SMT in Zen land; the PS5 could just be two 2200G (equivalents) stuck together to keep with tradition (actually to keep cost down and make production easy).

The core in the 2200G isn't any smaller than the core in a fully enabled Zen with SMT; they just disable it. It wouldn't reduce costs unless they actually went in and customised the CPU... which would in turn increase design costs.

I'm conflicted about whether SMT will be enabled or not. On the one hand sharing the CPU between two threads can increase jitter, due to slight variance in the per-thread performance. But on the other hand it's basically "free" CPU throughput, and modern game engines already need to deal with the variance arising from multithreading.

I do think that they will have locked clock speeds, though.
 

dacostafilipe

Senior member
Oct 10, 2013
772
244
116
The core in the 2200G isn't any smaller than the core in a fully enabled Zen with SMT; they just disable it. It wouldn't reduce costs unless they actually went in and customised the CPU... which would in turn increase design costs.

I'm conflicted about whether SMT will be enabled or not. On the one hand sharing the CPU between two threads can increase jitter, due to slight variance in the per-thread performance. But on the other hand it's basically "free" CPU throughput, and modern game engines already need to deal with the variance arising from multithreading.

I do think that they will have locked clock speeds, though.

I don't think that there's a need for compromise here. Even with SMT enabled, devs will certainly have the option to not use it by pinning the processes.

We are talking about a highly customised OS and highly customised code.
 

Guru

Senior member
May 5, 2017
830
361
106
Seems like an October/November 2020 launch date. The specs so far seem really awesome; it's going to be a very powerful machine.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Ten times faster on paper, but in practice, I'd wager many users wouldn't notice the difference between an NVMe SSD and a decent SATA one, especially if they were coming from an older system with a mechanical hard drive.
Arkaign said:
I believe there's a 99% chance that the new storage setup for the PS5 SSD is just Ryzen 2 using x4 PCIe 4.0 via M.2

It doesn't matter how fast PC SSDs are. Read the quotes from Cerny. Because it's a stable, single-SKU platform, the developers can do things that can't be done on PC. The resulting hardware can also be bought at low cost, because a PlayStation deal with Sony guarantees very high volume shipments.

You can have NVMe SSDs with 20GB/s bandwidth, but it won't change anything.

This is a natural trade-off between flexibility and performance, a gap that will always exist despite efforts to change it and beliefs that it can be otherwise.

VirtualLarry said:
And 4K QD1 performance is barely 50% better between a "good" NVMe and a "good" SATA SSD, and that benchmark most closely tracks with daily "seat of the pants" feel for SSDs.

Even this doesn't matter. Performance benchmarks with Optane prove it. The bottleneck is very likely a result of having had HDDs as primary storage for 40-plus years. This is going to take many years, maybe decades, to fully change.

A PC may have superior specs, but a console will do far above its class (and do things PCs can't do).
 
Last edited:

coercitiv

Diamond Member
Jan 24, 2014
6,204
11,912
136
AMD's next-gen PlayStation/Xbox chip moves closer to final version
It’s believed that AMD’s upcoming Gonzalo APU will be powering the next generation of consoles. Now, a qualification sample of the SoC has been spotted in the 3DMark database, and its codename reveals a lot about the chip.
Back in January, a rumored engineering sample of Gonzalo named "2G16002CE8JA2_32/10/10_13E9" was sighted. Using Marvin’s AMD Name Decoder showed the chip’s CPU has eight physical cores running at a 1GHz base clock and 3.2GHz boost clock. [...] the PCI-ID shows an AMD Navi 10 Lite GPU running at 1GHz
Now, a new, slightly different code—ZG16702AE8JB2_32/10/18_13F8—has been found by leaker TUM_APISAK. The ‘Z’ at the start is what indicates this is a qualification sample, meaning progress on the chip is moving along smoothly, and while the number of cores and their boost clock speed remain unchanged, it appears the base clock has been upped to 1.6GHz.

Additionally, the qualification sample's PCI-ID has changed from 13E9 to 13F8, indicating a different version of Navi 10 Lite that's now running at 1.8GHz.
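
Purely as an illustration of how those strings break apart under the article's reading (leading '2' vs 'Z' for engineering vs qualification sample, a triple of clock fields in hundreds of MHz, PCI-ID at the end); the field meanings are the decoder's guesses, nothing official:

Code:
def split_gonzalo(code):
    # e.g. "ZG16702AE8JB2_32/10/18_13F8"
    name, clocks, pci_id = code.split("_")
    sample = {"2": "engineering sample", "Z": "qualification sample"}
    return {
        "sample": sample.get(name[0], "unknown"),
        # Guessed per the article: clock fields in hundreds of MHz,
        # the first one being the 3.2GHz CPU boost clock.
        "clocks_ghz": [int(f) / 10 for f in clocks.split("/")],
        "pci_id": pci_id,
    }

print(split_gonzalo("2G16002CE8JA2_32/10/10_13E9"))  # January ES
print(split_gonzalo("ZG16702AE8JB2_32/10/18_13F8"))  # new QS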
 
  • Like
Reactions: Saylick

TheELF

Diamond Member
Dec 22, 2012
3,973
731
126
The core in the 2200G isn't any smaller than the core in a fully enabled Zen with SMT; they just disable it. It wouldn't reduce costs unless they actually went in and customised the CPU... which would in turn increase design costs.
What?! Artificial product segmentation... and they don't even make money off of it?
You sure they don't at least use faulty 2400G CPUs for the 2200G?

For me it's more a matter of compatibility: changing up the core topology would demand a high level of redesign of pretty much everything. They're claiming that there will be games that run on both consoles, which screams to me that the topology will not change at all.
They put an x86 CPU in the consoles so they wouldn't have to port games anymore; they aren't gonna code different versions of games for different consoles.
 

DrMrLordX

Lifer
Apr 27, 2000
21,634
10,849
136
The specs so far seem really awesome; it's going to be a very powerful machine.

I'm inclined to agree. That's a lot of power for a console. I kinda wonder if the CPU will be power or thermal limited. Based on what I've seen of Zen1, maintaining boost clocks of ~3 GHz is trivial at pretty low power points. On 7nm it should sip power. It should stay close to max boost through a lot of workloads.
 

NTMBK

Lifer
Nov 14, 2011
10,237
5,021
136
This is definitely true, though I think there are going to be good reasons for this to change.

At 8C/16T, 3GHz may be quite a bit of TDP (in terms of the overall budget at 7nm in a smallish console box). In isolation it would obviously have plenty of room to do so, even without exotic cooling such as in the X1X.

However, they're also seemingly cramming a 14.2TF custom Navi GPU into the APU, and wanting that to be clocked as highly as possible in contrast to the CPU portion, as that's by far what will make it impressive visually, while even at ~2GHz, a Ryzen2 is an absolutely monumental improvement over Jag.

Finally, in terms of what has come before:

PS2/OG Xbox/GC/Wii were all far too old and low power to worry about power/heat.

PS3 had Cell+RSX, and the ~2005 era didn't have much in the way of power modes really. Besides, the 8 Cell 'cores' were also frequently used as extra graphics compute (e.g. with Naughty Dog titles most notably). So no real need for, or easy implementation of, power states there.

X360 was in a similar situation to the PS3; in gen7 you pretty much wanted your max CPU anyway, so no power modes.

PS4/X1: now we're talking a gen where it could have come in handy. Well, that is, if they hadn't gone with the ultimate potato processor. Even maxed out at 1.6 (PS4), 1.75 (X1), 1.83 (X1S), 2.1 (Pro), or 2.3 (X1X), there was never really room to give up any CPU performance. They were essentially 8-core netbook ultra-low-power processors, designed to fit into little tiny mobiles and tablets, and the potential savings would be nominal if any, at the cost of going from atrocious performance to completely unacceptable performance.

Now with Ryzen2 sharing space with a sizable Navi portion, you're talking a full-fat desktop CPU that calculates out to a pretty reasonable performance level, with a bit of extra power draw at higher clocks. Of course we also know that game code often gets into situations where one or two cores are extremely loaded, while others are nearly idling. Or, you have some background OS/friends list/minor tasks that don't need full performance from a core/thread.

I'm not saying 100% that it will have modular power states such as per core turbo/dynamic clocks, but it would honestly surprise me if it didn't, for two reasons:

It would allow for quieter/cooler/more efficient use, thus also being more reliable with less constant full-clock heat.

It would allow for more effective use of a given TDP, e.g. 4 threads @ 3.4GHz, 8 threads @ 2.2GHz, or 4 threads @ 1.4GHz, in a given example where draw/need was strongest and weakest. As long as the thermal design is capable of running all 8C at some reasonable max if absolutely necessary (albeit possibly with some extra fan noise), I think this is a good way of setting it up.

Just my thoughts, YMMV :)

For comparison, take a look at the Epyc 3251. This is a 14nm embedded 8-core part, which manages 2.5GHz with 3.1GHz boost clocks at 50W, including the power consumption for the memory controller, 10Gbps network controller, PCIe I/O power, etc. I think a 3GHz target for 7nm should be perfectly feasible.
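
To put rough numbers on the thread/clock mixes quoted above: a toy model where dynamic power scales with active cores and roughly the cube of frequency (voltage tracking frequency), calibrated with an assumed 4W per core at 3GHz, loosely in the Epyc 3251's ballpark once you subtract the uncore. The constants are assumptions, not measurements.

Code:
def core_power_w(active_cores, freq_ghz, p0_w=4.0, f0_ghz=3.0):
    # Toy estimate: P ~ n * (f/f0)^3 * P0, with P0 (per-core watts at
    # f0) being an assumed calibration point, not a measured figure.
    return active_cores * (freq_ghz / f0_ghz) ** 3 * p0_w

for cores, freq in [(4, 3.4), (8, 2.2), (4, 1.4), (8, 3.0)]:
    print(f"{cores} cores @ {freq}GHz -> ~{core_power_w(cores, freq):.0f}W")
# ~23W, ~13W, ~2W, ~32W: the same budget can buy a few fast cores
# or many slower ones.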
 

coercitiv

Diamond Member
Jan 24, 2014
6,204
11,912
136
It should stay close to max boost through a lot of workloads.
R7 1700/2700 does 3/3.2GHz base clocks within a 65W TDP, and gaming workloads can easily be considered just 60-70% of that, so keeping the clocks around 3GHz we'd probably be looking at something like 40-45W average package power. Bring in 7nm, shave that power in half, and you're down into 20-25W territory without considering the custom aspects of the design.

I expect the APU will be thermally limited; after all, it powers a living room entertainment system which needs to strike a balance between space/noise/cost, meaning the chip runs as high as possible while the fan runs as low as possible :)
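
The same estimate as a quick back-of-envelope, where the gaming-load fraction and the 7nm power cut are both assumptions taken from the reasoning above:

Code:
tdp_w = 65          # R7 1700 TDP
gaming_load = 0.65  # assumed: games draw ~60-70% of full package power
node_scaling = 0.5  # assumed: 7nm roughly halves power at iso-clocks

pkg_14nm = tdp_w * gaming_load     # ~42W average package power
pkg_7nm = pkg_14nm * node_scaling  # ~21W, i.e. the 20-25W ballpark
print(f"{pkg_14nm:.0f}W @ 14nm -> {pkg_7nm:.0f}W @ 7nm")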
 
Last edited:

maddie

Diamond Member
Jul 18, 2010
4,740
4,674
136
R7 1700/2700 does 3/3.2GHz base clocks within a 65W TDP, and gaming workloads can easily be considered just 60-70% of that, so keeping the clocks around 3GHz we'd probably be looking at something like 40-45W average package power. Bring in 7nm, shave that power in half, and you're down into 20-25W territory without considering the custom aspects of the design.

I expect the APU will be thermally limited; after all, it powers a living room entertainment system which needs to strike a balance between space/noise/cost, meaning the chip runs as high as possible while the fan runs as low as possible :)
Don't forget that we are almost certainly looking at 7nm+ here, with an even further reduction in power consumption and also a small density increase.

EUV's reduced cost of production alone pretty much guarantees this.
 
  • Like
Reactions: NTMBK and coercitiv

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
R7 1700/2700 does 3/3.2GHz base clocks within a 65W TDP, and gaming workloads can easily be considered just 60-70% of that, so keeping the clocks around 3GHz we'd probably be looking at something like 40-45W average package power. Bring in 7nm, shave that power in half, and you're down into 20-25W territory without considering the custom aspects of the design.

I expect the APU will be thermally limited; after all, it powers a living room entertainment system which needs to strike a balance between space/noise/cost, meaning the chip runs as high as possible while the fan runs as low as possible :)

Perhaps. It just feels strange to me to think a console in 2020 would still just be clocked at 100% constantly; seems inefficient.

I'm hesitantly optimistic about Zen @ 7nm but not expecting a 1:1 improvement for a lot of reasons.

Their previous processes were really really good.

Vega 10 to Vega 20 (14nm to 7nm) at roughly identical transistor count (12.5B to 13.2B) resulted in about 12% (?) higher clock speeds, with broadly similar power, and a mixed bag with thermals, though I know the reasons were complex and varied there. At any rate though, they didn't get an enormous decrease in power/heat.

Radeon VII is interesting here because that product is rated at 300W TDP for 14.2TF on 7nm, which is the rumored performance target for the PS5 GPU. But that would be a really tough TDP and cooling setup to put into a console, and we still have a Zen2 etc. to get in there. Now, I expect great things from Cerny and AMD, and this custom APU will almost assuredly be far more efficient than Vega was at 14 or 7nm.

Things can get a lot better for sure; it will be interesting to see.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,353
1,542
136
What?! Artificial product segmentation... and they don't even make money off of it?
You sure they don't at least use faulty 2400G CPUs for the 2200G?

There is very little in the CPU that is required when running two threads and not required when running only one. The proportion of CPUs made that have a fault that makes them unable to run with two threads but still capable of running with one is probably << 0.1%. Disabling HT/SMT is pure product segmentation, for both of the CPU makers.

I don't think that there's a need for compromise here. Even with SMT enabled, devs will certainly have the option to not use it by pinning the processes.

This precisely. The devs have full control over scheduling, so shipping with 16 threads just allows the devs to make use of them if their workload suits it. On the other hand, if there is a thread that can't be easily split and limits the performance, the first fix is to pin that to a core that doesn't do interrupts and idle the other thread.
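
As a minimal sketch of what that pinning looks like at the OS level (assuming a Linux-style affinity API; the console SDK's real calls and its logical-CPU numbering are assumptions):

Code:
import os

# Assume a hypothetical 8C/16T layout where logical CPUs 2n and 2n+1
# are SMT siblings sharing physical core n.
HOT_CORE = 7  # hypothetical core kept free of interrupts/background work

# Pin the calling process (pid 0 = self) to one sibling of that core and
# deliberately leave the other sibling unused, so the critical thread
# effectively owns the whole physical core.
os.sched_setaffinity(0, {HOT_CORE * 2})

# Everything else gets herded onto the remaining logical CPUs.
n_cpus = os.cpu_count() or 16  # fall back for the sketch
other_cpus = set(range(n_cpus)) - {HOT_CORE * 2, HOT_CORE * 2 + 1}
# for pid in background_pids:  # hypothetical worker pids
#     os.sched_setaffinity(pid, other_cpus)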
 

coercitiv

Diamond Member
Jan 24, 2014
6,204
11,912
136
Perhaps. It just feels strange to me to think a console in 2020 would still just be clocked at 100% constantly; seems inefficient.
Check out my previous post: Techspot mentions qualification samples that arguably match console APU specs and feature a base clock of 1.6GHz and a boost clock of 3.2GHz for the CPU.

Vega 10 to Vega 20 (14nm to 7nm) at roughly identical transistor count (12.5B to 13.2B) resulted in about 12% (?) higher clock speeds, with broadly similar power, and a mixed bag with thermals, though I know the reasons were complex and varied there. At any rate though, they didn't get an enormous decrease in power/heat.
Vega is hard to quantify as a power benchmark because the max boost and power/thermal limits lead to various degrees of throttling and efficiency loss. Paper specs were 12% higher clocks for VII, yet performance improved by 20-30% based on resolution. Some of that performance jump may rightfully be attributed to the extra memory bandwidth, but the thing is, that extra bandwidth costs power as well. So at the end of the day we're still looking at a ~25% performance improvement.

Meanwhile, the data we have on the new 7nm Epyc CPUs suggests they pretty much managed to double the core count while keeping a similar TDP. This is a far better comparison, as we're comparing the same type of silicon running in the same high-efficiency clock domain (2GHz-3GHz). I wouldn't use the same 50% power cut figure if we were talking about 4GHz+ clocks or ultra-low-power solutions; those represent the edges of the efficiency curve where various diminishing returns apply.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
Check out my previous post: Techspot mentions qualification samples that arguably match console APU specs and feature a base clock of 1.6GHz and a boost clock of 3.2GHz for the CPU.


Vega is hard to quantify as a power benchmark because the max boost and power/thermal limits lead to various degrees of throttling and efficiency loss. Paper specs were 12% higher clocks for VII, yet performance improved by 20-30% based on resolution. Some of that performance jump may rightfully be attributed to the extra memory bandwidth, but the thing is, that extra bandwidth costs power as well. So at the end of the day we're still looking at a ~25% performance improvement.

Meanwhile, the data we have on the new 7nm Epyc CPUs suggests they pretty much managed to double the core count while keeping a similar TDP. This is a far better comparison, as we're comparing the same type of silicon running in the same high-efficiency clock domain (2GHz-3GHz). I wouldn't use the same 50% power cut figure if we were talking about 4GHz+ clocks or ultra-low-power solutions; those represent the edges of the efficiency curve where various diminishing returns apply.

Ah, awesome. So you're saying 1.6 to 3.2GHz? If so, that makes a ton more sense to me. Having the thing just run flat out at max clock 100% of the time makes no logical sense, especially if it means it cuts into the GPU budget in a device probably aiming at 150W max (though I could see 200W on the higher end, perhaps). Navi should be a lot more efficient for gaming purposes than Vega, so I could see them fitting 14.2TF and an 8-core Zen2 into 150W-200W if they're really, really clever with how it's utilized.
 

jpiniero

Lifer
Oct 1, 2010
14,600
5,221
136
The fixed clock is done for consistency reasons. It can still shut cores off as needed to save power.
 

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
Either Navi is vastly more efficient than previous AMD designs, or the specs are unrealistic for a ~200W system TDP.

I think Navi will be better than Vega and Polaris in that department, but ~14 TFLOPS for the PS5 also seems unreasonable. I know that architectures aren't directly comparable, but that's slightly more than a 2080 Ti has. The PS4 Pro has a little over 4 TFLOPS for comparison.
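
For reference, the FP32 figure everyone quotes is just shader count x 2 FLOPs per clock (FMA) x clock speed; checking it against the known parts, with the CU count for a 14.2TF Navi being the derived, assumption-laden bit:

Code:
def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000  # 2 FLOPs/clock per ALU (FMA)

print(tflops(2304, 0.911))  # PS4 Pro: 36 CUs x 64 ALUs -> ~4.2 TF
print(tflops(4352, 1.545))  # RTX 2080 Ti at boost clock -> ~13.4 TF

# Working backwards: 14.2 TF at the rumored 1.8GHz GPU clock needs
# 14200 / (2 * 1.8) ~= 3944 ALUs, i.e. roughly 62 CUs of 64 ALUs each.
print(14200 / (2 * 1.8) / 64)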
 
  • Like
Reactions: beginner99

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
Either Navi is vastly more efficient than previous AMD designs, or the specs are unrealistic for a ~200W system TDP.

I think Navi will be better than Vega and Polaris in that department, but ~14 TFLOPS for the PS5 also seems unreasonable. I know that architectures aren't directly comparable, but that's slightly more than a 2080 Ti has. The PS4 Pro has a little over 4 TFLOPS for comparison.

Well, FWIW, AMD has been far behind Pascal, and is now drastically further behind Turing, in terms of how TF efficiency applies to gaming. That might change, but more likely this 14.2TF will be more in line with a 1080 Ti, or close to it. Within the tighter range of console optimization and APIs, that should net pretty great results for a $500 console.
 
May 11, 2008
19,561
1,195
126
It would be fun to see Sony succeed. Perhaps with a better software ecosystem too.

I once mentioned a possible future where enhanced Xboxes replace the need for a PC for 80% of consumers.
But seeing how Microsoft is turning Windows 10 into a glorified tablet OS, I am inclined to say that the upcoming Xbox as a game or home entertainment system will end up like another Zune.

What is weird: with every update I see behavior in Windows 10 that I had not seen since Windows 98, behavior that was gone since Windows 2000 but is reappearing in Windows 10.
Programs can take the screen hostage. If a program hangs, or has for example a modal dialog window open, the whole of Windows 10 refuses to minimize anything at all.
Everything is blocked. It is insane how Microsoft has messed up Windows 10. And it will only get worse.
It is as if I am using the first version of Android. And I have seen this on many different hardware configurations.
The way the user interface currently behaves, it is as if the Windows 10 UI is turning into a cooperative operating system, like they used to be, instead of a preemptive operating system.



If Sony plays their cards right and allows the PS5 to also be used in a somewhat more open manner, allowing homebrew software on it, I foresee a spectacular jump in use cases.
 

maddie

Diamond Member
Jul 18, 2010
4,740
4,674
136
Either Navi is vastly more efficient than previous AMD designs, or the specs are unrealistic for a ~200W system TDP.

I think Navi will be better than Vega and Polaris in that department, but ~14 TFLOPS for the PS5 also seems unreasonable. I know that architectures aren't directly comparable, but that's slightly more than a 2080 Ti has. The PS4 Pro has a little over 4 TFLOPS for comparison.
Fair reservations, but I ask this.

An 8C/16T Zen2-based CPU as the x86 part of the package will need what sort of GPU subsection to be fully utilized, especially as they're speaking of up to 8K resolution?

It seems like a ~14 TF GPU is not outrageous for that sort of CPU power or those resolution targets.

Use that as a guide for expectations.
 

DrMrLordX

Lifer
Apr 27, 2000
21,634
10,849
136
Perhaps. It just feels strange to me to think a console in 2020 would still just be clocked at 100% constantly; seems inefficient.

Not only does it use a boost map, but also consider that the part will likely be clockspeed-constrained by design. We're talking about a chip that would probably be capable of speeds higher than 3.2 GHz. AMD will be effectively throttling it back to a base clock of 1.6 GHz and a boost clock of 3.2 GHz to save on power. It'll stay close to 100% in any workload that demands it because... 20W sounds probable for the entire chip at full load. At idle it'll chew up far less.
 
  • Like
Reactions: DarthKyrie

VirtualLarry

No Lifer
Aug 25, 2001
56,343
10,046
126
Just saying, I recently (last night) put together an older ASRock AB350M Pro4 board with the 4.90 UEFI on it and a brand-new Ryzen R3 1200 CPU, which I instantly clocked up from 3.1 to 3.8GHz, boosted Vcore to 1.375V, and installed Win10.

Inside Win10, in CPU-Z, the reported clock speed stayed fixed at 3.8GHz (actually slightly less, as the PCIe clock was like 99.98MHz or some such), but the reported core voltage of the chip dropped down to 0.6V. Obviously, not at load.

So the Zen architecture can maintain a "fixed clock speed" and still slow down/idle cores, lower Vcore, and whatnot to conserve power when not at load, whilst still maintaining the "illusion" of a fixed clock speed.
 
Mar 11, 2004
23,075
5,557
146
Well, I was wrong (I expected them to announce around E3 and then release this holiday season), although I have a hunch that Sony might have adjusted their plans. They might be taking longer to make sure they can do some ray-tracing, although my guess is that's just the DXR type of stuff that isn't using specialized hardware; they might also do something so that it can do ray-tracing a bit better, and with the more powerful CPU they might free up compute units for it. Personally, though, unless it's some hybrid form that actually speeds up lighting like Nvidia claimed, I think ray-tracing is largely a waste of resources for consoles compared to just doing raster tricks, outside of limited games that can spare the resources for it. Or maybe AMD taking longer to get Zen 2 out, or something related to production (Apple's production cycles, for instance), has pushed it from this year to next.

For the SSD, I'm guessing they're gonna put NAND right on the main board and use a custom controller (since a console won't likely have quite the same usage scenario as PC/datacenter SSDs, they can probably use/tune it differently). I'd also guess they're using main system memory for the buffer, and since it'll be GDDR6 the bandwidth should be high (as well as removing latency by removing that layer from typical PCs, where it's main system memory in DDR4, then the small embedded memory used for the SSD's cache, then the NAND; here it'd go right from the larger GDDR6 pool to the NAND). They wouldn't even need a lot of it if they were just using the NAND to load games into (while using other means for storage) when you actually play them. Even 128GB would be enough to cover most games (even with how they've ballooned in size), in that you could load the whole game into NAND and then swap in and out of the main GDDR6 fast (load times should be great). I could see them going for a bit more just to cover the OS and larger/multiple games, but I kinda doubt we see them going for the big PCIe SSDs that some seem to assume.

They could do a traditional HDD on top of that (even just offer an empty bay for a 2.5" drive), but I'd prefer if they included an external enclosure that the console could stack on top of, one that could hold three 2.5" or two 3.5" drives. Also, I think I'd like them to make the Blu-ray drive an optional external unit and move to USB thumb drives for the physical aspect (make kiosks that you can take a thumb drive to and download games onto it, maybe make it so it can burn Blu-ray discs too; they could also have it able to print vinyl posters or something for the people that want physical art). Maybe make it so that you could plug several thumb drives into that external enclosure I mentioned before, so transferring them to the NAND would be fast and you wouldn't have to swap games around, plus fewer issues with scratched discs and worn-out optical drives/dirty lasers.

With 7nm and modern cooling (vapor chamber and heatpipes), 200-300W (I'd guess 200W with some headroom to push to 225-250W in particularly demanding parts, with a 300W power supply) should be feasible for a console to handle, and would offer a pretty good level of performance (though yes, the discrete PC hardware it's based on will be faster). They could also put some ARM cores in to handle the OS, networking, and a video encode/decode block, to further maximize the main CPU/GPU for gaming (and efficiency, so that when it's not being used for games it can be powered down fully). I also have a hunch that it's not going to be a monolithic APU, so the chips will be spread out a bit more (helping to spread out the heat density), with an I/O chip and maybe the memory in between the CPU and GPU chips, and then the NAND branched off from the memory on, say, the front edge of the board. Or maybe they'd do something weird like have them on separate boards (think a cube, where the CPU would be on one side, the GPU on the other, adjacent to each the memory (which would be GDDR and NAND here) on its own board, and then the power supply on the fourth board (or maybe the I/O board; by that I mean the physical input/output ports like USB and HDMI)). Or something like Apple's Mac Pro (where it was a triangle with CPU/GPU/storage, and the CPU and GPU had their own memory on their boards).

It'll be interesting to see. Glad to see them mention PSVR support, but I'm hoping that Sony puts even more emphasis on VR and launches with an updated headset (offering a higher resolution of 2560x1440, or maybe 2880x1440 or 3200x1800) that uses a single USB-C cable, with inside-out tracking and improved controllers.
 
Mar 11, 2004
23,075
5,557
146
~14TF would be in line with the rumors of Vega 64 level +~15% from Navi. Figure probably a bit lower performance in the console. CPU doesn't tend to make a huge impact on the TF level. And consoles already tend to be more efficient than their desktop counterparts (especially AMD's GPUs, which are often pushed outside of their efficient clock range and have excessive stock voltage levels as well). Seems entirely reasonable that it could offer that.

Not only does it use a boost map, but also consider that the part will likely be clockspeed-constrained by design. We're talking about a chip that would probably be capable of speeds higher than 3.2 GHz. AMD will be effectively throttling it back to a base clock of 1.6 GHz and a boost clock of 3.2 GHz to save on power. It'll stay close to 100% in any workload that demands it because... 20W sounds probable for the entire chip at full load. At idle it'll chew up far less.

20W for the entire chip? I assume you mean just the CPU? But even then I think that's a bit low. I think there will be definite load balancing (so I could see it being as low as 20W for some games/parts of games). I think it'd be smart for them to focus on GPU power, enabling headroom up to 200W for the GPU, where they'd balance load for a ~250-270W total max system draw under gaming, aiming for probably 150-180W GPU and 25-40W CPU. And then use aggressive idle states (one reason I hope they integrate ARM cores to manage the OS/apps/network/communication, plus video encode/decode blocks; this way it can drop clocks when you have the game paused, and would even allow them to fully suspend the main CPU/GPU, think hibernation mode, where it'd keep everything suspended in memory/caches so it'd be instantaneous when you came back).