If Raven Ridge 2018 is 14nm+ how does it achieve the energy efficiency gains shown in the slide?
Current tools are bad at predicting future silicon performance, so refreshed bins can potentially ship with improved TT (typical/nominal) process corners. Edit: process performance is never static; GlobalFoundries, TSMC, everyone increases the performance of their node over time.
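As a rough illustration of where efficiency gains come from on the "same" 14nm node: dynamic CMOS power scales roughly as P ≈ C·V²·f, so a matured corner that hits the same clock at a lower voltage saves power quadratically. A minimal Python sketch; the voltage figures are made-up assumptions, not measured Raven Ridge data:

```python
# Rough sketch: dynamic CMOS power scales as P ~ C * V^2 * f.
# All numbers below are illustrative assumptions, not real silicon data.

def dynamic_power(c_eff, voltage, freq_ghz):
    """Relative dynamic power: effective capacitance * V^2 * f."""
    return c_eff * voltage**2 * freq_ghz

# Hypothetical older bin: needs 1.20 V to hold 3.0 GHz.
old_bin = dynamic_power(c_eff=1.0, voltage=1.20, freq_ghz=3.0)
# Hypothetical matured bin (better TT corner): same 3.0 GHz at 1.10 V.
new_bin = dynamic_power(c_eff=1.0, voltage=1.10, freq_ghz=3.0)

print(f"power ratio new/old: {new_bin / old_bin:.2f}")  # ~0.84, i.e. ~16% less power
```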
It can be seen in Stoney Ridge on 28HPA (base clock / boost clock / iGPU clock / TDP):
G1/2016 Bin: A9-9400 => 2.4 GHz / 3.2 GHz / 800 MHz / 10W
G2/2017 Bin: A9-9420 => 3.0 GHz / 3.6 GHz / 847 MHz / 10W-15W
G3/2018 Bin: A9-9425 => 3.1 GHz / 3.7 GHz / 900 MHz / 10W-15W
(Even at the 10W cTDP setting, those clocks would still be achievable.)
It is very key to note the A9-9425 vs the A9-9430 (G2: 3.2 GHz / 3.5 GHz w/ 25W TDP) and the A9-9410 (G1: 2.9 GHz / 3.5 GHz, also w/ 25W TDP): the G3 15W bin nearly matches the old 25W base clocks and beats their boost clock outright.
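Putting those bin-to-bin gains in percentage terms; the clocks come straight from the list above, the script is just a quick arithmetic sketch:

```python
# Base/boost clocks (GHz) per Stoney Ridge bin generation, from the list above.
bins = {
    "G1 A9-9400": (2.4, 3.2),
    "G2 A9-9420": (3.0, 3.6),
    "G3 A9-9425": (3.1, 3.7),
}

g1_base, g1_boost = bins["G1 A9-9400"]
for name, (base, boost) in bins.items():
    print(f"{name}: base {base:.1f} GHz ({(base / g1_base - 1) * 100:+.1f}%), "
          f"boost {boost:.1f} GHz ({(boost / g1_boost - 1) * 100:+.1f}%)")
```

G2 gains roughly +25% base / +12.5% boost over G1 on the same 28HPA node, purely from binning.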
2400GE/2200GE (35W) based on 2017 RR:
2400GE => 3.2 GHz / 3.8 GHz
2200GE => 3.2 GHz / 3.6 GHz
2800H/2600H based on 2018 RR:
2800H => 3.35 GHz / 3.8 GHz
2600H => 3.25 GHz / 3.6 GHz
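The same arithmetic for the Raven Ridge refresh; pairing each 2017 GE part with its 2018 H counterpart by matching boost clocks is my assumption:

```python
# 2017 GE part vs the boost-matched 2018 H part; clocks in GHz from the lists above.
pairs = [
    ("2400GE", 3.2, "2800H", 3.35, 3.8),
    ("2200GE", 3.2, "2600H", 3.25, 3.6),
]

for old, b_old, new, b_new, boost in pairs:
    gain = (b_new / b_old - 1) * 100
    print(f"{old} -> {new}: base {b_old} -> {b_new} GHz ({gain:+.1f}%), "
          f"boost unchanged at {boost} GHz")
```

Small base-clock bumps (+4.7% / +1.6%) at unchanged boost clocks, consistent with a matured bin rather than a new node.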
Under HP (the OEM):
2800H
2600H
2300U => 15W-25W
A9-9430 => 25W
A6-9230 => 25W
The HP lineup peaks out at 25W TDP. So the 2800H/2600H there are likely 15W-25W configurations, like the 2300U SKU, following the same pattern as the A9-9400 to A9-9420 switch above.
2700U => 2.2 GHz / 3.8 GHz @ 15W TDP
2800H => 3.35 GHz / 3.8 GHz @ 15W-25W TDP, with the focus being more on 25W than on 15W, just as the A9-9420/A9-9425 lean toward 15W within their 10W-15W range.
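A quick sanity check on that delta, using the clocks from the two lines above:

```python
# 2700U vs 2800H base clocks (GHz), from the comparison above.
u_base, h_base = 2.2, 3.35
print(f"base clock delta: {h_base - u_base:.2f} GHz "
      f"({(h_base / u_base - 1) * 100:+.0f}%)")  # 1.15 GHz, roughly +52%
```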
That is a 1.15 GHz increase in nominal frequency with a very minor hit in TDP. (OT: it's minor since basically everything from 15W up uses 45W-class copper heatsinks, because Intel screwed up their actual TDP measurements. That's also why AMD jams in these awful dual-GPU mobile solutions that rarely work after a driver, BIOS, or Windows update: the thermal headroom is free because Intel made it free for them.)