Info Ryzen 4000 Mobile Chips Unveiled at CES


guachi

Senior member
Nov 16, 2010
761
415
136
Acer Swift 3 with Ryzen 7 4700U
AMD Ryzen 4000 Mobile APUs

AMD finally debuted their Ryzen 4000 mobile chips at CES. Confusingly, they are Zen 2 CPUs, analogous to the Ryzen 3000 desktop parts.

I like what I see, especially on the power front. Many people said that 7nm Ryzen chips had the potential to be very power efficient, and if AMD's slide deck is to be believed, they have succeeded. Most of the power efficiency has come from the 7nm process, which also looks to be letting AMD cram up to 8 cores onto a laptop chip.

Do you guys think AMD has a product that will be as competitive in laptops as the 3000 series is on the desktop? I'm thinking the 4600 will be the best buy, like the 3600 is in the desktop space. The problem in the laptop space is that AMD needs design wins. At least on the desktop I don't need some company to choose for me; I can just buy the chip myself.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,674
3,796
136
Shivansps said:
I understand what you are saying, but you still don't understand that the 2400G was, and now the 3400G is, the top-selling product. There is a reason for that, and I don't think it is just the extra threads. Most people don't need anything more than a 2200G, yet the APU market crown went from the A8-9600 to the 2400G and now the 3400G.

There are only two reasons for going 3400G instead of a 3200G:
1) Gaming
2) Video/Image editing (that uses the iGPU as well).

I'm especially worried about Vega segmentation on the desktop if the full Renoir die ends up being an 8C/16T Vega 8 die. That means a 4200G could be as low as a Vega 5 product.

BTW, if GPU prices were back to normal, the RX5500XT 4GB would be a $99 product.

I think SMT is a bigger reason why the 3400G sells despite there being a cheaper option. Gaming with an iGPU on the desktop is... kind of dumb, unless it's older or less demanding games. And those threads will help a lot with editing and transcoding.

The 5500XT is overpriced. Rumor is that it's on purpose, because of unsold Polaris inventory; no one would buy those cards if the 5500XT were the same price. Realistically, both Polaris and the 5500XT should drop in price. Hopefully that happens.
 
  • Like
Reactions: CHADBOGA

thetrashcan

Junior Member
Jan 13, 2020
4
16
41
It's interesting that the base CPU clockspeeds are 200MHz higher on the U-series parts with SMT disabled. While the 4800U has an extra CU enabled and 9% higher GPU clocks compared to the 4700U, the 4600U and 4500U have the same GPU configuration and the 4600U still loses 200MHz on its base clocks.

I'm also curious to see how Zen 2 will perform with one quarter of the L3 cache per core. Shaving off 24MiB of cache should save approximately 20 mm^2 compared to the two CCXs on the chiplet die, so it certainly makes sense from a cost standpoint, but whether the improved latency from moving the memory controller back on-die and the improved bandwidth of LPDDR4X will be enough to mitigate the higher cache miss rate remains to be seen. An interesting point of comparison is that Renoir's 8 cores + 8MiB L3 should weigh in at approximately 42 mm^2, while Tiger Lake's 4 cores + 12MiB L3 appears to be around 38.2 mm^2 from the die shots.
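As a back-of-the-envelope check on those area numbers - treating L3 density as roughly 0.85 mm^2 per MiB on TSMC N7, which is an eyeballed figure from die shots rather than anything official:

```python
# Rough sanity check of the L3 area savings estimated above.
# Assumed density: ~0.85 mm^2 per MiB of L3 on TSMC N7 (eyeballed
# from Zen 2 chiplet die shots, not an official figure).
MM2_PER_MIB_L3 = 0.85

chiplet_l3_mib = 32  # desktop chiplet: two CCXs x 16 MiB
renoir_l3_mib = 8    # rumored Renoir: two CCXs x 4 MiB

saved_mib = chiplet_l3_mib - renoir_l3_mib
saved_mm2 = saved_mib * MM2_PER_MIB_L3
print(f"L3 shaved: {saved_mib} MiB ~= {saved_mm2:.0f} mm^2")  # 24 MiB ~= 20 mm^2
```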

The drop in CU count seems like a sensible decision as well - it seems pretty clear that Picasso was starved for memory bandwidth, evidenced by the fact that Vega 9 @ 1300MHz and Vega 11 @ 1400MHz in the Surface Laptop 3 had essentially the same performance despite Vega 11's 32% edge in theoretical FLOPS. There's no reason to spend die space if it won't impact performance. The boost in bandwidth from LPDDR4X should lead to a much more balanced system overall, with better GPU performance despite the drop in theoretical throughput.
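For reference, the 32% figure falls straight out of the standard GCN throughput formula (a quick sketch, using the rated clocks above):

```python
# Theoretical FP32 throughput for GCN/Vega:
# CUs x 64 shaders/CU x 2 ops per clock (FMA) x clock speed.
def vega_tflops(cus: int, mhz: int) -> float:
    return cus * 64 * 2 * mhz * 1e6 / 1e12

v9 = vega_tflops(9, 1300)    # ~1.50 TFLOPS
v11 = vega_tflops(11, 1400)  # ~1.97 TFLOPS
print(f"Vega 9: {v9:.2f} TF, Vega 11: {v11:.2f} TF, edge: {v11 / v9 - 1:.0%}")
# edge: 32% -- yet measured performance was about the same, i.e. bandwidth-bound.
```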
 

moinmoin

Diamond Member
Jun 1, 2017
4,944
7,656
136
thetrashcan said:
I'm also curious to see how Zen 2 will perform with one quarter of the L3 cache per core. Shaving off 24MiB of cache should save approximately 20 mm^2 compared to the two CCXs on the chiplet die, so it certainly makes sense from a cost standpoint, but whether the improved latency from moving the memory controller back on-die and the improved bandwidth of LPDDR4X will be enough to mitigate the higher cache miss rate remains to be seen. An interesting point of comparison is that Renoir's 8 cores + 8MiB L3 should weigh in at approximately 42 mm^2, while Tiger Lake's 4 cores + 12MiB L3 appears to be around 38.2 mm^2 from the die shots.
For Raven Ridge and Picasso the smaller L3$ was widely considered adequate due to the lack of inter-CCX communication: with only one CCX, much of the higher latency (which otherwise requires caching to hide) is cut out. For Zen 3 we already have the leaked plan of unifying the L3$ between the two CCXs on one chiplet. Since Renoir faces the issue of now having two CCXs but not enough L3$ to mask inter-CCX latency, it would make sense for L3$ unified through microcode logic to be introduced first in Renoir.
 

Gideon

Golden Member
Nov 27, 2007
1,625
3,650
136
An interesting take from AnandTech's Twitter feed (about the recent Renoir article). So AnandTech (kinda) confirmed again that there will be a 15W Ryzen 9 4900U boosting to 4.3 GHz.

Raghunathan said:
"eight Zen 2 cores, with frequencies at 1.8-4.3 GHz at 15 W. So were you briefed on any upcoming Ryzen 9 4000 series SKUs."
https://twitter.com/fragman1978
Dr. Ian Cutress said:
'We'll make some if we feel there is an appropriate market and opportunity for them' or something to that effect was the official line

https://twitter.com/IanCutress
 
  • Like
Reactions: lightmanek

uzzi38

Platinum Member
Oct 16, 2019
2,625
5,897
146

RetroZombie

Senior member
Nov 5, 2019
464
386
96
moinmoin said:
For Raven Ridge and Picasso the smaller L3$ was widely considered adequate
The L2 cache was shared between cores in Raven Ridge vs. non-shared in Summit Ridge.
So it's 6MB vs 8.5MB of 'total cache'; that's why performance was very 'adequate'.

Not sure about Renoir, we have zero info yet; it will be interesting to see if it's two CCXs with one pool of L3 cache or two CCXs with each one having its own exclusive L3.
 

RetroZombie

Senior member
Nov 5, 2019
464
386
96
Thunder 57 said:
The 5500XT is overpriced. Rumor is that it's on purpose, because of unsold Polaris inventory.
That, and the complete marketing failure of how AMD decided to name their GPUs.

This would be the correct one:

TIER | GIVEN NAME | LINE UP NAME | OR ELSE
T1   | VEGA64     | RX580        | RX590
T2   | VEGA56     | RX570        | RX580
T3   | RX580      | RX560        | RX570
T4   | RX570      | RX550        | RX560
T5   | RX560      | RX540        | RX550
T6   | RX550      | RX530        | RX540

That's why many say the RX5700 line is Polaris's replacement; they are just being misled by the previous lineup's incorrect naming.
I see you aren't one of those who confuse things.
 

moinmoin

Diamond Member
Jun 1, 2017
4,944
7,656
136
RetroZombie said:
The L2 cache was shared between cores in Raven Ridge vs. non-shared in Summit Ridge.
So it's 6MB vs 8.5MB of 'total cache'; that's why performance was very 'adequate'.
Indeed, and that mode is only possible if there is no more than one CCX.

RetroZombie said:
Not sure about Renoir, we have zero info yet; it will be interesting to see if it's two CCXs with one pool of L3 cache or two CCXs with each one having its own exclusive L3.
Since it's based on Zen 2 there will be two CCXs. There is no way Renoir includes significant silicon changes like unifying the L3$ on die. That leaves different handling of the cache in microcode, something that can be back-ported from Zen 3 - an approach AMD has used for all its APUs so far: current-gen silicon design with next-gen microcode.

RetroZombie said:
That's why many say the RX5700 line is Polaris's replacement
Vega lives on as graphics-less compute for datacenters. Polaris disappears. Navi arrives.
 
  • Like
Reactions: amd6502

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
uzzi38 said:
Don't take it as a confirmation, as it's still unclear if it'll actually hit the market.

Since there was already an ASUS spec sheet for the ROG Zephyrus G14 mentioning a Renoir 4900HS CPU (a binned 35W version of the 4900H), it's very likely we will see a Ryzen 9 4900H with a 4.3 GHz max boost.

 
  • Like
Reactions: lightmanek

Shivansps

Diamond Member
Sep 11, 2013
3,851
1,518
136
thetrashcan said:
The drop in CU count seems like a sensible decision as well - it seems pretty clear that Picasso was starved for memory bandwidth, evidenced by the fact that Vega 9 @ 1300MHz and Vega 11 @ 1400MHz in the Surface Laptop 3 had essentially the same performance despite Vega 11's 32% edge in theoretical FLOPS. There's no reason to spend die space if it won't impact performance. The boost in bandwidth from LPDDR4X should lead to a much more balanced system overall, with better GPU performance despite the drop in theoretical throughput.

With the DDR4-2400 that those mobile APUs support? Yes, but that's not true for desktop, where DDR4-3000/3200 is more common, or with the DDR4-3200/LPDDR4X-4266 that mobile Renoir supports. With DDR4-3200 you need an iGPU overclock; stock is not enough, and that's true on both the 3200G and the 3400G.
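To put rough numbers on those bandwidth gaps (a simple peak-bandwidth sketch, assuming a 128-bit total bus width in each case):

```python
# Peak bandwidth for a 128-bit memory interface (dual-channel DDR4,
# or 4x32-bit LPDDR4X): transfer rate in MT/s x 16 bytes per transfer.
def peak_gbs(mts: int, bus_bytes: int = 16) -> float:
    return mts * bus_bytes / 1000  # GB/s

for name, mts in [("DDR4-2400", 2400), ("DDR4-3200", 3200), ("LPDDR4X-4266", 4266)]:
    print(f"{name}: {peak_gbs(mts):.1f} GB/s")
# DDR4-2400: 38.4 GB/s, DDR4-3200: 51.2 GB/s, LPDDR4X-4266: 68.3 GB/s
```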

Thunder 57 said:
I think SMT is a bigger reason why the 3400G sells despite there being a cheaper option. Gaming with an iGPU on the desktop is... kind of dumb, unless it's older or less demanding games. And those threads will help a lot with editing and transcoding.

I played a lot on a 2200G and then on a 3200G, for a total of 10 months, after I had to sell my 1700 and my RX480. I was able to play every game I had. This is my 2200G on The Witcher 3.

I was also able to play Assassin's Creed Unity when they gave it away for free, along with Tropico 6 and several other games. To the point where I was thinking about not buying a dGPU, until I got a cheap RX570.

You just need to drop the resolution a bit; 900p is a perfectly fine performance/quality tradeoff for an APU. But there are cases where you need to go back to 720p, and that's when things start to get ugly.
In my experience you need to overclock the Vega 8 A LOT to match the stock Vega 11: 1500MHz on the 2200G to match a stock 2400G, and 1600MHz on the 3200G to match the 3400G. Always on DDR4-3200.

If you want to see new games, well, here is Jedi: Fallen Order on a 3200G

And RDR2 on a 3400G
https://www.youtube.com/watch?v=8HbvP2UF6hk

That's kinda top of the line, yes, but 3200MHz RAM performs just slightly slower.
 
Last edited:
  • Like
Reactions: OTG

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
^^ I really wish they'd make a Quad Channel AM4 variant for future Zen based APUs. Memory Bandwidth is the killer on these things for sure. Could cram in a thousand CUs, but not be able to feed them any better.
 

JasonLD

Senior member
Aug 22, 2017
485
445
136
Arkaign said:
^^ I really wish they'd make a Quad Channel AM4 variant for future Zen based APUs. Memory Bandwidth is the killer on these things for sure. Could cram in a thousand CUs, but not be able to feed them any better.

Since DDR5 in 1-2 years would effectively give quad-channel-like bandwidth in a dual-channel layout, I doubt they would go that route.
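A rough sketch of that equivalence (assuming DDR5 matures around 6400 MT/s for this comparison; actual launch speeds may differ):

```python
# Peak bandwidth = transfer rate (MT/s) x bus width in bytes.
def peak_gbs(mts: int, bus_bits: int) -> float:
    return mts * bus_bits / 8 / 1000  # GB/s

ddr4_dual = peak_gbs(3200, 128)   # 51.2 GB/s
ddr4_quad = peak_gbs(3200, 256)   # 102.4 GB/s
ddr5_dual = peak_gbs(6400, 128)   # 102.4 GB/s -- matches DDR4 quad channel
print(f"{ddr4_dual:.1f} vs {ddr4_quad:.1f} vs {ddr5_dual:.1f} GB/s")
```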
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Arkaign said:
^^ I really wish they'd make a Quad Channel AM4 variant for future Zen based APUs. Memory Bandwidth is the killer on these things for sure. Could cram in a thousand CUs, but not be able to feed them any better.

It's not worth going above dual channel. It means you have to double the memory controller real estate on the die.

Plus, each channel is 64 bits wide, so that's 64 extra wire traces that have to be routed. Two extra channels means 128 of them, which increases board complexity and/or requires more board layers, either of which raises the cost of production.

Value boards would then have to be separately designed for dual-channel operation, reducing design reuse. That's another cost adder.
 
  • Like
Reactions: Thunder 57

Shivansps

Diamond Member
Sep 11, 2013
3,851
1,518
136
Arkaign said:
^^ I really wish they'd make a Quad Channel AM4 variant for future Zen based APUs. Memory Bandwidth is the killer on these things for sure. Could cram in a thousand CUs, but not be able to feed them any better.

DDR5 is the way to go. But believe me, I actually used those little APUs for gaming; once you have DDR4-3200 the iGPU is the main problem. This is demonstrated by the RX550 (8CU Polaris, GDDR5) and the GT1030 (GDDR5) not being much faster.

Just take a look at this

But desktop and mobile are different things; mobile Picasso's iGPU was bottlenecked by DDR4-2400. This is the reason AMD was able to afford the iGPU downgrade on mobile. On desktop they may drop (I hope) the whole idea of Vega segmentation.

This is a 3200G vs 3400G Stock vs 3400G OC

As you can see, Vega 11 OC performs faster (on DDR4-3466), and Vega 8 needs 1725MHz to match a Vega 11 at 1600MHz. It is memory starved, but not as much as people believe.
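Running the paper numbers on that match-up makes the point (same GCN throughput formula as earlier in the thread; a sketch, not a benchmark):

```python
# Paper FP32 throughput: CUs x 64 shaders x 2 ops per clock (FMA) x clock.
def vega_tflops(cus: int, mhz: int) -> float:
    return cus * 64 * 2 * mhz * 1e6 / 1e12

vega8_oc = vega_tflops(8, 1725)  # ~1.77 TF
vega11 = vega_tflops(11, 1600)   # ~2.25 TF
print(f"Vega 8 @ 1725MHz: {vega8_oc:.2f} TF, Vega 11 @ 1600MHz: {vega11:.2f} TF")
# Vega 8 matches despite ~22% fewer paper FLOPS: partially bandwidth-bound,
# but clock speed still helps, so not purely starved.
```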
 
Last edited:

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
IntelUser2000 said:
It's not worth going above dual channel. It means you have to double the memory controller real estate on the die.

Plus, each channel is 64 bits wide, so that's 64 extra wire traces that have to be routed. Two extra channels means 128 of them, which increases board complexity and/or requires more board layers, either of which raises the cost of production.

Value boards would then have to be separately designed for dual-channel operation, reducing design reuse. That's another cost adder.

I realize that; it's more 'wishful thinking' territory.

I think the mobo side of things actually wouldn't be that bad. I just got a new X79 board with M.2 and true quad-channel memory for like $69. Kind of a strange brand, and only 4 RAM slots, but interesting indeed to play with.

On the flip side, the memory controller on the Zen side of the equation would be a significant undertaking. It'd perhaps be worth doing the necessary modification if, say, they had a 4-8TF APU to sell for $300+. But for value parts it will never happen.

At least AM5, and with it a tremendous uptick in bandwidth, is in sight.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
Shivansps said:
DDR5 is the way to go. But believe me, I actually used those little APUs for gaming; once you have DDR4-3200 the iGPU is the main problem. This is demonstrated by the RX550 (8CU Polaris, GDDR5) and the GT1030 (GDDR5) not being much faster.

Just take a look at this

But desktop and mobile are different things; mobile Picasso's iGPU was bottlenecked by DDR4-2400. This is the reason AMD was able to afford the iGPU downgrade on mobile. On desktop they may drop (I hope) the whole idea of Vega segmentation.

This is a 3200G vs 3400G Stock vs 3400G OC

As you can see, Vega 11 OC performs faster (on DDR4-3466), and Vega 8 needs 1725MHz to match a Vega 11 at 1600MHz. It is memory starved, but not as much as people believe.

I hear what you're saying, but like anything it's a matter of balance. The APUs are always a little starved for VRAM bandwidth, thus they keep them fairly small in CU count, etc. If they increased them very much, it would rapidly go beyond the limits of 128-bit DDR4.

The 1030 and 550 are abominable performers, truly only valuable for adding specific display outputs for multiscreen stuff or for basic HTPC or office duties. It's also possible that I'm just a bit spoiled, but I find them painful.
 
  • Like
Reactions: CHADBOGA

Shivansps

Diamond Member
Sep 11, 2013
3,851
1,518
136
Arkaign said:
I hear what you're saying, but like anything it's a matter of balance. The APUs are always a little starved for VRAM bandwidth, thus they keep them fairly small in CU count, etc. If they increased them very much, it would rapidly go beyond the limits of 128-bit DDR4.

The 1030 and 550 are abominable performers, truly only valuable for adding specific display outputs for multiscreen stuff or for basic HTPC or office duties. It's also possible that I'm just a bit spoiled, but I find them painful.

I agree it makes no sense to make the iGPU LARGER on DDR4; there is just no point, and a little stock OC would be more than enough, as 1600MHz seems to be the sweet spot for Vega 8/11 on DDR4-3200. But Renoir gets both a top iGPU downgrade and severe Vega segmentation, as Vega 8 is being promoted from a $99 SKU all the way up to a $350 SKU on desktop. I hope they at least drop the idea of segmenting Vega, but with a die of max 8 CUs I doubt it.
 
  • Like
Reactions: Arkaign

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
It seems like Vega is the last gremlin from the 'old stock' AMD stack to go away. We're nearing the launch of RDNA2 (RX6800?) and yet they still have ol' Vega on this thing. I think it shows a level of indifference on their part, as it's bound to be fairly limited in any regard.

Still, I think an RDNA or RDNA2 Navi design would help a lot for this kind of product. Vega in all its incarnations was notoriously inefficient relative to its rated TFLOP capabilities (which assume perfect conditions). This is why the Vega 64 at 12.6TF was only 'real world' close to GTX1080 AIBs at 9TF. Navi has quickly shown a vast improvement in TF vs real-world performance, which means a lower/smaller SKU can equal a larger/hungrier Vega GCN design. Bippity boppity boo: Ryzen 5800 APU w/ Navi 2+ and DDR5 for 2021?
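Putting numbers on that paper-vs-real-world gap (rated boost clocks, theoretical FMA throughput only):

```python
# Rated FP32 throughput: shader count x 2 ops per clock (FMA) x boost clock.
def tflops(shaders: int, boost_mhz: int) -> float:
    return shaders * 2 * boost_mhz * 1e6 / 1e12

vega64 = tflops(4096, 1546)   # 64 CUs x 64 lanes -> ~12.7 TF
gtx1080 = tflops(2560, 1733)  # reference boost -> ~8.9 TF (AIBs land near 9 TF)
print(f"Vega 64: {vega64:.1f} TF vs GTX 1080: {gtx1080:.1f} TF")
# ~40% more paper FLOPS for roughly similar real-world gaming performance.
```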

I dare say we could conceivably see an APU approximately equal to a 3700X + GTX 1060/RX570ish if they are on 2nd gen 7nm EUV, and don't starve it (it should grow to 200% the approximate resources vs Vega+DDR4 models, which with efficiency gains means 2.5-3x the performance potential). Here's hoping. I'd like to recommend them more, but I just end up pointing people towards steals on Ryzen 1700/1800/2700 as far as value stuff goes, or just building a great used box.
 

amd6502

Senior member
Apr 21, 2017
971
360
136
Threadripper is a candidate for a big MCM quad-channel APU. But I think the market for big quad-channel iGPUs is so limited that TR might never get such a platform upgrade.

Would be nice if AM4 did get an MCM APU that targets higher-speed dual-channel memory.


Renoir did look like a small BGA package, but I was a bit shocked to read they actually squeezed it to 150mm2 (I would've guessed just under 200mm2). It probably fits on the same micro-BGA package as Stoney (a tiny version of FP5), which I believe was like 120mm2 or 125mm2.

Renoir is mobile-oriented but still great for desktop budget builders, who don't use pricey high-frequency DDR4 anyway.
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136

Arkaign said:
4800U looks like yet another ehh IGP, but attached to an awesome CPU.
Why? These 7nm Vega iGPUs are faster than ICL's iGPUs in cheaper laptops; if there's gaming or heavy professional graphics usage on more expensive laptops, there's always a dGPU; and if midrange or somewhat pricier laptops come out where gaming is not one of the selling points, then they're more than adequate.

My biggest fears were poor & cheap designs and the choice of going with Vega. The first is no longer a fear for me; thank God 95% of AMD laptops are not going to be a complete nightmare. The second? Well, when I saw the leaked sheets hours before all the presentations and saw the numbers of 6, 7 and 8 CUs, I nearly fainted. It all turned out to be a very nice surprise in the end :) I'm content with them beating ICL's iGPU with so little die space used, compared to what they could have done on 7nm.

Of course a while later comes Tiger Lake (some people say Q1, but I say this -> big words and fuss in Q2, CNL- and Radeon VII-like availability in Q3, with most designs coming for the holidays)... with the mighty DG1 as the iGPU, with a supposed 80% performance uplift over ICL's iGPU. We'll see if that makes any sense and whether it won't take too much power and thermal headroom away from the CPU cores. 'Cause I guess nobody thinks that on a very similar node Intel has suddenly found 100% higher performance per watt going from Gen11 to Gen12 graphics :)
 
  • Like
Reactions: spursindonesia

maddie

Diamond Member
Jul 18, 2010
4,738
4,667
136
It seems like Vega is the last gremlin from the 'old stock' AMD stack to go away. We're nearing the launch of RDNA2 (RX6800?) and yet they still have ol Vega on this thing. I think it shows a level of indifference on their part, as it's bound to be fairly limited in any regard.

Still, I think an RDNA or RDNA2 Navi design would help a lot for this kind of design. Vega in all its incarnations was notoriously inefficient for its rated Tflop capabilities (obviously under perfect conditions). This is why the Vega 64 at 12.6TF was only 'real world' close to GTX1080 AIBs at 9TF. Navi has quickly shown a vast improvement in this TF vs real world performance, and thus it also means a lower/smaller SKU can equal a larger/hungrier Vega GCN design, bippity boppity boo : Ryzen 5800 APU w/Navi 2+ and DDR5 for 2021?

I dare say we could conceivably see an APU approximately equal to a 3700X + GTX 1060/RX570ish if they are on 2nd gen 7nm EUV, and don't starve it (it should grow to 200% the approximate resources vs Vega+DDR4 models, which with efficiency gains means 2.5-3x the performance potential). Here's hoping. I'd like to recommend them more, but I just end up pointing people towards steals on Ryzen 1700/1800/2700 as far as value stuff goes, or just building a great used box.
I'm not too sure this is the Vega of last time. It seems they tweaked the internals to get more performance/MHz. Wonder if this is how the original Vega was supposed to perform. A lot of us believed that the 1st Vega design was borked.

In any case, being fixated on a technical number is a curious thing, as at the end of the day the performance seen by the user is the only thing that matters. Who cares how many CUs there really are, when FPS is what really matters?
 
  • Like
Reactions: scannall

Shivansps

Diamond Member
Sep 11, 2013
3,851
1,518
136
maddie said:
I'm not too sure this is the Vega of last time. It seems they tweaked the internals to get more performance/MHz. Wonder if this is how the original Vega was supposed to perform. A lot of us believed that the 1st Vega design was borked.

In any case, being fixated on a technical number is a curious thing, as at the end of the day the performance seen by the user is the only thing that matters. Who cares how many CUs there really are, when FPS is what really matters?

I'm not sure they changed ANYTHING in the Vega cores Renoir has. It is still called Vega; they would have changed the name to Vega 2 or something, with tons of slides to back it up. They did nothing of the sort, so it is still the same old Vega to me.

I'm convinced that the "59%" they mentioned is achieved by higher frequencies and higher memory bandwidth (DDR4-2400 vs DDR4-3200) at the 15W TDP that mobile Renoir runs at. Zen 2 vs Zen+ and an improved memory controller help as well.

The problem here is that they are going to move Vega 8 from a $99 APU to a $300+ one and segment it all the way down. It's not going to be so bad if they at least drop the idea of segmenting Vega and all desktop APUs end up with Vega 8. But they are removing a previously available feature from a SKU; this is the same as Intel removing HT from the i7s in favor of segmentation and moving it to the i9s. No matter how good the performance is at the end of the day, it would be higher if they did not cut and segment that feature.

I'm willing to look the other way if all desktop SKUs have Vega 8, but not at profit-driven segmentation of a previously available feature.
 

RetroZombie

Senior member
Nov 5, 2019
464
386
96
Arkaign said:
yet they still have ol' Vega on this thing. I think it shows a level of indifference on their part
Why don't you like Vega?
It seems like one of the best architectures ever made.
Vega (and Polaris) to me had two major problems: bad marketing (product naming) and the bitcoin craze.

You can clearly see it today: NVIDIA's and AMD's new product lines have problems against those old products, because they were THAT good.
 
Last edited:
  • Like
Reactions: amd6502

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
lobz said:
Why? These 7nm Vega iGPUs are faster than ICL's iGPUs in cheaper laptops; if there's gaming or heavy professional graphics usage on more expensive laptops, there's always a dGPU; and if midrange or somewhat pricier laptops come out where gaming is not one of the selling points, then they're more than adequate.

Well, I guess I should wait for real-world results before any firm declarations, but the UserBenchmark GPU results are something like 10% faster than the 4C/8T 1065G7 IGP. If that carries over to typical performance, I can't find it terribly exciting. I mean, 110 (or even 140!) percent of bleh is still blehhhh. For context, besides the raw numbers themselves, a 'decent' GPU scores at least 50% on their scale; the 1065G7 scores 15%, the 4800U's Vega scores 16%. That is right at 64-bit DDR3 GT1030 levels, or fairly ghastly in 3D. The eight-year-old 28nm 7850 mid-range card is notably quicker despite being basically terrible in features and memory for the modern era. This lack of uptick in IGP, despite it being possible many years ago in consoles, is kind of sad. The PS4's GPU portion from 7 years back is better, but of course it has GDDR5 and is also tied to that hideous Jaguar CPU side. I'm not asking for a 10+TF beast on a PC APU, but dang, at least 2-3TF would be real nice.

I remember the hope the Hades Canyon Radeon NUC presented; it's around what I'd consider the bare minimum for respectable 1080p gaming performance, and even that still dips into the 20fps range with massive stutters in AC Odyssey and Origins, though at least most of the time it's 30-50ish and should be lockable at a console-like 30fps. And it scores 35% on the GPU benchmark, well more than twice as fast as the GT1030 DDR3 / Vega 4800U APU at 15-16%.

I mean sure though, I do like how basically all modern IGP is capable of competent desktop work at 4k60+, hardware h.26x, etc.

I also fully realize, and hope I can adequately express, that this is my perspective on it. I wouldn't give one of these to my sons; it would be erratic under major titles, constantly requiring tweaked and lowered settings to get playable performance. And I'm a definite budget king; I love setting up value options to donate.