
Info Ryzen 4000 Mobile Chips Unveiled at CES


Gideon

Golden Member
Nov 27, 2007
1,104
1,964
136
Everybody likes to focus on the very top-end, but the entire lineup seems very competitive.

Take a look at the available 15W SKUs (also note Intel's pricing before discounts):

AMD Ryzen 4000 15W U-Series CPUs (source: AnandTech)

| SKU | Cores/Threads | Base Freq | Turbo | L2 | L3 | Compute Units | IGP Freq |
|---|---|---|---|---|---|---|---|
| Ryzen 7 4800U | 8C / 16T | 1.8 GHz | 4.2 GHz | 4 MB | 8 MB | 8 CUs | 1750 MHz |
| Ryzen 7 4700U | 8C / 8T | 2.0 GHz | 4.1 GHz | 4 MB | 8 MB | 7 CUs | 1600 MHz |
| Ryzen 5 4600U | 6C / 12T | 2.1 GHz | 4.0 GHz | 3 MB | 8 MB | 6 CUs | 1500 MHz |
| Ryzen 5 4500U | 6C / 6T | 2.3 GHz | 4.0 GHz | 3 MB | 8 MB | 6 CUs | 1500 MHz |
| Ryzen 3 4300U | 4C / 4T | 2.7 GHz | 3.7 GHz | 2 MB | 4 MB | 5 CUs | 1400 MHz |


Intel Ice Lake-U 15W CPUs (source: AnandTech)

| SKU | Cores/Threads | Base Freq | 1C Turbo | AC Turbo | L3 | GPU EUs | GPU Freq | Price |
|---|---|---|---|---|---|---|---|---|
| Core i7-1065G7 | 4C / 8T | 1.3 GHz | 3.9 GHz | 3.5 GHz | 8 MB | 64 | 1100 MHz | $426 |
| Core i5-1035G7 | 4C / 8T | 1.2 GHz | 3.7 GHz | 3.3 GHz | 6 MB | 64 | 1050 MHz | $320 |
| Core i5-1035G4 | 4C / 8T | 1.1 GHz | 3.7 GHz | 3.3 GHz | 6 MB | 48 | 1050 MHz | $309 |
| Core i5-1035G1 | 4C / 8T | 1.0 GHz | 3.6 GHz | 3.3 GHz | 6 MB | 32 | 1050 MHz | $297 |
| Core i3-1005G1 | 2C / 4T | 1.2 GHz | 3.4 GHz | 3.4 GHz | 4 MB | 32 | 900 MHz | $281 |

Now we obviously need to know the battery life before drawing any far-reaching conclusions, but it sure looks promising for now, as *Lenovo has already claimed similar battery life to Ice Lake. If that's truly the case (within 5-10%), I see very little reason to prefer Ice Lake over Renoir, particularly at the low end.

Take the Acer Swift 3 for example. The AMD version starts $100 cheaper at $599, while the Intel version starts at $699. If that's 4300U vs 1005G1, I'd know my choice.

The other one is the Lenovo Yoga Slim. AMD starts @ $699; Intel starts at $1210 (with a 1065G7, though). With such a price difference in the same chassis, I'd rather eye something like a Ryzen 5 4600U with more SSD and memory than the top-end SKUs.

* in this article Lenovo claims the AMD version lasts about 14h with a 60.7 Wh battery, and the Ice Lake version has similar battery life estimates.
 

Shivansps

Diamond Member
Sep 11, 2013
3,110
786
136
I'm 99% positive that Athlon Silver/Gold is Raven2, not even Picasso.

What I notice:

AMD is starting to segment a bit more, as if they have a monopoly, e.g. like Intel 3 years ago. That's what lack of competition looks like. People need to oppose that crap before they start behaving like Intel and Nvidia.

Lots of disabling of SMT. Do we not need efficiency on mobile? No thank you, AMD!

Ryzen 9 not released - do they even have to? Shareholders are surely glad, but thanks for nothing!

A 4C Renoir APU. Really necessary? At such a small die? 6C with cut graphics is plenty to get near 100% yield.

AMD is kicking Intel to death all over. Renoir will be far superior, and the artificial hamstringing of the APU shows it. Intel needs to change top management and get another culture. Whatever. We need some competition now, or at least in 4 years - not some paper product produced in tiny numbers. I can't endure 6 Bulldozer years again.
And don't forget the Vega CU downgrade on Renoir across the board; not even Intel dared to do such a thing.
 

Gideon

Golden Member
Nov 27, 2007
1,104
1,964
136
While I'm not a fan of the CU downgrade (particularly as it's Vega), I understand why they had to do it. This chip is already twice the size of the Zen 2 CPU chiplet (at roughly 150 mm²). They need to be able to produce these in volume, as defects add up really quickly.

Dr. Ian Cutress mentions the die size and a defect density of 0.09 defects per cm² in this article.

Based on this, we can take a die-per-wafer calculator and do some back-of-the-napkin calculations (bear in mind, the wafers are 300 mm), assuming dimensions of 12.5 x 12 mm for Renoir and 14 x 14 mm for a hypothetical 12-CU version (very rough, but 16 CUs would certainly be bigger).

Taking that into account:
  • With a 150 mm² chip you get on average ~331 good dies and ~47 defective dies (some of which are salvageable)
  • With a 200 mm² chip, you only get ~246 good dies and ~47 defective dies (again some, but not all, salvageable)
  • ... etc

This adds up quickly and non-linearly. That's 30% fewer chips (and on average a 30% higher price per die). Is all of this worth it for "MUH Desktop APUs"? Probably for us, but not for the people responsible for finances at AMD. Remember, AMD needs to do both: produce enough of these chips (on a highly booked and expensive node) and price them competitively.
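If you want to sanity-check the napkin math yourself, here's a rough Python sketch using the simple Poisson yield model (my assumption - the online calculator likely uses a slightly different model and edge exclusion, so the exact counts won't match, but the non-linear trend is the same):

```python
import math

WAFER_DIAMETER_MM = 300
DEFECT_DENSITY_PER_CM2 = 0.09  # figure from the article cited above

def gross_dies(die_w_mm, die_h_mm):
    """Crude gross-dies-per-wafer estimate: wafer area over die area,
    minus an edge-loss term proportional to the wafer circumference."""
    die_area = die_w_mm * die_h_mm
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    edge_loss = math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area)
    return int(wafer_area / die_area - edge_loss)

def split_good_bad(die_w_mm, die_h_mm, d0=DEFECT_DENSITY_PER_CM2):
    """Poisson yield model: P(die has zero defects) = exp(-area * D0)."""
    area_cm2 = die_w_mm * die_h_mm / 100
    total = gross_dies(die_w_mm, die_h_mm)
    good = int(total * math.exp(-area_cm2 * d0))
    return good, total - good

# ~150 mm² Renoir vs a hypothetical ~196 mm² 12-CU version
for label, (w, h) in {"12.5 x 12 mm": (12.5, 12.0),
                      "14 x 14 mm": (14.0, 14.0)}.items():
    good, bad = split_good_bad(w, h)
    print(f"{label}: {good} good, {bad} defective per wafer")
```

Both the good-die count and the yield fraction drop as the die grows, which is why the cost per good die rises faster than the area does.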

Pics of the very rough calculation above:
[attached: two screenshots of the die-per-wafer calculator results]


Unfortunately engineering is always a tradeoff; there are no "free lunches". AMD just chose to prioritize CPUs this round.

Hopefully Van Gogh is not an Apple only product, and you'll get your beefier APU as well.
 

uzzi38

Golden Member
Oct 16, 2019
1,237
2,270
96
And don't forget the Vega CU downgrade on Renoir across the board; not even Intel dared to do such a thing.
Oh no, they cut unnecessary portions (for the market it's aimed for) from a cost sensitive part. How dare they?!?!

Yes, that was sarcasm. Look, let's be real here - you're not getting worse performance on the new Vega 8 than you will be getting on the Vega 11 of the 3400G. You get better memory overclocking (so more bandwidth), and we know Vega on 7nm can push all the way up to 2200 MHz, but it'll also be clocked much higher out of the box.

I'll take a more optimised version with the same or slightly better performance over a fat one for no reason every day of the week. That's like complaining that the Radeon VII is worse than the Vega 64 because the Vega 64 has 64 CUs while the Radeon VII has 60. Actually no, I guess the 5700 vs the Vega 64 is a better comparison. You get the idea though - it's a dumb argument.
 

Topweasel

Diamond Member
Oct 19, 2000
5,327
1,525
136
Good points, and remember that for laptops, AMD's 8-core CPU is premium over premium. It might even get them 3900X money. On the desktop, it's a 3700X the same way a 9900KF is a 9900K. The SKUs under it lose value even quicker on the desktop compared to laptops. It was smart for them to leverage higher GPU clocks on what is already a really good iGPU, on a harsher process (how were they to know 7nm was going to go as smoothly as it did), to increase yields and just stay at the same relative performance.
 

Shivansps

Diamond Member
Sep 11, 2013
3,110
786
136
Oh no, they cut unnecessary portions (for the market it's aimed for) from a cost sensitive part. How dare they?!?!

Yes, that was sarcasm. Look, let's be real here - you're not getting worse performance on the new Vega 8 than you will be getting on the Vega 11 of the 3400G. You get better memory overclocking (so more bandwidth), and we know Vega on 7nm can push all the way up to 2200 MHz, but it'll also be clocked much higher out of the box.

I'll take a more optimised version with the same or slightly better performance over a fat one for no reason every day of the week. That's like complaining that the Radeon VII is worse than the Vega 64 because the Vega 64 has 64 CUs while the Radeon VII has 60. Actually no, I guess the 5700 vs the Vega 64 is a better comparison. You get the idea though - it's a dumb argument.
Oh look, I'm getting the same performance out of my new 4-core as my old 6-core, then there is no need for the 6-core one and I can price the 4-core as the 6-core was priced.

Sorry, that's not an argument, that's a justification. And it's a little early to start with that, especially when we still don't know the performance.

BTW, faster RAM was exactly the whole point of this: Picasso is bandwidth limited. Now we are getting more bandwidth, but they are downgrading the CUs. If it was Navi I could possibly understand, but not with Vega.
 
Last edited:

Shivansps

Diamond Member
Sep 11, 2013
3,110
786
136
While I'm not a fan of the CU downgrade (particularly as it's Vega), I understand why they had to do it. This chip is already twice the size of the Zen 2 CPU chiplet (at roughly 150 mm²). They need to be able to produce these in volume, as defects add up really quickly.

Dr. Ian Cutress mentions the die size and the defect density of 0.09 per cm2 in this article.

Based on this, we can take a die-per-wafer calculator and do some back-of-the-napkin calculations (bear in mind, the wafers are 300 mm), assuming dimensions of 12.5 x 12 mm for Renoir and 14 x 14 mm for a hypothetical 12-CU version (very rough, but 16 CUs would certainly be bigger).

Taking that into account:
  • With a 150 mm² chip you get on average ~331 good dies and ~47 defective dies (some of which are salvageable)
  • With a 200 mm² chip, you only get ~246 good dies and ~47 defective dies (again some, but not all, salvageable)
  • ... etc

This adds up quickly and non-linearly. That's 30% fewer chips (and on average a 30% higher price per die). Is all of this worth it for "MUH Desktop APUs"? Probably for us, but not for the people responsible for finances at AMD. Remember, AMD needs to do both: produce enough of these chips (on a highly booked and expensive node) and price them competitively.

Pics of the very rough calculation above:
[attached: two screenshots of the die-per-wafer calculator results]


Unfortunately engineering is always a tradeoff; there are no "free lunches". AMD just chose to prioritize CPUs this round.

Hopefully Van Gogh is not an Apple only product, and you'll get your beefier APU as well.
How do you know that's the case, and not that they are doing the minimum effort possible to:

1) Sell dGPUs
2) Exploit the lack of APU competition
3) Maximize profits and not care about customers
4) All of the above

There is an expected backlash from selling a smaller version of the exact same GPU core at the same or higher price while compensating with higher frequencies.
 

uzzi38

Golden Member
Oct 16, 2019
1,237
2,270
96
Oh look, I'm getting the same performance out of my new 4-core as my old 6-core, then there is no need for the 6-core one and I can price the 4-core as the 6-core was priced.
AHAHAHHAHAHAHAHAHA

Looks like you forgot the 3600 and 2700 both exist. Nice attempt though.

Also, stop pretending the only difference is the CU count. You're also getting twice the core count on top.
 

zir_blazer

Senior member
Jun 6, 2013
940
85
91
I'm highly impressed by Renoir's CPU side. Since most of the early roadmaps had Renoir as 4C, I was not expecting 8C, much less that much being doable @ 15W. The GPU side is a little meh - supposedly a small Vega revision (as in the Radeon VII?) instead of Navi. It seems like AMD just wanted to beat Intel on the CPU side and be on par on the GPU side, which somehow makes sense: after all, anyone that purchases a gaming notebook will go for a discrete GPU instead of relying on the integrated one, and anyone that needed brute CPU power would have picked Intel, which offered higher-end parts. Basically, Renoir is enough GPU for mainstream users and overkill CPU, but that last bit is precisely what allows AMD to go after the premium mobile CPU segment.

What interests me the most is how Renoir will perform on the desktop. Given that Renoir is monolithic, AMD decided to cut the L3 cache to 1/4 of Matisse's (8 MiB instead of 32 MiB for 8C), so you have less CCX-to-CCX latency but also much less L3 cache. Still, Renoir could technically allow APUs to grow up to the 330 USD Ryzen 3700X price range, assuming it gets released before Zen 3.
 

Topweasel

Diamond Member
Oct 19, 2000
5,327
1,525
136
AHAHAHHAHAHAHAHAHA

Looks like you forgot the 3600 and 2700 both exist. Nice attempt though.

Also, stop pretending the only difference is the CU count. You're also getting twice the core count on top.
Isn't that the whole point of this?

Let's put it this way: you are creating a new mobile SKU, and you decide you can only hit the number of good dies you want with one die size. To accomplish this, do you (a) offer only 6 cores and the same iGPU (which will be a lot faster with the new clocks), or (b) double the core count and maintain iGPU performance with higher clocks?

In the end it's a CPU, not a GPU.
 

Shivansps

Diamond Member
Sep 11, 2013
3,110
786
136
AHAHAHHAHAHAHAHAHA

Looks like you forgot the 3600 and 2700 both exist. Nice attempt though.

Also, stop pretending the only difference is the CU count. You're also getting twice the core count on top.
What attempt? I'm saying exactly what you are saying to justify this; it's an example.

You are saying that you don't care if the exact same GPU core is smaller as long as it has at least the same performance. Well, I used that for my example. If you don't like it, that's your problem, because that's exactly what you said.
 

Shivansps

Diamond Member
Sep 11, 2013
3,110
786
136
Isn't that the whole point of this?

Let's put it this way: you are creating a new mobile SKU, and you decide you can only hit the number of good dies you want with one die size. To accomplish this, do you (a) offer only 6 cores and the same iGPU (which will be a lot faster with the new clocks), or (b) double the core count and maintain iGPU performance with higher clocks?

In the end it's a CPU, not a GPU.
It's an APU, not a CPU. The GPU side counts as much as the CPU side.
 

uzzi38

Golden Member
Oct 16, 2019
1,237
2,270
96
It's an APU, not a CPU. The GPU side counts as much as the CPU side.
No it doesn't. The CPU side is far, far, far, far, far, far, far, far, far, far, far, far, far, far, far, far, far, far, more important to focus on.

APUs are mobile products first. Desktop variants are nothing more than an afterthought.

You tell me what's more important, then, for mobile: smaller, lower-leakage dies allowing for better power efficiency and battery life, or fat iGPUs? What's more important for the plebs: a responsive experience in Windows and a better experience in applications, or 20% better gaming performance?
 

uzzi38

Golden Member
Oct 16, 2019
1,237
2,270
96
What attempt? I'm saying exactly what you are saying to justify this; it's an example.

You are saying that you don't care if the exact same GPU core is smaller as long as it has at least the same performance. Well, I used that for my example. If you don't like it, that's your problem, because that's exactly what you said.
Yes, that is exactly what I think, because the chip provides other very significant advantages, to the point where it crushes in those segments. I do not believe your viewpoint has even a smidgen of ground to stand on. Simple as that.
 

Shivansps

Diamond Member
Sep 11, 2013
3,110
786
136
No it doesn't. The CPU side is far, far, far, far, far, far, far, far, far, far, far, far, far, far, far, far, far, far, more important to focus on.

APUs are mobile products first. Desktop variants are nothing more than an afterthought.

You tell me what's more important, then, for mobile: smaller, lower-leakage dies allowing for better power efficiency and battery life, or fat iGPUs? What's more important for the plebs: a responsive experience in Windows and a better experience in applications, or 20% better gaming performance?
That doesn't mean I have to like a forced CU downgrade, especially if it's the same GPU core; let me make that clear.

Now, the whole point of an APU is combining a decent CPU with a decent GPU. AMD chose to go for CPU performance while downgrading the GPU, hoping to make up for it with higher frequency. For example, 4300U vs 3300U:

3300U
4C/4T, 2.1 GHz base / 3.5 GHz turbo
Vega 6 @ 1200 MHz

4300U
4C/4T, 2.7 GHz base / 3.7 GHz turbo
Vega 5 @ 1400 MHz

It's clear that the 4300U is going to be faster and at least match the 3300U in gaming, but it could be even better with Vega 6. Higher frequency is not free, especially when you are overclocking the whole GPU. I don't think this is about power efficiency; I think the reason was to improve profits due to the lack of competition, as there is no need to provide better GPU performance than that - and I don't have to like that.

And I'm far more worried about the desktop counterparts: if Renoir maxes out at 8 CUs for the 8C/16T part, that's a problem for desktop, as it would mean a significant downgrade that negates the gains in the $100-$150 segment.
 

Thala

Golden Member
Nov 12, 2014
1,128
441
136
It's clear that the 4300U is going to be faster and at least match the 3300U in gaming, but it could be even better with Vega 6. Higher frequency is not free, especially when you are overclocking the whole GPU. I don't think this is about power efficiency; I think the reason was to improve profits due to the lack of competition, as there is no need to provide better GPU performance than that - and I don't have to like that.
Of course it is not about power efficiency. Clocking higher is certainly more expensive power-wise than going wider (e.g. more CUs). Going for higher frequency is about yield and margin.
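A toy model shows why (the voltage/frequency curve and all numbers here are made up purely for illustration): dynamic power scales roughly with switched capacitance × V² × f, and voltage has to rise with clock speed, so a wider GPU at lower clocks delivers the same CU×GHz throughput for less power:

```python
def dynamic_power(cus, freq_ghz, volts):
    # Dynamic power ~ switched capacitance (proportional to CU count) * V^2 * f
    return cus * volts ** 2 * freq_ghz

def voltage(freq_ghz, v0=0.7, slope=0.25):
    # Toy V/f curve: voltage rises roughly linearly with clock (made-up numbers)
    return v0 + slope * freq_ghz

# Same nominal throughput (CUs * GHz = 14) achieved two ways:
narrow = dynamic_power(8, 1.75, voltage(1.75))   # fewer CUs, clocked higher
wide = dynamic_power(10, 1.40, voltage(1.40))    # more CUs, clocked lower
print(f"narrow draws {narrow / wide:.2f}x the power of wide")  # -> 1.17x
```

Which is exactly the tradeoff: the higher-clocked configuration burns more power for the same throughput, but the smaller die yields better per wafer, so it wins on cost.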
 

Panino Manino

Senior member
Jan 28, 2017
282
331
106
I understand that the GPU will perform much better despite the lower CU count because of the faster memory, and that 3 more CUs wouldn't really make a difference, but this is still disappointing for me because I was expecting an APU with killer battery life, with Navi, that uses much less power.
Was that really not possible?
 

teejee

Senior member
Jul 4, 2013
352
187
116
How do you know that's the case, and not that they are doing the minimum effort possible to:

1) Sell dGPUs
2) Exploit the lack of APU competition
3) Maximize profits and not care about customers
4) All of the above

There is an expected backlash from selling a smaller version of the exact same GPU core at the same or higher price while compensating with higher frequencies.
Very little backlash, since gaming-oriented laptops have a dGPU and office laptops have no need for a fast IGP.
And there is no indication of reduced performance; the IGP is probably even a bit faster than the 3000 series anyway.

AMD should not waste die area (money) on all their APUs just because of a niche market. This niche market was important when their CPUs were inferior, but now they need a die that can compete with Intel in high volumes in all middle and high-end laptops.
 

scannall

Golden Member
Jan 1, 2012
1,740
1,165
136
Very little backlash, since gaming-oriented laptops have a dGPU and office laptops have no need for a fast IGP.
And there is no indication of reduced performance; the IGP is probably even a bit faster than the 3000 series anyway.

AMD should not waste die area (money) on all their APUs just because of a niche market. This niche market was important when their CPUs were inferior, but now they need a die that can compete with Intel in high volumes in all middle and high-end laptops.
A larger die also means more wafers needed. Apple moving to 5 nm should free up some 7 nm space, but it's hard to know TSMC's capacity.
 

tomatosummit

Member
Mar 21, 2019
32
16
41
And I'm far more worried about the desktop counterparts: if Renoir maxes out at 8 CUs for the 8C/16T part, that's a problem for desktop, as it would mean a significant downgrade that negates the gains in the $100-$150 segment.
This is the only big thing for me.
On laptops there was never enough power or memory bandwidth and the performance was pretty bad, so the new 8-CU design can improve heavily there.

On the desktop there's no low-hanging fruit to improve performance, even with extra CUs. You're still limited by memory bandwidth - although I suspect 3600/3866 MHz memory might be more attainable, and maybe even 4000+ - but the value segment isn't going to be there.
As teejee said, it's too small of a niche for them to care. The mini PCs will have a performance improvement over the last ones, but just not a great deal without spending money and effort on clocking them.

I was honestly hoping the FP6 laptop socket would have one more trick than just LPDDR4. The rumoured 16/20-CU APU coupled with some additional on-package memory - even a 64-bit GDDR6 chip pair would give an additional ~100 GB/s of memory bandwidth - and the new laptop socket would be a good chance to implement that. So many of AMD's products seem limited by their platform decisions.
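The ~100 GB/s figure checks out with simple peak-bandwidth arithmetic (assuming 14 Gbps-per-pin GDDR6, which is my assumption here):

```python
def peak_bandwidth_gb_s(bus_width_bits, transfer_rate_mt_s):
    # Peak bandwidth (GB/s) = bus width in bytes * transfers per second
    return bus_width_bits / 8 * transfer_rate_mt_s / 1000

print(peak_bandwidth_gb_s(128, 3200))    # dual-channel DDR4-3200: 51.2 GB/s
print(peak_bandwidth_gb_s(128, 4266))    # LPDDR4X-4266: ~68.3 GB/s
print(peak_bandwidth_gb_s(64, 14000))    # 64-bit GDDR6 pair @ 14 Gbps/pin: 112.0 GB/s
```

So a GDDR6 side pool really would roughly double or triple what dual-channel DDR4 provides.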
 

Shivansps

Diamond Member
Sep 11, 2013
3,110
786
136
This is the only big thing for me.
On laptops there was never enough power or memory bandwidth and the performance was pretty bad, so the new 8-CU design can improve heavily there.

On the desktop there's no low-hanging fruit to improve performance, even with extra CUs. You're still limited by memory bandwidth - although I suspect 3600/3866 MHz memory might be more attainable, and maybe even 4000+ - but the value segment isn't going to be there.
As teejee said, it's too small of a niche for them to care. The mini PCs will have a performance improvement over the last ones, but just not a great deal without spending money and effort on clocking them.

I was honestly hoping the FP6 laptop socket would have one more trick than just LPDDR4. The rumoured 16/20-CU APU coupled with some additional on-package memory - even a 64-bit GDDR6 chip pair would give an additional ~100 GB/s of memory bandwidth - and the new laptop socket would be a good chance to implement that. So many of AMD's products seem limited by their platform decisions.
DDR4-3400 was possible on Picasso; I expect Renoir to break DDR4-4000 overclocking and make it easier to overclock DDR4-3000/3200 RAM to 3600-3800...

But the APUs where it actually makes sense to do such a thing are the $100-$150 ones; no one in his right mind is going to buy a $300 8C APU for IGP gaming. If the $100 4200G ends up being a 6-CU APU that's a huge problem, and as things are now it may not even be a 6-CU one, due to segmentation. And the gain in memory bandwidth is just wasted - the iGPU CUs are now going to be the bottleneck, especially at 900p/1080p.

This is clear when you see that a GDDR5 8-CU RX 550 is not that much faster than a 3400G's Vega 11.

I believe this never happened before; when I heard Renoir was to be Vega, I expected them to keep the CU count, not reduce it.
 

tomatosummit

Member
Mar 21, 2019
32
16
41
The thing you're right about is that it's a very price-sensitive segment. 3600 MHz RAM is still significantly more expensive than the common-as-muck 3200 MHz, and I do think the CPU prices will drop in response to Intel's 10th gen. They'll need to hit 6 CUs at £/$100 or 8 CUs at £/$150 to get equal or better GPU performance. I'm putting those numbers down because the price of APUs tends to be fully dictated by where the CPU cores place them in performance, and I'm not taking overclocking into consideration.
It's such a small market that amd just doesn't give a shit.

The only situation where I can see that cost doesn't come into it is ultra-small desktop PCs. I personally have one I built around the 2400G, at a similar size to a Mac mini, and it doesn't look like it'll be worth upgrading, and it's now two years down the line. NUCs are similar, but only that Intel dGPU one has been performance-orientated, and it cost a bomb.
 

TimCh

Member
Apr 7, 2012
52
45
91
DDR4-3400 was possible on Picasso; I expect Renoir to break DDR4-4000 overclocking and make it easier to overclock DDR4-3000/3200 RAM to 3600-3800...

But the APUs where it actually makes sense to do such a thing are the $100-$150 ones; no one in his right mind is going to buy a $300 8C APU for IGP gaming. If the $100 4200G ends up being a 6-CU APU that's a huge problem, and as things are now it may not even be a 6-CU one, due to segmentation. And the gain in memory bandwidth is just wasted - the iGPU CUs are now going to be the bottleneck, especially at 900p/1080p.

This is clear when you see that a GDDR5 8-CU RX 550 is not that much faster than a 3400G's Vega 11.

I believe this never happened before; when I heard Renoir was to be Vega, I expected them to keep the CU count, not reduce it.
Yes, it is a huge problem for people who need high fps and want to spend money on premium memory, but don't want to buy a discrete GPU to get way higher performance.
 

krumme

Diamond Member
Oct 9, 2009
5,898
1,524
136
A larger die also means more wafers needed. Apple moving to 5 nm should free up some 7 nm space, but it's hard to know TSMC's capacity.
Lisa said they are "planning for success" - imo it means lots of 7nm EUV capacity is reserved.
Good times ahead.
 
