AMD Rembrandt/Zen 3+ APU Speculation and Discussion


izaic3

Member
Nov 19, 2019
Alright, so we've had some leaks. I don't know if any of it has been confirmed yet, as it's pretty early, but here is what I've surmised so far (massive grain of salt, of course):

If it turns out to have RDNA 2 and 12 CUs, I could see iGPU performance potentially almost doubling over Cezanne.
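
For what it's worth, here's the back-of-envelope math behind that guess as a quick Python sketch. The per-CU uplift factor is purely my assumption, not anything from the leaks:

Code:
# Back-of-envelope for the "almost doubling" guess.
# per_cu_uplift is a hypothetical RDNA2-vs-Vega gain per CU.
cezanne_cus = 8      # Vega 8 in Cezanne
rembrandt_cus = 12   # leaked RDNA2 CU count
per_cu_uplift = 1.3  # assumption, not a measured figure

estimate = (rembrandt_cus / cezanne_cus) * per_cu_uplift
print(f"~{estimate:.2f}x Cezanne iGPU performance")  # ~1.95x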

If I've made any mistakes or gotten anything wrong, please let me know. I'd also love to hear more knowledgeable people weigh in on their expectations.
 

moinmoin

Diamond Member
Jun 1, 2017
Biggest takeaway for me is that the performance drop on battery power is not as bad as previous gen.
This worries me a little, actually. The way this worked on Renoir and Cezanne is that short bursts to inefficient frequencies are avoided on battery unless the workload takes a longer time. (At least on Renoir this behavior is changeable by the user.) If Rembrandt changes that, I hope it doesn't do so while using more energy for short bursts.
 

tamz_msc

Diamond Member
Jan 5, 2017
Total nonsense. If Intel chips' performance tanks less in battery mode, it means they consume a lot more power than RMB. What matters is the total throughput over the battery's runtime. Or do you mean that running half the time at 10% higher perf is the better behaviour?
It does not use a "lot more" power to give the same performance plugged vs unplugged. I want the same CPU performance for light, bursty loads whether the laptop is plugged in or not. This was not possible on earlier CPUs from AMD; for example, Renoir dropped 40% performance in Geekbench when unplugged. The only thing that changes on Intel when you plug it in is PL1; PL2 remains the same, so boosting behaviour remains the same.
 

tamz_msc

Diamond Member
Jan 5, 2017
This worries me a little, actually. The way this worked on Renoir and Cezanne is that short bursts to inefficient frequencies are avoided on battery unless the workload takes a longer time. (At least on Renoir this behavior is changeable by the user.) If Rembrandt changes that, I hope it doesn't do so while using more energy for short bursts.
The 10-second delay to boost that was present on Renoir (dunno if it's also there on Cezanne) was a big reason for sluggishness on battery power.
 

uzzi38

Platinum Member
Oct 16, 2019
Chinese review of 6800H and 6600H is up:


Biggest takeaway for me is that the performance drop on battery power is not as bad as previous gen. Still not as good as Intel's, but a step in the right direction.

Oh and the 12 CU iGPU is at MX450 level, which confirms my predictions.
There's surprisingly little difference between DDR5-4800 and LPDDR5-6400; some games even regress. That, plus the 50W GDDR6 1650 performing only 7% faster than the 35W GDDR5 1650, has me a little concerned that some of the benches there are CPU-bound, but still, I like that first preview.

I'm happy about the boost delay being gone without a hit to battery life. Rather, battery life is still up gen on gen - that's a win in my book.
 

tamz_msc

Diamond Member
Jan 5, 2017
There's surprisingly little difference between DDR5-4800 and LPDDR5-6400; some games even regress. That, plus the 50W GDDR6 1650 performing only 7% faster than the 35W GDDR5 1650, has me a little concerned that some of the benches there are CPU-bound, but still, I like that first preview.

I'm happy about the boost delay being gone without a hit to battery life. Rather, battery life is still up gen on gen - that's a win in my book.
1650 is out of the question. Those comparisons that AMD made must have used a really poorly configured system.
 

mikk

Diamond Member
May 15, 2012
Chinese review of 6800H and 6600H is up:


Biggest takeaway for me is that the performance drop on battery power is not as bad as previous gen. Still not as good as Intel's, but a step in the right direction.

Oh and the 12 CU iGPU is at MX450 level, which confirms my predictions.


The iGPU is a little underwhelming; I expected a bit more than +71% over the Vega-based 5900HS with DDR4-3200. It's a big improvement, sure, it's just that there was a lot of hype around RDNA2. Give Vega 8 50% more units, shrink it to 6nm, add DDR5/LPDDR5 support, and it can't be much worse than this.
 

IntelUser2000

Elite Member
Oct 14, 2003
Give Vega 8 50% more units, shrink it to 6nm, add DDR5/LPDDR5 support, and it can't be much worse than this.

That assumes Vega scales linearly with CUs, and we know from testing the lower-CU parts that it doesn't.

RDNA2 is about perf/watt.

Still, 71% is 71%. It's 40% faster than the Iris Xe G7, which was the previous fastest, so very respectable, and Intel has no answer until Meteor Lake.
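
To put a number on that non-linearity, here's a toy model; the exponent is an illustrative assumption, not a value fitted to any test:

Code:
# Toy model of sublinear CU scaling: perf ∝ CUs ** alpha with alpha < 1.
# alpha = 0.7 is an illustrative assumption, not derived from any review.
def relative_perf(cus, base_cus=8, alpha=0.7):
    return (cus / base_cus) ** alpha

for cus in (8, 11, 12):
    print(f"{cus} CUs -> {relative_perf(cus):.2f}x")
# 8 -> 1.00x, 11 -> 1.25x, 12 -> 1.33x, well short of a linear 1.50x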
 

insertcarehere

Senior member
Jan 17, 2013
That assumes Vega scales linearly with CUs, and we know from testing the lower-CU parts that it doesn't.

RDNA2 is about perf/watt.

Still, 71% is 71%. It's 40% faster than the Iris Xe G7, which was the previous fastest, so very respectable, and Intel has no answer until Meteor Lake.

From the perspective of the laptop classes that will actually use iGPUs, the real fly in the ointment here is that it takes rather high power limits for the 12 CUs of RDNA2 graphics to show their supremacy. Many of the 13-14" thin-and-lights that could benefit from great iGPU performance may struggle with continuously cooling 40-50W out of a single chip.
 

rainy

Senior member
Jul 17, 2013
The iGPU is a little underwhelming; I expected a bit more than +71% over the Vega-based 5900HS with DDR4-3200. It's a big improvement, sure, it's just that there was a lot of hype around RDNA2. Give Vega 8 50% more units, shrink it to 6nm, add DDR5/LPDDR5 support, and it can't be much worse than this.

Here is thorough testing of the Radeon 680M on the Polish website PurePC: https://www.purepc.pl/amd-radeon-680m-rdna-2-test-wydajnosci-apu-rembrandt

Btw, the advantage over Vega 8 is about 100 percent, sometimes even more.
 

DisEnchantment

Golden Member
Mar 3, 2017
That assumes Vega scales linearly with CUs, and we know from testing the lower-CU parts that it doesn't.

RDNA2 is about perf/watt.

Still, 71% is 71%. It's 40% faster than the Iris Xe G7, which was the previous fastest, so very respectable, and Intel has no answer until Meteor Lake.
Vega is worse than RDNA2 at using the available BW. Vega won't scale well from 8 to 12 CUs (hence AMD cut the CUs in Vega 11).
But it's still doubtful whether the 12 CUs of RDNA2 are getting the BW they need; once STX brings SLC, RDNA2 (or its successor) will be able to stretch its legs.

The bean counters at AMD seem to have calculated every mm² of Si, otherwise 8MB of IF cache would really help here, though power would have been a constraint too.

Which leaves some sliver of probability of a refresh on N5 with some IF cache and even higher clocks.
 

uzzi38

Platinum Member
Oct 16, 2019
The iGPU is a little underwhelming; I expected a bit more than +71% over the Vega-based 5900HS with DDR4-3200. It's a big improvement, sure, it's just that there was a lot of hype around RDNA2. Give Vega 8 50% more units, shrink it to 6nm, add DDR5/LPDDR5 support, and it can't be much worse than this.

1. No it wouldn't. Too few ROPs.

2. Add 50% more units and the GPU power alone would approach 60W under load, without even including CPU or SoC power consumption.

3. You're heavily overestimating the performance boost from memory. You have been from the very beginning. You just refuse to listen.

 

IntelUser2000

Elite Member
Oct 14, 2003
From the perspective of the laptop classes that will actually use iGPUs, the real fly in the ointment here is that it takes rather high power limits for the 12 CUs of RDNA2 graphics to show their supremacy. Many of the 13-14" thin-and-lights that could benefit from great iGPU performance may struggle with continuously cooling 40-50W out of a single chip.

I was gonna say that's an H chip so 45W is normal, but.

At 25W, compared to the 1165G7, it's only 20-30% faster. And that's on LPDDR5-6400, not that it matters.

On the plus side memory won't matter as much.
 

moinmoin

Diamond Member
Jun 1, 2017
From the perspective of the laptop classes that will actually use iGPUs, the real fly in the ointment here is that it takes rather high power limits for the 12 CUs of RDNA2 graphics to show their supremacy. Many of the 13-14" thin-and-lights that could benefit from great iGPU performance may struggle with continuously cooling 40-50W out of a single chip.
The big question is whether an iGPU is better than a dGPU within the same power limit for the whole system. The comparison between the 6600H and 6800H does make it seem that the latter is more suited to 42W and above, while the former does surprisingly well at 25W already (sure, half the CUs helps) while not improving much above that:

[chart: 6600H vs 6800H iGPU performance across power limits]


But those are H-series chips. The U-series ones should be better suited to lower TDP (<30W) laptops.

By the way, they also showed DDR4-3200 vs LPDDR4X-4266 on Cezanne too.

Less than 10% faster on average.
I'd think the advantage of using LPDDR memory is not faster speed per se but rather fewer joules per bit overall. Though I'm not aware of recent measurements of that. Anybody?
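
If measurements do turn up, energy per bit is just interface power divided by bit rate. A minimal sketch; the wattage and bandwidth figures below are made up for illustration, not measured:

Code:
# Energy per transferred bit: interface power / bit rate.
def energy_per_bit_pj(power_w, bandwidth_gb_s):
    bits_per_s = bandwidth_gb_s * 8e9   # GB/s -> bit/s
    return power_w / bits_per_s * 1e12  # J/bit -> pJ/bit

print(energy_per_bit_pj(5.0, 76.8))   # hypothetical DDR5-4800:   ~8.1 pJ/bit
print(energy_per_bit_pj(4.0, 102.4))  # hypothetical LPDDR5-6400: ~4.9 pJ/bit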
 


IntelUser2000

Elite Member
Oct 14, 2003
Here is thorough testing of the Radeon 680M on the Polish website PurePC: https://www.purepc.pl/amd-radeon-680m-rdna-2-test-wydajnosci-apu-rembrandt

Btw, the advantage over Vega 8 is about 100 percent, sometimes even more.

The advantage over the 5800U's Vega 8 is 100%. But if the Chinese test is accurate, the 680M iGPU gains immensely from a higher TDP. So at equal TDPs it's going to end up in the 60-70% range.

The Chinese site shows the advantage over Iris Xe G7 at the same 25W is 20-30%.

Looks like Intel won't be very far behind in iGPUs. Interesting.
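
Rough math behind that 60-70% figure, assuming the 680M loses 15-20% of its performance when dropped to the same TDP (my assumption, inferred from the review's TDP scaling, not a measured number):

Code:
# How a 2.0x lead at an unequal TDP deflates at an equal TDP.
lead_unequal_tdp = 2.00             # 680M (high TDP) vs 5800U Vega 8
for perf_retained in (0.80, 0.85):  # assumed cost of the lower TDP
    lead = lead_unequal_tdp * perf_retained
    print(f"retains {perf_retained:.0%} -> +{lead - 1:.0%} over Vega 8")
# retains 80% -> +60%, retains 85% -> +70%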

I'd think the advantage of using LPDDR memory is not faster speed per se but rather fewer joules per bit overall. Though I'm not aware of recent measurements of that. Anybody?

There's going to be a bit of an advantage with the U, if we assume that AMD is using different chips. But mostly it's about leakage, and the U chips actually lose when it comes to clock frequency scaling versus power.

LPDDR's benefits are mostly about leakage; the active power reduction is not that large. It's pretty much there to enable connected standby. Since DRAM has to stay active far more than the CPU, active power is what matters most of the time.
 

moinmoin

Diamond Member
Jun 1, 2017
There's going to be a bit of an advantage with the U, if we assume that AMD is using different chips. But mostly it's about leakage, and the U chips actually lose when it comes to clock frequency scaling versus power.
Well yes, they use different binning. But that's the point: the H-series is officially binned for 45W whereas the U-series is binned for 15-28W. So if you want to run your chip in the latter range (or buy a laptop that runs it there), you wouldn't buy the former expecting it to perform equally.
 

IntelUser2000

Elite Member
Oct 14, 2003
Well yes, they use different binning. But that's the point: the H-series is officially binned for 45W whereas the U-series is binned for 15-28W. So if you want to run your chip in the latter range (or buy a laptop that runs it there), you wouldn't buy the former expecting it to perform equally.

I am saying the differences shown by that review will not change greatly with the U. So the 20-30% over Iris Xe and 60-70% over Vega 8 are going to be pretty accurate.
 

insertcarehere

Senior member
Jan 17, 2013
639
607
136
The big question is whether an iGPU is better than a dGPU within the same power limit for the whole system. The comparison between the 6600H and 6800H does make it seem that the latter is more suited to 42W and above, while the former does surprisingly well at 25W already (sure, half the CUs helps) while not improving much above that.

My assumption as a non-engineer is that it'd be easier to do the latter simply because cooling two dies instead of one means the heat density is lower, but we'll see.
 

IntelUser2000

Elite Member
Oct 14, 2003
My assumption as a non-engineer is that it'd be easier to do the latter simply because cooling two dies instead of one means the heat density is lower, but we'll see.

Too complex to say without comparing identical architectures, meaning a 12 CU iGPU versus a 12 CU dGPU, and you'd need similar memory bandwidth, etc.

So it looks like the RX 680M scales performance with higher TDPs, something Vega 8 couldn't do, or at least didn't benefit much from.
 

uzzi38

Platinum Member
Oct 16, 2019
Hold up a second, the efficiency numbers make no sense whatsoever. How is the 6600H within 5% of the 6800H's performance consistently across all tests at 25W? The only way that's possible is if even the 6600H is running the GPU at the voltage floor, which is frankly impossible.

The voltage floor on RDNA2 is around 1300MHz on desktop; the Rembrandt iGPUs shouldn't be anywhere close to that. At worst the 6600H should be running around 1.7GHz at 25W, albeit that's assuming performance scales linearly with clocks, using the 25W and 42/54W values as a basis.
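
Spelled out, the estimate is just the reference clock scaled by the measured performance ratio. Both inputs below are placeholders, not figures from the review:

Code:
# If perf scales linearly with clock, the implied clock at 25W is the
# reference clock times the perf ratio between the two power limits.
def implied_clock_mhz(ref_clock_mhz, perf_at_25w, perf_at_ref):
    return ref_clock_mhz * (perf_at_25w / perf_at_ref)

# hypothetical: 1900MHz at 54W, with 25W delivering 90% of 54W performance
print(implied_clock_mhz(1900, perf_at_25w=90, perf_at_ref=100))  # 1710.0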
 

rainy

Senior member
Jul 17, 2013
The advantage over the 5800U's Vega 8 is 100%. But if the Chinese test is accurate, the 680M iGPU gains immensely from a higher TDP. So at equal TDPs it's going to end up in the 60-70% range.

Looks like we need to wait for 6800U reviews to see the real difference between the 680M and Vega 8 at a similar TDP.
 

IntelUser2000

Elite Member
Oct 14, 2003
Hold up a second, the efficiency numbers make no sense whatsoever. How is the 6600H within 5% of the 6800H's performance consistently across all tests at 25W? The only way that's possible is if even the 6600H is running the GPU at the voltage floor, which is frankly impossible.

Is it 5% on average? I'm seeing 10-15% in the non-esports titles. The esports titles are CPU-bound.

At 25W it must be power-bound; that's why there are much smaller differences between the 6600H and the 6800H. I saw the same with Haswell iGPUs: the 15W 40 EU version was only 5-10% faster than the 15W 20 EU version, and the 28W 40 EU was an additional 10% faster. Really it was memory-bound as well, but at 15W power-bound too.

Looks like we need to wait for 6800U reviews to see the real difference between the 680M and Vega 8 at a similar TDP.

1.7x was AMD's claim. The 2x number was with the 6800U at 28W and the 5800U at 15W.
 

mikk

Diamond Member
May 15, 2012
That assumes Vega scales linearly with CUs, and we know from testing the lower-CU parts that it doesn't.

RDNA2 is about perf/watt.

Still, 71% is 71%. It's 40% faster than the Iris Xe G7, which was the previous fastest, so very respectable, and Intel has no answer until Meteor Lake.


71% against the DDR4 Vega 8, to be exact. I think it shows that Vega (despite its at-times bad reputation) was fine for the lower-CU iGPU variants, and that RDNA2 is not a game changer. Intel has no answer (which I think is not a big problem because of the poor SKU adoption of Rembrandt-U), but against a 192 EU Xe HPG based GPU, AMD needs another big improvement next year to stay in touch for sure. Intel with a TSMC 3nm tile has a big advantage, as AMD is likely to use 5nm in 2023. Xe HPG should also solve the mediocre clock speeds of Xe LP (plus the mediocre perf/watt; DG2 on TSMC 6nm is reportedly 50% more efficient). Intel expects to have graphics leadership with Meteor Lake and Arrow Lake.