Zen 2 APUs/"Renoir" discussion thread


eek2121

Platinum Member
Aug 2, 2005
2,904
3,906
136
With the setup AMD has right now, the IGP needs to be on the same die as the memory controller. So they could make a new IO die on TSMC 7 nm and use that. I agree that they are more likely to do monolithic.

I still think there's a chance they'll include a GDDR6 controller, although obviously that would only be on mobile.

Not necessarily. 7nm would work fine in place of the 2nd chiplet. If they add a cache for the graphics chip, it would help with the small latency penalty that exists. Chiplet-to-chiplet latency isn't terrible to begin with, and is much lower than PCIe latency.
 

majord

Senior member
Jul 26, 2015
433
523
136
There is nothing wrong with Vega. At lower clocks it sips power. The reason it used so much power on the desktop is that it was pushed outside its efficiency curve.

And a low-clocked Navi probably sips even less power, and is better when bandwidth-constrained.

I really don't think you can argue a case for Navi not being beneficial.
 

moinmoin

Diamond Member
Jun 1, 2017
4,934
7,620
136
I still think there's a chance they'll include a GDDR6 controller, although obviously that would only be on mobile.
LPDDR4X would be preferable. Though they likely want to use the same APU die on desktop again, so it will keep the DDR4 IMC.
 

DrMrLordX

Lifer
Apr 27, 2000
21,583
10,785
136
So they could make a new IO die on TSMC 7 nm and use that.

Someday, AMD will want to get away from GF altogether. I think that is when they will start looking at a TSMC node for their I/O dice. But not this generation.

I still think there's a chance they'll include a GDDR6 controller, although obviously that would only be on mobile.

GDDR6 in mobile? That would be eccentric. That's the route they're going with the consoles, but I don't think the power usage of GDDR6 would be favorable for low-power mobile devices.

LPDDR4X would be preferable.

Yeah, I was gonna say...
 

Hans de Vries

Senior member
May 2, 2008
321
1,018
136
Renoir seems to be confirmed with LPDDR4X at 4266 MT/s:
https://lists.freedesktop.org/archives/amd-gfx/2019-August/039227.html
contains: "LPDDR"
https://lists.freedesktop.org/archives/amd-gfx/2019-August/039229.html
contains: "dram_speed_mts = 4266.0"

Earlier leaks combined with the "flute" leak suggest 8 cores, 8MB L3, and Vega with 20 CUs.

Going from 2400 MT/s to 4266 MT/s is a significant increase to support the extra CUs.

www.chip-architect.com

moinmoin

Diamond Member
Jun 1, 2017
4,934
7,620
136
Furthermore, based on the Linux kernel driver patches, Phoronix reports that Renoir is apparently using an updated DCN 2.1 as its display engine. Raven Ridge and Picasso were DCN 1.0 and Navi 10 is DCN 2.0. Truly a mix and match of old and new IP blocks.
 

Gideon

Golden Member
Nov 27, 2007
1,608
3,573
136
Earlier leaks combined with the "flute" leak suggest 8 cores, 8MB L3, and Vega with 20 CUs.

Going from 2400 MT/s to 4266 MT/s is a significant increase to support the extra CUs.

Yeah, that seems like excellent news. Vega 11 was heavily bandwidth-limited. With 20 CUs and twice the bandwidth, it should finally be a decent 1080p (low-to-medium settings) card.
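For reference, a quick back-of-the-envelope in Python (assuming a 128-bit, i.e. dual-channel, interface for both generations; that width is my assumption, not a confirmed spec):

```python
# Peak bandwidth from transfer rate; assumes a 128-bit interface
# (dual-channel DDR4, or 8x16-bit LPDDR4X) for Raven Ridge and Renoir.
def bandwidth_gbs(mts, bus_bits=128):
    return mts * bus_bits / 8 / 1000   # MT/s x bytes per transfer -> GB/s

old = bandwidth_gbs(2400)   # Raven Ridge with DDR4-2400
new = bandwidth_gbs(4266)   # Renoir per the dram_speed_mts patch
print(f"DDR4-2400:    {old:.1f} GB/s")
print(f"LPDDR4X-4266: {new:.1f} GB/s ({new / old:.2f}x)")
```

So "twice the bandwidth" is really ~1.8x, but it's in the right neighborhood.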
 

Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
With LPDDR4X and 8MB of L3 cache, Renoir should have close to twice the available bandwidth to the iGPU compared to Raven Ridge.

It's interesting that Vega is more tolerant of latency than Navi; GCN's 4-clock cadence can do wonders for it... (probably NOT a "trade-off" effect, since Navi's compression should give better results)...
 

DrMrLordX

Lifer
Apr 27, 2000
21,583
10,785
136
Renoir seems to be confirmed with LPDDR4X at 4266 MT/s:
https://lists.freedesktop.org/archives/amd-gfx/2019-August/039227.html
contains: "LPDDR"
https://lists.freedesktop.org/archives/amd-gfx/2019-August/039229.html
contains: "dram_speed_mts = 4266.0"

Earlier leaks combined with the "flute" leak suggest 8 cores, 8MB L3, and Vega with 20 CUs.

Going from 2400 MT/s to 4266 MT/s is a significant increase to support the extra CUs.

Wow. Wish I had had RAM like that on my Kaveri. Overall, it looks to be shaping up as a capable product.
 

amd6502

Senior member
Apr 21, 2017
971
360
136
Renoir seems to be confirmed with LPDDR4X at 4266 MT/s:
https://lists.freedesktop.org/archives/amd-gfx/2019-August/039227.html
contains: "LPDDR"
https://lists.freedesktop.org/archives/amd-gfx/2019-August/039229.html
contains: "dram_speed_mts = 4266.0"

Earlier leaks combined with the "flute" leak suggest 8 cores, 8MB L3, and Vega with 20 CUs.

Going from 2400 MT/s to 4266 MT/s is a significant increase to support the extra CUs.

RR/Picasso actually have some spare CUs. My guess is that 11 CUs more than max out DDR4-2800. It would be interesting to see the 2400G/3400G tested for gains from DDR4-3200 versus the 2400G's officially supported limit of DDR4-2667.

~16 CUs at desktop frequencies would max out 4266 RAM. Suppose they target the desktop variant to handle DDR4-4500; then I think 18 CUs would be totally adequate. And those who want to OC the memory controller and mess around with exotic RAM like DDR4-4700 could benefit from a 19th CU. 20 CUs seems a bit of overkill, but maybe they get to run at a more efficient, lower frequency, which might be a big plus for gaming laptops.

This new high-tech memory is really a game changer for iGPU capabilities. I really didn't expect ~20 CUs. That's quite an amazing surprise.

However, such a big iGPU is going to take wattage. If these are 7nm, it would help over 12nm Vega; GPU power reduction could be as much as 50% from that. An RX 560-equivalent Vega at 12nm could easily take 65W, so at 7nm that's ~33W. This means iGPU gaming laptops are possible at 35W-45W.
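Sketching that scaling logic out (the 11 CU / DDR4-2800 saturation baseline and the 50% power cut are my guesses from above, not AMD figures):

```python
# Speculative scaling sketch; every input here is a guess from this
# thread, not an AMD spec.
baseline_cus = 11      # Raven Ridge/Picasso CU count
baseline_mts = 2800    # DDR4 speed assumed to saturate those 11 CUs

for mts in (4266, 4500, 4700):
    cus = baseline_cus * mts / baseline_mts
    print(f"{mts} MT/s feeds roughly {cus:.1f} CUs at similar clocks")

vega_12nm_w = 65       # guessed draw of an RX 560-class Vega at 12nm
print(f"Same GPU at 7nm: ~{vega_12nm_w * 0.5:.1f}W (assuming a 50% cut)")
```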

Going 7nm on the GPU would pretty much guarantee a monolithic APU, I think.

The cheap option would be a 12nm Vega on the I/O hub and reuse of the existing Matisse chiplet. Reusing the 8-core chiplets would be nice for those looking forward to a higher core count APU. However, 12nm also means a fully capable GPU would draw ~65W, and therefore 95W TDP APU parts (or somewhere in between if using throttling).

For the DIY market that would be fine, but they would lose some OEMs who would want to stick these in an SFF or all-in-one (and those wanting lightweight thermal solutions). Between that and the mobile market, this seems like a big deal.

So I guess 7nm monolithic seems likeliest.
 

rancherlee

Senior member
Jul 9, 2000
707
18
81
I could see 20 CUs getting enough bandwidth if they are clocked pretty low. Playing around with my 2400G, I find ~DDR4-3000 to be the point of diminishing returns; most games don't gain any more performance beyond that at the STOCK 1250MHz clock. 16 CUs would probably be ideal @ 4266, but maybe 20 CUs at a lower clock gives similar performance with less power? (Much like you could take an RX 580 and downclock/undervolt it and still beat an RX 570 while using less power.)
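To put rough numbers on the wide-and-slow idea (the clocks below are made up purely for illustration):

```python
# Why a wider GPU at lower clocks can match a narrower one at higher
# clocks: throughput is CUs x clock, but dynamic power scales roughly
# with f * V^2 and voltage rises with frequency. Numbers are invented.
def gflops(cus, mhz, shaders_per_cu=64, flops_per_clock=2):
    return cus * shaders_per_cu * flops_per_clock * mhz / 1000

print(f"16 CU @ 1400 MHz: {gflops(16, 1400):.0f} GFLOPS")
print(f"20 CU @ 1120 MHz: {gflops(20, 1120):.0f} GFLOPS")
# Same paper throughput, but the 20 CU config runs at a lower voltage,
# so it should draw meaningfully less power.
```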
 

NTMBK

Lifer
Nov 14, 2011
10,208
4,940
136
Big GPU which is good at compute, LPDDR4X, efficient CPU... This sounds like a great chip to put in a MacBook Pro.
 

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
Big GPU which is good at compute, LPDDR4X, efficient CPU... This sounds like a great chip to put in a MacBook Pro.

At this point I still believe Apple will stick with Intel until their own processors are ready.
 

NTMBK

Lifer
Nov 14, 2011
10,208
4,940
136
imho it's best to avoid Apple products altogether.

I'm not a fan of their current unreliable keyboards, poor port selection, and lack of repairability. But it would be a high-profile contract for AMD to win.
 

chrisjames61

Senior member
Dec 31, 2013
721
446
136
imho it's best to avoid Apple products altogether.


Being in printing, I love Macs. OS X is just so much better than Windows as to be laughable. That being said, at home I use AMD products and Linux because I like building computers. The big drawback is that there are no decently priced modular Macs that can actually be upgraded, not since the G5. Even the first-generation Mac Pros were expensive as all get-out. Sorry for going off topic.
 

amd6502

Senior member
Apr 21, 2017
971
360
136
Being in printing, I love Macs. OS X is just so much better than Windows as to be laughable. That being said, at home I use AMD products and Linux because I like building computers. The big drawback is that there are no decently priced modular Macs that can actually be upgraded, not since the G5. Even the first-generation Mac Pros were expensive as all get-out. Sorry for going off topic.

I guess the one thing they have going for them is that, because OS X is based on Unix, it is usually better than Windows. That, and like you say, a very nice collection of legacy software in certain niche areas, especially for artists. Well, the last part doesn't really apply to me. People who are not fully satisfied with the hardware support or other weaknesses of Linux can simply set up a dual-boot or VM arrangement using both Linux and Windows. I think that is hard for OS X to beat.
 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
Fully overclocked to near 1600MHz, the 2400G used to hit diminishing returns for memory bandwidth at around 3400 CL16. That was on 11 CUs. If they could run the iGPU at around 1350MHz or so with 20 CUs, and get the memory controller to support the same width of LPDDR4X at 4266 MT/s, I could see it falling comfortably between the RX 550 and RX 560 in capabilities. They apparently are also reusing the 7nm blocks from the Radeon VII, so the other minor improvements there are going to be included as well (this is indicated by the driver commits in Linux). I can see this product allowing AMD to remain competitive in the mobile space with much of Intel's offerings in the new generation currently being released.
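For scale, here's the paper FP32 throughput (the Renoir line uses the speculated 20 CU @ 1350MHz; the desktop cards use retail boost clocks):

```python
# Paper FP32 throughput: 64 shaders per CU, 2 FLOPs per clock (FMA).
def tflops(cus, mhz):
    return cus * 64 * 2 * mhz / 1e6

print(f"RX 550:   8 CU @ 1183MHz -> {tflops(8, 1183):.2f} TFLOPS")
print(f"RX 560:  16 CU @ 1275MHz -> {tflops(16, 1275):.2f} TFLOPS")
print(f"Renoir?: 20 CU @ 1350MHz -> {tflops(20, 1350):.2f} TFLOPS")
```

On paper the 20 CU part would actually clear the RX 560, which suggests memory bandwidth, not compute, is what would hold it between the two cards.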
 

amd6502

Senior member
Apr 21, 2017
971
360
136
Fully overclocked to near 1600Mhz, the 2400G used to hit diminishing returns for memory bandwidth at around 3400 CL16. That was on 11 CUs. If they could run the iGPU at around 1350Mhz or so with 20CUs, and get the memory controller to support the same width of LPDDR4X at 4266Mhz, I could see it falling comfortably between the RX550 and RX560 in capablities. They apparently also are reusing the 7nm blocks from Vega VII, so the other minor improvements there are going to be included as well (this is indicated by the driver commits in linux). I can see this product allowing AMD to remain competitive in the mobile space with much of Intel's offerings for the currently being released new generation.

The 3400G only raised the GPU frequency from 1250 to 1400 MHz, so that's quite an overclock you have. (Does one need a B- or X-series motherboard for an RR/Picasso iGPU overclock?)

The RX 560 has only 16 CUs (or fewer on some models) clocked around 1200 MHz, and draws around 70W at full load. So I think a 20-CU Vega should outperform it, even if they clock it low, like ~1GHz.

A 19W mobile APU with a 7nm 20CU@700MHz would outperform an RX 550, and I think such a low TDP might be entirely possible on 7nm.

For ULP 10W and 15W mobile they might use die harvests with most of the CUs disabled, something like Vega 6 or 8 clocked at peak perf/watt. Likely some models with CPU cores disabled as well. That would make a 10W TDP quite doable.

The sweet spot for such a large monolithic 7nm APU would be OEM all-in-ones at ~35W, where I could see this well outperforming the RX 560.
 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
For the 2200G/2400G series, it was fairly common to be able to overclock the iGPU to 1500MHz+. There was a "clock stability hole" between 1275 and 1500MHz where the iGPU was, for whatever reason, USUALLY unstable. Once you got it to 1500 though, it seemed to behave quite well. It did require good cooling to hit those numbers, and in TDP-restricted scenarios the extra power draw and heat from the iGPU would negatively impact the CPU core clocks.

The one BIG difference between the RX 550/560 and the APUs is memory bandwidth. The GPU cores are heavily bandwidth-starved on the APU, and even going to dual-channel (actually octa-channel, as LPDDR4X is arranged in 16-bit channels, right?) LPDDR4X at 4266 MT/s will still leave it quite starved for bandwidth. The 2400G, fully maxed out on memory speed and clock, still can't get to RX 550 levels of performance consistently. It beats the GT 1030 DDR4, but doesn't match the GT 1030 GDDR5 very often.
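Peak-bandwidth math makes that gap concrete (Renoir's 128-bit total width is assumed from the 8x16-bit channel layout; the dGPU figures are retail specs):

```python
# Peak memory bandwidth: assumed Renoir config vs the small Polaris cards.
def bandwidth_gbs(mts, bus_bits):
    return mts * bus_bits / 8 / 1000   # bytes per transfer x MT/s -> GB/s

print(f"Renoir LPDDR4X-4266, 128-bit:   {bandwidth_gbs(4266, 128):.0f} GB/s")
print(f"RX 550/560 GDDR5-7000, 128-bit: {bandwidth_gbs(7000, 128):.0f} GB/s")
```

And the iGPU has to share that ~68 GB/s with the CPU cores on top of it.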
 

JustMe21

Senior member
Sep 8, 2011
324
49
91
AMD will probably need to up the CUs, since Intel is adding variable rate shading and AI enhancements to their iGPUs. I would hope they also allow more than 2GB of shared video RAM.
 

DrMrLordX

Lifer
Apr 27, 2000
21,583
10,785
136
I am still curious as to why anyone needs AI enhancements in iGPUs, phone SoCs, and the like. What's the play here? Run massive AI learning clusters with these things?
 

moinmoin

Diamond Member
Jun 1, 2017
4,934
7,620
136
I am still curious as to why anyone needs AI enhancements in iGPUs, phone SoCs, and the like. What's the play here? Run massive AI learning clusters with these things?
Silly stuff like Face ID and Animoji.