Article: Looks like the Ryzen 4000 APUs will contain both Zen2 and Zen3 next year.


Kedas

Senior member
Dec 6, 2018
if this is true:

Beginning next year: a Zen2 APU, and then later in the year a Zen3 APU, both under the same Ryzen 4000 label.
This transition will end the mixing of series and micro-architecture. :)

Maybe also because they may add a GPU die in the 5000 series (so the same CPU micro-architecture, with or without a GPU, across the full TDP range from mobile to desktop).
 
  • Like
Reactions: amd6502

DarthKyrie

Golden Member
Jul 11, 2016
Sure, they can match it in raw compute performance. But without enough bandwidth it will be limited for compute tasks as well... depending on the task.

There is this new memory type out there called HBM; it's only been around for a few years, and AMD has talked about it extensively, so I can understand that you might not have heard of it. AMD has said they want to use it on more products, and costs have been coming down, so I wouldn't be surprised to start seeing it used.

I wouldn't put it past AMD to use HBM on an APU with an 8C/16T CPU chiplet, a beefy GPU chiplet and an I/O die all using TSMC's new packaging.
 

NostaSeronx

Diamond Member
Sep 18, 2011
GDDR6 RDRAM is very successful, cheaper, and definitely better than HBM2. I'm very glad AMD and Nvidia miraculously received RMBS shares to ensure us consumers get the better costing memory DRAMs.
 
  • Like
Reactions: amd6502

soresu

Diamond Member
Dec 19, 2014
to ensure us consumers get the better costing memory DRAMs
Cost is a relative thing; what doesn't come from the memory may be passed on to the AIB partner in terms of PCB complexity, which is almost certainly higher to accommodate multiple GDDRx chips rather than one central interposer.

It also simplifies HSF design to have it all closer together like that, though I have no idea what impact GDDRx has on the power electronics relative to HBM - I'm still waiting for them to move the power electronics onto the interposer; that would make HSF design extremely simple.

As for the shares you mentioned, there is nothing remotely miraculous about incentives - it's a basic business practice. I'm sure that Rambus has used far slimier methods to ensure that business remains in their favor.
 

maddie

Diamond Member
Jul 18, 2010
Cost is a relative thing; what doesn't come from the memory may be passed on to the AIB partner in terms of PCB complexity, which is almost certainly higher to accommodate multiple GDDRx chips rather than one central interposer.

It also simplifies HSF design to have it all closer together like that, though I have no idea what impact GDDRx has on the power electronics relative to HBM - I'm still waiting for them to move the power electronics onto the interposer; that would make HSF design extremely simple.

As for the shares you mentioned, there is nothing remotely miraculous about incentives - it's a basic business practice. I'm sure that Rambus has used far slimier methods to ensure that business remains in their favor.
Not only that, but if there is no radical increase in bandwidth for APUs soon, then we're stuck with marginal improvements going forward. Who knows, however; maybe we'll see GDDR6 chips attached to the MB soon.
 

soresu

Diamond Member
Dec 19, 2014
Not only that, but if there is no radical increase in bandwidth for APUs soon, then we're stuck with marginal improvements going forward. Who knows, however; maybe we'll see GDDR6 chips attached to the MB soon.
The great increases in maximum single-stack density and bandwidth since the initial HBM2 spec mean it could be very possible. Who knows - if they could address some of the latency issues with HBM, you could have a dual-stack APU with one stack each for the CPU and GPU, making for a very small APU board size, potentially for NUC-type systems.

That Intel-AMD hybrid Vega M package was woefully big IMHO; AMD should be able to do much better on their own.
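For scale, per-stack HBM2 peak bandwidth is just bus width times transfer rate. A quick sketch (nominal spec-level numbers; the specific per-stack rates are illustrative, not tied to any product):

```python
def hbm_stack_gbs(gt_per_s, bus_bits=1024):
    """Per-stack peak bandwidth in GB/s: transfer rate (GT/s) * bytes per transfer."""
    return gt_per_s * bus_bits / 8

base_hbm2 = hbm_stack_gbs(2.0)  # initial HBM2 spec rate -> 256.0 GB/s per stack
hbm2e     = hbm_stack_gbs(3.2)  # a faster HBM2E grade -> 409.6 GB/s per stack
# A dual-stack APU (one stack each for CPU and GPU) would double these figures.
```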
 

maddie

Diamond Member
Jul 18, 2010
The great increases in maximum single-stack density and bandwidth since the initial HBM2 spec mean it could be very possible. Who knows - if they could address some of the latency issues with HBM, you could have a dual-stack APU with one stack each for the CPU and GPU, making for a very small APU board size, potentially for NUC-type systems.

That Intel-AMD hybrid Vega M package was woefully big IMHO; AMD should be able to do much better on their own.
AFAIK, HBM isn't bad from a latency standpoint. It's basically a DRAM iteration. The only issue is that the frequency is much lower, but so are the access cycle counts, so in nanosecond terms it's not terrible.
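The cycles-vs-clock point can be made concrete with a toy calculation (the cycle counts and clocks below are illustrative assumptions, not vendor specs):

```python
def latency_ns(cycles, clock_mhz):
    """Convert an access latency in clock cycles to nanoseconds."""
    return cycles / (clock_mhz * 1e6) * 1e9

# A lower clock with proportionally fewer access cycles lands at a
# similar absolute latency. These numbers are hypothetical examples.
ddr4_ns = latency_ns(cycles=22, clock_mhz=1600)  # e.g. DDR4-3200 CL22 -> 13.75 ns
hbm_ns  = latency_ns(cycles=14, clock_mhz=1000)  # hypothetical HBM timing -> 14.0 ns
```

Despite the much lower clock, the nanosecond figures end up in the same ballpark, which is the post's point.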
 

soresu

Diamond Member
Dec 19, 2014
AFAIK, HBM isn't bad from a latency standpoint. It's basically a DRAM iteration. The only issue is that the frequency is much lower, but so are the access cycle counts, so in nanosecond terms it's not terrible.
To be honest, I think HBM3 has been strangely long in the making - to the point that the HBM2 standard has been revised several times in the meantime.

I wonder what changes could be worth such a long dev cycle.
 

moinmoin

Diamond Member
Jun 1, 2017
Not only that, but if there is no radical increase in bandwidth for APUs soon, then we're stuck with marginal improvements going forward. Who knows, however; maybe we'll see GDDR6 chips attached to the MB soon.
I know the upcoming doubling of bandwidth going from DDR4 to DDR5 is not really a "radical" increase (and far from endangering dGPUs), but I would still consider it more than a "marginal" improvement for APUs. ;)
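A quick back-of-the-envelope check on that doubling (dual-channel peak rates; the specific speed grades are illustrative examples, not what any particular APU will ship with):

```python
def peak_bandwidth_gbs(mt_per_s, bus_bits=64, channels=2):
    """Peak memory bandwidth in GB/s: transfers/s * bytes per transfer * channels."""
    return mt_per_s * 1e6 * (bus_bits / 8) * channels / 1e9

ddr4 = peak_bandwidth_gbs(3200)  # dual-channel DDR4-3200 -> 51.2 GB/s
ddr5 = peak_bandwidth_gbs(6400)  # dual-channel DDR5-6400 -> 102.4 GB/s, exactly 2x
```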
 
  • Like
Reactions: soresu and Olikan

NostaSeronx

Diamond Member
Sep 18, 2011
To be honest, I think HBM3 has been strangely long in the making - to the point that the HBM2 standard has been revised several times in the meantime.

I wonder what changes could be worth such a long dev cycle.
HBM3 restarts from scratch => 2 GHz w/ a 2048-bit bus, w/ many 32-bit pseudo-channels (64x 32-bit) or 16x 128-bit channels.

HBM2E utilizes DDR4 or DDR5 DRAM cells. (DDR5 will be used in >4 GHz HBM2E, which is supported on TSMC's N7.)
HBM3 will switch to LPDDR4 or LPDDR5 DRAM cells.
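If those figures hold, the wider-but-slower bus lands at the same peak bandwidth per stack; a sketch using the post's own numbers (which are speculation, not a published spec):

```python
def stack_gbs(gt_per_s, bus_bits):
    """Peak bandwidth per stack in GB/s: transfer rate (GT/s) * bytes per transfer."""
    return gt_per_s * bus_bits / 8

hbm2e = stack_gbs(4.0, 1024)  # 4 GHz on a 1024-bit bus -> 512.0 GB/s
hbm3  = stack_gbs(2.0, 2048)  # 2 GHz on a 2048-bit bus -> 512.0 GB/s, same peak
```

The doubled width buys back exactly what the halved clock gives up, so any advantage would have to come from power or channel granularity rather than raw bandwidth.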
 

Olikan

Platinum Member
Sep 23, 2011
I know the upcoming doubling of bandwidth going from DDR4 to DDR5 is not really a "radical" increase (and far from endangering dGPUs), but I would still consider it more than a "marginal" improvement for APUs. ;)
And Navi has almost double the bandwidth efficiency of Vega.
 

soresu

Diamond Member
Dec 19, 2014
I know the upcoming doubling of bandwidth going from DDR4 to DDR5 is not really a "radical" increase (and far from endangering dGPUs), but I would still consider it more than a "marginal" improvement for APUs. ;)
The change to DDR4 was decent for APUs, wasn't it?

I'm not sure if we have decent comparisons for Bristol Ridge and Carrizo; didn't Carrizo use DDR3?
 

NostaSeronx

Diamond Member
Sep 18, 2011
Power delta from standard DDRx cells is?
LPDDR4/LPDDR5 takes about 1/3rd the energy of DDR4/DDR5 for the same BW, with that measurement being done on 64-bit buses (64-bit DDR4@3.2 vs 64-bit LPDDR4@3.2).

Going from 1024-bit HBM2 w/ DDR4/DDR5 cells to a hypothetical 1024-bit HBM3 w/ LPDDR4/LPDDR5 cells would be close to 1/3rd the energy cost. With the 2048-bit bus it can be anywhere from flat to 2/3rd the energy cost at the same BW.

4 GHz HBM2E to 2 GHz HBM3, however, could be lower energy still, due to the larger bus and the lower clock-rate, less noisy PHY.
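The 1/3rd ratio can be sanity-checked with a toy energy model. The pJ/bit figures below are assumptions chosen purely to illustrate the claimed ratio, not measured values:

```python
def transfer_energy_mj(gigabytes, pj_per_bit):
    """Energy in millijoules to move a payload at a given interface cost in pJ/bit."""
    bits = gigabytes * 8e9
    return bits * pj_per_bit * 1e-12 * 1e3  # pJ -> J -> mJ

# Assumed interface energies (illustrative): DDR4 ~15 pJ/bit, LPDDR4 ~5 pJ/bit.
ddr4_mj   = transfer_energy_mj(1, pj_per_bit=15)  # 120.0 mJ per GB moved
lpddr4_mj = transfer_energy_mj(1, pj_per_bit=5)   # 40.0 mJ per GB, i.e. 1/3rd
```

Whatever the absolute pJ/bit numbers turn out to be, the ratio of the two is all that matters for the comparison in the post.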
 

NostaSeronx

Diamond Member
Sep 18, 2011
Isn't GF getting into HBM2e manufacturing on their existing nodes? That should bring HBM2 implementation costs down at least a hair.
Yep, even 22FDX supports it.
=> https://www.sifive.com/soc-ip/hbm2-hbm2e-ip-subsystem
Design of gf14nm High Bandwidth Memory IO to meet HBM Memory Specs at 2.4Gbps.
Design of gf22fdxsoi High Bandwidth Memory IO to meet HBM Memory Specs at 2.4Gbps.
// Process node supports – TSMC16/12nm, TSMC7nm, GF14/12nm, GF22FDx
Even supports the other HBM2 => LLHBM // Optional support for LLHBM ||> https://www.renesas.com/us/en/produ...y-dram/low-latency-high-bandwidth-memory.html
 

moinmoin

Diamond Member
Jun 1, 2017
The change to DDR4 was decent for APUs, wasn't it?

I'm not sure if we have decent comparisons for Bristol Ridge and Carrizo; didn't Carrizo use DDR3?
Carrizo supports DDR3 and GDDR5; the latter was never used, but it was modified into DDR4 support for Bristol Ridge. It looks like it never really used more bandwidth than DDR3 offered, though.

Hard to find decent like-for-like comparisons; the best I could find quickly is Notebookcheck: https://www.notebookcheck.net/Vega-11-vs-Radeon-R5-Bristol-Ridge_8470_7340.247598.0.html
The biggest issue with such comparisons is that it's never clear whether the CPU or the iGPU is the bottleneck keeping performance down. In any case, the Radeon R5 manages between 10-40% of Vega 11's performance. The result should be spiffy if the first DDR5 APU makes the same jump in performance.
 
  • Like
Reactions: amd6502

Shivansps

Diamond Member
Sep 11, 2013
A better way is to check the A8-9600 vs the A8-7600 or A8-7680; the Ryzen APUs changed way too much to consider it a DDR4-only improvement.

Don't get me wrong, the jump in performance will be large with DDR5 and Navi, but as I said, this is most likely still at least 2 years away, and getting to RX 570 performance will be difficult. And in 2 years the RX 570 will already be like an RX 550 today.

Btw, the RX 550 is kinda like Vega 8 with GDDR5.
 

yuri69

Senior member
Jul 16, 2013
AMD APUs have always targeted the low-end market, not premium. Do you remember that scrapped Kaveri with GDDR5?

There will be no exotic memory used unless they target premium (?) or some other higher-margin niche (think of an HPC or console APU).
 

NostaSeronx

Diamond Member
Sep 18, 2011
Do you remember that scrapped Kaveri with GDDR5?
Do you remember that the "Zen" APU was promised with 1 GB of HBM1 and more CUs back in 2016!?!

backinmyday.png


They could have skipped Bristol Ridge and given us a >$300 APU... instead of that HBM-less abomination for $170 in 2018.

Also, Kaveri was GDDR5M, not GDDR5. The bus was 32-bit each across DCTA0, DCTA1, DCTB1, DCTB0, or equivalently DCT0 (Channel A) and DCT3 (Channel B) at 64-bit each.
https://diit.cz/media-gallery/detail/13530/191862 <== GDDR5M interim between DDR3 and HBM, replaces the failed DDR4 spec. (No 3.2 GHz or 1.0V by 2013)
https://diit.cz/media-gallery/detail/13530/191863 <== SK Hynix 2012@JEDEC meet re-affirming their partnership with AMD for GDDR5M and HBM.

Kaveri's DDR3 PHY is larger than Trinity's because it is four 32-bit DRAM PHYs rather than just two 64-bit DRAM PHYs.
 

Topweasel

Diamond Member
Oct 19, 2000
AMD APUs have always targeted the low-end market, not premium. Do you remember that scrapped Kaveri with GDDR5?

There will be no exotic memory used unless they target premium (?) or some other higher-margin niche (think of an HPC or console APU).
Well, not quite. Intel has been selling laptop CPUs as performance CPUs for a decade, with the 9900 selling upwards of $500. AMD isn't really playing the game like that because they have a defined high-performance desktop lineup with the standard Ryzen fare. So on the desktop market you can't sell an APU for the same price as the high-performance desktop lineup unless CPU performance between the two is similar, which it will never be. There is a chance, if they move to an 8-core APU, that they can price those versions closer to the 3700X-3800X.

But their APUs, especially RR and up, they have been trying to sell as a premium laptop solution. The problem is that AMD adoption was at its lowest when Raven Ridge was an undisputed top-end laptop solution. Then Intel moved to Coffee Lake, started offering 6-core i7 laptop CPUs, and started offering 4-core solutions that weren't limited to one or two high-power i7 selections. This pushed RR and Picasso back to the discount bin pretty quickly, and it probably took a lot of work to get one into such a premium product as the Surface Laptop.

I mention this because Renoir done right (more cores) and with large OEM adoption now, AMD will be once again in position to offer their APU's as a premium product again. At least for the laptops. The question will become, how well will it sell there, and will AMD sell enough that they won't have to have a desktop version. RR's entry into the Desktop was a symptom of OEM demands for a desktop iGPU solution and lack of sales in laptops. More they sell in laptops, the less they offer on desktops. If for no other reason then the chip is worth waay more in the right laptops then it is as the low cost desktop CPU because standard Ryzen's performance will dictate a lower selling price.