AMD Raven Ridge 'Zen APU' Thread


PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Is there a cheaper HBM variant to reduce costs? Yes.

Availability? Cost?

Is there a market for such a premium product given its power benefit over a dGPU competitor? I suggest yes.
Can a new niche be created of a lighter, slimmer product of equal performance? Again, I suggest yes.

Depends on price/performance.

Can we make 1 die to span both worlds? Ryzen vs Epyc points to a yes.

Neither of these use HBM. We have yet to see a dual HBM/DDR capable product.

Is the additional cost excessive? Ask AMD & customers, but my guess is no.

My guess is that it still doesn't make financial sense and we won't see it anytime soon.


Why do some think the possible market segments are locked for eternity?

Who is talking about eternity? Most of us are just saying not soon. Not this year, or 2018 IMO.

We can revisit the HBM situation late next year...
 
  • Like
Reactions: Thunder 57

maddie

Diamond Member
Jul 18, 2010
4,740
4,674
136
Just because it would make a good product for some people doesn't mean that it's worth the cost of development to make it. There's no denying there's a market for something like this, but is it big enough to be profitable to produce at this time? I doubt it.
I don't know myself, as the specifics are critical, but to dismiss it outright smacks of shoddy reasoning.
 

maddie

Diamond Member
Jul 18, 2010
4,740
4,674
136
Availability? Cost?

Depends on price/performance.

Neither of these use HBM. We have yet to see a dual HBM/DDR capable product.

My guess is that it still doesn't make financial sense and we won't see it anytime soon.

Who is talking about eternity? Most of us are just saying not soon. Not this year, or 2018 IMO.

We can revisit the HBM situation late next year...
Strangely enough I agree with some of your points.
#1. Price/performance. Agreed, that is critical. Performance in this case includes, for example, laptop size, battery run-time, etc., plus other non-CPU/GPU factors.

#2. You're missing my analogy completely. I meant that AMD incorporated capabilities that consume die space and go unused in some markets in order to consolidate on one die for all of them, simplifying production, inventory, etc. It has nothing to do with HBM or DDR specifically.

#3. An opinion, not a fact. Fair enough.

#4. Unless you're privy to private industry material, see #3.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
#4. Unless you're privy to private industry material, see #3.

Again, this is based on currently available info on HBM pricing, posted previously in this thread. IMO that would completely wreck Price/Performance at this time.

I'm still missing your point on #2. Needing separate dies for HBM/DDR is a serious technical issue. Or is it both at the same time? Some seem to be assuming it will be a small HBM pool/cache + DDR. That is a lot of data lines. How does this fit into an AM4 motherboard?

If you are staying with the current CU count, why not just go with quad-channel DDR? It would probably provide the necessary bandwidth at a much lower cost and pin count.
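
For rough context on that bandwidth claim, here is a back-of-the-envelope sketch of theoretical peak bandwidth (real sustained figures will be lower, and the speed grades are just illustrative):

```python
# Theoretical peak memory bandwidth in GB/s (speed grades are illustrative)
def ddr4_bw(mt_per_s, channels, bus_bits=64):
    # each DDR4 channel is 64 bits wide: MT/s * bytes per transfer * channels
    return mt_per_s * (bus_bits / 8) * channels / 1000

def hbm2_bw(gbps_per_pin, stacks, bus_bits=1024):
    # one HBM2 stack exposes a 1024-bit interface
    return gbps_per_pin * (bus_bits / 8) * stacks

print(f"Dual-channel DDR4-2400:        {ddr4_bw(2400, 2):.1f} GB/s")   # ~38.4
print(f"Quad-channel DDR4-2400:        {ddr4_bw(2400, 4):.1f} GB/s")   # ~76.8
print(f"One HBM2 stack @ 1.6 Gbps/pin: {hbm2_bw(1.6, 1):.1f} GB/s")    # ~204.8
```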
 

Thunder 57

Platinum Member
Aug 19, 2007
2,675
3,801
136
I don't know myself, as the specifics are critical, but to dismiss it outright smacks of shoddy reasoning.

I'm not dismissing the idea that there's a market for it, just questioning the size of that market. You are guessing that the market is large enough to justify producing such a design now. No doubt there is a market for such an APU; I'm suggesting that it is not worth it for AMD to develop one at this time. The development cost would be high, and the final product price would also be too high for most. It's an awesome product for the future, no doubt. Right now, though, I think AMD just wants to get APUs out the door that compete with (and beat) their Intel counterparts. That's the logical first step.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Needing separate dies for HBM/DDR is a serious technical issue. Or is it both at the same time? Some seem to be assuming it will be a small HBM pool/cache + DDR. That is a lot of data lines. How does this fit into an AM4 motherboard?

I can't see them using separate dies.

Also can't motherboard design compensate for more data lines?
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I don't even think that was a rumor. It was more like wishful thinking.

If you are going to build an HBM based APU, it's going to be an expensive big die (to justify HBM costs) niche product.

AMD likes to get a LOT of use out of a tapeout, so they probably can't justify the cost of taping out such a product.

Even if they taped it out, who would buy it?

HPC and workstations... that is, if it has 1/2-rate DP floating point (like Bristol Ridge).

P.S. Autodesk CFD uses a double-precision solver, but I don't know if it uses the GPU for this. If it doesn't, maybe having more GPU hardware with 1/2-rate DP FP would give Autodesk an incentive to make the solver also work on the GPU?

Here is some info from Feb 2017 indicating that Autodesk CFD High Performance Computing only uses the CPU cores for the double-precision solver:

https://knowledge.autodesk.com/supp...A689E9EA-8EB5-4FBB-8E24-3BA54C740CB9-htm.html
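
To put a rough number on what half-rate DP would mean, here is a back-of-the-envelope sketch assuming a hypothetical 11-CU Vega iGPU at ~1.1 GHz (GCN has 64 shaders per CU, and one FMA counts as 2 FLOPs per clock; none of these figures are confirmed specs):

```python
# Back-of-the-envelope peak throughput for a hypothetical 11-CU Vega iGPU
cus = 11               # assumed fully enabled Raven Ridge CU count (rumored)
shaders_per_cu = 64    # GCN shaders per CU
clock_hz = 1.1e9       # assumed clock, not a confirmed spec
flops_per_clock = 2    # one FMA = 2 FLOPs

sp_tflops = cus * shaders_per_cu * flops_per_clock * clock_hz / 1e12
print(f"Peak FP32:          {sp_tflops:.2f} TFLOPS")       # ~1.55
print(f"FP64 at 1/2 rate:   {sp_tflops / 2:.2f} TFLOPS")   # ~0.77
print(f"FP64 at 1/16 rate:  {sp_tflops / 16:.3f} TFLOPS")  # ~0.097
```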
 
Last edited:

cbn

Lifer
Mar 27, 2009
12,968
221
106
So with that double-precision solver being CPU-only (at the moment), we just need some tech and innovation to shake things up.

P.S. I think a FirePro Raven Ridge APU (with HBM2) would be particularly nice for a laptop.
 
Last edited:

maddie

Diamond Member
Jul 18, 2010
4,740
4,674
136
Again, this is based on currently available info on HBM pricing, posted previously in this thread. IMO that would completely wreck Price/Performance at this time.

I'm still missing your point on #2. Needing separate dies for HBM/DDR is a serious technical issue. Or is it both at the same time? Some seem to be assuming it will be a small HBM pool/cache + DDR. That is a lot of data lines. How does this fit into an AM4 motherboard?

If you are staying with the current CU count, why not just go with quad-channel DDR? It would probably provide the necessary bandwidth at a much lower cost and pin count.
The motherboard only hosts the DDR4 data lines. The interposer will have the HBM module.

Organic motherboards cannot support the pin density of HBM modules in any case so that was never possible.

Yes. I always meant HBM + DDR4 and I was assuming everyone else thought similarly. By small, I would assume 2 GB HBM. HBCC points the way.

Why are you approaching this so disjointedly, one assumed roadblock at a time? Surely you would have seen that the motherboard would not increase in complexity, yet it still becomes an issue.

Remember, this latest round started with me posting the rumor of Vega 28 & Vega 32 having HBM2 memory, which translates into lower-cost HBM due to its market segment as a replacement for Polaris 10. Current pricing of HBM2 therefore cannot be taken as gospel. In other words, if the rumor is true, then HBM modules will soon be quite a bit cheaper than at present.


edit:
A 4-core RR can use dual-channel DDR4 for the CPU easily, but the GPU might saturate even quad channel, assuming memory clocks are low to medium. Also, for a low-power APU, the power savings of HBM vs. DDR4 become significant.
 
Last edited:

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Remember, this latest round started with me posting the rumor of Vega 28 & Vega 32 having HBM2 memory, which translates into lower-cost HBM due to its market segment as a replacement for Polaris 10. Current pricing of HBM2 therefore cannot be taken as gospel. In other words, if the rumor is true, then HBM modules will soon be quite a bit cheaper than at present.

So we should ignore the best information we have on the cost of real shipping products (HBM2) because of some rumor of a product that may or may not use HBM?

Let's check back late next year. I am quite sure there won't be an HBM RR by then. Do you think there will be?
 
Last edited:

cbn

Lifer
Mar 27, 2009
12,968
221
106
@PeterScott,

That quote you just used actually came from post #1259 (author: maddie, not cbn)
 

maddie

Diamond Member
Jul 18, 2010
4,740
4,674
136
So we should ignore the best information we have on the cost of real shipping products (HBM2) because of some rumor of a product that may or may not use HBM?

Let's check back late next year. I am quite sure there won't be an HBM RR by then. Do you think there will be?
What is the cost of a single 2 GB HBM2 module and a roughly 350 mm² interposer? Using the values given for Vega 56 some posts ago: $175 for two 5-die stacks with a roughly 750 mm² interposer.

A 3-die stack vs. $75 for a 5-die stack: say $40, since assembly failure rates drop off faster than the number of stacked dies does.
Half of the $25 assembly cost? Say $13.

Using your present costs, we get roughly $53 additional for a 2 GB HBM2-equipped RR on a silicon interposer, ready to be soldered to a motherboard. Yes sir, really expensive for a premium product, and that's with your prices, not the possibly reduced ones.
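
Spelling that arithmetic out (every input is a rough forum estimate from above, not confirmed pricing, and the smaller interposer is assumed to be absorbed into these figures):

```python
# Rough incremental cost of one 2 GB HBM2 stack on a smaller interposer,
# scaled down from the ~$175 Vega 56 estimate ($75 per 5-die stack, ~$25 assembly)
small_stack = 40.0      # guess for a 3-die (2 GB) stack; better than a linear 3/5
                        # scaling because assembly failure rates drop faster
                        # than the stacked die count
small_assembly = 13.0   # roughly half of the $25 Vega 56 assembly cost

print(f"Estimated adder for a 2 GB HBM2 RR: ${small_stack + small_assembly:.0f}")  # ~$53
```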

The interesting thing is that the motherboard should never "see" the HBM2 module, so a manufacturer can use the same motherboard for both HBM2 APUs and those without HBM2.
 
  • Like
Reactions: krumme

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
Since this is primarily about laptops, why are you only searching desktops? Also, the GT 1030 is quite new. Look at how many laptops have a 940MX or GTX 1050.

No one anywhere is arguing that this won't beat Intel iGPUs, but that is a pretty weak battle.

Since you replied about 3800 MHz memory, it's not about laptops but desktops, since there is no 3800 MHz SO-DIMM memory available.
But even if there were such a SO-DIMM at that speed, the fact that we are talking about the GT 1030 (30 W TDP) and not some MX150 version also steers the discussion toward the desktop.

If you want to talk about laptops, there is no way a 15-30 W TDP APU, even with HBM2, can reach the gaming performance of a 15-30 W TDP CPU + 25 W TDP dGPU, simply because of the TDP deficit.
The RR target in laptops is not the MX150 but Intel CPUs with iGPUs. It will close the CPU performance gap considerably and at the same time increase iGPU performance substantially over the competition at the same TDP.
 

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
AMD still has their work cut out for them with respect to OEMs. They have to figure out how to get their product WITHOUT a dGPU into those $300-$500 laptops with proper dual-channel memory configurations. And preferably no 5400 rpm spinners.

Yeah. That is the real issue.

A second problem in the past was that the best APUs, the ones that actually had a better iGPU than Intel's (the cut-down ones weren't really that much better), shipped in laptops over $800 and hence competed with Intel + NVIDIA dGPU solutions.
 

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
If you are staying with the current CU count, why not just go with quad-channel DDR? It would probably provide the necessary bandwidth at a much lower cost and pin count.

Good point. Instead of complicated HBM, quad channel would theoretically work. But then we would have the same problem as before: most laptops would ship with only dual channel and hence be bandwidth-starved. Plus, it takes a lot of space, which is a problem in a laptop.

I could also imagine that a very small HBM cache, say 1 GB, would easily cut memory bandwidth demand on DDR by 50%. That would be enough. But then you need the interposer and the packaging anyway, so if you actually go with HBM, it makes no sense to use only a small amount. Therefore I still think Intel's eDRAM approach is pretty good. With a more powerful AMD GPU, it would still scale beyond fast DDR memory. The good thing about eDRAM is that it can also help the CPU.
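
As a toy model of that "small HBM cache" idea (both numbers are purely illustrative, not measurements):

```python
# Toy model: how much DDR4 traffic a small HBM cache could absorb
gpu_demand_gbs = 100.0   # assumed total GPU bandwidth demand, GB/s (illustrative)
hbm_hit_rate = 0.5       # assumed fraction of traffic served from the HBM cache

ddr_traffic = gpu_demand_gbs * (1 - hbm_hit_rate)
print(f"DDR4 traffic at a {hbm_hit_rate:.0%} HBM hit rate: {ddr_traffic:.0f} GB/s")  # 50
# A 50% hit rate halves the load on the DDR4 channels -- the sense in which
# ~1 GB of HBM could "reduce memory bandwidth by 50%".
```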
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Since you replied about 3800 MHz memory, it's not about laptops but desktops, since there is no 3800 MHz SO-DIMM memory available.
But even if there were such a SO-DIMM at that speed, the fact that we are talking about the GT 1030 (30 W TDP) and not some MX150 version also steers the discussion toward the desktop.

If you want to talk about laptops, there is no way a 15-30 W TDP APU, even with HBM2, can reach the gaming performance of a 15-30 W TDP CPU + 25 W TDP dGPU, simply because of the TDP deficit.
The RR target in laptops is not the MX150 but Intel CPUs with iGPUs. It will close the CPU performance gap considerably and at the same time increase iGPU performance substantially over the competition at the same TDP.

I replied to DDR4-3800 to say that it was a pointless argument, as this is primarily an OEM laptop chip. Secondarily, it's an OEM chip for all-in-one PCs. Only a distant tertiary use will be a handful of self-built enthusiast PCs trying to eke out better GPU performance by using bleeding-edge memory speed. Because how many enthusiasts are going to spend time and money trying to eke out second-class performance from a quad core and APU? Enthusiasts will mostly go for a 6-core R5 (or better) and a dGPU in their self-builds.

And for the nth time, yes, it should be a great low-end choice for laptops, giving AMD access to a market Intel has practically owned. I can't wait for them to release it. I have said at least three times now that I think RR is AMD's most important CPU. It's the bread and butter of the market. Too bad it wasn't ready for back-to-school sales this year.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
What is the cost of a single 2 GB HBM2 module and a roughly 350 mm² interposer? Using the values given for Vega 56 some posts ago: $175 for two 5-die stacks with a roughly 750 mm² interposer.

A 3-die stack vs. $75 for a 5-die stack: say $40, since assembly failure rates drop off faster than the number of stacked dies does.
Half of the $25 assembly cost? Say $13.

Using your present costs, we get roughly $53 additional for a 2 GB HBM2-equipped RR on a silicon interposer, ready to be soldered to a motherboard. Yes sir, really expensive for a premium product, and that's with your prices, not the possibly reduced ones.

The interesting thing is that the motherboard should never "see" the HBM2 module, so a manufacturer can use the same motherboard for both HBM2 APUs and those without HBM2.

Again, you are ignoring that your dream needs dual active memory controllers, both for DDR4 and HBM, with all the data connections active to support both. This pushes the number of connections to the small die up significantly. Can they even all fit?

I seriously doubt it; as in, I am 95%+ certain AMD didn't go through the complication of adding dual active memory controllers (HBM and DDR4) and all the data connections to the current RR die. So we won't see HBM in the next year.

How certain are you that they did, and that we will see HBM on RR in the next year?
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131
AMD-Matisse-Picasso.jpg


https://informaticacero.com/amd-zen-2-llegara-2019-nombre-matisse-se-apoyara-aun-socket-am4
 

.vodka

Golden Member
Dec 5, 2014
1,203
1,537
136
PuL0RIe.jpg


Bristol Ridge's GPU is GCN3 based (28nm), not Polaris (14nm). AMD hasn't ported Polaris to 28nm.

This would be fake based on that.

Besides, why would AMD not use Zen 2 cores for the 2019 APU? Zen 2 is made on 7nm LP; it would be suicide not to take advantage of that process for the APU in 2019, unless there's more going on with the APU apart from a direct reuse of a Zeppelin CCX with improvements on the 14nm+/12nm process...

That "tock, tock, tock" quote comes to mind. Hmm.
 

CatMerc

Golden Member
Jul 16, 2016
1,114
1,149
136
PuL0RIe.jpg


Bristol Ridge's GPU is GCN3 based (28nm), not Polaris (14nm). AMD hasn't ported Polaris to 28nm.

This would be fake based on that.

Besides, why would AMD not use Zen 2 cores for the 2019 APU? Zen 2 is made on 7nm LP; it would be suicide not to take advantage of that process for the APU in 2019, unless there's more going on with the APU apart from a direct reuse of a Zeppelin CCX with improvements on the 14nm+/12nm process...

That "tock, tock, tock" quote comes to mind. Hmm.
Both Polaris and GCN 3 are GFX 8.1 gen. Supposedly Bristol Ridge is grouped with Polaris in drivers. So this is still plausible.

Seems to indicate RR will lag the CPU-only parts in adopting new Zen cores. 2019 will see Zen 2, but the 2019 update of RR doesn't seem to get Zen 2, just a process refresh.

Also this slide doesn't look so good for the GPU performance gain over Bristol Ridge:

AMD-Ryzen-5-PRO-Mobile.jpg
Note that this is Ryzen 5, not 7. It's likely not fully enabled on the GPU front. We already have a general idea of the performance uplift on the graphics side, and it should be far higher than this.
 
  • Like
Reactions: .vodka

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
The problem with APUs is that you need to take GPUs into account with changes to the die.

So instead of AMD changing and redesigning the die every year, they are probably bundling core changes with GPU changes. Rather than doing two dies with Vega GPUs, AMD is going to hold off for Navi.

The other issue is company bandwidth. APUs are always going to be their lowest-margin products. Between Ryzen, TR, EPYC, and their GPU updates, I don't think they have enough resources to update APU dies every year, and staying on top of it isn't as important.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
The problem with APUs is that you need to take GPUs into account with changes to the die.

They said from the beginning that you get the CPU cores and the GPU cores first, then you get the Fusion versions.

It isn't a huge "loss" on the CPU side anyway. 5-10% difference isn't a life or death deal.

Also this slide doesn't look so good for the GPU performance gain over Bristol Ridge:

Bristol Ridge does too well in 3DMark11 and not so well in games. The gap is likely due to a few reasons: lack of bandwidth, the CPU holding it back, and sub-optimal use of resources. At this performance level, 3DMark11 is likely running at single- or low-double-digit frame rates and almost entirely GPU-limited. If it's more balanced, it would end up 40-50% above KBL iGPUs in games, meaning 20-30% over the best Bristol Ridge ones.
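
Running the implied numbers on that last sentence (treating the KBL iGPU as a 100% baseline and using only the ranges quoted above):

```python
# If RR lands 40-50% above a KBL iGPU in games and that is also 20-30% above
# the best Bristol Ridge, Bristol Ridge is implied to sit ~15-17% above KBL.
kbl = 100.0
for rr_uplift, br_gap in [(1.40, 1.20), (1.50, 1.30)]:
    rr = kbl * rr_uplift
    br = rr / br_gap
    print(f"RR {rr:.0f} vs implied Bristol Ridge {br:.0f} (KBL iGPU = 100)")
# -> RR 140 vs BR ~117, RR 150 vs BR ~115
```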
 
Last edited:

CatMerc

Golden Member
Jul 16, 2016
1,114
1,149
136
I expect a Firestrike score of around 3000 for low-power (15-35 W) R7 Raven Ridge (all 11 CUs).
For 65 W variants I expect 3500-ish.

For comparison, the RX 550 scores around 4200, the A12-9800 (65 W) does 2000-2700 depending on the source, and Intel's top iGPU offering (with eDRAM) currently does 1400-1800 depending on the source (probably power-profile related?).

Take note that the 2500U we're seeing in this slide is most likely not fully enabled on the GPU side. On Compubench the 2500U is listed as having Vega 8 graphics, and we already know from Vega 64 and 56 what that 8 means. It also says as much under CL_DEVICE_MAX_COMPUTE_UNITS.
A fully enabled Raven Ridge would have Vega 11 graphics.
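
That CL_DEVICE_MAX_COMPUTE_UNITS value can be read straight off the device; here is a minimal sketch using pyopencl (assuming the package and a working OpenCL driver are installed):

```python
import pyopencl as cl  # pip install pyopencl

# Print CL_DEVICE_MAX_COMPUTE_UNITS for every GPU device the runtime exposes
# (the same value Compubench reports, e.g. 8 CUs for "Vega 8")
for platform in cl.get_platforms():
    for dev in platform.get_devices(device_type=cl.device_type.GPU):
        print(f"{dev.name}: {dev.max_compute_units} compute units")
```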
 
Last edited:

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
PuL0RIe.jpg


Bristol Ridge's GPU is GCN3 based (28nm), not Polaris (14nm). AMD hasn't ported Polaris to 28nm.

This would be fake based on that.

Besides, why would AMD not use Zen 2 cores for the 2019 APU? Zen 2 is made on 7nm LP; it would be suicide not to take advantage of that process for the APU in 2019, unless there's more going on with the APU apart from a direct reuse of a Zeppelin CCX with improvements on the 14nm+/12nm process...

That "tock, tock, tock" quote comes to mind. Hmm.

APUs always lag the CPUs in incorporating a new core. Zen launched with the 8C/16T Summit Ridge in late Q1 2017. Vega launched in Q3 2017. Raven Ridge, a 4C/8T part incorporating Zen and Vega, will launch in Q1 2018. This is how AMD develops their CPUs, GPUs, and APUs. So even though Zen 2 and Navi will launch in 2019, the 7nm APU incorporating both Zen 2 and Navi will launch in Q1 2020. Moreover, APUs are high volume, and it takes time to ramp volume and improve yields for a mass-market mainstream consumer product. Raven Ridge will get a 12LP update in Q1 2019, after Pinnacle Ridge on 12LP arrives in Q3 2018.
 
Last edited: