What plans do you think AMD has for desktop APUs on future sockets?

cbn

Lifer
Mar 27, 2009
1.) I think eventually the mainstream socket will no longer support APUs*. (This would free up PCIe lanes, making the platform higher performance from a storage** standpoint.)

Example: If the current AM4 did not support APUs then the dCPU would be able to use all 32 PCIe lanes.

2.) I think eventually the Threadripper socket will support APUs. (This is because there is room under the heatspreader for a rather large iGPU.)

Example: The current Threadripper processor packages have 4 CPU dies, but only two of them are active (The other two dies are inactive and could be replaced with a large GPU die).

*All future mainstream APUs will be BGA only.

**And I/O.
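The lane arithmetic in point 1 can be sketched from the figures cited in this thread (32 lanes on the die, 24 exposed on AM4). A back-of-envelope only, not an official platform spec:

```python
# Rough AM4 lane budget, using figures cited in this thread:
# the Zen die has 32 PCIe lanes, but AM4 exposes only 24 of them.
soc_lanes = 32  # lanes on the die (all usable per die on EPYC/Threadripper)

am4_allocation = {
    "x16 graphics slot": 16,
    "x4 NVMe": 4,
    "x4 chipset link": 4,
}
usable = sum(am4_allocation.values())  # 24 lanes exposed on AM4
held_back = soc_lanes - usable         # 8 lanes not wired out by the socket
```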
 

eek2121

Platinum Member
Aug 2, 2005
AMD won't get rid of APUs; market demand is far too high for them to drop support. I actually have a hunch that AMD will release 7nm EPYC 2 and 7nm Threadripper this year. Both are AMD's highest-margin products, and both would help pay for lower yields, R&D, etc. I would be surprised if AMD released a 12nm Threadripper at all if they are using TSMC for EPYC. What can you expect for the future? Two more generations of AM4, then a move to DDR5/PCIe 4, which necessitates a new socket. Contrary to what others claim, there won't be 6- or 8-core CCXes for a while unless Intel forces AMD's hand. Instead AMD will focus on IPC, IMC, and frequency improvements.

The reasoning for the lack of 6-8 core CCXes comes down to memory latency and bandwidth. Squeezing more cores into a CCX without a corresponding jump to DDR5 would hurt performance.
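The bandwidth ceiling being argued here is easy to put numbers on. A quick back-of-envelope for peak theoretical DDR4 bandwidth (standard 64-bit channels; real-world throughput is lower):

```python
# Peak theoretical DDR bandwidth: transfers/s * bytes per transfer * channels.
def ddr_bandwidth_gbs(mt_per_s, channels, bus_bits=64):
    """Peak bandwidth in GB/s for `channels` DDR channels of `bus_bits` width."""
    return mt_per_s * 1e6 * (bus_bits / 8) * channels / 1e9

dual_ddr4_2933 = ddr_bandwidth_gbs(2933, channels=2)  # ~46.9 GB/s
dual_ddr4_3200 = ddr_bandwidth_gbs(3200, channels=2)  # ~51.2 GB/s
```

Doubling the core count per CCX while this total stays fixed is the squeeze being described.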
 

amd6502

Senior member
Apr 21, 2017
I think 12nm Epyc is for this year. And I think for 7nm, server might follow consumer as well. This allows for greater testing.

AM4 is just getting warmed up. I think now that it's mature they have the luxury of considering other sockets.

I think TR is for those wanting lots of lanes and extra bandwidth. That's not really the role of AM4.

Yet with 7nm later in 2019, if the core count goes up to TR-type counts, a revised platform with the best of both TR (capacity) and AM4 (economy and graphics output) might emerge. Quad-channel would actually make good sense for a monster APU with lots of CUs and cores.
 

moinmoin

Diamond Member
Jun 1, 2017
Quite the hodgepodge of a thread already.
- AM4 only exposes 24 PCIe lanes because manufacturers prefer PGA sockets, which can't support that many connections and need to stay less complex. Nothing to do with APUs.
- Either way, APUs are already bottlenecked by current dual-channel DDR4 memory; that's why there won't be an APU with higher specs than the 2400G on AM4. This will only change with faster memory, possibly DDR5, on a new socket.
- Threadripper's TR4 socket derives from Epyc's SP3; there won't be an APU-supporting version of it. If there is, it will be a new socket, which would go against AMD's MO of steadily building out their ecosystems and consumer base.
- Epyc is known to be skipping 12nm.
 

cbn

Lifer
Mar 27, 2009
Remember the thread question is "future sockets".

I use the current sockets merely as a reference point.
 

dark zero

Platinum Member
Jun 2, 2015
I think 12nm Epyc is for this year. And I think for 7nm, server might follow consumer as well. This allows for greater testing.

AM4 is just getting warmed up. I think now that it's mature they have the luxury of considering other sockets.

I think TR is for those wanting lots of lanes and extra bandwidth. That's not really the role of AM4.

Yet with 7nm later in 2019, if the core count goes up to TR-type counts, a revised platform with the best of both TR (capacity) and AM4 (economy and graphics output) might emerge. Quad-channel would actually make good sense for a monster APU with lots of CUs and cores.
Unless... AMD plans to use the TR socket for APUs with HBM.
 

Glo.

Diamond Member
Apr 25, 2015
What does AMD plan for APUs?

2019 - Raven Ridge APUs will go down the food chain and become Athlons, because they cannot compete under the same branding with 7nm 6C/12T-8C/16T CPUs that will become the Ryzen 3/Ryzen 5 CPUs.
2020 - The replacement for APUs will be a 6-8 core CPU with a 1536-core GCN Navi GPU in a monolithic design with HBM2 on package, which will become the new Ryzen 3/5 offering.
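For scale, a 1536-shader GPU's peak FP32 throughput can be estimated from shader count and clock. The 1.4 GHz clock below is purely an assumed, illustrative value, since no such product is confirmed:

```python
# Peak FP32 throughput estimate: shaders * 2 ops per clock (FMA) * clock.
# The 1.4 GHz figure is an assumed, illustrative clock, not a product spec.
def peak_tflops(shaders, ghz, ops_per_clock=2):
    return shaders * ops_per_clock * ghz / 1000

hypothetical_1536sp = peak_tflops(1536, 1.4)  # ~4.3 TFLOPS FP32
```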
 

Insert_Nickname

Diamond Member
May 6, 2012
2019 - Raven Ridge APUs will go down the food chain and become Athlons, because they cannot compete under the same branding with 7nm 6C/12T-8C/16T CPUs that will become the Ryzen 3/Ryzen 5 CPUs.

I wouldn't mind. Those new 2C/4T RR Athlon 200GE parts already look interesting for HTPC use. 4K HDR (Netflix*) playback plus a very competent CPU for light use, all in a nice 35W package.

*I hope at least.
 

cbn

Lifer
Mar 27, 2009
2020 - The replacement for APUs will be a 6-8 core CPU with a 1536-core GCN Navi GPU in a monolithic design with HBM2 on package, which will become the new Ryzen 3/5 offering.

I think that is a very good idea for BGA mobile.

And then let the Athlon x8 and Athlon x6 parts (i.e., APUs without an iGPU) make it to whatever the new mainstream socket ends up being called?
 

moinmoin

Diamond Member
Jun 1, 2017
HBM on the one hand is a real wildcard with regard to the possibilities it allows. On the other hand, AMD has been plenty optimistic about using HBM since pioneering it with Fury, and it has never paid off for them so far. On the contrary, it has become more expensive, and thus riskier, due to the RAM price hikes.
 

cbn

Lifer
Mar 27, 2009
HBM on the one hand is a real wildcard with regard to the possibilities it allows. On the other hand, AMD has been plenty optimistic about using HBM since pioneering it with Fury, and it has never paid off for them so far. On the contrary, it has become more expensive due to the RAM price hikes.

There is a low-cost version that is coming out (or is supposed to come out):

[Image: low-cost HBM specification slide]


(Notice that ECC is removed, though. This is a negative for any workstation APU that would use FP64.)
 

dark zero

Platinum Member
Jun 2, 2015
HBM on the one hand is a real wildcard with regard to the possibilities it allows. On the other hand, AMD has been plenty optimistic about using HBM since pioneering it with Fury, and it has never paid off for them so far. On the contrary, it has become more expensive, and thus riskier, due to the RAM price hikes.
Indeed, but HBM has an advantage in flagship parts, where it is extremely useful. Even more so than GDDR6, which is targeting the mainstream.
 

Olikan

Platinum Member
Sep 23, 2011
Depending on latency, AMD could even use HBM for main memory... pretty sure OEMs would like that...

It's interesting that low-cost HBM has a high pin speed, which could improve latency... (whatever it is)
 

Thunder 57

Platinum Member
Aug 19, 2007
Depending on latency, AMD could even use HBM for main memory... pretty sure OEMs would like that...

It's interesting that low-cost HBM has a high pin speed, which could improve latency... (whatever it is)

I'm not surprised the low-cost stuff cuts the bus width by a large amount while running faster. That width has cost implications for sure.
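The width-versus-pin-speed trade-off works out roughly like this. The HBM2 figure is the standard per-stack interface; the low-cost numbers are illustrative, based on the reported 512-bit proposal:

```python
# Per-stack bandwidth: (bus width in bits / 8) * per-pin data rate in Gb/s.
def stack_bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8

hbm2_stack = stack_bandwidth_gbs(1024, 2.0)     # 256 GB/s (standard HBM2 spec)
low_cost_stack = stack_bandwidth_gbs(512, 3.2)  # ~205 GB/s from half the width
```

Half the width recovers most of the bandwidth once pin speed goes up, while needing far fewer TSVs and interposer traces.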
 

moinmoin

Diamond Member
Jun 1, 2017
There is a low-cost version that is coming out (or is supposed to come out):

[Image: low-cost HBM specification slide]

(Notice that ECC is removed, though. This is a negative for any workstation APU that would use FP64.)
Low cost here refers to the assembly of HBM, not the cost of the RAM chips used for it. And that's the crux for AMD: RAM in HBM is an added cost for external parts, thus higher financial risk and lower margin with little potential for added profit. As is, the RAM cost/risk (aside from a couple of select products) is fully offloaded to manufacturers and/or consumers. Just imagine how Ryzen's pricing would look with the RAM cost included.
 

moinmoin

Diamond Member
Jun 1, 2017
Wouldn't the DRAM cost less as well? (re: each die has fewer TSVs in it than the DRAM dies used for HBM2)

https://en.wikipedia.org/wiki/High_Bandwidth_Memory#HBM_2
Sure, but that doesn't change the fact that it's an additional cost/risk factor for little benefit. Afaik GloFo doesn't do DRAM, so AMD has to look to the market, and the DRAM market is crazy volatile. If GloFo did do DRAM, and if that were counted toward their wafer supply agreement, AMD would have much more of a reason to push HBM at a bigger scale. As is, there isn't.
 

NTMBK

Lifer
Nov 14, 2011
The mainstream platform won't get more than 24 lanes of PCIe because the vast majority of mainstream users don't need more than that. Adding more lanes increases complexity and costs (more pins in your socket, more traces to route, more layers in your motherboard), making the product less competitive.
 

scannall

Golden Member
Jan 1, 2012
AM4 will stay the current platform until DDR5 and PCIe 4/5 are ready to launch, probably 2020 or 2021. You don't actually *NEED* a bunch of different platforms, and each one costs money to develop. Consumer on AM4, professional on TR4, and then the EPYC socket is plenty.
 

LightningZ71

Golden Member
Mar 10, 2017
Adding a third CCX on a 7nm shrink of the existing 14/12nm product would still result in a markedly smaller die. The way IF works allows them to effectively just add another item on the fabric without requiring a wholesale tear-up of the logic (a 7nm shrink that is anything more than a dumb shrink will require a new floorplan no matter what). Another CCX will add 8 more MB of L3 cache and will lean on the IF more heavily for CCX <-> CCX communication, cache coherency, etc. To keep this from harming thread performance, there will need to be an increase in IF throughput, probably by clocking it higher. With a smaller die and an improved process, this shouldn't be too big of a problem when looking from afar.

The next issue would be RAM bandwidth demands. Keeping 12 cores and 24 threads fed with data is not going to be easy. Qualifying for higher-speed RAM can help, but there will still be issues there. I suspect that there will be an additional effort to mitigate the data bandwidth requirements at 7nm, perhaps by introducing an exclusive L4 cache in the 64MB range on die, enabled at various sizes depending on the product tier. The move to 7nm enables this while still fitting in the existing footprint of the 12nm die.
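To put a rough number on the "keeping 12 cores fed" concern: at a fixed dual-channel DDR4 budget, the per-core share falls quickly as cores are added (peak theoretical figures only):

```python
# Per-core share of a fixed memory bandwidth budget (peak theoretical numbers).
DUAL_CHANNEL_DDR4_3200 = 51.2  # GB/s: 2 channels * 8 bytes/transfer * 3200 MT/s

def per_core_gbs(total_gbs, cores):
    return total_gbs / cores

eight_core = per_core_gbs(DUAL_CHANNEL_DDR4_3200, 8)    # 6.4 GB/s per core
twelve_core = per_core_gbs(DUAL_CHANNEL_DDR4_3200, 12)  # ~4.3 GB/s per core
```

A large L4 or faster-qualified RAM would be ways to soften exactly this shrinking per-core share.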

Going forward, I suspect that AMD may introduce a modification of the AM4 socket that supports triple-channel RAM. As the cores get faster and the core count increases, the demands on RAM bandwidth will continue to grow, and something will need to give. This would have implications for TR and EPYC as well. I suspect that trying to support 12 DDR channels on the EPYC package would be a path-routing nightmare, but perhaps a 6-channel TR may not be beyond the realm of possibility. Has anyone ever considered the possibility of using SO-DIMMs in a desktop, workstation, or server platform to keep board real estate usage in check? I know I've seen them in the mini-ITX format before.

I don't think that much will change with the APUs in the short term. They are value products in the desktop space and won't support an expensive change in platform. I suspect that they will get a 7nm core shrink like the rest of the stack. They might get split into two different product lines at first, with an early 7nm product being another 4-core chip with a similarly sized (though perhaps graphics-IP-refreshed) iGPU, and later a 2-CCX, 8-core product with the same GPU setup that is destined for mid-range mobile products (this will require another die, but I believe volume and revenue will support that by then).

At some point, I suspect that AMD will introduce a mobile MCM product that has a 7nm die and either a dGPU chip with HBM RAM on it (similar to Intel's KL-G) or, perhaps, an APU die based around their 7nm Zen cores and a larger iGPU section, but with an HBM controller embedded IN ADDITION TO the two-channel DDR controller, either of which can be enabled or disabled as needed. This would allow a product that could be mounted in a normal package with no HBM for a mid-range mobile solution, or be mounted on an MCM with HBM for a higher-end solution. This would let AMD leverage their greater ability to integrate a high-performance GPU with a high-performance processor, as opposed to Intel, which currently has to bundle a separate dGPU chip on their EMIB package. This would reduce the disadvantage that AMD has by using MCM, but allow them to keep a similar footprint.

This is all just educated guesses on my part.
 

Topweasel

Diamond Member
Oct 19, 2000
So much brokenness here, and so many easy-to-pick-up things missed.

1. AM4 is PGA specifically at OEM and motherboard maker request, along with APU support. So the APU is part of the reason we don't see the full 32 PCIe lanes.
2. No mid-level socket changes. AM4 will stay AM4 till AM5 is ready. If they are going to make a socket change, there are too many wholesale changes to make to do a mid-level one. The two biggest, PCIe 5.0 and DDR5, are each enough to justify a new socket. So no AM4 with 3 channels.
3. ThreadRipper will never support an APU. A new socket, maybe, but honestly I don't see it. It shares a socket with EPYC, and we may see a Threadripper- or EPYC-like APU setup, as the socket itself has more than enough connections, but that would be a server-targeted chip. Look at RR: a server using the GCXs as accelerators is one thing, but for the performance-driven HEDT market, no one needs an expensive version of two CPUs that alone sell for way less than a normal SR or PR. It's weak in both categories and therefore has no market.
4. There may be a market for a customized embedded "ThreadRipper mobile APU". What we know as TR is a skunkworks product; they saved on development by utilizing server parts to offer a desktop solution. This isn't quite what Intel is doing with their HEDT products, which are specifically binned server parts, for margin reasons. If Threadripper, whose overall requirements are nearly EPYC's, had been given decent funding and not been done mostly in secret, there would probably be room for it to have its own socket and not be what it is. On the desktop that means using TR4 as it is today, but that doesn't have to be its only implementation going forward. TR2 might not share the exact same line as EPYC (which is why there has been some confusion about the extra dies), and maybe they will actually only have 2 real dies in the next one. So they could take the EPYC 2 die option I talked about above, maybe using the APU, for things like an iMac, where they would be able to have an 8C/16T chip with double the Vega parts in a thin, low-power setup.

As for the next sockets, I think the thinking will be pretty similar. Now that they have APUs on the same socket, the value of keeping them there is high. It still means any future TR would be sans GPU. What they may do is tell the mobo guys that it's time to switch over to LGA, and therefore be able to use more pins and not have to neuter the APU-less chips as much.
 

wahdangun

Golden Member
Feb 3, 2011
Maybe they will add TR4 motherboards with display output, since there are many unused pins in the CPU. Just like how some AM4 boards don't have any display output.
 

cbn

Lifer
Mar 27, 2009
As for the next sockets, I think the thinking will be pretty similar. Now that they have APUs on the same socket, the value of keeping them there is high.

The value will be reduced, though, if the CPU and APU sharing a common socket means the APU's iGPU has to be smaller than ideal and the PCIe lanes of the CPU are reduced.
 

Topweasel

Diamond Member
Oct 19, 2000
The value will be reduced, though, if the CPU and APU sharing a common socket means the APU's iGPU has to be smaller than ideal and the PCIe lanes of the CPU are reduced.
It was reduced because adding GPU functionality meant a certain number of pins. AMD used PGA at the OEMs' request, and losing the PCIe lanes was the compromise. In the future, if they release a more functional socket, it will be LGA, and they will have the pins for the full PCIe lanes.

There is a point to be made that the target market only has mediocre to poor demand for those lanes anyways.

As for the iGPU, pinout has little to do with iGPU size. Also, what is ideal? I mean, its current one is the most powerful iGPU, right? How much bigger does it need to be?