Speculation: AMD's 7nm processors will all be APUs


Will 7nm Ryzen have integrated graphics?


  • Total voters: 57

Excessi0n

Member
Jul 25, 2014
140
36
101

Why were onboard graphics not actually put onboard - i.e. on the motherboard - instead of on the CPU? Surely having on-die integrated graphics is a waste of resources, or at least a design complication, plus the obvious added heat. If we really want to provide people with an alternative to a gaming GPU, and there's new technology that allows it, you could simply design a small chip that fits on the mobo.
This gives the buyer the choice to have it or not, and simplifies the design of what is arguably the most complex component in the whole system.
I do get that having the GPU on-die makes it more efficient... slightly. Not sure this is a valid argument for on-die graphics, though.

I'm talking Intel here, and AMD second.

There actually used to be motherboards with onboard graphics. They went out of style... probably ten years ago now? Maybe a little less. iGPUs made them pointless and they made motherboards more expensive, so away they went.
 

Vattila

Senior member
Oct 22, 2004
799
1,351
136
So no, it makes zero sense to add a GPU; we should not forget that most of these servers will run some run-of-the-mill Java business apps and/or a database. Nothing that profits from a GPU (or any type of AVX).

This segment of the server market is where Intel is strongest, due to superior single-thread performance. There are far more vulnerable and lucrative segments that AMD can attack, including HPC, where parallel compute capability is very much essential. Coincidentally, HPC, rendering and virtualisation are the application areas AMD has chosen to demo EPYC's strengths — areas that can benefit from a GCX.

Regarding the cost of including a GCX — as I pointed out earlier in this thread, die cost can be recouped in the high-margin server market, with EPYC prices ranging from $400-$4000.

So a GCX would be both useful and affordable in the server market. I don't see any convincing argument in this thread to conclude otherwise. The only question is whether it is the best use of transistors. The latest rumour, that second-generation EPYC will have 64 cores, suggests that AMD thinks adding another CCX is the better way forward, perhaps to decisively win the core war against Intel.
 
Last edited:

Vattila

Senior member
Oct 22, 2004
799
1,351
136
I do get that having the GPU on-die makes it more efficient... slightly. Not sure this is a valid argument for on-die graphics, though.

It is. A graphics unit needs high bandwidth to memory, so it needs to be close to memory. In the old days, the memory controller was in the north-bridge on the motherboard, and hence graphics used to be integrated in the north-bridge.

Cost, power and form factor are also arguments for integrating graphics. It will eventually be integrated into server processors as well, due to the demand for compute density and efficiency. AMD has the APU as a central piece of their government-funded exascale research:

https://www.hpcwire.com/2015/07/29/amds-exascale-strategy-hinges-on-heterogeneity/

+ Lower overheads (both latency and energy) for communicating between the CPU and GPU, for both data movement and launching tasks/kernels.

+ Easier dynamic power shifting between the CPU and GPU.

+ Lower overheads for cache coherence and synchronization among the CPU and GPU cache hierarchies, which in turn improve programmability.

+ Higher FLOPS per m³ (performance density).
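
A minimal sketch of the first and third points, using CUDA's unified memory as a stand-in for the shared CPU/GPU address space (AMD's actual stack here would be HSA/ROCm; the array size and scale factor are arbitrary). CPU and GPU touch one allocation, with no explicit staging copies in either direction:

#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *x, int n, float s) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}

int main() {
    const int n = 1 << 20;
    float *x = nullptr;
    cudaMallocManaged(&x, n * sizeof(float));     // one allocation, visible to CPU and GPU
    for (int i = 0; i < n; ++i) x[i] = 1.0f;      // CPU writes directly
    scale<<<(n + 255) / 256, 256>>>(x, n, 2.0f);  // GPU works on the same pointer
    cudaDeviceSynchronize();
    printf("x[0] = %.1f\n", x[0]);                // CPU reads the result, no memcpy
    cudaFree(x);
    return 0;
}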
 
Last edited:
  • Like
Reactions: moinmoin

maddie

Diamond Member
Jul 18, 2010
4,740
4,674
136
You said:
"Regarding the cost of including a GCX — as I pointed out earlier in this thread, die cost can be recouped in the high-margin server market, with EPYC prices ranging from $400-$4000."

This node will be a GloFo exclusive. Do you think there will be production capacity to waste on a niche sub-unit that will cut into total sales by reducing production volumes (fewer dies per wafer)? That remains relevant regardless of whether the costs can be recouped.
 

scannall

Golden Member
Jan 1, 2012
1,946
1,638
136
Just tossing 2 or 3 video cores with 256 MB of RAM into the chipset would probably be a good idea. Enough to run without a graphics card, do troubleshooting, etc.
 
  • Like
Reactions: el etro

DrMrLordX

Lifer
Apr 27, 2000
21,631
10,842
136
Hosting the GPU in the CPU package (or as part of the CPU die) significantly reduces the latency of any CPU <-> GPU communication. That would be vital for branchy or semi-branchy GPGPU compute situations where you have small bursts of FPU-intensive math, but not necessarily for highly parallel workloads involving 99.9% or more raw number-crunching without significant operational dependencies.

That being said, stuff like IF (Infinity Fabric) and NVLink should be making that fact less relevant in the server room.
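
As a rough illustration of why that latency matters, here is a hedged CUDA sketch (kernel size and iteration count are arbitrary) that times the launch-plus-synchronize round trip a branchy workload pays on every small burst:

#include <chrono>
#include <cstdio>
#include <cuda_runtime.h>

__global__ void tiny_burst(float *x) {
    // stand-in for a small burst of FPU-intensive math
    int i = threadIdx.x;
    x[i] = x[i] * 1.0001f + 0.5f;
}

int main() {
    float *x = nullptr;
    cudaMalloc(&x, 256 * sizeof(float));
    cudaMemset(x, 0, 256 * sizeof(float));
    const int iters = 1000;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) {
        tiny_burst<<<1, 256>>>(x);
        cudaDeviceSynchronize();  // the CPU must wait before it can branch on the result
    }
    auto t1 = std::chrono::steady_clock::now();
    printf("avg launch+sync round trip: %.1f us\n",
           std::chrono::duration<double, std::micro>(t1 - t0).count() / iters);
    cudaFree(x);
    return 0;
}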
 

moinmoin

Diamond Member
Jun 1, 2017
4,952
7,661
136
Ryzen distracted from it, but you can expect AMD to pick up the push for HSA again at some later point.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
There actually used to be motherboards with onboard graphics. They went out of style... probably ten years ago now? Maybe a little less. iGPUs made them pointless and they made motherboards more expensive, so away they went.
They died somewhere between AMD's purchase of ATI (their ATI chipset was the last one with an iGPU) and Intel's refresh of Nehalem. The sole reason for onboard video and CPU-integrated video has always been laptops: moving the GPU on-die allowed better power control and easier packaging for laptop manufacturers. AMD may have had dreams of heterogeneous computing, but in the end even AMD is aware that it would be silly to waste server-chip die space on a tech that isn't useful in that market, which is why we got the FX line and the Ryzen lineup.

Ryzen Mobile is a way to keep HSA in development until someone comes along and says "hey, can you make a server chip for HSA?", at which point they will start with a 16-core CPU with 44 GPU units, for a 32c/88gu 2S system. It will require a tweaked socket, and it might not support 128 lanes. Maybe, if demand is high enough, AMD will then create a higher-end die like what is suggested here. Maybe that use eventually grows to the point where AMD stops offering non-GPU dies. But there is no way AMD threatens the strength they have now by including a bunch of GPU units so few want, just to get a GPU into the main Ryzen lineup.
 

jpiniero

Lifer
Oct 1, 2010
14,591
5,214
136
AMD tried to pitch HSA for more than just server usage, though.

...for a 32c/88gu 2S system. It will require a tweaked socket.

IIRC Snowy Owl with Vega+HBM is BGA, not a tweaked socket. It's one die (8C) plus whatever Vega 20 (or, I suppose, Navi) has. It needs 1/2-rate DP. We'll have to see if they actually release that version.

Edit: BTW, most likely AMD ends up doing something similar to what Intel has proposed with EMIB, so you'd have a CPU tile tied in with one or more GPU tiles. There'd then be no need to put GPU logic directly into the die.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
IIRC Snowy Owl with Vega+HBM is BGA, not a tweaked socket. It's one die (8C) plus whatever Vega 20 (or, I suppose, Navi) has. It needs 1/2-rate DP. We'll have to see if they actually release that version.

That assumes Snowy Owl is still a planned product line. And it would be BGA/SP3.75 or whatever, just not SP3. But SO was originally two full APU dies, which is why there was some confusion when the Threadripper rumors started.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,691
136
It is. A graphics unit needs high bandwidth to memory, so it needs to be close to memory. In the old days, the memory controller was in the south-bridge on the motherboard, and hence graphics used to be integrated in the south-bridge.

I think you mean the northbridge, no?

It used to be CPU -> northbridge (memory controller, AGP or PCIe controller, and in some cases graphics) -> southbridge (general I/O, PCI(e), IDE/SATA). The last Intel design with a non-integrated northbridge was Nehalem. Though the LGA-1156 variety wasn't "integrated" per se, with separate dies for CPU and northbridge/graphics on the same package.
 
  • Like
Reactions: Vattila

Vattila

Senior member
Oct 22, 2004
799
1,351
136
I argued before in this thread that a server APU would be good for the virtualisation market, allowing each VM to allocate some on-chip GPU resources.

AMD's main EPYC partner, HPE, is aiming for just this market, with EPYC in their new Gen10 server line, to be presented at Discover 2017 Madrid, November 28-30:

HPE Gen10 server utilizing AMD EPYC™ processors for virtualization leadership

"HPE is extending the world’s most secure industry-standard server portfolio to include servers based on the AMD EPYC™ processors. Come see how the most innovative performance and security technologies from HPE and AMD work together to deliver a flexible platform purposefully built for delivering advanced leadership in virtualization and memory-centric workloads."

https://content.attend.hpe.com/go/a...le=en_US&AEID=&selectedFilters=tag_0:0&kw=amd

Although this generation of EPYC servers can be set up with loads of discrete graphics cards, it seems obvious that you can achieve better performance per watt and per unit area with an APU, and this is AMD's direction for their exascale research. So I have little doubt that the server APU is still on AMD's roadmap.

It will be interesting to see how many different dies AMD will design and produce at 7nm.
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
I argued before in this thread that a server APU would be good for the virtualisation market, allowing each VM to allocate some on-chip GPU resources.
Vega already has a bog-standard SR-IOV implementation.
Just buy a 1P/2P EPYC and attach some MI25s.
and this is AMD's direction for their exascale research
The direction is separate CPU and GPU tiles.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Based on the recently released die shot of the Raven Ridge chip, I made a quick mock-up of what my proposed 12-core 7nm APU die might look like, with 3 CCXs and 1 GCX (11 Vega CUs).

[mock-up image]

Nice and square, and at 7nm it should be smaller than the 14nm Raven Ridge, as the CCX takes up much less than half of the latter's die.

PS. Here is the original Raven Ridge die shot:

https://hexus.net/media/uploaded/2017/10/ffef815e-8764-4771-98d5-9e92ba8803ba.png

I like that iGPU size with 1/2-rate double-precision floating point and ECC-enabled HBM2 memory. Unfortunately, I imagine it would need a rather large interposer to make it work... so maybe the HBM2 could be stacked directly on the GPU part of the die?
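
For a rough sense of the size claim in the quote above, here is a back-of-envelope sketch. The figures are assumed round numbers, not measurements: Raven Ridge ≈ 210 mm² at 14nm, one Zen CCX ≈ 44 mm², and ~2x density at 7nm.

14nm area of the proposed blocks ≈ 210 + 2 x 44 ≈ 298 mm² (Raven Ridge plus two extra CCXs)
7nm estimate ≈ 298 / 2 ≈ 149 mm², comfortably below the ~210 mm² of 14nm Raven Ridge

Under those assumptions, the proposed 12-core APU would indeed come out smaller than the 14nm Raven Ridge.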
 
Last edited: