Separate graphics and compute card at 7nm for AMD?

That's not too surprising since they had a large Vega chip for that market on their roadmap. Was Vega 20 supposed to be 7nm?

I think it's less that they had hardware specifically for machine learning, and more that GPUs happened to be reasonably well suited to machine learning. I don't see that changing, since much of that compute capability is also used for gaming/graphics tasks in modern engines/games. I mean, it's not like they're going to revert to VLIW, which should be more efficient for purely graphics processing (effectively it just means you'd be able to cram more graphics-focused units into the same chip area, since less of the chip would be taken up by compute tasks). Though I've wondered if moving to the new modular approach wouldn't lead to something like that: a graphics-focused core and a compute-focused core, and they mix and match based on the product. And as more specialized processing units come up (tensor, and I'm sure there will be others), they can have cores focused on those and just slot them in as desired.
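To put the "GPUs just happen to fit ML" point concretely: the core operation in both shader math and neural-net layers is a big parallel multiply-accumulate. A minimal sketch (naive CUDA, made-up kernel name, no tuning at all):

Code:
// One thread computes one element of C = A * B (all N x N, row-major).
__global__ void matmul(const float *A, const float *B, float *C, int N) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N) {
        float acc = 0.0f;
        for (int k = 0; k < N; ++k)
            acc += A[row * N + k] * B[k * N + col];  // multiply-accumulate, the same primitive shaders lean on
        C[row * N + col] = acc;
    }
}

Nothing in there is ML-specific, which is exactly why the same silicon has been able to sell into both markets.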

Maybe we'll see the individual chips change in size too (I'd be curious whether there's an optimal size ratio, so you could have a grid of identically sized slots and just slot in CPU/GPU/memory/etc. as needed). Think if you could take a Threadripper-like setup and say: I want 1 module of CPU cores, 2 modules of GPU, and 1 module of HBM. Or maybe they'd go with a different standard size, where instead of 4 modules it'd be 8 or 16, but the individual chips would be smaller.

I also wonder if that might be beneficial in other ways, like placing the chips more optimally for performance and/or thermal needs. For example, put memory between the CPU and GPU so they can share it as a cache, and push the hotter chips away from each other so the heat dissipation gets spread out. Going to modules sort of achieves that on its own, since it effectively spreads the total chip out and adds buffers, although then you run into interconnect issues (which is why fiber optics is being looked into even at the chip level, and why some processor concepts end up looking like the 3D cube grid of the Terminator's processor).
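Purely as a toy sketch of that "grid of identical slots" idea (everything below is hypothetical; none of these types or names are real AMD products or APIs):

Code:
#include <iostream>
#include <vector>

// Hypothetical chiplet types for the uniform-slot thought experiment.
enum class Chiplet { Empty, CPU, GPU, HBM };

struct Package {
    std::vector<Chiplet> slots;  // identically sized physical slots
    explicit Package(int n) : slots(n, Chiplet::Empty) {}
    void place(int i, Chiplet c) { slots[i] = c; }
};

int main() {
    // "1 module of CPU cores, 2 modules of GPU, and 1 module of HBM"
    Package pkg(4);
    pkg.place(0, Chiplet::CPU);
    pkg.place(1, Chiplet::HBM);  // memory between CPU and GPU, shared like a cache
    pkg.place(2, Chiplet::GPU);
    pkg.place(3, Chiplet::GPU);
    std::cout << "configured " << pkg.slots.size() << " slots\n";
    return 0;
}

The interesting engineering would all be in the placement constraints (thermal spacing, interconnect length), which this obviously hand-waves.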
 

Krteq

Golden Member
That's not too surprising since they had a large Vega chip for that market on their roadmap. Was Vega 20 supposed to be 7nm?
According to this "old" ROCm roadmap, it's supposed to be on the 7nm process (7nm GFX9, ...).

[Image: AMD Vega 20 specifications slide]
 

raghu78

Diamond Member
This was expected to happen. AMD is going to have a separate HPC/ML GPU and a separate gaming GPU from the 7nm node. Similarly, AMD server and desktop CPUs will have separate dies. I expect 7nm Rome to be built on GF 7SoC (6T) and 7nm Ryzen / Ryzen TR to be built on GF 7HPC (9T). The server CPUs will be optimized for power efficiency and will use a low-power process designed for maximum transistor density, while the desktop CPUs will be optimized for the highest single-thread performance and clocks and will use a high-performance process.
 

prtskg

Senior member
This was expected to happen. AMD is going to have a separate HPC/ML GPU and a separate gaming GPU from the 7nm node. Similarly, AMD server and desktop CPUs will have separate dies. I expect 7nm Rome to be built on GF 7SoC (6T) and 7nm Ryzen / Ryzen TR to be built on GF 7HPC (9T). The server CPUs will be optimized for power efficiency and will use a low-power process designed for maximum transistor density, while the desktop CPUs will be optimized for the highest single-thread performance and clocks and will use a high-performance process.
In a CPU thread you said you were disappointed with AMD's decision not to refresh Vega GPUs at 12nm. What if AMD saved money there and instead went for different chips at 7nm? They did something similar with Bulldozer: stopped updating it and put the money into Zen and GPUs instead. It's not like refreshing at 12nm would have made Vega very competitive, but if they go with separate dies for graphics and compute cards, things could finally get interesting in 2019.
 

Qwertilot

Golden Member
I'm not sure they've got the money to do that, or rather, if they do, whether it won't end up being small and medium cards for gaming (also game consoles, etc.) and a 'big' card for compute. Those are the sectors they seem to be most competitive in. Big gaming cards haven't worked for a while.
 

24601

Golden Member
Looks more like they canceled "14nm+" Vega due to Vega being too broken to fix in time.

7nm Vega is either fixed Vega or simply a pipe-cleaner part quickly ported to 7nm in the hope of staying viable among miners (if mining lasts that long).

It could also simply be a process port purely for Apple (Apple loves free AMD "Pro" chips it can get for pennies).

There's absolutely 0% chance of any real/serious "Enterprise" or "Machine Learning" use case for any AMD GPUs, as the compute drivers, and the various support required to keep "Enterprise"-level compute running 24/7/365 without issue, simply don't exist in the AMD ecosystem.

Nvidia dominates the compute market due to Nvidia writing almost all of the code themselves.

The only compute market AMD has done well in is crypto mining, and only because the miners were willing to troubleshoot the buggy PoS AMD compute ecosystem 24/7/365, edit the vBIOS, hard-mod the cards, and sometimes patch the drivers themselves when they had to.

I remember the months and months of hard community work that went into every launch of a new AMD card just to get the things working semi-stably for mining.

Imagine how much work you'd have to do if you were actually trying to run complex, dynamic compute workloads on the damned things, instead of perfectly parallel, 100% predefined crypto workloads hand-tuned per cycle.
 

prtskg

Senior member
Isn't that more reason to leave 12nm and go for 7nm? If the software stack isn't ready, there's no benefit to having the hardware in that field.

Edit -
And I wanted to add that your comment about the software is blown out of proportion.
 

Headfoot

Diamond Member
There's absolutely 0% chance of any real/serious "Enterprise" or "Machine Learning" use case for any AMD GPUs, as the compute drivers, and the various support required to keep "Enterprise"-level compute running 24/7/365 without issue, simply don't exist in the AMD ecosystem.

Disagree. Writing the ML model / algorithm is by far the hardest part. Getting that to work via OpenCL on AMD vs CUDA for nVidia isn't going to be a big deal for the kinds of enterprises that have the knowledge to write ML models from scratch.
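For what it's worth, the kernel-level gap really is mechanical. A sketch with a trivial SAXPY (not a real model, just to show the shape of the port):

Code:
// CUDA version:
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

// OpenCL C equivalent, nearly line for line:
// __kernel void saxpy(int n, float a, __global const float *x, __global float *y) {
//     int i = get_global_id(0);
//     if (i < n) y[i] = a * x[i] + y[i];
// }

The host-side plumbing (contexts and queues vs. streams) differs more than the kernels do, and that's where driver quality actually bites, but it's a small piece next to designing the model itself.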

Intel and many others are coming out with dedicated ML chips, which would be a much larger programming gap than moving from nVidia to AMD. The field is new; anything can happen. Not that I think AMD will get any great share of the market; they just haven't been fast enough on ML.