So WHEN is the next big thing for the high end happening?


Flayed

Senior member
Nov 30, 2016
431
102
86
Modern engines hate mGPU like nothing else so don't bet on it.

I think I read somewhere that Intel is going to bring back mGPU by using a software interface between the hardware and DirectX, so that it appears to games as if there is only one GPU.
 

tajoh111

Senior member
Mar 28, 2005
298
312
136
Somewhat similar performance, on a totally different platform.
HEDT isn't your usual client stuff.
Intel client 8 cores are priced very sanely.

Intel HEDT pricing never applied to GPUs, nothing to undercut there.

They're not undercutting anything, Navi is just plenty of perf for each xtor spent.

Since you act as if you're in the know or work with AMD directly, why does Vega 20 have such bad transistor density for 7nm and will Navi have a similar transistor density?

Looking at Apple, they nearly doubled their transistor density going from 16nm (3.3 billion transistors in 125mm2) to 10nm (4.3 billion in 89mm2). They then improved on that at 7nm, fitting 6.9 billion transistors into 83mm2. That works out to roughly 26.4, 48, and 83 million transistors per mm2 at 16nm, 10nm, and 7nm respectively - a bit better than triple the density overall.

Vega 10 had 12.5 billion transistors on a 14nm/16nm-class process very similar to Apple's, with a transistor density of about 25.8 million transistors per mm2.

Vega 20 manages just over 40 million transistors per mm2, which seems pretty bad for a move from 14nm to 7nm, particularly when Apple manages roughly double that at 7nm; it even loses to Apple's 10nm transistor density.
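A quick back-of-the-envelope check of those numbers (a rough sketch; the ~486mm2 and ~331mm2 Vega die sizes and the ~13.2 billion Vega 20 transistor count are the commonly reported figures, not something stated here):

```python
# Rough transistor-density math in millions of transistors per mm^2.
# Counts/areas are the figures quoted above plus commonly reported Vega die sizes.

def mtr_per_mm2(transistors_billions, area_mm2):
    return transistors_billions * 1000 / area_mm2

chips = {
    "Apple A10 (16nm)": (3.3, 125),
    "Apple A11 (10nm)": (4.3, 89),
    "Apple A12 (7nm)": (6.9, 83),
    "Vega 10 (14nm)": (12.5, 486),  # ~486mm2 die (commonly reported)
    "Vega 20 (7nm)": (13.2, 331),   # ~13.2B transistors, ~331mm2 die (commonly reported)
}

for name, (xtors, area) in chips.items():
    print(f"{name}: {mtr_per_mm2(xtors, area):.1f} MTr/mm^2")

# Apple: ~26.4 -> ~48 -> ~83 MTr/mm^2 (a bit better than 3x from 16nm to 7nm)
# Vega:  ~25.7 -> ~40 MTr/mm^2 (well short of Apple's 7nm density)
```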

Why is this the case?
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
The I/O sections take up more space on GPUs, and they don't scale well with process shrinks.

Putting more work in can result in more optimized circuit designs. AMD's GPU team is not in a good state right now.

Vega 20 probably uses 200W just for the chip, which is a far higher W/mm2 density than mobile chips.

Vega 20 @ 200W = 600mW/mm2
Apple Ax @ 5W = 50-60mW/mm2
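Rough math behind those figures (the ~331mm2 Vega 20 and ~83mm2 Apple Ax die sizes are assumptions on my part):

```python
# Power density in mW per mm^2, using assumed die sizes (~331mm2 and ~83mm2).

def mw_per_mm2(watts, area_mm2):
    return watts * 1000 / area_mm2

print(mw_per_mm2(200, 331))  # Vega 20 at ~200W: ~604 mW/mm^2
print(mw_per_mm2(5, 83))     # Apple Ax at ~5W:  ~60 mW/mm^2
```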

This is true in Intel chips too. There's about 2x difference in perf/clock between Goldmont and Skylake. If you compare the core sizes though, there's something like 6-7x difference between the two. You can fit 4 Goldmont cores + L3 cache + System Agent equivalent in a single Skylake die.
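And roughly what that Goldmont/Skylake tradeoff looks like in numbers (a sketch using the approximate figures above, normalized to a Goldmont core = 1):

```python
# Per-clock performance vs core area, normalized to a Goldmont core = 1.
goldmont_perf, goldmont_area = 1.0, 1.0
skylake_perf, skylake_area = 2.0, 6.5  # ~2x perf/clock at ~6-7x the core size

print(goldmont_perf / goldmont_area)  # 1.00 perf per unit of area
print(skylake_perf / skylake_area)    # ~0.31: the big core pays a steep area cost
```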
 
  • Like
Reactions: happy medium

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
why does Vega 20 have such bad transistor density for 7nm and will Navi have a similar transistor density?
Not a very high effort design and no.
Looking at Apple, they nearly doubled their transistor density going from 16nm (3.3 billion transistors in 125mm2) to 10nm (4.3 billion in 89mm2)
Never, ever apply SoCs to anything high performance, ever.
Putting more work in can result in more optimized circuit designs.
Doesn't make less dense libs any denser.
AMD's GPU team is not in a good state right now
They are, they're just putting the priorities straight.
 

lifeblood

Senior member
Oct 17, 2001
999
88
91
I forgot which article(s) I read it in but Vega only got half the resources (ie, people) that Navi did. Also, Navi has had more time for development. If the articles are to be believed, Vega's performance is not a harbinger of Navi's performance. It's a hard lesson but you have to pick your battles. Sometimes you have to sacrifice product A to make sure product B has what it needs to be successful. They focused on Ryzen first, now they're focused on Navi.

If this was the old AMD then I would have zero faith that Navi was going to be any good. Less than zero, actually. But given their new found drive and competence, I have hope that Navi will actually be a winner.
 

ozzy702

Golden Member
Nov 1, 2011
1,151
530
136
I forgot which article(s) I read it in but Vega only got half the resources (ie, people) that Navi did. Also, Navi has had more time for development. If the articles are to be believed, Vega's performance is not a harbinger of Navi's performance. It's a hard lesson but you have to pick your battles. Sometimes you have to sacrifice product A to make sure product B has what it needs to be successful. They focused on Ryzen first, now they're focused on Navi.

If this was the old AMD then I would have zero faith that Navi was going to be any good. Less than zero, actually. But given their new found drive and competence, I have hope that Navi will actually be a winner.

Lisa Su has worked wonders on the CPU side, and I agree: if not for the success AMD has seen on the CPU side of things, I'd have zero hope for Navi to be anything but a complete turd. Hopefully Navi is a winner. I'm certain that in due time AMD will offer compelling alternatives to NVIDIA, and I look forward to being able to go team red again without it being a massive compromise.
 
  • Like
Reactions: happy medium
Mar 11, 2004
23,077
5,559
146
Modern engines hate mGPU like nothing else so don't bet on it.

I don't think that's any more true now. Game engines in general are more complex, but I don't know that there's anything that specifically makes them worse for mGPU, although they have moved to more deferred rendering for various effects, which makes syncing a single image trickier. It's more that what was in place languished, and no one puts in the work to keep it worthwhile like they used to (both the GPU makers and game developers would back then; now pretty much no one does). The thing is, the same has happened with supporting stuff like DX12, where games using it often seem to perform worse even though we've been told it should offer quite a bit better performance - which we saw is possible with Doom. It'll take them having a reason to care about putting in the work.

I think they'll be more willing to do the extra work with the move to cloud gaming. Google apparently has, or has gotten some developers to (they're openly touting mGPU on their game streaming service), and I believe all three of these companies (AMD, Intel, Nvidia) have said that the future of chips in general is likely to be chiplets. Nvidia and AMD have explicitly mentioned it for GPUs (Nvidia said it's basically inevitable but they don't know when it'll be feasible; I think they said this a couple of years back, maybe around the time Zen showed the chiplet approach can work for CPUs). AMD straight up had Navi listed as mGPU on their roadmap at one point.

Not that I'd expect miracles (it's certainly not an easy thing to accomplish); I'd be surprised if they could even manage a 50% performance improvement consistently. But even that would be enough to disrupt prices again: if Navi offers 1080 performance for $250, 50% higher would put it close to the 2080, which is $699-$799, and if it scales really well, two Navi cards could compete with the 2080 Ti or even the Titan RTX. There's been talk of doing mGPU for per-eye rendering in VR too.
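Roughly the value math I have in mind (a sketch; the Navi price/performance and the 2080 being ~1.5x a 1080 are assumptions here, not known figures):

```python
# Hypothetical dual-Navi vs RTX 2080 value comparison, normalized to GTX 1080 = 1.0.
navi_price, navi_perf = 250, 1.0         # assumed: 1080-class Navi for $250
mgpu_scaling = 0.50                      # assume mGPU only adds 50% on top
rtx2080_price, rtx2080_perf = 699, 1.5   # assumed: 2080 ~1.5x a 1080, $699+

print(2 * navi_price, navi_perf * (1 + mgpu_scaling))  # $500 for ~1.5x 1080 perf
print(rtx2080_price, rtx2080_perf)                     # $699+ for ~1.5x 1080 perf
```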

Their goal is non-explicit (implicit) mGPU, such that to the software it appears as a single unified GPU, with the GPU and/or its driver managing resources. I think they'd like it to function the way GPUs do in compute, where you don't need to tailor the workload for mGPU; you just throw the load at it and the system/software balances it across the available resources. In that situation it absolutely does not tank performance (in fact utilization is even better than with stuff like Crossfire/SLI). With gaming moving to be rendered in the cloud, they'll absolutely want the ability to balance load across their hardware. It won't be easy, for sure.
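A toy illustration of what implicit mGPU means (purely conceptual, nothing to do with any real driver; the class names are made up):

```python
# Toy model: the game sees one logical GPU; a driver-like layer spreads the
# submitted work across the physical GPUs behind it.

class PhysicalGPU:
    def __init__(self, name):
        self.name = name
        self.queued = 0

    def submit(self, frame):
        self.queued += 1
        return f"{self.name} renders frame {frame}"

class LogicalGPU:
    """What the software talks to: looks like a single GPU."""
    def __init__(self, gpus):
        self.gpus = gpus

    def submit(self, frame):
        gpu = min(self.gpus, key=lambda g: g.queued)  # naive load balancing
        return gpu.submit(frame)

device = LogicalGPU([PhysicalGPU("GPU0"), PhysicalGPU("GPU1")])
for frame in range(4):
    print(device.submit(frame))
```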

Longer term, they'd like the chip itself to send work to the proper hardware. That was partly behind the design of the construction CPU cores: the plan was to replace, I believe, the FP side of the CPU with the GPU, which is stronger at that, and then eventually mesh CPU and GPU into a single unified pipeline. But things were nowhere close to ready for that, so the way they had the chip designed flopped hard. I think we're going to end up back at something like the old co-processor model computers used to have, but at the chip level using chiplets. I think the plan was to do that on a monolithic chip, but because of how manufacturing has gone, they pretty much have to move to separate-die chiplets. Intel actually thought similarly (I remember an old AnandTech article, from around 2005, where Intel talked about where they thought things were headed, and they described something like that as being about 10 years out). The Cell processor had a somewhat similar idea - a main generic controller core managing things with a bunch of specialized cores around it - and it flopped because the industry was not ready for it at all. Incidentally, I wonder if the thinking wasn't "there's going to be a lot of work needed to utilize multi-core anyway, so we might as well jump beyond that to cores that are more specialized, since that will be extra work too."

I forgot which article(s) I read it in but Vega only got half the resources (ie, people) that Navi did. Also, Navi has had more time for development. If the articles are to be believed, Vega's performance is not a harbinger of Navi's performance. It's a hard lesson but you have to pick your battles. Sometimes you have to sacrifice product A to make sure product B has what it needs to be successful. They focused on Ryzen first, now they're focused on Navi.

If this was the old AMD then I would have zero faith that Navi was going to be any good. Less than zero, actually. But given their new found drive and competence, I have hope that Navi will actually be a winner.

That might have been the rumors about AMD moving resources to focus on Navi over Vega, although that was more after Vega was already out. But they have moved people from the CPU side to their embedded group (which handles GPUs) and have been building their GPU division back up after they had to let it flounder while they focused on Zen, so it's certainly in better shape now than it was in probably the couple of years prior.

Lisa Su has worked wonders on the CPU side, and I agree: if not for the success AMD has seen on the CPU side of things, I'd have zero hope for Navi to be anything but a complete turd. Hopefully Navi is a winner. I'm certain that in due time AMD will offer compelling alternatives to NVIDIA, and I look forward to being able to go team red again without it being a massive compromise.

Honestly, I think AMD's GPU division has done a pretty decent job considering the circumstances. AMD as a whole has fewer resources than Nvidia, and the GPU division was getting quite a small cut of that over the previous few years, yet AMD's GPUs have not been that far off Nvidia's pace (and actually even have an advantage in some specific instances). And in consoles they've been outright solid (thanks especially to Microsoft doing some extra work to figure out a more optimal ratio of units/memory/etc, but also to them optimizing the power of each chip, which probably saved a good 50W while improving sustained performance in particular, given AMD's woeful voltage tuning).

We'll see if Lisa can get the GPU division going like the CPU division, and whether she can get them working in harmony (which I think is their goal, and why they were not going to sell the GPU division; the GPU is integral to their overall plan). I have no idea on Navi. They should easily be able to make it a worthwhile upgrade for Polaris users. There are reports that the dGPU Navi we'll be getting isn't the same Navi that's going in the PS4, and I have no clue what that might mean (I've pointed out before how it could be pretty meaningful, but it might not be and could come down to simple stuff like the console version supporting something that wasn't likely to be used on PC anyway). And there have been reports that Navi is a new architecture, as well as rumors that it isn't and is just an evolution of GCN, while Arcteryx (slated for 2020, I believe, by the latest roadmap) will be the all-new architecture. So we really don't know what it will be, and AMD has become unreliable about providing much info on what they're doing GPU-wise.

Personally I have a hunch it'll be a revised GCN, and be decent (I don't think they'll offer 2070 or higher performance, but I'm expecting Vega 64/1080 gaming performance for ~$300, with power in line with Polaris when it isn't pushed outside its efficiency curve, so ~125-150W; anything beyond that would definitely be nice and appreciated, but I'm not expecting wonders). I also personally have a feeling it's going to have some mGPU support and that this will be how AMD plans on competing with Nvidia (by getting people to use a couple of cheaper Navi cards to offer performance similar to higher-end Nvidia cards). I think Navi in consoles will be the start of AMD's chiplet strategy (and thus the reason it's considered to be different from dGPU Navi: I think it'll have some Infinity Fabric stuff that dGPU Navi won't have, and I think they'll have an I/O controller chip that handles the unified memory space, so basically they'll swap the on-chip memory controllers for Infinity Fabric links compared to dGPU Navi), where they'll pair a CPU chiplet with a GPU chiplet (we won't see those types of APUs until next year on PC). But I think the GPU will actually be pretty similar to dGPU Navi beyond that (I could see them making a single chip and just disabling whichever part isn't needed depending on its intended use case, just so they can maximize economies of scale, since I expect the console will have its own I/O chip that carries a lot of the custom stuff AMD does for the partner).
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
We'll see if Lisa can get the GPU division going like the CPU division
You'll see soon enough.
Personally I have a hunch it'll be a revised GCN
Everything after is also that.
I think Navi in consoles will be the start of AMD's chiplet strategy
Rome is the start of AMD's chiplet strategy.
where they'll pair a CPU chiplet with a GPU chiplet
X360 already did that.
(I could see them making a single chip and just disabling whichever part isn't needed depending on its intended use case
SC contracts are all separate dies.
I expect the console will have its own I/O chip
The GPU is the NB there.
 
Last edited:
  • Like
Reactions: exquisitechar

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
They are, they're just putting the priorities straight.

I'll believe that when I see it. Ever since 2015, AMD fans have been saying to ignore the latest lackluster release because they're putting the resources into whatever comes next, and that will blow everyone away. And it hasn't happened yet. The fact is that AMD's GPU team was decimated when they laid off the old ATi guys after Hawaii, and has never really recovered.

Maybe they'll do it this time, but I'm not going to be Charlie Brown waiting for Lucy to yank the football away once more.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Honestly, I think AMD's GPU division has done a pretty decent job considering the circumstances. AMD as a whole has fewer resources than Nvidia, and the GPU division was getting quite a small cut of that over the previous few years, yet AMD's GPUs have not been that far off Nvidia's pace (and actually even have an advantage in some specific instances).

The thing is, as a consumer, I don't grade on a curve. I want the best product - whether that's best in absolute terms, or in terms of perf/$, or perf/watt, or whatever metric I'm trying to optimize. And right now, AMD isn't really being all that competitive in those terms. (One exception is the Radeon WX 5100, currently the best pro card under 75W and the most powerful card of any kind under 75W. But even there, the poor driver support for stuff like OpenGL hurts them.) I don't care if they are working with fewer resources; that's their problem, not mine.

Personally I have a hunch it'll be a revised GCN, and be decent (I don't think they'll offer 2070 or higher performance, but I'm expecting Vega 64/1080 gaming performance for ~$300, with power in line with Polaris when it isn't pushed outside its efficiency curve, so ~125-150W; anything beyond that would definitely be nice and appreciated, but I'm not expecting wonders).

As long as they stick with GCN, they won't be able to compete in terms of perf/transistor and perf/watt. GCN is not optimal for gaming compared to Nvidia's Maxwell and subsequent architectures. And it's clear that they don't know how to fix it (if it even is fixable). The current Chinese team didn't invent GCN and doesn't understand it; the Canadian team that did was laid off in 2013. They clearly can't fix the inherent limitations of the architecture, such as 4 triangles/clock (Nvidia's top architectures have done 6 triangles/clock for years).
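For a sense of scale, the peak geometry rates those figures imply (a sketch; the clock speeds are illustrative assumptions, not measured boosts):

```python
# Peak triangles per second = triangles per clock x clock speed (GHz -> Gtri/s).
def gtri_per_s(tri_per_clock, clock_ghz):
    return tri_per_clock * clock_ghz

print(gtri_per_s(4, 1.5))  # GCN-style front end: ~6 Gtri/s
print(gtri_per_s(6, 1.7))  # Nvidia-style front end: ~10 Gtri/s
```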

I also personally have a feeling it's going to have some mGPU support and that this will be how AMD plans on competing with Nvidia (by getting people to use a couple of cheaper Navi cards to offer performance similar to higher-end Nvidia cards).

The only way that could possibly ever work is if they managed to pull off transparent multi-GPU with 2+ cards presenting to the system as 1 card. Anything requiring any support from developers is an absolute nonstarter, since no developer will do extra work for AMD support/optimization unless they're paid to.
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
I'll believe that when I see it.
My child, you're already choking on your words, and it's gonna get worse the next few months.
Gasping for air, gag reflex? Fun.
The fact is that AMD's GPU team was decimated when they laid off the old ATi guys after Hawaii
I'm pretty sure 2010-2012 are before Hawaii.
As long as they stick with GCN, they won't be able to compete in terms of perf/transistor and perf/watt. GCN is not optimal for gaming compared to Nvidia's Maxwell and subsequent architectures
That's a one yikes ticket.
The current Chinese team didn't invent GCN and doesn't understand it; the Canadian team that did was laid off in 2013
That's another one.
They clearly can't fix the inherent limitations of the architecture, such as 4 triangles/clock
They very much can.
(Nvidia's top architectures have done 6 triangles/clock for years)
That's setup.
nV has a bit more when it comes to FF.
 
  • Like
Reactions: exquisitechar

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
My child, you're already choking on your words, and it's gonna get worse the next few months.
Gasping for air, gag reflex? Fun.

I'm pretty sure 2010-2012 are before Hawaii.

That's a one yikes ticket.

That's another one.

They very much can.

That's setup.
nV has a bit more when it comes to FF.

Again - I'll believe all this when I see it.
After the Vega fiasco, RTG just has no credibility left.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Oh gods.

I guess nV had no credibility left after Fermi.
Oh, wait.

Fermi was hot and loud, but at least it won the performance crown. Vega is much worse than that; it couldn't claim a clear win in anything - not in absolute terms (the 1080 Ti beat it), not in perf/$ (about the same MSRP as the 1080 with no better performance), and certainly not in perf/watt (significantly worse than Pascal). About the only thing it did well was GPGPU (and then only if you don't care about CUDA), and the 2nd cryptocurrency bubble was the only thing that stopped it from being a catastrophic financial failure for AMD.

The CPU side of AMD has won back confidence with Ryzen. The GPU side has repeatedly tried and failed to do so - even though we were all promised that Vega would be the second coming. They'll have to prove that they can compete.
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
Why am I wasting my time arguing with another moron who fails to understand just how swingy anything GPU is?
Fermi was hot and loud, but at least it won the performance crown.
At the mere expense of being 1.5 times the size of Cypress, and let's not talk about pricing.
It was a total market failure yet no one ever doubted nVidia.
and the 2nd cryptocurrency bubble was the only thing that stopped it from being a catastrophic financial failure for AMD
It stopped the entire GPU market from being a catastrophic failure, but whatever.
Should I tell you how badly Tahiti did?
That one was not very nice either.
The CPU side of AMD has won back confidence with Ryzen
Maybe in DIY, but no one cares about DIY.
They'll have to prove that they can compete.
They don't need to prove anything to you.



Insulting members and calling them names is not allowed
in the tech forums.


esquared
Anandtech Forum Director
 
Last edited by a moderator:
  • Like
Reactions: happy medium

lifeblood

Senior member
Oct 17, 2001
999
88
91
They don't need to prove anything to you.

Actually they do. They need to prove to him, me, and every other potential customer that their product is worth buying. I'm just optimistic that they will be able to actually do it.
 

lifeblood

Senior member
Oct 17, 2001
999
88
91
The only way that could possibly ever work is if they managed to pull off transparent multi-GPU with 2+ cards presenting to the system as 1 card. Anything requiring any support from developers is an absolute nonstarter, since no developer will do extra work for AMD support/optimization unless they're paid to.
Developers will do extra work to support AMD because it's AMD GPUs in the consoles. It's the one advantage AMD has.
 

DeathReborn

Platinum Member
Oct 11, 2005
2,746
741
136
I am not sure when the next "big thing" for the high end is going to arrive, since we just got RT (sorry, but straight performance is not a "next big thing"). Before that we had 3D (a bust) and VR (pretty much a bust too), and RT isn't exactly setting the world on fire.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Developers will do extra work to support AMD because it's AMD GPUs in the consoles. It's the one advantage AMD has.
They will do extra work to support the consoles, not AMD, and the consoles aren't going to have this transparent multi-GPU.
 

swilli89

Golden Member
Mar 23, 2010
1,558
1,181
136
Here's my personal prediction, for the little it's worth:

AMD:
Little Navi releases September/October. From the tone of AMD's comments, I think a modern-day Radeon 4850/4870 combo is what they are aiming for.
$249 = Geforce 2060+ performance
$349 = Geforce 2070+ performance

I think this is where AMD shoots for, undercut Nvidia and match or exceed performance in those brackets. Which will be all well and good until...

NVIDIA:
7nm Turing successor releases October or later
Usual performance bumps for about the same price, as we've seen from NVIDIA before. I'm thinking the 3080 Ti will beat the 2080 Ti by 30% or more for $1200. The 3060 will be $399 and might approach 2080 performance. It remains to be seen if we get both a GTX and an RTX line, but if NV wants to really go for AMD's throat they will release a small-die midrange part with no tensor cores that they can profitably sell for less than $300 and that exceeds basically anything AMD can field until big Navi, which is probably Q1 or Q2 2020. Unfortunately, I don't think NV really improves perf/$ on 7nm; I think it mostly increases top-end performance at the highest price brackets.
 

maddie

Diamond Member
Jul 18, 2010
4,747
4,690
136
Here's my personal prediction, for the little it's worth:

AMD:
Little Navi releases September/October. From the tone of AMD's comments, I think a modern-day Radeon 4850/4870 combo is what they are aiming for.
$249 = Geforce 2060+ performance
$349 = Geforce 2070+ performance

I think this is where AMD shoots for, undercut Nvidia and match or exceed performance in those brackets. Which will be all well and good until...

NVIDIA:
7nm Turing successor releases October or later
Usual performance bumps for about the same price, as we've seen from NVIDIA before. I'm thinking the 3080 Ti will beat the 2080 Ti by 30% or more for $1200. The 3060 will be $399 and might approach 2080 performance. It remains to be seen if we get both a GTX and an RTX line, but if NV wants to really go for AMD's throat they will release a small-die midrange part with no tensor cores that they can profitably sell for less than $300 and that exceeds basically anything AMD can field until big Navi, which is probably Q1 or Q2 2020. Unfortunately, I don't think NV really improves perf/$ on 7nm; I think it mostly increases top-end performance at the highest price brackets.
Do you really expect the performance delta between the 3060 & 3080Ti models to be around 40-50% (30 + a bit)?
 
Last edited: