Is Multi-GPU (for gaming) really dying? Or are people just saying that?

Page 3 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
I'm fairly certain 3D Vision does not render 2 frames for the same time step, nor should it for that matter, as that would lead to wonky-looking animation. 2-way SLI with 3D Vision does not specifically split rendering between eyes either, but since the order in which the two GPUs deliver frames is fixed when using AFR, they end up doing so as a side effect, not by design.
Fun fact: with AMD's version, HD3D, both left and right frames are sent at exactly the same time using frame packing. The monitor then splits them up and displays them simultaneously on passive displays, or alternately on active ones. Nvidia does the same when using HDMI or passive displays.

Another telling detail is that SLI specifically ignores a 3rd GPU when used with 3D Vision. If it were strictly an AFR approach, there would be no reason to ignore the 3rd GPU.

With AFR you get poor scaling (since the load on the CPU is basically doubled), micro-stutter (since AFR has no inherent way to smooth out the delivery of frames), and so-so compatibility (though much better than SFR at least).
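The micro-stutter point can be illustrated with a toy simulation (all timings made up, and this ignores any frame-pacing logic a real driver might add): two GPUs render alternate frames independently, so without pacing, the gaps between displayed frames come out very uneven.

```python
import random

random.seed(0)

def afr_frame_gaps(n_frames, render_ms=20.0, jitter_ms=6.0):
    """Toy AFR model: even frames go to GPU 0, odd frames to GPU 1.

    Each GPU starts its next frame as soon as its previous one is done
    and takes render_ms plus some jitter. Nothing smooths out when the
    frames actually get displayed.
    """
    finish = []                # completion time of each frame
    gpu_free = [0.0, 0.0]      # when each GPU becomes available again
    for i in range(n_frames):
        gpu = i % 2            # AFR: frames alternate between the GPUs
        done = gpu_free[gpu] + render_ms + random.uniform(-jitter_ms, jitter_ms)
        gpu_free[gpu] = done
        finish.append(done)
    # frame-to-frame deltas; uneven deltas == micro-stutter
    return [b - a for a, b in zip(finish, finish[1:])]

gaps = afr_frame_gaps(10)
print(["%5.1f" % g for g in gaps])  # roughly alternating small and large gaps
```

The frame-to-frame gaps alternate between near zero and near the full render time, which is exactly the uneven pacing people perceive as micro-stutter.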

But rendering a frame for each eye also duplicates much of the work. With different perspectives, the frames have to be recalculated for each position. That isn't to say none of the info can be shared; some can and some can't, but the same is true of 3D Vision. It comes down to how well the developers utilize their resources and how much special attention is given.

There are games where the CPU load in 3D Vision is not doubled, while in others it is. It varies from game to game.

With VR SLI/CrossFire you get close to perfect scaling, since the rendering of the two frames (left eye and right eye) is far more exposed and controlled (the VR SDKs are purpose-built around rendering two frames at a time), and as such you can do a lot more optimization on the CPU side, which is exactly what has been done (the same has been done with a few 3D Vision games, such as Crysis 2, but they are few and far between). You also get zero micro-stutter, since VR effectively has built-in smoothing, thanks to the fact that both frames are shown at the same time instead of one after the other as happens with AFR. Finally, compatibility should be very good, since all VR games have to make use of the SDKs (the only possible exception being Nvidia's auto stereo feature).
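As a rough sketch of why eye-per-GPU rendering scales so well (all numbers here are invented for illustration): the CPU submits the scene once, each GPU renders one eye in parallel, and the only serial extra is shipping the second GPU's image over to the first for display.

```python
def frame_time_single_gpu(cpu_ms, eye_ms):
    # One GPU renders the left and right eye back to back.
    return cpu_ms + 2 * eye_ms

def frame_time_eye_per_gpu(cpu_ms, eye_ms, transfer_ms):
    # Scene submitted once; the two GPUs each render one eye in
    # parallel, then GPU 2's image is copied to GPU 1 for display.
    return cpu_ms + eye_ms + transfer_ms

single = frame_time_single_gpu(cpu_ms=2.0, eye_ms=4.5)                  # 11.0 ms
dual = frame_time_eye_per_gpu(cpu_ms=2.0, eye_ms=4.5, transfer_ms=0.5)  # 7.0 ms
print("speedup: %.2fx" % (single / dual))  # 1.57x with these toy numbers
```

The gap between this and a perfect 2x comes from the unparallelizable CPU submission and the frame transfer, which is the point being made above.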

So all in all multi-GPU on VR is a completely different beast from multi-GPU with 3D vision (or multi-GPU on a normal monitor for that matter).

I do not doubt that there are improvements. Even with 3D Vision and HD3D some of that happens to some extent. Just know that it will not be perfect scaling on the CPU side. Some things have to be calculated twice.

Games specifically designed for VR will likely have great support. It's the games that tack it on that will see less scaling than you seem to expect.
 

dogen1

Senior member
Oct 14, 2014
739
40
91
I've already watched that presentation (and it's hardly recent, seeing as it's over a year old), and I have no idea where you get the 35% from, since the presentation specifically says that performance is nearly doubled.

Slide 13:


Alternatively you might want to look at this more in-depth presentation, where one of the things pointed out is that you in fact do not have to render shadows twice, so again I have no idea where you're getting your info from.


I was actually referring to the same guy's more recent presentation from this year, called "Advanced VR Rendering Performance".

http://gdcvault.com/play/1023522/Advanced-VR-Rendering

He mentions that you can probably work around rendering shadows twice, but the benefits are probably not worth the extra complexity.
 
Last edited:

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Fun fact. If using AMD's version, HD3D, both left and right frames are sent at the same exact time using frame packing (same goes for Nvidia if you use HDMI). Then the monitor splits it up and displays them at exactly the same time on passive monitors, or alternately displays them on active displays. Nvidia does this too when using HDMI or passive displays.

I didn't know that HD3D did this, always nice to learn.

Another reality is that SLI specifically ignores a 3rd GPU if used in 3D Vision. If it was strictly an AFR approach, there would be no reason to ignore the 3rd GPU.

Yeah, but this doesn't mean it isn't AFR; it's simply a result of the timing issues you'd run into since you have 2 viewports instead of 1, and mapping 3 GPUs to 2 viewports is a bit iffy (mapping them to 1 is pretty straightforward since you just queue them up linearly).
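The mapping problem is easy to see if you just write out the round-robin schedule (a toy illustration, not how the driver actually assigns work):

```python
def afr_eye_schedule(n_gpus, n_frames=6):
    # Frame i goes to GPU i % n_gpus; eyes alternate left/right.
    return [("GPU%d" % (i % n_gpus), "LR"[i % 2]) for i in range(n_frames)]

print(afr_eye_schedule(2))  # each GPU always ends up serving the same eye
print(afr_eye_schedule(3))  # the eye each GPU serves keeps shifting around
```

With 2 GPUs the round-robin happens to lock each GPU to one eye; with 3, each GPU's eye assignment cycles with period 6, so frame timing relative to the display's left/right cadence gets messy.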

As such it's much easier for Nvidia to simply dedicate the 3rd GPU to PhysX and avoid the headache.

But rendering a frame for each eye also duplicates much of the work. With different perspectives, the frames have to be recalculated for each position. That isn't to say none of the info can be shared; some can and some can't, but the same is true of 3D Vision. It comes down to how well the developers utilize their resources and how much special attention is given.

It only doubles the load if you do it naively; seriously, read this presentation for how to get around it.

I do not doubt that there are improvements. Even with 3D Vision and HD3D some of that happens to some extent. Just know that it will not be perfect scaling on the CPU side. Some things have to be calculated twice.

Games specifically designed for VR will likely have great support. It's the games that tack it on that will see less scaling than you seem to expect.

Some things will indeed have to be calculated twice, but they should be few enough (and we should have enough CPU headroom with DX12) that we can still achieve at least 90-95% scaling (another thing that will incur a small hit and thus prevent 100% scaling is the transfer of the finished frame from GPU 2 to GPU 1).

And yes, you are right that games that just have VR tacked on and don't properly implement the SDKs will see subpar multi-GPU scaling, so I should probably clarify that I meant games that were built with VR in mind from the get-go.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
And yes, you are right that games that just have VR tacked on and don't properly implement the SDKs will see subpar multi-GPU scaling, so I should probably clarify that I meant games that were built with VR in mind from the get-go.

This is the rub. How many games do you figure will be built specifically for VR?

It may be something everyone is talking about, but so was 3D. Only a handful of games were built with 3D support. Most everything else was forced through drivers with a lot of bugs. Helix mod had some great tricks to fix many of the problems for DX9 games, but since DX9 has fallen out of favor, most games have terrible 3D support.

Is VR going to be different on this front? I expect it to be a little better, but do not expect it to be well supported in most games.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
This is the rub. How many games do you figure will be built specifically for VR?

I imagine that all of the games built specifically for VR will be built specifically for VR. This is obviously a complete tautology, but basically I imagine that VR will more or less lead to its own fairly distinct genre of games. If you like those games then you should see great use out of multi-GPU setups, but if you don't, then you're stuck with the current quality of multi-GPU support.

As for the exact number, I would say an absolute ton, possibly enough to challenge the number of non-VR games out there for PC. Unfortunately I also think the vast majority will be complete shovelware, at least in the early days, and amongst the ones that aren't shovelware only a few will probably even require the performance of a multi-GPU setup, at least until we get higher resolution VR HMDs.


Weird, so previously they were capable of achieving nearly a doubling of performance with AMD's affinity multi-GPU, but now can only achieve a 35% speedup.

I can only assume that the second example didn't make use of AMD's affinity multi-GPU (and hopefully not Nvidia's implementation either, as that would be pretty disappointing performance).

Here are AMD's and Nvidia's VR presentations from the same conference, if anyone's interested (neither one mentions the exact performance boost, as far as I could tell).
 
Last edited:

therealnickdanger

Senior member
Oct 26, 2005
987
2
0
I argue that VR games will predominantly target single-GPU setups, because that offers the lowest cost to consumers and therefore the highest market penetration (*snicker*). In fact, I'll go as far as to say that based on cost alone the PS4VR will completely decimate the Oculus and Vive in sales and set the trend for VR programming for years to come.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
I imagine that all of the games built specifically for VR will be built specifically for VR. This is obviously a complete tautology, but basically I imagine that VR will more or less lead to its own fairly distinct genre of games. If you like those games then you should see great use out of multi-GPU setups, but if you don't, then you're stuck with the current quality of multi-GPU support.

As for the exact number, I would say an absolute ton, possibly enough to challenge the number of non-VR games out there for PC. Unfortunately I also think the vast majority will be complete shovelware, at least in the early days, and amongst the ones that aren't shovelware only a few will probably even require the performance of a multi-GPU setup, at least until we get higher resolution VR HMDs.

You are far more optimistic than I.

While there may be a few demos and a few games made only for VR, I doubt we'll get many high-quality VR games in that category. Like everything else, games go cross-platform to make the most money. I doubt there will be enough people buying these headsets to get devs to focus so heavily on VR-only games.

I expect VR to be a feature in some AAA games; some will get good attention to performance quality, while others won't. Call me a pessimist, but history seems to agree with me.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
You are far more optimistic than I.

While there may be a few demos and a few games made only for VR, I doubt we'll get many high-quality VR games in that category. Like everything else, games go cross-platform to make the most money. I doubt there will be enough people buying these headsets to get devs to focus so heavily on VR-only games.

I'll expect VR to be a feature in some AAA games, and some will get good attention to quality performance, while others won't. Call me a pessimist, but history seems to agree with me.

Note that I never said anything about many high-quality VR games; I simply said many VR games (i.e. there will likely be a ton of shovelware).
 

Adored

Senior member
Mar 24, 2016
256
1
16
Weird, so previously they were capable of achieving nearly a doubling of performance with AMDs affinity multi-GPU, but now can only achieve a 35% speedup.

I can only assume that the second example didn't make use of AMDs affinity multi-GPU (and hopefully not Nvidias implementation either, as that would be pretty disappointing performance).

Here's AMDs and Nvidias VR presentations from the same conference if anyones interested (neither one mentions the exact performance boost as far as I could tell)

Yes I noticed this yesterday. It would also explain why the dual Nano only equals the 980 Ti in the Steam VR demo.

What happened to the missing 65% from last year is another question. I'm guessing that it's related to a 90fps lock or something.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Yes I noticed this yesterday. It would also explain why the dual Nano only equals the 980 Ti in the Steam VR demo.

What happened to the missing 65% from last year is another question.

In the "35%" presentation he mentions duplicating shadow rendering; however, this presentation from back in January 2015 specifically mentions that you don't necessarily need to do that. As such, I think the implementation in the 35% case might not really be what would be considered best practice.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Note that I never said anything about many high quality VR games, I simply said many VR games (i.e. there will likely be a ton of shovelware).

As a gamer, I'm only interested in real games. People buying multi-GPU setups are not likely to see any improvements if support isn't high quality. I assumed we were talking about real, quality gaming here, since we are talking about multi-GPUs.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Yes I noticed this yesterday. It would also explain why the dual Nano only equals the 980 Ti in the Steam VR demo.

What happened to the missing 65% from last year is another question. I'm guessing that it's related to a 90fps lock or something.

My guess is that they went from theoretical ideas, to demos that showed things were not as optimistic as they previously hoped.
 

Adored

Senior member
Mar 24, 2016
256
1
16
35% is a number Nvidia has been talking about for a while with this, way back when Valve were claiming 100% on AMD. It's very strange, maybe they ran the first demo without shadows lol?
 

Adored

Senior member
Mar 24, 2016
256
1
16
It will go down the route of offloading to separate GPUs, yes: physics, audio, etc. Still 3-4 years away from really, really good VR, I feel.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Split frame rendering can divide work amongst CPU, GPU, and iGPU. With DX12, this is only the beginning.

SFR isn't really relevant here (here being VR), since you're dealing with 2 separate view frustums.

SFR only really comes into play with VR if you have 4 GPUs.
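A sketch of what that 4-GPU layout might look like (purely illustrative, not any vendor's actual scheme): one group of GPUs per eye, with each group splitting its eye's frame SFR-style.

```python
def assign_gpus_to_eyes(n_gpus):
    """Toy GPU-to-viewport mapping: half the GPUs go to each eye.

    With 2 GPUs it's simply one GPU per eye; with 4, each eye's frame
    is split across two GPUs (SFR within the eye's frustum)."""
    per_eye = n_gpus // 2
    return {eye: ["GPU%d" % (e * per_eye + g) for g in range(per_eye)]
            for e, eye in enumerate(("left", "right"))}

print(assign_gpus_to_eyes(2))  # {'left': ['GPU0'], 'right': ['GPU1']}
print(assign_gpus_to_eyes(4))  # {'left': ['GPU0', 'GPU1'], 'right': ['GPU2', 'GPU3']}
```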
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
This is one letter longer than the shortest possible response in the English language, yet it is still the most complete, comprehensive, and truthful response in this entire thread. I applaud. I do applaud.

We can't let the games beat us.
 

dogen1

Senior member
Oct 14, 2014
739
40
91
I can only assume that the second example didn't make use of AMDs affinity multi-GPU (and hopefully not Nvidias implementation either, as that would be pretty disappointing performance).

Nah, I'm pretty sure they do; those APIs are one of the first things he brings up.

Seems like the main two reasons behind it being only 35% faster are the shadows and the extra data transfer.

Even if they were able to split the shadow rendering somehow, you'd still fall short of perfect (or near-perfect) scaling because of the extra time spent on transfer.

And splitting the shadows up seems like it would be pretty difficult. If you could just fully render them on one GPU and somehow reuse them for the other eye, it doesn't seem like you'd benefit much, because you still need to transfer the second frame after the first frame is finished. Seems like you'd have to render half of the shadows on each GPU and transfer the other half over from the other one? Maybe there's some way to do it, and I'm not nearly an expert on this subject, but that seems complex, and maybe not even worth it.
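That intuition can be put into a toy cost model (every number here is invented): sharing shadow maps serializes the second GPU behind a transfer, while duplicating the shadow work keeps both GPUs fully parallel.

```python
def frame_time_ms(shadow_ms, eye_ms, frame_copy_ms,
                  shadow_copy_ms, duplicate_shadows):
    if duplicate_shadows:
        # Each GPU renders its own shadow maps; both run fully in parallel,
        # then GPU 2's finished eye image is copied to GPU 1.
        return shadow_ms + eye_ms + frame_copy_ms
    # GPU 1 renders the shadow maps once and ships them to GPU 2,
    # which has to wait for the copy before rendering its eye.
    return shadow_ms + shadow_copy_ms + eye_ms + frame_copy_ms

dup = frame_time_ms(2.0, 4.0, 0.5, 1.0, duplicate_shadows=True)      # 6.5 ms
shared = frame_time_ms(2.0, 4.0, 0.5, 1.0, duplicate_shadows=False)  # 7.5 ms
print(dup, shared)  # with these made-up numbers, sharing is actually slower
```

Whether sharing wins depends entirely on how the shadow-copy cost compares to the shadow-render cost, which is presumably why it might "not be worth the extra complexity".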
 
Last edited:

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Nah, I'm pretty sure they do, those APIs are one of the first things he brings up.

Seems like the main 2 reasons behind it only being 35% faster are the shadows and extra data transfer.

Even if they were able to split the shadow rendering somehow, you'd still be limited from perfect(or near perfect) scaling because of the extra time used for transfer.

And splitting the shadows up seems like it would be pretty difficult. if you could just fully render them on one gpu and somehow reuse them for the other eye, it doesn't seem like you'd benefit much because you still need to transfer the second frame after the first frame is finished. Seems like you'd have to render half of the shadows on each gpu and idk, transfer the other half of the shadows over from the other?... idk. Maybe there's some way to do it, and I'm not nearly an expert on this subject, but that seems complex, and maybe not even worth it.
The biggest issue with 3D is that shadows are usually rendered the same for both eyes, which looks completely wrong. A game needs to render shadows for each eye's perspective for it to look correct. They may be able to calculate where the shadows go once and somehow apply that info to the different perspectives, but something has to be done to the shadows for them to work correctly.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Nah, I'm pretty sure they do, those APIs are one of the first things he brings up.

Seems like the main 2 reasons behind it only being 35% faster are the shadows and extra data transfer.

Even if they were able to split the shadow rendering somehow, you'd still be limited from perfect(or near perfect) scaling because of the extra time used for transfer.

And splitting the shadows up seems like it would be pretty difficult. if you could just fully render them on one gpu and somehow reuse them for the other eye, it doesn't seem like you'd benefit much because you still need to transfer the second frame after the first frame is finished. Seems like you'd have to render half of the shadows on each gpu and idk, transfer the other half of the shadows over from the other?... idk. Maybe there's some way to do it, and I'm not nearly an expert on this subject, but that seems complex, and maybe not even worth it.

Whilst he does mention them, I don't see any mention of him actually using them in the example, but I only have a transcript of the presentation; you wouldn't happen to have a link to a recording of it?

Thing is, though, that the shadows and the data transfer would also have been present before, when they were seeing practically a doubling of framerate, so that wouldn't really explain it.

And yes splitting up shadows might not be easy, but again they apparently achieved it in the past, so what changed?

The biggest issue with 3D is that shadows are usually rendered the same for both eyes, which looks completely wrong. A game needs to render shadows for each eye's perspective for it to look correct. They may be able to calculate where the shadows go once and somehow apply that info to the different perspectives, but something has to be done to the shadows for them to work correctly.

Actually, if you look at the presentation dogen1 linked, the shadow maps are actually reused for both eyes when you render on a single GPU, so at least in Valve's eyes (no pun intended) it wouldn't appear to be an issue.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Actually if you look at the presentation Dogen1 linked, the shadow maps are actually reused for both eyes when you render on a single GPU, so at least in Valves eyes (no pun intended), it wouldn't appear to be an issue.
I didn't disagree that some data can be reused, but that doesn't mean they can use it as-is for both eyes. A big issue in the past has been that developers use tricks with shadows that don't lend themselves to 3D effects.
 

dogen1

Senior member
Oct 14, 2014
739
40
91
Whilst he does mention them, I don't see any mention of him actually using them in the example, but I only have a transcript of the presentation; you wouldn't happen to have a link to a recording of it?

Thing is though that the shadows and data transfer would also have been present before, when they were seeing practically a doubling of framerate, so that wouldn't really explain it.

And yes splitting up shadows might not be easy, but again they apparently achieved it in the past, so what changed?


I don't know..

Here's the presentation, there's a section called Multi-GPU Affinity APIs

http://gdcvault.com/play/1023522/Advanced-VR-Rendering
 
Last edited:

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
I don't know..

Here's the presentation, there's a section called Multi-GPU Affinity APIs

http://gdcvault.com/play/1023522/Advanced-VR-Rendering

I just noticed something that really changes what we are talking about here. In his presentation, he says you save 30-35% of frame time with 2 GPUs. That's roughly a 50% improvement in rendering speed, not 30-35%. That makes more sense to me and is more in line with what I'd expect.
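The conversion is easy to check: saving a fraction f of the frame time multiplies throughput by 1/(1-f).

```python
def speedup_from_time_saved(f):
    # Saving a fraction f of frame time => frames per second scale by 1/(1-f).
    return 1.0 / (1.0 - f)

for f in (0.30, 0.35):
    print("%.0f%% time saved -> %.0f%% faster"
          % (f * 100, (speedup_from_time_saved(f) - 1) * 100))
# 30% time saved -> 43% faster
# 35% time saved -> 54% faster
```

So a 30-35% reduction in frame time works out to roughly a 43-54% throughput gain, consistent with the "about 50%" reading above.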

I also noticed him talking about the biggest issue in rendering these days, in the "decoupling the CPU and GPU" section near the end. Basically, he says the biggest issue today is CPU performance, and that he plans to work on better ways to reuse data in the render thread.