DX12 Multi-GPU is a challenge

alcoholbob

Diamond Member
May 24, 2005
6,348
422
126
http://wccftech.com/wardell-multigpu-dx12-challenging-implement-presently-stardock-close/

It seems to me that if Stardock pulls this off, it will be impressive. SFR is supposed to be much more difficult than AFR, and the only game in recent memory to use SFR was Civilization: Beyond Earth.

I wonder what DX12 games will be like. Will only the AAA titles have multi-GPU programmed in, while the rest of the DX12 games still rely on game-day AFR SLI/CF patches from Nvidia/AMD to get playable 4K performance?
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
http://wccftech.com/wardell-multigpu-dx12-challenging-implement-presently-stardock-close/

It seems to me that if Stardock pulls this off, it will be impressive. SFR is supposed to be much more difficult than AFR, and the only game in recent memory to use SFR was Civilization: Beyond Earth.

I wonder what DX12 games will be like. Will only the AAA titles have multi-GPU programmed in, while the rest of the DX12 games still rely on game-day AFR SLI/CF patches from Nvidia/AMD to get playable 4K performance?

You might want to include the dev's quotes.
We're getting close to having multi-GPU @DirectX12 done. It's actually quite challenging to implement presently.

— Brad Wardell (@draginol) September 26, 2015

@ShamisOMally @DirectX12 Mainly it's the API is brand brand new and no one has done this stuff yet.

— Brad Wardell (@draginol) September 26, 2015

@postcards_ @DirectX12 It will be until it's more documented and updated to be more streamlined.

— Brad Wardell (@draginol) September 26, 2015

@I_Am_A_Number @DirectX12 They'll come. We'll be documenting how to do it.

— Brad Wardell (@draginol) September 26, 2015

From what I understand, it's just that nobody has done it before, not that it's going to be hard to do once it's documented.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Not only is SFR more difficult, it has always shown much less of a performance gain, at least in the past. SFR was used in the Mantle version of Civilization: Beyond Earth, and it scaled very poorly compared to SLI in DX11, which makes you think DX12 will be no different. There may be some advantages, but multi-GPU is a hard sell if SFR gives poor scaling. I expect a lot more AFR in the future, just like today.

I also don't know if AFR is something that can be done through drivers like in the past. The devs may have to support it themselves.
 
Feb 19, 2009
10,457
10
76
I played a lot of Civ: BE, and with SFR it's always fluid and smooth, with no sharp rises or dips.

SFR doesn't offer raw performance scaling; what it offers is very good frame latencies, with a minimum fps close to the average fps, which in turn is close to the max fps.

At least that's the current implementation of SFR. Not sure how it will be in DX12.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
I played a lot of Civ: BE, and with SFR it's always fluid and smooth, with no sharp rises or dips.

SFR doesn't offer raw performance scaling; what it offers is very good frame latencies, with a minimum fps close to the average fps, which in turn is close to the max fps.

At least that's the current implementation of SFR. Not sure how it will be in DX12.

While true, if you are only gaining 20-50% more FPS than a single card, it becomes harder to justify the added cost. This is most likely the reason they push for AFR most often.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
While true, if you are only gaining 20-50% more FPS than a single card, it becomes harder to justify the added cost. This is most likely the reason they push for AFR most often.

I think you're missing the point. You're thinking in raw average FPS gains.

The biggest reason I worry about a dual-GPU setup is the minimums.

If you are just looking at a raw FPS counter, great for you. But it's the experience that matters, and raising minimums is a huge deal. I'll pay for that, and I'm sure MANY enthusiasts will. I doubt many will pay for simply rendering more frames. If the experience is better with SFR than AFR, no one cares what the ticker says.
 

Pottuvoi

Senior member
Apr 16, 2012
416
2
81
It certainly isn't flipping a switch; you need to handle data copying etc. between the cards in your own code.

What will be interesting is how they share the work and memory between the GPUs. Do they shade everything on one card and handle the final rendering on the other, etc.?
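
For a rough idea of what that copying involves under DX12's explicit multi-adapter model, here's a minimal sketch (illustrative names only, error handling omitted): a resource both GPUs can touch has to live in a cross-adapter shared heap, created on one device and opened on the other.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create a texture on deviceA that deviceB can also see (sketch).
ComPtr<ID3D12Resource> CreateCrossAdapterTexture(
    ID3D12Device* deviceA, ID3D12Device* deviceB,
    UINT width, UINT height, ComPtr<ID3D12Resource>& texOnB)
{
    // Cross-adapter textures must be row-major and flagged ALLOW_CROSS_ADAPTER
    // (check CrossAdapterRowMajorTextureSupported; otherwise fall back to a buffer).
    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
    desc.Width = width;
    desc.Height = height;
    desc.DepthOrArraySize = 1;
    desc.MipLevels = 1;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;
    desc.Flags = D3D12_RESOURCE_FLAG_ALLOW_CROSS_ADAPTER;

    const D3D12_RESOURCE_ALLOCATION_INFO info =
        deviceA->GetResourceAllocationInfo(0, 1, &desc);

    // The heap itself is what gets shared across the adapters.
    D3D12_HEAP_DESC heapDesc = {};
    heapDesc.SizeInBytes = info.SizeInBytes;
    heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
    heapDesc.Flags = D3D12_HEAP_FLAG_SHARED | D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER;

    ComPtr<ID3D12Heap> heapA;
    deviceA->CreateHeap(&heapDesc, IID_PPV_ARGS(&heapA));

    ComPtr<ID3D12Resource> texOnA;
    deviceA->CreatePlacedResource(heapA.Get(), 0, &desc,
        D3D12_RESOURCE_STATE_COMMON, nullptr, IID_PPV_ARGS(&texOnA));

    // Export the heap from device A, open it on device B, and place an
    // identical resource in it so both GPUs address the same memory.
    // NB: in real code, keep the heaps alive for the resources' lifetime.
    HANDLE handle = nullptr;
    deviceA->CreateSharedHandle(heapA.Get(), nullptr, GENERIC_ALL, nullptr, &handle);
    ComPtr<ID3D12Heap> heapB;
    deviceB->OpenSharedHandle(handle, IID_PPV_ARGS(&heapB));
    CloseHandle(handle);

    deviceB->CreatePlacedResource(heapB.Get(), 0, &desc,
        D3D12_RESOURCE_STATE_COMMON, nullptr, IID_PPV_ARGS(&texOnB));
    return texOnA;
}
```

All the copies into and out of that shared memory, and the fences ordering them, are then yours to schedule.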
 

Noctifer616

Senior member
Nov 5, 2013
380
0
76
I wonder what DX12 games will be like. Will only the AAA titles have multi-GPU programmed in, while the rest of the DX12 games still rely on game-day AFR SLI/CF patches from Nvidia/AMD to get playable 4K performance?

If it's in the engine, then any game using that engine will probably be able to use it. They already said as much regarding multi-adapter when pairing a dGPU with an iGPU.
 
Feb 19, 2009
10,457
10
76
In theory, different GPU architectures can be used for SFR, but imagine the hassle involved; it would be a nightmare for all but a very few skilled developers. So I don't expect it to be common.

What would be really sweet is if the iGPU on Intel or AMD CPUs could be handed a specific rendering or compute task, freeing up the dGPU and improving performance.
 

biostud

Lifer
Feb 27, 2003
19,229
6,214
136
It's the same. It's just moved from the driver team to the game developers. Same with updating for every single new uarch.

So will they still need to make specific optimizations for Nvidia/AMD/Intel (and different generations), or will DX12 make it possible to write one code path that will run on both SLI/CF and iGPU+dGPU?

Obviously there will still be code that favors AMD or nVidia hardware, but 0% scaling should be a thing of the past if the game developer has at least implemented multi-GPU rendering code, right?
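
To illustrate what I mean by one code path, here's a rough, hypothetical sketch: DXGI enumerates every adapter the same way, so a single loop picks up an Nvidia dGPU, an AMD dGPU, or an Intel iGPU alike. The per-uarch tuning would be what stays vendor-specific.

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <vector>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// One vendor-agnostic loop: create a DX12 device on every hardware adapter.
std::vector<ComPtr<ID3D12Device>> CreateDeviceForEachAdapter()
{
    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP / software rasterizers

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device); // dGPUs and iGPUs both land here
    }
    return devices;
}
```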
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
So will they still need to make specific optimizations for Nvidia/AMD/Intel (and different generations), or will DX12 make it possible to write one code path that will run on both SLI/CF and iGPU+dGPU?

Obviously there will still be code that favors AMD or nVidia hardware, but 0% scaling should be a thing of the past if the game developer has at least implemented multi-GPU rendering code, right?

They will still need to write specific code.

If anything, DX12 fractures it even more compared to DX11. That's the ups and downs of DX12: as long as there isn't a static hardware base, close-to-metal APIs are problematic.

Hypothetically, we could end up with a DX12-only game that can't run at all on some new GPU/IGP uarch because the developer won't spend more money on updating it.

We already saw how easy it was for the developers of BF4/Thief not to update for GCN 1.2 when money was the issue.
 

Snafuh

Member
Mar 16, 2015
115
0
16
It's the same. It's just moved from the driver team to the game developers. Same with updating for every single new uarch.

Multi-GPU was always a task for the game developers. Depending on the engine and rendering system, it can be really hard to achieve good scaling. It's a lot of development time for a feature used by a small minority.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Multi-GPU was always a task for the game developers. Depending on the engine and rendering system, it can be really hard to achieve good scaling. It's a lot of development time for a feature used by a small minority.

But driver teams still make optimizations for SLI/CF implementations.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
I think you're missing the point. You're thinking in raw average FPS gains.

The biggest reason I worry about a dual-GPU setup is the minimums.

If you are just looking at a raw FPS counter, great for you. But it's the experience that matters, and raising minimums is a huge deal. I'll pay for that, and I'm sure MANY enthusiasts will. I doubt many will pay for simply rendering more frames. If the experience is better with SFR than AFR, no one cares what the ticker says.

Experience is a hard sell. You can't easily show a benchmark for experience. You may be happy with those numbers, but I highly doubt many others will be. Without benchmarks, you are relying on the reviewer's view, which you know everyone here will consider biased, because it will be viewed as an opinion.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
In theory, different GPU architectures can be used for SFR, but imagine the hassle involved; it would be a nightmare for all but a very few skilled developers. So I don't expect it to be common.

What would be really sweet is if the iGPU on Intel or AMD CPUs could be handed a specific rendering or compute task, freeing up the dGPU and improving performance.

This was already sort of demonstrated in a small way:

McMullen showcased the benefits for a hybrid configuration using the Unreal Engine 4 Elemental demo. Splitting the workload between unnamed Nvidia discrete and Intel integrated GPUs raised the frame rate from 35.9 FPS to 39.7 FPS versus only targeting the Nvidia chip. In that example, the integrated GPU was relegated to handling some of the post-processing effects.

http://techreport.com/news/28196/di...-shares-work-between-discrete-integrated-gpus
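
The synchronization side of that handoff is at least compact to express in DX12. A rough sketch (illustrative names, error handling omitted): the dGPU signals a cross-adapter shared fence after rendering and copying the frame, and the iGPU's queue waits on it before running the post-processing, all without stalling the CPU.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create one fence visible to both adapters (sketch).
void CreateSharedFence(ID3D12Device* discrete, ID3D12Device* integrated,
                       ComPtr<ID3D12Fence>& fenceOnDGpu,
                       ComPtr<ID3D12Fence>& fenceOnIGpu)
{
    discrete->CreateFence(0,
        D3D12_FENCE_FLAG_SHARED | D3D12_FENCE_FLAG_SHARED_CROSS_ADAPTER,
        IID_PPV_ARGS(&fenceOnDGpu));

    HANDLE h = nullptr;
    discrete->CreateSharedHandle(fenceOnDGpu.Get(), nullptr, GENERIC_ALL, nullptr, &h);
    integrated->OpenSharedHandle(h, IID_PPV_ARGS(&fenceOnIGpu));
    CloseHandle(h);
}

// Per frame: order the two GPUs against each other, not against the CPU.
void SubmitFrame(ID3D12CommandQueue* dGpuQueue, ID3D12CommandQueue* iGpuQueue,
                 ID3D12Fence* fenceOnDGpu, ID3D12Fence* fenceOnIGpu,
                 ID3D12CommandList* renderList, ID3D12CommandList* postList,
                 UINT64 frame)
{
    dGpuQueue->ExecuteCommandLists(1, &renderList); // render + cross-adapter copy
    dGpuQueue->Signal(fenceOnDGpu, frame);          // "frame N is ready"

    iGpuQueue->Wait(fenceOnIGpu, frame);            // GPU-side wait only
    iGpuQueue->ExecuteCommandLists(1, &postList);   // post-processing on the iGPU
}
```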
 
Feb 19, 2009
10,457
10
76
Yeah, I hope more game engines will bake that capability in natively, so studios ship titles with iGPU + dGPU SFR support out of the box.

It would be great to have the iGPU put to use while we're gaming on the dGPU, getting more bang for our buck.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
I'm so glad I'm not in the mGPU camp anymore. Some devs can't even get v-sync right (really, v-sync on caps me to 30 FPS?).

Leaving it in their hands to create working profiles across product lines... naaaaah. Single card for me.

Good luck to you mGPU users.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,999
126
Just doing alternate frame rendering should be relatively straightforward, right?
Not necessarily. In many cases it's harder than SFR (e.g. when the result of the next frame depends on data from previous frame(s)).

Multi-GPU was always a task for the game developers.
Not true, especially in the 3dfx days, but also today. Many games have it implemented entirely on the driver side without the developers' knowledge.

Quake 3 is a good example: it has had both AFR and SFR support depending on which vendor's hardware ran it, yet it predates all of those schemes.
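
To put the frame-dependency point in concrete terms: on a DX12 linked adapter (SLI/CF exposed as one device with multiple nodes), AFR means every allocator, PSO, and resource carries a node mask, and anything frame N+1 reads from frame N sits on the other GPU and must be copied across first. A rough, hypothetical sketch:

```cpp
#include <d3d12.h>

// AFR on a linked adapter: round-robin the nodes, and pay for any
// inter-frame dependency with a cross-node copy (sketch only).
void RecordAfrFrame(ID3D12Device* device, UINT64 frameIndex,
                    ID3D12GraphicsCommandList* cmdList,
                    ID3D12Resource* historyOnThisNode,
                    ID3D12Resource* historyOnOtherNode)
{
    const UINT nodeCount = device->GetNodeCount(); // one node per physical GPU
    const UINT node = static_cast<UINT>(frameIndex % nodeCount);

    // Allocators, command lists, PSOs and heaps for this frame are all
    // created with NodeMask = (1u << node) so the work runs on that GPU.
    (void)node;

    // The dependency problem: a temporal/history buffer written last frame
    // lives on the other node, so it has to cross the link before use.
    cmdList->CopyResource(historyOnThisNode, historyOnOtherNode);

    // ...then record the actual rendering for this frame...
}
```

That copy is pure overhead a single GPU never pays, which is why "just do AFR" isn't free either.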
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,999
126
In theory, different GPU architectures can be used in SFR but imagine the hassle involved with that, would be a nightmare for all but very few skilled developers. So I don't expect it to be common.
More to the point: vendor lock-out. nVidia already locks their own paying customers out of PhysX if they detect a USB monitor.

Do you really think they'll freely allow their dGPUs to assist competitors' hardware with rendering?

Expect patched drivers that only allow combinations of their own cards, "ensuring the best possible experience for nVidia customers". Just like when they disabled SLI that was previously working on ULi boards.

I predict mixed multi-GPU will go nowhere, just like Lucid Hydra.
 

thesmokingman

Platinum Member
May 6, 2010
2,302
231
106
Not only is SFR more difficult, it has always shown much less of a performance gain, at least in the past. SFR was used in the Mantle version of Civilization: Beyond Earth, and it scaled very poorly compared to SLI in DX11, which makes you think DX12 will be no different. There may be some advantages, but multi-GPU is a hard sell if SFR gives poor scaling. I expect a lot more AFR in the future, just like today.

I also don't know if AFR is something that can be done through drivers like in the past. The devs may have to support it themselves.


Wow, getting 120fps is really important in a Civ game, while laying waste to your frametimes. :rolleyes:
 

biostud

Lifer
Feb 27, 2003
19,229
6,214
136
They will still need to write specific code.

If anything, DX12 fractures it even more compared to DX11. That's the ups and downs of DX12: as long as there isn't a static hardware base, close-to-metal APIs are problematic.

Hypothetically, we could end up with a DX12-only game that can't run at all on some new GPU/IGP uarch because the developer won't spend more money on updating it.

We already saw how easy it was for the developers of BF4/Thief not to update for GCN 1.2 when money was the issue.



One thing is optimization for different architectures, but will it be easier to write code that splits the workload between GPUs independently of the hardware? (Maybe not perfect scaling, but code that crudely splits the workload and gives a minimum of, say, 50% scaling, and 80%+ when optimized.)
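
As a rough, hypothetical sketch of the crude split: even just issuing the same draws on each GPU with a different scissor rect divides the pixel work, with GPU 0 rasterizing the top strip and GPU 1 the bottom. How evenly those strips are loaded is exactly where the per-hardware tuning would come in.

```cpp
#include <d3d12.h>

// Crude SFR: full viewport everywhere, scissor limits each GPU's pixels.
void SetSfrScissor(ID3D12GraphicsCommandList* cmdList,
                   UINT gpuIndex, UINT gpuCount, UINT width, UINT height)
{
    D3D12_VIEWPORT vp = { 0.0f, 0.0f,
                          static_cast<float>(width), static_cast<float>(height),
                          0.0f, 1.0f };
    cmdList->RSSetViewports(1, &vp);

    // One horizontal strip per GPU; 2 GPUs -> top half and bottom half.
    const UINT strip = height / gpuCount;
    D3D12_RECT scissor = {};
    scissor.left   = 0;
    scissor.right  = static_cast<LONG>(width);
    scissor.top    = static_cast<LONG>(gpuIndex * strip);
    scissor.bottom = static_cast<LONG>((gpuIndex + 1) * strip);
    cmdList->RSSetScissorRects(1, &scissor);
}
```

Geometry work is still duplicated on both GPUs, which is one reason SFR scaling lags behind AFR.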