DirectX 12 Will Allow Multi-GPU Between GeForce And Radeon

Feb 19, 2009
10,457
10
76
I was just about to say that.
I've previously seen graphics programmers post about how good it would be if Intel's or AMD's IGPs could be used for dedicated GPGPU when paired with a discrete GPU.

It will benefit Intel and AMD APUs to put their iGPUs to work. AMD's iGPUs, as we know, are no slouch when paired with a dGPU in the existing hybrid CrossFire (Dual Graphics) setups.

But I fully agree that beyond iGPU + dGPU, we won't see an implementation between AMD & NV; that's just too much work.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
There is also the potential for GPU Compute use split between GPUs.
I.e. the Intel GPU does the GPGPU work and the discrete card does the rendering.

This is probably the most likely outcome of all the "multi vendor GPU" theories. Given that there is a large installed base of Intel iGPUs, it would make sense to utilize some of that for little GPGPU tricks. I could see developers running one or two computationally expensive physics routines on the iGPU as a checkbox feature that gives you a little more eye candy for no slowdown.
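For what it's worth, DX12's explicit multi-adapter model does expose every adapter to the application, so an engine really could create one device on the iGPU and another on the discrete card. Here's a minimal sketch of just the enumeration step, using standard DXGI/D3D12 calls; which work ends up on which device is entirely up to the developer, and the comments are only an assumption about how it might be split:

[CODE]
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    // Create a D3D12 device on every hardware adapter (e.g. Intel iGPU + discrete GPU).
    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue;  // skip the WARP software adapter

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            wprintf(L"Adapter %u: %s\n", i, desc.Description);
            devices.push_back(device);
        }
    }

    // From here an engine could create a compute queue on the weaker device for
    // physics/GPGPU work and a direct (graphics) queue on the stronger one for rendering.
    return 0;
}
[/CODE]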
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,329
126
It would be really cool if a few developers did this and we found that some games benefited from having a card from both AMD and Nvidia, giving each the work best suited to its architecture and delivering better performance than two of the same card.
 

Smoblikat

Diamond Member
Nov 19, 2011
5,184
107
106
Wasn't there a chip that already did this? Lucid Hydra, maybe?

Also, you can still do this to an extent by using an AMD GPU as your main card and having a dedicated Nvidia PhysX card. PhysX never really took off, though, so it will be interesting to see what happens with this.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,801
1,528
136
This probably makes the most sense for certain compute workloads, where an APU benefits disproportionately from reduced latency and/or sharing the same memory pool between GPU and CPU.
 

iiiankiii

Senior member
Apr 4, 2008
759
47
91
The GTX 970/980 is supposed to have the full DX12 hardware feature set while the R9 290/290X doesn't. Can somebody tell me what those hardware feature sets are?
 

Shivansps

Diamond Member
Sep 11, 2013
3,918
1,570
136
The only way something like this may work is if the developer is out of the loop.

What the defunct Lucid Hydra chip did was take the DX output and split it across the GPUs, so one frame was rendered by multiple GPUs. That is bad, and it also added another software layer to something that's already bottlenecked.

Now, if MS can make DX12 able to, let's say, automatically render multiple frames on multiple GPUs at the same time, with GPU A doing one frame and GPU B working on the next, then you present them "ABABABAB", or, if one GPU is slower than the other, "AAABAAABAAAB".
That's the only way something like this may work; if the developer has to do anything, just forget it.
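To make the "ABABABAB" vs "AAABAAAB" idea concrete, here's a toy sketch of the frame-assignment logic such a runtime could use. Nothing here touches a real graphics API; the weights are hypothetical relative GPU speeds and scheduleFrames is purely illustrative:

[CODE]
#include <cstdio>
#include <vector>

// Assign each frame to a GPU in proportion to the given weights.
// {1, 1} -> ABABABAB, {3, 1} -> AAABAAAB, and so on.
std::vector<int> scheduleFrames(const std::vector<int>& weights, int frameCount) {
    std::vector<int> assignment;
    assignment.reserve(frameCount);
    int gpu = 0;
    int remaining = weights[0];
    for (int f = 0; f < frameCount; ++f) {
        assignment.push_back(gpu);
        if (--remaining == 0) {                      // this GPU has used its share,
            gpu = (gpu + 1) % (int)weights.size();   // move on to the next one
            remaining = weights[gpu];
        }
    }
    return assignment;
}

int main() {
    for (int g : scheduleFrames({1, 1}, 8)) printf("%c", 'A' + g);  // ABABABAB
    printf("\n");
    for (int g : scheduleFrames({3, 1}, 8)) printf("%c", 'A' + g);  // AAABAAAB
    printf("\n");
    return 0;
}
[/CODE]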
 

digitaldurandal

Golden Member
Dec 3, 2009
1,828
0
76
Wasn't there a chip that already did this? Lucid Hydra, maybe?

Also, you can still do this to an extent by using an AMD GPU as your main card and having a dedicated Nvidia PhysX card. PhysX never really took off, though, so it will be interesting to see what happens with this.

It technically did, but performance was often worse with Hydra than with a single card, and there were many, many driver issues.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
The GTX 970/980 is supposed to have the full DX12 hardware feature set while the R9 290/290X doesn't. Can somebody tell me what those hardware feature sets are?

This is FUD, so stop repeating this until you can back it up with firm evidence.

No one has confirmed or denied this yet. The specs for DX 12_0 compliance are still under NDA.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Feature levels for DX11.3 may be out, but what is necessary for a GPU to be DX12_0 compliant is not out yet. There may be more requirements beyond mere feature level support due to the nature of DX12. There may not be. The only people who know for sure can't publicly speak about it yet. Until then it's rumor.
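As a rough illustration of what "feature level" versus "tier" means once the hardware can actually be queried, here is a minimal sketch using ID3D12Device::CheckFeatureSupport as Microsoft has documented it. It assumes the default adapter is at least DX11-class; the exact requirements for 12_0 compliance are, as said, still unknown:

[CODE]
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default adapter.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
        return 1;

    // Highest feature level the adapter supports.
    D3D_FEATURE_LEVEL requested[] = { D3D_FEATURE_LEVEL_12_1, D3D_FEATURE_LEVEL_12_0,
                                      D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0 };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels = _countof(requested);
    levels.pFeatureLevelsRequested = requested;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &levels, sizeof(levels));

    // Optional-feature tiers, which can differ between GPUs at the same feature level.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    printf("Max feature level:        0x%x\n", levels.MaxSupportedFeatureLevel);
    printf("Resource binding tier:    %d\n", opts.ResourceBindingTier);
    printf("Tiled resources tier:     %d\n", opts.TiledResourcesTier);
    printf("Conservative raster tier: %d\n", opts.ConservativeRasterizationTier);
    return 0;
}
[/CODE]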
 

Elfear

Diamond Member
May 30, 2004
7,163
819
126
The GTX 970/980 is supposed to have the full DX12 hardware feature set while the R9 290/290X doesn't. Can somebody tell me what those hardware feature sets are?

This thread is very informative about the DX12 hardware question. From a non-programmer POV it sounds like GCN-based cards actually have more DX12 features in some areas (i.e. different tier support).

This is FUD, so stop repeating this until you can back it up with firm evidence.

No one has confirmed or denied this yet. The specs for DX 12_0 compliance are still under NDA.


Calm down. It sounded like he was just asking an honest question, not making a statement.
 

DalonFalco

Member
Feb 15, 2010
28
0
0
If I were to expect anyone to support this first, it'd be CryEngine. They seem to aim for cutting-edge features in their engines. Whether it'll work well is another question.
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
It's definitely cool that this is even possible, but yeah, I can't see the two companies cooperating enough for it to actually get implemented in games. What might be possible is Nvidia cooperating with Intel to get multi-GPU functionality with Nvidia GPUs and DX12 compliant Intel iGPUs, since they don't directly compete and just about every laptop with an Nvidia GPU will have an Intel iGPU as well. Might as well put it to use.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
It's definitely cool that this is even possible, but yeah, I can't see the two companies cooperating enough for it to actually get implemented in games. What might be possible is Nvidia cooperating with Intel to get multi-GPU functionality with Nvidia GPUs and DX12 compliant Intel iGPUs, since they don't directly compete and just about every laptop with an Nvidia GPU will have an Intel iGPU as well. Might as well put it to use.

This is intriguing.

Some sort of Intel/NV cooperation here could be very cool. They don't have a great history of working together, but times do change. They do need each other.
 

psolord

Platinum Member
Sep 16, 2009
2,125
1,256
136
"Split Frame Rendering"

Works very nice in Civ BE Mantle mode, extremely flatline smooth frame latency but overall less performance raw scaling than AFR.

It also doesn't need duplication of the frame buffer so that multi-GPU will truly scale with their vram! :D

Looks like MS and AMD have been working very very close with Mantle, Xbone -> DX12 evolution.

So if you have two cards with 2GBs you would end up with a total framebuffer of 4GBs?

If that is so, then that this is the most important news imo.

The only reason I would be interested in having an AMD and a Nvidia card both at the same time, is if you could use one or the other as the primary card on whim and have the card that is physically connected to the monitor as a pass through card only.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
So if you have two cards with 2GB each, you would end up with a total framebuffer of 4GB?

If that is so, then this is the most important news, imo.

The only reason I would be interested in having an AMD and an Nvidia card at the same time is if you could use one or the other as the primary card on a whim, and have the card that is physically connected to the monitor act as a pass-through only.

SFR needs duplication.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
I've got another two situations besides the GPU compute I mentioned earlier.
The IGP could be used in racing games and FPS games to render the view shown in the mirrors, or in a scope while it's NOT raised (e.g. a hip-fired scoped weapon).
For the racing-game mirror at least, it's typically a distinct scene which would require different textures etc., and the rendering power required would be limited since it only covers a small part of the screen.

Other situations might include any other kind of PiP in a game. I can't think of many recent ones I have played, but OpenTTD (not requiring significant GPU power, I know) has PiP for extra viewports. I'm sure other people can think of PiP types other than racing-game mirrors.

My thought is mainly that this will benefit systems with an IGP (Intel or AMD) and a discrete GPU, where one GPU is significantly weaker than the other, typically sits entirely unused, and is unsuitable for splitting a common workload.

You guys need to be more creative.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,329
126
If I were to expect anyone to support this first, it'd be CryEngine. They seem to aim for cutting-edge features in their engines. Whether it'll work well is another question.

I would expect Crytek or DICE to implement this functionality. Would not be surprised if we see it in Battlefield 5, which is on track for a 2016 release.
 

naukkis

Golden Member
Jun 5, 2002
1,004
849
136
You could split the workload very differently than just SFR/AFR. For example, lighting maps or other effect maps could be rendered on the other GPU, and only the resulting bitmap would need to be transferred to the rasterizing GPU. Textures are needed only on the rasterizing GPU, but heavy lighting effects are both compute and memory intensive, so the RAM on both cards would be used effectively.

Nvidia demoed this kind of work splitting years ago; instead of another GPU they used cloud-based rendering for those compute-heavy effects, but the same principle can be used when splitting jobs between GPUs.
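If anyone is curious how the two GPUs would stay in step in a scheme like that, DX12 does expose cross-adapter fences. A minimal, hedged sketch follows: deviceA/queueA are assumed to render the lighting map and deviceB/queueB to rasterize, the devices and queues are assumed to have been created already, and the actual cross-adapter copy of the bitmap is omitted:

[CODE]
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// GPU A produces the lighting/effect map; GPU B must not sample the copied
// result until A has finished. A shared cross-adapter fence provides the ordering.
bool SyncLightingPass(ID3D12Device* deviceA, ID3D12CommandQueue* queueA,
                      ID3D12Device* deviceB, ID3D12CommandQueue* queueB)
{
    ComPtr<ID3D12Fence> fenceA;
    if (FAILED(deviceA->CreateFence(0,
            D3D12_FENCE_FLAG_SHARED | D3D12_FENCE_FLAG_SHARED_CROSS_ADAPTER,
            IID_PPV_ARGS(&fenceA))))
        return false;

    // Share the fence with the second device.
    HANDLE shared = nullptr;
    if (FAILED(deviceA->CreateSharedHandle(fenceA.Get(), nullptr, GENERIC_ALL,
                                           nullptr, &shared)))
        return false;

    ComPtr<ID3D12Fence> fenceB;
    HRESULT hr = deviceB->OpenSharedHandle(shared, IID_PPV_ARGS(&fenceB));
    CloseHandle(shared);
    if (FAILED(hr)) return false;

    // ... queueA would execute the lighting pass and the cross-adapter copy here ...

    queueA->Signal(fenceA.Get(), 1);  // GPU A: lighting map (and copy) finished
    queueB->Wait(fenceB.Get(), 1);    // GPU B: rasterizing work queued after this
                                      // only starts once the fence reaches 1
    return true;
}
[/CODE]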