DX12 Multiadapter Feature

hawtdawg

Golden Member
Jun 4, 2005
1,223
7
81
All I can think about is the massive amount of microstutter this is sure to cause.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I also have trouble seeing the good part.

Stutter, frame lag etc, pick your poison.

In this example, if I had to guess, instead of maybe 30-35ms for a frame it ends up at 45-50ms. You could just as well get a cheap monitor with more input lag.
[Attached image: multiadapter DX12 UE4 GPU timeline]
 

Hitman928

Diamond Member
Apr 15, 2012
6,696
12,373
136
They're only proposing doing the post-processing work (and I'm assuming compute is possible as well), which shouldn't interfere with the pipeline of the dGPU. If done correctly, I don't think it should result in stutter at all.

Lag is a more interesting analysis, as each frame, according to their presentation, would be delayed, though the power of the iGPU will obviously play a role here. However, even with the increased lag per frame, you're still getting more frames per second, so the question is: can you, in a realistic workload, obtain enough fps to more than make up for the lag? In theory it certainly is possible. Even in their example, eventually you'll be displaying frames ahead of just the dGPU alone.

I can definitely see an issue, though, at high fps if the iGPU is limited in how quickly it can do the post-processing.
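The latency-vs-throughput trade-off described above can be sketched as a toy pipeline model. All the millisecond figures below are invented for illustration, not taken from the presentation:

```python
# Toy model: offloading post-processing to an iGPU pipelines the two stages.
# The dGPU starts the next frame while the (slower) iGPU post-processes the
# previous one. All timings here are made-up assumptions.

# Single dGPU: render + post-process run serially on one GPU.
dgpu_render_ms = 25.0   # assumed main render pass time on the dGPU
dgpu_post_ms = 8.0      # assumed post-processing time if done on the dGPU
single_frame_ms = dgpu_render_ms + dgpu_post_ms   # 33 ms per frame
single_fps = 1000.0 / single_frame_ms

# Multiadapter: steady-state frame interval is set by the slower stage,
# but per-frame latency is the sum of both stages.
igpu_post_ms = 15.0     # assumed iGPU post time (slower than the dGPU would be)
pipelined_interval_ms = max(dgpu_render_ms, igpu_post_ms)  # 25 ms between frames
pipelined_latency_ms = dgpu_render_ms + igpu_post_ms       # 40 ms per frame

pipelined_fps = 1000.0 / pipelined_interval_ms

print(f"single GPU:   {single_fps:.1f} fps, {single_frame_ms:.0f} ms latency")
print(f"multiadapter: {pipelined_fps:.1f} fps, {pipelined_latency_ms:.0f} ms latency")
# single GPU:   30.3 fps, 33 ms latency
# multiadapter: 40.0 fps, 40 ms latency
```

With these assumed numbers the multiadapter path gains fps (25 ms intervals instead of 33 ms) at the cost of higher per-frame latency, which is exactly the trade-off being debated.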
 

DrMrLordX

Lifer
Apr 27, 2000
22,937
13,023
136
You are only going to see microstutter from frame lag if the frame lag is non-uniform. Uniform frame lag will produce input lag, but there would be no stuttering.
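The uniform-vs-non-uniform distinction can be illustrated with invented presentation timestamps. A constant delay shifts every frame equally (input lag only), while a varying delay makes the frame-to-frame intervals uneven (microstutter):

```python
# Toy illustration: uniform added lag leaves frame pacing untouched,
# non-uniform lag produces microstutter. All numbers are invented.

# Ideal presentation timestamps at ~60 fps (ms).
ideal = [i * 16.7 for i in range(6)]

# Uniform lag: every frame delayed by the same 10 ms.
uniform = [t + 10.0 for t in ideal]

# Non-uniform lag: delay alternates between 5 ms and 15 ms.
nonuniform = [t + (5.0 if i % 2 == 0 else 15.0) for i, t in enumerate(ideal)]

def intervals(timestamps):
    """Frame-to-frame intervals in ms, rounded for readability."""
    return [round(b - a, 1) for a, b in zip(timestamps, timestamps[1:])]

print(intervals(uniform))     # [16.7, 16.7, 16.7, 16.7, 16.7] -> smooth, just input lag
print(intervals(nonuniform))  # [26.7, 6.7, 26.7, 6.7, 26.7] -> microstutter
```

The uniform case delivers every frame later but with perfectly even pacing; the non-uniform case has the same average delay but visibly uneven frame intervals.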
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I am sure it's non-uniform on TDP-limited chips that throttle when both the CPU and GPU are used. But considering the "input lag" is the de facto standard, it's already meh.
 

nenforcer

Golden Member
Aug 26, 2008
1,777
20
81
It will also be nice to have physics hardware acceleration offloaded to the integrated GPU using DirectCompute or its DX12 equivalent.

I think I read that PhysX was opened up, but it's doubtful we'll see it running on Intel HD Graphics or AMD APUs. Havok and Bullet Physics will probably be taking full advantage of this going forward, I would imagine.
 

Shehriazad

Senior member
Nov 3, 2014
555
2
46
DX12 multi(brand) GPU and split frame rendering are going to be ftw...but I'd still not mix brands.


I do, however, hope that split frame rendering becomes a real thing with DX12. This would give APUs and iGPUs a very strong boost and make them more viable as cheap higher-end setups that pair one dGPU with one iGPU to play your games.

Now imagine a 7850K only had to calculate 60% as much as it did before, and also needed less VRAM/bandwidth thanks to split frame rendering (afaik both GPUs only duplicate a small portion of assets and split up the rest, so instead of 2GB + 2GB = 2GB it would be 2GB + 2GB = 3.5GB or something like it).

Sure...it's still meh, since it's a measly DDR3-powered GPU...but the boost would be VERY real if it works in tandem with a 250 in that scenario...or, thanks to multi-GPU, it takes care of post-processing while a STRONK dGPU does the heavy lifting.

The future of APUs/SoCs for "higher than low end" gaming thus seems secured.
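The VRAM pooling claim in the post above can be sketched numerically. The 0.5 GB of duplicated shared assets below is an invented assumption chosen to match the "3.5GB or something" guess:

```python
# Rough sketch of the SFR vs AFR VRAM argument. With alternate-frame
# rendering (AFR) each GPU mirrors the full working set, so the effective
# pool is just one card's VRAM. With split-frame rendering (SFR) only
# shared assets are duplicated; the rest is split across the GPUs.
# The 0.5 GB shared figure is an assumption, not a measured value.

vram_per_gpu_gb = 2.0
shared_assets_gb = 0.5   # assumed assets both GPUs must hold copies of

# AFR: everything duplicated -> effective pool equals one GPU's VRAM.
afr_effective_gb = vram_per_gpu_gb

# SFR: only the shared assets are stored twice; one extra copy is wasted.
sfr_effective_gb = 2 * vram_per_gpu_gb - shared_assets_gb

print(afr_effective_gb)  # 2.0
print(sfr_effective_gb)  # 3.5
```

Under this assumption, 2GB + 2GB yields an effective 3.5GB pool with SFR, versus an effective 2GB with AFR, which is the arithmetic the post is gesturing at.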
 
Aug 11, 2008
10,451
642
126
Could be very nice in mobile, so you don't have to have a (relatively) powerful dGPU but can get by with a cheaper one plus the APU: less cost, less heat generated, better battery life.

For the desktop, there might be a place for it, but it still seems simpler to just get a more powerful dGPU and not have to worry about compatibility issues, needing DX12 to make it work, etc.
 

NTMBK

Lifer
Nov 14, 2011
10,452
5,839
136
To be fair, it should look more impressive with a better iGPU.
 

erunion

Senior member
Jan 20, 2013
765
0
0

It's like having a 1-frame buffer.

With how much GPU compute is sitting idle in gaming CPUs, I can see why people are interested in exploiting it. But I don't see that added latency (~20-25ms?) as a good trade-off.
 