
DX12 Multiadapter Feature

I also have trouble seeing the good part.

Stutter, frame lag, etc.: pick your poison.

In this example, if I had to guess, a frame goes from maybe 30-35 ms to 45-50 ms. You could just as well get a cheap monitor with more input lag.
[Attached image: multiadapter-dx12-ue4-gputimeline.png]
 
They're only proposing doing the post-processing work on the iGPU (and I'm assuming compute is possible as well), which shouldn't interfere with the dGPU's pipeline. If done correctly I don't think it should result in stutter at all. Lag is a more interesting analysis: according to their presentation each frame would be delayed, and the power of the iGPU will obviously play a role here. However, even with the increased lag per frame you're still getting more frames per second, so the question is whether, in a realistic workload, you can gain enough fps to more than make up for the lag. In theory it certainly is possible; even in their example you'll eventually be displaying frames ahead of the dGPU alone. I can definitely see an issue at high fps, though, if the iGPU is limited in how quickly it can do the post-processing.
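The fps-vs-latency trade here can be sketched with a toy two-stage pipeline. All timings below are assumptions picked for illustration, not measurements from the presentation:

```python
# Toy numbers (assumed): the dGPU renders in 30 ms and could do its own
# post-processing in 5 ms; the slower iGPU needs 15 ms for the same pass.

DGPU_RENDER_MS = 30.0
DGPU_POST_MS = 5.0    # assumed cost if the dGPU keeps post-processing
IGPU_POST_MS = 15.0   # assumed cost when post is offloaded to the iGPU

# dGPU alone: render and post run back to back on one queue.
solo_frame_ms = DGPU_RENDER_MS + DGPU_POST_MS          # 35 ms
solo_fps = 1000.0 / solo_frame_ms

# Multiadapter, pipelined: while the iGPU post-processes frame N, the
# dGPU already renders frame N+1, so a frame completes every
# max(stage) ms, but each frame still travels through both stages.
multi_interval_ms = max(DGPU_RENDER_MS, IGPU_POST_MS)  # 30 ms
multi_fps = 1000.0 / multi_interval_ms
multi_latency_ms = DGPU_RENDER_MS + IGPU_POST_MS       # 45 ms

print(f"dGPU alone:   {solo_fps:.1f} fps, {solo_frame_ms:.0f} ms per frame")
print(f"multiadapter: {multi_fps:.1f} fps, {multi_latency_ms:.0f} ms latency")
```

With these made-up numbers, throughput improves because the stages overlap (28.6 fps becomes 33.3 fps), while per-frame latency worsens because every frame still passes through both stages (35 ms becomes 45 ms).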
 
You are only going to see microstutter from frame lag if the frame lag is non-uniform. Uniform frame lag will produce input lag, but there would be no stuttering.
 
I am sure it's non-uniform on TDP-limited chips that throttle when both the CPU and GPU are loaded. But considering the input lag comes as standard either way, it's already meh.
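The distinction above can be sketched numerically: a constant delay shifts every presented frame equally and only adds input lag, while a varying delay changes the frame-to-frame intervals, which is what reads as microstutter. All numbers here are invented for illustration:

```python
# Ideal present timestamps at a steady 16 ms cadence (ms).
base_present_ms = [0, 16, 32, 48, 64]

def intervals(times):
    # Frame-to-frame gaps; variance here is what looks like stutter.
    return [b - a for a, b in zip(times, times[1:])]

# Uniform lag: every frame delayed by the same 20 ms -> cadence unchanged.
uniform = [t + 20 for t in base_present_ms]

# Non-uniform lag (e.g. a throttling iGPU): per-frame delays vary.
jitter = [0, 25, 18, 30, 16]
nonuniform = [t + d for t, d in zip(base_present_ms, jitter)]

print(intervals(uniform))     # steady 16 ms gaps: input lag only
print(intervals(nonuniform))  # uneven gaps: visible microstutter
```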
 
It will also be nice to have hardware-accelerated physics offloaded to the integrated GPU using DirectCompute or its DX12 equivalent.

I think I read that PhysX was opened up, but it's doubtful we'll see it running on Intel HD Graphics or AMD APUs. I'd imagine Havok and Bullet Physics will be taking full advantage of this going forward.
 
DX12 multi(brand) GPU and split frame rendering are going to be ftw...but I'd still not mix brands.


I do however hope that split-frame rendering becomes a real thing with DX12. This would give APUs and iGPUs a very strong boost and make them more viable as cheap higher-end solutions that pair one dGPU with one iGPU to play your games.

Now imagine a 7850K only had to calculate 60% as much as it did before, and also needed less VRAM/bandwidth thanks to split-frame rendering (AFAIK both GPUs share only a small portion of assets and split up most of the rest, so instead of 2GB + 2GB = 2GB it would be 2GB + 2GB = 3.5GB or something like it).

Sure...it's still meh since it's a measly DDR3-fed GPU...but the boost would be VERY real if it works in tandem with a 250 in that scenario...or, thanks to multi-GPU, it takes care of post-processing while a strong dGPU does the heavy lifting.

The future of APUs/SoCs for "higher than low end" gaming thus seems secure.
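The 2GB + 2GB = 3.5GB guess above can be sanity-checked with simple arithmetic. The 25% shared-asset fraction below is purely an assumption to make the numbers come out:

```python
# Back-of-the-envelope for effective VRAM under AFR vs SFR. The shared
# fraction is a made-up assumption, not a measured figure.

PER_GPU_GB = 2.0
SHARED_FRACTION = 0.25   # assumed: 25% of assets needed by both GPUs

# AFR (alternate-frame rendering): everything is mirrored on both
# cards, so usable memory is just one card's worth.
afr_effective = PER_GPU_GB

# SFR (split-frame rendering): only the shared portion is duplicated;
# the rest of both pools holds distinct data.
sfr_effective = 2 * PER_GPU_GB - SHARED_FRACTION * PER_GPU_GB

print(f"AFR effective: {afr_effective:.1f} GB")  # one card's worth
print(f"SFR effective: {sfr_effective:.1f} GB")  # more than one card
```

With a 25% shared fraction this lands exactly on 3.5 GB; a larger shared fraction would pull the effective total back toward 2 GB.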
 
Could be very nice in mobile, so you don't have to have a (relatively) powerful dGPU but can get by with a cheaper one plus the APU: less cost, less heat generated, better battery life.

For the desktop there might be a place for it, but it still seems simpler to just get a more powerful dGPU and not have to worry about compatibility issues, needing DX12 to make it work, etc.
 

It's like having a one-frame buffer.

With how much GPU compute sits idle in gaming CPUs, I can see why people are interested in exploiting it. But I don't see the added latency (~20-25 ms?) as a good trade-off.
 