causing a singularity, red and green together...

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
[benchmark chart: 78164.png]


it actually [REDACTED] works!
Because of its high resource requirements Ashes is also a good candidate for multi-GPU scaling, and for this reason Oxide is working on implementing DirectX 12 explicit multi-adapter support into the game. For Ashes, Oxide has opted to start by implementing support for unlinked mode, both because this is a building block for implementing linked mode later on and because from a tech demo point of view this allows Oxide to demonstrate unlinked mode’s most nifty feature: the ability to utilize multiple dissimilar (non-homogeneous) GPUs within a single game. EMA with dissimilar GPUs has been shown off in bits and pieces at developer events like Microsoft’s BUILD, but this is the first time an in-game demo has been made available outside of those conferences.

In order to demonstrate EMA and explicit synchronization in action, Oxide has started things off by using a basic alternate frame rendering implementation for the game. As we briefly mentioned in our technical overview of DX12 explicit multi-adapter, EMA puts developers in full control of the rendering process, which for Oxide meant implementing AFR from scratch. This includes assigning frames to each GPU, handling frame buffer transfers from the secondary GPU to the primary GPU, and most importantly of all controlling frame pacing, which is typically the hardest part of AFR to get right.

source: http://www.anandtech.com/show/9740/directx-12-geforce-plus-radeon-mgpu-preview/3

so it seems a heterogeneous setup is actually better than a homogeneous one?
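For the curious, the "unlinked mode" the article describes starts with something like the sketch below: enumerate every DX12-capable adapter and create an independent device on each, so a Radeon and a GeForce simply show up as two entries in the same list. This is a minimal sketch of the setup step, not Oxide's actual code, with error handling omitted.

```cpp
// Minimal sketch of unlinked explicit multi-adapter setup (NOT Oxide's code):
// enumerate every DX12-capable adapter and create an independent device on each.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDevicesOnAllAdapters()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the WARP software rasterizer

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device); // vendor doesn't matter here
    }
    return devices;
}
```

From there an AFR renderer along the lines the article describes would hand frame N to devices[N % devices.size()], copy the secondary GPU's output through a heap created with D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER, and use fence signals on both queues to pace presents, which is the "hardest part to get right" the article mentions.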
 
Last edited:

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
The question is how long until Nvidia squashes this?

AMD could just as easily end support through drivers, but they have a lot more to gain by allowing this to continue. Nvidia already has a huge market share lead, so they have more to gain by killing mixed-GPU systems through drivers.

They have already done it with PhysX.

The most interesting thing about this is the lack of an SLI bridge for Nvidia. I just glanced at the charts; did they mention this at all in the article? I would imagine if Nvidia ditches the bridge next gen, this could improve even more.
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
1. Pretty amazing that it works, and works very well.

2. Looks like Nvidia has improved performance to the point where a 980 Ti is now outperforming the Fury X across all resolutions. So much for the much-hyped GCN DX12 async advantage?
 

MrTeal

Diamond Member
Dec 7, 2003
3,916
2,700
136
Even outside of explicitly allowing multi-vendor support, it would also make CF and SLI more interesting. Let's say cut-down Pascal (1070?) ends up being about as powerful as a 980 Ti. If you already have a 980 Ti, being able to add in a 1070 and have it work about as well as 980 Ti SLI would be hugely appealing.
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
I hadn't even thought about the fact that I have an HD 7970 sitting in a box. I could eventually pair it up with my R9 290 for a little more performance.

PC gaming is truly starting to get exciting again.
 

Sunaiac

Member
Dec 17, 2014
123
172
116
1. Pretty amazing that it works, and works very well.

2. Looks like Nvidia has improved performance to the point where a 980 Ti is now outperforming the Fury X across all resolutions. So much for the much-hyped GCN DX12 async advantage?


Let's talk about that 450€ GTX 680 also :)
 

thesmokingman

Platinum Member
May 6, 2010
2,302
231
106
This is some really interesting info. Wonder how widespread it could get with DX12? Hell, maybe I'll get a Radeon and a GeForce mid-range card next time :D haha.

Also, I noticed it's an AFR implementation. The thread discussing it a few weeks ago believed it was SFR.

http://forums.anandtech.com/showthread.php?t=2449641


It's only AFR for the moment, and it's not in any public or alpha builds yet. I cannot wait till founders get a crack at it. That said, Ashes is one of those games that would benefit from SFR over AFR, IMO.
 

kasakka

Senior member
Mar 16, 2013
334
1
81
2. Looks like Nvidia has improved performance to the point where a 980 Ti is now outperforming the Fury X across all resolutions. So much for the much-hyped GCN DX12 async advantage?

But only barely, considering the 980 Ti pretty much trounces the Fury X in other situations. That said, it's good because it means we 980 Ti users don't need to worry about performance in future DX12 games.

In any case, it's fantastic that you can finally use different GPUs side by side. I am concerned that this will mean more frametime issues and possible stutter, input lag, etc.

Also, it would've been nice if they'd tried mixing an iGPU in there, or does Intel not have DX12 drivers yet?
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
But only barely, considering the 980 Ti pretty much trounces the Fury X in other situations. That said, it's good because it means we 980 Ti users don't need to worry about performance in future DX12 games.

In any case, it's fantastic that you can finally use different GPUs side by side. I am concerned that this will mean more frametime issues and possible stutter, input lag, etc.

Also, it would've been nice if they'd tried mixing an iGPU in there, or does Intel not have DX12 drivers yet?

Based on the videos, Radeon+GeForce was definitely smoother than GeForce+Radeon; at the 2:10 timestamp you can see it clearly.

I wonder if this could mean both parties can offload compute processes to the iGPU. Finally, put that die hog to use!
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Woof, Kepler once again not playing nicely:

[benchmark chart: 78168.png]


I got a GTX 680 in a box somewhere; glad that by the time this stuff rolls out I'll be on something else. Maxwell 2 seems to be a better candidate for DX12 games (though I don't see the GF playing any DX12 games for a year or two).
 

thesmokingman

Platinum Member
May 6, 2010
2,302
231
106
^^ Crazy that the scaling is specific to the card order. That's something.

I wonder if this could mean both parties can offload compute processes to the iGPU. Finally, put that die hog to use!

Well, considering it's been possible to Crossfire an AMD iGPU with a discrete card for years now, I'd say with DX12's foundation and direction it is most definitely possible.
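For what it's worth, under DX12 the iGPU really is just another adapter you can enumerate and build a compute queue on. A rough sketch follows; FindIntegratedAdapter is a hypothetical helper, and matching on Intel's PCI vendor ID (0x8086) is a crude illustration-only heuristic, not how a shipping engine should detect the iGPU.

```cpp
// Rough sketch: finding the integrated GPU as just another DX12 adapter.
// The vendor ID check is an illustration-only heuristic.
#include <dxgi1_4.h>
#include <wrl/client.h>

Microsoft::WRL::ComPtr<IDXGIAdapter1> FindIntegratedAdapter(IDXGIFactory4* factory)
{
    Microsoft::WRL::ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.VendorId == 0x8086 && !(desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE))
            return adapter; // create a device and a compute queue on this one
    }
    return nullptr; // no integrated adapter found
}
```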
 

Mopetar

Diamond Member
Jan 31, 2011
8,436
7,631
136
This certainly adds a lot of value to an old GPU when upgrading a system. I suppose the question is how much support there is for older GPUs. An earlier poster mentioned a 7970, but would that be supported?
 

NTMBK

Lifer
Nov 14, 2011
10,411
5,677
136
It's still AFR, which means I'm still not interested. Higher latency and worse consistency, no thanks.

Wake me up when we get tasks offloaded to the IGP to reduce individual frametimes; that's when this actually starts mattering to the majority of gamers.
 
Last edited:

BigDaveX

Senior member
Jun 12, 2014
440
216
116
Looks like Nvidia has improved performance to the point where a 980 Ti is now outperforming the Fury X across all resolutions. So much for the much-hyped GCN DX12 async advantage?

The GM200 core seems to hold up better in DirectX 12 scenarios than its GM204-based brethren. That being said, I think developers are going to be slow to roll out async-based code in games, considering the current Nvidia-AMD market share split. It probably won't be like the scenario we had where the GeForce FX completely imploded the minute you used any DirectX 9 features, and developers just went full-force on that code and let Nvidia users decide for themselves whether they'd rather deal with a slideshow or fall back to DirectX 8.
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
1. Pretty amazing that it works, and works very well.

2. Looks like Nvidia has improved performance to the point where a 980 Ti is now outperforming the Fury X across all resolutions. So much for the much-hyped GCN DX12 async advantage?

How much async compute? No one knows. In Fable, it's 5% of the workload per frame.
 

Despoiler

Golden Member
Nov 10, 2007
1,968
773
136
How much async compute? No one knows. In Fable, it's 5% of the workload per frame.

This. Neither game is using a full-blown async implementation. Both games are using what I would call an async-lite implementation: they are using it on a few tasks.

Below is from an Ashes dev:
http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995

I suspect that one thing that is helping AMD on GPU performance is D3D12 exposes Async Compute, which D3D11 did not. Ashes uses a modest amount of it, which gave us a noticeable perf improvement. It was mostly opportunistic where we just took a few compute tasks we were already doing and made them asynchronous; Ashes really isn't a poster child for advanced GCN features.

Our use of Async Compute, however, pales in comparison to some of the things which the console guys are starting to do. Most of those haven't made their way to the PC yet, but I've heard of developers getting 30% GPU performance by using Async Compute. Too early to tell, of course, but it could end up being pretty disruptive in a year or so as these GCN-built and optimized engines start coming to the PC. I don't think Unreal titles will show this very much though, so likely we'll have to wait to see. Has anyone profiled Ark yet?
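To make "took a few compute tasks and made them asynchronous" concrete, here's a minimal D3D12 sketch of the queue wiring, not Ashes' actual code; device, graphicsQueue, computeCmdList, consumerCmdList, computeFence, and fenceValue are assumed to be created elsewhere.

```cpp
// Hedged sketch of the "async lite" pattern described above, NOT Ashes' code.
// Assumes device, graphicsQueue, command lists, fence, and fenceValue exist.
D3D12_COMMAND_QUEUE_DESC computeDesc = {};
computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;

Microsoft::WRL::ComPtr<ID3D12CommandQueue> computeQueue;
device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

// Submit the compute task on its own queue; on GCN this is what lets the
// ACEs overlap it with graphics work instead of serializing the frame.
ID3D12CommandList* computeLists[] = { computeCmdList.Get() };
computeQueue->ExecuteCommandLists(1, computeLists);
computeQueue->Signal(computeFence.Get(), ++fenceValue);

// The graphics queue waits only at the point where it consumes the results,
// so the rest of the frame keeps rendering while the compute task runs.
graphicsQueue->Wait(computeFence.Get(), fenceValue);
ID3D12CommandList* consumerLists[] = { consumerCmdList.Get() };
graphicsQueue->ExecuteCommandLists(1, consumerLists);
```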
 

tential

Diamond Member
May 13, 2008
7,348
642
121
This is some really interesting info. Wonder how widespread it could get with DX12? Hell, maybe I'll get a Radeon and a GeForce mid-range card next time :D haha.

Also, I noticed it's an AFR implementation. The thread discussing it a few weeks ago believed it was SFR.

http://forums.anandtech.com/showthread.php?t=2449641
That's a discussion about SFR... not that this game will use SFR.

I posted a lot in that thread, after all, lol.

I'm excited if this holds true, although I won't be surprised if Nvidia/AMD squash it. Makes buying a new GPU that much more appealing since I don't need to throw away the old one.


Please let this work please let this work!
 
Last edited:

Teizo

Golden Member
Oct 28, 2010
1,271
31
91
The question is how long until Nvidia squashes this?

AMD could just as easily end support through drivers, but they have a lot more to gain by allowing this to continue. Nvidia already has a huge market share lead, so they have more to gain by killing mixed-GPU systems through drivers.

They have already done it with PhysX.

The most interesting thing about this is the lack of an SLI bridge for Nvidia. I just glanced at the charts; did they mention this at all in the article? I would imagine if Nvidia ditches the bridge next gen, this could improve even more.

In the past, Nvidia would have claimed (actually did claim) quality control. Now, I'm not sure if they will or not. People who buy from both brands do so intentionally, so that point of view carries less weight. I'm sure someone at Nvidia is in talks about doing it, though.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
It's still AFR, which means I'm still not interested. Higher latency and worse consistency, no thanks.

Wake me up when we get tasks offloaded to the IGP to reduce individual frametimes, that's when this actually starts mattering to the majority of gamers.

^^ This. Currently it's just a glorified AFR mixed SLI/CF mode.

Cool first step, but it's not moving any bars.
 

MrTeal

Diamond Member
Dec 7, 2003
3,916
2,700
136
In the past, Nvidia would have claimed (actually did claim) quality control. Now, I'm not sure if they will or not. People who buy from both brands do so intentionally, so that point of view carries less weight. I'm sure someone at Nvidia is in talks about doing it, though.

I can't imagine anyone is under the illusion that if Fury X + 980 Ti is actually faster than 980 Ti SLI, Nvidia would allow that to continue. For AMD, seeing all the top scores in 3DMark '16 being 2x Fury X, 2x Titan X would be a huge win. For Nvidia, it would be a disaster.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
I'm surprised they've gotten this working this early in the DX12 life cycle, in a game set to come out shortly. As I've said in multiple threads now, DX12's rollout will not be the same as 11's or 10's. This is further evidence bearing that out.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,329
126
This is really cool, but I think at least one of them (probably Nvidia) and likely both will try to squash it. It IS part of the DX12 spec, but we've already seen that neither of them feels obligated to support every facet of the API.

It's reasonable that we could see a halo card from each vendor giving better overall performance than SLI or CF using just one vendor's cards. What gives gamers the best experience is not going to matter to them, though, if they feel it's losing them money.