causing a singularity, red and green together...


ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
But that's not SFR is it?
Indeed it's not. SFR means each GPU covers the whole rendering pipeline for its section of the frame. I don't think there's a formal name (beyond the generic "heterogeneous") for handing off partially rendered frames to another GPU.
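Purely as an illustration of that distinction (made-up types and helper names, not real D3D12 calls), the two schemes look roughly like this:

```cpp
#include <cstdio>

// Illustration only: none of these types or helpers are real D3D12 API;
// they just show where the work lands in each scheme.
struct GPU   { const char* name; };
struct Frame { int width, height; };

void renderSlice(GPU& g, Frame&, float top, float bottom) {
    std::printf("%s: full pipeline for rows %.0f%%-%.0f%% of the frame\n",
                g.name, top * 100, bottom * 100);
}
void renderMainPass(GPU& g, Frame&) { std::printf("%s: geometry + lighting\n", g.name); }
void copyToAdapter(GPU& g, Frame&)  { std::printf("copy partial frame to %s\n", g.name); }
void runPostProcess(GPU& g, Frame&) { std::printf("%s: post-processing\n", g.name); }

// SFR: each GPU runs the entire pipeline for its own slice of the frame.
void renderFrameSFR(GPU& gpu0, GPU& gpu1, Frame& f) {
    renderSlice(gpu0, f, 0.0f, 0.5f);   // top half, start to finish, on GPU 0
    renderSlice(gpu1, f, 0.5f, 1.0f);   // bottom half, start to finish, on GPU 1
}

// Hand-off: one GPU renders most of the frame, then passes the partially
// finished frame to the other GPU, which completes it.
void renderFrameHandoff(GPU& dGPU, GPU& otherGPU, Frame& f) {
    renderMainPass(dGPU, f);
    copyToAdapter(otherGPU, f);
    runPostProcess(otherGPU, f);
}

int main() {
    GPU radeon{"Radeon"}, geforce{"GeForce"};
    Frame frame{2560, 1440};
    renderFrameSFR(radeon, geforce, frame);
    renderFrameHandoff(geforce, radeon, frame);
}
```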
 

Ed1

Senior member
Jan 8, 2001
453
18
81
While it is interesting that both can work together to a degree, there are still many things that IMO hold this back from being really practical.

First, you have the issue of having to install every vendor's drivers, which can lead to issues and added overhead.

How will G-Sync, FreeSync, and the rest of the vendor-specific HW capabilities work when only one card supports them?
I can see maybe two similar-vendor cards working well (as long as the speed difference is not too great), or possibly the iGPU (Intel) once they get there.
 

BigDaveX

Senior member
Jun 12, 2014
440
216
116
I remember a lot of the early multi-GPU solutions used either split-screen rendering or tile rendering as a method of dividing up single frames between different GPUs. However, those GPUs had dedicated hardware to manage the workload plus a fast, direct link between cards, which an iGPU would obviously lack.
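A toy sketch of the split-frame/tile division BigDaveX describes (tile size, resolution, and the checkerboard assignment are all assumptions here; the old cards did the equivalent in dedicated hardware over a direct inter-card link):

```cpp
#include <cstdio>

int main() {
    // Assumed numbers purely for illustration.
    const int tileSize = 64;              // pixels per tile edge
    const int width = 1024, height = 768; // frame size
    const int tilesX = width / tileSize;
    const int tilesY = height / tileSize;

    for (int ty = 0; ty < tilesY; ++ty) {
        for (int tx = 0; tx < tilesX; ++tx) {
            // Checkerboard assignment: alternate tiles between the two GPUs
            // so expensive regions of the frame tend to be shared evenly.
            int gpu = (tx + ty) % 2;
            std::printf("tile (%2d,%2d) -> GPU %d\n", tx, ty, gpu);
        }
    }
}
```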
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
While it is interesting that both can work together to a degree, there are still many things that IMO hold this back from being really practical.

First, you have the issue of having to install every vendor's drivers, which can lead to issues and added overhead.

How will G-Sync, FreeSync, and the rest of the vendor-specific HW capabilities work when only one card supports them?
I can see maybe two similar-vendor cards working well (as long as the speed difference is not too great), or possibly the iGPU (Intel) once they get there.

Windows 10 is pretty good at handling multiple drivers, and G-Sync/FreeSync should in theory work based on the primary card, but in practice Nvidia will prevent both from working through its drivers. I'm really just looking forward to gaining a couple of FPS from an iGPU, though. Imagine how great that would be with Intel's C chips! And it would be a huge selling point for AMD's APUs at the low end.
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
If it were working properly, there should be no difference between 7970+680 and 680+7970.
 

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
I bought the Steam version of Ashes of the Singularity so I could run these benchmarks (it was discounted, so I thought what the heck!).

I tried to run the benchmarks the same way Ryan Smith did in his article (2560x1440, High, with MSAA at 2x). Obviously my rigs below are different from what he used for the bench test. I believe he used a 4960X at 4.2 GHz.

My first results were from my 5960X at 4.4 GHz with a single GTX 980 Ti SC at its stock 1102 MHz core clock:
[benchmark screenshot]


My second result was from my 4790K below at 4.7 GHz running two R9 290 Sapphires in CF at stock clocks of 1100 MHz:
[benchmark screenshot]


The average frame rate for the two R9 290s in CF seems low, so I'm not sure the benchmark is recognizing CF setups.

Whatever the case, I'll post these results for your review. The specs for my machines are in my sig.
 
Last edited:

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
Thanks Silverforce11, that explains it! Why wouldn't they release it to those of us who spent the $$$ to buy Ashes? Regardless, I thought posting these results would help give an idea of the performance on rigs such as mine.
 
Last edited:

dogen1

Senior member
Oct 14, 2014
739
40
91
Thanks Silverforce11, that explains it! Why wouldn't they release it to those of us who spent the $$$ to buy Ashes? Regardless, I thought posting these results would help give an idea of the performance on rigs such as mine.

Because it's extremely unstable.
 

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
dogen1, when you mention "Because it's extremely unstable," are you referring to the multi-GPU build AT benchmarked for Ashes, or to all Ashes benchmarks?

All I know is that when I start Ashes of the Singularity from Steam I have the choice of a regular start or a DX12, Windows 10-only start. I always opt for DX12/Win 10 since both rigs below have it. On the opening screen there is a choice for Benchmark. If selected, it runs a 3-minute benchmark.
 
Last edited:

Noctifer616

Senior member
Nov 5, 2013
380
0
76
dogen1, when you mention "Because it's extremely unstable," are you referring to the multi-GPU build AT benchmarked for Ashes, or to all Ashes benchmarks?

All I know is that when I start Ashes of the Singularity from Steam I have the choice of a regular start or a DX12, Windows 10-only start. I always opt for DX12/Win 10 since both rigs below have it. On the opening screen there is a choice for Benchmark. If selected, it runs a 3-minute benchmark.

The build that AT got is not ready for the public. In development there are public builds and internal builds. Internal builds are far ahead of the public builds, but they are not finished (they have bugs and other issues). The build that AT tested will be coming to the public in an update next month.
 

dogen1

Senior member
Oct 14, 2014
739
40
91
dogen1, when you mention "Because it's extremely unstable," are you referring to the multi-GPU build AT benchmarked for Ashes, or to all Ashes benchmarks?

All I know is that when I start Ashes of the Singularity from Steam I have the choice of a regular start or a DX12, Windows 10-only start. I always opt for DX12/Win 10 since both rigs below have it. On the opening screen there is a choice for Benchmark. If selected, it runs a 3-minute benchmark.

The build AT used. They mentioned in the article how often it crashed.
 

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
Thank you all for the info. Now I have to figure out how to play this game:eek:
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
Just read the article on Anandtech. Holy schnitzel! And this is just the tip of the iceberg because it's still AFR. Who knows what's possible with fully explicit mGPU. Like Ryan hinted at in the article, in a few years, the answer to the question, "Put AMD or Nvidia in your build?" might just be...

[image]


1. Pretty amazing that it works, and works very well.

2. Looks like Nvidia has improved performance to the point where a 980 Ti is now outperforming the Fury X across all resolutions. So much for the much-hyped GCN DX12 async advantage?

Yeah, I'm not exactly surprised by that. The differences between GCN's and Maxwell's approaches to async compute are real and could lead to performance differences, but it's silly to draw any conclusions from pre-alpha benchmarks of Ashes, including now. Especially since an Ashes dev said that Ashes uses async for a few compute tasks, but not in a truly extensive way.

It's still AFR, which means I'm still not interested. Higher latency and worse consistency, no thanks.

Wake me up when we get tasks offloaded to the IGP to reduce individual frametimes; that's when this actually starts mattering to the majority of gamers.

Devs are working on it! The AT article mentioned that Epic is looking into offloading post-processing tasks to iGPUs in Unreal Engine.
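A back-of-the-envelope illustration of the trade-off with that kind of offload (all timings below are made-up numbers): pipelining post-processing onto the iGPU can shorten the dGPU's per-frame work, at the cost of presenting each frame one stage later.

```cpp
#include <cstdio>

int main() {
    // Assumed per-frame costs, purely illustrative.
    const double mainPassMs = 12.0;  // dGPU: geometry, lighting, etc.
    const double postMs     = 4.0;   // post-processing workload

    // Single GPU: every frame pays both costs on the same card.
    const double singleFrameMs = mainPassMs + postMs;

    // dGPU + iGPU pipeline: in steady state the frame rate is set by the
    // slowest stage (assuming the iGPU and the cross-adapter copy keep up),
    // but each frame is presented one stage later.
    const double pipelinedFrameMs   = mainPassMs;
    const double pipelinedLatencyMs = mainPassMs + postMs;

    std::printf("single GPU : %.1f ms/frame (%.0f fps)\n",
                singleFrameMs, 1000.0 / singleFrameMs);
    std::printf("pipelined  : %.1f ms/frame (%.0f fps), ~%.1f ms frame latency\n",
                pipelinedFrameMs, 1000.0 / pipelinedFrameMs, pipelinedLatencyMs);
}
```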
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
1. Pretty amazing that it works, and works very well.

2. Looks like Nvidia has improved performance to the point where a 980 Ti is now outperforming the Fury X across all resolutions. So much for the much-hyped GCN DX12 async advantage?


But I thought this bench didn't prove anything when AMD was leading? :ninja:
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
But I thought this bench didn't prove anything when AMD was leading? :ninja:

To some it did, to others it didn't. There's no need for you to be coy. Right now we have two different DX12 benchmarks where the cut-down GM200 980 Ti leads the Fury X. Does it prove anything? No. Does it indicate AMD has a DX12 advantage? No.

On a side note, I will say this: AMD has done a tremendous job of cutting into Nvidia's lead with their long-standing existing hardware vs. Nvidia's (mostly) newer lineup. A stock GTX 980 was 20% faster than a 290X at 1440p when it first came out; now it's essentially tied with (maybe 3-5% faster than) a 390X, which is just a 290X + 3-5%. That means AMD's Hawaii has gained on GM204 by on the order of 10% in the past year. The same thing happened with the Kepler vs. GCN 1.0 generation. I don't know whether AMD's console wins are coming to fruition with respect to devs coding to GCN, whether Nvidia's hardware "is not forward looking," whether Nvidia is getting lazy with DX11 driver optimization, or whether AMD is simply squeezing more and more performance out of their drivers, but it's been impressive to see them continually close the gap throughout the cycle.
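For what it's worth, the roughly-10% figure follows from the percentages in the post above (treating the 3-5% figures as about 4%; exact numbers vary with game selection):

```cpp
#include <cstdio>

int main() {
    // Launch: GTX 980 ~20% faster than the 290X at 1440p.
    const double launchGap = 1.20;
    // Now: 980 ~4% faster than a 390X, and a 390X is ~4% faster than a 290X.
    const double currentGap = 1.04 * 1.04;  // 980 vs. 290X today, ~1.08
    // How much Hawaii has gained on GM204 relative to launch.
    const double hawaiiGain = launchGap / currentGap;
    std::printf("relative gain: about %.0f%%\n", (hawaiiGain - 1.0) * 100.0);
}
```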
 
Last edited:

littleg

Senior member
Jul 9, 2015
355
38
91
Remember that adds latency and essentially kills FPS gaming for many.

Latency isn't important though. That was established when it was realised Nvidia was offloading async to the CPU.

Colour me confuzzled.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Latency isn't important though. That was established when it was realised Nvidia was offloading async to the CPU.

Colour me confuzzled.

No wonder you are confused when you get it wrong.

For the IGP+dGPU mix you end up with this:
[Image: DirectX 12 Multi-Adapter NVIDIA + Intel link diagram]