Ashes of the Singularity User Benchmarks Thread


3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
And no intention to use features that AMD cards don't support? And then people ask why we think they are biased.

Anyway, I still wonder how much of that compute could be done on an IGP that has DX12 support.

Multithreaded rendering in DX11.
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
Now, you do not seem to understand that different kinds of workloads aren't exclusively tied to the same resources on the GPU. DX11, as AMD explains, is serial by nature: one job must finish before the other can start. DX11 is obsoleted, hard, by what DX12 brings. DX12 is a paradigm shift, and async compute is another one on top of it. You have a graphics workload that can now run by itself using the relevant parts of the GPU, while, for example, a compute job runs in parallel using other parts of the GPU. Compute means flexibility that the graphics pipeline doesn't give you by its very nature; you can do whatever you want, basically. Go read any gamedev forum and you'll see the term compute *everywhere*, in lots of scenarios and new possibilities. As software catches up to the hardware, you as an end user get more out of those 2000-4000 SPs, hundreds of texture units, dozens of ROPs, etc. you paid hard-earned cash for. They all get to do more work now, more efficiently, all at the same time. It doesn't matter what color GPU you own.
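At the API level, that parallelism looks roughly like creating two command queues in D3D12. A minimal sketch, assuming an already-initialized device (illustrative only, not Oxide's actual code):

```cpp
// Minimal sketch: one graphics (direct) queue plus one compute queue.
// Assumes an already-initialized ID3D12Device; error handling omitted.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // compute + copy only
    device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));

    // Command lists executed on computeQueue can overlap command lists
    // executed on graphicsQueue where the hardware and driver support it.
}
```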

All of this still means that normally a GCN GPU has to be around 50% idle at all times, otherwise you would not get double the FPS.
And the truth is that we still don't know what Oxide uses to get double the FPS. Yes, sure, it is async something, but we don't know exactly what they do asynchronously.

Your take is that they just run, asynchronously, the normal everyday game stuff that would normally run in serial.
My take is that that would not get them double the FPS. Come on, half the stuff that's going on has nothing to do with graphics.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
And that's relevant because? Every NV card that is DX11-capable can run DX12, which has that built in.

You were claiming bias because they don't support any nVidia-only features. Even before there was DX12, they supported it. And as we've seen, supporting DX12 doesn't mean anything as far as performance goes. We've seen the DX11 path be faster for nVidia. If they didn't offer multithreaded rendering, this wouldn't be the case. Imagine how bad they could make it look for nVidia if they tried, as you insinuated.
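For reference, "multithreaded rendering" in DX11 means worker threads record command lists on deferred contexts and the immediate context replays them. A minimal sketch, assuming the device and immediate context already exist (illustrative, not Oxide's or nVidia's actual code):

```cpp
// Minimal D3D11 deferred-context sketch (assumes an initialized device
// and immediate context; error handling omitted).
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void RecordAndReplay(ID3D11Device* device, ID3D11DeviceContext* immediate)
{
    // A worker thread records its draw calls on a deferred context.
    ComPtr<ID3D11DeviceContext> deferred;
    device->CreateDeferredContext(0, &deferred);

    // ... set state and issue Draw/Dispatch calls on 'deferred' here ...

    ComPtr<ID3D11CommandList> commandList;
    deferred->FinishCommandList(FALSE, &commandList);

    // The main thread replays the recorded commands on the immediate context.
    immediate->ExecuteCommandList(commandList.Get(), TRUE);
}
```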
 
Feb 19, 2009
10,457
10
76
All of this still means that normally a GCN GPU has to be around 50% idle at all times, otherwise you would not get double the FPS.

You haven't understood the article, the video, or the graphs if you think like that.

It ain't idle. It's the rendering pipeline: if graphics is waiting for compute in front of it to finish before it can proceed, that increases latency and lowers performance.

Think of a road. It's got cars and bikes on it that are delaying each other from getting to their destinations.

What happens if you move the bikes to their own road, running in parallel?

Oh, if anyone has played Cities: Skylines... a good road layout with good parallelism is a key factor in preventing traffic jams in that game. ;)
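In D3D12 terms, the "second road" is roughly a separate compute queue, with a fence acting as the traffic light only where the two roads actually cross. A minimal sketch, assuming the queues, fence, and command lists already exist (illustrative only):

```cpp
// Minimal sketch of the "second road" in D3D12. Assumes the queues, fence,
// and command lists already exist; error handling omitted.
#include <d3d12.h>

void SubmitFrame(ID3D12CommandQueue* graphicsQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12Fence* fence, UINT64 frameFence,
                 ID3D12CommandList* computeWork,
                 ID3D12CommandList* independentGfx,
                 ID3D12CommandList* dependentGfx)
{
    // "Bikes": compute work goes down its own road.
    computeQueue->ExecuteCommandLists(1, &computeWork);
    computeQueue->Signal(fence, frameFence);

    // "Cars": graphics that doesn't need the compute result keeps moving.
    graphicsQueue->ExecuteCommandLists(1, &independentGfx);

    // Only the pass that consumes the compute output waits, and the wait
    // happens on the GPU timeline; the CPU is never blocked here.
    graphicsQueue->Wait(fence, frameFence);
    graphicsQueue->ExecuteCommandLists(1, &dependentGfx);
}
```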
 

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
In terms of real impact to end users in the next year or two, there may be no real impact. However, for current Maxwell 2 owners or prospective 980/980 Ti buyers, perceived value is going to be a concern. Current Maxwell owners have to ask whether they should sell now to maximize resale value if the lack of async compute is confirmed and lowers resale value because of it. Prospective buyers have to ask whether they should even look at a 980 or 980 Ti knowing future DX12 performance may not be there. Stupid as it seems, perceived value will sometimes overrule real value.

This...

I just bought a GeForce GTX 960 (thankfully not a 980 or anything really expensive) for my HTPC and feel dirty about it. Remember, *any* game developed using DX12 will benefit GCN owners. To what degree is the question, but this is a big blow to Nvidia's image. I guess AMD won the long con by developing Mantle, convincing MS to develop an enhanced DX12 (and spawning Vulkan), and getting design wins for the consoles.

Well played, AMD!
 

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
These are coming soon, 02/2016:

Mirror's Edge:
http://www.vcpost.com/articles/8717...ion-technologies-glass-city.htm#ixzz3kSkxnueB

Tomb Raider:
http://gearnuke.com/rise-of-the-tom...breathtaking-volumetric-lighting-on-xbox-one/

Deus Ex & Hitman (same engine / Square Enix):
http://gearnuke.com/deus-ex-mankind-divided-use-async-compute-enhance-pure-hair-simulation/

https://www.youtube.com/watch?v=D-epev7cT30

Fable (not sure about release date):
https://youtu.be/7MEgJLvoP2U?t=20m47s

Also, all the new MS games will be DX12 as part of their initiative to combine the Xbox One and Windows 10 ecosystems. Arma 3 and DayZ are moving to DX12 too. ARK is due for a DX12 patch; the performance is horrendous, so they need it.

I read somewhere that Mark Cerny stated 2016 will be when we really see the benefits of GCN take off. Looks like he's right on track.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
I just bought a GeForce GTX 960 (thankfully not a 980 or anything really expensive) for my HTPC and feel dirty about it.

Why? For HTPC use, you really can't do much better. No other currently available card (except the GTX 950, based on the same GPU) has both 8-bit and 10-bit HEVC hardware decoding, and the AMD cards (even Fiji) don't yet support HDMI 2.0. For an HTPC, these are much more important considerations than an FPS advantage in gaming.

I wonder if madVR will be able to get any advantage out of DX12 or if it's already pushing the GPU as hard as it can.
 

Shivansps

Diamond Member
Sep 11, 2013
3,918
1,570
136
You were claiming bias because they don't support any nVidia-only features. Even before there was DX12, they supported it. And as we've seen, supporting DX12 doesn't mean anything as far as performance goes. We've seen the DX11 path be faster for nVidia. If they didn't offer multithreaded rendering, this wouldn't be the case. Imagine how bad they could make it look for nVidia if they tried, as you insinuated.

The negative scaling on DX12 is just a bad implementation by Oxide; there is just no way that should happen. Worst-case scenario, you are doing compute + graphics in serial like in DX11; it can't perform worse.
Also, DX11 MT support is limited, even with command lists, and since all DX11 Nvidia cards support DX12, that is just irrelevant. You think it's good just because Nvidia is "somehow" slower on DX12 than on DX11.

What about implementing some other major features of DX12, like ROVs? Strange, since they apparently don't mind wasting time on a feature that is unsupported by most DX12 PC hardware.

Also, they did not even mention the possibility of using a DX12 IGP for these compute tasks.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
The negative scaling on DX12 is just a bad implementation by Oxide; there is just no way that should happen. Worst-case scenario, you are doing compute + graphics in serial like in DX11; it can't perform worse.
Also, DX11 MT support is limited, even with command lists, and since all DX11 Nvidia cards support DX12, that is just irrelevant. You think it's good just because Nvidia is "somehow" slower on DX12 than on DX11.

What about implementing some other major features of DX12, like ROVs? Strange, since they apparently don't mind wasting time on a feature that is unsupported by most DX12 PC hardware.

Also, they did not even mention the possibility of using a DX12 IGP for these compute tasks.

You're putting words in my mouth. I simply answered the question posted. There has also been no evidence of bias; that's just sour grapes. nVidia wanted some code put in that improved their performance, and it was.
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
You haven't understood the article, the video, or the graphs if you think like that.

It ain't idle. It's the rendering pipeline: if graphics is waiting for compute in front of it to finish before it can proceed, that increases latency and lowers performance.

Think of a road. It's got cars and bikes on it that are delaying each other from getting to their destinations.

What happens if you move the bikes to their own road, running in parallel?

Oh, if anyone has played Cities: Skylines... a good road layout with good parallelism is a key factor in preventing traffic jams in that game. ;)

1. (AMD only)
Where do you magically get a second road from?
This second road is a part of the GPU that is doing nothing; in AMD's case a second pipeline (an ACE), and that's why it can accept the "biker" workload at all.
2. (NVIDIA)
If you only have one pipeline then there is nowhere else to go, no second road; you can only prioritize to make the traffic go as fast as possible, but it is (almost) always as fast as possible already.
3. BOTH (and Intel too)
You still have a lot of parts of the GPU that don't do anything, like the copy engines or texture compression, so you will still get some performance gains out of that (see the copy-queue sketch at the end of this post).

Claiming that GCN gets double the performance from 3. alone, while until now the biggest gain from Mantle or on the consoles was around 30%* and not 100%, is pretty out there.

* DICE got BF4 to a 58% gain by using a lot of CPU power with an i7-3970X Extreme and its 12 logical cores, but in Ashes you get almost double the FPS (almost 100%) even with an i3, so obviously it's not the typical Mantle/DX12 behavior.
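To illustrate the copy-engine part of point 3: in D3D12 a dedicated copy queue lets uploads run on the DMA engines while the direct queue keeps rendering. A minimal sketch, assuming an initialized device (illustrative, not tied to any particular game):

```cpp
// Minimal sketch: a dedicated copy queue in D3D12. Assumes an initialized
// ID3D12Device; the actual upload recording is omitted. Copies executed on
// this queue use the DMA/copy engines and can overlap rendering work that
// is running on the direct (graphics) queue.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandQueue> CreateCopyQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COPY;   // copy engine only
    ComPtr<ID3D12CommandQueue> copyQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&copyQueue));
    // Texture/buffer uploads recorded into a COPY command list and executed
    // here overlap the direct queue; a fence signals when the data is ready.
    return copyQueue;
}
```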
 

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
Why? For HTPC use, you really can't do much better. No other currently available card (except the GTX 950, based on the same GPU) has both 8-bit and 10-bit HEVC hardware decoding, and the AMD cards (even Fiji) don't yet support HDMI 2.0. For an HTPC, these are much more important considerations than an FPS advantage in gaming.

I wonder if madVR will be able to get any advantage out of DX12 or if it's already pushing the GPU as hard as it can.

Good points, and that's the original reason I bought the card, but it's bothersome knowing the card will likely be held back by a number of DX12 titles. Oh well, can't have it all until the next-gen cards, I guess.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,576
126
Good points, and that's the original reason I bought the card, but it's bothersome knowing the card will likely be held back by a number of DX12 titles. Oh well, can't have it all until the next-gen cards, I guess.

There's no evidence that your 960 will be held back by any DX12 titles.

There's no reason at this point to prefer one card mfg for DX12 titles.
 

DooKey

Golden Member
Nov 9, 2005
1,811
458
136
Video card company fans are lame.....

This crap really doesn't matter at this point except to partisans. Look in this thread, note who they are, and then I recommend ignoring them in the future.
 

Eymar

Golden Member
Aug 30, 2001
1,646
14
91
I have a very similar system, DooKey (just a different CPU, a 5930K, and a 512GB Samsung SSD). No worries about the Titan X not supporting async compute? I mean, an up to 30% performance boost from software seems like a worthwhile topic, but this thread may be running in circles. I'm following this one just to see if Nvidia makes an official response. However, I guess most people with Titan Xs will probably buy Pascal.
 

DooKey

Golden Member
Nov 9, 2005
1,811
458
136
I have a very similar system, DooKey (just a different CPU, a 5930K, and a 512GB Samsung SSD). No worries about the Titan X not supporting async compute? I mean, an up to 30% performance boost from software seems like a worthwhile topic, but this thread may be running in circles. I'm following this one just to see if Nvidia makes an official response. However, I guess most people with Titan Xs will probably buy Pascal.

I'm not worried. I'll buy the next-gen cards when they come out, and I'm sure I'll be just fine. If AMD has the lead, then I'll buy AMD, because I just want to play games regardless of the video card company. Anyway, I just like new stuff when it comes out, and I figure most folks who are gaming enthusiasts will buy into the next gen so they can play games well.

Also, unlike some who preach performance/$ and then hold onto three-year-old cards (and recommend company X while not buying from company X, whom they believe needs financial help), I'll continue to ride the cutting edge regardless of the brand.
 

Eymar

Golden Member
Aug 30, 2001
1,646
14
91
I have a similar point of view; I'm just curious if other Maxwell 2 owners are even a bit worried. My next itch is VR, and this topic is somewhat related, so I will slog through.
 

Black96ws6

Member
Mar 16, 2011
140
0
0
The developer just commented on this:
Regarding async compute, a couple of points on this. First, though we are the first D3D12 title, I wouldn't hold us up as the prime example of this feature. There are probably better demonstrations of it. This is a pretty complex topic, and to fully understand it will require significant understanding of the particular GPU in question that only an IHV can provide. I certainly wouldn't hold Ashes up as the premier example of this feature.

We actually just chatted with Nvidia about Async Compute, indeed the driver hasn't fully implemented it yet, but it appeared like it was. We are working closely with them as they fully implement Async Compute. We'll keep everyone posted as we learn more.

Would be hilarious if everyone got all worked up for nothing lol.
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
I can say that AC in Maxwell 2 is software-driven, not fully hardware. Why didn't nVidia just admit to this in the first place?
 

Spjut

Senior member
Apr 9, 2011
932
162
106
That'd indeed be good news for Maxwell 2 users. I hope there's something Nvidia can do for us Kepler owners as well; I don't like the prospect of Kepler aging faster. :(
 

caswow

Senior member
Sep 18, 2013
525
136
116
That'd indeed be good news for Maxwell 2 users. I hope there's something Nvidia can do for us Kepler owners as well; I don't like the prospect of Kepler aging faster. :(

Don't get hyped up too much ;)
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
There's no evidence that your 960 will be held back by any DX12 titles.

There's no reason at this point to prefer one card mfg for DX12 titles.

I'm not sure what you call "no evidence". There's plenty of evidence that DX12 is better suited for GCN than either Kepler or Maxwell. What there is no evidence of is that Pascal will be better than GCN. Wishful thinking is all that people have so far.