ComputerBase: Ashes of the Singularity Beta 1 DirectX 12 Benchmarks


sontin

Diamond Member
Sep 12, 2011
3,273
149
106
I'm sure NV knew about DX12 and Async Compute, but they played the DX11 cards for maximum profits, as I'm sure they knew DX12 was not going to be released before 2015-16.

This is the best way to gain higher profits, but that doesn't make you a technology leader and innovator. And no matter what people believe, NVIDIA has not been a technology leader for the last 3-4 years. They have stuck to DX11, they created GameWorks in response to Mantle, they will simply adopt HBM2, and they will still be second in moving to the 14/16nm process.

I will give them the profit award any time, but the technology leader and innovator award goes to AMD this round.

Nobody is a leader when he is second. AMD can't beat NVIDIA in Ashes or Fable: Legends.

And why would AMD be a "technology" leader when they are much slower with tessellation and don't support CR, ROV or Tiled Resources Tier 3? :eek:
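
For the record, those are optional Direct3D 12 features that a developer can query at runtime. A minimal sketch of such a query, assuming an already-created ID3D12Device (the helper name is mine, not from any real engine):

```cpp
#include <d3d12.h>
#include <cstdio>

// Query the optional D3D12 features mentioned above: Conservative
// Rasterization (CR), Rasterizer Ordered Views (ROV) and Tiled Resources.
// 'device' is assumed to be an already-created ID3D12Device.
void PrintOptionalCaps(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts))))
    {
        printf("Conservative Rasterization tier: %d\n", opts.ConservativeRasterizationTier);
        printf("ROVs supported: %d\n", opts.ROVsSupported);
        printf("Tiled Resources tier: %d\n", opts.TiledResourcesTier);
    }
}
```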
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136

Your link

02.02.2016 at 18:05


My link

New Patch v1.0 build 610.1_64


Note that on February 5th a new patch for this game was released which incorporates new graphics features and image quality improvements. We are using this patch in our testing today. Therefore, all performance today is under the brand new v1.0 build 610.1_64 February 5th patch.

Also, Patch 2 was released on February 12, 2016.

So no, Rise of the Tomb Raider doesn't run better on NV cards.
 

Pinstripe

Member
Jun 17, 2014
197
12
81
Your link

02.02.2016 at 18:05


My link



Also, Patch 2 was released on February 12, 2016.

So no, Rise of the Tomb Raider doesn't run better on NV cards.

Yes it does, at 1080p, the most relevant resolution on the market. Even in your link the GTX 960 beats an R9 380X.

As I said, Nvidia is the market share winner.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
Yes it does, at 1080p, the most relevant resolution on the market. Even in your link the GTX 960 beats an R9 380X.

As I said, Nvidia is the market share winner.

Did you even look at the settings used in the 960 vs 380X comparison? Did you not scroll down and take another look?

Not really sure how you came to the conclusion that NVidia wins.
 

Glo.

Diamond Member
Apr 25, 2015
5,696
4,540
136
To see which company has better GPUs, look at a wider range of GPUs within a specific performance bracket.

Compare the GTX 980 to the R9 390X at 1440p, the GTX 970 to the R9 390 at the same resolution, the GTX 960 to the R9 380 at 1080p, and the GTX 980 Ti to the Fury X at 4K. The TechPowerUp suite is a good example here.

In every one of those scenarios the AMD GPUs are faster in a much higher number of games.
The R9 380 is faster than the GTX 960 at 1080p in 12 out of 15 games in the TechPowerUp suite.
The R9 390 is faster than the GTX 970 in 13 out of 15 games in the same suite at 1440p.
The R9 390X is faster than the GTX 980 in 9 out of 15 games in the TPU review suite at 1440p.
The Fury X is faster than the GTX 980 Ti in 9 out of 15 games at 4K.

All of the GPUs were reference models.

It is extremely off-topic.
 

Pinstripe

Member
Jun 17, 2014
197
12
81
Too little, too late. At launch Nvidia was trouncing AMD, and that's what most (read: uninformed) customers look at, and that sticks with them. Redeeming the performance a couple of weeks later isn't going to win AMD this fierce game of market share. Unfortunate and "wrong", but that's the nature of competitiveness. Competition = Conflict = Monopolization. Nvidia is nearly there.
 

Glo.

Diamond Member
Apr 25, 2015
5,696
4,540
136
Too little, too late. At launch Nvidia was trouncing AMD, and that's what most (read: uninformed) customers look at, and that sticks with them. Redeeming the performance a couple of weeks later isn't going to win AMD this fierce game of market share. Unfortunate and "wrong", but that's the nature of competitiveness. Competition = Conflict = Monopolization. Nvidia is nearly there.

Excuse me, but this point is plain stupidity. I am not buying a GPU for one week. I buy a GPU to serve me for a long time, and if over that time the GPU matures well, that makes it better, regardless of first impressions.
 

Pinstripe

Member
Jun 17, 2014
197
12
81
Excuse me, but this point is plain stupidity. I am not buying a GPU for one week. I buy a GPU to serve me for a long time, and if over that time the GPU matures well, that makes it better, regardless of first impressions.

Excuse me, but exaggerated and arrogant statements like "a GPU for one week" aren't serving your argument well, either.

Besides, the world economy is teetering on the brink of recession. In such an environment, the one with deep pockets (Nvidia) will be able to keep investing in its ecosystem, while AMD, despite their supposed technological leadership, will keep hemorrhaging money to the point where they get crippled. What's the point of owning a product from a company that is hanging by a thread?
 

maddie

Diamond Member
Jul 18, 2010
4,738
4,666
136
Too little, too late. At launch Nvidia was trouncing AMD, and that's what most (read: uninformed) customers look at, and that sticks with them. Redeeming the performance a couple of weeks later isn't going to win AMD this fierce game of market share. Unfortunate and "wrong", but that's the nature of competitiveness. Competition = Conflict = Monopolization. Nvidia is nearly there.
If you were giving purchasing advice to someone trying to decide between these cards, which would you say are the better cards?

380X vs 960
970 vs 390
 

Glo.

Diamond Member
Apr 25, 2015
5,696
4,540
136
And with deep pockets they will still not be able to deliver a properly functional architecture (Asynchronous Compute) for the modern world (DX12, VR).

I was not referring to what you wrote, but to the stupidity of being uninformed. Of being without imagination.
 

Pinstripe

Member
Jun 17, 2014
197
12
81
If you were giving purchasing advice to someone trying to decide between these cards, which would you say are the better cards?

380X vs 960
970 vs 390

I'd take the R9 380X over the GTX 960 because the 960 is a piece of shit.

I'd take the GTX 970 over the R9 390 because I had both (well, an R9 290 actually) and I prefer the 970.
 

maddie

Diamond Member
Jul 18, 2010
4,738
4,666
136
Excuse me, but exaggerated and arrogant statements like "a GPU for one week" aren't serving your argument well, either.

Besides, the world economy is teetering on the brink of recession. In such an environment, the one with deep pockets (Nvidia) will be able to keep investing in its ecosystem, while AMD, despite their supposed technological leadership, will keep hemorrhaging money to the point where they get crippled. What's the point of owning a product from a company that is hanging by a thread?
That is one point of view, but I would assume Nvidia's share price would be affected vastly more than AMD's, which is pretty much at the bottom already.

In my opinion AMD has made a concerted effort, by necessity, to survive on much smaller revenue streams.
 

Pinstripe

Member
Jun 17, 2014
197
12
81
And with deep pockets they will still not be able to deliver a properly functional architecture (Asynchronous Compute) for the modern world (DX12, VR).

I was not referring to what you wrote, but to the stupidity of being uninformed. Of being without imagination.

Does it matter? The industry follows the money, not innovation. Innovation is just a side effect of profit-making.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Yes it does, at 1080p, the most relevant resolution on the market. Even in your link the GTX 960 beats an R9 380X.

As I said, Nvidia is the market share winner.

Did you even look at the settings used in the 960 vs 380X comparison? Did you not scroll down and take another look?

Not really sure how you came to the conclusion that NVidia wins.

Just scroll down to the "apples-to-apples" comparison :rolleyes:

That is how Nvidia wins... In the eyes of the general public NV is faster in this test. No wonder the market share is the way it is...

Also, this too-little-too-late argument... I do not recall those same people calling Kepler that. Quite the opposite: I remember a lot of people waiting for months for memory-crippled, compute-castrated Nvidia GPUs, standing in line to get their Kepler SLI upgrade.
 

Glo.

Diamond Member
Apr 25, 2015
5,696
4,540
136
Does it matter? The industry follows the money, not innovation. Innovation is just a side effect of profit-making.

Yes, the industry follows the money. And if the money lies in VR and in DirectX 12, which hardware will they pick for it? Which hardware is better for it?
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
Does it matter? The industry follows the money, not innovation. Innovation is just a side effect of profit-making.

For those of us who don't run with the herd, I would think it would.

By your lack of logic, it's pointless to innovate if company X has all the money.
 

Pinstripe

Member
Jun 17, 2014
197
12
81
Yes, the industry follows the money. And if the money lies in VR and in DirectX 12, which hardware will they pick for it? Which hardware is better for it?

By the time VR truly takes off, Nvidia will already have Volta or Einstein on the market, while the remnant of AMD will be making datacenter chips for its new owner, a Chinese conglomerate.
 

96Firebird

Diamond Member
Nov 8, 2010
5,711
316
126
Yes, the industry follows the money. And if the money lies in VR and in DirectX 12, which hardware will they pick for it? Which hardware is better for it?

The only person who can answer this must also hold patents for a time machine. :sneaky:
 
Feb 19, 2009
10,457
10
76
That's exactly what I said. As well as the AMD white paper.
GCN can't use all the available shaders with DX11/one queue while NVIDIA can, and that's why GCN gains a lot from async while NVIDIA doesn't.

You actually still do not understand.

Just watch the video, okay?

https://www.youtube.com/watch?v=H1L4iLIU9xU&feature=youtu.be&t=14m48s

NV cannot utilize all of their chip on DX11 either, because its serial nature means shaders are idling while copy queues are processed.

Shaders are also idling while shadowmaps are rendered, because that work mainly exercises the front end and leaves the shader cores unused.

It's similar for other post-processing effects.

It is the flaw of serial processing, inherent to all DX11 rendering.

AMD's biggest flaw was that they did not make DX11 multi-threaded rendering work the way NV did. So they can be CPU-bottlenecked if there are a lot of draw calls on the main rendering thread. NV's multi-threaded DX11 driver can use cores 2/3/4 to submit tasks to the GPU and bypass the bottleneck.
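
As a rough illustration of that model: in DX11, worker threads can record commands into deferred contexts, but everything is still played back through the single immediate context. A minimal sketch, assuming an already-created device and immediate context (function names are mine, not from any driver or engine):

```cpp
#include <d3d11.h>

// A worker thread records draw calls into its own deferred context,
// producing a command list. 'device' is an already-created ID3D11Device.
void RecordOnWorkerThread(ID3D11Device* device, ID3D11CommandList** outCmdList)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // ... record draw calls on 'deferred' from this worker thread ...

    deferred->FinishCommandList(FALSE, outCmdList);
    deferred->Release();
}

// The main thread submits the recorded work; playback is still serialized
// through the one immediate context, which is where DX11 bottlenecks.
void SubmitOnMainThread(ID3D11DeviceContext* immediate, ID3D11CommandList* cmdList)
{
    immediate->ExecuteCommandList(cmdList, FALSE);
    cmdList->Release();
}
```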

This is a separate issue from DX12's Async Shading and the ability to have separate graphics, compute and copy tasks running in parallel.
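
In DX12 those task types map to explicit, independent queues. A minimal sketch of creating one of each, assuming an existing ID3D12Device (error handling omitted; the helper name is illustrative):

```cpp
#include <d3d12.h>

// DX12 exposes separate queue types, so graphics, compute and copy work
// can be submitted independently and overlap on capable hardware instead
// of being serialized as under DX11. 'device' is an existing ID3D12Device.
void CreateQueues(ID3D12Device* device,
                  ID3D12CommandQueue** gfx,
                  ID3D12CommandQueue** compute,
                  ID3D12CommandQueue** copy)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};

    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics (can also do compute/copy)
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(gfx));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // the "async compute" queue
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(compute));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COPY;     // DMA/copy queue
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(copy));
}
```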

Seriously, just watch the video; it explains it really well in simple terms. If you still don't get it after that, there's no helping you.