Ashes of the Singularity User Benchmarks Thread

Page 37

DooKey

Golden Member
Nov 9, 2005
1,811
458
136
I'm not sure what you call "no evidence". There's plenty of evidence that DX12 is better suited for GCN than either Kepler or Maxwell. What there is no evidence of is that Pascal will be better than GCN. Wishful thinking is all that people have so far.

Yeah, that wishful thinking thing.......video card company fans are great at it aren't they?
 

Magee_MC

Senior member
Jan 18, 2010
217
13
81

It seems very relevant. AMD looks to have played a long game, and whether or not all of the pieces were intentionally put into play, they do all seem to be coming together for AMD in 2016. This is the picture I've been able to piece together from all of the different discussions on these subjects.

GCN architecture with hardware ACEs - faster AC/more efficient parallel compute processing

Getting GCN into the consoles - weaker processors - more need to take advantage of low level API abilities like AC in order to maximize efficiency

Mantle/Vulkan/Metal/D3D12/XBOne/PS4 - All of the next-gen APIs are derived from Mantle or are extremely Mantle-like, and play to the advantages of the GCN architecture.

Devs are now becoming accustomed to coding their games to take advantage of the strengths of the low level APIs in the console, which will help them to use DX12 to extract maximum performance from AMD's GCN DGPUs.

With AMD_Robert on Reddit saying,
"You will find that the vast majority of DX12 titles in 2015/2016 are partnering with AMD. Mantle taught the development world how to work with a low-level API, the consoles use AMD and low-level APIs, and now those seeds are bearing fruit."
https://www.reddit.com/r/AdvancedMi...ide_games_made_a_post_discussing_dx12/cuom7cc

it seems that AMD may be coming into a GW-like position to help devs extract maximum efficiency from GCN and the new APIs.

It also looks like NVIDIA does support AC on Maxwell, but with a software/driver implementation that, while functional, is less efficient, has increased latency, and uses CPU resources to function. By comparison, AMD's hardware implementation of AC in the GPU is faster and has less latency, but is more power intensive than NV's software implementation.

This makes me wonder how much of the efficiency difference between GCN and Maxwell is due to AMD's use of hardware (ACEs) to implement AC as opposed to NV's use of software/drivers.
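To make "AC" concrete, here is a minimal D3D12 sketch (not from any real engine; the device, command lists, and fence are assumed to exist already, and all names are placeholders) of what a game does to use it: submit work on separate direct (graphics) and compute queues and synchronize them with a fence. Whether that compute work actually overlaps the graphics work, and at what latency cost, is exactly the hardware-ACE versus software/driver question above.

Code:
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch only: assumes 'device', the recorded command lists and the fence were
// created elsewhere. In a real engine the queues are created once at startup.
void SubmitAsyncCompute(ID3D12Device* device,
                        ID3D12GraphicsCommandList* gfxList,
                        ID3D12GraphicsCommandList* computeList,
                        ID3D12Fence* fence, UINT64& fenceValue)
{
    // DX12 exposes separate graphics (direct) and compute queues; whether
    // submissions on them truly run concurrently is up to the GPU/driver.
    D3D12_COMMAND_QUEUE_DESC qd = {};
    qd.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&qd, IID_PPV_ARGS(&gfxQueue));

    qd.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&qd, IID_PPV_ARGS(&computeQueue));

    // Kick off compute work (e.g. lighting or post-processing) alongside rendering.
    ID3D12CommandList* compute[] = { computeList };
    computeQueue->ExecuteCommandLists(1, compute);

    ID3D12CommandList* gfx[] = { gfxList };
    gfxQueue->ExecuteCommandLists(1, gfx);

    // Cross-queue sync: graphics waits for the compute results before using them.
    computeQueue->Signal(fence, ++fenceValue);
    gfxQueue->Wait(fence, fenceValue);
}

On GCN the ACEs pick that compute queue up in hardware; on Maxwell, going by the discussion above, the scheduling of it appears to fall back to the driver and CPU.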

AMD also has an advantage in the coming VR systems from its lower-latency AC on GCN and its LiquidVR software, which is Mantle in a different form.

Another consideration is whether AMD will be able to leverage their APUs to assist their DGPUs in the future.

All of these point to AMD being in a very strong position to benefit from their strengths: GCN in consoles/DGPUs/APUs, low-level APIs designed for GCN and GCN-type architectures, and a hardware implementation of AC with lower latency, which leads to better gaming and VR experiences.

On the other side of the picture, NV has several significant challenges coming up with regard to its upcoming architectures. Pascal will be a challenge not only because of the node shrink, but also because it will be NV's first crack at implementing HBM.

To add to this, either Pascal has moved to a hardware implementation of AC, which will be another new technology NV needs to get right, or they have stayed with the software/driver method from Maxwell and Pascal is more or less an improved, smaller version of Maxwell.

If it's the first, then NV could experience growing pains with any or all of the new technologies, and by moving to a more GCN-like architecture they may lose some of their power-efficiency advantage over GCN. If they haven't incorporated hardware AC, then they will need Volta with hardware AC as soon as possible.

The final piece of the picture is that Pascal will be competing with AMD's Arctic Islands, which will probably incorporate improvements in all of these technologies based on AMD's experience, while NV will be working through many of them for the first time.

/RS mode :biggrin:
 
Feb 19, 2009
10,457
10
76
@Magee_MC

It's why I posted a while ago that I was excited about a Zen HBM2 APU + DX12 multi-adapter.

Can you imagine a powerful APU with an Arctic Islands iGPU on it, executing rendering in parallel with the dGPU?

There's been a lot of doom & gloom for the past year, but I saw beyond that. I saw their potential coming together with next-gen APIs, HBM-powered GPUs and APUs... even the Nintendo NX win is a big deal because it keeps them afloat while their long-term strategy falls into place.
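For the multi-adapter part, here is a rough sketch of the first step under DX12 explicit multi-adapter (standard DXGI/D3D12 calls; the function name is just a placeholder): the game enumerates every adapter in the system, so an APU's iGPU and a dGPU both show up and can each get their own D3D12 device.

Code:
#include <windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
using Microsoft::WRL::ComPtr;

// Sketch: enumerate all hardware adapters (iGPU + dGPU) and create a D3D12
// device on each. How the frame is split between them afterwards is entirely
// up to the engine. Link with d3d12.lib and dxgi.lib.
std::vector<ComPtr<ID3D12Device>> CreateAllDevices()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the WARP/software adapter

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}

The split could be as simple as handing post-processing to the APU's iGPU while the dGPU renders the main scene, which is the kind of pairing being imagined here.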
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
@Magee_MC

It's why I posted a while ago that I was excited about a Zen HBM2 APU + DX12 multi-adapter.

Can you imagine a powerful APU with an Arctic Islands iGPU on it, executing rendering in parallel with the dGPU?

There's been a lot of doom & gloom for the past year, but I saw beyond that. I saw their potential coming together with next-gen APIs, HBM-powered GPUs and APUs... even the Nintendo NX win is a big deal because it keeps them afloat while their long-term strategy falls into place.

Hopefully the NX is more like the Wii and less like the Wii U, for both Nintendo's and AMD's sake.
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
If Nvidia has to put in a hardware scheduler to properly support DX12, their power consumption might not be pretty at all, considering they don't have this in big Maxwell and STILL end up near GCN power consumption at the high end. Major hardware changes might not go smoothly. AMD hasn't been compromising, so they might have a smoother transition: they already have the hardware features and even experience with HBM.

Next year will be fun.
 

Goatsecks

Senior member
May 7, 2012
210
7
76
http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/2130#post_24379702

We actually just chatted with Nvidia about Async Compute, indeed the driver hasn't fully implemented it yet, but it appeared like it was. We are working closely with them as they fully implement Async Compute. We'll keep everyone posted as we learn more.

I am sure all the 'men of science and citations' will work quickly to show that this is nonsense or, at least, that their previous conclusions are still sound.

The first DX12 game, in alpha, looks like its benchmark results could be unreliable? Who would have thought! :whistle:
 

caswow

Senior member
Sep 18, 2013
525
136
116
I am sure all the 'men of science and citations' will work quickly to show that this is nonsense or, at least, that their previous conclusions are still sound.

The first DX12 game, in alpha, looks like its benchmark results could be unreliable? Who would have thought! :whistle:

Trust me, Nvidia will eat their own words on this one ;)
 

Spanners

Senior member
Mar 16, 2014
325
1
0
http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/2130#post_24379702



Looks like Oxide & AMD's FUD and blatant lies blew up in their own faces.

Sorry, are we reading different links?

Do you have first-hand knowledge of Nvidia's Async Compute capabilities at a hardware level? If not, then until Nvidia comes out with something official to clarify this, you're just choosing the second-hand information that suits your narrative. Quite hilariously, from the same people you just called blatant liars and peddlers of FUD.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
54
91
Why won't you people settle down? Silverforce, I warned you about blindly following the hype and propagating the hype. What if Oxide's statements are true? What if they are false? Would it not just be wiser to say NOTHING at this point and wait for REAL data?
You chastise a member for not taking oxide's word when it suits them. Fine, but don't you see you are doing the exact same? Just in the other direction?

When will this stop.... I wonder.

Also, yeah, nvgpu, your comment is not reflected in the link you gave. All it shows is that Oxide has talked with Nvidia, and Nvidia states they are in the process of fully implementing Async. Datz it.
 
Feb 19, 2009
10,457
10
76
Why won't you people settle down? Silverforce, I warned you about blindly following the hype and propagating the hype. What if Oxide's statements are true? What if they are false? Would it not just be wiser to say NOTHING at this point and wait for REAL data?
You chastise a member for not taking oxide's word when it suits them. Fine, but don't you see you are doing the exact same? Just in the other direction?

When will this stop.... I wonder.

Also, yeah, nvgpu, your comment is not reflected in the link you gave. All it shows is that Oxide has talked with Nvidia, and Nvidia states they are in the process of fully implementing Async. Datz it.

Mate, nowhere does Oxide say anything about hardware async compute in that quote, so I corrected nvgpu; he's either trolling with a source he doesn't understand or deliberately misleading.

I have never misrepresented Oxide or AMD when I link them as a source; it's in plain English.

The current issue is quite clear: NV software-emulates part of the process and their drivers are borked (if you believe Oxide). So now the ball is in NV's court.
 

iiiankiii

Senior member
Apr 4, 2008
759
47
91
Why won't you people settle down? Silverforce, I warned you about blindly following the hype and propagating the hype. What if Oxide's statements are true? What if they are false? Would it not just be wiser to say NOTHING at this point and wait for REAL data?
You chastise a member for not taking oxide's word when it suits them. Fine, but don't you see you are doing the exact same? Just in the other direction?

When will this stop.... I wonder.

Also, yeah, nvgpu, your comment is not reflected in the link you gave. All it shows is that Oxide has talked with Nvidia, and Nvidia states they are in the process of fully implementing Async. Datz it.

Easy. Nvidia should just make a statement one way or the other instead of letting this snowball like this. Nvidia can stop this by just releasing a statement. But, noooo.... They have to remain silent about the whole thing. Therefore, the community has no choice but to put the pieces together and try to figure it out. That's what's happening right now.

It's actually not wiser to say nothing, because it would do the community an injustice. Instead of allowing this issue to be swept under the rug, the community is exposing it one way or another, just like the 3.5GB crap. If there's no pushback from the community, Nvidia might not be inclined to give us a straight answer.

Like I said, Nvidia can easily put an end to this. But, right now, they're not telling us anything.
 

VR Enthusiast

Member
Jul 5, 2015
133
1
0
Why won't you people settle down? Silverforce, I warned you about blindly following the hype and propagating the hype.

You still didn't answer my question about market share, so now would be a good time to repeat it, I guess. :)

Show me in this thread where anybody isn't talking about dGPUs as the primary focus of the discussion. Unless I missed it, any and all charts posted in this thread are dGPUs only, which is where Nvidia DOES have 80% market share.
Not the entire Gaming market. Nobody is showing charts for APUs or IGPs or Consoles in this thread. So your eyeroll is sorely misguided.

Do you have a breakdown of Nvidia's sales by graphics card? I'm sure that most sales are slow pre-Maxwell cards sold in OEM machines from Dell and HP that still count as discrete cards. So even if Nvidia has 80% discrete market share, that number means nothing if 80% of the cards they sell are not faster than APUs.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
It's funny that some of you attack Oxide's credibility when they pointed out the flaws, and now you quote them to justify NV's crap drivers, which don't even function for a feature they claim to support and which turns out to be a software solution. Classy.

What difference does it make at this point? Are there any DX12 games available right now? AMD had a head start on NVidia driver wise because of Mantle, which is similar to DX12.

Also, the only thing being done in software is the scheduling, which is actually more power efficient than having hardware schedulers like AMD does.

NVidia have some of the best, if not the best, driver engineers in the world, and so they have the confidence to implement such a thing in software rather than using transistors for it. We'll see how effective it is when the driver is released.

As for AMD, they have invested heavily in AC to the point of having multiple hardware schedulers, so they're hoping it will pay big dividends in the long run.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Also, the only thing being done in software is the scheduling, which is actually more power efficient than having hardware schedulers like AMD does.

It may be more power efficient, but it also (potentially) comes with significant latency penalties, hence the "potentially catastrophic" remark from Oculus.
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
What difference does it make at this point? Are there any DX12 games available right now? AMD had a head start on NVidia driver wise because of Mantle, which is similar to DX12.

Also, the only thing being done in software is the scheduling, which is actually more power efficient than having hardware schedulers like AMD does.

NVidia have some of the best, if not the best, driver engineers in the world, and so they have the confidence to implement such a thing in software rather than using transistors for it. We'll see how effective it is when the driver is released.

As for AMD, they have invested heavily in AC to the point of having multiple hardware schedulers, so they're hoping it will pay big dividends in the long run.

More efficient if measuring GPU power only and not total system power. If this is true, then it just shifted power draw from the GPU to the CPU and in the process became less flexible. Doesn't sound like a good trade-off to me, but they made bank on it, so maybe I don't know anything.