Ashes of the Singularity User Benchmarks Thread


thesmokingman

Platinum Member
May 6, 2010
Scheduling in software, i.e. using the CPU, defeats the point of doing it out of order. The point is to use free CPU cycles to do other things, but that won't happen if the CPU is now taking on the task of scheduling.
 

RussianSensation

Elite Member
Sep 5, 2003
This is great. You can't not laugh lol:

https://youtu.be/Dnn0rgDaSro

Haha, thanks for that. They missed 2 key points: (1) NV screwing over Titan X owners with an after-market 980 Ti that beat the Titan X out of the box for $650-700, barely months after the TX came out, and the after-market 980 Ti cards have better components and coolers too; (2) mobile dGPU overclocking is a bug. <That's perhaps the most hilarious stunt of 2015.

Only NV could lie about the GTX 970's specs and mobile dGPU overclocking, gimp Kepler, rip TX owners off, lie about AC shader capabilities for DX12, offer atrocious price/performance in low-end cards like the 750 Ti/950/960, and still manage to gain market share quarter after quarter. Holy cow, not even Apple loyalists would likely put up with this type of treatment.

The media is covering this worldwide. Even PCPerspective is covering it, but HardOCP and TechReport haven't posted a single article on this. :cool:

What difference does it make at this point? Are there any DX12 games available right now? AMD had a head start on NVidia driver wise because of Mantle, which is similar to DX12.

In practice it likely won't matter that much, but how many PC gamers do you think purchased Maxwell 1-2 over GCN 1.0-1.2 because Maxwell is considered a newer and thus more 'advanced' DX12 architecture? Browsing AT since the GTX 750 Ti and then the 970/980 launched, that's all I read: how GCN is outdated, old and so on. You cannot possibly believe this wasn't a factor for the average gamer who hears "re-brands, re-brands" or "all R9 200 series cards are hot and loud and require a 1000W PSU to run" vs. "Maxwell is the most advanced, latest and greatest tech"... ya right.

What's amazing is how NV's loyal customers continue to just accept this type of treatment. Now we are hearing how DX12 and VR support for Maxwell doesn't matter at all from the same posters who for months were nervous about and resisted the idea of buying Fury / Fury X cards because they only have 4GB of HBM vs. 6GB on the 980 Ti, which meant the 4GB was less "future-proof". So I am trying to figure out: which is it? Future-proofing matters when it comes to 4GB vs. 6GB, but when it comes to DX12/VR it doesn't? Very inconsistent viewpoints. Sure, AMD did embellish the performance of the Fury X in its marketing slides, but everyone knows marketing slides need to be vetted by 3rd-party reviews. OTOH, the type of stunts NV has pulled over the last 2 years is just insane. Had Apple, Intel, Mercedes, Coca-Cola or any major corporation lied about major things, there would be all kinds of lawsuits and financial repercussions, as well as losses of market share.

What I don't understand is why you are defending NV in particular. You seem to upgrade GPUs every 1.5-2 years anyway, which makes me think that for you it truly doesn't matter. Logically, then, you shouldn't even care if Maxwell bombs in 2016-2018 DX12 games, since next year you will move on to Pascal, so who cares about Maxwell's performance in DX12? Or am I wrong?

P.S. I am still of the view that we should wait for 2-3 more games that have DX12 features, preferably not GW titles that NV has full control over. I want to see how Deus Ex: Mankind Divided, Rise of the Tomb Raider and Mirror's Edge 2 run next year before making definitive conclusions on GCN vs. Maxwell in DX12 games. Ashes is still just 1 game on 1 game engine. We need a broader picture.
 

Red Hawk

Diamond Member
Jan 1, 2011
What difference does it make at this point? Are there any DX12 games available right now? AMD had a head start on NVidia driver wise because of Mantle, which is similar to DX12.

Also, the only thing being done in software is the scheduling, which is actually more power efficient than having hardware schedulers like AMD.

NVidia have some of, if not the best, driver engineers in the world, and so they have the confidence to implement such a thing in software rather than using transistors for it. We'll see how effective it is when the driver is released.

As for AMD, they have invested heavily in AC to the point of having multiple hardware schedulers, so they're hoping it will pay big dividends in the long run.

It matters right now because it sets the precedent for the future. People keep saying, "Oh, it won't matter until Pascal comes out, and that will be better". That overlooks a few things:

1. Some people are in the graphics card market right now, and are looking for a card that will last a few years. If they buy Nvidia, they won't get as much performance out of their cards in DX12 as the AMD competition. Futureproofing to last 4 years or more is generally pointless, but lasting 2-3 years? That's a reasonable expectation, and we'll have a fair amount of DirectX 12 games in less than a year.

2. We don't know if Pascal will actually implement a better asynchronous compute method than Maxwell. As good as Nvidia's engineers are, it may not be possible for them to just leap forward to cutting edge AC support without the preexisting experience and IP. And who's to say AMD will sit still while Nvidia closes the gap? I've said it before, but it could be a similar situation to what AMD had with tessellation: they couldn't just go from horrible tessellation to terrific tessellation performance with one generation, and even when they greatly improved tessellation performance, Nvidia had already moved the bar even further.

3. It's important to criticize Nvidia now to best encourage them to focus on asynchronous compute support going forward. If Pascal is lackluster in AC performance at this point and there's still a chance to improve the design, they need to know to do that. And they need to know to prioritize AC performance in Volta as well. The last thing gamers and developers want is for criticism of Nvidia to be silenced, because you can't improve on a design if you don't hear the criticisms.
 

Carfax83

Diamond Member
Nov 1, 2010
It may be more power efficient, but it also (potentially) comes with significant latency penalties, hence the "potentially catastrophic" remark from Oculus.

Latency is a big problem for VR, but not for regular games.

More efficient if measuring the GPU power only and not total system power. If this is true then it just shifted power draw from the GPU to the CPU and in the process became less flexible. Doesn't sound like a good trade-off to me, but they made bank on it, so maybe I don't know anything.

We'll have to see what happens when the driver is released. Speculation is useless at this point.

But NVidia has been using the CPU for scheduling compute tasks for a long time now with CUDA for things like hardware accelerated PhysX, so it's not like it's a new tactic of theirs.. And DX12 is going to free up more CPU anyway..
 

Carfax83

Diamond Member
Nov 1, 2010
What I don't understand is why you are defending NV in particular. You seem to upgrade GPUs every 1.5-2 years anyway, which makes me think that for you it truly doesn't matter. Logically, then, you shouldn't even care if Maxwell bombs in 2016-2018 DX12 games, since next year you will move on to Pascal, so who cares about Maxwell's performance in DX12? Or am I wrong?

NVidia doesn't need to be defended. NVidia designs GPUs that perform well for the here and now, not 4 or 5 years down the road. That tactic has cost AMD a lot of market share.

Sure, the ACEs are now going to be very useful, but how long have they been sitting there wasted and taking up die space? Apparently for years..

AMD's long term strategy was brilliant in many ways, but it cost them dearly as well.

Ashes is still just 1 game on 1 game engine. We need a broader picture.

Agreed, although I disagree with you about GW. You severely overestimate the impact of GW on games. Time and time again reality has shown us that it simply does not matter.

HardOCP recently tested The Witcher 3 and the Radeons performed very well. The performance penalty for enabling HairWorks was very close between the two vendors, even..
 

Carfax83

Diamond Member
Nov 1, 2010
1. Some people are in the graphics card market right now, and are looking for a card that will last a few years. If they buy Nvidia, they won't get as much performance out of their cards in DX12 as the AMD competition. Futureproofing to last 4 years or more is generally pointless, but lasting 2-3 years? That's a reasonable expectation, and we'll have a fair amount of DirectX 12 games in less than a year.

And I would have no problem recommending those people an AMD card like the Fury, if they plan on keeping it long term. Although we don't know the actual performance of Maxwell in AC yet, Fiji is basically guaranteed to perform well regardless..

2. We don't know if Pascal will actually implement a better asynchronous compute method than Maxwell. As good as Nvidia's engineers are, it may not be possible for them to just leap forward to cutting edge AC support without the preexisting experience and IP. And who's to say AMD will sit still while Nvidia closes the gap? I've said it before, but it could be a similar situation to what AMD had with tessellation: they couldn't just go from horrible tessellation to terrific tessellation performance with one generation, and even when they greatly improved tessellation performance, Nvidia had already moved the bar even further.

Asynchronous compute has diminishing returns like everything else. AMD has 8 ACEs, which is probably already overkill. As for NVidia, only time will tell what they decide to do: whether they take the hardware route or stick with the software scheduler.

As I said above, NVidia have a lot of experience using the CPU to schedule compute tasks. That's what they've been doing with hardware PhysX all these years, and CUDA is a totally different API compared to DirectX.

3. It's important to criticize Nvidia now to best encourage them to focus on asynchronous compute support going forward. If Pascal is lackluster in AC performance at this point and there's still a chance to improve the design, they need to know to do that. And they need to know to prioritize AC performance in Volta as well. The last thing gamers and developers want is for criticism of Nvidia to be silenced, because you can't improve on a design if you don't hear the criticisms.

At this stage, it's already too late to criticize. Pascal is very likely a done deal already, since there were so many rumors that it has been taped out.

If NVidia fail with the first iteration, there's always the respin.
 

Carfax83

Diamond Member
Nov 1, 2010
If that latency stalls your rendering pipeline for x ms, then it most certainly is a problem.

Latency stalls are no problem for a GPU, as there are thousands or tens of thousands of threads in flight, so if one stalls, then another will simply take its place due to the workload being "embarrassingly parallel."

Latency is a bigger problem for CPUs with their more serial workloads..
 

antihelten

Golden Member
Feb 2, 2012
Latency stalls are no problem for a GPU, as there are thousands or tens of thousands of threads in flight, so if one stalls, then another will simply take its place due to the workload being "embarrassingly parallel."

Latency is a bigger problem for CPUs with their more serial workloads..

The whole point of this thread is that graphics shaders and compute shaders are not "embarrassingly parallel" when run together, as they have to be run serially unless you utilize async compute.
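To make that concrete, here is a minimal D3D12 sketch (names are mine, error handling is omitted, and it assumes a device already exists): "async compute" at the API level is simply a second, compute-only command queue next to the direct queue.

#include <d3d12.h>

// Work submitted to a single direct queue executes in submission order; adding a
// separate compute queue is what gives the driver/hardware the *option* to overlap
// compute work with graphics work.
void create_queues(ID3D12Device* device,
                   ID3D12CommandQueue** gfxQueue,
                   ID3D12CommandQueue** computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;      // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(gfxQueue));

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(computeQueue));

    // Whether work on the compute queue actually executes concurrently with draws
    // on the direct queue is entirely up to the GPU and driver -- which is the debate here.
}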
 

Carfax83

Diamond Member
Nov 1, 2010
The whole point of this thread is that graphics shaders and compute shaders are not "embarrassingly parallel" when run together, as they have to be run serially unless you utilize async compute.

This is a goalpost switch, and has no bearing on what we were discussing before..

But still, if NVidia is using the CPU for scheduling, then compute tasks will be issued out of order just like with AMD's ACEs, but probably even better, as CPUs are the out-of-order masters. Whatever OoO logic AMD built into their ACEs is nothing compared to what's in a CPU..

And Maxwell 2 has the capability to run 31 compute tasks in parallel with graphics.
 

antihelten

Golden Member
Feb 2, 2012
This is a goalpost switch, and has no bearing on what we were discussing before..

But still, if NVidia is using the CPU for scheduling, then compute tasks will be issued out of order just like with AMD's ACEs, but probably even better, as CPUs are the out-of-order masters. Whatever OoO logic AMD built into their ACEs is nothing compared to what's in a CPU..

Me talking about latency in the context of async compute, in a thread that has been all about async compute, is a goalpost switch? Erm, ok.

And no, if Nvidia uses the CPU for scheduling, the compute will not be issued just like with AMD's ACEs, since, once again, going through software (i.e. the CPU) can and will come with hefty latency penalties.

And Maxwell 2 has the capability to run 31 compute tasks in parallel with graphics.

You haven't really been paying attention, have you? This whole thread has been about the fact that Nvidia cannot run compute in parallel with graphics, in spite of what they have so far claimed.

This was first demonstrated by the Ashes benchmark.
Then by the B3D test.
And finally confirmed by Nvidia themselves (via Oxide), although they are now working on fixing it (via a software implementation).
 

Vesku

Diamond Member
Aug 25, 2005
Latency stalls are no problem for a GPU, as there are thousands or tens of thousands of threads in flight, so if one stalls, then another will simply take its place due to the workload being "embarrassingly parallel."

Latency is a bigger problem for CPUs with their more serial workloads..


Missing a frame refresh is a problem. It's not like a lot of HPC compute, where the job just reports when it's done; game compute threads need to keep pace with the game.

Nvidia's level of preemption adds an additional layer of difficulty to coordinating Async Compute. It will be very impressive if Nvidia's software engineers manage to optimize their scheduler to such an extent that it offsets having to coordinate the compute threads with the regular GPU tasks over PCIe rather than on the GPU die.

If it were easy to do on Nvidia hardware, they'd have the feature working properly already, as there are several AAA DX12 PC games in an advanced stage of development and they have been reporting the feature as available.
 

Carfax83

Diamond Member
Nov 1, 2010
Me talking about latency in the context of async compute, in a thread that has been all about async compute, is a goalpost switch? Erm, ok.

And I told you that latency in regular games is not a big deal, as everything is parallel.. Even in DX11, compute shaders which are dispatched in serial with rendering are still actually executed in parallel.

Asynchronous compute also uses idle shaders for processing and is heavily dependent on availability, so of course latency is going to naturally be involved.. :rolleyes:

And no, if Nvidia uses the CPU for scheduling, the compute will not be issued just like with AMD's ACEs, since, once again, going through software (i.e. the CPU) can and will come with hefty latency penalties.

Latency is going to be involved no matter what, but GPUs excel at masking latency by having tens of thousands of threads in flight, so it's not a big deal unless you're doing VR.

You haven't really been paying attention, have you? This whole thread has been about the fact that Nvidia cannot run compute in parallel with graphics, in spite of what they have so far claimed.

On the contrary, I've been posting in this thread since the second page, so I'm very aware of what it's about.

This whole thing was kicked off by an Oxide developer who first claimed that Maxwell didn't even possess the ability to do asynchronous compute, and now the very same developer says that asynchronous compute was broken in the driver from the beginning and NVidia hadn't fully implemented it but is in the process of doing so.

So as far as I'm concerned, Maxwell 2 does support asynchronous compute in parallel with rendering, as stated in the CUDA developer toolkit and multiple sources.

It just doesn't work properly at the moment, but it's no big deal as there are no DX12 games available. But if you want to continue to stick your head in the sand and pretend that the feature isn't broken in the driver, then go ahead, it matters not.
 

Carfax83

Diamond Member
Nov 1, 2010

Missing a frame refresh is a problem. It's not like a lot of HPC compute, where the job just reports when it's done; game compute threads need to keep pace with the game.

Nvidia's level of preemption adds an additional layer of difficulty to coordinating Async Compute. It will be very impressive if Nvidia's software engineers manage to optimize their scheduler to such an extent that it offsets having to coordinate the compute threads with the regular GPU tasks over PCIe rather than on the GPU die.

*facepalm*

Did I not say that VR was an exception? I'm talking about regular gaming here, not VR..
 

Vesku

Diamond Member
Aug 25, 2005
*facepalm*

Did I not say that VR was an exception? I'm talking about regular gaming here, not VR..

It is most detrimental to VR, but it also means latency may cause Async Compute tasks to get pushed to the next frame refresh.

Again, Nvidia claimed they support the feature. Maxwell 2 cards reported at the driver level that the feature was available. Yet it wasn't actually doing what developers expected it to do per the DX12 specs. There are AAA DX12 games in an advanced stage of development; Lionhead Studios (developing Fable Legends) submitted an Async Compute library to Unreal 4, for example. I don't think Oxide is the only one to mention to Nvidia that their Async Compute support is not working. So there are obviously some hurdles for Nvidia to overcome and they appear to be quite high.
 

3DVagabond

Lifer
Aug 10, 2009
Wow! Why invent a whole new API? Just offload everything to the CPU. Brilliant!

I can't believe it's actually being put forth that scheduling with software on the CPU is the equivalent of having dedicated hardware onboard the GPU. And that this makes Oxide's statement about Maxwell not being capable wrong, because they are going to use the CPU for scheduling. Seems to me that confirms his statement that, as far as he could tell, Maxwell couldn't do it.

I suppose that's one way of improving efficiency though. Just don't have the hardware on your GPU in the first place. You use less power and fewer transistors. Then have the CPU perform the tasks. Again, brilliant!
 

antihelten

Golden Member
Feb 2, 2012
And I told you that latency in regular games is not a big deal, as everything is parallel.. Even in DX11, compute shaders which are dispatched in serial with rendering are still actually executed in parallel.

Asynchronous compute also uses idle shaders for processing and is heavily dependent on availability, so of course latency is going to naturally be involved.. :rolleyes:

Obviously we're talking about future games that would make use of async, so no, regular games as they exist today are not affected.

The fact that async compute uses idle shaders really has nothing to do with latency as such, but either way there will always be latency involved with any task; the point is that going through software (the CPU) will generally incur much larger latency penalties.

Latency is going to be involved no matter what, but GPUs excel at masking latency by having tens of thousands of threads in flight, so it's not a big deal unless you're doing VR.

Latencies are not created equally, so saying that "latency is going to be involved no matter what" is a cop-out. And GPUs being able to run tens of thousands of threads in parallel isn't going to help when you have to stall the entire rendering of a frame to wait for a compute job to finish, a compute job that could have been run in parallel if you had access to async compute.
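To put some code on that point, a rough sketch continuing the D3D12 example from earlier (queue, fence and command-list names are mine and assumed to already exist):

#include <d3d12.h>

// Without async compute, everything goes down the direct queue back to back, so the
// frame's draws simply sit behind the compute pass. With a second queue, the compute
// pass overlaps with independent graphics work, and the graphics queue only waits
// (on the GPU, not the CPU) at the point where it actually consumes the results.
void submit_frame(ID3D12CommandQueue* gfxQueue,
                  ID3D12CommandQueue* computeQueue,
                  ID3D12Fence* computeDoneFence,
                  UINT64 frameIndex,
                  ID3D12CommandList* computePass,
                  ID3D12CommandList* gbufferPass,
                  ID3D12CommandList* lightingPass)
{
    computeQueue->ExecuteCommandLists(1, &computePass);
    computeQueue->Signal(computeDoneFence, frameIndex);   // GPU-side signal when compute is done

    gfxQueue->ExecuteCommandLists(1, &gbufferPass);        // independent work, can overlap

    gfxQueue->Wait(computeDoneFence, frameIndex);          // wait only where the dependency is
    gfxQueue->ExecuteCommandLists(1, &lightingPass);       // consumes the compute output
}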

On the contrary, I've been posting in this thread since the second page, so I'm very aware of what it's about.

This whole thing was kicked off by an Oxide developer who first claimed that Maxwell didn't even possess the ability to do asynchronous compute, and now the very same developer says that asynchronous compute was broken in the driver from the beginning and NVidia hadn't fully implemented it but is in the process of doing so.

The dev was correct when he claimed that Maxwell 2 didn't possess the ability to do async compute (with the current drivers), so I don't see what your problem with that is.

The dev himself never said that asynchronous compute was broken in the driver from the beginning; he simply said that Nvidia had told him that it was broken:

We actually just chatted with Nvidia about Async Compute, indeed the driver hasn't fully implemented it yet, but it appeared like it was.

So unless you have a problem with Nvidia's claims (or think that the Oxide dev is lying about talking to Nvidia), I really don't see what the issue is.

So as far as I'm concerned, Maxwell 2 does support asynchronous compute in parallel with rendering, as stated in the CUDA developer toolkit and multiple sources.

Because if there's one thing history has taught us, it's that Nvidia's documentation is the most trustworthy thing out there :rolleyes:
 

Carfax83

Diamond Member
Nov 1, 2010
It is most detrimental to VR, but it also means latency may cause Async Compute tasks to get pushed to the next frame refresh.

Not having any idle shaders can also do that, but it's hardly going to be catastrophic.

Heck, games have been using compute shaders for years now with great performance, and that was with DX11, which does compute in serial with rendering..

So there are obviously some hurdles for Nvidia to overcome and they appear to be quite high.

Indeed, but they are not impossible. If NVidia don't implement asynchronous compute by the time the first DX12 game comes out this year, which will be Fable Legends, then you can talk..

NVidia have been using the CPU to accelerate graphics in many ways for years now. That's why they've had the edge over AMD.

And they use the CPU to queue up compute tasks with CUDA, so it's not like they haven't done it before.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Wow! Why invent a whole new API? Just offload everything to the CPU. Brilliant!

You do realize that the API does not call for hardware schedulers? :sneaky:

I can't believe it's actually being put forth that scheduling with software on the CPU is the equivalent of having dedicated hardware onboard the GPU.

There are both pros and cons for each approach. With AMD's way, you have to spend transistors on hardware schedulers with basic OoO logic that increase power draw and take away die space from other things. But latency is lower, plus it probably makes asynchronous compute easier to implement and saves developers time.

With NVidia's way, you save transistors and power usage that could be devoted elsewhere, at the expense of added latency and increased driver overhead.

But as I've been saying, it's nothing NVidia haven't done before.

And that this makes Oxide's statement about Maxwell not being capable wrong, because they are going to use the CPU for scheduling. Seems to me that confirms his statement that, as far as he could tell, Maxwell couldn't do it.

You don't need hardware schedulers to do asynchronous compute. Only AMD have taken this approach. Intel and NVidia are going to do the scheduling in software..
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
You don't need hardware schedulers to do asynchronous compute. Only AMD have taken this approach. Intel and NVidia are going to do the scheduling in software..

Hell, you don't even need a GPU for DX12 at all; you can do the whole thing through software, and since you apparently think that doesn't carry any performance penalty, I guess we should all sell our GPUs and just use WARP12.
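And WARP12 is a real thing, by the way; the software path is literally one adapter swap away (a sketch only, error handling omitted, function name mine):

#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create a D3D12 device on WARP, the software adapter. Technically valid,
// practically absurd for games, which is exactly the point.
ComPtr<ID3D12Device> create_warp_device()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory2(0, IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter> warpAdapter;
    factory->EnumWarpAdapter(IID_PPV_ARGS(&warpAdapter));

    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(warpAdapter.Get(), D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));
    return device;
}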
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Obviously we're talking about future games that would make use of async, so no, regular games as they exist today are not affected.

The fact that async compute uses idle shaders really has nothing to do with latency as such, but either way there will always be latency involved with any task; the point is that going through software (the CPU) will generally incur much larger latency penalties.

Again, regular games have been using compute shaders for years in SERIAL with great performance! Asynchronous compute will just build on top of that with more performance..

Sorry, but you are making a big deal over nothing..

Latencies are not created equally, so saying that "latency is going to be involved no matter what" is a cop-out. And GPUs being able to run tens of thousands of threads in parallel isn't going to help when you have to stall the entire rendering of a frame to wait for a compute job to finish, a compute job that could have been run in parallel if you had access to async compute.

Not even sure what your point is here. If the compute job is so intensive that it causes the entire frame to stall, then that's the developer's fault for overburdening the hardware, and asynchronous compute wouldn't stop that..

The dev was correct when he claimed that Maxwell 2 didn't possess the ability to do async compute (with the current drivers), so I don't see what your problem with that is.

The dev himself never said that asynchronous compute was broken in the driver from the beginning; he simply said that Nvidia had told him that it was broken:

So unless you have a problem with Nvidia's claims (or think that the Oxide dev is lying about talking to Nvidia), I really don't see what the issue is.

Not going to bother debating this, as it's too open to interpretation. That said, the game is in alpha, and DX12 drivers from IHVs are still being polished.

Because if there's one thing history has taught us, it's that Nvidia's documentation is the most trustworthy thing out there :rolleyes:

If you don't trust NVidia, that's fine. I certainly won't hold it against you..
 

Carfax83

Diamond Member
Nov 1, 2010
Hell, you don't even need a GPU for DX12 at all; you can do the whole thing through software, and since you apparently think that doesn't carry any performance penalty, I guess we should all sell our GPUs and just use WARP12.

Now you're just being hyperbolic.. o_O

The ACEs only do scheduling and dispatching, which they can do out of order. They don't do any actual processing. It's the shader array that does the heavy lifting.

So in light of that, why is using the CPU a bad thing? The CPU's out of order capabilities are far superior to what is found in those ACEs, which makes it a much better candidate for that sort of thing.

NVidia has been using the CPU for compute tasks for years in CUDA applications, so it must have been working for them.
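Loosely, that CUDA model looks like this on the host side (a sketch only; the buffers are assumed to exist, and any kernel name here would be made up, so the launch is left as a comment):

#include <cuda_runtime.h>

// The CPU enqueues work into a stream (an ordered work queue) and returns
// immediately; the GPU drains the queue asynchronously. This is the
// "CPU schedules, GPU executes" pattern.
void enqueue_compute(const float* hostIn, float* devIn, size_t bytes)
{
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    cudaMemcpyAsync(devIn, hostIn, bytes, cudaMemcpyHostToDevice, stream);
    // myKernel<<<grid, block, 0, stream>>>(devIn, ...);  // kernel launches are enqueued the same way

    cudaStreamSynchronize(stream);   // block the CPU only when the results are needed
    cudaStreamDestroy(stream);
}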
 

antihelten

Golden Member
Feb 2, 2012
Now you're just being hyperbolic..

It's called a reductio ad absurdum argument.

So in light of that, why is using the CPU a bad thing? The CPU's out of order capabilities are far superior to what is found in those ACEs, which makes it a much better candidate for that sort of thing.

A CPU's throughput capabilities are far superior, but that doesn't mean that its latency characteristics are.
 

Carfax83

Diamond Member
Nov 1, 2010
A CPU's throughput capabilities are far superior, but that doesn't mean that its latency characteristics are.

You must not know much about CPUs. Modern CPUs have an entire arsenal of latency-reducing technologies on hand: branch predictors, on-die memory controllers, multi-level cache hierarchies, massive caches (up to 20 MB now), SMT, registers and God knows what else..

Like I said, latency is a bigger deal for CPUs because of their workload, so engineers have come up with all kinds of ways to reduce it as much as possible over the years.
 

Good_fella

Member
Feb 12, 2015
Haha, thanks for that. They missed 2 key points: (1) NV screwing over Titan X owners with an after-market 980 Ti that beat the Titan X out of the box for $650-700, barely months after the TX came out, and the after-market 980 Ti cards have better components and coolers too; (2) mobile dGPU overclocking is a bug. <That's perhaps the most hilarious stunt of 2015.

Only NV could lie about the GTX 970's specs and mobile dGPU overclocking, gimp Kepler, rip TX owners off, lie about AC shader capabilities for DX12, offer atrocious price/performance in low-end cards like the 750 Ti/950/960, and still manage to gain market share quarter after quarter. Holy cow, not even Apple loyalists would likely put up with this type of treatment.

The media is covering this worldwide. Even PCPerspective is covering it, but HardOCP and TechReport haven't posted a single article on this. :cool:

AMD screwed Fury X owners with the Fury, and screwed everyone with the Fury Nano's price.

Now you're being a complete troll, blaming Nvidia because AIB partners use better components. So why are 290X after-market coolers better?

Gimp Kepler? Stop using Reddit/4chan for sources.

[performance-per-dollar chart]


If $20 is atrocious, there is nothing we can do to help you.

The Fury X "overclocker's dream" is a dream. <That's perhaps the most hilarious stunt of 2015.

The media is covering this worldwide? You mean shills quoting AMD's PR? Looks like Nvidia should stop sending review samples to them, no? :whistle:

Hypocritical.
Double standards.
Rekt.

Wow! Why invent a whole new API? Just offload everything to the CPU. Brilliant!

I can't believe it's actually being put forth that scheduling with software on the CPU is the equivalent of having dedicated hardware onboard the GPU. And that this makes Oxide's statement about Maxwell not being capable wrong, because they are going to use the CPU for scheduling. Seems to me that confirms his statement that, as far as he could tell, Maxwell couldn't do it.

I suppose that's one way of improving efficiency though. Just don't have the hardware on your GPU in the first place. You use less power and fewer transistors. Then have the CPU perform the tasks. Again, brilliant!

You and all your AMD kind said: who needs GPU PhysX if it can be done on the CPU?

Hypocritical.
Double standards.
Rekt.