AT has DX12 drivers?


metalliax

Member
Jan 20, 2014
119
2
81
Why wasn't the 970 tested in the preview? I think it would be most interesting to see, especially with the supposed issues that have been reported. Perhaps it was intentionally not shown?
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Looking forward to how developers leverage the performance increases with DX12. Will we get better performing games, or better looking games that utilize the extra performance?

We will have both. Better looking AND better performing games :awe: Games will be more complex, with better physics, AI, and more objects on screen due to having a lot more CPU power on tap.

DX12 is a BIG deal for PC gaming. In fact, I might go so far as to say that this could very well be the end of console gaming as we know it...

Two years from now, 4K is going to be everywhere: on TV, in movies, and in PC games; but not on consoles..... :sneaky: Why invest in consoles, when even 1080p at 30 FPS isn't guaranteed, let alone 4K? PC gaming is where the cutting edge of game technology meets and utterly destroys the feeble consoles with their limited capabilities. And now PC gaming has taken away the one proverbial "ace" that consoles had: efficiency.

Fact is, DX12 is going to make the gold standard of 60 FPS more achievable than ever before, even on low- and mid-range setups, making console gaming less attractive.

That's why DX12 is such a milestone. It's going to open up PC gaming to the masses in a way we've never seen before. :D
 

krumme

Diamond Member
Oct 9, 2009
5,952
1,585
136
It's a win for Intel too. Whatever makes slower CPUs perform better also makes faster CPUs perform better.

Intel has the node advantage; they can cram in more cores if they want to.

While a single-GPU setup on a fast Intel CPU may not see massive gains (in normal games, not pure draw-call-limited benches such as Star Swarm), multi-GPU setups have seen major boosts in performance. To drive multi-card setups, a strong CPU is still required. This automatically means Intel CPU = default for high-end rigs.

The point is that the difference between a quad-core Intel i5 and a quad-core AMD Kaveri/Carrizo will be greatly reduced. Secondly, Intel can add cores, but they're bigger cores and they cost. More, smaller cores in the same mm² give better performance - if a thin API is used.

In the greater scheme it means more resources can be directed from CPU to GPU. Or we can say a thin API makes part of the former CPU work more parallelizable. Again, on the foundation of a consolidation of the engines on the market.
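
To make that concrete - a toy sketch in plain C++ threads, not actual D3D12 or Mantle code (all the names here are made up): the point of a thin API is that each core can record its own command list instead of pushing every draw call through a single driver thread.

#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical stand-in for an API command list: each worker thread
// records its share of the frame's draw calls independently.
struct CommandList {
    std::vector<int> draws;  // pretend-encoded draw commands
};

int main() {
    const int kThreads = 4;        // one recording thread per core
    const int kDrawCalls = 100000; // total draws in the frame
    std::vector<CommandList> lists(kThreads);
    std::vector<std::thread> workers;

    // Record in parallel. With a thick DX11-style driver this work is
    // effectively serialized on one thread inside the API.
    for (int t = 0; t < kThreads; ++t) {
        workers.emplace_back([&, t] {
            for (int d = t; d < kDrawCalls; d += kThreads)
                lists[t].draws.push_back(d);  // "record" one draw call
        });
    }
    for (auto& w : workers) w.join();

    // Submit: the main thread just hands the finished lists to the queue.
    size_t total = 0;
    for (const auto& l : lists) total += l.draws.size();
    std::printf("recorded %zu draws across %d threads\n", total, kThreads);
    return 0;
}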
 

krumme

Diamond Member
Oct 9, 2009
5,952
1,585
136
We have to remember it's a shift in power between the software/gaming side and the hardware side, where the gaming side gains power and the right to define and shape the future.
It's EA with DICE using Mantle to reduce the cost of porting between platforms: consoles, PC, mobile. And at the same time getting an edge over other devs. Mantle and DX12 are a result of that fight.
 

Morbus

Senior member
Apr 10, 2009
998
0
0
As much as I like thread derails like this.../s


IIRC, Mantle is still in beta and is not going to be released to others till it's out of beta. Maybe Ryan can chime in on this.
He didn't say anything to the contrary. Actually, he basically said precisely that.

As for the results, I still don't understand the large differences in the improvements between the different GPUs.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Why wasn't the 970 tested in the preview? I think it would be most interesting to see, especially with the supposed issues that have been reported. Perhaps it was intentionally not shown?

Lots of other GPUs they didn't test.

The only real misfit is the 260X, if we can say so. It should have been a 280X. But maybe the drivers don't support GCN 1.0 yet, or something else.

Otherwise they have GCN 1.1, GCN 1.2, Kepler, Maxwell v1, and Maxwell v2 in the tests. That's all that really matters, since we can extrapolate from it.
 
Last edited:

krumme

Diamond Member
Oct 9, 2009
5,952
1,585
136
Why wasn't the 970 tested in the preview? I think it would be most interesting to see, especially with the supposed issues that have been reported. Perhaps it was intentionally not shown?

Hehe. We have seen memory leaking with Mantle on BF4 affecting 7850 cards with 1 GB. Memory management is far more difficult for the devs - that's the flip side of the huge possibilities. I think there is a chance the 970 could get messed up here, but let's wait and see.
(Edit: Perhaps it's what fottemberg is indicating here
http://semiaccurate.com/forums/showpost.php?p=228888&postcount=203)
 
Last edited:

Racan

Golden Member
Sep 22, 2012
1,111
1,989
136
Developers may simply, as previously in history, use the newfound CPU power for something else. And we are back to status quo on the CPU requirement.

I do not expect AAA games (and others) to simply let this opportunity pass to add more features, AI, etc. to the games to make them even better.

Only if these AAA games are PC exclusive; I don't see this happening for Xbone and PS4 console ports.
 
Feb 19, 2009
10,457
10
76
Only if these AAA games are PC exclusive; I don't see this happening for Xbone and PS4 console ports.

Not entirely true; we've seen PC versions of cross-platform games often have unique features or bonus quality, such as ultra textures, better LOD, or even a much longer view distance (Dying Light).

Some of these features will impact draw calls more than others, but it's fair to assume developers will continue to push the limits. It's what gives them an edge over competitors who don't.
 

SPBHM

Diamond Member
Sep 12, 2012
5,056
409
126
It would have been interesting to artificially lock the framerate to, say, 25 FPS with the 980 and see whether there is a power usage reduction with DX12.

Kind of interesting that a benchmark made to show off Mantle is now mostly showing that DX12 is as good as Mantle, and that Nvidia has far better performance with both the old and new APIs than AMD.

Also, I tend to use old hardware, so it's relevant to me that Nvidia is still promising to support this on their 2010 GPUs while AMD only supports 2012 and higher.
 

imaheadcase

Diamond Member
May 9, 2005
3,850
7
76
Might as well not even mention Mantle in the benchmarks; it's pretty much all but confirmed AMD is just going to focus on DX12 implementation vs. Mantle when new cards come out.
 

Paul98

Diamond Member
Jan 31, 2010
3,732
199
106
Interesting, and great to see DX12 can give a huge boost in FPS to anything that doesn't support Mantle.
 

krumme

Diamond Member
Oct 9, 2009
5,952
1,585
136
Might as well not even mention Mantle in the benchmarks; it's pretty much all but confirmed AMD is just going to focus on DX12 implementation vs. Mantle when new cards come out.

When the new DX12 games come, you mean?
I think we are more like half a year from that - at best. Unless it's some more patch-like implementations.
I think it's only when the next-gen engines arrive - most made with the new consoles in mind - that we will see the big jump in performance and adoption.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
It's a win for Intel too. Whatever makes slower CPUs perform better also makes faster CPUs perform better.

Intel has the node advantage; they can cram in more cores if they want to.

While a single-GPU setup on a fast Intel CPU may not see massive gains (in normal games, not pure draw-call-limited benches such as Star Swarm), multi-GPU setups have seen major boosts in performance. To drive multi-card setups, a strong CPU is still required. This automatically means Intel CPU = default for high-end rigs.

That depends on how future games are developed. In current games that Intel has no trouble handling, they will not benefit from DX12, as the bottleneck is on the GPU. All it will do is reduce power draw for a non-CPU-limited system.

However, there is the possibility that this will open the door for a massive increase in draw calls. If there is reason to do so, then future games may become CPU-bound again, and Intel's advantage will be there again.
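
Rough illustration of that bottleneck point (a toy model with made-up numbers, not benchmark data): the frame rate is set by whichever side is slower per frame, so cutting CPU time only shows up when the CPU is the limit.

#include <algorithm>
#include <cstdio>

// Toy model: the frame takes as long as the slower of the two sides.
double fps(double cpu_ms, double gpu_ms) {
    return 1000.0 / std::max(cpu_ms, gpu_ms);
}

int main() {
    // GPU-bound: GPU needs 20 ms, CPU only 8 ms. Halving CPU work
    // changes nothing - still 50 fps, just lower power draw.
    std::printf("GPU-bound: %.0f -> %.0f fps\n", fps(8.0, 20.0), fps(4.0, 20.0));

    // CPU-bound (say, far more draw calls): CPU 25 ms, GPU 10 ms.
    // Halving CPU work takes 40 fps to 80 fps.
    std::printf("CPU-bound: %.0f -> %.0f fps\n", fps(25.0, 10.0), fps(12.5, 10.0));
    return 0;
}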
 

chimaxi83

Diamond Member
May 18, 2003
5,649
61
101
Might as well not even mention Mantle in the benchmarks; it's pretty much all but confirmed AMD is just going to focus on DX12 implementation vs. Mantle when new cards come out.

All but confirmed by whom? Mantle clearly provides almost the same theoretical performance increases as DX12 (within a few percent), so for AMD to immediately jump ship after getting developers on board and dumping money into it doesn't make much sense.
 

Paul98

Diamond Member
Jan 31, 2010
3,732
199
106
After watching the video of the 980 and 290X in DX12, it looks like AMD has a bit of work to do on the driver. I was noticing quite a few spikes; once they sort that out I will be interested to see another test.

I would also like to see DX12 tested with a real benchmark.
 
Feb 19, 2009
10,457
10
76
That depends on how future games are developed. In current games that Intel has no trouble handling, they will not benefit from DX12, as the bottleneck is on the GPU. All it will do is reduce power draw for a non-CPU-limited system.

People used to keep saying Mantle only benefits weaker CPUs; absolutely not true. We've seen in BF4 MP with CrossFire that the performance gains and smoothness are huge.

Mantle or DX12 alleviates CPU bottlenecks, which occur on weaker CPUs OR multi-GPU systems.

In situations where it's not CPU-limited, making the CPU work less and reducing power use also benefits Intel, because their CPUs are already much stronger than AMD's. AMD's CPUs will still have to work harder, thus consuming more power for the same level of performance.

Nothing truly helps AMD's CPUs besides them making a good CPU architecture, period.
 

krumme

Diamond Member
Oct 9, 2009
5,952
1,585
136
That depends on how future games are developed. In current games that Intel has no trouble handling, they will not benefit from DX12, as the bottleneck is on the GPU. All it will do is reduce power draw for a non-CPU-limited system.

However, there is the possibility that this will open the door for a massive increase in draw calls. If there is reason to do so, then future games may become CPU-bound again, and Intel's advantage will be there again.

I don't recall this anticipation of a massive increase in draw calls when discussing Mantle a year and a half ago ;) Fair enough, it's not RTS all over, where it's typically very much needed.
But anyway, a thin API is far more than that.
 
Feb 19, 2009
10,457
10
76
All but confirmed by whom? Mantle clearly provides almost the same theoretical performance increases as DX12 (within a few percent), so for AMD to immediately jump ship after getting developers on board and dumping money into it doesn't make much sense.

It makes no sense for a struggling company like AMD to spread their limited resources thinly across DX11, DX12, and Mantle optimizations for games. Now that we know DX12 can deliver what was the goal of Mantle - lower CPU overhead - there's no reason to continue pushing Mantle.

Think of the mess that GameWorks titles cause for AMD's driver team. Now think whether they have the time or resources to optimize for Mantle on top of all that.
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
It makes no sense for a struggling company like AMD to spread their limited resources thinly across DX11, DX12, and Mantle optimizations for games. Now that we know DX12 can deliver what was the goal of Mantle - lower CPU overhead - there's no reason to continue pushing Mantle.

Think of the mess that GameWorks titles cause for AMD's driver team. Now think whether they have the time or resources to optimize for Mantle on top of all that.

Heh, Mantle is a thin driver and as such won't need all the upkeep D3D or OGL needs. Also, they only need to update it every so often for tweaks and feature additions. Mantle definitely has a place.
 

krumme

Diamond Member
Oct 9, 2009
5,952
1,585
136
It makes no sense for a struggling company like AMD to spread their limited resources thinly across DX11, DX12, and Mantle optimizations for games. Now that we know DX12 can deliver what was the goal of Mantle - lower CPU overhead - there's no reason to continue pushing Mantle.

Think of the mess that GameWorks titles cause for AMD's driver team. Now think whether they have the time or resources to optimize for Mantle on top of all that.

You got it the wrong way around. Remember, the driver is ultra thin. What you can argue is that the game devs - and in this case the engines - have less incentive to program for Mantle. And that's a fair argument. But remember what Mantle is - it's more like a PS4 and even Xbox derivative. So it's probably very, very similar to the PS4 API. And probably very close to DX12 anyway. As Johan from DICE said, "where have we seen this before..."
 

SilverlightWPF

Junior Member
Feb 4, 2013
2
0
0
So if Mantle was run on Win7, which only has WDDM 1.1, would it still get the massive improvements we see in the tests?!

It's clear that DX12 needs WDDM 2.0 to achieve its results; the two are "joined at the hip"... So is Mantle also getting its improvements because of WDDM 2.0?!
 

Paul98

Diamond Member
Jan 31, 2010
3,732
199
106
After watching the video again, the graphics look different; the 980 looks to me to be missing certain things compared to the 290X. On the 980 the ships go from having a blue trail, to having none, to having a blueish glow again. Also, with explosions, at first you see pieces coming away leaving a red trail; I don't see that on the 980. There are also ships that shoot out bright red, rice-grain-shaped projectiles. On the 980 there is no color to them at all and you can barely tell the ships are firing.

This is on DX12. I am sure Nvidia will get that sort of thing figured out.

Edit: this will also account for some of the performance difference.
 
Last edited:

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
The point is that the difference between a quad-core Intel i5 and a quad-core AMD Kaveri/Carrizo will be greatly reduced. Secondly, Intel can add cores, but they're bigger cores and they cost. More, smaller cores in the same mm² give better performance - if a thin API is used.

In the greater scheme it means more resources can be directed from CPU to GPU. Or we can say a thin API makes part of the former CPU work more parallelizable. Again, on the foundation of a consolidation of the engines on the market.

Nah, it's still going to be the same. A Haswell i5 is around 50% more powerful than Kaveri in ST, and about 50% more powerful in MT as well. The difference won't change; simply, Kaveri will move up to playable fps (i.e. 30 vs. 45 fps on DX11 becomes 60 vs. 90 fps on DX12, assuming no GPU bottleneck - the main point is that Kaveri is now getting 60 fps).
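
In numbers (a toy model with purely CPU-bound frames assumed, not benchmark data): halving the per-frame CPU cost doubles both chips' fps, so the ~1.5x gap stays, but the slower chip crosses the 60 fps line.

#include <cstdio>

int main() {
    // Purely CPU-bound: fps = 1000 / cpu_ms per frame.
    double kaveri_ms = 1000.0 / 30.0;  // 30 fps under DX11
    double i5_ms     = 1000.0 / 45.0;  // 45 fps under DX11

    // A thin API that halves CPU cost doubles fps for both chips;
    // the ratio (about 1.5x) is unchanged, but Kaveri now hits 60 fps.
    std::printf("DX11: %.0f vs %.0f fps (ratio %.2f)\n",
                1000.0 / kaveri_ms, 1000.0 / i5_ms, kaveri_ms / i5_ms);
    std::printf("DX12: %.0f vs %.0f fps (ratio %.2f)\n",
                1000.0 / (kaveri_ms / 2), 1000.0 / (i5_ms / 2),
                (kaveri_ms / 2) / (i5_ms / 2));
    return 0;
}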
 

desprado

Golden Member
Jul 16, 2013
1,645
0
0
It's a win for Intel too. Whatever makes slower CPUs perform better also makes faster CPUs perform better.

Intel has the node advantage; they can cram in more cores if they want to.

While a single-GPU setup on a fast Intel CPU may not see massive gains (in normal games, not pure draw-call-limited benches such as Star Swarm), multi-GPU setups have seen major boosts in performance. To drive multi-card setups, a strong CPU is still required. This automatically means Intel CPU = default for high-end rigs.
Yup, it is a win for every type of lower-end CPU user too; Core i3 users, for example.
 
Last edited: