D3D12 is Coming! AMD Presentation


RampantAndroid

Diamond Member
Jun 27, 2004
6,591
3
81
If you have ever coded anything you would realize how easy it is to break something but still have it work.

Or done a few simple interview coding questions and had to calculate the complexity of the function in Big O notation (there are multiple ways around many interview questions...some more efficient than others). We're not even a month into Windows 10's life, and NV drivers are still being released. Let's check back in 6 months and see where things are. If they're still underperforming, THEN I'll start to wonder. Until then...it doesn't seem terribly off to me.
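As a rough illustration of that parenthetical, here is a made-up interview question solved two ways; both give the right answer, but one does far more work (the question and the code are purely hypothetical and have nothing to do with the benchmark):

// Does any pair in the array sum to a given target?
#include <cstdio>
#include <unordered_set>
#include <vector>

// O(n^2): compare every pair. Works, but scales badly as the input grows.
bool hasPairBruteForce(const std::vector<int>& v, int target) {
    for (size_t i = 0; i < v.size(); ++i)
        for (size_t j = i + 1; j < v.size(); ++j)
            if (v[i] + v[j] == target) return true;
    return false;
}

// O(n): remember values seen so far and look up each complement. Same answer, far less work.
bool hasPairHashSet(const std::vector<int>& v, int target) {
    std::unordered_set<int> seen;
    for (int x : v) {
        if (seen.count(target - x)) return true;
        seen.insert(x);
    }
    return false;
}

int main() {
    std::vector<int> v{3, 9, 12, 20};
    std::printf("%d %d\n", hasPairBruteForce(v, 29), hasPairHashSet(v, 29));  // both print 1
}

Both "work"; only one holds up as the input grows, which is the same distinction being drawn about code that runs but runs badly.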
 
Feb 19, 2009
10,457
10
76
I think the other elephant in the room is that Oxide says they will scale well to 16 cores yet that kind of scaling is completely absent in the benchmark. There really isn't any scaling over 4 cores.

Either Oxide is exaggerating or the engine is still in a very rough state. Though at this point I would really have expected to see some better scaling. If they still need to modify the engine to scale over 4x as many cores (which would be a nightmare to do) then currently the engine is unfinished.



If you have ever coded anything you would realize how easy it is to break something but still have it work.

You might see better scaling with multi-GPU, as it may already be hitting a GPU bottleneck, certainly when tested against downclocked CPUs; 6-core i7s are limiting the GPUs.

People assume it's a draw call test, but it's an entire game test; draw calls are just one element. There's a ton of effects, geometry, lighting, shadows, etc. going on that would be a GPU bottleneck.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
You might see better scaling with multi-GPU, as it may already be hitting a GPU bottleneck, certainly when tested against downclocked CPUs; 6-core i7s are limiting the GPUs.

People assume it's a draw call test, but it's an entire game test; draw calls are just one element. There's a ton of effects, geometry, lighting, shadows, etc. going on that would be a GPU bottleneck.

Sure but the current FX's are doing terribly and they are most certainly CPU limited. As far as games go, this is pretty much a worst case loss for the FX series.
 
Feb 19, 2009
10,457
10
76
Sure but the current FX's are doing terribly and they are most certainly CPU limited. As far as games go, this is pretty much a worst case loss for the FX series.

Oh yeah that FX is living up to its Faildozer ancestry in this game. No doubts about it!!

I wonder what's going on there, since Oxide works closely with AMD/Intel; surely they can work those extra cores!
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
I think the other elephant in the room is that Oxide says they will scale well to 16 cores yet that kind of scaling is completely absent in the benchmark. There really isn't any scaling over 4 cores.

Either Oxide is exaggerating or the engine is still in a very rough state. Though at this point I would really have expected to see some better scaling. If they still need to modify the engine to scale over 4x as many cores (which would be a nightmare to do) then currently the engine is unfinished.
Scaling doesn't necessarily mean that it will run faster and faster the more (and faster) cores you have; it can mean that it will run just as fast on much slower cores as long as there are enough of them around.
Kind of how the FX line can keep up with today's lower-end CPUs, because the games scale quite well.
 
Last edited:

SPBHM

Diamond Member
Sep 12, 2012
5,066
418
126
DX11 results show that AMD hasn't optimized at all. If you look at DX11 games they compete quite well. 390X and 980 trade blows in real life games.

are you telling me Nvidia heavily optimized their drivers for this specific DX11 game benchmark?

I think both have the same level of optimization here (I mean for generic DX11 games without a ton of special optimizations), it's just that this benchmark is far more extreme than the average DX11 game, and highlights the nvidia advantage with their optimizations for DX11 in general and DX11 MT.

The DX12 result... I suppose it's the work AMD did with Mantle, and the Xbox One win, paying off; they are prepared for DX12, and DX12 loves GCN?
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
are you telling me Nvidia heavily optimized their drivers for this specific DX11 game benchmark?

I think both have the same level of optimization here (I mean for generic DX11 games without a ton of special optimizations), it's just that this benchmark is far more extreme than the average DX11 game, and highlights the nvidia advantage with their optimizations for DX11 in general and DX11 MT.

The DX12 result... I suppose it's the work AMD did with Mantle, and the Xbox One win, paying off; they are prepared for DX12, and DX12 loves GCN?

I didn't mention nVidia optimizations at all. They did release a beta driver for it though. So, they must have been doing something.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Scaling doesn't necessarily mean that it will run faster and faster the more (and faster) cores you have; it can mean that it will run just as fast on much slower cores as long as there are enough of them around.
Kind of how the FX line can keep up with today's lower-end CPUs, because the games scale quite well.

If it scales on slower cores it will scale on faster cores, as long as it's not GPU limited.

I didn't mention nVidia optimizations at all. They did release a beta driver for it though. So, they must have been doing something.

Beta driver may not have many performance improvements. Could just be a driver fix so that things simply are not broken.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
I don't understand GPU manufacturer bias, how does it benefit the consumer? :confused:

No idea. I have an HD7950 and was about to get a GTX 980Ti, but now that I found out about a 65-inch FreeSync monitor coming SOON, I'm back to an AMD card. Whatever card fits my needs is what I get. I don't care.

I'll wait for a couple more DirectX 12 games to be RELEASED before I make a decision, but DX12 won't factor in much for me personally, as I'm not buying a card now for the future. The card I'll use for playing DX12 games will be Arctic Islands or Pascal. As for the GPUs we're getting now, if the 2016 GPUs don't make them look like Intel's iGPUs, I'll be sad.

@3DVagabond

I wasn't addressing the MSAA issue at all because I don't use it personally. I usually won't have the GPU horsepower/framerate headroom to use MSAA, because I'm downsampling at the highest resolution I can.


But I'm not surprised Nvidia "lied" about the MSAA bug.

Reviewers should TEST things, not listen to the GPU manufacturers on what they should or shouldn't test.
 
Last edited:

positivedoppler

Golden Member
Apr 30, 2012
1,148
256
136
No idea. I have an HD7950 and was about to get a GTX 980Ti, but now that I found out about a 65-inch FreeSync monitor coming SOON, I'm back to an AMD card. Whatever card fits my needs is what I get. I don't care.

And I'm not surprised Nvidia "lied" about the MSAA bug. Just like AMD lied about Fury X being an OC dream.

I don't think it's a lie; probably a knee-jerk reaction of "it can't possibly be my fault, it's probably yours." I see it in the engineering world all the time...
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
If it scales on slower cores it will scale on faster cores, as long as it's not GPU limited.
One would think that; it only makes sense, right?
But as soon as you have a main thread that has higher demands than the rest of the threads, scaling is much more restricted.
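A back-of-the-envelope sketch of that restriction (the 40% figure below is invented purely for illustration, not measured from any game): if a fixed share of each frame's CPU work can only run on the main thread, extra cores only speed up the rest.

#include <cstdio>

int main() {
    const double mainShare = 0.40;  // hypothetical: 40% of the frame is main-thread-only work
    for (int cores : {2, 4, 8, 16}) {
        // Amdahl-style speedup: the main-thread share never shrinks, the rest splits across cores.
        double speedup = 1.0 / (mainShare + (1.0 - mainShare) / cores);
        std::printf("%2d cores -> %.2fx vs. 1 core\n", cores, speedup);
    }
    // Prints ~1.43x, ~1.82x, ~2.11x, ~2.29x -- flattening toward the 1/0.40 = 2.5x ceiling.
}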
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
One would think that; it only makes sense, right?
But as soon as you have a main thread that has higher demands than the rest of the threads, scaling is much more restricted.

No.

You have 8 slow cores (speed = 1) and get 30 fps. Moving to 8 fast cores (speed = 2) will not change the underlying scaling of the program (unless the program is atypical - say limited by PCI-e speeds or CPU bandwidth) and you should get 60 fps.

Running on a fast or slow core, a demanding main thread will (assuming the game is not GPU limited) still be the bottleneck; for instance, capable of processing 30 fps on a slow core and 60 fps on a fast core. Moving to faster or slower cores (with the same number of cores) will not, in general, change the relative demands of the main thread.

Of course there are exceptions that can go either way. Games can be limited by memory latency, so moving to faster cores may not scale well; conversely, scaling down to slower cores (with similar bandwidth and latency) is very good. Games may also have fixed-cost portions, such as the 600 Hz Project CARS physics engine, where moving to a slower CPU increases the proportion of CPU cycles consumed by physics relative to the rest of the game logic, causing poor scaling on slower CPUs.

Scaling is most definitely not linear in most cases.

Scaling doesn't necessarily mean that it will run faster and faster the more (and faster) cores you have; it can mean that it will run just as fast on much slower cores as long as there are enough of them around.
While I don't dispute that scaling will allow low-clocked slow cores to do well if there are many of them, this scaling should still exist if the clock speed of those cores is cranked up. This behaviour should be visible on the same number of cores at any speed, as long as the application is not atypically limited by something like bandwidth/memory latency or the PCIe bus.
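To put the argument in concrete terms, a toy frame-time model (all workload numbers are invented for illustration): per-frame CPU cost splits into a serial main-thread part and a parallel part spread over the remaining cores, and the frame takes as long as whichever finishes last.

#include <algorithm>
#include <cstdio>

double frameTimeMs(double mainMs, double parallelMs, int cores, double coreSpeed) {
    double mainThread = mainMs / coreSpeed;                                  // runs on one core
    double workers    = parallelMs / (coreSpeed * std::max(1, cores - 1));   // spread over the rest
    return std::max(mainThread, workers);
}

int main() {
    // Hypothetical workload: 10 ms of main-thread work, 40 ms of parallelizable work per frame.
    for (double speed : {1.0, 2.0})
        for (int cores : {4, 8, 16})
            std::printf("%2d cores @ speed %.0f -> %5.1f fps\n",
                        cores, speed, 1000.0 / frameTimeMs(10.0, 40.0, cores, speed));
    // Doubling core speed doubles fps at every core count, but past ~5 cores the main
    // thread dominates and extra cores add nothing -- both behaviours described above.
}

Whether a real engine looks like this depends entirely on how much of its per-frame work is actually stuck on the main thread.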
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
No.

You have 8 slow cores (speed = 1) and get 30 fps. Moving to 8 fast cores (speed = 2) will not change the underlying scaling of the program (unless the program is atypical - say limited by PCI-e speeds or CPU bandwidth) and you should get 60 fps.
And this does happen: the FX-6xxx, with the same but faster 6 cores, will run the PS4 games faster; nobody said anything different. If it's x times faster per core it will run x times faster.
But this is very different from what you said before.
I think the other elephant in the room is that Oxide says they will scale well to 16 cores yet that kind of scaling is completely absent in the benchmark. There really isn't any scaling over 4 cores.
There isn't any scaling over 4 cores because these 4 cores are way faster, fast enough to run all the secondary threads not in 5 cores but in 3 cores (or maybe even fewer; we see no core loads) while not slowing down the main thread that runs alone on one core, so the only gain is in how fast the main thread can be run.
But if you could, in theory, connect 2 or 4 consoles you might get a boost in performance.
They really should have run this benchmark on very slow CPUs that would not be able to fill the cards in DX11.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
And this does happen: the FX-6xxx, with the same but faster 6 cores, will run the PS4 games faster; nobody said anything different. If it's x times faster per core it will run x times faster.
But this is very different from what you said before.

There isn't any scaling over 4 cores because these 4 cores are way faster, fast enough to run all the secondary threads not in 5 cores but in 3 cores (or maybe even fewer; we see no core loads) while not slowing down the main thread that runs alone on one core, so the only gain is in how fast the main thread can be run.
But if you could, in theory, connect 2 or 4 consoles you might get a boost in performance.
They really should have run this benchmark on very slow CPUs that would not be able to fill the cards in DX11.

Multi-host network distributed computing is very, very different from single-host multi-core computing.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Any flak Nvidia gets, they deserve - for years they've been dodging bullets. However, despite all the egg they get on their face, they somehow always end up on top. Whomever they are paying off or sleeping with (note: this is a joking exaggeration of a claim, don't ask me for proof), they better keep doing it.

I said it a few times: Nvidia has been rather hush-hush about DX12. I'm interested to see how they will handle it, especially with GameWorks. I'm sure whatever back-handed thing they do will work greatly in their favor, and they will yet again end up on top, smiling as the egg drips from their cheeks.
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
Any flak Nvidia gets, they deserve - for years they've been dodging bullets. However, despite all the egg they get on their face, they somehow always end up on top. Whomever they are paying off or sleeping with (note: this is a joking exaggeration of a claim, don't ask me for proof), they better keep doing it.

I said it a few times: Nvidia has been rather hush-hush about DX12. I'm interested to see how they will handle it, especially with GameWorks. I'm sure whatever back-handed thing they do will work greatly in their favor, and they will yet again end up on top, smiling as the egg drips from their cheeks.

It could be as simple as Nvidia being happy with the status quo, so they don't see a particular need to promote DX12, while AMD is lagging behind and is trying to use DX12 as a chance to garner attention, making sure everyone knows about it.

Also, saying Nvidia "dodges bullets" brings to mind Neo from The Matrix. Nvidia should make him their mascot. :p
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Any flak Nvidia gets, they deserve - for years they've been dodging bullets. However, despite all the egg they get on their face, they somehow always end up on top. Whomever they are paying off or sleeping with (note: this is a joking exaggeration of a claim, don't ask me for proof), they better keep doing it.

I said it a few times: Nvidia has been rather hush-hush about DX12. I'm interested to see how they will handle it, especially with GameWorks. I'm sure whatever back-handed thing they do will work greatly in their favor, and they will yet again end up on top, smiling as the egg drips from their cheeks.

The fact that nVidia gained massive performance with DX12 in the Star Swarm benchmark says the opposite. The fact that Microsoft is using nVidia's hardware to promote DX12 says the opposite.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
The fact that nVidia gained massive performance with DX12 in the Star Swarm benchmark says the opposite. The fact that Microsoft is using nVidia's hardware to promote DX12 says the opposite.

Politics ;).
 

tential

Diamond Member
May 13, 2008
7,348
642
121
The fact that nVidia gained massive performance with DX12 in the Star Swarm benchmark says the opposite. The fact that Microsoft is using nVidia's hardware to promote DX12 says the opposite.

Why wouldn't you use Nvidia to promote your games right now? Even if it's slower, it's what the MAJORITY of gamers are using. It's just common sense. If I was making a game and AMD was 10% faster in this market, I'd still use Nvidia. I'd just throw 4 cards in QUAD SLI and run my game live. It won't really look any different from an equivalent AMD setup to the people viewing it as a demo.

Marketing is marketing, my friend. You do what you got to do to make money; it's all about customer perception, and it's not fun to think about or do.
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Why wouldn't you use Nvidia to promote your games right now? Even if it's slower, it's what the MAJORITY of gamers are using. It's just common sense. If I was making a game and AMD was 10% faster in this market, I'd still use Nvidia. I'd just throw 4 cards in QUAD SLI and run my game live. It won't really look any different from an equivalent AMD setup to the people viewing it as a demo.

Marketing is marketing, my friend. You do what you got to do to make money; it's all about customer perception, and it's not fun to think about or do.

Intel has the most gamers, actually.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
There are close to 40M AMD GCN-based consoles today (PS4 and XBONE), and 4M more are added every quarter.
Now add all those desktop/notebook GCN APUs and dGPUs and we are looking at a minimum of 100M GCN hardware gamers.

Just for those who believe AMD's GPU market share is small.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Why wouldn't you use Nvidia to promote your games right now? Even if it's slower, it's what the MAJORITY of gamers are using. It's just common sense. If I was making a game and AMD was 10% faster in this market, I'd use Nvidia. I'd just throw 4 cards in QUAD SLI and run my game live. Won't be any difference really than an equivalent AMD setup to the people viewing it as a demo.

Marketing is marketing my friend. You do what you go to do to make money, it's all about customer perception and it's not fun to think about or do.

Because AMD has supported Oxide from the start? A new company with literally nothing to sell for 2+ years went to AMD to promote their low-level API. Don't think they got nothing in response...

And there are other companies promoting their DX12 versions with nVidia:
Snail with King of Wushu, or Microsoft with Fable Legends.

The real question is: why is Snail able to port their game to DX12 within 6 weeks and get a ~20% performance increase on nVidia hardware, while Oxide, after working for over a year on the engine, achieves negative scaling with DX12? This is really eye-opening.
 
Last edited:

iiiankiii

Senior member
Apr 4, 2008
759
47
91
Because AMD has supported Oxide from the start? A new company with literally nothing to sell for 2+ years went to AMD to promote their low-level API. Don't think they got nothing in response...

And there are other companies promoting their DX12 versions with nVidia:
Snail with King of Wushu, or Microsoft with Fable Legends.

The real question is: why is Snail able to port their game to DX12 within 6 weeks and get a ~20% performance increase on nVidia hardware, while Oxide, after working for over a year on the engine, achieves negative scaling with DX12? This is really eye-opening.

In the end, it's up to Nvidia and the developers to fix this broken thing. As far as I know, the Windows 10 driver for Nvidia is a mess right now. It's a good thing this is only a benchmark and not the actual game. It wouldn't look good for Nvidia and its customers if it ran like this in the final game. That's how it is. Nvidia/AMD need to take care of their side to please their customers. In the end, they're all to blame.