
[PCPER] 3DMark API Overhead Feature Test

Looks like 290X gets a 20-30% draw call advantage over GTX980 with DX12. Probably won't mean a lot in actual average/peak fps, but should help raise minimum fps.
 
Anand's article has a good chart on CPU scaling. Again, it is not as simple as "consoles have 8 cores, so moar cores is better". In the 3DMark overhead test with a powerful GPU (290X), you get good scaling from 2 cores to 4, and less scaling from 4 cores to 6.

And with weaker cards, there is basically no increase from 2 to 6 cores. So again, it is what we have seen with Mantle: the benefits can range from huge to minimal depending on the game and hardware used, with the most benefit seen with a powerful GPU and a weaker CPU.

Anand's results

Edit: compared to DX11, though, even with a weaker card there is still a big increase going to DX12; the CPU core scaling just gets smaller as the GPU gets weaker. Hopefully, with new GPUs coming out alongside DX12, some really impressive results could be obtained if one wants to spend the money on a top-of-the-line GPU.
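The scaling pattern described above can be summed up as a simple efficiency ratio; a quick sketch (the draw-call numbers are made up to illustrate the shape of the curve, not Anand's data):

```python
def scaling_efficiency(calls_low, calls_high, cores_low, cores_high):
    # Fraction of ideal linear speedup achieved when adding cores.
    actual = calls_high / calls_low
    ideal = cores_high / cores_low
    return actual / ideal

# Illustrative shape only: strong 2->4 core scaling, weak 4->6.
print(scaling_efficiency(8e6, 14e6, 2, 4))   # ~0.88 of ideal
print(scaling_efficiency(14e6, 15e6, 4, 6))  # ~0.71 of ideal
```

An efficiency near 1.0 means the extra cores are fully used; the drop between the two steps is what "less scaling from 4 to 6" looks like numerically.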
 

I think one of the biggest benefits of the next-gen APIs will be more artistic freedom in games. If you read the Unity documentation, there is a section about how to design games to be less CPU intensive:


  • Combine close objects together, either manually or using Unity’s draw call batching.
  • Use less materials in your objects, by putting separate textures into a larger texture atlas and so on.
  • Use less things that cause objects to be rendered multiple times (reflections, shadows, per-pixel lights etc., see below).
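Those Unity guidelines boil down to submitting fewer, larger batches. A toy sketch of the batching idea (the scene data and function names here are invented for illustration, not Unity's API):

```python
from collections import defaultdict

# Hypothetical scene: (mesh, material) pairs.
scene = [
    ("rock_a", "stone"), ("rock_b", "stone"),
    ("tree_a", "bark"), ("tree_b", "bark"), ("tree_c", "bark"),
    ("player", "skin"),
]

def naive_draw_calls(objects):
    # One draw call per object: a state change and submit every time.
    return len(objects)

def batched_draw_calls(objects):
    # Group objects sharing a material; each group can be submitted
    # as a single draw call, which is the idea behind draw call
    # batching and texture atlases.
    batches = defaultdict(list)
    for mesh, material in objects:
        batches[material].append(mesh)
    return len(batches)

print(naive_draw_calls(scene))    # 6 calls, one per object
print(batched_draw_calls(scene))  # 3 calls, one per material
```

Lower-overhead APIs relax this constraint, which is where the "artistic freedom" argument comes from: artists can use more unique materials before the CPU becomes the limit.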
 
Interesting, but ultimately the numbers are so high that even the weaker 750 Ti should have no problem in the future.
 
Looks like 290X gets a 20-30% draw call advantage over GTX980 with DX12. Probably won't mean a lot in actual average/peak fps, but should help raise minimum fps.

It's just an interesting bench; you cannot extrapolate it into gaming performance, far from it.

There are no DX12 games on the near horizon, so all of this is moot, aside from the fact that DX12 CAN be great!
 
It really won't matter much in games. Without this bottleneck, developers will aim for a higher number of draw calls, but they won't be near those limits.
 
Why did they not add a run with a more shader-intensive workload? I mean, does the lower overhead really help feed the GPU and translate into an actual performance increase?

As it stands, it just seems to test the front end of our GPUs.

PS: That should also be why PCPer got such a high increase when overclocking the 960. Because the shaders were not used, there was enough TDP headroom for the front end to get a huge boost in clock speed.
 
Yes, but so far it looks like a bigger win for AMD: their DX11 performance is a lot worse, and their DX12 performance (for maximum draw calls) is higher. That's a big change.
 

I also think that AMD has had a lot more time to work on their drivers for Mantle (and therefore DX12) so far.
 
Can we extrapolate it for CPU overhead at least? DX11 MT being about twice as good on Nvidia as on AMD would help explain the latter's need for a faster CPU.
 
Theoretically it doesn't have one.

Ouch a 285 beating a 980 in Anand's test runs. Embarrassing performance.

Not really, here is an example on why:
[attached image: dx12-960.png]
 
http://www.futuremark.com/pressreleases/compare-directx-12-mantle-and-directx-11-with-3dmark

The API Overhead feature test is not a general-purpose GPU benchmark, and it should not be used to compare graphics cards from different vendors.

https://forum.beyond3d.com/threads/directx-12-api-preview.55653/page-8#post-1834187

I do want to caution guys - Microsoft and Futuremark are *really not kidding* when they are telling you that this is not a useful benchmark for comparing GPUs to one another. They make zero effort to even do a consistent amount of GPU work, let alone anything representative. See the pcper overclocking results for instance, but the rabbit hole goes much deeper.

This benchmark is only useful for comparing how well different APIs work on a given system (CPU+GPU), not for comparing systems.

Of course, people will just ignore this and make stupid claims about performance.
 
So are you being intentionally obtuse or has your reading comprehension completely failed? Look at what I wrote and then look at what you responded with. Not the same thing.

http://www.anandtech.com/show/9112/exploring-dx12-3dmark-api-overhead-feature-test/3

Not at all. Did you read this?
http://www.futuremark.com/pressreleases/compare-directx-12-mantle-and-directx-11-with-3dmark

The purpose of the test is to compare the relative performance of different APIs on a single system, rather than the absolute performance of different systems. The API Overhead feature test is not a general-purpose GPU benchmark, and it should not be used to compare graphics cards from different vendors. (We are working on a DirectX 12 benchmark with game-like workloads, which we expect to release soon after the public launch of Windows 10.)

One note: it doesn't even hold within the same vendor, as shown with the GTX 960 vs GTX 980.
 

Yeah, I did read and comprehend that... did you? Their warning not to compare between GPUs is complete BS, because they go on to say how the test works (below). If all you do is swap the GPU on the same system, then it very much is a benchmark. AMD's architecture scales better in every multi-core CPU configuration, to the point that a 285 goes beyond a 980 before it drops below 30 fps. Nvidia just got embarrassed.

The 3DMark API Overhead feature test measures API performance by making a steadily increasing number of draw calls. The result of the test is the maximum number of draw calls per second achieved by each API before the frame rate drops below 30 fps.
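Futuremark's description amounts to a ramp-and-threshold search. A minimal sketch of that procedure, assuming a made-up frame-cost model (all names and numbers here are invented for illustration, not 3DMark's actual code):

```python
def api_overhead_score(frame_time_for, start=1000, step_factor=1.2):
    """Ramp up draw calls per frame until the frame rate drops
    below 30 fps; report draw calls per second at the last step
    that still held 30 fps."""
    calls = start
    best = 0
    while True:
        ft = frame_time_for(calls)   # seconds per frame at this load
        fps = 1.0 / ft
        if fps < 30.0:
            return best
        best = calls * fps           # draw calls per second achieved
        calls = int(calls * step_factor)

# Toy cost model: 2 ms of fixed frame work plus 1 microsecond per call.
toy = lambda n: 0.002 + n * 1e-6
score = api_overhead_score(toy)
```

With this toy model the ramp stops once per-call cost pushes frame time past 33.3 ms, and the score lands near a million draw calls per second; the point is only to show why the result reflects the whole CPU+GPU system rather than the GPU alone.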
 
*sigh*

The point of Futuremark's statement is that one shouldn't look at a synthetic directed test like the API Overhead benchmark to say that one card is overall faster than another. i.e. you shouldn't be buying your card based on which one has the faster command processor.

By the same token AMD has terrible tessellation performance, but real world benchmarks don't have any kind of gap resembling that.

http://images.anandtech.com/graphs/graph9059/72520.png

The API Overhead test is looking at one very small facet of GPU performance. It's very good at that and tells us some very important details. But one should not extrapolate the overall performance of a GPU from this test.

Car analogy time: it's like saying a Ford Pinto is a better car than a modern car because it had a nicer fuel tank.
 


Watch out guys the analogy is a tarp.
I think the tessellation hardware history is interesting; wasn't ATI first to implement it in hardware, with the HD 3000 series?
 
Both synthetics so far (this & Star Swarm) are specific to the cases these benches aim to look at; they're NOT indicative of games, because we don't know what features game engines will push or focus on.

There's no way someone is going to make games with many millions of draw calls per second. Think about that for a bit.

What you CAN extrapolate is that CPU usage will be lower across the board in DX12, which means less total system power for equivalent workloads.

PS: It is foolish to use synthetics to beat your hated vendor with, just as it was when Star Swarm was showcased.
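To put "millions of draw calls per second" into per-frame terms, the arithmetic is simple (the score and frame rate below are illustrative, not measured numbers):

```python
def draw_calls_per_frame(calls_per_second, fps):
    # Convert an API-overhead throughput score into a per-frame budget.
    return calls_per_second / fps

# e.g. a hypothetical 15M calls/s score spent at 60 fps:
budget = draw_calls_per_frame(15_000_000, 60)
print(budget)  # 250000.0 draw calls available per frame
```

Even a fraction of that per-frame budget is far beyond what current engines submit, which is why the raw score won't translate directly into frame rates.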
 