(DSOG via MaxPC) Nvidia Finally Officially Speaks About AMD’s Mantle


SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
That is a lie. NVAPI does not in any way provide gaming features. The way to think about NVAPI is that it's the API behind the Nvidia control panel; it lets you change settings like anti-aliasing or how refresh works, but it doesn't provide game features. It's mainly for querying the hardware.

You can read all about it on Nvidia's website: https://developer.nvidia.com/nvapi. Unlike with Mantle, there is complete developer documentation defining every method call.

I think that invalidates your argument.

That's not necessarily true:

nvidia said:
FarCry 2 reads from a multisampled depth buffer to speed up antialiasing performance. This feature is fully implemented on GeForce GPUs via NVAPI. Radeon GPUs implement an equivalent path via DirectX 10.1. There is no image quality or performance difference between the two implementations.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
Then that would suggest that NVAPI contains hidden functions that aren't in their documentation... because there isn't a game-related function in there; I have checked.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
From your link:


nVidia said:
NVAPI comes in two "flavors": the public version, available above, and a more extensive version available to registered developers under NDA.
 

MathMan

Member
Jul 7, 2011
93
0
0
That is a lie. NVAPI does not in any way provide gaming features. The way to think about NVAPI is that it's the API behind the Nvidia control panel; it lets you change settings like anti-aliasing or how refresh works, but it doesn't provide game features. It's mainly for querying the hardware.

You can read all about it on Nvidia's website: https://developer.nvidia.com/nvapi. Unlike with Mantle, there is complete developer documentation defining every method call.
Thank you for this link. It's really quite helpful!

Among other things, it lists:
Frame Rendering:
Ability to control Video and DX rendering not available in DX runtime. Requires NDA Edition for full control of this feature.

Let me repeat that: "DX rendering not available in DX runtime."

What do you think that means, good man?

I think that invalidates your argument.
I'm afraid your link didn't quite have the intended result. :p
 

MathMan

Member
Jul 7, 2011
93
0
0
Asynchronous compute is one thing that isn't in DX and that both Nvidia and AMD support. Another is the advanced anti-aliasing functions found in GCN.
That's good to know (and I'm not up to speed on this kind of stuff), but the question is: is this something that warrants a different graphics API? If Nvidia can do asynchronous compute in a layer that runs parallel to a DX environment (in other words, you can have a game that does graphics in DX and compute through whatever other API interface they're using), then why does AMD need to link them together with Mantle? Why didn't they decide to decouple them and run them in parallel?

I believe it's perfectly possible to do this in CUDA: do your calculations there and pass the results as a texture surface to a DX call later on. No need to make CUDA do all the graphics stuff.
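
To make this concrete, here is a rough, untested sketch using CUDA's Direct3D 11 interop. The names passResultsToDX and deviceResults are made up for illustration; the cudaGraphics* calls are the real interop API:

    #include <d3d11.h>
    #include <cuda_runtime.h>
    #include <cuda_d3d11_interop.h>

    // Sketch: hand a CUDA compute result to D3D11 as a texture.
    // Assumes the game created d3dTexture on its D3D11 device, and that
    // deviceResults is a device buffer a compute kernel just wrote
    // (one float4 per texel). Error checking omitted for brevity.
    void passResultsToDX(ID3D11Texture2D* d3dTexture,
                         const float4* deviceResults,
                         size_t width, size_t height)
    {
        static cudaGraphicsResource* cudaRes = nullptr;
        if (cudaRes == nullptr) {
            // Register the D3D texture with CUDA once, up front.
            cudaGraphicsD3D11RegisterResource(&cudaRes, d3dTexture,
                                              cudaGraphicsRegisterFlagsNone);
        }

        // Map the texture for CUDA, copy the compute output in, then
        // unmap so D3D owns it again for the rest of the frame.
        cudaGraphicsMapResources(1, &cudaRes, 0);
        cudaArray_t texels = nullptr;
        cudaGraphicsSubResourceGetMappedArray(&texels, cudaRes, 0, 0);
        cudaMemcpy2DToArray(texels, 0, 0, deviceResults,
                            width * sizeof(float4), width * sizeof(float4),
                            height, cudaMemcpyDeviceToDevice);
        cudaGraphicsUnmapResources(1, &cudaRes, 0);
        // The game can now bind d3dTexture as an ordinary shader
        // resource in its DX draw calls.
    }

Nothing in that flow requires the compute API to also own the graphics pipeline.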
 

MathMan

Member
Jul 7, 2011
93
0
0
Yet Intel can extend DX. And you say AMD/nVidia can't?
http://www.xbitlabs.com/news/graphi...ons_to_Speed_Up_Rendering_in_Video_Games.html

Someone is terribly wrong.
Anybody can extend any API as much as they want to. That's just a matter of programming the feature.

The question is: are those extensions officially sanctioned by Microsoft as part of the DX API? As long as they are not, they are no different from NVAPI: they let you do some stuff that goes beyond the official features of a particular DX level. The only difference between Intel and Nvidia here is that Intel calls it a DirectX extension while Nvidia calls it 'NVAPI'. Maybe they should have called it 'DXext' instead. :)
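
For what it's worth, the public flavor is easy to poke at yourself. Here's a minimal, untested sketch against the public NVAPI SDK that just queries the hardware, which is most of what the non-NDA edition exposes:

    #include <cstdio>
    #include "nvapi.h"  // header from the public NVAPI SDK download

    int main()
    {
        // Loads the NVAPI layer shipped with the installed Nvidia driver.
        if (NvAPI_Initialize() != NVAPI_OK)
            return 1;  // no Nvidia driver/GPU present

        // Enumerate the physical GPUs the driver knows about.
        NvPhysicalGpuHandle gpus[NVAPI_MAX_PHYSICAL_GPUS];
        NvU32 gpuCount = 0;
        NvAPI_EnumPhysicalGPUs(gpus, &gpuCount);

        for (NvU32 i = 0; i < gpuCount; ++i) {
            NvAPI_ShortString name;
            NvAPI_GPU_GetFullName(gpus[i], name);
            std::printf("GPU %u: %s\n", i, name);
        }

        NvAPI_Unload();
        return 0;
    }

The rendering-control calls quoted earlier in the thread are exactly the part that sits behind the NDA edition.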
 

cytg111

Lifer
Mar 17, 2008
25,716
15,198
136
There are two players in the field, AMD and Nvidia (Intel? Come on..). And they rely on common DirectX ground to deliver their product? Two players, with a software layer called Microsoft in between?
I don't understand why they haven't broken out their own proprietary APIs sooner. Having Microsoft and the Windows tax as middle-man is just nuts.
 

Noctifer616

Senior member
Nov 5, 2013
380
0
76
That's good to know (and I'm not up to speed on this kind of stuff), but the question is: is this something that warrants a different graphics API? If Nvidia can do asynchronous compute in a layer that runs parallel to a DX environment (in other words, you can have a game that does graphics in DX and compute through whatever other API interface they're using), then why does AMD need to link them together with Mantle? Why didn't they decide to decouple them and run them in parallel?

I believe it's perfectly possible to do this in CUDA: do your calculations there and pass the results as a texture surface to a DX call later on. No need to make CUDA do all the graphics stuff.

I am no expert when it comes to programming games; however, from what I understand, running compute and graphics at the same time in DX doesn't work efficiently.

http://www.hardocp.com/article/2009/10/19/batman_arkham_asylum_physx_gameplay_review/9#.U98WQGMUihV

When you look at the very last test, High PhysX on the GTX 275 cuts performance almost in half. However, using a dedicated card (GTS 250), which is very weak, stops the performance from dropping so drastically. There is still a performance hit because there is more stuff for the GPU to render, but a GTX 285 + GTS 250 together do not double the power of the system.

This is the only decent PhysX testing I found, but it's old; if anyone has any better benchmarks, please share them.
 

MathMan

Member
Jul 7, 2011
93
0
0
The fact that it is inefficient to run compute and DX in parallel doesn't mean that it's caused by DX. It could also simply be a hardware architecture issue. (In fact, that's more likely.)

For example: if the HW pipeline needs to be flushed to reconfigure it from graphics to compute, then you'll incur quite a large cost when continuously switching between them, and no amount of driver optimization could fix that.

Remember that for GK110, Nvidia explicitly announced HW improvements to increase throughput for dependent compute processes. That's not exactly the same thing as mixing graphics and compute, but it shows that there are things in hardware that can hamper performance.
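
Purely to illustrate what 'continuously switching' looks like from the API side, here's an untested D3D11 sketch (shaders and buffers assumed to be created elsewhere). Every marked boundary is a spot where such hardware would have to drain and reconfigure the pipeline:

    #include <d3d11.h>

    // Illustrative per-frame loop that ping-pongs between compute and
    // graphics on one immediate context. None of these calls is
    // expensive in the API itself; the cost, if any, is in how the
    // hardware handles the transitions.
    void renderFrame(ID3D11DeviceContext* ctx,
                     ID3D11ComputeShader* simulationCS,
                     ID3D11VertexShader* vs, ID3D11PixelShader* ps,
                     UINT groupCount, UINT vertexCount)
    {
        // Compute pass (e.g. physics or simulation).
        ctx->CSSetShader(simulationCS, nullptr, 0);
        ctx->Dispatch(groupCount, 1, 1);   // graphics -> compute switch

        // Graphics pass consuming the compute output.
        ctx->CSSetShader(nullptr, nullptr, 0);
        ctx->VSSetShader(vs, nullptr, 0);
        ctx->PSSetShader(ps, nullptr, 0);
        ctx->Draw(vertexCount, 0);         // compute -> graphics switch
    }

The same code can behave very differently from one architecture to the next, and no driver can paper over that.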
 

Noctifer616

Senior member
Nov 5, 2013
380
0
76
The fact that it is inefficient to run compute and DX in parallel doesn't mean that it's caused by DX. It could also simply be a hardware architecture issue. (In fact, that's more likely.)

Also possible; this is why I am asking for a more up-to-date PhysX benchmark with newer hardware and software.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
I am no expert when it comes to programming games; however, from what I understand, running compute and graphics at the same time in DX doesn't work efficiently.

http://www.hardocp.com/article/2009/10/19/batman_arkham_asylum_physx_gameplay_review/9#.U98WQGMUihV

When you look at the very last test, High PhysX on the GTX 275 cuts performance almost in half. However, using a dedicated card (GTS 250), which is very weak, stops the performance from dropping so drastically. There is still a performance hit because there is more stuff for the GPU to render, but a GTX 285 + GTS 250 together do not double the power of the system.

This is the only decent PhysX testing I found, but it's old; if anyone has any better benchmarks, please share them.

Compared to new cards the GTS 250 might be weak, but compared to the GTX 275 it is not that weak, especially in compute. The GTS 250 has 705.024 GFLOPS of compute power and the GTX 275 has 1010.72, so that's about 70 percent of its compute power.
As it happens, compute performance is one of the strongest points of the GTS 250 compared to the GTX 285/275. It's sometimes two times worse in some metrics, but in others it compares very favorably.
You go back and forth between the 285 and the 275, but it changes very little.
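(Checking that quickly: 705.024 / 1010.72 ≈ 0.698, so the GTS 250 has roughly 70 percent of the GTX 275's peak compute throughput, as stated.)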
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
There are two players in the field, AMD and Nvidia (Intel? Come on..). And they rely on common DirectX ground to deliver their product? Two players, with a software layer called Microsoft in between?
I don't understand why they haven't broken out their own proprietary APIs sooner. Having Microsoft and the Windows tax as middle-man is just nuts.

OpenGL.......
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Well, they were indirectly talking to Huddy. Given the stupid stuff that guy said in his interview, the condescension was probably warranted. :)

Did you watch the interview? Huddy was pretty well-spoken and clear, and didn't speak down or condescend.
 

dacostafilipe

Senior member
Oct 10, 2013
803
301
136
Well, they were indirectly talking to Huddy. Given the stupid stuff that guy said in his interview, the condescension was probably warranted. :)

Huddy may have said absurd stuff, but is that a reason/excuse for the nVidia guys to do even worse?
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Just an FYI, it's a pretty heated rivalry between these two brands. It's not just the fans who talk trash.
 

MathMan

Member
Jul 7, 2011
93
0
0
Did you watch the interview? Huddy was pretty well-spoken and clear, and didn't speak down or condescend.
I didn't say Huddy was condescending. I said: when you say stupid stuff, even in the most well-spoken, clear way (like his nonsense about a minimum one-frame lag in the G-SYNC module, because he can't fathom the existence of a look-aside RAM), you're pretty much asking for a condescending reaction.
 

I/O

Banned
Aug 5, 2014
140
0
0
Well, they were indirectly talking to Huddy. Given the stupid stuff that guy said in his interview, the condescension was probably warranted. :)
Tom Petersen is the king of condescension. He's also full of a lot of poop.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I didn't say Huddy was condescending. I said: when you say stupid stuff, even in the most well-spoken, clear way (like his nonsense about a minimum one-frame lag in the G-SYNC module, because he can't fathom the existence of a look-aside RAM), you're pretty much asking for a condescending reaction.

Why is it that anytime someone says or does something wrong it's always somebody else's fault? They said what they said (on both sides), and they are solely responsible for the content.
 

MathMan

Member
Jul 7, 2011
93
0
0
Why is it that anytime someone says or does something wrong it's always somebody else's fault? They said what they said (on both sides), and they are solely responsible for the content.

I agree completely. Huddy said something totally incorrect and he is, indeed, solely responsible. As a chief scientist (or whatever his official title is), he should have known better. Petersen was absolutely right to call him out for that and correct him.

(Isn't it funny how some people take more issue with the tone of what is said than with the substance of what was said?)