[Nvidia.com] Nvidia GameWorks unleashed at GDC.


Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
I don't know how many times we are going to have this discussion. The nVidia proponents think it's a perfectly fine business practice and the AMD proponents think it's dodgy. That's part of the reason some people have a brand preference. Nobody is ever going to change the other's mind. Have your say and move on.

Because the AMD proponents aren't benefitting. Nothing is taken away from the original gaming experience if you use an AMD GPU to play a game that has Nvidia-only features added to it. The reason AMD proponents think this is dodgy is not really logical. Nothing dodgy about it. What it is, is sour grapes and a sense of entitlement for some reason.

Nobody is trying to change anyone's mind AFAIK. Just trying to remove some mud from people's eyes is all. And we are having a conversation and it's moving "along". Not moving on.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Why would you say that? That's a baseless conclusion simply because both now have something proprietary.

Simply because both have something currently unused by others doesn't mean that the average person now suddenly supports proprietary features. If the manufacturers have chosen to create proprietary features, it doesn't mean that makes the consumer happy; it may be to the contrary, since now you are limited in what you get with the card.

imho,

One is receiving more -- not being limited! One is limited by not trying to innovate and by only using standards that may be restrictive, limited, or that simply don't exist.

How does one innovate if one is chained only to standards, or is waiting for others to invent?

Supporting standards is very important, but innovating, at times, by going beyond them improves customers' gaming experiences and gives them more choice to consider!
 

NTMBK

Lifer
Nov 14, 2011
10,411
5,677
136
imho,

One is receiving more -- not being limited! One is limited by not trying to innovate and by only using standards that may be restrictive, limited, or that simply don't exist.

How does one innovate if one is chained only to standards, or is waiting for others to invent?

Supporting standards is very important, but innovating, at times, by going beyond them improves customers' gaming experiences and gives them more choice to consider!

Going beyond standards is indeed necessary, and I am all for companies striking out on their own and coming up with cool technology. But my preference is for the gains of this tech to then be rolled back into updated standards (see Mantle). Keeping cool tech locked away on one platform limits its hardware spread, but it also limits its proliferation in software, which is the real problem here.

For example, consider the situation of a developer who is thinking of writing some GPGPU code to pull off a cool trick in her new game. Should she go CUDA? Or OpenCL? CUDA is only supported on a small subset of the market, but OpenCL performance on NVidia is terrible compared to CUDA, because NVidia deliberately put a bare minimum of effort into supporting it and write shoddy drivers. So either she accepts that she's getting reduced performance on NVidia platforms, or she has to write and support both OpenCL and CUDA codepaths (doubling the amount of work required to add this feature), or she only writes a CUDA codepath and accepts that she is not going to be able to support the majority of GPUs. This shifts the cost/reward calculation significantly, and as such the developer will often just drop the whole idea because the potential rewards are not worth the effort required. And we lose out on some potentially cool technology.
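
To make that concrete, here is a rough sketch of what the CUDA codepath for one small GPGPU trick might look like. The kernel and function names are made up purely for illustration and aren't from any real game; the point is that an OpenCL version of the same feature would have to be written, tested and maintained separately.

Code:
#include <cuda_runtime.h>

// Toy kernel: smooth a 1D heightmap. Hypothetical example, not a real game feature.
__global__ void blur_heightmap(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n - 1)
        out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0f;
}

// Host side: copy data to the GPU, launch the kernel, copy the result back.
void blur_on_gpu(const float* host_in, float* host_out, int n)
{
    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, host_in, n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    blur_heightmap<<<blocks, threads>>>(d_in, d_out, n);

    cudaMemcpy(host_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_in);
    cudaFree(d_out);
}

None of this runs on an AMD card; a second, OpenCL implementation of exactly the same feature would be needed for that.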

If NVidia had gone all in on OpenCL and poured its resources into it, offering its experience in GPGPU and its undeniably excellent dev tools -- or even better, opened up CUDA to all platforms and takers, making it an open standard -- then we would be in a very different situation today, and I can guarantee that we would see more GPGPU software, making better use of our GPUs. But NVidia chose to maintain their lock-in, to the detriment of consumers.
 

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
There is not much sense in making the case that NV should go OpenCL for the sake of an indecisive imaginary woman,

when in the real world CUDA has been nothing but a huge success for them.
IBM, HP, Amazon, workstations, supercomputers, universities, the academic community - they all do wonderful things with CUDA.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
There is not much sense in making the case that NV should go OpenCL for the sake of an indecisive imaginary woman,

when in the real world CUDA has been nothing but a huge success for them.
IBM, HP, Amazon, workstations, supercomputers, universities, the academic community - they all do wonderful things with CUDA.

Exactly. Nicely stated.
 

NTMBK

Lifer
Nov 14, 2011
10,411
5,677
136
There is not much sense in making the case that NV should go OpenCL for the sake of an indecisive imaginary woman,

when in the real world CUDA has been nothing but a huge success for them.
IBM, HP, Amazon, workstations, supercomputers, universities, the academic community - they all do wonderful things with CUDA.

CUDA is a good fit for HPC because you know precisely the hardware you are going to be running on. You write your software for a single hardware configuration, optimize it exclusively for that configuration, etc. Portability doesn't even come into it.

This is precisely the reason why CUDA has been much more successful in HPC than in general purpose computing: as soon as you can't guarantee the hardware your customer runs, you run into compatibility problems.
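
As a rough illustration of what knowing the hardware buys you (the constants below are hypothetical, not recommendations): on a fixed HPC target the tuning parameters can simply be baked in, whereas software shipped to unknown consumer machines has to discover them at runtime.

Code:
// With a single, known GPU model on every node, tuning constants can be hard-coded.
constexpr int kThreadsPerBlock = 256;        // picked once for the known hardware
constexpr int kSharedBytesUsed = 48 * 1024;  // the shared memory you know each SM has

// On unknown consumer hardware you would instead have to query this at runtime:
//   cudaDeviceProp prop;
//   cudaGetDeviceProperties(&prop, 0);
//   // then derive launch parameters from prop.maxThreadsPerBlock,
//   // prop.sharedMemPerBlock, prop.major / prop.minor, and so on.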
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
CUDA is a good fit for HPC because you know precisely the hardware you are going to be running on. You write your software for a single hardware configuration, optimize it exclusively for that configuration, etc. Portability doesn't even come into it.

This is precisely the reason why CUDA has been much more successful in HPC than in general purpose computing: as soon as you can't guarantee the hardware your customer runs, you run into compatibility problems.

Guarantee how? And what compatibility problems arise from it?
 

NTMBK

Lifer
Nov 14, 2011
10,411
5,677
136
Guarantee how? And what compatibility problems arise from it?

Guarantee, as in know 100% what hardware they will run it on. Either because you're targeting a single supercomputer (HPC), or because you ship your software with custom workstation SKUs with specific validated hardware setups. But when you just sell the software with no hardware control, and your user can install it on whatever hardware they want (like games, for instance), you no longer have that guarantee.

And what compatibility problems? How about not having CUDA support, for a start? Go try running CUDA code on a system with an AMD GPU and let me know how it goes. ;)
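
For illustration, here is a minimal sketch of what a game shipping an optional CUDA path has to do at startup (the function is hypothetical, but cudaGetDeviceCount is the standard CUDA runtime call). On a machine with an AMD GPU and no NVIDIA driver the call fails, and the program has to fall back to a non-CUDA path or drop the feature entirely.

Code:
#include <cuda_runtime.h>
#include <cstdio>

// Returns true only if there is at least one usable CUDA device in the system.
bool cuda_is_usable()
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        std::printf("No usable CUDA device (%s); using fallback path\n",
                    cudaGetErrorString(err));
        return false;  // e.g. run the effect on the CPU, or disable it
    }
    return true;
}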
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Guarantee, as in know 100% what hardware they will run it on. Either because you're targeting a single supercomputer (HPC), or because you ship your software with custom workstation SKUs with specific validated hardware setups. But when you just sell the software with no hardware control, and your user can install it on whatever hardware they want (like games, for instance), you no longer have that guarantee.

And what compatibility problems? How about not having CUDA support, for a start? Go try running CUDA code on a system with an AMD GPU and let me know how it goes. ;)

Easy solution for the latter. ;)
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
What, give 60% of your users a free NVidia GPU so that they can run your code? I know NVidia would love that, but I don't think it's a sustainable business model. :p

Really? Where does free come anywhere near this conversation? That was definitely an air grab.


No. If you want to run CUDA, buy a device that is CUDA capable.
If you don't want to run CUDA, then you don't need to buy a CUDA capable device.
 

NTMBK

Lifer
Nov 14, 2011
10,411
5,677
136
Really? Where does free come anywhere near this conversation? That was definitely an air grab.


No. If you want to run CUDA, buy a device that is CUDA capable.
If you don't want to run CUDA, then you don't need to buy a CUDA capable device.

Now you're looking at this from the point of view of the end user, not the developer, which is not what I was talking about. I am talking about being a software developer, and making the decision whether or not to add CUDA to your software.

If CUDA is only supported by a third of my potential audience's PCs, then the cost/benefit calculation is very different from what it would be if, say, CUDA were supported on 95% of my potential audience's GPUs. (In a theoretical world where NVidia had turned CUDA into an open standard.) It is much harder to justify the developer time and added complexity if only a third of my audience benefits from it. This means that I am less likely to put CUDA into my software, and there is one less piece of CUDA-capable software out there.

This relationship is of course very different if I am developing a piece of software to run on a fixed platform, like a workstation with fixed spec or a supercomputer. In that case I have the guarantee that 100% of my users will have access to CUDA, so I am very likely to use it in my software. This is what I was discussing earlier.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
CUDA makes up 2/3. I think devs are and have been properly motivated to code for CUDA. Visit nvidia's CUDA info pages on their site.
 

NTMBK

Lifer
Nov 14, 2011
10,411
5,677
136
CUDA makes up 2/3. I think devs are and have been properly motivated to code for CUDA. Visit nvidia's CUDA info pages on their site.

I know the CUDA info pages intimately, and I know about developer motivation. Thanks for the pointer though. ;)

And no, CUDA does not make up 2/3. 2/3 of discrete GPUs sold, sure, but not 2/3 of the total gaming market. (Though the numbers are actually higher than I thought, ~50% according to Steam. They seem to have changed their methodology; I think they aren't over-reporting Intel as much as they were.) Still, that's about half your users who won't benefit from it.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
I know the CUDA info pages intimately, and I know about developer motivation. Thanks for the pointer though. ;)

And no, CUDA does not make up 2/3. 2/3 of discrete GPUs sold, sure, but not 2/3 of the total gaming market. (Though the numbers are actually higher than I thought, ~50% according to Steam. They seem to have changed their methodology; I think they aren't over-reporting Intel as much as they were.) Still, that's about half your users who won't benefit from it.

Still more than the third you proposed. But why would we be talking about anything other than the discrete market? That's like hinting that people use their A-8s to mine.
Nobody uses these low-end non-discrete GPUs for any type of heavy computational use. So why include them in your numbers? Because a higher number would better support your argument?
 

NTMBK

Lifer
Nov 14, 2011
10,411
5,677
136
Still more than the third you proposed. But why would we be talking about anything other than the discrete market? That's like hinting that people use their A-8s to mine.
Nobody uses these low-end non-discrete GPUs for any type of heavy computational use. So why include them in your numbers? Because a higher number would support your argument better?

I'm not talking about mining; I'm talking about gaming right now, but the same logic could be applied to plenty of other applications. There are plenty of algorithms that are well suited to running on the GPU but aren't traditional rasterization rendering tasks, and as such you want to program them through a GPGPU toolkit like CUDA. Not just long-running number-crunching algorithms, but also speeding up parts of the game pipeline so that the whole program runs faster. The most obvious example of this is GPU PhysX, which is built on top of CUDA.
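
As a rough sketch of the kind of non-rendering game work that maps well to the GPU, here is a toy particle integration step written as a CUDA kernel. It is purely illustrative; the real PhysX implementation is far more involved.

Code:
// Toy example: integrate particle velocities and positions for one timestep.
__global__ void integrate_particles(float3* pos, float3* vel, float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    vel[i].y -= 9.81f * dt;      // apply gravity
    pos[i].x += vel[i].x * dt;   // advance position
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

Launched over tens of thousands of particles, this kind of kernel easily outpaces a CPU path, but as written it only exists for CUDA-capable hardware.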

CUDA has traditionally been used a lot in scientific computing applications, but it could have a lot of potential outside of that niche, if it weren't for all the associated compatibility problems.

(And the reason I include those systems in the numbers is because they come from the Steam hardware survey, which is the best source we have for information on gaming systems. )

EDIT: And yes, OpenCL obviously exists. But it is much more annoying to work with, the developer tools are severely lacking compared to CUDA, and NVidia cripple OpenCL performance on their hardware.
 

_UP_

Member
Feb 17, 2013
144
11
81
To add to NTMBK's point, there's also hardware acceleration for apps like Photoshop and Premiere. If I'm not mistaken, until CS5 they were using CUDA, but in the new versions they use OpenCL. That is a big market as well, one that sells a lot of cards on its own and is worth supporting (you don't want to be the only one not supporting it).