*Thread name changed* Nvidia and AMD moral and immoral business practices

Page 27

zebrax2

Senior member
Nov 18, 2007
977
69
91
You cannot disable Cuda, as Cuda *is* the GPU architecture.
Without Cuda, there is no OpenCL, no DirectCompute, and no SM5.0.

Either they will reject the design, lock out the code from the Xbox OS or maybe just not approve the game that will use the code.
 

NoQuarter

Golden Member
Jan 1, 2001
1,006
0
76
Funny how you mention that PhysX takes advantage of the GPU, so how is it useless, when the video card often doesn't have much to do when playing console ports? The facts show that PhysX doesn't bring FPS down much, even though the video card is doing graphics + PhysX. Another fact is that when PhysX is run on the GPU, its performance is better than when it isn't. So how did you come to the conclusion that it is mostly useless? Are you trying to say that any GPU-accelerated program is useless?

I said it's useless as used today, not useless as a concept. It's useless as used because no game dev has the resources or desire to integrate GPU-accelerated physics as the primary physics engine. Instead we just get tacked-on effects, because broad support doesn't exist.

Your 2nd claim directly contradicts your 1st point. You know that there are things the GPU does better than the CPU, and PhysX shows that. You said that developers have little interest in spending much work on PhysX because of the IP, which limits its market potential. However, you didn't mention that everything from the console up to the high-end PC is capable of running PhysX code. The only difference is that, with Nvidia video cards, most of the load is moved to the video card instead of the CPU. That means not a single byte is wasted when an Nvidia video card is not present; a non-Nvidia video card simply won't be utilized. In other words, PhysX works on most platforms with or without hardware from Nvidia.
Wrong. Everything from the console up to the high-end PC is capable of running PhysX's non-parallel CPU code. Just adding an Nvidia card does not offload any normal PhysX code to the GPU; the code has to be rewritten specifically to run on the GPU. It's an entirely different set of functions in the SDK, and they do not behave like the CPU code. Very few games use the CUDA extensions for PhysX, and in the ones that do, the extensions have generally been added by nVidia themselves, and only those additional effects are calculated by the GPU.

Check out Dragon Age from EA; the way it uses PhysX is different from other games. No flying papers, but the way spells are displayed. You can freeze time and move the camera just to look at how beautiful the spells are. In 3D you can clearly see the spells are built from little particles surrounding the caster. This is one thing PhysX can do if developers put hard work into it. Again, people who are not using an Nvidia video card will see the same effect, but with an Nvidia video card a weaker CPU can perform as well as a better one. Either way, PhysX delivers cutting-edge graphics for everyone. So what is this IP limitation again?
Dragon Age's PhysX code does not run on the GPU. Everything done in Dragon Age will be exactly the same effect, using exactly the same resources, Nvidia or not, because that code cannot be offloaded to the GPU: it wasn't written with the PhysX CUDA extensions.

In short, PhysX is free and works on most platforms, and with an Nvidia video card it actually utilizes those otherwise-idle CUDA cores to maximize the potential of the video card and offload the CPU. I didn't make this up; this has been the fundamental idea behind the PPU since Ageia's time.
I don't think you fully understand how PhysX works. Any code not specifically written to run on the GPU cannot run on the GPU. Almost no games have PhysX code specifically written to run on the GPU even if they are PhysX games.

I mentioned Fusion and claimed that it is a good move, because if it can dominate the game consoles, then no one with a console will see the game running on an Nvidia video card unless they play the PC version, which is what happens now. Did you ever question why the PS3 doesn't have GPU acceleration, and how it would be better off if it did? No, because that option is not open. It is the manufacturer of the PS3 who decides which vendor to use, and they chose ATI.

Again, it is business decisions.

I don't understand your PS3 point. The PS3 uses an nVidia GPU not ATI. nVidia provided a method for limited GPU accelerated PhysX code on the PS3 a while ago but no games have used it yet to my knowledge. Of course there is a non-CUDA PhysX library used often on the PS3 but that's irrelevant to the discussion because that code won't be GPU accelerated.
 
Last edited:

Seero

Golden Member
Nov 4, 2009
1,456
0
0
NoQuarter:
I think you should read up on PhysX before you start commenting on it. PhysX code can run on the CPU, all of it. When a PPU is present, PhysX will switch to the PPU instead. PhysX would be useless if things didn't run without a PPU.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Dirk talked about sharing IP, which included ATI tech. This is part of the new agreement between AMD and Intel. It has nothing to do with IP Intel licensed from ATI before ATI was acquired by AMD.

Uhhhh...
You said this:
Of note, Intel is free to use any of AMD's graphics tech if they wish to do so.

I certainly do not dispute that Intel licenses ATi IP. Both ATi and nVidia (especially nVidia, with the 3DFX portfolio included) hold patents on crucial technology for 3D acceleration, which means it's basically impossible to build a 3D accelerator without this IP. So everyone licenses from ATi and nVidia, and this covers things as basic as subpixel correction and perspective-correct attribute interpolation.

However, this is NOT the same as your claim that Intel is free to use *any* of AMD's graphics tech. Intel is free to use *some* of AMD's graphics tech.
 
Last edited:

Scali

Banned
Dec 3, 2004
2,495
0
0
Either they will reject the design, lock out the code from the Xbox OS or maybe just not approve the game that will use the code.

I don't think Microsoft wants to go there.
Firstly, it wouldn't hurt them if games use the code. It still helps Microsoft sell games and hardware.
Secondly, if they DO try something like that, they may either violate the contract with nVidia (they would be smart enough to foresee this and put it in writing), or they would at least piss off nVidia badly enough to frustrate any further business, which could mean for example that they can no longer negotiate lower prices, and cannot compete with Sony. It's in Microsoft's best interest to stay on the good side of their hardware suppliers.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
I don't understand your PS3 point. The PS3 uses an nVidia GPU not ATI. nVidia provided a method for limited GPU accelerated PhysX code on the PS3 a while ago but no games have used it yet to my knowledge. Of course there is a non-CUDA PhysX library used often on the PS3 but that's irrelevant to the discussion because that code won't be GPU accelerated.

PS3 uses a derivative of the GeForce 7-series. This is DX9-class hardware, predating Cuda. They have no GPGPU capabilities, they do not even have unified shaders. PhysX uses Cell on PS3.
 

zebrax2

Senior member
Nov 18, 2007
977
69
91
I don't think Microsoft wants to go there.
Firstly, it wouldn't hurt them if games use the code. It still helps Microsoft sell games and hardware.
Secondly, if they DO try something like that, they may either violate the contract with nVidia (they would be smart enough to foresee this and put it in writing), or they would at least piss off nVidia badly enough to frustrate any further business, which could mean for example that they can no longer negotiate lower prices, and cannot compete with Sony. It's in Microsoft's best interest to stay on the good side of their hardware suppliers.

I thought we were talking about a situation wherein Microsoft would like to use DirectCompute instead of any other GPGPU API. As you have said, Microsoft will probably put it in writing, and if there is a situation wherein nVidia gets pissed off and doesn't want to negotiate prices, they will probably go with AMD again.

Anyway, all of this is hypothetical and off topic :)
 

Scali

Banned
Dec 3, 2004
2,495
0
0
I thought we were talking about a situation wherein Microsoft would like to use DirectCompute instead of any other GPGPU API.

No, I'm saying that the GPGPU APIs probably don't mean much to Microsoft in their choice of hardware. They'll probably want the GPU to support DirectCompute, but since both major vendors do, that's not going to be an issue.
They probably don't care about what extra APIs the GPU may or may not support, let alone that they may actively want to disable them. Certainly they won't let that weigh heavier than more practical considerations, such as price, power consumption, performance etc.
They may even see it as extra value. After all, in the end Microsoft just wants to sell XBoxes and even more: games. If they think having Cuda/PhysX will help them sell more games/get an edge on Sony, then there's your value.

As you have said Microsoft will probably put it in writing

Correction: I said that nVidia would put it in writing. They will protect their technology, and not let Microsoft get the better of them.
 

NoQuarter

Golden Member
Jan 1, 2001
1,006
0
76
PS3 uses a derivative of the GeForce 7-series. This is DX9-class hardware, predating Cuda. They have no GPGPU capabilities, they do not even have unified shaders. PhysX uses Cell on PS3.

I know. http://news.cnet.com/8301-13924_3-10198809-64.html implies that nVidia provided a method specifically for the PS3's GeForce 7 derivative to run PhysX on the GPU. I'm sure there's a link with more detail, but I just googled PhysX PS3 and chose the first link because I remember the story running last year. It's probably a rudimentary, close-to-the-metal capability, though, and not written in normal CUDA. I never read the details of the announcement, so it could simply be a PhysX SDK for Cell that they misreported; but that SDK should have existed for several years, not just been released in '09.

Seero said:
NoQuarter:
I think you should read up on PhysX before you start commenting on it. PhysX code can run on the CPU, all of it. When a PPU is present, PhysX will switch to the PPU instead. PhysX would be useless if things didn't run without a PPU.

This is simply untrue. PhysX code cannot just switch to GPU mode, because the code is incompatible. PhysX code that runs on the GPU has to be written for CUDA, and CUDA code can only be compiled to run on a CUDA-capable graphics processor (G80 or higher). This explicitly requires two code paths: one for CPU PhysX, and one for GPU PhysX written in C for CUDA.

Only the additional effects written in CUDA are able to be processed by the GPU. Since it makes little sense to increase your workload by writing CUDA extensions for the basic physics effects that non-CUDA systems need to run the game, most calculations aren't capable of running on the GPU.

The list of PhysX games is a hundred times longer than the list of PhysX on GPU games because most games aren't written with the necessary CUDA extensions for PhysX to run on GPU.
 
Last edited:

Scali

Banned
Dec 3, 2004
2,495
0
0
I know. http://news.cnet.com/8301-13924_3-10198809-64.html implies that nVidia provided a method specifically for the PS3 derivative GeForce 7 to run PhysX on GPU. I'm sure there's a link with more detail but I just googled PhysX ps3 and chose the first link because I remember the story running last year. It's probably a close to metal rudimentary capability though and not written in normal CUDA. I never read details about the announcement so it could simply be a PhysX SDK for Cell that they misreported - but that SDK should have existed for several years, not just released in '09.

Well, I think the news was that Sony signed a PhysX license.

This is simply untrue. PhysX code cannot just switch to GPU mode, because the code is incompatible. PhysX code that runs on the GPU has to be written for CUDA.

I think you two are arguing from two sides of the API.
PhysX does not require a developer to ever write any Cuda code. The PhysX API is strictly a C++ API. The underlying GPU implementation uses Cuda, but this is not something the developer ever sees.

Having said that, though, there are some slight differences in how the API can behave. You need to specifically enable GPU/PPU acceleration in your code, through some simple steps. Once you do that, all code can run on the GPU/PPU, but it can still run on the CPU as well.
If you don't, then it will always run on CPU.
This is a single codepath however. It's just some initialization.

There are no 'additional effects', not in PhysX anyway. Everything you can do on a GPU or PPU you can also do on the CPU (just a lot slower in some cases). The developer may write additional effects and only enable them when GPU or PPU acceleration is present, but that has nothing to do with PhysX itself, and everything to do with how that developer chose to design their application.