"Inevitable Bleak Outcome for nVidia's Cuda + Physx Strategy"

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
Nice piece. It's pretty much the same thing I've been saying in so many PhysX discussions, though this guy probably worded it a little better :p
 

AstroManLuca

Lifer
Jun 24, 2004
15,628
5
81
It's so true. This is the reason that all of Sony's proprietary music and video formats died without ever putting up a fight. I think the main point he's making is that nVidia is reaching too far. They're getting too greedy and trying to use PhysX as a way of selling hardware, without realizing that PhysX will never take off if only a certain percentage of video cards can use it. Even if it does show up in a lot more games in the future, it'll STILL never take off, because no game developer is going to limit themselves to developing for nVidia only; they'll have to ensure that even their PhysX-enabled games still run and look great on non-nVidia platforms.

Using proprietary standards to push your own hardware is so passé anyway. Everything's going to be done with OpenCL in the future. nVidia can either port it or let it die, because they're not going to make any money from it.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
I think he's missing the obvious though:
1) Cuda set the standard for GPGPU, and OpenCL is based on the Cuda model.
2) nVidia owns and develops PhysX.

In other words:
1) Anything written in Cuda can be converted to OpenCL without too much effort.
2) nVidia can convert PhysX to support OpenCL.

I don't think nVidia will just let PhysX die off. If Havok becomes a threat, they will probably port PhysX over to OpenCL (if they haven't done so already). So PhysX will then continue as a hardware-agnostic physics API.
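
To illustrate point 1, here is roughly what such a conversion looks like for a trivial kernel. This is a made-up example (not PhysX code), but the mapping between the two dialects is almost mechanical:

```
// C for Cuda version: __global__ marks a kernel, and the global index
// is derived from the block/thread coordinates.
__global__ void scale_cuda(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

// The equivalent OpenCL C kernel: __kernel replaces __global__, pointer
// arguments get an address-space qualifier, and get_global_id(0) replaces
// the block/thread index arithmetic. The body is otherwise identical.
/*
__kernel void scale_cl(__global float *data, float factor, int n)
{
    int i = get_global_id(0);
    if (i < n)
        data[i] *= factor;
}
*/
```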

And another obvious thing: neither nVidia nor ATi have official OpenCL drivers out yet. It's still vapourware. There is currently no way to make GPU-accelerated physics work on end users' machines, aside from Cuda.

Ironically, nVidia released fully functional OpenCL drivers to registered developers a few days ago. So developers can now actually USE OpenCL on nVidia hardware. Still no sign from ATi though.

Aside from that, Cuda is much like how nVidia has long used OpenGL... they can add extensions whenever they want to support their latest hardware, and they don't have to wait for an official standard to be updated.
So Cuda will probably live on, because it will allow developers to get the most performance from their hardware. The current version of Cuda already has more features than either OpenCL or DX11 CS.
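
For example (a hypothetical little program, but using the real Cuda runtime API), a developer can probe the hardware's compute capability at runtime and enable newer features without waiting for any standards body:

```
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("%s: compute capability %d.%d\n", prop.name, prop.major, prop.minor);
        // Double precision arrived with compute capability 1.3 (GT200),
        // exposed through Cuda long before any cross-vendor API covered it.
        if (prop.major > 1 || (prop.major == 1 && prop.minor >= 3))
            printf("  double precision supported\n");
    }
    return 0;
}
```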
 

thilanliyan

Lifer
Jun 21, 2005
12,040
2,256
126
Originally posted by: Scali
Ironically, nVidia released fully functional OpenCL drivers to registered developers a few days ago. So developers can now actually USE OpenCL on nVidia hardware. Still no sign from ATi though.

Aside from that, Cuda is much like how nVidia has long used OpenGL... they can add extensions whenever they want to support their latest hardware, and they don't have to wait for an official standard to be updated.
So Cuda will probably live on, because it will allow developers to get the most performance from their hardware. The current version of Cuda already has more features than either OpenCL or DX11 CS.

Thanks for the informative post. If PhysX is ported to OpenCL and becomes hardware-agnostic, that would be pretty cool.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
In fact, I'd like to add that Cuda is the name for the GPGPU framework.
It's not the programming language itself (they call that C for Cuda).
In actuality, OpenCL will run on top of Cuda/as part of the Cuda framework ('OpenCL for Cuda'?).
So in that sense Cuda will never disappear. Cuda is just the name of their GPGPU architecture.

So to be exact, currently PhysX is implemented with C for Cuda, and that could be changed to OpenCL (for Cuda).

See this page for more info:
http://www.nvidia.com/object/cuda_what_is.html

The first paragraph is a good summary:
"NVIDIA® CUDA? is a general purpose parallel computing architecture that leverages the parallel compute engine in NVIDIA graphics processing units (GPUs) to solve many complex computational problems in a fraction of the time required on a CPU. It includes the CUDA Instruction Set Architecture (ISA) and the parallel compute engine in the GPU. To program to the CUDATM architecture, developers can, today, use C, one of the most widely used high-level programming languages, which can then be run at great performance on a CUDATM enabled processor. Other languages will be supported in the future, including FORTRAN and C++."

Then when you go to their OpenCL page:
http://www.nvidia.com/object/cuda_opencl.html

"OpenCL? (Open Computing Language) is a new heterogeneous computing environment, that runs on the CUDA architecture. It will allow developers to harness the massive parallel computing power of NVIDIA GPU?s to create compelling computing applications."
 

AstroManLuca

Lifer
Jun 24, 2004
15,628
5
81
But ultimately, the point is that it will no longer require you to buy an nVidia card, right? That's the real issue here. The point of the article is that they're holding back PhysX's potential by making it nVidia-only, and in turn, game developers aren't willing to put the time into giving their games PhysX support since only about a third or maybe half of their potential customers will be able to appreciate the benefits.

I know very little about programming, but from a non-programmer's standpoint the only thing that really matters is that, at the moment, PhysX is NV-only, and that's hurting its adoption and will continue to do so until it becomes available on all hardware.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: AstroManLuca
But ultimately, the point is that it will no longer require you to buy an nVidia card, right? That's the real issue here. The point of the article is that they're holding back PhysX's potential by making it nVidia-only, and in turn, game developers aren't willing to put the time into giving their games PhysX support since only about a third or maybe half of their potential customers will be able to appreciate the benefits.

nVidia seems to have over 50% market share among gamers, if we are to believe polls on sites like Anandtech or the Steam hardware survey.
Developers have supported vendor-specific features for far less popular hardware in the past.

The real issue here is what I've already said:
nVidia is the only one who has a working GPGPU solution. nVidia even offered to help ATi support PhysX on their architecture, but ATi declined and instead teamed up with Intel's Havok. As such, it is nVidia-only simply because nobody else has put any effort into making it work.
You could say that ATi *wanted* to keep PhysX nVidia-only. We will have to see if that turns out to be a smart move.

Even if nVidia chooses to support OpenCL in PhysX (once OpenCL support actually emerges), there is no guarantee that it will actually run WELL on ATi hardware, just as Havok being OpenCL is no guarantee that it will run well on nVidia hardware... in fact, both ATi and Intel have a lot to gain by sabotaging nVidia's performance, and I think they will. I think Intel will even want to sabotage ATi in favour of its upcoming Larrabee.
nVidia can continue to use C for Cuda on their own hardware, and use all the latest features to get better PhysX performance.
And then it's ATi's fault for not accepting nVidia's offer and doing their own implementation for PhysX.

Bottom line is: people are somehow trying to blame nVidia because ATi failed to deliver a decent Cuda/PhysX-like solution themselves. It's not nVidia's fault that ATi has been caught sleeping in GPGPU class for the last 2+ years.
 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
Originally posted by: Scali
Originally posted by: AstroManLuca
But ultimately, the point is that it will no longer require you to buy an nVidia card, right? That's the real issue here. The point of the article is that they're holding back PhysX's potential by making it nVidia-only, and in turn, game developers aren't willing to put the time into giving their games PhysX support since only about a third or maybe half of their potential customers will be able to appreciate the benefits.

nVidia seems to have over 50% market share among gamers, if we are to believe polls on sites like Anandtech or the Steam hardware survey.
Developers have supported vendor-specific features for far less popular hardware in the past.

The real issue here is what I've already said:
nVidia is the only one who has a working GPGPU solution. nVidia even offered to help ATi support PhysX on their architecture, but ATi declined and instead teamed up with Intel's Havok. As such, it is nVidia-only simply because nobody else has put any effort into making it work.
You could say that ATi *wanted* to keep PhysX nVidia-only. We will have to see if that turns out to be a smart move.

Even if nVidia chooses to support OpenCL in PhysX (once OpenCL support actually emerges), there is no guarantee that it will actually run WELL on ATi hardware, just as Havok being OpenCL is no guarantee that it will run well on nVidia hardware... in fact, both ATi and Intel have a lot to gain by sabotaging nVidia's performance, and I think they will. I think Intel will even want to sabotage ATi in favour of its upcoming Larrabee.
nVidia can continue to use C for Cuda on their own hardware, and use all the latest features to get better PhysX performance.
And then it's ATi's fault for not accepting nVidia's offer and doing their own implementation for PhysX.

Bottom line is: people are somehow trying to blame nVidia because ATi failed to deliver a decent Cuda/PhysX-like solution themselves. It's not nVidia's fault that ATi has been caught sleeping in GPGPU class for the last 2+ years.

Perhaps in your eyes it was a bad move for ATi to ignore PhysX, but from a business point of view it was the right choice. ATi would be dependent on nVidia, their only competitor in the GPU market. It is not their "fault"; it was a proper decision for the company.

As for the speed comment - true. But once PhysX and Havok are properly ported to OpenCL (Havok already is, btw), I don't see a reason why one architecture would run it great and the other totally suck at it. So I'm thinking this part should be fine.

And your statement that "Developers have supported vendor-specific features for far less popular hardware in the past" - there wasn't a single vendor-specific feature that survived until today. ATi learned that lesson with TruForm - and that kind of technology will only become standard with DX11 (they added it to their R200 cores, back in the DX8.1 days).

Tell me, how widespread is hardware-accelerated PhysX? It adds fluff to Mirror's Edge and to Cryostasis, which recently hit the US market. It's also used for some destruction effects in GRAW2. And those are the most prominent cases. How many people play the PhysX-enabled UT3 maps? Mirror's Edge wasn't stellar either - it was a mediocre game at best that sold few copies - not to mention that the biggest market for gaming now - consoles - does not support hardware-accelerated PhysX. The software part of it, which runs great on consoles, will run great on any PC provided it has at least a mediocre CPU. Cryostasis - did you hear anything more about this game? Any nominations for anything? Nope - it's a niche game that wanted to ride on PhysX's popularity. PhysX isn't popular, and Cryostasis won't be either.

We will have to see how well ATi hardware will run hardware PhysX ported to OpenCL. If it runs great, I can see this standard flourishing. Until that's the case, it just won't happen.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: Qbah
Perhaps in your eyes it was a bad move for ATi to ignore PhysX, but from a business point of view it was the right choice. ATi would be dependent on nVidia, their only competitor in the GPU market. It is not their "fault"; it was a proper decision for the company.

But now they are dependent on Intel... which is their competitor in the CPU market AND will be a competitor in the GPU market next year as well.
I think Intel is far more dangerous than nVidia, because Intel is much bigger, and has its own production facilities.

Also, think of it like this: if ATi were to team up with nVidia, they would probably have pre-empted Intel's Havok altogether, because there'd be little reason for developers not to use PhysX anymore... You'd get GPU acceleration on both major GPU brands.
This would also take the sting out of Intel's CPUs... Physics is currently one of the heaviest workloads, and the main reason why games still require fast CPUs.
If ATi had gone with PhysX, there would be less incentive to buy fast Intel CPUs for games, which could help AMD's CPU sales as well.

But now ATi may be able to compete with nVidia... but not with Intel... which may be a bigger problem than nVidia ever was.

Originally posted by: Qbah
As for the speed comment - true. But once PhysX and Havok are properly ported to OpenCL (Havok already is, btw), I don't see a reason why one architecture would run it great and the other totally suck at it. So I'm thinking this part should be fine.

Havok isn't ported to OpenCL yet. AMD has shown a simple cloth-effect which allegedly ran on OpenCL. But neither OpenCL nor Havok's GPU-acceleration are finished products yet.
In fact, AMD mainly demonstrated OpenCL on their CPUs(!).

I can also see reasons why nVidia's architecture would run better, as OpenCL closely matches Cuda's design, and Cuda's design is based around the nVidia architecture. ATi has a completely different architecture, and has had to add local memory to the 4000-series just to get the featureset right for OpenCL. I doubt that their 'afterthought' design is anywhere near as efficient as nVidia's is.

Originally posted by: Qbah
And your statement that "Developers have supported vendor-specific features for far less popular hardware in the past" - there wasn't a single vendor-specific feature that survived until today.

Nobody said it had to survive. It was about SUPPORT. Obviously PhysX isn't going to live forever in its current form. But it could continue if it were to support OpenCL, and remain backward-compatible with current PhysX games.

Originally posted by: Qbah
Tell me, how widespread is hardware-accelerated PhysX? It adds fluff to Mirror's Edge and to Cryostasis, which recently hit the US market. It's also used for some destruction effects in GRAW2. And those are the most prominent cases. How many people play the PhysX-enabled UT3 maps? Mirror's Edge wasn't stellar either - it was a mediocre game at best that sold few copies - not to mention that the biggest market for gaming now - consoles - does not support hardware-accelerated PhysX. Cryostasis - did you hear anything more about this game? Any nominations for anything? Nope - it's a niche game that wanted to ride on PhysX's popularity. PhysX isn't popular, and Cryostasis won't be either.

What do you expect, really? PhysX has only had GPU support for a few months. It takes years to develop a game. The only major engine so far that has embraced hardware-accelerated PhysX is the Unreal Engine. But as you see, various games based on the UE have also embraced hardware-accelerated PhysX for extra effects and greater detail.
I think PhysX has become very popular in a very short time, and it seems that ever more developers are trading in Havok for PhysX. So the REAL wave of PhysX games is still about to happen... when the developers who moved to PhysX have finished their upcoming games.

Originally posted by: Qbah
We will have to see how well ATi hardware will run hardware PhysX ported to OpenCL. If it runs great, I can see this standard flourishing. Until that's the case, it just won't happen.

I think nVidia may be big enough to make PhysX a success even without ATi's support. In fact, imagine what would happen if ATi's upcoming DX11 cards are not competitive with nVidia's. That would give nVidia a nice boost in market share, making PhysX ever more attractive.
 

Pantalaimon

Senior member
Feb 6, 2006
341
40
91
I think nVidia may be big enough to make PhysX a success even without ATi's support. In fact, imagine what would happen if ATi's upcoming DX11 cards are not competitive with nVidia's. That would give nVidia a nice boost in market share, making PhysX ever more attractive.

What about if the NVIDIA DX11 cards are not competitive? It's not the first time NVIDIA has stumbled. Are gamers willing to sacrifice performance to gain PhysX effects? Are you? Would you buy a much lesser-performing card just to have PhysX?
 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
Originally posted by: Scali
But now they are dependent on Intel... which is their competitor in the CPU market AND will be a competitor in the GPU market next year as well.
I think Intel is far more dangerous than nVidia, because Intel is much bigger, and has its own production facilities.

Also, think of it like this: if ATi were to team up with nVidia, they would probably have pre-empted Intel's Havok altogether, because there'd be little reason for developers not to use PhysX anymore... You'd get GPU acceleration on both major GPU brands.
This would also take the sting out of Intel's CPUs... Physics is currently one of the heaviest workloads, and the main reason why games still require fast CPUs.
If ATi had gone with PhysX, there would be less incentive to buy fast Intel CPUs for games, which could help AMD's CPU sales as well.

But now ATi may be able to compete with nVidia... but not with Intel... which may be a bigger problem than nVidia ever was.

Well, AMD is in a strange position. Intel is pushing physics calculations on the CPU, and AMD can't afford to look worse, so they can't just say the GPU is better for it - that would probably look bad for their CPU division. The thing is, CPU physics will run similarly on any CPU - provided it's a decent one. There's no "proprietary standard" here. It will just run fine on either Intel or AMD CPUs.

Now, PhysX won't run on AMD graphics cards - by choice. We don't know how well it would run, but it won't. Not until DX11 and OpenCL bring a true standard for GPU computing. We will know then.

Havok isn't ported to OpenCL yet. AMD has shown a simple cloth-effect which allegedly ran on OpenCL. But neither OpenCL nor Havok's GPU-acceleration are finished products yet.
In fact, AMD mainly demonstrated OpenCL on their CPUs(!).

I can also see reasons why nVidia's architecture would run better, as OpenCL closely matches Cuda's design, and Cuda's design is based around the nVidia architecture. ATi has a completely different architecture, and has had to add local memory to the 4000-series just to get the featureset right for OpenCL. I doubt that their 'afterthought' design is anywhere near as efficient as nVidia's is.

This is all speculation. The Havok Cloth demo was running under OpenCL - and I'd say it was running on the GPU, since it was said to run on either the CPU or the GPU (reported here). Obviously it's still a work in progress, as none of those solutions are here yet. But it's a pretty good indication that it will run great on GPUs.

Nobody said it had to survive. It was about SUPPORT. Obviously PhysX isn't going to live forever in its current form. But it could continue if it were to support OpenCL, and remain backward-compatible with current PhysX games.

Support that won't last is meaningless. Hell, it's not even support, more like one-time use. It's not a point in a discussion, as it doesn't carry any weight.

What do you expect, really? PhysX has only had GPU support for a few months. It takes years to develop a game. The only major engine so far that has embraced hardware-accelerated PhysX is the Unreal Engine. But as you see, various games based on the UE have also embraced hardware-accelerated PhysX for extra effects and greater detail.
I think PhysX has become very popular in a very short time, and it seems that ever more developers are trading in Havok for PhysX. So the REAL wave of PhysX games is still about to happen... when the developers who moved to PhysX have finished their upcoming games.

All great, but why tout PhysX as a major selling point if there's nothing significant now to run it? There is no point. Once those PhysX titles hit the market in 3-4 years, great! Then we will buy what's best to run them. Current GPUs most definitely won't be sufficient to fuel them though. And every time there's something new coming out, we hear the same words over and over again - "games for it are just around the corner! wait and see, any moment now". Let those games come out - then we can decide if it's so great. And then people will support it with their wallets - buying the hardware that runs PhysX best. Hell, you can imagine a situation where an OpenCL-ported PhysX runs fastest on a Radeon ;)

I think nVidia may be big enough to make PhysX a success even without ATi's support. In fact, imagine what would happen if ATi's upcoming DX11 cards are not competitive with nVidia's. That would give nVidia a nice boost in market share, making PhysX ever more attractive.

I think nVidia is in no position to create industry-wide standards. Microsoft with its DirectX technology can and does. Plus you have the widely accepted and platform-independent OpenGL. OpenCL will join those two. Proprietary standards will never take off, no matter which vendor tries to force them.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Quote from Scali: "I can also see reasons why nVidia's architecture would run better, as OpenCL closely matches Cuda's design, and Cuda's design is based around the nVidia architecture. ATi has a completely different architecture, and has had to add local memory to the 4000-series just to get the featureset right for OpenCL. I doubt that their 'afterthought' design is anywhere near as efficient as nVidia's is."

This is pretty much the reason I am a little confused as to why ATI fans spurn CUDA/PhysX and are embracing OpenCL. If OpenCL is closely matched to CUDA's design, how well do you think DirectX Compute will run on ATI GPUs (if it runs at all, rather than offloading to the CPU) compared to Nvidia's current architecture, not to mention GT300?

Nvidia has embraced OpenCL and is definitely pushing it.

http://www.nvidia.com/object/cuda_opencl.html

http://www.youtube.com/watch?v...o&feature=channel_page

http://www.nvidia.com/object/io_1228825271885.html

http://www.nvidia.com/object/dxcompute.html

It doesn't appear to me that Nvidia is looking to overrun OpenCL or downplay it. It looks to me like they are going full speed ahead in supporting it. ATI? I dunno, man - their current Vec5 arch, and continuing it with R8xx, doesn't bode well for them.
If OpenCL is so closely matched to CUDA, what chance do you think ATI has in competing with their current architecture and their next-gen architecture? Do we need to wait for R9xx?

If there is an ATI OpenCL demo, I would like to check it out. Anyone have a link?

Ah, see post below for AMD's demo.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: Pantalaimon
What about if the NVIDIA DX11 cards are not competitive? It's not the first time NVIDIA has stumbled.

I doubt it. DX11 is an evolutionary step from DX10, and one of the main features is the Compute Shader. This is stepping on nVidia's turf with Cuda. Basically it's DX11 catching up with nVidia, not the other way around.

Originally posted by: Pantalaimon
Are gamers willing to sacrifice performance to gain PhysX effects? Are you? Would you buy a much lesser-performing card just to have PhysX?

Depends on how you look at it.
Because if you DO want PhysX effects enabled, then the videocard performance itself isn't all that relevant anymore. If you don't have PhysX acceleration, your CPU will struggle to process the effects, even in the games we have today, which only have some modest PhysX effects added.

So the question might be: Do you want a videocard that is fast without PhysX effects? Or do you want one that is fast WITH PhysX effects?
Since better graphics quality and eye-candy are what's been pushing the gaming and 3d acceleration industry for well over a decade, one would expect that people want to go for the PhysX effects. I mean, if people were only interested in framerates, we wouldn't have games like Crysis today. We'd just be playing games that looked like GLQuake, at tens of thousands of FPS.
No, there is a clear pattern: every generation of games uses as much eye-candy and detail as possible while still reaching 'playable' framerates (25-50 fps), but no more. It's entirely possible to use PhysX effects and still have 'playable' framerates. And it only gets better with every generation of GPUs (while CPU physics are pretty stagnant).

Therein lies the danger: PhysX allows developers to support consoles, CPUs and GPUs, by scaling to various levels of physics complexity. They don't have to worry about ATi, because ATi users will just have to scale down to CPU physics. The game will still work.
Add to that the fact that PhysX is free for use, while Havok has an expensive license...
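
That scaling story, as a purely hypothetical sketch - none of these names are actual PhysX SDK calls, it's just the shape of the argument: one code base, with the effect budget chosen to match whatever the platform can simulate:

```
#include <stdio.h>

// Hypothetical capability probe; a real engine would ask the physics
// runtime whether a GPU (or PPU) accelerator is present.
static int hardware_physics_available(void)
{
    return 0; // stub: pretend we're on a CPU-only machine
}

int main(void)
{
    // Same game, same code path; only the effect budget changes.
    int debris_count = hardware_physics_available()
                     ? 20000  // GPU-accelerated: dense debris, cloth, fluids
                     : 500;   // CPU fallback: modest effects, game still runs
    printf("spawning %d debris particles\n", debris_count);
    return 0;
}
```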
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: Qbah
Support that won't last is meaningless. Hell, it's not even support, more like one-time use. It's not a point in a discussion, as it doesn't carry any weight.

That sounds odd in a discussion about videocards.
Are you not aware that it's all disposable technology in this industry?
An 8-series GeForce is a completely different design from a 7-series GeForce. The same goes for APIs: DX9 is different from DX10... And game engines as well; a completely new engine is written every few years (IDTech, CryEngine, UnrealEngine, etc).
Let alone if you go further back in software/hardware history. The older stuff isn't used anymore. We've passed those stations to get to where we are today.
Likewise, the current PhysX API is the first step in physics acceleration, but certainly not the last.
Most technology only lives for 2-3 years anyway, before it is replaced by something newer and better.

Originally posted by: Qbah
All great, but why tout PhysX as a major selling point if there's nothing significant now to run it?

Well, that's just marketing, isn't it? nVidia has something that its competitors don't, so they want to emphasize their technological advantage. I see nothing wrong with that.
Ironically, ATi has been marketing the "physics processing" capabilities of their GPUs for years, with nothing to show for it. It's been a checkbox feature on their Radeons since the X1000 series, I believe.

Originally posted by: Qbah
I think nVidia is in no position to create industry-wide standards. Microsoft with its DirectX technology can and does. Plus you have the widely accepted and platform-independent OpenGL. OpenCL will join those two. Proprietary standards will never take off, no matter which vendor tries to force them.

I would think that the proprietary x86 standard is the thing that makes the whole computing world go round. Try again :)
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: Pantalaimon
I think nVidia may be big enough to make PhysX a success even without ATi's support. In fact, imagine what would happen if ATi's upcoming DX11 cards are not competitive with nVidia's. That would give nVidia a nice boost in market share, making PhysX ever more attractive.

What about if the NVIDIA DX11 cards are not competitive? It's not the first time NVIDIA has stumbled. Are gamers willing to sacrifice performance to gain PhysX effects? Are you? Would you buy a much lesser-performing card just to have PhysX?

So you're asking if you would want 150fps without PhysX, or 110 with PhysX?
Answer: With PhysX

Or are you asking if you would want 32fps without PhysX or 23fps with PhysX?
Answer: Without PhysX

All depends on how you are asking the question. Am I to assume you are referring to the second example?
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: Keysplayr
This is pretty much the reason I am a little confused as to why ATI fans spurn CUDA/PhysX and are embracing OpenCL. If OpenCL is closely matched to CUDA's design, how well do you think DirectX Compute will run on ATI GPUs (if it runs at all, rather than offloading to the CPU) compared to Nvidia's current architecture, not to mention GT300?

Same thing, I guess... DirectX Compute is also quite similar to OpenCL and Cuda.
Nothing that really loves Vec5, I think. ATi would have to work VERY hard on their compilers to get an advantage. And as I said, I wonder how efficient their local memory solution is compared to nVidia's.
Intel's Larrabee is also arranged mostly as a parallel scalar processor like nVidia's (using SIMD to run scalar threads in parallel on a single execution 'core'). So it looks like ATi is the odd one out.

And yes, nVidia is pretty close to releasing beta drivers with OpenCL support, while there is no sign of ATi supporting OpenCL yet (apart from their own demos, which could be running on whatever half-baked implementation they currently have - that's no proof that they have a full and fully compliant solution yet).
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: Keysplayr
Oh, Scali, here is the link to AMD's OpenCL demo. Claims to be running on the GPU. Any reason you would think it is not, and running on the CPU? I haven't found any info stating this.

No, that particular demo most probably runs on a GPU (I don't think their CPUs would be fast enough).
It's just that this is the ONLY Havok demo so far, and it shows only one effect: cloth.
This demo in no way proves that the full Havok API and OpenCL are functional on ATi hardware. And if it were finished, why is it still not released today?

As you see, other AMD demos with OpenCL are done on CPUs. Makes you wonder where their focus is, or how much progress they've made.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: Keysplayr
So you're asking if you would want 150fps without PhysX, or 110 with PhysX?
Answer: With PhysX

Or are you asking if you would want 32fps without PhysX or 23fps with PhysX?
Answer: Without PhysX

All depends on how you are asking the question. Am I to assume you are referring to the second example?

Or the third option:
So you're asking if you would want 110 with PhysX (GPU), or 5 with PhysX (CPU)?
Answer: With PhysX (GPU)
 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
Originally posted by: Scali
That sounds odd in a discussion about videocards.
Are you not aware that it's all disposable technology in this industry?
An 8-series GeForce is a completely different design from a 7-series GeForce. The same goes for APIs: DX9 is different from DX10... And game engines as well; a completely new engine is written every few years (IDTech, CryEngine, UnrealEngine, etc).
Let alone if you go further back in software/hardware history. The older stuff isn't used anymore. We've passed those stations to get to where we are today.
Likewise, the current PhysX API is the first step in physics acceleration, but certainly not the last.
Most technology only lives for 2-3 years anyway, before it is replaced by something newer and better.

Again, the difference being that a 7-series could run the same things as an X19xx series, more or less at the same speed. Both camps ran DX9 code fine. Because DX9 was, and still is, an industry standard. And it's still used by developers to create games - every card on the market can run it - that's the whole point. Engines - every card can run those! It's not like nVidia will be the only one able to run the new IDTech engine... The case with PhysX is that it's only supported by one vendor - not every card on the market can run it. You're mixing two things here :)

Well, that's just marketing, isn't it? nVidia has something that its competitors don't, so they want to emphasize their technological advantage. I see nothing wrong with that.
Ironically, ATi has been marketing the "physics processing" capabilities of their GPUs for years, with nothing to show for it. It's been a checkbox feature on their Radeons since the X1000 series, I believe.

Great - it's marketing then. Let's leave it out of the discussion? ;) PhysX being a near-useless technology for now makes the whole discussion moot - since it's all marketing - you just killed it ;)

I would think that the proprietary x86 standard is the thing that makes the whole computing world go round. Try again :)

The thing is that everything and their mother is running x86 now and has been for the past several decades :) It's the sole instruction set on the market now as far as widespread use goes. PhysX has only been backed by nVidia since Feb 2008 - not to mention nVidia has no monopoly on anything. Intel with their deal with IBM last century kinda killed everything else :)

 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: Qbah
Again, the difference being that a 7-series could run the same things as an X19xx series, more or less at the same speed. Both camps ran DX9 code fine. Because DX9 was, and still is, an industry standard. And it's still used by developers to create games - every card on the market can run it - that's the whole point. Engines - every card can run those! It's not like nVidia will be the only one able to run the new IDTech engine... The case with PhysX is that it's only supported by one vendor - not every card on the market can run it. You're mixing two things here :)

Might I refresh your memory?
The GeForce 7 series supported shadowmapping extensions (DST/PCF); the Radeons didn't.
The Radeons supported 3Dc normalmap compression; the GeForces didn't.
Guess what? Both the shadowmapping and the normalmap compression were used in many titles, even though it required different codepaths for different vendors.

And speaking of IDTech... Doom 3 actually had a specific path for nVidia hardware, making use of things like UltraShadow. Yes, that can only run on nVidia hardware. Other cards had significantly reduced stencil shadow performance.
Just look at the trusty old GeForce 5900 outperforming the Radeon 9800 series in Doom 3:
http://www.tomshardware.com/re...5900-ultra,630-14.html

Originally posted by: Qbah
Great - it's marketing then. Let's leave it out of the discussion? ;) PhysX being a near-useless technology for now makes the whole discussion moot - since it's all marketing - you just killed it ;)

That's where you and many other people go wrong.
A technology isn't useless just because there is only one brand supporting it.
PhysX is an excellent technology, opening up many new possibilities for physics in games.
Remember Glide? It wasn't exactly useless either. In the end it obviously didn't survive, because as other vendors started offering 3d acceleration, a hardware-agnostic solution was called for. But Glide was very useful when there was only one vendor offering this kind of acceleration in the first place. It laid the groundwork for 3d videocards as we know them today.

I was just saying that nVidia does its best to promote PhysX because that's what commercial companies do: they promote their products and technologies. It is always to be taken with a grain of salt.

Originally posted by: Qbah
The thing is that everything and their mother is running x86 now and has been for the past several decades :) It's the sole instruction set on the market now as far as widespread use goes.

Yea, it is now. But it wasn't when I first started playing with computers. In fact, my first 2 or 3 computers didn't have an x86 processor in them at all (even though x86 and the IBM PC did exist back then).
It slowly worked itself up from nothing to where it is today.
All that doesn't make it any less proprietary though.

Originally posted by: Qbah
PhysX has only been backed by nVidia since Feb 2008 - not to mention nVidia has no monopoly on anything.

nVidia owns PhysX though, so they can add OpenCL anytime they like. And nVidia has VERY good developer relations through the TWIMTBP program. They have a lot of influence in the gaming industry. And they have the monopoly on GPGPU so far.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: Scali
Originally posted by: Keysplayr
So you're asking if you would want 150fps without PhysX, or 110 with PhysX?
Answer: With PhysX

Or are you asking if you would want 32fps without PhysX or 23fps with PhysX?
Answer: Without PhysX

All depends on how you are asking the question. Am I to assume you are referring to the second example?

Or the third option:
So you're asking if you would want 110 with PhysX (GPU), or 5 with PhysX (CPU)?
Answer: With PhysX (GPU)

I was actually only referring to Nvidia PhysX-supported cards. But you're right that if you are using an ATI card, the third option is the only option.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: Keysplayr
I was actually only referring to Nvidia PhysX-supported cards. But you're right that if you are using an ATI card, the third option is the only option.

Yea, but nVidia cards aren't where the hurt is.
I mean, everyone with an nVidia card can just turn off hardware PhysX in the control panel and/or turn off the PhysX effects in the game.
So you can choose whichever you like. Since you don't have to pay extra for PhysX, there's no disadvantage regardless of which way you prefer to use your card. No point in making any fuss about PhysX or Cuda or whatever. I mean... not everyone uses something like 16xAA or transparency AA either, because they don't want to trade performance for visual quality, but does anyone make a fuss about it being supported by their cards?

The hurt is with (potential) ATi owners, who can't use the hardware effects at all. So they try their best to flame PhysX in any way possible. Reality is that PhysX does work and is supported by an ever-growing number of games, while there is only the sound of crickets chirping in the ATi camp.

If I had the choice between a technology that works on all cards, and an equivalent technology that works on only one vendor's cards, I'd prefer the one that works everywhere.
But the reality is that you don't have this choice. And you may not get it either. There won't be games that support both Havok and PhysX (the APIs are just too different, and it requires too much work to make both work in a single game... much like how games supporting both OpenGL and Direct3D were abandoned years ago). So even if Havok delivers OpenCL-powered physics, and even if Havok works fine on both ATi and nVidia hardware, there will still be many games that use PhysX.
However you want to look at it, nVidia has the advantage. And that's where the hurt is.
 

Pantalaimon

Senior member
Feb 6, 2006
341
40
91
Or the third option: So you're asking if you would want 110 with PhysX (GPU), or 5 with PhysX (CPU)? Answer: With PhysX (GPU)

So I guess you don't mind being locked into one hardware vendor? Sorry, I like being able to choose which hardware vendor's card to buy. I'd rather take a better-performing card without PhysX that can run an open physics standard instead.