"Inevitable Bleak Outcome for nVidia's Cuda + Physx Strategy"


Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: Scali
Since when did the majority ever know what is good for them?
??? Um... Wow. I'm not even sure how to respond to this.

It's an opinion poll. How can somebody's opinion be wrong? 3/4 to 4/5 of 14,500 people voted PhysX as either marginally useful or not useful. However you slice it, it shows an underwhelming public opinion of PhysX.

Originally posted by: Scali
Besides, the majority still considers it a bonus at least, the 'not useful' crowd is way less than that (only 30/33%).
I wouldn't consider that a plus for PhysX when they could have chosen to instead vote PhysX as "Useful", "Important" or "Very important".




 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: dadach
its a bonus to have usefulness if some games actually come out...we are still waiting

I've been playing Mirror's Edge for a while now.
 

dadach

Senior member
Nov 27, 2005
204
0
76
Originally posted by: Scali
Originally posted by: dadach
its a bonus to have usefulness if some games actually come out...we are still waiting

I've been playing Mirror's Edge for a while now.

that really says a lot doesnt it...one game...in normal circumstances you would give me a list of 10 games that are worth a damn, no?
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Creig
It's an opinion poll. How can somebody's opinion be wrong? 3/4 to 4/5 of 14,500 people voted PhysX as either marginally useful or not useful. However you slice it, it shows an underwhelming public opinion of PhysX.

I'm not saying an opinion is wrong.
But opinions can be misinformed, based on wrong assumptions, or suffer from a lack of vision. To name but a few things.
Let's face it, most people who voted there, probably have no clue about game design, let alone what PhysX actually is, or what it can do for a game.
Unless a 'killer app' for PhysX comes out, they won't 'see the light'.
That says more about these people than about PhysX itself.
Hence I don't see any value in such polls. Especially not with the leading questions that Derek Wilson put in.

It reminds me of DX10 and how people were opposed to it, thinking that DX9 would be just as good, and wouldn't require them to buy Vista.
If you were to have a poll about the value of DX10 a few years ago, it'd probably be way different from the same poll today, now that many games actually use DX10, and many people have upgraded to Vista anyway.
What's important there is that neither Vista nor DX10 changed. It's just that people 'saw the light' after a number of DX10 games came out, some of which they liked.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: dadach
that really says a lot doesnt it...one game...in normal circumstances you would give me a list of 10 games that are worth a damn, no?

No, it's still too soon.
Just have patience, more PhysX games will come.
It took quite a while for DX10 games to arrive as well, and with a few exceptions (such as Crysis) they were generally just DX9 games with some slightly polished shaders.
But eventually, games with good use of DX10 arrived, and people started to accept the technology.
 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
Originally posted by: Scali
Originally posted by: Creig
It's an opinion poll. How can somebody's opinion be wrong? 3/4 to 4/5 of 14,500 people voted PhysX as either marginally useful or not useful. However you slice it, it shows an underwhelming public opinion of PhysX.

I'm not saying an opinion is wrong.
But opinions can be misinformed, based on wrong assumptions, or suffer from a lack of vision. To name but a few things.
Let's face it, most people who voted there, probably have no clue about game design, let alone what PhysX actually is, or what it can do for a game.
Unless a 'killer app' for PhysX comes out, they won't 'see the light'.
That says more about these people than about PhysX itself.
Hence I don't see any value in such polls. Especially not with the leading questions that Derek Wilson put in.

It reminds me of DX10 and how people were opposed to it, thinking that DX9 would be just as good, and wouldn't require them to buy Vista.
If you were to have a poll about the value of DX10 a few years ago, it'd probably be way different from the same poll today, now that many games actually use DX10, and many people have upgraded to Vista anyway.
What's important there is that neither Vista nor DX10 changed. It's just that people 'saw the light' after a number of DX10 games came out, some of which they liked.

Shit man, nobody is saying PhysX sucks! Everybody in here is saying that right now it's an underwhelming thing that doesn't bring anything groundbreaking in its current form. Got that last part I put in bold? Vista, the day it launched and during its first months, was pretty much a waste - people were very happy about their XP machines.

Let me spell it out for you - right now PhysX isn't useful. Well, perhaps to the 100 people still playing UT3 or the other 100 people who bought Mirror's Edge for the PC 2 months after its console debut, when everybody already knew the game was mediocre at best. The poll states that right now PhysX is a non-factor when making a decision regarding your GPU purchase.

If it gets proper implementation and a "killer-app" that will use it to its full extent, I'm sure that poll would look different. Right now it's poorly implemented and is available in titles I can count with my left hand.
 

WelshBloke

Lifer
Jan 12, 2005
33,467
11,608
136
My only problem with physX is there seems to be more lines of marketing written for it than actual code. ;)
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Qbah
The poll states that right now PhysX is a non-factor when making a decision regarding your GPU purchase.

Yea, I'm aware that the general opinion of the average end-user on PhysX is like that.
But that's not what I was talking about.
I was talking about the PhysX and Cuda technologies themselves, and how valuable they are to developers for paving the way for wide adoption of GPGPU and physics acceleration. Which in turn will be valuable to end-users as well.

Oh and let me spell something out for you:
Neither this poll nor the end-user in general has any effect on what game developers choose to do with PhysX and hardware acceleration.
End-users have no influence on game development.

Originally posted by: Qbah
If it gets proper implementation and a "killer-app" that will use it to its full extent, I'm sure that poll would look different. Right now it's poorly implemented and is available in titles I can count with my left hand.

But at least it is already responsible for getting the OpenCL standard going, and making Intel and ATi think about GPGPU and physics acceleration.
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: Scali
I'm not saying an opinion is wrong.
But opinions can be misinformed, based on wrong assumptions, or suffer from a lack of vision. To name but a few things.
Let's face it, most people who voted there, probably have no clue about game design, let alone what PhysX actually is, or what it can do for a game.
What does that have to do with anything? In its current form, PhysX doesn't add a whole lot to any game that people are interested in. It's up to Nvidia to prove PhysX to the public, not the other way around. You can't put the cart in front of the horse.


Originally posted by: Scali
Unless a 'killer app' for PhysX comes out, they won't 'see the light'.
That says more about these people than about PhysX itself.
Hence I don't see any value in such polls. Especially not with the leading questions that Derek Wilson put in.
But why should people get excited about PhysX if it doesn't have any 'killer app'? If PhysX isn't implemented in a way to make people sit up and take notice, then nobody is going to pay attention to it or feel that it is important in any way. And why should they?

Just because PhysX has the "potential" to be successful doesn't automatically mean it will be.


Originally posted by: Scali
It reminds me of DX10 and how people were opposed to it, thinking that DX9 would be just as good, and wouldn't require them to buy Vista.
If you were to have a poll about the value of DX10 a few years ago, it'd probably be way different from the same poll today, now that many games actually use DX10, and many people have upgraded to Vista anyway.
What's important there is that neither Vista nor DX10 changed. It's just that people 'saw the light' after a number of DX10 games came out, some of which they liked.
Exactly. And until PhysX is implemented in such a way that its contributions to games excites the public interest, it's going to continue to be seen as an unimportant feature when purchasing both hardware and software.

As I said, it's up to Nvidia to prove the value of PhysX to the public. And so far, it hasn't.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Creig
What does that have to do with anything? In its current form, PhysX doesn't add a whole lot to any game that people are interested in. It's up to Nvidia to prove PhysX to the public, not the other way around. You can't put the cart in front of the horse.

Not sure if you realized it, but you just explained exactly why this poll is useless (which is what I was trying to get at).
PhysX hasn't proven itself to the general public yet, so the outcome was quite predictable.
In fact, the outcome is more positive than I expected, given the small number of games available, and the modest extras that PhysX brings to them.

Originally posted by: Creig
But why should people get excited about PhysX if it doesn't have any 'killer app'? If PhysX isn't implemented in a way to make people sit up and take notice, then nobody is going to pay attention to it or feel that it is important in any way. And why should they?

I'm not expecting people to get excited about things they don't understand. You don't seem to get that you're saying the same thing as I am.

Originally posted by: Creig
Just because PhysX has the "potential" to be successful doesn't automatically mean it will be.

As I said in an earlier post, PhysX/Cuda have already been VERY important to the industry. Perhaps not in ways you realized or even understand.

Originally posted by: Creig
And until PhysX is implemented in such a way that its contributions to games excites the public interest, it's going to continue to be seen as an unimportant feature when purchasing both hardware and software.

This is flawed logic. Not everyone buys their hardware based on the software that is currently out there. Some people buy hardware looking at the software that is coming out in the next 2-3 years when they own that hardware (why do people buy quadcores, or 64-bit processors/OSes, etc?).
Therefore it is entirely valid for an informed buyer to say "I think PhysX is going to be an important enough factor in future games for me to go nVidia with my next GPU".
He doesn't even have to be right in his assessment 2-3 years down the line, he's already bought the card, and PhysX already weighed into that decision.
I myself am tempted to go for nVidia as well, if both ATi and nV have DX11 cards and price and performance are matched. PhysX would be a nice extra. I'd have nothing to lose and everything to gain.
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
www.neftastic.com
*yawn* You're obviously right and everybody else is wrong. :roll:

Originally posted by: Scali
OpenCL is basically just taking Cuda and making it a platform-independent framework. If you've ever bothered to look at both, you'd see the glaring similarities. It's much like how D3D's HLSL, OpenGL's GLSL and nVidia's Cg are virtually identical.

As for Stream... ATi started over with Stream when the first drafts of OpenCL came out. At first ATi had a different GPGPU solution, which WAS made for their hardware... but then they decided to change course and build Stream around OpenCL.

It's funny you say that, because being platform independent, OpenCL doesn't really need to care what Stream does. Even if AMD "changed" Stream, it still is made for their hardware... NOT for OpenCL.

Originally posted by: Scali
Don't underestimate me. I'm not some random idiot. I have a long history as a developer with GPU/GPGPU code. I think you're the one making a mistake, because you don't seem to think that different GPGPU designs have any effect on their performance.
I'll give you an example:
AMD and Intel CPUs both run x86 code... They are both DESIGNED to run x86 code.
Yet Intel's CPUs run the code more efficiently. Why? Their architecture is different, and handles the x86 code that is out there better.
The same will happen with OpenCL... one GPGPU will run it better than the other. I'm saying that this will be the nVidia GPGPUs.

Sounds like you're the one that's overestimating yourself. I never said different architectures will perform the same. If you inferred that, I'm sorry you failed to understand that. I simply said that your assessment of OpenCL being designed for "CUDA" is dead wrong. If that were the case, it would be called CUDA, not OpenCL.

So unless you're going to pull out your credentials from the Khronos OpenCL working group or one of its sponsoring members, you'll excuse me if I call you out on that one. Otherwise, I'll prefer my crow warm with a side of mashed potatoes.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: SunnyD
It's funny you say that, because being platform independent, OpenCL doesn't really need to care what Stream does. Even if AMD "changed" Stream, it still is made for their hardware... NOT for OpenCL.

Stream has the task of compiling OpenCL code to their native instructions for their GPGPU, just as Cuda does for nVidia.
Therefore OpenCL doesn't need to know what Cuda/Stream do, but the opposite certainly isn't true. The compiler needs to be able to generate efficient code from OpenCL for the underlying hardware. This obviously works better if both the compiler and the hardware are designed with the OpenCL programming model in mind.
This holds true for nVidia, but not for ATi.
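
For illustration, here is a minimal OpenCL host-side sketch of the flow described above (a hypothetical vector-add kernel, most error handling omitted): the same source string goes to whichever vendor runtime is installed, and clBuildProgram is the point where that runtime's compiler (Cuda's or Stream's) turns it into native GPU instructions.

#include <CL/cl.h>
#include <stdio.h>

/* Hypothetical kernel source; the host code below is vendor-neutral. */
static const char *src =
    "__kernel void vec_add(__global const float *a,\n"
    "                      __global const float *b,\n"
    "                      __global float *c)\n"
    "{\n"
    "    int i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void)
{
    cl_platform_id plat;
    cl_device_id dev;
    cl_int err;

    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);

    /* This is where the vendor runtime compiles the OpenCL C above
     * into its own native instruction set. */
    err = clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    if (err != CL_SUCCESS)
        printf("build failed: %d\n", err);

    cl_kernel k = clCreateKernel(prog, "vec_add", &err);
    /* ... clCreateBuffer, clSetKernelArg, clEnqueueNDRangeKernel ... */

    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return 0;
}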

Originally posted by: SunnyD
I never said different architectures will perform the same.

Then how exactly was I to interpret this statement?
"OpenCL will sit atop of Stream just as well as it will sit atop of CUDA"

If you agree with me saying that it will not perform as well on Stream as it will on Cuda, then why are you arguing with me in the first place?

Originally posted by: SunnyD
If you inferred that, I'm sorry you failed to understand that. I simply said that your assessment of OpenCL being designed for "CUDA" is dead wrong. If that were the case, it would be called CUDA, not OpenCL.
So unless you're going to pull out your credentials from the Khronos OpenCL working group or one of its sponsoring members, you'll excuse me if I call you out on that one. Otherwise, I'll prefer my crow warm with a side of mashed potatoes.

I don't have to pull any credentials to prove the obvious.
But I do happen to be a registered nVidia Cuda developer with access to their OpenCL SDK. If you have the same access, you know I'm right. If not, I think you're the one who's going to need to pull credentials.
Or actually, I'd prefer if you'd argue facts. Now it's just "he said, she said...". Why don't you argue any points I made about the similarities? Or bring up your own points why they wouldn't be closely related... Then we might actually get somewhere.

In fact, you may want to argue what nVidia says in their OpenCL jumpstart guide:
http://developer.download.nvid...CL_JumpStart_Guide.pdf
"The NVIDIA CUDA Driver API allows programmers to develop applications for the CUDA architecture and is the predecessor of OpenCL. As such, the CUDA Driver API is very similar to OpenCL with a high correspondence between functions. Using the CUDA Driver API and the guidelines explained in this document will allow a smooth transition to OpenCL in the future, and allows you to get started today learning GPU computing and parallel programming concepts."

Nice overview of the subtle differences between the two in general.
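
As a rough illustration of the "high correspondence" the guide mentions, here is the same kind of setup written against the CUDA Driver API. This is a sketch only: it assumes a precompiled vec_add.cubin module exists, uses the newer cuLaunchKernel call (older toolkits used cuFuncSetBlockShape/cuLaunchGrid), and the OpenCL counterparts noted in the comments are informal pairings, not an official mapping.

#include <cuda.h>

int main(void)
{
    CUdevice dev;
    CUcontext ctx;
    CUmodule mod;
    CUfunction fn;
    CUdeviceptr a, b, c;
    size_t n = 1024, bytes = n * sizeof(float);

    cuInit(0);                          /* ~ clGetPlatformIDs                  */
    cuDeviceGet(&dev, 0);               /* ~ clGetDeviceIDs                    */
    cuCtxCreate(&ctx, 0, dev);          /* ~ clCreateContext (+ command queue) */

    /* Assumes a precompiled module; OpenCL compiles from source with
     * clCreateProgramWithSource + clBuildProgram instead. */
    cuModuleLoad(&mod, "vec_add.cubin");
    cuModuleGetFunction(&fn, mod, "vec_add");   /* ~ clCreateKernel            */

    cuMemAlloc(&a, bytes);              /* ~ clCreateBuffer                    */
    cuMemAlloc(&b, bytes);
    cuMemAlloc(&c, bytes);
    /* cuMemcpyHtoD(...)                 ~ clEnqueueWriteBuffer                */

    void *args[] = { &a, &b, &c };
    cuLaunchKernel(fn, (unsigned)(n / 256), 1, 1,   /* grid dimensions  */
                   256, 1, 1,                       /* block dimensions */
                   0, NULL, args, NULL);            /* ~ clEnqueueNDRangeKernel */

    /* cuMemcpyDtoH(...)                 ~ clEnqueueReadBuffer                 */
    cuMemFree(a); cuMemFree(b); cuMemFree(c);       /* ~ clReleaseMemObject    */
    cuCtxDestroy(ctx);
    return 0;
}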
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
www.neftastic.com
Originally posted by: Scali
Originally posted by: SunnyD
It's funny you say that, because being platform independent, OpenCL doesn't really need to care what Stream does. Even if AMD "changed" Stream, it still is made for their hardware... NOT for OpenCL.

Stream has the task of compiling OpenCL code to their native instructions for their GPGPU, just as Cuda does for nVidia.
Therefore OpenCL doesn't need to know what Cuda/Stream do, but the opposite certainly isn't true. The compiler needs to be able to generate efficient code from OpenCL for the underlying hardware. This obviously works better if both the compiler and the hardware are designed with the OpenCL programming model in mind.
This holds true for nVidia, but not for ATi.

CUDA was NOT designed with OpenCL in mind! Good lord, I don't know how many times we have to go around the circle here. All of a sudden you're talking about compiler efficiency, not hardware efficiency as you were droning on about yesterday. It's the same damn problem Intel has with Itanium versus other architectures - and something AMD could just as easily "fix" (assuming it's broken like you say it is) over time. So what is it then, is it the hardware or the software that's "broken" for AMD? Make up your mind!

Originally posted by: SunnyD
I never said different architectures will perform the same.

Then how exactly was I to interpret this statement?
"OpenCL will sit atop of Stream just as well as it will sit atop of CUDA"

If you agree with me saying that it will not perform as well on Stream as it will on Cuda, then why are you arguing with me in the first place?

I am most definitely emphatically NOT agreeing with you in your opinion that OpenCL will not perform as well on Stream as it would with CUDA. Your reading comprehension seems to be going downhill rapidly here. The only thing I stated in that comment is that OpenCL will run just fine on top of AMD's stack. Anything beyond that, you're putting words in where I didn't.

Originally posted by: SunnyD
If you inferred that, I'm sorry you failed to understand that. I simply said that your assessment of OpenCL being designed for "CUDA" is dead wrong. If that were the case, it would be called CUDA, not OpenCL.
So unless you're going to pull out your credentials from the Khronos OpenCL working group or one of its sponsoring members, you'll excuse me if I call you out on that one. Otherwise, I'll prefer my crow warm with a side of mashed potatoes.

I don't have to pull any credentials to prove the obvious.
But I do happen to be a registered nVidia Cuda developer with access to their OpenCL SDK. If you have the same access, you know I'm right. If not, I think you're the one who's going to need to pull credentials.
Or actually, I'd prefer if you'd argue facts. Now it's just "he said, she said...". Why don't you argue any points I made about the similarities? Or bring up your own points why they wouldn't be closely related... Then we might actually get somewhere.

There is absolutely nothing obvious about it, and you're rapidly falling into the hole of discreditability (yes, I just made that word up) here. You've just backed off from saying AMD's hardware is inferior (in terms of OpenCL) to saying their software stack is inferior (in terms of OpenCL), and now you're simply asking us to "take your word for it" because you may or may not have looked at an API? You know what, you're absolutely right - I do have access to the same materials as you. Ironically, I'm also a developer, I also work in high performance graphics. But I can say this with absolute certainty about the shit you're spewing: Any "similarity" between OpenCL and CUDA may or may not be coincidence, but without any doubt whatsoever you are absolutely not qualified to make any absolute mandate that one vendor's implementation will perform better than another's simply because a few API calls look similar, unless you have some actual empirical evidence to prove as such.

So what's it going to be? You're asking for facts... you have anything concrete to share here? Or are you just going to be pushing your opinion as gospel like the rest of the pro-Nvidiots around here?
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: SunnyD
CUDA was NOT designed with OpenCL in mind!

I never said it was.
You don't seem to understand what I'm saying here, do you?
I said Cuda was designed with the OpenCL programming model in mind.
Why do I say that? Because I've already said before that OpenCL has adopted the Cuda programming model.
Now I don't need to argue that Cuda was designed with the Cuda programming model in mind, do I?
It makes a painful amount of sense.

Originally posted by: SunnyD
All of a sudden you're talking about compiler efficiency, not hardware efficiency as you were droning on about yesterday.

I never talked about hardware efficiency as that doesn't make sense in the OpenCL context.
Hardware can be very efficient if you just run random code on it that suits the hardware. That doesn't mean anything.
We are talking about OpenCL code here, which has to run on the hardware.
This code will first go through a compiler, and then will be run on the hardware.
It is the compiler's task to make it run efficiently on the hardware. So the two are closely related.

Originally posted by: SunnyD
So what is it then, is it the hardware or the software that's "broken" for AMD? Make up your mind!

That's a matter of perspective. I personally would say that it's the hardware that's 'broken', because I go from the assumption that compiler technology cannot bridge the gap between OpenCL and ATi's current hardware architecture.

You could also argue that ATi's current hardware would perform well in OpenCL if only the compiler was 'fixed' to extract more parallelism from the code... If you go from the assumption that it is possible for a compiler to do this to that extent.

Originally posted by: SunnyD
There is absolutely nothing obvious about it, and you're rapidly falling into the hole of discreditability (yes, I just made that word up) here.

You're rapidly falling into the hole of slinging personal insults around rather than just discussing the topic at hand.

Originally posted by: SunnyD
You've just backed off from saying AMD's hardware is inferior (in terms of OpenCL) to saying their software stack is inferior (in terms of OpenCL)

No I didn't. I said that because the hardware is so different from nVidia's/what OpenCL expects, the compiler has to put in a lot of hard work.
I don't want to pull the reading comprehension card here like you did... But I didn't specifically put the blame on the compiler (I said "both compiler and hardware", since they work as a team). That is just what you read into it.

Originally posted by: SunnyD
You know what, you're absolutely right - I do have access to the same materials as you. Ironically, I'm also a developer, I also work in high performance graphics.

You don't really give off the impression that you know what you're talking about, to be honest. In this post I've addressed quite a few misconceptions of yours that an experienced developer wouldn't have made.

Originally posted by: SunnyD
Any "similarity" between OpenCL and CUDA may or may not be coincidence, but without any doubt whatsoever you are absolutely not qualified to make any absolute mandate that one vendor's implementation will perform better than another's simply because a few API calls look similar, unless you have some actual empirical evidence to prove as such.

The problem here is that your assumption is invalid. I'm not basing the entire thing on a few API calls. I've also studied both hardware architectures, and have hands-on experience developing code for both. That however does not mean I care to go into great detail on an internet forum, or produce empirical evidence for some smart-mouth.

Originally posted by: SunnyD
So what's it going to be? You're asking for facts... you have anything concrete to share here? Or are you just going to be pushing your opinion as gospel like the rest of the pro-Nvidiots around here?

I'm not a pro-nVidiot.
And if you choose to believe something other than what I said, fine. That's your choice. The problem I have with you however is that you come here and insult me, pretend to know what you're talking about, but don't back up anything you say, or even argue any of the points I've made in this thread. You've not even convinced me that you know what you're talking about.
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: Scali
Originally posted by: Creig
And until PhysX is implemented in such a way that its contributions to games excites the public interest, it's going to continue to be seen as an unimportant feature when purchasing both hardware and software.

This is flawed logic. Not everyone buys their hardware based on the software that is currently out there. Some people buy hardware looking at the software that is coming out in the next 2-3 years when they own that hardware (why do people buy quadcores, or 64-bit processors/OSes, etc?).
Therefore it is entirely valid for an informed buyer to say "I think PhysX is going to be an important enough factor in future games for me to go nVidia with my next GPU".
He doesn't even have to be right in his assessment 2-3 years down the line, he's already bought the card, and PhysX already weighed into that decision.
I myself am tempted to go for nVidia as well, if both ATi and nV have DX11 cards and price and performance are matched. PhysX would be a nice extra. I'd have nothing to lose and everything to gain.

Why would anybody base a purchasing decision on a product that may or may not even exist 2-3 years down the road? And even if it does exist, would a 2-3 year old card be fast enough to run it? I doubt it. Certainly there would be games out there that a 2-3 year old card could not render fast enough, PhysX or no PhysX.

Sorry, but most people base their purchases on what is important today, not something that may or may not be significant 2-3 years from now.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
82% of people polled stated overwhelmingly that they like PhysX
http://www.driverheaven.net/polls/poll-1198-a.html

67% of the video card market last quarter was from cards supporting PhysX/CUDA
http://www.neoseeker.com/news/...rovement-over-q4-2008/

A major gaming website states "Everybody Loves PhysX"
http://forums.anandtech.com/me...=2302598&enterthread=y

Yet a random blog post speaks poorly of it and the "Red Team" can't focus on anything else.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Creig
Why would anybody base a purchasing decision on a product that may or may not even exist 2-3 years down the road? And even if it does exist, would a 2-3 year old card be fast enough to run it? I doubt it. Certainly there would be games out there that a 2-3 year old card could not render fast enough, PhysX or no PhysX.

I disagree.
The 8800GTX and 8800Ultra are more than 2 years old now, and still run the latest games just fine.

Originally posted by: Creig
Sorry, but most people base their purchases on what is important today, not something that may or may not be significant 2-3 years from now.

Aside from what 'most people' might or might not mean, it doesn't make my argument any less valid.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Originally posted by: Wreckage
82% of people polled stated overwhelmingly that they like PhysX
http://www.driverheaven.net/polls/poll-1198-a.html

67% of the video card market last quarter was from cards supporting PhysX/CUDA
http://www.neoseeker.com/news/...rovement-over-q4-2008/

A major gaming website states "Everybody Loves PhysX"
http://forums.anandtech.com/me...=2302598&enterthread=y

Yet a random blog post speaks poorly of it and the "Red Team" can't focus on anything else.

I am going to totally ignore "67% of video cards last Q supported PhysX" because that's a pointless statement to repeat over and over.
Who cares if the 8400GT supports PhysX, it can't play the games which have PhysX.
What YOU (Wreckage) should be doing is looking at different numbers.
The Steam hardware survey indicates that around 75% of reasonable gaming cards (e.g. from GF 8600/HD3650 and up in terms of performance) are from NV, which gives them an even bigger gamer marketshare than general marketshare.

But then you have to realise there aren't many games which make use of hardware PhysX, rendering it pretty useless for now as a feature to support. Those which do support it don't use it to add much, except maybe more effects etc.
There is a comparison here to DirectX improvements such as SM2 vs SM3, DX10 vs DX10.1, etc. The differences between these DX implementations are minor; maybe one runs the game a little faster and with nicer graphics. PhysX software vs hardware is (mostly, at the moment) the same. Mirror's Edge, for example, has nicer cloth swaying etc., but the fundamental gameplay is unaltered.

PhysX, and hardware-accelerated physics in general, will only really be a step forward when it is used to fundamentally change the gameplay itself. I would give the example of the PhysX levels of Unreal Tournament 3, where they can do things not possible without hardware acceleration. But widespread implementation of such gameplay-altering features can't (really) happen when you don't have all of the market, and when the competition at the mid range to high end, where the gamers will be making their purchases, is quite tight (which we cannot totally determine, because we don't have accurate numbers for GTX + 9800 series vs HD4800 series cards even if overall NV sales make up 67% of the market; overall market share doesn't matter if it's a lot of low-end stuff).

PhysX and hardware physics can't show their true value until it's available for all, because the major developers won't want to alienate all non-NV graphics chip companies. With the addition of Intel in the future that's going to be even more true. Why make a game based around hardware physics that only one vendor out of three supports? Your game won't be saleable to a large part of the market, and you will be shooting yourself as a development studio in the foot. Sure the situation now is that it's NV vs AMD and NV has a larger marketshare, but we already know not many games at the moment support hardware PhysX, and in the future it's going to be even less appealing to use it with the addition of Intel to the GPU market. The true value of hardware physics will be adding to the fundamental gameplay experience, and not adding some fancy debris to random parts of a level. That can't happen if only NV can run PhysX hardware acceleration.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Scali
I have already explained it.
Cuda also refers to nVidia's hardware architecture, and how it is organized around scalar threads running in parallel on SIMD processors, and how they are scheduled in 'warps' and such.
The hardware and programming language go hand-in-hand with Cuda.
Since OpenCL's programming language and API is incredibly similar to Cuda (same concepts in terms of threading, warp scheduling etc), it follows that the hardware to run OpenCL efficiently is also incredibly similar to nVidia's.
And obviously, ATi's hardware is NOT similar to nVidia's (they have instructions that process up to 5 scalars at a time. Which is why they get those impressive marketing figures like '800 shader processors', but worst case they only get 20% efficiency out of them... which is why nVidia with 'only' '240 shader processors' is still faster. They don't have the efficiency problem because of their different approach).

Firstly, Cuda and "C for Cuda" are two separate things. Cuda is the underlying framework which Nv designed to interface applications with their gpu. C for Cuda is the actual language programmers use to program the gpu, which in turn gets compiled by the driver and run on the HW. That language is based on a decades-old C language, so Nvidia hardly did anything innovative there, not to mention the GLSL shader language is also based on it, and it preceded Cuda by a few years.

Secondly, OpenCL is not based on Cuda, but rather follows the design model of GLSL, where the developer writes a program for the gpu, which gets compiled and loaded at runtime by the driver, and then gets "bound" to make it active. The fact that it shares some similarities with Cuda is no different than OpenGL and DirectX both being based on rasterization. That doesn't mean one is based on the other.

And lastly, AMD HW is a super-scalar architecture, so it's not like the 5 units can only operate on a single instruction. It also runs and schedules threads, only it organizes them differently from Nvidia's architecture. There are trade-offs between Nvidia's and AMD's approaches, and Nvidia's design is not universally superior. There are cases where AMD's HW wins. 800 vs 240? How about 240 shaders running at twice the clockspeed? Or a 1 billion transistor RV790 beating a 1.4 billion transistor GT200? It's not as simple as counting the number of shaders and such.
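
For reference, the GLSL flow described above looks roughly like this on the host side (a bare sketch against the OpenGL 2.0 C API; it assumes a GL context is already current and the 2.0 entry points are loaded, for example via GLEW, and it skips error checking): compile at runtime, link, then bind to make it active.

#include <GL/glew.h>   /* assumes an OpenGL 2.0+ context is already current */

GLuint build_and_bind(const char *vs_src, const char *fs_src)
{
    /* The driver compiles the shader source at run time... */
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);

    /* ...links it into a program object... */
    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);

    /* ...and "binds" it to make it the active program. */
    glUseProgram(prog);
    return prog;
}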
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Wreckage
82% of people polled stated overwhelmingly that they like PhysX
http://www.driverheaven.net/polls/poll-1198-a.html

67% of the video card market last quarter was from cards supporting PhysX/CUDA
http://www.neoseeker.com/news/...rovement-over-q4-2008/

A major gaming website states "Everybody Loves PhysX"
http://forums.anandtech.com/me...=2302598&enterthread=y

Yet a random blog post speaks poorly of it and the "Red Team" can't focus on anything else.

AT's reader poll says otherwise.
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
www.neftastic.com
Originally posted by: Wreckage
82% of people polled stated overwhelmingly that they like PhysX
http://www.driverheaven.net/polls/poll-1198-a.html

67% of the video card market last quarter was from cards supporting PhysX/CUDA
http://www.neoseeker.com/news/...rovement-over-q4-2008/

A major gaming website states "Everybody Loves PhysX"
http://forums.anandtech.com/me...=2302598&enterthread=y

Yet a random blog post speaks poorly of it and the "Red Team" can't focus on anything else.

Hey look, the marketing propaganda bot has been turned on for the day!

Originally posted by: Scali
I said Cuda was designed with the OpenCL programming model in mind.
Why do I say that? Because I've already said before that OpenCL has adopted the Cuda programming model.

I find it funny that Nvidia's compute language and architecture could possibly have been designed for a generic hardware agnostic compute API that hadn't even been conceived at the time Nvidia's grand CUDA architecture was designed.

Originally posted by: Scali
You're rapidly falling into the hole of slinging personal insults around rather than just discussing the topic at hand.

And you're preaching personal opinion as gospel. What's your point?

Originally posted by: Scali
Originally posted by: SunnyD
You've just backed off from saying AMD's hardware is inferior (in terms of OpenCL) to saying their software stack is inferior (in terms of OpenCL)

No I didn't. I said that because the hardware is so different from nVidia's/what OpenCL expects, the compiler has to put in a lot of hard work.
I don't want to pull the reading comprehension card here like you did... But I didn't specifically put the blame on the compiler (I said "both compiler and hardware", since they work as a team). That is just what you read into it.

Hmm, "reading into things" seems to go with the territory. Explains how you could end up mistaken on your opinions here.

Again, I point out - OpenCL is not expecting anything from the hardware, other than support. That's the exact reason why OpenCL will run on a generic CPU, DSP, or GPU without a care in the world. This is the point you seem to be failing to realize. OpenCL isn't designed around CUDA... it never has been. If it was, it wouldn't be platform agnostic, and if that were the case then I'd happily concede your point. There's no point in going on with this ad infinitum.

Originally posted by: Scali
The problem here is that your assumption is invalid. I'm not basing the entire thing on a few API calls. I've also studied both hardware architectures, and have hands-on experience developing code for both. That however does not mean I care to go into great detail on an internet forum, or produce empirical evidence for some smart-mouth.

Empirical proof, otherwise it's hearsay. If you don't want to back up your claims, that's up to you. All that does is reduce your credibility to the same level as we regard rags like TheInq... spout enough fud and odds are you might get lucky eventually.

Originally posted by: Scali
The problem I have with you however is that you come here and insult me, pretend to know what you're talking about, but don't back up anything you say, or even argue any of the points I've made in this thread. You've not even convinced me that you know what you're talking about.

I'm not here to poopoo your ego. Thankfully I do know what I'm talking about, but I don't really care if you believe me. The only thing I'm here to do is disprove misinformation when I see it. All I need to do is point it out; those providing the misinformation do enough to discredit themselves. You yourself have not provided any single piece of factual information to further your cause, leaving myself unconvinced that you yourself know what you're talking about. Prove without a shadow of a doubt that your opinion is fact and I'll happily retract. Until then, have fun going around in circles with yourself.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Scali
Originally posted by: dadach
the whole point is that physx now is not worth it, and is not one of nvidia strong points...

This is all just personal opinion.
The more people are biased towards ATi, the less willing they are to admit that PhysX has any worth at all.

I've seen it all dozens of times before... Every time manufacturer A introduced feature X which manufacturer B didn't support...

Regardless of how important you think PhysX is... bottom line is:
1) PhysX is available to developers today.
So are a number of other physics APIs.
2) PhysX is actually being used by a number of game studios.
So what? Havok is used by just as many if not more game studios.
3) There are a few games with PhysX effects on the market already.
Those effects add nothing to gameplay, only drawing extra crap on the screen.
4) There is currently no alternative to PhysX.

These are the facts which we assume to be true, and need not be discussed.

BS.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: munky
Firstly, Cuda and "C for Cuda" are two separate things. Cuda is the underlying framework which Nv designed to interface applications with their gpu. C for Cuda is the actual language programmers use to program the gpu, which in turn gets compiled by the driver and run on the HW.

I know, I posted that days ago on the blog in the opening post.
Is that where you got it from? :)

Originally posted by: munky
That language is based on a decades-old C language, so Nvidia hardly did anything innovative there, not to mention the GLSL shader language is also based on it, and it preceded Cuda by a few years.

Lol okay :)

Originally posted by: munky
Secondly, OpenCL is not based on Cuda, but rather follows the design model of GLSL, where the developer writes a program for the gpu, which gets compiled and loaded at runtime by the driver, and then gets "bound" to make it active.

Lol again. If OpenCL was just like GLSL, why would you need it in the first place?
The programmability of Cuda/OpenCL goes way beyond simple GLSL shaders.

Originally posted by: munky
The fact that it shares some similarities with Cuda is no different than OpenGL and DirectX both being based on rasterization. That doesn't mean one is based on the other.

Actually it does mean that. OpenGL specified certain rasterization rules. DirectX adopted the OpenGL rasterization model to a certain extent, because otherwise you couldn't use the same hardware for both OpenGL and DirectX.

Aside from that... Why do you think the OpenCL standard was drawn up in just a few months? And by Apple no less (not a GPU designer)?
The only possible answer is that they took Cuda as their guide and generalized the model.
If the standard was devised from scratch, there's no way it would be done as quickly as it was.
And THAT is why there are so many similarities. Neither Apple nor nVidia made a big secret of that:
http://www.appleinsider.com/ar...rt_on_top_of_cuda.html

Originally posted by: munky
And lastly, AMD HW is a super-scalar architecture, so it's not like the 5 units can only operate on a single instruction.

Incorrect.
They have instructions that can take up to 5 inputs (data parallelism). That's how they come to the creative number of 800 shader processors. Technically there are only 160 SIMD units, each capable of Vec5 processing. So you can have up to 160 threads in parallel, each processing Vec5's.
This means that in the worst case (scalar code), you get only 160 operations at a time, or only 20% efficiency. It's up to the compiler to try and find multiple operations that can be combined into single instructions with multiple inputs.
nVidia on the other hand can run 240 scalar threads on its SIMD units... they don't need to rely on data parallelism inside the instructions, so there are no efficiency issues. It doesn't matter if your code uses float, float2, float4 or whatever else, since it's always compiled to a sequence of scalar operations.
Because of the different hardware design, nVidia's instructions are simpler, and therefore the processors can run at higher clockspeeds.

Bottom line is that on paper they both have about the same peak performance of 1 TFLOPS... but with nVidia it's much easier to get code running efficiently, so the real-world performance will generally be closer to the peak performance than with ATi.
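
To make the packing argument concrete, here are two toy OpenCL C kernels (a sketch; the utilization figures in the comments are the worst/best-case numbers claimed above, not measurements). On a Vec5-style design the first kernel leaves most of each 5-wide instruction empty, while the second hands the compiler four independent lanes per operation; a scalar design runs both at full occupancy either way.

/* Worst case for a 5-wide unit: a chain of dependent scalar operations.
 * Only about 1 of the 5 slots per instruction can be filled, i.e. roughly
 * 160 of the "800 shader processors" busy (~20% of peak). */
__kernel void scalar_chain(__global float *x)
{
    int i = get_global_id(0);
    float a = x[i];
    a = a * 1.0001f + 0.5f;   /* each step depends on the previous one */
    a = a * 1.0001f + 0.5f;
    a = a * 1.0001f + 0.5f;
    x[i] = a;
}

/* Friendly case: float4 work gives the compiler four independent lanes
 * to pack into each wide instruction, so utilization is much higher. */
__kernel void vec4_work(__global float4 *x)
{
    int i = get_global_id(0);
    x[i] = x[i] * 1.0001f + 0.5f;
}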
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: SunnyD
I find it funny that Nvidia's compute language and architecture could possibly have been designed for a generic hardware agnostic compute API that hadn't even been conceived at the time Nvidia's grand CUDA architecture was designed.

It's very easy to understand when you see that the hardware-agnostic API works in the same way as Cuda does.
But if you can't understand that, there's no point in discussing it any further.

Originally posted by: SunnyD
And you're preaching personal opinion as gospel.

It just looks that way because it's a one-sided discussion. The other side fails to deliver any good arguments.

Originally posted by: SunnyD
Hmm, "reading into things" seems to go with the territory. Explains how you could end up mistaken on your opinions here.

Is that supposed to be some kind of apology?
Not accepted.

Originally posted by: SunnyD
Again, I point out - OpenCL is not expecting anything from the hardware, other than support. That's the exact reason why OpenCL will run on a generic CPU, DSP, or GPU without a care in the world.

Obviously it will run on various types of processors.
But the point is, do you expect OpenCL to run as fast on an x86 CPU as on a high-end GPU, simply because x86 CPUs will get OpenCL support?
And if not, why do you think that is? Could it possibly have something to do with the fact that GPU hardware is more suited to the kind of parallel processing that OpenCL is designed for? Maybe even that GPUs were the main target for OpenCL, and CPU support is only there for compatibility's sake? (Gives me flashbacks to the days of optional FPUs, where a programming language would let you run floating point code just fine on any CPU, but systems with an FPU would be MUCH faster. The others had to simulate the floating point operations with their not-so-suitable integer instructions. It worked, but no more than that.)
Hey, now wait a minute, could the same also be true for different types of GPU designs?
Wow, that makes so much sense!
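
To illustrate the point: the same OpenCL host code can be pointed at a CPU or a GPU just by asking for a different device type, which is exactly why "it runs everywhere" says nothing about how fast it runs on each. A small sketch (device names and compute-unit counts will obviously vary per machine):

#include <CL/cl.h>
#include <stdio.h>

/* Query one device of the requested type and print its name and
 * compute-unit count, or report that none is available. */
static void probe(cl_platform_id plat, cl_device_type type, const char *label)
{
    cl_device_id dev;
    cl_uint n = 0;
    if (clGetDeviceIDs(plat, type, 1, &dev, &n) != CL_SUCCESS || n == 0) {
        printf("%s: no device\n", label);
        return;
    }
    char name[256];
    cl_uint units = 0;
    clGetDeviceInfo(dev, CL_DEVICE_NAME, sizeof(name), name, NULL);
    clGetDeviceInfo(dev, CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(units), &units, NULL);
    printf("%s: %s (%u compute units)\n", label, name, units);
}

int main(void)
{
    cl_platform_id plat;
    if (clGetPlatformIDs(1, &plat, NULL) != CL_SUCCESS)
        return 1;
    probe(plat, CL_DEVICE_TYPE_CPU, "CPU");
    probe(plat, CL_DEVICE_TYPE_GPU, "GPU");
    return 0;
}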

Originally posted by: SunnyD
This is the point you seem to be failing to realize. OpenCL isn't design around CUDA... it never has been. If it was, it wouldn't be platform agnostic, and if that were the case then I'd happily concede your point. There's no point in going on with this ad infinitum.

The problem is that I NEVER said that. Don't put words into my mouth.
If you bother to grant me the respect to READ what I said... I said that OpenCL adopted the Cuda programming model, but made it into a hardware/platform-agnostic API.
You can do nothing but concede to that.

Originally posted by: SunnyD
Empirical proof, otherwise it's hearsay. If you don't want to back up your claims, that's up to you. All that does is reduce your credibility to the same level as we regard rags like TheInq... spout enough fud and odds are you might get lucky eventually.

I see it is not possible to have a mature discussion with you.

Originally posted by: SunnyD
Thankfully I do know what I'm talking about, but I don't really care if you believe me.

Good, because I don't.

Originally posted by: SunnyD
The only thing I'm here to do is disprove misinformation when I see it.

Funny enough that is exactly why I've entered this thread.

Originally posted by: SunnyD
You yourself have not provided any single piece of factual information to further your cause, leaving myself unconvinced that you yourself know what you're talking about.

Actually I have. But you misunderstood the parts that you decided to respond to, and ignored the others.

Originally posted by: SunnyD
Until then, have fun going around in circles with yourself.

Then kindly leave this thread, as I don't think you have any more to add. You only throw insults around, and display your own lack of understanding and reading comprehension.