"Inevitable Bleak Outcome for nVidia's Cuda + Physx Strategy"


Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: Zstream
Wow, one guy coming out of the woodwork to defend PhysX. You guys are diehard, lol.

Yeah, you've got to love it. He has my cat convinced, but the dog here is no fool.


First, we're painting a picture of Cuda being the foundation for OpenCL, which is false! Apple and PowerVR may have a little to say about that, and I'm sure AMD and Intel will chime in on OpenCL too. To say Cuda = C and C = OpenCL is a lie. MS, I believe, has a lot to do with C. OpenCL was shoved down MS's throat; they only came on board after Apple got its way.

Then there's adoption of Cuda vs. OpenCL. I suggest you look at the OpenCL backers: ARM, Apple, Intel, AMD, PowerVR, ATI, NV, etc., etc. Now show a list of Cuda backers and let's compare their influence in computing.

The fact that NV marketers are pushing PhysX says a lot. Game developers are being pushed by NV to support PhysX while at the same time being told not to use DX10.1.

Maybe the EU should look into NV and game developer relations. They look guilty to me, so maybe it should be looked into.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Scali
I know, I posted that days ago on the blog in the opening post.
Is that where you got it from? :)
LOL, you think you're the only one here who develops code?

Really informative and insightful comment there... :roll:

Lol again. If OpenCL was just like GLSL, why would you need it in the first place?
The programmability of Cuda/OpenCL goes way beyond simple GLSL shaders.
LOL, you talk like Cuda and GLSL run on different HW. Anyone with enough knowledge of OpenGL and GLSL can accomplish the same results as you can with Cuda or OpenCL on a modern gpu. Cuda and OpenCL exist so the developer doesn't have to write all the graphics-related code to get those results. GLSL is not limited to simple shaders.


Actually it does mean that. OpenGL specified certain rasterization rules. DirectX adopted the OpenGL rasterization model to a certain extent, because otherwise you couldn't use the same hardware for both OpenGL and DirectX.
So which innovative Cuda features did OpenCL adopt?


Aside from that... Why do you think the OpenCL standard was drawn up in just a few months? And by Apple no less (not a GPU designer)?

The only possible answer is that they took Cuda as their guide and generalized the model.
If the standard was devised from scratch, there's no way it would be done as quickly as it was.
And THAT is why there are so many similarities. Neither Apple nor nVidia made a big secret of that:
http://www.appleinsider.com/ar...rt_on_top_of_cuda.html
If I want accurate un-BS info on Nvidia or Apple, the last place I'd look is Nvidia or Apple. The rest is just your speculation.


Incorrect.
They have instructions that can take up to 5 inputs (data parallelism). That's how they come to the creative number of 800 shader processors. Technically there are only 160 SIMD units, each capable of Vec5 processing. So you can have up to 160 threads in parallel, each processing Vec5's.
This means that in the worst case (scalar code), you get only 160 operations at a time, or only 20% efficiency. It's up to the compiler to try and find multiple operations that can be combined into single instructions with multiple inputs.
nVidia on the other hand can run 240 scalar threads on its SIMD units... they don't need to rely on data parallelism inside the instructions, so there are no efficiency issues. It doesn't matter if your code uses float, float2, float4 or whatever else, since it's always compiled to a sequence of scalar operations.
Because of the different hardware design, nVidia's instructions are simpler, and therefore the processors can run at higher clockspeeds.

Bottom line is that on paper they both have about the same peak performance of 1 TFLOPS... but with nVidia it's much easier to get code running efficiently, so the real-world performance will generally be closer to the peak performance than with ATi.

You left out the important part that Nvidia is not 240 independent scalar processors either. They are grouped into multiprocessor clusters, each working on a single program stream, and if that stream diverges based on heavy branching, you're getting a lot of bubbles, basically resulting in wasted cycles. So it's not like NV's architecture has no worst-case penalties either.
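To make the quoted scalar-versus-VLIW point concrete, here is a rough sketch (my own illustration, not vendor code; the kernel names and constants are made up): on a scalar design such as G80/GT200, both kernels below keep the ALUs busy, while on a 5-wide VLIW design like RV770 the first one is close to the worst case, because its operations form a dependency chain the compiler cannot pack into the unused instruction slots.

```cuda
// Illustrative sketch only. "160 units x 5 lanes" is the VLIW layout
// discussed above; a scalar design runs each thread's operations one
// after another on its own ALU either way.

__global__ void dependent_chain(float* out, const float* in, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float x = in[i];
    x = x * 1.5f + 0.25f;   // each line needs the previous result,
    x = x * x   + 1.0f;     // so a VLIW compiler has nothing
    x = x * 0.5f - 2.0f;    // independent to pack beside it
    out[i] = x;             // (roughly 1 of 5 slots filled)
}

__global__ void independent_lanes(float4* out, const float4* in, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float4 v = in[i];
    v.x = v.x * 1.5f + 0.25f;   // four independent multiply-adds:
    v.y = v.y * 1.5f + 0.25f;   // easy to pack into one wide VLIW
    v.z = v.z * 1.5f + 0.25f;   // instruction, and no slower on a
    v.w = v.w * 1.5f + 0.25f;   // scalar architecture
    out[i] = v;
}
```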
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Wreckage
Originally posted by: munky

So is a number of other physics APIs.
We are discussing GPU Physics, please keep up.

Oh, you mean the gpu physics that draws more crap when something explodes? Wake me up when there are more interesting implementations of gpu physics.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: munky
LOL, you talk like Cuda and GLSL run on different HW. Anyone with enough knowledge of OpenGL and GLSL can accomplish the same results as you can with Cuda or OpenCL on a modern gpu. Cuda and OpenCL exist so the developer doesn't have to write all the graphics-related code to get those results. GLSL is not limited to simple shaders.

Patently false :)
In a way you could say that Cuda and GLSL run on different hardware. GLSL was devised a few years ago when the first shader hardware arrived. Cuda was devised for the G80, which is a completely different architecture from the GPUs that were around when GLSL was devised.

And no, you can't just do what you can with Cuda/OpenCL in OpenGL/GLSL. That's exactly the point.
OpenGL only allows you to render from vertex buffers into output buffers, going through vertex shaders and pixel shaders.
There is no concept of local storage or anything, and the memory access is very limited as well. You can only read from textures, and you can only render to your output buffers (and you're not allowed to use the same texture for both input and output in a single pass).

Technically you may be able to devise some kind of multipass OpenGL scheme for whatever algorithm you want to implement... but it's in no way comparable to how Cuda/OpenCL handle code, input, output etc.
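As a minimal sketch of the difference being described (my own example, assuming a block size of 256; it is not from any SDK): the kernel below stages data in on-chip shared memory and lets threads read values that other threads loaded, neither of which the vertex/pixel-shader model of that GLSL era exposes.

```cuda
// Minimal sketch: per-block "local storage" and thread cooperation,
// which have no equivalent in the render-from-textures-into-an-output-
// buffer model described above. Assumes blocks of 256 threads.

__global__ void reverse_within_blocks(float* out, const float* in, int n)
{
    __shared__ float tile[256];              // on-chip local storage
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        tile[threadIdx.x] = in[i];
    __syncthreads();                         // threads in the block
                                             // exchange data here

    // Each thread reads a value that a *different* thread staged into
    // shared memory; a pixel shader can only sample textures and write
    // its own output fragment.
    int mirror = blockDim.x - 1 - threadIdx.x;
    int src = blockIdx.x * blockDim.x + mirror;
    if (i < n && src < n)
        out[i] = tile[mirror];
}
```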

Originally posted by: munky
You left out the important part that Nvidia is not 240 independent scalar processors either. They are grouped into multiprocessor clusters, each working on a single program stream, and if that stream diverges based on heavy branching, you're getting a lot of bubbles, basically resulting in wasted cycles. So it's not like NV's architecture has no worst-case penalties either.

I left that part out because it isn't specific to nVidia. That part is very similar to ATi, and will most likely also be similar for Larrabee.
This is because they are essentially SIMD processors, where the threads all share the same code, and even the same program counter. Technically there's only one instruction, it's just executed by many units at the same time.

That's exactly the difference between GPGPU and CPU processing in general. CPU's may not have the parallelism, but all their threads are completely independent and can branch however they like.
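A small kernel to illustrate the shared-program-counter point (again my own sketch; the 32-thread warp is the nVidia grouping, and ATi and Larrabee group threads in different widths): when threads within one SIMD group take different sides of a branch, the hardware runs both paths with the inactive threads masked off, so the group pays for the sum of the two paths.

```cuda
// Illustrative only: divergence inside one SIMD group. All threads in
// the group share a single program counter, so a data-dependent branch
// that splits the group is executed path-by-path with threads masked
// off, producing the "bubbles" mentioned earlier. A CPU thread would
// simply run whichever path applies to it.

__global__ void divergent(float* out, const float* in, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (in[i] > 0.0f) {
        float x = in[i];                 // "expensive" path
        for (int k = 0; k < 64; ++k)
            x = x * 0.999f + 0.001f;
        out[i] = x;
    } else {
        out[i] = 0.0f;                   // "cheap" path
    }
    // If half the group takes each branch, the whole group still
    // executes both branches back to back.
}
```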

But I won't let that distract me. You actually did agree with me on everything I posted about the differences between ATi and nVidia in terms of GPGPU and code compilation. So you will understand my concerns relating to ATi's performance in OpenCL.
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
www.neftastic.com
Originally posted by: Scali
Then kindly leave this thread, as I don't think you have any more to add. You only throw insults around, and display your own lack of understanding and reading comprehension.

Yessum mas'er. Yer absolutely right mas'er. :roll: I'll leave when a mod tells me to, no sooner, thanks.

As far as the rest is concerned, I'm not going to bother responding to what your posts have devolved into from here. You've made my case for me, I'll leave it at that.

Originally posted by: Nemesis 1
Originally posted by: Zstream
Wow, one guy coming out of the woodwork to defend PhysX. You guys are diehard, lol.

Yeah, you've got to love it. He has my cat convinced, but the dog here is no fool.


First, we're painting a picture of Cuda being the foundation for OpenCL, which is false! Apple and PowerVR may have a little to say about that, and I'm sure AMD and Intel will chime in on OpenCL too. To say Cuda = C and C = OpenCL is a lie. MS, I believe, has a lot to do with C. OpenCL was shoved down MS's throat; they only came on board after Apple got its way.

Then there's adoption of Cuda vs. OpenCL. I suggest you look at the OpenCL backers: ARM, Apple, Intel, AMD, PowerVR, ATI, NV, etc., etc. Now show a list of Cuda backers and let's compare their influence in computing.

The fact that NV marketers are pushing PhysX says a lot. Game developers are being pushed by NV to support PhysX while at the same time being told not to use DX10.1.

Maybe the EU should look into NV and game developer relations. They look guilty to me, so maybe it should be looked into.

Wow Nemesis, I was actually able to follow that post of yours without any trouble this time around. :)

Let me help you out here...

OpenCL is being created by the Khronos Group with the participation of many industry-leading companies and institutions including 3DLABS, Activision Blizzard, AMD, Apple, ARM, Barco, Broadcom, Codeplay, Electronic Arts, Ericsson, Freescale, HI, IBM, Intel, Imagination Technologies, Kestrel Institute, Motorola, Movidia, Nokia, NVIDIA, QNX, RapidMind, Samsung, Seaweed, Takumi, Texas Instruments and Umeå University.

I have this distinct feeling that there's a lot more than just Nvidia's input going into OpenCL. Just a hunch here. While I agree certain aspects of what they've brought to the GPGPU table are undoubtedly better suited than other implementations, I find it very funny that someone would imply that their entire implementation (which is strictly GPU-based) is the entire basis of a hardware agnostic platform.

I am more interested as to why Microsoft isn't as interested in adding compute to their DirectX suite though. My guess is they're letting the OpenCL working group do the legwork first, much like what Microsoft did with Direct3D and OpenGL (though arguably the first incarnations of Direct3D were... lackluster).
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: munky
Originally posted by: Wreckage
Originally posted by: munky

So is a number of other physics APIs.
We are discussing GPU Physics, please keep up.

Oh, you mean the gpu physics that draws more crap when something explodes? Wake me up when there are more interesting implementations of gpu physics.

You wasted your money on your video card as it will not add anything more to a game than a $50 card for you.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: SunnyD
As far as the rest is concerned, I'm not going to bother responding to what your posts have devolved into from here. You've made my case for me, I'll leave it at that.

Yea, I figured it was impossible to argue anymore, after my explanation with OpenCL on CPUs and all that. Glad you finally understand that I was right all along.

Originally posted by: SunnyD
I am more interested as to why Microsoft isn't as interested in adding compute to their DirectX suite though. My guess is they're letting the OpenCL working group do the legwork first, much like what Microsoft did with Direct3D and OpenGL (though arguably the first incarnations of Direct3D were... lackluster).

What are you talking about? Compute Shaders are part of DirectX 11, and there has been partial support for DirectX 11 in the SDK for quite a while now (at least in the March 2009 SDK, and I think the one before that as well). Compute Shaders are part of that 'technology preview'.
So Microsoft has already added it. We just need to wait for an official release of the DX11 drivers and runtime.
See here for more info:
http://www.microsoft.com/downl...60BFC6A&displaylang=en
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Originally posted by: Wreckage
Originally posted by: munky
Originally posted by: Wreckage
Originally posted by: munky

So is a number of other physics APIs.
We are discussing GPU Physics, please keep up.

Oh, you mean the gpu physics that draws more crap when something explodes? Wake me up when there are more interesting implementations of gpu physics.

You wasted your money on your video card as it will not add anything more to a game than a $50 card for you.

Very nice, Wreckage. You tell someone who has a faster video card than yours, one that also has a number of features your card does not have and that sold at launch for around half the price yours launched at, that his card is a waste of money.

Everything you and your fellow cheerleaders say about PhysX can be said about DX10.1. I can buy a card that supports DX10.1 NOW. I can play games that use DX10.1 NOW. You cannot. Your card is a waste because you can't use DX10.1 or SM4.1. See, I can make retarded arguments just like you.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Nemesis 1
Maybe the EU should look into NV and game developer relations. They look guilty to me, so maybe it should be looked into.

They need another 13% market share first. It's not enough to be merely abusive or merely a monopoly; you need to become an abusive monopoly (preferably the kind with deep pockets that can be dug into) before governments will bother with you. :p ;) :laugh:
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Wreckage
Originally posted by: munky
Originally posted by: Wreckage
Originally posted by: munky

So is a number of other physics APIs.
We are discussing GPU Physics, please keep up.

Oh, you mean the gpu physics that draws more crap when something explodes? Wake me up when there are more interesting implementations of gpu physics.

You wasted your money on your video card as it will not add anything more to a game than a $50 card for you.

LMAO. With which $50 video card can I play modern games with high rez textures, high lighting/shadow details and high model geometry at 1920x1200 with 4xAA? None.

Getting playable framerates with godrays enabled in Stalker CS is something I definitely notice. Having extra foliage on the ground because of gpu physx, on the other hand, is not.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: munky


LMAO. With which $50 video card can I play modern games with high rez textures, high lighting/shadow details and high model geometry at 1920x1200 with 4xAA? None.

LOL!!!!! Thank you for proving my point. None of those things add any more to the game than PhysX does.

You could still play the game with AA turned off at a lower resolution. You wasted your money. Wasted it!!!!

Originally posted by: SlowSpyder

Very nice Wreckage.

Thank you I thought so. It proved my point perfectly. ATI fans do not need to buy more than a 4830 to play games. They don't care about extra visual effects. I mean look at this quote....
Oh, you mean the gpu physics that draws more crap when something explodes? Wake me up when there are more interesting implementations of gpu physics.

AA\AF\Higher resolution\advanced textures don't even do this much.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Originally posted by: munky
Originally posted by: Wreckage
Originally posted by: munky
Originally posted by: Wreckage
Originally posted by: munky

So is a number of other physics APIs.
We are discussing GPU Physics, please keep up.

Oh, you mean the gpu physics that draws more crap when something explodes? Wake me up when there are more interesting implementations of gpu physics.

You wasted your money on your video card as it will not add anything more to a game than a $50 card for you.

LMAO. With which $50 video card can I play modern games with high rez textures, high lighting/shadow details and high model geometry at 1920x1200 with 4xAA? None.

Getting playable framerates with godrays enabled in Stalker CS is something I definitely notice. Having extra foliage on the ground because of gpu physx, on the other hand, is not.

Um, he was pointing out that if you don't want the extra graphical niceties of PhysX (foliage etc.), then why would you care about high-res textures, high lighting and shadow details, high model geometry, a high resolution and AA?
Why are all the graphical effects that aren't necessary that you _can_ run valid, but missing out on some makes them worthless and invalid?
What's the difference between AA and extra foliage? Why should you care about having one but not the other if you want everything on? Either you want it all, or none of it really matters and you might as well get a $50 card that can run the game at 800x600 with everything on low.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: Scali
Originally posted by: munky
LOL, you talk like Cuda and GLSL run on different HW. Anyone with enough knowledge of OpenGL and GLSL can accomplish the same results as you can with Cuda or OpenCL on a modern gpu. Cuda and OpenCL exist so the developer doesn't have to write all the graphics-related code to get those results. GLSL is not limited to simple shaders.

Patently false :)
In a way you could say that Cuda and GLSL run on different hardware. GLSL was devised a few years ago when the first shader hardware arrived. Cuda was devised for the G80, which is a completely different architecture from the GPUs that were around when GLSL was devised.

And no, you can't just do what you can with Cuda/OpenCL in OpenGL/GLSL. That's exactly the point.
OpenGL only allows you to render from vertex buffers into output buffers, going through vertex shaders and pixel shaders.
There is no concept of local storage or anything, and the memory access is very limited as well. You can only read from textures, and you can only render to your output buffers (and you're not allowed to use the same texture for both input and output in a single pass).

Technically you may be able to devise some kind of multipass OpenGL scheme for whatever algorithm you want to implement... but it's in no way comparable to how Cuda/OpenCL handle code, input, output etc.

Originally posted by: munky
You left out the important part that Nvidia is not 240 independent scalar processors either. They are grouped into multiprocessor clusters, each working on a single program stream, and if that stream diverges based on heavy branching, you're getting a lot of bubbles, basically resulting in wasted cycles. So it's not like NV's architecture has no worst-case penalties either.

I left that part out because it isn't specific to nVidia. That part is very similar to ATi, and will most likely also be similar for Larrabee.
This is because they are essentially SIMD processors, where the threads all share the same code, and even the same program counter. Technically there's only one instruction, it's just executed by many units at the same time.

That's exactly the difference between GPGPU and CPU processing in general. CPU's may not have the parallelism, but all their threads are completely independent and can branch however they like.

But I won't let that distract me. You actually did agree with me on everything I posted about the differences between ATi and nVidia in terms of GPGPU and code compilation. So you will understand my concerns relating to ATi's performance in OpenCL.

I suggest you look at the latest PowerVR multi-core GPU and how it functions. If you look deep enough, you will find that Apple and PowerVR have been working on OpenCL for PowerVR for a long time. A lot of what you said is fact. The part you didn't mention is that PowerVR has come up with a solution. Apple, Intel and PowerVR have been working on what is called OpenCL for a very long time; it's not something that just popped up, as you say. Both Intel and Apple have been using PowerVR tech for a long time. Both Apple and Intel bought PowerVR shares AFTER OpenCL was adopted; why's that? I would think long and hard on that. Havok has been working on physics for AMD for a long time, and lo and behold, it's what is now called OpenCL. Why's that? Apple has been working on Grand Central for a long time for Intel CPUs, and it's very OpenCL friendly. Why's that?

If you want to see what influenced OpenCL, watch Apple and Intel in the handheld space. Intel's SoC will have Intel/PowerVR graphics that do physics just fine. Apple is also going with a PowerVR design, and Apple is in fact working on its own chipsets for SoC ARM and SoC Atom, both using PowerVR chips. 2010! FACT.

 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: SunnyD
Originally posted by: Scali
Then kindly leave this thread, as I don't think you have any more to add. You only throw insults around, and display your own lack of understanding and reading comprehension.

Yessum mas'er. Yer absolutely right mas'er. :roll: I'll leave when a mod tells me to, no sooner, thanks.

As far as the rest is concerned, I'm not going to bother responding to what your posts have devolved into from here. You've made my case for me, I'll leave it at that.

Originally posted by: Nemesis 1
Originally posted by: Zstream
Wow, one guy coming out of the woodwork to defend PhysX. You guys are diehard, lol.

Yeah, you've got to love it. He has my cat convinced, but the dog here is no fool.


First, we're painting a picture of Cuda being the foundation for OpenCL, which is false! Apple and PowerVR may have a little to say about that, and I'm sure AMD and Intel will chime in on OpenCL too. To say Cuda = C and C = OpenCL is a lie. MS, I believe, has a lot to do with C. OpenCL was shoved down MS's throat; they only came on board after Apple got its way.

Then there's adoption of Cuda vs. OpenCL. I suggest you look at the OpenCL backers: ARM, Apple, Intel, AMD, PowerVR, ATI, NV, etc., etc. Now show a list of Cuda backers and let's compare their influence in computing.

The fact that NV marketers are pushing PhysX says a lot. Game developers are being pushed by NV to support PhysX while at the same time being told not to use DX10.1.

Maybe the EU should look into NV and game developer relations. They look guilty to me, so maybe it should be looked into.

Wow Nemesis, I was actually able to follow that post of yours without any trouble this time around. :)

Let me help you out here...

OpenCL is being created by the Khronos Group with the participation of many industry-leading companies and institutions including 3DLABS, Activision Blizzard, AMD, Apple, ARM, Barco, Broadcom, Codeplay, Electronic Arts, Ericsson, Freescale, HI, IBM, Intel, Imagination Technologies, Kestrel Institute, Motorola, Movidia, Nokia, NVIDIA, QNX, RapidMind, Samsung, Seaweed, Takumi, Texas Instruments and Umeå University.

I have this distinct feeling that there's a lot more than just Nvidia's input going into OpenCL. Just a hunch here. While I agree certain aspects of what they've brought to the GPGPU table are undoubtedly better suited than other implementations, I find it very funny that someone would imply that their entire implementation (which is strictly GPU-based) is the entire basis of a hardware agnostic platform.

I am more interested as to why Microsoft isn't as interested in adding compute to their DirectX suite though. My guess is they're letting the OpenCL working group do the legwork first, much like what Microsoft did with Direct3D and OpenGL (though arguably the first incarnations of Direct3D were... lackluster).

Actually, look deeper: OpenCL is an Apple trademark. Apple was the one who wanted the standard.

 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Nemesis 1
I suggest you look at the latest PowerVR multi-core GPU and how it functions. If you look deep enough, you will find that Apple and PowerVR have been working on OpenCL for PowerVR for a long time. A lot of what you said is fact. The part you didn't mention is that PowerVR has come up with a solution. Apple, Intel and PowerVR have been working on what is called OpenCL for a very long time; it's not something that just popped up, as you say. Both Intel and Apple have been using PowerVR tech for a long time. Both Apple and Intel bought PowerVR shares AFTER OpenCL was adopted; why's that? I would think long and hard on that. Havok has been working on physics for AMD for a long time, and lo and behold, it's what is now called OpenCL. Why's that? Apple has been working on Grand Central for a long time for Intel CPUs, and it's very OpenCL friendly. Why's that?

If you want to see what influenced OpenCL, watch Apple and Intel in the handheld space. Intel's SoC will have Intel/PowerVR graphics that do physics just fine. Apple is also going with a PowerVR design, and Apple is in fact working on its own chipsets for SoC ARM and SoC Atom, both using PowerVR chips. 2010! FACT.

Any chance this is related to what Apple's got cooking with all this here?
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: Idontcare
Originally posted by: Nemesis 1
Maybe the EU should look into NV and game developer relations. They look guilty to me, so maybe it should be looked into.

They need another 13% market share first. It's not enough to be merely abusive or merely a monopoly; you need to become an abusive monopoly (preferably the kind with deep pockets that can be dug into) before governments will bother with you. :p ;) :laugh:

As long as you're not a monopoly you can do this? If that's the case, Intel should break up and compete against itself.

 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Nemesis 1
I suggest you look at the latest PowerVR multi-core GPU and how it functions. If you look deep enough, you will find that Apple and PowerVR have been working on OpenCL for PowerVR for a long time. A lot of what you said is fact. The part you didn't mention is that PowerVR has come up with a solution. Apple, Intel and PowerVR have been working on what is called OpenCL for a very long time; it's not something that just popped up, as you say. Both Intel and Apple have been using PowerVR tech for a long time.

I wouldn't know, I haven't seen any PowerVR hardware since I had a Kyro II.
But even if PowerVR originally devised it, and nVidia 'stole' the idea for G80 and Cuda, that doesn't change anything about the fact that the Cuda programming model and the OpenCL model are essentially the same, and that ATi is the odd one out here.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Lonyo
Um, he was pointing out that if you don't want the extra graphical niceties of PhysX (foliage etc.), then why would you care about high-res textures, high lighting and shadow details, high model geometry, a high resolution and AA?
Why are all the graphical effects that aren't necessary that you _can_ run valid, but missing out on some makes them worthless and invalid?
What's the difference between AA and extra foliage? Why should you care about having one but not the other if you want everything on? Either you want it all, or none of it really matters and you might as well get a $50 card that can run the game at 800x600 with everything on low.

I don't care about graphical physics, I want interactive gameplay physics. Ragdoll model effects were a good start, and those run just fine without dedicated physics HW. For something as hyped up as gpu physics, I'd expect a lot more than just extra debris on the screen.

Originally posted by: Wreckage
LOL!!!!! Thank you for proving my point. None of those things add any more to the game than PhysX does.

You could still play the game with AA turned off at a lower resolution. You wasted your money. Wasted it!!!!

They add exactly what they're supposed to add, so my game doesn't look as crappy as yours. You wasted your money because when you enable HW PhysX in the 2 games that support it, you're still limited to the same simplified game physics when it comes to interacting with the game world.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Wreckage
AA\AF\Higher resolution\advanced textures don't even do this much.

They make a game not look like crap, compared to your low-rez, low detail experience.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: munky
Originally posted by: Wreckage
AA\AF\Higher resolution\advanced textures don't even do this much.

They make a game not look like crap, compared to your low-rez, low detail experience.

If you are playing a PhysX game with a card that does not support PhysX, it will look more like crap.

You don't have to keep proving my point for me, but I do appreciate it.


http://www.techreport.com/articles.x/16392

Notice that without PhysX, broken glass is no longer persistent, instead replaced by a standard shattering animation. Also, the window shades don't even exist in the game world without PhysX, making the hallway that much less visually appealing. One final item to point out: those little black specks you see in the center of the PhysX image are an example of the simulated debris from ricochets, which is otherwise nonexistent without GPU-accelerated physics enabled.

Many soft bodies simply don't appear in the environment without PhysX enabled, as evidenced by the blue tarp. The tarp deforms and eventually turns into tatters as the helicopter shoots through it to hit the player. In the first image, you can see a pre-cooked puff of debris as a bullet ricochets, while the second image has simulated sparks bouncing around.

Smoke isn't visible all that often in Mirror's Edge, but its rare appearances are generally used to great effect. In this instance, a large amount of mist emanates from the water as the player slides down a waterfall. The whole scene looks a tad bland without PhysX, but the simulated smoke ups the immersion level.

PhysX visual goodness looks best while in motion, however, and screenshots can't tell the whole story.

The name of the game is immersion, and PhysX helps sell the experience.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Wreckage
Originally posted by: munky
Originally posted by: Wreckage
AA\AF\Higher resolution\advanced textures don't even do this much.

They make a game not look like crap, compared to your low-rez, low detail experience.

If you are playing a PhysX game with a card that does not support PhysX, it will look more like crap.

You don't have to keep proving my point for me, but I do appreciate it.

Wrong. If I'm playing a gpu-PhysX game, that means I'm wasting my time with crap like Mirror's Edge instead of playing a much more impressive game like Far Cry 2.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Originally posted by: Nemesis 1
Originally posted by: Zstream
Wow, one guy coming out of the woodwork to defend PhysX. You guys are diehard, lol.

Yeah, you've got to love it. He has my cat convinced, but the dog here is no fool.


First, we're painting a picture of Cuda being the foundation for OpenCL, which is false! Apple and PowerVR may have a little to say about that, and I'm sure AMD and Intel will chime in on OpenCL too. To say Cuda = C and C = OpenCL is a lie. MS, I believe, has a lot to do with C. OpenCL was shoved down MS's throat; they only came on board after Apple got its way.

Then there's adoption of Cuda vs. OpenCL. I suggest you look at the OpenCL backers: ARM, Apple, Intel, AMD, PowerVR, ATI, NV, etc., etc. Now show a list of Cuda backers and let's compare their influence in computing.

The fact that NV marketers are pushing PhysX says a lot. Game developers are being pushed by NV to support PhysX while at the same time being told not to use DX10.1.

Maybe the EU should look into NV and game developer relations. They look guilty to me, so maybe it should be looked into.

You do understand that CUDA is not a programming language, right? You do understand that CUDA (Compute Unified Device Architecture) is the actual hardware: the GeForce GPUs starting with G80. The CUDA toolkit, now up to version 2.0, is what enables developers to write apps for NVIDIA GPUs in C (with extensions for CUDA).

As far as "CUDA" being similar to OpenCL, that is a misconception. What should be said is, "how does C for CUDA compare to OpenCL". And I'm quoting the very article from the Apple Insider article.

Here is the link to the article. NVIDIA pioneering OpenCL support on top of CUDA

So when you say "To say Cuda = C and C = OpenCL is a lie", that would only be accurate if somebody had actually said that, or meant that. To clear up this misconception, the proper way to explain it is: CUDA is an actual architecture, which developers program by using the CUDA toolkit to write apps for NVIDIA GPUs in C (with extensions for CUDA). And that C (with extensions for CUDA) is not much different from OpenCL. In fact, it was said that the differences are minor.

Here is an exact quote from the AppleInsider article I linked to above.
The part in parentheses was added by me to give context for what Manju Hegde means when he says "the two", but this can be easily understood by reading the article in its entirety.

Quote as follows:

Quoted from Manju Hegde, the General Manager of CUDA at NVIDIA: "The answer is that the two (C for CUDA and OpenCL) share very similar constructs for defining data parallelism, which is generally the major task, so the code will be very similar and the porting efforts will be minor."

What I think is going on here is that some people do not even want to imagine that C for CUDA is very similar to OpenCL, because those same people were touting OpenCL as the new standard that all GPUs can run without the proprietary restraints they perceive in CUDA and PhysX. And now that it has been said that "CUDA" is very much like OpenCL, well, that just won't do. It just won't do at all!

That's what's happening here. It's why SunnyD is getting all agitated and attacking the new guy "Scali" (who does seem to know his stuff quite well), and why Creig and Munky are raging against it. Guys, I'm not picking on anybody here, and I understand it is frustrating to hear this kind of stuff when you very much want to see ATI/AMD succeed while they are so near the brink of failure. I don't want to see them go either. I need them to start thinking ahead like, well, yesterday! Or, for Christ's sake already, let Nvidia buy them out (whether that was ever an intention on Nvidia's part, I have no idea) and continue to fight Intel.
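For what it's worth, the "porting efforts will be minor" claim quoted above is easy to illustrate. Below is a trivial SAXPY-style kernel in C for Cuda, with the OpenCL C equivalent of each construct noted in comments (my own sketch, not taken from either SDK); the body is essentially identical in both, and only the qualifiers and the way a thread finds its index differ.

```cuda
// C for CUDA version. The OpenCL C version differs only as marked:
//
//   __global__ void saxpy(...)     ->  __kernel void saxpy(...)
//   const float* x, float* y       ->  __global const float* x,
//                                      __global float* y
//   blockIdx.x * blockDim.x
//       + threadIdx.x              ->  get_global_id(0)
//
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // get_global_id(0)
    if (i < n)
        y[i] = a * x[i] + y[i];   // identical line in both languages
}
```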
 

ShawnD1

Lifer
May 24, 2003
15,987
2
81

Originally posted by: Idontcare
Originally posted by: Nemesis 1
Maybe the EU should look into NV and game developer relations. They look guilty to me, so maybe it should be looked into.

They need another 13% market share first. It's not enough to be merely abusive or merely a monopoly; you need to become an abusive monopoly (preferably the kind with deep pockets that can be dug into) before governments will bother with you. :p ;) :laugh:

Must be a pretty shitty monopoly if you don't have deep pockets :p


Wreckage, the people who write those articles are straight-up schizophrenic. They're in the same league as people who only listen to vinyl records, and only on a $10,000 record player. It might be true that PhysX adds something, but this is mostly just another case of making a mountain out of a molehill.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Originally posted by: munky
Originally posted by: Wreckage
Originally posted by: munky
Originally posted by: Wreckage
AA\AF\Higher resolution\advanced textures don't even do this much.

They make a game not look like crap, compared to your low-rez, low detail experience.

If you are playing a PhysX game with a card that does not support PhysX, it will look more like crap.

You don't have to keep proving my point for me, but I do appreciate it.

Wrong. If I'm playing a gpu-PhysX game, that means I'm wasting my time with crap like Mirror's Edge instead of playing a much more impressive game like Far Cry 2.

HAHAHAHAHAHA.
Man, that made my day, it really did.
There are so many games you could have said and I would have taken you totally seriously, but then you mention Far Cry 2, and I don't know whether you're joking or being serious in thinking FC2 is more impressive than Mirror's Edge.

But PhysX doesn't really add a huge amount. A PhysX game without PhysX hardware won't look like crap, it'll look slightly worse, like having no AA vs 4xAA. Hardly the end of the world.
Wreckage seems to be overestimating how much PhysX adds, while ignoring the fact that he is arguing that current PhysX is about as worthwhile as DX10.1 (which AMD has and NV doesn't).

Wreckage, if you are going to argue that PhysX makes things prettier, then it's effectively just a small graphical extension to the base game, much like a DX10.1 path would be.
Now since AMD has DX10.1 and NV has PhysX H/W, you win some and lose some on both sides, but basically it's a wash.
Both DX10.1 and PhysX do very little really, only one IHV supports each at the moment, and they both just add small amounts graphics-wise. DX10.1 has the advantage that it will eventually be supported by all, though, so it's more worthwhile to go for a DX10.1 card and wait for DX10.1 content to be added, because any future card should work with it, than to go for a PhysX card and hope that it will become supported by all cards in the future.
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
www.neftastic.com
Originally posted by: Nemesis 1
Actually, look deeper: OpenCL is an Apple trademark. Apple was the one who wanted the standard.

That was beside my point, though indirectly you pick at that scab. It's an ideal reason for Microsoft to want an official "DirectCompute" API of its own. As I said, it makes me wonder to an extent why they didn't head down this path themselves.