Interesting take on Kanter's article "PhysX87: Software Deficiency"


GaiaHunter

Diamond Member
Jul 13, 2008
3,650
218
106
I wouldn't consider Mass Effect 2 and Dragon Age crap games.

No advantage using NVIDIA cards vs AMD cards on those games, though.

The discussion in these kinds of threads is:

a) Are physics effects accelerated by the GPU any more realistic than those run on the CPU?

b) Are the physics effects accelerated by the GPU faster than on the CPU?

c) Are the GPU-accelerated PhysX effects worth the hassle of dropping frame rates and limiting your card choices?

There is no doubt that the PhysX effects that are vendor agnostic (read: run on the CPU) are successful amongst developers; it is free and such.

But why is it free? What does NVIDIA get?

Of course, what NVIDIA is interested in is getting PhysX games that actually require NVIDIA GPUs to run, not Dragon Age and Mass Effect kinds of games that run just fine on a system with an AMD graphics card.

The fact that PhysX is free for developers to implement is just NVIDIA's way of pushing for their ultimate goal: GPU PhysX.

Again, there are two types of PhysX: PhysX that runs on the CPU and PhysX that runs on the GPU.
 

Outrage

Senior member
Oct 9, 1999
217
1
0
I wouldn't consider Mass Effect 2 and Dragon Age crap games.

And I wouldn't consider BioWare to be indie developers.

PS3 PhysX: 48 games
PS3 Havok: 82 games

360 PhysX: 63 games
360 Havok: 98 games

PC PhysX: 165 games
PC Havok: 75 games

If PhysX is free and the best physics engine, how come developers pay money to use Havok?
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
And I wouldn't consider BioWare to be indie developers.

PS3 PhysX: 48 games
PS3 Havok: 82 games

360 PhysX: 63 games
360 Havok: 98 games

PC PhysX: 165 games
PC Havok: 75 games

If PhysX is free and the best physics engine, how come developers pay money to use Havok?

Not only that, but Unreal Engine 3 is quite possibly the most widely used single engine in games, and ships with PhysX as the default physics engine. That was probably the biggest win for Ageia back in the day.
And despite that, Havok is still popular.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
It's simple for me: it's really nice to see Havok and PhysX try to innovate physics and improve gaming experiences. There is no contest for me; I desire both to evolve, mature, and offer more.

If one is for improved physics, and feels physics is the next frontier, it's great to have Havok and PhysX compete and innovate.
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
It's growing among indie developers without money, pushing out crap games on the PC. If they had money, they would purchase a Havok license like the majority of console developers do.


How do you explain Epic and the UT3 engine then?
Or the fact that Sony bought a "free-for-all-devs" PhysX license for the PS3?

But could you link me to documentation that most console developers buy a Havok license?
Because this thread is full of false claims, so it would be nice to see a real, documented one...for once.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
That's the 1st time I'm hearing about I-Fluid. I'll have a look at that.

I assume it's based on software PhysX?

I think most people would agree that software PhysX is pretty good (except in NFS: Shift). IMO it's not as good as Havok, but it's pretty good. I would go as far as to say it's better than GPU PhysX, as it usually plays a larger role in the gameplay.

This is something that makes no sense to me. But it might give me some insight into why people can't get their heads around these things.
Let me try to explain:
PhysX is an API.
There is no difference between "software PhysX" and "hardware PhysX", or "CPU PhysX" or "GPU PhysX".
There are just multiple backends ('drivers') for executing PhysX code. Compare this, for example, with Direct3D or OpenGL.
If you write a game using Direct3D or OpenGL, it can run on a variety of hardware without changing anything. AMD provides a backend for their Radeons in their drivers, nVidia for their GeForces, etc.

In case you still don't understand: Radeons, GeForces and other videocards have virtually NOTHING in common. It's not like a videocard 'does' Direct3D or OpenGL... no, these are APIs, high-level abstractions of the operations a GPU is required to perform. Pretty much like how websites use standard languages such as HTML, which is defined to render in a certain way. Firefox, Chrome, IE, etc. all use their own renderers, completely independent of the others, yet they are capable of displaying the same websites, because they understand the same HTML commands.

This is the same with PhysX. By default, you will have a CPU backend on a PC. If you have an nVidia videocard, you will also get a GPU backend, and you can choose between the two. If you have an Ageia PPU, you will have a PPU backend to choose from.
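
To make that concrete, here is a minimal sketch of what backend selection looked like in the 2.8-era SDK (written from memory, so treat the exact identifiers as approximate rather than authoritative). The point is that the game code is identical either way; a single flag picks the backend:

#include "NxPhysics.h"

// Create the SDK; the same code runs against whichever backends are installed.
NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);

NxSceneDesc sceneDesc;
sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
sceneDesc.simType = NX_SIMULATION_SW;    // CPU backend (the default)
// sceneDesc.simType = NX_SIMULATION_HW; // GPU/PPU backend, when available
NxScene* scene = sdk->createScene(sceneDesc);

// The per-frame loop is identical regardless of which backend runs it:
scene->simulate(1.0f / 60.0f);
scene->flushStream();
scene->fetchResults(NX_RIGID_BODY_FINISHED, true);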

Now, the thing with GPU and PPU acceleration is that they can handle a lot more physics operations per second, because they have a lot more raw processing power than even the fastest CPUs in the world (if you read my article closely, I refer to that; the GFLOPS ratings are about a factor of 10 apart). They are also pretty efficient at doing physics, so this raw processing power translates well to practical situations.
So, when you use a GPU or PPU, you can process a lot more PhysX.
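
A rough back-of-envelope check of that factor (using approximate, era-typical peak figures supplied here, not numbers from the article itself):

  GTX 480:             480 cores x 2 FLOP/clock x 1.40 GHz ~= 1345 GFLOPS
  Quad-core @ 3.2 GHz:   4 cores x 8 FLOP/clock (SSE) x 3.2 GHz ~= 102 GFLOPS

That is roughly the order-of-magnitude gap in question.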

Game developers use this to add extra content/detail, specifically tailored for high-speed GPU/PPU processing. Obviously if you run THAT on your CPU, it's going to crawl.
Is that nVidia's/PhysX's fault? No, your CPU is just a factor of 10 slower (see above); what did you expect? People who cry "yeah, but you can get a 6-core CPU now for less than $300, and all these cores could do PhysX" simply don't get it. Sorry, but that's just how it is.
And people who cry "yeah, but Havok doesn't need a GPU for the same effects" don't get it either... Sorry... but get a clue about what physics effects ARE and how much processing power some of them cost, and then look closely at your beloved Havok games again. Do you see realtime cloth effects? Realtime smoke/fluid effects? Soft bodies?
The answer is a simple no.
Basically all you see is ragdolls and small amounts of rigid-body physics. Yeah, CPUs can handle those, but they can handle those very well with PhysX running on a CPU as well. I-Fluid is a nice example of that.
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
It's simple for me: it's really nice to see Havok and PhysX try to innovate physics and improve gaming experiences. There is no contest for me; I desire both to evolve, mature, and offer more.

If one is for improved physics, and feels physics is the next frontier, it's great to have Havok and PhysX compete and innovate.

I don't think Havok really "innovated" from 2000 to 2006.
AGEIA innovated (the PPU).
They changed things forever.
Now the box is open and there is no going back.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
We've gone through this many times already in other threads which you have participated in. If you forgot, that's on you. There is not ONE, not ONE, game where PhysX affects gameplay at all. We've already discussed the MANY games that use Havok and where it affects gameplay.

Well, since we have proven this to be either an outright lie or an uninformed false statement... I call your bluff and raise you this: "There is not ONE, not ONE, game where Havok affects graphics and immersion at all. We've already discussed the MANY games that use PhysX and where it affects graphics."

That is a lie today, and it would have been a lie in 2006.
Destructible architecture IS game-altering, whether you like it or not:
http://www.youtube.com/watch?v=d3TU65KaPXI

I've also mentioned I-Fluid a few times in such discussions, but that was ignored completely each and every time.

Add Crazy Machines 2 to the list.

http://www.guitarsolos.com/videos-crazy-machines-gameplay-%5BzhZd3WU5l38].cfm
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
And I wouldn't consider BioWare to be indie developers.

PS3 PhysX: 48 games
PS3 Havok: 82 games

360 PhysX: 63 games
360 Havok: 98 games

PC PhysX: 165 games
PC Havok: 75 games

If PhysX is free and the best physics engine, how come developers pay money to use Havok?

Why do developers still use ColdFusion? Because it is what they know.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
No advantage using NVIDIA cards vs AMD cards on those games, though.

Why should this matter? PhysX is an engine, middleware API, library and tool-set.

The discussion in these kinds of threads is:

a) Are physics effects accelerated by the GPU any more realistic than those run on the CPU?

Not necessarily, but one may be able to do more based on the raw potential of parallel processing.

b) Are the physics effects accelerated by the GPU faster than on the CPU?

Yes!

c) Are the GPU-accelerated PhysX effects worth the hassle of dropping frame rates and limiting your card choices?

Yes, for me, but this is subjective. I can always turn it off, but what if one desires it and has no choice at all? Proprietary tech is not ideal at times, because it brings division and fragmentation, but it also brings innovation and choice for some. Simply don't allow idealism to be the enemy of the good.

There is no doubt that the PhysX effects that are vendor agnostic (read: run on the CPU) are successful amongst developers; it is free and such.

But why is it free? What does NVIDIA get?

To bring awareness, add value to their brand name, and try to innovate physics.

Of course, what NVIDIA is interested in is getting PhysX games that actually require NVIDIA GPUs to run, not Dragon Age and Mass Effect kinds of games that run just fine on a system with an AMD graphics card.

The fact that PhysX is free for developers to implement is just NVIDIA's way of pushing for their ultimate goal: GPU PhysX.

I think their ultimate goal is to have more physics done on the GPU, whether by PhysX or any other vehicle. It's not like nVidia is anti-OpenCL or anti-Compute Shader, even though they have CUDA.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
I don't think Havok really "innovated" from 2000 to 2006.

Indeed.
Havok was responsible for the initial push of physics, with games like Max Payne 2 and Half-Life 2.
But that was ages ago. Games still use the exact same physics effects, just rigid bodies and ragdolls. Nothing new.
It's also not really surprising that such physics effects work well on today's CPUs. These games date from 2003/2004, before dual-core processors were around.
 

Outrage

Senior member
Oct 9, 1999
217
1
0
How do you explain Epic and the UT3 engine then?
Or the fact that Sony bought a "free-for-all-devs" PhysX license for the PS3?

Doesn't that support my argument? The only reason PhysX is in a few console titles is because it's free, or bundled with a game engine. The ones that pay for their physics engine choose Havok.

But could you link me to documentation that most console developers buy a Havok license?
Because this thread is full of false claims, so it would be nice to see a real, documented one...for once.

There is a link breaking down Havok and PhysX use in games in this thread.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,650
218
106
There is no difference between "software PhysX" and "hardware PhysX", or "CPU PhysX" or "GPU PhysX".
(...)
Basically all you see is ragdolls and small amounts of rigid-body physics. Yeah, CPUs can handle those, but they can handle those very well with PhysX running on a CPU as well. I-Fluid is a nice example of that.

So there is a difference.

Most games out there use PhysX simply as they would use any other physics engine that runs on the CPU.

So while PhysX is only one API, the requirements to run it can vary from "having a CPU" to "requiring an NVIDIA GPU", and that is how we split games that use PhysX into "uses PhysX but requires only a CPU" and "uses PhysX and requires an NVIDIA GPU". While there are loads of games using the former, not so many use the latter.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
So there is a difference.

Not on the PhysX side.
The developer just chose to use a certain type/amount of effects/detail that is suited to the extra grunt that a GPU or PPU delivers.
Just like how I can run Crysis in low detail and high detail.
The high-detail mode won't run very well on a low-end card, but that doesn't mean the high-detail mode runs with a completely different version of Direct3D, different code, different drivers, or whatnot. It just sends heavier graphics workloads to the videocard.

Heck, we've been through this same thing with 3d accelerators... with Direct3D/OpenGL working on a CPU, but not being fast enough to actually play any games designed for a hardware accelerator.
"But my CPU is 500 MHz, and I have MMX, how can it be slower than a piece of hardware that is specifically designed and optimized to render 3d graphics?"

Most games out there use PhysX simply as they would use any other physics engine that runs on the CPU.

I would say all of them do... except some of them use such a physics workload that it cannot be processed adequately by current CPUs.

So while PhysX is only one API, the requirements to run it can vary from "having a CPU" to "requiring an NVIDIA GPU", and that is how we split games that use PhysX into "uses PhysX but requires only a CPU" and "uses PhysX and requires an NVIDIA GPU". While there are loads of games using the former, not so many use the latter.

With the exception of some games that simply don't ALLOW you to run without a GPU/PPU, all (yes, ALL) PhysX games allow you to run the effects on a CPU as well (as I tried to explain: it's the same API; it can just transparently plug in another backend).
The CPU just isn't fast enough (which is the reason why some developers don't let you enable that extra workload on a CPU in the first place: it's pointless to try. Same reason why nobody has tried running Direct3D/OpenGL rendering on the CPU in years: pointless, CPUs are orders of magnitude too slow).
If it didn't run, people wouldn't be complaining that PhysX is slow and is not using their CPU optimally.
They complain about this because they CAN run games such as Mirror's Edge, Batman: AA, Cryostasis etc. in the 'GPU-enhanced' mode on the CPU. It just doesn't run very well... well, duh, you're a few GFLOPS short for starters.
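
For what it's worth, this transparent fallback is visible at the API level too. A sketch (again from memory of the 2.8-era SDK, so treat the exact names as approximate): the application asks whether a hardware backend exists and picks the software one otherwise, with no other code changes.

#include "NxPhysics.h"

NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);

// Is a hardware (PPU/GPU) backend present? If not, fall back to the CPU.
bool hwAvailable = (sdk->getHWVersion() != NX_HW_VERSION_NONE);

NxSceneDesc sceneDesc;
sceneDesc.simType = hwAvailable ? NX_SIMULATION_HW : NX_SIMULATION_SW;
NxScene* scene = sdk->createScene(sceneDesc); // everything else is unchanged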
 

taserbro

Senior member
Jun 3, 2010
216
0
76
Doesn't that support my argument? The only reason PhysX is in a few console titles is because it's free, or bundled with a game engine. The ones that pay for their physics engine choose Havok.

There is a link breaking down Havok and PhysX use in games in this thread.

The article you got your numbers from answers your question.
The reason PhysX wasn't as present on consoles, despite being free, is that the previous iterations of its SDKs were still unoptimized for consoles, and developers were married to their current (and, I'd wager, console-port-compatible) workflows, including the "[...] orientation on consoles, best-in-class developer support" features that Havok offers. Basically, it's the exact same reason why you don't see AMD GPU physics yet: lack of quality dev support.

In this case, nVidia is making active efforts to remedy the situation, as their new SDK proves; that directly corroborates SirPauly's assessment as applied to their console SDK features, and I quote,
"Or maybe, just maybe, nVidia has work to do and it simply takes time, resources and hard work to address many aspects of Physics? SSE and improved threading are coming, from my understanding, and PhysX needs to evolve and mature like anything else."
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
This is something that makes no sense to me. But it might give me some insight into why people can't get their heads around these things.
Let me try to explain:
PhysX is an API.
There is no difference between "software PhysX" and "hardware PhysX", or "CPU PhysX" or "GPU PhysX".
There are just multiple backends ('drivers') for executing PhysX code. Compare this, for example, with Direct3D or OpenGL.
If you write a game using Direct3D or OpenGL, it can run on a variety of hardware without changing anything. AMD provides a backend for their Radeons in their drivers, nVidia for their GeForces, etc.

In case you still don't understand: Radeons, GeForces and other videocards have virtually NOTHING in common. It's not like a videocard 'does' Direct3D or OpenGL... no, these are APIs, high-level abstractions of the operations a GPU is required to perform. Pretty much like how websites use standard languages such as HTML, which is defined to render in a certain way. Firefox, Chrome, IE, etc. all use their own renderers, completely independent of the others, yet they are capable of displaying the same websites, because they understand the same HTML commands.

This is the same with PhysX. By default, you will have a CPU backend on a PC. If you have an nVidia videocard, you will also get a GPU backend, and you can choose between the two. If you have an Ageia PPU, you will have a PPU backend to choose from.

Now, the thing with GPU and PPU acceleration is that they can handle a lot more physics operations per second, because they have a lot more raw processing power than even the fastest CPUs in the world (if you read my article closely, I refer to that; the GFLOPS ratings are about a factor of 10 apart). They are also pretty efficient at doing physics, so this raw processing power translates well to practical situations.
So, when you use a GPU or PPU, you can process a lot more PhysX.

Game developers use this to add extra content/detail, specifically tailored for high-speed GPU/PPU processing. Obviously if you run THAT on your CPU, it's going to crawl.
Is that nVidia's/PhysX's fault? No, your CPU is just a factor of 10 slower (see above); what did you expect? People who cry "yeah, but you can get a 6-core CPU now for less than $300, and all these cores could do PhysX" simply don't get it. Sorry, but that's just how it is.
And people who cry "yeah, but Havok doesn't need a GPU for the same effects" don't get it either... Sorry... but get a clue about what physics effects ARE and how much processing power some of them cost, and then look closely at your beloved Havok games again. Do you see realtime cloth effects? Realtime smoke/fluid effects? Soft bodies?
The answer is a simple no.
Basically all you see is ragdolls and small amounts of rigid-body physics. Yeah, CPUs can handle those, but they can handle those very well with PhysX running on a CPU as well. I-Fluid is a nice example of that.

I learned quite a bit from this post :)

There is one thing that gets me about PhysX: when devs use PhysX so heavily that it's too slow to run on the CPU, the added physics effects don't add much to the game.

This is just hypothetical, but if PhysX were used in BC2 and there were GPU acceleration for it and you turned it on, you would likely just get extra rock chunks and glass, rather than truly dynamic destruction in place of the pre-calculated destruction.

I would like to know your opinion of how well GPU-accelerated PhysX is used in games at the moment.
 

extra

Golden Member
Dec 18, 1999
1,947
7
81
Well, it looks like it's going to be moot anyway; Nvidia is doing the right thing :p And yeah, SSE makes a huge difference (duh =p):

"The new PhysX SDK 2.8.4 comes with an optimized CPU cloth simulation path and is compiled with SSE2 option. Optimized CPU cloth simulation ? According to the test I did, this is true. The cloth sample shipped with PhysX SDK shows clearly the gain in performance. I tested on my dev system with a GTX 460 (R260.63) + Quad Core X 9650 @ 3.2GHz:

- PhysX 2.8.4: 443 FPS
- PhysX 2.8.3: 112 FPS"
 

Scali

Banned
Dec 3, 2004
2,495
0
0
There is one thing that gets me about PhysX: when devs use PhysX so heavily that it's too slow to run on the CPU, the added physics effects don't add much to the game.

Yes, but if you reverse the situation...
If we were to assume PhysX really IS 'hobbled' on the CPU, and all the effects added to PhysX titles are possible with properly optimized code, then why don't we see any of that?
Let's add up a few things here:
- Intel has acquired Havok
- Intel is the company that developed x86 and SSE, and probably has more knowledge of optimizing and compiling code for multicore x86 CPUs with SSE than any other company out there.
- nVidia's GPGPU acceleration is in direct competition with Intel's high-end CPU business

Shouldn't Intel have launched a counter-attack with lots of cloth, fluid, smoke and other eye-candy effects in Havok games by now? Both to pull the rug from under nVidia's PhysX, and to make games even more demanding, giving people more reason to upgrade to new Core i7s and future Sandy Bridge chips.

I would like to know your opinion of how well GPU accelerated physx is used atm in games.

I think a lot of it is not done very well. In fact, I think the biggest problem is that most PhysX titles aren't such great games in the first place.
I quite liked Mirror's Edge, though. I thought it was a good game in itself. And since I had a PhysX-capable GPU, I tried it with the extra PhysX effects on. The effect wasn't dramatic... but then again, when was the last time we saw dramatic changes in games? I think the last big change was when realtime dynamic shadows became commonplace... but even there, only a few games really made use of them in a way that impacted gameplay, such as Doom 3, where the light and shadow effects really created the atmosphere, and it just wasn't the same without them.
The closest thing to that in Mirror's Edge was the few times when you ran through a room while they were shooting through the glass. It really was a lot more 'claustrophobic' with PhysX turned on, with all those pieces of glass flying around your head.
The cloth effects really didn't have any function... but I thought it was pretty cool that the cloth was there, and was completely interactive, from a technical point-of-view.
Since the game ran fine with the effects on, I saw no reason to disable them.

I just don't think that nVidia can do all that much more with TWIMTBP. I mean, they're just trying to add some stuff to a game that is pretty much finished already (especially in the case of Mirror's Edge, which was basically a port of an existing console game).
If we want something better, we need a game that is written from the ground up with physics in mind (so that all the game logic and content is also in place), not added as an afterthought.
CellFactor is a nice example of what you can do when you design such a game from scratch.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Well, it looks like it's going to be moot anyway; Nvidia is doing the right thing :p And yeah, SSE makes a huge difference (duh =p):

"The new PhysX SDK 2.8.4 comes with an optimized CPU cloth simulation path and is compiled with SSE2 option. Optimized CPU cloth simulation ? According to the test I did, this is true. The cloth sample shipped with PhysX SDK shows clearly the gain in performance. I tested on my dev system with a GTX 460 (R260.63) + Quad Core X 9650 @ 3.2GHz:

- PhysX 2.8.4: 443 FPS
- PhysX 2.8.3: 112 FPS"

Is that just SSE though?
Or did they also modify the algorithm?
Or perhaps is it because the old version is single-threaded while the new one uses all 4 cores? That would be a more plausible explanation of a 4x speedup than SSE.
Perhaps it's a bit of everything.
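
For intuition on why threading alone could explain a 4x number on a quad-core: cloth integration is embarrassingly parallel, so splitting the per-particle loop across four cores approaches a fourfold speedup before any SSE enters the picture. A hypothetical sketch (not PhysX code; the particle update is invented purely for illustration):

#include <thread>
#include <vector>

struct Particle { float x, y, z, vx, vy, vz; };

// Integrate one slice of the particle array; each particle is independent.
static void integrateRange(std::vector<Particle>& p, size_t lo, size_t hi, float dt) {
    for (size_t i = lo; i < hi; ++i) {
        p[i].vy -= 9.81f * dt;  // gravity
        p[i].x  += p[i].vx * dt;
        p[i].y  += p[i].vy * dt;
        p[i].z  += p[i].vz * dt;
    }
}

// Split the same work across N cores: near-Nx throughput, no SSE required.
static void integrateAll(std::vector<Particle>& p, float dt, unsigned cores = 4) {
    std::vector<std::thread> workers;
    const size_t chunk = p.size() / cores;
    for (unsigned c = 0; c < cores; ++c) {
        size_t lo = c * chunk;
        size_t hi = (c + 1 == cores) ? p.size() : lo + chunk;
        workers.emplace_back(integrateRange, std::ref(p), lo, hi, dt);
    }
    for (auto& w : workers) w.join();
}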
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
Yes, but if you reverse the situation...
If we were to assume PhysX really IS 'hobbled' on the CPU, and all the effects added to PhysX titles are possible with properly optimized code, then why don't we see any of that?
Let's add up a few things here:
- Intel has acquired Havok
- Intel is the company that developed x86 and SSE, and probably has more knowledge of optimizing and compiling code for multicore x86 CPUs with SSE than any other company out there.
- nVidia's GPGPU acceleration is in direct competition with Intel's high-end CPU business

Shouldn't Intel have launched a counter-attack with lots of cloth, fluid, smoke and other eye-candy effects in Havok games by now? Both to pull the rug from under nVidia's PhysX, and to make games even more demanding, giving people more reason to upgrade to new Core i7s and future Sandy Bridge chips.

I think you misunderstood me. What I was saying was they could do a whole lot better with the extra power of a GPU, but they haven't.
 

zerocool84

Lifer
Nov 11, 2004
36,041
472
126
That is a lie today, and it would have been a lie in 2006.
Destructible architecture IS game-altering, whether you like it or not:
http://www.youtube.com/watch?v=d3TU65KaPXI



I just looked through this thread and I see nothing of that sort?
If it is posted by "Skurge" I cannot see it; he is on my ignore list, so could you please quote it for me?

And if you continue with lies and false statements I will ignore you too...

http://forums.anandtech.com/showthread.php?t=2102225

You participated in that thread. I don't care if you ignore me. Not trying to please anyone. Not my fault you can't remember what you talked about earlier this month.
 

TheRyuu

Diamond Member
Dec 3, 2005
5,479
14
81
I think people fail to realize that compilers suck at what they do, and because of this, they suck at vectorization.

So even if we assume icc > gcc > msvc it really doesn't matter because they all suck.

Simply turning on SSE2 in the compiler isn't going to do shit. I'm not surprised it only netted 8%.
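
To illustrate the point: merely compiling the scalar loop below with an SSE2 flag typically just replaces x87 with *scalar* SSE instructions, one float at a time. Getting four floats per instruction out of a compiler of that era usually meant writing the intrinsics yourself. A hypothetical sketch:

#include <emmintrin.h> // SSE2 intrinsics

// What the compiler sees: it will usually emit scalar SSE here, not vector code.
void scale_scalar(float* v, float s, int n) {
    for (int i = 0; i < n; ++i)
        v[i] *= s;
}

// What you had to write by hand to actually go 4-wide:
void scale_sse2(float* v, float s, int n) {
    const __m128 sv = _mm_set1_ps(s);
    int i = 0;
    for (; i + 4 <= n; i += 4)
        _mm_storeu_ps(v + i, _mm_mul_ps(_mm_loadu_ps(v + i), sv));
    for (; i < n; ++i) // scalar remainder
        v[i] *= s;
}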
 

Scali

Banned
Dec 3, 2004
2,495
0
0
I think you misunderstood me. What I was saying was they could do a whole lot better with the extra power of a GPU, but they haven't.

I think you misunderstood me.
The effects themselves are okay... it's just difficult to add something meaningful to a game after the fact. That doesn't mean the GPU physics themselves aren't great...

I mean, these effects show very realistic physics simulations:
http://www.youtube.com/watch?v=IyrJdrLpatA

But it seems pretty much impossible to use such effects for anything gameplay-related. And why should you, anyway?
And obviously these effects take a lot of processing power, not much you can do about that, other than continuing to develop faster processors.

But, has anyone seen anything like this in realtime on a CPU? Or an AMD GPU? I haven't. A CPU simply isn't fast enough.
Havok fluid does stuff like this:
http://www.youtube.com/watch?v=YWcwnLJKo2I
Yay, 200 particles... Yes, it was a P4 2.8 HT, so maybe current CPUs are, what, 10 times as fast? In a perfect world, with perfect scaling, you might be able to get 2,000 particles then... still way short of what nVidia demonstrates, with hundreds of thousands of particles.
And AMD simply doesn't have the resources to develop something like this.
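
For a sense of where that cost lives: a naive fluid step tests every particle against every other, so the work grows quadratically; even production SPH solvers, which use a spatial grid to visit only nearby particles, still do orders of magnitude more work per frame than rigid-body physics. A toy sketch of the naive inner loop (illustrative only, not Havok or PhysX code):

#include <vector>

struct FluidParticle { float x, y, z, density; };

// Naive O(n^2) density accumulation: 200 particles -> 40,000 pair tests;
// 200,000 particles -> 40,000,000,000. This is why big fluids are GPU territory.
void accumulateDensity(std::vector<FluidParticle>& ps, float h /* kernel radius */) {
    const float h2 = h * h;
    for (size_t i = 0; i < ps.size(); ++i) {
        ps[i].density = 0.0f;
        for (size_t j = 0; j < ps.size(); ++j) {
            float dx = ps[i].x - ps[j].x;
            float dy = ps[i].y - ps[j].y;
            float dz = ps[i].z - ps[j].z;
            float r2 = dx * dx + dy * dy + dz * dz;
            if (r2 < h2)
                ps[i].density += h2 - r2; // simplified smoothing kernel
        }
    }
}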