PhysX and multi-core support

Status
Not open for further replies.

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
Hi folks,

Couldn't find any general PhysX threads (can't find yours anymore, Keys...), so I guess I'll start a new one with a question:

According to this interview with nVidia's PhysX person (after that long AMD interview that bashed PhysX's multi-core support), PhysX is actually very much multi-core capable and it's up to the devs, not nVidia, to enable that. The nVidia person claims this can be clearly observed in 3DMark Vantage, where hardware PhysX, when run on the CPU, will use up to 12 threads. Can someone test that? I don't have 3DMark Vantage and I'm on a 20GB / month connection, so I need to be pretty strict with what I download. Here's the relevant part of the interview:

Our PhysX SDK API is designed such that thread control is done explicitly by the application developer, not by the SDK functions themselves. One of the best examples is 3DMark Vantage, which can use 12 threads while running in software-only PhysX.

However, what bothers me is the next sentence:

This can easily be tested by anyone with a multi-core CPU system and a PhysX-capable GeForce GPU.

Why would you need a GeForce GPU to run hardware PhysX on the CPU? That kinda defeats the point of the whole idea (you already have a card that runs it better than a CPU, so why bother?)... What the AMD person was saying is that when you run the hardware-accelerated part in software mode (so on the CPU), it only uses one core (and you get 5-ish FPS). So either he didn't answer the question, didn't understand it (he is from the PhysX department, though...), or dodged it on purpose. Or the GeForce GPU part is not actually a requirement and hardware-based PhysX running on the CPU is multithreaded, which was the whole complaint... that it isn't.

Anyway, anyone with 3DMark Vantage mind running the PhysX part and checking Task Manager for core loads? It would be great if people with both GeForce and Radeon cards did it, so we could see if there's any difference. Perhaps the hardware-based PhysX running on the CPU is multithreaded only when an nVidia GPU is detected in the system (hey, it's far-fetched, I know, but you never know...)
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Before nVidia bought AGEIA, all PhysX implementations were multi-threaded. Besides Vantage, there's no current multi-core optimized PhysX implementation in games.
 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
Is there any visual difference between the test when you run it on the CPU and when you run it on the GPU? And I don't mean the speed.

If there's no difference - hardware-based PhysX running on the CPU can be multithreaded and it's the devs being lazy; the AMD guy was full of it and was lying.

If there is a difference - it's not the same PhysX, and the nVidia person didn't understand the issue (or didn't want to).

And thanks for checking :)

EDIT: Obviously, the question needs to be answered by someone with a GeForce card :)
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Is there any visual difference between the test when you run it on the CPU and when you run it on the GPU? And I don't mean the speed.

If there's no difference - hardware-based PhysX running on the CPU can be multithreaded and it's the devs being lazy; the AMD guy was full of it and was lying.

If there is a difference - it's not the same PhysX, and the nVidia person didn't understand the issue (or didn't want to).

And thanks for checking :)

EDIT: Obviously, the question needs to be answered by someone with a GeForce card :)

Definitely, since my AGEIA card only delivers playable framerates when the low option is on, a setting that isn't playable with no acceleration at all. The only effects are the way bodies react when shot and the debris that flies everywhere. Pretty unimpressive; I've seen better physics simulations with Havok in Wolfenstein 2009, a cheesy game too!
 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
Definitely, since my AGEIA card only delivers playable framerates when the low option is on, a setting that isn't playable with no acceleration at all. The only effects are the way bodies react when shot and the debris that flies everywhere. Pretty unimpressive; I've seen better physics simulations with Havok in Wolfenstein 2009, a cheesy game too!

My question was about 3DMark Vantage, as this is the application that supposedly shows hardware-based PhysX running multi-threaded on the CPU. And this is what I would like to find out. If PhysX looks the same in 3DMark Vantage with and without hardware acceleration available, then it's running the same physics engine under both conditions, which proves the nVidia person right and shows AMD being wrong - and yelling about something that's not nVidia's fault.

Hardware PhysX was created with parallelism in mind, so you will never get excellent results with it on a CPU. However, having it on low with "basic" PhysX effects (so with slightly better visuals than software PhysX) on a multi-core system should still provide a good experience, and if the answer to the above question is a YES (it looks the same), then only developers are to blame when even low-level hardware PhysX runs like crap.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
My question was about 3DMark Vantage, as this is the application that supposedly shows hardware-based PhysX running multi-threaded on the CPU. And this is what I would like to find out. If PhysX looks the same in 3DMark Vantage with and without hardware acceleration available, then it's running the same physics engine under both conditions, which proves the nVidia person right and shows AMD being wrong - and yelling about something that's not nVidia's fault.

Hardware PhysX was created with parallelism in mind, so you will never get excellent results with it on a CPU. However, having it on low with "basic" PhysX effects (so with slightly better visuals than software PhysX) on a multi-core system should still provide a good experience, and if the answer to the above question is a YES (it looks the same), then only developers are to blame when even low-level hardware PhysX runs like crap.

I did find a difference in Vantage in the PhysX test. With my quad core, I usually get 4 giant donuts (I don't know the name of those things the planes fly through); my brother-in-law, who owns a dual core, only gets two of those; and with my AGEIA card I get 7 of them and my CPU score increases a bit. Vantage implemented PhysX before nVidia's acquisition. But in summary, there's no difference in the remaining tests at all.

While I agree that GPU PhysX is very beneficial, bear in mind that GPUs are so massively parallel that they are incapable of efficiently executing branchy code like collision detection, which is essential for correct physics calculation, is very unpredictable, and is something only a CPU does well. The only application where I saw nice PhysX effects was Batman AA, and even that was glitchy. I don't see things that bounce infinitely or shake like a Parkinson's patient in Havok games...
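To put the "branchy code" point in concrete terms, here's a minimal, hypothetical broad-phase pass in C++ (illustrative only, not PhysX source; all names are made up): each box pair bails out of the overlap test at a data-dependent point, which a CPU's branch predictor handles cheaply but which forces a GPU's lockstep SIMT groups to diverge.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Axis-aligned bounding box.
struct Aabb { float min_x, max_x, min_y, max_y, min_z, max_z; };

// Early-out per axis: which branch is taken depends on the data, so
// neighboring pairs follow different paths through the function.
bool aabb_overlap(const Aabb& a, const Aabb& b) {
    if (a.max_x < b.min_x || b.max_x < a.min_x) return false;
    if (a.max_y < b.min_y || b.max_y < a.min_y) return false;
    if (a.max_z < b.min_z || b.max_z < a.min_z) return false;
    return true;
}

// Collect potentially colliding pairs. The per-pair work is tiny and
// branch-dominated, which is the worst case for wide SIMT hardware.
std::vector<std::pair<std::size_t, std::size_t>>
broad_phase(const std::vector<Aabb>& boxes) {
    std::vector<std::pair<std::size_t, std::size_t>> pairs;
    for (std::size_t i = 0; i < boxes.size(); ++i)
        for (std::size_t j = i + 1; j < boxes.size(); ++j)
            if (aabb_overlap(boxes[i], boxes[j]))
                pairs.emplace_back(i, j);
    return pairs;
}
```

On a CPU the early exits save work; on a GPU, threads in the same warp that exit early still wait for the slowest lane, which is one reason collision detection is a poor fit for massively parallel hardware.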
 
Last edited:

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
They pretty much explained it in the first quote. The API is thread-safe, but it doesn't natively multi-thread anything. The developer has to actually implement it that way themselves.

Developers probably aren't bothering to do this as often now that it's basically a fall-back mode for when GPU-accelerated physics isn't available.
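As a rough sketch of that split in responsibilities (plain C++ threads; the struct and function names here are made up for illustration, not real PhysX SDK calls): the engine-side step is safe to run over disjoint data, but it's the application that decides how many threads to spawn and how to carve up the work.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// A particle with 1-D position and velocity, just enough for a demo step.
struct Particle { float pos, vel; };

// Integrate one slice of the particle array. Each worker owns a disjoint
// slice, so this simple explicit-Euler step needs no locking.
void integrate_slice(std::vector<Particle>& ps, std::size_t begin,
                     std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i)
        ps[i].pos += ps[i].vel * dt;
}

// The application, not the engine, picks the thread count and partitioning.
void step_world(std::vector<Particle>& ps, float dt, unsigned num_threads) {
    std::vector<std::thread> workers;
    std::size_t chunk = (ps.size() + num_threads - 1) / num_threads;
    for (unsigned t = 0; t < num_threads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = std::min(ps.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(integrate_slice, std::ref(ps), begin, end, dt);
    }
    for (auto& w : workers) w.join();
}
```

Calling `step_world` with 1 thread or 8 produces identical results; only the application-chosen thread count changes, which is exactly why a title that never spawns the workers stays single-threaded no matter how capable the engine is.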

I'd have to imagine that if you're a developer choosing to implement PhysX, you're going to fall into one of two camps:

1) Your title uses a small degree of physics effects that aren't critical to gameplay.

2) You're using physics heavily or as a prime gameplay element.

Group 1 probably isn't using enough physics that the performance difference between software and hardware acceleration is a dealbreaker.

Group 2 (which is a much smaller group at this point) probably doesn't have a choice, as they need the performance, and isn't going to bother spending relatively expensive development time on a multi-threaded software implementation that will still be inferior to the hardware-accelerated option.

I am not sure how much easier multi-threading is on competing APIs like Havok or Bullet, but it would be nice if they all parallelized everything at the API level and didn't force the application developer to deal with it.
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
I did find a difference in Vantage in the PhysX test. With my quad core, I usually get 4 giant donuts (I don't know the name of those things the planes fly through); my brother-in-law, who owns a dual core, only gets two of those; and with my AGEIA card I get 7 of them and my CPU score increases a bit. Vantage implemented PhysX before nVidia's acquisition. But in summary, there's no difference in the remaining tests at all.

While I agree that GPU PhysX is very beneficial, bear in mind that GPUs are so massively parallel that they are incapable of efficiently executing branchy code like collision detection, which is essential for correct physics calculation, is very unpredictable, and is something only a CPU does well. The only application where I saw nice PhysX effects was Batman AA, and even that was glitchy. I don't see things that bounce infinitely or shake like a Parkinson's patient in Havok games...

I get 4 donuts also.
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
I don't see things that bounces infinitively or shakes like a Parkinson patient with Havok games...

I've seen those sorts of glitches in almost EVERY Havok game that I have played. Oblivion and Fallout 3 have issues where bodies will occasionally ragdoll hundreds of feet into the air. Red Faction: Guerrilla has vehicle doors that sometimes slam open and closed repeatedly. All 3 titles have corpses that sometimes get "the shakes" for no apparent reason.

Older Havok titles like Half Life 2 didn't show this as much because they didn't use Havok as heavily.
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
Before nVidia bought AGEIA, all PhysX implementations were multi-threaded. Besides Vantage, there's no current multi-core optimized PhysX implementation in games.

Bollocks, even RealityMark (AGEIA's PhysX benchmark) ran the physics single-threaded:
http://forums.guru3d.com/showpost.php?p=1974267&postcount=5

Huddy wasn't speaking the truth, sorry.

I dare you to find me a multi-CPU PhysX game or demo from AGEIA... I bet you cannot.
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
Why the rudeness? If you can't be polite then GTFO of the thread, thanks!

If you post facts I will be polite.
Post PR FUD (in contrast to the facts) and I won't be nice.

But did you have anything to say on-topic... or are you done with your attempt to hide the fact that your claims were false?

Ball in your court.

Please refrain from posts like this in the technical forums. -Admin DrPizza
 
Last edited by a moderator:

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
If you post facts I will be polite.
Post PR FUD (in contrast to the facts) and I won't be nice.

But did you have anything to say on-topic... or are you done with your attempt to hide the fact that your claims were false?

Ball in your court.

My facts are based on results on my computer, thank you. Demos like CellFactor ran in a single thread but used the AGEIA card for PhysX acceleration; later on I will install the demo and run it in software mode to see if your rant is true.
 
Last edited by a moderator:

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
People, be civil please. The AMD guy is pure PR. The nVidia guy is a manager from the PhysX department.

The gripe was with nVidia purposely making PhysX run on one CPU core when the hardware-based engine is running "in software mode", so on the CPU. That's not the case: based on 3DMark Vantage, we can conclude that such a scenario can be multithreaded and it's only up to the particular developer to properly implement this feature. Now, no idea if it can run fine even at low settings when made multi-core enabled, but since it doesn't really look that impressive (definitely seen before with other physics engines), I don't really see a reason why it shouldn't.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
I don't understand why anyone really cares about PhysX.
For it to really take off and make fundamental gameplay changes, it would have to be present in hardware form in the Xbox 360 and the PS3, because most games are multiplatform these days.
Since there is no hardware acceleration in either console, it won't matter for this generation.
The only way it could matter in the future is if both the next Playstation and next Xbox consoles get hardware-accelerated physics, which will likely end up being vendor/platform agnostic, meaning PhysX won't be the issue; or if NV gets the design wins for some of the hardware in both consoles; or if NV licenses a PhysX implementation to an ATI-powered console, in which case it's not an issue because PC ATI should then presumably be able to work with it.

Until consoles change to hardware PhysX, it's never going to be meaningful for the vast vast majority of good budget games.
 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
I don't understand why anyone really cares about PhysX.
For it to really take off and make fundamental gameplay changes, it would have to be present in hardware form in the Xbox 360 and the PS3, because most games are multiplatform these days.
Since there is no hardware acceleration in either console, it won't matter for this generation.
The only way it could matter in the future is if both the next Playstation and next Xbox consoles get hardware-accelerated physics, which will likely end up being vendor/platform agnostic, meaning PhysX won't be the issue; or if NV gets the design wins for some of the hardware in both consoles; or if NV licenses a PhysX implementation to an ATI-powered console, in which case it's not an issue because PC ATI should then presumably be able to work with it.

Until consoles change to hardware PhysX, it's never going to be meaningful for the vast vast majority of good budget games.

Those are valid points in general, but I'm afraid I'm missing the connection with the issue at hand. AMD's PR is trying to belittle PhysX and make it look bad, but they should stick to facts (even with PR twists) and not flat-out lie about their competition - especially in public interviews.

Making your graphs look bigger by using a funky scale or cleverly omitting information (or doing it in a very blunt way) is all fine. But lying about your competitor is just wrong.
 

at80eighty

Senior member
Jun 28, 2004
458
5
81
If you post facts I will be polite.
Post PR FUD (in contrast to the facts) and I won't be nice.

But did you have anything to say on-topic... or are you done with your attempt to hide the fact that your claims were false?

Ball in your court.



jeezus

video cards - srs f'n bzns
 

NIGELG

Senior member
Nov 4, 2009
852
31
91
PhysX has only caused division and controversy in the gaming community. I hope it dies a slow, horrible death... I don't mean physics in general, though.
 

NIGELG

Senior member
Nov 4, 2009
852
31
91
I hope it dies fast actually.

Well, if you look at it, just how many upcoming PC games are going to use GPU PhysX? AvP? BioShock 2? ME2? Maybe it will die unless Nvidia sells it to someone...
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
I hope it dies fast actually.

If it does die fast, I only hope that it's because open standards for GPU-accelerated physics completely take over. After having played Batman AA, Cryostasis, Mirror's Edge, and Darkest of Days with GPU PhysX, I think it's a damn good feature.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Until consoles change to hardware PhysX, it's never going to be meaningful for the vast vast majority of good budget games.


I disagree. AAA games like Mirror's Edge, Batman, and Dark Void have been released across several platforms and have been enhanced for the PC. Metro 2033 is an upcoming game that is going to use PhysX, and you can be sure that when Fermi hits, Nvidia will spill the beans on more upcoming AAA games using PhysX.

The current console cycle is expected to extend to 2012 or beyond. If developers aren't leveraging the abundant excess GPU power available on PCs by then, there won't be much to argue about or look forward to in the future of GPUs and PC gaming.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
The current consoles already support PhysX.

I expect it to be more popular than DX11 this year, just as it was more popular than DX10 last year.

With a string of successful titles already out there and more sure to come, it has nowhere to go but up.

Most major game developers have already announced some sort of support for PhysX over the past year or so.

Richard Huddy is really giving AMD a black eye by going around spreading all this FUD and misinformation.

ATI was offered PhysX and they refused it. They are the ones blocking the standard. How come it runs on everything else but ATI cards?
 
Dec 30, 2004
12,553
2
76
I did find a difference in Vantage in the PhysX test. With my quad core, I usually get 4 giant donuts (I don't know the name of those things the planes fly through); my brother-in-law, who owns a dual core, only gets two of those; and with my AGEIA card I get 7 of them and my CPU score increases a bit. Vantage implemented PhysX before nVidia's acquisition. But in summary, there's no difference in the remaining tests at all.

While I agree that GPU PhysX is very beneficial, bear in mind that GPUs are so massively parallel that they are incapable of efficiently executing branchy code like collision detection, which is essential for correct physics calculation, is very unpredictable, and is something only a CPU does well. The only application where I saw nice PhysX effects was Batman AA, and even that was glitchy. I don't see things that bounce infinitely or shake like a Parkinson's patient in Havok games...

Fermi has out-of-order execution; this should help collision detection performance. Branches will still take time, but the GPU won't sit around running NOPs.
 