4.5 years is a pretty long time in the PC world.
For something to still be in its infancy 4.5 years after it began isn't very good going.
Hasn't tessellation been around on GPUs since 2001-02? It is still in its infancy.
4.5 years is a pretty long time in the PC world.
For something to still be in its infancy 4.5 years after it began isn't very good going.
Really? When I enable physx in Mafia 2 on my system, it will literally cut my framerate by 50%. All the benchmarks out there of physx on high in this game show the same.
Considering the small amount of visual quality it actually adds to the game, it's a ridiculous performance hit. It's the same sort of performance hit you'd get from going from a resolution of 1680x1050 to 2560x1600 on the same video card setup.
That is what makes GPU physx a terrible feature and a non-starter in its current state. For such a small change in what you notice on your screen there is no reason for such huge performance costs. Until they fix that, gpu physx will continue to be a feature you see in one new game a year and something the buying market does not care about.
Not everyone is going to be using $1000 worth of video card hardware, as much as nvidia would like that.
And I generally got about 40fps with it on high, not 60.
Credit to nvidia for riding physx in on the coat-tails of a good game; it was a smart move, but it still doesn't make the feature any less crap.
Since when has PC gaming been assessed on the standards of consoles?
Many times it was not playable. Feel free to enable physx on high in Mafia 2 and turn up the rest of the graphics settings, then come and share your experience. I'd like to hear your results.
Mafia 2 and the physx in it helped to further my opinion of physx being a resource pig considering the minimal additions it brought to the game. It's supposed to be the best example of it yet, is it not?
I got a good chuckle when I walked around in a room and Awkward Looking Rock Chunk #754 jumped up from the floor and hit the ceiling when I walked over it.
Even more of a chuckle when I saw my framerate tank in a game that looked on par with a console port graphically, but some extra debris on the ground necessitated a 50% performance hit. At that point I turned physx off and enjoyed the rest of the game and didn't notice much difference.
There are only about 10 games that use GPU physx, so there is a very limited base of games to draw examples from. Some of those are not even true games but free demos from nvidia, or just one level in a game.
That's the beauty of this entire situation. You had the choice to enable or disable it, and you chose to disable it. At least we're given a choice, rather than someone, somewhere up high saying "50% is too much! Just get rid of it!"
How many DX11 games are there?
And how does DX11 alter the experience?
4.5 years is a pretty long time in the PC world.
For something to still be in its infancy 4.5 years after it began isn't very good going.
I just completed "Mafia II" today.
My rig is an i7 920 @ 3.5GHz with a BFG GTX285 OC.
I had no issues.
I must ask you, because I need to be certain:
You have no idea how much computation physics requires, do you?
I have tried several savegames with and without PhysX, here is what I noticed:
- Clothing sticks to characters in an unnatural way and behaves far too stiffly.
- The prerendered smoke looks static and fake.
- The missing debris makes the world look static and dead... matter just disappears, as if into some freaky strange dimension.
- Explosions don't interact with their surroundings.
How many DX11 games are there?
And how does DX11 alter the experience?
That is irrelevant, though. What does the amount of computation matter to someone playing a game with it enabled, when it brings little to the table and cuts framerates in half? Sounds like it needs a lot of work then.
Hasn't tessellation been around on GPUs since 2001-02? It is still in its infancy.
I kinda count from when nVidia purchased Ageia as the start of the timeline and their vision, but others may differ.
I am disappointed, though, in the number of titles; I was hoping for one AAA title every month.
http://www.anandtech.com/show/2001/5 - 2006, the first step, 4.5 years ago. "Granted, the improved visuals aren't the holy grail of game physics, but this is an excellent first step."
I do agree PhysX as a whole needs more work and time to evolve and mature, and may not be ideal for everyone. But it is a choice, an attempt to innovate in physics and bring some value for GeForce owners; to try to get the ball rolling on GPU physics at least and get content in there.
I'll give you impossible to prove. Such situations will also not be great for Altivec, either. It's not like if they implemented it in a way optimized for modern CPUs, and an updated version of Mafia II came out, I'd be able to play it with CPU PhysX--the non-gameplay effects are too much, even if it could provide double performance. But games where it is necessary would be able to run much better, and be able to cram more effects in, before reaching some point of unacceptable performance.
This is impossible to prove.
If you know anything about the inner workings of physics simulations, you'll probably agree with me. It's not a deterministic process, but an iterative process.
The number of iterations has a huge effect on both the performance and the stability/accuracy of the solution.
So doing an apples-to-apples comparison between two different physics APIs is impossible.
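To make that concrete, here is a minimal, hypothetical sketch (not taken from PhysX or any other real engine) of the kind of iterative contact-solver loop many rigid-body engines use. The iteration count is a tuning knob that trades stability and accuracy against per-frame CPU time, which is why two engines tuned differently are not doing comparable amounts of work.

#include <vector>

struct Body {
    float invMass;
    float velocity;   // one-dimensional stand-in for a full velocity state
};

struct Contact {
    int a, b;         // indices of the two bodies in contact
};

// More iterations give a stiffer, more accurate result, but cost linearly more CPU time.
void solveContacts(std::vector<Body>& bodies, const std::vector<Contact>& contacts, int iterations)
{
    for (int it = 0; it < iterations; ++it) {
        for (const Contact& c : contacts) {
            Body& A = bodies[c.a];
            Body& B = bodies[c.b];
            float relVel = B.velocity - A.velocity;            // closing speed at the contact
            float impulse = -relVel / (A.invMass + B.invMass); // impulse that cancels it this pass
            A.velocity -= impulse * A.invMass;
            B.velocity += impulse * B.invMass;
        }
    }
    // Cost is roughly O(iterations * contacts) per frame, so halving the iteration
    // count halves the work but also makes stacks and cloth visibly spongier.
}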
It failed.
Then it went away and came back in a new guise, one that brought standardisation and allowed it to be supported by anyone who wanted to make a DX11-compatible graphics card.
When it was in its infancy, it had no standards and wasn't part of a standard API which anyone could make hardware for.
See how it goes?
I'd have a much better opinion of physx if it was not so taxing. As it stands now it looks as if nvidia needs 2 or 3 years to make hardware that can handle running it in its current form. And that would be dependent on games being no more demanding in any other way graphically than they are today.
If they took physx to a level where it was truly game changing and brought some real immersion (for example, you walk into a building and can blow a wall away with your big weapon and then walk through the opening, or smash a table to pieces, pick up one of the legs and proceed to beat your enemies with it), I think the performance hit would be insane going by how big it is now.
I also think that there is going to be no forward progress with gpu physics game adoption as long as it is not something available to any video card, regardless of who made it. I think gpu physics will likely become used more widely, but I don't see physx being the standard.
I don't think Nvidia will have a problem supporting a standard, whether it is in OpenCL or a Microsoft standard. AMD, on the other hand...
I'd have a much better opinion of physx if it was not so taxing. As it stands now it looks as if nvidia needs 2 or 3 years to make hardware that can handle running it in its current form. And that would be dependent on games being no more demanding in any other way graphically than they are today.
If they took physx to a level where it was truly game changing and brought some real immersion (for example, you walk into a building and can blow a wall away with your big weapon and then walk through the opening, or smash a table to pieces, pick up one of the legs and proceed to beat your enemies with it), I think the performance hit would be insane going by how big it is now.
I also think that there is going to be no forward progress with gpu physics game adoption as long as it is not something available to any video card, regardless of who made it. I think gpu physics will likely become used more widely, but I don't see physx being the standard.
Uh, OK.
So they won't have a problem with it, they just aren't doing it. While AMD would have a problem with it, based on you saying so.
I'm sold.
That's why we see a port of PhysX to OpenCL/DirectCompute in the works.
Download AMD's drivers and see whether you get OpenCL out of the box without having to install an SDK. Do the same for Nvidia. Nvidia is and has been supporting OpenCL far more than AMD. Scali can certainly chime in on this. That is what I base my opinion on for this matter.
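For anyone who wants to check that themselves, a small, hypothetical test program against the standard OpenCL C API will do: it simply asks the ICD loader which platforms the installed drivers expose (you still need the OpenCL headers and loader library to build it, but no vendor SDK runtime to run it).

#include <CL/cl.h>
#include <cstdio>

int main()
{
    cl_uint count = 0;
    // Ask the ICD loader how many OpenCL platforms the installed drivers expose.
    if (clGetPlatformIDs(0, nullptr, &count) != CL_SUCCESS || count == 0) {
        std::printf("No OpenCL platform exposed by the installed drivers.\n");
        return 1;
    }
    cl_platform_id platforms[8];
    if (count > 8) count = 8;
    clGetPlatformIDs(count, platforms, nullptr);
    for (cl_uint i = 0; i < count; ++i) {
        char name[256] = {0};
        char version[256] = {0};
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, nullptr);
        clGetPlatformInfo(platforms[i], CL_PLATFORM_VERSION, sizeof(version), version, nullptr);
        std::printf("Platform %u: %s (%s)\n", i, name, version);
    }
    return 0;
}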
In short, DX11 can make a game noticeably more realistic looking.
PhysX, not so much (at least in the current implementations). The fact that you have countless people asking what enabling PhysX even does when they turn it on and don't see much of a difference further proves that point.
It doesn't really make PhysX look good when you say it's doing a huge number of calculations on various debris and things, and that is why the performance hit is so severe, when the actual visual difference is minuscule; that's called a waste of resources.
As far as PhysX being deliberately crippled on the CPU, of course it is. You'll never encounter code that is easier to multi-thread than PhysX code (they have no problem running it on hundreds of cores on the GPU), yet they force it to run on just one thread even on 12-thread CPUs. PhysX performance on the CPU could be increased by hundreds of percent if that were Nvidia's goal.
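As a rough illustration of why effects physics parallelises so readily, here is a hypothetical sketch (not NVIDIA's code) of a debris-particle update spread across all available CPU threads; each particle is independent, so no locking is needed.

#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Particle { float x, y, z, vx, vy, vz; };

// Integrate one slice of the particle array; slices never overlap, so threads don't contend.
void integrateRange(std::vector<Particle>& p, std::size_t begin, std::size_t end, float dt)
{
    for (std::size_t i = begin; i < end; ++i) {
        p[i].vy -= 9.81f * dt;   // gravity
        p[i].x  += p[i].vx * dt;
        p[i].y  += p[i].vy * dt;
        p[i].z  += p[i].vz * dt;
    }
}

// Fan the work out across however many hardware threads are available.
void integrateAll(std::vector<Particle>& particles, float dt)
{
    unsigned threads = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (particles.size() + threads - 1) / threads;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < threads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = std::min(particles.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(integrateRange, std::ref(particles), begin, end, dt);
    }
    for (auto& w : workers) w.join();
}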
What were your framerates? I definitely had playability issues and am running a setup over 200% more powerful.
I'm not denying or claiming knowledge as to the amount of computation required for physx. It definitely has a whole lot going on, as it is unplayable at times in its most widely seen implementation to date, high settings in Mafia 2.
That is irrelevant, though. What does the amount of computation matter to someone playing a game with it enabled, when it brings little to the table and cuts framerates in half? Sounds like it needs a lot of work then.
I've seen all those effects implemented to one degree or another in games using physics on the CPU. The reason you don't see any of that in Mafia 2 when you disable physx is because they gutted any physics implementation once you disable physx.
What does this have to do with anything?
Physx has been around for over 4 years. GPU physx is averaging 2 games a year. Why is it not being adopted more widely?
Take a game like Crysis back when it was released: you booted it up that first time, put everything on Very High in DX10 with some AA, and got poor framerates even on the best hardware, but what you did see was incredibly impressive, amazing visuals. One could accept the poor performance because the visual immersion was amazing.
When you load up Mafia 2 with physx on high and get poor performance, it makes no sense, because there is nothing amazing going on and it is not bringing a level of immersion you have not seen before.
I don't think Nvidia will have a problem supporting a standard, whether it is in OpenCL or a Microsoft standard. AMD, on the other hand...
So nothing to do with PhysX at all?
Just checking because I was confused, since this thread is about PhysX, and NV recently said there was no reason for them to port PhysX to OpenCL.
As far as PhysX being deliberately crippled on the CPU, of course it is. You'll never encounter code that is easier to multi-thread than PhysX code (they have no problem running it on hundreds of cores on the GPU), yet they force it to run on just one thread even on 12-thread CPUs. PhysX performance on the CPU could be increased by hundreds of percent if that were Nvidia's goal.
Any particular reason why that's your timeline?
Big-name support came pre-NV (through Unreal Engine integration).
