
"Mafia II: Perfection for NVidia's Graphics Plus Technologies?"

Personally, consoles, while less flexible, clearly do the job with less hassle. The PS3 with its nVidia GPU is a perfect example. For the money you'd spend on a new video card just to use PhysX, you can buy a new PS3 (or even a 360), which performs fine without major frame-rate drops, overheating, etc.

Did you say the word "console" in the video card forums? Did you say video card and PS3 in the same sentence?
Did you bring your flame suit?😀
 

Again with the same old link 🙄 TruForm existed way before that, not like nVidia's N-patches failure known as…



Yeah, Matrox was truly ahead of the game on that one. I will give you that.

Yeah, Matrox's Fragment Anti-Aliasing was also a great idea, though it wasn't well implemented.


Pretty much. :thumbsup:

Bad physics implementations and platform lock-down, harvested GPUs from top to bottom, lack of DX10.1 support for more than two years until the underperforming GT 2x0 series launched, an impractical 3D Vision implementation, the multi-monitor half-assed patchwork known as Surround Vision, doesn't have the fastest single card on the market, rebadging the old G92 into more than 7 SKUs, leader in high power consumption and heat dissipation, shoddy practices with Batman: AA, and best of all, the most promoted game currently: Mafia II, which uses nVidia's Graphics Plus Technologies with the best of the latest technologies like DX9, horrible performance and terrible multi-GPU scaling (an AMD Radeon HD 5850 matches the GTX 470!!! without APEX) and terrible, overdone PhysX effects, a game that barely matches the likes of The Elder Scrolls: Oblivion in graphical detail!!

Yeah, nVidia innovates, pretty much!!! :thumbsdown:
 
My thoughts on the article:

1. I could not tell the difference between the videos showing it with APEX and without APEX. Maybe the resolution of the video is too low to really see the difference?
All it does is make things smoother and add more crumbly bits to breaking objects. If it can be turned off, the physics it brings to the table is cosmetic BS, whether you can see it in the video or not.

If in-game physics does not affect game balance, or at least have the potential to (for instance, knocking over a crate full of stuff so that a melee enemy is forced to stay in front of your sights longer after you gain some distance), then it's a waste of processing power. The graphical uses of it need to be secondary, integrated into actual improvements to the game environment.

Wreckage said:
You mean like this patent they filed back in 2005?
No, I mean the cards that had it before then, with games supporting it, back in '01 and '02 (I distinctly recall SS:tSE using it).

...ah, Ninja'd! 🙁
 
Personally, consoles, while less flexible, clearly do the job with less hassle. The PS3 with its nVidia GPU is a perfect example. For the money you'd spend on a new video card just to use PhysX, you can buy a new PS3 (or even a 360), which performs fine without major frame-rate drops, overheating, etc.

Frame-rate drops most definitely exist on consoles. At times they get down to the low teens, judging by what I've seen with the naked eye.
 
What's with all the hate in this thread? No one is really contributing to it at all.

Bad physics implementations and platform lock-down
What is your definition of a good physics implementation? Does AMD or any other vendor have a better one? I'm referring to physics calculations done in real time during gameplay, not scripted effects or anything that falls into that category.

harvested GPUs from top to bottom
What has this got to do with innovation? I don't quite understand what harvested GPUs have to do with what you're trying to say.

lack of DX10.1 support for more than two years until the underperforming GT 2x0 series launched
DX10.1 did bring several improvements (I wouldn't say drastic ones), but nVIDIA didn't support it, so most game devs stuck with DX10. Sure, it's easy to blame nVIDIA for not pushing the technology, but much of that is the fault of AMD for not pushing the advantages of DX10.1 hard enough. They had the chance to play the leader rather than the victim, yet nothing much happened, which isn't surprising, because this isn't the first time something like this has happened.

an impractical 3D Vision implementation, the multi-monitor half-assed patchwork known as Surround Vision
Impractical? By your standards, or whose? They have a 3D Vision implementation that works well in some games and not in others, like Mafia II. Some people enjoy it, and others don't. Now the question is: where are the comparable technologies from the other IHVs? If there were a reference or competing solution to compare against, we could say that one implementation is better than the other, and decide on the practicality of said implementation.

It's rather absurd when you say NV Surround is "half-assed". What does this make Eyefinity?
Fully assed?

doesn't have the fastest single card on the market
Not sure what this has got to do with innovation.

rebadging the old G92 into more than 7 SKUs, leader in high power consumption and heat dissipation

The first statement is probably the only thing that supports your argument. But even then, it's simple enough to know that when there is no competition, things generally don't look too good for the consumer. As for your second statement, I'm sure you know that ain't the case.

shoddy practices with Batman: AA, and best of all, the most promoted game currently: Mafia II, which uses nVidia's Graphics Plus Technologies with the best of the latest technologies like DX9, horrible performance (an AMD Radeon HD 5850 matches the GTX 470!!! without APEX) and terrible, overdone PhysX effects, a game that can't even rival Oblivion!!

Drawing performance conclusions for new titles using drivers that have yet to be optimized is pretty silly. The new 260.xx driver branch is set to release within two weeks, as are the monthly Catalyst 10.8s.

Most of my replies here sound as if I were defending nVIDIA, but that isn't what I'm doing. Most of the arguments laid down are just mindless. There is no logical thought progression behind those statements at all. Especially with comments like

Yeah, nVidia innovates, pretty much!!!

I know your post was a bait post, maybe for Wreckage, but it does the thread no good. People can ignore users or complain in the appropriate places about these things.

Back to the OP: I read the review, and it was interesting to see that CPU PhysX performance isn't entirely crippled. It's also interesting to see a dedicated card as low-end as a GT240 providing a hefty performance increase. The thread could have been discussing these things, yet it spiraled down into a big ugly pissing contest.
 
What is your definition of a good physics implementation? Does AMD or any other vendor have a better one? I'm referring to physics calculations done in real time during gameplay, not scripted effects or anything that falls into that category.

Have you ever played Cryostasis? Mafia II? Mirror's Edge? Those effects are far from immersive and real; Batman: AA was the only one with effects that were mind-blowing.

What has this got to do with innovation? I don't quite understand what harvested GPUs have to do with what you're trying to say.

It was a riposte to Wreckage's continuous nVidia propaganda, which claims nVidia has the best products from top to bottom; comparing a GTX 460 to an HD 5870, he's smoking grass.

DX10.1 did bring several improvements (I wouldn't say drastic ones), but nVIDIA didn't support it, so most game devs stuck with DX10. Sure, it's easy to blame nVIDIA for not pushing the technology, but much of that is the fault of AMD for not pushing the advantages of DX10.1 hard enough. They had the chance to play the leader rather than the victim, yet nothing much happened, which isn't surprising, because this isn't the first time something like this has happened.

You are right on this one, but there are games that use the DX10.1 path automatically, like Metro 2033, Far Cry 2 and BFBC2; after all, DX10.1 is much closer to DX11 than standard DX10.

Impractical? By your standards, or whose? They have a 3D Vision implementation that works well in some games and not in others, like Mafia II. Some people enjoy it, and others don't. Now the question is: where are the comparable technologies from the other IHVs? If there were a reference or competing solution to compare against, we could say that one implementation is better than the other, and decide on the practicality of said implementation.

This was aimed directly at Wreckage's continuous nVidia 3D propaganda, thread derailing and trolling. AMD doesn't have its own, but with software like iZ3D it works with no issues. The real issue is that the performance impact is too great to be practical; you need an SLI setup. That's what I mean.

It's rather absurd when you say NV Surround is "half-assed". What does this make Eyefinity?
Fully assed?

With Eyefinity you can run up to three monitors on a single card; with nVidia, you can only use three monitors if you have an SLI setup. It's quite clear that nVidia did patchwork to compete against Eyefinity because it caught them by surprise.


Not sure what this has got to do with innovation.

It was aimed directly at Wreckage's continuous nVidia propaganda claiming nVidia is the innovator, and yet they can't even master GPU manufacturing: no full-fledged Fermi, the bump issue, etc. I don't call that innovation, and surely nobody thinks it's innovation to see a GPU maker unable to enable all its stream processors due to poor yields. That's a far cry from being the technology leader Wreckage always claims.


The first statement is probably the only thing that supports your argument. But even then, it's simple enough to know that when there is no competition, things generally don't look too good for the consumer. As for your second statement, I'm sure you know that ain't the case.



Drawing performance conclusions for new titles using drivers that have yet to be optimized is pretty silly. The new 260.xx driver branch is set to release within two weeks, as are the monthly Catalyst 10.8s.

Most of my replies here sound as if I were defending nVIDIA, but that isn't what I'm doing. Most of the arguments laid down are just mindless. There is no logical thought progression behind those statements at all. Especially with comments like



I know your post was a bait post, maybe for Wreckage, but it does the thread no good. People can ignore users or complain in the appropriate places about these things.

Back to the OP: I read the review, and it was interesting to see that CPU PhysX performance isn't entirely crippled. It's also interesting to see a dedicated card as low-end as a GT240 providing a hefty performance increase. The thread could have been discussing these things, yet it spiraled down into a big ugly pissing contest.

It's quite easy to point fingers at one side; now do the same to Wreckage and see how absurd his posts are. And yes, definitively, it was a post aimed at Wreckage's continuous FUD and marketing propaganda, which does an incredibly good job of driving people away from nVidia.
 
Overall, the only message I'm seeing is that somehow you feel it's your job to personally "mod" Wreckage? That's not your job. 🙄
 
With Eyefinity you can run up to three monitors on a single card; with nVidia, you can only use three monitors if you have an SLI setup. It's quite clear that nVidia did patchwork to compete against Eyefinity because it caught them by surprise.

The only difference is that AMD cards have the hardware to support more than three monitors. If NV Surround had compatibility issues and lacked performance compared to Eyefinity, then the term half-assed could hold some water with regard to this "patch". But seeing as it performs a lot better than CF Eyefinity and is generally well polished in all respects, I find it quite an impressive "patch", given that Eyefinity has been around for a long time now.

It's quite easy to point fingers at one side; now do the same to Wreckage and see how absurd his posts are. And yes, definitively, it was a post aimed at Wreckage's continuous FUD and marketing propaganda, which does an incredibly good job of driving people away from nVidia.

This is the solution to all your problems.

This message is hidden because Wreckage is on your ignore list.

If you think this isn't enough, or "worry" that AT users will be misguided by the troll, then obviously you can lodge a complaint of some sort with the moderators of these forums. But I'm guessing you've already tried this a million times 😛
 
So in a thread about NVIDIA, I had the audacity to say positive things about NVIDIA. Because of this you felt it was your duty to crap all over the thread?

That's pretty fucked up. :thumbsdown:

Sucks when your own medicine is used against you, eh?

Besides, crapping all over a thread that had nothing but crap in it anyway makes no difference in the end.
 
So in a thread about NVIDIA, I had the audacity to say positive things about NVIDIA. Because of this you felt it was your duty to crap all over the thread?

That's pretty fucked up. :thumbsdown:

Nope, you are really fucked up. 😛

Overall, the only message I'm seeing is that somehow you feel it's your job to personally "mod" Wreckage? That's not your job. 🙄

And neither is it your job to care about him or about what I do. 😉

The only difference is that AMD cards have the hardware to support more than three monitors. If NV Surround had compatibility issues and lacked performance compared to Eyefinity, then the term half-assed could hold some water with regard to this "patch". But seeing as it performs a lot better than CF Eyefinity and is generally well polished in all respects, I find it quite an impressive "patch", given that Eyefinity has been around for a long time now.

I really doubt that. Two HD 5870s with six monitors offer a far more spectacular experience than the three monitors a GTX 480 SLI can offer. But of course the GTX 480 SLI setup is going to be generally faster, simply because it's more powerful and it's rendering fewer pixels (three monitors against six).

If you think this isn't enough, or "worry" that AT users will be misguided by the troll, then obviously you can lodge a complaint of some sort with the moderators of these forums. But I'm guessing you've already tried this a million times 😛

😛

Sucks when your own medicine is used against you, eh?

Besides, crapping all over a thread that had nothing but crap in it anyway makes no difference in the end.

The only difference is that I post the truth, with links and real talk; he just uses propaganda and twists words in such a way that only those knowledgeable about hardware can tell he's an nVidia shill.
 
If PhysX was really innovative, nVidia would make it available to everyone (ATI owners with a dedicated PhysX card, plus optimization for better CPU performance) so more developers would actually "want" to implement it (without... monetary persuasion). But nVidia doesn't care about innovation here, just competition. They want you to buy their hardware, and only theirs. Greed is not innovation.
 
If PhysX was really innovative, nVidia would make it available to everyone (ATI owners with a dedicated PhysX card, plus optimization for better CPU performance) so more developers would actually "want" to implement it (without... monetary persuasion). But nVidia doesn't care about innovation here, just competition. They want you to buy their hardware, and only theirs. Greed is not innovation.
Agreed. They could even work with the engine folks to integrate it and charge a licensing fee: making it available, plastering their name everywhere, making money from it, letting it work well on anyone's hardware (non-vector CPU code is not doing that), and still making it something game makers might actually build an important part of the game around. It really would be possible to have their cake, eat it too, and share pieces with others, without sacrificing profit, if they were so inclined. Given how it seems to be supported on the consoles, I don't think there's any reason for it not to be that way except their desire for an nVidia-hardware-centric hype machine. Technically, I know, PhysX is free to get and use, but nVidia makes it so that it's primarily advantageous to use it with their other middleware and their support, and they also haven't bothered to optimize it for CPU use.
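
A side note on that "non-vector CPU code" point, since it's the crux of the complaint: PhysX's x86 path was widely reported to run scalar x87 math instead of SSE. As a rough illustration only (this is not PhysX source, just a generic particle update), here's the kind of gap vectorization closes:

```cpp
// Hypothetical illustration only -- not PhysX code. Shows the scalar-vs-SSE
// gap the "non-vector CPU code" complaint refers to.
#include <xmmintrin.h>  // SSE intrinsics
#include <cstddef>

// Scalar update: one position/velocity pair per iteration.
void integrate_scalar(float* pos, const float* vel, float dt, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        pos[i] += vel[i] * dt;
}

// SSE update: four floats per iteration. Assumes n is a multiple of 4
// and both arrays are 16-byte aligned.
void integrate_sse(float* pos, const float* vel, float dt, std::size_t n) {
    const __m128 vdt = _mm_set1_ps(dt);
    for (std::size_t i = 0; i < n; i += 4) {
        __m128 p = _mm_load_ps(pos + i);
        __m128 v = _mm_load_ps(vel + i);
        _mm_store_ps(pos + i, _mm_add_ps(p, _mm_mul_ps(v, vdt)));
    }
}
```

The SSE loop retires four lanes per iteration; leaving that on the table while the GPU path gets hand-tuned is exactly the "hasn't bothered to optimize it for CPU use" argument.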

Now that physics effects like throwing things around with the Havok engine have begun to lose their gimmick appeal (mind you, I loved the gimmick!), physics as part of the game from the very beginning needs to be the way of the future (i.e., anything less stout than many-foot-thick concrete walls should be modifiable/destructible, at least as a goal, before looming performance issues make you scale back)... not adding PhysX because nVidia convinced your managers with some back-room TWIMTBP-type deal.
 
If PhysX was really innovative, nVidia would make it available to everyone (ATI owners with a dedicated PhysX card, plus optimization for better CPU performance) so more developers would actually "want" to implement it (without... monetary persuasion). But nVidia doesn't care about innovation here, just competition. They want you to buy their hardware, and only theirs. Greed is not innovation.

I'm going to play devil's advocate here.

The word "innovation" can mean different things to different people. Your definition of innovation sounds more along the lines of charity. While we're at it, let's have Intel open-source x86 so they can truly "innovate" the current CPU market.
 
If someone wants to see CPU physics done right, check out the video below:

http://www.youtube.com/watch?v=WTFQp625FqI

This video is:

1) slowed down so we can see the effects more easily, and
2) extremely overdone physics-wise so that it's also easier to notice (among other things)

This thing blows PhysX and Havok away so hard that those two probably hide in a corner and cry 😀 And that's still CryENGINE 2, btw.

Also, Crytek's 3D implementation in CryENGINE 3 doesn't slow things down to a crawl. It doesn't render each eye's frame from the ground up; it does some funky magic in the framebuffer, copying and shifting with just minor calculations, and it runs fine on an Xbox 360 and PS3 (which have weak components compared to this generation of PC hardware).
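
(For the curious: that "copy and shift" is what's usually called image-space reprojection. Instead of rendering the scene twice, you synthesize the second eye's view from the first eye's color and depth buffers. A minimal sketch of the idea, with made-up names and a deliberately simplified disparity formula; this is not Crytek's actual code:)

```cpp
// Rough sketch of depth-based stereo reprojection (the "copy and shift"
// trick). Buffer names and the disparity formula are simplified
// illustrations, not Crytek's implementation.
#include <vector>
#include <cstdint>

void reproject_right_eye(const std::vector<uint32_t>& leftColor,
                         const std::vector<float>& leftDepth, // 0..1, 1 = far
                         std::vector<uint32_t>& rightColor,
                         int width, int height,
                         float maxDisparityPx) // screen-space eye separation
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int src = y * width + x;
            // Near pixels shift more than far ones; depth 1.0 doesn't move.
            int shift = static_cast<int>(maxDisparityPx * (1.0f - leftDepth[src]));
            int dx = x + shift;
            if (dx >= 0 && dx < width)
                rightColor[y * width + dx] = leftColor[src];
        }
    }
    // A real implementation also fills the holes this leaves behind
    // (disocclusions), typically by stretching neighboring background pixels.
}
```

Those disocclusion holes are the hard part, and filling them sensibly is why engine-level support beats a driver-level hack.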

So while yes, nVidia does offer 3D in past and future games at the driver level, the performance hit is a deal-breaker. The link in the OP is a perfect example. The engine needs to support this, so it understands the scene and doesn't kill performance. Physics can also be done excellently without PhysX, run on the CPU, and offer great results (proof above). Since PhysX needs to be built into a game, I don't really see much use for it if it locks half of PC gamers out. I really hope CryENGINE 3 becomes the new UE3, as it offers everything and runs on both AMD and nVidia.

The Mafia II results from the OP are a bit surprising too. The performance drop with PhysX and 3D (not to mention Surround) is so insane that I don't really see who could use it (GTX 480 SLI owners, I guess). What's more surprising: since this is a heavily TWIMTBP-sponsored game, why isn't 3D done inside the engine? All in all, I'm disappointed. I'll probably get the game, but only at a nice 50%+ promo on Steam.
 

You know what, this is how it should be. Developers focus on building the engine and doing things like physics, 3D, etc., and AMD and nVidia focus on making that stuff run as fast as possible.


PS: CryENGINE is just amazing.
 
I may be reading this wrong, but TruForm was hardware N-patches in the Radeon 8500.

That is correct.
There were two types of high-order patches in Direct3D: N-patches and RT-patches (where RT came in regular and quintic Bézier/B-spline form).
TruForm implemented N-patches.
GeForce 3/4 implemented only RT-patches, although I believe they could emulate N-patches as well. These days nothing seems supported by the driver, according to D3DCapsViewer.
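
If anyone wants to check their own hardware, a minimal D3D9 sketch along these lines reads the same caps bits D3DCapsViewer displays; the relevant flags live in DevCaps (Windows, link against d3d9.lib):

```cpp
// Minimal sketch: query the Direct3D 9 caps bits for N-patch and RT-patch
// support, the same flags D3DCapsViewer shows.
#include <d3d9.h>
#include <cstdio>

int main() {
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DCAPS9 caps = {};
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    std::printf("N-patches:          %s\n",
        (caps.DevCaps & D3DDEVCAPS_NPATCHES) ? "yes" : "no");
    std::printf("RT-patches:         %s\n",
        (caps.DevCaps & D3DDEVCAPS_RTPATCHES) ? "yes" : "no");
    std::printf("Quintic RT-patches: %s\n",
        (caps.DevCaps & D3DDEVCAPS_QUINTICRTPATCHES) ? "yes" : "no");

    d3d->Release();
    return 0;
}
```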
 
If PhysX was really innovative, nVidia would make it available to everyone (ATI owners with a dedicated PhysX card, plus optimization for better CPU performance) so more developers would actually "want" to implement it (without... monetary persuasion). But nVidia doesn't care about innovation here, just competition. They want you to buy their hardware, and only theirs. Greed is not innovation.

PhysX is an engine, library, toolset and middleware that is multi-platform and multi-device -- much more than just GPU physics. In an idealistic view it would be great if everything were open and shared, and gamers would unite, hold hands and sing show tunes; but companies try to leverage, differentiate, invest and take risks, build awareness, and pursue different strategies. That's just the reality of things to my mind, and I simply don't get worked up over things of this nature.

To my mind, physics is the next frontier for improving fidelity, realism, gameplay, immersion and the overall gaming experience. This is going to take time, differentiation and proprietary solutions until there are open standards; it's all part of the chaos.

While proprietary solutions may bring awareness, innovation and choice, they may also bring chaos, division and fragmentation, which is certainly not ideal for everyone. But through the chaos, open standards may be forged over time, so developers and consumers can enjoy the fruits they bring.

Some of us don't let idealism be the enemy of the good.
 
The thing with CryPhysics is that it's implemented into the game very well.
The physics are part of the gameplay in a way... e.g., you can blow up houses and all that, and the shrapnel and debris can kill enemies... but if you turn down the physics, the game remains playable.

In theory this could be done with other physics APIs, but nobody has managed to build a game like Crysis where it actually works. So I think the engine/game design is at least as important as the physics API itself.
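
To make the "turn the physics down and it stays playable" point concrete: a scalable design separates gameplay-critical simulation from cosmetic debris and caps the latter by quality setting. A hypothetical sketch of such a scheme (all names invented; this is not actual CryENGINE code):

```cpp
// Hypothetical sketch of physics LOD: gameplay-critical objects always
// simulate; cosmetic debris is capped by the quality setting, so turning
// physics down degrades visuals, not game rules. All names are invented.
#include <vector>
#include <algorithm>

enum class PhysicsQuality { Low, Medium, High };

struct DebrisPiece { float x, y, z; float lifetime; };

class DebrisManager {
public:
    explicit DebrisManager(PhysicsQuality q) : quality_(q) {}

    // Cap simulated debris by quality; gameplay objects bypass this path.
    void spawnDebris(const std::vector<DebrisPiece>& pieces) {
        const std::size_t cap = maxDebris();
        for (const auto& p : pieces) {
            if (active_.size() >= cap)
                break;  // extra shrapnel is simply dropped on low settings
            active_.push_back(p);
        }
    }

    // Age debris and retire expired pieces.
    void update(float dt) {
        for (auto& p : active_) p.lifetime -= dt;
        active_.erase(std::remove_if(active_.begin(), active_.end(),
                          [](const DebrisPiece& p) { return p.lifetime <= 0; }),
                      active_.end());
    }

private:
    std::size_t maxDebris() const {
        switch (quality_) {
            case PhysicsQuality::Low:    return 50;
            case PhysicsQuality::Medium: return 300;
            case PhysicsQuality::High:   return 2000;
        }
        return 50;
    }

    PhysicsQuality quality_;
    std::vector<DebrisPiece> active_;
};
```

The key design choice is that the quality knob only touches the cosmetic pool, which is why the game stays playable at any setting.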
 