NVIDIA, Epic add DX11 features to Unreal Engine 3

Page 3

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
Am I the only one who wants to see Nvidia PhysX die off, simply because it's not for everyone?

Gamers with an Intel IGP or AMD card won't get anything from these Nvidia optimisations. Havok is the better way to go when it comes to physics engines, simply because it can do more or less the same and everyone can use it. And with how CPUs are improving and getting more cores, taxing the GPU for PhysX isn't helping any, especially for games that aren't CPU-bound (which most aren't at high graphics levels).

Two arguments against it:
1) Not everyone can use it = nothing for others, because software developers are too lazy to do both.
2) Most games at high settings aren't CPU-bound, so why not let the CPU take the physics engine stress instead of the GPU (for better game performance)?

My god, the ignorance :mad:

1) Everyone can use PhysX.
It's just that GPUs are WAY faster than CPUs for physics calculations.
You would run at 1-2 FPS with advanced physics features on the CPU, where GPU physics would run at 30+ FPS.
NVIDIA states this.
Intel states this.
Even the head of Bullet physics states this.

2) Read 1... you are ignorant about the topic.


You need to stop making your posts be about bashing the knowledge-base of your fellow members and more about respectfully debating the merits of the contents of their posts.

This manner of posting is counter-productive and inflammatory. Please do not post in this manner. There is little to be gained by castigating your fellow members as "ignorant" and deriding their posts. It is insulting and it is against our forum rules.

This is a technical forum, not elementary school, and you are expected to conduct yourself accordingly.

Please familiarize yourself with the AnandTech Forum Guidelines:
We want to give all our members as much freedom as possible while maintaining an environment that encourages productive discussion. It is our desire to encourage our members to share their knowledge and experiences in order to benefit the rest of the community, while also providing a place for people to come and just hang out.

We also intend to encourage respect and responsibility among members in order to maintain order and civility. Our social forums will have a relaxed atmosphere, but other forums will be expected to remain on-topic and posts should be helpful, relevant and professional.

We ask for respect and common decency towards your fellow forum members.

(I'm going to keep quoting this same body of text, over and over again, because some of our VC&G forum members appear to have a real difficult time remembering it)

Moderator Idontcare
 
Last edited by a moderator:

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
UE3 has been able to use hardware PhysX since it was launched (UE3 that is, not PhysX), back when it was PPU only. Anandtech even did an article on it, saying that it could be the "big break" for PhysX.

http://www.anandtech.com/show/2393

That was over 3 years ago.

Also, Apex is used in Mafia 2 on PC, but (I think) not on consoles. All it can be used for is making things look prettier. Nothing game changing. Consoles are "holding back" PCs in this respect, but equally NV is holding back PCs. The only way for PhysX to become big is for it to be supported on both next generation consoles, which either requires it to be opened up, or for both next gen consoles to use NV hardware. In the case of the latter, it would be bad for PC gaming because NV could then get a stranglehold on these features on PC gaming, shutting AMD out.
If it's the former, then other physics solutions would either come to the fore, which could work on any hardware (e.g. something which works through OpenCL or similar), or NV will have to let everything use PhysX on both consoles and support third party hardware for it.


CPU physics runs on consoles; the ONLY difference between CPU and GPU PhysX is the performance.
And thus some developers choose to remove some physics features from PhysX when running on the CPU, due to performance constraints.
 

Veliko

Diamond Member
Feb 16, 2011
3,597
127
106
No compassion for other gamers? :p

1) There is no alternative to EAX; there is for PhysX.

2) Havok works as well or better, and works for all users instead of just Nvidia card owners. You win regardless of which solution is used, so why support the Nvidia feature? Unless you enjoy watching others not get physics engines?

What alternative is there to Physx for GPU-based physics calculation?
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
What alternative is there to Physx for GPU-based physics calculation?

The OpenCL Bullet physics engine.
Or Havok on the CPU? Why does the physics engine have to be on the GPU?


1) Everyone can use PhysX.
It's just that GPUs are WAY faster than CPUs for physics calculations.
You would run at 1-2 FPS with advanced physics features on the CPU, where GPU physics would run at 30+ FPS.
NVIDIA states this.
Intel states this.
Even the head of Bullet physics states this.

1) Everyone can NOT use PhysX. You must have an Nvidia card as your primary card to run it.

2) CPU PhysX performance isn't as bad as you say:
[Metro 2033 FPS benchmark chart]



39 fps with GPU, 32 fps with CPU (the GPU is 7 fps faster).
With stronger CPUs than a Phenom II X6, like a Bulldozer, the difference might be less.

Also, a lot of people think Nvidia doesn't care about optimising their PhysX code to work on CPUs; if they did, chances are the CPUs might actually run it better :p
 
Last edited:

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
What? If you mean that they are going to add tessellation to the point that no one can use it, then yes, you may be right.

Either way, if you feel the tessellation level being used in a game is too high, you can easily tone it down now on AMD cards via the tessellation slider in CCC.

[Screenshot of the tessellation slider in Catalyst Control Center]


Wrong.
When you hit AMD's tessellation "wall", you can cut back on the tessellation the developer wanted... in order to make up for AMD's deficiency.
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
The OpenCL Bullet physics engine.
Or Havok on the CPU? Why does the physics engine have to be on the GPU?


1) Everyone can NOT use PhysX. You must have an Nvidia card as your primary card to run it.

2) CPU PhysX performance isn't as bad as you say:
[Metro 2033 FPS benchmark chart]



39 fps with GPU, 32 fps with CPU (the GPU is 7 fps faster).
With stronger CPUs than a Phenom II X6, like a Bulldozer, the difference might be less.

Also, a lot of people think Nvidia doesn't care about optimising their PhysX code to work on CPUs; if they did, chances are the CPUs might actually run it better :p

I highlighted your lie.
It must be a lie, since you keep posting it while having been informed otherwise.

And I can prove it.

1) Download EVE Online.
2) Make a trial account.
3) The character creator runs Carbon, which is based on APEX/PhysX.

If you can run it without an NVIDIA GPU... you are lying.

Ball in your court.


You need to stop making your posts be about bashing the knowledge-base of your fellow members and more about respectfully debating the merits of the contents of their posts.

This manner of posting is counter-productive and inflammatory. Please do not post in this manner.

There is little to be gained by castigating your fellow members as "lying" and deriding their posts. It is insulting and it is against our forum rules.

This is a technical forum, not elementary school, and you are expected to conduct yourself accordingly.

Please familiarize yourself with the AnandTech Forum Guidelines.

(I'm going to keep quoting this same body of text, over and over again, because some of our VC&G forum members appear to have a real difficult time remembering it)

Moderator Idontcare
 
Last edited by a moderator:

Veliko

Diamond Member
Feb 16, 2011
3,597
127
106
The OpenCL Bullet physics engine.
Or Havok on the CPU? Why does the physics engine have to be on the GPU?

You get better performance from the dedicated hardware on the GPU than you do from the CPU.

How does OpenCL perform compared to PhysX?
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
Really? I just thought Nvidia PhysX only ran on Nvidia cards... guess I'm wrong then.
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
Asking "Why should Physx be run on the GPU?" is like asking "Why should games be hardware accelerated?" At least before DirectX became dominant, games could be run on the CPU. They just ran a whole lot faster on hardware.
 

PingviN

Golden Member
Nov 3, 2009
1,848
13
81
Wrong.
When you hit AMD's tessellation "wall", you can cut back on the tessellation the developer wanted... in order to make up for AMD's deficiency.

Since when is it a bad idea to be able to lower a setting to improve performance? I don't see you crying about people running Medium or Low settings.

And your yadda-yadda-yadda about how AMD's tessellation sucks is really getting old. Care to change the record already?
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Is this yet another case of Nvidia doing more and getting penalized for it because AMD is lacking? Yes. Yes it is. I could swear that this forum recently went through an EPIC battle over this very thing.
Honestly folks, this is the kind of argument all of us "enthusiasts" should avoid. There are people here that would try to convince you that doing more, or exceeding what the competition does, is bad. Go say a few Hail Mary's and repent for what you have done.
Seriously folks, you really need to either get over all this crap, or go out and buy from the company that seems to try to do it all. Not the one holding the rest of us back.

/keysplayr
 
Last edited:

badb0y

Diamond Member
Feb 22, 2010
4,015
30
91
Is this yet another case of Nvidia doing more and getting penalized for it because AMD is lacking? Yes. Yes it is. I could swear that this forum recently went through an EPIC battle over this very thing.
Honestly folks, this is the kind of arguments all of us "enthusiasts" should avoid. There are people here that would try to convince you that doing more, or exceeding what the competition does, is bad. Go say a few Hail Mary's and repent for what you have done.
Seriously folks, you really need to either get over all this crap, or go out and buy from the company that seems to try to do it all. Not the one holding the rest of us back.

/keysplayr
I somewhat agree with you on this, but it's not as easy as pointing at AMD and saying "they are holding us back." Imho we are being held back by consoles, and no amount of money that AMD or nVidia spends is going to change that; for example, look at Crysis 2. nVidia paid $2 million just for a future patch that may incorporate DX11.


So as much as people like to point fingers at nVidia or AMD for holding us back, it's probably the consoles that are holding us back more than anything.

As for PhysX, it won't be a major feature until they get some developers on board who actually use it in a significant manner that affects gameplay; until that happens, it's still something that isn't as exciting as it has the potential to be.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
I don't agree with that. I don't believe that it's consoles holding something like DX11 Crysis 2 back from being made from the ground up. I think it's because of all the DX9/DX10 hardware out there that still needs to play this game. They want to sell the most copies of this game that they can, and to do that they need to cater to the widest audience possible, which may include consoles, but by no means exclusively consoles.
To me, consoles don't even exist. I'm a PC gamer and probably always will be, unless I can get equivalent graphics and keyboard and mouse on a future console. Maybe not even then.
 
Last edited:

itsmydamnation

Diamond Member
Feb 6, 2011
3,045
3,835
136
Turn off tessellation if it sucks so bad. Or don't buy an AMD card, and send them a message that they need to up their tessellation performance compared to the competition.

6900 tessellation performance is fine. Also, the smaller the triangle, the more inefficient rasterization becomes; AMD recommends an optimal level of 16x tessellation. Now here's a challenge for you: download the AMD hotfix driver, force tessellation to 16x, run Heaven on Extreme, and see if you can actually see a difference (apart from in wireframe mode).
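As a toy illustration of the triangle-size point above (back-of-the-envelope numbers only; real D3D11 inner/outer factors and adaptive schemes complicate this):

```python
# With a uniform integer tessellation factor N on a triangle patch,
# subdivision produces roughly N^2 small triangles, so the pixels
# covered per triangle shrink quadratically as the factor rises.

def triangles_per_patch(factor):
    # N divisions per edge -> N^2 sub-triangles (uniform subdivision).
    return factor * factor

def pixels_per_triangle(patch_screen_area_px, factor):
    # Average screen pixels each tiny triangle ends up covering.
    return patch_screen_area_px / triangles_per_patch(factor)
```

At factor 16, a patch covering 1024 on-screen pixels already averages only 4 pixels per triangle, which is the regime where rasterizers (which shade in 2x2 quads) start wasting a lot of work.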


edit: The problem with GPU physics is the reason people like John Carmack hate the idea: latency. Having the GPU do rigid-body physics (what's required for really interactive world-space physics) is a problem right now because GPUs have very bad per-"thread" latency; they rely on a massive amount of work in flight to hide that fact. But that doesn't work in this case. It does work for things like particles and cloth, and that's why that's where we see it now.
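The latency point in that edit can be sketched in a few lines of toy CPU-side code; this is not real engine code, just an illustration of dependent versus independent work:

```python
def update_particles(positions, velocities, dt):
    # Particles/cloth: every element updates independently, so a GPU can
    # throw thousands of threads at it and hide its per-thread latency.
    return [p + v * dt for p, v in zip(positions, velocities)]

def solve_contacts(penetrations, iterations=8, relaxation=0.5):
    # Rigid-body contact resolution is typically iterative: each sweep
    # depends on the result of the previous one, so the per-iteration
    # latency cannot be hidden by adding more parallel work.
    depths = list(penetrations)
    for _ in range(iterations):
        depths = [d * (1.0 - relaxation) for d in depths]
    return depths
```

The sequential dependency chain in the second function is the part that maps poorly onto a latency-hiding GPU, which is why shipped GPU PhysX effects have mostly been particles, cloth, and debris.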
 
Last edited:

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
So you're saying we technically "should" see a difference, because it's a hacked way of running tessellation for better performance? I don't get what you're saying otherwise.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
Is this yet another case of Nvidia doing more and getting penalized for it because AMD is lacking? Yes. Yes it is. I could swear that this forum recently went through an EPIC battle over this very thing.
Honestly folks, this is the kind of arguments all of us "enthusiasts" should avoid. There are people here that would try to convince you that doing more, or exceeding what the competition does, is bad. Go say a few Hail Mary's and repent for what you have done.
Seriously folks, you really need to either get over all this crap, or go out and buy from the company that seems to try to do it all. Not the one holding the rest of us back.

/keysplayr

1. Are you saying AMD does nothing to further PC gaming?
2. I'd like to see where someone is ripping on nV for doing more. What exactly is this "more"?
3. What exactly is AMD lacking in?

I could list both companies doing more for PC; could you show me where AMD does nothing?

AMD does just as much as nV if you dig; they just don't go out in the street and scream about it. I don't remember seeing you in those threads where AMD paid Codemasters to add DX11 to Dirt 2; I didn't see you come to their defense and say they were doing more when everyone was complaining that they delayed the game for nothing. I also don't remember seeing you defend AMD for adding DX10.1/11 into Stalker to enable AA, or for helping incorporate Eyefinity into games like Dirt 2, F1 2010, AC2, BC2, Supreme Commander 2 etc. so that the HUD is on the center monitor, or for helping add DX11 to Civ5, AvP, BC2, BattleForge, Dragon Age 2 and now Frostbite 2 for BF3, which is gonna look MEGA.

So tell me again, how is AMD lacking? Do you even notice when AMD decides to further PC gaming? Everything AMD does, and I mean EVERYTHING, also benefits nVidia users just as much as, or even more than, AMD users. Or do you only notice when AMD does it?

If nVidia wanted to improve PC gaming, they could port PhysX to OpenCL so even AMD GPUs could accelerate PhysX. But you know what? They don't, because they want to make as much money as possible. There is nothing wrong with that, but by doing that, they aren't doing more for PC gaming; they are holding it back.

Then there's hardware: what AMD (and Intel) are doing on the APU side of things is helping PC gaming a lot. A lot more people will be able to play modern games on midrange notebooks and entry-level PCs. That is a huge boost to PC gaming; it expands the user base much more, which would make devs focus on PC gaming a lot more than they do now.

So I ask again, how is AMD lacking?
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
So you're saying we technically "should" see a difference, because it's a hacked way of running tessellation for better performance? I don't get what you're saying otherwise.

That's exactly the same as all the optimisations AMD and nVidia run for filtering and whatnot. AMD found a way to improve performance without affecting image quality. I don't care if the devs intended for it to run at 64 billion samples; if I set it to 16 and get better performance without a loss in image quality, then the devs are doing something seriously wrong.

And you know what's the best part? It's completely user-adjustable, so if you don't want it, you can turn it off. Awesome, hey?
 

badb0y

Diamond Member
Feb 22, 2010
4,015
30
91
I don't agree with that. I don't believe that it's consoles holding something like DX11 Crysis 2 back from being made from the ground up. I think it's because of all the DX9/DX10 hardware out there that still needs to play this game. They want to sell the most copies of this game that they can, and to do that they need to cater to the widest audience possible, which may include consoles, but by no means exclusively consoles.
To me, consoles don't even exist. I'm a PC gamer and probably always will be, unless I can get equivalent graphics and keyboard and mouse on a future console. Maybe not even then.

Either way, you should be blaming the market, not AMD or nVidia, for the lack of advancement.
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
You get better performance from the dedicated hardware on the GPU than you do from the CPU.

How does OpenCL perform compared to PhysX?

Not this again.

OpenCL <-> CUDA <-> DirectCompute

Those are APIs, and they can be compared with each other.

PhysX <-> Havok <-> Bullet

Those are physics engines, and they can be compared with each other.
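The API-versus-engine distinction can be sketched like this (all names below are made up for illustration; no real engine works exactly this way):

```python
class CpuBackend:
    # Stands in for plain CPU code.
    def dispatch(self, work):
        return [w * 2 for w in work]

class GpuBackend:
    # Stands in for a CUDA/OpenCL/DirectCompute kernel launch:
    # same results as the CPU path, just (much) faster in practice.
    def dispatch(self, work):
        return [w * 2 for w in work]

class ToyPhysicsEngine:
    # The engine (think PhysX/Havok/Bullet) defines WHAT is simulated;
    # the compute API underneath defines WHERE the math runs.
    def __init__(self, backend):
        self.backend = backend

    def step(self, bodies):
        return self.backend.dispatch(bodies)
```

An engine can be compared with another engine, and a backend with another backend, but "OpenCL vs. PhysX" mixes the two layers.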
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
Really? I just thought Nvidia PhysX only ran on Nvidia cards... guess I'm wrong then.

Yes, you are wrong... about everything you posted in this thread, so keep that in mind before posting about a topic you are ill-informed about next time.

In all the threads I have seen against PhysX, 90%+ of the arguments against it have been based on ignorance... it's really sad.


You need to stop making your posts be about bashing the knowledge-base of your fellow members and more about respectfully debating the merits of the contents of their posts.

This manner of posting is counter-productive and inflammatory. Please do not post in this manner.

This is a technical forum, not elementary school, and you are expected to conduct yourself accordingly.

Please familiarize yourself with the AnandTech Forum Guidelines.

(I'm going to keep quoting this same body of text, over and over again, because some of our VC&G forum members appear to have a real difficult time remembering it)

Moderator Idontcare
 
Last edited by a moderator:

wahdangun

Golden Member
Feb 3, 2011
1,007
148
106
To be honest, I don't care about Nvidia PhysX. I mean, look at BF:BC2 or the Red Faction games; they look more realistic than the cloth simulation in Mirror's Edge.

And Nvidia's tessellation implementation is stupid. Are they thinking everyone has a GTX 580? So what's the point of including tessellation on lower cards if they can't even use it?
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
Since when is it a bad idea to be able to lower a setting to improve performance? I don't see you crying about people running Medium or Low settings.

And your yadda-yadda-yadda about how AMD's tessellation sucks is really getting old. Care to change the record already?


Not this again... do I need to dig up the thread about tessellation again?! :eek:

Tessellation is already "dynamic".
On the fly.

The only reason for that setting in AMD's driver is a lack of tessellation performance.

This is just the same old broken tune from AMD fans in order to justify the lower performance. :thumbsdown:
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
Not this again... do I need to dig up the thread about tessellation again?! :eek:

Tessellation is already "dynamic".
On the fly.

The only reason for that setting in AMD's driver is a lack of tessellation performance.

This is just the same old broken tune from AMD fans in order to justify the lower performance. :thumbsdown:

No, people just don't care, because tessellation hasn't done anything outside of synthetic benchmarks. So nobody cares if AMD's tessellation performance is lower. It just doesn't matter.

Go ahead and say AMD was holding tessellation back because devs used AMD cards to develop for DX11, but then blame nVidia for not having their cards out sooner. AMD just struck the right balance for the amount of tessellation needed during the time their cards are being used. So they got it right, regardless of what you think.
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
To be honest, I don't care about Nvidia PhysX. I mean, look at BF:BC2 or the Red Faction games; they look more realistic than the cloth simulation in Mirror's Edge.

And Nvidia's tessellation implementation is stupid. Are they thinking everyone has a GTX 580? So what's the point of including tessellation on lower cards if they can't even use it?

I suggest a "challenge" to you.

Compare the destructible architecture in to the scripted "destructible" architecture in BF:BC2, and tell me again you see no "difference".

(Hint: every single hole in a wall in BF:BC2 is the SAME... NO variations... as it is scripted.)

And I can only laugh at your stab at NVIDIA's unified tessellation engine compared to AMD's limited fixed-function tessellation:

[Tessellation benchmark chart]

Where does NVIDIA's tessellation implementation look unusable? :thumbsdown:
 
Last edited:

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
No, people just don't care, because tessellation hasn't done anything outside of synthetic benchmarks. So nobody cares if AMD's tessellation performance is lower. It just doesn't matter.

Go ahead and say AMD was holding tessellation back because devs used AMD cards to develop for DX11, but then blame nVidia for not having their cards out sooner. AMD just struck the right balance for the amount of tessellation needed during the time their cards are being used. So they got it right, regardless of what you think.

It mattered to AMD pre-Fermi:
http://blogs.amd.com/play/2009/06/02/why-we-should-get-excited-about-directx-11/

The sky was the limit.

After Fermi, PR spin control went into action:
http://blogs.amd.com/play/2010/11/29/tessellation-for-all/

It's really this simple:

When AMD thought they would have the edge, due to their previous dabbles in tessellation, it was the most important factor of DX11.

When AMD realized their performance was subpar to NVIDIA's, there suddenly was such a thing as "too much tessellation"...

Think about it.
And then apply it to other features.

Too much AF...
Too much AA...
Too much... performance.

Hilarious.