
AMD's Richard Huddy on DirectX 11, Eyefinity, and the competition


Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
Am I the only one who notices no difference between PhysX physics and CPU-run ones? The physics in Crysis are excellent, done on the CPU.
 

tommo123

Platinum Member
Sep 25, 2005
2,617
48
91
What I wanted to ask but forgot to (so might as well do it now) is regarding PhysX in Batman: AA.

I played it on the 360 first and got about halfway before having to give it back to a friend, so I bought it on Steam. Thing is, the physics looked the same to me - referring to the 'alley' scene with papers/boxes etc. on the ground. Was that just run on the CPU then? (I have an ATI card.)
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
Well, PhysX, for starters, forces you to run its runtime installer; there's no way, shape, or form for you to cook the libraries you need into your own package. Purely idiotic.

BTW Bullet, another physics engine, looks promising: http://bulletphysics.org/wordpress/
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
What I wanted to ask but forgot to (so might as well do it now) is regarding PhysX in Batman: AA.

I played it on the 360 first and got about halfway before having to give it back to a friend, so I bought it on Steam. Thing is, the physics looked the same to me - referring to the 'alley' scene with papers/boxes etc. on the ground. Was that just run on the CPU then? (I have an ATI card.)

Correct. Even if you had a secondary NV card it would run in software - some time ago Nvidia decided to screw everyone who does not have an NV card as their primary VGA, including me, an original AGEIA PhysX PCI card owner.

The main problem with this is that NV not only disabled GPU acceleration and reverted to software (CPU) mode, but also disabled multithreading, just to further penalize you for not having an NV card.

PhysX was and still is perfectly capable of taking advantage of your multicore CPU - but alas, NV turned that off too.
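Just to show what that multithreading would look like, here's a minimal sketch of CPU physics split across cores - plain C++ with made-up types, NOT the actual PhysX SDK:

[code]
// Minimal sketch: integrating rigid bodies across N worker threads.
// All names here are hypothetical - this is not the PhysX API.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Body { float pos[3]; float vel[3]; };

// Integrate a contiguous range of bodies; each body is independent.
void integrateRange(std::vector<Body>& bodies, std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i)
        for (int k = 0; k < 3; ++k)
            bodies[i].pos[k] += bodies[i].vel[k] * dt;
}

// Split the work over numThreads (assumed >= 1) worker threads.
void integrateAll(std::vector<Body>& bodies, float dt, unsigned numThreads) {
    std::vector<std::thread> workers;
    const std::size_t chunk = bodies.size() / numThreads + 1;
    for (unsigned t = 0; t < numThreads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end   = std::min(begin + chunk, bodies.size());
        if (begin < end)
            workers.emplace_back(integrateRange, std::ref(bodies), begin, end, dt);
    }
    for (auto& w : workers) w.join();  // nothing stops this from using every core
}
[/code]

Nothing exotic there - which is the point: there's no technical reason the CPU path has to be single-threaded.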
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
Am I the only one who notices no difference between PhysX physics and CPU-run ones? The physics in Crysis are excellent, done on the CPU.

Yeah, Crysis's physics is software-only - Crytek's own proprietary system, integrated into CryEngine.
 

tommo123

Platinum Member
Sep 25, 2005
2,617
48
91
Correct. Even if you had a secondary NV card it would run in software - some time ago Nvidia decided to screw everyone who does not have an NV card as their primary VGA, including me, an original AGEIA PhysX PCI card owner.

The main problem with this is that NV not only disabled GPU acceleration and reverted to software (CPU) mode, but also disabled multithreading, just to further penalize you for not having an NV card.

PhysX was and still is perfectly capable of taking advantage of your multicore CPU - but alas, NV turned that off too.

Yeah, I remember something about that. Weird. Not exactly tactics that are going to make me run out and buy one of their cards.

Thing is, even running on 1 core, the PhysX effects looked the same as in the 360 version.

Surprised there wasn't a physics component added to DX11, tbh. MS knew it was already in use in some games. Maybe DX12.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
He's talking about Crysis.

(his claim) Am I the only one who notices no difference between PhysX physics and CPU-run ones? (the supporting statement) The physics in Crysis are excellent, done on the CPU.

What I wanted to ask but forgot to (so might as well do it now) is regarding PhysX in Batman: AA.

I played it on the 360 first and got about halfway before having to give it back to a friend, so I bought it on Steam. Thing is, the physics looked the same to me - referring to the 'alley' scene with papers/boxes etc. on the ground. Was that just run on the CPU then? (I have an ATI card.)
Make sure that PhysX is installed in your OS.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Am I the only one who notices no difference between PhysX physics and CPU-run ones? The physics in Crysis are excellent, done on the CPU.
There's zero difference. PhysX is a gimmick. If you run PhysX on a CPU only, it's gimped not because there isn't enough horsepower available, but because NVIDIA purposefully gimped the engine on CPUs so that it runs "so much better" on their graphics cards.

However, if you look at some of the better physics engines out there (the Velocity engine from Ghostbusters is one, but there are a few others), they call NVIDIA's bluff and show what imbeciles they are. CPUs today are more than powerful enough to run some insane physics (much more than the static stuff we see in most games) once it's coded properly.

The other reason there isn't much difference is that while the GPU is more powerful for physics just by architectural design, developers can only add gimmicky effects to games (a little water particle effect here, interactive smoke there, etc.) because no developer is dumb enough to make a game DEPEND on full PhysX acceleration. If they did, either NVIDIA would be forced to re-optimize PhysX for multicore use, which would screw them, or the game wouldn't make any money because the performance would suck.

Either way, if you keep supporting companies that actually have the balls to have faith in their products instead of playing this stupid game, you'll see this selfish, proprietary bullshit die off.

/rant
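For what it's worth, "coded properly" mostly comes down to data layout: keep the hot loop contiguous and branch-free so the compiler can vectorize it and the cores stay fed. A hypothetical sketch, not taken from any real engine:

[code]
// Structure-of-arrays particle state: contiguous float arrays that a
// compiler can auto-vectorize (SSE/AVX). Hypothetical example code.
#include <cstddef>
#include <vector>

struct Particles {
    // All arrays are assumed to have the same length.
    std::vector<float> x, y, z;     // positions
    std::vector<float> vx, vy, vz;  // velocities
};

void step(Particles& p, float dt, float gravity) {
    const std::size_t n = p.x.size();
    for (std::size_t i = 0; i < n; ++i) {  // straight-line loop: vectorizes well
        p.vy[i] += gravity * dt;           // apply gravity
        p.x[i]  += p.vx[i] * dt;           // advance positions
        p.y[i]  += p.vy[i] * dt;
        p.z[i]  += p.vz[i] * dt;
    }
}
[/code]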
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Correct. Even if you had a secondary NV card it would run in software - some time ago Nvidia decided to screw everyone who does not have an NV card as their primary VGA, including me, an original AGEIA PhysX PCI card owner.

The main problem with this is that NV not only disabled GPU acceleration and reverted to software (CPU) mode, but also disabled multithreading, just to further penalize you for not having an NV card.

PhysX was and still is perfectly capable of taking advantage of your multicore CPU - but alas, NV turned that off too.

Yeah, I'll agree with you 100% here. Nvidia's business practices are pretty shady. It'd be like AMD turning off 3D acceleration on your Radeon when the drivers detect an Intel motherboard/processor - after they already sold you the card and it's been working fine for months.
 

The Milkman

Junior Member
Dec 13, 2009
23
0
0
GPUs are the main bottleneck and the most expensive piece of hardware in today's gaming rigs. It's better to use the underutilized CPU for physics.
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
<weird stuff about Crysis and an irrelevant video about PhysX>

And I told him Crysis is not PhysX, so your video is irrelevant - his example has nothing to do with your 'difference'...

Make sure that PhysX is installed in your OS.

As I pointed out above, it's impossible to use PhysX without deploying its runtime installer first.
Yes, lousy and annoying, I know, but hey, it's Nvidia after all... ;)
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
You stop lying.

Most AAA titles use Havok, supported by ATI.

You stop trolling.

Show me a list of GPU-physics Havok games... supported by ATI.

Again, this is from 2006:
http://www.youtube.com/watch?v=-x0S0b_eG_M
All talk... and nothing to show.

So why don't you stop with the lies?

AMD (or ATI) doesn't have a single piece of GPU-physics software for consumers on the market... and never has.

I look forward to your list :)
 

PingviN

Golden Member
Nov 3, 2009
1,848
13
81
Does it matter if the physics are GPU-accelerated or not, if they look the same? If PhysX only brings dynamic particles and cloth to the table (pun intended), is it really that big of a deal?

Crysis displayed great physics.
Ghostbusters displayed even greater physics.
Nvidia crippled PhysX so it can't run on multicore CPUs, when they are clearly powerful enough to do so.
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
Show me a list of GPU-physics Havok games... supported by ATI.

Again, this is from 2006:
http://www.youtube.com/watch?v=-x0S0b_eG_M
All talk... and nothing to show.

So why don't you stop with the lies?

AMD (or ATI) doesn't have a single piece of GPU-physics software for consumers on the market... and never has.

I look forward to your list :)

Seriously asking: do you have some reading comprehension problems? :\

Havok, by default, is HARDWARE-NEUTRAL - thus ALL HAVOK GAMES support ATI. Which game does it on the GPU and which on the CPU is up to the developers. It is not like PhysX, with its crappy 'do-it-this-way-or-the-highway' Nvidia mentality, nor is it disabled or crippled in any way when it runs on the CPU, unlike PhysX.

Which part of this are you unable to grasp?
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
Does it matter if the physics are GPU-accelerated or not, if they look the same? If PhysX only brings dynamic particles and cloth to the table (pun intended), is it really that big of a deal?

Crysis displayed great physics.
Ghostbusters displayed even greater physics.
Nvidia crippled PhysX so it can't run on multicore CPUs, when they are clearly powerful enough to do so.
I think you are confused. In theory, a computer shouldn't use more than 10 watts and no heat sink should be required; clearly that is not the case. In theory, a quad-core CPU should be 4x as fast as a single core; that is not the case either. The point is, it isn't about the theory.

DirectX is just another program; it has bugs and limitations. One of the limitations is that it is single-threaded, meaning it can only occupy one core at a time. On a quad-core PC, 3 cores are doing nothing most of the time; that is why you often see CPU usage at 25%. In theory the other 75% of the CPU could be used by games, which would make a game up to 4 times faster, but this is not happening. One of the goals of DX10 was to fix that, which clearly is not working. There are 2 problems: first, users have different numbers of cores, and 2 cores is the norm, so writing for 2 cores would be best, but then 50% of a quad core is doing nothing. Should games be written to utilize 4 cores? Then an i7 will be at 50% and a dual core will run like crap.

In fact, dynamic OC (Turbo Boost) on the i7 is designed to help with exactly that, but you can't push a single core past 100%, meaning there are still unused resources. DX11 is further designed to fix the problem by running in 3 threads instead of 1, but only for programs that are built from the ground up on DX11.

The GPU is far less utilized than the CPU. To utilize the GPU, data must be fetched across the narrow PCI-E bus before the game starts, which is part of the long load screen. Even then, shaders can't simply take over work that is done on the CPU, because traditional methods assume there is only 1 core - so how can hundreds of shaders be utilized? The only way is to have an algorithm that can break one task into multiple independent tasks. That is why rendering is such a good task for the GPU: it can be done with matrices, and each calculation is independent once the proper matrices are defined. One input, many outputs, and those outputs either go to the screen or stay within the video card, which does not create traffic on the PCI-E bus. The problem is, shaders are idle most of the time waiting for the CPU to send data over, which is also known as a "CPU bottleneck". So 75% of the CPU is doing nothing, the other 25% is probably waiting for the north bridge to send data, but the north bridge is waiting for the south bridge to send data, and the south bridge is waiting for the HDD to send data - and the HDD is not the only thing connected to the south bridge.
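To make the "one input, many outputs" point concrete, here's a hypothetical sketch (plain C++ for illustration only) of the shape of work that maps well onto hundreds of shader cores:

[code]
// Hypothetical illustration: transforming every vertex by one matrix.
// Each output depends only on its own input, so the work splits cleanly
// across any number of cores - exactly the shape of work GPUs are built for.
#include <cstddef>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };  // row-major 4x4 transform

Vec4 transform(const Mat4& M, const Vec4& v) {
    return {
        M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w,
        M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w,
        M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w,
        M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w,
    };
}

// One input matrix, many independent outputs: no vertex ever waits on another.
void transformAll(const Mat4& M, const Vec4* in, Vec4* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = transform(M, in[i]);  // iterations are fully independent
}
[/code]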

The key point here is: even if you take the NB and SB out of the picture, you still only have 1-2 CPU cores working at most, and the hundreds of shader cores are not doing anything. What will you do? Wait and hope that DX12 will fix it?

ATI saw this long ago and was trying to fill those shaders with tessellation. Again, it isn't plug and play; they implemented a hardware tessellator for it, which is what we have been discussing. The idea is good, but what if the tessellator itself becomes the bottleneck? Then shaders have to wait on both the tessellator and the CPU. Yes, they can theorycraft the whole thing out and build their hardware based on the theory, which they did. But did it attract developers? No. Why? Well, one reason is developers don't know how to use it, and another is that it creates a bottleneck. It wasn't ATI's intention to cut GITG (Get In The Game), but AMD doesn't see the need for GITG the way ATI did. Tessellation would have been refined many times over and used by many games if GITG were as big as TWIMTBP (The Way It's Meant To Be Played), and you would probably be saying "the CPU does tessellation as well as the GPU... no difference".

PhysX started out as theorycraft too, just not by Nvidia. If it weren't for TWIMTBP, no one would have bothered spending time on it. But now, games that feature PhysX can fit more physics in without creating a performance bottleneck, and some ATI users see the need too, buying a cheap Nvidia card just for that.

So, to answer your question: the effects don't look better, but the game can produce better FPS by offloading the work to the GPU, given that the GPU is not the bottleneck.
 

Outrage

Senior member
Oct 9, 1999
217
1
0
So wouldn't it be great if PhysX could be better optimized for multithreading? Then you could use the CPU to 100% instead of having cores sitting idle. But Nvidia doesn't want that; they want you to buy a more powerful video card.