Physx on CUDA and its impact on RAM and more

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Reviews are starting to leak out about the nearly complete PhysX-on-CUDA port, which will allow every DX10-capable Nvidia card (GeForce 8 and above) to run PhysX thanks to its ability to execute C code on its shaders.
Nvidia replicated the Nehalem physics test that Intel ran at IDF: Nehalem was rendering 50,000-60,000 particles at 15-16 fps, a 9600GT with PhysX on CUDA was able to get 300 fps, and the G200 should get 600 fps on the same test. Sounds very promising. Moreover, with the widespread support expected (millions of users own 8th- and 9th-gen Nvidia cards), there are now supposedly over 180 games in development with PhysX support.
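What makes that particle test such a good fit for shaders is that each particle's update is independent of every other's, so the work is embarrassingly parallel. A minimal sketch in plain Python of the kind of per-particle step a CUDA thread would run (the constants, field layout, and bounce rule are illustrative assumptions, not Nvidia's actual kernel):

```python
# Sketch of a per-particle physics step; on CUDA, each particle
# would be handled by its own thread. All names are illustrative.

GRAVITY = -9.81   # m/s^2
DT = 1.0 / 60.0   # one 60 fps frame

def step_particle(pos, vel):
    """Euler-integrate one particle for one frame."""
    vx, vy, vz = vel
    vy += GRAVITY * DT                       # apply gravity
    x, y, z = pos
    pos = (x + vx * DT, y + vy * DT, z + vz * DT)
    if pos[1] < 0.0:                         # crude ground-plane bounce
        pos = (pos[0], 0.0, pos[2])
        vy = -vy * 0.5
    return pos, (vx, vy, vz)

# 50,000 independent particles -- the loop below is exactly what a
# GPU runs in parallel, one thread per particle.
particles = [((0.0, 10.0, 0.0), (1.0, 0.0, 0.0)) for _ in range(50_000)]
particles = [step_particle(p, v) for p, v in particles]
```

Since no particle reads another particle's state here, the loop parallelizes trivially, which is why a mid-range GPU can outrun a CPU on this workload by an order of magnitude.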

This raises an interesting question: what will this do to VRAM requirements and, to a lesser extent, to the card's bandwidth?

Will we now see PCIe 1.x cards with too little RAM choke? Will 512MB be too little, and will the "suckers" who paid for 1GB cards find their cards blazing?

This could explain why Nvidia was beefing up the shaders beyond what was reasonably needed.

Does anyone else have insights into this that I haven't thought of and want to share them?
 

v8envy

Platinum Member
Sep 7, 2002
2,720
0
0
Sure. There's finally a reason for PCIe 2.0 x16. The physics calculations have to go to the GPU and make their way back, so you're constantly schlepping a pretty decent amount of data up to the GPU to chew on and then right back down. And if it isn't that much data (tens as opposed to tens of thousands of objects), your CPU can handle it in its spare time anyway.
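To put rough numbers on that round trip, here is a back-of-envelope estimate of the per-frame PCIe traffic; the per-object size and frame rate are assumptions for illustration, not measured figures:

```python
# Rough, assumption-laden estimate of PCIe traffic for GPU physics.
# 32 bytes/object (position + velocity + padding) and 60 fps are
# illustrative guesses.

objects = 50_000
bytes_per_object = 32          # assumed: pos (12) + vel (12) + misc (8)
fps = 60

per_frame = objects * bytes_per_object * 2   # up to the GPU and back
per_second = per_frame * fps                 # 192 MB/s

pcie1_x16 = 4.0e9              # ~4 GB/s each way, PCIe 1.x x16
pcie2_x16 = 8.0e9              # ~8 GB/s each way, PCIe 2.0 x16

print(f"{per_second / 1e6:.0f} MB/s of physics traffic")
print(f"{per_second / pcie1_x16:.2%} of PCIe 1.x x16 capacity")
```

Under these assumptions the raw volume is a small fraction of even a PCIe 1.x x16 link; the harder constraint is that each round trip has to complete within the frame, which is where link latency matters.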

As far as RAM goes -- if you've got 320 megs, you've got a good reason to worry. It depends on how inefficient the physics algorithms are with their storage needs and how much texture data you've already got for the scene you're rendering. At first I'd say 512MB is a safe amount to have, since game devs will target it. As 1GB becomes more mainstream, well, you'll need to upgrade. And once again we'll see the 8800GTX pull even further away from the 9800GTX. =)
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Oh, here's a question I forgot: how will this affect AMD? Will AMD sue Nvidia over this? (Probably.)
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
Don't see how they could sue. It's all Nvidia's IP now.

Nvidia already said they would open PhysX up. It's up to AMD to support it if they want to. AMD's equivalent to CUDA, CTM, is only available on their dedicated HPC cards AFAIK.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Aha... OK, so Nvidia knows their stuff: they open it up, but it sets AMD back because AMD has to start developing from scratch to conform to the spec. That is a serious blow to AMD, but perfectly legal because Nvidia is not using "unfair business practices"...
I am surprised the EU hasn't cracked down on MS over DirectX yet...
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: taltamir
Aha... OK, so Nvidia knows their stuff: they open it up, but it sets AMD back because AMD has to start developing from scratch to conform to the spec. That is a serious blow to AMD, but perfectly legal because Nvidia is not using "unfair business practices"...
I am surprised the EU hasn't cracked down on MS over DirectX yet...

Let me clue you in... ATi might have been, but AMD is not stupid. If you can see it coming, you have to realize they are prepared.



ATi did it with multi-GPU; they couldn't quite get it right until after the merger.
As far as I'm concerned, that merger may have saved them both.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Of course AMD sees it; the falling person sees the ground as it approaches but is helpless to stop it...
Nvidia only recently purchased Ageia. PhysX was set to be another flop, a dead technology, but with Nvidia buying it, it is set to become the de facto standard in physics acceleration, and all Nvidia has to do is port existing C code to their own drivers using CUDA. That is relatively simple, and they already have working samples... It could take AMD years to catch up...

On the other hand, DirectX 11 is supposed to bring physics acceleration, so AMD could focus on that. With both Nvidia and AMD supporting DX11, developers will have to choose between DX11 physics (which everyone can use) and PhysX (which only Nvidia card owners can use).
It all depends on the timing. How long will it take AMD to implement PhysX? How long till DX11 arrives? How long till Nvidia finishes the port? How long till Fusion arrives?
 

Sylvanas

Diamond Member
Jan 20, 2004
3,752
0
0
I'd be interested to see what impact it has on PCIe 1.x and 2.0 cards... perhaps the extra RAM (768MB) on the G80 GTX might be its saving grace here. Nonetheless, it's my understanding that devs still have to incorporate it into their engines/games, and since PhysX titles (that are actually good) are scarce, I can't exactly see some sort of massive adoption of the new tech... UT3 would be a good candidate to test, though; I remember AT did a bench a while back with the standalone PhysX card and UT3, so perhaps we will see a follow-up with PhysCUDAx :p. As for AMD, I can't imagine they'd invest new resources and R&D into replicating a similar physics platform all in time BEFORE DX11, then do it all over again when DX11 arrives - that's just stupid.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: taltamir
Of course AMD sees it; the falling person sees the ground as it approaches but is helpless to stop it...
Nvidia only recently purchased Ageia. PhysX was set to be another flop, a dead technology, but with Nvidia buying it, it is set to become the de facto standard in physics acceleration, and all Nvidia has to do is port existing C code to their own drivers using CUDA. That is relatively simple, and they already have working samples... It could take AMD years to catch up...

On the other hand, DirectX 11 is supposed to bring physics acceleration, so AMD could focus on that. With both Nvidia and AMD supporting DX11, developers will have to choose between DX11 physics (which everyone can use) and PhysX (which only Nvidia card owners can use).
It all depends on the timing. How long will it take AMD to implement PhysX? How long till DX11 arrives? How long till Nvidia finishes the port? How long till Fusion arrives?

were you asking for the answers?



don't ask me :)
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
On the other hand, DirectX 11 is supposed to bring physics acceleration.

I have yet to see the details on this, but DirectX 11 could support the most advanced physics ever seen and still have all the calls mapped to the CPU. Also, if Nvidia is opening the standard up, it is entirely possible that MS may copy and paste their support into DirectX (DXTC is bit-for-bit S3TC, as an example of when they have done this in the past). I'm not saying either of these is what is going to happen, but MS rolling this into DX would seem to make a lot of sense (a turnkey solution, with someone else having spent years working out the bugs in advance).
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Originally posted by: taltamir
This raises an interesting question: what will this do to VRAM requirements and, to a lesser extent, to the card's bandwidth?

Will we now see PCIe 1.x cards with too little RAM choke? Will 512MB be too little, and will the "suckers" who paid for 1GB cards find their cards blazing?

This could explain why Nvidia was beefing up the shaders beyond what was reasonably needed.

Does anyone else have insights into this that I haven't thought of and want to share them?
It will very likely have little impact when it comes to memory and bandwidth. Keep in mind that the original PhysX hardware had 128MB of GDDR (3, IIRC) RAM with an effective clock of 733MHz on a 128-bit bus, for all of 12GB/sec. It's my understanding that most games did not come close to maxing out the RAM on the card; I think most ended up using around 64MB, which is a drop in the bucket for a GPU. It could potentially push a card over the edge, but that doesn't seem particularly likely.
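That 12GB/sec figure checks out directly from the quoted specs (effective clock times bus width in bytes):

```python
# Verify the quoted PhysX-card memory bandwidth from its specs.
effective_clock_hz = 733e6     # 733 MHz effective (DDR counted)
bus_width_bits = 128

bytes_per_transfer = bus_width_bits / 8   # 16 bytes per clock
bandwidth = effective_clock_hz * bytes_per_transfer

print(f"{bandwidth / 1e9:.1f} GB/s")      # ~11.7 GB/s, i.e. "all of 12GB/sec"
```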

Bandwidth could be a slightly bigger problem, since it's going to depend in part on the CUDA implementation. 12GB/sec isn't too much out of, say, the 8800GTX's memory bandwidth, but it could be a bigger issue on G92 cards. I'm not sure how much of the PhysX card's memory bandwidth ended up being used in an average game.
 

Magnulus

Member
Apr 16, 2004
36
0
0
Just because DX11 might have physics support is no guarantee that it will be used. DX7 had a whole audio library (I forget the name, but it was an EAX clone) that was used in a whopping total of one game. Developers may opt for PhysX instead. If CUDA PhysX works, it will turn out to be a more cost-effective solution anyway than relying on multi-core CPUs.

PhysX support is decent for a technology that only emerged a few years ago. There are a lot more games that use PhysX than you'd think; the only problem is that if all you play are AAA FPS games, you won't see them. There are several RPGs and puzzle games with PhysX support, for instance.
 

themisfit610

Golden Member
Apr 16, 2006
1,352
2
81
I would imagine latency, rather than overall bandwidth, will be much more important to GPU physics. Think about it: a video card works with textures, which are image data sent across uncompressed or losslessly compressed. Physics is mainly math calculations, so why would that imply a large volume of data, as opposed to bitmap graphics?
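That data-volume intuition can be sanity-checked with rough numbers (the per-body state size below is an assumed figure): the full state of tens of thousands of rigid bodies comes to less than a single uncompressed 1024x1024 texture.

```python
# Compare an assumed physics-state size against one uncompressed texture.
bodies = 50_000
bytes_per_body = 64            # assumed: transform + velocities + flags

physics_state = bodies * bytes_per_body   # ~3 MiB total
texture = 1024 * 1024 * 4                 # one 1024x1024 RGBA8 texture: 4 MiB

print(f"physics state: {physics_state / 2**20:.1f} MiB")
print(f"one texture:   {texture / 2**20:.1f} MiB")
```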

~MiSfit