Infernal Physics engine running on Core i7

alcoholbob

Diamond Member
May 24, 2005
6,390
469
126
Nice.

This is what I always thought. Why waste a PCI-E slot on a video card to do something it's not even designed to do? Most of the processing power is wasted. Hyperthreading and multiple cores, plus (in the case of the i7) a direct connection to the memory controller, seem ideal for the job. Taxing the PCI-E bus any more (with its high latency) seems like a bad idea IMO.
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
But at what FPS? Also, how complex can it be compared to running PhysX? And how many people have an i7 compared to an 8800 or better Nvidia card?
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Who cares how complex it is? I have a hunch that the most complex physics available on a GPU isn't exactly up to the level of mathematical accuracy you'd use in scientific simulations and modeling. I only care if it looks convincing enough to offer increased immersion in a game.
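To put a rough number on that hunch: GPU game physics generally runs in 32-bit float, while scientific codes use 64-bit double, and the gap shows up fast. A generic C++ demo (nothing from this engine, just single vs. double precision accumulating a timestep):

#include <cstdio>

int main() {
    float  f = 0.0f;
    double d = 0.0;
    // Accumulate the same small increment ten million times in each precision.
    for (int i = 0; i < 10000000; ++i) { f += 0.1f; d += 0.1; }
    // The float total drifts visibly away from the true 1,000,000;
    // the double stays essentially exact.
    std::printf("float: %.1f  double: %.1f\n", f, d);
    return 0;
}

The float total ends up several percent off. In a game nobody notices; in a scientific model that's useless.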
 

akugami

Diamond Member
Feb 14, 2005
6,210
2,552
136
I did see a few boxes move unrealistically here and there, but considering it's a demo, what they were able to do with it in such an unfinished state bodes well for the future of physics acceleration. I've always felt that the future of physics acceleration was a combination of the CPU and GPU acting in tandem.

Game engines will require more cores in the future, so it's unrealistic, as shown in that demo, to devote all of the CPU to physics. At the same time, GPUs will get better, and single-chip multi-GPU solutions aren't out of the question with a process shrink or two. After all, ATI is already relying on multiple GPUs for its higher-end cards. I can definitely see an Intel-style dual-core solution from ATI that essentially bonds two standalone GPU cores in one package. Chips with one bad core could be harvested for lower-end cards that only need one core, saving costs. Slap two of those together for quad-GPU performance.
 

Piuc2020

Golden Member
Nov 4, 2005
1,716
0
0
Originally posted by: Astrallite
Nice.

This is what I always thought. Why waste a PCI-E slot on a video card to do something it's not even designed to do? Most of the processing power is wasted. Hyperthreading and multiple cores, plus (in the case of the i7) a direct connection to the memory controller, seem ideal for the job. Taxing the PCI-E bus any more (with its high latency) seems like a bad idea IMO.

Actually, video cards are better suited for parallel processing.
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
www.neftastic.com
Originally posted by: Piuc2020
Originally posted by: Astrallite
Nice.

This is what I always thought. Why waste a PCI-E slot on a video card to do something it's not even designed to do? Most of the processing power is wasted. Hyperthreading and multiple cores, plus (in the case of the i7) a direct connection to the memory controller, seem ideal for the job. Taxing the PCI-E bus any more (with its high latency) seems like a bad idea IMO.

Actually, video cards are better suited for parallel processing.

Except for the fact that you need to preprocess data with the CPU, push it over the PCIe bus to the video card, let the video card crunch the data, then push it back over the PCIe bus to the CPU, then let the CPU integrate the physics data with the geometry information for the rendering pipeline, and then push it back yet again over the PCIe bus to the video card to render the final product onto the screen.
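Roughly what that per-frame round trip looks like, sketched in C++. Every name below is a made-up stand-in, not an actual PhysX or driver call:

#include <vector>

struct Body { float pos[3]; float vel[3]; };
typedef std::vector<Body> Batch;

Batch cpu_preprocess(const Batch& scene) { return scene; } // CPU packs physics inputs
void  gpu_upload(const Batch&) {}                          // PCIe hop #1: CPU -> GPU
void  gpu_step_physics() {}                                // GPU crunches the sim step
Batch gpu_readback() { return Batch(); }                   // PCIe hop #2: GPU -> CPU
void  cpu_integrate(Batch&, const Batch&) {}               // CPU merges results into geometry
void  gpu_render(const Batch&) {}                          // PCIe hop #3, then the final draw

void frame(Batch& scene) {
    gpu_upload(cpu_preprocess(scene));  // hop #1
    gpu_step_physics();
    Batch results = gpu_readback();     // hop #2
    cpu_integrate(scene, results);
    gpu_render(scene);                  // hop #3
}

That's three trips across the bus before a single pixel of the frame is drawn.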

PhysX has demonstrated there is both a performance gain AND a loss when using the video card to handle physics. The facts show the video card can indeed handle far more physics data (using the same API) than the CPU can (better frame rates with GPU PhysX than with CPU PhysX). However, they also show the video card takes a performance hit when doing so (lower frame rates with GPU PhysX versus no PhysX). It's a trade-off.

This demo clearly shows the CPU has enough processing power to handle physics calculations in realtime - at least to the level these developers want. It allows the video card every last drop of raw performance to be fully devoted to the rendering pipeline without any sacrifices. What it will mean in terms of final framerate and comparative quality of the scenes - we'll never know. All we have is a quote from them saying it works better (for their application). Whether they actually tested it or not, who knows.
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
Originally posted by: SunnyD



This demo clearly shows the CPU has enough processing power to handle physics calculations in realtime - at least to the level these developers want. It allows the video card every last drop of raw performance to be fully devoted to the rendering pipeline without any sacrifices. What it will mean in terms of final framerate and comparative quality of the scenes - we'll never know. All we have is a quote from them saying it works better (for their application). Whether they actually tested it or not, who knows.

Not really; that was a tech demo, and it fully loaded a quad-core, hyperthreaded i7. That means it didn't have all the AI and other background tasks that a full game would have running. It's really not all that impressive that it takes a high-end CPU selling for $300+ to equal what I was able to do in Cellfactor with a $200 PPU 2 or 3 years ago, or with a $50 GPU now.
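"Fully loaded" meaning the physics step is fanned out across all eight hardware threads, along these lines (a generic C++ sketch, not Infernal's actual code):

#include <thread>
#include <vector>

struct Body { float pos[3]; float vel[3]; };

// Integrate one contiguous slice of bodies for a single timestep.
void integrate_slice(std::vector<Body>* bodies, size_t begin, size_t end, float dt) {
    for (size_t i = begin; i < end; ++i)
        for (int k = 0; k < 3; ++k)
            (*bodies)[i].pos[k] += (*bodies)[i].vel[k] * dt;
}

// Fan the work out over every hardware thread (8 on a hyperthreaded quad-core i7).
void step(std::vector<Body>& bodies, float dt) {
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 8;
    size_t chunk = (bodies.size() + n - 1) / n;
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < n; ++t) {
        size_t begin = t * chunk;
        size_t end = begin + chunk < bodies.size() ? begin + chunk : bodies.size();
        if (begin >= end) break;
        pool.emplace_back(integrate_slice, &bodies, begin, end, dt);
    }
    for (std::thread& th : pool) th.join();
}

Saturate all eight threads with that and there's nothing left over for AI, audio, streaming, etc.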
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Not sure what you guys are so excited about; rigid body and ragdoll simulations were impressive... in like 2002. Cloth, soft body, and particle/water simulations are the more advanced effects currently being demo'd and implemented with GPU hardware-accelerated physics. There's a huge difference, and these demos certainly do a good job of illustrating it.
 

akugami

Diamond Member
Feb 14, 2005
6,210
2,552
136
Originally posted by: chizow
Not sure what you guys are so excited about; rigid body and ragdoll simulations were impressive... in like 2002. Cloth, soft body, and particle/water simulations are the more advanced effects currently being demo'd and implemented with GPU hardware-accelerated physics. There's a huge difference, and these demos certainly do a good job of illustrating it.

Did you hear the part about how quickly they got that demo up? It's hard to get cloth, soft body, and particle effects running in that short a time. Granted, this was a quad-core i7, but current nVidia cards would be hard-pressed to get that much stuff up and bouncing.

Was the demo extremely simplistic in many respects? Yes. However, I'm sure with optimization they would be able to get everything you stated up and running at a level equal to, and likely surpassing, what's currently available in the much more polished PhysX games, which are shipping products.

Either way, physics acceleration is still in its infancy, and we have a long way to go before it's used for much more than eye candy. I do think future physics APIs will intelligently incorporate the extra CPU cores as well as the GPU.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: akugami
Did you hear the part about how quick they got that demo up? Hard to get cloth, soft body, and particle effects up in that short of a time. Granted this was an quad core i7 but current nVidia cards would be hard pressed to get that much stuff up and bouncing.
Yes, I did. It sounds like Velocity is similar to any other middleware that lets you quickly and easily implement and scale physics effects. Have you played around with any of them, like Havok or PhysX? Or even Crysis' Sandbox Editor? It's no more difficult than typing in 1000 or 1500 on your keyboard and executing, if you're already working with a functioning game engine.
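Literally something on this order (names made up; this is not the actual Havok or PhysX API, but they all boil down to the same idea):

#include <cstdlib>
#include <vector>

// Made-up middleware-flavored types; real engines differ in names, not in spirit.
struct Vec3 { float x, y, z; };

struct Scene {
    std::vector<Vec3> boxes;
    void spawn_box(Vec3 pos) { boxes.push_back(pos); } // one rigid body, engine does the rest
};

Vec3 random_position() {
    Vec3 p = { float(std::rand() % 100), 50.0f, float(std::rand() % 100) };
    return p;
}

int main() {
    Scene scene;
    const int kBoxCount = 1500; // the number you "enter on your keyboard"
    for (int i = 0; i < kBoxCount; ++i)
        scene.spawn_box(random_position());
    return 0;
}

The middleware owns collision and integration; you just pick the count.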

But like I said, those effects with boxes and ragdoll soft bodies don't really show us anything new with regard to physics; it's just more of what's been available for the last 5-6 years.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Originally posted by: cmdrdredd
But at what FPS? Also, how complex can it be compared to running PhysX? And how many people have an i7 compared to an 8800 or better Nvidia card?

That is a valid question now, because the i7 is a relatively new CPU and hasn't really given anyone running a Q6xxx or better a reason to upgrade. However, the more important question is, "How many more people have a CPU than a high-end GPU?"

 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
Originally posted by: nitromullet
Originally posted by: cmdrdredd
But at what FPS? Also, how complex can it be compared to running PhysX? And how many people have an i7 compared to an 8800 or better Nvidia card?

That is a valid question now, because the i7 is a relatively new CPU and hasn't really given anyone running a Q6xxx or better a reason to upgrade. However, the more important question is, "How many more people have a CPU than a high-end GPU?"


The better question is: how many people have a high-end CPU fast enough to run that demo in real time? You don't need a high-end GPU to do the stuff in that demo at all; an 8800GT would handle it fine.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Originally posted by: aka1nas
Originally posted by: nitromullet
Originally posted by: cmdrdredd
But at what FPS? Also, how complex can it be compared to running PhysX? And how many people have an i7 compared to an 8800 or better Nvidia card?

That is a valid question now, because the i7 is a relatively new CPU and hasn't really given anyone running a Q6xxx or better a reason to upgrade. However, the more important question is, "How many more people have a CPU than a high-end GPU?"


The better question is: how many people have a high-end CPU fast enough to run that demo in real time? You don't need a high-end GPU to do the stuff in that demo at all; an 8800GT would handle it fine.

Yes, that's why I said "now" specifically. That will change as CPUs become more powerful. The fact of the matter is that unless the fundamental architecture of the PC changes, there will always be more PCs with a CPU than with a PhysX-capable graphics card. Physics on the CPU will make it more accessible to the masses/casual gamers.

edit: I so can't type today :)
 

alcoholbob

Diamond Member
May 24, 2005
6,390
469
126
You mean a "spare" 8800GT for PhysX? I don't think an 8800GT can run a game AND PhysX very convincingly at the same time.
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
Originally posted by: Astrallite
You mean a "spare" 8800GT for PhysX? I don't think an 8800GT can run a game AND PhysX very convincingly at the same time.

It handles all the currently released PhysX titles just fine that way. You'd need a "spare" i7 to handle the physics in that demo, as it fully loads that chip and still isn't a full game.
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
None of the current PhysX titles look anywhere near as physics-intensive or impressive as that demo was, either.
 

thilanliyan

Lifer
Jun 21, 2005
12,065
2,278
126
Originally posted by: chizow
Yes, I did. It sounds like Velocity is similar to any other middleware that lets you quickly and easily implement and scale physics effects. Have you played around with any of them, like Havok or PhysX? Or even Crysis' Sandbox Editor? It's no more difficult than typing in 1000 or 1500 on your keyboard and executing, if you're already working with a functioning game engine.

I'm not sure if it's what he meant, but one of the guys talking in the video said something to the effect of "he put it together in about 10 (?) minutes". So maybe it wasn't just entering 1500 and going from there. Maybe there was no demo made and he had to program it first?
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: aka1nas
Originally posted by: dguy6789
None of the current PhysX titles look anywhere near as physics-intensive or impressive as that demo was, either.

Cellfactor?

http://www.youtube.com/watch?v...uo3qQQ&feature=related

Yeah, but Cellfactor runs on a small-scale level; it isn't like a full game with huge levels and stuff. AGEIA indeed did an impressive demo which could run without an AGEIA card, but nVidia's PhysX stuff isn't that impressive at all. That may change in the future once we have more power to spare.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: SunnyD
Originally posted by: Piuc2020
Originally posted by: Astrallite
Nice.

This is what I always thought. Why waste a PCI-E slot on a video card to do something that its not even designed to do? Most of the processing power is wasted. Whereas with hyperthreading and multiple cores, and in the case of i7, a direct connection to the memory controller, seems to be ideal. Taxing the PCI-E bus any more (with its high latency) seems to be a bad idea IMO.

Actually, video cards are better suited for parallel processing.

Except for the fact that you need to preprocess data with the CPU, push it over the PCIe bus to the video card, let the video card crunch the data, then push it back over the PCIe bus to the CPU, then let the CPU integrate the physics data with the geometry information for the rendering pipeline, and then push it back yet again over the PCIe bus to the video card to render the final product onto the screen.

PhysX has demonstrated there is both a performance gain AND a loss when using the video card to handle physics. The facts show the video card can indeed handle far more physics data (using the same API) than the CPU can (better frame rates with GPU PhysX than with CPU PhysX). However, they also show the video card takes a performance hit when doing so (lower frame rates with GPU PhysX versus no PhysX). It's a trade-off.

This demo clearly shows the CPU has enough processing power to handle physics calculations in realtime - at least to the level these developers want. It allows the video card every last drop of raw performance to be fully devoted to the rendering pipeline without any sacrifices. What it will mean in terms of final framerate and comparative quality of the scenes - we'll never know. All we have is a quote from them saying it works better (for their application). Whether they actually tested it or not, who knows.

The video card performance hit is because it's tied up doing physics calculations...

And as for "pushing it over PCI-E": the CPU would have to push the data to RAM and back all the time, more often than the GPU would, since the GPU can crunch more data in parallel... thus your analogy is flawed.

Not to mention the same ridiculous argument could be applied to rendering graphics on the GPU in the first place: "the CPU first has to crunch the data, then push it via PCI-E to the GPU, which renders the frame. What a waste, it should have all just been rendered on the CPU!" (/sarcasm)
 

mmnno

Senior member
Jan 24, 2008
381
0
0
The PS3 demo was a lot more impressive. On a high-end PC they should be demoing cloth, at the least.

If they don't have that, the most interesting feature is how realistic the physics interactions are, and in that category they get a "nice" but not a "wow".