Originally posted by: Astrallite
Nice.
This is what I always thought. Why waste a PCI-E slot on a video card to do something it's not even designed to do? Most of the processing power is wasted. A CPU with Hyper-Threading and multiple cores - and in the case of the i7, a direct connection to the memory controller - seems ideal for this. Taxing the PCI-E bus any further (with its high latency) seems like a bad idea IMO.
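A minimal sketch of the multi-core approach being described here - purely illustrative, not from the demo or from Velocity, and every struct and function name is invented: each core/hardware thread steps its own slice of the bodies, and the data never leaves system memory, which is where the i7's on-die memory controller helps.

```cpp
// Illustrative only: splitting a physics step across CPU cores with std::thread.
// Plain host C++, no GPU or PCIe transfer involved; all names are made up.
#include <algorithm>
#include <functional>
#include <thread>
#include <vector>
#include <cstdio>

struct Body { float y = 10.0f, vy = 0.0f; };

// Update one slice of the bodies; each core/hardware thread gets its own slice.
static void stepRange(std::vector<Body>& bodies, size_t begin, size_t end, float dt)
{
    for (size_t i = begin; i < end; ++i) {
        bodies[i].vy += -9.81f * dt;   // gravity
        bodies[i].y  += bodies[i].vy * dt;
    }
}

int main()
{
    std::vector<Body> bodies(200000);
    const float dt = 1.0f / 60.0f;
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency()); // e.g. 8 on an i7 with HT

    std::vector<std::thread> workers;
    const size_t chunk = bodies.size() / cores;
    for (unsigned t = 0; t < cores; ++t) {
        size_t begin = t * chunk;
        size_t end   = (t + 1 == cores) ? bodies.size() : begin + chunk;
        workers.emplace_back(stepRange, std::ref(bodies), begin, end, dt);
    }
    for (auto& w : workers) w.join();

    printf("stepped %zu bodies on %u hardware threads\n", bodies.size(), cores);
    return 0;
}
```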
Originally posted by: Piuc2020
Originally posted by: Astrallite
Nice.
This is what I always thought. Why waste a PCI-E slot on a video card to do something it's not even designed to do? Most of the processing power is wasted. A CPU with Hyper-Threading and multiple cores - and in the case of the i7, a direct connection to the memory controller - seems ideal for this. Taxing the PCI-E bus any further (with its high latency) seems like a bad idea IMO.
Actually, video cards are better suited for parallel processing.
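Roughly what "better suited for parallel processing" looks like in code - a purely illustrative CUDA sketch, not taken from PhysX, Velocity, or the demo, with the particle struct and kernel name invented for the example. The GPU launches one lightweight thread per particle, where a CPU would walk the same array with only a handful of hardware threads.

```cuda
// Illustrative only: one GPU thread per particle, versus a serial CPU loop.
// Compile with: nvcc particles.cu
#include <cuda_runtime.h>
#include <cstdio>

struct Particle { float x, y, z, vx, vy, vz; };

__global__ void stepParticles(Particle* p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    p[i].vy += -9.81f * dt;          // gravity
    p[i].x  += p[i].vx * dt;         // integrate position
    p[i].y  += p[i].vy * dt;
    p[i].z  += p[i].vz * dt;
}

int main()
{
    const int n = 1 << 20;                      // ~1M particles
    Particle* d_p = nullptr;
    cudaMalloc(&d_p, n * sizeof(Particle));
    cudaMemset(d_p, 0, n * sizeof(Particle));

    // A CPU walks these particles a few at a time (one slice per core);
    // the GPU launches thousands of lightweight threads running the same step.
    stepParticles<<<(n + 255) / 256, 256>>>(d_p, n, 1.0f / 60.0f);
    cudaDeviceSynchronize();
    printf("stepped %d particles on the GPU\n", n);

    cudaFree(d_p);
    return 0;
}
```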
Originally posted by: SunnyD
This demo clearly shows the CPU has enough processing power to handle physics calculations in real time - at least to the level these developers want. It lets every last drop of the video card's raw performance stay devoted to the rendering pipeline without any sacrifices. What it will mean in terms of final frame rate and comparative quality of the scenes - we'll never know. All we have is a quote from them saying it works better (for their application). Whether they actually tested it or not, who knows.
Originally posted by: chizow
Not sure what you guys are so excited about; rigid-body and ragdoll simulations were impressive... in like 2002. Cloth, soft-body, and particle/water simulations are the more advanced effects currently being demoed and implemented with GPU hardware-accelerated physics. There's a huge difference, and these demos certainly do a good job of illustrating it.
Yes, I did, and it sounds like Velocity is similar to any other middleware that lets you quickly and easily implement and scale physics effects. Have you played around with any of them, like Havok or PhysX? Or even Crysis' Sandbox Editor? It's no more difficult than typing 1000 or 1500 on your keyboard and then executing it, if you're already playing with a working game engine.
Originally posted by: akugami
Did you hear the part about how quickly they got that demo up? It's hard to get cloth, soft-body, and particle effects up in that short a time. Granted, this was a quad-core i7, but current nVidia cards would be hard-pressed to get that much stuff up and bouncing.
Originally posted by: cmdrdredd
But at what FPS? Also, how complex can it be compared to running PhysX? And how many people have an i7 compared to an 8800 or better Nvidia card?
Originally posted by: nitromullet
Originally posted by: cmdrdredd
But at what FPS? Also, how complex can it be compared to running PhysX? And how many people have an i7 compared to an 8800 or better Nvidia card?
That is a valid question now because the i7 is a relatively new CPU and hasn't really demanded that anyone running a Q6xxx or better upgrade. However, the more important question is, "How many more people have a CPU than a high-end GPU?"
Originally posted by: aka1nas
Originally posted by: nitromullet
Originally posted by: cmdrdredd
But at what FPS? Also, how complex can it be compared to running PhysX? And how many people have an i7 compared to an 8800 or better Nvidia card?
That is a valid question now because the i7 is a relatively new CPU and hasn't really demanded that anyone running a Q6xxx or better upgrade. However, the more important question is, "How many more people have a CPU than a high-end GPU?"
The better question is: how many people have a high-end CPU that is fast enough to run that demo in real time? You don't need a high-end GPU to do the stuff in that demo at all; an 8800GT would handle it fine.
Originally posted by: Astrallite
You mean a "spare" 8800GT for PhysX? I don't think an 8800GT can run a game AND PhysX very convincingly at the same time.
Originally posted by: chizow
Yes, I did, and it sounds like Velocity is similar to any other middleware that lets you quickly and easily implement and scale physics effects. Have you played around with any of them, like Havok or PhysX? Or even Crysis' Sandbox Editor? It's no more difficult than typing 1000 or 1500 on your keyboard and then executing it, if you're already playing with a working game engine.
Originally posted by: dguy6789
None of the current PhysX titles look anywhere near as physics-intensive or impressive as that demo, either.
Originally posted by: aka1nas
Originally posted by: dguy6789
None of the current PhysX titles look anywhere near as physics-intensive or impressive as that demo, either.
Cellfactor?
http://www.youtube.com/watch?v...uo3qQQ&feature=related
Originally posted by: SunnyD
Originally posted by: Piuc2020
Originally posted by: Astrallite
Nice.
This is what I always thought. Why waste a PCI-E slot on a video card to do something it's not even designed to do? Most of the processing power is wasted. A CPU with Hyper-Threading and multiple cores - and in the case of the i7, a direct connection to the memory controller - seems ideal for this. Taxing the PCI-E bus any further (with its high latency) seems like a bad idea IMO.
Actually, video cards are better suited for parallel processing.
Except for the fact that you need to preprocess data with the CPU, push it over the PCIe bus to the video card, let the video card crunch the data, then push it back over the PCIe bus to the CPU, then let the CPU integrate the physics data with the geometry information for the rendering pipeline, and then push it back yet again over the PCIe bus to the video card to render the final product onto the screen.
PhysX has demonstrated there is both a performance gain AND a loss when using the video card to handle physics. The facts show the video card can indeed handle far more physics data (using the same API) than the CPU can (better frame rates with GPU PhysX than with CPU PhysX). However, it also shows the video card takes a performance hit when doing so (lower frame rates with GPU PhysX versus no PhysX). It's a trade-off.
This demo clearly shows the CPU has enough processing power to handle physics calculations in real time - at least to the level these developers want. It lets every last drop of the video card's raw performance stay devoted to the rendering pipeline without any sacrifices. What it will mean in terms of final frame rate and comparative quality of the scenes - we'll never know. All we have is a quote from them saying it works better (for their application). Whether they actually tested it or not, who knows.
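To put that round trip in concrete terms, here is a minimal CUDA-style sketch of the sequence of transfers described above - purely illustrative, not how PhysX or Velocity actually do it, and every buffer and kernel name is invented. In a real engine the final upload would be a vertex-buffer update through the graphics API (and CUDA-graphics interop can avoid some of the hops), but the point is the number of trips over the bus.

```cuda
// Illustrative only: the CPU <-> GPU round trip, with a plain device buffer
// standing in for the renderer's vertex data. Names are made up.
// Compile with: nvcc roundtrip.cu
#include <cuda_runtime.h>
#include <vector>
#include <cstdio>

__global__ void simulate(float* y, float* vy, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    vy[i] += -9.81f * dt;      // GPU crunches the physics...
    y[i]  += vy[i] * dt;
}

int main()
{
    const int n = 100000;
    const float dt = 1.0f / 60.0f;
    std::vector<float> y(n, 10.0f), vy(n, 0.0f), vertices(n);

    float *d_y, *d_vy, *d_vertices;
    cudaMalloc(&d_y, n * sizeof(float));
    cudaMalloc(&d_vy, n * sizeof(float));
    cudaMalloc(&d_vertices, n * sizeof(float));

    // 1. CPU preprocesses, then pushes the data over PCIe to the card.
    cudaMemcpy(d_y,  y.data(),  n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_vy, vy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // 2. The card crunches the physics.
    simulate<<<(n + 255) / 256, 256>>>(d_y, d_vy, n, dt);

    // 3. Results come back over PCIe to the CPU.
    cudaMemcpy(y.data(),  d_y,  n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaMemcpy(vy.data(), d_vy, n * sizeof(float), cudaMemcpyDeviceToHost);

    // 4. CPU folds the physics results into the geometry for the renderer.
    for (int i = 0; i < n; ++i)
        vertices[i] = y[i];    // stand-in for updating a mesh/scene graph

    // 5. ...and the geometry goes over PCIe yet again for rendering.
    cudaMemcpy(d_vertices, vertices.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    printf("round trip done for %d objects\n", n);
    cudaFree(d_y); cudaFree(d_vy); cudaFree(d_vertices);
    return 0;
}
```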
