Originally posted by: DeathReborn
Originally posted by: Topweasel
Originally posted by: DeathReborn
I'd like to see this ATI X1800 physics code running entirely on the GPU without using a SINGLE MHz or MB of the card. The AGEIA PhysX PPU will use 128MB of GDDR3 for nothing but physics.
For example:
How will losing 128MB on an X1800XT affect performance? What if it eats, say, 200MHz of the core in real-world use (not some ATI tech demo)? Are you willing to lose 200MHz and 128MB of memory for it? For all we know it could use something like 16MB and 25MHz for physics, which still defeats the point of "free" physics. You don't get things for free; something always loses out in the process, and the question is what.
Well it's not even like that. It's about a card being able to handle physics-like math very well, so when forced to compute it, it's faster than the GTX. If they ever did do this they would just sell an X1600 or something like it as a PPU instead of a video card.
Being able to process physics code out of thin air would be an awesome sight. I'm not saying it can't do physics calculations, I'm just saying it does them at a cost in performance. I'm quite frankly amazed at how many people think it can do the physics WITHOUT a performance hit.
Just for those that don't read what I say but instead skim it: IT CAN do basic physics calculations, but just like everything else, you need an input, a conversion and an output. You lot think you get an input and an output WITHOUT any work in between.
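To put that in concrete terms, here is a rough made-up CPU sketch (nothing to do with ATI's or AGEIA's actual code; the particle count and time step are just illustrative) of the simplest possible physics step. Even this has to read state in, do arithmetic on it, and write the result back out every frame, and on a graphics card those ALU cycles and that memory bandwidth come out of the same pool the renderer is drawing from.

```c
/* Illustrative sketch only: a naive Euler integration step over N particles,
 * showing that the "conversion" from input state to output state is real
 * arithmetic and memory traffic every single frame. */
#include <stdio.h>

#define N  10000          /* hypothetical particle count */
#define DT 0.016f         /* one 60 fps frame, in seconds */

typedef struct { float x, y, z; } vec3;

static vec3 pos[N], vel[N];   /* zero-initialized particle state */

static void physics_step(void)
{
    const vec3 gravity = { 0.0f, -9.81f, 0.0f };
    for (int i = 0; i < N; ++i) {
        /* input: read current velocity (memory reads)        */
        /* conversion: integrate acceleration into velocity   */
        vel[i].x += gravity.x * DT;
        vel[i].y += gravity.y * DT;
        vel[i].z += gravity.z * DT;
        /* output: integrate velocity into position and write it back */
        pos[i].x += vel[i].x * DT;
        pos[i].y += vel[i].y * DT;
        pos[i].z += vel[i].z * DT;
    }
}

int main(void)
{
    for (int frame = 0; frame < 60; ++frame)   /* one simulated second */
        physics_step();
    printf("particle 0 fell to y = %.2f\n", pos[0].y);
    return 0;
}
```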
See, I even question whether they can do both the graphical and the physics work at the same time. In the end I think AGEIA had it right with a dedicated PPU, so even if ATI rebrands one of their gfx cards as a PPU we will be better off. Oh well, we will find out eventually.