Why a hardcore AGEIA pusher is turning to embrace ATI's solution.


Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Ok, physics can be done on the GPU, and there have already been demos of it. Something like a particle system can be processed entirely on the GPU, physics and all. But the fact remains that the GPU isn't as flexible as a CPU from a coding perspective, and often you have to rewrite the algorithm to make it run efficiently. For example, GPUs work best when running the same instructions on a large amount of data. If you have a large number of objects, you can store their position, velocity, and acceleration vectors in a 2D array represented by an FP32 texture (RGB representing the x, y, z components). Then you run a pixel shader that reads every texel in the texture and performs the calculations. The speed of such calculations would be orders of magnitude faster than what you can do on the CPU.
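To make the texture-as-array idea concrete, here is a minimal sketch of the update described above, written as a plain C++ loop for readability rather than as an actual pixel shader. The struct and function names are illustrative assumptions, and the simple Euler step stands in for whatever integration a real engine would use; on the GPU the loop body would be the shader, invoked for every texel of the FP32 position/velocity textures in parallel.

```cpp
// Illustrative sketch only: one "texture" per quantity, with each RGB
// texel holding the x, y, z components of an object's vector. Written
// as a CPU loop for clarity; on the GPU the loop disappears and the
// loop body is the pixel shader, run once per texel.
#include <cstdio>
#include <vector>

struct Float3 { float x, y, z; };   // one FP32 RGB texel

// Assumed helper name; performs one Euler step for every object/texel.
void integrate(std::vector<Float3>& pos,
               std::vector<Float3>& vel,
               const std::vector<Float3>& acc,
               float dt)
{
    for (std::size_t i = 0; i < pos.size(); ++i) {
        vel[i].x += acc[i].x * dt;  vel[i].y += acc[i].y * dt;  vel[i].z += acc[i].z * dt;
        pos[i].x += vel[i].x * dt;  pos[i].y += vel[i].y * dt;  pos[i].z += vel[i].z * dt;
    }
}

int main()
{
    const std::size_t n = 256 * 256;           // a 256x256 texture worth of objects
    std::vector<Float3> pos(n, {0.0f, 0.0f, 0.0f});
    std::vector<Float3> vel(n, {1.0f, 0.0f, 0.0f});
    std::vector<Float3> acc(n, {0.0f, -9.8f, 0.0f}); // gravity

    for (int frame = 0; frame < 60; ++frame)   // simulate one second at 60 fps
        integrate(pos, vel, acc, 1.0f / 60.0f);

    std::printf("object 0 after 1s: (%.2f, %.2f, %.2f)\n",
                pos[0].x, pos[0].y, pos[0].z);
    return 0;
}
```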

The major concern with this, though, is that the GPU is already busy rendering the graphics, so ideally you'd need a second GPU with decent pixel shading power to run the physics math.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: munky
Ok, physics can be done on the GPU, and there have already been demos of it. Something like a particle system can be processed entirely on the GPU, physics and all. But the fact remains that the GPU isn't as flexible as a CPU from a coding perspective, and often you have to rewrite the algorithm to make it run efficiently. For example, GPUs work best when running the same instructions on a large amount of data. If you have a large number of objects, you can store their position, velocity, and acceleration vectors in a 2D array represented by an FP32 texture (RGB representing the x, y, z components). Then you run a pixel shader that reads every texel in the texture and performs the calculations. The speed of such calculations would be orders of magnitude faster than what you can do on the CPU.

The major concern with this, though, is that the GPU is already busy rendering the graphics, so ideally you'd need a second GPU with decent pixel shading power to run the physics math.

Let's not forget all of the extra coding needed to do it on a GPU vs. the PhysX API.
 

the Chase

Golden Member
Sep 22, 2005
1,403
0
0
Originally posted by: skooma
Originally posted by: the Chase
Originally posted by: Ichigo
Until something better comes out, who is going to argue with what is basically a free PPU solution as long as you have a PCI-E video card right now?

You have an extra X1600 Pro or better lying around unused? And a CrossFire board or one of these new 3x PCIe x16 mobos as well?
I thought just a couple of months ago Nvidia was showing a single 7900GT running with a single 7600 (IIRC) doing physics.
Yeah, same thing, different vendor: 7600 = X1600, SLI = CrossFire mobo. They both want to sell you another video card (or 2, or 3) and the motherboards needed to run them. Which may not be a bad thing. But they market the solution as an easy/cheap way of upgrading when in fact just buying an Ageia card would be cheaper and easier. I think whichever solution ends up working the best and is most widely adopted will be the deciding factor.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Originally posted by: TecHNooB
Originally posted by: Fox5
Originally posted by: JAG87
Originally posted by: Questar
physics are completely rendered through complex mathematical calculations

The same thing can be said of 3D graphics.

Not really. Graphics are not mathematical at all. All the calculations required are for the polygons in 3D objects, which are far simpler than physics calculations, trust me. It's like comparing geometry to advanced calculus. Plus, calculations in graphics are straightforward, while in physics you often have to do a lot of trial and error before you obtain a plausible instruction for the object to execute. Realistic physics is not pre-rendered. That's the beauty of it: it's all random, but it must be acceptable. You can't shoot someone in the foot and have them go flying 15 meters.

The rest of graphics is just applying textures and rendering pre-programmed lighting and shader effects. Pretty simple, until you start adding AA, AF, HDR... and all the bells and whistles people like today.

1. You sound quite foolish saying graphics aren't mathematical.

Nuh-uh, you do!

No way, you're a poopoo head!
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I'd love to hear an explanation whenever you have time, Ben

The main reason to increase physics calculations far beyond what we have now is to increase the amount of interactive material and how it can be manipulated. A good example would be environments that can be fully destroyed in a somewhat realistic manner. In order to do this we need these environments modeled in such a way that they contain areas that can be destroyed: small batches of geometric data that can be manipulated, each with its own physical properties and collision model.

Every version of D3D, and hence PC games in general, has horrible problems handling large quantities of small batches of geometric data. Due to the way D3D handles small-batch geometric data, the entire API bogs down, very badly, when you try to utilize it heavily. This isn't a driver issue, an ATi issue, or an nVidia issue; the API itself simply cannot deal with it effectively. Currently the hardware is fast enough to improve performance by huge leaps and bounds in this area (in terms of the graphics boards being able to handle the batches); they are capable of handling an order of magnitude or more detail than what we are used to seeing.
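To make the small-batch problem concrete, here is a toy cost model rather than real D3D code. The per-call and per-triangle costs are invented numbers purely for illustration, and submitBatch is a hypothetical stand-in for a draw call such as DrawIndexedPrimitive; the only point is that a fixed per-call overhead, multiplied across thousands of tiny debris batches, dominates the actual triangle work.

```cpp
// Illustrative model only: why many small draw calls hurt more than
// one large one. The cost figures below are assumptions for the
// sketch, not measured D3D numbers.
#include <cstdio>

struct Batch { int triangles; };

// Hypothetical stand-in for submitting one batch through the API;
// real code would issue one draw call (e.g. DrawIndexedPrimitive) here.
double submitBatch(const Batch& b,
                   double perCallOverheadUs,
                   double perTriangleUs)
{
    return perCallOverheadUs + b.triangles * perTriangleUs;
}

int main()
{
    const double perCallOverheadUs = 30.0;   // assumed fixed CPU/API cost per draw call
    const double perTriangleUs     = 0.001;  // assumed GPU cost per triangle

    // A destructible wall modeled as 10,000 independent debris chunks
    // of 12 triangles each, each submitted as its own small batch...
    double manySmall = 0.0;
    for (int i = 0; i < 10000; ++i)
        manySmall += submitBatch(Batch{12}, perCallOverheadUs, perTriangleUs);

    // ...versus the same 120,000 triangles submitted as one big batch.
    double oneBig = submitBatch(Batch{120000}, perCallOverheadUs, perTriangleUs);

    std::printf("10,000 small batches: %.1f ms\n", manySmall / 1000.0);
    std::printf("1 large batch:        %.1f ms\n", oneBig / 1000.0);
    return 0;
}
```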

As of right now, the fastest physics processing unit in the world combined with the fastest CPU and the fastest Quad SLI setup is going to fall down if advanced physics are used to expand detail and interactivity, because the 3D API bottlenecks the rest of the subsystems. They will be sitting around waiting for the API to handle things in a better fashion.

This is a problem with every version of D3D to date, but DX10 fixes it. When Windows Vista launches, the issues revolving around small-batch geometric data will be gone and we will start to see some of the large performance benefits that are promised by this technology. Until then, we are going to continue to see the kinds of performance issues we have seen to date. They may be able to improve things a reasonable amount, but they are not going to get anywhere near the potential until the underlying API issues have been rectified.