Why a hardcore AGEIA pusher is turning to embrace ATI's solution.

Chocolate Pi

Senior member
Jan 11, 2005
245
0
0
Let me first say that this thread is not for those who maintain delusions that either we don't need more physics processing power, or that mere extra CPU cores can meet our needs. There is no place for irrational fear of technological progression here.

Let's take a look back at how this physics escapade started. AGEIA, a very young rival to veteran middleware supplier Havok, began telling people that we needed dedicated hardware for the physics calculations of future games. Those of us who saw the potential were instantly hooked. Many others took an optimistic wait-and-see approach. Some thought it would sadly fail, and some stuck their heads in the sand and insisted it was poppycock, just like that newfangled "3D card" those guys made a decade ago. New technology, pffft.

Then came nVidia and Havok, both having something to lose in light of consumers spending money on AGEIA physics cards. They tag-teamed and did something that fell somewhere between cashing in and damage control. The deal seemed riddled with greed from both sides, with nVidia using "SLI physics" as an excuse to promote SLI, and Havok making yet another thing to charge developers for. If AGEIA wasn't already asking for $299 a pop for physics cards, it would have been laughable. What was laughable instead was ATI's "Don't forget about us, we're still here!" physics press release that brought absolutely nothing to the table.

The informed community members, which would include anyone reading this I'd imagine, knew that Havok and nVidia's solution was just eye candy, nothing that affected gameplay. nVidia is certainly allowed to provide more uses for their products and make SLI a better value proposition, but I personally did not like the idea that they might be trampling on the fledgling "REAL" physics cards that could revolutionize gaming as we knew it.

Then we started seeing the AGEIA card and impressions. Simply put, developers either couldn't or wouldn't take full advantage of it. The raw calculating power on display did prove many claims made about the potential of the card, but it was never turned into anything meaningful in a game! The CellFactor demo even turned out to be a mess. With no Unreal multiplayer support for anything but effects physics, it seemed doomed. I and many others, I'm sure, began to accept that AGEIA couldn't work. There was no reason for high-end users to adopt it, no way for its benefits to ever trickle down to "Mr. Average-Joe Dell", and no reason for developers to do anything differently. Short of long-term developer adoption of their discounted middleware, and greatly reduced production costs, this chicken-and-egg problem would never resolve itself.

Then ATI comes out of nowhere and blindsides us all with their crazy scheme. To be fair, many clues could have been picked up on over the last few months, but I digress. At first glance, the idea of a third GPU slot for physics sounds like the ultimate case of lame one-upmanship and greed. Then... you see ATI's intentions, and you can begin to see how it is quite the opposite! Completely the opposite!

ATI is offering consumers the chance to use their old graphics card for physics... completely independent of their choice of main video card. ATI is not seeking to add GPU sales, but rather to entice users over to their platform. Look closely, imagine the possible outcomes, and you will see how this is the ONLY physics solution that can actually work, the only one that provides a bridge from today to tomorrow.

Ultra-High End User: The best video card also happens to be the best physics card, which can be used in addition to dual-GPU solutions without sacrifice.

High-End User: Users can use a cheap video card or their previous one for physics, completely independent of their GPU situation.

Average-Joe: Can use an integrated GPU for physics after adding a video card.

Did you see that coming?

So, here's the overall situation, the one developers have to address. They code games that use the physics power of integrated GPUs as a baseline for physics that affects the gameplay. They already do the same for video: integrated graphics determines the core baseline for many games, and other video cards build upon that baseline. In the same way, video cards that work as physics cards in higher-end motherboards will just be faster and have more "effects physics". Physics thus achieves the same position with developers and consumers that video cards have.
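To put the idea in concrete terms, here is a rough sketch of what a developer might do (all names and numbers are made up, just to illustrate the concept):

```cpp
// Hypothetical sketch: gameplay physics targets the integrated-GPU
// baseline, so anything faster only buys extra "effects" physics.
#include <cstdio>

enum class PhysicsTier { IntegratedBaseline, MidrangeCard, HighEndCard };

struct PhysicsBudget {
    int gameplayBodies;   // rigid bodies that can affect game state
    int effectsParticles; // purely cosmetic debris, cloth, etc.
};

PhysicsBudget budgetFor(PhysicsTier tier) {
    // The gameplay body count is identical across tiers, so every
    // player sees the same game; only the eye candy scales up.
    switch (tier) {
        case PhysicsTier::IntegratedBaseline: return {500, 1000};
        case PhysicsTier::MidrangeCard:       return {500, 10000};
        case PhysicsTier::HighEndCard:        return {500, 50000};
    }
    return {500, 1000}; // unreachable, keeps the compiler happy
}

int main() {
    PhysicsBudget b = budgetFor(PhysicsTier::MidrangeCard);
    std::printf("gameplay bodies: %d, effects particles: %d\n",
                b.gameplayBodies, b.effectsParticles);
    return 0;
}
```

The point is that gameplay physics stays pinned to the baseline so the game plays identically everywhere; extra hardware only buys more eye candy on top.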

It all depends on the assumption that ATI's chipset and platform are designed to let GPUs provide true physics that can affect gameplay. I assume this is possible based on the completely different stance Havok is taking, in addition to the ATI demos that support this by showing objects colliding and interacting. If gameplay physics can be achieved, that is all that matters.

Enough of my ranting; what are everyone else's thoughts?
 

Todd33

Diamond Member
Oct 16, 2003
7,842
2
81
Neither one is going to be used in more than a few games that are paid to show it off. Soon we will have 4 cores; I'm sure we can do some physics on some of them. Move along, nothing to see here...
 

JAG87

Diamond Member
Jan 3, 2006
3,921
3
76
Video cards are not meant to do physics, and no matter how many pipelines or shaders one has, it will suck at it. Physics is computed through complex mathematical calculations, which is the job of the ALU in a CPU. The faster that is, the more physics the system can feed per second. If you want a measurement of this, use Sandra's floating-point and integer tests. The problem with physics is that CPUs today are too slow to keep up with all the rendering GPUs can do, and even if they were able to keep up, the interconnection between CPU and GPU is too slow. The only way to solve the problem is either to connect the GPU to the CPU through the northbridge (what AMD is working on with Torrenza) or to make a separate physics processor that can interact with the GPU at high speeds. So far AGEIA has failed at doing so (a PCI interface and southbridge bandwidth is surely not the way to do it).
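To show what I mean about physics being raw math, here is a toy example (not real engine code) of the inner loop of a physics step; it is nothing but floating-point multiplies and adds, which is exactly what the ALU grinds through:

```cpp
// Toy example of why physics is floating-point work: one Euler
// integration step over a batch of rigid bodies. A real engine does
// this, plus collision solving, for thousands of bodies per frame.
#include <vector>

struct Body {
    float px, py, pz; // position
    float vx, vy, vz; // velocity
};

void integrate(std::vector<Body>& bodies, float dt) {
    const float g = -9.81f; // gravity, m/s^2
    for (Body& b : bodies) {
        b.vy += g * dt;    // accelerate downward
        b.px += b.vx * dt; // advance position: nothing here but
        b.py += b.vy * dt; // floating-point multiplies and adds
        b.pz += b.vz * dt;
    }
}
```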

That's my take on it. We won't see realistic physics + smooth gameplay for a while yet. You can get a taste of it though if you play games that use Havok, so far the most realistic and efficient physics engine ever made. It actually doesn't slow down THAT badly.
 

JAG87

Diamond Member
Jan 3, 2006
3,921
3
76
Originally posted by: Questar
Physics is computed through complex mathematical calculations

The same thing can be said of 3D graphics.

Not really. Graphics are not mathematical at all. All the calculations required are for the polygons in 3D objects, which are far simpler than physics calculations, trust me. It's like comparing geometry to advanced calculus. Plus, calculations in graphics are straightforward, while in physics you often have to do a lot of trial and error before you obtain a plausible instruction for the object to execute. Realistic physics are not pre-rendered. That's the beauty of it. It's all random, but it must be acceptable. You can't shoot someone in the foot and have them go flying 15 meters.

The rest in graphics is just applying textures and rendering pre-programmed lighting and shader effects. Pretty simple, until you start adding AA, AF, HDR... and all the bells and whistles people like today.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Originally posted by: JAG87
Originally posted by: Questar
Physics is computed through complex mathematical calculations

The same thing can be said of 3D graphics.

Not really. Graphics are not mathematical at all. All the calculations required are for the polygons in 3D objects, which are far simpler than physics calculations, trust me. It's like comparing geometry to advanced calculus. Plus, calculations in graphics are straightforward, while in physics you often have to do a lot of trial and error before you obtain a plausible instruction for the object to execute. Realistic physics are not pre-rendered. That's the beauty of it. It's all random, but it must be acceptable. You can't shoot someone in the foot and have them go flying 15 meters.

The rest in graphics is just applying textures and rendering pre-programmed lighting and shader effects. Pretty simple, until you start adding AA, AF, HDR... and all the bells and whistles people like today.

1. You sound quite foolish saying graphics aren't mathematical.

2. There's more than one way to do physics. The current trial-and-error method is based on the limitations of current CPUs. If you have something much more powerful, you could do a global physics engine that's always doing calculations for objects, thus no need for trial and error. Besides, video cards finally have dynamic branching support, so they may be able to do physics in a hybrid way, between a more computationally intense (but less branchy) method and the current methods used.
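To illustrate the branching point, here is a made-up example of the same collision response written two ways; the branch-free form is what GPUs without dynamic branching would need, and dynamic branching lets them run the natural form:

```cpp
// Made-up illustration of the "hybrid" idea: the same ground-collision
// response written two ways. Older GPUs without dynamic branching need
// the branch-free form; dynamic branching allows the natural form.

// Branchy version: natural on a CPU, costly on older GPUs where all
// pixels in a batch must take the same path.
float resolveBranchy(float y, float vy) {
    if (y < 0.0f)            // below the floor?
        return -vy * 0.5f;   // bounce, with damping
    return vy;
}

// Branch-free version: compute both outcomes, blend with arithmetic.
float resolveBranchless(float y, float vy) {
    float hit = (y < 0.0f) ? 1.0f : 0.0f; // a select, not a jump
    return vy * (1.0f - hit) + (-vy * 0.5f) * hit;
}
```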
 

Drayvn

Golden Member
Jun 23, 2004
1,008
0
0
If I remember correctly, doesn't the 7900GTX have like 3 ALUs per pipe? And the X1900 has like 1 or 2, with various smaller ALUs which are half the power...

Hmm, not much mathematical power there, then.
 

biostud

Lifer
Feb 27, 2003
19,868
6,974
136
Very simple:

Don't buy anything specifically for physics until the software utilizing it is out.

But it seems nice if we can use our old DX9 cards for physics once DX10 GPUs, Vista, and DX10 games are out during 2007.
 

F1shF4t

Golden Member
Oct 18, 2005
1,583
1
71
Originally posted by: biostud
Very simple:

Don't buy anything specifically for physics until the software utilizing it is out.

But it seems nice if we can use our old DX9 cards for physics once DX10 GPUs, Vista, and DX10 games are out during 2007.

That would actually be great: buy a new card like a G80 or R600 and use your old GPU for physics. Now there's a reason to get a dual PCIe mobo. :p
The gfx card will prolly be the only thing I will be upgrading anytime soon; everything else can last 2 years. :p
 

the Chase

Golden Member
Sep 22, 2005
1,403
0
0
Some possible holes in the OP's post, which was very eloquent by the way. Whether HardOCP quoted ATI right or not I don't know. But they did quote an ATI spin doctor as saying that their solution would be different from what CPUs calculate today and be a "new" type of physics: effects physics. But then they confuse things by showing demos of what would seem to be "gameplay physics". So I guess it's early to tell if it will be both, but hearing ATI calling it effects physics makes me believe it will be (especially for the first years of developing this). So a possible big negative there. And the base minimum card needed to do this sounds like an X1600-series card. They are fairly cheap but not really that widespread yet. So in reality they will still demand that a lot of users buy new cards. No using old X800 (or X850) series cards here. And of course the vast majority of users would also be buying a new motherboard for all of this.

I like your line about the three-graphics-card-slots solution; how did that go? "The ultimate case of lame one-upmanship and greed"... yeah, that sounds right on the mark to me! And no, I'm not an AGEIA fanboy anymore either. They have a ways to go to get their solution working right. I dunno; people running CellFactor almost the same with no PPU as opposed to with the PPU installed... explain that one...
 

KeithP

Diamond Member
Jun 15, 2000
5,664
202
106
I feel that any solution that is specific to one vendor's hardware is a bad idea. Physics acceleration needs a standard API, either something in DirectX or maybe something like OpenGL. That way, anyone who can come up with hardware and drivers to support it can enter the marketplace.

In the long run, any proprietary solution will limit innovation and performance.

-KeithP
 

Ichigo

Platinum Member
Sep 1, 2005
2,158
0
0
Until something better comes out, who is going to argue with what is basically a free PPU solution as long as you have a PCI-E video card right now?
 

Markbnj

Elite Member
Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
1. You sound quite foolish saying graphics aren't mathematical.

I have a feeling he didn't mean to say quite that, but that is how it came off. I'm not a mathematician, but I know enough about 3D rendering to have written a crude 3D engine in C++ years ago. I think the mathematics of 3D graphics are simpler than the physics of mass and motion, plus they really consist of just a few well-known transformations that are well-suited to hardware implementations. Is that true of physics? In one respect I see a lot of similarities. In both cases the software has to set up and maintain a model, and feed it through the pipeline to get results out that affect the model for the next frame.
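For what it's worth, here is the core of it in miniature, written from memory years after that crude C++ engine, so treat it as a sketch:

```cpp
// Sketch from memory: the heart of 3D rendering is pushing every vertex
// through the same fixed 4x4 matrix multiply (model/view/projection
// combined), which is why it maps so well onto dedicated hardware.
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };

Vec4 transform(const Mat4& t, const Vec4& v) {
    return {
        t.m[0][0]*v.x + t.m[0][1]*v.y + t.m[0][2]*v.z + t.m[0][3]*v.w,
        t.m[1][0]*v.x + t.m[1][1]*v.y + t.m[1][2]*v.z + t.m[1][3]*v.w,
        t.m[2][0]*v.x + t.m[2][1]*v.y + t.m[2][2]*v.z + t.m[2][3]*v.w,
        t.m[3][0]*v.x + t.m[3][1]*v.y + t.m[3][2]*v.z + t.m[3][3]*v.w,
    };
}
```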

Anyway, I don't know if the OP is right about ATI's solution, but I agree with the general thrust of the post.
 

TecHNooB

Diamond Member
Sep 10, 2005
7,458
1
76
Originally posted by: Fox5
Originally posted by: JAG87
Originally posted by: Questar
Physics is computed through complex mathematical calculations

The same thing can be said of 3D graphics.

Not really. Graphics are not mathematical at all. All the calculations required are for the polygons in 3D objects, which are far simpler than physics calculations, trust me. It's like comparing geometry to advanced calculus. Plus, calculations in graphics are straightforward, while in physics you often have to do a lot of trial and error before you obtain a plausible instruction for the object to execute. Realistic physics are not pre-rendered. That's the beauty of it. It's all random, but it must be acceptable. You can't shoot someone in the foot and have them go flying 15 meters.

The rest in graphics is just applying textures and rendering pre-programmed lighting and shader effects. Pretty simple, until you start adding AA, AF, HDR... and all the bells and whistles people like today.

1. You sound quite foolish saying graphics aren't mathematical.

Nuh-uh, you do!
 

the Chase

Golden Member
Sep 22, 2005
1,403
0
0
Originally posted by: Ichigo
Until something better comes out, who is going to argue with what is basically a free PPU solution as long as you have a PCI-E video card right now?

You have an extra X1600 Pro or better lying around unused? And a Crossfire or one of these new 3x PCIe x16 mobos as well?
 

AyashiKaibutsu

Diamond Member
Jan 24, 2004
9,306
4
81
I would imagine any decent physics card would have many processor instructions a graphics card would lack...
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: Drayvn
If I remember correctly, doesn't the 7900GTX have like 3 ALUs per pipe? And the X1900 has like 1 or 2, with various smaller ALUs which are half the power...

Hmm, not much mathematical power there, then.

The 7-series has 2 ALUs per pipeline, while ATI has 1 ALU and 1 mini-ALU per pipe.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Until we get Vista we are not going to see effective physics utilization in games; the problem is built into D3D9 (yes, the 3D API). I'm tired ATM, but if anyone is interested I'll explain it later.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
I know I've beaten this to death, but there are some new faces in this thread...

The main issue with it right now is that it's mainly used for effects like explosions.

The problem is that when things explode in games, they do a hack-job, tacked-on type of explosion. The engine spawns a ton of random particles of crap to look pretty when it blows up. This causes a stall and a huge performance hit on the graphics card(s). IF the models were properly constructed (of transformable polygons and groups of polygons that would be modified on the fly by the physics and graphics engines working together), they wouldn't be spawning a ton of crap when things blow up, causing the hit.

Even if you don't have hardware physics, you can see this phenomenon by going into developer mode in any game and "forcing" the spawn of a bunch of stuff at once; you'll see the same stalls and FPS drops.
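Here's a crude sketch of part of what I mean; the code is made up, but the idea is standard object pooling, so nothing gets allocated mid-explosion:

```cpp
// Crude sketch of pooling: pre-allocate all the debris up front so
// nothing is allocated at explosion time. All names are made up.
#include <cstddef>
#include <vector>

struct Debris {
    bool  alive = false;
    float x = 0, y = 0, z = 0;    // position
    float vx = 0, vy = 0, vz = 0; // velocity
};

class DebrisPool {
    std::vector<Debris> pool; // fixed-size storage, allocated once
public:
    explicit DebrisPool(std::size_t capacity) : pool(capacity) {}

    // Reuse a dead slot instead of allocating; no mid-frame stall.
    Debris* spawn(float x, float y, float z) {
        for (Debris& d : pool) {
            if (!d.alive) {
                d = Debris{true, x, y, z, 0.0f, 0.0f, 0.0f};
                return &d;
            }
        }
        return nullptr; // pool exhausted: drop the debris, keep the frame rate
    }
};
```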

AGEIA can work; if it ever will is the real question... I have no doubt that it's the most elegant solution right now... I don't want SLI physics, and I don't want to have to buy a specific motherboard.

Edit: Spelling
 

aldamon

Diamond Member
Aug 2, 2000
3,280
0
76
Won't more realistic physics make gameplay too unpredictable? Scalable gameplay physics seems even worse.
 

the Chase

Golden Member
Sep 22, 2005
1,403
0
0
@Acanthus: I also think AGEIA is/could be the best solution out right now, but what are your thoughts on the peeps who got CellFactor to work without having the PPU installed? They didn't get the cloth (flag) to work, but it sounds like all the other physics features worked on CPUs. I don't know what their frame rate was compared to the PPU frame rates, but any thoughts on this?
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: the Chase
@Acanthus: I also think AGEIA is/could be the best solution out right now, but what are your thoughts on the peeps who got CellFactor to work without having the PPU installed? They didn't get the cloth (flag) to work, but it sounds like all the other physics features worked on CPUs. I don't know what their frame rate was compared to the PPU frame rates, but any thoughts on this?

CellFactor has a software precision mode; it's not as accurate as hardware.

It's also an Xbox 360 title.
 

skooma

Senior member
Apr 13, 2006
635
28
91
Originally posted by: the Chase
Originally posted by: Ichigo
Until something better comes out, who is going to argue with what is basically a free PPU solution as long as you have a PCI-E video card right now?

You have an extra X1600 Pro or better lying around unused? And a Crossfire or one of these new 3x PCIe x16 mobos as well?

I thought just a couple of months ago nVidia was showing a single 7900GT running with a single 7600 (IIRC) doing physics.