Let me first say that this thread is not for those who maintain delusions that either we don't need more physics processing power, or that mere extra CPU cores can meet our needs. There is no place for irrational fear of technological progression here.
Let's take a look back at how this physics escapade started. AGEIA, a very young rival to veteran middleware supplier Havok, began telling people that we needed dedicated hardware for the physics calculations of future games. Those of us who saw the potential were instantly hooked. Many others took an optimistic wait-and-see approach. Some thought it would sadly fail, and some stuck their heads in the sand and insisted it was poppycock, just like that newfangled "3D card" those guys made a decade ago. New technology, pffft.
Then came nVidia and Havok, both having something to lose in light of consumers spending money on AGEIA physics cards. They tag-teamed and did something that fell somewhere between cashing in and damage control. The deal seemed riddled with greed from both sides, with nVidia using "SLI physics" as an excuse to promote SLI, and Havok making yet another thing to charge developers for. If AGEIA weren't already asking $299 a pop for physics cards, it would have been laughable. What was laughable instead was ATI's "Don't forget about us, we're still here!" physics press release that brought absolutely nothing to the table.
The informed community members, which I'd imagine includes anyone reading this, knew that Havok and nVidia's solution was just eye candy, nothing that affected gameplay. nVidia is certainly allowed to provide more uses for their products and make SLI a better value proposition, but I personally did not like the idea that they might be trampling on the fledgling "REAL" physics cards that could revolutionize gaming as we know it.
Then we started seeing the AGEIA card and impressions. Simply put, developers either couldn't or wouldn't take full advantage of it. The raw calculating power on display did back up many of the claims made about the card's potential, but it was never turned into anything meaningful in a game! The CellFactor demo even turned out to be a mess. With no Unreal multiplayer support for anything but effects physics, it seemed doomed. I, and many others I'm sure, began to accept that AGEIA couldn't work. There was no reason for high-end users to adopt it, no way for its benefits to ever trickle down to "Mr. Average-Joe Dell", and no reason for developers to do anything differently. Short of long-term developer adoption of their discounted middleware, and greatly reduced production costs, this chicken-and-egg problem would never resolve.
Then ATI comes out of nowhere and blindsides us all with their crazy scheme. To be fair, many clues could have been picked up on over the last few months, but I digress. At first glance, the idea of a third GPU slot for physics sounds like the ultimate case of lame one-upmanship and greed. Then... you see ATI's intentions, and you can begin to see how it is quite the opposite! Completely the opposite!
ATI is letting consumers use their old graphics card for physics... completely independent of their choice of main video card. ATI is not seeking to add GPU sales, but rather to entice users over to their platform. Look closely, imagine the possible outcomes, and you will see how this is the ONLY physics solution that can actually work, the only one that provides a bridge from today to tomorrow.
Ultra-High End User: The best video card also happens to be the best physics card, which can be used in addition to dual-GPU solutions without sacrifice.
High-End User: Users can use a cheap video card or their previous one for physics, completely independent of their GPU situation.
Average-Joe: Can use the integrated GPU for physics after adding a video card.
Did you see that coming?
So, here's the overall situation, the one developers have to address. They code games that use the physics power of integrated GPUs as the baseline for physics that affects gameplay. They already do the same for video: integrated graphics determines the core baseline for many games, and other video cards build upon that baseline. In the same way, video cards that work as physics cards in higher-end motherboards will just be faster and have more "effects physics". Physics thus achieves the same position with developers and consumers that video cards have.
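To make that concrete, here's a rough sketch of how a developer might split the work: gameplay physics sized for the weakest target (the integrated GPU), with cosmetic effects physics scaled to whatever card is doing physics duty. This is just illustrative Python; every name and number in it is made up by me, not taken from any real engine or API.

class Body:
    def __init__(self, pos=0.0, vel=0.0):
        self.pos, self.vel = pos, vel

    def step(self, dt):
        self.pos += self.vel * dt  # trivial stand-in for real integration

BASELINE_BODIES = 200  # gameplay physics budget, sized for an integrated GPU

def effects_budget(has_dedicated_physics_gpu):
    # Cosmetic physics scales with whatever card is handling physics duty.
    return 5000 if has_dedicated_physics_gpu else 500

def simulate_frame(dt, gameplay, effects, budget):
    # Gameplay physics: same workload on every machine, so it can affect outcomes.
    for b in gameplay[:BASELINE_BODIES]:
        b.step(dt)
    # Effects physics: extra debris/cloth/particles, safe to cut on slower hardware.
    for b in effects[:budget]:
        b.step(dt)

# Example: an Average-Joe box using its integrated GPU for physics.
gameplay = [Body(vel=1.0) for _ in range(BASELINE_BODIES)]
effects = [Body(vel=2.0) for _ in range(2000)]
simulate_frame(1.0 / 60, gameplay, effects, effects_budget(False))

The point is only that the baseline gameplay simulation stays fixed, while the effects workload is the part that scales with hardware.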
It all depends on the assumption that ATI's chipset and platform is designed to let GPUs provide true physics that can affect gameplay. I assume this is possible based on the completely different stance Havok is taking, in addition to the ATI demos that support this by showing objects colliding and interacting. If gameplay physics can be achieved, that is all that matters.
Enough of my ranting; what are everyone else's thoughts?