ATI Havok GPU physics apparently not as dead as we thought


ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Originally posted by: instantcoffee
while I am defending the open standard alternative Havok.
Just so we're clear, Havok is neither a "standard" nor is it "open". Havok is middleware - it's a software package that Havok Co. puts together and lets companies license for use in other software projects. It's not a standard in any sense, since it doesn't define any kind of data interface. Nor is it open, since Havok Co. gets to program Havok as it sees fit.

As for the matter at hand, I'll believe it when it ships. Havok Co. has been extremely flaky here in the past; nothing is set in stone until there's an actual product that devs can buy.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Nothing has really changed here. The real question, same as 6 months ago when AMD was blowing smoke and mirrors about PhysX, is whether Intel will actually license GPU accelerated middleware to AMD before they come out with their own acceleration-capable solution (Larrabee).

I don't doubt that Intel has put considerable work into a GPU-accelerated Havok, and I wouldn't be surprised if they have a GPU-accelerated build already; however, I think it remains to be seen when they're actually willing to license the technology. As demonstrated recently by the lawsuits involving both Nvidia and AMD, Intel has been known to protect and closely guard their IP on more than one occasion. ;)
 

thilanliyan

Lifer
Jun 21, 2005
12,040
2,256
126
Originally posted by: chizow
Intel has been known to protect and closely guard their IP on more than one occasion. ;)

I think it would benefit Intel to prevent PhysX from gaining any more traction by allowing ATI to use Havok. If they allowed ATI GPU Havok AND it did well (which is an unknown), they would have a foot in the door when Larrabee comes out, assuming they want Larrabee to run Havok. Sort of like Crossfire on Intel chipsets, IMO.
 

instantcoffee

Member
Dec 22, 2008
28
0
0
Originally posted by: ViRGE
Originally posted by: instantcoffee
while I am defending the open standard alternative Havok.
Just so we're clear, Havok is neither a "standard" nor is it "open". Havok is middleware - it's a software package that Havok Co. puts together and lets companies license for use in other software projects. It's not a standard in any sense, since it doesn't define any kind of data interface. Nor is it open, since Havok Co. gets to program Havok as it sees fit.

As for the matter at hand, I'll believe it when it ships. Havok Co. has been extremely flaky here in the past; nothing is set in stone until there's an actual product that devs can buy.

I thought it was clear already. By open standard, I am talking about OpenCL. "Open-standard-based alternative Havok" might be clearer if you prefer that. As opposed to the closed, proprietary CUDA.

By "extremely flaky", I am guessing that you refer to Havok FX, which was to accelerate physics on both Nvidia and ATI GPUs but was put on ice after Intel took over Havok? If so, I would hardly call it extremely flaky. Disappointing, yes, but it doesn't surprise me that business strategies alter when a company gets sold.

Originally posted by: keysplayr2003
"No, I'm not that impressed with CUDA"

I can't imagine why. Almost everyone in the industry is.

As for your questions, you can ask the first one to all the devs who signed on for PhysX and see if they feel it won't stick around. Better to ask them than to ask me.

Second question. I already answered this remember?

"it all comes down to which is best, faster, easier. It'll all play out as usual."

So there you go.

I would have to say however, that one standard across all hardware would be beneficial to all of us. OpenCL or other.

A tidbit on Nvidia and how they feel about OpenCL.

The industry is impressed by GPUs' GPGPU capabilities. ATI Stream, CUDA and OpenCL can all do that. But, unlike ATI Stream and CUDA, OpenCL is introduced as platform independent and as an open standard, which makes it more impressive.

You didn't answer any of the questions, only avoiding them. Still, I find this satisfactory:

I would have to say however, that one standard across all hardware would be beneficial to all of us. OpenCL or other.

Which is where we agree. One standard would benefit all.

I don't see how that should ever happen with CUDA. With OpenCL, we have a chance of getting a cross-platform standard, which is even an open standard.

I don't know if you recall or read Nvidia's statement about DX10.1 support, so here it is:

When we asked what DirectX 10.1 features Nvidia supported back in May, the company was very cagey, with Tony Tamasi claiming that "the red team will go out and try to get every ISV to implement things that aren't supported [by our GPUs] for competitive reasons. That really isn't good for game developers, Microsoft and also for us too. So I'd rather not say what [DX10.1] features we don't support."
http://www.bit-tech.net/news/h...eatures-in-far-cry-2/1

I don't believe that ATI thinks differently about CUDA and PhysX (under CUDA). Nvidia would exploit it for what it's worth. Nvidia has "offered" PhysX with CUDA to ATI, and some have tried to make ATI into the evil saboteur for not accepting those terms. Considering Nvidia's statement above, do you blame ATI?

PhysX has been heavily marketed to sell GFX cards (which is natural) and to build up the Nvidia brand and CUDA. That makes it difficult for other hardware vendors to adopt PhysX. As long as PhysX is not "independent" of Nvidia and not agnostic about which hardware it runs on, it will never take off, in my opinion. ATI has been clear about that earlier in their choice of Havok:

We will happily work with and support all middleware tool providers. We announced collaboration with Havok since they are willing to operate as a middleware vendor and enable support for all platforms. If Nvidia wishes to place resources into enabling PhysX on AMD platforms, we would have no argument, provided they don't artificially disadvantage our platforms in relation to theirs. We have attempted to initiate discussions with Nvidia on this matter, but so far they have been less than forthcoming.
http://www.tgdaily.com/content/view/38392/118/

In the text further down, you can read that Havok's API will get hardware support in AMD CPUs. Intel would probably do the same.

Our strategy is to optimize our CPUs to run Havok's API and libraries and then to investigate how we can improve gameplay with offloading certain forms of physics simulation to the GPU.

This means that Havok's API will get dedicated hardware support on all platforms in the future, regardless of whether users have an Nvidia card or not. If Nvidia chooses to support Havok as well, then Havok will have full support on all hardware: CPUs and GPUs.

Leaving PhysX out in the cold....

 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
"You didn't answer any of the questions, only avoiding them. Still, I find this satisfactory:"

I probably did answer your questions just fine. Just not the answers you wished for? I dunno. Can't make everyone happy I guess. hehe.
 

instantcoffee

Member
Dec 22, 2008
28
0
0
Originally posted by: keysplayr2003
"You didn't answer any of the questions, only avoiding them. Still, I find this satisfactory:"

I probably did answer your questions just fine. Just not the answers you wished for? I dunno. Can't make everyone happy I guess. hehe.

LOL! I think we might have different opinions of what qualifies as answers to a question. :beer:

In first question, I asked for your opinion of how PhysX will be as an alternative to Havok. You answered:

Ask someone else.

Second question I asked for your opinion of how you think Intel and AMD/ATI will relate to PhysX after their statements. You answered:

I refer to our previous discussions.

No direct answers to direct questions.

No biggie though. I'm not mad at you... :wine:
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
Originally posted by: instantcoffee


I thought it was clear already. By open standard, I am talking about OpenCL. "Open-standard-based alternative Havok" might be clearer if you prefer that. As opposed to the closed, proprietary CUDA.

By "extremely flaky", I am guessing that you refer to Havok FX, which was to accelerate physics on both Nvidia and ATI GPUs but was put on ice after Intel took over Havok? If so, I would hardly call it extremely flaky. Disappointing, yes, but it doesn't surprise me that business strategies alter when a company gets sold.


I don't believe that they have said that it will be OpenCL based; it very well might be another proprietary implementation.

Havok FX was launched long before Intel acquired Havok, and no developer chose to implement it. To be fair, it may have had some significant performance issues on SM 3.0 hardware that made it unattractive.
 

instantcoffee

Member
Dec 22, 2008
28
0
0
Originally posted by: aka1nas

I don't believe that they have said that it will be OpenCL based; it very well might be another proprietary implementation.

For what reason should Intel choose yet another standard, when Larrabee will support OpenGL?

The bottom lines is that Intel can make this very wide many-core CPU look like a GPU by implementing software libraries to handle DirectX and OpenGL.

http://anandtech.com/cpuchipse...howdoc.aspx?i=3367&p=1

Havok FX was launched long before Intel acquired Havok, and no developer chose to implement it. To be fair, it may have had some significant performance issues on SM 3.0 hardware that made it unattractive.


Havok FX was announced by both ATI and Nvidia. It got cancelled after Intel's purchase and never became available to developers. It did support SM 3.0. Here's Nvidia and Havok FX:
http://www.neoseeker.com/Artic...eviews/havokfx-nvidia/
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: instantcoffee
Originally posted by: keysplayr2003
"You didn't answer any of the questions, only avoiding them. Still, I find this satisfactory:"

I probably did answer your questions just fine. Just not the answers you wished for? I dunno. Can't make everyone happy I guess. hehe.

LOL! I think we might have different opinions of what qualifies as answers to a question. :beer:

In first question, I asked for your opinion of how PhysX will be as an alternative to Havok. You answered:

Ask someone else.

Second question I asked for your opinion of how you think Intel and AMD/ATI will relate to PhysX after their statements. You answered:

I refer to our previous discussions.

No direct answers to direct questions.

No biggie though. I'm not mad at you... :wine:

I'm glad you're not mad at me. But my answers stand. Not your version of my answers, but my own. :beer:

 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Originally posted by: instantcoffee
Originally posted by: aka1nas

I don't believe that they have said that it will be OpenCL based; it very well might be another proprietary implementation.

For what reason should Intel choose yet another standard, when Larrabee will support OpenGL?:
I'm not sure what they'd use for ATI and NV cards other than OpenCL. They could use CUDA and Stream, but that's more effort than I'd expect, plus Stream development is particularly rocky at the moment. If it's not a rehash of using shader code, OpenCL is the only thing that makes sense for NV and ATI cards.

But while we're on the subject, you can bet your butts that it'll have an x86 path for Larrabee. An OpenCL bytecode runtime is not going to be nearly as good for x86 as hand-tuned code, and tuning x86 code is something Intel is extremely good at. Owning Havok gives Intel a home field advantage, I'd expect them to play it for all it's worth.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: instantcoffee
Our strategy is to optimize our CPUs to run Havok's API and libraries and then to investigate how we can improve gameplay with offloading certain forms of physics simulation to the GPU.

This means that Havok's API will get dedicated hardware support on all platforms in the future. Regardless if they have an Nvidia card or not. If Nvidia chooses to support Havok as well, then Havok will have full support on all hardware. CPU's and GPU's.

Leaving PhysX out in the cold....
I'm not sure what you mean by the bolded portion. CPU physics support is typically referred to as software support, as there's no dedicated hardware acceleration; it's all done in software via the CPU, which isn't accelerated, so it's a de minimis feature. You also didn't attempt to answer the question raised by that quote from Victor Cheng where he says they'll support GPU accelerated physics "when it makes sense". So when does it make sense?

In a worst-case scenario for PhysX, where AMD and Intel do team up and allow for hardware acceleration on their discrete GPUs (again, accelerated beyond the means of the standard - any CPU), you're still looking at ~60/40 of that market favoring Nvidia (and probably much more given how long the 8800 series went unchallenged). If you advocated such an effort to implement hardware acceleration on one portion of the market (Intel and AMD GPUs) while denying it to the competitor (Nvidia), you'd be advocating the very same market discrimination you're condemning in Nvidia's PhysX. The main difference of course is that you'd be getting a much smaller % of the market in return in that trade.

But of course, all of this is moot, as Nvidia has already stated numerous times CUDA and PhysX will be fully compliant with OpenCL and DX11. PhysX isn't going anywhere, neither will Havok, as middleware sitting on top of whatever standard API is always going to be necessary. Just see how many games you own and play are actually made with the DirectX SDK tools rather than accomplished and supported middleware tools and engines.

Not only is hardware accelerated physics on any DX10 GPU possible with PhysX, it's also supported by any x86 CPU in software, as well as all 3 major consoles. The fact that PS3 and Wii recently signed up and made PhysX available to their devs, coupled with numerous leading publishers on the PC doing the same (2K, EA, THQ), are clear indications PhysX isn't going anywhere. A Havok GPU accelerated client won't change that; if anything, PhysX's support of GPU acceleration has made it the clear market leader, forcing Intel to offer the same features with Havok.

So what's the score right now? Intel and AMD are still blowing smoke and mirrors about a hardware accelerated solution with Havok, most likely based on a standard API like OpenCL or DirectX, which won't be available in any usable form until later this year. Nvidia has a fully functional hardware accelerated solution with PhysX that's self-sufficient, providing all necessary components with the API, middleware, and driver, and it will also be fully compliant with standards like OpenCL and DirectX. Coupled with the fact that many major PC publishers and 2 of the 3 major consoles put the PhysX SDK in their devs' hands for free, this makes me think PhysX isn't going anywhere; its support will only grow, imo.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
while I am defending the open standard alternative Havok.

Havok is not, by any stretch of the imagination, an open standard. In fact, it is controlled by the same company that is currently trying to force AMD out of the CPU market completely in the courts, despite a prior licensing agreement. You aren't comparing anything remotely resembling an open standard vs a proprietary one; you are comparing the 800lb gorilla and the 500lb gorilla.

OpenCL, which you seem to think nV isn't behind, was developed on nV hardware, and its board's chairman is a current nV VP. OpenCL is an API layered on top of CUDA, just as it will be layered on top of Stream.

The part where everything seems to tank on your end of the discussion: when exactly is the OpenCL based Havok alternative supposed to hit? Given that Intel has a very heavy interest in GPU based physics failing miserably atm, I'm not seeing them as being too interested in getting it done a second before Larrabee is ready to go. Conversely, EA and Take2 have already lined up PhysX for all of their upcoming games.

Right now we have two choices- Havok or PhysX. From what we can actually see and use today PhysX is FAR beyond what Havok can do. If Intel or AMD want to show off something better, by all means, have at it. Right now, PhysX is the only real game in town in terms of big jump in physics in games I can buy.

I have no vested interest in any given standard; personally I like the idea of MS making the call on elements such as this, solely because they have no vested interest - everyone else we are talking about in this conversation very clearly does. Because MS chose not to enter the fray, we are left with Havok vs PhysX. For right now, PhysX has shown us quite a bit more than Havok has, which means at this particular point I would be leaning towards PhysX as the direction I would want to head in.

As far as the other companies stating they won't follow: if nV ends up in a dominant position in terms of physics processing and PhysX becomes the de facto standard, then AMD and even Intel will hop on board. As for how difficult nVidia would make this, I would wager it would be rather easy.
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
Originally posted by: chizow
Not only is hardware accelerated physics on any DX10 GPU possible with PhysX, it's also supported by any x86 CPU in software, as well as all 3 major consoles. The fact that PS3 and Wii recently signed up and made PhysX available to their devs, coupled with numerous leading publishers on the PC doing the same (2K, EA, THQ), are clear indications PhysX isn't going anywhere. A Havok GPU accelerated client won't change that; if anything, PhysX's support of GPU acceleration has made it the clear market leader, forcing Intel to offer the same features with Havok.

Just as a side-note, PhysX had been available on all 3 consoles for quite a while before the Nvidia acquisition, and there have already been a few PS3 and Wii games released with PhysX support. The recent press release was likely just a renewal of agreements that had been originally made by Ageia.

 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: aka1nas
Just as a side-note, PhysX had been available on all 3 consoles for quite a while before the Nvidia acquisition, and there have already been a few PS3 and Wii games released with PhysX support. The recent press release was likely just a renewal of agreements that had been originally made by Ageia.
Yep, very true, a point I've made in the past, particularly when comparing to the capabilities of Havok. The conclusion then was the same as now: PhysX is capable of everything Havok is on all the same hardware, while the same cannot be said of Havok in relation to PhysX.

The big distinction I wanted to draw is that PhysX in the past was available to individual developers who licensed the SDK directly from Ageia (and Nvidia) or through a middleware engine (UE3.0). The recent announcements clearly change the situation in that all licensed/networked Wii and PS3 developers will gain access to these tools for "free" as part of their membership. Similarly for the publishers in the PC market signing on with PhysX: instead of individual devs (like Bioware, for example) using PhysX on a per-engine or per-title basis, their publishers (ex: EA) have made the PhysX SDK available to all of their dev houses (Bioware, DICE, Mythic, Pandemic etc).
 

thilanliyan

Lifer
Jun 21, 2005
12,040
2,256
126
Originally posted by: chizow
The conclusion then was the same as now, PhysX is capable of everything Havok is on all the same hardware while the same cannot be said of Havok in relation to PhysX.

Is PhysX actually more capable or is the hardware that it's run on more capable? (ie. would GPU accelerated Havok be similar in capability to PhysX or is Havok not able to simulate many effects that PhysX can?)
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
Originally posted by: thilan29
Originally posted by: chizow
The conclusion then was the same as now, PhysX is capable of everything Havok is on all the same hardware while the same cannot be said of Havok in relation to PhysX.

Is PhysX actually more capable or is the hardware that it's run on more capable? (ie. would GPU accelerated Havok be similar in capability to PhysX or is Havok not able to simulate many effects that PhysX can?)

It's moot as GPU-accelerated Havok doesn't exist yet.
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
Originally posted by: aka1nas
Originally posted by: thilan29
Originally posted by: chizow
The conclusion then was the same as now, PhysX is capable of everything Havok is on all the same hardware while the same cannot be said of Havok in relation to PhysX.

Is PhysX actually more capable or is the hardware that it's run on more capable? (ie. would GPU accelerated Havok be similar in capability to PhysX or is Havok not able to simulate many effects that PhysX can?)

It's moot as GPU-accelerated Havok doesn't exist yet.

GPU physics is not a long-term solution; in fact it is very short-sighted for gamers. For 3D and other modeling needs, sure, it is good.

We will be reaching octo-core CPUs, and at least now and for the near future half of the cores will not be used by modern games. The CPUs need to do something...
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: thilan29
Is PhysX actually more capable or is the hardware that it's run on more capable? (ie. would GPU accelerated Havok be similar in capability to PhysX or is Havok not able to simulate many effects that PhysX can?)
That's the point: capabilities and performance are directly tied to the limits of the hardware. Havok is currently limited to software and CPU acceleration, a performance level and feature set PhysX is able to match, as it's also compatible with the same CPUs. PhysX, on the other hand, is also capable of GPU acceleration and of effects and features that modern CPUs aren't able to adequately accelerate. Speculating about what Havok can or can't do with GPU acceleration is pointless, as there aren't even proof-of-concept demonstrations for public consumption. Maybe the GDC demo Terry Makedon alluded to will change that.

Originally posted by: Zstream
GPU physics is not a long-term solution; in fact it is very short-sighted for gamers. For 3D and other modeling needs, sure, it is good.

We will be reaching octo-core CPUs, and at least now and for the near future half of the cores will not be used by modern games. The CPUs need to do something...
I'd say GPU physics is certainly a long-term solution when a $50 add-in GPU outperforms the fastest $1000 8-threaded CPU when it comes to physics calculations and performance. As for what the CPU is going to do, it can focus on the improved multi-threaded GPU driver performance we've started seeing recently, along with the other multi-threaded optimizations inherited from console ports.
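[Editor's note] The scaling argument above rests on physics being data-parallel: each particle's update touches only that particle's own state. Here is a minimal, hypothetical Python sketch (the function names are illustrative, not from PhysX or Havok). Python threads won't actually make this faster in CPython, but the point is that the slices are independent, which is exactly the structure a GPU exploits by giving every particle its own hardware thread instead of a handful of CPU cores:

```python
from concurrent.futures import ThreadPoolExecutor

def step_slice(pos, vel, lo, hi, dt=0.01, g=-9.8):
    # Advance particles [lo, hi): each particle reads and writes only its
    # own state, so slices can be processed in any order, on any worker.
    out = []
    for i in range(lo, hi):
        v = vel[i] + g * dt
        out.append((pos[i] + v * dt, v))
    return lo, out

def step_serial(pos, vel):
    _, out = step_slice(pos, vel, 0, len(pos))
    return [p for p, _ in out], [v for _, v in out]

def step_parallel(pos, vel, workers=4):
    # Same arithmetic, partitioned into independent slices -- the structure
    # a GPU exploits with thousands of threads instead of a handful.
    n = len(pos)
    new_pos, new_vel = pos[:], vel[:]
    bounds = [(w * n // workers, (w + 1) * n // workers) for w in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        for lo, out in ex.map(lambda b: step_slice(pos, vel, *b), bounds):
            for k, (p, v) in enumerate(out):
                new_pos[lo + k], new_vel[lo + k] = p, v
    return new_pos, new_vel

pos = [float(i) for i in range(1000)]
vel = [0.0] * 1000
assert step_parallel(pos, vel) == step_serial(pos, vel)
```

Because no particle depends on any other within a step, the parallel result is bit-identical to the serial one; that independence, not any clever algorithm, is what makes physics effects a natural fit for GPU hardware.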
 

thilanliyan

Lifer
Jun 21, 2005
12,040
2,256
126
Originally posted by: chizow
That's the point: capabilities and performance are directly tied to the limits of the hardware. Havok is currently limited to software and CPU acceleration, a performance level and feature set PhysX is able to match, as it's also compatible with the same CPUs. PhysX, on the other hand, is also capable of GPU acceleration and of effects and features that modern CPUs aren't able to adequately accelerate. Speculating about what Havok can or can't do with GPU acceleration is pointless, as there aren't even proof-of-concept demonstrations for public consumption. Maybe the GDC demo Terry Makedon alluded to will change that.

Okay maybe I should have asked:

Is software PhysX able to simulate a wider variety of effects (regardless of how fast or slow it can run) compared to Havok? What I'm asking is whether PhysX is more "realistic" compared to Havok?
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: thilan29
Okay maybe I should have asked:

Is software PhysX able to simulate a wider variety of effects (regardless of how fast or slow it can run) compared to Havok? What I'm asking is whether PhysX is more "realistic" compared to Havok?

You can compare the feature sets to get an idea of what they're capable of:

PhysX Features
Havok Features

I don't think it's an accident that they largely mirror each other in terms of features and capabilities, as they've all been tied to the limitations of the CPU up until 8-9 months ago. It's also probably not an accident that the titles they list based on software physics are equally unremarkable.

Simple enough comparison: just look at the titles they list that use software PhysX and Havok. Do any of them stand out over the other? Now compare that to the titles that do have GPU accelerated PhysX. Is there a substantial improvement compared to the software physics? Based on those comparisons my conclusion is the same as it was months ago, as I stated earlier: PhysX is capable of everything Havok is on all the same hardware, while the same cannot be said of Havok in relation to PhysX.
 

thilanliyan

Lifer
Jun 21, 2005
12,040
2,256
126
Originally posted by: chizow
Originally posted by: thilan29
Okay maybe I should have asked:

Is software PhysX able to simulate a wider variety of effects (regardless of how fast or slow it can run) compared to Havok? What I'm asking is whether PhysX is more "realistic" compared to Havok?

You can compare the feature sets to get an idea of what they're capable of:

PhysX Features
Havok Features

Thanks.
 

instantcoffee

Member
Dec 22, 2008
28
0
0
This is going to be a long post, mainly to avoid multi-posting and to make it clear what I am answering.

Originally posted by: BenSkywalker
while I am defending the open standard alternative Havok.

Havok is not, by any stretch of the imagination an open standard.

I clarified this statement with ViRGE above by adding to it so it would be even clearer. That you choose to use the first version as an answer to one of my posts makes me not want to respond to the rest of your post. A pity, since the rest of it was more worth discussing.

Originally posted by: chizow
I'm not sure what you mean by the bolded portion. CPU physics support is typically referred to as software support, as there's no dedicated hardware acceleration, its all done in software via the CPU which isn't accelerated, its a de minimis feature. You also didn't attempt to answer the question in that quote from Victor Cheng where he says they'll support GPU accelerated physics "when it makes sense". So when does it make sense?

In my previous post, I linked to AMD's strategy of implementing dedicated hardware support for the Havok API in their CPUs.

I didn't know that Cheng had asked me a question that I was supposed to answer.

AMD hasn't answered the question as far as I've read (and I've paid attention to physics for many years now). I would assume, from what I've read, that they are referring to in-game physics that is crucial to the gameplay itself, not only decoration.

My personal interpretation: when they have signed the deal with Havok.

In a worst-case scenario for PhysX, where AMD and Intel do team up and allow for hardware acceleration on their discrete GPUs (again, accelerated beyond the means of the standard - any CPU), you're still looking at ~60/40% of that market favoring Nvidia (and probably much more given how the 8800 series went unchallenged for so long). If you advocated such an effort to implement hardware acceleration on one portion of the market (Intel and AMD GPUs) and denying it from the competitor (Nvidia), you'd be advocating the very same market discrimination you're condemning with Nvidia's PhysX. The main difference of course is that you'd be getting a much smaller % of the market in return in that trade.

Worst case for PhysX?

Currently, Havok is the most widely adopted middleware for physics, featuring in over 200 AAA game titles plus some lower-budget ones. Over 100 developers have signed up with Havok. Intel is the largest GPU maker (Nvidia is the largest in the discrete GPU market), while ATI is the third largest. Intel is a heavy player and owns Havok. Microsoft, another big player, has a unique deal with Havok:

Microsoft-produced games will use Havok's physics-based tools and animation systems until the end of time:
http://ve3d.ign.com/articles/n...ng-Deal-With-Microsoft

Havok is to run on the non-proprietary open standard OpenCL, while PhysX runs on the proprietary, closed CUDA.

PhysX can't run on anything less than the 8000 series of GPUs, leaving GPU acceleration on consoles out in the cold. Havok FX has been run even on older GFX cards and might be adapted to consoles with GPU acceleration.

Havok might run on both ATI and Nvidia cards, while PhysX runs only on Nvidia GPUs. Developers want to reach the largest audience, and PhysX won't do that.

Worst case scenario will then be that PhysX dies.

Best case scenario will be that PhysX is ported to OpenCL for everyone and tries to compete with Havok there. Havok is still bigger and more widely supported, so PhysX might still die.

Havok was used without special AMD coding which, if I understood it correctly, makes it compatible with Nvidia GPUs; it even ran on Nvidia GPUs. This means that Havok covers 100% of the market. Here's mhouston, System Architect at AMD:

Yes, we were running Havok cloth demos on multi-core CPU as well as GPU via OpenCL, all with the same OpenCL code underneath the Havok API. As was said above, there is no visible difference between the OpenCL code on either the CPU or the GPU and Havok's native code. The dancer dances off screen if you don't have the camera follow enabled, but the camera follow has a "bob" to it that makes some people sick after watching it for awhile.
We had a few demos we were cycling between. All OpenCL with no specific AMD functions or native code. I'm still partial to the Powdertoy demo and I have probably spend more time than I should playing with it. All in the name of debugging and optimizations.

I really hope Andrew's talk (EA) gets posted soon (the slides should all go up in not too long) as I think it's pretty cool that he was able to extract the Ropa cloth code used in Skate, port to OpenCL, and throw his code at AMD and Nvidia after developing on a different platform, and have AMD showing multi-core CPU and GPU and Nvidia showing GPU, side by side on alpha implementations. OpenCL is a real thing and the implementations are getting there. This year is going to be interesting and some of us are going to be very busy.

http://forum.beyond3d.com/show...p=1280212&postcount=24

Here you go. Perhaps GPU accelerated Havok physics for everyone. One really has to be in Nvidia's pocket not to like that!
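[Editor's note] The portability mhouston describes follows from OpenCL's programming model: a kernel is written once, per work-item, and the runtime compiles the same source for whichever device (CPU or GPU) the host selects. Below is a hypothetical sketch, not Havok's actual code: the kernel string shows the shape of such OpenCL C, and the plain-Python function mirrors the same per-work-item arithmetic so the example runs without an OpenCL driver:

```python
# Illustrative OpenCL C kernel (an assumption, not taken from any SDK).
# Each work-item integrates one particle; the identical source can be
# built for a CPU device or a GPU device by the OpenCL runtime.
INTEGRATE_KERNEL = """
__kernel void integrate(__global float* pos, __global float* vel,
                        const float dt, const float g) {
    int i = get_global_id(0);   /* one work-item per particle */
    vel[i] += g * dt;
    pos[i] += vel[i] * dt;
}
"""

def integrate_host(pos, vel, dt, g):
    # Host-side mirror of the kernel body, looped over all "work-items".
    for i in range(len(pos)):
        vel[i] += g * dt
        pos[i] += vel[i] * dt

pos, vel = [0.0, 10.0], [0.0, 0.0]
integrate_host(pos, vel, dt=0.1, g=-9.8)
```

Nothing in the kernel names a vendor or a device; that is what lets the same code sit underneath an API like Havok's on a multi-core CPU, an AMD GPU, or an Nvidia GPU, as the demos described above reportedly did.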

PS. I had to cut your post a bit to keep this from getting too long and repetitive. Please ask if there were questions you wanted answered. :)
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: BFG10K
I agree with Ben; the ideal solution is for Microsoft to define a physics standard which then forces anyone that wants to be a major player to support it. DirectX (and OpenGL to a lesser extent) has been one of the best things to happen to PC gaming.

A Havok/PhysX war won't be good for consumers.

You basically support a Microsoft monopoly then.

Competition is good; it will result in a better product. PhysX could work with OpenGL, OpenCL and DirectX, while a DirectX solution would only work with DirectX and only on MS products. PhysX works with every game console and can work on many operating systems. Havok could do the same as well.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: instantcoffee
In my previous post, I am linking to AMD's strategy for implementing dedicated hardware support for the Havok API in their CPUs.
And how does this negatively impact Nvidia's push for GPU-accelerated PhysX? Nvidia's GPUs will run on any platform that supports CPU physics, both Intel and AMD, and benefit from any such optimizations. The end result is no advantage gained and no penalty from CPU optimized physics.

I didn't know that Cheng asked me a question I was to answer.
No he didn't, he gave the traditional non-sequitur marketing babble that, when translated, means "we don't have jack." Of course his comment raises the question "when does it make sense?", which is the question I'm asking you. When do you think it makes sense for Intel to license a GPU-accelerated version of their proprietary physics engine, given they do not have a discrete GPU solution capable of accelerating physics beyond that of a CPU? When do you think it makes sense for AMD to back a GPU-accelerated physics engine, given they have no direct control over any physics API or middleware and are completely reliant on 3rd-party interests or open standards that are months away from public consumption?

AMD hasn't answered the question as far as I've read (I've paid attention to physics for many years now). I would assume, from what I've read, that they are referring to in-game physics that is crucial to the gameplay itself, not only decoration.
Of course they haven't answered the question, as they do not have a solid answer. This is what makes this so comical, there's AMD and people like you advocating "nothing" or "anything" over PhysX when PhysX is clearly the better solution right now with at least an equally optimistic outlook.

In-game physics "crucial to gameplay" with Havok are going to run into the same road blocks that we see with current titles that use PhysX. Simply put they won't exist until there's a de minimis baseline that allows for the same gameplay functionality with all hardware configurations. Until then we'll be left with mostly scalable eye-candy, which bodes well for Nvidia given many titles now are console ports. Having the SDK in the hands of those developers will make scalable PhysX much easier as a result.

My personal interpretation: when they have signed the deal with Havok.
They already signed the deal with Havok. They signed it 3 years ago with Havok FX and they re-signed it 9 months ago as a paper response to Nvidia's GPU accelerated PhysX. And we still have nothing. Oh wait, we have a blog entry by Terry Makedon about a possible tech demo. Great.

Worst case for PhysX?

Currently, Havok is the most widely adopted middleware for physics, featuring over 200 AAA game titles plus some lower-budget ones. Over 100 developers have signed up with Havok. Intel is the largest GPU maker (Nvidia is largest in the discrete GPU market), while ATI is the third largest. Intel is a heavy player and owns Havok. Microsoft, another big player, has a unique deal with Havok:
Let's cut through the marketing BS and look at the numbers that matter. Intel does not have a GPU that can accelerate physics beyond the capabilities of a CPU, nor do they produce any GPUs that are capable of more than basic gaming, so their market share means nothing. Period. There is no hope of this changing until Larrabee at the earliest and even then there's much doubt whether Intel's solution will be competitive.

So that leaves us with the discrete GPU market, which Nvidia has dominated for years with around 60/40 share, a percentage that was as high as 70/30 at the height of G80/G92's dominance. So again, in a worst-case for Nvidia, you're advocating a segregated market and physics solution with Havok that actually results in diminished market share of GPU accelerated parts. And how exactly do you expect the situation or adoption of hardware physics to improve?

Microsoft-produced games will use Havok's physics-based tools and animation systems until the end of time:
http://ve3d.ign.com/articles/n...ng-Deal-With-Microsoft
LOL. I guess it's probably a good thing for PhysX that Microsoft has closed down nearly all of its gaming studios. They shut down most of their PC studios years ago, and the last few shut down earlier this year when Ensemble and ACEs closed their doors.

Havok is running on the non-proprietary open standard OpenGL, while PhysX runs on the proprietary, closed CUDA.
Again, where's this OpenGL Havok client you're referring to? HavokFX? Or did you mean OpenCL, a standard which was only recently finalized but not yet available for public consumption?

PhysX can't run on anything less than the 8000 series of GPUs, leaving GPU acceleration on consoles out in the cold. Havok FX has been run even on older graphics cards and might be adapted to consoles with GPU acceleration.
GPU accelerated PhysX requires a DX10 capable GPU, as would any GPU accelerated version of Havok that relies on OpenCL or DX11. PhysX runs just fine on consoles and is equally capable of anything Havok is on the same hardware. Your continued references and level of expectations with regard to Havok FX are comical although I'm not sure you realize they undermine all of your arguments with regard to GPU accelerated physics.

Havok might run on both ATI and Nvidia cards, while PhysX runs only on Nvidia GPUs. Developers want to reach the largest audience, and PhysX won't do that.
Again, if developers want to reach the largest audience, the clear choice is PhysX, as Nvidia dominates the discrete GPU market. All market share indicators and anecdotal evidence relating to DX10-capable parts corroborate this. Also, as it's obvious you're bent on misrepresentation and misinformation, you seem to be ignoring the fact that ATI parts can run PhysX:

  • Does AMD block PhysX on Radeon development?
    While Nvidia opened up and provided access to its software libraries, engineers and hardware, Badit noted that AMD was less helpful. It appeared as if AMD was silently blocking the development of PhysX for Radeon.
Of course none of this should really come as a surprise as parts from both vendors need to satisfy the same compute shader requirements for OpenCL and DirectX 11 compliance and compatibility.

Worst case scenario will then be that PhysX dies.

Best case scenario will be that PhysX gets ported to OpenGL for everyone and tries to compete with Havok there. Havok is still bigger and more widely supported, so it might still die.
You mean ported to OpenCL? Maybe you should've read Ben's post; as has been linked previously, Nvidia has already stated numerous times that PhysX will fully support both DX11 and OpenCL. Again, as mentioned, it probably helps that Nvidia's VP of Embedded Technology chairs the OpenCL group at Khronos. This would be like claiming the President of the US doesn't speak English, lol. DX11 compute shader support is fully backward compatible with DX10 parts.

End result is you have some of the largest publishers on the PC (EA, 2K, THQ) and 2 of the 3 major consoles (Wii and PS3) along with some of the most influential game engines (UE 3.0 and Gamebryo) all signing and announcing major licensing agreements with PhysX in the past 8-9 months since Nvidia announced GPU accelerated PhysX. I'd say support is clearly growing in PhysX's favor, if anything.

Havok was used without special AMD coding, which, if I understood it correctly, makes it compatible with Nvidia GPUs, and it even ran on Nvidia GPUs. This means that Havok covers 100% of the market. Here's mhouston,
System Architect, AMD:

<text>

http://forum.beyond3d.com/show...p=1280212&postcount=24
Ya, sounds like you misunderstood. They're using Havok as middleware on top of OpenCL's standardized API as the HAL, which then interfaces with the various hardware. Good thing PhysX can do and already does the exact same thing through its own API on CUDA, and it will also be fully compatible with OpenCL and DX11.....

Here you go. Perhaps GPU-accelerated Havok physics for everyone. One really has to be in the pocket of Nvidia not to like that!

PS. I had to cut your post a bit to keep it from getting too long and repetitive. Please ask if there were questions you wanted answered. :)
Perhaps we'll see GPU accelerated Havok this year, but not until it "makes sense" for Intel. ;) In the meantime we can see tangible benefits today with PhysX with the promise of better support in the future.