ATI Havok GPU physics apparently not as dead as we thought

Creig

Diamond Member
Tamlin_WSGF over at HardForum noticed the following comments posted on Twitter by CatalystMaker/Terry Makedon:

http://www.hardforum.com/showthread.php?t=1404124


ATI GPU Physics strategy and ohhh maybe a demo being thrown down next week at GDC. If you're there GO see it
http://twitter.com/CatalystMaker/status/1356572319

Yup it is. Havok is indeed our partner of choice. Go check out the session if you are around, should be educational.
http://twitter.com/CatalystMaker/status/1356774985




Personally, I was hoping for ATI to go with the rumored DirectPhysics API that is supposedly coming out with DX11.
 

aka1nas

Diamond Member
There isn't going to be a DirectPhysics API in DX11; the Compute Shader should allow for more direct access to the GPU for general-purpose computing, though (think something more like CUDA).

Ideally, both Havok and PhysX will be extended to support the Compute Shader once DX11 is out so they will be hardware-independent (Nvidia is saying that they will have a wrapper for CUDA to do this, so PhysX would then inherit support).
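To make the "think more like CUDA" comparison concrete, here's a rough, illustrative sketch (all names and values are made up, not taken from Havok or PhysX) of the kind of data-parallel work a physics engine would hand off to the GPU; the same per-particle loop maps naturally onto CUDA today and onto a DX11 Compute Shader or OpenCL kernel later:

```cuda
// Illustrative only: a trivial particle integrator, the sort of embarrassingly
// parallel work a physics engine would offload to CUDA, a DX11 Compute Shader,
// or OpenCL. All names and values here are hypothetical.
#include <cstdio>
#include <cuda_runtime.h>

struct Particle { float px, py, pz, vx, vy, vz; };

__global__ void integrate(Particle* p, int n, float dt, float gravityY)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per particle
    if (i >= n) return;
    p[i].vy += gravityY * dt;      // apply gravity to velocity
    p[i].px += p[i].vx * dt;       // advance position by velocity
    p[i].py += p[i].vy * dt;
    p[i].pz += p[i].vz * dt;
}

int main()
{
    const int   N  = 1 << 20;        // a million particles
    const float DT = 1.0f / 60.0f;   // one 60 Hz frame

    Particle* d_particles = 0;
    cudaMalloc(&d_particles, N * sizeof(Particle));
    cudaMemset(d_particles, 0, N * sizeof(Particle));   // everything at rest at the origin

    integrate<<<(N + 255) / 256, 256>>>(d_particles, N, DT, -9.81f);
    cudaDeviceSynchronize();

    cudaFree(d_particles);
    std::printf("stepped %d particles\n", N);
    return 0;
}
```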
 

BenSkywalker

Diamond Member
regardless this is still good news, we don't want one side to have a monopoly on physics acceleration

I would beg to differ there; ideally, I think that 'side' would be the software side. Right now Intel/nV dictate everything in the physics-accelerated world with no real input from anyone else required. If MS had taken the time to add hardware physics into DX11, it would have been a much larger improvement for gamers than seeing a couple of tweaks to help shaders give us the exact same results they have been giving since DX9 with a few fewer ops.

Maybe ATi will show off some tech demo before they remember that they have been telling us for years they were going to do something, and then decided that doing nothing outside of licensing Intel's tech, and nothing at all to advance or promote it, was the way to go. That may sound harsh to ATi fans, but we have been hearing this since at least the X1900 days, and you can only hear it so many times before you don't believe a word of it.
 

Keysplayr

Elite Member
They just need to follow through. I second Ben's statements. If you say you're going to do something, then...........
 

thilanliyan

Lifer
It's very bad if there are two versions of physics, IMO. You shouldn't have to choose games based on which video card you have.
 

Keysplayr

Elite Member
But people do choose video cards based on what games they play. Whichever does better in a given benchmark. Right?
In most of the "Which card should I buy?" threads, the second post begins with "What games do you play?".
I guess we have another Blu-Ray and HD-DVD kind of war brewing. One will win in the end. One will have to cave.
 

BFG10K

Lifer
I agree with Ben; the ideal solution is for Microsoft to define a physics standard which then forces anyone that wants to be a major player to support it. DirectX (and OpenGL to a lesser extent) has been one of the best things to happen to PC gaming.

A Havok/PhysX war won?t be good for consumers.
 

apoppin

Lifer
Originally posted by: keysplayr2003
But people do choose video cards based on what games they play. Whichever does better in a given benchmark. Right?
In most of the "Which card should I buy?" threads, the second post begins with "What games do you play?".
I guess we have another Blu-Ray and HD-DVD kind of war brewing. One will win in the end. One will have to cave.

"Most people" do not chose video cards logically, wouldn't you say? i think they do not make a choice at all and rely on IG and use consoles to play games.

Techies who visit tech sites tend to pay attention to benchmarks. Those less informed but still interested tend to look at the conclusion in a magazine - even one like Consumer Reports - or rely on a recommendation from a friend or a salesman - or worse, make an impulse buy based on the box specs and advertising.

Just a guess also, though ... it is possible that the CPU may end up doing Havok while the GPU uses CUDA, and ATi tries to set up something on their GPU that works with both CUDA and Havok ...


 

thilanliyan

Lifer
Originally posted by: keysplayr2003
But people do choose video cards based on what games they play. Whichever does better in a given benchmark. Right?
In most of the "Which card should I buy?" threads, the second post begins with "What games do you play?".
I guess we have another Blu-Ray and HD-DVD kind of war brewing. One will win in the end. One will have to cave.

Yes, but you can choose either and be slightly slower in some games and slightly faster in others, while still being able to play all games with all the graphical effects (i.e. AA, AF, etc.) no matter which card you choose. Whereas in the case of physics, you're left out in the cold completely in some games if you choose to buy one card or the other. You COULD just run the game without hardware physics acceleration, but depending on the game you might be losing out... that is the situation I don't want to see.

Well, maybe PhysX will be ported to ATI cards, or using a spare GeForce card will work in PhysX-enabled games (like it sort of does now).
 

instantcoffee

Member
Originally posted by: BenSkywalker
regardless this is still good news, we don't want one side to have a monopoly on physics acceleration

I would beg to differ there; ideally, I think that 'side' would be the software side. Right now Intel/nV dictate everything in the physics-accelerated world with no real input from anyone else required. If MS had taken the time to add hardware physics into DX11, it would have been a much larger improvement for gamers than seeing a couple of tweaks to help shaders give us the exact same results they have been giving since DX9 with a few fewer ops.

Maybe ATi will show off some tech demo before they remember that they have been telling us for years they were going to do something, and then decided that doing nothing outside of licensing Intel's tech, and nothing at all to advance or promote it, was the way to go. That may sound harsh to ATi fans, but we have been hearing this since at least the X1900 days, and you can only hear it so many times before you don't believe a word of it.

I doubt Microsoft is going to put any hardware physics into DX11:

Microsoft-produced games will use Havok's physics-based tools and animation systems until the end of time
http://ve3d.ign.com/articles/n...ng-Deal-With-Microsoft

Havok FX was going to be supported on both ATI and Nvidia cards earlier, not only ATI as you say. This might still be the case. ATI has mentioned in all their press releases that they want to go with Havok because it's an open standard.

"There is no plan for closed and proprietary standards like PhysX," stated Cheng. "As we have emphasised with our support for OpenCL and DX11, closed and proprietary standards will die."
http://news.softpedia.com/news...-Is-Future-99876.shtml

So, why Havok? Cheng reasoned that Havok's technology and toolset have been widely accepted by developers and are considered to be "very mature". He also noted that Havok follows AMD's open approach philosophy.

While we are still a bit dazzled by the fact that AMD decided to go with an Intel-owned physics engine (and we are pretty sure that some people at Intel may be a bit surprised as well), Cheng stressed that Havok remains "independent" from Intel.
http://www.tomshardware.com/ne...avok-physics,5646.html

With over 100 developers and 300 leading titles already using Havok's physics engine -- Havok Physics -- the company has clearly defined its position as the leading developer of game physics. By working together, both companies are demonstrating their commitment to open standards and continued support for the needs of the game community.
http://hothardware.com/News/St...and-Havok-Cooperating/

As long as PhysX is CUDA-based, I doubt ATI will support it. If Nvidia had ported the libraries to OpenCL, chances would have been better. Now it might be a little too late for PhysX.

Intel is the largest vendor of GPU chips if you count notebooks as well. They are planning to enter the desktop market, and Intel is a heavy player. I think it was wise of AMD to team up with Intel, especially if Havok remains "independent" from Intel. They can then optimize both their GPUs and CPUs for Havok. PhysX under CUDA makes Nvidia into a game maker and the only one who sits with the cheat codes for the game. Havok can offer fairer play if they remain independent, especially if they use OpenCL.

Now they can offer physics support in their upcoming K11 Bulldozer and Larrabee.

This is good news!
 

MarcVenice

Moderator Emeritus
I've been doing some reading, and it could be that ATI is making Havok run on both ATI Stream and OpenCL. So when DirectX 11 comes out, both Nvidia and ATI cards can accelerate Havok physics. And then PhysX is dead. Also, AFAIK OpenCL is backed by the Khronos Group, and Havok is supported by far more developers than Nvidia's PhysX. The recent Nvidia press releases about the PhysX SDK being free for PS3/Wii developers were pure marketing bullshit PR releases, because it was already free, and it seems Havok is a more advanced API than PhysX, or so game developers have told me/us.
 

instantcoffee

Member
I think PhysX will die too. Game developers want to reach everyone to sell their games. If Havok becomes universally supported, then that would be a better choice.

Havok has always earned their money by selling licenses as middleware, not by selling hardware. I think Havok will, unlike PhysX, continue on that path and use open standards instead of closed ones like CUDA.

In addition, Havok FX worked on old ATI and Nvidia cards. That gives hope for console users, like PS3 and Xbox 360, to benefit from GPU-accelerated physics. PhysX was limited to the 8xxx series and above, and it's currently an Nvidia-only product. Developers often release their games on several consoles and the PC, so again they might reach a larger audience with Havok as their physics engine.

In the end, I think we will have GPU accelerated physics for everyone and it will be hardware independent.
 

Keysplayr

Elite Member
Originally posted by: instantcoffee
Originally posted by: BenSkywalker
regardless this is still good news, we don't want one side to have a monopoly on physics acceleration

I would beg to differ there; ideally, I think that 'side' would be the software side. Right now Intel/nV dictate everything in the physics-accelerated world with no real input from anyone else required. If MS had taken the time to add hardware physics into DX11, it would have been a much larger improvement for gamers than seeing a couple of tweaks to help shaders give us the exact same results they have been giving since DX9 with a few fewer ops.

Maybe ATi will show off some tech demo before they remember that they have been telling us for years they were going to do something, and then decided that doing nothing outside of licensing Intel's tech, and nothing at all to advance or promote it, was the way to go. That may sound harsh to ATi fans, but we have been hearing this since at least the X1900 days, and you can only hear it so many times before you don't believe a word of it.

I doubt Microsoft is going to put any hardware physics into DX11:

Microsoft-produced games will use Havok's physics-based tools and animation systems until the end of time
http://ve3d.ign.com/articles/n...ng-Deal-With-Microsoft

Havok FX was going to be supported on both ATI and Nvidia cards earlier, not only ATI as you say. This might still be the case. ATI has mentioned in all their press releases that they want to go with Havok because it's an open standard.

"There is no plan for closed and proprietary standards like PhysX," stated Cheng. "As we have emphasised with our support for OpenCL and DX11, closed and proprietary standards will die."
http://news.softpedia.com/news...-Is-Future-99876.shtml

So, why Havok? Cheng reasoned that Havok's technology and toolset have been widely accepted by developers and are considered to be "very mature". He also noted that Havok follows AMD's open approach philosophy.

While we are still a bit dazzled by the fact that AMD decided to go with an Intel-owned physics engine (and we are pretty sure that some people at Intel may be a bit surprised as well), Cheng stressed that Havok remains "independent" from Intel.
http://www.tomshardware.com/ne...avok-physics,5646.html

With over 100 developers and 300 leading titles already using Havok's physics engine -- Havok Physics -- the company has clearly defined its position as the leading developer of game physics. By working together, both companies are demonstrating their commitment to open standards and continued support for the needs of the game community.
http://hothardware.com/News/St...and-Havok-Cooperating/

As long as PhysX is CUDA-based, I doubt ATI will support it. If Nvidia had ported the libraries to OpenCL, chances would have been better. Now it might be a little too late for PhysX.

Intel is the largest vendor of GPU chips if you count notebooks as well. They are planning to enter the desktop market, and Intel is a heavy player. I think it was wise of AMD to team up with Intel, especially if Havok remains "independent" from Intel. They can then optimize both their GPUs and CPUs for Havok. PhysX under CUDA makes Nvidia into a game maker and the only one who sits with the cheat codes for the game. Havok can offer fairer play if they remain independent, especially if they use OpenCL.

Now they can offer physics support in their upcoming K11 Bulldozer and Larrabee.

This is good news!

Some light reading for ya.
From your posts here, I don't think you actually understand what CUDA or the C programming language is, or what they are capable of.

Text

I'll post some more items that might help explain to you what CUDA is capable of, unless of course you decide to read up on it yourself, which would be cooler.
;)

 

Keysplayr

Elite Member
Originally posted by: instantcoffee
I think PhysX will die too. Game developers want to reach everyone to sell their games. If Havok becomes universally supported, then that would be a better choice.

Havok has always earned their money by selling licenses as middleware, not by selling hardware. I think Havok will, unlike PhysX, continue on that path and use open standards instead of closed ones like CUDA.

In addition, Havok FX worked on old ATI and Nvidia cards. That gives hope for console users, like PS3 and Xbox 360, to benefit from GPU-accelerated physics. PhysX was limited to the 8xxx series and above, and it's currently an Nvidia-only product. Developers often release their games on several consoles and the PC, so again they might reach a larger audience with Havok as their physics engine.

In the end, I think we will have GPU accelerated physics for everyone and it will be hardware independent.

HavokFX was abandoned.
I'm sure you understand that this is a game of wills. AMD/ATI could easily implement CUDA and CUDA-based PhysX on their hardware. The developer tools alone for CUDA are pretty extensive and well established. But..... Nvidia is the arch-enemy of ATI. So what did we expect?

You might ask after reading this, "Well, why then doesn't Nvidia just adapt and use Havok?" They already can. But further than that, Nvidia is far ahead of ATI right now in terms of GPGPU-based computing, both in their physical architecture and their software libraries.
As far as Stream goes, we still don't know if ATI will follow through, or just let it sit and die.
A few press releases here and there don't mean Stream is actually going to be continually worked on with true dedication. Maybe ATI does not have the resources to pull it off. I don't know. Wait and see in this economy.
 

nosfe

Senior member
IMO you'd have to be a pretty big ATI fanboy to think that Stream will take off. OpenCL, on the other hand, has Apple on its side, which means Adobe will probably use it for its products in the future instead of CUDA.
 

Keysplayr

Elite Member
Originally posted by: nosfe
IMO you'd have to be a pretty big ATI fanboy to think that Stream will take off. OpenCL, on the other hand, has Apple on its side, which means Adobe will probably use it for its products in the future instead of CUDA.

Or alongside CUDA. Not instead.

"Rather than being competing technologies, Hegde noted that "OpenCL is a layer on top of the CUDA driver interface. As such, OpenCL is one avenue to GPU computing through CUDA, C for CUDA is another."

 

nosfe

Senior member
Well, I don't know how they're implemented, but logically (which doesn't always apply with stuff like this), wouldn't OpenCL run faster on Nvidia cards if it were natively supported and not run through a layer of CUDA? So let's say there's no penalty for doing this (obviously Nvidia knows better): why would any developer bother with CUDA if Nvidia cards also support a format that runs on more graphics cards? Would developing for CUDA be so much easier as to justify leaving out all the ATI cards that would support OpenCL in the future?
 

instantcoffee

Member
Originally posted by: keysplayr2003
Some light reading for ya.
From your posts here, I don't think you actually understand what CUDA or the C programming language is, or what they are capable of.

Text

I'll post some more items that might help explain to you what CUDA is capable of, unless of course you decide to read up on it yourself, which would be cooler.
;)

I know what CUDA is and does. The fact that you conclude from my posts that I don't makes me think you don't know what it is and does.



Originally posted by: keysplayr2003
HavokFX was abandoned.
I'm sure you understand that this is a game of wills. AMD/ATI could easily implement CUDA and CUDA-based PhysX on their hardware. The developer tools alone for CUDA are pretty extensive and well established. But..... Nvidia is the arch-enemy of ATI. So what did we expect?

You might ask after reading this, "Well, why then doesn't Nvidia just adapt and use Havok?" They already can. But further than that, Nvidia is far ahead of ATI right now in terms of GPGPU-based computing, both in their physical architecture and their software libraries.
As far as Stream goes, we still don't know if ATI will follow through, or just let it sit and die.
A few press releases here and there don't mean Stream is actually going to be continually worked on with true dedication. Maybe ATI does not have the resources to pull it off. I don't know. Wait and see in this economy.

Havok FX was put on ice by Intel, but never abandoned as you can see now.

AMD/ATI would never implement CUDA and CUDA-based PhysX on their hardware. They use Brook+, and that would result in double-duty conversion and execution of the instructions. If Nvidia had ported the PhysX libraries from CUDA to OpenCL, ATI could have accessed them through ATI Stream and Nvidia through CUDA. In its current state, it's locked into proprietary CUDA.

Instead, ATI went with Havok, which traditionally is a non-proprietary middleware maker (in the sense that they are hardware-agnostic). If everything goes as it should, both Nvidia and ATI should be able to use it through OpenCL.
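To illustrate what "hardware-agnostic middleware" could look like in code (just a sketch under my own assumptions; none of these names come from Havok, PhysX, or AMD), the engine-facing API would be a thin interface with interchangeable GPU backends behind it, e.g. a CUDA one today and an OpenCL one later:

```cuda
// Hypothetical sketch (names invented here, not from Havok, PhysX, or AMD) of
// the seam a hardware-agnostic physics middleware needs: the game engine only
// talks to PhysicsBackend, a CUDA-backed implementation sits behind it, and an
// OpenCL- or Brook+-backed one could be swapped in without touching engine code.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

struct PhysicsBackend {
    virtual ~PhysicsBackend() {}
    // Advance a toy 1-D system of n bodies by dt seconds.
    virtual void step(float* pos, float* vel, int n, float dt) = 0;
};

__global__ void eulerStep(float* pos, float* vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        vel[i] += -9.81f * dt;     // gravity
        pos[i] += vel[i] * dt;     // integrate position
    }
}

struct CudaBackend : PhysicsBackend {
    void step(float* pos, float* vel, int n, float dt)
    {
        size_t bytes = n * sizeof(float);
        float *dPos = 0, *dVel = 0;
        cudaMalloc(&dPos, bytes);
        cudaMalloc(&dVel, bytes);
        cudaMemcpy(dPos, pos, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dVel, vel, bytes, cudaMemcpyHostToDevice);

        eulerStep<<<(n + 255) / 256, 256>>>(dPos, dVel, n, dt);

        cudaMemcpy(pos, dPos, bytes, cudaMemcpyDeviceToHost);
        cudaMemcpy(vel, dVel, bytes, cudaMemcpyDeviceToHost);
        cudaFree(dPos);
        cudaFree(dVel);
    }
};

int main()
{
    std::vector<float> pos(1000, 100.0f), vel(1000, 0.0f);
    CudaBackend backend;    // could just as well be an OpenCL-backed implementation
    backend.step(&pos[0], &vel[0], (int)pos.size(), 1.0f / 60.0f);
    std::printf("pos[0] after one step: %f\n", pos[0]);
    return 0;
}
```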

Here is Intel's take on the future of proprietary standards like CUDA (and I think it can be directly translated to ATI Stream as well, since that's also proprietary):
Intel exec says NVIDIA's CUDA will be a "footnote" in history
http://www.engadget.com/2008/0...a-footnote-in-history/

Same goes with PhysX. It will only be a footnote in history IMHO.

Who would want PhysX supported only by Nvidia, if there will be Havok supported by all?

Open standards FTW!

Ps.

This isn't ATI vs. Nvidia. It's Intel's Havok vs. Nvidia's PhysX. ATI is only a supporter of Havok.

In a way, it might not even be that.

It might be GPU accelerated Havok physics for everyone vs. GPU accelerated PhysX physics only for Nvidia users.
 

Keysplayr

Elite Member
Originally posted by: nosfe
Well, I don't know how they're implemented, but logically (which doesn't always apply with stuff like this), wouldn't OpenCL run faster on Nvidia cards if it were natively supported and not run through a layer of CUDA? So let's say there's no penalty for doing this (obviously Nvidia knows better): why would any developer bother with CUDA if Nvidia cards also support a format that runs on more graphics cards? Would developing for CUDA be so much easier as to justify leaving out all the ATI cards that would support OpenCL in the future?

All speculative. We don't know yet. But what I do know is, CUDA is too prevalent and powerful to be ignored by devs. They'd be a bit foolish to do so. IMHO.

 

Keysplayr

Elite Member
Originally posted by: instantcoffee
Originally posted by: keysplayr2003
Some light reading for ya.
From your posts here, I don't think you actually understand what CUDA or the C programming language is, or what they are capable of.

Text

I'll post some more items that might help explain to you what CUDA is capable of, unless of course you decide to read up on it yourself, which would be cooler.
;)

I know what CUDA is and does. The fact that you conclude from my posts that I don't makes me think you don't know what it is and does.



Originally posted by: keysplayr2003
HavokFX was abandoned.
I'm sure you understand that this is a game of wills. AMD/ATI could easily implement CUDA and CUDA-based PhysX on their hardware. The developer tools alone for CUDA are pretty extensive and well established. But..... Nvidia is the arch-enemy of ATI. So what did we expect?

You might ask after reading this, "Well, why then doesn't Nvidia just adapt and use Havok?" They already can. But further than that, Nvidia is far ahead of ATI right now in terms of GPGPU-based computing, both in their physical architecture and their software libraries.
As far as Stream goes, we still don't know if ATI will follow through, or just let it sit and die.
A few press releases here and there don't mean Stream is actually going to be continually worked on with true dedication. Maybe ATI does not have the resources to pull it off. I don't know. Wait and see in this economy.

Havok FX was put on ice by Intel, but never abandoned as you can see now.

AMD/ATI would never implement CUDA and CUDA-based PhysX on their hardware. They use Brook+, and that would result in double-duty conversion and execution of the instructions. If Nvidia had ported the PhysX libraries from CUDA to OpenCL, ATI could have accessed them through ATI Stream and Nvidia through CUDA. In its current state, it's locked into proprietary CUDA.

Instead, ATI went with Havok, which traditionally is a non-proprietary middleware maker (in the sense that they are hardware-agnostic). If everything goes as it should, both Nvidia and ATI should be able to use it through OpenCL.

Here is Intel's take on the future of proprietary standards like CUDA (and I think it can be directly translated to ATI Stream as well, since that's also proprietary):
Intel exec says NVIDIA's CUDA will be a "footnote" in history
http://www.engadget.com/2008/0...a-footnote-in-history/

Same goes with PhysX. It will only be a footnote in history IMHO.

Who would want PhysX supported only by Nvidia, if there will be Havok supported by all?

Open standards FTW!

Ps.

This isn't ATI vs. Nvidia. It's Intel's Havok vs. Nvidia's PhysX. ATI is only a supporter of Havok.

In a way, it might not even be that.

It might be GPU accelerated Havok physics for everyone vs. GPU accelerated PhysX physics only for Nvidia users.

The only reason I mentioned that you may not know enough about CUDA is that you don't seem impressed with it at all, which doesn't make much sense. It IS very impressive. Even Intel knows this, and it really got them going. That's all. So please, if you took it personally, don't. Because it was not personal at all. :beer:

Whatever you and I "converse" about, it all comes down to which is best, faster, easier. It'll all play out as usual.
 

nosfe

Senior member
Originally posted by: keysplayr2003
All speculative. We don't know yet. But what I do know is, CUDA is too prevalent and powerful to be ignored by devs. They'd be a bit foolish to do so. IMHO.
It's not CUDA that's powerful, it's the GPUs. The thing is, if CUDA and OpenCL can both make an application run twice as fast, why would developers use CUDA? (Yes, I'm speculating here that Nvidia will want OpenCL to run as fast as possible on its cards.)
 

instantcoffee

Member
Originally posted by: keysplayr2003
The only reason I mentioned that you may not know enough about CUDA is that you don't seem impressed with it at all, which doesn't make much sense. It IS very impressive. Even Intel knows this, and it really got them going. That's all. So please, if you took it personally, don't. Because it was not personal at all. :beer:

Cheers!

No, I'm not that impressed with CUDA. Nor with ATI Stream, to be fair. They have hardware limitations, and that slows development for end users.

I'm more impressed with the development of OpenCL and its hardware independence.
You speak of PhysX as if it was developed in CUDA, but it was ported to CUDA. PhysX on CUDA again has hardware limitations. The PhysX libraries need to be ported to OpenCL to be optimal and act like hardware-independent middleware instead of Nvidia physics.

Whatever you and I "converse" about, it all comes down to which is best, faster, easier. It'll all play out as usual.

That's true. I'm a consumer, you are an Nvidia focus member. You are discussing ATI vs. Nvidia and promoting CUDA, while I am defending the open-standard alternative, Havok.

Let me ask you two questions:

Considering that Intel and AMD/ATI are supporting Havok (via CPU and GPU; even the upcoming K11 Bulldozer and Larrabee might have hardware support for the Havok API), and hopefully Nvidia through OpenCL, how do you think PhysX would stand out as an alternative then?

Don't forget that Havok FX worked on older GFX cards compared to PhysX, and there's a chance we might see hardware-accelerated Havok on consoles as well.

Intel and AMD/ATI have already closed the door on PhysX and proclaimed its death. Do you think they will drop Havok and opt for PhysX instead, or perhaps both?
 

Keysplayr

Elite Member
"No, I'm not that impressed with CUDA"

I can't imagine why. Almost everyone in the industry is.

As for your questions, you can ask the first one to all the devs who signed on for PhysX and see if they feel it won't stick around. Better to ask them than to ask me.

Second question: I already answered this, remember?

"it all comes down to which is best, faster, easier. It'll all play out as usual."

So there you go.

I would have to say however, that one standard across all hardware would be beneficial to all of us. Open CL or other.

A tidbit on Nvidia and how they feel about OpenCL.