AMD on Havok

chizow

Diamond Member
Jun 26, 2001
That was the original smoke and mirrors PR release from AMD. Here's a few better blurbs that show AMD has no solid plan in regards to GPU-accelerated physics, but they'd rather not support NV's solution as they don't want to advance the IP of their competitors:

FUD blurb

Sources close to AMD's Physics department have told us that AMD plans to introduce Havok on GPU only when this ends up being faster than physics on a CPU.
So basically they're saying they'll never support GPU-accelerated physics except in cases with extremely low/undemanding resolutions and games.

ExtremeTech Interview

So what is AMD/ATI's take on all this? I spoke with Senior PR manager Rob Keosheyan at AMD, and he had plenty to say about the situation. Open industry standards are extremely important to AMD as a company, and they feel that GP-GPU work should be no different. It's working hard not only on its own Stream SDK and Brook+, but also with the Khronos group on OpenCL, where it sees the real future.

If open standards are so important, why partner with Havok for physics work? That technology is far from open; it's owned by Intel, the other chief competitor of AMD/ATI. Of course, there are no truly open physics middleware solutions on the market with any traction, so that point might be kind of moot.

Keosheyan says, "We chose Havok for a couple of reasons. One, we feel Havok's technology is superior. Two, they have demonstrated that they'll be very open and collaborative with us, working together with us to provide great solutions. It really is a case of a company acting very independently from their parent company. Three, today on PCs physics almost always runs on the CPU, and we need to make sure that's an optimal solution first." Nvidia, he says, has not shown that they would be an open and truly collaborative partner when it comes to PhysX. The same goes for CUDA, for that matter.

Though he admits and agrees that they haven't called up Nvidia on the phone to talk about supporting PhysX and CUDA, he says there are lots of opportunities for the companies to interact in this industry and Nvidia hasn't exactly been very welcoming.

To sum up, Keosheyan assures us that he's very much aware that the GP-GPU market is moving fast, and he thinks that's great. AMD/ATI is moving fast, too. He knows that gamers want GPU physics and GP-GPU apps, but "we're devoted to doing it the right way, not just the fast way."

So it sounds like support for CUDA or PhysX on ATI graphics cards just isn't going to happen unless Nvidia picks up the phone first and offers an olive branch, or there is an overwhelming demand from ATI's customers.
It's quite amazing how much the AMD rep flip-flops in that interview. The level of fear-mongering and plain indifference is pretty astonishing. What is very clear, however, is that AMD/ATI would rather sit on their hands with regard to physics than pay a few pennies per GPU to add value to their cards for their customers. I didn't cut and paste the entire article; it's a good read if you're interested in physics and ATI's direction.
 

chizow

Diamond Member
Jun 26, 2001
According to NV, yes. Here's an excerpt that precedes the previous blurb:

ExtremeTech Interview

Many have thought that CUDA is proprietary, and will only ever work on Nvidia's GPUs. This is not entirely true.

Though it has been submitted to no outside standards body, it is in fact completely free to download the specs and write CUDA apps, and even completely free to write a CUDA driver to allow your company's hardware (CPU, GPU, whatever) to run apps written in the CUDA environment.

Nvidia "owns" and controls the future of CUDA, so it's not open in the "open source" definition, but it's certainly free. Nvidia tells us it would be thrilled for ATI to develop a CUDA driver for their GPUs.

But what about PhysX? Nvidia claims they would be happy for ATI to adopt PhysX support on Radeons. To do so would require ATI to build a CUDA driver, with the benefit that of course other CUDA apps would run on Radeons as well. ATI would also be required to license PhysX in order to hardware accelerate it, of course, but Nvidia maintains that the licensing terms are extremely reasonable; it would work out to less than pennies per GPU shipped.

I spoke with Roy Taylor, Nvidia's VP of Content Business Development, and he says his phone hasn't even rung to discuss the issue. "If Richard Huddy wants to call me up, that's a call I'd love to take," he said.
 

thilanliyan

Lifer
Jun 21, 2005
Hmm... they should definitely do it then. It would give extra value, as you've stated.
 

nitromullet

Diamond Member
Jan 7, 2004
From Chizow's posts, it sounds to me like AMD and NV both sort of need each other on this one, and they would both benefit from being able to run PhysX/CUDA on NV and ATI GPUs, but neither wants to pick up the phone...
 

apoppin

Lifer
Mar 9, 2000
AMD had evidently been working on their own proprietary version of physics to run on their GPUs since '05. That is what CrossFireX was supposed to have brought: more "two-way" communication between CPU and GPU for physics. However, it appears that it may not have worked out so well, so they went to Intel's company - Havok - to develop physics for both CPU and GPU.

It is very clear AMD wants to do physics on their GPUs [and CPUs] using Havok and would rather die - go to Intel and beg - than use 'Nvidia Anything' for PhysX or CUDA. I think that is just their corporate philosophy, and now we are left with two *competing* solutions:

1. Intel and AMD supporting Havok for CPU and later GPU

and

2. Nvidia supporting PhysX



So the question remains ... who will win?

i say we wait and see .. it is impossible to predict right now, imo
- frankly i am looking forward to D/Ling the Nvidia pack and trying out Warmonger and the extra maps and demos. Then i want to see what AMD can do.

i think it will be at least a couple of years before either solution becomes a fixture in most games.
 

nitromullet

Diamond Member
Jan 7, 2004
Originally posted by: apoppin
AMD had evidently been working on their own proprietary version of physics to run on their GPUs since '05. That is what CrossFireX was supposed to have brought: more "two-way" communication between CPU and GPU for physics. However, it appears that it may not have worked out so well, so they went to Intel's company - Havok - to develop physics for both CPU and GPU.

It is very clear AMD wants to do physics on their GPUs [and CPUs] using Havok and would rather die - go to Intel and beg - than use 'Nvidia Anything' for PhysX or CUDA. I think that is just their corporate philosophy, and now we are left with two *competing* solutions:

1. Intel and AMD supporting Havok for CPU and later GPU

and

2. Nvidia supporting PhysX



So the question remains ... who will win?

i say we wait and see .. it is impossible to predict right now, imo
- frankly i am looking forward to D/Ling the Nvidia pack and trying out Warmonger and the extra maps and demos. Then i want to see what AMD can do.

i think it will be at least a couple of years before either solution becomes a fixture in most games.
Agreed, good assessment.

I think NV is a little ahead with PhysX right now though, in large part because of Ageia's efforts to get PhysX implemented in games. Both the UT3 PhysX maps and Warmonger predate NV's purchase of Ageia, IIRC.

After trying out the UT3 map packs for myself (@ 1920x1200), I'm of the opinion that PhysX on a powerful GPU like the GTX 280 is definitely ready for prime time. Warmonger, on the other hand, wasn't very impressive as a game itself, so it didn't do much for PhysX IMO.

Doesn't GRAW work with NV PhysX also?

edit: honestly, based on how game development works these days, where you have most studios developing games for the Xbox 360, PS3, and PC simultaneously, I don't think ATI, NV, Intel, or anyone on the PC side of things is going to decide who wins the physics wars. I think it's going to be a console company like Sony or Microsoft that nudges game devs in a certain direction by implementing a physics solution based on whatever API they choose in their next-gen console.
 

deerhunter716

Member
Jul 17, 2007
Originally posted by: nitromullet
edit: honestly, based on how game development works these days, where you have most studios developing games for the Xbox 360, PS3, and PC simultaneously, I don't think ATI, NV, Intel, or anyone on the PC side of things is going to decide who wins the physics wars. I think it's going to be a console company like Sony or Microsoft that nudges game devs in a certain direction by implementing a physics solution based on whatever API they choose in their next-gen console.



If that does end up being the case - good news that Microsoft is behind Havok then, at least for now :)
 

taltamir

Lifer
Mar 21, 2004
last i heard, NGOHQ got a basic port of PhysX onto ATI cards, and is now receiving aid from engineers of both AMD and nVidia in order to fully port PhysX to AMD cards - and not on CUDA, AFAIK.

Also, didn't nVidia recently make PhysX an open format? So that would mean it is free as well.

nVidia seems to have gone from trying to keep those as exclusives, to trying to get AMD to jump on the bandwagon in order to better fight intel.
 

chizow

Diamond Member
Jun 26, 2001
Originally posted by: nitromullet
From Chizow's posts, it sounds to me like AMD and NV both sort of need each other on this one and they would both benefit from being able to run PhysX/CUDA on NV and ATI GPUs, but neither wants to pick of the phone...
I don't think there's any doubt it would be in consumers' best interests to have 100% of discrete GPU makers behind a single standard to increase the adoption rate of GPU-accelerated PhysX, but at the same time I think NV has the market share and dollars to make it work regardless of whether AMD participates. Again, with a claimed 70 million GF 8 and 9 series parts and a dominant 2:1 market share in discrete GPUs, NV already has a greater installed user base relative to other popular check-box features like EAX and DX10. AMD clearly has more to lose here if they choose to do nothing, as they're doing now.

Originally posted by: apoppin
It is very clear AMD wants to do physics on their GPUs [and CPUs] using Havok and would rather die - go to Intel and beg - than use 'Nvidia Anything' for PhysX or CUDA. I think that is just their corporate philosophy, and now we are left with two *competing* solutions:

1. Intel and AMD supporting Havok for CPU and later GPU

and

2. Nvidia supporting PhysX



So the question remains ... who will win?

i say we wait and see .. it is impossible to predict right now, imo
- frankly i am looking forward to D/Ling the Nvidia pack and trying out Warmonger and the extra maps and demos. Then i want to see what AMD can do.

i think it will be at least a couple of years before either solution becomes a fixture in most games.
Well, I think it's a bit more complicated than just a 1. and 2. scenario. It's obvious each company is pushing its own agenda and downplaying the other's solutions to leverage or buy time for its own position. This is how business is done; nothing wrong with that.

1. Intel - Letting AMD ride its coattails to hold off PhysX until they can accelerate Havok on Larrabee. They don't offer anything faster or more feature-rich because CPUs are just too slow at FP ops. Once Larrabee is released in 2009/2010, you'll most likely see GPU acceleration with Havok, which may open the door for AMD GPU acceleration as well.

2. AMD - They don't own any proprietary physics API, so they're hoping to buy time until DX11 releases and offers it for free. In the meantime they'll blow sunshine and make no effort to add value to their parts if that means supporting their competitor's IP.

3. Nvidia - They clearly have the best current solution, one that is scalable from GPU to PPU to CPU to console CPU. They have some current titles that show the benefits of GPU-accelerated PhysX, along with more titles in development. Intel, AMD, and any title supporting Havok don't directly hurt NV, as that simply means there will be no additional physics effects or acceleration for that title. Their hardware implementation is also excellent, as it allows mixed and matched pairs and is not chipset-dependent like traditional SLI.

Until Havok is accelerated and offers more eye candy than existing software/CPU Havok, the only "losers" are consumers, if devs decide not to implement new features like PhysX. In reality I think both Havok and PhysX will co-exist as they do now, with some games offering one or the other, and in that case the winners will be the ones with a PhysX-capable GPU.
 

chizow

Diamond Member
Jun 26, 2001
Originally posted by: nitromullet
edit: honestly, based on how game development works these days, where you have most studios developing games for the Xbox 360, PS3, and PC simultaneously, I don't think ATI, NV, Intel, or anyone on the PC side of things is going to decide who wins the physics wars. I think it's going to be a console company like Sony or Microsoft that nudges game devs in a certain direction by implementing a physics solution based on whatever API they choose in their next-gen console.
This may be true, but it's a wash as well when it comes to back-end solvers for consoles. Both PhysX and Havok have software APIs in place for consoles, with prominent titles that were developed on consoles first and then ported over. This is in addition to any PC titles that use software PhysX or Havok.

Hothardware PhysX Preview

The last slide is a really good illustration of PhysX's scalability and cross-platform support. As you can see, the bottom row shows the various back-end solvers for CPU, PPC (Xbox 360), Cell CPU (PS3), PPU, and CUDA/GPU. The key benefit here is that it seems like it wouldn't be too difficult to make a custom PC PhysX pack and simply point to a different back-end solver, similar to what we saw with UT3. As much as I'd love to see existing software PhysX games go back and do this, I doubt they will, but it's encouraging to have that option for future console ports.

The slide above it is also a good overview of the features offered by the three players. In a year or so, you're going to get an additional line added to the physics layer with DX11 physics support. This is around the same time Larrabee is supposed to debut for Intel.
 
