ATI Havok GPU physics apparently not as dead as we thought


evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
In my case, it pegs the first or second core at almost 99% usage, while the remaining 3 cores usually sit between 0 and 7%, spiking erratically like the trace on an earthquake graph. The game only has a single option to turn PhysX on or off; when it's off, some things won't appear on screen, like some flags or plastic covers. Not a big deal, but they do look good.
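
If anyone wants to check this on their own machine, here's a rough sketch of how you could log per-core usage (assuming Python with the third-party psutil package; RivaTuner or Task Manager will show you the same picture):

```python
# Rough sketch: log per-core CPU usage to spot a single pegged core.
# Assumes Python with the third-party psutil package installed.
import psutil

def log_per_core_usage(interval=1.0, samples=10):
    """Print per-core usage; a physics-bound single-threaded game shows
    one core near 100% while the others stay nearly idle."""
    for _ in range(samples):
        percents = psutil.cpu_percent(interval=interval, percpu=True)
        print(" | ".join(f"core{i}: {p:5.1f}%" for i, p in enumerate(percents)))

if __name__ == "__main__":
    log_per_core_usage()
```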
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
evolucion8:
Actually, there are some results on the web using the AGEIA card with ATi and nVidia cards, and they show it's even faster at calculating physics than using a 9600GT as a dedicated PhysX card. They can be found cheap on eBay, so I might buy an AGEIA card in the future if Havok doesn't take off.

Okay. But the slowdown you were initially describing was due to there being no AGEIA PPU or extra nVidia GPU to handle the physics; the physics were added to the one nVidia GPU that was already handling the graphics.

munky:
Out of curiosity, what's the CPU usage like in Mirror's Edge? In other words, given a decent quad-core CPU, does it serve any benefit for enhancing physics? Or does the game simply revert to low-detail physics in the absence of dedicated PhysX HW?

Are you asking me or evolucion8? Because I haven't the faintest.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: josh6079

Okay. But the slowdown you were initially describing was due to there being no AGEIA PPU or extra nVidia GPU to handle the physics; the physics were added to the one nVidia GPU that was already handling the graphics.

Yeah, it's just strange because it's not related to graphics workload; some shattered windows aren't enough for that, especially for a video card like mine, and the slowdown stays there until I restart the game or disable PhysX. When it's on, lots of effects in many areas run great with no slowdowns, until I hit a scene with lots of water and windows starting to shatter; then it becomes unplayable. But when it's off, the game runs incredibly well: 8xFSAA, everything maxed out, and not a single slowdown. This is getting way too offtopic, but one thing I can say is that the effects are great. Then again, comparable effects can be seen in Half-Life 2: Episode One when the bridge collapses, and that looks outstanding, and that's Havok. I can only imagine how it would look with GPU acceleration.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Out of curiosity, what's the CPU usage like in Mirror's Edge? In other words, given a decent quad-core CPU, does it serve any benefit for enhancing physics? Or does the game simply revert to low-detail physics in the absence of dedicated PhysX HW?

CPUs showing off their physics power :)

In the context of this thread, which is interactive graphics in games, CPUs are not a viable option for pixel shaders, although I'm aware of their use in CG render farms.

See those charts above? That is what you are calling real time? Notice the ATi boards are right there when the physics load is handled elsewhere; they could be neck and neck on their own if they supported the PhysX API themselves. Oh, and for the record: Mirror's Edge uses PhysX for its 'low end' physics model too; it can run extremely outdated physics just like Havok.

Seeing how the PhysX IP is owned by Ati's competitor, I'm not surprised in the least at their refusal to support it.

Havok is also owned by a competitor, one that is trying to sue them out of their core business.

Besides, Ati is not in the CPU business.

ATi is not a business. They are a division of AMD.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Ben, who cares? Mirror's Edge has nothing over that short gameplay video for the Project Offset game. That's the Havok I liked. Get this: word is Intel may call the game Nemesis. LOL.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: evolucion8
PhysX was originally from AGEIA before nVidia bought it, and while it's true that PhysX runs poorly on a CPU because it was never meant to, Havok runs much better and does more things.
Again, as I've asked, prove it. Show me examples of Havok-based games using the same cloth, water, soft body and particle effects that were first seen on the Ageia PPU, much later on Nvidia's GPU PhysX, and just this week on AMD parts through OpenCL Havok.

I've already given a clear example of a Havok title where the cloth effects demo'd at GDC would've been perfect: Assassin's Creed, a Havok game, in Masyaf with the flags or any of the character models. Instead of a fluid, completely dynamic robe on Altaïr, you have an essentially static texture. Is that what you mean when you claim "Havok runs much better and does more things"?
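
For context on what 'fluid, completely dynamic' cloth actually involves: the standard approach is a mass-spring grid integrated with Verlet steps, which is exactly the kind of embarrassingly parallel work GPU physics is good at. A toy sketch of the idea (my own illustration in Python; not code from Havok, PhysX, or any game):

```python
# Toy mass-spring cloth, the kind of simulation GPU physics accelerates.
# Purely illustrative; not taken from Havok, PhysX, or Assassin's Creed.
GRAVITY = -9.81
DT = 1.0 / 60.0
SPACING = 0.1

def make_cloth(w, h):
    """Grid of particles keyed by (column, row); each entry stores
    [current_position, previous_position] for Verlet integration."""
    return {(i, j): [(i * SPACING, j * SPACING), (i * SPACING, j * SPACING)]
            for i in range(w) for j in range(h)}

def verlet_step(cloth, pinned):
    # new = 2*current - previous + acceleration*dt^2 (gravity only here)
    for key, (cur, prev) in cloth.items():
        if key in pinned:
            continue
        (x, y), (px, py) = cur, prev
        cloth[key] = [(2 * x - px, 2 * y - py + GRAVITY * DT * DT), cur]

def satisfy_constraints(cloth, pinned, iterations=4):
    # Repeatedly pull neighboring particles back toward their rest distance.
    for _ in range(iterations):
        for (i, j) in cloth:
            for n in ((i + 1, j), (i, j + 1)):
                if n not in cloth:
                    continue
                (x, y), prev = cloth[(i, j)]
                (nx, ny), nprev = cloth[n]
                dx, dy = nx - x, ny - y
                dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
                corr = (dist - SPACING) / dist
                # Split the correction, keeping pinned particles fixed.
                wa = 0.0 if (i, j) in pinned else (1.0 if n in pinned else 0.5)
                wb = 0.0 if n in pinned else (1.0 if (i, j) in pinned else 0.5)
                cloth[(i, j)] = [(x + dx * corr * wa, y + dy * corr * wa), prev]
                cloth[n] = [(nx - dx * corr * wb, ny - dy * corr * wb), nprev]

cloth = make_cloth(8, 8)
pinned = {(i, 7) for i in range(8)}  # pin the top row, like a hanging flag
for frame in range(60):              # simulate one second at 60 FPS
    verlet_step(cloth, pinned)
    satisfy_constraints(cloth, pinned)
print("bottom-left corner after 1s:", cloth[(0, 0)][0])
```

Every particle gets integrated and every constraint gets relaxed several times per frame, so the cost scales with cloth resolution; that's why a detailed robe is cheap on a GPU and painful on one CPU core.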

I do OWN Mirror's Edge, and monitoring with RivaTuner's software shows it only uses one CPU core while the others remain idle. The UE3.0 engine is multi-threaded, but it's up to the developers to choose to implement that in the game. Did BioShock support anti-aliasing in DX10? Did Gears of War support HDR? All those features are supported in UE3.0 and weren't implemented in those games because the developers didn't want to, but they are implemented in Mirror's Edge.
Yep, I'm well aware it's up to the developers to implement certain features in a game; my comments are based on the fact that I don't own a UE3.0 game that isn't at least dual-threaded and pegging 2 cores @ 3.0GHz+ (BioShock). Again, Mirror's Edge could be the exception, but considering it's UE3.0 and running either hardware or software PhysX, I'm skeptical it's only single-threaded.
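
Rather than us guessing, you could look at how much CPU time each thread of the game's process has actually accumulated. A rough sketch (again assuming Python with psutil; the process name here is just a placeholder):

```python
# Rough sketch: list a process's threads and how much CPU time each has
# racked up. psutil and the process name "MirrorsEdge.exe" are assumptions.
import psutil

def busy_threads(process_name, min_cpu_seconds=1.0):
    """Return (thread_id, user_time) for threads with real CPU time;
    a single busy thread suggests a single-threaded game."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            return [(t.id, t.user_time) for t in proc.threads()
                    if t.user_time >= min_cpu_seconds]
    return []

print(busy_threads("MirrorsEdge.exe"))
```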

In Mirror's Edge, when you are running inside a building and the cops shoot at the windows and break them, it runs at single-digit FPS. Even if you are not looking at the windows and you walk your character face-first into a wall, a position which usually skyrockets your FPS, the framerate stays like that until you restart the game. So what can you say about that? It doesn't happen to ATi users with an AGEIA card, or to nVidia card users.
If you're experiencing that with the PhysX effects toggle ON, with the additional debris, flags, plastic soft bodies etc., then that's perfectly understandable. You're running the PhysX path meant for hardware-accelerated parts, like a PhysX-capable Nvidia GPU or an Ageia PPU. This is why your FPS is dropping with an ATI card by itself... because your GPU cannot accelerate these effects, which leaves it up to the CPU, which is failing miserably. Again, this is a hardware limitation and nothing else.

As for the part about PhysX effects slowing your machine down even when you're not looking at them and stressing the GPU: you do realize those PhysX calculations need to be made whether you're looking directly at them or not, right? That's why your FPS is slowing to a crawl; it's not stress on the GPU, it's stress on your CPU causing the slowdown. Turn off advanced PhysX and your FPS should skyrocket to 50-60+ throughout; you just won't get all the advanced effects, only the same kind of effects you've seen over the last 4-5 years with software solutions.
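
To illustrate why looking away doesn't help: in a typical game loop the simulation steps every active body each frame, while rendering only draws what the camera can see. A generic sketch (my own illustration, not Mirror's Edge's actual code):

```python
# Generic game-loop sketch (illustrative only): the physics step touches
# every active body each frame, while the renderer culls to the camera's
# view. Looking away cuts rendering cost, not simulation cost.
from dataclasses import dataclass

@dataclass
class Body:
    height: float
    velocity: float
    visible: bool  # would be set by frustum culling each frame

def physics_step(bodies, dt):
    # Runs for ALL active bodies, visible or not.
    for b in bodies:
        b.velocity -= 9.81 * dt  # gravity
        b.height += b.velocity * dt

def render(bodies):
    # Only visible bodies cost GPU time.
    return [b for b in bodies if b.visible]

bodies = [Body(10.0, 0.0, visible=(i % 2 == 0)) for i in range(10_000)]
for frame in range(3):
    physics_step(bodies, dt=1 / 60)  # CPU cost independent of the camera
    drawn = render(bodies)           # GPU cost depends on visibility
    print(f"frame {frame}: simulated {len(bodies)} bodies, drew {len(drawn)}")
```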
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: BenSkywalker
Out of curiosity, what's the CPU usage like in Mirror's Edge? In other words, given a decent quad-core CPU, does it serve any benefit for enhancing physics? Or does the game simply revert to low-detail physics in the absence of dedicated PhysX HW?

CPUs showing off their physics power :)

In the context of this thread, which is interactive graphics in games, CPUs are not a viable option for pixel shaders, although I'm aware of their use in CG render farms.

See those charts above? That is what you are calling real time? Notice the ATi boards are right there when the physics load is handled elsewhere; they could be neck and neck on their own if they supported the PhysX API themselves. Oh, and for the record: Mirror's Edge uses PhysX for its 'low end' physics model too; it can run extremely outdated physics just like Havok.
Ati could also be right up there with their own implementation of GPU-accelerated physics. The fact that they didn't adopt Nvidia's proprietary standard will encourage the adoption of open physics formats, unlike the way PhysX is right now. And Mirror's Edge is hardly the blockbuster title Ati should be worried about missing out on.


Havok is also owned by a competitor, one that is trying to sue them out of their core business.
I wasn't aware Intel had a GPU-physics implementation and is trying to sue AMD/Ati out of the GPU business... because that's about the only way that statement can have any relevance to this thread. After all, CPUs are way too slow for physics :)


ATi is not a business. They are a division of AMD.

Which means AMD is in the GPU business as well, and has a vested interest in supporting GPU physics... more so than Intel with their integrated crap.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Which means AMD is in the GPU business as well, and has a vested interest in supporting GPU physics... more so than Intel with their integrated crap.

That is a very good summation of everything I've been getting at in this thread. You are absolutely right in this statement, so what does AMD do? Hops on board with the one company that has a vested interest in GPU physics failing, and not the one that has a lot to gain by it being the dominant format.

The fact that they didn't adopt Nvidia's proprietary standard will encourage the adoption of open physics formats

And what GPU-accelerated open physics model is out right now? Oh yeah, none.

And Mirror's Edge is hardly the blockbuster title Ati should be worried about missing out on.

It is a shipping title, sold over a million units across all platforms, and runs extremely well on current hardware. What state was the alternative in? We may see the OpenCL platform finalized and shipping in a few months..... maybe.

I wasn't aware Intel had a GPU-physics implementation and is trying to sue AMD/Ati out of the GPU business... because that's about the only way that statement can have any relevance to this thread.

Given that we have had an AMD employee pointing out that Havok would benefit both sides of their business, it seems rather relevant to the topic at hand.

Still waiting for Nemesis to reply; I get a strange feeling that's the clip he is so impressed by, heh.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: munky

Ati could also be right up there with their own implementation of GPU-accelerated physics.
Except they never had one.

The fact that they didn't adopt Nvidia's proprietary standard will encourage the adoption of open physics formats, unlike the way PhysX is right now.
Havok is a proprietary standard owned by Intel.

And Mirror's Edge is hardly the blockbuster title Ati should be worried about missing out on.
As noted above it sold very well and was great PR for PhysX.


I wasn't aware Intel had a GPU-physics implementation and is trying to sue AMD/Ati out of the GPU business... because that's about the only way that statement can have any relevance to this thread. After all, CPUs are way too slow for physics :)
They own Havok, and if they pull AMD's x86 license then ATI will go out of business along with their parent company in a matter of months, if not weeks.


Which means AMD is in the GPU business as well, and has a vested interest in supporting GPU physics... more so than Intel with their integrated crap.
Intel bought Havok because of Larrabee, which is supposed to compete directly with ATI and NVIDIA high-end GPUs.

I will bet that Intel does not use OpenCL for Havok.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Wreckage
Except they never had one.
They don't have one yet. And given that the game runs slower when the same GPU is doing physics and graphics, it's not a mature enough technology for me to be concerned either way.


Havok is a proprietary standard owned by Intel.
Which runs on exactly 0 Intel GPUs right now.


As noted above it sold very well and was great PR for PhysX.
It's merely a drop in the ocean in the grand scheme of GPU-physics and its future. Also, whatever chunk of those sales went to consoles doesn't apply to Nvidia's GPU-physics business strategy, because those were already supported without Nvidia's involvement.


They own Havok and if they pull AMDs x86 license then ATI will go out of business along with their parent company in a matter of months if not weeks.
Merely speculation on your part.


Intel bought Havok because of Larrabee, which is supposed to compete directly with ATI and NVIDIA high-end GPUs.

I will bet that Intel does not use OpenCL for Havok.
And NV bought PhysX because of CUDA-enabled GPUs. The difference is Larrabee is still in development and you don't know whether it will be successful, whereas NV already has a broad market share of discrete GPUs and is leveraging PhysX to further promote its HW sales.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: munky

They don't have one yet.
Do you have news that they are working on one?

And given that the game runs slower when the same GPU is doing physics and graphics, it's not a mature enough technology for me to be concerned either way.
So does AA.


Which runs on exactly 0 Intel GPUs right now.
Which is fine because there are 0 games using GPU physics with Havok.

Merely speculation on your part.
Not really....
http://techreport.com/discussions.x/16585


And NV bought PhysX because of CUDA-enabled GPUs. The difference is Larrabee is still in development and you don't know whether it will be successful, whereas NV already has a broad market share of discrete GPUs and is leveraging PhysX to further promote its HW sales.

Well here we can agree.

Actually I am a fan of all physics implementations. PhysX, Havok or the many in-house solutions that developers are using. Whatever makes the game better, I'm for it.
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
Originally posted by: munky
They don't have one yet. And given that the game runs slower when the same GPU is doing physics and graphics, it's not a mature enough technology for me to be concerned either way.

I think the point others are making is that if they (ATi) had done what you say they could do (implement their own GPU-accelerated physics) sooner, it might have been more mature by now.

It's merely a drop in the ocean in the grand scheme of GPU-physics and its future.

I remember when Oblivion was the first big game that used ATi's HDR+AA capabilities - and even that wasn't officially supported at first.

But that doesn't mean the feature wasn't important, nor that the first game to implement it was the best.

What is to some a drop in the ocean is to others a spark that starts the fire.
 

Zstream

Diamond Member
Oct 24, 2005
3,396
277
136
Originally posted by: Wreckage
Originally posted by: evolucion8
but while it's true that PhysX runs poorly on a CPU because it was never meant to, Havok runs much better and does more things.

Please stop posting this FUD. This is blatantly false information.

Once more, the PhysX in Mirror's Edge was NOT MEANT to run on the CPU. There are many games out there (Gears of War, for one) that run CPU PhysX.

Now unless you have a smidgen of proof about Havok running better or having more features in regards to physics..... stop misleading people.

Considering Havok can run on the PS2, Xbox, 360, PS3, GC, and PC, it is easy to prove it runs better, if not much more optimized.
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
Originally posted by: evolucion8
...but while it's true that PhysX runs poorly on a CPU because it was never meant to...

To be fair, you're right. PhysX was never "meant" to run on a CPU.

pcgameshardware: Link

Ageia once developed the Novodex physics engine and the matching PPU (Physics Processing Unit). The latter was supposed to revolutionize future games with unprecedented physics effects and to become the third main component in gaming PCs besides the graphics card and the processor. While the Novodex engine is still used today, the PPU wasn't able to establish itself due to poor software support - and finally Ageia was bought by Nvidia. Novodex was renamed to PhysX, and thanks to the port to Nvidia's CUDA it can run not just on the processor but also on modern GeForce GPUs.

But what PhysX is - Novodex renamed - was "meant" to run on a CPU, and still does on a variety of hardware (e.g., certain consoles).

That said, I do agree that it's probable Havok is, currently, the choice of developers. Once they get their GPU-accelerated physics out the door, this will only encourage developers more. Unless, of course, nVidia steps up its entire PhysX package - which they're already doing.

Competition. :beer:

ATi is under 21. :frown:
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: Zstream

Considering Havok can run on the PS2, Xbox, 360, PS3, GC, and PC, it is easy to prove it runs better, if not much more optimized.

PhysX can run on all of those as well....it even runs on the iPhone.

Not sure what your point is. :confused:
 

novasatori

Diamond Member
Feb 27, 2003
3,851
1
0
Some of the posts in this thread are staggering. Quite a few posters need to read up a bit before they attempt to discuss topics they clearly know nothing about, or are simply blinded by fanboyism.

As an owner of an NV card, I was pleased to see Havok releasing GPU-computed physics, and I look forward to (and hope for) NV releasing OpenCL/Havok GPU support.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Wreckage
Do you have news that they are working on one?
Isn't that what this thread is all about? Ati working to support Havok on their GPUs?

So does AA.
And that's why some people choose to play without AA in return for higher FPS.

Which is fine because there are 0 games using GPU physics with Havok.
AMD can change that by porting Havok to their GPUs. However, AMD cannot change the fact that their primary competitor in the GPU market owns PhysX and implemented it on their HW, which makes GPU-PhysX support a lot less appealing to AMD from a business perspective.

So what else is new? Intel and Nvidia are also suing each other, and nobody knows how all this will play out in the end.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: josh6079
Originally posted by: munky
They don't have one yet. And given that the game runs slower when the same GPU is doing physics and graphics, it's not a mature enough technology for me to be concerned either way.

I think the point others are making is that if they (ATi) had done what you say they could do (implement their own GPU-accelerated physics) sooner, it might have been more mature by now.

It's merely a drop in the ocean in the grand scheme of GPU-physics and its future.

I remember when Oblivion was the first big game that used ATi's HDR+AA capabilities - and even that wasn't officially supported at first.

But that doesn't mean the feature wasn't important, nor that the first game to implement it was the best.

What is to some a drop in the ocean is to others a spark that starts the fire.

From a technology perspective, it's a step in the right direction. But unlike PhysX, HDR+AA was not IP owned and controlled by Ati, which made Nvidia's decision to support it a lot simpler.