Thoughts On GPU Physics


Kuzi

Senior member
Sep 16, 2007
Remember Nemesis, Havok is owned by Intel but they are independent, meaning they can work with any company that wants to work with them (even Nvidia). Also, for example, if Havok gets optimized to run on ATI GPUs, that doesn't mean it will lose x86 support. A game can have certain physics features running concurrently on the CPU and GPU for an even greater experience.

But I agree, Intel will most definitely optimize Havok to run physics on Larrabee. Not only that, but I'd say Nehalem by itself can run physics very well, if we take into account how powerful it will be: 20-50% faster than Core 2 and able to run 8 threads.

As for AMD, they need to get Havok physics optimized not only for their CPUs (they are working on that already), but also running on their GPUs. Phenom and the upcoming Shanghai/Deneb will not be anywhere near the performance of Nehalem, and they will only have the ability to run 4 threads, maybe 6 threads next year.

Havok already runs on x86 CPUs, whether it's Intel or AMD (but I'm sure more optimization can/will be done). But GPUs can run certain types of physics work orders of magnitude faster than any CPU can. That's why AMD/ATI needs to get those particular loads running on their GPUs, especially now that Nvidia seems to be working on doing just that with PhysX.

Once ATI gets Havok physics drivers out for their GPUs, they will have a huge advantage, because it will mean that any game released with Havok support will get accelerated on ATI's new GPUs. And over 100 developers already use Havok.

Larrabee may end up being very fast for physics, but let's wait first to see how it performs with graphics :)
 

aka1nas

Diamond Member
Aug 30, 2001
Originally posted by: Sylvanas


Wreckem:
Unreal 3, and all future games based off of the Unreal Engine, will be PhysX enabled.

Unreal Engine 3 does not guarantee PhysX. The engine has the potential, but it is not supported on every title. Mass Effect does not have PhysX; Gears of War and Bioshock are the same. In fact, UT3 is the only UE3 game for the PC that supports hardware-accelerated physics; with the others it is up to the developer to implement it.

Mass Effect and Gears of War both use PhysX, though they don't currently have hardware accel support.
 

Kuzi

Senior member
Sep 16, 2007
Forgot to mention that since Nvidia bought Ageia, they have access to the PhysX hardware and know how it works. If I were in their place, I'd integrate physics hardware similar to the PhysX chip inside the GPU itself. This way they make sure games with intensive physics perform very well, better than they would running on any CPU or on general-purpose shaders.

And game developers would not worry about supporting PhysX, because every NV video card sold would have PhysX support in hardware, meaning they know it will run flawlessly (no slowdowns, etc.).
 

Nemesis 1

Lifer
Dec 30, 2006
Well, this is a very tough call.

The first thing I have noticed is this.

1) AMD/ATI are going multicore. I believe AMD will scale 100% with R800, and Intel's Larrabee will scale 100% right out of the gate. So Intel and ATI/AMD seem to be in agreement that multicore is the future (2009). I believe that Larrabee will do fine as a GPU. It won't be equal to ATI/NV out of the gate, but very close. Added cards will scale 100%, so yeah, Intel will be good, plus they can run x86 programs on the GPU as well.

2) Intel has a compiler that converts x86 instructions to EPIC. AMD needs this badly. I believe if Intel is smart they will make this compiler available to AMD as part of the settlement. Intel knows that they need a healthy AMD. AMD knows that if they can't compete with CUDA and Larrabee they are washed up; even if Intel hands them 4 billion plus dollars, AMD won't survive. Let's see what Intel does. AMD needs this compiler worse than anyone can imagine.

3) Intel also made it pretty clear to game developers that Intel is serious about PC gaming. This became clear to all when Intel bought its first gaming company, and I did say their first gaming company. Intel consoles are not in development, but look for one around the release of Xbox 3 / PS4. So Intel sent out a loud and clear message to all: we're entering the game market and we're not playing around. Intel is worth what, 150 billion? That's more than ATI/AMD, NV, and VIA all combined. So how do you think this will play out, using your logic? For me it's a no-brainer: Intel is going to shake things up big time. AMD is in the game providing Intel lets them play.

 

Nemesis 1

Lifer
Dec 30, 2006
Well, like you, I know little about Larrabee. But the fact that Intel's XT chipset will use QPI at 6.4 GT/s to link to Larrabee says a lot. The fall IDF will feed us just enough info on Larrabee that we should be able to make reasonable assumptions, and at the spring IDF '09 we should see a working card. Then you'll see MS release DX11, which by the way makes DX10.1 mandatory. Then there is the talk that MS is using Intel's API for ray tracing in DirectX 11. Then there is MS physics for DX11. We know MS will go with software physics. Will MS create its own API or use an already existing API for software physics? I smell Havok API physics in DX11. As already stated, Havok has many titles already out. Havok operates independently of Intel, but keep in mind it's still an Intel company.

I actually question whether NV will support DX11. If you want links we can do that.
 

bunnyfubbles

Lifer
Sep 3, 2001
Well, the nature of ray tracing means just about any multi-chip/card platform that supports such rendering will scale 100% (or at least very close to it).

There was a thread not too long ago discussing whether or not we thought multi-GPU was going to be the 'way of the future' or whatever. I said yes, citing ray tracing as the reason we'll see it expand in use instead of staying niche where it is now.
 

taltamir

Lifer
Mar 21, 2004
how can they do ray tracing without an API that supports it and games written for it?
 

DerekWilson

Platinum Member
Feb 10, 2003
Originally posted by: SunnyD
Originally posted by: aka1nas
Originally posted by: Kuzi
StarCraft II uses Havok physics, so ATI video cards may have an advantage there, since Nvidia doesn't support Havok.

ATI cards don't support Havok either. :p

Actually they do. They just haven't publicly released a driver for it, but they demoed it a year or so ago.

I know this is from way up in the thread, and I'm sorry if this has been addressed already...

In fact, the demos AMD showed off in '06 and '07 are completely irrelevant.

Those demos were based on Havok FX, which is now a dead product. From what we understand, all the work that was done between AMD (and NVIDIA) and Havok was tossed out the window by Intel when they acquired Havok.

It's tough for even us to read between some of the lines, as obviously Intel, AMD, NVIDIA, what used to be AGEIA, and Havok aren't going to give us the full story when this is such a touchy issue and there are deals in place everywhere...

But it was pretty well stated that Havok FX is not a thing anymore, and that any new developments in GPU physics by Havok will be in a different direction.
 

hooflung

Golden Member
Dec 31, 2004
Nemesis, I think if MS implements a physics API from any one competitor in DX11 then they are going to get sued by the other. What will probably happen is basically what is already there for render paths: DX11 will want x, y, z features, and it will be up to the driver to accomplish each feature, whether it internally calculates the physics as Havok or AGEIA or via a vanilla DX11 runtime.

Intel is under scrutiny from the FTC and EU. Microsoft has already been hit up. Nvidia has had a few 'things' said about its Vista 'crashing' drivers by MS employees. I don't think DX11 will be made in such a way that it might as well be legal chum in the water.
 

BenSkywalker

Diamond Member
Oct 9, 1999
Nemesis, I think if MS implements a physics API from any one competitor in DX11 then they are going to get sued by the other.

Under what law? This is so profoundly absurd it is hard to imagine how it could play out. MS is not a competitor with ANY of the parties involved, nor do they have a vested financial interest in seeing any one format come out on top. There is no basis to sue any of the companies no matter what happens in terms of physics support, or lack of it, in DX11. Supporting a particular extension inside of an API isn't going to get anyone in trouble unless it is MS violating someone else's IP (which they won't be for any of the players we are talking about).

That said, I would assume that Intel and AMD would want MS to leave support out at this point. Neither of them is in a position to compete with nVidia, and AMD seems staunchly opposed to even trying to go down that route at any point in the next several years. AMD seems bound and determined to make the ATI wing suffer for the benefit of their CPU division no matter the end cost to the company. What both Intel and AMD have to contend with currently is that neither is in a position to compete with nV on that front at the moment. As soon as Intel can compete, they will push the format that benefits them the most as hard as possible, and then MS will be dealing with three different directions competitors want to take the industry. MS will not roll a proprietary standard into D3D as a requirement. They have rolled formerly proprietary standards in before (DXTC is S3TC bit for bit), but once it was rolled into D3D it became an open standard under D3D (although it was still proprietary under OpenGL).

As far as ray tracing goes, it solves one rather small problem and introduces a ton of new ones. It is not the long-term solution; it is overall fairly inferior to rasterization, and while local illumination with it may have some benefits, there are much larger detriments to global illumination. At some point a viable method of radiosity would be the ideal; the problem currently is the amount of precomputed data needed to use the technique. Rasterization is going to be around for a long while yet. Reflective spheres are nice and all, but that is about all ray tracing has going for it at the moment.
 

dunno99

Member
Jul 15, 2005
I think we're all missing something very fundamental and important. It is the reason why Havok physics (the kind that affects in-game state) will not move onto the GPU with the current generation of hardware (or even the next, and a few after that).

And what is that? It's floating point accuracy. AMD and nVidia both use non-IEEE-754 compliant hardware to do their floating point calculations (and I'm not just talking about NaN or INF). The ulp range specified by DX10 is 1.0 ulp for floating point multiplies and divides. This basically means that the two companies' floating point hardware implementations can differ drastically, and computed results will drift over time. This means either a game has to be developed for one GPU architecture specifically (not even for one company... for one architecture), state will have to be synchronized every frame (killing performance), or super expensive branch checks will have to be done on specific floating point operations. Any of the above 3 scenarios results in either crappy performance or market fragmentation: nothing that a PC game company would be stupid enough to choose.
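A quick toy illustration of the kind of drift I mean (this is just numpy float32 on an ordinary CPU standing in for two GPU architectures, not real GPU code; the numbers are made up and only the ordering effect matters):

import numpy as np

# Assumption for illustration: two "architectures" accumulate the same
# per-body forces, but in a different order. Both do legal float32
# arithmetic; they still disagree.
forces = np.random.RandomState(0).rand(10000).astype(np.float32) - np.float32(0.5)

total_a = np.float32(0.0)
for f in forces:              # "architecture A": sums in submission order
    total_a += f

total_b = np.float32(0.0)
for f in forces[::-1]:        # "architecture B": same forces, reversed order
    total_b += f

print(total_a, total_b, total_a - total_b)   # typically disagree in the last bits

# Feed either total into the next frame's velocity, integrate for a few
# thousand frames, and the two "architectures" end up with visibly different
# object positions, even though neither one did anything "wrong".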

But in case I'm full of s***, please do point it out. =)

Edit: MSDN Link and edits to conform to the link.
 

dunno99

Member
Jul 15, 2005
Originally posted by: BenSkywalker
As far as ray tracing goes, it solves one rather small problem and introduces a ton of new ones. It is not the long-term solution; it is overall fairly inferior to rasterization, and while local illumination with it may have some benefits, there are much larger detriments to global illumination. At some point a viable method of radiosity would be the ideal; the problem currently is the amount of precomputed data needed to use the technique. Rasterization is going to be around for a long while yet. Reflective spheres are nice and all, but that is about all ray tracing has going for it at the moment.

Radiosity will not be the ideal. Since radiosity is only good for pure diffuse (inter-)reflectance, it has the inverted problem of ray tracing (where ray tracing solves specular reflections). Almost all (if not all) known materials in the world exhibit both diffuse and specular reflectance with respect to the standard BRDF model (obviously, the spectral reflectance lobe isn't just some simple combination of the two).
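For anyone who hasn't stared at BRDFs, here is a minimal sketch of the "both lobes" point: a plain Lambert diffuse term plus a Blinn-Phong specular term as a crude stand-in for a real measured BRDF. Every name and constant is just illustrative.

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def shade(n, l, v, albedo, spec_color, shininess):
    # Crude Lambert + Blinn-Phong stand-in: the diffuse lobe is what radiosity
    # handles well, the specular lobe is what Whitted ray tracing handles well,
    # and a believable material needs some of both.
    n, l, v = normalize(n), normalize(l), normalize(v)
    h = normalize(l + v)                                   # half vector
    diffuse = albedo * max(float(np.dot(n, l)), 0.0)
    specular = spec_color * max(float(np.dot(n, h)), 0.0) ** shininess
    return diffuse + specular

# e.g. a mostly-diffuse red plastic, lit from above, viewed head-on:
color = shade(n=np.array([0.0, 1.0, 0.0]), l=np.array([0.3, 1.0, 0.2]),
              v=np.array([0.0, 1.0, 0.0]), albedo=np.array([0.8, 0.1, 0.1]),
              spec_color=np.array([0.3, 0.3, 0.3]), shininess=32)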

The only feasible way in the next 100 years to get good global illumination is probably some simpler form of photon mapping (which, if you think about it, is basically just radiosity + ray tracing), with some sort of fast eviction of invalid cached photons + photon recasting. The rest is just a problem of standard Whitted ray tracing (and photon mapping isn't even the bleeding edge of photo-realistic renderings).

However, you're totally correct that rasterization has a long way to go. I'm going to guess that even by 2015 (to pick a nice round number), the majority of a scene is still going to be rendered via rasterization.

Edit: clarification
 

bunnyfubbles

Lifer
Sep 3, 2001
Originally posted by: taltamir
how can they do ray tracing without an API that supports it and games written for it?

That's the catch. It seems clear that it's the future, but we need to make a transition somewhere, just like we're only now making the transition to 64-bit computing.

Ray tracing technology has been in development for a long time; the problem has been that it requires far too much hardware muscle to be a realistic option... but the fact that it can scale so well makes it the perfect choice for the way hardware has been evolving into multiple cores and multiple GPGPUs/video cards with ever-increasing bandwidth and communication lanes between such parts.
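If it helps, the "scales almost perfectly" argument comes down to the fact that every pixel's primary ray can be traced without looking at any other pixel. A deliberately tiny sketch (one hard-coded sphere, orthographic rays, and Python's multiprocessing standing in for extra cores/cards; it's only meant to show the embarrassingly parallel structure, not to be a real tracer):

from multiprocessing import Pool
import math

SPHERE_RADIUS = 0.5   # a single hard-coded sphere centred in the image

def trace_pixel(job):
    # Each pixel is intersected and shaded independently of every other pixel,
    # which is why adding cores/cards divides the frame time almost perfectly.
    x, y, w, h = job
    u = (x + 0.5) / w - 0.5            # pixel centre mapped to [-0.5, 0.5)
    v = (y + 0.5) / h - 0.5
    d2 = u * u + v * v
    if d2 > SPHERE_RADIUS ** 2:
        return 0.0                     # the ray misses the sphere: background
    z = math.sqrt(SPHERE_RADIUS ** 2 - d2)
    return z / SPHERE_RADIUS           # simple facing-ratio shading

def render(w, h, workers=4):
    jobs = [(x, y, w, h) for y in range(h) for x in range(w)]
    with Pool(workers) as pool:        # N workers -> roughly N-times-faster frames
        return pool.map(trace_pixel, jobs)

if __name__ == "__main__":
    image = render(64, 64)             # 4,096 completely independent pixel tasks

(Rasterization parallelizes too, of course; the difference is that a tracer keeps this per-pixel structure even once you add reflections and shadows, since those are just more rays.)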
 

bunnyfubbles

Lifer
Sep 3, 2001
Originally posted by: dunno99
However, you're totally correct that rasterization has a long way to go. I'm going to guess that even by 2015 (to pick a nice round number), the majority of a scene is still going to be rendered via rasterization.

Seems like such a waste considering that starting in 2009-10 we'll have 8 core (16 thread) Nehalem and PCI-e 3.0 that should allow enough bandwidth to have ~4 x GPGPU cards without running into bottlenecks. While even that much processing power might not be enough for such hardware demanding applications, it certainly would be a start for developers who would be running such high end rigs that would even include multiple CPUs.

I could see the majority of games still being largely/mostly/completely rasterization but I certainly hope we'll have at least a few Crysis-like pioneering games that push/rape even the insane hardware we're bound to have in 2015...
 

taltamir

Lifer
Mar 21, 2004
Originally posted by: BenSkywalker
Nemesis, I think if MS implements a physics API from any one competitor in DX11 then they are going to get sued by the other.

Under what law? This is so profoundly absurd it is hard to imagine how it could play out. MS is not a competitor with ANY of the parties involved, nor do they have a vested financial interest in seeing any one format come out on top. There is no basis to sue any of the companies no matter what happens in terms of physics support, or lack of it, in DX11. Supporting a particular extension inside of an API isn't going to get anyone in trouble unless it is MS violating someone else's IP (which they won't be for any of the players we are talking about).

That said, I would assume that Intel and AMD would want MS to leave support out at this point. Neither of them is in a position to compete with nVidia, and AMD seems staunchly opposed to even trying to go down that route at any point in the next several years. AMD seems bound and determined to make the ATI wing suffer for the benefit of their CPU division no matter the end cost to the company. What both Intel and AMD have to contend with currently is that neither is in a position to compete with nV on that front at the moment. As soon as Intel can compete, they will push the format that benefits them the most as hard as possible, and then MS will be dealing with three different directions competitors want to take the industry. MS will not roll a proprietary standard into D3D as a requirement. They have rolled formerly proprietary standards in before (DXTC is S3TC bit for bit), but once it was rolled into D3D it became an open standard under D3D (although it was still proprietary under OpenGL).

As far as ray tracing goes, it solves one rather small problem and introduces a ton of new ones. It is not the long-term solution; it is overall fairly inferior to rasterization, and while local illumination with it may have some benefits, there are much larger detriments to global illumination. At some point a viable method of radiosity would be the ideal; the problem currently is the amount of precomputed data needed to use the technique. Rasterization is going to be around for a long while yet. Reflective spheres are nice and all, but that is about all ray tracing has going for it at the moment.

Under EU law.
The EU does not mess around with monopolies... MS has already caved and is making IE8 fully standards-compliant (no more of that "Windows-style rendering" of web pages). And supposedly the next version of Office will work with open document formats (supported by every office suite in the world except MS's).
 

Elcs

Diamond Member
Apr 27, 2002
My apologies for just skimming over this thread (at work) and not being "in the loop" but I have some basic questions.

Are GPUs being used for physics calculations? If so, have drivers been released to the public, and where can I get them?

Does this have a positive effect on Frames Per Second? If so, in what games/type of games? If the impact is negative, why is this so?

What GPUs are currently capable of doing this at the consumer level and actually have support, hacked or otherwise, for this feature?

Wow.... more than I expected. If my post can be answered by sifting through this thread from home, please advise.
 

BenSkywalker

Diamond Member
Oct 9, 1999
And what is that? It's floating point accuracy. AMD and nVidia both use non-IEEE-754 compliant hardware to do their floating point calculations

Actually, nV is almost fully compliant with 754r; they are actually closer in compliance than some of the SIMD units currently in use, and outside of x87 and SSE they are more compliant than even the exotic CPU solutions currently available.

This means either a game has to be developed for one GPU architecture specifically (not even for one company... for one architecture), state will have to be synchronized every frame (killing performance), or super expensive branch checks will have to be done on specific floating point operations. Any of the above 3 scenarios results in either crappy performance or market fragmentation: nothing that a PC game company would be stupid enough to choose.

I think you are giving too much credit to game developers' utilization of physics ;) For the next several years it will just be there to add more visual flair, and inaccuracies would be perfectly acceptable in that timeframe. When we start seeing the hardware that is currently at the highest end as the baseline, then we can start to see devs utilize physics as an actual in-game mechanic.

Radiosity will not be the ideal. Since radiosity is only good for pure diffuse (inter-)reflectance, it has the inverted problem of ray tracing (where ray tracing solves specular reflections). Almost all (if not all) known materials in the world exhibit both diffuse and specular reflectance with respect to the standard BRDF model (obviously, the spectral reflectance lobe isn't just some simple combination of the two).

I was thinking in the context of radiosity + shaders. Honestly, I think at the material surface level we are already doing pretty well; it is the diffuse factor, which isn't in place at all right now, that really makes things look fake (air in particular).

Ray tracing technology has been in development for a long time; the problem has been that it requires far too much hardware muscle to be a realistic option... but the fact that it can scale so well makes it the perfect choice for the way hardware has been evolving into multiple cores and multiple GPGPUs/video cards with ever-increasing bandwidth and communication lanes between such parts.

Actually that isn't true at all. Ray tracing has a whole bunch of problems that rasterizers don't: 'around the corner' rendering issues, inferior handling of basic and more advanced filtering, getting any sort of diffuse going on materials. Ray tracing has many drawbacks and is poorly suited to real-time graphics not only because of the hardware, but because of how the rendering technique itself works. I recall, when working with CGI, how much 'cheating' had to be done to get certain frames to render properly if I wanted them ray traced.

Seems like such a waste considering that starting in 2009-10 we'll have 8 core (16 thread) Nehalem and PCI-e 3.0 that should allow enough bandwidth to have ~4 x GPGPU cards without running into bottlenecks. While even that much processing power might not be enough for such hardware demanding applications, it certainly would be a start for developers who would be running such high end rigs that would even include multiple CPUs.

Check out the available demos of what real-time ray tracing can offer, ignore the reflections, and look at the rest of the scene. Ray tracing not only isn't ideal, it is flat-out inferior on several fronts. The rendering technique has some major problems that we have long since forgotten about because they were 'solved' long ago with rasterization. Ray tracing isn't better, it's just different.

The EU does not mess around with monopolies...

I am aware of the Stalinistic methods of the EU, but even they are not so far out of touch with capitalism as to sue a company for supporting someone else's standard with no direct benefit to them at all. We aren't talking about MS choosing their own here; we are talking about them backing nVidia, ATi, or Intel. Pretty much they must support one of them or come up with their own standard (which would then get them in trouble with the uber coms).
 

dunno99

Member
Jul 15, 2005
Originally posted by: BenSkywalker
Actually, nV is almost fully compliant with 754r; they are actually closer in compliance than some of the SIMD units currently in use, and outside of x87 and SSE they are more compliant than even the exotic CPU solutions currently available.

Yep. The fact that there isn't floating point divide on the Cell SPUs is one popular example.

Originally posted by: BenSkywalker
I think you are giving too much credit to game developers' utilization of physics ;) For the next several years it will just be there to add more visual flair, and inaccuracies would be perfectly acceptable in that timeframe. When we start seeing the hardware that is currently at the highest end as the baseline, then we can start to see devs utilize physics as an actual in-game mechanic.

Haha! Yes, that's the first problem... the devs have to implement it to begin with. Although honestly, a lot of devs are using Havok Physics (not FX), especially in the console world. But this is why I said GPU physics that affects game state won't be used in the near future because of these inaccuracies. At least, not in the PC world, since I don't think any dev would want to segregate their customers into "AMD GPU users can only play with AMD GPU users, and similarly for nVidia users". Of course, if everyone just falls back to the CPU, then there probably wasn't a point in putting it on the GPU in the first place.
 

taltamir

Lifer
Mar 21, 2004
maybe ms DX11 will solve all of this by making one physX layer that both companies will support... and further destroy gaming on linux and mac... (heh, as if those haven't been pulverized enough)
 

bunnyfubbles

Lifer
Sep 3, 2001
Originally posted by: BenSkywalker
Check out the available demos of what real-time ray tracing can offer, ignore the reflections, and look at the rest of the scene. Ray tracing not only isn't ideal, it is flat-out inferior on several fronts. The rendering technique has some major problems that we have long since forgotten about because they were 'solved' long ago with rasterization. Ray tracing isn't better, it's just different.
I have seen them and I do know what you're talking about, but I've basically been regurgitating what I've heard from those far more knowledgeable on the subject than I am, and you appear to be one of them, so I'm not going to push too hard. But don't you suppose that part of the reason they focus on showing off ray tracing's strengths and neglect (or I guess distract from, if it's actually weak there) the rest is that the hardware they had to work with was barely enough for even simple demos, which would really show off such faults if you're looking for them? My argument is basically that once we have all this awesome hardware at our disposal, such problems will be far more solvable... but maybe you know about technicalities that I don't that would make it impossible even with hardware capable of doing it.


Does this have a positive effect on Frames Per Second? If so, in what games/type of games? If the impact is negative, why is this so?
Adding physics to a game has a negative net effect on fps simply because it is an element that adds to the game's complexity; however, GPUs can be far more suitable for processing physics than CPUs. Unless the GPU is already completely taxed with rendering (super high resolutions / AA), it will provide a performance increase vs. having the CPU do it, but it would never be as good as having a dedicated part (such as a PPU, or perhaps a video card doing only physics calculations, essentially acting the part of a PPU) do the work.
 

taltamir

Lifer
Mar 21, 2004
well... except a GPU is better at doing physics than a PPU, so:
CPU + GPU < CPU + GPU + PPU < CPU + 2x GPU using CF/SLI

If you are going to buy a second part to do physics, it might as well be a second GPU. :)
 

Lorne

Senior member
Feb 5, 2001
I wonder if it would be possible to create, either standalone or built into the GPU drivers, a throttling control (a manual lever/slider) to let you select a preferred direction for physics, whether it's more CPU- or GPU-bound.
It would help put the workload onto the less bottlenecked device.
This would also help significantly in future driver development.
 

hooflung

Golden Member
Dec 31, 2004
Originally posted by: BenSkywalker
Nemesis, I think if MS implements a physics API from any one competitor in DX11 then they are going to get sued by the other.

Under what law? This is so profoundly absurd it is hard to imagine how it could play out. MS is not a competitor with ANY of the parties involved, nor do they have a vested financial interest in seeing any one format come out on top. There is no basis to sue any of the companies no matter what happens in terms of physics support, or lack of it, in DX11. Supporting a particular extension inside of an API isn't going to get anyone in trouble unless it is MS violating someone else's IP (which they won't be for any of the players we are talking about).

This is not the same marketplace that was around when DirectX was still in version 6. That is when the S3TC algorithms were added to DX. Not only that, but S3 wasn't in any financial position to cry foul, and there was little precedent to back them up on it.

The competition in texture compression wasn't that heated either. It was good for the fledgling 3D graphics market. Today, the market isn't looking for someone to force standards down developers' throats. You can license physics for free or you can pay for a license. Service and market share are what you are paying for, or not paying for.

The moment Microsoft gives the go-ahead to any one vendor in the physics market, there will be backlash from the other vendors, because their hardware sales depend on it. Nvidia is not S3. AMD is not S3. Intel is not S3. Microsoft has been beaten up by the EU, FTC, and South Korean trade commissions to the point where it would be in their best interest to define a generic physics runtime, or an interface to whatever subsystem you want as long as it plays nice with DX.

Take money out of Nvidia's, AMD's, or Intel's mouth and you might see some backlash, which won't be good for the consumer.