
What GPGPU applications are available to ATI users?

Status
Not open for further replies.

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Intel is already integrating GPUs into their CPUs. Although they are not that powerful today, a few more generations and they would no doubt be capable enough to handle Havok. With that being another selling point for their CPUs, wouldn't that be enough for them to consider porting it?

When Intel has a GPU capable of running Havok, Havok will come to the GPU. It is as simple as that. Right now their integrated crap can barely run Aero, much less physics calculations within an actual game.
 

zebrax2

Senior member
Nov 18, 2007
977
70
91
When Intel has a GPU capable of running Havok, Havok will come to the GPU. It is as simple as that. Right now their integrated crap can barely run Aero, much less physics calculations within an actual game.

But by then PhysX (and possibly Bullet) would already have dominated the GPU-accelerated physics market for quite a long time, and luring those developers to GPU Havok would be a pretty tough job. Bah! Too much speculation on my part; I guess I just need to sleep.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
But by then PhysX (and possibly Bullet) would already have dominated the GPU-accelerated physics market for quite a long time, and luring those developers to GPU Havok would be a pretty tough job. Bah! Too much speculation on my part; I guess I just need to sleep.

Intel doesn't care. They still believe that if they bring it to market, the market will adopt it. The funny thing is, it is possible the market could still adopt it, even if PhysX takes off. If 60% of the overall GPU market can suddenly run GPU physics, who do you think devs will flock to?
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
The irony here is that this future will never come (or will take a long time) when your second-biggest competitor won't push the technology and just keeps playing the victim game, since they have nothing to show.

Well, at least you understand that GPGPU has barely any traction - which means CUDA has barely any traction.

AMD has undoubtedly held back GPGPU development and physics acceleration on GPUs. It's better to have two people working on a problem, not just one.

Why do you think he got the job? It's not because he thinks PhysX is a failure. It's because AMD needed somebody with good vision and expertise in this area, and the Fusion project was a big opportunity for him to work in the field of heterogeneous computing. I think that kind of tells you the status of GPGPU development at AMD. They too know that heterogeneous computing is the future.

AMD bought ATi all those years ago. Why do you think they did that?
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Intel doesn't care. They still believe that if they bring it to market, the market will adopt it. The funny thing is, it is possible the market could still adopt it, even if PhysX takes off. If 60% of the overall GPU market can suddenly run GPU physics, who do you think devs will flock to?

I doubt that. Intel may have the largest market share overall... but if you look at the stats for a gamer community such as Steam, Intel barely plays a role compared to nVidia.
So I don't think Intel has any leverage here.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Need I remind you, you're talking about a company whose bread-and-butter is x86-licensed CPUs.

Thank you for making my point all the less subtle.

What is AMD's x86 net profit for the entirety of their existence? How about Intel's?
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Thank you for making my point all the less subtle.

What is AMD's x86 net profit for the entirety of their existence? How about Intel's?

AMD's net profit is about 0. But they're still here :)
But then, it's AMD's own fault that they're in this situation.
They decided to make x86 clones their core business, rather than developing their own technology and carving out a niche in the market. Ironically enough, AMD helped build the x86 monopoly that they now depend on. In the early 80s there was still room for various competing architectures in the PC market, such as the Z80, 6502, 68k and x86.

AMD has made two pretty unfortunate decisions recently, anyway.
Firstly, they cancelled their Geode line at pretty much the same moment that Intel introduced the Atom.
With Geode, AMD could have been a decent player in the embedded and netbook market. In fact, at work we built some embedded systems around Geodes; we now have to move to Atom before the supply dries up. That's not easy, because Atom is quite a hothead compared to Geode, and our systems have no cooling whatsoever (not even holes in the casing).
And AMD also sold their flash memory division, shortly before SSDs arrived.

But yes, in general... AMD doesn't come up with its own technology, so they are doomed to follow the leader. Apple threw them a lifeline with OpenCL, but so far AMD is messing that up as well.
 

faxon

Platinum Member
May 23, 2008
2,109
1
81
I am running Milkyway@home. http://milkyway.cs.rpi.edu/milkyway/

I think you can also run Collatz Conjecture on the ATI cards too.
Yeah, you can run both of those. If you start doing it, join the TeAm! http://milkyway.cs.rpi.edu/milkyway/team_display.php?teamid=77

Collatz might be tricky if you're setting it up now, just FYI. I just installed the 10.5 drivers, and I think they weren't behaving when I was running Collatz. The system hung about 4 times (completely unresponsive) and slowed to a crawl another half dozen times from what looked like a driver crash but wasn't; I started getting about two FPS on the desktop whenever I did anything with Aero or tried to watch any type of video. Never figured out what it was for sure, though; I was only running it while Milkyway wasn't sending work units the last couple of days.
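For anyone curious what the Collatz project mentioned above is actually crunching: a rough CPU-side sketch in Python of the underlying iteration. This is illustrative only - the real BOINC application runs an optimized GPU kernel over huge ranges, not this - and the function names are mine.

```python
def collatz_steps(n: int) -> int:
    """Number of iterations for n to reach 1 under the Collatz map:
    halve if even, otherwise compute 3n + 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# The distributed project searches enormous ranges for record-setting
# chains; here we just find the longest chain for starts below 1000.
longest = max(range(1, 1000), key=collatz_steps)
print(longest, collatz_steps(longest))  # 871 takes 178 steps
```

The workload is embarrassingly parallel - every starting value is independent - which is exactly why it maps so well onto GPUs.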
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
So if I release an OpenCL application today, you think it's acceptable that I point the end-users to download the Stream SDK (I don't think it's legal to bundle it in the installer myself, not that I would want to, considering the size), in order to get my application running?
I think that's ridiculous. They shouldn't need to know anything about it, it should just come with the display drivers as standard... nVidia has bundled OpenCL with their drivers since November.
Or you could just, you know, use some other GPGPU shader language besides OpenCL. It might be more complicated, but the alternatives are there if the developer needs them.
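As a sketch of the deployment pain being described: an application targeting OpenCL on a 2010-era Radeon had to probe at runtime whether an OpenCL runtime was even installed, since it didn't ship with the display driver. A minimal, hypothetical check in Python (the library name "OpenCL" is the standard one, but whether anything resolves depends entirely on the user's driver or SDK install):

```python
import ctypes.util

def opencl_runtime_available() -> bool:
    """Return True if an OpenCL library can be located on this system.

    If this returns False, the app must fall back to a CPU path or
    tell the user to install a vendor SDK/runtime - exactly the
    situation being complained about above.
    """
    # "OpenCL" resolves to OpenCL.dll on Windows, libOpenCL.so on
    # Linux, or the OpenCL framework on macOS, when one is present.
    return ctypes.util.find_library("OpenCL") is not None

if not opencl_runtime_available():
    print("No OpenCL runtime found - falling back to CPU.")
```

nVidia shipping the runtime with its drivers removes this check entirely for its users, which is the asymmetry being pointed out.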


I want to use GPU-accelerated physics because it allows me to deliver a more immersive gaming experience to the end-user. I don't care if that's PhysX, Havok, Bullet or something else... but currently there just is no option for ATi, period. Despite ATi's promises year after year.

Immersive? That's highly debatable. I wouldn't consider extra debris flying around on the screen "immersive". If the GPU Physx effects were actually interactive and integrated with the game world, then you might be onto something, but as far as I can see, it's just extra fluff.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Is OpenCL going to standardize realistic, interactive physics for every object in every game?
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Is OpenCL going to standardize realistic, interactive physics for every object in every game?

It's still gonna be up to the developers and designers to create the game world around such interactive physics, and a game engine to handle the effects. But at least having a cross-vendor GPU-accelerated physics SDK will make that more likely to happen.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Or you could just, you know, use some other GPGPU shader language besides OpenCL. It might be more complicated, but the alternatives are there if the developer needs them.

Like what?
There's Cuda, OpenCL or DirectCompute. That's it.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Is OpenCL going to standardize realistic, interactive physics for every object in every game?

No, but OpenCL could be used as the foundation on which something like that is built.
But nobody is working on it.
nVidia isn't working on it because they already have a working solution based on Cuda. Why move to OpenCL? Especially when AMD doesn't even enable OpenCL for end-users anyway. So even if nVidia added OpenCL support to PhysX, it would still not work.
Intel isn't working on it because they don't sell GPGPUs.
AMD is the one that SHOULD be working on it, but doesn't have the means.
 

zebrax2

Senior member
Nov 18, 2007
977
70
91
No, but OpenCL could be used as the foundation on which something like that is built.
But nobody is working on it.
nVidia isn't working on it because they already have a working solution based on Cuda. Why move to OpenCL? Especially when AMD doesn't even enable OpenCL for end-users anyway. So even if nVidia added OpenCL support to PhysX, it would still not work.
Intel isn't working on it because they don't sell GPGPUs.
AMD is the one that SHOULD be working on it, but doesn't have the means.

You should just add to the requirements of your program that Radeon users need to install the SDK. It would be just like those programs that require you to install .NET.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
You should just add to the requirements of your program that Radeon users need to install the SDK. It would be just like those programs that require you to install .NET.

Actually, it's more like telling people they need to install Visual Studio Express, when they only need .NET.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
No, but OpenCL could be used as the foundation on which something like that is built.
But nobody is working on it.
nVidia isn't working on it because they already have a working solution based on Cuda. Why move to OpenCL? Especially when AMD doesn't even enable OpenCL for end-users anyway. So even if nVidia added OpenCL support to PhysX, it would still not work.
Intel isn't working on it because they don't sell GPGPUs.
AMD is the one that SHOULD be working on it, but doesn't have the means.

But shouldn't it be the game devs working on it?
 

Scali

Banned
Dec 3, 2004
2,495
0
0
But shouldn't it be the game devs working on it?

No. Is Havok written by game devs? Is PhysX written by game devs?
The CryTek guys may be able to pull it off, but most other game devs, nah. Most of them just buy and use middleware. Physics is a very specialist case.
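To give a sense of why physics middleware is specialist work: even the most basic rigid-body update, before any collision detection, constraint solving, or GPU offload, is its own numerical method. A toy semi-implicit Euler integrator in Python, purely illustrative of the kind of step engines like Havok or PhysX implement far more robustly (all names here are mine):

```python
def step(pos, vel, accel, dt):
    """One semi-implicit (symplectic) Euler step for a point mass.

    Velocity is updated first, then position uses the *new* velocity.
    This keeps the integration stable for oscillating systems, which
    is why game physics engines prefer it over plain explicit Euler.
    """
    vel = [v + a * dt for v, a in zip(vel, accel)]
    pos = [p + v * dt for p, v in zip(pos, vel)]
    return pos, vel

# Drop a particle under gravity for one second at 60 Hz.
pos, vel = [0.0, 10.0], [0.0, 0.0]
for _ in range(60):
    pos, vel = step(pos, vel, [0.0, -9.81], 1.0 / 60.0)
```

Scaling this from one particle to thousands of colliding, constrained bodies per frame is where the specialist (and GPU-friendly, since each body is largely independent) work begins.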
 

beginner99

Diamond Member
Jun 2, 2009
5,318
1,763
136
I think AMD had no intention other than to sabotage nVidia and its Cuda/PhysX success.

If AMD really wanted to support its customers and enable new technologies, wouldn't they have had working OpenCL and physics stuff out the door by now?

nVidia is sabotaging itself, like disabling physics when an AMD card is present.
Why would I as a developer invest in something that 50% of users will not be able to use?

If users liked it, nVidia could increase sales quite a bit if all Radeon buyers also bought an nv-GPU for PhysX. Then AMD would be under pressure to do something.

But now? No reason. Fermi is too hot, expensive and loud. Almost no one will buy a Fermi because of PhysX.


Other applications are niche applications, video encoding probably being the most widely used one, but there's no need for GPGPU to use it. Just buy a Phenom X6 if you really need it to run that fast.

BTW, are there any comparisons in video encoding? Say i7, X6 vs. GTX 260-285, Fermi?
I've never actually seen one.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
nVidia is sabotaging itself, like disabling physics when an AMD card is present.

That's a VERY small market.

Why would I as a developer invest in something that 50% of users will not be able to use?

Because there's 50% that CAN use it. And it will set your game apart from games that don't have such effects.
Developers started adopting 3D acceleration long before it had 50% market share. Likewise, developers started adopting DX10 long before it had 50% market share, and now DX11.

If users liked it, nVidia could increase sales quite a bit if all Radeon buyers also bought an nv-GPU for PhysX. Then AMD would be under pressure to do something.

I don't think that's going to work. The original PPU wasn't much of a success either. People just don't want to buy a lot of cards. They want one card that does it all.

But now? No reason. Fermi is too hot, expensive and loud. Almost no one will buy a Fermi because of PhysX.

That's nVidia's problem currently. They need to make sure that the refresh of Fermi deals with those issues. That has little to do with PhysX itself, however; it's just not a very successful product, period.
But assuming nVidia does deal with those issues... if they come up with a card that is comparable to AMD's offerings, or even better, in terms of heat, price and noise... then PhysX is a nice free add-on.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
No. Is Havok written by game devs? Is PhysX written by game devs?
The CryTek guys may be able to pull it off, but most other game devs, nah. Most of them just buy and use middleware. Physics is a very specialist case.

The point is, if AMD develops their own physics engine, why would they allow NVIDIA to run it? Just like NVIDIA doesn't allow AMD to run theirs (I know they would, for a fee and some control over how it runs on AMD hardware, and they know very well how that can end, since the performance advantage Intel has isn't based on hardware and process advantages alone, but also on the compiler).

The Half-Life 2 physics gun is a more impressive effect than most of the "eye-candy only" effects that GPU-accelerated PhysX has shown so far (I know they could be more impressive, but so far they aren't, since they're not being developed by anyone with an interest at the hardware level).

Developers might have been jumping on the PhysX effects library, but only for the parts that aren't GPU accelerated (and even those are simply eye candy that you call cool and then forget).

And AMD is said to be working with Bullet (http://bulletphysics.org/wordpress/) to provide a vendor-neutral GPU physics library.

EDIT:

Also curious is this article on the Bullet news page, http://bulletphysics.org/wordpress/?p=88 , which ranks the physics libraries:

The August 2009 issue of Game Developer magazine features an article about game middleware, written by Mark DeLoura. They surveyed over 100 senior developers at various development houses, mainly working on PC, PlayStation 3 or Xbox 360.

According to the article, developers like having access to the full source code. When purchasing a Havok or PhysX license, some of the core algorithmic implementations, such as the core constraint solver or collision detection internals, are not exposed. PhysX is rated number 1 at 26.8%, Havok comes 2nd at 22.7%, Bullet third at 10.3% and Open Dynamics Engine fourth at 4.1%.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
The point is if AMD develops their physics engine, why would they allow NVIDIA to run it?

Because AMD is not in the situation that nVidia is in.
nVidia has about 59% market share, according to the latest Steam survey.
AMD only has 33%.
So if developers have to choose which vendor to support, the choice is easy.

Also curious is this article on the Bullet news page http://bulletphysics.org/wordpress/?p=88 , that ranks the physics effects library,

That has little to do with this topic I'm afraid.
Bullet is quite popular on consoles (it was originally developed for/by Sony), but on PC hardly anyone uses it. And the PC is the only platform where GPU acceleration counts. Bullet's GPU acceleration isn't even in a working state anyway...
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
Because AMD is not in the situation that nVidia is in.
nVidia has about 59% market share, according to the latest Steam survey.
AMD only has 33%.
So if developers have to choose which vendor to support, the choice is easy.

But DX11, according to Steam, is like 95% AMD - 5% NVIDIA. See, I can do fanboy numbers too.

Now for a more accurate current market share figure, http://www.crn.com/hardware/225000002 : the discrete market is 49.7% - 50.3%. Is discrete-only acceptable, or do you want IGPs counted, if you believe IGPs can do hardware PhysX?

That has little to do with this topic, I'm afraid.
Bullet is quite popular on consoles (it was originally developed for/by Sony), but on PC hardly anyone uses it. And the PC is the only platform where GPU acceleration counts. Bullet's GPU acceleration isn't even in a working state anyway...

And GPU PhysX is 5 games or so. Weeee.
 