
What GPGPU applications are available to ATI users?


GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
The only thing he said is that it's much easier to develop for CUDA than OpenCL, nothing else.
You're just assuming that these subjects weren't discussed before. Neither he nor I are first-time posters.


And if you disagree I'd love to see what programs you've written with those two..

I don't agree or disagree.

But I'm going to tell you something: even easier than writing for CUDA is writing in C++ and other languages.

Don't agree?

See how many programs on your PC are written for CUDA compared to other languages.

That is all I'm saying.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
You're just assuming that these subjects weren't discussed before. Neither he nor I are first-time posters.
Well, OK, it just looked strange to quote him saying it's easier to program for CUDA than for OpenCL and then go on to say that there aren't a lot of useful applications in the consumer space (with which I wholeheartedly agree; I can't even think of many consumer applications where it'd be useful, other than encoding). After all, correlation doesn't imply causation, as we all know ;)

And yeah, sure, it's much easier to program in plain C. Even if you want to take advantage of multithreading, it's much easier on the CPU: far fewer constraints, better and more tools, lots of experience. Though I don't think your average programmer would be able to implement even basic parallel algorithms like hooking and pointer jumping; they can get by without them in C, but they're essential for something like Cuda (or Cilk or any other language for parallel programming; then again, I wouldn't want to implement them efficiently in Cuda either).
But that doesn't change the fact that Cuda is still much easier to program for than OpenCL. Both companies have lots of work to do in that area if they want GPGPU computing to take off, but Nvidia had the head start there; I don't think anyone would deny that.
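For what it's worth, here's a minimal CPU-side sketch of the pointer-jumping idea mentioned above (list ranking in O(log n) rounds). This is plain Python with the per-round list comprehensions standing in for the one-thread-per-node execution a GPU would use; the function name and list encoding are just for illustration:

```python
# List ranking via pointer jumping: each node finds its distance to the
# end of a linked list in O(log n) synchronous rounds. On a GPU every
# node would get its own thread; here each round is simulated in bulk.

def pointer_jump_rank(succ):
    """succ[i] is the successor of node i; the tail points to itself."""
    n = len(succ)
    # Tail starts at rank 0, every other node one hop from its successor.
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    nxt = list(succ)
    # Each round doubles the distance every pointer spans.
    while any(nxt[i] != nxt[nxt[i]] for i in range(n)):
        rank = [rank[i] + rank[nxt[i]] for i in range(n)]
        nxt = [nxt[nxt[i]] for i in range(n)]
    return rank

# A 5-node list 0 -> 1 -> 2 -> 3 -> 4 (4 is the tail):
print(pointer_jump_rank([1, 2, 3, 4, 4]))  # [4, 3, 2, 1, 0]
```

Sequentially this is pointless (a single walk over the list is O(n)), which is exactly the point: the algorithm only pays off when the per-node updates run in parallel.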
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
My bigger picture is quite simple:

Instead of buying an Ageia PPU or a Fermi, I've bought something that is cheaper and runs today's features. In the future I'll buy something to run the future features.

The irony here is that this future will never come (or will take a long time) when your 2nd biggest competitor won't push the technology and just keeps playing the victim game since they've got nothing to show.

AMD has undoubtedly held back GPGPU development and physics acceleration on GPUs. It's better to have two people working on a problem, not just one.

I understand you, mate: all this great technology, very mature, that is the future, yet it insists on staying in the future and not materializing.

AMD is really lucky, isn't it?

By the way, one of the Ageia guys that was working at NVIDIA seems to think the future is Fusion. Ironic, isn't it? :p

Why do you think he got the job? It's not because he thinks PhysX is a failure. It's because AMD needed somebody with good vision and expertise in this area of the field, and the Fusion project was a big opportunity for him to work in the field of heterogeneous computing. I think it kind of tells you the status of GPGPU development at AMD. They too know that heterogeneous computing is the future.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
The only thing he said is that it's much easier to develop for CUDA than OpenCL, nothing else. And if you disagree I'd love to see what programs you've written with those two..

Exactly, and that's not nVidia vs AMD, because it goes for OpenCL on nVidia too.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
That is why the regular customers, like me, don't give a damn, and you shouldn't be offended by it.

The thing is that you are crapping all over some groundbreaking new technology, just because it doesn't do anything for YOU (or it does, but you're in denial).

And other games have as good or better eye candy and look as real or more.

Rubbish. There's no game with CPU-based physics that can do anything like the effects of PhysX-accelerated games.

In the 3 years that I've been in denial you have Batman, you have Mirror's Edge, you have 2-3 other games that are passable, and you will have Mafia II.

The first few years of 3D acceleration you only had a handful of games as well. One of them just happened to be GLQuake.

Certainly, added realism in games has its pros and cons. Of course, that extra realism also eats a good chunk of the frame rate and/or requires an additional GPU.

All new features require more powerful GPUs and/or take their toll on framerate. Who cares? That's what we're enthusiasts for!
Or do you still want to game with a 3dfx Voodoo? Or heck, let's go back to software rendering, that worked too!

AMD is really lucky, isn't it?

AMD isn't exactly pushing the boundaries of what's possible.

By the way, one of the Ageia guys that was working at NVIDIA seems to think the future is Fusion. Ironic, isn't it? :p

Yea, *his* future probably, I bet he made a nice career move with a great bump in salary.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
But I'm going to tell you something - even easier than to write for CUDA is easier to write for C++ and other languages.

Cuda *supports* C++. That's one of the main reasons why it's so much better than OpenCL or DirectCompute.
 

Zoeff

Member
Mar 13, 2010
86
0
66
Wait, CUDA is still far superior to OpenCL in every way? How is this GPGPU thing ever going to take off when the only 'working' one is nvidia exclusive..?!

Or is CUDA going to become more open and be more like DirectX vs OpenGL in the future?
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Wait, CUDA is still far superior to OpenCL in every way? How is this GPGPU thing ever going to take off when the only 'working' one is nvidia exclusive..?!

That's the question that nVidia's competitors are struggling with right now.
nVidia already has Adobe on its side, so it's pretty smooth sailing from their end.
If developers choose to abandon Cuda in favour of OpenCL, that's fine as well; nVidia does support it.

Or is CUDA going to become more open and be more like DirectX vs OpenGL in the future?

I don't think nVidia is against opening up Cuda, they've hinted at it before. Problem is more that AMD refused to adopt Cuda/PhysX, because their direct competitor would have too much control over them (so they sided with Havok, owned by Intel, instead, oh the irony).
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Better the devil you know than the one you don't?

Apparently not, because Havok still doesn't run on GPUs, and it looks like AMD pulled the plug because they stopped advertising Havok in the media a long time ago.
I think nVidia would have been the better choice, at least for end-users.
Namely, they'd actually have a working GPU-accelerated physics solution. Yea, perhaps nVidia's GPUs would be more efficient, but at least AMD would have support.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Better the devil you know than the one you don't?

I think AMD realizes now Intel has no intention of bringing Havok to the GPU. Instead of admitting their stupidity in siding with Intel, they are going to try and push Bullet from a minority position? AMD is trying to recreate the wheel. And their track record isn't pretty when it comes to dev relations. Good luck.
 

zebrax2

Senior member
Nov 18, 2007
977
70
91
Apparently not, because Havok still doesn't run on GPUs, and it looks like AMD pulled the plug because they stopped advertising Havok in the media a long time ago.
I think nVidia would have been the better choice, at least for end-users.
Namely, they'd actually have a working GPU-accelerated physics solution. Yea, perhaps nVidia's GPUs would be more efficient, but at least AMD would have support.

I think choosing Havok was a good choice for AMD, though. Intel right now doesn't have any plans in the near future to enter the discrete card market, since Larrabee was scrapped and will be used in the HPC market instead. There is now no point for Intel to cripple Havok on AMD cards, since they will not be competing anymore.

On the other hand, if they had chosen nVidia they may very well have had a working physics solution, but what is the point in having it if your competitor may very well sabotage your performance? With PhysX then having wide adoption, reviews of their cards being blasted because of performance in comparison to their nVidia counterparts would result in lost sales. Competition wouldn't be that fierce, and as a result nVidia would be able to charge more for their cards, which would be bad for the consumer.

The same could be said about OpenCL and Cuda.

Edit:
Is AMD abandoning Havok? Last I heard they will support both Bullet and Havok.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
I think choosing Havok was a good choice for AMD, though. Intel right now doesn't have any plans in the near future to enter the discrete card market, since Larrabee was scrapped and will be used in the HPC market instead. There is now no point for Intel to cripple Havok on AMD cards, since they will not be competing anymore.

On the other hand, if they had chosen nVidia they may very well have had a working physics solution, but what is the point in having it if your competitor may very well sabotage your performance? With PhysX then having wide adoption, reviews of their cards being blasted because of performance in comparison to their nVidia counterparts would result in lost sales. Competition wouldn't be that fierce, and as a result nVidia would be able to charge more for their cards, which would be bad for the consumer.

The same could be said about OpenCL and Cuda.

Edit:
Is AMD abandoning Havok? Last I heard they will support both Bullet and Havok.

AMD supports Havok on a CPU. So does VIA, I'm sure. If nVidia had an x86 chip, they would as well. Intel has no plans to bring Havok to a GPU because it adds nothing to their lineup. Which I think is why AMD acquired Bullet.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
AMD can't acquire Bullet. It's an open source project, and the main developer is a Sony employee.
AMD just signed an NDA with the developer, and we haven't heard anything about OpenCL support from Bullet since.
 

zebrax2

Senior member
Nov 18, 2007
977
70
91
AMD supports Havok on a CPU. So does VIA, I'm sure. If nVidia had an x86 chip, they would as well. Intel has no plans to bring Havok to a GPU because it adds nothing to their lineup. Which I think is why AMD acquired Bullet.

The question now would be why Intel won't port Havok to a GPU. What could Intel possibly lose by porting it, considering that they could earn money from it?
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Ahh, I thought the news release AMD put out said they acquired it. So they are going with an open source project for their GPU physics?

That will end about as well as their OpenCL progress, eh? :D :(
 

Scali

Banned
Dec 3, 2004
2,495
0
0
The question now would be why Intel won't port Havok to a GPU. What could Intel possibly lose by porting it, considering that they could earn money from it?

That one is simple: CPU sales.
With Havok on CPU, you need a fast multicore processor.
With GPU acceleration, a mainstream dualcore CPU would probably be fine.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
The question now would be why Intel won't port Havok to a GPU. What could Intel possibly lose by porting it, considering that they could earn money from it?

The question from an Intel perspective isn't what Intel can lose but what they gain. Why allow AMD to use Havok on their GPUs when they are selling Havok to run on their CPUs?
Intel gains nothing from allowing their competitor to use their technology to sell a product.

I never understood AMD's argument, or at least figured it was horseshit, when they whined about why they won't jump on board with PhysX. Their argument was they didn't want to use a technology owned by a competitor. So they ran to Intel? Utter crap.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
So that would be Flash 10.1 (not released yet), Cyberlink MediaShow Espresso, ArcSoft SimHD, Folding@Home and the ATi Froblins demo....

This is the reason I'm beginning to regret my 5770 purchase. I wanted transcoding support, and I'm stuck with MediaShow Espresso. It's not all that fast. Had I gone nVidia, I would have had more software options.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
I never understood AMD's argument, or at least figured it was horseshit, when they whined about why they won't jump on board with PhysX. Their argument was they didn't want to use a technology owned by a competitor. So they ran to Intel? Utter crap.

It was purely political... call it FUD.
Firstly they would put nVidia in a bad light, saying it was closed and proprietary technology, where Havok would allegedly work on all OpenCL hardware.
Secondly, they tried to trick developers into believing that a better alternative was on the way, to lure them away from PhysX and wait for Havok.
I think AMD had no intention other than to sabotage nVidia and its Cuda/PhysX success.

If AMD really wanted to support its customers and enable new technologies, wouldn't they have had working OpenCL and physics stuff out the door by now?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
AMD is trying to recreate the wheel.

Navigating the IP landscape in search of new IP territories that can be carved out for oneself is all about recreating the wheel and building the better mousetrap.

You think AMD's margins with Cuda would be anywhere near Nvidia's? (licensing fee adds to cost structure) Would Nvidia's shareholders be happy if it were?

This is business, and in business building a commodity is a margin killer. Why would NV want to make Cuda a commodity and destroy their margins? (and NV is hot on margins right now, as are most businesses in the tech industry)
 

Scali

Banned
Dec 3, 2004
2,495
0
0
You think AMD's margins with Cuda would be anywhere near Nvidia's? (licensing fee adds to cost structure) Would Nvidia's shareholders be happy if it were?

Need I remind you, you're talking about a company whose bread-and-butter is x86-licensed CPUs.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
It was purely political... call it FUD.
Firstly they would put nVidia in a bad light, saying it was closed and proprietary technology, where Havok would allegedly work on all OpenCL hardware.
Secondly, they tried to trick developers into believing that a better alternative was on the way, to lure them away from PhysX and wait for Havok.
I think AMD had no intention other than to sabotage nVidia and its Cuda/PhysX success.

If AMD really wanted to support its customers and enable new technologies, wouldn't they have had working OpenCL and physics stuff out the door by now?

You would think so but they would rather play the victim and impede progress.
 

zebrax2

Senior member
Nov 18, 2007
977
70
91
The question from an Intel perspective isn't what Intel can lose but what they gain. Why allow AMD to use Havok on their GPUs when they are selling Havok to run on their CPUs?
Intel gains nothing from allowing their competitor to use their technology to sell a product.

I never understood AMD's argument, or at least figured it was horseshit, when they whined about why they won't jump on board with PhysX. Their argument was they didn't want to use a technology owned by a competitor. So they ran to Intel? Utter crap.

Intel is already integrating GPUs into their CPUs. Although they are not that powerful today, in a few more generations they would no doubt be capable enough to handle Havok. That being another selling point for their CPUs, wouldn't it be enough for them to consider porting it?
 