What GPGPU applications are available to ATI users?


Scali

Banned
Dec 3, 2004
2,495
0
0
PhysX, as of now, is not something I would buy an nVidia card for. From everything I've seen of PhysX, the only use for it was PR. Every effect with PhysX has been blown up 10x just to show how cool it is and to shout "OMFG WE'RE USING HARDWARE PHYSICS!!!", but nothing has actually changed the gameplay.

Reminds me of the days when pixelshaders were new, and every game had shiny bumpy surfaces.
No, it didn't make the gameplay any better... and in fact, it didn't even make the games look more realistic, as it turned everything into a caricature... And it didn't exactly help performance either.
But people loved it anyway.

I bet every AMD fanboy loves GPU physics as well, deep down... they know it's great, and it's going to become a standard feature of games eventually... they just can't come out of the closet until AMD supports it, which can take a LOOONG time.
 

linkgoron

Platinum Member
Mar 9, 2005
2,598
1,238
136
Reminds me of the days when pixelshaders were new, and every game had shiny bumpy surfaces.
No, it didn't make the gameplay any better... and in fact, it didn't even make the games look more realistic, as it turned everything into a caricature... And it didn't exactly help performance either.
But people loved it anyway.

I bet every AMD fanboy loves GPU physics as well, deep down... they know it's great, and it's going to become a standard feature of games eventually... they just can't come out of the closet until AMD supports it, which can take a LOOONG time.

I will make my point clearer: PhysX has not brought anything to the table that we haven't already SEEN, or that hasn't been done on the CPU.

I don't think anyone here is downplaying GPU-physics.
I think people here are down-playing PhysX.
There is a difference, you know. Only nVidia fanboys make them one and the same. Yet, deep down, they too know that it's not really worth anything right now, although GPU physics is a great idea for the future, once something cross-platform arrives.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
I will make my point clearer: PhysX has not brought anything to the table that we haven't already SEEN, or that hasn't been done on the CPU.

It has actually... Perhaps you just haven't been looking, or you didn't understand what you were looking at.
PhysX has shown us detailed, realistic fluid simulations. Things that CPUs simply aren't powerful enough for.
And that has nothing to do with nVidia, because PhysX had already shown that back when only the Ageia PPU accelerated it.
 

luv2increase

Member
Nov 20, 2009
130
0
0
I am 100% against PhysX because it is closed and proprietary. I seriously think the main reason Nvidia doesn't want PhysX to run on AMD hardware is that AMD hardware might actually run PhysX better than Nvidia hardware does. I have thought this for a while.

I believe it was the 3870 which someone actually got PhysX to run on, and it did better than an 8800 for the CPU/PhysX test in 3DMarkVantage... Someone correct me if I'm wrong.

I believe that Nvidia should at least port PhysX from CUDA to OpenCL. Then everyone will be able to run PhysX on the GPU, and undoubtedly PhysX will be much more desirable to developers.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
I am 100% against PhysX because it is closed and proprietary. I seriously think the main reason Nvidia doesn't want PhysX to run on AMD hardware is that AMD hardware might actually run PhysX better than Nvidia hardware does. I have thought this for a while.

Actually, it's the other way around: nVidia offered to help AMD with a PhysX implementation for their architecture, but AMD turned the offer down. The reason is simple: AMD knew they could never get it as fast as nVidia.

I believe it was the 3870 which someone actually got PhysX to run on, and it did better than an 8800 for the CPU/PhysX test in 3DMarkVantage... Someone correct me if I'm wrong.

You're wrong. What they did was hack the PhysX runtime so that 3DMarkVantage would pretty much skip the physics test, and produce an inflated score.
There's a reason why they didn't show any screenshots of the actual physics scene, let alone a video demonstrating the physics scene actually running on an AMD GPU... or, even better, release a binary so that every AMD owner could run it as well.

I believe that Nvidia should at least port PhysX from CUDA to OpenCL. Then everyone will be able to run PhysX on the GPU, and undoubtedly PhysX will be much more desirable to developers.

Why should they? They have nothing to gain from that. AMD doesn't support OpenCL anyway... and nVidia's Cuda implementation will always be more efficient than the OpenCL one.
AMD needs to make the next move, and somehow force nVidia's hand in opening PhysX up to OpenCL. Who knows, nVidia may have already been working on an OpenCL implementation 'just in case'. I know I would, if I were a manager at nVidia. If a competing GPU-accelerated physics API ever arrives, OpenCL is the only way to keep PhysX relevant, and thus the only way to protect the investment.
 

linkgoron

Platinum Member
Mar 9, 2005
2,598
1,238
136
It has actually... Perhaps you just haven't been looking, or you didn't understand what you were looking at.
PhysX has shown us detailed, realistic fluid simulations. Things that CPUs simply aren't powerful enough for.
And that has nothing to do with nVidia, because PhysX had already shown that back when only the Ageia PPU accelerated it.

something like this?

http://www.youtube.com/watch?v=h34xgynBpL8
http://www.youtube.com/watch?v=ILaxCkPKyJs
http://www.youtube.com/watch?v=7f33GYOC2as
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
Then you really ARE a fanboy, because marketshare was not the point I was making.

Everyone is an AMD fanboy to you if they aren't screaming "AMD IS SHIT! AMD IS GOING TO DIE! AMD HAS NO FUTURE!".

Even a fanboy such as yourself will have to admit that although AMD and nVidia are pretty much tied in sales *now*, nVidia has had the upper hand ever since the 8-series. Add to that that only AMD's 4000- and 5000-series can run OpenCL and thus GPU-accelerated physics...
nVidia 8-series and beyond vs AMD's 4000-series and beyond?
Even you cannot deny that nVidia has a lot more devices capable of GPU physics on the market.
So if that is the criterion on which developers will have to choose their physics API, AMD is at a disadvantage, THAT was the point.

First, as I stated in another thread, if NVIDIA were similar to AMD in price per performance I would buy NVIDIA, because games generally run better out of the box.

Unfortunately that isn't happening, and in Europe it's even worse.

Secondly, go look at reviews of those few games with hardware PhysX and see the performance drop that even these mild PhysX effects exert on the GTX 200 series.

So that whole "GF 8000 series and so on" argument isn't worth much.


And then we're not even getting into the fact that AMD doesn't HAVE a physics API, and even if they ever came up with one, they would have to fight against nVidia's stable and mature API, which many developers are already familiar with.
In other words, AMD is not in the same position as nVidia, and will need cross-vendor compatibility for leverage.

And we have seen that NVIDIA having a physics API and very aggressive dev relations gave them around 15 titles in the last 3 years - and in most of those titles the effects aren't that different from CPU physics effects.

So basically NVIDIA had these years free of competition and enjoyed an advantage in marketshare.

Now they will have competition, and their marketshare lead has shrunk.

What exactly makes you think things will suddenly change?
 

Scali

Banned
Dec 3, 2004
2,495
0
0

Uhh no... fluid simulations, you know? HL2 uses simple rigid-body and ragdoll physics. CPUs can do that...
But stuff like this: http://www.youtube.com/watch?v=r17UOMZJbGs
No way a CPU can do that. It requires simulating hundreds of thousands, or even millions, of particles, which is only possible on massively parallel architectures.
Games like CellFactor and Cryostasis use these effects.
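
To make the scale argument concrete, here's a rough sketch (my own illustration, not PhysX or game code) of the inner loop of a particle update step. Each particle only depends on its own state (a real fluid solver would also look at its neighbours), so the work maps naturally onto one GPU thread per particle; a CPU has to grind through the same loop with a handful of cores, which is why the particle counts on CPU and GPU differ by orders of magnitude.

Code:
// Illustrative sketch only (not PhysX): a naive per-particle update step.
// Every iteration is independent, so on a GPU this becomes one thread per
// particle; on a CPU the same loop runs on a handful of cores at best,
// and the particle count is what kills you.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Particle { float x, y, z; float vx, vy, vz; };

void stepParticles(std::vector<Particle>& particles, float dt)
{
    const float gravity = 9.81f;
    for (std::size_t i = 0; i < particles.size(); ++i)  // one GPU thread per i
    {
        Particle& p = particles[i];
        p.vy -= gravity * dt;   // integrate velocity
        p.x  += p.vx * dt;      // integrate position
        p.y  += p.vy * dt;
        p.z  += p.vz * dt;
        if (p.y < 0.0f) {       // crude ground-plane bounce
            p.y  = 0.0f;
            p.vy = -0.5f * p.vy;
        }
    }
}

int main()
{
    std::vector<Particle> water(500000, Particle{0.0f, 10.0f, 0.0f, 0.0f, 0.0f, 0.0f});
    for (int frame = 0; frame < 60; ++frame)
        stepParticles(water, 1.0f / 60.0f);
    std::printf("y of particle 0 after 1s: %f\n", water[0].y);
    return 0;
}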
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Everyone is an AMD fanboy to you if they aren't screaming "AMD IS SHIT! AMD IS GOING TO DIE! AMD HAS NO FUTURE!".

No, you're a fanboy if you yank things out of context and focus on useless details to try and discredit nVidia, with no regard to the point that the other party is trying to argue.

Secondly, go look at reviews of those few games with hardware PhysX and see the performance drop that even these mild PhysX effects exert on the GTX 200 series.

That's completely beside the point again.
Don't you just want to kill everyone who keeps throwing up barriers for revolutionary new technology?

And we have seen that NVIDIA having a physics API and very aggressive dev relations gave them around 15 titles in the last 3 years - and in most of those titles the effects aren't that different from CPU physics effects.

Considering the fact that most major titles require 3 or more years of development time, nVidia has done pretty damn well.
And I completely disagree... The PhysX effects make ALL the difference compared to a CPU-only game. You get smoke, fluid and debris... things that were always faked, or simply not implemented at all, in every CPU-only game.
Yes, to the untrained eye it may seem insignificant. But to a developer like me, it makes a world of difference. You just CAN'T DO these effects with a CPU. And with the current rate of development of CPUs, it doesn't look like you can do those effects on a CPU in 5-10 years time either.
GPUs on the other hand keep developing at an alarming rate, with much better scaling characteristics than CPUs, so it's just a matter of time.
 

linkgoron

Platinum Member
Mar 9, 2005
2,598
1,238
136
Uhh no... fluid simulations, you know? HL2 uses simple rigid-body and ragdoll physics. CPUs can do that...
But stuff like this: http://www.youtube.com/watch?v=r17UOMZJbGs
No way a CPU can do that. It requires simulating hundreds of thousands, or even millions, of particles, which is only possible on massively parallel architectures.
Games like CellFactor and Cryostasis use these effects.

Nice game you've linked there.

Anyway, I've watched this video:
http://www.youtube.com/watch?v=kufpPwho9Ec
and this http://www.youtube.com/watch?v=H3GVEExhAOI
and this http://www.youtube.com/watch?v=MBNHPHVbQts

The only thing I see here is developers disabling CPU capabilities (with Cryostasis) and massively exaggerating water and other effects on the GPU.
In CellFactor I don't see a big difference from HL2.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
The only thing I see here is developers disabling CPU capabilities (with Cryostasis) and massively exaggerating water and other effects on the GPU.

Try downloading the demo. It allows you to enable full PhysX water effects on a CPU as well... but don't be surprised if you get single-digit framerates even on a Core i7. The only way to play Cryostasis without GPU acceleration is to turn down the water effects.
There's just no debating this point. Don't give me the argument that PhysX would somehow be unfair on CPUs... because it's too easy to counter: no other physics library has ever succeeded in detailed realtime fluid simulation either. Havok DOES support it on CPUs, but the scale is just WAY different. Where PhysX can do hundreds of thousands of particles on a GPU, Havok can only do a few hundred, so your detail is orders of magnitude less, and as such you simply cannot do the effects in Cryostasis. You don't have the detail for water droplets and such (which pretty much proves that PhysX isn't unfair on CPUs; it's just the CPU limitation in general).

In other words, it's not exactly a coincidence that no game ever had water effects quite like Cryostasis, and that Cryostasis uses the only currently available accelerated physics API to enable these effects. As a developer I don't really care what name it says on the tin. So it is 'nVidia' this time... whatever. What I care about is that there is a company that has enabled this technology for us to use.
I bought a Radeon 8500 as well, when ps1.4 came out, and most people had no idea about ATi other than their sub-standard Mach cards. So it said 'ATi' on the tin that time... I didn't care. What I did care about was that ps1.4 was considerably more powerful and programmable than the first-gen shaders.
Likewise I went with ATi again for SM2.0. And now I'm back with ATi, or AMD actually, because of DX11.
 
Last edited:

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Still trying to decide who is the most antagonistic poster in this "conversation". There is definitely trolling going on: intentionally missing points, purposefully taking the other's posts out of context, and such. Take it down a notch or two please; I don't care which side you favor or don't.
And technically, this thread is about what GPGPU apps are available for ATI users. I think that is what should be stuck to.
Anandtech Moderator - Keysplayr
 

Scali

Banned
Dec 3, 2004
2,495
0
0
And technically, this thread is about what GPGPU apps are available for ATI users.

Those can be summed up in a single post, pretty much... which we have.
Then someone turned it into all the GPGPU apps that AREN'T available for ATi users.
Which I personally still consider on-topic.
After all, ATi users don't live in a vacuum. Isn't it at least as important to know what you're NOT going to get, in order to properly evaluate what you ARE going to get?
 

Seferio

Member
Oct 9, 2001
32
0
0
There aren't many ATI users who buy their video cards specifically for GPGPU applications. Think of it more as a bonus when OpenCL finally gets to the stage of being ready for prime time.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
No, you're a fanboy if you yank things out of context and focus on useless details to try and discredit nVidia, with no regard to the point that the other party is trying to argue.
So you came here in this thread to valiantly defend NVIDIA, saying things like:

OpenCL runtime for every Radeon user, rather than for developers only.

Of course every user can download the SDK - sure, it isn't as neat as being packed in the drivers. Of course you fail to mention that, and even said in your sentence that they give a runtime to developers, when what they offer is an SDK. Only when people pointed that out did you complain that it isn't as easy as a runtime.

A GPU-accelerated physics library as an alternative to nVidia's PhysX.

Of course you forgot to mention they are working with Bullet.

I know what you will say: "AMD's promises are full of shit! Look at Barcelona!". AMD is the only hardware company that ever missed something, right?

I said I'm a developer.
Let me explain it to you: if we developers are hindered in developing and deploying OpenCL applications, you consumers aren't getting applications.
You think it's entirely coincidence that a major company like Adobe chooses Cuda for GPGPU-acceleration, rather than OpenCL?

This seems like you claiming AMD is hindering OpenCL.

From wiki http://en.wikipedia.org/wiki/Opencl
* On April 20, 2009, Nvidia announced the release of its OpenCL driver and SDK to developers participating in its OpenCL Early Access Program.[13]

* On August 5, 2009, AMD unveiled the first development tools for its OpenCL platform as part of its ATI Stream SDK v2.0 Beta Program.[14]

So exactly how is AMD hindering OpenCL? By being 4 months late?

But you're right, I should dump the Radeon and get a Fermi. I've thought about it. I just didn't think Fermi was a good enough product. But every month that passes without AMD adding OpenCL to their drivers, Fermi gets more attractive.

Oh wait, they are hindering it by not adding OpenCL support to their drivers. But aren't you a developer? Don't you use the SDK to develop?

I'm quite sure that they will, actually. Intel supports the OpenCL standard: http://software.intel.com/en-us/blog...h-tim-mattson/

As for DirectCompute, technically their current IGPs should already be able to support it (albeit at CS4.0 level only). They haven't enabled it in the drivers yet, but I'd be surprised if they don't enable it on Sandy Bridge.

Look at the difference.

AMD didn't include OpenCL in their drivers - the logical conclusion is that AMD is hindering OpenCL development.

Intel on the other hand will just enable it later on. No hindering of development here.

It creates a chicken-and-egg problem that is unnecessary. There's no reason why AMD shouldn't bundle it with their drivers.
nVidia does it, S3 does it.
The only reason why AMD doesn't is simple: they'll get creamed by nVidia as soon as people can do apples-to-apples application benchmarks in OpenCL.

Proof?

Nah!

No discrediting AMD here.

In theory, yes... but not many relevant benches exist yet. Partly because AMD was able to control the situation so far. And partly because AMD shot themselves in the foot and drove developers back to Cuda.

AMD the DARKSIDE - controlling all, making everyone's life a misery!

If at all possible, I prefer to support the technology that supports the widest range of hardware and thus the largest demographic, yes.
But currently, OpenCL is pretty much only nVidia and S3, and the S3 marketshare is negligible... so the difference in marketshare with Cuda is marginal, while C for Cuda is easier to use and generally a more powerful tool than OpenCL.

Again the same FUD that AMD users can't run OpenCL.

You can run OpenCL perhaps, I can run OpenCL, but the average end-user can't.
You think the average end-user for PhotoShop/Premiere has any idea about GPGPU acceleration or what Cuda really means? Let alone that they would know where to find an SDK and install it?
I doubt it, these are artists, not computer-savvy people (most of the time they use Macs for that reason).
All they have to know is that Adobe recommends nVidia Quadro cards for best performance, so they buy one of those, and probably have it installed for them by the IT department.

Let me explain in terms you may understand (but probably will deny anyway):
AMD advertises with OpenCL support. So people who buy an AMD card, will think that it supports OpenCL. Then they install my application... and hey, it doesn't work!
Who do you think they're going to call for support, me or AMD?
I'd be getting a lot of support calls saying "Hey your software is crap, it doesn't work", and then I'd have to explain that they need to download and install the SDK etc.
Yes, I could put it in the manual, in a readme, or even a popup window on install... but do you think people actually read those? Let me answer that for you: They don't.
So it'd be a big waste of time and money on my behalf. Thank you AMD.

I wonder how the programs that actually use AMD Stream do it - I'll have to go check.

And what is AMD doing on the physics front in the meantime? Nothing. There's no sign that AMD will ever offer an alternative to PhysX. I guess that's why you're in denial about PhysX in the first place, right?

Despite the fact that AMD announced a cooperation with Bullet. You're not even saying "they announced this but I don't believe it".

That's the question that nVidia's competitors are struggling with right now.
nVidia already has Adobe on its side, so it's pretty smooth sailing from their end.
If developers choose to abandon Cuda in favour of OpenCL, that's fine as well; nVidia does support it.

No mention that Adobe has already stated they are going to support other means of GPGPU acceleration in future versions.

AMD just signed an NDA with the developer, and we haven't heard anything about OpenCL support from Bullet since.

Really?

http://bulletphysics.org/wordpress/?p=175

http://www.amd.com/us/press-releases/Pages/amd-ecosystem-2010mar8.aspx

AMD (NYSE:AMD) today announced that, along with partners Pixelux Entertainment and Bullet Physics, it has added significant support to the Open Physics ecosystem by providing game developers with access to the newest version of the Pixelux Digital Molecular Matter (DMM), a breakthrough in physics simulation. In addition, to enabling a superior development experience and helping to reduce time to market, Pixelux has tightly integrated its technology, DMM, with Bullet Physics, allowing developers to integrate physics simulation into game titles that run on both OpenCL- and DirectCompute-capable platforms. And both DMM and Bullet work with Trinigy’s Vision Engine to create and visualize physics offerings in-game.


I think I'll stop for now cause I'm getting tired.



That's completely beside the point again.
Don't you just want to kill everyone who keeps throwing up barriers for revolutionary new technology?

Yeah. I understand, dreams are so much better than reality.

EDIT: This post was written long before I saw your post, Keys (I only saw it after I actually hit post; I was in and out while writing it), and I hope it is OK to point out some things to defend myself against being accused of a) being a fanboy and b) trying to discredit NVIDIA.

That's fine, but just keep your cool, and I'd like to not see any more of the fanboy nonsense slung around from either you or Scali. Just have a discussion/debate without the heat. It is possible.
Thanks.
AT Moderator - Keysplayr
 
Last edited by a moderator:

Scali

Banned
Dec 3, 2004
2,495
0
0
Of course every user can download the SDK - sure, it isn't as neat as being packed in the drivers. Of course you fail to mention that, and even said in your sentence that they give a runtime to developers, when what they offer is an SDK. Only when people pointed that out did you complain that it isn't as easy as a runtime.

An SDK is not a runtime. I want a runtime. I asked AMD devrel for it. They gave me a bs answer about it being too big a download. Well, if that is the argument, then the SDK is REALLY a bad solution for end-users, as it's an even bigger download than a runtime.
Here is the full mail from Michael Houston (after I had already mailed a few times over the past months without getting anything even remotely resembling a straight answer):
-----Original Message-----
From: Scali [mailto:scali@scali.eu.org]
Sent: Thursday, February 25, 2010 6:14 AM
To: Houston, Michael
Subject: RE: Still no OpenCL in 10.2 drivers, Mike

I'd like two specific answers:
1) What is the eta for Catalyst WHQL end-user OpenCL support? March? April?
December... 2012?

[MCH] There is a full roadmap available to ISVs under NDA that covers the roll-out of this and other features. We are trying to pull in the release to be sooner rather than later. At the moment, inclusion in Catalyst is aligned with when several vendors have plans to release and sell OpenCL apps. I would like to see things in Catalyst before that. We need to work on the size of the OpenCL runtime to avoid ridiculously bloating Catalyst, and also deal with ICD installation issues we are having with other vendors who are not running the Khronos-ratified ICD, and then how to not stomp on each other. This is all being worked on alongside other features.

2) How 'final' is the current SDK? It's not in beta anymore, but in the
newly released 2.01 SDK's I notice that they are built for/against 10.2 or
higher drivers.
[MCH] Those should still work with 10.1 as well. If not, we screwed something up. But, the base drivers (and more specifically CAL), get updates that improve stability, performance, or will enable certain features. 10.2 contained stability and performance improvements at the driver level, as will future releases.

Are there any more 'surprises' in store? When the beta 2.0
Stream SDK went final 2.0, suddenly OpenCL had some significant changes,
requiring a rebuild of applications in order to work with the newer
runtime.
[MCH] Besides the ICD and the stabilizing of calling convention under Windows (it took a little while to get that fixed in Khronos and not all vendors are shipping a Khronos approved ICD), what else changed that caused recompiles? Getting calling convention agreement went pretty fast (which you reported) and we tried to be responsive quickly. (Things like that we can push out much faster to developers directly, which we did). Ratification of the ICD and the platform stuff was a little late from Khronos and not all vendors have moved to the ratified ICD.

Not something I expect from beta -> final stage... that's
something that should only happen in alpha stage.
In other words... if I were to build and release something with today's
SDK, will it still work when end-user runtimes are released, or would I
need to rebuild again?
[MCH] It should continue to work since the ICD is now supposed to be the stabilization point, so there should be ABI stability there. The end-user runtime will be extracted from the developer releases, but the developer releases will likely remain ahead. However, it's possible, of course, that bugs may happen. We try to prevent this by running previously compiled OpenCL code against new releases to catch these issues. For ISVs that are gearing up for releasing apps, we try to get stability and performance test cases from them, if not full apps, for inclusion into QA so that things remain as stable as possible and to avoid performance regressions where possible. If we don't have the app, or at least test cases, it's difficult to make sure an app doesn't break.

(note also that he actually acknowledges my contribution to OpenCL by pointing out their calling convention problem and suggesting how to solve it. I'm such an anti-AMD a-hole, aren't I? Oh, and note the date, this was back in February. It's June by now... you think it's strange that my patience with AMD has run out on this issue? They haven't done anything about it in all these months!).

Of course you forgot to mention they are working with Bullet.

They aren't. You should search for some interviews with Erwin Coumans, the lead developer for Bullet. You'll be amazed at how much AMD 'works on' Bullet.

So exactly how is AMD hindering OpenCL? By being 4 months late?

By not offering a runtime to end-users, as I already explained.

Oh wait, they are hindering it by not adding OpenCL support to their drivers. But aren't you a developer? Don't you use the SDK to develop?

This isn't about me, it's about my potential userbase. I already explained that as well.
Getting a Fermi would just be a gesture of me voting with my wallet, giving off a sign to AMD that I do not agree with how they treat their customers and developers.

Intel on the other hand will just enable it later on. No hindering of development here.

Intel doesn't have any GPUs capable of acceleration in the first place. So Intel has nothing to do with this issue.
If I want things to run fast on Intel CPUs, I will just use C++ instead of OpenCL, as it's more efficient anyway.

Okay, that's enough quotewars for now. The rest is not even worth my time.
 
Last edited:

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
An SDK is not a runtime. I want a runtime. I asked AMD devrel for it. They gave me a bs answer about it being too big a download. Well, if that is the argument, then the SDK is REALLY a bad solution for end-users, as it's an even bigger download than a runtime.

I know it isn't a runtime. It was you who said that they only gave a runtime to developers, but never mind.

They aren't. You should search for some interviews with Erwin Coumans, the lead developer for Bullet. You'll be amazed at how much AMD 'works on' Bullet.
Enough to call themselves partners in official statements.

By not offering a runtime to end-users, as I already explained.
First, there are programs that already offer Stream acceleration, so it is possible to get programs out.

This isn't about me, it's about my potential userbase. I already explained that as well.
Getting a Fermi would just be a gesture of me voting with my wallet, giving off a sign to AMD that I do not agree with how they treat their customers and developers.

Second, is your program, tool, whatever, out yet?

Your user base is everyone with a 4000-series card or newer, as they support OpenCL and that has been proven.

Intel doesn't have any GPUs capable of acceleration in the first place. So Intel has nothing to do with this issue.
If I want things to run fast on Intel CPUs, I will just use C++ instead of OpenCL, as it's more efficient anyway.

So here you've answered your own question - AMD GPUs are capable of acceleration.

What is the doubt?

Okay, that's enough quotewars for now. The rest is not even worth my time.

I'll take that as you not having an answer.

EDIT:

(note also that he actually acknowledges my contribution to OpenCL by pointing out their calling convention problem and suggesting how to solve it. I'm such an anti-AMD a-hole, aren't I? Oh, and note the date, this was back in February. It's June by now... you think it's strange that my patience with AMD has run out on this issue? They haven't done anything about it in all these months!).

Ah!

So all this is because you are pissed with them!

Just a complaining customer.

I get it now mate.

You have my sympathy, and I'll leave you to the task of putting pressure on AMD while I go finish conquering India in Empire: Total War.

No hard feelings.

Cya around.
 
Last edited:

Scali

Banned
Dec 3, 2004
2,495
0
0
I know it isn't a runtime. It was you who said that they only gave a runtime to developers, but never mind.

Yes, the runtime is part of the SDK (but cannot be installed or distributed separately). Until recently you actually had to register as a developer before you were allowed access to the download.
And I consider an SDK aimed at developers only. Regular end-users have no business putting SDKs on their machine. I'm not going to take responsibility for that anyway.
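
To illustrate why that matters for deployment, here's a minimal sketch (my own, not from any shipping code) of the startup check an end-user application ends up doing: it asks the OpenCL ICD loader how many platforms are registered, and on a machine where the vendor never shipped a runtime it simply gets zero back (or an error), at which point all you can do is fall back to the CPU or show an error message - exactly the support-call scenario I described.

Code:
// Illustrative sketch: detect at startup whether any OpenCL runtime is
// actually installed. This assumes the Khronos ICD loader (OpenCL.dll /
// libOpenCL.so) is present; if no vendor runtime is registered with it,
// clGetPlatformIDs reports zero platforms or an error, and the app must
// fall back gracefully instead of failing on the user.
#include <CL/cl.h>
#include <cstdio>

int main()
{
    cl_uint numPlatforms = 0;
    cl_int  err = clGetPlatformIDs(0, nullptr, &numPlatforms);

    if (err != CL_SUCCESS || numPlatforms == 0)
    {
        std::fprintf(stderr, "No OpenCL runtime found - using CPU fallback.\n");
        return 1;
    }

    std::printf("Found %u OpenCL platform(s).\n", numPlatforms);
    return 0;
}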

Enough to call themselves partners in official statements.

Yea, so they gave him a videocard and some beta drivers, and an NDA so he couldn't tell anyone if they didn't work.
What they forgot though, is that he can freely talk about nVidia. So in interviews he says that he had a lot of help from nVidia, he uses some code from their SDK in one of his solvers, and he develops on nVidia hardware, because the OpenCL support is very good.
Read this for example: http://www.hitechlegion.com/our-news/1411-bullet-physics-ati-sdk-for-gpu-and-open-cl-part-3?start=1

First, there are programs that already offer Stream acceleration, so it is possible to get programs out.

Stream is not OpenCL.
AMD bundles the Stream runtime with their graphics driver. The OpenCL runtime is actually built on top of Stream.

Second, is your program, tool, whatever, out yet?

Development is on hold until further notice, because of these Cuda/OpenCL/DirectCompute issues.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
I said I was going off but I just hit refresh. :)

And that thing about you being a pissed customer isn't meant to be condescending (and I apologize if it seemed like it) - I just understand your situation.

Yes, the runtime is part of the SDK (but cannot be installed or distributed separately). Until recently you actually had to register as a developer before you were allowed access to the download.
And I consider an SDK aimed at developers only. Regular end-users have no business putting SDKs on their machine. I'm not going to take responsibility for that anyway.

I agree that it's not the best solution, but it's hardly something to put a project on hold for - the fact is that there are no (or very few) applications out there supporting OpenCL.

From AMD's POV it makes sense not to take the chance of breaking their drivers with a runtime that will probably be changed dozens of times before any application is out.


Yea, so they gave him a videocard and some beta drivers, and an NDA so he couldn't tell anyone if they didn't work.
What they forgot though, is that he can freely talk about nVidia. So in interviews he says that he had a lot of help from nVidia, he uses some code from their SDK in one of his solvers, and he develops on nVidia hardware, because the OpenCL support is very good.
Read this for example: http://www.hitechlegion.com/our-news/1411-bullet-physics-ati-sdk-for-gpu-and-open-cl-part-3?start=1

That is from last year, just after rumours of AMD being interested in helping with Bullet.

The announcement I linked is from March 2010.

And NVIDIA working with them is a good sign, is it not?


Development is on hold until further notice, because of these Cuda/OpenCL/DirectCompute issues.

You know very well that you/the team you are part of will keep programming for either OpenCL or DirectCompute. No way you can afford to lock yourself out of 50% of the discrete market unless someone drops loads of greens on you to make it worthwhile.
 
Last edited:

Scali

Banned
Dec 3, 2004
2,495
0
0
I agree that it's not the best solution, but it's hardly something to put a project on hold for - the fact is that there are no (or very few) applications out there supporting OpenCL.

Which probably means that most developers share my sympathies.

From AMD's POV it makes sense not to take the chance of breaking their drivers with a runtime that will probably be changed dozens of times before any application is out.

What the heck have they been doing all this time then, when they STILL can't make a decent runtime?

That is from last year.

The announcement I linked is from March 2010.

Announcement of what? I pull the latest sources from the Bullet repository occasionally, to see how OpenCL support is going... So far I haven't seen much.
AMD has 'announced' a whole lot over the years, and sometimes even showed demos of 'working' GPU physics... They even showed a cloth demo allegedly running on OpenCL-accelerated Havok. That was well over a year ago. Have you seen any OpenCL-accelerated Havok products?

And NVIDIA working with them is a good sign, is it not?

Of course; this was never about nVidia. It's AMD's doing that people now think nVidia doesn't support OpenCL, or only wants to push their proprietary Cuda stuff.
In reality nVidia doesn't care whether you use Cuda or OpenCL, as long as you support their hardware. nVidia has MUCH nicer devrel than AMD. They make great SDKs anyway, and there are tons of interesting papers and presentations on their developer site. nVidia is really committed to supporting its developers. The only other company I can compare it to is Microsoft; their DirectX SDK is also of exceptional value.

At any rate... AMD is the one creating buzz in the media about Bullet (most people probably never heard of it prior to AMD's involvement). So when you see the lead developer saying this:
“Bullet’s GPU acceleration via OpenCL will work with any compliant drivers, we use NVIDIA GeForce cards for our development and even use code from their OpenCL SDK, they are a great technology partner.”
I think that speaks volumes. He apparently values nVidia over AMD as a technology partner for OpenCL, despite the fact that technically Bullet is a competitor to their own PhysX (which is probably why nVidia never said anything in the media about Bullet and how they support it).

You know very well that you/the team you are part of will keep programming for either OpenCL or DirectCompute. No way you can afford to lock yourself out of 50% of the discrete market unless someone drops loads of greens on you to make it worthwhile.

Adobe can afford it, apparently. You'd be surprised how nVidia-centric certain markets are. Especially professional OpenGL users have long preferred nVidia. The hardware is of secondary importance to them. It's the driver stability and performance that is all important. I know various companies that have an official policy that all computers need to use nVidia GPUs. It has nothing to do with which GPU is actually better, or whether nVidia actually still has an advantage with professional OpenGL drivers today... They just stick with what they know.

Cuda's advantage over DirectCompute is that it works on Linux and OS X as well. And it's just a more advanced and powerful tool in general. Tough decision: do you want to lock out AMD, or the non-Windows market?
 
Last edited:

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
I found it interesting that Autodesk brought GPU rendering support to their software and did not use CUDA, OpenCL or DirectCompute. It shows there is more than one way to get things done. The software supports both ATI and Nvidia cards.

Guess the people who really want to use the GPU for work can get the work done, while the rest of the programmers stand around in a pissing match over APIs.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
I found it interesting that Autodesk brought GPU rendering support to their software and did not use CUDA, OpenCL or DirectCompute. It shows there is more than one way to get things done. The software supports both ATI and Nvidia cards.

Guess the people who really want to use the GPU for work can get the work done, while the rest of the programmers stand around in a pissing match over APIs.

Wouldn't they be utilizing OpenGL?
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Cuda's advantage over DirectCompute is that it works on Linux and OS X as well. And it's just a more advanced and powerful tool in general. Tough decision: do you want to lock out AMD, or the non-Windows market?

What you are saying sounds really interesting.

At this moment it would appear the non-Windows market is smaller.

But what happens if Apple and/or Linux gain ground somehow? Wouldn't this brighten CUDA's future in some respects, if the gains didn't come from the success of OpenCL?

I guess it really depends on what CUDA apps Apple is planning to enable by increasing its GPU-to-CPU ratio. If this happens, would it be reasonable to expect increased interest in OpenGL along with CUDA?
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Current GPGPU application available to users:

ATI & nVidia: Folding@Home, SETI@Home, a few of those.
nVidia only: Badaboom encoder

That's it.

GPGPU is huge, however, in military, research, and similar applications.
For example, geological teams across the world are transitioning their earthquake detectors from ~300-CPU server farms to ~30 nVidia GPUs (packed 4 per computer)... and account for nearly 1/4 of all of nVidia's GPGPU customers at the moment.
Why nVidia? ATI hardware is about 50% more powerful (in terms of FLOPS), but nVidia provides CUDA, which allows compiling native C code, Fortran code, and recently even C++ code for the GPU. With ATI you have to learn ATI's own unique programming language and rewrite your program for it, which is significantly more difficult than using CUDA.
 
Last edited:

Scali

Banned
Dec 3, 2004
2,495
0
0
At this moment it would appear the non-Windows market is smaller.

That's what you might think, based on the overall marketshare... but it depends very much on the application.
For example, although Apache exists for both Windows and Linux, Apache is primarily used on Linux systems.
Likewise, Photoshop used to be a typical Mac application.
 