"Why won't ATI Support Cuda and PhysX?"

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Regarding Havok....
Three, today on PCs physics almost always runs on the CPU, and we need to make sure that's an optimal solution first.
It may be a long time (if ever) before we see Havok on the GPU.
 

thilanliyan

Lifer
Jun 21, 2005
12,031
2,243
126
Originally posted by: Wreckage
Regarding Havok....
Three, today on PCs physics almost always runs on the CPU, and we need to make sure that's an optimal solution first.
It may be a long time (if ever) before we see Havok on the GPU.

That is correct though. Many more games use CPU physics than GPU physics, so games in general have to be optimized for CPU physics in the current climate.

Originally posted by: Keysplayr
What were the interesting points?

"ATI would also be required to license PhysX in order to hardware accelerate it, of course, but Nvidia maintains that the licensing terms are extremely reasonable?it would work out to less than pennies per GPU shipped."

It's not completely free, as I think some people believe it is, so in theory those fees could go up later on, to ATI's detriment business-wise.

"Two, they have demonstrated that they'll be very open and collaborative with us, working together with us to provide great solutions. It really is a case of a company acting very indepently from their parent company."

This is regarding Havok and I'd like to know if it's still the same situation today.

"Nvidia, he says, has not shown that they would be an open and truly collaborative partner when it comes to PhsyX. The same goes for CUDA, for that matter."

I wonder if there's anything specific he's talking about?
 

brblx

Diamond Member
Mar 23, 2009
5,499
2
0
here's a question- why does physics processing even need to be on the gpu? i mean, i know they're getting ridiculously powerful, but so are CPUs, and it's become mainstream to have at least two cores. once four is the norm, what's the big deal about doing physics processing on one of the four? isn't that how the consoles are doing it, albeit with less powerful parts?

a few years ago it was all about having the gpu that could throw the most triangles (and associated texturing and processing, of course). it doesn't really seem like we've 'peaked' with graphics, and i don't see multi-gpu setups becoming mainstream anytime soon, so why build a tech around putting extra tax on the gpu?
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: Keysplayr
What were the interesting points?

"Nvidia claims they would be happy for ATI to adopt PhysX support on Radeons."

"Nvidia tells us it would be thrilled for ATI to develop a CUDA driver for their GPUs. "

My favorite....

"Open industry standards are extremely important to AMD as a company"

Havok and DirectX are not open standards. They are proprietary and owned by Intel and Microsoft. It sounds like they got caught with their pants down with regards to GPGPU and physics and are trying to downplay the situation as best they can.
 

thilanliyan

Lifer
Jun 21, 2005
12,031
2,243
126
Originally posted by: brblx
here's a question- why does physics processing even need to be on the gpu?

I think it's because certain types of physics calculations are massively parallel, so they can run much faster on a GPU than on a CPU.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Originally posted by: thilan29


Originally posted by: Keysplayr
What were the interesting points?

"ATI would also be required to license PhysX in order to hardware accelerate it, of course, but Nvidia maintains that the licensing terms are extremely reasonable?it would work out to less than pennies per GPU shipped."

It's not completely free, as I think some people believe it is, so in theory those fees could go up later on, to ATI's detriment business-wise.

Good point. What if AMD were to adopt PhysX, it became more mainstream, then became 'must-have', and at that point Nvidia changed their fees? AMD obviously looked at it and decided that PhysX isn't the way to go; time will tell if that turns out to be a wise decision or not.
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
Originally posted by: Wreckage


"Nvidia claims they would be happy for ATI to adopt PhysX support on Radeons."

Of course they would, it would mean more money for Nvidia.

Originally posted by: Wreckage

"Nvidia tells us it would be thrilled for ATI to develop a CUDA driver for their GPUs. "

See above.

Originally posted by: Wreckage

"Open industry standards are extremely important to AMD as a company"

Havok and DirectX are not open standards. They are proprietary and owned by Intel and Microsoft. It sounds like they got caught with their pants down with regards to GPGPU and physics and are trying to downplay the situation as best they can.

Havok and DirectX are not owned by companies that compete directly against ATI like PhysX is. Until Intel's Larrabee proves to be a worthy card, they aren't ATI's competition (those integrated GPUs don't count). Besides this, DirectX and Havok are much, much, much more widespread and accepted than PhysX is.
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
If nvidia offered a perpetual royalty rate that couldn't be increased later, this might not be sticking your arm in the woodchipper for ATI.

Know why the xbox 360 has an ATI GPU? MS didn't negotiate terms carefully enough on the xbox1 and nvidia did unprintable things to MS in response, just because they could.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Originally posted by: DaveSimmons
If nvidia offered a perpetual royalty rate that couldn't be increased later, this might not be sticking your arm in the woodchipper for ATI.

Know why the xbox 360 has an ATI GPU? MS didn't negotiate terms carefully enough on the xbox1 and nvidia did unprintable things to MS in response, just because they could.

Microsoft was buying the chips from Nvidia, and Nvidia refused to renegotiate prices. $40 a chip may have been reasonable in 2001, but it really made it hard to cut prices in 2003.

Anyhow, there's no reason for ATI to support PhysX and CUDA. If they did, they'd become de facto standards... optimized around nvidia hardware.
At least OpenCL (and DirectX compute) puts them on an even playing field, where ATI can write their own drivers around a generic, higher-level abstraction than CUDA (which is aimed very closely at nvidia hardware).
And Havok will be on OpenCL or DirectX compute shaders. PhysX will have to be ported if nvidia wants it to compete.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Originally posted by: SlowSpyder
Originally posted by: thilan29


Originally posted by: Keysplayr
What were the interesting points?

"ATI would also be required to license PhysX in order to hardware accelerate it, of course, but Nvidia maintains that the licensing terms are extremely reasonable?it would work out to less than pennies per GPU shipped."

It's not completely free, as I think some people believe it is, so in theory those fees could go up later on, to ATI's detriment business-wise.

Good point. What if AMD were to adopt PhysX, it became more mainstream, then became 'must-have', and at that point Nvidia changed their fees? AMD obviously looked at it and decided that PhysX isn't the way to go; time will tell if that turns out to be a wise decision or not.

I am sure if AMD wanted, they could negotiate those fees out or push the fee schedule so far into the future that it would be pointless for Nvidia to mess around. Right now Nvidia wants to gain market and mindshare with PhysX. The sooner they kill Intel's Havok the better. They don't want to mess around trying to raise fees and drive off AMD. Bigger fish to fry imo.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: dguy6789


Havok and DirectX are not owned by companies that compete directly against ATI like PhysX is.
What does this have to do with their statement about supporting "open standards"?

Until Intel's Larrabee proves to be a worthy card, they aren't ATI's competition (those integrated GPUs don't count). Besides this, DirectX and Havok are much, much, much more widespread and accepted than PhysX is.

Intel is AMD's largest competitor. By far.

Havok has yet to be used in any game for GPU physics. So it's not "much, much more" anything, really.
 

ronnn

Diamond Member
May 22, 2003
3,918
0
71
Not totally sure, but I thought open standards did not have a royalty?
 

ShawnD1

Lifer
May 24, 2003
15,987
2
81
Originally posted by: Wreckage
Originally posted by: dguy6789
Havok and DirectX are not owned by companies that compete directly against ATI like PhysX is.
What does this have to do with their statement about supporting "open standards"?

Havok GPU is open-ish because it runs on OpenCL. Of course, neither Intel nor Nvidia controls the OpenCL standard, so AMD is free to support Havok as long as they support OpenCL.

http://hothardware.com/News/AM...ized-Havok-Middleware/
Havok will enable game developers to offer improved performance and interactivity across a broad range of OpenCL capable PCs. AMD has recently introduced optimized platform technologies, such as 'Dragon' desktop platform technology, which balance performance between the CPU and GPU with ATI Stream technology to deliver outstanding value.

I'm not sure about this but I think the difference between Havok and PhysX is the kind of licensing. Havok may or may not be free on the hardware side (AMD can support it at no cost), but it's definitely not free on the software side. Game developers need to pay if they want to use Havok. PhysX is the exact opposite. The PhysX SDK is free to use, but hardware support for PhysX is not free. Is this correct?
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
Originally posted by: ShawnD1
Originally posted by: Wreckage
Originally posted by: dguy6789
Havok and DirectX are not owned by companies that compete directly against ATI like PhysX is.
What does this have to do with their statement about supporting "open standards"?

Havok GPU is open-ish because it runs on OpenCL. Of course, neither Intel nor Nvidia controls the OpenCL standard, so AMD is free to support Havok as long as they support OpenCL.


Not even close, buddy. That's no different than saying that Unreal Engine 3 is an open platform because it runs on DirectX.

I'd have used OpenGL as an example, but I can't think of any modern engines that actually support it. ;)
 

ShawnD1

Lifer
May 24, 2003
15,987
2
81
Originally posted by: Wreckage
I think ATI cards just don't have the power to run PhysX.

It's only a matter of time before you're banned for trolling. Crysis Warhead works just fine on ATI hardware when "enthusiast shaders" are enabled. Hint: shaders are those programmable things used for GPGPU.


Originally posted by: aka1nas
Not even close, buddy. That's no different than saying that Unreal Engine 3 is an open platform because it runs on DirectX.
Sorry. What I mean is Havok is (theoretically) hardware neutral because it uses an open API. Havok is closed, but the API it uses is open. Using your Unreal Engine 3 example, I could say UE3 is hardware neutral because it uses DirectX. AMD and Nvidia don't need special drivers to run UE3; all they need to support is DirectX and the rest just works itself out.

Let's try an analogy. Havok is similar to something like the Quake engine. You don't say you need a Quake driver to play Quake. You need an OpenGL driver because Quake uses OpenGL. With Havok, you don't say you need a Havok driver. You need an OpenCL driver because Havok uses OpenCL. I guess another similarity would be that you need to pay Intel if you want to use Havok in your game, just as you would pay id Software if you wanted to use the Quake engine for your game.
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
I'd agree, except that:

1) OpenCL-based Havok doesn't actually exist yet.

2) AMD still doesn't have OpenCL drivers out.

3) There aren't any games announced that will be using GPU-accelerated Havok. Old games aren't going to be magically upgraded from CPU-based Havok.

I'm sure number three will be sorted out once the first two are no longer an issue, but after all the shell games AMD/ATI has been playing in this area, I'm not holding my breath any longer.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: brblx
here's a question- why does physics processing even need to be on the gpu? i mean, i know they're getting ridiculously powerful, but so are CPU's, and it's become mainstream to have at least two cores.

In short:
Average high-end GPU: 1+ TFLOPS.
Average high-end quadcore CPU: ~80 GFLOPS.

In addition, GPU performance has scaled much faster than CPU performance over the past decade, and there are no signs of this trend slowing down. So the gap will only get larger with time.

Physics just scales a lot better with a GPU than with a CPU.
For example, if you have a physics load of 40 GFLOPS, that would cut the framerate roughly in half on a high-end quadcore, because you're spending half your total processing power on physics.
On a 1 TFLOP GPU, that same 40 GFLOPS is only 4% of your total processing power.

We see this with Cryostasis for example. Turning on all the physics drops you into the single-digit framerates on the fastest of quadcore CPUs, while a GPU can just handle the physics 'on the side' without much of a problem.
You can try to optimize CPU-physics until you're blue in the face, but you'll never make up for the huge advantage in processing power that a GPU has, which will always make it the better choice by far. It's just orders of magnitude faster.
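
As a quick back-of-the-envelope check on those numbers (my own arithmetic, assuming the processor would otherwise be fully busy on rendering):

```latex
% Physics load as a fraction of total throughput
\frac{40\ \text{GFLOPS}}{80\ \text{GFLOPS}} = 50\%
  \quad\Rightarrow\quad \text{CPU: frame time roughly doubles, framerate roughly halves}

\frac{40\ \text{GFLOPS}}{1000\ \text{GFLOPS}} = 4\%
  \quad\Rightarrow\quad \text{GPU: framerate drops only a few percent}
```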
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: ShawnD1
It's only a matter of time before you're banned for trolling. Crysis Warhead works just fine on ATI hardware when "enthusiast shaders" are enabled. Hint: shaders are those programmable things used for GPGPU.

He does have a bit of a point, to be honest.
Graphics and GPGPU aren't the same thing. nVidia's G80 pretty much rewrote the book on GPGPU by adding a large shared cache to its shader processors.
This has absolutely no use for graphics, because D3D and OpenGL are designed in a way that each vertex and each pixel is completely independent by definition, and there is no sharing of any data between shaders, ever.

However, when doing GPGPU tasks, you can use the shared memory to have multiple threads communicate with each other efficiently.
Prior to the 4000-series, ATi GPUs had no shared memory at all. They added it in the 4000-series, but the size is rather limited (it boils down to about 128 bytes per thread, compared to nVidia's 512 bytes), as is the bandwidth (about 544 GB/s on the RV790 versus about 1,417 GB/s on the GT200b).

Then I believe there is another limitation in ATi's design... namely that only one thread in every block can write to the shared memory, while the others have read-only access.

All this combined means that ATi cards indeed have some limitations in GPGPU compared to nVidia. This is also apparent in Folding@home for example.
Read this thread for example:
http://foldingforum.org/viewto...p?f=51&t=10442&start=0
It includes comments from people like mhouston, who works for AMD on the Folding@home client. Basically they're saying that they calculate certain values multiple times, because on ATi hardware that is faster than using the shared memory (LDS - Local Data Storage).
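
To make the shared-memory point concrete, here's a minimal CUDA sketch (my own illustration, not from the linked thread) of the kind of inter-thread communication being described: all 256 threads in a block write partial results into __shared__ memory and read each other's values to cooperatively sum an array. On hardware where shared memory is small or mostly read-only, the workaround is exactly what the Folding@home developers describe: recompute values per thread instead of sharing them.

```cuda
#include <cstdio>

// Each block cooperatively sums 256 elements through shared memory.
// Without fast read/write shared memory, every thread would have to
// recompute (or re-fetch) the partial sums on its own.
__global__ void blockSum(const float* in, float* out)
{
    __shared__ float partial[256];      // fast on-chip, per-block storage

    unsigned int tid = threadIdx.x;
    unsigned int idx = blockIdx.x * blockDim.x + tid;

    partial[tid] = in[idx];             // every thread writes its element
    __syncthreads();                    // make all writes visible to the block

    // Tree reduction: threads read values written by *other* threads.
    for (unsigned int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            partial[tid] += partial[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        out[blockIdx.x] = partial[0];   // one result per block
}

int main()
{
    const int N = 256;
    float hIn[N], hOut;
    for (int i = 0; i < N; ++i) hIn[i] = 1.0f;   // sum should be 256

    float *dIn, *dOut;
    cudaMalloc(&dIn, N * sizeof(float));
    cudaMalloc(&dOut, sizeof(float));
    cudaMemcpy(dIn, hIn, N * sizeof(float), cudaMemcpyHostToDevice);

    blockSum<<<1, N>>>(dIn, dOut);
    cudaMemcpy(&hOut, dOut, sizeof(float), cudaMemcpyDeviceToHost);

    printf("block sum = %f\n", hOut);            // expect 256.0
    cudaFree(dIn);
    cudaFree(dOut);
    return 0;
}
```

The __syncthreads() barriers are what make the sharing safe; if most threads could only read the shared array, the reduction step would be impossible and you'd fall back to redundant computation.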
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Wreckage
I think ATI cards just don't have the power to run PhysX (opinion)

ShawnD1
It's only a matter of time before you're banned for trolling. (attack)

DaveSimmons
Now that's a silly troll. (attack)

SlowSpyder
Troll. (attack)

Sorry guys, but out of these four posts, who do you think stands a better chance of getting a vacation? You're no better than you believe Wreckage is if you pull this crap. I'll kindly ask you to knock it off.

@Scali: Nice posts there. They explain a lot about the differences between ATI and Nvidia hardware and their respective GPGPU performance.

So you're basically saying, for GPGPU purposes, a 4800 series GPU would only have 160 sp's that could read and write to that 128 bytes of cache each, while Nvidia's GT200 series has 192/216/240 sp's that could all read and write to 512 bytes of cache each.

So if ATI doubled their shaders to, say, 1600 in the "R870" and increased the cache size to 512 bytes per block, they would be much more competitive, having 320 read/write sp's.

Am I getting this right?