"Inevitable Bleak Outcome for nVidia's Cuda + Physx Strategy"


SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Originally posted by: Scali
Originally posted by: SirPauly
You're comparing PhysX's eye-candy to FSAA? Sure, it's nice to have, but it really doesn't matter in the end? Yet FSAA is probably one of the most important considerations when deciding on a GPU. Am I reading you correctly or not?

I don't get that view either.
When have new graphics cards EVER changed gameplay?
I mean, take Crysis and remove all the fancy graphics, and all you have left is something like Quake. All the super-great graphics in Crysis don't have ANY effect on gameplay at all.
Essentially nearly all FPS games that came out in the past 15 years are little more than Quake with more eye-candy.
Nothing has changed gameplay. Why would physics suddenly have to change gameplay before it is worthwhile?

Has AA done anything for gameplay?
Has bumpmapping done anything for gameplay?
Has shadowmapping done anything for gameplay?
Has HDR done anything for gameplay?
Etc...

I think this is the most hypocritical stance you could possibly take, unless you are still playing Quake with software rendering, and didn't bother to buy a new videocard every few years just to get more pretty graphics that didn't do anything for gameplay.

Actually, the PhysX-to-AA analogy is sound to me, just not in the context of "it really doesn't matter in the end." For me to have more AA quality there is a hit -- the same can be said for PhysX. AA and PhysX are not just about benchmarks and winning, but about immersion and the visual experience: feeling like you're a part of the game. The bonus with maturity is eventually game-changing content that may redefine gaming and raise the bar of interaction. I'll take the visuals, but the upside is much more than this.



 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Lonyo
Why in hell would anyone care about NV PhysX, which only works on NV cards, when they could look to the big daddy Intel, who have Havok?
PhysX doesn't need to be open so ATI can survive, it needs to be open to all so IT can survive.

I think the obvious answer here is:
It doesn't need to be open until Intel actually becomes a threat.
Currently Intel is not a threat, so there's no reason to make PhysX open.
There's a good chance that nVidia will release OpenCL support for PhysX when the time is right.
The time just isn't right yet. Firstly, there IS no OpenCL support out there yet. And secondly, there's only ATi, which isn't a big enough threat for nVidia at this point.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: taltamir
When have new graphics cards EVER changed gameplay?
First-order physics calculations, as could potentially be done in PhysX, significantly alter gameplay. However, there is not a single game in existence, nor any planned, which uses first-order PhysX, because such a game would ONLY be playable on nVidia DX10 GPUs.

Wrong. UnrealEngine 3 uses PhysX for everything. If you enable hardware acceleration, all physics run on the GPU or PPU. Some games based on the UnrealEngine will also add extra eye-candy in that case.
But that doesn't take away from the fact that all game physics will run on GPU or PPU.

And they are still playable because if you don't have the 'eyecandy', the game physics are light enough to run on the CPU, still using the PhysX API.
There is no 'other physics API' in the UnrealEngine, and I doubt that any other developers would be crazy enough to develop their own physics API for that, when PhysX does the job just fine. It defeats the point of using a physics API in the first place.

Now aside from that, you are not getting my point.
My point is: Why do people suddenly demand MORE than just better graphics, when new videocards, new features and new APIs have NEVER added anything other than graphics before?
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: SirPauly
Actually, the PhysX-to-AA analogy is sound to me, just not in the context of "it really doesn't matter in the end." For me to have more AA quality there is a hit -- the same can be said for PhysX. AA and PhysX are not just about benchmarks and winning, but about immersion and the visual experience: feeling like you're a part of the game. The bonus with maturity is eventually game-changing content that may redefine gaming and raise the bar of interaction. I'll take the visuals, but the upside is much more than this.

I agree with you. I just don't see why others are expecting MORE than just better graphics/immersion when NOTHING that was EVER added to a videocard before has ever done ANYTHING AT ALL for gameplay. It's as hypocritical as you can be.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Originally posted by: munky
Originally posted by: Keysplayr
It's entirely weird. Goes against a gamer's soul if you ask me. New technology? "Give it to me NOW" is what I'd expect to hear from every single solitary enthusiast gamer.

Not happening here. But, they won't have a choice soon. ATI is still continuing with their same architecture and throwing tons of shaders at the problem. But they don't understand (or maybe they do, and they're working on a new arch for 3 years down the road, HOPEFULLY) that the architecture IS the problem. Even throwing double the shaders they have now into a core will only net them 320 shaders if Vec5 isn't properly coded for. And it won't be if it hasn't been by now.

Not true. Physics isn't exactly a scalar science, there's a lot of vector math involved. Maybe not vec5, but vec2 and vec3 definitely. If you have properly optimized code, you can combine these instructions into AMD's vec5 units, since they are superscalar. It's not like AMD needs to adopt NV's scalar architecture to successfully implement gpu-accelerated physics, they just need to write the appropriate software to support the functionality.

Exactly, and we've been hearing this since the 2900 series launch. Nobody cared to code for it after all this time, and nobody will care to code for it for the life of the 5xxx series. Different boat, same paddle.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Originally posted by: SickBeast
Originally posted by: munky
Originally posted by: Keysplayr
It's entirely weird. Goes against a gamer's soul if you ask me. New technology? "Give it to me NOW" is what I'd expect to hear from every single solitary enthusiast gamer.

Not happening here. But, they won't have a choice soon. ATI is still continuing with their same architecture and throwing tons of shaders at the problem. But they don't understand (or maybe they do, and they're working on a new arch for 3 years down the road, HOPEFULLY) that the architecture IS the problem. Even throwing double the shaders they have now into a core will only net them 320 shaders if Vec5 isn't properly coded for. And it won't be if it hasn't been by now.

Not true. Physics isn't exactly a scalar science, there's a lot of vector math involved. Maybe not vec5, but vec2 and vec3 definitely. If you have properly optimized code, you can combine these instructions into AMD's vec5 units, since they are superscalar. It's not like AMD needs to adopt NV's scalar architecture to successfully implement gpu-accelerated physics, they just need to write the appropriate software to support the functionality.

Gamers love new technology, but smart people hate proprietary technology.

PhysX is proprietary. The NV people seem to be saying that they offered it to AMD for free (which does not make sense). The AMD people are saying that NV won't give them PhysX. The truth probably lies in the middle.

I remember when NV came out with something called "C for graphics" when their FX5800 cards could not run DX9. In a sense, this is a similar situation. The current NV cards are apparently to blame for a castrated DX10, and now they throw this proprietary PhysX at us.

You'd think that NV would learn from their mistakes by now. They need to work with MS and others to develop OPEN standards.

PhysX is open to anyone who wants to use it. And please show me where there's an official statement from ATI/AMD that says Nvidia won't give them PhysX??? You make this stuff up at your leisure.

Everyone learns from their mistakes, well, almost everyone. ATI isn't moving from their Vec5 architecture. They just keep on trucking with an arch that nobody wants to code for, because ATI will not, or cannot, provide tools that make coding for it anywhere near as easy as coding for CUDA.
 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
Originally posted by: munky
Originally posted by: Wreckage
Originally posted by: SickBeast
Wreckage I would love to see you run your beloved PhysX or your favorite game without a copy of Windows.

Your argument that DX is proprietary is one of the biggest straw men I have ever seen. :Q

No problem. Sacred 2 (which has PhysX) is available on the PS3.

Also feel free to download the PhysX SDK for Linux
http://developer.nvidia.com/ob...ysx_downloads.html#SDK

Nice try though.

DirectX is proprietary. I love how you are trying to spin it any other way. Hilarious!

What's hilarious is you bringing up PS3 when it has nothing to do with Cuda or GPU physics.

He could use Mirror's Edge too - hey, it has PhysX and is available on Xbox360 and PS3 too! ;) Pure nonsense. But he will never admit it - he's just plainly wrong.

Your post below is the essence of the PhysX multi-platform situation:
Originally posted by: munky
Other, more mature physics APIs like Havok are also cross platform. But the NV supporters will immediately dismiss it because it doesn't offer gpu acceleration on a PC. The irony is that none of the other platforms use gpu for physics, it's all done on the cpu. If you put a fast enough cpu in a PC, you could get the same physics effects as on modern gaming consoles, thereby eliminating the need for NV's Cuda+physx implementation. But of course, the Nv fans will never mention that fact, even while continuously bringing up PhysX platforms to which NV contributed absolutely nothing.

It's only the Windows version of PhysX that has the possibility to be GPU-accelerated. All the other solutions are CPU PhysX, which can also run on a PC without the need of an nVidia CUDA-enabled card.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: munky
The irony is that none of the other platforms use gpu for physics, it's all done on the cpu.

Well, the PS3 is a special case, with its Cell processor. It's not a CPU as we know it. It's more similar to a GPU or PPU with many small stream processors.

Originally posted by: munky
If you put a fast enough cpu in a PC, you could get the same physics effects as on modern gaming consoles, thereby eliminating the need for NV's Cuda+physx implementation.

The problem is that regular x86 CPUs aren't that fast at physics processing. A GPU or PPU can be orders of magnitude faster at processing physics than even the fastest CPUs.
That's why CPU physics are a dead end eventually. We have better solutions.
I have little doubt that the next-gen consoles will also use the GPU for physics, not the CPU. Since the current line of consoles still has DX9-level hardware, they don't have that option yet. But DX10 or better would.
 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
Originally posted by: Scali
I have little doubt that the next-gen consoles will also use the GPU for physics, not the CPU. Since the current line of consoles still has DX9-level hardware, they don't have that option yet. But DX10 or better would.

Since GPU-PhysX can only run on nVidia GPUs and the newest rumors say:

Next-gen Xbox - ATi
Next-gen Wii - ATi
Next-gen PS3 - Intel

Looks like it won't be PhysX ;)

Since it's still 3-4 years to those consoles, perhaps they are waiting for GPU-Havok to show up? Since Intel owns Havok they can probably make it run on their PS3 GPU without much trouble. And GPU-Havok will run on Radeon cards (once it's here, of course). And since the console-market is the current targeted gaming platform (/flame suit on :p), all 3 next-gen consoles supporting GPU-Havok can really make it fly.

3-4 years is a long time still, many things can happen.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Qbah
Since GPU-PhysX can only run on nVidia GPUs and the newest rumors say:

Next-gen Xbox - ATi
Next-gen Wii - ATi
Next-gen PS3 - Intel

Looks like it won't be PhysX ;)

Who cares? The fact that physics acceleration will become standard in games is FAR more important than whether it's PhysX, Havok or what other API.

Besides, my point was that his suggestion of using a fast CPU in a PC was invalid.
You DO need a GPU or some other specialized processor if you want better physics.
CPUs can't replace GPU-accelerated PhysX. Only other GPU-accelerated physics solutions could.

Aside from that, as I already said, I consider PhysX on the PS3 with its Cell processor to be more GPU-accelerated PhysX than CPU-based PhysX.
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
www.neftastic.com
Originally posted by: Scali
Originally posted by: Qbah
Since GPU-PhysX can only run on nVidia GPUs and the newest rumors say:

Next-gen Xbox - ATi
Next-gen Wii - ATi
Next-gen PS3 - Intel

Looks like it won't be PhysX ;)

Who cares? The fact that physics acceleration will become standard in games is FAR more important than whether it's PhysX, Havok or what other API.

Besides, my point was that his suggestion of using a fast CPU in a PC was invalid.
You DO need a GPU or some other specialized processor if you want better physics.
CPUs can't replace GPU-accelerated PhysX. Only other GPU-accelerated physics solutions could.

Aside from that, as I already said, I consider PhysX on the PS3 with its Cell processor to be more GPU-accelerated PhysX than CPU-based PhysX.

Wow... that's incredibly short sighted. It speaks volumes to your Nvidia support as well.

In the same post you manage to state PhysX is irrelevant, yet physics is a necessity. I think the people that care are actually going to be... oh... the entire console gaming market as well as developers for said platform, which incidentally FAR outweigh the PC gaming market. Granted, the above list is rumor, but imagine how much less relevant Nvidia becomes if their name isn't shipped somehow on a console.

What's the solution here? OpenCL. Port PhysX to OpenCL, and your name becomes marketable despite whatever hardware platform is in the box. What's the relevance to the thread... CUDA becomes irrelevant in the face of OpenCL on 80% of the hardware delivered that is used in physics-capable hardware.

And why exactly can't a CPU be used for "better physics"? The folks developing the Ghostbusters game seem to be very happy with the physics engine they've developed for the CPU, and from the demos I've seen, their use of physics is far better and more noticeable than most 'hardware' physics titles I have seen outside of tech demos.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: SunnyD
Wow... that's incredibly short sighted. It speaks volumes to your Nvidia support as well.

Lol, what nVidia support? I specifically said it didn't matter if PhysX or another API would win :)
Make up your mind.

Originally posted by: SunnyD
In the same post you manage to state PhysX is irrelevant, yet physics is a necessity.

Careful, your reading comprehension is slipping again :)
I said two things:
1) What I think matters is that physics acceleration is adopted by future games, in whatever shape or form that may be.
2) *Currently* PhysX is the only API that allows for GPU acceleration, CPU-based solutions are no alternative.

Originally posted by: SunnyD
I think the people that care are actually going to be... oh... the entire console gaming market as well as developers for said platform, which incidentally FAR outweigh the PC gaming market. Granted, the above list is rumor, but imagine how much less relevant Nvidia becomes if their name isn't shipped somehow on a console.

I just said I don't care. I don't care whether it's ATi, nVidia or Intel powering future consoles. As long as we get accelerated physics from whoever it will be.

Originally posted by: SunnyD
What's the solution here? OpenCL. Port PhysX to OpenCL, and your name becomes marketable despite whatever hardware platform is in the box.

There we go again. There IS no OpenCL. What do you know? Maybe nVidia already HAS ported PhysX to OpenCL. It just doesn't matter, because nobody can USE OpenCL yet.
Why do people keep arguing as if OpenCL is the solution, when it doesn't even exist yet?

Originally posted by: SunnyD
What's the relevance to the thread... CUDA becomes irrelevant in the face of OpenCL on 80% of the hardware delivered that is used in physics-capable hardware.

Who cares, other than perhaps nVidia?
Aside from that, Cuda/PhysX have already proven their relevance.
It's just like 3DFX and Glide. They may not exist anymore, but they are what got the ball rolling in terms of 3d acceleration, and their legacy lives on in the videocards and games that we buy today. They were historically significant.

Originally posted by: SunnyD
And why exactly can't a CPU be used for "better physics"?

Because a CPU only has very limited processing power. GPUs and PPUs have FAR more processing power.
It's the same argument as software rendering vs a 3d accelerator. You *could* still render things on the CPU in software. It's just that a CPU is way slower than a GPU, so you can't use the same amount of detail and visual quality.
Aside from that, CPUs haven't really scaled much in general in the past few years. GPUs have increased in processing power and capabilities at a far greater rate.
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
www.neftastic.com
Originally posted by: Scali
Originally posted by: SunnyD
What's the solution here? OpenCL. Port PhysX to OpenCL, and your name becomes marketable despite whatever hardware platform is in the box.

There we go again. There IS no OpenCL. What do you know? Maybe nVidia already HAS ported PhysX to OpenCL. It just doesn't matter, because nobody can USE OpenCL yet.
Why do people keep arguing as if OpenCL is the solution, when it doesn't even exist yet?

Looks like it exists just fine to me. I would think with all your "experience" comparing OpenCL to CUDA you might not have made that mistake. While public APIs aren't readily available, that by no means says OpenCL doesn't exist.

Originally posted by: SunnyD
What's the relevance to the thread... CUDA becomes irrelevant in the face of OpenCL on 80% of the hardware delivered that is used in physics-capable hardware.

Originally posted by: Scali
Who cares, other than perhaps nVidia?
Aside from that, Cuda/PhysX have already proven their relevance.
It's just like 3DFX and Glide. They may not exist anymore, but they are what got the ball rolling in terms of 3d acceleration, and their legacy lives on in the videocards and games that we buy today. They were historically significant.

Did you bother to read the topic title before posting in this thread? Here, I'll point it out for you, "Inevitable bleak outcome for Nvidia's Cuda + PhysX strategy".

As I said, if 80% of the hardware out there is non-Nvidia, then CUDA + PhysX become irrelevant. That's the entire point of the thread for the OP, not whether accelerated physics will continue to exist if CUDA+PhysX goes away. You're the one that brought the whole "CUDA this, that and everything else" into the thread. You also brought the notion that OpenCL will only perform up to snuff on Nvidia hardware into this thread. My point here, which is the OP's point, is that CUDA means squat, as does PhysX, unless Nvidia takes the step to move beyond their platform.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: Keysplayr
Originally posted by: SickBeast
Originally posted by: munky
Originally posted by: Keysplayr
It's entirely weird. Goes against a gamer's soul if you ask me. New technology? "Give it to me NOW" is what I'd expect to hear from every single solitary enthusiast gamer.

Not happening here. But, they won't have a choice soon. ATI is still continuing with their same architecture and throwing tons of shaders at the problem. But they don't understand (or maybe they do, and they're working on a new arch for 3 years down the road, HOPEFULLY) that the architecture IS the problem. Even throwing double the shaders they have now into a core will only net them 320 shaders if Vec5 isn't properly coded for. And it won't be if it hasn't been by now.

Not true. Physics isn't exactly a scalar science, there's a lot of vector math involved. Maybe not vec5, but vec2 and vec3 definitely. If you have properly optimized code, you can combine these instructions into AMD's vec5 units, since they are superscalar. It's not like AMD needs to adopt NV's scalar architecture to successfully implement gpu-accelerated physics, they just need to write the appropriate software to support the functionality.

Gamers love new technology, but smart people hate proprietary technology.

PhysX is proprietary. The NV people seem to be saying that they offered it to AMD for free (which does not make sense). The AMD people are saying that NV won't give them PhysX. The truth probably lies in the middle.

I remember when NV came out with something called "C for graphics" when their FX5800 cards could not run DX9. In a sense, this is a similar situation. The current NV cards are apparently to blame for a castrated DX10, and now they throw this proprietary PhysX at us.

You'd think that NV would learn from their mistakes by now. They need to work with MS and others to develop OPEN standards.

PhysX is open to anyone who wants to use it. And please show me where there's an official statement from ATI/AMD that says Nvidia won't give them PhysX??? You make this stuff up at your leisure.

Everyone learns from their mistakes, well, almost everyone. ATI isn't moving from their Vec5 architecture. They just keep on trucking with an arch that nobody wants to code for, because ATI will not, or cannot, provide tools that make coding for it anywhere near as easy as coding for CUDA.

I've just read it on a few forums; nothing official.

That said, why would NV pay all that $$$ to purchase Ageia, only to give the technology away to AMD for free? :confused:

AMD's Vec5 architecture is obviously more efficient than NV's shaders in terms of gaming performance per transistor. What NV is doing with CUDA is a desperate move to remain relevant in the event that the GPU becomes less prominent. AMD already has CPUs that will perform general processing, so they probably don't care as much about the whole GPGPU thing.

Instead of developing tools like CUDA, they have focused on creating good GPUs. I think that the next generation will be very telling, and could make or break either company IMO.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: SunnyD
Looks like it exists just fine to me. I would think with all your "experience" comparing OpenCL to CUDA you might not have made that mistake. While public APIs aren't readily available, that by no means says OpenCL doesn't exist.

OpenCL only exists on paper.
You can't release OpenCL software on the market because nobody can run it yet.
That's my point.
Even if nVidia had an OpenCL-based PhysX API, there would be no point in releasing it today, because nobody could run it.
That's where you and many others go wrong. Cuda works, OpenCL doesn't.

Originally posted by: SunnyD
Did you bother to read the topic title before posting in this thread? Here, I'll point it out for you, "Inevitable bleak outcome for Nvidia's Cuda + PhysX strategy".

Yea well, that blogger made two mistakes:
1) OpenCL is not a replacement for Cuda, but will actually be a part of Cuda (at least on nVidia hardware). Therefore Cuda doesn't disappear.
2) PhysX doesn't have to disappear either, because it is not implicitly tied to Cuda or nVidia hardware in general.

So because of 1) it is obvious that Cuda (or a successor) will exist as long as nVidia GPGPU hardware exists.
And 2) simply depends on what nVidia want to do. If they want to support OpenCL, they can, so PhysX doesn't need to die.
nVidia hasn't made any official statements yet about whether or not they will add OpenCL support to PhysX. We will just have to wait and see. As long as Havok isn't forcing their hand, they have no reason to support OpenCL anyway. That's just simple business logic.

Originally posted by: SunnyD
As I said, if 80% of the hardware out there is non-Nvidia, then CUDA + PhysX become irrelevant.

But that '80%' assumption is just some arbitrary rule you just threw into the discussion, because in 3-4 years there MAY be a possibility that there are no nVidia-based consoles.
That has nothing to do with the original discussion.

Aside from that, the argument is flawed anyway.
Today we don't have any consoles supporting Cuda either. Yet today Cuda and PhysX exist just fine. Even if there won't be any nVidia-based consoles in the future, that won't affect Cuda in any way, because current nVidia-based consoles don't support Cuda either. So reality has already proven you wrong.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: SickBeast
That said, why would NV pay all that $$$ to purchase Ageia, only to give the technology away to AMD for free? :confused:

That was already answered:
If nVidia is convinced that ATi's hardware will not perform anywhere near as well with PhysX as their own, why wouldn't they?
nVidia isn't a physics company, they're a GPU company. They bought PhysX to sell more GPUs. If they can use PhysX to make their competitors look bad for free, all the better.

Originally posted by: SickBeast
What NV is doing with CUDA is a desperate move to remain relevant in the event that the GPU becomes less prominent.

That doesn't make sense.
Cuda predates any ATi DX10 GPUs. Cuda was released together with the GeForce 8800, at which time nVidia reigned supreme in the graphics world. There was no reason for nVidia to be 'desperate' at that time.

Originally posted by: SickBeast
AMD already has CPUs that will perform general processing, so they probably don't care as much about the whole GPGPU thing.

They have to. nVidia's Cuda is already replacing supercomputers in development centers and universities around the world because of the huge leap in performance.
This threatens both Intel's and AMD's CPU sales.
Intel has responded with Larrabee; AMD should respond as well.

Originally posted by: SickBeast
Instead of developing tools like CUDA, they have focused on creating good GPUs.

Even long after Cuda was released, AMD still didn't have any GPUs that competed with nVidia in terms of graphics performance, let alone GPGPU.
Let's not try to find excuses for the fact that the 2000-series was a failure, and the 3000-series wasn't exactly a hot performer either, and that ATi still hasn't delivered on GPGPU.
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
www.neftastic.com
Wow... just wow.

Originally posted by: Scali
OpenCL only exists on paper.

That's why both Nvidia and AMD along with Havok have demoed OpenCL-based software. Sure, these are technical previews, but I'd hardly say it doesn't exist. Developer support always starts well before the initial release, and I can attest to this with OpenCL directly.

Originally posted by: Scali
That's where you and many others go wrong. Cuda works, OpenCL doesn't.

Various demos seem to indicate otherwise. Personal experience indicates otherwise as well. But I don't expect you to take my word for it.

Originally posted by: Scali
Yea well, that blogger made two mistakes:
1) OpenCL is not a replacement for Cuda, but will actually be a part of Cuda (at least on nVidia hardware). Therefore Cuda doesn't disappear.

OpenCL is indeed a replacement for CUDA. It's a compute API, same as CUDA is. The hardware is just hardware, it doesn't matter... the API is what matters.

Originally posted by: Scali
2) PhysX doesn't have to disappear either, because it is not implicitly tied to Cuda or nVidia hardware in general.

With you harping on "accelerated" PhysX, yeah, it is explicitly tied to CUDA and Nvidia hardware. Unless this changes, or the market landscape changes, PhysX will have a hard time holding on.

Originally posted by: SunnyD
As I said, if 80% of the hardware out there is non-Nvidia, then CUDA + PhysX become irrelevant.

Originally posted by: Scali
But that '80%' assumption is just some arbitrary rule you just threw into the discussion, because in 3-4 years there MAY be a possibility that there are no nVidia-based consoles.
That has nothing to do with the original discussion.

Yes, 80% is an assumption, however it was used to demonstrate the fact that the hardware landscape out there is non-Nvidia, non-CUDA capable. How many current-gen consoles are currently running or capable of running CUDA? None (maybe the PS3?). It has everything to do with the discussion, because without a hardware base, CUDA is nothing. With the PC industry, it's niche at best. Nvidia doesn't settle for being niche.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: SunnyD
That's why both Nvidia and AMD along with Havok have demoed OpenCL-based software. Sure, these are technical previews, but I'd hardly say it doesn't exist. Developer support always starts well before the initial release, and I can attest to this with OpenCL directly.

For someone who complains about marketing so much, you are quick to fall for these technical previews.
This is all just marketing as well. They make a simple demo, which may or may not actually run on OpenCL (nobody got to inspect either the source code or the binaries, so what was ACTUALLY running, nobody knows). The point is to show what is possible with OpenCL, a preview. It's just to promote the OpenCL name, get people familiar with the concept.
It doesn't actually mean that they had a complete OpenCL implementation working yet. They may have had *just enough* functionality for that particular demo... Or they may actually not have had all the functionality, and replaced some of it with a custom-made alternative to fill-in-the-blanks for demoing purposes.

Heck, there are many examples of 'technical previews' where the actual technology never emerged, or things that were shown in a demo were never possible in practice. In other words, the demos were 'rigged'.

Originally posted by: SunnyD
Various demos seem to indicate otherwise. Personal experience indicates otherwise as well. But I don't expect you to take my word for it.

I know that nVidia has a compliance candidate driver.
Problem is, it's only for nVidia hardware, so how does that help ATi?
Besides, it's only available to registered developers.
In other words... If I were to post a binary with OpenCL on this forum, nobody other than registered nVidia developers would actually be able to run it. And they would be able to run Cuda anyway. So how does that help PhysX?

Originally posted by: SunnyD
OpenCL is indeed a replacement for CUDA. It's a compute API, same as CUDA is. The hardware is just hardware, it doesn't matter... the API is what matters.

Wrong.
I'll just repeat what I posted on the blog the other day:
Cuda is the name for the entire GPGPU framework (and they use it to refer to the hardware architecture as well, at times). OpenCL is not a complete framework, but a programming language. Cuda's programming language is called C for Cuda.
OpenCL is very similar to C for Cuda, and OpenCL will also run on top of the Cuda framework, like C for Cuda does.
So Cuda in itself won't disappear, and will actually be an important part of making OpenCL work on nVidia's hardware. That's why nVidia can continue to promote Cuda all they want. And in a few months they will just say: "Now you can run OpenCL on Cuda too!".

See this page for more info:
http://www.nvidia.com/object/cuda_what_is.html

The first paragraph is a good summary:
"NVIDIA® CUDA™ is a general purpose parallel computing architecture that leverages the parallel compute engine in NVIDIA graphics processing units (GPUs) to solve many complex computational problems in a fraction of the time required on a CPU. It includes the CUDA Instruction Set Architecture (ISA) and the parallel compute engine in the GPU. To program to the CUDA™ architecture, developers can, today, use C, one of the most widely used high-level programming languages, which can then be run at great performance on a CUDA™-enabled processor. Other languages will be supported in the future, including FORTRAN and C++."

Then when you go to their OpenCL page:
http://www.nvidia.com/object/cuda_opencl.html

"OpenCL™ (Open Computing Language) is a new heterogeneous computing environment that runs on the CUDA architecture. It will allow developers to harness the massive parallel computing power of NVIDIA GPUs to create compelling computing applications."

In other words, OpenCL is just going to be one of the languages you can use in the Cuda framework.

Originally posted by: SunnyD
(maybe the PS3?)

No, it can't. The fact that you would even doubt that shows you have no understanding of what Cuda is at all.
The PS3 can run PhysX with a special Cell implementation of the engine. But that has nothing to do with Cuda.

Originally posted by: SunnyD
Nvidia doesn't settle for being niche.

Ever heard of the nVidia Tesla range of products?
They are basically videocards without the video. The *only* thing they can do is run Cuda.
In fact, ATi thought this was such a good idea, that ATi also launched a FireStream line of products, same thing, but for ATi Stream rather than Cuda, obviously.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
So to sum it up.

ATI Defense Force: I don't like PhysX because it's proprietary.
Reality: So is DirectX and Havok
ATI Defense Force: No response.


ATI Defense Force: I don't like PhysX because right now it's mostly graphical enhancements.
Reality: Like AA/AF/HDR/Etc.
ATI Defense Force: No response.


ATI Defense Force: I don't like PhysX because it does not run on all cards.
Reality: Neither does DX10.1
ATI Defense Force: No response.


ATI Defense Force: I'm waiting for Havok and OpenCL
Reality: Great, when will the first game be here
ATI Defense Force: No response.


ATI Defense Force: PhysX will fail.
Reality: So why have so many PhysX compatible cards been sold, games keep being released and developers keep jumping on board.
ATI Defense Force: No response.
 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
Originally posted by: Wreckage
So to sum it up.

ATI Defense Force: I don't like PhysX because it's proprietary.
Reality: So is DirectX and Havok
ATI Defense Force: No response.

Havok runs on every CPU available for the PC market.
DirectX 9/10 runs on every GPU available on the market.
PhysX runs only on 8/9/GTX series nVidia GPUs.
First fail on your part.

ATI Defense Force: I don't like PhysX because right now it's mostly graphical enhancements.
Reality: Like AA/AF/HDR/Etc.
ATI Defense Force: No response.

AA/AF/HDR/Etc. is available for every GPU on the market.
PhysX runs only on 8/9/GTX series nVidia GPUs.
Second fail on your part.

ATI Defense Force: I don't like PhysX because it's proprietary.
Reality: So is DirectX and Havok
ATI Defense Force: No response.

Repeat of the first point. Third fail on your part.

ATI Defense Force: I'm waiting for Havok and OpenCL
Reality: Great, when will the first game be here
ATI Defense Force: No response.

Once OpenCL is finalized, this will start showing up.
Havok is and has been available for quite some time. Plenty of games available.
Fourth fail on your part.

ATI Defense Force: PhysX will fail.
Reality: So why have so many PhysX compatible cards been sold, games keep being released and developers keep jumping on board.
ATI Defense Force: No response.

So many PhysX compatible cards have been sold because they are good cards and PhysX has nothing to do with it.
What games, Wreckage? Mirror's Edge and Sacred 2 are the two biggest titles! And PhysX is used there as additional fluff - leaves in Sacred 2 that react to wind, particles that react to spells... and that's it. And the hardware requirements for Sacred 2 are crazy with PhysX - they suggest a GTX280-class card to get good framerates!

Seriously, each time you post it's a bigger failure - so false it's beyond stupid. You keep repeating the same wrong things and blatantly ignore it each time you are proven wrong, again and again repeating your inaccurate and false claims.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: Qbah

Havok runs on every CPU available for the PC market
So far it does not run on any GPU, and it is owned by Intel, making it just as proprietary as PhysX. Major fail on your part.

Direct X 9/10 runs on every GPU available on the market
As long as you have Windows. Yet another fail.
PhysX runs only on 8/9/GTX series nVidia GPUs.
DirectX 10.1 only runs on the 4xxx series. 3 fails for 3.

AA/AF/HDR/Etc. is available for every GPU on the market.
Which has absolutely nothing to do with the fact that they are only eye candy.

Once OpenCL is finalized, this will start showing up.
Show me a release date.
Havok is and has been available for quite some time. Plenty of games available.
Show me "plenty of games" using Havok on the GPU running on OpenCL.

You twisted everything and did not directly or accurately respond to one point made. Typical.

 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Originally posted by: SunnyD
Wow... just wow.

OpenCL only exists on paper.

That's why both Nvidia and AMD along with Havok have demoed OpenCL-based software. Sure, these are technical previews, but I'd hardly say it doesn't exist. Developer support always starts well before the initial release, and I can attest to this with OpenCL directly.

That's where you and many others go wrong. Cuda works, OpenCL doesn't.

Various demos seem to indicate otherwise. Personal experience indicates otherwise as well. But I don't expect you to take my word for it.

Yea well, that blogger made two mistakes:
1) OpenCL is not a replacement for Cuda, but will actually be a part of Cuda (at least on nVidia hardware). Therefore Cuda doesn't disappear.

OpenCL is indeed a replacement for CUDA. It's a compute API, same as CUDA is. The hardware is just hardware, it doesn't matter... the API is what matters.

2) PhysX doesn't have to disappear either, because it is not implicitly tied to Cuda or nVidia hardware in general.

With you harping on "accelerated" PhysX, yeah, it is explicitly tied to CUDA and Nvidia hardware. Unless this changes, or the market landscape changes, PhysX will have a hard time holding on.

Originally posted by: SunnyD
As I said, if 80% of the hardware out there is non-Nvidia, then CUDA + PhysX become irrelevant.

But that '80%' assumption is just some arbitrary rule you just threw into the discussion, because in 3-4 years there MAY be a possibility that there are no nVidia-based consoles.
That has nothing to do with the original discussion.

Yes, 80% is an assumption, however it was used to demonstrate the fact that the hardware landscape out there is non-Nvidia, non-CUDA capable. How many current-gen consoles are currently running or capable of running CUDA? None (maybe the PS3?). It has everything to do with the discussion, because without a hardware base, CUDA is nothing. With the PC industry, it's niche at best. Nvidia doesn't settle for being niche.

CUDA is not an API, for the eleventy billionth time. It's an architecture. The programming language is C with extensions. OpenCL is similar to C for CUDA, and OpenCL will ride nicely on the CUDA architecture with minimal effort. Do you get this?
 

ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
Originally posted by: Wreckage
So to sum it up.

ATI Defense Force: I don't like PhysX because it's proprietary.
Reality: So is DirectX and Havok
ATI Defense Force: No response.


ATI Defense Force: I don't like PhysX because right now it's mostly graphical enhancements.
Reality: Like AA/AF/HDR/Etc.
ATI Defense Force: No response.


ATI Defense Force: I don't like PhysX because it does not run on all cards.
Reality: Neither does DX10.1
ATI Defense Force: No response.


ATI Defense Force: I'm waiting for Havok and OpenCL
Reality: Great, when will the first game be here
ATI Defense Force: No response.


ATI Defense Force: PhysX will fail.
Reality: So why have so many PhysX compatible cards been sold, games keep being released and developers keep jumping on board.
ATI Defense Force: No response.

Do these voices in your head ever give you relief?
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Scali
Originally posted by: Qbah
Since GPU-PhysX can only run on nVidia GPUs and the newest rumors say:

Next-gen Xbox - ATi
Next-gen Wii - ATi
Next-gen PS3 - Intel

Looks like it won't be PhysX ;)

Who cares? The fact that physics acceleration will become standard in games is FAR more important than whether it's PhysX, Havok or what other API.

Besides, my point was that his suggestion of using a fast CPU in a PC was invalid.
You DO need a GPU or some other specialized processor if you want better physics.
CPUs can't replace GPU-accelerated PhysX. Only other GPU-accelerated physics solutions could.

Aside from that, as I already said, I consider PhysX on the PS3 with its Cell processor to be more GPU-accelerated PhysX than CPU-based PhysX.

That's why the PS3 isn't using x86, but a specialized CPU which can handle the physics load. It would be pretty stupid IMO to put a generic x86 CPU in a console and then use the GPU for physics, unless you have a second GPU doing the graphics. Having one GPU do both would only slow down performance.