"Inevitable Bleak Outcome for nVidia's Cuda + Physx Strategy"


Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Pantalaimon
So I guess you don't mind being locked into one hardware vendor? Sorry, I like being able to choose which hardware vendor's card to buy. I'd rather take better performing card without PhysX but can run an open physics standard instead.

PhysX is a reality. I can buy hardware and games that support PhysX today.
Open physics standards don't exist yet. You don't have the choice that you are proposing.
And as I said in the post above, you may not get the choice, because games will likely continue to use PhysX even if Havok/OpenCL become available. E.g., the Unreal Engine is a very popular engine in games today, and it is built around PhysX, not Havok. This isn't going to change overnight.
 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
Originally posted by: Scali
Might I refresh your memory?
The GeForce 7 series supported shadowmapping extensions (DST/PCF), the Radeons didn't.
The Radeons supported 3Dc normalmap compression, the GeForces didn't.
Guess what? Both the shadowmapping and normalmap compression were used in many titles, even though it required different codepaths for different vendors.

And speaking of IDTech... Doom 3 actually had a specific path for nVidia hardware, making use of things like UltraShadow. Yes, that can only run on nVidia hardware. Other cards had significantly reduced stencilshadow performance.
Just look at the trusty old GeForce 5900 outperforming the Radeon 9800 series in Doom 3:
http://www.tomshardware.com/re...5900-ultra,630-14.html

Yes, that is all true. Shadowmapping wasn't used anywhere else (well, I think in NWN2 too). Nobody is using it now. The standard wasn't supported and it's dead now - no follow-up, no nothing. 3Dc was implemented as part of DX later on - both nV and ATi support it now.

As for IDTech - yes, the GeForce cards had their own path, but it didn't mean Radeons ran the game worse. Slower - yes, but it looked the same. And the game was playable either way. Not to mention (as it is now) - if you wanted to play Doom3 maxed out, you would buy an FX (you want PhysX now? get a GTX). But let's take a look at HL2 - the FX was slaughtered in it, more so than in D3. It had to run a different DX level (8.1 instead of 9) in order to get playable FPS. So recommending the FX because it ran Doom3 better wasn't such a great idea taking all the incoming DX9 games into account - which everybody knew would arrive. And DX9 is an industry standard, supported by everyone. So everyone could enjoy the games. Unlike PhysX.

That's where you and many other people go wrong.
A technology isn't useless just because there is only one brand supporting it.
PhysX is an excellent technology, opening up many new possibilities for physics in games.
Remember Glide? It wasn't exactly useless either. In the end it obviously didn't survive, because as other vendors started offering 3d acceleration, a hardware-agnostic solution was called for. But Glide was very useful when there was only one vendor offering this kind of acceleration in the first place. It laid the groundwork for 3d videocards as we know them today.

I was just saying that nVidia does its best to promote PhysX because that's what commercial companies do: they promote their products and technologies. It is always to be taken with a grain of salt.

This is where we'll disagree completely. Unless it's widely supported, a technology won't be picked up. Being excellent doesn't really cut it - it needs to be widely supported to be a commercial success. And Glide was well before any form of standard DirectX was available. Glide was still back in the MS-DOS days... And see what happened to it? DX killed it. DX was touted as an industry standard, included in Windows 95 - the next-gen gaming OS. Microsoft was the owner, same as with Windows. Glide was so much better than the DX of that time - that didn't change the fact that it stopped being used and just died. Together with 3DFx. It was the first widely available 3D API on the market, and it brought huge performance and never-before-seen graphics - and yet it died. PhysX compared to Glide is like a spit to a nuke (that's my opinion :p).

Yea, it is now. But it wasn't when I first started playing with computers. In fact, my first 2 or 3 computers didn't have an x86 processor in them at all (even though x86 and the IBM PC did exist back then).
It slowly worked itself up from nothing to where it is today.
All that doesn't make it any less proprietary though.

Thing is, there was no standard at all back then. Not really the case today. We will get OpenCL as the platform that will allow PhysX on non-nV hardware. And people will not care if it's PhysX or Havok - they will only care if it will run on their hardware - which it will, as long as it's OpenCL compatible.


nVidia owns PhysX though, so they can add OpenCL anytime they like. And nVidia has VERY good developer relations through the TWIMTBP program. They have a lot of influence in the gaming industry. And they have the monopoly on GPGPU so far.

I find that stance very naive and arrogant. Sure, nVidia has TWIMTBP, things still run great on ATi cards - hell, sometimes even better despite the TWIMTBP. Unless some quality titles show up in numbers and PhysX is supported by the newest hardware widely available (be it ATi or nV) it just won't fly. Not until nV has a monopoly on the GPU market - which it doesn't and hopefully never will.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Originally posted by: Pantalaimon
Or the third option: So you're asking if you would want 110 with PhysX (GPU), or 5 with PhysX (CPU)? Answer: With PhysX (GPU)

So I guess you don't mind being locked into one hardware vendor? Sorry, I like being able to choose which hardware vendor's card to buy. I'd rather take better performing card without PhysX but can run an open physics standard instead.

You're forgetting that if you were being "locked" into one hardware vendor, that vendor is still Nvidia and you would be hard pressed to say that you wouldn't be satisfied with one of their cards. You have a GTX260, so you already know.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Qbah
Yes, that is all true. Shadowmapping wasn't used anywhere else (Well, I think in NWN2 too). Nobody is using it now. The standard wasn't supported and it's dead now. No follow-up, no nothing.

*ALL* games with shadows use shadowmapping now, and nVidia's DST/PCF was added to DX10.
It forms the basis for the great leap in shadow quality in games like Crysis (dynamic soft-shadows on virtually everything).
Excuse me if I no longer take you seriously after such a big mistake.

Originally posted by: Qbah
So everyone could enjoy the games. Unlike PhysX.

PhysX can run on the CPU as well, so everyone can enjoy PhysX games. You just get better performance/more effects if you use hardware acceleration through either an Ageia PPU or an nVidia GPU.
Much like how Doom3 got a major boost from using special nVidia features on the 5900 card, which was pretty hopeless in most other SM2.0+ shader games. A clear example of a developer using vendor-specific code to make a certain piece of hardware deliver a better gaming experience.
Something you claim never happened.
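Scali's point here is that middleware like PhysX selects a backend at runtime: it uses hardware acceleration where it exists and falls back to the CPU otherwise, so the same game ships for everyone. A minimal sketch of that dispatch pattern (illustrative only - the names below are invented, this is not the actual PhysX SDK API):

```python
# Illustrative sketch of runtime backend selection, the pattern that lets
# a physics middleware run everywhere: prefer hardware acceleration,
# fall back to plain CPU simulation. Not real PhysX SDK code.

def detect_backends():
    """Pretend probe; a real SDK would query drivers and hardware."""
    return {"cuda_gpu": False, "ageia_ppu": False, "cpu": True}

def select_physics_backend(available):
    # Ordered by preference: GPU first, then the Ageia PPU, then CPU.
    for backend in ("cuda_gpu", "ageia_ppu", "cpu"):
        if available.get(backend):
            return backend
    raise RuntimeError("no physics backend available")

backend = select_physics_backend(detect_backends())
print(f"simulating physics on: {backend}")
```

On a Radeon-only system the probe would report no CUDA GPU and the game would still run, just with the CPU-level effects - which is exactly the "everyone can enjoy PhysX games" claim.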

Originally posted by: Qbah
This is were we'll disagree completely. Unless it's widely supported, a technology won't be picked up. It being excellent doesn't really cut it - it needs to be widely supported to be a commercial success. And Glide was well before any form of standard DirectX was available. Glide was still back in the MS DOS days... And see what happened to it? DX killed it.

Yea, eventually... But initially Glide was a great success, with good support in games.
So PhysX can be a success for a few years until it either starts supporting OpenCL, or another solution takes over.
Today it's PhysX or nothing though.

Originally posted by: Qbah
Thing is, there was no standard at all back then. Not really the case today. We will get OpenCL as the platform that will allow PhysX on non-nV hardware.

See what you're saying?
"We will get".
Exactly, there IS no standard currently. There WILL be a standard. But there ISN'T one.

In fact, if it wasn't for nVidia and Cuda, we wouldn't be getting OpenCL.

Originally posted by: Qbah
I find that stance very naive and arrogant. Sure, nVidia has TWIMTBP, things still run great on ATi cards - hell, sometimes even better despite the TWIMTBP. Unless some quality titles show up in numbers and PhysX is supported by the newest hardware widely available (be it ATi or nV) it just won't fly. Not until nV has a monopoly on the GPU market - which it doesn't and hopefully never will.

I have a feeling that quality titles WILL show up in numbers... nVidia has done a lot to promote PhysX with developers, and many of them signed on.
Besides, hardware-accelerated PhysX games still DO run great on ATi cards... you just can't enable the more detailed PhysX effects. So it doesn't require a monopoly from nVidia (just like it didn't require a monopoly back when ID decided to add an optimized path for the GeForce 5-series to Doom3).
 

Pantalaimon

Senior member
Feb 6, 2006
341
40
91
Originally posted by: Keysplayr
Originally posted by: Pantalaimon
Or the third option: So you're asking if you would want 110 with PhysX (GPU), or 5 with PhysX (CPU)? Answer: With PhysX (GPU)

So I guess you don't mind being locked into one hardware vendor? Sorry, I like being able to choose which hardware vendor's card to buy. I'd rather take better performing card without PhysX but can run an open physics standard instead.

You're forgetting that if you were being "locked" into one hardware vendor, that vendor is still Nvidia and you would be hard pressed to say that you wouldn't be satisfied with one of their cards. You have a GTX260, so you already know.

I bought the GTX260 because it was almost 30 euros cheaper than the nearest HD4870 1GB and their performance is close to each other. Had the price difference been reversed, I would have bought another HD4870 instead. I didn't buy the GTX260 because of PhysX. And the next time I need to buy a new card, I would like to again be able to pick the card with the price/performance that suits my wallet, and not because of a proprietary standard that can only run on one certain vendor's cards.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
I thought Nvidia made Physx open? Meaning if AMD wants Nvidia will help them with designing for it?!?!?!?!?!?

If I am right then this whole article starts off on the wrong premise.

 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Keysplayr
You're forgetting that if you were being "locked" into one hardware vendor, that vendor is still Nvidia and you would be hard pressed to say that you wouldn't be satisfied with one of their cards. You have a GTX260, so you already know.

I think people are way overreacting to this lock-in thing. We were 'locked-in' to nVidia for a LONG time with DX10 as well, because ATi not only had a huge delay in introducing their first DX10 cards (2900 series), they were also such poor performers that they weren't really an option to any informed buyer.

Eventually things will balance out again because PhysX will get OpenCL support, or Havok or another API will take over as the dominant physics API... or one of the companies goes out of business... in which case the lock-in doesn't matter anymore.

For now, nVidia has the monopoly on hardware-accelerated physics, and we'll have to see what ATi (or Intel) can do about it.
 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
Originally posted by: Scali
Originally posted by: Keysplayr
I was actually only referring to Nvidia PhysX supported cards. But you're right that if you are using an ATI card, the third option is the only option.

Yea, but nVidia cards isn't where the hurt is.
I mean, everyone with an nVidia card can just turn off hardware-PhysX in the control panel and/or turn off the PhysX effects in the game.
So you can choose whichever you like. Since you don't have to pay extra for PhysX, there's no disadvantage regardless of which way you prefer to use your card. No point in making any fuss about PhysX or Cuda or whatever. I mean... not everyone uses something like 16xAA or transparency AA either because they don't want to trade performance for visual quality, but does anyone make any fuss about it being supported by their cards?

The hurt is with (potential) ATi owners who can't use the hardware-effects at all. So they try their best to flame PhysX in any way possible. Reality is that PhysX does work, and is supported by an ever-growing number of games, while there is only the sound of crickets chirping in the ATi camp.

If I had the choice between a technology that works on all cards, and an equivalent technology that works on only one vendor's cards, I'd prefer the one that works everywhere.
But the reality is that you don't have this choice. And you may not get it either. There won't be games that support both Havok and PhysX (the APIs are just too different, and it requires too much work to make both work in a single game... much like how games supporting both OpenGL and Direct3D were abandoned years ago). So even if Havok delivers OpenCL-powered physics, and even if Havok works fine on both ATi and nVidia hardware... there will still be many games that use PhysX.
However you want to look at it, nVidia has the advantage. And that's where the hurt is.

That's true to some extent. I would never pay more for a card only because it supports PhysX. Right now, the GTX260 is similarly priced to HD4870 - so you might as well get the nV card - to see PhysX fluff in those 3 games for yourself and judge if you like it or not.

Also, once PhysX is ported to OpenCL and ATi puts out OpenCL drivers - doesn't that mean Radeons will be able to run PhysX titles? You will be able to install the OpenCL-enabled PhysX drivers and run your PhysX games on ATi cards.

There is no hurt right now as there is nothing a Radeon owner loses out on. PhysX implementation is marginal at best, with a bit of additional eye-candy being the most it offers.

Ohh and I game on my Xbox360 only now, thank you very much.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Genx87
I thought Nvidia made Physx open? Meaning if AMD wants Nvidia will help them with designing for it?!?!?!?!?!?

If I am right then this whole articles starts off on the wrong premise.

You are correct.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Qbah
Also, once PhysX is ported to OpenCL and ATi puts out OpenCL drivers - doesn't that mean Radeons will be able to run PhysX titles? You will be able to install the OpenCL-enabled PhysX drivers and run your PhysX games on ATi cards.

Yup, PhysX will generally continue to work, regardless of the underlying technology. It currently works on CPU, PPU and Cuda GPUs. OpenCL can easily be added to that list (and in that case, on nVidia hardware it will probably continue to use C for Cuda to have the performance advantage).
But nVidia hasn't made any official statements about their future plans with PhysX.

Originally posted by: Qbah
There is no hurt right now as there is nothing a Radeon owner loses out on.

Well there is, they're just in denial.
If there was nothing to miss out on, there wouldn't be so many blogs and forums filled with hate-threads about PhysX.
 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
Originally posted by: Scali
*ALL* games with shadows use shadowmapping now, and nVidia's DST/PCF was added to DX10.
It forms the basis for the great leap in shadow quality in games like Crysis (dynamic soft-shadows on virtually everything).
Excuse me if I no longer take you seriously after such a big mistake.

Doom3 used UltraShadow - this is what made the game run so much better on nVidia. UltraShadow is an nVidia specific way of creating and calculating shadows - which you probably know and meant in your first post - I meant the same thing in mine.

And today's dynamic shadows are terrible. Blocky, "floaty" - only Crysis from the recent games has done them right. So did Doom3 or Quake4. Hit games like Assassin's Creed, touted for their beautiful graphics, have such terrible shadows I have no idea how this was overlooked.

PhysX can run on the CPU as well, so everyone can enjoy PhysX games. You just get better performance/more effects if you use hardware acceleration through either an Ageia PPU or an nVidia GPU.
Much like how Doom3 got a major boost from using special nVidia features on the 5900 card, which was pretty hopeless in most other SM2.0+ shader games. A clear example of a developer using vendor-specific code to make a certain piece of hardware deliver a better gaming experience.
Something you claim never happened.

I wrote that it wasn't supported, not that it wasn't used once or twice! Seriously, we're going into reading comprehension now? UltraShadow was used in Doom3 - it made the game run faster on FX cards. Great. The game ran slower, but still okay, on Radeons. What's the point here? Everybody could run Doom3 and have a great time. If a developer uses hardware PhysX as a game mechanic, only part of the market will be able to use it - no developer in their right mind will do that.

Yea, eventually... But initially Glide was a great success, with good support in games.
So PhysX can be a success for a few years until it either starts supporting OpenCL, or another solution takes over.
Today it's PhysX or nothing though.

Glide was a success and still died. PhysX isn't... and you're expecting it to fly?

See what you're saying?
"We will get".
Exactly, there IS no standard currently. There WILL be a standard. But there ISN'T one.

In fact, if it wasn't for nVidia and Cuda, we wouldn't be getting OpenCL.

We don't have one yet, and until AMD says "from this day we will support PhysX as the only physics API", it won't be the standard. And that is highly unlikely.

I have a feeling that quality titles WILL show up in numbers... nVidia has done a lot to promote PhysX with developers, and many of them signed on.
Besides, hardware-accelerated PhysX games still DO run great on ATi cards... you just can't enable the more detailed PhysX effects. So it doesn't require a monopoly from nVidia (just like it didn't require a monopoly back when ID decided to add an optimized path for the GeForce 5-series to Doom3).

That is all true. But it is all beside the point of the discussion... PhysX just doesn't have the means now to become an industry standard. And it's not a reason for most people to pay more for an nV product.
 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
Originally posted by: Scali
Yup, PhysX will generally continue to work, regardless of the underlying technology. It currently works on CPU, PPU and Cuda GPUs. OpenCL can easily be added to that list (and in that case, on nVidia hardware it will probably continue to use C for Cuda to have the performance advantage).
But nVidia hasn't made any official statements about their future plans with PhysX.

So basically it's up to nVidia now to port PhysX to OpenCL and make it available for Radeon owners - once AMD releases their OpenCL drivers. How the tables have turned ;)

Well there is, they're just in denial.
If there was nothing to miss out on, there wouldn't be so many blogs and forums filled with hate-threads about PhysX.

The hate is about nVidia pushing PhysX like there's no tomorrow. With opinions like "Radeons suck terribly cause they don't do PhysX" coming out from the nVidia fanatics, really stirring up the atmosphere. And a few months back, when Radeons still were cheaper than nV cards, nVidia supporters were pushing the PhysX card as being worth the price premium. Which, with the current state of things (not to mention a few months ago) is not the case. Competing cards cost the same now though - you might as well get that GTX and see for yourself :) It's an additional feature that nobody forces you to use.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Qbah
Doom3 used UltraShadow - this is what made the game run so much better on nVidia. UltraShadow is an nVidia specific way of creating and calculating shadows - which you probably know and meant in your first post - I meant the same thing in mine.

UltraShadow is basically a z-scissor, and can help to reduce the fillrate required for stencil shadows. It has nothing to do with the shadowmaps and DST/PCF that I mentioned regarding GeForce 7 and Radeon X1000-series.
Stencil shadows and shadowmapping are completely different techniques. Stencil shadows are now abandoned in favour of shadowmapping, for which nVidia laid the groundwork with DST/PCF a few years ago, before it was a standard feature in DX.

Originally posted by: Qbah
Glide was a success and still died. PhysX isn't... and you're expecting it to fly?

I think I made it very clear that PhysX will eventually need to adapt (OpenCL or...?) or die.
But due to lack of competing options it can still be successful on the short term.

Originally posted by: Qbah
That is all true. But that is all besides the point of the discussion... PhysX just doesn't have the means now to become an industry standard. And it's not a reason for most people to pay more for an nV product.

nVidia doesn't NEED PhysX to sell their products. nVidia's products are successful enough on their own. And that's where the 'danger' lies. PhysX will 'sneak into' the market because it piggy-backs onto the sales of nVidia GPUs. Which is why more than 50% of all gamers already have support for PhysX. Since PhysX is free for use unlike Havok, it's very tempting for developers to use it in their games. And since they can then add extra effects with little extra effort for the 50+% of their audience that owns nVidia hardware (and through TWIMTBP nVidia will actually help you add these effects to your games), it is tempting for developers to do so. It can give them a bit of extra flash over competing games and boost sales.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Qbah
So basically it's up to nVidia now to port PhysX to OpenCL and make it available for Radeon owners - once AMD releases their OpenCL drivers. How the tables have turned ;)

Except there's no incentive for nVidia to do so. They don't need OpenCL for their own hardware, and they currently have nothing to gain from ATi running PhysX.
ATi needs to come up with a decent accelerated Havok first, and persuade developers to use it, and then hope that the Havok effects are more compelling than the PhysX effects.
Only then might nVidia want to support OpenCL, because it would keep Havok from pushing PhysX out of the market altogether.
But this could take years... all the time with PhysX having the monopoly, and ever more PhysX games being released.

Originally posted by: Qbah
The hate is about nVidia pushing PhysX like there's no tomorrow.

What's wrong with that? I would find a company that DOESN'T push its technology far stranger.

Originally posted by: Qbah
With opinions like "Radeons suck terribly cause they don't do PhysX" coming out from the nVidia fanatics, really stirring up the atmosphere.

Since when is a company responsible for its raving fanboy crowd?
 

Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
Originally posted by: Scali
UltraShadow is basically a z-scissor, and can help to reduce the fillrate required for stencil shadows. It has nothing to do with the shadowmaps and DST/PCF that I mentioned regarding GeForce 7 and Radeon X1000-series.
Stencil shadows and shadowmapping are completely different techniques. Stencil shadows are now abandoned in favour of shadowmapping, for which nVidia laid the groundwork with DST/PCF a few years ago, before it was a standard feature in DX.

True, my bad. Seems I projected the shadow part onto the Doom3 comment. Doom3 ran better on FX cards cause of UltraShadow. However, did the fact that those extensions weren't supported by X19xx cards hinder their performance in games? Seems the general consensus was that the X1950XTX was the fastest card of its generation, bested only by the 7950GX2, when it scaled?
The whole point in this part was that Radeon owners didn't lose out on anything just because a particular feature wasn't included in their cards. And in my opinion they're not losing out on anything now, either. Previously both camps offered a similar experience - it didn't matter which game engine was used.

I think I made it very clear that PhysX will eventually need to adapt (OpenCL or...?) or die.
But due to lack of competing options it can still be successful on the short term.

I guess you can look at it this way, sure. Though I find such a success meaningless. And with that view on the matter, could you recommend a more expensive card just because it offers something that might or might not be needed?

nVidia doesn't NEED PhysX to sell their products. nVidia's products are successful enough on their own. And that's where the 'danger' lies. PhysX will 'sneak into' the market because it piggy-backs onto the sales of nVidia GPUs. Which is why more than 50% of all gamers already have support for PhysX. Since PhysX is free for use unlike Havok, it's very tempting for developers to use it in their games. And since they can then add extra effects with little extra effort for the 50+% of their audience that owns nVidia hardware (and through TWIMTBP nVidia will actually help you add these effects to your games), it is tempting for developers to do so. It can give them a bit of extra flash over competing games and boost sales.

No, they don't. But why develop and invest into something that you don't support? If every current-gen card would support PhysX, this would be incentive enough for everybody to be using it. There would be no reason not to... Until that is the case, PhysX won't be anything big.

Except there's no incentive for nVidia to do so. They don't need OpenCL for their own hardware, and they currently have nothing to gain from ATi running PhysX.
ATi needs to come up with a decent accelerated Havok first, and persuade developers to use it, and then hope that the Havok effects are more compelling than the PhysX effects.
Only then might nVidia want to support OpenCL, because it would keep Havok from pushing PhysX out of the market altogether.
But this could take years... all the time with PhysX having the monopoly, and ever more PhysX games being released.

For PhysX to be a widely used standard, nVidia needs to do it. Otherwise PhysX will die and a feature of their cards will go forgotten. Imagine all the people that bought nVidia cards cause they can do PhysX, watching the standard die for lack of support.

Originally posted by: Qbah
The hate is about nVidia pushing PhysX like there's no tomorrow.

What's wrong with that? I would find a company that DOESN'T push its technology far stranger.

There's nothing wrong with nVidia pushing it. The hate threads are not started by ATi or nVidia, but by their fans and supporters :) It's the blind following of PhysX by the nVidia fans, that is leading to the whole "war" - at least that's my view on it.


Originally posted by: Qbah
With opinions like "Radeons suck terribly cause they don't do PhysX" coming out from the nVidia fanatics, really stirring up the atmosphere.

Since when is a company responsible for its raving fanboy crowd?

I never said it is. The heated discussion is between supporters of both camps, not the companies themselves. You mentioned the whole internet flamewar - it is and always was between fanboys :)

And sorry for that post - stupid IE8 must've done something during my reply.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Originally posted by: Pantalaimon
Originally posted by: Keysplayr
Originally posted by: Pantalaimon
Or the third option: So you're asking if you would want 110 with PhysX (GPU), or 5 with PhysX (CPU)? Answer: With PhysX (GPU)

So I guess you don't mind being locked into one hardware vendor? Sorry, I like being able to choose which hardware vendor's card to buy. I'd rather take better performing card without PhysX but can run an open physics standard instead.

You're forgetting that if you were being "locked" into one hardware vendor, that vendor is still Nvidia and you would be hard pressed to say that you wouldn't be satisfied with one of their cards. You have a GTX260, so you already know.

I bought the GTX260 because it was almost 30 euros cheaper than the nearest HD4870 1GB and their performance is close to each other. Had the price difference been reversed, I would have bought another HD4870 instead. I didn't buy the GTX260 because of PhysX. And the next time I need to buy a new card, I would like to again be able to pick the card with the price/performance that suits my wallet, and not because of a proprietary standard that can only run on one certain vendor's cards.

Well, I really didn't say you purchased the GTX260 "because" of PhysX. You got a good deal and that is a good thing at any level. Point is, you didn't "have" to pay extra for PhysX capability. Nobody is saying anyone has to do this. There are great deals all around the web to be had for almost any card you want, ATI or Nvidia. And it is my personal opinion that it would be better to have PhysX-enabled hardware, with new games now emerging that utilize PhysX to one extent or another, than to not have it. Again, this is my personal opinion and I am not forcing it upon you. hehe.
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
www.neftastic.com
Originally posted by: Keysplayr
"I can also see reasons why nVidia's architecture would run better, as OpenCL closely matches Cuda's design, and Cuda's design is based around the nVidia architecture. ATi has a completely different architecture, and has had to add local memory to the 4000-series just to get the featureset right for OpenCL. I doubt that their 'afterthought' design is anywhere near as efficient as nVidia's is."

(Note to Keys... I'm not pointing at you, I'm pointing at the article)

What the hell is this line of crap? OpenCL has nothing to do with CUDA. OpenCL is to CUDA as OpenGL is to a GPU, plain and simple. OpenCL has as much capability on a Nvidia GPU as it would on an AMD/ATI GPU as it would on an Intel GPU as it would on a VIA GPU as it would on an AMD CPU as it would on an Intel CPU as it would... shall I keep going?

OpenCL is an API. The API wraps around proprietary technology from any given vendor and provides a common interface to the rest of the world. Be there any middleware layer in between OpenCL and the proprietary technology (read: CUDA, Stream, whatever), it doesn't matter. What matters is the end result matches across ALL platforms.

OpenCL wasn't designed around CUDA or Nvidia at all. OpenCL was in fact designed to REMOVE the need for proprietary tools and architecture (read: Apple didn't want to be locked into CUDA/Nvidia). AMD/ATI didn't design anything as an afterthought either; Stream was around well before OpenCL came around.
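SunnyD's core point - that a common API fronts different vendor backends and guarantees the same result on all of them - can be sketched as a plain abstraction-layer pattern. This is an analogy, not real OpenCL host code; all class and function names below are invented:

```python
# Analogy sketch (not real OpenCL code): one common interface, many
# vendor implementations underneath. The caller never sees whether the
# work ran on a CUDA-style GPU, a Stream-style GPU, or a plain CPU.

from abc import ABC, abstractmethod

class ComputeBackend(ABC):
    """The 'OpenCL' role in the analogy: a vendor-neutral interface."""
    @abstractmethod
    def run_kernel(self, data):
        ...

class CudaLikeBackend(ComputeBackend):
    def run_kernel(self, data):
        # Pretend this dispatched to an nVidia GPU.
        return [x * x for x in data]

class CpuBackend(ComputeBackend):
    def run_kernel(self, data):
        # Same computation on a plain CPU; the result must match.
        return [x * x for x in data]

def run_everywhere(backend: ComputeBackend, data):
    # Application code only touches the common interface.
    return backend.run_kernel(data)
```

The guarantee SunnyD describes is exactly that `run_everywhere` returns identical results no matter which backend is plugged in; whatever proprietary layer sits beneath the interface is the vendor's problem, not the application's.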
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Qbah
True, my bad. Seems I projected the shadow part onto the Doom3 comment. Doom3 ran better on FX cards cause of UltraShadow. However, did the fact that those extensions weren't supported by X19xx cards hinder their performance in games? Seems the general consensus was that the X1950XTX was the fastest card of its generation, bested only by the 7950GX2, when it scaled?
The whole point in this part was that Radeon owners didn't lose out on anything just because a particular feature wasn't included in their cards. And in my opinion they're not losing out on anything now, either. Previously both camps offered a similar experience - it didn't matter which game engine was used.

Depends on who you ask. The PCF gave nVidia cards smoother soft-shadows at a lower performance cost. Some thought it made the shadows look better, more realistic than on ATi hardware.

Originally posted by: Qbah
I guess you can look at it this way, sure. Though I find such a success meaningless. And with that view on the matter, could you recommend a more expensive card just because it offers something that might or might not be needed?

I don't understand the question. PhysX is free, and nVidia and ATi cards are pretty evenly matched in price and performance, right? So why would I have to recommend a more expensive card? Why couldn't I recommend a similarly priced card because it offers PhysX? Why does it have to be a more expensive card?

Originally posted by: Qbah
No, they don't. But why develop and invest into something that you don't support? If every current-gen card would support PhysX, this would be incentive enough for everybody to be using it. There would be no reason not to... Until that is the case, PhysX won't be anything big.

Game developers don't agree with you. PhysX adoption skyrocketed when nVidia announced their takeover and plans for GPU support. As a direct result, Havok usage dropped off.

Originally posted by: Qbah
For PhysX to be a widely used standard, nVidia needs to do it. Otherwise PhysX will die and a feature of their cards will go forgotten. Imagine all the people that bought nVidia cards cause they can do PhysX seeing the standard dying cause of lack of support.

nVidia doesn't care whether PhysX is widely adopted or not. They just want to sell hardware. If they can do that without PhysX supporting OpenCL, then that is what they will do.
If they think that supporting OpenCL will sell more nVidia cards, then that is what they will do.
If they think PhysX doesn't do anything for sales, then they will just drop it or sell it on.

I fail to see the logic that nVidia needs ATi's support in order to sell more of its own hardware. If anything, it seems almost a contradiction.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Originally posted by: SunnyD
Originally posted by: Keysplayr
"I can also see reasons why nVidia's architecture would run better, as OpenCL closely matches Cuda's design, and Cuda's design is based around the nVidia architecture. ATi has a completely different architecture, and has had to add local memory to the 4000-series just to get the featureset right for OpenCL. I doubt that their 'afterthought' design is anywhere near as efficient as nVidia's is."

(Note to Keys... I'm not pointing at you, I'm pointing at the article)

What the hell is this line of crap? OpenCL has nothing to do with CUDA. OpenCL is to CUDA as OpenGL is to a GPU, plain and simple. OpenCL has as much capability on an Nvidia GPU as it would on an AMD/ATI GPU, an Intel GPU, a VIA GPU, an AMD CPU, an Intel CPU... shall I keep going?

OpenCL is an API. The API wraps around proprietary technology from any given vendor and provides a common interface to the rest of the world. Whether or not there is a middleware layer between OpenCL and the proprietary technology (read: CUDA, Stream, whatever) doesn't matter. What matters is that the end result matches across ALL platforms.

OpenCL wasn't designed around CUDA or Nvidia at all. OpenCL was in fact designed to REMOVE the need for proprietary tools and architectures (read: Apple didn't want to be locked into CUDA/Nvidia). AMD/ATI didn't design anything as an afterthought either; Stream was around well before OpenCL came along.

Sorry Sunny, that was a quote I took from Scali. That's why it was in quotes. I have fixed the post you are referring to. So, you should redirect your question to Scali.
 

Scali

Originally posted by: SunnyD
OpenCL is to CUDA as OpenGL is to a GPU, plain and simple. OpenCL has as much capability on an Nvidia GPU as it would on an AMD/ATI GPU, an Intel GPU, a VIA GPU, an AMD CPU, an Intel CPU... shall I keep going?

Funny you should say that, because the OpenGL and Direct3D APIs have largely dictated how GPUs were designed and how they evolved (early GPUs were hopeless at OpenGL... nVidia was one of the first with a GPU that could run most OpenGL code efficiently).
GPUs are designed to run OpenGL and Direct3D operations efficiently.
GPGPUs can be designed to run OpenCL operations efficiently.
nVidia's GPGPU is designed to run Cuda efficiently, and OpenCL is very similar to Cuda.
ATi's GPGPU is designed to run entirely different code efficiently, not Cuda, and as such not OpenCL or DX11 CS.

Originally posted by: SunnyD
OpenCL wasn't designed around CUDA or Nvidia at all. OpenCL was in fact designed to REMOVE the need for proprietary tools and architecture (read: Apple didn't want to be locked into CUDA/Nvidia) AMD/ATI didn't design anything as an afterthought either, Stream was around well before OpenCL came around.

OpenCL is basically Cuda with some of the hardware-specific parts abstracted. The computational model is still the same, which means that you want the same underlying hardware as well.
You might want to look at some Cuda and OpenCL code to see how similar they are... and read about Brook+ (the solution for ATi) and how they are struggling with their compilers and all.
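[Editor's note: Scali's similarity claim is easy to check side by side. The two vector-add kernels below are the standard textbook example, written here purely for illustration, not taken from any shipping code. Beyond the keyword spelling and how the global index is obtained, they are line-for-line the same.]

```cuda
// CUDA C kernel: one thread per array element
__global__ void vec_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        c[i] = a[i] + b[i];
}
```

```c
// OpenCL C kernel: same model, different spelling
__kernel void vec_add(__global const float *a, __global const float *b,
                      __global float *c, int n)
{
    int i = get_global_id(0);  // global work-item index
    if (i < n)
        c[i] = a[i] + b[i];
}
```

The concepts map one-to-one: CUDA's grid of thread blocks is OpenCL's NDRange of work-groups, and CUDA's `__shared__` memory is OpenCL's `__local` memory - which is exactly the local memory Scali says ATi had to add to the 4000-series.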
 

Qbah

I only have a comment on the part below; as for the rest, I think we just have different views and talking more about it won't change anything.

Originally posted by: Scali
I don't understand the question. PhysX is free, and nVidia and ATi cards are pretty evenly matched in price and performance, right? So why would I have to recommend a more expensive card? Why couldn't I recommend a similarly priced card because it offers PhysX? Why does it have to be a more expensive card?

It would be foolish to recommend the one without PhysX if both cost the same. But that is the situation now - not a month or more ago, when nVidia supporters were pushing GeForce cards because they could do PhysX. Back then it didn't matter to them that you had to pay much more for an nVidia card while the Radeon had similar performance, if not better (9800GTX vs HD4850).
 

Scali

Originally posted by: Qbah
It would be foolish to recommend the one without PhysX if both cost the same. But that is the situation now - not a month or more ago, when nVidia supporters were pushing GeForce cards because they could do PhysX. Back then it didn't matter to them that you had to pay much more for an nVidia card while the Radeon had similar performance, if not better (9800GTX vs HD4850).

Well don't ask me, ask the nVidia supporters that pushed PhysX. I wasn't one of them.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Originally posted by: AstroManLuca
It's so true. This is the reason that all of Sony's proprietary music and video formats died without ever putting up a fight. I think the main point he's making is that nVidia is overreaching. They're getting too greedy, trying to use PhysX as a way of selling hardware without realizing that PhysX will never take off if only a certain percentage of video cards can use it. Even if it does show up in a lot more games in the future, it'll STILL never take off, because no game developer is going to limit themselves to developing for nVidia only - they'll have to ensure that even their PhysX-enabled games run and look great on non-nVidia platforms.

Using proprietary standards to push your own hardware is so passe anyway. Everything's going to be done with OpenCL in the future. nVidia can either port it or let it die, because they're not going to make any money from it.

cough *blu ray* cough
 

Pantalaimon

Senior member
Feb 6, 2006
341
40
91
Originally posted by: bryanW1995
cough *blu ray* cough

Blu-ray is not a Sony-only format. They may be the most visible, but they were not the only ones developing it. The format was jointly developed by the Blu-ray Disc Association (BDA), which includes, among others: Panasonic, Pioneer, Philips, Samsung, LG, Sharp, and Sony.