"Inevitable Bleak Outcome for nVidia's Cuda + Physx Strategy"


zebrax2

Senior member
Nov 18, 2007
977
70
91
1. GPU-accelerated physics is still in its early stages, so most of its implementations are rather simplistic (additional effects here and there), and most consumers today feel it's not yet a consideration when buying a video card but rather an added bonus (and if the competing cards tie in price/performance, of course you will choose the one with more bonuses that benefit you). I'm pretty sure, though, that in the future it will be a must-have feature.

2. Did AMD choose Havok simply because Intel owns it? I mean, they already know how strong Intel is from their own experience.
 

thilanliyan

Lifer
Jun 21, 2005
12,084
2,281
126
Originally posted by: Scali
1) The developer chose to always use the CPU for PhysX.

And I can imagine it staying that way unless all types of video cards can run PhysX (which they can't currently).

Originally posted by: Scali
What amazes me is that it somehow is not a problem that Havok is owned by Intel.
Originally posted by: Keysplayr
You mean, something similar to what Intel is going to do.
Intel does not have any competing GPU solution at the moment. I think Havok was the lesser of 2 evils for the foreseeable future. Intel most likely won't get their GPU business up to speed for at least a couple of years. This next bit is just my speculation-->If they had chosen PhysX, there's no guarantee that nVidia wouldn't immediately have some extra PhysX features or performance that would make it much more attractive to buy one of their cards (which is obviously very bad for AMD) whereas that kind of situation with Havok and Intel is at least a couple of years away. And who's to say PhysX would even run well on ATI's current cards? They could run badly, in which case it's smart that they decided not to adopt PhysX in its current form.

Originally posted by: Scali
The thing is, it is IMPOSSIBLE to make PhysX work on OpenCL currently.
We don't know if nVidia plans to do this. What we do know is that there's a painfully obvious reason why it doesn't work on ATi hardware yet: there IS no OpenCL.

Is it impossible? I thought I remembered Keysplayr or someone else mentioning that it was possible to run PhysX through Cuda (which would be running through OpenCL <-- my terminology might not be correct there)?
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
That article was pig-headed in the sense that it failed to acknowledge what NV gained with PhysX. They will now make the transition to OpenCL much more easily, and it's not like they will lose any of the functionality they have put into their GPUs once DX11 comes out.

CUDA got the ball rolling, and accelerated the GPGPU revolution for sure.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: Qbah
Again, the difference being that a 7-series could run the same thing as a X19xx series, more or less at the same speed. Both camps ran DX9 code fine. Because DX9 was and still is an industry standard.

At the beginning the GeForce 7 series was even slightly faster overall, but when developers pushed the DX9 envelope with lots of shaders and graphical effects, the GeForce 7 series performed much slower than the X19x0 series because of its inefficient and slow SM3.0 branching performance, and the lack of FSAA when HDR was used.

Originally posted by: Scali
I think people are way overreacting to this lock-in thing. We were 'locked in' to nVidia for a LONG time with DX10 as well, because ATi not only had a huge delay in introducing their first DX10 cards (2900 series), they were also such poor performers that they weren't really an option for any informed buyer.

They weren't really poor performers; they just weren't fast enough to outperform the high-end 8800 GTX. The 2900 was able to keep up with the GTS 640 in most scenarios as long as you don't turn anti-aliasing on, pretty much like the HD 3870 is currently doing, offering great performance at high quality in most games at midrange resolutions.

Originally posted by: Keysplayr
"I can also see reasons why nVidia's architecture would run better, as OpenCL closely matches Cuda's design, and Cuda's design is based around the nVidia architecture. ATi has a completely different architecture, and has had to add local memory to the 4000-series just to get the featureset right for OpenCL. I doubt that their 'afterthought' design is anywhere near as efficient as nVidia's is."

That local memory design is also available in the GTX series. I don't think you have enough expertise, or ever worked at ATi, to know whether it was an afterthought; that's just your own opinion, not an absolute truth.

Originally posted by: Genx87
Once Intel gets larry up and running they will do the same on the GPU front.

For antitrust reasons, AMD can't go away even if Intel wishes it, but I find it very doubtful that a fully programmable pipeline can outperform the much faster, heavily parallel fixed-function hardware found in nVidia's or ATi's GPUs.

Originally posted by: Scali
Yea, nVidia probably knew that too, being as inviting as they were. As if they were trying to lure ATi into the trap.

You can see it now with Folding@Home. ATi has been at it for years... nVidia recently made their first Cuda-based client for Folding@Home, and it completely blows ATi away. It will be interesting to see what happens when the first OpenCL software emerges.
And how ironic it would be if Havok would run better on nVidia GPUs than on ATi's.

Folding@Home runs slower because it isn't using the local data share in the HD 4x00 architecture, but when that's implemented it should run as fast or even faster. Look for example at Milky Way@Home, http://www.brightsideofnews.co...s-in-milkywayhome.aspx which runs much faster than its CPU counterpart.

Guys, farewell. I will be on annual military training for 15 days, so I won't be able to post from today until the end of May. Have a good time and remember, competition is always good!!
 

yusux

Banned
Aug 17, 2008
331
0
0
A PhysX card can be had for around $60, last time I checked on fleabay. What I'm more concerned about is when the hell ATi drivers will get ambient occlusion, and when ATi will come up with something new in their drivers and cards instead of just raw performance, and bring something really fresh and new like what nVidia has been doing. It's like pizza: sure, you can have tons of Costco ones, but they'd still never beat an 8-topping Round Table.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Well so far CUDA and PhysX have been very successful. Especially compared to the competition.

Havok on the GPU is still only a "possibility". OpenCL is barely out of beta. Brook has all but been abandoned.

Really they have no competition at this point.

When you look at the huge market share gains that NVIDIA has had and the success of PhysX titles like UT3 & Mirror's Edge, it's amazing what they have done in such a short time. Especially when the competition has pretty much accomplished nothing.

 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: SunnyD
You have that backwards... CUDA was designed to run on Nvidia hardware efficiently. Just as Stream was designed to run on ATI hardware efficiently.

No I don't, because Cuda is the name for the entire GPGPU architecture, including the hardware design and instruction set.

Originally posted by: SunnyD
OpenCL isn't designed to run on any particular platform at all. OpenCL is NOT CUDA. Just because you can draw similarities between the two doesn't make it so. Otherwise, you might as well say Stream is basically CUDA (which you've already said it is not).

OpenCL is basically just taking Cuda and making it a platform-independent framework. If you've ever bothered to look at both, you'd see the glaring similarities. It's much like how D3D's HLSL, OpenGL's GLSL and nVidia's Cg are virtually identical.
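To illustrate (a rough sketch from memory, untested, with made-up names): a trivial vector-add kernel is nearly the same code in both, differing mostly in keywords and in how you get the thread index.

// Cuda version:
__global__ void vec_add(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // thread index, Cuda style
    if (i < n)
        c[i] = a[i] + b[i];
}
// launched from the host as: vec_add<<<numBlocks, threadsPerBlock>>>(a, b, c, n);

// The same kernel in OpenCL C, for comparison:
// __kernel void vec_add(__global const float* a, __global const float* b,
//                       __global float* c, int n)
// {
//     int i = get_global_id(0);                     // work-item index, OpenCL style
//     if (i < n)
//         c[i] = a[i] + b[i];
// }

The host-side setup code differs more, but the execution model (a grid of blocks/work-groups made up of threads/work-items) is the same.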

As for Stream... ATi started over with Stream when the first drafts of OpenCL came out. At first ATi had a different GPGPU solution, which WAS made for their hardware... but then they decided to change course and build Stream around OpenCL.

Originally posted by: SunnyD
OpenCL will sit atop of Stream just as well as it will sit atop of CUDA.

Obviously it will sit atop of Stream in the case of ATi.
But the "just as well" part is up for discussion. ATi has a VERY different architecture from nVidia.

Originally posted by: SunnyD
Some of the arbitrary constructs may appear similar to their CUDA/Stream counterparts, but do not make the mistake of saying OpenCL was designed around CUDA, because you would be flat out wrong.

Don't underestimate me. I'm not some random idiot. I have a long history as a developer with GPU/GPGPU code. I think you're the one making a mistake, because you don't seem to think that different GPGPU designs have any effect on their performance.
I'll give you an example:
AMD and Intel CPUs both run x86 code... They are both DESIGNED to run x86 code.
Yet Intel's CPUs run the code more efficiently. Why? Their architecture is different, and handles the x86 code that is out there better.
The same will happen with OpenCL... one GPGPU will run it better than the other. I'm saying that this will be the nVidia GPGPUs.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: munky
It means people aren't going to choose NV over AMD just because some no-name developer makes a crappy game with Physx.

That wasn't the point.
The point was that PhysX is free, and as such can be used by 'no-name developers' to make 'crappy games'...

Originally posted by: munky
Again, unless blockbuster games like Crysis or Farcry 2 begin using gpu-accelerated Physx, the technology will have little to do with anything that matters.

We'll just have to wait and see then.

Originally posted by: munky
It runs PhysX on Nvidia's gpu only, and only those that support Cuda.

But there IS nothing other than Cuda.

Originally posted by: munky
Care to explain that? Or are you just repeating blanket statements originating from Nvidia marketing?

I have already explained it.
Cuda also refers to nVidia's hardware architecture, and how it is organized around scalar threads running in parallel on SIMD processors, and how they are scheduled in 'warps' and such.
The hardware and programming language go hand-in-hand with Cuda.
Since OpenCL's programming language and API are incredibly similar to Cuda's (same concepts in terms of threading, warp scheduling, etc.), it follows that the hardware needed to run OpenCL efficiently is also incredibly similar to nVidia's.
And obviously, ATi's hardware is NOT similar to nVidia's: their instructions process up to 5 scalars at a time, which is why they get those impressive marketing figures like '800 shader processors', but in the worst case they only get 20% efficiency out of them... which is why nVidia, with 'only' 240 shader processors, is still faster. They don't have that efficiency problem because of their different approach.
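To make that concrete, here's a rough, untested sketch (the kernel and its math are made up purely for illustration). Each Cuda thread is written as plain scalar code, and the hardware fills its SIMD units by running 32 such threads side by side in a warp. A 5-wide unit instead has to find 5 independent operations inside each thread, and a dependency chain like this gives the compiler nothing to pack:

__global__ void serial_chain(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one scalar work item per thread
    if (i >= n) return;

    float x = data[i];
    // Every operation depends on the previous result, so there is no
    // instruction-level parallelism within the thread to fill wide VLIW slots.
    x = x * 1.0001f + 0.5f;
    x = sqrtf(x);
    x = x * x - 0.25f;
    data[i] = x;
}

Scalar-per-thread hardware doesn't care about that; it only needs enough threads in flight to hide latency.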
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: thilan29
And I can imagine it staying that way unless all types of video cards can run PhysX (which they can't currently).

I don't... various games with hardware acceleration have already come out, and many more will follow. Big names like Epic and EA are backing PhysX, which we've already seen in 'big' titles like Unreal Tournament and Mirror's Edge.

Originally posted by: thilan29
Is it impossible? I thought I remembered Keysplayr or someone else mentioning that it was possible to run PhysX through Cuda (which would be running through OpenCL <-- my terminology might not be correct there)?

What I meant is: there is no OpenCL support on any hardware yet. The OpenCL specifications have only just been drawn up, and early beta drivers are going through conformance tests now.
This means that no consumer can run OpenCL code on their system yet.
So even if PhysX DID use OpenCL, it STILL wouldn't work on ATi hardware.
That's what makes the whole discussion so senseless.
People act as if OpenCL is a valid option, while it isn't. Consumers probably won't see the first OpenCL drivers until the end of the year. By that time, PhysX will already have been around via Cuda for 1.5 years. Does anyone really think that nVidia should just have waited for OpenCL instead of getting a 1.5-year head start on the competition because THEY put the effort into developing Cuda? I think nVidia is right in reaping the benefits of their work while they can.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: evolucion8
They weren't really poor performers; they just weren't fast enough to outperform the high-end 8800 GTX. The 2900 was able to keep up with the GTS 640 in most scenarios as long as you don't turn anti-aliasing on, pretty much like the HD 3870 is currently doing, offering great performance at high quality in most games at midrange resolutions.

Yes they were poor performers. They were ATi's top offering, and had a large and power-hungry GPU with a big 512-bit bus. It just failed to perform, so ATi had to drop the prices.
They quickly had to 'reinvent' the GPU to a more efficient model, which they did with the 3000-series, where they abandoned the high-end market for single GPUs altogether.

Originally posted by: evolucion8
That local memory design is also available in the GTX series. I don't think you have enough expertise, or ever worked at ATi, to know whether it was an afterthought; that's just your own opinion, not an absolute truth.

That is EXACTLY the point. Local memory has been in nVidia's GPUs since the G80, so the beginning of Cuda.
OpenCL and Direct3D11 CS adopted this approach.
Since ATi didn't have this in their design, they had to add it in the 4000-series (which was basically just a refresh of the 3000-series otherwise) in order to make their GPUs compatible with the upcoming OpenCL and D3D standards.
Clearly it's an afterthought, because nVidia's G80 GPUs will run OpenCL and D3D11 CS, while ATi's 2000 and 3000-series can not.
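For anyone wondering what this local memory actually is: a small on-chip scratchpad shared by the threads of one block/work-group. Cuda calls it __shared__, OpenCL calls it __local, and D3D11 compute shaders call it groupshared. A minimal sketch (untested, names made up, assumes a block size of 256 threads) of a block-wide sum that keeps all of its intermediate traffic in that scratchpad:

__global__ void block_sum(const float* in, float* out, int n)
{
    __shared__ float tile[256];                      // on-chip local memory
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                                 // OpenCL: barrier(CLK_LOCAL_MEM_FENCE)

    // Tree reduction within the block; no round trips to video memory.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        out[blockIdx.x] = tile[0];                   // one partial sum per block
}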

Originally posted by: evolucion8
Folding@Home runs slower because it isn't using the local data share in the HD 4x00 architecture, but when that's implemented it should run as fast or even faster. Look for example at Milky Way@Home, http://www.brightsideofnews.co...s-in-milkywayhome.aspx which runs much faster than its CPU counterpart.

'Should' is no guarantee. I'll believe it when I see it.
I've heard people on sites like Beyond3D saying that Folding@Home would actually be SLOWER on ATi cards when using shared memory, because ATi's shared memory implementation is nowhere near as efficient as nVidia's for this case.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Originally posted by: thilan29
Originally posted by: Keysplayr
Agreed!! ATI should have jumped on board. They probably still could. AFAIK, the door has always been open. Never closed.

The problem is that it's owned by nVidia (and I as well as many others could fathom nVidia using that position to sell more cards one way or another...which is their right BUT in the long run I don't think consumers would benefit and that part irks me). AFAIK something like AA is not owned by either company.

Originally posted by: Scali
What amazes me is that it somehow is not a problem that Havok is owned by Intel.
Originally posted by: Keysplayr
You mean, something similar to what Intel is going to do.

Originally posted by: Thilan29
Intel does not have any competing GPU solution at the moment. I think Havok was the lesser of 2 evils for the foreseeable future. Intel most likely won't get their GPU business up to speed for at least a couple of years. This next bit is just my speculation-->If they had chosen PhysX, there's no guarantee that nVidia wouldn't immediately have some extra PhysX features or performance that would make it much more attractive to buy one of their cards (which is obviously very bad for AMD) whereas that kind of situation with Havok and Intel is at least a couple of years away. And who's to say PhysX would even run well on ATI's current cards? They could run badly, in which case it's smart that they decided not to adopt PhysX in its current form.

At the moment? OK, yes, right as of this moment they do not. What do you think they've been working on with Larrabee?
A GPU/GPGPU competitor. You think Havok is the lesser of two evils because you think it's at least a couple of years out?
Havok is not evil. PhysX is not evil. They are both methods for processing physics in games, which are both GOOD things. Not evil. Your speculation part is probably right on the money. But it sounds like you're saying that in choosing Havok over PhysX, ATI is just putting off the inevitable, which is Intel screwing them over just as badly, if not worse, than you believe Nvidia would. So in your thinking, ATI is just going with whichever company takes "longer" to F them over.
And lastly, I believe you are correct, and similar to my thoughts, when you stated "Who's to say PhysX would even run well on ATI's current cards?". I believe they could be made to run PhysX, without a doubt, they just wouldn't be able to hold a candle to how it runs/performs on Nvidia's current hardware.
Your very last comment also mimics my thoughts: "They could run badly, in which case it's smart that they decided not to adopt PhysX in its current form." Which really does mean that PhysX IS evil, but only to ATI. In "choosing" (as if they had a choice) Intel's Havok, they have indeed made the smarter move as far as putting off any sort of embarrassment trying to run PhysX on their GPUs, and saving that for later, when Intel's Havok can't be run successfully on their GPUs either. Intel has had its guns pointed at Nvidia ever since the first public demonstration of the G80 flat-out blowing the doors off Intel's very fastest CPU. Even more so when a single custom PC equipped with 6 8800GTXs (retail value somewhere around $5000 at the time) performed equivalently to an Intel cluster of 112 CPUs (retail value about $40,000 at the time) in a specific workload. How's that for an eye opener? And that was CUDA in its infancy, at its introduction.
Intel is brushing aside the "squirt" and going directly at the source, kicking sand in their face at Muscle Beach.
As time passes, Thilan, the choices we have seen companies make actually make perfect sense when we take in everything that has been going on over the past two years.
The G80 launch started it all. Intel announces Larrabee. Nvidia buys Ageia and shortly after (about a month) announces PhysX on GPUs. ATI knows their hardware, won't touch it with a 20-foot cattle prod, and makes the only possible announcement that won't show how inferior their GPGPU capabilities are by saying they now support Havok. Bought themselves a year, maybe two, before they have to actually "put out" anything relating to physics on a GPU.

It all makes perfect sense.

 

dadach

Senior member
Nov 27, 2005
204
0
76
Originally posted by: Wreckage
Well so far CUDA and PhysX have been very successful.

Just the fact that they exist does not make them successful... it would be nice if they actually had something to offer besides just being there.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91

Originally posted by: Qbah
Again, the difference being that a 7-series could run the same thing as a X19xx series, more or less at the same speed. Both camps ran DX9 code fine. Because DX9 was and still is an industry standard.

Originally posted by: evolucion8
At the beginning the GeForce 7 series was even slightly faster overall, but when developers pushed the DX9 envelope with lots of shaders and graphical effects, the GeForce 7 series performed much slower than the X19x0 series because of its inefficient and slow SM3.0 branching performance, and the lack of FSAA when HDR was used.

Probably correct here. The X1900 series were excellent GPUs.

Originally posted by: Scali
I think people are way overreacting to this lock-in thing. We were 'locked in' to nVidia for a LONG time with DX10 as well, because ATi not only had a huge delay in introducing their first DX10 cards (2900 series), they were also such poor performers that they weren't really an option for any informed buyer.

Originally posted by: evolucion8
They weren't really poor performers; they just weren't fast enough to outperform the high-end 8800 GTX. The 2900 was able to keep up with the GTS 640 in most scenarios as long as you don't turn anti-aliasing on, pretty much like the HD 3870 is currently doing, offering great performance at high quality in most games at midrange resolutions.

Nah, they were a huge let-down. The 2900 especially. The 3xxx series offered only a small bit of relief in power consumption (which was off the charts with the 2xxx), heat, and price, but offered no relief in AA performance. ATI sucked wind since the X1900 series, then made an incredible comeback with the 4xxx series. Kudos to ATI for that.

Originally posted by: Keysplayr
"I can also see reasons why nVidia's architecture would run better, as OpenCL closely matches Cuda's design, and Cuda's design is based around the nVidia architecture. ATi has a completely different architecture, and has had to add local memory to the 4000-series just to get the featureset right for OpenCL. I doubt that their 'afterthought' design is anywhere near as efficient as nVidia's is."

Originally posted by: evolucion8
That local memory design is also available in the GTX series. I don't think you have enough expertise, or ever worked at ATi, to know whether it was an afterthought; that's just your own opinion, not an absolute truth.

No, it was a knee-jerk afterthought. Dude, anyone could see this. ATI knew they had a GPGPU, but in no way did they even approach the level of thought and R&D that Nvidia had put in. ATI was only interested in making a gaming GPU with the side benefit of being able to run a few apps as a GPGPU to make the villagers happy. Nvidia took that to the extreme and went full bore with their GPGPU architecture. It's painfully apparent that this is true. You don't need to be a rocket scientist to observe and deduce what has happened over the last two years. And that local memory you speak of has been present in Nvidia GPUs since G80. Right from the start. The 2xxx and 3xxx did not have this. Afterthought.

Originally posted by: Genx87
Once Intel gets larry up and running they will do the same on the GPU front.

Exactly. It's just going to take a while longer. An adjournment for ATI.

Originally posted by: evolucion8
For antitrust reasons, AMD can't go away even if Intel wishes it, but I find it very doubtful that a fully programmable pipeline can outperform the much faster, heavily parallel fixed-function hardware found in nVidia's or ATi's GPUs.

Agreed, but we'll have to wait and see. After all, they are Intel. Never write them off.

Originally posted by: Scali
Yea, nVidia probably knew that too, being as inviting as they were. As if they were trying to lure ATi into the trap.

You can see it now with Folding@Home. ATi has been at it for years... nVidia recently made their first Cuda-based client for Folding@Home, and it completely blows ATi away. It will be interesting to see what happens when the first OpenCL software emerges.
And how ironic it would be if Havok would run better on nVidia GPUs than on ATi's.

It all comes down to who has the better architecture suited for this purpose. As it stands, and as demonstrated over and over again with various applications, benchmarks and demos, Nvidia has the better GPGPU architecture. I don't think this can be argued legitimately.

Originally posted by: evolucion8
Folding@Home runs slower because it isn't using the local data share in the HD 4x00 architecture, but when that's implemented it should run as fast or even faster. Look for example at Milky Way@Home, http://www.brightsideofnews.co...s-in-milkywayhome.aspx which runs much faster than its CPU counterpart.

Quoted from evolucion8: "I don't think you have enough expertise, or ever worked at ATi, to know whether it was an afterthought; that's just your own opinion, not an absolute truth."
Does this mean you have the expertise or worked at ATI to know that F@H would run as fast or faster on ATI hardware if the local data share in the HD 4x00 is used? Well, start writing that code, because nobody else seems to want to. I'll believe it when I see it.
MW@H
We've been through this before. Of course MilkyWay@Home would run faster on an ATI GPU than a CPU. I don't even know why you're including this in our conversation here.
As we have stated before, there isn't an Nvidia CUDA client for MW@H for a direct GPGPU-to-GPGPU comparison. Using your logic, probably even an 8600GT could wipe the floor with that CPU in your link to the MW@H bench, and perhaps take on the HD 4x00.

Originally posted by: evolucion8
Guys, farewell. I will be on annual military training for 15 days, so I won't be able to post from today until the end of May. Have a good time and remember, competition is always good!!

Good Luck, be safe. See you in 16.

 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Originally posted by: dadach
Originally posted by: Wreckage
Well so far CUDA and PhysX have been very successful.

Just the fact that they exist does not make them successful... it would be nice if they actually had something to offer besides just being there.

You'd be right if that were true.

Over 220 applications in use for CUDA.
Tesla platforms being used by universities and researchers from academic to corporate.
I'd advise you to actually go and read up on what CUDA has accomplished since its introduction. It is pretty extensive.

PhysX now has several games that support it. More coming.

I'd say CUDA and PhysX have a bit more under their belts than just existing.

Your statement is just plain......... I dunno what it is. ;)
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Keysplayr
ATI knows their hardware and won't touch it with a 20 foot cattle prod and makes the only possible announcement that won't show how inferior their GPGPU capabilities are by saying they now support Havok. Bought themselves a year, maybe two, before they have to actually "put out" anything relating to physics on a GPU.

It all makes perfect sense.

Yea, they bought enough time to improve their architecture to be more competitive in GPGPU. The 5000 series, or perhaps even its successor (6000?).
Of course the downside of this is that nVidia is winning over many developers with PhysX due to lack of competition from Intel/ATi at this point.
It may be possible that by the time Havok runs well on GPGPUs, it no longer matters to most developers.

All ATi can do in the meantime is damage control, by trying to convince the masses that PhysX is useless. This seems to work better on the consumers than on the developers.

Also, there is always the small print with ATi's GPGPU stuff that it will only run on 4000-series cards: Avivo and Havok, but also OpenCL and DX11 Compute.
Whereas with nVidia, most stuff works even on the aging G80 architecture. Folding@Home works fine, PhysX works fine, Badaboom, Matlab extensions, etc etc. And so will OpenCL and DX11 Compute.
I think that's rather painful for ATi, and I don't really see why people aren't discussing this more on forums.
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
Wow, one guy coming out of the woodwork to defend PhysX. You guys are die-hard lol.
 

dadach

Senior member
Nov 27, 2005
204
0
76
Originally posted by: Keysplayr
Originally posted by: dadach
Originally posted by: Wreckage
Well so far CUDA and PhysX have been very successful.

Just the fact that they exist does not make them successful... it would be nice if they actually had something to offer besides just being there.

You'd be right if that were true.

Over 220 applications in use for CUDA.
Tesla platforms being used by universities and researchers from academic to corporate.
I'd advise you to actually go and read up on what CUDA has accomplished since its introduction. It is pretty extensive.

PhysX now has several games that support it. More coming.

I'd say CUDA and PhysX have a bit more under their belts than just existing.

Your statement is just plain......... I dunno what it is. ;)


OK, CUDA as you say is not a waste, but to me as a gamer PhysX has had zero value... I tried a GTX 260 and it was nice as far as speed and IQ go, except for the huge WoW (the game I play the most) slowdown bug... that, and I also had to manually edit the drivers Oo to get the card to recognize my Philips LCD TV... the PhysX game count, even though the technology has been out for what, 3 years, is ridiculously low... so that's too many problems for a card that is supposed to be superior to its counterparts, where I had a lot fewer problems... the whole point is that PhysX now is not worth it, and is not one of nVidia's strong points... as soon as it becomes one, you will see me first in line with a secondary NV PhysX card in my machine ;)
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: dadach
the whole point is that PhysX now is not worth it, and is not one of nVidia's strong points...

This is all just personal opinion.
The more people are biased towards ATi, the less willing they are to admit that PhysX has any worth at all.

I've seen it all dozens of times before... Every time manufacturer A introduced feature X which manufacturer B didn't support...

Regardless of how important you think PhysX is... bottom line is:
1) PhysX is available to developers today.
2) PhysX is actually being used by a number of game studios.
3) There are a few games with PhysX effects on the market already.
4) There is currently no alternative to PhysX.

These are the facts which we assume to be true, and need not be discussed.
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: Scali
That is EXACTLY the point. Local memory has been in nVidia's GPUs since the G80, so the beginning of Cuda.
OpenCL and Direct3D11 CS adopted this approach.
Since ATi didn't have this in their design, they had to add it in the 4000-series (which was basically just a refresh of the 3000-series otherwise) in order to make their GPUs compatible with the upcoming OpenCL and D3D standards.
Clearly it's an afterthought, because nVidia's G80 GPUs will run OpenCL and D3D11 CS, while ATi's 2000 and 3000-series can not.

That is very flawed reasoning. So just because one company implements a feature after the other, it's doing so simply as an "afterthought"? ATI has had a tessellator integrated into their GPUs since the R600. So I suppose that when Nvidia's next-generation card incorporates one as well in order to be DX11 compliant, it will be doing so simply as an "afterthought"? Any future Nvidia card that ends up using GDDR5 will be doing so as an "afterthought"?

Please...


 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Creig
That is very flawed reasoning. So just because one company implements a feature after the other, it's doing so simply as an "afterthought"?

I never made it a general statement as such.
However, in this case, the local storage is at the core of the GPU design.
ATi didn't redesign their entire GPU architecture when they added local storage, unlike nVidia, where the G80 was a completely fresh architecture, designed around local storage from its inception.
THAT is what makes ATi's an afterthought.
Since the local storage is so tightly coupled to the rest of the architecture, it is not something you can add afterwards and expect it to perform optimally.
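That's also why you don't just sprinkle local storage on top of an existing kernel; the code gets structured around it. A typical example (untested sketch, made-up names, assumes 16x16 thread blocks and a matrix size that is a multiple of 16) is tiled matrix multiplication, where the whole loop structure exists to stage tiles in the on-chip scratchpad:

#define TILE 16

__global__ void matmul_tiled(const float* A, const float* B, float* C, int n)
{
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;

    // March over the shared dimension one tile at a time.
    for (int t = 0; t < n / TILE; ++t) {
        As[threadIdx.y][threadIdx.x] = A[row * n + t * TILE + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * n + col];
        __syncthreads();                 // tile fully loaded before anyone reads it

        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();                 // everyone done before the tile is overwritten
    }
    C[row * n + col] = acc;
}

How well something like this performs depends directly on how the scratchpad is wired into the rest of the chip (latency, banking, interaction with the thread scheduler), which is exactly the part you can't bolt on afterwards.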

Your other comments don't make sense, since nVidia most probably WILL have a fresh architecture for DX11, not a continuation of the G80 with some DX11 features tacked on.
If not, then indeed it's an 'afterthought' as well. I just doubt this will be the case, because nVidia is long overdue for an architectural refresh anyway.
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: Scali
Originally posted by: dadach
the whole point is that PhysX now is not worth it, and is not one of nVidia's strong points...

This is all just personal opinion.
The more people are biased towards ATi, the less willing they are to admit that PhysX has any worth at all.

I've seen it all dozens of times before... Every time manufacturer A introduced feature X which manufacturer B didn't support...

Regardless of how important you think PhysX is... bottom line is:
1) PhysX is available to developers today.
2) PhysX is actually being used by a number of game studios.
3) There are a few games with PhysX effects on the market already.
4) There is currently no alternative to PhysX.

These are the facts which we assume to be true, and need not be discussed.

While some people think PhysX ranges from useful to very important, the overwhelming majority feel its value ranges somewhere from marginal to not useful.

As you said, it's all just personal opinion. :beer:
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Creig
While some people think PhysX ranges from useful to very important, the overwhelming majority feel its value ranges somewhere from marginal to not useful.

Since when did the majority ever know what is good for them?
Besides, the majority still considers it at least a bonus; the 'not useful' crowd is much smaller than that (only 30/33%).
 

dadach

Senior member
Nov 27, 2005
204
0
76
It's a bonus that only has usefulness if some games actually come out... we are still waiting.