Originally posted by: akugami
Originally posted by: Keysplayr
Quote from Scali: "I can also see reasons why nVidia's architecture would run better, as OpenCL closely matches Cuda's design, and Cuda's design is based around the nVidia architecture. ATi has a completely different architecture, and has had to add local memory to the 4000-series just to get the featureset right for OpenCL. I doubt that their 'afterthought' design is anywhere near as efficient as nVidia's is."
This is pretty much the reason I am a little confused as to why ATI fans spurn CUDA/PhysX while embracing OpenCL. If OpenCL closely matches CUDA's design, how well do you think DirectX Compute will run on ATI GPUs (if it runs at all, rather than offloading to the CPU) compared to Nvidia's current architecture, not to mention the GT300?
I think that most people are somewhat agnostic towards PhysX. I personally don't care too much one way or another. I think most people (excepting fanboys and anti-fanboys) can see the potential but also see that PhysX is not in any way guaranteed to win out. If it does, it does and we'll buy hardware that supports it. If it doesn't, then it doesn't and we'll buy whatever hardware supports the proper physics acceleration (and other GPGPU) standards.
This also doesn't change the fact that for ATI, not supporting PhysX was the right business decision. Technologies live and die not only on their technical merits but on business considerations as well.
Therein lies the rub: nVidia fanboys (not saying you) simply refuse to acknowledge that there are valid reasons for ATI not to adopt PhysX, and valid reasons for gamers (and others who depend on GPUs) not to fawn over PhysX like it's the best thing since sliced bread.
PhysX is currently the best physics solution, but it's still too early in the game to crown a winner. Contrary to what the fanboys say, PhysX is a bit underwhelming at the moment, though the potential is there. Anyone looking at it without bias has to see that PhysX could rule the roost, but there is also a real chance it falls flat in a year or two when Havok hits back. The same goes for CUDA, which could lose out to something developed by Microsoft. Again, business dictates the success of a technology as much as the quality of the technology itself. Plenty of promising technologies have been beaten by seemingly inferior rivals for business reasons; the fanboys refuse to believe that.
I'm actually more interested in nVidia GPUs for video encoding than for physics acceleration. That field is also young, but unlike physics acceleration it is considerably more mature and more likely to succeed, IMHO.
And as an aside, I don't believe nVidia has really designed a GPU for PhysX yet. Parts of their GPU design were meant for general GPGPU use, which happened to benefit PhysX as well. nVidia didn't buy Ageia until early 2008, by which point most of the design work going into the GT200 cores was likely already set in stone. I think PhysX will get a kick in the rear in the iteration of nVidia's GPUs after the GT300, as they truly start to integrate what they bought from Ageia into their designs.
Originally posted by: Scali
nVidia doesn't NEED PhysX to sell their products. nVidia's products are successful enough on their own. And that's where the 'danger' lies. PhysX will 'sneak into' the market because it piggy-backs onto the sales of nVidia GPUs. Which is why more than 50% of all gamers already have support for PhysX. Since PhysX is free for use unlike Havok, it's very tempting for developers to use it in their games. And since they can then add extra effects with little extra effort for the 50+% of their audience that owns nVidia hardware (and through TWIMTBP nVidia will actually help you add these effects to your games), it is tempting for developers to do so. It can give them a bit of extra flash over competing games and boost sales.
I beg to differ. nVidia's products are wildly successful now, but the landscape is set to change dramatically over the next two years. First, Intel is entering the market, and while it would be extremely hard for them to win over hardcore gamers, they can easily let their GPUs piggyback on their CPU business. And we all know which physics product Intel will be supporting. Second, both Intel and AMD are moving toward integrated CPU/GPUs, where a multi-core processor contains not only two or more CPU cores but likely at least one GPU core. As process nodes shrink, one can even imagine multiple CPU and GPU cores in a single package. That cuts nVidia out completely.
From that perspective, I'd say nVidia may not need PhysX now, but they definitely want and need nVidia-owned technologies in the market if they wish to stay relevant long term. That is, assuming they don't go the Transmeta route and put out an x86-emulating CPU/GPU: multiple emulated x86 cores alongside a GPU.