"Inevitable Bleak Outcome for nVidia's Cuda + Physx Strategy"

Page 16

Nathelion

Senior member
Jan 30, 2006
697
1
0
The reason AMD is not running PhysX on their GPUs is because it is an API owned by their main competitor. nVidia could at any time change the API or decline to give AMD pre-release information on new versions in such a way as to keep AMD from being competitive. Of course AMD is not going to sign up for that. I'm sure PhysX could run great on AMD hardware, but it's not going to happen so long as nVidia retains complete control over the API. Would you voluntarily place yourself at the mercy of your arch-competitor? No, I didn't think so. If physics on the GPU is going to happen, OpenCL is where it's at.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: Keysplayr
Originally posted by: SickBeast
I'm pretty sure that some people actually managed to hack PhysX to run on AMD GPUs a while back. Please correct me if I'm wrong. I think I remember reading that it actually ran faster on the AMD GPUs as well.

One programmer, with Nvidia's assistance, was able to get PhysX to run on a 3000-series ATI GPU.
But where did you get the idea that it was faster? Send a link if you have one. Anyway, the effort was abandoned after an unsupportive ATI declined to help the programmer.

When I say I don't think ATI can run PhysX on their GPUs, I mean "physics" in general, whether it's PhysX, Havok, Bullet, or whatever else is out there, properly or fast enough. IMHO.

I just remember at the time there were rumors that it was "faster than G80" for PhysX.

If you look at the technical specs, I'm pretty sure that the AMD cards actually can do more MIPS/FLOPS than the NV GPUs. Again, I could be wrong here; I'm basing this off my recollection of the HD 2900 XT review vs. the 8800 GTX.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: SickBeast


I just remember at the time there were rumors that it was "faster than G80" for PhysX.

If you look at the technical specs, I'm pretty sure that the AMD cards actually can do more MIPS/FLOPS than the NV GPUs. Again, I could be wrong here; I'm basing this off my recollection of the HD 2900 XT review vs. the 8800 GTX.

Just look how much faster NVIDIA cards are at folding@home. It's not even close.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
PhysX only runs on NV hardware and is supported by a handful of games.

No, PhysX runs on nV hardware, the XBox360, PS3, Wii, Intel processors and AMD processors. The only hardware PhysX won't run on is the hardware that AMD won't let it run on.

The reason AMD is not running PhysX on their GPUs is because it is an API owned by their main competitor. nVidia could at any time change the API or decline to give AMD pre-release information on new versions in such a way as to keep AMD from being competitive. Of course AMD is not going to sign up for that. I'm sure PhysX could run great on AMD hardware, but it's not going to happen so long as nVidia retains complete control over the API. Would you voluntarily place yourself at the mercy of your arch-competitor? No, I didn't think so.

You have a point. AMD using PhysX would be almost as foolish as if they were to support an API that was completely owned by Intel instead, oh, wait a minute.....
 

zebrax2

Senior member
Nov 18, 2007
977
70
91
Originally posted by: BenSkywalker
The reason AMD is not running PhysX on their GPUs is because it is an API owned by their main competitor. nVidia could at any time change the API or decline to give AMD pre-release information on new versions in such a way as to keep AMD from being competitive. Of course AMD is not going to sign up for that. I'm sure PhysX could run great on AMD hardware, but it's not going to happen so long as nVidia retains complete control over the API. Would you voluntarily place yourself at the mercy of your arch-competitor? No, I didn't think so.

You have a point. AMD using PhysX would be almost as foolish as if they were to support an API that was completely owned by Intel instead, oh, wait a minute.....
But then again, Larrabee isn't launched yet, so at the moment they are still not a competitor in that market. Plus, we don't know yet if Larrabee will live up to its hype.

 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: zebrax2
Originally posted by: BenSkywalker
The reason AMD is not running PhysX on their GPUs is because it is an API owned by their main competitor. nVidia could at any time change the API or decline to give AMD pre-release information on new versions in such a way as to keep AMD from being competitive. Of course AMD is not going to sign up for that. I'm sure PhysX could run great on AMD hardware, but it's not going to happen so long as nVidia retains complete control over the API. Would you voluntarily place yourself at the mercy of your arch-competitor? No, I didn't think so.

You have a point. AMD using PhysX would be almost as foolish as if they were to support an API that was completely owned by Intel instead, oh, wait a minute.....
But then again, Larrabee isn't launched yet, so at the moment they are still not a competitor in that market. Plus, we don't know yet if Larrabee will live up to its hype.

ATI is just a brand name for AMD. They are first and foremost a CPU company. If they are siding with Intel out of spite for NVIDIA they will pay dearly for it. In fact they already are.
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Originally posted by: BenSkywalker
PhysX only runs on NV hardware and is supported by a handful of games.

No, PhysX runs on nV hardware, the XBox360, PS3, Wii, Intel processors and AMD processors. The only hardware PhysX won't run on is the hardware that AMD won't let it run on.

They cannot just decide to support PhysX. You need things like licensing. PhysX does not run hardware accelerated on consoles except in very rare cases where the developer has plenty of free resources.

The reason AMD is not running PhysX on their GPUs is because it is an API owned by their main competitor. nVidia could at any time change the API or decline to give AMD pre-release information on new versions in such a way as to keep AMD from being competitive. Of course AMD is not going to sign up for that. I'm sure PhysX could run great on AMD hardware, but it's not going to happen so long as nVidia retains complete control over the API. Would you voluntarily place yourself at the mercy of your arch-competitor? No, I didn't think so.

You have a point. AMD using PhysX would be almost as foolish as if they were to support an API that was completely owned by Intel instead, oh, wait a minute.....

More likely, the reason is that they are better off putting resources elsewhere than into something that only a very small percentage of consumers can use. Don't for a minute think that if they thought supporting hardware physics would place them in the #1 spot, they wouldn't dedicate the resources.

Looking at the latest hardware survey, it's going to be a long time before hardware physics is mainstream.
http://store.steampowered.com/hwsurvey/
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: Scali
Originally posted by: taltamir
When have new graphics cards EVER changed gameplay?
First-order physics calculations, as could potentially be done in PhysX, significantly alter gameplay. However, there is not a single game in existence, nor any planned, which uses first-order PhysX, because such a game would ONLY be playable on nVidia DX10 GPUs.

Wrong. UnrealEngine 3 uses PhysX for everything. If you enable hardware acceleration, all physics run on the GPU or PPU. Some games based on the UnrealEngine will also add extra eye-candy in that case.
But that doesn't take away from the fact that all game physics will run on GPU or PPU.

And they are still playable because if you don't have the 'eyecandy', the game physics are light enough to run on the CPU, still using the PhysX API.
There is no 'other physics API' in the UnrealEngine, and I doubt that any other developers would be crazy enough to develop their own physics API for that, when PhysX does the job just fine. It defies the point of using a physics API in the first place.

Now aside from that, you are not getting my point.
My point is: Why do people suddenly demand MORE than just better graphics, when new videocards, new features and new APIs have NEVER added anything other than graphics before?

I was clearly referring to first-order physics on the GPU, not first-order physics "light enough to run on the CPU".

The UE3 mods like the tornado level are the only examples of first-order physics done in PhysX which are NOT light enough to run on the CPU alone, and which therefore actually provide a gameplay benefit. But such a thing is ONLY playable on a DX10 nVidia card, so nobody in their right mind will program a game that is based on it. They all use first-order physics light enough to run on the CPU, even if they chose to use PhysX for those.
All that PhysX does is enable some second-order physics effects.
 

zebrax2

Senior member
Nov 18, 2007
977
70
91
My view

If AMD sides with nVidia:

If Larrabee flops:
nVidia could strong-arm AMD over PhysX, because the only GPU capable of Havok is a flop.

If Larrabee lives up to its hype:
They have one mighty competitor there.


If AMD sides with Intel:

If Larrabee flops:
Every AMD GPU sold will increase Havok's market.

If Larrabee lives up to its hype:
They wouldn't strong-arm AMD, because at first they would have a pretty small part of the discrete market. Letting AMD sell a competitive product would increase Havok's market share, thus boosting one of Larrabee's selling points.

edit: revised
 

zebrax2

Senior member
Nov 18, 2007
977
70
91
Originally posted by: Wreckage
Originally posted by: zebrax2
Originally posted by: BenSkywalker
The reason AMD is not running PhysX on their GPUs is because it is an API owned by their main competitor. nVidia could at any time change the API or decline to give AMD pre-release information on new versions in such a way as to keep AMD from being competitive. Of course AMD is not going to sign up for that. I'm sure PhysX could run great on AMD hardware, but it's not going to happen so long as nVidia retains complete control over the API. Would you voluntarily place yourself at the mercy of your arch-competitor? No, I didn't think so.

You have a point. AMD using PhysX would be almost as foolish as if they were to support an API that was completely owned by Intel instead, oh, wait a minute.....
But then again, Larrabee isn't launched yet, so at the moment they are still not a competitor in that market. Plus, we don't know yet if Larrabee will live up to its hype.

ATI is just a brand name for AMD. They are first and foremost a CPU company. If they are siding with Intel out of spite for NVIDIA they will pay dearly for it. In fact they already are.

Intel would make AMD pay for joining them to sell one of their products?
 

Nathelion

Senior member
Jan 30, 2006
697
1
0
Originally posted by: BenSkywalker
PhysX only runs on NV hardware and is supported by a handful of games.

No, PhysX runs on nV hardware, the XBox360, PS3, Wii, Intel processors and AMD processors. The only hardware PhysX won't run on is the hardware that AMD won't let it run on.

The reason AMD is not running PhysX on their GPUs is because it is an API owned by their main competitor. nVidia could at any time change the API or decline to give AMD pre-release information on new versions in such a way as to keep AMD from being competitive. Of course AMD is not going to sign up for that. I'm sure PhysX could run great on AMD hardware, but it's not going to happen so long as nVidia retains complete control over the API. Would you voluntarily place yourself at the mercy of your arch-competitor? No, I didn't think so.

You have a point. AMD using PhysX would be almost as foolish as if they were to support an API that was completely owned by Intel instead, oh, wait a minute.....

Intel is not a dominant force in the graphics market (yet, at least) so they won't be in a position to strong-arm the market for the foreseeable future, something that nVidia could definitely do.
If you are referring to x86, that's more a matter of not having a choice. AMD does continually suffer because Intel essentially owns the instruction set. AMD and Intel continue to fight it out when it comes to ISA extensions. Anyone remember 3DNow? How about SSE5? To date, Intel has won every ISA extension spat in the x86 space except for Itanium vs AMD's 64-bit implementation - and the only reason AMD won out there was because Intel was essentially trying to get rid of x86 and replace it with a completely different ISA, pitting them against the weight of the gigantic amount of legacy x86 code out there.
 

akugami

Diamond Member
Feb 14, 2005
6,210
2,552
136
Originally posted by: SirPauly
Let's discuss the reasoning why ATI didn't [support PhysX].

For ATI to support PhysX they would have had to support CUDA. People suggest it was the right move not to support it -- why?

From an end user standpoint, it doesn't matter a whole lot so long as in the end it delivers. CUDA is not bad in any way. At least not for end users, nor for nVidia. From a developers standpoint, whoever has the dominant standard is the one they'll eventually gravitate towards so if CUDA wins out (or some other format) then that's what they'll use.

Now, from AMD's view and being responsible to your stakeholders, it does not make sense to put yourself at the mercy of your competition.* If your competition is putting out new tech, you either duplicate it or you try to crush it. A business organization's first objective is to make money. You are being irresponsible to your stakeholders if your ability to make money is dependent on technology from your competition.

We've all probably heard the quote that "power corrupts and absolute power corrupts absolutely." We've seen this monopoly power abused on more than one occasion. Intel used it on AMD even if there is no absolute proof out there. Microsoft used it with hidden API's accessing its OS. The fear from AMD's point of view is there will be hidden features in CUDA that will allow nVidia GPU's to outperform AMD's GPU's. Whether this is founded or unfounded, the fear is there. More importantly, the possibility is there. Past experiences with monopolies or companies with a stranglehold on certain tech has not eased any of these concerns.

Let's take a look at a very real example: SIMD extensions from Intel. Intel currently has performance advantages over AMD, though that has not always been the case. However, with the SSE instructions, any time Intel releases a new set of instructions it has taken AMD a CPU generation or two to implement said extensions. nVidia can do something similar and always force AMD to be one step behind in implementing new CUDA features.

For this reason alone, AMD is justified from a business standpoint in opposing CUDA while it's still early. Even the possibility of being at the mercy of your competitors without exploring all options is being irresponsible to your stakeholders. If the situations were reversed and it was AMD pushing a tech that was more advanced I'd expect nVidia to do the same thing and put out a similar product or tech or try to crush it as well. I would also say it was the right move for nVidia.

* AMD being dependent on Intel is a special case as they needed a license to make x86 CPU's and still need that license.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Scali


I didn't say you can't do any physics on the CPU at all.
Should be pretty obvious, since games have been doing physics on the CPU for years. Everyone knows that.
You CAN do physics on the CPU, and both PhysX and Havok allow you to do that.
But in both cases, the CPU's performance is going to limit the amount of effects you will be able to use. Therefore, effects like cloth, water, softbodies etc will not be an option. Which is why we haven't seen them in any games without accelerated physics, regardless of whether the games used Havok, PhysX, or some other API.
...
Exactly, which means that currently Havok is not an option.
If you don't want to use the extra physics, then you could still use PhysX on an x86 CPU instead of Havok. The extra physics effects aren't the only thing that PhysX does.

Again, I was responding to the claim that PhysX would only run on nVidia GPUs. This simply is not true. If you use PhysX on an x86 CPU, it's still a good alternative to Havok on CPU. PhysX just allows you the OPTION of GPU/PPU acceleration and extra effects. Havok doesn't.

I didn't think it would be THAT hard to understand. And I am amazed at how selectively my posts are read, and how they are only partially understood or pulled out of context. It seems deliberate (I give you the benefit of the doubt that you're not really THAT thick. Sadly that means that I think you are trolling and being obnoxious on purpose).

Basic physics will run on my cell phone for all I care, but the kind of physics NV fans are parading for their gpu will not run on x86. So when you say PhysX also runs on x86, it doesn't "run" the way it "runs" on the gpu, and it offers absolutely no advantage over alternative solutions when "running" on the cpu. Get it?

Actually, nVidia DOES support the improved AA techniques that DX10.1 offers through a driver extension. For example, Far Cry 2 uses DX10.1 if possible, or the NVAPI on compatible nVidia cards. So you're not missing out on the AA.

Then why doesn't NV certify its cards DX 10.1 compliant? Because you have to use proprietary software hacks to get those results? Because it only supports a subset of what DX 10.1 offers? Do other games like Stalker Clear Sky also use Nvidia's proprietary extensions for DX 10.1 effects?
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: zebrax2
My view

If AMD sides with nVidia:

If Larrabee flops:
nVidia could strong-arm AMD over PhysX, because the only GPU capable of Havok is a flop.

If Larrabee lives up to its hype:
They have one mighty competitor there.


If AMD sides with Intel:

If Larrabee flops:
Every AMD GPU sold will increase Havok's market.

If Larrabee lives up to its hype:
They wouldn't strong-arm AMD, because at first they would have a pretty small part of the discrete market. Letting AMD sell a competitive product would increase Havok's market share, thus boosting one of Larrabee's selling points.

edit: revised

Maybe not right away, but in very short order they would.
 

akugami

Diamond Member
Feb 14, 2005
6,210
2,552
136
Originally posted by: Scali
Originally posted by: akugami
And as an aside, I don't believe nVidia has really designed a GPU for PhysX yet.

Don't you get it?
The "GP" in GPGPU stands for General Purpose.
They don't HAVE to design a GPU for PhysX, because their GPU is designed for General Purpose processing.
Just like Intel and AMD don't really design their CPUs for any specific task in mind. They are designed to run pretty much anything.
As such, nVidia will NEVER design a GPU for PhysX. There's no need. They'll just continue to improve on the GPGPU features.

Originally posted by: akugami
I think some of their GPU design was meant for their GPGPU uses which also helped PhysX. nVidia didn't buy Ageia until early 2008 and likely most of the design work on what would be put into the GT200 GPU cores was already set in stone.

It's worse than that. PhysX works on everything from the G80 up. Any Cuda GPU.
And those are over 2 years old.
The GT200 isn't all that different from the G80, it's mostly just bigger and faster. Aside from that they added a few features to Cuda, but nothing specific to physics. And they probably never will.

In fact, if you study the Ageia PPU design, it's not too different from the G80's original design. The key to the PPU was not so much the parallelism (it didn't have that many cores, only about 12 I believe, and they weren't that fast), but in how the architecture could shuffle the data through a sort of packet-switching bus. It was almost like a network switch.
nVidia's G80 added shared memory between its stream processors, which also allows the stream processors to quickly communicate with each other.
And that's what you want for physics. You want to propagate the forces of one object to the objects that it acts upon.
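To make the "propagate the forces" point concrete, here is a toy sketch of that data-exchange pattern: a chain of bodies whose update needs the neighbours' state every step. It only illustrates why processing elements have to swap results quickly; it says nothing about the actual PPU or G80 internals.

```python
# Toy illustration (not the PPU or G80 architecture): each body's update
# depends on its neighbours' state, so forces must be propagated every step --
# the kind of communication that fast shared memory between processing
# elements speeds up.

def step_chain(positions, velocities, dt=0.01, k=50.0, rest=1.0, mass=1.0):
    """One semi-implicit Euler step of a 1D chain of masses joined by springs."""
    n = len(positions)
    forces = [0.0] * n
    # Propagate spring forces between neighbouring bodies.
    for i in range(n - 1):
        stretch = (positions[i + 1] - positions[i]) - rest
        f = k * stretch
        forces[i] += f       # pulled toward the right neighbour
        forces[i + 1] -= f   # equal and opposite reaction
    # Integrate: velocity first, then position with the new velocity.
    for i in range(n):
        velocities[i] += (forces[i] / mass) * dt
        positions[i] += velocities[i] * dt
    return positions, velocities

# Start with one spring stretched; the disturbance propagates down the chain.
pos = [0.0, 1.5, 2.5, 3.5, 4.5]
vel = [0.0] * 5
for _ in range(100):
    pos, vel = step_chain(pos, vel)
print([round(p, 2) for p in pos])
```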

Maybe if you actually understood what I was trying to say, instead of opening with a somewhat snide reply, it would foster better discussion. Try looking up math coprocessors, Altivec/VMX, and SSE 1/2/3/4.

These were technologies developed to enhance the CPU. Older CPU's could do math. Math coprocessors were more powerful at crunching numbers. The math coprocessors could handle offloaded instructions from the main CPU, freeing it up to crunch other instructions.

Altivec and SSE and similar instruction set extensions are sets of very special instructions that handle certain floating point and integer operations to speed up certain functions such as video encoding. Sure, SSE4 only added a few extra instructions over SSE3 which added only a few extra instructions over SSE2. However, early benchmarks of SSE2 vs SSE4 are showing a 40% increase in performance for DivX encoding.

The point of all these ramblings is that I believe whatever advantages and intellectual property nVidia was buying when they purchased Ageia have not yet been integrated into the GPUs nVidia produces. This means that PhysX can only get better when they do properly integrate Ageia's tech with their existing tech.

The fact that the G80 GPU cores and up were well designed with GPGPU in mind, and in line with how Ageia used hardware to accelerate PhysX, doesn't mean there isn't room for enhancements. If Ageia did not have tech that nVidia coveted (PhysX, both software AND hardware) then nVidia would not have shelled out good money.

I'm not a hardware engineer. I'm not even a programmer. I simply refuse to believe that there isn't hardware and software that nVidia obtained when they bought Ageia that can be integrated into their GPU's to further accelerate PhysX.

Originally posted by: Scali
Originally posted by: akugami
I beg to differ. nVidia's products are wildly successful now but the landscape is set to change dramatically in the next two years. First, Intel is heading into the market and while it would be extremely hard for them to gain market share from hardcore gamers, they can easily use their CPU business for their GPU's to piggyback on. And we all know what physics product Intel will be supporting. Second is both Intel and AMD will be moving towards integrated CPU/GPU's in which the multi-core processor contains not only two or more CPU cores but likely at least one GPU core. As processes get smaller, one can even imagine multi CPU and GPU cores in one package. This cuts nVidia out completely.

Integrated GPUs will not be competitive with discrete cards anytime soon.
Aside from the fact that discrete GPUs are FAR larger than a CPU itself, so you can't really integrate such a large chip in a regular CPU anyway... Another huge problem is the shared memory of an integrated GPU.
A discrete videocard has its own memory, which is different from the main memory in a computer. It's specially designed for graphics (GDDR) and delivers high bandwidth at high latencies. Regular memory is designed to deliver low latencies, and the bandwidth is much lower.
So any integrated GPU will have MUCH lower bandwidth than a discrete card, which means it is impossible to get competitive performance.

This is also why Intel launches its Larrabee as a discrete card.

I agree integrated GPUs will not be competitive with discrete video cards any time soon. However, with the pace technology can move at, who can say what is possible in two to three years' time? While discrete GPUs are larger than today's CPUs, AMD has shown that a smaller GPU core can be made competitive with a larger one.

While individually AMD's current GPUs may not match nVidia's top GPUs, it is arguable that with the way they set about designing their lineup, AMD is very competitive by integrating two GPUs into one Crossfire card to combat a single GPU from nVidia. These are competitive not only from a performance standpoint but from a price standpoint as well.

Furthermore, there will be process shrinks. This means it should be easier to implement multiple GPU cores in the future, assuming one does not raise the transistor count too rapidly as the node at which the CPUs/GPUs are produced shrinks.

By going with less powerful cores you might be able to fit two or three GPU cores in the same die space as four CPU cores. You'd sacrifice sheer power in each individual GPU core but you'd make it up by having more than one. Sure, this solution may never be as powerful as discrete GPUs, but as we move further and further ahead, there seem to be fewer and fewer gains in game realism.

Cryengine2 (Crysis) showed some amazing graphics, and I think we'll be hard pressed to go much beyond that in ways the average gamer will really notice. I do believe that CPU/GPUs can be made to run a game like Crysis at decent resolutions so that _most gamers_ won't worry or care about extra levels of detail. Case in point: the Xbox 360 and PS3 are pretty close to what will max out the useful graphics updates for general consumers. After that, updated graphics simply become another checkbox feature for them.

I don't believe the decreased GPU power of integrated CPU/GPUs vs. discrete GPUs will hurt Intel or AMD as much as you seem to think. Most general consumers simply won't care. Furthermore, OEMs will definitely like having one less part to stock. Don't discount the fact that Intel ships the most GPU chipsets even though their current GPUs are, comparatively speaking, crap for gaming.

As for the memory issue: HyperTransport, Intel QuickPath Interconnect, or a similar bus technology can be made to deliver a high-bandwidth bus, along with plug-in memory modules on a specially designed daughtercard port. Maybe it's some other solution. Regardless, current GPUs accessing memory chips still have to go through circuit boards. The memory is not directly on the GPU die; the motherboard is just another circuit board.
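For a rough sense of the bandwidth gap being argued over here, a quick back-of-the-envelope comparison; the card and memory configurations below are illustrative assumptions of typical parts from that era, not anyone's quoted figures.

```python
# Rough comparison of a discrete card's dedicated GDDR versus the dual-channel
# system memory an integrated GPU would have to share with the CPU.
# Figures are typical parts of the era, used only for illustration.

def bandwidth_gb_s(transfers_per_sec_millions, bus_width_bits, channels=1):
    return transfers_per_sec_millions * 1e6 * (bus_width_bits / 8) * channels / 1e9

gddr5_card  = bandwidth_gb_s(3600, 256)     # e.g. 256-bit GDDR5 @ 3.6 GT/s  -> ~115 GB/s
ddr2_system = bandwidth_gb_s(800, 64, 2)    # dual-channel DDR2-800          -> ~12.8 GB/s
ddr3_system = bandwidth_gb_s(1333, 64, 2)   # dual-channel DDR3-1333         -> ~21.3 GB/s

print(f"Discrete GDDR5 card : {gddr5_card:6.1f} GB/s")
print(f"Dual-channel DDR2   : {ddr2_system:6.1f} GB/s")
print(f"Dual-channel DDR3   : {ddr3_system:6.1f} GB/s")
```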



***EDIT***

I think I'll quit while the getting's good. Too much arguing in circles as usually happens when the fanboys get at it. I got sucked into arguing with the fanboys a few times already so I'm going to quit. I got my point out, you can agree or disagree. If anyone wants to ask something or further clarify, send a PM or two.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: SSChevy2001
Yes, I know what taskmgr is. While I'm not a developer, I can understand that you just can't break up a game to run 100% on a quad core. What I expect to see is more titles favoring quad-core CPUs by more than just 5%.

You're not a developer, how do you know it's even POSSIBLE to do that?
Look at what Modelworks just said for example.

Originally posted by: SSChevy2001
Only, some people make it seem like these extra cloth, smoke, and debris effects can't run on current CPUs, which is not the case.

What makes you say that?
Both Havok and PhysX (when it was still NovodeX and didn't support any GPU or PPU acceleration), as well as various other physics libraries, have long supported these effects. Yet I've never seen a game use them.
The problem is that CPUs aren't fast enough. You may be able to do a cloth or smoke effect in a demo, but doing it interactively at the scale that a game like Mirror's Edge does, while still maintaining playable framerates, that's not an option.
Hence games didn't use these effects. Thanks to GPU's and PPU's, they can.

Let me give you a quick hint:
We know the following facts:
1) A modern high-end CPU has about 76 GFLOPS of processing power
2) A modern high-end GPU has about 1000 GFLOPS of processing power
3) PhysX effects in games generally take about 10-25% of the GPU performance (depending on various settings).

So, a quick calculation, worst case:
25% of 1000 GFLOPS is 250 GFLOPS.

Another quick calculation, worst case:
Say games only use 1 core efficiently, so only 25% of the total power.
Which means there is 75% of the 76 GFLOPS available for physics.
So we have 57 GFLOPS 'to spare' for physics.

Okay, so even if we are to assume that we can use 3 extra cores for physics, how can we get those 250 GFLOPS that we need to do the PhysX effects?
There is no way you can make it fit. GPUs are just way out of the league of CPUs when it comes to physics. You would need many more cores than just the 4 cores of a current CPU. And that's not even taking things like memory bandwidth and synchronization overhead into account.
It's just never going to work.
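For anyone who wants to redo the arithmetic, here is a minimal sketch of that budget, using the same assumed peak figures from the post above (they are the post's assumptions, not benchmark results):

```python
# Back-of-the-envelope FLOPS budget from the post above.
# The peak numbers are the post's circa-2009 assumptions, not measured values.

CPU_PEAK_GFLOPS = 76.0     # assumed high-end quad-core CPU peak
GPU_PEAK_GFLOPS = 1000.0   # assumed high-end GPU peak
PHYSX_GPU_SHARE = 0.25     # worst case: PhysX uses 25% of the GPU
CPU_CORES_FREE  = 0.75     # 3 of 4 cores idle if the game only loads one core

physx_budget_needed  = PHYSX_GPU_SHARE * GPU_PEAK_GFLOPS   # 250 GFLOPS
cpu_budget_available = CPU_CORES_FREE * CPU_PEAK_GFLOPS    # 57 GFLOPS

shortfall = physx_budget_needed / cpu_budget_available
print(f"GPU PhysX workload : {physx_budget_needed:.0f} GFLOPS")
print(f"Spare CPU capacity : {cpu_budget_available:.0f} GFLOPS")
print(f"CPU would need to be ~{shortfall:.1f}x faster to keep up")
```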

Originally posted by: SSChevy2001
What's ironic is it's the only effect in the game that causes CPUs to crawl.

Is it? When I disabled PhysX acceleration on my GeForce, suddenly the game crawled on my Core2 Duo @ 3 GHz, even in the training level. There is no glass in the training level. The game just crawled everywhere.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Keysplayr
They probably can't. We talked about a theory in this thread, that ATI knew it could not run PhysX on its GPUs, and quickly turned it down and signed on with Havok.

Well, they most probably couldn't with the 2000 and 3000 series. Was the 4000 series out yet when nVidia offered to help ATi support PhysX on their hardware?
ATi has already admitted that Havok will only run on its 4000 series and up anyway. Not entirely a coincidence.

Originally posted by: Keysplayr
This was how long ago? I dunno. The thing is, I don't think ATI can run any sort of physics on its GPUs.
Unless they are severely limited with staff and programmers, we should have seen a whole lot of physics on their GPUs by now. Like we said earlier in the thread, this is just ATI postponing the inevitable, IMHO. Please let me be wrong.

You may have a point there. nVidia acquired Ageia and PhysX in February 2008 I believe, and in June 2008 the first drivers were officially released to end-users. So they did it in just 5 months. ATi isn't even alone in this, because they have the support of the Havok team and Intel. Yet, where is Havok's GPU acceleration? Why is it taking so long?
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: SickBeast
I'm pretty sure that some people actually managed to hack PhysX to run on AMD GPUs a while back. Please correct me if I'm wrong. I think I remember reading that it actually ran faster on the AMD GPUs as well.

No they didn't.
What they did was hack the PhysX library so they could 'skip' the physics in the 3DMark Vantage test and get very high scores.
Nobody ever actually saw a screenshot during the test, let alone a video. So there is no proof that they actually calculated any physics.

This was just a bogus internet claim, much like that claim that people got DX10 running on XP. In the end nothing surfaced and the project was cancelled.
It just shows how much people want to BELIEVE!
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Nathelion
Would you voluntarily place yourself at the mercy of your arch-competitor?

The answer is yes, since ATi placed itself at the mercy of Havok, owned by Intel, a bigger competitor to AMD than nVidia is.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: SickBeast
If you look at the technical specs, I'm pretty sure that the AMD cards actually can do more MIPS/FLOPS than the NV GPUs.

They have a slight advantage on paper.
E.g. the 4870 has slightly over 1 TFLOPS, whereas the GTX 280 has 933 GFLOPS, if I'm not mistaken.
But as we all know, the GTX 280 is the faster card in most games. Not because it has more raw processing power, but because it is more efficient in using that processing power.
And that is graphics, the main task that the 4870 was designed for.
With GPGPU the difference is likely going to be larger, because unlike graphics, GPGPU doesn't always use Vec3 or Vec4 operations, but also scalar or Vec2 ones, which eats into ATi's efficiency, whereas it doesn't affect nVidia's design.
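To put rough numbers on that efficiency argument, here is a sketch using the commonly quoted specs of both cards; the one-in-five-lanes figure is a worst-case assumption for purely scalar code on a VLIW5 design, not a measured result.

```python
# Rough illustration of "paper FLOPS vs. usable FLOPS".
# Specs are the commonly quoted figures for the two cards; the assumption that
# a scalar GPGPU workload fills only 1 of 5 VLIW lanes is a deliberate worst
# case (the real compiler can often pack more than that).

def peak_gflops(alus, clock_ghz, flops_per_alu_per_clock):
    return alus * clock_ghz * flops_per_alu_per_clock

hd4870_peak = peak_gflops(800, 0.750, 2)   # 800 lanes, MAD = 2 flops  -> 1200
gtx280_peak = peak_gflops(240, 1.296, 3)   # 240 SPs, MAD+MUL = 3 flops -> ~933

hd4870_scalar = hd4870_peak / 5            # 1 of 5 VLIW lanes busy -> ~240
gtx280_scalar = gtx280_peak                # scalar SPs stay near peak

print(f"HD 4870 : peak {hd4870_peak:.0f} GFLOPS, scalar worst case {hd4870_scalar:.0f}")
print(f"GTX 280 : peak {gtx280_peak:.0f} GFLOPS, scalar worst case {gtx280_scalar:.0f}")
```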
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: zebrax2
But then again, Larrabee isn't launched yet, so at the moment they are still not a competitor in that market. Plus, we don't know yet if Larrabee will live up to its hype.

I wouldn't be surprised if Havok doesn't get GPU acceleration before Larrabee is on the market, though.
And I also wouldn't be surprised if Larrabee ran Havok better than ATi cards do.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Modelworks
PhysX does not run hardware accelerated on consoles except in very rare cases where the developer has plenty of free resources.

Nonsense. PhysX doesn't run hardware-accelerated on consoles because consoles don't have any hardware to accelerate it with.
The only thing that comes close is the PS3's Cell processor, which is ALWAYS used for PhysX, since it's the only processor in the system.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: taltamir
I was clearly referring to first-order physics on the GPU, not first-order physics "light enough to run on the CPU".

First-order physics in regular games are light enough to run on the CPU by design.
You were claiming that PhysX isn't used for first-order physics at all. Which isn't true.
UE3 still uses PhysX for first-order physics in the non-modded levels. It's just how the engine was designed. It's just that you can run the game fine either on CPU, PPU or GPU that way, because the physics are light enough by design. Just as the physics are light enough in every other game that doesn't require a physics accelerator.

Originally posted by: taltamir
All that PhysX does is enable some second-order physics effects.

There you go again.
You forget that ALL physics are done by PhysX in engines like UE3. They don't use Havok or anything else. PhysX is a full physics solution for PCs and consoles. That is what PhysX does.
It also enables acceleration which you could use for either first-order or second-order physics effects. But that is not all it does. Get it?