"NVIDIA is the perfect fit for us. We aren't really selling a damn thing right now." said Manju Hedge, co founder and CEO of AGEIA.
Originally posted by: taltamir
You guys all seem to be forgetting that DirectX 11 WILL mandate that video cards do physics processing, as well as define methods...
So NVIDIA is probably looking for a head start in getting their DX11 GPUs to do it well...
Originally posted by: Aberforth
Originally posted by: Cookie Monster
First off, this would pretty much be the end of the "PPU". I've always thought the idea of a Physics Processing Unit was quite impractical: with the introduction of dual-, quad- and later octo-core CPUs, not to mention GPUs that could do the same tasks as the PPU, its future has always been bleak.
You are 100% right, I always thought the same thing too... but the question is, for how long? For how long can CPU cores be used to simulate a game's physics? Do you think Moore's law can be applied forever? No. The 45 nm process is already a huge step for Intel; for how long do you think Intel can keep shrinking a chip? Let's say they fit 16 cores on one die one day... then we'd have global warming protesters stamping on it. There comes a point where technology cannot advance further without more advanced alternatives. I think this move is NVIDIA's strategy to stay in the business for a looong time, because their technology is starting to show its true colors already and they have to make a decision.
Originally posted by: aka1nas
Originally posted by: taltamir
You guys all seem to be forgetting that DirectX 11 WILL mandate that video cards do physics processing, as well as define methods...
So NVIDIA is probably looking for a head start in getting their DX11 GPUs to do it well...
That's incorrect.
DX11 will likely define a standard API for physics processing (i.e. a "DirectPhysics"), which would allow vendors to write drivers to handle physics on devices such as GPUs, as well as in software.
Frankly, current GPUs are a poor fit for interactive physics processing, as they are mainly intended to receive data, process it internally, and dump the output to the screen. Sending data back to the rest of the PC usually carries severe performance penalties, as the GPU ends up sitting idle much of the time while it waits. That's fine if all you want is more accurate eye candy, but it's not going to cut it for things like deformable terrain and objects.
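[Editor's note: the readback point above is easy to see in code. Below is a minimal CUDA sketch, not taken from any shipped engine and with made-up names such as integratePositions, of a per-frame loop where gameplay code needs the simulated positions back on the CPU. The device-to-host cudaMemcpy on the default stream waits for the preceding kernel to finish, so the host stalls every frame.]

#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Trivial explicit-Euler position update, one thread per particle.
__global__ void integratePositions(float3* pos, const float3* vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        pos[i].x += vel[i].x * dt;
        pos[i].y += vel[i].y * dt;
        pos[i].z += vel[i].z * dt;
    }
}

int main()
{
    const int n = 1 << 16;
    float3 *dPos, *dVel;
    cudaMalloc((void**)&dPos, n * sizeof(float3));
    cudaMalloc((void**)&dVel, n * sizeof(float3));
    cudaMemset(dPos, 0, n * sizeof(float3));
    cudaMemset(dVel, 0, n * sizeof(float3));
    float3* hPos = (float3*)malloc(n * sizeof(float3));

    for (int frame = 0; frame < 100; ++frame) {
        integratePositions<<<(n + 255) / 256, 256>>>(dPos, dVel, n, 1.0f / 60.0f);
        // Gameplay-affecting physics needs the results back on the CPU.
        // This default-stream copy is synchronous: the host stalls here
        // until the kernel above has finished and the transfer completes.
        cudaMemcpy(hPos, dPos, n * sizeof(float3), cudaMemcpyDeviceToHost);
        // ... collision response / game logic would read hPos here ...
    }

    printf("y of particle 0 after 100 frames: %f\n", hPos[0].y);
    free(hPos);
    cudaFree(dPos);
    cudaFree(dVel);
    return 0;
}

[Engines can hide some of this with asynchronous copies and double-buffering, but that round trip is exactly why readback-heavy, gameplay-affecting physics was considered a poor fit for GPUs of that era.]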
Originally posted by: SunnyD
Well, this IS a good thing for AGEIA. Now they no longer have to try to make dedicated hardware; they can focus on the software side and let NVIDIA do everything for them. This is also a good thing for NVIDIA customers, as it "should" bring hardware-accelerated PhysX with a driver update instead of an add-in card.
Originally posted by: Genx87
Originally posted by: ja1484
I have a hard time believing Ageia could do it better than Nvidia could with their own proprietary technology if they really wanted to. Then again, NV's proprietary moves have never shaken the industry up much either. I think they need to stick to what they do well: Build a kickass processor for an application already in wide use. I think Nvidia could do very very well if they got into discrete sound cards, for example. God knows Creative needs some competition in that area.
Besides, Nvidia already bought Ageia once before. It was called 3dfx then, and the core engineering "talent" they acquired in that merger was responsible for NV30. Not exactly Nvidia's brightest moment.
I think the big mistake here is Nvidia assuming there's a market for hardware physics processing. Maybe, but not at the prices Ageia has been asking, especially not for the pathetic results all that money gets you.
A lot of the technology from the last 3dfx project ended up in NV40.
There will be a market for hardware physics; I think there is one right now. The problem is that Ageia had a bad implementation. Nvidia buying them seals up any patents Ageia holds and keeps the competition from being able to use them.
Put Ageia's processor right on the PCB with direct access to the GPU and its memory space and it should see a huge increase in performance. Forcing it to run through a 33 MHz PCI slot was foolish.
Nvidia has been wanting to add true physics functionality to their GPUs for a few years. The problem, obviously, is that forcing the GPU to do this reduces the cycles available for rendering the scene. I won't be surprised if we see parts of Ageia's silicon on an Nvidia GPU or its PCB within two generations.
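[Editor's note: for scale, here is the back-of-the-envelope arithmetic behind the "33 MHz PCI slot" complaint, as a tiny compilable snippet. It assumes classic 32-bit/33.33 MHz PCI and first-generation PCIe x16, and compares theoretical peaks only; real-world throughput is lower.]

#include <cstdio>

int main()
{
    // Classic 32-bit/33.33 MHz PCI: 4 bytes per clock, shared by every device on the bus.
    const double pci_peak  = 33.33e6 * 4.0;   // ~133 MB/s theoretical
    // First-generation PCIe x16: ~250 MB/s per lane per direction, 16 lanes.
    const double pcie_peak = 16.0 * 250e6;    // ~4 GB/s per direction theoretical
    printf("PCI  32-bit/33 MHz : %.0f MB/s\n", pci_peak  / 1e6);
    printf("PCIe 1.x x16       : %.0f MB/s\n", pcie_peak / 1e6);
    printf("gap                : ~%.0fx\n",    pcie_peak / pci_peak);
    return 0;
}

[That is roughly a thirty-fold gap in theoretical peak, before accounting for the original PhysX card sharing that PCI bus with other devices.]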
Originally posted by: fleabag
Originally posted by: SunnyD
Well, this IS a good thing for AGEIA. Now they no longer have to try to make dedicated hardware; they can focus on the software side and let NVIDIA do everything for them. This is also a good thing for NVIDIA customers, as it "should" bring hardware-accelerated PhysX with a driver update instead of an add-in card.
All you're going to get now is useless second-order physics, thanks to Nvidia. Second-order physics means particle effects: things that have NO effect on gameplay.
Originally posted by: taltamir
Originally posted by: fleabag
Originally posted by: SunnyD
Well, this IS a good thing for AGEIA. Now they no longer have to try to make dedicated hardware; they can focus on the software side and let NVIDIA do everything for them. This is also a good thing for NVIDIA customers, as it "should" bring hardware-accelerated PhysX with a driver update instead of an add-in card.
All you're going to get now is useless second-order physics, thanks to Nvidia. Second-order physics means particle effects: things that have NO effect on gameplay.
Why would Nvidia devalue the technology it purchased by removing the best parts of it?
CUDA and Nvidia's Tesla can already perform physics on a GPU... all they really have to do is add the PhysX API commands and the like... the hardware is already there.
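[Editor's note: to illustrate the "the hardware is already there" point, here is a hypothetical sketch of a thin API layer whose step() call just launches a CUDA kernel that integrates particles under gravity. The MiniScene type is made up for illustration and is NOT the real PhysX SDK.]

#include <cuda_runtime.h>
#include <cstdio>

// Per-body update: gravity, explicit Euler integration, crude ground-plane bounce.
__global__ void stepBodies(float3* pos, float3* vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    vel[i].y += -9.81f * dt;
    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
    if (pos[i].y < 0.0f) {
        pos[i].y = 0.0f;
        vel[i].y = -0.5f * vel[i].y;
    }
}

// Made-up API surface for illustration only -- not the real PhysX SDK.
struct MiniScene {
    float3* pos;
    float3* vel;
    int n;
    void step(float dt) {
        // The "API command" is just a thin dispatch onto existing CUDA hardware.
        stepBodies<<<(n + 255) / 256, 256>>>(pos, vel, n, dt);
    }
};

int main()
{
    MiniScene scene;
    scene.n = 10000;
    cudaMalloc((void**)&scene.pos, scene.n * sizeof(float3));
    cudaMalloc((void**)&scene.vel, scene.n * sizeof(float3));
    cudaMemset(scene.pos, 0, scene.n * sizeof(float3));
    cudaMemset(scene.vel, 0, scene.n * sizeof(float3));

    for (int frame = 0; frame < 60; ++frame)
        scene.step(1.0f / 60.0f);
    cudaDeviceSynchronize();

    printf("stepped %d bodies for 60 frames on the GPU\n", scene.n);
    cudaFree(scene.pos);
    cudaFree(scene.vel);
    return 0;
}

[The hard part in practice is the rest of an engine-grade SDK, such as broadphase collision and constraint solvers, but the dispatch from an API call onto the GPU really is this thin.]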
Originally posted by: Nemesis 1
Now, as I understand it, NV is going to go with software physics for the time being, not hardware as many in this thread are saying. In fact, as I read it, the 8-series cards will be physics-capable as soon as NV releases the software.
I'm still putting my money on the 800-pound gorilla.
Originally posted by: taltamir
CPUs are versatile tools that can perform a variety of operations... GPUs are number crunchers...
A CPU is just not as well suited to the type of work needed for physics or graphics.