NVIDIA to Acquire AGEIA Technologies


taltamir

Lifer
Mar 21, 2004
13,576
6
76
You guys all seem to be forgetting that DirectX 11 WILL mandate that video cards do physics processing, as well as define methods...

So nvidia is probably looking for a head start in getting their DX11 GPUs to do it well...
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
Originally posted by: taltamir
You guys all seem to be forgetting that DirectX 11 WILL mandate that video cards do physics processing, as well as define methods...

So nvidia is probably looking for a head start in getting their DX11 GPUs to do it well...

That's incorrect.

DX11 will likely define a standard API for physics processing (i.e. "DirectPhysics"), which will allow vendors to write drivers to handle physics on devices such as GPUs, as well as in software.

Frankly, current GPUs are a poor fit for interactive physics processing, as they are mainly designed to receive data, process it internally, and dump the output to the screen. Sending results back to the rest of the PC usually carries severe performance penalties, as the GPU ends up sitting idle much of the time waiting. That's fine if all you want is more accurate eye candy, but it's not going to cut it for things like deformable terrain and objects.
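A toy sketch of what I mean (my own example, not code from any real engine): in a CUDA-style frame loop, the copy back to the host blocks until the GPU has finished and the results have crossed the bus, so both chips spend much of the frame waiting on each other.

```cuda
#include <cuda_runtime.h>

// Stand-in physics step: one thread nudges one object.
__global__ void stepPhysics(float3* pos, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) pos[i].y -= 0.01f;
}

// The device-to-host copy is the killer: cudaMemcpy blocks until the
// kernel has finished AND the data has crossed the bus, so the GPU and
// the CPU take turns sitting idle every single frame.
void frame(float3* d_pos, float3* h_pos, int n)
{
    stepPhysics<<<(n + 255) / 256, 256>>>(d_pos, n);
    cudaMemcpy(h_pos, d_pos, n * sizeof(float3), cudaMemcpyDeviceToHost);
    // ...CPU-side game logic (gameplay collisions, AI) reads h_pos here
}
```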
 

firewolfsm

Golden Member
Oct 16, 2005
1,848
29
91
Originally posted by: Aberforth
Originally posted by: Cookie Monster
First off, this would pretty much be the end of the "PPU". I've always thought the idea of a Physics Processing Unit was quite impractical: with the introduction of dual/quad and later octo-core CPUs, not to mention GPUs that could do the same tasks as the PPU, its future has always been bleak.

You are 100% right, I always thought the same thing too... but the question is for how long? For how long can CPU cores be utilized to simulate the physics of a game? Do you think Moore's law can be applied forever? No. The 45nm fabrication process itself is the biggest step by Intel, but for how long do you think Intel can keep shrinking a chip? Let's assume they fit 16 cores on one someday... then we'd have global warming protesters stamping on it. There comes a point where technology cannot advance further without more advanced alternatives. I think this move is Nvidia's strategy to stay in the business for a looong time, because their technology is starting to show its true colors already and they have to make a decision.

Actually, we'll make it to 16nm processors before we hit a wall, and we should have an alternative by then (quantum computing?). But even with conventional tech, at 16nm they can fit 32 cores in the same power envelope as current quads or duals. There's also Intel's Terascale project in the works, which looks even more like a PPU and should get the job done. Physics cards are useless.
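(Rough scaling check, my arithmetic rather than anything Intel has published: a 45nm-to-16nm shrink gives about (45/16)^2 ≈ 7.9x the transistor density, and 4 cores × 7.9 ≈ 32 cores in the same die area, ignoring everything that doesn't shrink.)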
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: aka1nas
Originally posted by: taltamir
You guys all seem to be forgetting that DirectX 11 WILL mandate that video cards do physics processing, as well as define methods...

So nvidia is probably looking for a head start in getting their DX11 GPUs to do it well...

That's incorrect.

DX11 will likely define a standard API for physics processing (i.e. "DirectPhysics"), which will allow vendors to write drivers to handle physics on devices such as GPUs, as well as in software.

Frankly, current GPUs are a poor fit for interactive physics processing, as they are mainly designed to receive data, process it internally, and dump the output to the screen. Sending results back to the rest of the PC usually carries severe performance penalties, as the GPU ends up sitting idle much of the time waiting. That's fine if all you want is more accurate eye candy, but it's not going to cut it for things like deformable terrain and objects.

And designing and implementing hardware and drivers to push that DirectPhysics to its limit is going to require new talent and knowledge of the field. I didn't mean that they will somehow use the PhysX API for DX11; I meant that they will get people working on it.

Also, the DX11 physics might end up flopping (doubtful), and nvidia might decide to either compete with it or complement it...
Also, a PPU/GPU dual-use processor would FINALLY make SLI useful... the second card does physics, and in games that don't support PhysX it switches back to being an SLI graphics system. That could shine, considering how much MORE math potential a GPU has compared to a CPU, and how much dual-GPU implementations suck even after 4 years and numerous hardware iterations.
 

fleabag

Banned
Oct 1, 2007
2,450
1
0
Originally posted by: SunnyD
Well, this IS a good thing for AGEIA. Now they no longer have to try to make dedicated hardware, they can focus on the software side and let NVIDIA do everything for them. This is also a good thing for NVIDIA customers, as it "should" bring hardware accelerated PhysX with a driver update instead of an add-in card.

All you're going to get now is useless second-order physics, thanks to nvidia. Second-order physics means particle effects: things that have NO effect on gameplay.
 

fleabag

Banned
Oct 1, 2007
2,450
1
0
Originally posted by: Genx87
Originally posted by: ja1484
I have a hard time believing Ageia could do it better than Nvidia could with their own proprietary technology if they really wanted to. Then again, NV's proprietary moves have never shaken the industry up much either. I think they need to stick to what they do well: Build a kickass processor for an application already in wide use. I think Nvidia could do very very well if they got into discrete sound cards, for example. God knows Creative needs some competition in that area.

Besides, Nvidia already bought Ageia once before. It was called 3DFX then, and the core "talent" of engineering they acquired from that merger was responsible for NV30. Not exactly Nvidia's brightest moment.

I think the big mistake here is Nvidia assuming there's a market for hardware physics processing. Maybe, but not at the prices Ageia has been asking, especially not for the pathetic results all that money gets you.

A lot of the technology from the last 3dfx project ended up in NV40.
There will be a market for hardware physics; I think there is one right now. The problem is Ageia had a bad implementation. Nvidia buying them seals up any patents Ageia holds and keeps the competition from being able to use them.

Put Ageia's processor right on the PCB with direct access to the GPU and its memory space and it should see a huge increase in performance. Forcing it to run through a 33MHz PCI slot was foolish.

Nvidia has been wanting to add true physics functionality to their GPUs for a few years. The obvious problem is that forcing the GPU to do this reduces the cycles available for rendering the scene. I won't be surprised if we see parts of Ageia's silicon on an Nvidia GPU or its PCB within 2 generations.

The Ageia physics card was never limited by the PCI slot, simply because it does not need the kind of bandwidth that a video card does. Think of the difference between transferring a 100-page all-text paper and sending a 6-megapixel image: which one do you think requires more data? All the physics processor does is say what is where, which is computationally intense but moves very little ACTUAL data, while a GPU RENDERS the image and THEN sends out what it created many times a second.

Want another analogy? Think of it as me sending you the coordinates of every McDonald's in North America, compared to me sending you IMAGES of every McDonald's in North America.
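To put rough numbers on that (my own back-of-envelope figures, not Ageia's): 10,000 rigid bodies at 28 bytes each (3 floats of position plus 4 of orientation) is about 280 KB per frame, or roughly 17 MB/s at 60 fps, comfortably inside 32-bit/33MHz PCI's ~133 MB/s ceiling. A single 1600x1200 32-bit frame, by contrast, is about 7.3 MB all by itself.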
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
I could have sworn MS has backed off on DirectX 11. Just read it recently. It would seem everybody is looking really hard at what Intel is going to do.

Intel entering this market changes things a lot. Who do you go with, the 800-pound gorilla or the chimps? No one wants to be left out in the cold. But I suspect Intel is going to play nice with Apple and to hell with MS. As many have said, PC gaming is on the decline.

I really believe, based on shares outstanding, that Apple is a great buy right now. Since Apple is now entering the gaming market, it looks like what I foresaw for Intel/Apple and RTRT is moving closer to reality.


Now, as I understand it, for now NV is going to go with software physics for the time being, not hardware as many in this thread are saying. In fact, as I read it, the 8 series cards will be physics capable as soon as NV releases the software.


I'm still putting my money on the 800-pound gorilla
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: fleabag
Originally posted by: SunnyD
Well, this IS a good thing for AGEIA. Now they no longer have to try to make dedicated hardware, they can focus on the software side and let NVIDIA do everything for them. This is also a good thing for NVIDIA customers, as it "should" bring hardware accelerated PhysX with a driver update instead of an add-in card.

All you're going to get now is useless second-order physics, thanks to nvidia. Second-order physics means particle effects: things that have NO effect on gameplay.

Why would nvidia depreciate the technology it purchased by removing the best parts of it?
CUDA and nvidia's Tesla can already perform physics on a GPU... all they really have to do is add the PhysX API commands and the like... the hardware is already there.
 

fleabag

Banned
Oct 1, 2007
2,450
1
0
Originally posted by: taltamir
Originally posted by: fleabag
Originally posted by: SunnyD
Well, this IS a good thing for AGEIA. Now they no longer have to try to make dedicated hardware, they can focus on the software side and let NVIDIA do everything for them. This is also a good thing for NVIDIA customers, as it "should" bring hardware accelerated PhysX with a driver update instead of an add-in card.

All you're going to get now is useless second-order physics, thanks to nvidia. Second-order physics means particle effects: things that have NO effect on gameplay.

Why would nvidia depreciate the technology it purchased by removing the best parts of it?
CUDA and nvidia's Tesla can already perform physics on a GPU... all they really have to do is add the PhysX API commands and the like... the hardware is already there.

Bitter about HavokFX, which failed; I'm partially thankful it did, though that then meant the death of Ageia. Nvidia doesn't give a shit about gamers; all they care about is selling video cards, and if they can stack them on top of each other and sell them to people, they will. They don't care that SLI doesn't get close to a 90% performance increase, they don't care at all; that's why, instead of putting two GPUs on one board, their "dual GPU" solutions are basically two separate cards sandwiched together.

In case you forgot, Nvidia did this same thing to 3dfx, a pioneer in the GPU market. It's like deja vu all over again, except PPUs didn't take off like GPUs did 10 years ago.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
And ATI cares? None of them cares... they are all heartless and evil... and without them we wouldn't have a smidgen of the tech gadgets and toys we now have...

nvidia didn't "do this" to 3dfx; 3dfx did it to themselves... 3dfx was unable to make anything good enough to compete with nvidia... so people bought nvidia cards instead and 3dfx went under...
Maybe if 3dfx had actually released cards more often...

nvidia bought a bankrupt 3dfx, put its engineers to use, and slowly integrated its tech (like... SLI... SLI is a 3dfx technology... you could SLI Voodoo2 cards).

Ageia is different; nvidia bought a non-bankrupt company here...

It's funny that Tom's Hardware quotes nvidia's CEO as saying that he wanted to buy AMD instead... but nvidia is worth only 3 times as much, and as such could not afford to do so (legally speaking, they could not do so; they can't do that type of merger)...
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
Originally posted by: Nemesis 1

Now, as I understand it, for now NV is going to go with software physics for the time being, not hardware as many in this thread are saying. In fact, as I read it, the 8 series cards will be physics capable as soon as NV releases the software.


I'm still putting my money on the 800-pound gorilla

Nvidia's solution would be considered "hardware" physics, as they will be processing the PhysX API on the GPU via CUDA.

It's probably going to be several years before Intel can scale to enough cores to make the kind of physics effects we want doable. I'd estimate the 16-32 core range is probably when fully simulated physics environments (i.e. fully deformable terrain and objects) might start becoming feasible. Current software physics engines tend to start choking with more than a few interacting objects per core (i.e. remember how badly Oblivion would lag when you dropped a large number of Havok-enabled items at once?). Collision detection, in particular, is rough, as the naive approach is an O(n^2) problem.
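Here's a toy CUDA sketch of that naive broad phase (my own example, not code from any real engine): every body is tested against every later body, so n*(n-1)/2 pair tests; double the object count and you roughly quadruple the work.

```cuda
#include <cuda_runtime.h>

struct Sphere { float x, y, z, r; };

// Naive O(n^2) broad phase: thread i tests body i against every later
// body j, for n*(n-1)/2 pair tests in total.
__global__ void broadPhase(const Sphere* b, int n, int* hitCount)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    for (int j = i + 1; j < n; ++j) {
        float dx = b[i].x - b[j].x;
        float dy = b[i].y - b[j].y;
        float dz = b[i].z - b[j].z;
        float rr = b[i].r + b[j].r;
        if (dx*dx + dy*dy + dz*dz < rr*rr)
            atomicAdd(hitCount, 1);   // overlapping pair found
    }
}
```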

Unfortunately, this does really suck for us as it means we will only be getting second-order physics in games for the foreseeable future. :(
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
CPUs are versatile tools that can perform a variety of operations... GPUs are number crunchers...
A CPU is just not as well suited to the type of work needed for physics or graphics, which is why you have specialized cards...
A graphics card can provide many times the performance of a CPU when it comes to physics calculations.

aka1nas has it right when he says it's hardware, not "software"... they will release drivers (aka "software") that will allow their existing hardware to process physics on the GPU... because all DX10-capable cards have CUDA support... which nvidia has been selling to companies as a physics calculator (for research and the like)... all they have to do is write drivers that expose those same capabilities through the PhysX API and they will be in business...

Graphics has already pretty much peaked... DX10 stuff actually looks REAL...
So the next step is physics. We are starting from scratch, just like with graphics years ago... but slowly games will become more and more physically accurate... give it 10 years and it will be semi-realistic.
 

fleabag

Banned
Oct 1, 2007
2,450
1
0
Originally posted by: taltamir
CPUs are versatile tools that can perform a variety of operations... GPUs are number crunchers...
A CPU is just not as well suited to the type of work needed for physics or graphics.

The only thing that was accurate in what you just wrote was the first sentence. So, in order to make you look better, I've opted to simply "trim" all the nonsense you wrote and quote only what was actually accurate and worthwhile.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
From the CUDA Wikipedia article:
"CUDA ("Compute Unified Device Architecture") is a GPGPU technology that allows a programmer to use the C programming language to code algorithms for execution on the GPU. CUDA has been developed by Nvidia, and to use this architecture requires an Nvidia GPU and special stream processing drivers. CUDA only works with the new GeForce 8 Series, featuring G8X GPUs; Nvidia states that programs developed for the GeForce 8 series will also work without modification on all future Nvidia video cards[citation needed]. CUDA gives developers unfettered access to the native instruction set and memory of the massively parallel computational elements in CUDA GPUs. Using CUDA, Nvidia GeForce-based GPUs effectively become powerful, programmable open architectures like today's CPUs (Central Processing Units). By opening up the architecture, CUDA provides developers with the low-level, deterministic, and repeatable access to hardware that is necessary to develop essential high-level programming tools such as compilers, debuggers, math libraries, and application platforms."

So all DX10 cards (the 8 series) can run C code... the most reasonable way to go about integrating physics is to use that...
They might choose to do it a different way, but this is the simpler, more elegant solution...
And as they say... math libraries... the only reason to include something like that is to perform intensive math...
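For example, here's roughly what that C-on-the-GPU looks like (my own toy kernel; I'm not claiming this is how nvidia will actually hook PhysX into CUDA): one thread advances one particle per timestep.

```cuda
#include <cuda_runtime.h>

// Plain C, compiled for the GPU: one thread integrates one particle.
// A driver exposing the PhysX API could dispatch work like this across
// the 8 series' stream processors.
__global__ void integrate(float3* pos, float3* vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    vel[i].y -= 9.81f * dt;       // gravity
    pos[i].x += vel[i].x * dt;    // explicit Euler position update
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}
```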

Although, in retrospect, even if they implement it this way, they are probably not going to make it work with the GF8, just so there's an "added value" feature for whatever version of the GeForce they do sell it with... it will be the same existing CUDA hardware, but limited to the specific models that come out at the time...

EDIT:
Ah, there we go... numbers!
FLOPS is floating-point operations per second... floating point means decimal (X.XXX rather than integer, which is just X).
Searching online I found the following:
"the C2Q 2.93GHz is able to process 55Gflops"

but from Tesla's side:
http://en.wikipedia.org/wiki/NVIDIA_Tesla
Assuming each Tesla card is an 8800GTX without a display port, then...
518.4 gigaflops per GPU...
with the ability to put 4 of them in one system for about 2 teraflops... 2073.6 Gflops.

That single GPU has almost 10 times the raw floating-point power of a C2Q (518.4 / 55 ≈ 9.4x).
4 of them aren't even comparable... 2073.6 / 55 ≈ 37.7 times faster for 4 GPUs vs. a quad-core CPU.
And this is on an outdated process.

And the reason that CPUs aren't focusing there is because doing so will trade off performance elsewhere.

EDIT2:
http://www.nvidia.com/object/io_1202161567170.html

"NVIDIA's CUDA? technology, which is rapidly becoming the most pervasive parallel programming environment in history, broadens the parallel processing world to hundreds of applications desperate for a giant step in computational performance. Applications such as physics, computer vision, and video/image processing are enabled through CUDA and heterogeneous computing."

Nvidia's CEO discusses CUDA as a physics calculator in nvidia's own press statement about purchasing Ageia.