GPGPU Programming

BucNews

Member
Mar 11, 2007
Hello everyone.

I just recently did a lot of research on GPGPU programming, Havok, CUDA, and CTM, and I was curious whether anyone else here is intrigued by the concept. Most of the material I found focused on non-gaming applications, but it left me thinking about how games could be improved by doing more than making pretty graphics with our GPUs. The new wave of DirectX 10 cards appears to hold much greater potential for GPGPU applications thanks to the unified shader architectures they are built around.
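For anyone who hasn't poked at CUDA yet, here is roughly what a trivial data-parallel kernel looks like. This is just a sketch I put together from the programming guide, not code from any of the articles I read, and the names and sizes are arbitrary:

```cuda
// SAXPY (y = a*x + y): each element is handled by its own GPU thread,
// which is the basic idea behind mapping data-parallel work onto the card.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device buffers.
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // One thread per element, 256 threads per block.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f (expect 5.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

The appealing part is that the same C-like kernel scales across however many stream processors the card happens to have.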

Any thoughts?
 

Born2bwire

Diamond Member
Oct 28, 2005
One of the guys in my group wrote a thesis on solving EM problems using OpenGL, presumably to take advantage of the parallel nature of GPUs. The gains from running highly parallelizable code like the FDTD Yee algorithm on a GPU are very favorable compared to a general-purpose CPU. One problem, though, is that today's GPUs are still single precision. I think it will be a while before they would need to make a double-precision GPU (and even then I don't know if it would be warranted).
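To give a feel for why the Yee updates parallelize so well: each cell only reads its immediate neighbors from the previous half-step, so every cell can be its own thread. Here is a rough single-precision 1-D sketch in CUDA (not his thesis code, which went through OpenGL; the grid size, coefficients, and source are purely illustrative):

```cuda
// 1-D FDTD/Yee leapfrog in single precision. Each cell's update reads only
// its immediate neighbors, so all cells can be computed by independent threads.
#include <cstdio>
#include <math.h>
#include <cuda_runtime.h>

__global__ void update_h(int n, float *hy, const float *ex, float ch)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n - 1)
        hy[i] += ch * (ex[i + 1] - ex[i]);      // curl of E drives H
}

__global__ void update_e(int n, float *ex, const float *hy, float ce)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n)
        ex[i] += ce * (hy[i] - hy[i - 1]);      // curl of H drives E
}

int main()
{
    const int n = 4096, steps = 1000, tpb = 256;
    const size_t bytes = n * sizeof(float);

    float *ex, *hy;
    cudaMalloc((void **)&ex, bytes);
    cudaMalloc((void **)&hy, bytes);
    cudaMemset(ex, 0, bytes);
    cudaMemset(hy, 0, bytes);

    // Normalized update coefficients (illustrative, Courant number 0.5).
    const float ch = 0.5f, ce = 0.5f;

    for (int t = 0; t < steps; ++t) {
        // Hard source: poke the E field at the center of the grid.
        float src = sinf(0.1f * t);
        cudaMemcpy(ex + n / 2, &src, sizeof(float), cudaMemcpyHostToDevice);

        update_h<<<(n + tpb - 1) / tpb, tpb>>>(n, hy, ex, ch);
        update_e<<<(n + tpb - 1) / tpb, tpb>>>(n, ex, hy, ce);
    }

    float sample;
    cudaMemcpy(&sample, ex + n / 2 + 100, sizeof(float), cudaMemcpyDeviceToHost);
    printf("Ex sample after %d steps: %f\n", steps, sample);

    cudaFree(ex);
    cudaFree(hy);
    return 0;
}
```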
 

Lord Banshee

Golden Member
Sep 8, 2004
To add:

Seeing that Nvidia and AMD both want to hit the engineering/research market with these concepts, I believe they will add double precision as soon as possible, provided the cost is low enough... maybe 1-2 years. The very reason I believe they intend these cards to be more than just graphics cards is that otherwise they would not have created CUDA or the Stream application concept. If they thought only a niche would use it, they would have left the application interface as a bare SDK without much effort put into it.
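For what it's worth, here is the kind of drift single precision picks up on a long accumulation, which is why the engineering/research folks want doubles. This is just an illustration, and it assumes a hypothetical double-capable card, which is exactly what doesn't exist yet:

```cuda
// Naively accumulate 0.1 ten million times in float and in double.
// The exact answer is 1,000,000; the float accumulator drifts visibly once
// the running sum gets large, while the double stays put to within rounding.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void accumulate(float *fsum, double *dsum)
{
    // One thread does the whole loop on purpose: this is about rounding,
    // not about parallel speed.
    float f = 0.0f;
    double d = 0.0;
    for (int i = 0; i < 10000000; ++i) {
        f += 0.1f;
        d += 0.1;
    }
    *fsum = f;
    *dsum = d;
}

int main()
{
    float *df;
    double *dd;
    cudaMalloc((void **)&df, sizeof(float));
    cudaMalloc((void **)&dd, sizeof(double));

    accumulate<<<1, 1>>>(df, dd);

    float f;
    double d;
    cudaMemcpy(&f, df, sizeof(float), cudaMemcpyDeviceToHost);
    cudaMemcpy(&d, dd, sizeof(double), cudaMemcpyDeviceToHost);
    printf("float  sum: %f\n", f);
    printf("double sum: %f  (exact answer is 1000000)\n", d);

    cudaFree(df);
    cudaFree(dd);
    return 0;
}
```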

Now, talking out of my a$$:
My guess is that AMD will define a standard instruction set for graphics that gets adopted into x86, so Intel and AMD will then only have to build general-purpose pipelines and shaders into the CPU, the same way they do with an FPU. That would cut the cost compared to integrating an entire GPU on die and would be enough for the low-end and mobile markets. What I don't believe is that discrete GPUs will go away, unless carbon nanotube transistors take off fast and we see a huge jump in manufacturing technology.
 

BucNews

Member
Mar 11, 2007
The other aspect of this is that GPU vendors have a lot of experience utilizing many cores, and I hope they can apply that knowledge as Intel and AMD expand their multi-core offerings.