
General GPU Programming A Reality

ViRGE

Elite Member, Moderator Emeritus
We've talked about this for years, but it looks like GPU programming has finally become a reality, thanks to a Stanford project called "BrookGPU". BrookGPU is an implementation of C-style programming for GPUs, which allows programmers to do a fair amount of C programming for programs that will run directly on their GPU.😀

Now if everyone will quit drooling for a second, and clean themselves up, there are some limitations to this: it's not a full implementation of C, and there are hardware limits to keep in mind. The main problem here is that GPUs are still specialized devices, tuned to numeric operations such as matrices; they will perform greatly at those tasks, while things like branching logic are going to be far slower than on a CPU. The benefit however is enormous, as a test shader program on a GeForce FX 5900 pulled 20 GFLOPS, about 3 times the speed of the fastest P4 CPU.🙂

As far as implications for DC projects go, it can vary by project. SETI uses a lot of things GPUs like, such as FFTs, which will compute far faster on a GPU than a CPU, and other projects that use similar math also stand to gain a lot. No one has ported a DC project to a GPU yet, but as far as the open source projects go, it's entirely possible we may see some sort of test implementation soon.🙂
 
Interesting stuff, thanks Virge🙂

*me gets bucket & mop to clear up drool!*😛😱

So when are you writing it for SETI?😉
 
Indeed, sounds good!

I just wonder how long it'll take them to get us some alpha or whatever to give that a try. Just imagine: Your CPU crunching on CPDN or F@H while your GPU is crunching some SETI WUs ... :Q
 
Originally posted by: BlackMountainCow Indeed, sounds good!

I just wonder how long it'll take them to get us some alpha or whatever to give that a try. Just imagine: Your CPU crunching on CPDN or F@H while your GPU is crunching some SETI WUs ... :Q


I can't help thinking about 3 or 4 PCI video cards.....
Kwatt

 
Originally posted by: Kwatt I can't help thinking about 3 or 4 PCI video cards.....

Though, I guess that this is unlikely to happen ... if I were one of those developer guys, I'd concentrate on the fast AGP cards with shaders and lots of RAM.

I'm not sure about this, but what was the last GeForce or Radeon that was produced for the PCI slot? GeForce 2, maybe?
 
Seems they are only trying this on high-end equipment: the latest GeForce FX and ATI Radeon 9700/9800. They have already obtained 20 GFLOPS using a GeForce FX 5900 (NV35) GPU! That's like a 10GHz P4! What an amazing amount of processing power! This truly is untapped power and has great potential.
 
This is an excellent idea, and I look forward to it. I am amazed at the power these graphics cards have, and who is to say they won't come out with a supercomputing device loaded down with high-end graphics cards of the PCI variety, or even the next version of PCI due out next year. Multiple cards (like 6) running on the PCI bus would produce some amazing supercomputing potential. Then you could slap together some amazing server farms with normal computers, fat power supplies, and 6 GPUs each, and go to town.
This is an untapped money resource for the GPU makers, untapped potential for supercomputer makers, and could change the supercomputing world for good.
 
Wow, I can't wait. I have quite a bit of GPU power around (2 GF4 Tis, a GFFX 5900, a GF2 Pro, and a TNT2).

I know that there are GeForce FX 5200 (non-Ultra) cards in PCI versions; just load up 6 of those along with a 5950 in the AGP slot and you can not only have 14 displays, but also crunch 8+ SETI WUs per hour, not counting the CPU (~200 WUs/day/computer... possible?... wow!)
 