GPU crunching

narreth

Senior member
May 4, 2007
519
0
76
I looked around on the web about GPU crunching but I'm still not completely sure what it means. Is it just using your gfx card as another CPU or something? How do I enable it?

Thanks
 

waffleironhead

Diamond Member
Aug 10, 2005
7,066
571
136
The only distributed computing app right now, AFAIK, that uses the GPU to process is the Folding@home GPU application. There are other GPU applications out there, just not any DC apps. Hopefully some others are able to port their apps to GPU usage in the near future.
 

Assimilator1

Elite Member
Nov 4, 1999
24,165
524
126
Originally posted by: narreth
I looked around on the web about GPU crunching but I'm still not completely sure what it means. Is it just using your gfx card as another CPU or something? How do I enable it?

Thanks
1st question: yeah, pretty much.
2nd: in the case of F@H, download the GPU client & install it, job done :) (IIRC)

Get the GPU client here

Atm it only works on ATI X19xx cards, but soon (hopefully) the 2900s & 38xxs will be supported :). Not sure about the 2600s. Nvidia is a no go currently :(

 

PCTC2

Diamond Member
Feb 18, 2007
3,892
33
91
Originally posted by: wired247
Originally posted by: Assimilator1
Originally posted by: narreth
Nvidia is a no go currently :(

Their loss is no one's gain :(

F@H programmers @ Stanford have deemed programming for nVIDIA too complex, but what about CUDA? The 8800GTX is capable of CUDA, I thought...
 

LOUISSSSS

Diamond Member
Dec 5, 2005
8,771
58
91
Originally posted by: PCTC2
Originally posted by: wired247
Originally posted by: Assimilator1
Originally posted by: narreth
Nvidia is a no go currently :(

Their loss is no one's gain :(

F@H programmers @ Stanford have deemed programming for nVIDIA too complex, but what about CUDA? The 8800GTX is capable of CUDA, I thought...

More complex? Or not capable... it's not nice to spread rumors... :D
 

her209

No Lifer
Oct 11, 2000
56,336
11
0
Originally posted by: LOUISSSSS
btw, the x1900 GPU crunching is DAMN powerful
http://en.wikipedia.org/wiki/Folding@home
check out the gpu stats
Check out this bit:

Multi-core processing client
As more modern multi-core CPUs are released and adopted by the public, the Pande Group is adding symmetric multiprocessing (SMP) support to the Folding@home client in the hope of capturing the additional processing power. The SMP support is achieved using Message Passing Interface (MPI) protocols. In its current state it is confined to a single node by hard-coded usage of localhost.
Does this mean we can build an HPC cluster and run this software in the near future?
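
For anyone wondering what that hard-coded localhost business means in practice, here's a rough sketch of the MPI pattern (an illustrative toy in plain MPI C; the actual F@H core isn't public, so everything here is made up):

// Minimal MPI sketch: several ranks split one work unit's computation.
// The F@H SMP client reportedly launches all ranks on localhost, so the
// "cluster" is really just the cores of one machine.
// Build: mpicc smp_sketch.c -o smp_sketch
// Run:   mpirun -np 4 -host localhost ./smp_sketch
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    // Each rank computes a partial result on its slice of the work unit.
    double partial = 1.0 * rank;  // stand-in for real computation

    // Combine the partials on rank 0, as a real solver would combine
    // forces/energies each timestep.
    double total = 0.0;
    MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("combined result from %d ranks: %f\n", nranks, total);

    MPI_Finalize();
    return 0;
}

Nothing stops MPI itself from spanning machines; the single-node limit is just how the client is currently wired up.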
 

PolymerTim

Senior member
Apr 29, 2002
383
0
0
Originally posted by: her209
Does this mean we can build an HPC cluster and run this software in the near future?

Yes, pretty much. I recently read up a bit on GPU computing, since I had heard about it but didn't really know anything about it, and I was amazed at what it can do. In summary, if an application's code can be massively parallelized (I don't imagine this is too difficult for applications already using DC), then the gains are enormous.

It looks like the primary advantage comes from a GPU's ability to perform the same operation on thousands of sets of data at the same time. Some of the coders were saying that the coding techniques seem counterintuitive at times because they appear much less efficient (if you're used to programming for CPUs), but after investing a lot of overhead to properly set up the computation, the GPU then performs a computation on so many sets of data at once that it more than makes up for the overhead penalty. So an important point is that you can't simply port the code; you really have to rethink the logic and optimize it to make use of the parallel operations.
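
To give a flavor of what that pattern looks like, here's a minimal CUDA sketch (purely illustrative, nothing to do with F@H's actual code): pay the setup cost up front, then apply one identical operation across a big array with one thread per element.

// Minimal CUDA sketch of the "same operation on thousands of data sets"
// idea. The cudaMalloc/cudaMemcpy calls are the setup overhead; the kernel
// launch then applies one identical operation to N elements in parallel.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)            // guard: the grid may overshoot n
        x[i] = a * x[i];  // the same op, applied to every element
}

int main(void) {
    const int n = 1 << 20;  // ~1M elements
    size_t bytes = n * sizeof(float);

    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) h[i] = 1.0f;

    // Setup overhead: allocate GPU memory and copy the data over.
    float *d;
    cudaMalloc((void **)&d, bytes);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);

    // One thread per element, 256 threads per block.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d, 2.0f, n);

    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);
    printf("h[0] = %f\n", h[0]);  // expect 2.0

    cudaFree(d);
    free(h);
    return 0;
}

The allocation and copies are pure overhead; the win only shows up when n is large enough that the parallel work swamps them, which is exactly the "counterintuitive" trade-off described above.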

Check out the following paper for an excellent summary of the development of CUDA specifically for molecular dynamics simulations (which I believe is what F@H uses). Check out figure 3 (right): the CPU used for comparison is a 3.2 GHz Xeon (they don't say which one), and the speedup on the y-axis is how many times faster the 8800 GTX performed the calculation. That's right, the GPU was 80 times faster! And supposedly, CUDA can be used on any 8xxx-series card.
http://aps.arxiv.org/PS_cache/...f/0709/0709.3225v1.pdf

I've got a friend in a small modeling group at my university; they use about 32 SFF Dell machines as their computing cluster for molecular dynamics simulations, and I had to tell him about this, because not only is the software in the late stages of development, but it looks like nVidia is coming out with special hardware dedicated to HPC. At the low end, for about $1300, is the Tesla C870, which will fit in any computer with a PCIe x16 slot. Note that this is basically a souped-up Quadro 5600 without monitor connections; the card is intended solely for HPC. I jokingly told my friend that when the software is ready, they could replace their entire computing cluster with one of these.
http://www.nvidia.com/object/tesla_gpu_processor.html
http://xtreview.com/addcomment...870,D870-and-s870.html

I don't see how F@H can avoid this for long. I'm guessing it's just a matter of time before they're able to develop platforms for all these options. I've heard that current F@H GPU computing puts a significant load on a CPU core as well, but I've read that isn't strictly necessary. What's happening is that the primary program still runs on the CPU and offloads the more demanding calculations to the GPU. With a little more work (and if it's beneficial), the entire program could be run on the GPU.
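
Structurally, that CPU-plus-GPU split looks something like this (again just a hypothetical sketch with made-up function names, not F@H's real core): the CPU owns the main loop and the bookkeeping, and hands only the heavy inner computation to the GPU each step.

// Hypothetical sketch of the offload pattern: the CPU runs the main
// simulation loop; the GPU does the expensive per-particle work.
#include <cuda_runtime.h>

__global__ void compute_forces(const float *pos, float *force, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // ... expensive per-particle force evaluation would go here ...
    force[i] = -pos[i];  // placeholder "force"
}

void run_simulation(float *d_pos, float *d_force, int n, int steps) {
    int threads = 256, blocks = (n + threads - 1) / threads;
    for (int step = 0; step < steps; step++) {
        // CPU side: scheduling, I/O, checkpoints, talking to the server.
        compute_forces<<<blocks, threads>>>(d_pos, d_force, n);  // GPU side
        cudaDeviceSynchronize();  // CPU waits here; that wait is one way a
                                  // GPU client can still show CPU load
        // CPU side: integrate positions, update state, etc.
    }
}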

-Tim
 

PCTC2

Diamond Member
Feb 18, 2007
3,892
33
91
Yes, I also checked up on CUDA, and it is supported on ALL 8-series GeForce GPUs.

F@H definitely should tap into nVIDIA's CUDA for 8-series GPUs (there are plenty of 8800GTS-8800GTX G8x's and the new G92's that are vastly powerful, with at least 96 SPs for processing). I know I have two 8800GTS 512MB G92 cards, and I know my employer would look fondly on buying an nVIDIA-based Tesla system ;)
 

PCTC2

Diamond Member
Feb 18, 2007
3,892
33
91
If F@H gets their game going, a P5E64 WS with 4x 8800GS and a Q9450 would be the ultimate cruncher. ;)
 

Foxery

Golden Member
Jan 24, 2008
1,709
0
0
The word from Stanford's official forum is that they tried to write for nVidia. They ran into problems that required nVidia's help, and management told their programmers not to invest much time on it.

They don't want to do it through CUDA because that would require writing and maintaining a whole new code path from scratch. Remember, this isn't a project with a lot of funding or manpower - some things just aren't possible for them. ATI cares and works with them, so we get ATI clients.

I would guess that the To-Do list on Pande Group's office fridge looks something like:
1. PS3 client, because it's broken and Sony is helping
2. ATI client, because it needs 2000/3000 support and ATI is helping
3. Write papers and press releases on our results
4. CPU clients, because they work just fine and we can do that on our own
5. nVidia client, because it's broken and nVidia is not helping
 

PolymerTim

Senior member
Apr 29, 2002
383
0
0
Originally posted by: Foxery
The word from Stanford's official forum is that they tried to write for nVidia. They ran into problems that required nVidia's help, and management told their programmers not to invest much time on it.

They don't want to do it through CUDA because that would require writing and maintaining a whole new code path from scratch. Remember, this isn't a project with a lot of funding or manpower - some things just aren't possible for them. ATI cares and works with them, so we get ATI clients.

I'm hoping that work being done by other groups (such as the paper I linked above) will help speed things along. Even if Pande's group isn't working with CUDA, it looks like these guys in Amsterdam have a pretty good handle on it. I'm not sure exactly what the problems are with coding F@H for nVidia GPUs. I found some info in the F@H GPU FAQ, but I'm guessing it's outdated: it states that the primary reason they went for ATI is that its cards have more pixel shaders (48 total on the X1900XT), which are key to their calculations. But it looks like nVidia's G92 core has a revamped architecture that replaces vertex and pixel shaders with a unified processor known as a stream processor. The 88xx series has between 96 and 128 of these stream processors, and the above-mentioned group appears to have them working quite well for molecular dynamics simulations. (Edit: I just realized that the modern ATI cards have taken the same stream-processor route, and the 3800 series has even more of them.)

I don't know anything about coding, but I would imagine that once the Amsterdam group releases their code under the GNU Public License, it should be very helpful to the F@H developers. I do realize, though, that it's a big project for one research group now that they're spreading themselves thin across so many platforms.

 

LOUISSSSS

Diamond Member
Dec 5, 2005
8,771
58
91
Originally posted by: Foxery
The word from Stanford's official forum is that they tried to write for nVidia. They ran into problems that required nVidia's help, and management told their programmers not to invest much time on it.

They don't want to do it through CUDA because that would require writing and maintaining a whole new code path from scratch. Remember, this isn't a project with a lot of funding or manpower - some things just aren't possible for them. ATI cares and works with them, so we get ATI clients.

I would guess that the To-Do list on Pande Group's office fridge looks something like:
1. PS3 client, because it's broken and Sony is helping
2. ATI client, because it needs 2000/3000 support and ATI is helping
3. Write papers and press releases on our results
4. CPU clients, because they work just fine and we can do that on our own
5. nVidia client, because it's broken and nVidia is not helping

All good points. Yeah, nVidia sounds pretty self-centered and would probably only help if they could get some $$ out of it.
 

Insidious

Diamond Member
Oct 25, 2001
7,649
0
0
A quad running the GPU client and one SMP client would be fine, I think...

You'd get better points with two SMP clients and no GPU, though.
 

Assimilator1

Elite Member
Nov 4, 1999
24,165
524
126
Yes you can, but as Sid mentioned, there's no point.

PolymerTim

I found some info in the F@H GPU FAQ, but I'm guessing it's outdated: it states that the primary reason they went for ATI is that its cards have more pixel shaders (48 total on the X1900XT), which are key to their calculations.

Yes, that's out of date; that was before Nvidia released the 8800.
 

rabrittain

Senior member
Dec 28, 2006
715
0
0
OK -- perhaps you could consider this perspective....

On one of my rigs the CPU is an AMD Athlon 64 FX-60 dual core. The graphics card is an ATI All-In-Wonder X1900, and when it was new, a little less than 2 years ago, it cost ~$500. I don't own a television and don't watch much; every other day or so I watch TV news on the computer so that I can stay halfway abreast of what's happening in the world.

A couple of months ago I was running F@H, and I considered the GPU client. However, I would have had to roll back my graphics driver, and my GPU runs at 54 degrees C at idle. The thought of possibly burning up my nice card was more than I could endure. Besides, I am very comfortable with my graphics setup the way it is. It works real nice, and I'm the type of person who has always thought, "If it ain't broke, don't fix it!"

So I installed the SMP client as a service, being sure to say that there are 2 processors. Both cores ran at 100%, and I cranked out reasonably good numbers. Now, if the GPU client had been compatible with my graphics driver, I might have considered it... but I would still be against it, because a water block for my GPU is, for me, impractical.

So that's it -- another viewpoint.
 

Assimilator1

Elite Member
Nov 4, 1999
24,165
524
126
You don't need a water block to run the GPU client.
How come your idle GPU temps are so high? Mine is currently idling @ 43C with a room temp of 23C.

Is running the GPU client damaging in the long run? I asked that same question some months ago & did a little digging around; I didn't find anyone who had burnt out their card, including some who have been running it from the start. But I didn't come across any hard info saying it's OK for the GPU to be running @ 100% 24/7 for years on end, either.
Btw the GPU gets no hotter running F@H than it does running a game.

As for driver revisions, yes, they are lagging a bit, but 7.11 is supported. Are you using 8.1 or 8.2, then?

OT - OMG, the forum search function is so screwed! I had to use Google to find my old thread above. :roll:
That thread is still live, incidentally.
 

her209

No Lifer
Oct 11, 2000
56,336
11
0
Originally posted by: PolymerTim
I've got a friend in a small modeling group at my university; they use about 32 SFF Dell machines as their computing cluster for molecular dynamics simulations, and I had to tell him about this, because not only is the software in the late stages of development, but it looks like nVidia is coming out with special hardware dedicated to HPC. At the low end, for about $1300, is the Tesla C870, which will fit in any computer with a PCIe x16 slot. Note that this is basically a souped-up Quadro 5600 without monitor connections; the card is intended solely for HPC. I jokingly told my friend that when the software is ready, they could replace their entire computing cluster with one of these.
http://www.nvidia.com/object/tesla_gpu_processor.html
http://xtreview.com/addcomment...870,D870-and-s870.html
Wow, incredible.
 

Foxery

Golden Member
Jan 24, 2008
1,709
0
0
rabrittain,

Newer drivers support Folding@Home now - specifically, I know version 7.11 (i.e. November 2007) does. However, there's some speculation that the next version of the GPU Client itself will use much less CPU time. My advice is to keep running the way you are now, and investigate further when the new client arrives.

54 degrees seems high for idle, but I don't know what the normal range is for that card. Really, though, if it can run a game without overheating, it should run F@H.
 

rabrittain

Senior member
Dec 28, 2006
715
0
0
Thanks everyone for all the advice.

I use Everest ver. 3.5 to monitor my temperatures. As I write this, I'm winding down SOB and switching over to Malaria Control; I'm still working on one SOB WU, which has about 6.5 hours to go. When both cores are crunching @ 100%, the CPU temperature is ~60 deg C. Right now it is 54, and the CPU ambient is 32. My GPU temp is 55, and the GPU ambient is 47. My case is an Antec P-180; there is a special vent/fan housing that sits over the graphics card, and I have a small fan blowing on the back of the card. Perhaps I should turn the fan around and have it pull air from around the graphics card to the outside of the box.

It is hard for me to tell which version my graphics driver is. Windows reports it as ver. 8.205. The information window of the Catalyst Control Center says the driver packaging version is 8.205.2-..., the 2D driver version is 6.14, Direct3D is ver. 6.14, OpenGL is ver. 6.14, and AV Stream is ver. 6.14. Everest says I have Catalyst ver. 6.1. I'm not sure which it is, but it is very nice.

I do play a game every now and then -- Age of Empires III, Halo, the Doom 3 demo, and Half-Life 2. They all run quite well; the Half-Life 2 graphics are outstanding. I tried overclocking, but the machine had a tendency to lock up during gameplay, so I had to throttle back. I'm running stock now; it locks up about once in a blue moon.

Thanks everybody for all your help.
 

Assimilator1

Elite Member
Nov 4, 1999
24,165
524
126
For testing stability when overclocking etc., use Prime95 or, better still, F@H; they both report errors to you when they occur.