
let your gpu work!

yeah... Einstein seemed a little buggy to me... it was killing my resources... some of my rigs became unusable while crunching Einstein
 
hi, was just wondering if there will be gpu workunits available again?


Well, the last time we had GPU units was in summer 2009, when our CPU version of the AQUA app suddenly became much faster than the GPU version, making the credit levels go bonkers and making a lot of GPU users mad. Despite months of trying and a 7x speedup on the GPU version, we never got the GPU version faster than half the performance of a Core i7 CPU.

In contrast, the Fokker-Planck app would actually probably be faster on a GPU than on a CPU, but it'd take time for us to reorganize the code to run on GPUs, and we're always short on time. 🙁 I'm currently focusing on getting our new open quantum system simulation app ready, among the several other projects I'm working on.

No more work for Aqua either.
 
Let's put it back on the first page.

/* edit: disregard the rigs data in the signature - it is not up to date at this time. */
 
Is your GPU doing any work at all? You may want to check the BOINC preferences at the project website (your account page) and make sure the answer to "Suspend GPU work while computer is in use?" is "no". If your GPU is not crunching at all, then something else is wrong. BOINC version 6.10.58 is the latest stable version that supports GPU crunching.
And to stop MW using the CPU, you need to uncheck 'Use CPU' in the MW project preferences.
 
PrimeGrid definitely has good nVidia support for the Proth Prime Search Sieve project.

 
Just keeping this on the front page. 😉

PrimeGrid definitely has good nVidia support for the Proth Prime Search Sieve project.


I haven't had any problems running ATI work units for PrimeGrid, either :thumbsup:
 
Just a bump for the thread. I get the itch to check in on RC5 stuff from time to time. I miss the RC5-64 days. 🙂

I may have to try one of these new stream clients on my HD6850. 😉
 
I'm using 0.13 CPU with 1 Nvidia GPU on GCW Sieve.
Thanks Rudy Toody 🙂

Another question then: Where do you see 0.13?
In my Task Manager (Win7) it only shows whole percent. For instance, on my i7 quad with HT, one full virtual core equals 12-13%.
Is 0.13 one full core of eight then?
 
Thanks Rudy Toody 🙂

Another question then: Where do you see 0.13?
In my Task Manager (Win7) it only shows whole percent. For instance, on my i7 quad with HT, one full virtual core equals 12-13%.
Is 0.13 one full core of eight then?

It's on the task line in the BOINC Manager. I don't know about virtual cores, but mine is 0.13 of one core. (I'm on 64-bit Linux; I have no idea how Win7 handles it.)
 
It's handled the same way in win7, once the WU is running it will appear under status. Running milkyway I get "Running (0.05 CPUs + 1.00 ATI GPUs)"
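To make the arithmetic behind those numbers concrete: BOINC's "x.xx CPUs" figure is a fraction of a single core, while a whole-machine monitor like Task Manager spreads it across every logical core. Here is a quick sketch of that conversion (the function name and figures are just for illustration, taken from this thread):

```python
# Convert BOINC's per-task CPU fraction into the overall CPU percentage
# that a whole-machine monitor such as Task Manager would report.

def task_manager_percent(cpu_fraction: float, logical_cores: int) -> float:
    """cpu_fraction is BOINC's 'x.xx CPUs' figure (a fraction of ONE core)."""
    return cpu_fraction / logical_cores * 100

# i7 quad with Hyper-Threading = 8 logical cores; one full core ~= 12.5%
print(round(task_manager_percent(1.00, 8), 1))  # one full core
print(round(task_manager_percent(0.13, 8), 1))  # the GCW Sieve helper task
```

So the 0.13 CPUs task would register as only about 1.6% of total CPU in Task Manager, which is why it can be easy to miss there.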
 
I have a GTX275 - will this card run PrimeGrid WUs?

I selected yes to Use NVIDIA GPU and then I checked the two projects that use NVIDIA.

Why won't it run?
 
I have had that problem before.
You may have a file called cc_config.xml in the BOINC data directory (C:\Documents and Settings\All Users\Application Data\BOINC). If you have such a file, edit it so that the <options> section contains the following line (see below):

<use_all_gpus>1</use_all_gpus>

If you do not have such a file, create it and copy in the following:
Code:
<cc_config>
   <options>
       <use_all_gpus>1</use_all_gpus>
   </options>
</cc_config>
(The indentation is only for readability; it's the tag structure that matters.)

More about this file can be found in the BOINC client configuration documentation.

Good luck!
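If you'd rather script that step, here is a minimal Python sketch that writes the same cc_config.xml and sanity-checks that it parses as well-formed XML. The target path is an assumption; the BOINC data directory varies by OS and version, so adjust it to match yours:

```python
import xml.etree.ElementTree as ET
from pathlib import Path

CC_CONFIG = """<cc_config>
   <options>
       <use_all_gpus>1</use_all_gpus>
   </options>
</cc_config>
"""

# NOTE: writing to the current directory for illustration; in practice this
# file belongs in the BOINC data directory for your platform.
path = Path("cc_config.xml")
path.write_text(CC_CONFIG)

# Sanity-check: the file parses as XML and the flag is actually set.
root = ET.fromstring(path.read_text())
print(root.find("options/use_all_gpus").text)  # -> 1
```

After placing the file, the client has to re-read its configuration (or be restarted) before the setting takes effect, as noted further down in the thread.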
 
Did you check each sub-project to use nvidia?
Thanks, yes that is what I meant by "I checked the two projects that use NVIDIA" 🙂

<use_all_gpus>1</use_all_gpus>
Thanks Peter, that's what it was!

Except it was in a different place on my computer and I had to restart the client (not just run the "Read config file" command) 😎

It adds more lag to anything using video, including the desktop, than the Folding GPU client does 🙁
 
My main home PC died recently. It wasn't anything special, but it was my best machine at home, and was the one that I had attached to every BOINC project, partially to keep my accounts synchronized on all projects, but mostly just for the fun of seeing the BOINC client's Projects tab fill the entire page. 😉

I just replaced that old beast with a new (to me) PC with a Core 2 Duo E4600 (2.4 GHz) and an ATI HD 5670 video card. It's certainly not new technology, but it's a LOT better than what I used to have, so I'm pretty happy with it.

Anyway, to the topic of the thread, I've had the ATI card running Collatz for the last 2.5 days (since I got the computer) with the CPU running other projects. It's only allowed to run when I'm not using the computer - which lately has been about 8-10 hours per day since I've been home a lot and have been playing games on much higher settings than I ever could before. So in about 30-35 hours of actual processing time, that card has produced 59K points in Collatz. That's more than double what all of my 20(ish) computers have done combined on CPU only projects. Not bad at all for a card that only cost me $43!


edit: The card now has 90K points after exactly three full days of running Collatz (interrupted by me playing games for about 1/3 of the time). 🙂

edit again (Just for fun): After 7 full days running Collatz, the Radeon 5670 now has 280K points, while running about 16-18 hours per day. That's 40K points per day on a low budget card, and it's not even working the full 24 hours per day. I wish I could get that kind of boost on CPU-only projects!
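For anyone curious, the per-day rate quoted above checks out. A quick sanity check of the arithmetic, using the figures from the post (17 hours/day is an assumed midpoint of the quoted 16-18):

```python
# Credit-rate arithmetic for the HD 5670 on Collatz, from the figures above.
total_credits = 280_000
days = 7
hours_per_day = 17  # assumed midpoint of the quoted 16-18 hours/day

per_day = total_credits / days
per_hour = per_day / hours_per_day

print(per_day)          # 40000.0 credits/day
print(round(per_hour))  # 2353 credits per hour of actual crunching
```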
 