
FormulaBOINC

During the race, some factors were found in the ECM project:

League 1:
Czech National Team: 1 factor with 50 digits

League 2:
OcUK - Overclockers UK: 1 factor with 56 digits
Overclock.net: 1 factor with 50 digits
TeAm AnandTech: 1 factor with 58 digits and 1 factor with 50 digits - win! 🙂

League 3:
Christians: 1 factor with 50 digits
 
Less than 48 hours until the project for the next race is announced! 😀

Although my computers only make a faint, steady, humming sound, I hope you don't think me too crazy to be running around the room, pushing around a spare optical mouse across the floor, and saying "Vroom! Vroooooom! Vrrrrrrroooommm vrooom vrooom vroom!"
 
my computers only make a faint, steady, humming sound

My computer with the 6950X and F@H GPUs made a single loud Plonk! on Sunday evening. This happened after I expanded the water cooling loop with another reservoir, radiator, and GPU blocks, had refilled the loop and bled the air, and had the PC running again for a few minutes. Either too much air went through the pump during air bleeding, or something solid which shouldn't be in the loop hit the pump, or the pump was too weak for the expanded loop to begin with. I am waiting for a new pump to arrive in the mail now, but this PC might not make it in time for the race.
 
Just had all the wood floors in the house redone: sanded, with three coats of varnish applied. While one computer has been off and the others were in closed-off rooms, they all seem to have attracted a fair amount of wood dust and smell like a campfire. So as soon as I can get to them, they'll need a good cleaning. While I enjoy the smell, it probably isn't the best for them. Should be done before the race 🙂
 
Most of my GPUs are for double precision work; I have been wondering whether it's worthwhile to bother putting them on a single precision GPU project. My guess is that's what the next sprint will consist of.
 
So, Sprint #2 is Einstein@Home. Any suggestions on how best to run this on modern GPUs?
 
Just registered with E@H and began running 1 task per GPU. (They sent me γ-ray pulsar binary search tasks.) GPU-Z shows a constant bar of 94% utilization on the AMD W7000, so I guess doing 2 tasks at once isn't going to improve throughput a lot. On the GTX 1070, it's fluctuating between 97% and 74%. Perhaps a second simultaneous task would fill in those dips.
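A rough back-of-envelope for those utilization numbers (an upper bound only, not a measurement): if a card averages utilization u, a second concurrent task can raise throughput by at most a factor of 1/u, since the best it can do is fill the idle time.

```python
# Upper bound on the throughput gain from a second concurrent task,
# assuming the only possible gain is reclaiming idle GPU time.

def max_speedup(avg_utilization: float) -> float:
    """Best-case throughput multiplier if all idle time is filled."""
    return 1.0 / avg_utilization

# W7000 pinned at a steady 94% busy: very little headroom.
print(round(max_speedup(0.94), 3))   # 1.064 -> at most ~6% more throughput

# GTX 1070 swinging between 74% and 97%: call the average ~85.5%.
print(round(max_speedup(0.855), 3))  # 1.17 -> maybe ~17% if the dips are filled
```

So the steady 94% bar really does mean a second task can't buy much, while the 1070's dips leave room for a modest gain at best.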

(My other cards are down until the pump arrives. Could be Friday.)
 
Moved my dual 280Xs over to Einstein for the sprint, along with the dual GTX 980s. Having trouble getting my Vista setup with one 280X to download any work. I reset the project, but it may just be the older driver; BOINC is up to date. Told MW not to download any new tasks, to see if I just need to clear out some space for the Einstein work. Scratching my head.
 
@iwajabitw, did you perhaps disable some of the applications in your computing preferences on the e@h web site? According to their server status, only FGRPB1(G) work generators are online, i.e. only for Gamma-ray pulsar binary search #1 on CPUs and GPUs.
 
Event log says there is no available work for my type of GPU, so that means it's the Vista Radeon driver version. Guess I'll leave it on MW.
 
2 - GTX 980s
2 - R9 280Xs

Bunkered up and crunching away! What's the CST start time? I'll be at work and will have to dump using TeamViewer. Need to set an alarm reminder.
 
Well, I have two Nehalem-era machines with GPUs in them that both lock up when attempting Einstein@Home WUs. A third Nehalem PC is CPU-only; it doesn't lock up on E@H. The Sandys, Haswell, and Skylake (all with GPUs) are doing fine as well. I've put the two "problem children" back on M@H duty; they don't seem to mind that.
 
Ah, Einstein! How fortuitous! Hundreds awaiting validation already. 3 at a time works best for me, though 2 at a time is almost identical, on dual R9 280X. Getting 1.05M to 1.2M PPD.

Nvidia cards will likely require a full thread or core per task, since they have very little double-precision capability.

For an R9-280X, app_config.xml for me is:
<app_config>
<app>
<name>hsgamma_FGRPB1G</name>
<gpu_versions>
<gpu_usage>0.33</gpu_usage>
<cpu_usage>0.010</cpu_usage>
</gpu_versions>
</app>
</app_config>

edit: Three at a time, each task completes in about 30 minutes or less, for 3,465 points per task.
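That PPD figure is easy to sanity-check from the numbers above (my own arithmetic, nothing official):

```python
# Sanity-check the ~1.05M-1.2M PPD claim for dual R9 280X cards:
# 3 concurrent tasks per GPU, ~30 minutes per task, 3,465 credits per task.

TASKS_AT_ONCE = 3
MINUTES_PER_TASK = 30
CREDIT_PER_TASK = 3465
GPUS = 2

tasks_per_day = GPUS * TASKS_AT_ONCE * (24 * 60 / MINUTES_PER_TASK)
ppd = tasks_per_day * CREDIT_PER_TASK
print(int(ppd))  # 997920 -> ~1.0M PPD, consistent with the 1.05M-1.2M observed
```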
 
I would have never guessed we could be that stingy with CPU resources! Do you think older/slower cores might want more than that?
 
Crashtech, I adjust my percentage of CPU available for BOINC to keep my total usage under 100%. It's just easier and more consistent across projects that way. Currently that box is set to allow 78% of 28 threads to be used, but that actually resulted in CPU utilization in the mid-90s, due to the GPU tasks.

edit: The advantage of doing it my way is that, when also running CPU tasks, I have found that if you set a high CPU usage for a GPU task, the GPU task will sometimes quit in favor of letting a CPU task run. Obviously that is not optimal. 😉

Well, Nvidia is working out OK: GTX 1080 single task in 700 seconds of GPU time, plus 690 seconds of CPU time. Triple task times will be posted soon.

edit: Wow, triple tasks on a 1080 came in at about 26 minutes each.
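For anyone curious why 26-minute triple tasks beat 700-second single tasks, the arithmetic works out like this (a quick sketch using only the numbers above):

```python
# Compare single-task vs. triple-task throughput from the GTX 1080 numbers.

single_task_s = 700      # one task at a time: 700 s of GPU time per task
triple_wall_s = 26 * 60  # each of 3 concurrent tasks finishes in ~26 min

effective_s = triple_wall_s / 3     # wall-clock seconds per completed task
gain = single_task_s / effective_s  # throughput multiplier vs. single-task

print(round(effective_s))  # 520 -> each task effectively costs 520 s
print(round(gain, 2))      # 1.35 -> ~35% more tasks per day running 3-up
```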
 