Recent Changes in projects

GLeeM

Elite Member
Apr 2, 2004
7,199
128
106
Just some info if anyone is interested in moving projects to the next Milestone. I know there are a few of us who like helping projects reach a minimum Milestone.
This is not a recommendation of any of these projects, just FYI.
None of these are the highest-paying projects, but for now they pay above average.

Gerasim added a GPU application if you wanted to move to the next Milestone.

Citizen Science Grid has a few WUs that pay a little better than usual. These might not last too long before they change back to the usual rates. This project I can recommend as a good one: it is run by the University of North Dakota, and I like the Wildlife part ... I am 12th in the world for time spent watching bird videos 8)

EDIT: this is a new project - Amicable Numbers (not listed in the DC Project List)
Make sure you check that this next project meets your ethical or minimum DC standards - I have no idea. Myself, I am just moving it to a minimum Milestone. Amicable Numbers has a multi-threaded (MT) CPU app and now a GPU app as well.
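For anyone curious what the project actually searches for: two different numbers form an amicable pair when each equals the sum of the other's proper divisors. A tiny Python sketch of the idea (nothing to do with the project's actual MT/GPU code, just an illustration):

```python
def sum_proper_divisors(n: int) -> int:
    """Sum of all divisors of n below n itself."""
    if n <= 1:
        return 0
    total = 1  # 1 divides every n > 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:       # avoid counting a square root twice
                total += n // d
        d += 1
    return total

def is_amicable_pair(a: int, b: int) -> bool:
    """a and b are amicable if each is the sum of the other's proper divisors."""
    return a != b and sum_proper_divisors(a) == b and sum_proper_divisors(b) == a

# The classic smallest pair:
print(is_amicable_pair(220, 284))  # True
```

The real project searches enormous ranges with far more clever sieving, but the defining check is the one above.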
 

Smoke

Distributed Computing Elite Member
Super Moderator
Jan 3, 2001
12,647
180
106
Thank you for the information, GLeeM. :)

I suggest you get together with Orange Kid and work out just how these projects can be added to his fine thread, the Distributed Computing Project List.

Good job! :wineglass:

 

Orange Kid

Elite Member
Oct 9, 1999
3,852
1,486
146
Thanks GLeeM :)
Smoke: these projects are on the list already.
He is just letting us know that they are giving more points than usual. In the case of Gerasim, they have had the GPU app for quite a while, but work is few and far between and goes fast when it is available :(
 

Orange Kid

Elite Member
Oct 9, 1999
3,852
1,486
146
"I edited the OP to make it clear that Amicable Numbers is a new project."

Added :)
 

waffleironhead

Diamond Member
Aug 10, 2005
6,490
42
101
I have attached to Gerasim and Amicable Numbers. Gerasim doesn't have AMD WUs right now. :( Amicable WUs are all kinds of crazy on my systems: one unit will take 4 minutes, the next 4 hours. Not sure what is going on, but thanks for the heads-up.
 

Orange Kid

Elite Member
Oct 9, 1999
3,852
1,486
146
...and Asteroids@home says its upload disk server is out of space, and yoyo@home is down for maintenance. :(
Off to find another project for the machines to work on.
 

StefanR5R

Diamond Member
Dec 10, 2016
3,433
3,674
106
ATLAS@Home moved to LHC@Home. That is, the ATLAS application is now being offered as a subproject of LHC@Home. Existing credits are going to be moved to LHC@Home (or at least a best effort will be made). Links:
ATLAS is a CPU application, requiring VirtualBox. Single-thread and multi-thread variants had been available, but it looks like only the multi-thread application was kept in the transition to LHC@Home.

About the project:
ATLAS@Home is a research project that uses volunteer computing to run simulations of the ATLAS experiment at CERN. [...] ATLAS is a particle physics experiment taking place at the Large Hadron Collider at CERN, that searches for new particles and processes using head-on collisions of protons of extraordinary high energy. Petabytes of data were recorded, processed and analyzed during the first three years of data taking, leading to up to 300 publications covering all the aspects of the Standard Model of particle physics, including the discovery of the Higgs boson in 2012.
 

StefanR5R

Diamond Member
Dec 10, 2016
3,433
3,674
106
Universe@Home stopped generating new tasks at the end of January. Yesterday they uploaded a new application, "Black Hole Database", with Windows and Linux CPU variants, and an ARM variant planned. But many of the WUs are reported to fail by exceeding the allowed disk size. The admin is working on it.

https://universeathome.pl/universe/forum_thread.php?id=312

I just downloaded a bunch of tasks on a low core count host to see what happens.
 

TennesseeTony

Elite Member
Aug 2, 2003
3,801
2,617
136
"...exceeding the allowed disk size..." Is that something the user can control, in the preferences (memory and disk usage), or a server side problem?
 

StefanR5R

Diamond Member
Dec 10, 2016
3,433
3,674
106
The project sets a max disk usage attribute per WU. The server sends tasks only to hosts which have enough free disk for them. If a running task begins to use more disk than specified, the BOINC client aborts it.

https://boinc.berkeley.edu/trac/wiki/JobIn
https://boinc.mundayweb.com/wiki/index.php?title=Maximum_disk_space_exceeded

My first results are rather bad:
State: All (36) · In progress (5) · Validation pending (5) · Validation inconclusive (0) · Valid (5) · Invalid (0) · Error (21)
Of the 5 tasks still in progress, 4 have a client_state.workunit.rsc_disk_bound of 900,000,000 bytes, one has 700,000,000. Some earlier tasks must have had a lower limit.

4 of the 5 valid tasks had a peak disk usage of just a few MB. The 5th successful one had almost 700 MB peak disk usage. The 5 tasks pending validation had 800 ... almost 900 MB disk usage.

1 task was cancelled by the server. The 20 other tasks with error had 526 MB ... 1029 MB peak disk usage.

A workaround is to download tasks, shut down the client, edit client_state.xml for larger rsc_disk_bound, and restart the client. Until either the project admin figures out what a proper bound is (edit: should be good now), or the developers figure out how to strictly stay within a sane limit.
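If anyone wants to script that workaround, here is a minimal sketch. It assumes the bound is stored as plain `<rsc_disk_bound>...</rsc_disk_bound>` text inside client_state.xml (as in current clients, but verify on your install); only run it while the BOINC client is shut down, and keep a backup of the file first:

```python
import re

def bump_disk_bound(state_text: str, new_bound: float) -> str:
    """Raise every <rsc_disk_bound> value in client_state.xml text to at
    least new_bound (bytes). Bounds already above new_bound are kept."""
    def bump(m: re.Match) -> str:
        if float(m.group(1)) >= new_bound:
            return m.group(0)  # already large enough, leave untouched
        return f"<rsc_disk_bound>{new_bound:.6f}</rsc_disk_bound>"
    return re.sub(r"<rsc_disk_bound>([\d.eE+-]+)</rsc_disk_bound>",
                  bump, state_text)

# Typical use (client stopped, backup made; the path varies by install):
#   text = open("client_state.xml").read()
#   open("client_state.xml", "w").write(bump_disk_bound(text, 2e9))
snippet = "<workunit><rsc_disk_bound>900000000.000000</rsc_disk_bound></workunit>"
print(bump_disk_bound(snippet, 2e9))
```

A plain regex is used rather than an XML parser because the client rewrites this file itself and its formatting is not always strict XML; the 2 GB figure is just an example sitting comfortably above the ~1.3 GB peak reported later in this thread.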

PS,
task durations on Linux are in the same range as the "Universe BHspin v2" application of theirs which I ran last year (mostly 70+ minutes on a Xeon E3 with hyperthreading in use, some up to 3 hours). Like before at this project, credit is fixed to 666.67 per task.
 

StefanR5R

Diamond Member
Dec 10, 2016
3,433
3,674
106
I see that some TeAm mates picked up the new Universe tasks and had a high error rate too.
But it should get better now:
On Sunday morning project admin krzyszp said:
Yes, I found another error. Because saving and zipping the result files takes a moment, the Manager thinks something is wrong with the application and kills it. This is explained here:
I tested the workunit standalone and found no issues. In looking through the client code it looks like this condition occurs when the client finds that the boinc finish file has been written to disk but the science application process is still running. Since the finish file was written then there must be a hang in boinc_finish somewhere. Or it could be a bug or race condition in the client causing a false positive.
So, in app version 0.03 I have added a 2-second "sleep" before the call to boinc_finish() to prevent this.
Also, I just discovered that in some conditions the temporary result files can grow up to... 1.3 GB!

So, from series "6" (short WUs, 5 to 15 minutes) and batches above "6" (normal, long WUs) I had to set the limit to 1.5 GB.

At the moment I see that series 6 and higher finish properly...
https://universeathome.pl/universe//forum_thread.php?id=312&postid=2675

Edit:
Note: like Universe's previous application, the current one is still more than 1.5× as fast on Linux as on Windows. So, if you have a mixture of Windows hosts and Linux hosts, you may prefer to run Universe@Home on your Linux hosts only. It may even be beneficial to set up a Linux virtual machine on a Windows host and run Universe in the VM.
 

Kiska

Senior member
Apr 4, 2012
747
178
116
Correct!
When the boinc_finish file is written to disk, BOINC assumes the science application has completed all of its saving. While the app is still zipping, it is not wise to call boinc_finish(); I would call boinc_finish() only after all processing has finished, including pre-processing and post-processing. That way the race condition no longer exists on the host.
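The ordering being described can be illustrated with plain files standing in for the real BOINC API. Only the marker name `boinc_finish_called` matches what the client actually looks for; everything else here is a simulation, not the project's code:

```python
import os
import tempfile
import zipfile

def run_fake_task(workdir: str) -> None:
    """Simulate a science app that writes its result, zips it, and only
    THEN drops the completion marker. If the marker appeared any earlier,
    a watching client could see it while zipping is still in progress and
    conclude the app is hung -- the race described above."""
    result = os.path.join(workdir, "result.dat")
    with open(result, "w") as f:
        f.write("simulated science output\n")
        f.flush()
        os.fsync(f.fileno())           # result fully on disk first

    with zipfile.ZipFile(os.path.join(workdir, "result.zip"), "w") as z:
        z.write(result, "result.dat")  # post-processing (zipping) next

    # Only now signal completion -- the moment boinc_finish() should be called.
    with open(os.path.join(workdir, "boinc_finish_called"), "w") as f:
        f.write("0\n")

with tempfile.TemporaryDirectory() as d:
    run_fake_task(d)
    print(sorted(os.listdir(d)))  # marker exists only after the zip does
```

In other words, the 2-second sleep added in app version 0.03 papers over the window; the robust fix is to make the completion signal the strictly last write, as above.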
 

Howdy

Senior member
Nov 12, 2017
563
458
106
For those who are running this project:

It appears that ODLK1/Latin Squares is having issues at this point in time. I cannot upload, download, or reach their website as of this post.
 

StefanR5R

Diamond Member
Dec 10, 2016
3,433
3,674
106
Apparently the domain name registration of Cosmology@Home (cosmologyathome.org) expired and was promptly snatched by a domain grabber. :-(

Edit,
a post at http://194.57.221.140/forum_thread.php?id=7535 suggests that a workaround is to resolve www.cosmologyathome.org locally to 194.57.221.140.
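For anyone applying that workaround, the usual mechanism is a hosts-file entry (Linux: /etc/hosts; Windows: C:\Windows\System32\drivers\etc\hosts) using the IP from the linked post. Remember to remove it again once the domain resolves normally:

```
194.57.221.140    www.cosmologyathome.org    cosmologyathome.org
```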

Edit 2,
Cosmology@Home is back to normal since about March 20.
 

TennesseeTony

Elite Member
Aug 2, 2003
3,801
2,617
136
Very helpful edit, Stefan, thanks for the info on that.

In other news, Jesus has sent me a message, and I want to in turn share the good news with all of you!

DENIS@Home: Re-starting the project!
Dear all,
Thank you very much, everyone, for your patient wait. I'm writing to give you good news. We sent a small block of tasks this Friday to check that all is working well. Little by little, we will send more tasks. It seems that everything is working well, but we want to go slowly to ensure each step. We still have some things to improve, but now we can say that we are back!

Many thanks for your collaboration

Jesus.
You will have to manually enter the project URL if you decide to join this long-dormant/crashed medical project. The link is here: denis.usj.es
 

StefanR5R

Diamond Member
Dec 10, 2016
3,433
3,674
106

zzuupp

Lifer
Jul 6, 2008
14,777
2,305
126
StefanR5R said:
Coincidentally, both DENIS@Home and Universe@Home are restarting their projects currently, and both struggle with the problem that their new applications create huge result files, causing the project server to run out of disc space quickly, merely from the results of the comparably few early testers.

https://denis.usj.es/denisathome/forum_thread.php?id=158&postid=1401
https://universeathome.pl/universe//forum_thread.php?id=314&postid=2754
Thanks! I knew about DENIS. He's hoping to be back in a day or two.

Numberfields has a new application, for anyone else collecting WUProp stars.
 
