You'd probably have to wait for another "bump" in the classic work queue, like the ones you can see in the green ready-to-send (rts) graph in Rosetta results - by week, namely the ones paired with a rise in the blue tasks-in-progress graph. But even then you will only get work if your computers keep sending work requests. (Classic work, that is. The unstable and resource-hungry VirtualBox-based work is readily available all the time, of course.)

So now that the 64-core EPYCs are out of work for WCG, I enabled Rosetta. Out of 384 possible threads, I have ONE Rosetta task.
$ free -h
              total        used        free      shared  buff/cache   available
Mem:          251Gi        20Gi       111Gi        38Gi       119Gi       190Gi
Swap:            0B          0B          0B
$ sudo du -sh /var/lib/boinc/slots/
39G     /var/lib/boinc/slots/
$ df -h /var/lib/boinc/slots/
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           126G   39G   88G  31% /var/lib/boinc/slots
$ free -h
              total        used        free      shared  buff/cache   available
Mem:          251Gi        20Gi       169Gi        31Mi        61Gi       229Gi
Swap:            0B          0B          0B
$ sudo du -sh /var/lib/boinc/slots/
39G     /var/lib/boinc/slots/
$ df -h /var/lib/boinc/slots/
Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme0n1p2  868G  228G  639G  27% /var
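To tell at a glance which of the two situations above a machine is in, you can ask which filesystem backs the slots directory. This is a small sketch; the path is the one from the transcripts above, so override SLOTS if your install differs:

```shell
# Where does the BOINC slots directory live?
# "tmpfs" in the Type column means it is RAM-backed (first box above,
# so the 39G counts against memory); a /dev/... device means on-disk.
# Path taken from the transcripts above; override SLOTS if yours differs.
SLOTS="${SLOTS:-/var/lib/boinc/slots}"
if [ -d "$SLOTS" ]; then
    df -hT "$SLOTS"
else
    echo "no slots directory at $SLOTS"
fi
```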
But.... on my 64-core EPYC boxes I have not changed the time, and they run in about 3:30.

Some R@h users appear to be doing something similar. I am routinely seeing workunits whose earlier task failed on somebody else's computer because it timed out. (Remember: the reporting deadline is 3 days. Hence, don't buffer more work than can be completed in 3 days.)
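The arithmetic behind that rule of thumb can be sketched with the numbers from this thread (3-day deadline, roughly 3.5 h per task; both values are from the posts above, not universal):

```shell
# Back-of-envelope buffer limit: how many tasks per thread fit
# inside the reporting deadline at a given per-task runtime.
# Numbers are illustrative, taken from this thread.
deadline_h=72      # 3-day reporting deadline
runtime_h=3.5      # observed per-task runtime on the EPYC boxes (~3:30)
awk -v d="$deadline_h" -v r="$runtime_h" \
    'BEGIN { printf "max tasks per thread: %d\n", int(d / r) }'
# prints "max tasks per thread: 20"
```

Anything buffered beyond that per thread risks missing the deadline and being reissued to someone else's computer.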
Tip: Increase the "Target CPU run time" from the default of 8 h to something higher.
Then the same number of downloaded tasks will last you longer. While browsing top_hosts.php, I saw that one of the prolific users has even set it to 24 h (which may be a bit much, but works too, of course).
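Why the same download lasts longer at a higher target runtime, as a quick illustration (the buffer of 10 tasks per thread is a made-up number; the runtimes are the default and the top_hosts.php setting mentioned above):

```shell
# With a fixed buffer of N tasks per thread, the work on hand
# scales linearly with the target runtime (illustrative numbers).
tasks=10
for runtime_h in 8 16 24; do
    echo "target ${runtime_h} h: $((tasks * runtime_h)) h of buffered work per thread"
done
# e.g. the last line prints "target 24 h: 240 h of buffered work per thread"
```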
Some notes on the process of changing "Target CPU run time":
1. It used to be that the boinc client was oblivious to these changes, which are made at the server, and continued to use the old task duration as the estimated duration of new tasks. (In effect, the client would buffer too much after the target run time was increased, or, vice versa, too little after it was decreased.) The client's estimate only gradually converged to the new run time as more and more tasks completed with the new setting.
I don't know if this problem still exists.
2. When said change is made on the website, which tasks exactly are affected by the new setting? Well, somebody once wrote:
After a scheduler request of the client (e.g. a project update), the target run time of the following tasks is being modified:
+ tasks which hadn't been started yet (and are started after the scheduler request), which of course also includes tasks that are yet to be downloaded,
+ tasks which were suspended to disk (and are resumed after the scheduler request).

The target run time of the following tasks is not modified:

– tasks which are running during the scheduler request,
– tasks which were suspended to RAM during the scheduler request.
Edit, I forgot:
3. For the time being, "Target CPU run time" is recognized only by the "Rosetta" and "Rosetta mini" applications, not by "rosetta python projects".
Where would you change this anyway? I looked at every preference, and I don't see it.

Actually, on all of your computers the runtime is about 3 h. And now that I looked around more, the same is true on several other users' computers.
Apparently 3 h is the current default target runtime, not 8 h anymore.
This has been changed by the project admins a few times over the years.
(Years ago, admins might have made announcements about changes like this. But more recently, there is no communication from the project to the contributors anymore. Scientists feed work into Rosetta on their end, users complete work on their end, but in between there is deafening silence. Admins and moderators have vanished without a trace.)