Recent Changes in projects


TennesseeTony

Elite Member
Asteroids@home: New CUDA102 application was released
Today we released a new CUDA102 application v102.14 for both Windows & Linux.
A bug was fixed where the application caused 100% utilisation of a whole CPU core (a thread on hyperthreaded CPUs)

Running a pair of test WUs now on a GTX 1070 under Windows; estimated 33 min until completion. Looks like Asteroids is STILL effectively a CPU project (a Ryzen 3000 series CPU finishes a CPU task in under an hour on average). :( The first one finished in 26m19s.

Notes: 100% GPU usage, but only 55-60% of the power target, and almost no CPU usage (occasional spikes). The progress bar and time-remaining estimate freeze, then jump ahead significantly.
 

StefanR5R

Elite Member
Rosetta@home
After a brief test of the https URL scheme at Ralph@home, Rosetta@home switched its project URL from http://boinc.bakerlab.org/rosetta/ to https://boinc.bakerlab.org/rosetta/. They made this change because companies that want to contribute computer time asked for it.

If you still have the old URL on your computers, you can keep using it. Switch over only when there are no Rosetta tasks left on the computer.

Furthermore, the application version was updated from v4.15 to v4.20 just half a day ago. I haven't looked up what the changes are yet; I wasn't watching Ralph@home during the testing period of v4.20, which lasted just 17 hours.

Edit,
v4.20 has a fix for occasional failures to create initial task files. In the previous version, this problem would cause tasks to fail with an error very early during execution, AFAIK. I haven't found out yet whether there are any other fixes in it.

Edit 2,
according to a forum post, the v4.20 update includes:
  1. extraction of the Rosetta database into the project directory, with all subsequent jobs reading from the same database rather than extracting it into the slot directory for every job. This significantly reduces the disk usage per job.
  2. checkpointing in the Rosetta comparative modeling protocol. This should significantly reduce wasted CPU time if jobs are preempted often, particularly for jobs that take a long time to produce models (see the sketch below).
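
To illustrate point 2: here is a minimal sketch of the generic checkpoint/resume pattern being described. This is not Rosetta's actual code; the file name, the JSON format, and the functions are invented for the example.

```python
import json, os, time

CHECKPOINT = "checkpoint.json"   # hypothetical state file in the task's slot directory

def compute_one_model(i):
    time.sleep(0.1)              # stand-in for the hours of CPU work per model

def produce_models(n_models):
    """Produce n_models results, resuming from the last checkpoint if preempted."""
    start = 0
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            start = json.load(f)["next_model"]
    for i in range(start, n_models):
        compute_one_model(i)
        # Persist progress after each completed unit of work, so being
        # preempted costs at most one model's worth of CPU time.
        tmp = CHECKPOINT + ".tmp"
        with open(tmp, "w") as f:
            json.dump({"next_model": i + 1}, f)
        os.replace(tmp, CHECKPOINT)   # atomic rename: never a half-written checkpoint
```

Without such a checkpoint, a preempted job has to restart from the first model, which is exactly the wasted CPU time the update addresses.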
 

StefanR5R

Elite Member
PrimeGrid
An LLR2 application has been in testing for about a week now. Its important change over LLR is that it incorporates Robert Gerbicz's runtime self-tests for software and hardware errors. It will not replace LLR completely; LLR will continue to be used in subprojects which test k*2^n-1, as well as on 32-bit hosts.
stream said:
First step of integration includes partial replacement of current LLR by LLR2 in "Gerbicz runtime error check" mode for some projects and platforms:
  • Platforms: Windows-64 and Linux-64 only;
  • Projects: all except base-2 "-1" tests, because LLR2 uses another testing method and incompatible residues.
Remaining platforms and projects will continue to use old LLR (residues are compatible).

Please note that it will not enable the fast validation scheme yet; only the Gerbicz runtime error check will be enabled. It allows catching all types of hardware and, most importantly, software errors. Ideally, we should not see mismatching residues anymore. Even a completely broken/overheating computer should either eventually complete the test or abort it after a few attempts, but never return a wrong residue.

Pavel Atnashev said:
The importance of the Gerbicz check is that it is very good at detecting local hardware and software errors almost for free, and it provides a way to recover from them. We should see a dramatic drop in invalid results. That's why it's important to start using it ASAP.

(found via a post from pschoefer in the SETI.Germany forum)

Translated from German:
pschoefer said:
In the long run, double checking will be revolutionized by this. More on this in due course.

Update:
The LLR2 application is already live. It appears as version 8.10 of the LLR-based applications:
CUL, DIV, ESP, MEGA, PPS, PPSE, PSP, SOB.
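
For the curious, here is a toy sketch of the idea behind the Gerbicz check, using plain Python integers. The real LLR2 of course does this inside FFT-based multiprecision arithmetic, and the function name, block length, and rollback bookkeeping below are purely illustrative. The key point: the checksum d accumulates the running value at every block boundary, so verifying a single block relation catches a corruption anywhere in the preceding history.

```python
def prp_with_gerbicz(N, total_iters, L=1000, base=3):
    """Compute base^(2^total_iters) mod N with Gerbicz error checking.

    With correct arithmetic, d_t = prod_{k=0..t} x_{kL} satisfies
        d_t == d_{t-1}^(2^L) * base  (mod N),
    and because d_t contains the whole history of the computation,
    this one relation fails with overwhelming probability if any
    earlier x or d was corrupted.
    """
    assert total_iters % L == 0
    x = base % N                     # running value x_i = base^(2^i) mod N
    d = x                            # checksum, updated once per block
    saved = (x, d, 0)                # last state that passed verification
    t = 0
    while t < total_iters:
        d_prev = d
        for _ in range(L):           # the real work: L modular squarings
            x = x * x % N
        t += L
        d = d * x % N                # one extra multiplication per block
        # Verify every L blocks and at the end. The check costs L squarings
        # of d_prev, so the whole scheme adds only ~2/L overhead.
        if (t // L) % L == 0 or t == total_iters:
            if pow(d_prev, 1 << L, N) * base % N == d:
                saved = (x, d, t)    # everything since the last check is good
            else:
                # error detected: roll back and redo
                # (a real application gives up after a few attempts)
                x, d, t = saved
    return x
```

This is why a flaky machine either eventually delivers a correct residue or aborts, but does not return a wrong one.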
 

Fardringle

Diamond Member

New project: MLC@Home (link)


Linux only at the moment. Windows app coming soon™
source.

Edit: Around 700 MB per WU. Ouch.
The tasks I'm getting are only 5-7 MB each...

Edit: Completed tasks receive 260 credits, regardless of how long they take. So far I've had run times anywhere from 10.5 minutes to 75 minutes on my Ryzen 9 3900X, with most of them in the 65-70 minute range.
 

StefanR5R

Elite Member
RakeSearch (the one with the nicest badges of them all)
RakeSearch is about to end:
https://rake.boincfast.ru/rakesearch/forum_thread.php?id=230
The last batch of WUs was sent to hosts a few days ago. The site will stay online, but no new tasks will be generated. The project's findings are public.
Two weeks ago, RakeSearch rebooted with an application "SAT-based search for orthogonal pairs of DLS of order 10":
https://rake.boincfast.ru/rakesearch/forum_thread.php?id=234
(CPU, Windows, Linux)
The server status page shows this run at 36 % complete. It still seems to be a test of the waters, in preparation for the real comeback.
 

StefanR5R

Elite Member
The virtual size is the less interesting figure, especially on Linux, which overcommits system memory. The "working set size" or "resident set size" ("RSS") is the more important figure, because it is what actually occupies physical RAM at a given moment.

However, your second screenshot delivers the horrible news that this task has an RSS almost as big as the virtual memory size of the process, i.e. 40 GB indeed. :-O

A counterexample: I've got GPUGrid's acemd3 running right now. Each of these tasks has ~30 GB virtual memory size but only ~400 MB working set size. I have three acemd3 tasks running at once (plus forty jobs of a CPU project) on a Linux system which has merely 32 GB physical RAM and a 32 GB swap partition. There you can see how far the Linux kernel overcommitted the 2x 32 GB present in the system. This overcommitment policy is possible and sensible because many programs never actually touch a considerable fraction of the memory they allocate.
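
(You can watch the two figures side by side with `ps -o pid,vsz,rss,comm`.) For a self-contained demonstration of overcommitment, here is a little Python experiment; the 4 GB size is arbitrary, and it assumes a 64-bit Linux box with default overcommit settings:

```python
import mmap, resource

def peak_rss_mb():
    # ru_maxrss is reported in kilobytes on Linux
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss // 1024

GB = 1 << 30
buf = mmap.mmap(-1, 4 * GB)                  # allocate 4 GB of anonymous memory
print("after mmap: ", peak_rss_mb(), "MB")   # VSZ jumped by 4 GB, RSS barely moved
for off in range(0, 4 * GB, 4096):
    buf[off] = 1                             # touch one byte per page
print("after touch:", peak_rss_mb(), "MB")   # now ~4 GB really occupies RAM
```

Until the pages are touched, the kernel has only promised the memory, not delivered it.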

(If suddenly the three acemd3 processes decided to access all of their memory, the Linux kernel's out-of-memory killer (OOM killer) would spring into action and snipe one heavy memory user after another until the OOM situation is cleared.)

But your Rosetta task did indeed access almost all of the memory which it allocated.

Edit,
your finding made me go look through the Rosetta forum for any announcement of such large jobs. I did not find any. I may have missed it, though, because the forum is so horribly disorganized and extremely noisy. But the more likely explanation is that there simply has not been such an announcement yet (assuming this large memory usage is by design and not due to a bug).
 

StefanR5R

Elite Member
NFS@Home
Two new applications were added: "lasievee_small" and "lasievef_small". While they are new applications from the BOINC point of view, they use the existing executable files of lasievee and lasievef; only the workunit space has been split off, to keep up with the ever-growing size of the numbers being factored.

In total, there are now these applications which you can select in NFS@Home preferences:
  • lasieved (14e Lattice Sieve),
    very small numbers, uses less than 0.5 GB memory, work may be infrequently available
  • lasievee_small (15e Lattice Sieve for smaller numbers),
    small numbers, uses up to 0.8 GB memory
  • lasievee (15e Lattice Sieve),
    medium numbers, uses up to 1 GB memory
  • lasievef_small (16e Lattice Sieve for smaller numbers),
    large numbers, uses up to 1 GB memory
  • lasieve5f (16e Lattice Sieve V5),
    huge numbers, uses up to 1.25 GB memory
(forum thread with the announcement)
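
Given these differing memory footprints, one way to keep a many-core host from oversubscribing its RAM is the BOINC client's per-project app_config.xml. A minimal sketch, placed in the NFS@Home project directory (the app name must match the short names above; the limit of 4 is arbitrary):

```xml
<app_config>
    <app>
        <name>lasievef</name>
        <max_concurrent>4</max_concurrent>
    </app>
</app_config>
```

Have the client re-read its config files (or restart it) for the limit to take effect.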
 

Markfw

Moderator Emeritus, Elite Member
Come on, Rosetta... seriously? 40 GB of RAM for one task?

[Screenshots: a Rosetta task using ~40 GB of memory]
I can't figure out how to make screen copies in Linux, but this is the third 40 gig process that I have seen. My 7452 has 128 gig of RAM, and most Rosetta processes are taking 1.5 to 2 gig of RAM (60 or so Rosetta tasks), but now I have 20 tasks waiting for memory, plus that 40 gig one.

arg !!!!
 

StefanR5R

Elite Member
If you use the Cinnamon desktop of Linux Mint, simply press the [PrtSc] key to capture the whole screen, or [Alt][PrtSc] to capture the currently focused window.

After that, a window pops up which lets you save the screenshot as a file.
 

Markfw

Moderator Emeritus, Elite Member
StefanR5R said:
If you use the Cinnamon desktop of Linux Mint, simply press the [PrtSc] key to capture the whole screen, or [Alt][PrtSc] to capture the currently focused window.

After that, a window pops up which lets you save the screenshot as a file.
Just like Windows, cool!
 

StefanR5R

Elite Member
One user reported that he managed to complete such a task. But the task spent all of its time in the first "decoy" (IOW, it never got past the first of several stages of computation), therefore took a lot longer than it should have, and received very low credit once its result was reported. This is another hint that such tasks are faulty.

Windows hosts appear to be afflicted too.
 

biodoc

Diamond Member
In Mint, there is an app called "Screenshot" under Accessories with more features.
 

Markfw

Moderator Emeritus, Elite Member
Gaia@home finally has tasks available. (Apparently I've been waiting since Nov. 2019)


Calculating long-period comet orbits under simultaneous Galactic and stellar perturbations.
Link to more details here.

Linux only. CPU only. Points are insane.
Sounds like I may have a new project after the WCG challenge! Thanks!
 

Fardringle

Diamond Member
Gaia@home finally has tasks available. (Apparently I've been waiting since Nov. 2019)


Calculating long-period comet orbits under simultaneous Galactic and stellar perturbations.
Link to more details here.

Linux only. CPU only. Points are insane.
Thanks! I'll put my Linux VM(s) to work after the QuChemPedIA sprint ends. :)

Edit: It looks like task caching is not allowed. I connected two VMs just to get them attached to the project, and the server is only allowing one task per CPU core/thread at a time. The event log even says that the number of tasks per computer is limited. There aren't any user-configurable limits on the project site.
 

Fardringle

Diamond Member
Oct 23, 2000
9,188
753
126
Linux only. CPU only. Points are insane.

So I let my computer run the project for a little while just to see the results, and it appears that the reason for the "insane" points is that the points algorithm is not written properly: it is awarding credit equal to the "Run time (seconds)" column instead of the calculated "Credit" column.

Edit, a day later: After further analysis and running for a while longer, I can see that the project is not actually awarding credit from the run-time column, but it IS giving a lot more credit than what the credit column says.
 

emoga

Member
RakeSearch is back with a new app: "Joint search of ODLS9 with Gerasim project".

Here is the link to the news thread. Average runtime is 20.4 minutes according to the server.

Windows only. Points are NOT insane.
 

StefanR5R

Elite Member
QuChemPedIA
A day ago, the admin damotbe made another attempt to reconfigure the server so that inconclusive workunits are resent in a more timely fashion. Unfortunately, the result of the most recent change appears to be that hosts receive almost exclusively work from old workunits which have already failed multiple times and are unlikely to ever finish successfully. The hosts are therefore now returning mostly invalid and inconclusive results, apparently get marked as unreliable by the server, and are then denied any new work and fall idle (or have to switch to a fallback project).