
Recent Changes in projects


TennesseeTony

Elite Member
Aug 2, 2003
At the mid-day update, I have 4.25M on Einstein, so I thought maybe I would be in the top ten at Free-DC...uhm, nope, the top producer (from XtremeSystems), at the MID-day update, is nearly 1.5 BILLION. o_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_O
 

iwajabitw

Senior member
Aug 19, 2014
At the mid-day update, I have 4.25M on Einstein, so I thought maybe I would be in the top ten at Free-DC...uhm, nope, the top producer (from XtremeSystems), at the MID-day update, is nearly 1.5 BILLION. o_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_O
DAAYUMM!

Maybe it's just the users getting accounts corrected. I didn't crunch the 127mil overnight that posted to DC today.
 

StefanR5R

Diamond Member
Dec 10, 2016
(ninja'd by @emoga)
the top producer (from XtremeSystems), at the MID-day update, is nearly 1.5 BILLION.
This could be due to the GDPR change. The user may have produced continuously during the past days, but flipped the privacy switch only today. (If you are curious, you could investigate the task lists of his four computers.)
 

Orange Kid

Elite Member
Oct 9, 1999
Another project needs to be checked for stats export.

NumberFields@home: User consent required for stats export
If you want your stats exported, you will need to check the consent box on the project preferences page.

In a couple of days, the stats export mechanism will be changed, and if this consent is not given, then the default will be to NOT export your stats.

Sorry for the inconvenience, but this was necessary due to the recent GDPR regulations.
 

TennesseeTony

Elite Member
Aug 2, 2003
So all these points will be showing up again like Einstein? Would have been nice to have thought ahead on this; it's a good way to cheat on the Formula BOINC Marathon for 2019, lol.

EDIT:
Does this option affect team credits?
One way to find out:
NumberFields@home currently:
14 - TeAm AnandTech - 170,487

I have 7M points, and enabled reporting just today, so if it counts we should see a bit of a jump at the Marathon.
 

StefanR5R

Diamond Member
Dec 10, 2016
PrimeGrid:
This weekend, the 321 Sieve project has been restarted.
Like GCW Sieve, it is CPU-only; it does not use FMA/AVX and does not offer multithreading, but it does benefit from HyperThreading, a.k.a. SMT.
https://www.primegrid.com/forum_thread.php?id=8390
On January 10 Michael Goetz said:
On January 10 tng* said:
Would anyone from the project care to comment on why the 321 sieve is being restarted?
321 is currently sieved up to n=25M. The current leading edge is n=14.6M. We're going to sieve 25M < n < 50M. That would bring the 321 sieve up to the same point to which we've sieved the conjecture projects. The sieving should take several years, and we'd like to get the sieving done before LLR reaches 25M. 321, having just a single k, and the smallest of all possible k's, could advance very quickly if people concentrated on it.

Additionally, with GCW sieve likely to be ending sometime this year, we did want to have another CPU sieve project to take its place. It's a good time, therefore, to reopen 321 sieving.
On March 16 Michael Goetz said:
Before anyone asks, I'm guessing 321-Sieve will last about 2 years, with two big caveats:

1) This assumes I didn't make a huge error in the calculations and end up off by a factor of 10 or 100.

2) Even if I did it correctly, the calculations are based on a lot of rough estimates, so it's likely to be off by a factor of 2 or more. Realistically, I expect it to last somewhere between 1 and 5 years, with 2 years being the most likely.

GCW Sieve on the other hand could be stopped sometime "soon".
https://www.primegrid.com/forum_thread.php?id=7943
On February 18 JimB said:
As of the moment, I'm planning on starting 321 sieving right after the TdP, probably on March 1st, though I still have some work to do before that.

For GCW sieving there are eleven different b values. The optimal point is different for each of them, depending on how long it takes to sieve and how long it takes to run LLR on those bases.

Bases 13, 25, 55, 69 need a relatively significant amount of sieving.
Bases 29 and 49 need a bit more but are close to being stopped.
I stopped 47 and 101 a few days ago. 73, 109 and 116 were already stopped.

With just six bases still generating new sieving jobs, I don't expect 29 and 49 to last long. It all depends on how many people are still sieving GCW when that new 321 Sieve badge becomes available on March 1. My current thinking is that GCW might last until May, but I can't guarantee it. If you want to upgrade a GCW badge, don't wait.
 

StefanR5R

Diamond Member
Dec 10, 2016
Acoustics@home
No one running Acoustics for a 2nd week in a row? Hmm, no stats since April 1st, and I can't reach the site...
I found a SETI.Germany forum post referring to a boinc.berkeley.edu forum post referring to an unnamed Russian forum:
Vitalii Koshura said:
There are some issues with the university data center. It is currently unknown how long it will take to restore the server's availability. No worries, no data was lost. It's just this project's server, and probably some others that are hosted in the same university data center.
 

StefanR5R

Diamond Member
Dec 10, 2016
NumberFields@home:
I just now looked at this BOINC notice from April 6; that's three weeks after we ran the NumberFields sprint at Formula Boinc:
Eric Driver said:
GPU app status update
So there have been some new developments over the last week. It's both good and bad. [...]
Both good and bad indeed.

While developing the GPU application, Eric found optimizations for the CPU application. There is now a new CPU application version which performs 10 times better. (That's while drawing ~10 % more power, according to one commenter.) Anybody who runs NumberFields should receive this application automatically. Along with deploying the new version, Eric cut the credits/task by a factor of 10.
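If I'm doing the math right, those two changes should cancel out for a host's credit rate; a quick sketch with illustrative, made-up numbers (the baseline values are hypothetical, only the 10x factors come from the post):

```python
# Hypothetical baseline (numbers are made up for illustration):
old_tasks_per_hour = 1.0
old_credits_per_task = 100.0

# New CPU app: ~10x faster, credits per task cut by 10x.
new_tasks_per_hour = old_tasks_per_hour * 10
new_credits_per_task = old_credits_per_task / 10

old_rate = old_tasks_per_hour * old_credits_per_task   # credits/hour before
new_rate = new_tasks_per_hour * new_credits_per_task   # credits/hour after
print(new_rate == old_rate)  # True: the host's credit rate is unchanged
```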

Full post and several interesting follow-ups here: https://numberfields.asu.edu/NumberFields/forum_thread.php?id=366

And on March 22, one week after the FB sprint, the GPU application was released --- for now only in the Nvidia-Linux-64bit variant, because the AMD and Windows versions are still buggy. You need to enable "Use NVIDIA GPU" and "Run test applications" in the project preferences in order to run this application. Also, you need Nvidia driver 418.39 or later.
https://numberfields.asu.edu/NumberFields/forum_thread.php?id=362

If I understand correctly, Eric switched to CreditNew when he released the GPU application, then to fixed credit per task (despite varying processing time per task), and is now back to CreditNew again --- as far as I can tell from the forums and from host stats.

(Before the GPU application was released, NumberFields based credits on run time. That's about the worst option in general. And Eric was aware of that, to some degree.)
 

TennesseeTony

Elite Member
Aug 2, 2003
It has been a week or two since I read that; did he speed up the GPU app yet? He said that since he tweaked the CPU app, the GPU app was about the same speed as the new CPU app (edit: 2-3 times faster).
 

StefanR5R

Diamond Member
Dec 10, 2016
Eric Driver's Threadripper 2990WX is completing the tasks at 1,200 s average. If he is running 64 tasks at a time, that'd be 192 tasks per hour.

Azmodes reported on March 28:
GTX 1660 Ti: 239 s (30 tasks per hour)
GTX 1080 Ti: 306 s (24 tasks per hour)
GTX 1070 Ti: 337 s (21 tasks per hour)
GTX 980: 333 s (22 tasks per hour)
with each GPU running 2 tasks at a time. I don't know whether this was with the current v3.01 application from March 25 or the initial v3.00 application. However, the only difference between the two should be that v3.01 was fixed to spread tasks properly over all GPUs of multi-GPU hosts.
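For reference, the tasks-per-hour figures above follow directly from concurrency and average task duration; a quick sketch (durations as quoted above):

```python
# Throughput from concurrency and average task duration:
# tasks/hour = concurrent_tasks * 3600 / seconds_per_task
def tasks_per_hour(concurrent_tasks, seconds_per_task):
    return concurrent_tasks * 3600 / seconds_per_task

# Azmodes' GPUs, each running 2 tasks at a time:
for gpu, secs in [("GTX 1660 Ti", 239), ("GTX 1080 Ti", 306),
                  ("GTX 1070 Ti", 337), ("GTX 980", 333)]:
    print(gpu, round(tasks_per_hour(2, secs)))  # 30, 24, 21, 22

# Eric Driver's 2990WX, 64 tasks at ~1,200 s each:
print(round(tasks_per_hour(64, 1200)))  # 192
```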
 


TennesseeTony

Elite Member
Aug 2, 2003
Pretty much the end of this project, because they say the next step (10 space squares) would generate about 7M times more tasks than 9 space squares did, and they "...don't want to run an endless search without any results, because many other interesting and useful projects exist." They will have a small test batch though, to see if they get lucky and find any 10 space thingys.

Rake search of diagonal Latin squares: Future of the RakeSearch project
Dear folks!

Two days ago the project reached a milestone of 95% completion. As part of the current search, about 1,100,000 workunits remain to be processed. In the next few days, we plan to generate one or several batches of workunits for the new search, in the space of diagonal Latin squares of rank 10. Initially, tasks will be available only for the Linux x86-64 platform; if their processing is successful, the application for Windows will be released.
A few words about the new search. We expect that a typical task will process more squares in the same time (on average). The application for the new search implements some optimizations and will be significantly faster than the default application for the rank-9 search. Another interesting thing is the search space itself. We increase the square rank by only one step, from 9 to 10. Currently, workunit names use the format R9_<8 digits> (R9_022248939, for example), and the first digit of the tuple is always 0. But to name the workunits of the new search, if we tried to count all of them, we would need a format like _0000000000000001! (We don't know the number of workunits for a full search in rank-10 space precisely, but a rough estimate is about 160 million million workunits.) The current search comprises 23,000,000 workunits, but a full search of rank-10 space amounts to about 7,000,000 searches of rank 9! Of course, we cannot perform a search like this. Even with new Ryzens. :)
Also, today we do not know whether or not "permutational" orthogonal diagonal Latin squares of rank 10 exist.
For the reasons listed above, we plan to perform a search over only a tiny part of the entire search space - maybe 1 million workunits, maybe more - but we don't want to run an endless search without any results, because many other interesting and useful projects exist.

Thank you for your attention and participation!
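A quick sanity check of the announcement's numbers, taking the rough 160-million-million estimate at face value:

```python
# RakeSearch's rough estimates:
rank10_workunits = 160e12   # ~160 million million workunits (full rank-10 search)
rank9_workunits = 23e6      # 23,000,000 workunits (current rank-9 search)

ratio = rank10_workunits / rank9_workunits
print(round(ratio / 1e6))   # ~7 (million), matching the "7M times more" figure
```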
 

StefanR5R

Diamond Member
Dec 10, 2016
NumberFields@home:
In addition to the Linux-cuda30 application, Eric Driver now has a Linux-opencl_amd version out, and a Windows-opencl_nvidia version cross-compiled with MinGW. All of them are in beta status.
https://numberfields.asu.edu/NumberFields/apps.php
https://numberfields.asu.edu/NumberFields/forum_thread.php?id=375

Here are hosts of a user who is running Windows:
https://numberfields.asu.edu/NumberFields/hosts_user.php?userid=11683
At the moment, one host has 2 CPU tasks on a Haswell Xeon E3, which took ~13,000 seconds for ~90 credits.
The other host has >100 GPU tasks on a Titan X; they need ~1,400 seconds per 90 credits.
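From those figures, a rough per-task credits-per-hour comparison (ignoring how many tasks run concurrently on each device):

```python
# NumberFields: ~90 credits per task on both hosts (per-task rates only;
# per-device concurrency is not accounted for).
credits_per_task = 90
cpu_seconds = 13_000   # Haswell Xeon E3, per task
gpu_seconds = 1_400    # Titan X, per task

cpu_rate = credits_per_task / cpu_seconds * 3600   # credits per hour per task
gpu_rate = credits_per_task / gpu_seconds * 3600
print(round(cpu_rate), round(gpu_rate))  # 25 231
print(round(gpu_rate / cpu_rate, 1))     # ~9.3x per task
```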
 

emoga

Member
May 13, 2018
Just some differences in NumberFields credit. The top is a 2080 in Windows, the bottom a 1660 Ti in Linux.
[attached screenshot: valid tasks for emoga, 2019-05-28]
 

zzuupp

Lifer
Jul 6, 2008
TennesseeTony said:
Pretty much the end of this project, because they say the next step (10 space squares) would generate about 7M times more tasks than 9 space squares did [...] They will have a small test batch though, to see if they get lucky and find any 10 space thingys.

RakeSearch is on a new application for the next range. If you had been using the old optimized applications, you'll need to delete the app_info file; BOINC will then grab the latest standard version.

Also, the new applications are currently 64-bit only.
 
