Any interest in a SiDock challenge at BOINCStats?


StefanR5R

Elite Member
Dec 10, 2016
5,494
7,780
136
A few hours ago,
hoarfrost said:
Some computers may be without tasks; we are trying to increase the number of project service processes.
hoarfrost said:
We increased the number of feeder processes. The number of tasks in the "in progress" state has begun to increase.
hoarfrost said:
Now we are increasing the number of transitioner processes (maybe the next bottleneck).
(from the contest thread)

Let's see how effective this will be. If mass storage IOPS were the bottleneck, then there is a limit to what more server threads can accomplish.
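For context, the knobs the admins are turning live in the BOINC server's config.xml, where each daemon is declared and where some daemons (transitioner, validator, assimilator) can be sharded across several processes with the --mod flag. This fragment is a generic sketch based on the standard BOINC server layout, not SiDock's actual configuration:

```xml
<!-- Hypothetical excerpt of a BOINC project's config.xml.
     Two transitioner shards split the workunits by ID modulus;
     daemon names and flags are from the stock BOINC server tools. -->
<daemons>
  <daemon><cmd>feeder -d 3</cmd></daemon>
  <daemon><cmd>transitioner -d 3 --mod 2 0</cmd></daemon>
  <daemon><cmd>transitioner -d 3 --mod 2 1</cmd></daemon>
</daemons>
```

More shards only help while the daemons are CPU- or lock-bound; once the database or disk IOPS saturate, extra processes just queue up behind the same storage.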

- - - - - - - - - - - - edit - - - - - - - - - - - -

According to posts at the SG forum, BOINCStats began collecting race stats not at 00:00 UTC but at 02:15 UTC; in other words, the initial stats fetch was delayed. One suggested reason was that the project server wasn't very responsive during the first several hours of the contest. (Another edit: this has now been noted on the SiDock message board too.)
 
Last edited:

cellarnoise

Senior member
Mar 22, 2017
711
394
136
Well rather than fight this, I think we should all enjoy it. That is how I plan on looking at it anyway :)

Strategies on how to get work can be as much fun as gaming the system on the work end no? ;)

On some challenges they should choke the server with various performance parameters and make everyone figure out how to game the system to get work :) Seems to go against the main purpose of D.C., but driving humans a bit crazy and making humans think beyond the puters could be fun? Ha, Ha, ha!
 

crashtech

Lifer
Jan 4, 2013
10,523
2,111
146
I think what's typically happened is that individual participants' PC power has outstripped the project's ability to deal with it.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,540
14,494
136
I think what's typically happened is that individual participants' PC power has outstripped the project's ability to deal with it.
You mean like a 64-core EPYC 7742 overwhelming the server? Or even a 5950X?
 

Skillz

Senior member
Feb 14, 2014
925
948
136
Well rather than fight this, I think we should all enjoy it. That is how I plan on looking at it anyway :)

Strategies on how to get work can be as much fun as gaming the system on the work end no? ;)

On some challenges they should choke the server with various performance parameters and make everyone figure out how to game the system to get work :) Seems to go against the main purpose of D.C., but driving humans a bit crazy and making humans think beyond the puters could be fun? Ha, Ha, ha!

Some people just wanna watch the world burn. :oops:
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,234
3,818
75
Is it just me or are the tasks suddenly taking a lot longer to run?
 

StefanR5R

Elite Member
Dec 10, 2016
5,494
7,780
136
It's not just you. They re-enabled task generation for the Eprot_v1_run_2 batch in order to reduce server I/O load: post from hoarfrost

And as far as I can tell, work supply and file transfers are smooth now.
 

cellarnoise

Senior member
Mar 22, 2017
711
394
136
Some people just wanna watch the world burn. :oops:
I hope that my posts and yours are all in fun. I love Fight Club, but we should not mention it ;)

Stay clean and healthy, everyone!
The two bars of soap are on Skills. Cause edis on my fone...
 

Attachments

  • poster19945-1.jpg
    52.2 KB · Views: 4
Last edited:

StefanR5R

Elite Member
Dec 10, 2016
5,494
7,780
136
I for one would be interested to participate.
I sure had second thoughts about this contest, given that not long ago I argued myself that competing to receive work doesn't make much sense to me, in contrast to competing to get work done. Therefore I am glad that the SiDock admins have now fixed the work distribution, as far as I can tell.

server_status.php said:
Research progress
Target: corona_3CLpro_v3 (%) ....... 100.000
Target: corona_PLpro_v1 (%) ......... 100.000
Target: corona_PLpro_v2 (%) ......... 100.000
Target: corona_RdRp_v1 (%) .......... 100.000

Target: corona_Eprot_v1 (%) ............ 60.612
Target: corona_3CLpro_v4 (%) ......... 40.037
Target: corona_3CLpro_v5 (%) ......... 39.807
Target: corona_3CLpro_v6 (%) ......... 39.626
They wanted to prioritize the three 3CLpro targets over the now reactivated Eprot target, though. I'll keep an eye on what they do when the contest is over, and perhaps keep my computers at SiDock for a while rather than going right back to my current favorite project.
 

Kiska

Golden Member
Apr 4, 2012
1,010
290
136
server_status.php said:
Research progress
Target: corona_3CLpro_v3 (%) ....... 100.000
Target: corona_PLpro_v1 (%) ......... 100.000
Target: corona_PLpro_v2 (%) ......... 100.000
Target: corona_RdRp_v1 (%) .......... 100.000
Target: corona_Eprot_v1 (%) ............ 60.612
Target: corona_3CLpro_v4 (%) ......... 40.037
Target: corona_3CLpro_v5 (%) ......... 39.807
Target: corona_3CLpro_v6 (%) ......... 39.626
I would love to graph the percentages... but server_status.php?xml=1 does not expose this

They re-enabled task generation for the Eprot_v1_run_2 batch in order to reduce server I/O load:
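Since the XML status export omits the research-progress block, one workaround is to scrape the human-readable page. A minimal sketch; the fetch URL is an assumption, and the regex only relies on lines shaped like the quoted "Target: corona_Eprot_v1 (%) ... 60.612" block:

```python
import re

# Matches lines like "Target: corona_Eprot_v1 (%) ............ 60.612"
TARGET_RE = re.compile(r"Target:\s*(\S+)\s*\(%\)\D*([\d.]+)")

def parse_progress(page_text: str) -> dict:
    """Return {target_name: percent_complete} for every 'Target: ... (%)' line."""
    return {name: float(pct) for name, pct in TARGET_RE.findall(page_text)}

# To actually collect data points (hypothetical URL, standard library only):
# import urllib.request
# html = urllib.request.urlopen("https://www.sidock.si/sidock/server_status.php").read().decode()
# print(parse_progress(html))
```

Appending each sample to a CSV with a timestamp would give a series that's easy to graph.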
 

cellarnoise

Senior member
Mar 22, 2017
711
394
136
This SiDock challenge has been silly. Work, no work, work, and then they change the settings to make the work last longer so everyone doesn't have to fetch as much work.

I like that they are changing stuff up, but I have stopped and started crunching like 3 times based on this thread alone.

So I am happy and not :) And adding some now dusted off cores as it is cooler now! So it has been good for something!
 

lane42

Diamond Member
Sep 3, 2000
5,721
624
126
Rank  Team Name                Credit     -1:00      -2:00      -4:00      -8:00      -16:00     -32:00
 1    Planet 3DNow!            6,987,034  6,887,760  6,811,691  6,676,834  6,305,291  5,417,835  2,510,682
 2    TeAm AnandTech           3,628,060  3,577,212  3,525,515  3,476,059  3,322,952  2,895,431  1,543,842
 3    SETI.Germany             3,537,109  3,515,937  3,502,693  3,476,951  3,386,165  3,167,017  1,254,134
 4    Rechenkraft.net          2,718,695  2,692,348  2,674,689  2,637,676  2,430,962  2,152,880  1,046,303
 5    [H]ard|OCP               2,374,365  2,352,444  2,334,132  2,302,656  2,154,526  1,856,348    844,757
 6    SETI.USA                 2,096,417  2,067,369  2,049,278  2,025,637  1,893,015  1,692,442    764,431
 7    L'Alliance Francophone   1,191,660  1,165,059  1,148,769  1,122,126  1,046,692    850,806    322,688
 8    Ukraine                    989,281    980,383    973,915    962,056    898,813    731,698    296,601
 9    BOINC.Italy                923,896    914,238    893,055    883,103    821,857    690,750    265,716
10    Crystal Dream              487,899    483,486    478,675    473,279    448,662    403,781    172,883
Back in second :)
 

lane42

Diamond Member
Sep 3, 2000
5,721
624
126
(BOINCstats member breakdown for TeAm AnandTech; the header row, flag icons, and leading rank columns did not survive the copy-paste. Per-member figures, in page order:)

 1  xii5ku                       232,549 · 17,117 · 689,129 · 390,124 · 161,042 · 1,359,841 · 1,359,841 · 117,836 · 41,752 · 12,938,873
 2  crashtech                    187,480 · 34,198 · 426,751 · 209,018 · 94,810 · 851,149 · 851,149 · 74,678 · 24,580 · 6,787,383
 3  parsnip soup in a clay bowl  103,141 · 21,873 · 252,664 · 44,918 · 44,745 · 412,356 · 485,682 · 38,500 · 14,168 · 2,480,539
 4  biodoc                       93,393 · 11,038 · 184,861 · 95,272 · 42,702 · 392,310 · 392,310 · 34,505 · 11,071 · 5,538,278
 5  Icecold                      72,958 · 9,980 · 152,601 · 43,719 · 28,047 · 269,289 · 269,289 · 23,984 · 7,272 · 8,027,374
 6  Orange Kid                   65,601 · 8,486 · 328,523 · 1,269 · 74,485 · 74,485 · 6,928 · 329 · 2,588,112
 7  cellarnoise2                 40,772 · 6,834 · 24,552 · 6,955 · 7,618 · 94,096 · 110,735 · 8,361 · 2,591 · 852,659
 8  Fardringle                   25,961 · 2,544 · 55,688 · 5,243 · 9,891 · 95,196 · 95,196 · 8,480 · 2,564 · 2,602,826
 9  Sesson                       16,512 · 1,482 · 27,736 · 13,576 · 8,547 · 76,145 · 112,949 · 6,978 · 3,572 · 328,618
10  Lane42                       6,810 · 2,136 · 2,165 · 0 · 309 · 8,975 · 8,975 · 849 · 80 · 326,431
11  Skillz                       6,679 · 721 · 8,609 · 340 · 1,279 · 15,628 · 15,628 · 1,424 · 331 · 3,818,549
12  GLeeM                        5,083 · 2,635 · 5,960 · 5,705 · 8,311 · 53,150 · 146,974 · 6,492 · 5,255 · 146,974
13  Ken_g6                       4,887 · 571 · 5,286 · 4,497 · 1,398 · 14,670 · 14,670 · 1,289 · 362 · 100,566
14  hca                          3,611 · 1,067 · 2,563 · 598 · 1,598 · 12,023 · 37,272 · 1,811 · 1,247 · 451,032
15  Endgame124                   3,002 · 393 · 26,164 · 2,811 · 4,146 · 32,024 · 32,024 · 2,823 · 1,075 · 1,050,587
16  geecee                       1,592 · 0 · 0 · 326 · 369 · 4,088 · 22,772 · 688 · 784 · 201,936
17  Kiska                        191 · 191 · 0 · 0 · 0 · 191 · 191 · 19 · 0 · 144,896
 

StefanR5R

Elite Member
Dec 10, 2016
5,494
7,780
136
Browsing through my oldest validation-pending results, most of their wingmen computers have a large number of tasks in progress while having only moderate or small CPUs (Core i5, i7, Ryzen…). There is at least one user who lets the client report 1024 CPUs and takes on 2048 tasks in progress. (The latter required modifying the source code of the client; the SiDock server does not assign thousands of tasks in a single work request.)
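For reference, the stock BOINC client already has a documented way to misreport its CPU count: the `<ncpus>` option in cc_config.xml, which makes the client act as if it had N processors. The value below is purely illustrative; as noted above, values as extreme as 1024 apparently took a patched client:

```xml
<!-- cc_config.xml, placed in the BOINC data directory.
     <ncpus> makes the client report N CPUs to project schedulers;
     16 is an arbitrary example value, not a recommendation. -->
<cc_config>
  <options>
    <ncpus>16</ncpus>
  </options>
</cc_config>
```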

It should work out if these guys loaded their bunkers during periods when the short 3CLpro work was being issued. But if they have sizable amounts of the longer Eprot work loaded, they will have over-bunkered substantially.

I think that the bunkered tasks are only a rather small share of all tasks in progress, but they should still be enough for some swaps at the lower team ranks, as well as for bringing SETI.Germany back ahead of us. Edit: the latter is not based on analyzing any host records, just a guess based on SG's output earlier in the race.
 
Last edited:
  • Like
Reactions: Fardringle

lane42

Diamond Member
Sep 3, 2000
5,721
624
126
most of their wingmen computers have a large number of tasks in progress while having only moderate or small CPUs
Sounds like the old days of SETI, when guys had computers like this and downloaded thousands of workunits.

20 Dec 2008, 14:50:20 UTC
ID: 4015642
20 · 0.08 · 36,908
GenuineIntel x86 Family 6 Model 8 Stepping 10, 995MHz (2 processors)
Microsoft Windows XP Home Edition, Service Pack 1 (05.01.2600.00)
1 Mar 2008, 21:32:37 UTC
 

Fardringle

Diamond Member
Oct 23, 2000
9,187
753
126
This guy somehow has over 800 tasks in progress on a computer with a 6 core/12 thread Ryzen 3600 CPU. All of them were downloaded on 9/14, and still haven't been completed and reported. The computer is reporting some tasks, so it appears to be processing the work, but that's a lot of wingmen waiting for tasks to be validated...


Same situation with this one with more than 1100 tasks in progress on a Ryzen 5600, and only gradually reporting them.


The vast majority of my old tasks waiting for validation are being held up by those two computers. :(
 

cellarnoise

Senior member
Mar 22, 2017
711
394
136
For you old-timers on here: is this a new/old way of gaming the system?

Seems it would be hard to benefit from, though, unless you knew whose points you were holding up, or when.
 

crashtech

Lifer
Jan 4, 2013
10,523
2,111
146
They are just users who know enough to be dangerous. I think I am still nominally in that camp, though in the past couple of years I have learned to abort excess tasks, which, while not as good as not getting excess work in the first place, at least frees them up for immediate re-issue.
 

StefanR5R

Elite Member
Dec 10, 2016
5,494
7,780
136
Sounds like the old days of SETI, when guys had computers like this and downloaded thousands of workunits.
The reporting deadline at SETI@home was measured in months though; at SiDock it's just 6 days. :-)

Seems it would be hard to benefit from, though, unless you knew whose points you were holding up, or when.
Most certainly, they have these buffers for reasons other than that.

I saw a few such hosts which are currently reporting results. These could either be computers whose owners attempted to bunker during the 2 days between the announcement and the start of the contest, and then for some reason neglected to revert the 'bunker mode' settings after the start. Or, less likely, the owners wanted to have a work queue of more than 2/CPU for their normal crunching but went seriously overboard with their queue settings.

But the older ones of my pending results have wingmen hosts which have not yet reported a single result since they started downloading. The owners of these hosts certainly want to bunker until nearer the end of the contest.

Why bunker until close to the end? In the Pentathlon, one would do this primarily because teams need to split their resources between several contests, and you want to keep your competitors guessing how many of your own resources you have put here or there. In Formula BOINC, you would want to keep particularly those competitors guessing who have a habit of hopping between teams. In any contest, you would want to keep your competitors guessing if you are afraid that they pull in some reserves (e.g. call for help from friends, or spend big money in the cloud).

There are some paranoid people who have another hypothesis about hosts which download a lot of tasks but don't return any results anytime soon: that these tasks were never meant to be processed, so that the wingmen don't get their results validated (before the deadline of the captive tasks passes and replicas are sent by the project server and processed by third hosts). However, I for one don't subscribe to this hypothesis; such a scheme can't work as effectively as the paranoia would make it seem, IMO.

They are just users who know enough to be dangerous. I think I am still nominally in that camp, though in the past couple of years I have learned to abort excess tasks, which, while not as good as not getting excess work in the first place, at least frees them up for immediate re-issue.
Somehow I don't have high hopes for the responsibility of the owner(s) of hosts with 2048 tasks in progress. We'll see.
 

Fardringle

Diamond Member
Oct 23, 2000
9,187
753
126
Considering that these (my two examples) are not powerhouse computers, but that they are actively returning some completed tasks, I suspect the users just read somewhere online that setting the CPU count really high is the easiest way to bunker SiDock tasks. Which is true. But they set that number WAY too high and got far more tasks than their computers can finish before the deadline.