
Rant --> Mega Dumps

ZapZilla

Golden Member
Did the DPC's MEGA Dump cause the current block clog and possible block losses?

DNet did ask nicely and clearly for teams not to Mega Dump.

Everyone, especially team leaders, should be quite aware of DNet's
limitations (like 'em or not) and its no-Mega-Dump request.

Whether blocks are submitted as cracked, or Mega Dumped, the resulting stats
are the same in the long run.

In fact, if the DPC hadn't been saving for the Mega Dump, they would have
easily reached the 600 million block milestone first.

Team-wide Mega Dumps are irresponsible and bad form, so let's discourage and
de-glamorize the practice, eh?
 
The current d.net proxy problems have been around for a month or more now. I don't think the DPC mega dump caused it, as they would most likely flush through the "euro" proxies, and the problems have been with the "US" proxies.
 
The problem last time was a squeeze on bandwidth at their provider, making it impossible for the proxies to freely flush to the keyserver.

It's entirely possible that the same thing is happening again, but I haven't seen anything like that mentioned this time around...
 
As I understand it, Mega Dumps stress the entire DNet system from the Master Key Server on down the line, not just the proxies.

So the claim that, because the Euro proxies aren't clogged, a Euro Mega Dump doesn't affect the system seems flawed.

Are clogged proxies the cause of the reported block losses, or is it somewhere behind the proxies?

I could be way out of line here, but if DNet is already burdened with its growth rate and ever-increasing daily block submissions, how is it good to further burden the system with unnecessary massive block dumps?
 
Actually, according to the DPC (see my congrats thread), this wasn't some organized building up of nearly 10m blocks, just one guy trying to rally them into all dumping at once. Same effect, yes, but it wasn't premeditated (It'll be argued that they're lying, I'm sure. I won't get into that).
 
That was indeed the case. As ZapZilla pointed out, DPC could have just done more blocks each day and not dumped at all. However, we also publish daily rankings as well as overall rankings of the subteams, and a lot of them want to be in the top 3 often. That means they flush a small amount each day, or none at all, and save the rest for a bigger flush to take that day's crown.

The part about only Euro servers being clogged isn't really true, because some used a US server as well when the Euros slipped up.
 
I am very aware of the D.net limitations, and I know very well that D.net does not want us to have MegaFlushes, and therefore we did not organize them any more for RC5.

This was something (as has been pointed out earlier) that was not planned at all; just someone who got everyone motivated to flush whatever they had left in their buffers.

When the problems with the proxies showed up around 19:00 hours, I started warning people it could go wrong. First on IRC, later on the forums, but the whole backlog started to grow huge!

I am the owner of one of the public team proxies, and I closed my RC5 output then. It is still saving up the RC5 blocks until D.net's backlog is well under 5 million again. Until then, no RC5 output from my proxy.
 
I think that DNet should release new clients that are unable to ask for packets with fewer than 2^30 keys. That would partially solve the problem. Lots of people leave it at the default, so if that default were 2^31 it would help reduce the need for more bandwidth. Packets of more than 2^33 keys would also help. Do the p-proxies have statistics on the size of the packets they send?
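To put rough numbers on that proposal, here is a quick Python sketch of how many 2^28-key work units ("blocks") each proposed packet size holds. The function name is just for illustration; only the 2^28-keys-per-block figure comes from the thread.

```python
# One RC5-64 work unit ("block") is 2^28 keys, per the discussion above.
WORK_UNIT_KEYS = 2 ** 28

def work_units(packet_exp):
    """Number of 2^28-key work units in a packet of 2**packet_exp keys."""
    return 2 ** packet_exp // WORK_UNIT_KEYS

for exp in (28, 30, 31, 33):
    print(f"2^{exp} keys = {work_units(exp)} work unit(s)")
# -> 2^28 keys = 1, 2^30 = 4, 2^31 = 8, 2^33 = 32 work units
```

So a client fetching 2^31-key packets makes 8x fewer network round-trips than one fetching single 2^28 blocks for the same amount of work, which is where the bandwidth saving comes from.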
 
That's a great idea DKappos! The only problem is that the darn randoms are 2^28 in size (= 1 work unit); 2^30 is 4 work units. The keyspace will continue to fragment because existing clients will generate their own randoms of 2^28 if they run out of buffers.

I read once that the 2^28 size was selected for the speed of the typical cracking machine at the time. Obviously the typical machine today is much faster than that. The average OGR work unit is MUCH larger, taking around a day on a typical machine of, say, 400 MHz.

I think that a little tweaking of the clients may be in order, and we will certainly see some different features for the next Dnet contest after RC5. Checkpoints selected by default, larger packet sizes by default, and larger random packet sizes should be included.
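For a back-of-the-envelope feel for those packet sizes, here is a small sketch. The ~1.5 million keys/second figure for a ~400 MHz machine is an assumption for illustration only, not a measured rate from the thread.

```python
# Hypothetical keyrate for a ~400 MHz machine of the era (assumption).
ASSUMED_KEYRATE = 1.5e6  # keys per second

def crack_minutes(packet_exp):
    """Approximate minutes to crack a packet of 2**packet_exp keys."""
    return 2 ** packet_exp / ASSUMED_KEYRATE / 60

for exp in (28, 30, 33):
    print(f"2^{exp} packet: ~{crack_minutes(exp):.0f} minutes")
# At this assumed rate, a single 2^28 block takes about 3 minutes,
# while a 2^33 packet takes about an hour and a half.
```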

/me knows above is confusing. I'm always confused 😕

viz
 
I think that it is time to eliminate randoms. While I don't know the reason they were initially included, it might have been to allow for communication blocks/outages. If so, then they have outlived their usefulness. I think that DNet should announce a "warning period" and stop accepting 2^28 blocks - all of them, random and not random. They could stop issuing them first. Make 2^29 the minimum issued size. The proxy could take 2^28 requests and change them into 1/2 the number of 2^29 blocks. After some time of this, they could then refuse the 2^28 blocks.
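The proxy-side translation suggested above can be sketched in a few lines. This is a hypothetical illustration of the idea, not actual d.net proxy code, and the rounding choice is my own assumption.

```python
# Sketch of the proposal: a client asking for N blocks of 2^28 keys is
# instead handed blocks of 2^29 keys covering at least as much work.

def translate_request(num_blocks_2_28):
    """Convert a request for 2^28-key blocks into 2^29-key blocks."""
    # Round up so an odd-sized request still gets enough work.
    num_2_29 = (num_blocks_2_28 + 1) // 2
    total_keys = num_2_29 * 2 ** 29
    return num_2_29, total_keys

blocks, keys = translate_request(10)
print(blocks, keys)  # 5 blocks of 2^29 keys = the same 10 * 2^28 keys
```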
 
I'm not sure that would be possible (GREAT idea, though). The way I understand it, we would not be able to check the entire keyspace without including the smaller packets.

It has something to do with fragmentation, but somebody a lot smarter than me is going to have to explain it.

Russ, NCNE
 
It has to do with packets sent out to be cracked but never returned. We're too deep into it to stop accepting ^28s now, but maybe next time around 🙂 (or they can start re-sending the unreturned ones now, get them out of the way, then stop). There are a lot of areas in the "done" keyspace with holes of just a couple of missing ^28 blocks; refusing those blocks when a random covering them may already be done, or when they have to be reissued, would leave them unchecked, which is, of course, a bad thing.

The reason randoms were allowed was that when the client isn't able to get more blocks, instead of doing absolutely NOTHING, it cracks a random range of keys until it can get real work, giving it a good chance of getting SOME work done anyway. As we get further into the keyspace, the odds of that random not duplicating someone else's work go down, though. But it at least stands a chance of getting something done where otherwise it'd act like a S@H client without an internet connection (S@H can't do randoms, for obvious reasons), not doing anything productive.

-edit because I can't spell.-
 
Possible suggestions:

- for future clients, make the default fetch threshold 32 instead of 24
- make the default packet size ^33 (32 blocks) instead of ^28 (1 block)
- if randoms are still allowed, make them a bit bigger, say 2 or 4 blocks (a random takes under 2 minutes on current machines!)
- enable checkpoints by default, and let the user set the checkpoint time/percentage (currently: 10 mins or 10% of the packet, whichever comes first) or disable it.
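The checkpoint rule quoted in the last suggestion ("10 mins or 10% of packet, whichever comes first") can be written as a tiny decision function. This is a sketch of the stated rule, not actual client code, and the names are hypothetical.

```python
# Checkpoint rule: save state after 10 minutes OR 10% of the packet,
# whichever threshold is reached first.
CHECKPOINT_SECONDS = 10 * 60
CHECKPOINT_FRACTION = 0.10

def should_checkpoint(seconds_since_last, fraction_done_since_last):
    """True when either the time or the progress threshold is reached."""
    return (seconds_since_last >= CHECKPOINT_SECONDS
            or fraction_done_since_last >= CHECKPOINT_FRACTION)

print(should_checkpoint(650, 0.02))  # past 10 minutes -> True
print(should_checkpoint(120, 0.12))  # past 10% of the packet -> True
print(should_checkpoint(120, 0.02))  # neither threshold -> False
```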

Rendus: the keyspace is divided into subspaces; it is not monolithic. So the fact that we are approximately 40% done with the project DOESN'T MEAN that 40% of randoms are invalid. The validity of randoms can range from 0 to 100% depending on where we are in the current subspace. If the client is not aware that we changed subspaces (which is common, since it's offline), it will crack 100% invalid blocks. This explains why we often get 80% invalid randoms rather than 40%. Also, randoms on a sneakernet cow are an awful thing, typically 100% invalid.
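That subspace point can be illustrated with a toy model: the expected invalid fraction tracks progress through the *current* subspace, and jumps to 100% for a client stuck on a finished subspace. The function and its numbers are made up for illustration, not d.net's actual accounting.

```python
# Toy model of random-block validity under the subspace scheme above.

def expected_invalid_fraction(subspace_done, client_knows_subspace=True):
    """Fraction of random blocks expected to duplicate finished work."""
    if not client_knows_subspace:
        # An offline client still cracking an already-finished subspace
        # produces nothing but duplicates.
        return 1.0
    return subspace_done

# Project ~40% done overall, but current subspace 80% done:
print(expected_invalid_fraction(0.80))         # -> 0.8 (80% invalid)
# Stale client on an old subspace (the "sneakernet cow" case):
print(expected_invalid_fraction(0.10, False))  # -> 1.0 (100% invalid)
```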
 
As I have gotten into RC5 I have questioned the value of Mega Dumps. I've wondered whether they have contributed to the loss of blocks. I think they have! So, how do we stop them?


1. Do not produce individual daily stats, to eliminate the incentive to be "1st in the Dailies"
2. Continue to produce Team Dailies to foster team competition
3. Update individual overall stats on a daily basis so that one has a long-term goal
4. Produce weekly individual stats based on daily average output to foster short-term competition
5. Let it be known that "Mega Flushing" is not acceptable behavior; use the trout on any offenders!


My $.02 worth. 🙂😀

 
One of the things that we are overlooking here is the incredible pace at which production has increased. Last July(ish) TA had about 280 million blocks done for a year and a half's work. We just passed 600M about six months later. DPC had maybe 100 million blocks done at that time and they just passed 600M. Where will we be by July this year, 1000M blocks? Most likely, even with a slower growth rate than now. CPU speed increases alone will probably lift the rate enough to do that. If we keep growing to where we need to be to win back the lead, we will be around that mark by May!

In other words, we will equal the output of the first 18 months of TA in around 3 months!

The Russian Team and Hackzone regularly do 1M blocks a day, DPC 2.5M, and TA 1.8M and growing.
Less than a year ago you could have had a Megaflush with TA's current daily output. A month or two back, we flushed about 3M in a day and a few raced in and branded it a Megaflush. At the time I pointed out it wasn't so, just an indication of the daily output we needed to maintain our position at the top. I stand by that: we need 3M a day of output.

The problem is that Dnet this time round hasn't been able to scale quickly enough with the non-linear growth in output. Will they be able to pull back the lag and meet the future explosion? The previous project histories have shown the same production trends, and I'm sure a couple of our own stats geniuses could look at the published data and graph the comparative trends. I'm sure that Dnet is aware of the scaling difficulty it is facing, and I recall some positive comments from them about being in a better position to deal with these issues when they took up their new paid jobs.

That 10M block flush was no Megaflush. It represents four days of team output! Yeah, yikes!!
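The "four days of team output" figure checks out against the daily rates quoted above, as a quick sanity check:

```python
# Sanity-checking the claim: 10M blocks at DPC's quoted ~2.5M blocks/day.
flush_blocks = 10_000_000
daily_output = 2_500_000  # DPC's daily rate mentioned earlier in the thread

print(flush_blocks / daily_output)  # -> 4.0 days of normal output
```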

Now what happens when the Russians get all fired up and decide to take on the western crackers en masse?
More of the same?
What about the Taiwanese? Be afraid, be very afraid!! (Think of the crackracks that half a dozen Taiwanese factory teams could run)

PS Maybe things would run smoother if Dnet ran one or two projects at a time, not an ambitious three! Clean up OGR-24 and OGR-25 completely, hold off for about six to ten weeks, or preferably until the system can take the strain, run OGR-26 to completion, wait six weeks, etc. Seems like better science to me anyway, even if it doesn't help that much.
 
Initially, I was against all Mega Dumping on sportsmanship principles. Then, not only because the DPC seemed to have so much fun Mega Dumping, but also because of the added fun that unpredictability introduced into the stats, I changed my mind in favor of the fun.

I now think that if DNet could handle the loads, and as long as blocks weren't saved beyond keyspace reissuing times, and you have fun doing it, then by all means do it.

Unfortunately, DNet's current limitations and request don't favor that.

Also, I think that team-wide Mega Dumping needs to be distinguished from an individual's mega dumping (or just dumping). Are there currently adequate reasons to discourage an individual's (comparatively much, much smaller) dumping?

JWMiddleton: The incentive to be as high as possible in the individual daily stats is a huge assimilation motivating force. I believe eliminating individual stats would be a mistake and do more harm to projects than good.

Dantoo: Excellent point about DNet's growth and future demand needs.

DNet's new stats box (going online soon) is a quad Xeon 500 with a
gig of RAM and a boatload of SCSI drives that will help greatly, but will that be enough? What else will DNet need to improve or upgrade in its system to keep pace?

DNet's cut of the $10,000 RC5 prize is only $2,000. Anyone know how much money DNet has invested in the RC5 project? Or has everything been donated? If so, what is the total value of the equipment and monthly operating expenses now sunk into DNet's projects?

Thanks to everyone for the great discussion!
 