Originally posted by: Insidious
I guess I'm not totally on board with the assumptions that lead to (hypothetically) 10-year work units.
This, IMO, is the crux of DC: designing a method which breaks a huge job into 100,000 small ones (as opposed to 1,000 medium ones).
Science is only hindered by too many naysayers about new ideas... too many assumptions of impossibility.
When I look at the work of the Folding@Home project, or the United Devices efforts... is 10 years really so preposterous? How long have they been crunching? Just think of the possibilities if this length of study had been assumed in the first place and shortcuts had not been taken to avoid a project that "took too long."
I just don't buy the notion that "it would be too hard without shortcuts"
Well, for one thing, if you're going to slice a CPDN model into smaller pieces, you'll need to make sure to eliminate any round-off differences between platforms. That alone will likely take some months to accomplish, and will possibly also give a slower-running model.
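To see why the round-off point isn't just pedantry: in a chaotic model, a one-bit difference between two platforms keeps growing until the runs disagree completely, so you can't hand a model's state from one machine to another mid-run unless the arithmetic is bit-identical everywhere. A toy illustration in Python (a simple chaotic map, nothing to do with CPDN's actual code):

```python
# Toy illustration (not CPDN code): chaotic amplification of round-off.
# Two "platforms" start from values that differ by a single double-precision ulp;
# after a few hundred steps of a chaotic map the trajectories no longer agree at all.
import math

def logistic(x, r=3.9):
    # logistic map in its chaotic regime
    return r * x * (1.0 - x)

a = 0.123456789
b = math.nextafter(a, 1.0)   # same number, off by one ulp (Python 3.9+)

for step in range(1, 501):
    a, b = logistic(a), logistic(b)
    if step % 100 == 0:
        print(f"step {step:3d}: difference = {abs(a - b):.3e}")

# Typical output: the difference grows from ~1e-17 to order one within a
# few hundred steps, i.e. the two runs disagree completely on the "weather".
```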
Once the round-off issue is sorted out, you'll need to buy more servers to handle the huge increase in upload/download data... and this with money you don't have...
As for the actual model: just like with Folding@home, where 1 finished 100-generation model or whatever isn't really usable on its own, you need to finish many models. Let's say 10k finished 160-year models, each split into 160 one-year pieces, giving a total of 1.6M pieces to hand out...
But hang on, we've overlooked a very important fact: you can't just hand out 1.6M pieces in parallel, since the weather in 1923 depends on the weather in 1922... Meaning you're down to 10k models again, with each piece using roughly 450 MB of memory and taking 1 month to crunch, which means you still need 160 months to get 1 model to finish...
OK, some computers are faster than 1 month per piece, and computers will also get faster and faster as things progress. But also remember there's a 200-year ocean spin-up at the base before the 160-year model can even start to run, and if you also need to increase the resolution of this model, it would maybe take 5 years or something just waiting for the spin-up to be ready before you can start handing out the main 160-year models...
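Just to put the rough numbers above in one place (the 10k models, 160 one-year pieces and 1 month per piece are the assumptions from this post, not official CPDN figures), a quick Python back-of-the-envelope:

```python
# Back-of-the-envelope check on the figures above
# (assumptions from this post, not CPDN's own numbers).
MODELS           = 10_000   # finished 160-year models wanted
YEARS_PER_MODEL  = 160      # one piece per model-year
MONTHS_PER_PIECE = 1        # crunch time for one 1-year piece on a typical host

pieces_total = MODELS * YEARS_PER_MODEL
print(f"pieces to hand out: {pieces_total:,}")        # 1,600,000

# Year N+1 needs the final state of year N, so the 160 pieces of one model
# form a sequential chain -- extra hosts only help across models, not within one.
wallclock_months = YEARS_PER_MODEL * MONTHS_PER_PIECE
print(f"wall-clock per model: {wallclock_months} months "
      f"(~{wallclock_months / 12:.1f} years)")        # 160 months, ~13.3 years

# And the 200-year ocean spin-up is another purely sequential chain that has to
# finish before any of the 10k main models can even be handed out.
```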
Could CPDN release a high-resolution model? Of course they can. But, if I'm not misremembering, there was a hint a long time ago that a high-resolution model uses 800 MB of memory or something, so how many users would you expect to run this high-resolution model?
Well, an indication is this: the BBC model uses roughly 75 MB and a 160-year model takes roughly 6 months, and it has 92k users with credit and 59k active... "Seasonal Attribution" uses 430 MB and a 1-year model takes roughly 1 month, and it has 867 users with credit and 592 active...
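Same figures as above, just laid side by side to make the memory-footprint vs. participation point:

```python
# The comparison above, laid out side by side (figures as quoted in this post).
#       name                 RAM (MB)  run length    runtime     credited  active
projects = [
    ("BBC model",               75,    "160 years",  "~6 months",  92_000, 59_000),
    ("Seasonal Attribution",   430,    "1 year",     "~1 month",      867,    592),
]

for name, ram_mb, length, runtime, credited, active in projects:
    print(f"{name:22s} {ram_mb:4d} MB  {length:>9s}  {runtime:>9s}  "
          f"{credited:6,d} credited / {active:6,d} active")

# Roughly a 100x drop in credited users going from a 75 MB model to a 430 MB one,
# which is exactly the worry for a hypothetical 800 MB high-resolution model.
print(f"participation ratio: ~{92_000 / 867:.0f}x")
```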
As for Folding@home, they've got 40+ upload/download servers, all AFAIK dual-CPU, some Xeons and some Opterons, with a total capacity of over 40 TB. As a comparison, CPDN has AFAIK 3 upload servers, but I'm not quite sure here, since I found this post from 2 months ago:
the only internet traffic should be the daily (80KB) trickles (every model-year; 25920 timesteps), and the uploads (5MB) after a model-decade (259200 timesteps).
So the experiment will be no more than 100 MB of uploads spread out over (at the fastest) a 3 month period. That's not including your initial BOINC/Experiment Manager (11MB) and experiment data (12MB) download.
Hmmm, I just realized we have 3TB of uploads coming back to a .5TB server! ;-)
(Carl Christensen)
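Those quoted figures roughly add up, and they show the scale of the server problem; the ~34,000-experiments number below is just 3 TB divided by the per-experiment total, my own division, not something from the quote:

```python
# Sanity-check on the traffic figures quoted above, per finished BBC experiment.
KB, MB, TB = 1, 1024, 1024**3          # work in kilobytes throughout

trickle_kb    = 80 * KB                # one trickle per model-year
upload_kb     = 5 * MB                 # one upload per model-decade
model_years   = 160
model_decades = model_years // 10

per_experiment_kb = model_years * trickle_kb + model_decades * upload_kb
print(f"uploads per finished experiment: ~{per_experiment_kb / MB:.1f} MB")
# ~92.5 MB, i.e. the quoted "no more than 100 MB"

# The server-side worry in the quote: 3 TB of uploads heading for a 0.5 TB server.
# 3 TB divided by ~92.5 MB is roughly 34,000 finished experiments' worth of data.
experiments = (3 * TB) / per_experiment_kb
print(f"3 TB corresponds to roughly {experiments:,.0f} finished experiments")
```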