
BOINC and a computer farm: which is better?

petrusbroder

Elite Member
Last Sunday I "transformed" 7 of my comps to BOINC after reaching my goal of 15K WUs in SETI@home classic - and now have 10 comps and 12 processors crunching BOINC.

For those with more experience in BOINC: which is better - dedicating a computer to, let's say, 2 projects (say 5 comps for SETI and LHC and 5 comps for Predictor and Einstein), or letting all comps crunch all 4 projects (and giving 25% of the resources to each project)? Does it matter? 😕 I have comps with Athlons (from Thunderbird to A64) and some with Pentiums (PIII to P4 with HT). I would be grateful for input! 😀
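For what it's worth, a minimal Python sketch of how BOINC-style resource shares turn into the percentages mentioned above (the share values are just illustrative - BOINC divides CPU time in proportion to each project's share of the total):

```python
# Illustrative resource shares; four equal shares give each project 25%.
shares = {"SETI": 100, "LHC": 100, "Predictor": 100, "Einstein": 100}

total = sum(shares.values())
for project, share in shares.items():
    print(f"{project}: {100 * share / total:.0f}% of CPU time")
```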
 
Good question! Giving it a bump because I am curious too.

Hmmm... I don't know about SETI and LHC together. Does SETI still have server problems? And doesn't LHC sometimes run out of WUs? If so, the computer might sit idle for a while.
I remember someone saying you could put an extra project on at 1% to keep the computer from sitting idle.

I would bet that a P4 with HT would do better with the right combination of projects. I know that Predictor slowed down Folding@Home more than Einstein and LHC did. They probably both wanted to use the same parts of the CPU.
 
On HT machines, memory is also going to be a consideration. CPDN and Rosetta both use over 200 MB. Predictor is at 50 MB or so. I think SETI and Einstein are the lowest at 15 and 8 MB...

So putting Rosetta and CPDN on the same machine may cause some problems if you only have 512 MB.
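To make that concrete, here is a small Python sketch that sums the rough per-task memory figures quoted above for the two tasks an HT machine runs at once (the figures are the rough ones from this thread, and the OS-overhead allowance is a hypothetical number, not anything from BOINC):

```python
from itertools import combinations

# Rough per-task memory use (MB) as quoted in this thread.
mem_mb = {"CPDN": 200, "Rosetta": 200, "Predictor": 50, "SETI": 15, "Einstein": 8}
ram_mb = 512          # total RAM on the box
os_overhead_mb = 128  # hypothetical allowance for the OS and other apps

# On a P4 with HT, BOINC runs two tasks at once, so check every pair.
for a, b in combinations(mem_mb, 2):
    needed = mem_mb[a] + mem_mb[b]
    fits = needed + os_overhead_mb <= ram_mb
    print(f"{a} + {b}: {needed} MB -> {'OK' if fits else 'tight on 512 MB'}")
```

On this rough math, only the Rosetta + CPDN pair pushes past 512 MB, which matches the concern above.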
 
Originally posted by: GLeeM
Good question! Giving it a bump because I am curious too.

Hmmm... I don't know about SETI and LHC together. Does SETI still have server problems? And doesn't LHC sometimes run out of WUs? If so, the computer might sit idle for a while.
I remember someone saying you could put an extra project on at 1% to keep the computer from sitting idle.

I would bet that a P4 with HT would do better with the right combination of projects. I know that Predictor slowed down Folding@Home more than Einstein and LHC did. They probably both wanted to use the same parts of the CPU.


LHC sometimes runs out of WUs. But SETI seems to have solved its server problems. They now have planned outages (Wednesdays for 3 hours), which takes care of a lot of problems ... I have had no problems for the last week ...
Right now I have all 4 projects on all machines - I'll run them this way for a month or so, then reconfigure and just pair off the projects - SETI and Einstein, Predictor and LHC; unless of course somebody has more info and better ideas 🙂
 
Originally posted by: mrwizer
On HT machines, memory is also going to be a consideration. CPDN and Rosetta both use over 200 MB. Predictor is at 50 MB or so. I think SETI and Einstein are the lowest at 15 and 8 MB...

So putting Rosetta and CPDN on the same machine may cause some problems if you only have 512 MB.


Good point - I have some comps with only 384 MBytes of RAM, so RAM is one limiting factor there. Thanks, mrwizer! 🙂
 
CPDN uses 65 MB of memory, and the Sulphur-cycle model needs 1.3 GB of disk space...
Not sure which of Predictor and LHC uses the most memory, and I can't currently check, but as I recall one is around 70 MB and the other 50 or something...
Einstein@home is below 10 MB.
SETI@home demands more than 64 MB of memory; it's been too long since I ran the non-optimized client, but AFAIK standard v4.18 uses 24 MB... Seti_enhanced uses 31 MB, but being beta this can change.


Since CPDN's Sulphur-cycle takes 2+ months to crunch and has only a 5-month deadline, using CPDN as a "backup project" isn't advisable. Also, CPDN doesn't normally need a backup project...
For both LHC@home and Predictor@home I would recommend using at least one backup project, and I wouldn't use either as the single backup for another project.
For SETI@home and Einstein@home you can normally cache enough work to ride out normal outages, but having a backup is still advisable.


CPDN especially likes a fast P4, since it is built with the Intel compiler, and needs very fast memory; an AMD64 is a good 2nd choice...
SETI likes a huge cache size and fast memory...
Einstein@home, on the other hand, likes a short pipeline and raw CPU speed and cares less about memory speed, so Athlons are probably the best choice while P4s give more mediocre performance...
Not sure about LHC@home or Predictor@home, but they probably lean more toward Einstein than SETI...


Anyway, if you've got a farm of same-type computers it's no problem: just run all projects on all of them with a short cache setting and whatever resource shares you want. Well, you possibly don't want to run CPDN on all of them...

With mixed types of computers, on the other hand, one computer can be excellent at one project but mediocre at another, so choosing the "best" setup can have a large impact on total crunching...
Of course, if you're only really interested in one project and add "backup projects" just to keep doing useful science instead of letting the CPU go idle during outages or lack of work, how good or bad a computer is at a project doesn't really matter.
 
Rattledagger, thanks for all the great info 🙂

@petrusbroder
LHC is using 40,752K on my system.

Einstein is using 6,460K.
 
Thanks, Rattledagger, for the thorough advice. I have different comps - so I will sit down and think ... For most of my comps memory is no big concern - they have either 1 GByte or 768 MByte RAM, but some have less... and hard disk space galore. 🙂
 
Except for CPDN, if you're not sure about the performance of the different projects on the different computers, just use a 0.1-day cache setting and test all projects with equal resource shares. Well, LHC@home is a little difficult to pin down due to the very variable crunch times between WUs, and SETI also has some variation due to angle range, but for Einstein and Predictor you'll have "stable" times after only 1 result.

After getting at least one result for each project, it should be easy to see that, say, on 3 GHz computer A SETI took 1h and Einstein 10h, while on 3 GHz computer B SETI took 2h and Einstein 5h. Based on this example, set A to "home" in all projects with SETI at resource share 10000 and the rest at 1, while on B set everything to "work" with Einstein at 10000 and the rest at 1...
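A minimal Python sketch of this matching idea, using the hypothetical A/B timings above (the greedy "fastest host wins" rule is one reading of this example, not an official BOINC feature - real shares are set per venue on each project's website):

```python
# Hypothetical per-host crunch times (hours per result), as measured
# after running one result of each project with equal resource shares.
times = {
    "A": {"SETI": 1.0, "Einstein": 10.0},   # 3 GHz computer A
    "B": {"SETI": 2.0, "Einstein": 5.0},    # 3 GHz computer B
}

# For each project, find the host with the shortest crunch time, then
# give that project a dominant resource share (10000 vs. 1) on that
# host - mirroring the "home"/"work" venue trick described above.
shares = {host: {proj: 1 for proj in projs} for host, projs in times.items()}
for project in {p for projs in times.values() for p in projs}:
    best_host = min(times, key=lambda h: times[h][project])
    shares[best_host][project] = 10000

for host, proj_shares in sorted(shares.items()):
    total = sum(proj_shares.values())
    for project, share in sorted(proj_shares.items()):
        print(f"{host}: {project} share={share} "
              f"(~{100 * share / total:.1f}% of CPU time)")
```

With these numbers, A ends up crunching almost only SETI and B almost only Einstein, which is exactly the pairing suggested above.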

Since a "backup-project" always will initially download atleast 1 wu, running all projects on all computers for a short time before grouping them together in best/worst shouldn't be a problem.

CPDN is a little more difficult: if you don't want to abort a bunch of WUs after a couple of time steps, testing the speed on all computers means being tied up running CPDN for 2+ months, at least for the Sulphur-cycle, so a good starting point is to only add CPDN to computers where SETI is also fast...


Since LHC is around 40 MB, that makes Predictor the highest of the released projects at around 70 MB, so memory requirements shouldn't normally be a problem for any of these projects. On disk, only CPDN needs lots of space...

Not sure about the requirements of all the different alpha/beta projects, but Rosetta, for example, will apparently demand more memory...



BTW, another problem with at least some of the projects acting as backups is that they're not keeping up with the server-side changes to BOINC, meaning you'll get a WU, but it can also be accompanied by a "don't bother asking me again for the next 24h-48h"...
 