
The Raspberry Pi Thread


Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
15,060
2,037
55
Endgame124 said:
If Rosetta is going to start assigning 2GB tasks regularly, I may have to start having 2 Rosetta tasks and 2 WCG tasks. That seems to annoy BOINC, though, based on some testing with a 2GB Pi.
Maybe you need another instance...
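Something like this should start a second client alongside the first - a minimal sketch, where the data directory and RPC port are just example values:

# Give the second client its own data directory and GUI RPC port
mkdir -p /home/pi/boinc2
boinc --dir /home/pi/boinc2 --gui_rpc_port 31418 --daemon

# Then point boinccmd (or the manager) at that port, e.g.:
boinccmd --host localhost:31418 --get_state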
 

StefanR5R

Diamond Member
Dec 10, 2016
3,569
3,843
106
With just 4 narrow cores per host, and not much RAM either (considering Rosetta's needs), it may be worthwhile to stick with 1 client instance per host. E.g. set the global CPU percentage to 50%, define WCG applications as 0.01% CPU users, set WCG's project_max_concurrent to 2, and use WCG's web config to limit its number of tasks in progress so as not to end up with a weeks-deep work buffer.

(That's assuming the BOINC annoyance which @Endgame124 referred to was about the client's local scheduling decisions.)
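Roughly, the WCG side of that could go into an app_config.xml in the World Community Grid project directory - a minimal sketch, where the app name "mcm1" is only an example (real WCG app names vary), and "0.01% CPU" translates to avg_ncpus 0.0001:

<app_config>
    <!-- run at most 2 WCG tasks at once -->
    <project_max_concurrent>2</project_max_concurrent>
    <!-- count each WCG task as a near-zero CPU user -->
    <app_version>
        <app_name>mcm1</app_name>
        <avg_ncpus>0.0001</avg_ncpus>
    </app_version>
</app_config>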
 

Endgame124

Senior member
Feb 11, 2008
353
207
116
StefanR5R said:
With just 4 narrow cores per host, and not much RAM either (considering Rosetta's needs), it may be worthwhile to stick with 1 client instance per host. E.g. set the global CPU percentage to 50%, define WCG applications as 0.01% CPU users, set WCG's project_max_concurrent to 2, and use WCG's web config to limit its number of tasks in progress so as not to end up with a weeks-deep work buffer.

(That's assuming the BOINC annoyance which @Endgame124 referred to was about the client's local scheduling decisions.)
I thought it was the local scheduler that was the problem when I wrote the post - my 2GB Pi was set via app_config.xml to run 2 Rosetta tasks and 2 WCG tasks, but only WCG was running. Turns out, the problem was that Rosetta had run out of tasks. I see that Rosetta has started sending tasks out again, so I'll check on the host in the morning and see what it's doing.
 

Endgame124

Senior member
Feb 11, 2008
353
207
116
I moved all of my Pis to using swap over NFS via a loop device, thinking this has got to be a better solution than swap on a local micro SD card. That said, I have one of them that is trying to swap over NFS and it keeps hanging, so it really may not be the best idea ever. Perhaps I need to see if I can mount it in a different way...
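Roughly, the loop-device setup looks like this (server name and paths are examples; the loop device is needed because the kernel won't swap directly to a file sitting on NFS):

# Mount the NFS export
sudo mkdir -p /mnt/piswap
sudo mount -t nfs nas:/export/piswap /mnt/piswap

# Create a 1GB file to back the swap
sudo dd if=/dev/zero of=/mnt/piswap/pi1.swap bs=1M count=1024

# Attach a loop device, then format and enable swap on it
sudo losetup /dev/loop0 /mnt/piswap/pi1.swap
sudo mkswap /dev/loop0
sudo swapon /dev/loop0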

Since I had to reboot all 8 Pis, and I had a few minutes to do some testing, I switched all 8 to maximum power-saving mode - over_voltage -2, clock speed 1525 MHz. Combined, all 8 Pis pull 36 watts, including losses from the power supplies (based on the touch test, the Anker power supply is more efficient). That makes them 4.5 watts each, which isn't as good as I was thinking it would be from testing with a single Pi - it's about 244 credits/watt in Rosetta. Perhaps a single large system really is the way to go - similar performance per watt with substantially less system management.
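For reference, in config.txt terms that profile is just the two lines below (assuming the settings map to the standard over_voltage / arm_freq parameters):

# /boot/config.txt - maximum power-saving profile
over_voltage=-2
arm_freq=1525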
 

Endgame124

Senior member
Feb 11, 2008
353
207
116
A few more power-saving options for /boot/config.txt that I've found digging around the official Pi forum.

#Disable Ethernet LEDs
dtparam=eth_led0=4
dtparam=eth_led1=4

#Disable DRM VC4 V3D driver on top of the dispmanx display stack (comment out the existing line)
#dtoverlay=vc4-fkms-v3d

#Disable HDMI framebuffer (does not appear to be covered by tvservice --off)
max_framebuffers=0

#Disable audio (minimal power saving, but frees up memory)
dtparam=audio=off
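A couple of runtime equivalents, handy for testing before committing anything to config.txt (these are the sysfs paths as I understand them, and they revert on reboot):

# Turn off HDMI output until next reboot
tvservice --off

# Turn off the activity and power LEDs
echo none | sudo tee /sys/class/leds/led0/trigger
echo 0 | sudo tee /sys/class/leds/led0/brightness
echo 0 | sudo tee /sys/class/leds/led1/brightness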
 

Endgame124

Senior member
Feb 11, 2008
353
207
116
I picked up a power meter (this one, to be specific: https://poniie.com/products/17), and after playing with it a little bit, I have these observations.

1) Anker customer service stated its 6-port power supply is 85% efficient at any load. This appears to be incorrect: one Pi 4 4GB attached to the charger draws 5.43 watts peak running Rosetta, while 5 Pi 4 4GBs plugged into the same charger draw 24.79 watts peak (4.958 watts each), so the efficiency clearly varies with load.

2) Rosetta generates a dramatically more variable load than WCG OP. Rosetta will vary by more than 0.75 watts with one Pi 4, while WCG OP varies by less than 0.1 watts.

3) Under load, multiple tweaks seem to make minimal to no difference in power utilization. Disabling USB, HDMI, and audio has no effect. Turning off the LEDs and slowing Ethernet to 100 Mbit seem to be the most reliable ways to reduce power (sketch below).

4) The lowest observed power draw at the meter is 4.516 watts per Pi (22.58 watts total using the 5-Pi setup). This is not a great showing for efficiency - that's 311 Rosetta RAC per watt at the current point values, or 243.58 RAC per watt on my values from April - which means there are certainly AMD processors that are more efficient per watt.
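The Ethernet slowdown can be done at runtime with ethtool - something like this, assuming the interface is eth0:

sudo ethtool -s eth0 speed 100 duplex full autoneg on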
 

StefanR5R

Diamond Member
Dec 10, 2016
3,569
3,843
106
Endgame124 said:
I picked up a power meter (this one, to be specific: https://poniie.com/products/17),
I have not done any research on this particular power meter, just looked at your link. The vendor's only statement about the accuracy of this device is that it adheres to Class 1.0. This is probably referring to either ANSI C12.1 or IEC 62053-21. I don't own either specification, but a quick web search indicates that IEC 62053-21 defines Accuracy Class 1.0 to give at most 1% error of measurement at the top end of the meter's measuring range (and at power factor 1.0, and within a certain temperature range). The error of measurement may be larger below the upper limit of the measuring range, at a power factor of less than 1.0, or of course outside the given temperature range. It is unknown to me whether ANSI or IEC define further limits to the error of measurement at other points within the measuring range.
 

Endgame124

Senior member
Feb 11, 2008
353
207
116
StefanR5R said:
I have not done any research on this particular power meter, just looked at your link. The vendor's only statement about the accuracy of this device is that it adheres to Class 1.0. This is probably referring to either ANSI C12.1 or IEC 62053-21. I don't own either specification, but a quick web search indicates that IEC 62053-21 defines Accuracy Class 1.0 to give at most 1% error of measurement at the top end of the meter's measuring range (and at power factor 1.0, and within a certain temperature range). The error of measurement may be larger below the upper limit of the measuring range, at a power factor of less than 1.0, or of course outside the given temperature range. It is unknown to me whether ANSI or IEC define further limits to the error of measurement at other points within the measuring range.
I went with it because it was the "Amazon Recommended" alternative to the Kill A Watt, which was over twice the price. On the plus side, it's getting a similar measurement to my APC UPS - around 4.5 watts per 4GB Pi when running 5 of them. Two different devices giving a similar value probably provides an acceptable level of confidence in the power usage.
 

StefanR5R

Diamond Member
Dec 10, 2016
3,569
3,843
106
There are actually corporations which need large clusters of ARM nodes. They tend to power them over PoE:
https://www.servethehome.com/aoa-analysis-marvell-thunderx2-equals-190-raspberry-pi-4/
(from August 2019)

They need these clusters not for computation, but as test beds for software development. Hence, "bang for the buck" plays a somewhat lesser role than for us DCers.
Patrick Kennedy said:
I have seen a cluster with several dozen nodes and individual USB power inputs, and it looked like a rat’s nest in comparison.

(I found this via an article from today which briefly discusses a now abandoned plan to build an RPi 4 cluster:
https://www.servethehome.com/introducing-project-tinyminimicro-home-lab-revolution/
They found stacks of SFF corporate desktops easier to handle as the basis for a cluster of small nodes.)
 

Endgame124

Senior member
Feb 11, 2008
353
207
116
Using a mix of low-cost desktop hardware sounds like the worst of all possible worlds.

You don't have a unified software, management, and configuration stack - each vendor (and likely each model) is likely to require different management. It's also quite a bit more expensive than Pis (I think the author overestimates the price by around $50 per Pi), and without a compelling work-per-watt angle.

If power is less of a concern, likely the best approach would be to build as many 3900X systems as needed, which I believe is the best price-per-core option, and has solid power efficiency to boot.

Edit:
Just read the ARM server article. That is pretty awesome, but the up-front cost makes it questionable compared against EPYC hardware.
 

StefanR5R

Diamond Member
Dec 10, 2016
3,569
3,843
106
About this SFF desktop cluster, going somewhat off-topic from your thread: these have laptop processors and laptop-style mainboards, which should make them comparatively efficient both under load and at idle. The low core count per node is a big drawback, though, when this is considered for computational purposes.
They purchased different ones only in order to be able to make recommendations to their customers and readers; for a production deployment you'd surely want a uniform cluster.
I notice that all their photographs of those neat stacks of SFF computers were taken without any cables, notably without the tangled web of external power supplies. (On the other hand, one of the more realistic reasons to repurpose SFF desktops for server-like duties is "edge computing", i.e. having the nodes placed in remote locations, not actually concentrated in a single lab.)
 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
20,570
8,416
136
@Endgame124 , do you have Newegg or Amazon links to the best 4-gig Pis with the power supplies and SD cards required to run BOINC? And any additional hardware I forgot to ask about? I am looking at 12-18 Pis total.

What do you think of this one?
 

Endgame124

Senior member
Feb 11, 2008
353
207
116
Markfw said:
@Endgame124 , do you have Newegg or Amazon links to the best 4-gig Pis with the power supplies and SD cards required to run BOINC? And any additional hardware I forgot to ask about? I am looking at 12-18 Pis total.

What do you think of this one?
While I replied in the other thread, I thought I would also reply here for anyone who reads this later. Additionally, I can come back to this later when adjusting earlier posts in the thread.

Q) Do I have a recommended single-Pi starter kit, specific to distributed computing?

A) No - most of the starter kits are not designed to run at 100% CPU 24x7x365. If I were to recommend a "starter kit", I would create my own using these components:

Pi: 4GB Pi 4 (board only) (https://www.amazon.com/Raspberry-Model-2019-Quad-Bluetooth/dp/B07TC2BK1X/)

Case: Flirc (https://www.amazon.com/Flirc-Raspberry-Pi-Case-Silver/dp/B07WG4DW52?ref_=ast_sto_dp)

SD Card: 32 GB Samsung Select (https://www.amazon.com/dp/B06XWN9Q99/ref=twister_B08D89LRCB?_encoding=UTF8&psc=1)

Power Supply: Official Raspberry Pi 4 Power Supply
(https://www.amazon.com/Raspberry-Model-Official-SC0218-Accessory/dp/B07W8XHMJZ/ref=sr_1_5?dchild=1&keywords=raspberry+pi+charger&qid=1600739788&s=electronics&sr=1-5)
 

pauldun170

Diamond Member
Sep 26, 2011
6,862
1,980
136
Markfw said:
@Endgame124 , do you have Newegg or Amazon links to the best 4-gig Pis with the power supplies and SD cards required to run BOINC? And any additional hardware I forgot to ask about? I am looking at 12-18 Pis total.

What do you think of this one?
I'm sure you'll get solid advice, but I'll toss out some alternatives.
In addition to the Flirc and the official power supply (which I own),
I also use the cheapo Micro Connectors cases (with fan) from Microcenter,
along with aftermarket power supplies that include switches.
I actually prefer the aftermarket PS due to the power button.

Those cases come with fans and heatsinks, and I have the fans on the 3V pin so they are silent.

The Flirc case is great... until it's summer and it's in a hot room. Then they do get pretty hot.
The Micro Connectors case with the fan on low keeps the temps in the 40C-55C range.

A lot of sites recommend the Argon One Raspberry Pi 4 case, and on paper it looks excellent. HOWEVER, I do see a lot of complaints about poor build quality.
I've been tempted to order one for a while now, but comments on real-world usage hold me back. Fortunately, the Flirc and Micro Connectors cases have been excellent, and aside from the annoying connector situation I have no issues.

This case is kinda interesting if not over the top


For your BOINC project:
How about a stackable setup? Each with fan and heatsinks.

As for the SD card:
Here ya go.
All will work fine, so pick the fastest one you are willing to pay for.

Finally, Microcenter sometimes has pretty good sales.
 

Endgame124

Senior member
Feb 11, 2008
353
207
116
pauldun170 said:
I'm sure you'll get solid advice, but I'll toss out some alternatives.
In addition to the Flirc and the official power supply (which I own),
I also use the cheapo Micro Connectors cases (with fan) from Microcenter,
along with aftermarket power supplies that include switches.
I actually prefer the aftermarket PS due to the power button.

Those cases come with fans and heatsinks, and I have the fans on the 3V pin so they are silent.
The biggest issue with those little fans when running distributed computing is that they will be running 24x7x365. My experience with those little fans is that they start making grinding noises within 3 months, and fail completely within 6-9 months. I find that a passive cooling case is going to be the longer-lived solution, and worst case you can always point a 120mm fan at it.

pauldun170 said:
The Flirc case is great... until it's summer and it's in a hot room. Then they do get pretty hot.
The Micro Connectors case with the fan on low keeps the temps in the 40C-55C range.
The Flirc case temp depends on ambient, of course, but even in a room with a high ambient temp (80F) I've not seen a Pi 4 in one thermally throttle.

My temp testing with a Flirc case with a stock Pi 4 inside, in a 68F basement:

w/ plastic top on: 58C
w/ plastic top off: 55C
w/ plastic top off + heatsink sitting on top of case: 51C
w/ plastic top off + heatsink sitting on top of case + 60mm fan blowing on it: 35C


pauldun170 said:
This case is kinda interesting if not over the top
That's one I hadn't seen before. Very interesting, if not terribly cost effective for a whole cluster of BOINC boxes.

Another completely over the top option:

pauldun170 said:
For your BOINC project:
How about a stackable setup? Each with fan and heatsinks.
The multi-layer setups are probably the ideal option, but I'm still concerned about the longevity of the little fans with that setup. I'm leaning toward buying one of these to try out:


pauldun170 said:
As for the SD card:
Here ya go.
All will work fine, so pick the fastest one you are willing to pay for.
Excellent article, and one I've seen before. It was part of my motivation to store all the BOINC project data on my NAS (mount sketch at the end of this post). I'll make sure to include this link in the first post.

pauldun170 said:
Finally, Microcenter sometimes has pretty good sales.
Yep - Microcenter is definitely the best place to buy a one-off Pi, and I've gotten several that way over the last 4 months. Buying multiple at once, though, Microcenter seems to discourage (the website shows a price increase for buying more than 1).
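For the NAS part, a sketch of how the BOINC data directory can live on NFS (the server name and export path are examples; /var/lib/boinc-client is the default data directory for Raspbian's boinc-client package):

# /etc/fstab
nas:/export/boinc/pi1  /var/lib/boinc-client  nfs  defaults  0  0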
 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
20,570
8,416
136
I don't have a Microcenter near me. If you are ever recommending things for me, please use Amazon or Newegg. I hate that you can't get half or more of their things on the internet.
 

pauldun170

Diamond Member
Sep 26, 2011
6,862
1,980
136
Endgame124 said:
The biggest issue with those little fans when running distributed computing is that they will be running 24x7x365. My experience with those little fans is that they start making grinding noises within 3 months, and fail completely within 6-9 months. I find that a passive cooling case is going to be the longer-lived solution, and worst case you can always point a 120mm fan at it.

The Flirc case temp depends on ambient, of course, but even in a room with a high ambient temp (80F) I've not seen a Pi 4 in one thermally throttle.

My temp testing with a Flirc case with a stock Pi 4 inside, in a 68F basement:

w/ plastic top on: 58C
w/ plastic top off: 55C
w/ plastic top off + heatsink sitting on top of case: 51C
w/ plastic top off + heatsink sitting on top of case + 60mm fan blowing on it: 35C

That's one I hadn't seen before. Very interesting, if not terribly cost effective for a whole cluster of BOINC boxes.

Another completely over the top option:

The multi-layer setups are probably the ideal option, but I'm still concerned about the longevity of the little fans with that setup. I'm leaning toward buying one of these to try out:

Excellent article, and one I've seen before. It was part of my motivation to store all the BOINC project data on my NAS. I'll make sure to include this link in the first post.

Yep - Microcenter is definitely the best place to buy a one-off Pi, and I've gotten several that way over the last 4 months. Buying multiple at once, though, Microcenter seems to discourage (the website shows a price increase for buying more than 1).
I can say that so far, running 24x7 for several months, those little fans have been running like champs. If they fail, last I checked they are super cheap to replace.


That stackable case is interesting. It does raise the question of whether it might make an interesting project to just build your own case at that point.
It should be straightforward to do with basic tools. The important bits will be the heatsinks. Slap some fans on.

Come to think of it... take an old PC case with fans and airflow. Create a custom tray or rack inside that case. You can hide everything within the case and run the case fans (either custom-wired, or simply take an old PSU and use the jumper to power it up).

"What's up with the old PC in the corner?"
"It's got 20 Raspberry Pis running in there..."

The Flirc definitely is a great, simple case.
FYI, mine is sitting on top of our FiOS set-top box. I was able to lower the temps a bit on the Flirc case by turning it on its side on a cool portion of the set-top box.
I was tempted to toss a heat sink on it, but with temps under 50C I don't see any reason to do it.
 

Endgame124

Senior member
Feb 11, 2008
353
207
116
pauldun170 said:
I can say that so far, running 24x7 for several months, those little fans have been running like champs. If they fail, last I checked they are super cheap to replace.

That stackable case is interesting. It does raise the question of whether it might make an interesting project to just build your own case at that point.
It should be straightforward to do with basic tools. The important bits will be the heatsinks. Slap some fans on.

Come to think of it... take an old PC case with fans and airflow. Create a custom tray or rack inside that case. You can hide everything within the case and run the case fans (either custom-wired, or simply take an old PSU and use the jumper to power it up).

"What's up with the old PC in the corner?"
"It's got 20 Raspberry Pis running in there..."

The Flirc definitely is a great, simple case.
FYI, mine is sitting on top of our FiOS set-top box. I was able to lower the temps a bit on the Flirc case by turning it on its side on a cool portion of the set-top box.
I was tempted to toss a heat sink on it, but with temps under 50C I don't see any reason to do it.
I experimented with putting Pis inside a computer case, but it really came down to the fact that you need some kind of frame to attach the Pi to within the case. If I had any HDD bays available, I could fit maybe 6 inside the HDD slots of the NZXT case I have my NAS in. Any more than that and you have to build something.

I've also looked into using a PC power supply to power all of the Pis, but the fact that they are powered by 5V means a PC power supply isn't very good at powering a bunch of Pis - most of an ATX supply's capacity is on the 12V rail, and the 5V rail is often limited to something like 15-20A.

It seems the least expensive per Pi, and least effort, approach is to buy one of the stacking cases and power the whole stack with a multi-port USB charger.
 
