The Raspberry Pi Thread


Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,219
3,800
75
If Rosetta is going to start assigning 2GB tasks regularly, I may have to start having 2 Rosetta tasks and 2 WCG tasks. That seems to annoy BOINC, though, based on some testing with a 2GB Pi.
Maybe you need another instance...
 

StefanR5R

Elite Member
Dec 10, 2016
5,459
7,718
136
With just 4 narrow cores per host, and not much RAM either (considering Rosetta's needs), it may be worthwhile to stick with 1 client instance per host. E.g. set the global CPU percentage to 50 %, define WCG applications as 0.01 % CPU users, set WCG's project_max_concurrent to 2, and use WCG's web config to limit its number of tasks in progress so as not to end up with a weeks-deep work buffer.

(That's assuming the BOINC annoyance which @Endgame124 referred to was about the client's local scheduling decisions.)
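For reference, a minimal sketch of the app_config.xml side of that, dropped in the WCG project directory (the app_name below is just an example - client_state.xml lists the real application names on your host):

<app_config>
    <project_max_concurrent>2</project_max_concurrent>
    <!-- one app_version block per WCG application; app names are examples -->
    <app_version>
        <app_name>mcm1</app_name>
        <avg_ncpus>0.01</avg_ncpus>
    </app_version>
</app_config>

The global CPU percentage is the ordinary computing preference ("Use at most X % of the CPUs"), not something in this file.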
 

Endgame124

Senior member
Feb 11, 2008
954
669
136
With just 4 narrow cores per host, and not much RAM either (considering Rosetta's needs), it may be worthwhile to stick with 1 client instance per host. E.g. set the global CPU percentage to 50 %, define WCG applications as 0.01 % CPU users, set WCG's project_max_concurrent to 2, and use WCG's web config to limit its number of tasks in progress so as not to end up with a weeks-deep work buffer.

(That's assuming the BOINC annoyance which @Endgame124 referred to was about the client's local scheduling decisions.)
I thought it was the local scheduler that was the problem when I wrote the post - my 2GB Pi was set via app_config.xml to run 2 Rosetta tasks and 2 WCG tasks, but only WCG was running. It turns out the problem was that Rosetta ran out of tasks. I see that Rosetta has started sending tasks out again, so I'll check on the host in the morning and see what it's doing.
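For anyone reading later, the per-app limit that pairs with the above looks roughly like this in the Rosetta project directory's app_config.xml ("rosetta" is assumed as the app name here - again, client_state.xml has the authoritative names):

<app_config>
    <app>
        <name>rosetta</name>
        <max_concurrent>2</max_concurrent>
    </app>
</app_config>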
 

Endgame124

Senior member
Feb 11, 2008
954
669
136
I moved all of my Pis to using swap over NFS using a loop device, thinking this has got to be a better solution than swap on a local micro SD card. That said, I have one of them that is trying to swap over NFS and it keeps hanging, so it really may not be the best idea ever. Perhaps I need to see if I can mount it in a different way...
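(For reference, the usual recipe, with example paths - the loop device is needed because swapon refuses a file that sits directly on NFS:

sudo dd if=/dev/zero of=/mnt/nas/pi1.swap bs=1M count=2048   # 2 GB swap file on the NFS mount
sudo losetup /dev/loop0 /mnt/nas/pi1.swap                    # wrap it in a loop device
sudo mkswap /dev/loop0
sudo swapon /dev/loop0

The known caveat is that swapping through a loop device over NFS is deadlock-prone - reclaiming memory can itself require network I/O and allocations - which may be exactly the hang described above.)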

Since I had to reboot all 8 Pis, and I had a few minutes to do some testing, I switched all 8 to maximum power saving mode - over_voltage -2, clock speed 1525 MHz. Combined, all 8 Pis pull 36 watts, including losses from the power supplies (based on the touch test, the Anker power supply is more efficient). That makes them 4.5 watts each, which isn't as good as I was expecting from my testing with a single Pi - it's about 244 credits/watt in Rosetta. Perhaps a single large system really is the way to go - similar performance per watt with substantially less system management.
 

Endgame124

Senior member
Feb 11, 2008
954
669
136
A few more power saving options I've found digging around the official Pi forum. These all go in /boot/config.txt and take effect on reboot:

# Disable Ethernet LEDs
dtparam=eth_led0=4
dtparam=eth_led1=4

# Disable the DRM VC4 V3D driver on top of the dispmanx display stack (comment out the existing line)
#dtoverlay=vc4-fkms-v3d

# Disable the HDMI framebuffer (does not appear to be covered by tvservice --off)
max_framebuffers=0

# Disable audio (minimal power saving, but frees up memory)
dtparam=audio=off
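A couple of these also have runtime equivalents, assuming stock Raspberry Pi OS paths (no reboot needed, but they don't persist across one):

sudo tvservice -o                                    # power down the HDMI/display pipeline
echo none | sudo tee /sys/class/leds/led0/trigger    # stop the activity trigger first
echo 0 | sudo tee /sys/class/leds/led0/brightness    # ACT LED off
echo 0 | sudo tee /sys/class/leds/led1/brightness    # PWR LED off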
 

Endgame124

Senior member
Feb 11, 2008
954
669
136
I picked up a power meter (this one to be specific: https://poniie.com/products/17), and after playing with it a little bit, I have these observations:

1) Anker customer service stated its 6-port power supply is 85% efficient at any load. This appears to be incorrect. 1 Pi 4 4GB attached to the charger draws 5.43 watts peak running Rosetta; 5x Pi 4 4GB plugged into the same charger draw 24.79 watts peak (4.958 watts each).

2) Rosetta generates a dramatically more variable load than WCG OP. Rosetta will vary by > 0.75 watts with 1 Pi 4, while WCG OP varies by < 0.1 watt.

3) Under load, multiple tweaks seem to make minimal to no difference in power utilization. Disabling USB, HDMI, and audio has no effect. Turning off the LEDs and slowing Ethernet to 100 Mbit seem to be the most reliable ways to reduce power.

4) The lowest observed power draw at the meter is 4.516 watts per Pi (22.58 watts total using the 5-Pi setup). This is not a great showing for efficiency - that's 311 Rosetta RAC per watt at the current point values, or 243.58 RAC per watt on my values from April - which means there are certainly AMD processors that are more efficient per watt.
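For the Ethernet tweak in point 3, one common way to pin the link at 100 Mbit (assuming the stock eth0 interface; ethtool may need installing first, and the setting does not survive a reboot):

sudo apt install ethtool                                 # if not already present
sudo ethtool -s eth0 speed 100 duplex full autoneg off   # force the link down to 100 Mbit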
 

StefanR5R

Elite Member
Dec 10, 2016
5,459
7,718
136
I picked up a power meter (this one to be specific: https://poniie.com/products/17),
I have not done any research on this particular power meter, just looked at your link: The vendor's only statement about the accuracy of this device is that it adheres to Class 1.0. This is probably referring to either ANSI C12.1 or IEC 62053-21. I don't own either specification. But a quick web search indicates that IEC 62053-21 defines Accuracy Class 1.0 to give at most 1 % error of measurement at the top end of the meter's measuring range (and at power factor 1.0, and within a certain temperature range). The error of measurement may be larger below the upper limit of the measuring range, or at a power factor of less than 1.0, or of course outside the given temperature range. It is unknown to me whether ANSI or IEC define further limits to the error of measurement at other points within the measuring range.
 
Last edited:

Endgame124

Senior member
Feb 11, 2008
954
669
136
I have not done any research on this particular power meter, just looked at your link: The vendor's only statement about the accuracy of this device is that it adheres to Class 1.0. This is probably referring to either ANSI C12.1 or IEC 62053-21. I don't own either specification. But a quick web search indicates that IEC 62053-21 defines Accuracy Class 1.0 to give at most 1 % error of measurement at the top end of the meter's measuring range (and at power factor 1.0, and within a certain temperature range). The error of measurement may be larger below the upper limit of the measuring range, or at a power factor of less than 1.0, or of course outside the given temperature range. It is unknown to me whether ANSI or IEC define further limits to the error of measurement at other points within the measuring range.
I went with it because it was the "Amazon Recommended" alternative to the Kill A Watt, which was over twice the price. On the plus side, it's getting a similar measurement to my APC UPS - around 4.5 watts per 4GB Pi when running 5 of them. Two different devices giving a similar value probably gives an acceptable level of confidence in the power usage.
 

StefanR5R

Elite Member
Dec 10, 2016
5,459
7,718
136
There are actually corporations which need large clusters of ARM nodes. They tend to power them over PoE:
https://www.servethehome.com/aoa-analysis-marvell-thunderx2-equals-190-raspberry-pi-4/
(from August 2019)

They need these clusters not for computation, but as test beds for software development. Hence, "bang for the buck" plays a somewhat lesser role than for us DCers.
Patrick Kennedy said:
I have seen a cluster with several dozen nodes and individual USB power inputs, and it looked like a rat’s nest in comparison.


(I found this via an article from today which briefly discusses a now abandoned plan to build an RPi 4 cluster:
https://www.servethehome.com/introducing-project-tinyminimicro-home-lab-revolution/
They found stacks of SFF corporate desktops easier to handle as the basis for a cluster of small nodes.)
 
  • Like
Reactions: Endgame124

Endgame124

Senior member
Feb 11, 2008
954
669
136
Using a mix of low-cost desktop hardware sounds like the worst of all possible worlds.

You don't have a unified software, management, and configuration stack - each vendor (and likely each model) is likely to require different management. It's also quite a bit more expensive than Pis (I think the author overestimates the price by around $50 per Pi), and without a compelling work-per-watt angle.

If power is less of a concern, likely the best approach would be to build as many 3900X systems as needed, which I believe is the best price per core option, and has solid power efficiency to boot.

Edit:
Just read the ARM server article. That is pretty awesome, but the up-front cost makes it questionable compared against EPYC hardware.
 
Last edited:

StefanR5R

Elite Member
Dec 10, 2016
5,459
7,718
136
About this SFF desktop cluster (going somewhat off-topic to your thread): these have laptop processors and laptop-style mainboards, which should make them comparably efficient under load and at idle. The low core count per node is a big drawback though when this is considered for computational purposes.
They purchased different ones only in order to be able to make recommendations to their customers and readers; for a production deployment you'd sure want to have a uniform cluster.
I am noticing that all their photographs of those neat stacks of SFF computers were taken without any cables, notably without the tangled web of external power supplies. (On the other hand, one of the more realistic reasons to repurpose SFF desktops for server-like duties is "edge computing", i.e. having the nodes placed apart in remote locations, not actually concentrated in a single lab.)
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,483
14,434
136
@Endgame124 , do you have Newegg or Amazon links to the best 4 gig Pis with the power supplies and SD cards required to run BOINC? And any additional hardware I forgot to ask about. I am looking at 12-18 Pis total.

What do you think of this one?
 

Endgame124

Senior member
Feb 11, 2008
954
669
136
@Endgame124 , do you have Newegg or Amazon links to the best 4 gig Pis with the power supplies and SD cards required to run BOINC? And any additional hardware I forgot to ask about. I am looking at 12-18 Pis total.

What do you think of this one?
While I replied in the other thread, I thought I would reply here for anyone that reads this later. Additionally, I can come back to this later and adjust earlier posts in the thread.

Q) Do I have a recommended single-Pi starter kit, specific to distributed computing?

A) No - most of the starter kits are not designed to run at 100% CPU 24x7x365. If I were to recommend a "starter kit", I would create my own from these components:

Pi: 4GB Pi 4 (board only) (https://www.amazon.com/Raspberry-Model-2019-Quad-Bluetooth/dp/B07TC2BK1X/)

Case: Flirc (https://www.amazon.com/Flirc-Raspberry-Pi-Case-Silver/dp/B07WG4DW52?ref_=ast_sto_dp)

SD Card: 32 GB Samsung Select (https://www.amazon.com/dp/B06XWN9Q99/ref=twister_B08D89LRCB?_encoding=UTF8&psc=1)

Power Supply: Official Raspberry Pi 4 Power Supply
(https://www.amazon.com/Raspberry-Mo...i+charger&qid=1600739788&s=electronics&sr=1-5)
 

pauldun170

Diamond Member
Sep 26, 2011
9,133
5,072
136
@Endgame124 , do you have Newegg or Amazon links to the best 4 gig Pis with the power supplies and SD cards required to run BOINC? And any additional hardware I forgot to ask about. I am looking at 12-18 Pis total.

What do you think of this one?

I'm sure you'll get solid advice, but I'll toss out some alternatives.
In addition to the Flirc case and the official power supply (which I own),
I also use the cheapo Micro Connectors cases (with fan) from Micro Center,
along with aftermarket power supplies that include switches.
I actually prefer the aftermarket PSUs due to the power button.

Those cases come with fans and heatsinks, and I have the fans on the 3V line so they are silent.

The Flirc case is great....until it's summer and it's in a hot room. Then they do get pretty hot.
The Micro Connectors case with the fan on low keeps the temps in the 40C-55C range.

A lot of sites recommend the Argon ONE Raspberry Pi 4 case, and on paper it looks excellent. HOWEVER, I do see a lot of complaints about poor build quality.
I've been tempted to order one for a while now, but comments on real-world usage hold me back. Fortunately, the Flirc and Micro Connectors cases have been excellent, and aside from the annoying connector situation I have no issues.

This case is kinda interesting, if not over the top.


For your BOINC project
How about a stackable setup? Each with fan and heatsinks.

As for the SD card
Here ya go.
All will work fine so pick the fastest one you are willing to pay for.

Finally, Micro Center sometimes has pretty good sales.
 

Endgame124

Senior member
Feb 11, 2008
954
669
136
I'm sure you'll get solid advice, but I'll toss out some alternatives.
In addition to the Flirc case and the official power supply (which I own),
I also use the cheapo Micro Connectors cases (with fan) from Micro Center,
along with aftermarket power supplies that include switches.
I actually prefer the aftermarket PSUs due to the power button.

Those cases come with fans and heatsinks, and I have the fans on the 3V line so they are silent.
The biggest issue with those little fans when running distributed computing is that they will be running 24x7x365. My experience with those little fans is that they start making a grinding noise within 3 months, and fail completely within 6-9 months. I find that a passive cooling case is going to be the longer-lived solution, and worst case you can always point a 120mm fan at it.

The Flirc case is great....until it's summer and it's in a hot room. Then they do get pretty hot.
The Micro Connectors case with the fan on low keeps the temps in the 40C-55C range.
The Flirc case temp depends on ambient, of course, but even in a room with a high ambient temp (80°F) I've not seen a Pi 4 in one thermally throttle.

My temp testing with a Flirc case with a stock Pi 4 inside, in a 68°F basement:

w/ plastic top on: 58°C
w/ plastic top off: 55°C
w/ plastic top off + heatsink sitting on top of case: 51°C
w/ plastic top off + heatsink sitting on top of case + 60mm fan blowing on it: 35°C


This case is kinda interesting, if not over the top.
That one I hadn't seen before. Very interesting, if not terribly cost-effective for a whole cluster of BOINC boxes.

Another completely over the top option:

For your BOINC project
How about a stackable setup? Each with fan and heatsinks.
The multi-layer setups are probably the ideal option, but I'm still concerned about the longevity of the little fans in that setup. I'm leaning toward buying one of these to try out:


As for the SD card
Here ya go.
All will work fine so pick the fastest one you are willing to pay for.
Excellent article, and one I've seen before. It was part of my motivation to store all the BOINC project data on my NAS. I'll make sure to include this link in the first post.

Finally, Micro Center sometimes has pretty good sales.
Yep - Micro Center is definitely the best place to buy a one-off Pi, and I've gotten several that way over the last 4 months. Buying multiple at once, though, Micro Center seems to discourage (the website shows a price increase for buying more than one).
 
Last edited:

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,483
14,434
136
I don't have a Micro Center near me. If you ever are recommending things for me, please use Amazon or Newegg. I hate that you can't get half or more of the things they carry on the internet.
 

pauldun170

Diamond Member
Sep 26, 2011
9,133
5,072
136
The biggest issue with those little fans when running distributed computing is that they will be running 24x7x365. My experience with those little fans is that they start making a grinding noise within 3 months, and fail completely within 6-9 months. I find that a passive cooling case is going to be the longer-lived solution, and worst case you can always point a 120mm fan at it.


The Flirc case temp depends on ambient, of course, but even in a room with a high ambient temp (80°F) I've not seen a Pi 4 in one thermally throttle.

My temp testing with a Flirc case with a stock Pi 4 inside, in a 68°F basement:

w/ plastic top on: 58°C
w/ plastic top off: 55°C
w/ plastic top off + heatsink sitting on top of case: 51°C
w/ plastic top off + heatsink sitting on top of case + 60mm fan blowing on it: 35°C


That one I hadn't seen before. Very interesting, if not terribly cost-effective for a whole cluster of BOINC boxes.

Another completely over the top option:


The multi-layer setups are probably the ideal option, but I'm still concerned about the longevity of the little fans in that setup. I'm leaning toward buying one of these to try out:


Excellent article, and one I've seen before. It was part of my motivation to store all the BOINC project data on my NAS. I'll make sure to include this link in the first post.


Yep - Micro Center is definitely the best place to buy a one-off Pi, and I've gotten several that way over the last 4 months. Buying multiple at once, though, Micro Center seems to discourage (the website shows a price increase for buying more than one).

I can say that so far, running 24x7 for several months, those little fans have been running like champs. If they fail, last I checked they are super cheap to replace.


That stackable case is interesting. It does raise the question of whether it might make an interesting project to just build your own case at that point.
Should be straightforward to do with basic tools. The important bits will be the heatsinks. Slap some fans on.

Come to think of it...take an old PC case with fans and airflow. Create a custom tray or rack inside that case. You can hide everything within the case and run the case fans (either custom-wired, or simply take an old PSU and use the jumper to power it up).

"Whats up with the old PC in the Corner?
Its got 20 raspberry pi's running in there...

The Flirc definitely is a great, simple case.
FYI, mine is sitting on top of our FiOS STB. I was able to lower the temps a bit on the Flirc case by turning it on its side on a cool portion of the set-top box.
I was tempted to toss a heatsink on it, but with temps under 50°C I don't see any reason to do it.
 

Endgame124

Senior member
Feb 11, 2008
954
669
136
I can say that so far, running 24x7 for several months, those little fans have been running like champs. If they fail, last I checked they are super cheap to replace.


That stackable case is interesting. It does raise the question of whether it might make an interesting project to just build your own case at that point.
Should be straightforward to do with basic tools. The important bits will be the heatsinks. Slap some fans on.

Come to think of it...take an old PC case with fans and airflow. Create a custom tray or rack inside that case. You can hide everything within the case and run the case fans (either custom-wired, or simply take an old PSU and use the jumper to power it up).

"What's up with the old PC in the corner?"
"It's got 20 Raspberry Pis running in there..."

The Flirc definitely is a great, simple case.
FYI, mine is sitting on top of our FiOS STB. I was able to lower the temps a bit on the Flirc case by turning it on its side on a cool portion of the set-top box.
I was tempted to toss a heatsink on it, but with temps under 50°C I don't see any reason to do it.
I experimented with putting Pis inside a computer case, but it really came down to the fact that you need some kind of frame to attach the Pis to within the case. If I had any HDD bays available, I could fit maybe 6 inside the HDD slots in the NZXT case I have my NAS in. Any more than that and you have to build something.

I've also looked into using a PC power supply to power all of the Pis, but since they run on 5V, and a modern ATX supply delivers most of its capacity on the 12V rail, a PC power supply isn't very good at powering a bunch of Pis.

It seems the least expensive per Pi, and the least effort, is to buy one of the stacking cases and power the whole stack with a multi-port USB charger.
 

Endgame124

Senior member
Feb 11, 2008
954
669
136
A short update on what I've been working on with my Pis (instead of updating the first post of this thread, which could also use some substantial work)

I've been spending some time trying to figure out network boot w/ iSCSI targets for remote storage instead of NFS. It's been slow going, as I can usually only work on it at night when I'm already dead tired, but I seem to be making progress on that front (now my test Pi boots, then hangs). I don't have a mini HDMI -> HDMI cable, so I've been attempting, and generally succeeding, at doing everything with my Pi 4s 100% headless. At this point, I need the cable to see where the Pi is getting stuck, so I ordered one. Should be here tomorrow.

Today I had a little extra time, and since I don't have my mini HDMI cable yet, I decided to work on something different - I cut down everything running on Pi OS Lite, and knocked out a number of services to free up both memory and CPU time. I removed (I think) around 50 MB of RAM usage, and reduced CPU overhead, with the following changes (sketched as commands below):

1) set a static IP and disable dhcpcd
2) disable wpa_supplicant (used with WiFi, but I had already disabled WiFi)
3) disable audio and remove the drivers and associated software
4) disable triggerhappy (a hotkey daemon that handles things like keyboard volume control)
5) disable avahi-daemon (I don't need to advertise this host to the network)
6) migrate cron jobs to systemd timers and disable the cron service
7) disable the VC4 DRM driver and reduce framebuffers to 0

I still have a fair amount of RAM in use at idle (126 MB) - some of which is tied to using NFS and should go away once I switch to iSCSI, but I'll take any improvement I can get. If anyone is interested and wants to help trim down the Pi OS Lite install, or do further research on disabling hardware to reduce Pi power usage, I'd love to have some help! I've also posted to a thread on the official Raspberry Pi forums, so I may get some help cleaning things up on that end as well.
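A minimal sketch of the service side of that list, assuming stock Raspberry Pi OS Lite (Buster) unit names - verify each name with systemctl status before disabling it:

# 2), 4), 5): stop and disable the daemons outright
sudo systemctl disable --now wpa_supplicant triggerhappy avahi-daemon

# 6): disable cron only after its jobs have been migrated to systemd timers
sudo systemctl disable --now cron

# 1): configure a static IP first (e.g. via /etc/network/interfaces or systemd-networkd),
# otherwise disabling dhcpcd will drop the Pi off the network at the next boot
sudo systemctl disable --now dhcpcd

Items 3) and 7) are /boot/config.txt changes (dtparam=audio=off, commenting out dtoverlay=vc4-fkms-v3d, max_framebuffers=0) rather than services.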
 
  • Like
Reactions: Orange Kid

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,483
14,434
136
OK, I think I upgraded to 64-bit as suggested somewhere, but what command tells me that it worked? Like what OS it's running?
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,483
14,434
136
4.19.97.... Did I get the 64-bit version?
 

Endgame124

Senior member
Feb 11, 2008
954
669
136
4.19.97.... Did I get the 64-bit version?
That is a little less output than I was expecting. I'm running the 32-bit OS with the 64-bit kernel, which allows me to access all 8GB of RAM.

Output from one of my pis:

LemonChiffon:~ $ uname -a
Linux LemonChiffon 5.4.51-v8+ #1333 SMP PREEMPT Mon Aug 10 16:58:35 BST 2020 aarch64 GNU/Linux

The 64-bit kernel is enabled with this flag in /boot/config.txt:
LemonChiffon:~ $ tail /boot/config.txt
[all]
##dtoverlay=vc4-fkms-v3d
arm_64bit=1
dtoverlay=disable-wifi
dtoverlay=disable-bt
gpu_mem=16
dtparam=eth_led0=4
dtparam=eth_led1=4
over_voltage=-2

The contents of /etc/os-release don't give any clues, but here is mine for what it's worth:
LemonChiffon:~ $ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
NAME="Raspbian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"

The full Raspbian 64-bit OS is still in development / beta, so I haven't installed it yet.
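To see the userland side, dpkg reports the architecture the installed packages were built for - armhf on 32-bit Raspbian:

LemonChiffon:~ $ dpkg --print-architecture
armhf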
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,483
14,434
136
pi@raspberrypi:~ $ uname -a
Linux raspberrypi 4.19.97-v7l+ #1294 SMP Thu Jan 30 13:21:14 GMT 2020 armv7l GNU/Linux
pi@raspberrypi:~ $

That's all that shows up.
 

Endgame124

Senior member
Feb 11, 2008
954
669
136
pi@raspberrypi:~ $ uname -a
Linux raspberrypi 4.19.97-v7l+ #1294 SMP Thu Jan 30 13:21:14 GMT 2020 armv7l GNU/Linux
pi@raspberrypi:~ $

That's all that shows up.
Looks like you're still running the 32-bit kernel - I bolded the part that would read aarch64 if the 64-bit kernel were running. Yours is armv7l instead.

If you want to switch to the 64-bit kernel, run this from the command line and reboot:

echo "arm_64bit=1" | sudo tee -a /boot/config.txt