
What are some good optimizations to do to a NFS server?

Red Squirrel

No Lifer
I am slowly migrating over to a new server that will be dedicated to storage. So far I'm not having very good luck with my new array, a 4-drive RAID 10. It took nearly 2 days to copy 2TB worth of files, which does not seem right at all. I had trouble figuring out the transfer rate, since with any tool I used the numbers were jumping around like crazy, anywhere from 15MB/sec to 50MB/sec. Either way those aren't good; I should be getting at least 100MB/sec, with 125MB/sec being the theoretical limit of gigabit.

Also, my Raspberry Pi now locks right up in the middle of viewing anything. Is there anything I should tweak on the server to make NFS performance better?

The network seems ok, based on a really quick test.

Code:
[root@isengard ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.1.1.50 port 5001 connected with 10.1.2.10 port 41124
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.09 GBytes    937 Mbits/sec
^C[root@isengard ~]# 
[root@isengard ~]# 
[root@isengard ~]# 
[root@isengard ~]# iperf -c falcon.loc
------------------------------------------------------------
Client connecting to falcon.loc, TCP port 5001
TCP window size: 23.2 KByte (default)
------------------------------------------------------------
[  3] local 10.1.1.50 port 58557 connected with 10.1.2.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.09 GBytes    938 Mbits/sec
[root@isengard ~]#
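
Since the wire tests at ~937 Mbit/sec, it's worth checking whether the array itself can keep up. A quick sequential dd test run directly on the server takes the network out of the picture entirely (the path below is a placeholder; point it at a directory on the array):

```shell
# Sequential write straight to the array, bypassing the page cache so the
# number reflects the disks (path is a placeholder; run on the server):
dd if=/dev/zero of=/volumes/raid1/ddtest bs=1M count=4096 oflag=direct

# Sequential read; drop the page cache first so reads actually hit the disks:
echo 3 > /proc/sys/vm/drop_caches
dd if=/volumes/raid1/ddtest of=/dev/null bs=1M
rm -f /volumes/raid1/ddtest
```

If dd reports well above 100MB/sec locally but NFS copies still crawl, the array isn't the bottleneck and the NFS/client side is where to look.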


At the same time I also put the Pi in a case... do these things overheat? I wonder if maybe the issue is that it needs a heat sink and fan?
 


Honestly, an RPi isn't the best solution for a storage appliance, whether NAS or iSCSI over Ethernet (if you can even get that working on that platform). Its devices are constantly competing for bandwidth; in my experience they're good for simple storage solutions, but if you try to run software RAID or ZFS on one you're looking for trouble. An Atom-based appliance would honestly serve you better.
 
Oh, the storage server is a Xeon and a fairly beefy machine. The Pi is just one of the clients, and it kept locking up due to what I imagine was I/O starvation or something. Oddly, it randomly stopped doing it, so I'm not sure what that was about.

Just curious, though, whether I should be doing anything on the server to further optimize NFS, since it will be a dedicated file server and more and more stuff will be accessing it. I'm even debating redirecting /home to it from all my servers.
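
If /home does end up on the server, the server side of that is just an /etc/exports entry. A sketch (the path and the 10.1.0.0/16 range are assumptions based on the addresses in this thread; adjust to taste):

```shell
# /etc/exports on the server (sketch; adjust path and subnet):
# sync is the safer choice for /home; async is faster but risks
# losing acknowledged writes if the server crashes.
/volumes/raid1/home  10.1.0.0/16(rw,sync,no_subtree_check)
```

After editing, `exportfs -ra` re-reads the file and applies the changes without restarting the NFS daemon.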
 

What OS/software are you using to manage it? It really depends on what role the server is going to play. Some of this stuff is quite new to me (from a *nix perspective at least), so I'm learning as you are. From my understanding, ARC size tuning yields some really good gains depending on what role the server is playing.

This pertains to Solaris but is a good guide on tuning with explanations.

http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#File-Level_Prefetching
 
It's running CentOS 6, the arrays use mdadm, and its only role will be file serving. I may possibly do Kerberos/LDAP at some point, though I'm debating whether I want a separate box for that, since this system really is strictly a file server. There is no GUI or front end; it's all done at the command line.

One optimization I did find: when mounting an NFS share, instead of using the defaults, use something like this:

Code:
isengard.loc:/volumes/raid1/p2p  /network/p2p   nfs    rsize=32768,wsize=32768,intr,noatime    0 0

Though I'm curious whether there are other such optimizations, or whether those are even the best settings. That's on the client side, though; ideally it would be nice if I could set those at the server level, but I'm not sure those particular settings are doable on the server.
 
Copying files at the network filesystem level (CIFS/NFS) is very slow compared to more direct methods like native iSCSI or FC mounts. The NFS mount line you posted is a good start at raising the block size. You'll get far more mileage if you choose the right filesystems too... EXT2 is non-journaling and handles smaller files better, while EXT3/4 will likely give you better large-file support. If you have a lot of smaller files, you may be better off tar'ing them up and transferring them in larger chunks. In the old days I would actually run a backup/restore job to transfer large amounts of data over the network; it was about 30% faster.
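
The tar trick can also be done on the fly rather than staging an archive first. A sketch, piping a single tar stream over ssh so each small file doesn't pay a per-file round trip (the hostname is from this thread; the directories are placeholders):

```shell
# Stream many small files as one tar stream over ssh
# (directories are placeholders; adjust to your layout):
tar -C /source/dir -cf - . | ssh isengard.loc 'tar -C /dest/dir -xf -'
```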

Network limitations can be an issue when copying that much data if you're not on a gigabit network. If you're running iSCSI, you need at least 600-650Mbit, and you can't route that traffic... jumbo frames are a must.
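
On CentOS 6, jumbo frames are set per interface in the ifcfg file. A sketch, assuming eth0 and a switch that supports 9000-byte frames end to end (both assumptions):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 (append; sketch)
MTU="9000"
```

After `service network restart`, verify with `ping -M do -s 8972 <peer>`: the -M do flag forbids fragmentation, and 8972 is the 9000-byte MTU minus 28 bytes of IP and ICMP headers, so the ping only succeeds if jumbo frames work along the whole path.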

Another limitation may be your RAID controller. If you have no RAID controller and are using software RAID, that could be part of the slowdown. If your controller doesn't have ample RAID cache, that could also be part of the problem... you see a lot of low-end cards lacking resources.

The other limitation is spindles... copying that kind of data to only 4 drives in a RAID 10 is essentially writing to only 2 spindles. If there were enough data coming down the pipe, it could fill the write cache and cause a bottleneck, but that's typically rare unless you're copying over FC at 4-8Gbit transfer rates.
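
A back-of-envelope check of that: a 4-drive RAID 10 is two mirrored pairs, and writes stripe across the pairs, so sequential writes see roughly two spindles' worth of throughput. Assuming ~100MB/sec sequential per 7200rpm drive (an assumed figure, not measured in this thread):

```shell
per_drive=100   # assumed MB/s sequential per 7200rpm drive
pairs=2         # 4-drive RAID 10 = 2 mirrored pairs; writes stripe across pairs
echo "expected sequential write: $((per_drive * pairs)) MB/s"
```

That ~200MB/sec comfortably exceeds the ~117MB/sec gigabit wire ceiling, so the 15-50MB/sec being observed points at something other than raw spindle count.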
 