Fastest Linux server file system for Windows clients

robmurphy

Senior member
Feb 16, 2007
I have a Dell server which I want to use in my home network. The server is a PowerEdge 840 with a dual-core Xeon (Core 2-based) processor, and it has a PERC 5 hardware SAS/SATA RAID controller. The PERC 5 RAID controller has a bandwidth of up to 1 GB/s (8 Gb/s). This should be more than enough to fully saturate a single gigabit Ethernet link.
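As a rough sanity check on that claim (back-of-the-envelope figures of my own, not Dell's spec sheet), the comparison in Python:

Code:
# PERC 5 quoted bandwidth vs. one gigabit Ethernet link (rough figures, not Dell's numbers)
controller_MBps = 1000        # ~1 GB/s = 8 Gb/s quoted for the controller
gige_MBps = 1000 / 8          # 1 Gb/s line rate = 125 MB/s before protocol overhead

print(f"Controller headroom: {controller_MBps / gige_MBps:.0f}x one gigabit link")
# prints: Controller headroom: 8x one gigabit link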

I want to set up all the files I currently share on the home network on the Dell server.

I have several server-grade Intel PCI-X gigabit network cards, and the server has two free PCI-X slots to take the cards.

What I would like is a Linux build on the server that will allow Windows clients with gigabit Ethernet to mount the exported file systems and give good to excellent performance. I'm more interested in raw transfer speed than anything else here.

I'm not interested in WHS for this. If I use an MS server build it will be Server 2003 or 2008. Given that funds are very tight at present, I would prefer to use a Linux server. I have seen various posts regarding Samba and iSCSI. As far as I can see from the posts I have read, Samba seems to offer quite low transfer speeds, and iSCSI is reported to be slower than Samba.

Are the speed limitations due to a lack of processing power, or are they just a limitation of Samba/iSCSI?

Does anyone have suggestions for an alternative file system? I'm OK with using FreeBSD, Linux, or another free OS, just not WHS.

If Linux/FreeBSD/another free OS cannot provide decent performance, what performance could I expect from MS SBS 2003/2008 to an ideal Windows client over gigabit Ethernet? Again, as stated before, I'm not interested in WHS.

Thanks in advance for any help.

If this post needs moving to another forum then please move it. I asked in this forum as after reading the previous posts it seemed the most relevant.

Rob.
 

Pantlegz

Diamond Member
Jun 6, 2007
FreeNAS works well in my experience; it's free, lightweight, and very easy to set up and configure.
 

RebateMonger

Elite Member
Dec 24, 2005
With modern disks and controllers, the server hardware won't likely make much difference in the case of a file server with few users. The networking hardware, the network file sharing protocol, and any special transfer applications are more likely to determine data transfer rates.

The most common file sharing protocols are NFS (Linux), SMB (Windows), and iSCSI. Protocols have to match on the server and the client side. FTP, HTTP, and rsync are some other ways to transfer data, although they're not as generally useful.
 

robmurphy

Senior member
Feb 16, 2007
With modern disks and controllers, the server hardware won't likely make much difference in the case of a file server with few users. The networking hardware, the network file sharing protocol, and any special transfer applications are more likely to determine data transfer rates.

The most common file sharing protocols are NFS (Linux), SMB (Windows), and iSCSI. Protocols have to match on the server and the client side. FTP, HTTP, and rsync are some other ways to transfer data, although they're not as generally useful.

At present I have several desktop machines running XP/Vista/Win 7 with drives mapped across. This is not great. XP-to-XP file sharing is not that good, and the workgroup setup leaves much to be desired. This was the reason to look at using a server build. I could run this on one of the desktops, but I'd run into problems with room in the desktop case.

I also wanted the experience of working on and loading an OS on a real server. If I go for a job/contract and tell them I have worked on a server, they want to know which one.

What I was hoping was that someone had set up Samba and/or iSCSI on a Linux server with Windows clients, and could comment on the transfer speed, problems, etc.

Some of the desktops will be dual-booted with Linux and Windows. Having the same server used for both would be great. The most important thing is, as stated in my first post, transfer speed. If the transfers to/from the server are quick, i.e. 80 to 100 MB/s (640 to 800 Mb/s), then I'm happy to deal with other inconveniences.

I'll also be using several of the desktops at once, so the server could be dealing with all 3 gigabit Ethernet ports on the PCI-X NICs being fully used.

I have used PC NFS before, but that was in the early 1990s. I have also used rsync many times between Linux hosts. I'm not sure about using rsync on Windows without something like Cygwin.

To be honest FTP is not what I want. I want to be able to read/write the files directly on the server.

Rob.
 

Emulex

Diamond Member
Jan 28, 2001
I also wanted the experience of working on and loading an OS on a real server. If I go for a job/contract and tell them I have worked on a server, they want to know which one.
^^ Everyone is using Windows 2008 x64 or 2008 R2 now (120-day trial?).

If you want to bust out, set up VMware ESX (trial) with Site Recovery/Data Recovery to go along with that. You will have to learn some real iSCSI, NFS, and SMB skills to make it work reliably.
 

imagoon

Diamond Member
Feb 19, 2003
You left out disk information. The PERC 5 controller still has to move data to and from the disks, which is what will determine the throughput. Also, it is very unlikely you will actually get 1 gigabyte/sec off that machine without a ton of extra disks.

Typical speeds (actual performance depends on the disks, the number of disks, and the RAID controller):
SCSI (SCA style): With 15k disks you would be able to saturate a gigabit link, but generally "just barely." Performance varies heavily based on the RAID config. RAID 10 is generally the fastest for read/write, followed by RAID 6 and then RAID 5.

SATA disks in the unit will struggle to saturate a single gigabit link even if you configure them in a 4-disk RAID 10. Performance will scale a bit as you add more disks to the array, but the SATA overhead will tend to kill it. Multi-gigabit connections will be largely unnecessary here.

SAS disks come in speeds of up to 15k RPM and generally give much better performance in most server configurations.

So with that:

Something like FreeNAS will be simple to set up. You install it and then log in to the web interface.

iSCSI: #1, odds are you don't want this. iSCSI makes a LUN created on the NAS appear as a local disk to one machine (I doubt you are doing clustering). While you can connect it to more than one machine at a time, expect massive corruption on the disk if you use it incorrectly; NTFS is not cluster-aware (on XP, Vista, or 7 at least). Again, you are basically mapping a raw disk using iSCSI. I am not sure where you heard "iSCSI is slower than SMB", but my servers here have no issue saturating 1 Gb links to the SAN (enterprise level, not home). iSCSI benefits from TCP/IP offload cards and good, clean connections. If you try to share iSCSI traffic on the same LAN as your internet and other traffic, it will be poor.

SMB: Samba performance varies, but you are limited to SMB version 1. SMB 1 is not known for its fantastic performance.

NFS: Generally faster than SMB, but takes a lot more configuration.

2k3 and 2k8 can export NFS, SMB, and FTP. 2k8 adds SMB2 to the mix for supported clients (Vista and Windows 7). Pretty sure Linux does not do SMB2 at all, or at least not well, yet.
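If you want to put numbers on any of these once a share is mounted, here is a quick-and-dirty timing sketch (Python; the drive letter and file size are placeholders, adjust to taste):

Code:
import os, time

# Sustained-write test against a mounted share (SMB or NFS).
# Z:\ is a hypothetical mount point; write enough data that caches don't dominate.
TEST_FILE = r"Z:\throughput_test.bin"
SIZE_MB = 1024                      # 1 GB total
CHUNK = b"\0" * (1024 * 1024)       # 1 MB per write

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())            # push the data out so the timing is honest
elapsed = time.time() - start

print(f"{SIZE_MB} MB in {elapsed:.1f} s -> {SIZE_MB / elapsed:.1f} MB/s")
os.remove(TEST_FILE)

Run it once against the server and once against a local disk, and the difference tells you what the network path and protocol are costing you.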

You need to give us more details on your goals, however. Since this is all for home, it sounds like you could be going the overkill route. The PE840 will likely provide more power than you need in the default config. There will be no reason to add NICs, etc. Also, do you even have the switch hardware to handle multiple NICs?

"I'll also be using several of the desktops at once, so the server could be dealing with all 3 gigabit ethernet ports on the PCI-X NICS being fully used."
No, you won't. The 840 doesn't have enough disk slots or processor power to max out three connections. It is unlikely you would max out one.
 

RebateMonger

Elite Member
Dec 24, 2005
As imagoon points out (and I didn't mention), iSCSI is pretty much limited to sharing data with one PC at a time per "share", unless the multiple clients are running in a cluster.
 

robmurphy

Senior member
Feb 16, 2007
The server was bought because I wanted a real server, and I also wanted an Intel-based machine with hardware virtualisation support to experiment with VMware, Sun's VirtualBox, and Microsoft's equivalent.

The server is a bit more heavy-duty than I had imagined. I did not know about the PERC 5 card. The server came with three RAID Edition WD 250 and 320 GB SATA disks. These may well get replaced with some newer, faster drives. As far as I can tell from Google, the PERC 5 card will be much better than the onboard Intel ICH SATA controller.

I have already downloaded Ubuntu Server, openSUSE, CentOS, and a trial version of Server 2008. I want to use the present time, when I have little or no work, to get some experience loading Server 2008. I have done Server 2003, but that was a few years back, and a trial version of Server 2003 is not available. I also want to use this time to get some experience loading the various Linux server builds onto this server and getting them to work with Windows and Linux clients.

Being able to install and configure both MS servers and (free) Linux servers will be an asset at the moment.

Just what kind of hardware do you need to get transfers of large files across gigabit Ethernet up to 70-80 MB/s? That figure is given in another thread for PC-to-PC transfers over gigabit Ethernet. Other posters have reported 50+ MB/s using WHS, but have not given any details. If it's going to need four 15k drives to obtain decent throughput then that will not happen, but again, how do people get 70-80 MB/s using normal desktop drives? The new desktop I have just built has a Samsung F3 1 TB drive, and the benchmarks given for it show average speeds for sustained reads and writes that would fully saturate a gigabit link. Why does it take four 15k drives, giving about 50% of the capacity at many times the cost, power, and noise, just to get the same sustained read/write performance? And that's using a hardware RAID controller with its own processor. I'm not dealing with SQL databases or other transfers of many small files.

Having multiple NICs allows me to take the switch out of the equation. The three NICs can be connected directly to three of the client PCs for testing purposes. All the PCI-X cards have TOE and support up to 9k jumbo frames. The motherboard NIC can be used for the connection to the net.

After doing some more research myself, I found that iSCSI is not really suitable, as sharing the same volume would cause problems.

If SMB2 is not available on XP or Linux, that only leaves NFS and SMB. When researching NFS, the only clients for Windows machines seemed rather old and had not been updated in many years. What NFS client do you use on the Windows machines?

Rob
 

skyking

Lifer
Nov 21, 2001
I have had very good experiences with Linux md RAID, so the PERC 5 is not really an asset for Linux as I see it. I get 30-35 MB/s from my SATA150 RAID 1 desktop to my SATA150 RAID 1 Linux server, and I have not found that to be a bottleneck for my usage. That was using Samba (SMB) with a 1 GB ISO back and forth just now.
 

imagoon

Diamond Member
Feb 19, 2003
The new desktop I have just built has a Samsung F3 1 TB drive, and the benchmarks given for it show average speeds for sustained reads and writes that would fully saturate a gigabit link. Why does it take four 15k drives, giving about 50% of the capacity at many times the cost, power, and noise, just to get the same sustained read/write performance? And that's using a hardware RAID controller with its own processor. I'm not dealing with SQL databases or other transfers of many small files.

Benchmarks that run from the system to the disk cache will exceed a 1-gigabit connection. However, sustained and random transfers will not.

The disk will attempt to cache sectors for you, trying to guess the next access. Once the cache is full (typically 2 to 32 MB) and the "guessing software" manages to get it right, you will see higher, cache-to-system performance levels because you are going from RAM to RAM. These guesses have to happen during disk lag time, otherwise the attempt to cache the data would slow down the actual data move. Once you move out of cache, you are limited by the physical speed of the platter and read system, which is far lower than cache. Also, with multiple machines you will have random access, which then adds arm swing times and rotational delays, which can play hell even with a good cache algorithm. A gig link is about 120 megabytes a second, by the way.

You will note on the Spinpoint that only the very beginning of the disk is around 120 MB/s. That is the outside edge of the disk; you will only see that performance on a very tiny subsection of the disk. At the other end of the disk you see only 70 MB/s. Also, benchmarks are pure sequential tests, which means the disk does not need to seek the head very far. Once you add random access, those numbers will drop. The disk scored a random access time of 13.5 ms. Multiple disks help reduce that seek time by spreading disk accesses across many disk arms at once.
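If you want to see that sequential-versus-random gap on your own drive, here is a rough sketch (the file name is a placeholder; the file needs to be much larger than RAM, or the OS page cache will flatter the numbers):

Code:
import os, random, time

PATH = "bigfile.bin"                 # any multi-GB file, ideally bigger than RAM
BLOCK = 64 * 1024                    # 64 KB per read
READS = 2000

size = os.path.getsize(PATH)

def throughput(offsets):
    start = time.time()
    with open(PATH, "rb", buffering=0) as f:     # unbuffered so Python itself isn't caching
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return (READS * BLOCK / 2**20) / (time.time() - start)   # MB/s

sequential = [i * BLOCK for i in range(READS)]
scattered = [random.randrange(0, size - BLOCK) for _ in range(READS)]

print(f"sequential: {throughput(sequential):6.1f} MB/s")
print(f"random:     {throughput(scattered):6.1f} MB/s")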

Once you add something like OS overhead, RAID overhead (especially on software cards), system overhead, network protocol overhead, TCP/IP overhead, etc., 30-40 MB/s is more reasonable.

Also, networking is a two-way street. Your client machine also needs to be able to digest the data at that speed.

Having multiple NICs allows me to take the switch out of the equation. The three NICs can be connected directly to three of the client PCs for testing purposes. All the PCI-X cards have TOE and support up to 9k jumbo frames. The motherboard NIC can be used for the connection to the net.

You are adding a ton of complexity to something that is going to give you minimal, if any, improvement in performance. To give you some perspective, my virtual servers rarely saturate a single link (mostly during disk backups of multiple machines at once, i.e. very heavy sequential I/O), and to do that there is a rack of disks in a hardware SAN supporting it.

To answer your question (I missed it the first time through): you can see 80 MB/s from desktop drives; it depends on your access methods. When you benchmark from a server to a desktop using a sequential test, you will get a large percentage of the disk's performance. If you try running that benchmark from two clients, however, the rate will drop, because the server and disk will need to service random I/O, which involves arm swings and rotational latency. It will not be a 40 MB/s : 40 MB/s split. The real world will typically involve seek times from random I/O.
 

spikespiegal

Golden Member
Oct 10, 2005
My first suggestion is to avoid 'kit' Linux builds like FreeNAS if the mere concept of performance comes into the mix.

About a year ago I tested several of these 'kit' Linux installs on several different hardware platforms and found that, at most, they were lucky to hit 50% of the sustained SMB xfer rate of Win2k. They fare much better when using protocols like FTP, which tells you where the problem is.
 

spikespiegal

Golden Member
Oct 10, 2005
A gig link is about 120 megabytes a second, by the way.

On what planet? Using a mapped RAM drive to RAM drive across Cisco switches, I've only been able to hit about 80-90 MB/s.

Otherwise, you are correct in that the receiving machine having to write the data is usually the bottleneck. I've never seen RAID 5 with 15k drives ever come close to saturating a gig xfer rate on the write side. It takes a lot of drives with a lot of stripes and low controller overhead to write that much data and keep up. Plus, this only works with single large data files.

Still, try the RAM drive thing sometime at work for a hoot. Usually sets off alarms in the network room as the fans light up on the Catalyst switches :)
 

imagoon

Diamond Member
Feb 19, 2003
On what planet? Using a mapped RAM drive to RAM drive across Cisco switches, I've only been able to hit about 80-90 MB/s.

Otherwise, you are correct in that the receiving machine having to write the data is usually the bottleneck. I've never seen RAID 5 with 15k drives ever come close to saturating a gig xfer rate on the write side. It takes a lot of drives with a lot of stripes and low controller overhead to write that much data and keep up. Plus, this only works with single large data files.

Still, try the RAM drive thing sometime at work for a hoot. Usually sets off alarms in the network room as the fans light up on the Catalyst switches :)

Theoretical (i.e. you will never get there :) ).

1000 Mb/s ÷ 8 bits per byte = 125 MB/s

Even with a RAM disk you need to add overhead on top: processor time on both sides, network overhead (TCP/IP, etc.), and so on.

One of the reasons jumbo frames increase performance is that one 9000-MTU frame has roughly 1/6 the Ethernet frame overhead and roughly 1/6 the TCP/IP header overhead of standard frames. It makes the link "more efficient."
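To put rough numbers on that (my own overhead figures: 38 bytes of Ethernet framing per packet plus 40 bytes of TCP/IP headers, ignoring TCP options and VLAN tags):

Code:
LINE_RATE_MBPS = 125.0      # 1000 Mb/s / 8
ETH_OVERHEAD = 38           # preamble 8 + header 14 + FCS 4 + inter-frame gap 12
TCPIP_HEADERS = 40          # 20-byte IP header + 20-byte TCP header

for mtu in (1500, 9000):
    payload = mtu - TCPIP_HEADERS
    on_wire = mtu + ETH_OVERHEAD
    efficiency = payload / on_wire
    print(f"MTU {mtu}: {efficiency:.1%} efficient, ~{LINE_RATE_MBPS * efficiency:.0f} MB/s of payload")

# MTU 1500: 94.9% efficient, ~119 MB/s of payload
# MTU 9000: 99.1% efficient, ~124 MB/s of payload

So jumbo frames mostly buy you fewer per-packet headers and interrupts; the raw line-rate gain is only a few MB/s.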
 

n0cmonkey

Elite Member
Jun 10, 2001
My first suggestion is to avoid 'kit' Linux builds like FreeNAS if the mere concept of performance comes into the mix.

About a year ago I tested several of these 'kit' Linux installs on several different hardware platforms and found that, at most, they were lucky to hit 50% of the sustained SMB xfer rate of Win2k. They fare much better when using protocols like FTP, which tells you where the problem is.

FreeNAS is FreeBSD.