File Server Plans - Suggestions?

Page 2 - AnandTech Forums

stncttr908

Senior member
Nov 17, 2002
243
0
76
I'm probably going to pick up a copy of Acronis True Image on the cheap and just schedule it to run nightly. It has some really nice features.
 

robmurphy

Senior member
Feb 16, 2007
376
0
0
Hi Madwand.

Could you tell me the reason for using FTP to test the network speed?

I've just upgraded my machines to Gigabit Ethernet and I'm only getting 15-20% bandwidth utilization when using copy/paste or the copy command in a cmd window. The NICs are Intel Pro GTs and the switch is a Netgear GS608. I've tried copies with one NIC connected directly to another to cut out the switch, but still get the same results.

Thanks in advance

Rob.
 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
Originally posted by: robmurphy
Hi Madwand.

Could you tell me the reason for using FTP to test the network speed?

I've just upgraded my machines to Gigabit Ethernet and I'm only getting 15-20% bandwidth utilization when using copy/paste or the copy command in a cmd window. The NICs are Intel Pro GTs and the switch is a Netgear GS608. I've tried copies with one NIC connected directly to another to cut out the switch, but still get the same results.

Thanks in advance

Rob.

SMB has higher overhead than FTP, so you will see somewhat faster speeds with FTP versus a copy. Also, a copy/paste in the GUI versus the CLI can make a difference too.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: robmurphy
Could you tell me the reason for using FTP to test the network speed?

FTP generally gets the best results, because it's a "leaner" protocol than normal Windows transfers (SMB), among other reasons. I chose FTP just for that reason -- to demonstrate better results in this case while still satisfying the basic requirement of demonstrating network file copy performance.

FTP is useful for such exercises, and when configured properly, will better indicate what performance your underlying file system and network are capable of, leaving out some of the complexities and inefficiencies of SMB. (OTOH, it's also possible to mess up FTP implementation/configuration and in some cases get less performance than with SMB; but you'll probably know when that's happening.)

Be sure to transfer large files; small files can (a) be too slow because of relative overhead, or (b) be too fast when they fit in the cache. "Real" results should use file sizes significantly larger than available RAM, but this can be painful when performance is slow, so smaller files are better during investigation.

Originally posted by: robmurphy
I've just upgraded my machines to Gigabit Ethernet and I'm only getting 15-20% bandwidth utilization when using copy/paste or the copy command in a cmd window. The NICs are Intel Pro GTs and the switch is a Netgear GS608. I've tried copies with one NIC connected directly to another to cut out the switch, but still get the same results.

Single drive performance will be the normal bottleneck if the networking is working well. Network file transfer performance will be somewhat less than both the underlying network performance and the hard drive performance, in a complex manner.

30 MB/s is a reasonable rough performance target. You're not far from that. Measure actual performance by dividing file size by time, not just observing Task Manager, etc. Actual performance varies a lot according to circumstances. You don't give much in the way of detail regarding your testing method (e.g. file sizes), other hardware (drives, CPU, RAM), or OS. All of these matter. In some cases even virus scanners have been seen to impact GbE file transfer performance.
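For concreteness, the file-size/time arithmetic can be sketched in a few lines (the 700 MB / 45 s figures below are made-up illustrations, not measurements from this thread):

```python
# Back-of-the-envelope throughput check: divide bytes moved by elapsed time,
# then compare against the theoretical gigabit ceiling (1 Gb/s = 125 MB/s raw).
def throughput_mb_s(file_bytes, seconds):
    return file_bytes / seconds / 1_000_000  # MB/s (decimal megabytes)

def link_utilization(file_bytes, seconds, link_bits_per_s=1_000_000_000):
    return (file_bytes * 8 / seconds) / link_bits_per_s

# Hypothetical example: a 700 MB ISO copied in 45 seconds
size = 700 * 1_000_000
elapsed = 45.0
print(f"{throughput_mb_s(size, elapsed):.1f} MB/s")     # ~15.6 MB/s
print(f"{link_utilization(size, elapsed):.0%} of GbE")  # ~12%
```

Numbers in that range match the 15-20% link occupancy being reported above.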

Post or PM all the details, and maybe we can make some improvements, or just understand the expected performance better in your case. Another thread would be more appropriate.
 

robmurphy

Senior member
Feb 16, 2007
376
0
0
OK the setup at present is:

Netgear Gs608 switch, with support for jumbo frames.

Intel Pro GT gigabit ethernet cards in a PCI slot. No PCI X or PCI E available.

2 Machines running XP Pro, 1 Machine running XP Home.

3 x 2 Meter Cat5E cables.

I've enabled 9000-byte jumbo frames and TCP checksum offload on the Intel NICs.

The gigabit network is separate. The onboard 10/100 Ethernet ports are connected to a Netgear router/switch on a 192.168.1.0 network. The 3 PCs have static addresses for the gigabit network, 10.0.0.1 to 10.0.0.3, and no default gateway. The drives are mapped using the IP addresses rather than machine names.

All the PCs have the latest driver from Intel's website. The utilities bundled with the network card have been used to test the cables, and they show the cables are OK. I have tried connecting one PC directly to another, but this made no difference to the transfer speed.

The transfer rate I'm getting from a SATA drive on one PC across the gigabit network to a SATA drive on another machine, with no other copies running, is approximately 14-16 MB/s.

I ran a Wireshark trace to have a look at what was going on, and even using FTP the transfer is over TCP. The window size is just under 64K.

I have been searching the web about this and came across various papers/articles concerning the TCP window size for XP. Most of the posts also apply to 95, 98, NT, 2000, etc., and I think most of them do not apply to XP with SP2. In the end I did try making some changes in the registry on all of the machines. These were 3 new values in:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

The values are:

Tcp1323Opts = 3
TcpWindowSize = 1057280
GlobalMaxTcpWindowSize = 1057280


I did some more traces and the TCP window is still just under 64K. Am I barking up the wrong tree with the TCP window size?

Could you recommend some software that can be downloaded from the net to test connection speed?

I have also checked Microsoft's website for info on this, but searching for XP-specific information comes up blank.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
You need to reboot for the registry settings to take effect.

You can use iperf for network performance testing. E.g.

server: iperf -s
client: iperf -c server -l 64k -t 12 -i 3 -r
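If iperf isn't handy, a minimal memory-to-memory test in the same spirit can be sketched in Python. This is a loopback-only illustration: it exercises the TCP stack and nothing else, so it deliberately leaves the drives out of the picture, the way iperf does:

```python
# Minimal iperf-style throughput test: a sink thread drains a socket while
# the main thread blasts bytes at it, then we divide bytes by elapsed time.
import socket
import threading
import time

def run_sink(srv_sock):
    conn, _ = srv_sock.accept()
    while conn.recv(65536):  # drain until the sender closes the connection
        pass
    conn.close()

def measure(total_bytes=50_000_000, chunk=65536):
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # ephemeral port on loopback
    srv.listen(1)
    sink = threading.Thread(target=run_sink, args=(srv,))
    sink.start()
    cli = socket.create_connection(srv.getsockname())
    payload = b"\0" * chunk
    start = time.perf_counter()
    sent = 0
    while sent < total_bytes:
        cli.sendall(payload)
        sent += chunk
    cli.close()
    sink.join()
    srv.close()
    return sent / (time.perf_counter() - start) / 1_000_000  # MB/s

if __name__ == "__main__":
    print(f"{measure():.0f} MB/s over loopback")
```

Run it on each machine first; if loopback is fast but machine-to-machine is slow, the problem is below the application layer.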

Surprised that ftp was still slow. What performance did you get? Which implementation did you try? I've found decent performance with the FileZilla server and standard command-line clients.

What file sizes did you use?

For a sample pair of computers, what's the CPU, RAM, motherboard and HD configuration?
Are the drives crowded and test files fragmented perhaps?
Did you observe high CPU utilization during the tests?
 

robmurphy

Senior member
Feb 16, 2007
376
0
0
I'll retry the tests again, and note what you have asked. The FTP server used was the one that is part of IIS on XP Pro. The client was the standard one that comes with XP.

 

kevnich2

Platinum Member
Apr 10, 2004
2,465
8
76
I'm just going to add my $.02 here. If this is strictly home use, I don't see a reason at all to add RAID (unless it's RAID 0 to combine several hard drives into one large volume, but I don't consider that RAID as it's not redundant). For home use, you can manage having it down for a little while if a drive crashes.

What you NEED is a good backup solution. Take an equally sized USB drive and make nightly or weekly backups to it, so if a drive crashes your data is still on the USB drive. I have yet to see a home user that really needs RAID. For a business, it's a must.

My file server at home consists of two 500 GB SATA drives in a RAID 0 configuration, which is then backed up to another machine on the network once a week. This works great for me, and I don't have to worry about my file server getting completely trashed. As for hardware, go with an old cheap computer, and if you know Linux, put that on there with Samba and back it up over the network. IMO, RAID isn't a backup AT ALL; it's just for keeping uptime as high as possible.
 

robmurphy

Senior member
Feb 16, 2007
376
0
0
I'm not that bothered about RAID. I might use RAID 0 for speed. One of the reasons I went over to gigabit was to make it quicker when transferring large files or directories, i.e. > 4 GB. Over 100 Mb this takes time.

I use a torrent client on an old machine. The machine downloads to a shared area. Once downloaded, the files are moved to the machine they will reside on. You could change the download path every time, but I prefer to have one download area, and then I don't worry about it being trashed, because anything I wanted has already been copied to another disk on another machine. If the machine with the torrent client gets trashed, I have a Ghost image of it ready to load.

The gigabit network would also help with backups and imaging machines. Again, over a 100 Mb link this takes some time. I've got some improvement in speed with the new network, but not as much as I would have hoped. I've not reached the point where a single PATA 80 GB drive is the limiting factor (i.e. where transfers to and from it are the bottleneck).

I have raised a support call with Intel about this, and will be responding to them next week as I'm away from home this weekend. I'll post the other details requested here next week as well.

All the best

Rob Murphy.
 

robmurphy

Senior member
Feb 16, 2007
376
0
0
It may be that the speed I'm getting over the gigabit Ethernet is down to the file size used. One reason for adding storage to the existing machines was that most of the NAS boxes with gigabit connections dropped to only about 1 to 1.5 times better than 100 Mb Ethernet when copying large files. The files I usually copy are DVD ISO images. Last night I did copy a directory set with some mixed file sizes and it completed quicker. The link showed 17-18% occupancy, which is the best I've seen. The DIY NAS on Tom's Hardware showed the same trends as the files got to 512 MB or bigger.

Could it be I'm running into some limit? The figures for the gigabit NAS and the DIY one all seem to show a large reduction in performance once the file size gets to 512 MB or more.

This prompts a question for Madwand. What was the file mix in the FTP test you did: was it one file, or a set of smaller files?

Next week I'll try using WinRAR to compress a large directory structure, split the archive into 15 MB files, and then try copying those files across the network.
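The split-into-parts plan can be sketched in a few lines of Python as a rough stand-in for a WinRAR volume split (the `.partNNN` naming is made up for illustration):

```python
# Split a file into numbered fixed-size parts, similar to a WinRAR volume
# split, so transfer tests can be run on many medium-sized files.
def split_file(path, part_bytes=15_000_000):
    parts = []
    with open(path, "rb") as src:
        idx = 0
        while chunk := src.read(part_bytes):
            part_name = f"{path}.part{idx:03d}"  # hypothetical naming scheme
            with open(part_name, "wb") as dst:
                dst.write(chunk)
            parts.append(part_name)
            idx += 1
    return parts
```

Note the caveat Madwand raises below: many smaller files usually transfer slower overall, not faster, once per-file overhead dominates.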

Rob Murphy

 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
This gig NAS, what kind is it? My guess is still something software-side or the hard drive limiting things.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: robmurphy
The files I usually copy are DVD ISO images. Last night I did copy a directory set with some mixed file sizes and it completed quicker. The link showed 17-18% occupancy, which is the best I've seen. The DIY NAS on Tom's Hardware showed the same trends as the files got to 512 MB or bigger.

Could it be I'm running into some limit? The figures for the gigabit NAS and the DIY one all seem to show a large reduction in performance once the file size gets to 512 MB or more.

This prompts a question for Madwand. What was the file mix in the FTP test you did, was it one file, or a set of smaller files?

I'll write more about Tom's hardware's NAS tests in a separate post.

While you should go ahead and test your small file hypothesis in case there is something to that, the general answer is that performance doesn't really improve when you use smaller files -- it can just seem that way because smaller files fit in cache better. Usually, when you're dealing with a significant volume and number of small files, the performance goes down quite a lot, because of the relative overhead for each file, starting and stopping and checking security, etc.

I used a single 10 GB file for those tests -- I like to use enough data to swamp any possible RAM cache effects, because while cache performance is interesting and useful at times, fresh file transfer is most important and a better reflection of overall performance -- cache performance leaves out actual drive access, which is pretty important here.
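The cache effect described above is easy to demonstrate: time the same read twice and the second, cached read usually looks unrealistically fast. The 64 MB size here is deliberately small for illustration; a real test should exceed RAM, as noted above:

```python
# Demonstrates why timed copies must out-size the cache: a repeat read of
# the same file is typically served from RAM and inflates the MB/s figure.
import os
import tempfile
import time

def timed_read_mb_s(path):
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1 << 20):  # read in 1 MB chunks
            pass
    elapsed = time.perf_counter() - start
    return os.path.getsize(path) / elapsed / 1_000_000

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(64) * (1 << 20))  # 64 MB of data
path = f.name
first = timed_read_mb_s(path)   # may already be cached from the write
second = timed_read_mb_s(path)  # almost certainly served from cache
print(f"first read {first:.0f} MB/s, second read {second:.0f} MB/s")
os.remove(path)
```

Any benchmark whose working set fits in RAM is measuring the cache, not the drives or the network.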

Here's a tweak to try on at least the receiving machine:

My Computer -> Right-click Properties -> Advanced Tab -> Performance section Settings button -> Advanced Tab -> Memory usage section -> Adjust for best performance of: "System cache".

A re-boot will be required if you change this setting.
 

stncttr908

Senior member
Nov 17, 2002
243
0
76
Me again. I received a raise at my summer internship so I have the extra cash to go ahead and do this I think. I've been running a LAMP server under Ubuntu Feisty Server on my old hardware (XP 2500+, NF7-S 2.0) and an old 30GB Maxtor I had sitting about, so I've really gotten a good foothold in LAMP, Samba, and advanced Linux features in general.

I'm only a few weeks away from moving so my housemates and I will need a central storage location with high capacity (1TB+). It would be nice to have some redundancy along with nightly backups of important files to be safe. These backups would be conducted with an existing external USB drive.

The box would function as a:

- Web server for my personal site as well as our web design company (nightly backups)
- Web design project files (nightly backups)
- Torrent/Usenet daemon
- Personal storage (some backup?)
- File server (streaming media to PCs, HTPC, laptops, etc.)

I've been thinking about running either an ICH9R board (cheap) or a 3ware 9xxx controller (getting expensive).

- The G33 offerings from Foxconn (G33M-S) and Gigabyte (GA-G33M-DS2R) look pretty damn good
- The 3ware controllers offer a hardware-based (right?) solution with onboard RAM buffers

There is a possibility that this server would run Windows Home Server (files) and Ubuntu Server (LAMP) in VMware if WHS turns out well. I've tried the beta and it seems pretty powerful and easy to use.

I was thinking 3x 500 GB SATA II RAID-edition drives from either WD or Seagate in RAID 5.

Sorry for another long-winded post. Thanks for reading, and any help is greatly appreciated!
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Why are you considering WHS? If you're comfortable with Linux, and want to run Linux stuff, you should probably just go ahead and run a native Linux setup.

WHS has some negatives off the bat IMO:

(1) Cost
(2) You have to run the Linux stuff virtualized, and on an OEM/marginalized OS, there could be additional problems with compatibility
(3) It's RAID-unfriendly: (a) it provides an alternative to RAID, which is one of its biggest features, and (b) you may have difficulty with RAID support on this platform. (You can also flip this around as a positive if you forget about RAID, but WHS data redundancy is still not as storage-efficient as RAID 5 -- it's more like RAID 1.)
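The storage-efficiency point is simple arithmetic: RAID 5 sacrifices one drive's worth of space to parity, while RAID 1-style duplication (what WHS-style folder duplication resembles) halves everything. Drive counts and sizes below are illustrative:

```python
# Usable capacity comparison: RAID 5 keeps (n-1)/n of raw space, while
# full duplication (mirroring, like RAID 1) keeps only half of it.
def raid5_usable_gb(n_drives, drive_gb):
    return (n_drives - 1) * drive_gb  # one drive's worth lost to parity

def mirrored_usable_gb(n_drives, drive_gb):
    return n_drives * drive_gb / 2    # every byte stored twice

print(raid5_usable_gb(3, 500))     # 1000 GB usable from 3x500 GB
print(mirrored_usable_gb(3, 500))  # 750.0 GB usable with duplication
```

The gap widens as you add drives: with four 500 GB drives, RAID 5 yields 1500 GB usable versus 1000 GB duplicated.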

I haven't used it in a while, but the general opinion is that Linux has a decent RAID implementation, which is more flexible and reliable than on-board RAID.

Get a separate drive for the OS, a board with Linux support and enough drive ports of the types you want, decent on-board gigabit, either on-board video or cheap add-on video, and build the system around it. It's not too hard, and the hardware differences are not very important assuming you're above some basic performance level and have good driver support.
 

robmurphy

Senior member
Feb 16, 2007
376
0
0
Have a look at the Samsung T166 500GB. These drives are cheaper here (UK) than the WD and Seagate. The drive supports NCQ, so it would work well in a RAID array. The drive was reviewed on AnandTech recently; see:

http://www.anandtech.com/storage/showdoc.aspx?i=3031

The review summed the drive up as "cool, quiet, and quick". If you are going to have 3 or more drives in the RAID array, then power and cooling start to matter, so it makes sense to pick drives that run cool and use little power.

Rob.
 

stncttr908

Senior member
Nov 17, 2002
243
0
76
Thanks for the replies.

@Madwand1, point taken. You're right about WHS. Why limit my ability to customize everything to the fullest when I've already gotten a firm hold on Linux?

@robmurphy, thanks for the link. I can attest to the acoustics of the Seagate 7200.10 series. While they're plenty fast, they aren't the quietest drives around. They do, however, have a two-year warranty advantage over the Samsung drives. The Samsungs are a bit slower, but that difference shouldn't be noticeable since gigabit throughput is much lower than my max read/write speeds will be anyway. I'll consider them.
 

stncttr908

Senior member
Nov 17, 2002
243
0
76
Well I'm setting the wheels in motion here. I placed a bid on 4x512MB ECC PC3200 and am in the process of browsing for a Socket 939 processor.

What do you guys think of the Antec NSK6500? It comes with a nice 430W Antec PSU and appears to have pretty decent airflow, not to mention vibration dampening on the 3.5" bays.
 

robmurphy

Senior member
Feb 16, 2007
376
0
0
If you are getting an S939 CPU, it would make sense to get one that supports Cool'n'Quiet. I have an S939 Sempron 3400+, and while it's quick enough, it uses more power at idle than an Athlon X2 4600. I think most of the Athlons and the X2 Athlons support it. The Opterons may support it, but as far as I know the S939 Semprons don't. Cool'n'Quiet really does make a big difference power- and heat-wise.

Rob.