
Storage Question re:Fileserver

Kartajan

Golden Member
Currently running 2 WD30EFRX 3TB drives in a mirrored "Storage Space".
Running on a G6950 w/ 4GB DDR3-1333.

I am in a position to upgrade, and I want to speed up.

I have 2 Windows 7 desktops on Gig Ethernet, 1 Win 8.1 desktop on Wireless N, 2 laptops (Wireless N), and miscellaneous other small devices on Wireless N, and for media, 1 HDHomeRun Prime, 2 Xbox 360s and a Ceton (all hardwired) for MCX duties.

I am thinking that bumping to an i3-4130, 8GB DDR3-1600, and moving the "server" OS over to an Intel 730 SSD would get me closer to saturation of the hardwired boxes. I would spend about $700 to go that way.

I am thinking that any NAS solution that would actually be faster would be at least $500 more expensive, and anything cheaper would not show a speed increase. I would like to spend less to get faster, if possible.

Comments/ advice/ mild flames????
 
What kinds of speed are you seeing on the wired devices? For what types of data hauling?

Wifi-N is a lost cause, speedwise.

See this review for the kind of speed you see from a dedicated NAS box in that price range.

http://anandtech.com/show/8298/qnap-ts451-bay-trail-nas-performance-review/3

Without knowing what kind of performance you're getting, I can't say if you can expect big improvements from new hardware. And honestly, the rig you have is probably not CPU limited.

I'd be more interested in a new OS, or some kind of cached storage (putting your OS on an SSD is a waste, but using that SSD as a cache drive could get you some nice benefits), and so on.
 
The RAM upgrade might not be a bad idea though - Windows/SMB does do RAM caching of frequently accessed files. MOAR RAM = MOAR CACHE
 
As far as the data hauling, I did not conduct actual benchmarking, so no raw numbers. I am just referencing the perceptions of delay during file transfer, primarily when using my extenders on local files. (I am trying to enhance the WAF in order to get rid of the stupid cable box in the living room)

I am mostly thinking that the HW upgrade would provide a (noticeable?) speed upgrade while reducing my power consumption, maybe even quiet down the fan racket from the server side. The alternative in my budget of around $700 would be something along the lines of a Synology DS-412+, but I am thinking that would be slower than what I already run...

(I realize the wireless clients are a lost cause, even if I transitioned what I could to AC, I just listed them to show the overall load picture of what is pulling data)
 
I think you're looking at the wrong areas here. If you're simply doing network shares like Windows shares (CIFS/SMB), it's likely not going to be sucking processor or memory. Look at your system performance and see if something is wrong. Are you running Windows, Linux, FreeNAS, etc?

After that, I'd investigate what speeds you are getting. Do a copy of one very large file....pick one big enough that you can read the sustained MB/sec transfer rate while nothing else is going on.

Small file copies take longer than large ones because they have to start and stop the transfers between files...
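If you want an actual number instead of a feel, a rough Python sketch can time that single-file copy (the paths in the comment are made up; point it at a multi-GB file on the share, or copy from the share to a local disk):

```python
import os
import shutil
import time

def copy_throughput(src, dst):
    """Copy one large file and return sustained throughput in MB/s."""
    size = os.path.getsize(src)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return size / elapsed / (1024 * 1024)

# Hypothetical paths -- copy between the share and a local disk while
# nothing else is hitting the server:
# rate = copy_throughput(r"\\server\share\big.iso", r"C:\temp\big.iso")
# print(f"{rate:.1f} MB/s sustained")
```

Run it a couple of times in each direction; the second run of a read test can be inflated by the server's RAM cache, which is itself informative.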

Here are some suggestions of things to check or do:
1. Definitely check your network connections for negotiated speed/duplex. If a system isn't connected at 1Gbps/Full Duplex, you can see slowness....
If you're running Windows, here's a script to get the info:
http://www.experts-exchange.com/OS/...ms/Server/Windows_Server_2008/Q_28290125.html

2. Consider adding more disks to your RAID array. This may take offloading the data and copying it back...but a RAID 10 array with more disks will give you more spindles to distribute the load. If you're doing more reads than writes and want to maximize space, look at RAID6. I don't care for RAID5; I've been burned by it numerous times and would rather take RAID6's tiny (0.025%?) chance of data loss than get burned again. 😀

3. Consider replacing your storage array with SSDs. I am not 100% on board with this because of cost per GB and likelihood of drive failure, but hey...they're selling these in SANs now and charging a lot of money for them. The read/write speed will shatter that of a traditional hard drive...in a RAID10 with 4+ drives, you'd be screaming fast.

4. Consider replacing your server with a 4+ disk NAS device....I've got a 2-disk QNAP NAS running WD Reds at home and have been happy with it. Just be aware that most of the NAS devices have very little processor power and only 512MB or 1GB of memory...for data transfers, that's all that's required. The network shares require very little RAM to operate efficiently because they don't use Java or dot net. 😀
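On point 1, here's a tiny sketch of the kind of check I mean (pure Python, interface names are made up). Feed it the negotiated speed/duplex per interface; on a live box you could populate the dict from `psutil.net_if_stats()` or PowerShell's `Get-NetAdapter`, and it flags anything that didn't come up at gigabit/full:

```python
def flag_slow_links(stats, want_mbps=1000):
    """Return interfaces that did not negotiate the expected link.

    `stats` maps interface name -> (speed_mbps, duplex), where duplex
    is "full" or "half". Interfaces reporting speed 0 (down/unknown)
    are skipped.
    """
    bad = []
    for name, (speed, duplex) in stats.items():
        if speed and (speed < want_mbps or duplex != "full"):
            bad.append((name, speed, duplex))
    return bad

# A link stuck at 100/full (bad cable, old switch port) shows up here:
links = {"Ethernet": (100, "full"), "Ethernet 2": (1000, "full")}
print(flag_slow_links(links))  # [('Ethernet', 100, 'full')]
```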
 
I am mostly thinking that the HW upgrade would provide a (noticeable?) speed upgrade...

Doubtful. CPU really, really, isn't your bottleneck.

...while reducing my power consumption, maybe even quiet down the fan racket from the server side.
That's always a nice thing, but you do that with new fans and stuff. A G6950 isn't going to use appreciably more power at idle than a new i3 - certainly not enough to justify the expense.

The alternative in my budget of around $700 would be something along the lines of a Synology DS-412+, but I am thinking that would be slower than what I already run...
I would really need performance data from your current setup to be able to even have a suspicion.

Try running LAN Speed Test Lite (use at least a 5GB test file) and see what numbers you get.

http://www.totusoft.com/downloads.html
 
Just to put it out there, I am not completely sure how the DS412+ compares to the RS812+ (looks like the 412 is about 2/3 the 812's rated performance), but even the baby 812+ is a "beast" in that we run Veeam backups to it at around 160MB/s on 5400rpm drives, over MPIO iSCSI. If the 412 is around 2/3 of that, it should be fairly quick for home use.
 
Speed Test Lite results; also ran to local C: (SSD) & E: (HDD) for kicks...
Home Server: Packet Length 5120M/5120M, Time to Complete 87/75 s, 465.4/545.5 Mbps
Same to C:   Time to Complete 6.8/9.6 s, 5986/4260 Mbps
Same to E:   Time to Complete 25.22/38.1 s, 1623/1072 Mbps
 
Speed Test Lite results; also ran to local C: (SSD) & E: (HDD) for kicks...
Home Server: Packet Length 5120M/5120M, Time to Complete 87/75 s, 465.4/545.5 Mbps
Same to C:   Time to Complete 6.8/9.6 s, 5986/4260 Mbps
Same to E:   Time to Complete 25.22/38.1 s, 1623/1072 Mbps

Thank you, that's great info.

Now the bad news - that's actually not bad. It's not great - my NAS at home is a bit faster, even with its weaker CPU - but, it's not crazy out of whack.

You could be getting throttled by an inexpensive switch or something. FreeNAS guys tend to hate on Realtek NICs because they don't perform as well. But I don't think throwing CPU and RAM at your NAS box is going to improve those speeds. Different protocols, a different server OS, a better NIC, a new switch: stuff like that is probably going to be the fix.
 
Based on those numbers, which in my opinion indicate the responsiveness of my <<SERVER>>, would you think the faster/better solution is a DS412+, or upgrading my server (with newer OS + a slower/cooler-than-planned CPU + RAM) in conjunction with a different switch? Current switching is handled by a combo of an Asus GX-D1081 8-port Gig switch + a D-Link DIR-655.

BTW, just to make you feel extra cool, I ran this on my server at work (BE WARNED - MY IT BUDGET IS BIGGER AT HOME)

Packet Length:    5120M / 5120M
Time to Complete: 559.123 / 450.509 s
Bytes per second: 9.15M / 11.36M
Bits per second:  73.25M / 90.91M
Mbps:             73.25 / 90.91 <-- 100Mbps 24-port switch, and they won't buy a Gig until this breaks
 
I read this 3 more times, and my thoughts still don't feel right.

I need a recommendation on changes (do / don't / if so, to what) with respect to the switch & wireless router.

After that, do you think a minor upgrade to the server (focused on cooler operation, with a bump in RAM size) or dropping the server for a NAS seems smarter?

This sounds "mo better" between written word & crap in my brain housing group...
 
Based on those numbers, which in my opinion indicate the responsiveness of my <<SERVER>>, would you think that the faster/better solution is a DS412+, or upgrading my server?

Neither, really. You're running a bit over 50% capacity on your line right now. In the "real world" with consumer-level gear, that's pretty normal.

First off, run the benchmark locally on the file server too - if the local disk speed is the same as what your network bench is showing, then it's just Windows Software RAID being sluggish. You can probably alleviate that by adding two more disks and working as RAID-10.

But IMO, you need to start looking at the intermediate parts: the NIC, the switch, the cables, and so on. But even best case, I wouldn't expect more than a 20-40% speed bump over where you're at. (So then it's a question of, "well, is all this trouble worth it?")

BTW, just to make you feel extra cool, I ran this on my server at work (BE WARNED - MY IT BUDGET IS BIGGER AT HOME)

Packet Length:    5120M / 5120M
Time to Complete: 559.123 / 450.509 s
Bytes per second: 9.15M / 11.36M
Bits per second:  73.25M / 90.91M
Mbps:             73.25 / 90.91 <-- 100Mbps 24-port switch, and they won't buy a Gig until this breaks

Ouch. 😵

I ran the same benchmark against three file servers at work. One is a Windows VM, another is a hardware Windows fileserver, the third is a Linux-based NAS appliance. The VM and the NAS appliance are backed by the same SAN, the Windows Server is backed by an identical SAN in an adjacent building.

To the Windows VM, I got 257Mb write / 520Mb read.
To the Windows fileserver, I got 773Mb write / 726Mb read
To the NAS box, I got 802Mb write / 632Mb read

465/545 doesn't seem too shabby, tbh.
 
local run on server:
34.3/52.5 seconds, 1191/779.6Mbps

Although I would love to put more drives in the server, they won't fit unless I at least change the case; it's currently stuffed into an old eMachines case that I literally found.

(Every time that I look at cases, I never find what I want, so I just keep slogging along in this total P.O.S.)
 
I am thinking of pulling the Asus GX-D1081 switch for a TP-Link TL-SG1016D 16-port Gig switch to see if that gets me closer to max. I would love to look at comparisons of switches, but I do not see any real roundup/comparison-type reviews, just standalone "look at me" reviews....
 
If you have the switch handy, do the comparison. But you're right - the lower-end stuff doesn't publish CPU specs and so forth.

Most of the cheaper home switches should be good enough for Gb connectivity between two points, but it's worth trying if you have the ability to do so cheaply.

The guy doing Anandtech's NAS reviews is using a $3k switch.
 
I do not have one as of now, just thinking that the specs list a 32Gbps vice 16Gbps switching capacity, as well as a significantly higher Packet forwarding rate. This may (??) be the path to enhancing my network throughput a bit... and for a hair under $80 it does not seem like a dumb investment...
 
I do not have one as of now, just thinking that the specs list a 32Gbps vice 16Gbps switching capacity, as well as a significantly higher Packet forwarding rate. This may (??) be the path to enhancing my network throughput a bit... and for a hair under $80 it does not seem like a dumb investment...

When they list that stuff, they're usually just listing the collective bandwidth of all the ports.

8 port switch, 1Gb full duplex ports = 8 x 2Gb = 16Gb bandwidth.
16 port switch, 1Gb full duplex ports = 16 x 2Gb = 32Gb bandwidth

Packet forwarding "rate" is probably being calculated a similar way (maximum bandwidth divided by minimum packet size).

I'm a suspicious sort. It might help (different switches are different), but I'd be ready to return it if it doesn't.
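That arithmetic is easy to sanity-check. The sketch below reproduces the 16 and 32 Gbps "capacity" figures, plus the wire-speed packets-per-second number spec sheets typically quote (64-byte minimum frames plus 20 bytes of preamble and inter-frame gap on the wire):

```python
# Back-of-envelope check of gigabit switch spec-sheet numbers.

PORT_GBPS = 1          # line rate of each port
MIN_FRAME_BYTES = 64   # minimum Ethernet frame
OVERHEAD_BYTES = 20    # preamble (8) + inter-frame gap (12) on the wire

def switching_capacity_gbps(ports):
    """Ports x line rate x 2 (full duplex): the headline 'capacity'."""
    return ports * PORT_GBPS * 2

def forwarding_mpps(ports):
    """Wire-speed packets/sec with minimum-size frames across all ports."""
    bits_per_frame = (MIN_FRAME_BYTES + OVERHEAD_BYTES) * 8
    return ports * PORT_GBPS * 1e9 / bits_per_frame / 1e6

print(switching_capacity_gbps(8))     # 16  (an 8-port gigabit switch)
print(switching_capacity_gbps(16))    # 32  (a 16-port gigabit switch)
print(round(forwarding_mpps(16), 1))  # 23.8 Mpps at 64-byte frames
```

In other words, both numbers fall straight out of port count times line rate; neither one tells you anything about how the switch behaves between two specific ports, which is all a home file transfer uses.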
 
I will get to ordering what I think I need this weekend, and post results when I have everything running. Thanks for the input!
 
Changed routers, and got functionally identical throughput. 🙁

At least I now have a few open ports for temp connections....
 
1. If you need disk speed on the NAS, start with RAID 10, if you don't use it already. Maybe use 7200 RPM drives. You can record performance stats using Performance Monitor, or even just look at Resource Monitor, when it seems to be running slow. A mirror with 5400 RPM drives could easily get bogged down if several clients use it at the same time.

2. An SSD cache wouldn't hurt, but might be overkill. If you try that, don't get an expensive one, just the cheapest of Ultra Plus (good and modern, but available in 64GB), M500 (120GB minimum) or MX100 (128GB minimum). Swapping to Intel RAID with SRT would be a cheap and effective way to try it, if your mobo supports it.

3. If you have a few GBs of RAM, Windows or Linux, you'll be fine. RAM needs for NAS are dependent upon the workload, more than what's in the box. Going to 8GB wouldn't hurt, but I wouldn't expect much from it.

4. Low-end prebuilt NAS units are slow, and likely wouldn't help. They make setting up the NAS easy, have apps you can add, etc., but are noticeably slower than a PC. The ones that are faster cost more than building your own.

Try checking out performance metrics when it feels slow, to see what's going on. Without benchmarking a bit, or swapping in a new configuration and comparing, it would be hard to tell if it is disk (seeking) or network. If it looks OK on the server, with low utilization all around, then look at the clients, to see if they are being limited by networking. If you don't know where the main bottleneck is, upgrades could very well be wasted.
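As a crude illustration of that triage, assuming you've jotted down the disk-active percentage and NIC throughput from Resource Monitor while a transfer felt slow (the thresholds below are arbitrary rules of thumb, not gospel):

```python
def likely_bottleneck(disk_busy_pct, net_mbps, link_mbps=1000):
    """Rough triage from counters sampled during a slow transfer.

    disk_busy_pct: '% disk active time' from Resource Monitor.
    net_mbps: observed NIC throughput during the transfer.
    link_mbps: negotiated link speed (1000 for gigabit).
    """
    net_util = net_mbps / link_mbps * 100
    if disk_busy_pct > 90 and net_util < 70:
        return "disk (probably seeking)"
    if net_util > 85:
        return "network (link is saturated)"
    if disk_busy_pct < 50 and net_util < 50:
        return "elsewhere: check the client, switch, or cabling"
    return "unclear: sample for longer"

print(likely_bottleneck(97, 500))   # busy mirror, half-idle wire
print(likely_bottleneck(40, 941))   # wire-speed gigabit transfer
```

The point isn't the exact cutoffs; it's that you want both numbers from the same moment before spending money on either side.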

Realtek NIC issues are much more about FreeBSD's historically poor driver support than about the hardware itself. Intel NICs are a bit faster and use a bit less CPU, but not by nearly enough to make the difference between feeling slow and feeling snappy on Windows or Linux.
 