
WHS 2011, Perc5i, Raid 5 etc. Lots of questions

Mytime34

Platinum Member
Hey All,

So here is my issue.
I currently have an SS4200-E (E4500, 2gb) 1tb, 3x2tb, Esata 4tb TR4m, WHS V1 setup that is starting to fail, mobo is taking a crap.
I want to move to a new setup, but I'm trying to decide on my best route. Here are the possibilities:

1. SS4200-E (Spare, E4500, 2gb) Just throw the drives in the spare system and away I go
2. Foxconn mobo, X3210 Xeon, 4gb DDR2, 80gb SSD (OS Drive), 3 x 2tb attached to onboard sata ports, Esata 1x card with 4tb TR4m, WHS 2011
3. Asus mobo, X2 255, 6gb DDR3, 80gb SSD, 3x2tb onboard Sata 6 ports, Esata 1x card with TR4M, WHS 2011
4. Asus or Foxconn mobo, 80gb SSD, Perc5i w/ 3x2tb raid 5, 4x1tb Raid 5, Esata 1x card with TR4M with lots of spare drives
(I currently own all the parts listed above, just trying to find some good ideas on making the new system)

I am looking long term. I know the SS4200 setup is good and has done me well for at least 3 yrs; I just want to move forward and have better speeds and more options.

I know the SSD is not needed, but I have a spare and that is what I want to use it for. I may also change the SSD to a 2 x 500gb RAID 1 setup for the OS (still undecided).

If you could review and give me some options I would appreciate it.
Thanks
 
For the long term, I'd personally look for whatever is easiest to implement and fix.

Better speeds depend on your network more than anything else, I'd guess. While I don't use WHS, its basic file protection uses file mirroring, which is dependent on drive speed; but while the average drive is fast enough, a 100 Mb/s network connection (if that's what you're using) is far, far slower than any current drive.
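To put rough numbers on that (a quick sketch; the drive figure is an assumed ballpark, not measured):

```python
# Network link speeds are quoted in megabits/s, drive speeds in megabytes/s,
# so a quick unit conversion shows where the bottleneck sits.
def mbit_to_mbyte(mbit_per_s):
    """Convert a link speed in Mb/s to MB/s (8 bits per byte)."""
    return mbit_per_s / 8

fast_ethernet = mbit_to_mbyte(100)   # 12.5 MB/s
gigabit = mbit_to_mbyte(1000)        # 125.0 MB/s
hdd_sequential = 100                 # MB/s, an assumed ballpark for a 2011-era drive

print(fast_ethernet, gigabit)        # 12.5 125.0
# On 100 Mb/s the wire is ~8x slower than the drive; on gigabit the
# drive and the wire land in the same ballpark.
```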

as to the options, #1 sounds good enough.

#2: getting a Xeon seems like overkill, seeing as they are just better-tested CPUs and not that fancy for basic tasks. RAM is cheap enough, but spending money on DDR2 nowadays is best avoided. And if you get a Xeon, you might need ECC RAM, which is never cheap.

#3 does not sound like much of an increase unless you are running some programs on the WHS box; if the WHS is left on 24/7, finishing a task faster does not strike me as a worthwhile goal.

#4: unless you find Windows' file protection rubbish, I would avoid this one. The RAID will be more reliable and give better use of the drive space (an extra 2 TB with the RAID 5 setup versus WHS's file protection option), but it will be more power hungry overall: all drives of the array need to be active to access a file, whereas with the Windows dynamic pool it should (in theory, anyway) only need to power up one drive to access the needed file. Personally, if going RAID 5, it's better to replace the 1 TB drives with 2 TB ones and make a larger array; a 3-drive RAID 5 is nearly pointless for space gains over Windows file mirroring.
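The capacity comparison can be checked with quick arithmetic (sizes in TB; `raid5_usable` and `mirrored_usable` are just illustrative helpers):

```python
def raid5_usable(drives):
    """Usable space of one RAID 5 array: one drive's worth goes to parity."""
    assert len(set(drives)) == 1, "RAID 5 arrays use equal-size drives"
    return (len(drives) - 1) * drives[0]

def mirrored_usable(drives):
    """WHS-style file duplication stores two copies, so usable space is half."""
    return sum(drives) / 2

# Option #4's two arrays: 3 x 2 TB and 4 x 1 TB.
raid_total = raid5_usable([2, 2, 2]) + raid5_usable([1, 1, 1, 1])   # 4 + 3 = 7 TB
dup_total = mirrored_usable([2, 2, 2, 1, 1, 1, 1])                  # 10 / 2 = 5 TB
print(raid_total - dup_total)   # 2.0 TB in favour of the RAID 5 arrays
```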
 
I already own all the parts listed in 1-4.

Not worried about the power it will consume, care more about backups and file access.

Network is 24 and 16 port netgear gigabit switches

My issues:
1. The SS4200 has limited expansion capabilities, and even with the eSATA card I am pushing its limits. Still a good option, and the easiest to maintain, but also the slowest.
2. Foxconn mobo has 2 PCI-e slots for expansion and is the fastest due to the processor.
3. Asus mobo has the most SATA slots and PCI-E expandability, but slower processor.
4. This was just a Hail Mary toss. I have the parts and was just seeing if it was worth doing RAID 5.

I was just going through my discs and here is what I have to play with
80gb Intel SSD G2
64gb Crucial M225 SSD
30gb Vertex SSD
5 x 1tb Seagate 7200rpm Sata II drives
3 x 2tb WD Black Drives
1 x 1tb Hitachi Drive
2 x 1tb WD Green Drives
4 x 500gb WD Blue Sata 6 drives

2ea TR4M's
3in2 Hotswap sata cage
 
What is the goal, ie. what, if anything, are you trying to improve beyond just fixing the current setup?

Does it have to be Windows? If so, what applications are you using that rely on a Windows server specifically?
 
Goal is to get my WHS back up and running, but now on a new system.
1. Stream DVD/blurays
2. Backup 5 laptops, 2 desktops
3. Access 10yrs worth of files, pics, etc
4. Easy to use system

I have WHS V1 and WHS 2011 already so I thought I would use them since I got em.
I am open to other OS's, just want to make it easy to manage and upgrade if need be.
 
OK, what I am thinking is that you should look at FreeNAS. Why? ZFS. ZFS is a filesystem that in many ways takes what WHS is trying to do with disk pools and just does it better in every way.

For example, you could set up a ZFS disk pool (zpool) like so:

- 3x2TB RAIDZ (basically RAID5 but done with ZFS)
- 4x1TB RAIDZ
- 30GB SSD as ZFS Intent Log (ZIL)
- 80GB SSD as L2ARC (transparent SSD cache, somewhat like SRT)
- 64GB SSD root device
- As much RAM as you can stuff in there for ARC (transparent RAM cache, think of it as SRT with a RAMdisk)
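For rough sizing of that layout: a single-parity RAIDZ vdev gives roughly (n-1) drives' worth of usable space, and the log/cache SSDs add speed but no capacity. A back-of-the-envelope sketch (illustrative numbers, before filesystem overhead):

```python
def raidz1_usable(n_drives, drive_tb):
    """Approximate usable space of a single-parity RAIDZ vdev:
    one drive's worth of every stripe goes to parity."""
    return (n_drives - 1) * drive_tb

# The two data vdevs proposed above; the ZIL, L2ARC, and root SSDs
# contribute performance, not pool capacity.
pool_tb = raidz1_usable(3, 2) + raidz1_usable(4, 1)
print(pool_tb)   # 7 TB usable across the striped vdevs
```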

Add more RAIDZ vdevs as you get drives and SATA ports; just try to avoid mixing 5400 RPM drives into the pool. The great thing about ZFS is that you can add RAIDZ vdevs at will and the pool automatically expands to accommodate them (though note that you can't remove a RAIDZ vdev once it's been added).

ZFS is also awesome at taking advantage of SSDs. The ZFS Intent Log on a separate SSD basically makes it such that all writes go to the SSD first and are then written out sequentially to the disks, mitigating the huge random write penalty that HDDs have.

The ARC and L2ARC serve a similar purpose, except for reads. ZFS is going to cache as much data as possible in RAM and on the SSD so that you rarely have to read from the disks in the first place.
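As a toy illustration of that read path (a deliberately simplified LRU-style sketch, not ZFS's actual adaptive replacement policy; `TwoTierCache` is hypothetical):

```python
# Toy two-tier read cache in the spirit of ARC (RAM) + L2ARC (SSD):
# hits in RAM cost nothing, RAM evictions spill to the SSD tier, and
# only full misses ever touch the disks.
from collections import OrderedDict

class TwoTierCache:
    def __init__(self, ram_slots, ssd_slots, disk):
        self.ram = OrderedDict()   # tier 1: fastest, smallest
        self.ssd = OrderedDict()   # tier 2: catches RAM evictions
        self.ram_slots, self.ssd_slots = ram_slots, ssd_slots
        self.disk = disk           # backing store (a dict standing in for HDDs)
        self.disk_reads = 0

    def read(self, block):
        if block in self.ram:                  # RAM hit: no I/O at all
            self.ram.move_to_end(block)
            return self.ram[block]
        if block in self.ssd:                  # SSD hit: no disk I/O
            data = self.ssd.pop(block)
        else:                                  # miss: must touch the disks
            self.disk_reads += 1
            data = self.disk[block]
        self.ram[block] = data
        if len(self.ram) > self.ram_slots:     # coldest RAM block spills to SSD
            old, old_data = self.ram.popitem(last=False)
            self.ssd[old] = old_data
            if len(self.ssd) > self.ssd_slots:
                self.ssd.popitem(last=False)   # falls out of the cache entirely
        return data

cache = TwoTierCache(ram_slots=2, ssd_slots=2, disk={b: b * 10 for b in range(10)})
for b in [0, 1, 2, 0, 1]:   # 0 and 1 get evicted to "SSD", then re-read from it
    cache.read(b)
print(cache.disk_reads)      # 3: only the first read of each block hit the disks
```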
 
I may have to look at Freenas. Since I have 2 setups that are identical (SS4200) I may try freenas and WHS2011 and see which one I like better all around.

Thanks for the info and tomorrow I may start testing things out
Thanks
 
So here is the system I have decided to go with (for the moment)
Phenom II x4 830
Asus M4A87TDU3 mobo
8gb DDR3
Cheap video card (PCI too)
Intel 80gb SSD (OS)(I know it is overkill, but it was a spare and works)
4 x 1tb WD Blue Sata III drives (NO RAID, using Drive Bender)
1 x 1tb Hitachi sata II drive (connected to onboard sata III port)
2 x 1tb Seagate Sata II drive (connected to ASUS U3S6 card)

Crystaldiskmark scores:
C: drive - 200-220 MB/s read, 150-180 MB/s write
D: drive (all drives in a large DB pool) - 500-600 MB/s read, 500-550 MB/s write

This week I will be testing a new RAID card with 4 x 1tb WD Blue Sata III drives in
RAID 5, 6, 10, 50, and 60, plus DB, to see which is the best option and easiest to rebuild.


(I tried Freenas and it was a pain to setup and I like things sort of easy, sorta)
 
Glad you got it up and running. 🙂 You shoulda posted here before giving up on FreeNAS though, I would have helped you get a pretty bitchin' ZFS pool set up. PM me if you're still interested.
 
I have another system I can test with so maybe I will try freenas again. I was having a crappy day and nothing wanted to play nice. I will let you know as soon as I get it set up and loaded.

Also, for FreeNAS: how do the PC backups work, and how easy is it to restore them?
 
Sure, no problem. As for backups, with FreeNAS you are just providing disk space and are free to use any backup software that you like. Windows built-in backup works (Pro or higher for backup to network), as does Acronis, Areca, etc etc.
 
I wanted to post some testing that I did with my new WHS 2011 Server

Here are the specs:
Asus M4A87TDU3
Phenom II X4 830 (H50 cooler)
4 x 2gb ECC DDR3 Ram
80gb Intel G2 SSD (it was a spare and it is not part of the pool)
Highpoint RocketRaid 2720GSL with SAS to SATA cables
4 x 1tb WD Blue Sata III drives
4 x 500gb WD Blue Sata III drives
Windows Home Server 2011
Drive Bender

So I tried many different raid settings, combinations, onboard sata III and the raid card and here is what I have come up with. All testing done with Crystaldiskmark and ATTO

Onboard Testing:
4 x 500gb WD Blue Sata III drives on AMD Sata III onboard
3 x 1tb Sata II misc drives
Read 537.6 MB/s, Write 543.5 MB/s

All drives connected to the RocketRaid 2720GSL using Drive Bender
Single 1tb WD Blue Sata III Drive
Read 128.1MB/s, Write 126.9MB/s

JBOD 4 x 1tb WD Blue Sata III drives
Read 141 MB/s, Write 139.7 MB/s

Raid 0, 4 x 1tb WD Blue Sata III drives
Read 509.5 MB/s, Write 498.3 MB/s

Raid 10, 4 x 1tb WD Blue Sata III drives
Read 236.8 MB/s, Write 251.9 MB/s

Drive Bender with just the 4 x 1tb WD Blue Drives as standard drives
Read 558.1 MB/s, Write 604.4 MB/s

Drive Bender (No raid, just pooled drives)(This is while the drives are being balanced)
4 x 1tb WD Blue sata III drives
4 x 500gb WD Blue sata III drives
Read 526.0 MB/s, Write 552.1 MB/s


I am pretty excited about the testing. The RAID card by itself does well at RAID 0, but the others kinda fall flat. I would test RAID 5 and RAID 6, but I didn't feel like waiting 55 hrs for them to initialize.
I would include screenshots, but having difficulty getting them loaded.


Once I finally move the server into place I am going to test FreeNAS on the old SS4200-E setup.
 
I'd like to point out that your RAID results are entirely expected once you understand how the different forms of RAID work.

JBOD is just disk concatenation. In other words, if your first disk is X sectors long and your second disk is Y sectors long, then sectors 0 through X-1 map only to disk 1, and sectors X through X+Y-1 map only to disk 2. Thus, you will only ever touch one drive at a time when reading sequentially, so performance equal to a single drive is expected.

RAID 0 stripes the data across all disks. Using the same disks as before, sector 0 maps to disk 1, sector 1 maps to disk 2, sector 2 maps to disk 1, and so on. Thus, you would expect performance to equal the sum of all your disks. Note that there is NO REDUNDANCY with RAID0.

RAID 10 is a little more complicated in that it is a stripe of mirrors. With 4 drives, sector 0 maps to disks 1 and 2 (mirrored), sector 1 maps to disks 3 and 4 (mirrored), sector 2 maps back to disks 1 and 2, and so on. The extra drive in each mirror doesn't provide any additional sequential performance, just redundancy, so you would expect performance to be about half what you would see with a RAID 0 of the same drives.
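The sector-to-disk mappings described above can be sketched as follows (equal-size disks assumed; real controllers stripe in larger chunks, not single sectors):

```python
def jbod_map(sector, disk_size):
    """Concatenation: fill disk 0 completely, then disk 1, and so on.
    Returns (disk, offset) -- only one disk is ever touched per read."""
    return divmod(sector, disk_size)

def raid0_map(sector, n_disks):
    """Striping: consecutive sectors rotate across all disks,
    so sequential reads hit every disk in parallel."""
    return sector % n_disks, sector // n_disks

def raid10_map(sector, n_disks):
    """Stripe of mirrors: RAID 0 across n/2 mirror pairs; each pair
    holds two copies, giving redundancy but not extra stripe width."""
    pairs = n_disks // 2
    pair = sector % pairs
    return (2 * pair, 2 * pair + 1), sector // pairs   # both disks in the pair

print(jbod_map(5, disk_size=4))    # (1, 1): sector 5 lives only on disk 1
print(raid0_map(5, n_disks=4))     # (1, 1): stripes rotate across all 4 disks
print(raid10_map(5, n_disks=4))    # ((2, 3), 2): mirror pair holding two copies
```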

Drive Bender is an object-level (file) disk pool. It basically works as an abstraction layer on top of NTFS that redirects reads and writes to different physical disks. Since it works on the file level, performance to any single file won't be improved, but aggregate performance will go up similar to what RAID 0 can give you. It doesn't offer any redundancy by default, but you can elect to have it replicate certain directories across many disks in the pool.
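As a toy illustration of file-level pooling (not Drive Bender's actual placement algorithm; the most-free-space balancing rule below is a hypothetical one chosen for simplicity):

```python
# Whole files are placed on individual disks, so one file never spans
# disks, but different files can be served by different disks in parallel.
def place_file(pool, name, size):
    """Put the file on the disk with the most free space (one balancing rule)."""
    disk = max(pool, key=lambda d: pool[d]["free"])
    pool[disk]["files"][name] = size
    pool[disk]["free"] -= size
    return disk

pool = {
    "disk1": {"free": 1000, "files": {}},
    "disk2": {"free": 1000, "files": {}},
}
print(place_file(pool, "movie.mkv", 700))    # disk1 (ties go to the first disk)
print(place_file(pool, "photos.zip", 200))   # disk2, which now has more free space
```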

That's more than I intended to write, but hopefully this sheds some light upon why your performance numbers are expected.
 