How to monitor the status of remote RAID-5 disks?

Fjodor2001

Diamond Member
Feb 6, 2010
Hi,

I'm wondering if there is any software solution for Windows 7 that makes it possible to monitor the status of a remote RAID-5 system. I.e., you have one server computer with a RAID-5 disk setup, and you want to get notifications on another (desktop) computer when the health status of the RAID-5 setup changes.

Are there any good solutions for that?
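
To make it concrete, here is a rough sketch of the kind of thing I mean - a script on the server that polls some controller status tool and emails me when the output changes. The raid-cli command, mail host, and addresses are just placeholders, not a real tool:

import smtplib
import subprocess
import time
from email.message import EmailMessage

STATUS_CMD = ["raid-cli", "status"]   # placeholder: your controller's CLI tool
POLL_SECONDS = 300                    # check every 5 minutes

def read_status():
    # Capture whatever the controller tool prints about array health.
    return subprocess.run(STATUS_CMD, capture_output=True, text=True).stdout

def send_alert(body):
    # Mail host and addresses are placeholders for your own setup.
    msg = EmailMessage()
    msg["Subject"] = "RAID status changed on server"
    msg["From"] = "server@example.local"
    msg["To"] = "me@example.local"
    msg.set_content(body)
    with smtplib.SMTP("mail.example.local") as smtp:
        smtp.send_message(msg)

last = read_status()
while True:
    time.sleep(POLL_SECONDS)
    current = read_status()
    if current != last:      # any change in reported health triggers a mail
        send_alert(current)
        last = current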
 

taltamir

Lifer
Mar 21, 2004
Yes, it is possible, but nobody can tell you how until you provide more system information.
What is your RAID controller? OS? Etc.
 

Fjodor2001

Diamond Member
Feb 6, 2010
I intend to build a new server computer, but the exact hardware is not 100% decided yet. It will be running Windows 7 though.

Most likely I'll get a socket 1155 motherboard where there is an on-board RAID-5 controller. Any suggestions on what hardware & software to get?

BTW: I thought there were hardware-independent software solutions for this as well. How come that is not the case?
 

Emulex

Diamond Member
Jan 28, 2001
Depends on how the controller presents errors. A lot of servers have apps that can throw SNMP traps and email you when an error condition is about to happen or is happening. But you know on-motherboard RAID-5 is junk; better to find a dedicated card. Having a flash-backed write cache helps immensely with RAID-5, since you are doing extra reads/writes.

Low-cost RAID is usually not a good idea, esp. RAID-5.
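
If the controller's agent doesn't do traps for you, rolling your own is not hard either. A sketch of firing a generic SNMP trap at a monitoring station, assuming the pysnmp package and a placeholder destination host (the OID here is just the standard coldStart notification, standing in for whatever your RAID tool actually defines):

# Sketch only: fire a generic SNMP trap at a monitoring host.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    NotificationType, ObjectIdentity, sendNotification,
)

errorIndication, _, _, _ = next(
    sendNotification(
        SnmpEngine(),
        CommunityData("public", mpModel=0),          # SNMPv1 community
        UdpTransportTarget(("desktop.local", 162)),  # standard trap port; placeholder host
        ContextData(),
        "trap",
        # coldStart used as a stand-in OID; a real RAID agent would send
        # its vendor-specific notification instead.
        NotificationType(ObjectIdentity("1.3.6.1.6.3.1.1.5.1")),
    )
)
if errorIndication:
    print(errorIndication)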
 

Fjodor2001

Diamond Member
Feb 6, 2010
A lot of servers have apps that can throw SNMP traps and email you when an error condition is about to happen or is happening.

Sounds nice. Any such apps you can recommend for Windows 7?

Also, is there any software you can run on your remote desktop computer (e.g. in the system tray) that can report similar status changes? I assume there has to be some software executing on the server as well then (to detect the status changes and report them to the desktop computer)?
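
To illustrate what I mean by that last part, something like this on the desktop side - a toy listener that just prints whatever status line the server pushes to it (a real tray app would pop a balloon notification instead; the port number is arbitrary):

# Toy desktop-side listener: waits for one-line status reports pushed
# by a matching sender running on the server.
import socket

LISTEN_PORT = 9500  # arbitrary; must match whatever the server sends to

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind(("", LISTEN_PORT))
    srv.listen(1)
    while True:
        conn, addr = srv.accept()
        with conn:
            report = conn.recv(4096).decode(errors="replace")
            print(f"RAID status from {addr[0]}: {report}")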

But you know on-motherboard RAID-5 is junk; better to find a dedicated card. Having a flash-backed write cache helps immensely with RAID-5, since you are doing extra reads/writes.

Low-cost RAID is usually not a good idea, esp. RAID-5.

In what way are on-motherboard RAID-5 controllers junk? Do they for example have worse performance or reliability? Is that true for all on-motherboard RAID-5 controllers?

Also, do you know any good reasonably priced RAID-5 controller card, if I should decide to not use the one on the motherboard?
 

Emulex

Diamond Member
Jan 28, 2001
All onboard RAID-5, unless we're talking about an HP DL360/DL380 motherboard ;) (P410i built onto the mobo), is SOFTWARE RAID.

$300-400 is a good price to pay for a good RAID card with write-back cache, used with a fanout cable.

It will come with software that runs on the OS to send traps.

Honestly though, I think you are going down the wrong path.

Server:
1. Continuous power - a UPS, so if power is lost there is always time for it to shut down on its own
2. Server-grade hardware - a server is not a PC; Xeon CPUs and ECC RAM are all necessary
3. RAID - hardware RAID is a must, with a flash-backed write cache (battery-backed if older)
4. A temperature-controlled environment - air conditioning and heating to hold the temps steady and prevent overtemp thermal shutdown
5. Backup - an on-site and off-site backup strategy for disaster recovery

I'd add virtualization in there to make backup easier, but that's your choice. What I think you want to do is something else?

Check out the DL320s from HP - it has a super warranty and can do RAID-10, IIRC. ESXi compliant. Bulletproof. I have heard you can get drive cages on eBay, and RE4 drives fill them nicely, but to my dismay, coming from all SAS (15K/10K) to SATA (RE4 7200), the performance is painfully bad.
 

Fjodor2001

Diamond Member
Feb 6, 2010
Well, I think that's a bit overkill for me. I intend to build a Windows 7 based home server, holding media and backups of desktop computers, running video encoding jobs, serving up video streams, etc. It will be attached to a gigabit network, so I'd like it to be able to transfer ~100 MByte/s from the drives. Also, I'd like it to use 3-4 × 2TB disks in a RAID-5 setup.

Do you still think I have to pay $300-400 for a RAID-5 controller for such a server setup?

Oh, and finally, are the RAID-5 controller cards also able to encrypt the data on the disks? If so, will that cause any performance penalty?
 

Nothinman

Elite Member
Sep 14, 2001
NTFS supports encryption regardless of the storage used. If you want something more portable there are other options like TrueCrypt.
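
One concrete form of that, in case it helps - EFS on NTFS can be turned on per directory with the built-in cipher.exe. The path here is just an example:

# Enable NTFS/EFS encryption on a directory with the built-in cipher tool;
# new files created in it are then encrypted transparently.
import subprocess

subprocess.run(["cipher", "/E", r"D:\media\private"], check=True)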

I'm not a fan of Windows software RAID and even less of those onboard controllers, but if you're tied to Windows then an Intel ICH onboard software controller is probably the best option.
 

Mark R

Diamond Member
Oct 9, 1999
In what way are on-motherboard RAID-5 controllers junk? Do they for example have worse performance or reliability? Is that true for all on-motherboard RAID-5 controllers?

Most consumer-grade on-board RAID-5 is of very low quality.

Typical problems are:
1. Very poor performance. 10-15 MB/s transfer rates (that's not a typo) for writes are frequently reported in forums such as these.

2. Blocking of SMART access, meaning that conventional monitoring software is unable to read drive health status. Monitoring has to be done with the proprietary RAID tool, which may not support things such as remote access, logging or alarms. (These are server-grade functions, and garbage-grade on-board RAID doesn't usually come with server-grade functions.)

3. No way to perform background health checks. It is essential that drives are periodically checked to ensure that all sectors are readable, including the parity sectors. (If a parity sector goes bad, you'll never know, because that sector is never accessed during normal operation. Then, when a hard drive actually goes down, that sector becomes needed - you've effectively got 2 unavailable drives and it's game over for the data on the array.) As many mobo controllers hide the actual drives in the array, there is no way to perform this type of scan even if you want to (or the scan may be impractical - e.g. it must be manually triggered from the BIOS menu). A sketch of what such a check looks like on a proper software RAID follows this list.

4. No protection against parity corruption on unexpected mains power failure. If power to a RAID-5 array goes out during a write, the parity may not be written. If that parity later becomes needed, then the data it corresponds to will be corrupted without any warning or trace. Some files will just be trashed, with no easy way to get a list of which ones have been damaged. Better RAID controllers have a battery- or flash-backed cache to protect against this. Good software RAID installations will force a full parity scan at the next boot if an unclean shutdown occurs. On mobo RAID there is often no way to scan the parity, even if you wanted to.
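
As promised above, here is roughly what a background health check looks like where the OS can see the raw array. This sketch uses Linux md's sysfs interface (array name md0 assumed), since mobo fakeraid exposes nothing comparable:

# Kick off a full read/verify pass ("scrub") on a Linux md array and
# report the mismatch count once it finishes.
from pathlib import Path
import time

md = Path("/sys/block/md0/md")               # adjust md0 to your array
(md / "sync_action").write_text("check\n")   # start the scrub

while (md / "sync_action").read_text().strip() != "idle":
    time.sleep(60)                           # scrub still running

print("mismatches found:", (md / "mismatch_cnt").read_text().strip())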
 

Emulex

Diamond Member
Jan 28, 2001
Yes, you can run ZFS in FreeBSD under a VM and export the storage to your servers using iSCSI. That would not be a bad idea.

:)
 

Mark R

Diamond Member
Oct 9, 1999
Not going to happen with software RAID.

I've seen software RAID-6 with 12 drives get over 1000 MB/s (write speed) on Linux, with less than 20% (of one core) CPU usage. In fact, software RAID can outperform the fastest hardware RAID cards for raw transfer rate, without significant CPU usage, because modern CPUs are so insanely fast.

Where hardware RAID shines, is in random 4k writes, where the flash/battery backed cache, permits much more efficient use of the hard drives. Without BBWC/FBWC, random 4k performance will be horrid (hardware, or software RAID).

On my old E6600/P5B machine, using Linux software RAID-5 on 4× 5900 rpm Seagate drives on the integrated SATA ports, I get 200-300 MB/s reads and 120-200 MB/s writes.
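
If you want to sanity-check numbers like these on your own array, a crude sequential test is enough - stream a couple of GiB to a scratch file on the mounted array and time it (the path is a placeholder):

# Crude sequential-write throughput test against a mounted array.
import os
import time

PATH = "/mnt/array/throughput.tmp"   # placeholder scratch file on the array
CHUNK = b"\0" * (4 * 1024 * 1024)    # write in 4 MiB chunks
TOTAL = 2 * 1024**3                  # 2 GiB total

start = time.time()
with open(PATH, "wb") as f:
    for _ in range(TOTAL // len(CHUNK)):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())             # don't let the page cache flatter the result
elapsed = time.time() - start

print(f"{TOTAL / elapsed / 1e6:.0f} MB/s sequential write")
os.remove(PATH)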
 

Emulex

Diamond Member
Jan 28, 2001
Fact is, RAID-5/6 has to read/write more than RAID-10, so you have the parity overhead plus the extra reads/writes. Benchmarks need to be real-world - simple benchmarks don't capture the random nature of your application use. Random writes are most painful in RAID-5/6 because of the read-modify-write cycle needed to recompute the parity. Not too different from what has to happen with an SSD when less than a block is modified. Stripe size very much affects your application(s) as well.

Linear access is very different from SQL (main/tmp/log), which is different again from VMs.
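
The read-modify-write arithmetic behind that claim, as a back-of-envelope sketch:

# Disk operations needed per small random write, per RAID level -
# the read-modify-write penalty in its simplest form.
def ios_per_small_write(level):
    return {
        "raid10": 2,  # write the data block to both mirrors
        "raid5": 4,   # read old data + old parity, write new data + new parity
        "raid6": 6,   # like RAID-5, but with two parity blocks to read and write
    }[level]

for lvl in ("raid10", "raid5", "raid6"):
    print(lvl, "->", ios_per_small_write(lvl), "disk I/Os per random small write")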
 

taltamir

Lifer
Mar 21, 2004
Yea, because we haven't gotten by for the past 3+ decades without ZFS...

So, if someone finds the cure for cancer, would you respond in the same manner?
ZFS ended the dark ages of file systems; there is currently an equivalent FS in development (BTRFS), but it is not yet ready. Every other FS in the world is simply generations behind and unsafe for storing your data.
 

Mark R

Diamond Member
Oct 9, 1999
So, if someone finds the cure for cancer, would you respond in the same manner?
ZFS ended the dark ages of file systems; there is currently an equivalent FS in development (BTRFS), but it is not yet ready. Every other FS in the world is simply generations behind and unsafe for storing your data.

I wouldn't say that other file systems are unsafe. They've been used for decades, and the most important ones are highly reliable and robust.

Of course, they're not as robust as ZFS - but they're not at all bad. Mitigation techniques for most of the problems of conventional file systems are well recognised and used in modern OSs (with the exception of detection of data corruption). However, to a large extent this can be mitigated in archives by the use of external checksums or external parity files.

The problem is that ZFS has a number of significant disadvantages:
Generally poor performance, especially write performance
Severely limited choice of OS (Solaris or FreeBSD, and their very limited hardware compatibility)
Limited flexibility to expand storage space

Whether ZFS is an appropriate choice for your purposes depends on your priorities.

I'm quite happy storing my backups on a RAID-5 array, with supplemental PAR2 files to provide checksumming and parity for recovery in case of corruption. It's not perfect, but it's good enough for a backup solution.
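
For anyone unfamiliar with the PAR2 part, here is the shape of it - generating ~10% recovery data for a backup directory with the par2 command-line tool (paths are placeholders; assumes par2cmdline is installed):

# Generate supplemental PAR2 parity for a set of backup files, so that
# later corruption can be detected and repaired from the recovery blocks.
import subprocess
from pathlib import Path

backup_dir = Path("/mnt/array/backups")          # placeholder path
files = [str(p) for p in backup_dir.iterdir() if p.is_file()]

subprocess.run(
    ["par2", "create", "-r10",                   # -r10 = ~10% redundancy
     str(backup_dir / "backups.par2"), *files],
    check=True,
)
# Later: "par2 verify backups.par2" detects corruption,
#        "par2 repair backups.par2" fixes it from the recovery data.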
 

Emulex

Diamond Member
Jan 28, 2001
RAID-10 (1+0), IMO, is the best solution given the cost. There is no parity computation necessary - the drives handle it - you can read from each mirrored pair at the same time, and it's expandable. With smaller arrays, RAID-10 has less pain.

I.e. 4 disks in RAID-10 = 50% capacity.
I.e. 4 disks in RAID-5 = 75% capacity, with a heavy write penalty and CPU load (Intel Matrix RAID-5). Plus RAID-10 can survive a 2-disk failure (as long as the failed disks are in different mirror pairs), whereas RAID-5 cannot unless you add a second parity disk (RAID-6) or a hot spare, which then gives no space advantage over RAID-10.
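
That capacity arithmetic, generalized (plain division, nothing vendor-specific):

# Usable capacity of n equal disks under each RAID layout.
def usable_tb(n_disks, disk_tb, level):
    if level == "raid10":
        return (n_disks // 2) * disk_tb   # half the disks hold mirror copies
    if level == "raid5":
        return (n_disks - 1) * disk_tb    # one disk's worth of parity
    if level == "raid6":
        return (n_disks - 2) * disk_tb    # two disks' worth of parity
    raise ValueError(level)

for lvl in ("raid10", "raid5", "raid6"):
    print(lvl, "->", usable_tb(4, 2.0, lvl), "TB usable from 4x 2TB")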
 

Nothinman

Elite Member
Sep 14, 2001
So, if someone finds the cure for cancer, would you respond in the same manner?
ZFS ended the dark ages of file systems; there is currently an equivalent FS in development (BTRFS), but it is not yet ready. Every other FS in the world is simply generations behind and unsafe for storing your data.

Of course not, because the analogy is absurd: people die from cancer every day, while my data has been retained perfectly fine on XFS and a dozen other filesystems currently in use. ZFS is definitely a nice step forward, and as soon as BTRFS or something similar appears for Linux I'll likely migrate to it. But I'm not subjecting myself to Solaris or FreeBSD just for ZFS; the tradeoffs aren't worth it IMO.
 

taltamir

Lifer
Mar 21, 2004
I wouldn't say that other file systems are unsafe. They've been used for decades, and the most important ones are highly reliable and robust.

No, they aren't. They are highly unreliable and cannot be trusted with data. Almost everyone has had random bit errors that caused individual file corruption, which is entirely avoidable if your FS is smart. Currently only ZFS does that.

Since I switched to ZFS, it has caught 3 bit-level errors. I don't know if it was cosmic rays flipping bits or just the random errors that happen with every device, but ZFS catches and fixes them.
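
How those get surfaced, for the curious - you scrub the pool and read the status. "tank" is a placeholder pool name; the commands are the standard zpool CLI:

# Start a ZFS scrub and print pool status; the CKSUM column counts
# blocks that failed their checksum and were repaired from redundancy.
import subprocess

subprocess.run(["zpool", "scrub", "tank"], check=True)
status = subprocess.run(["zpool", "status", "-v", "tank"],
                        capture_output=True, text=True, check=True).stdout
print(status)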

RAID-10 (1+0), IMO, is the best solution given the cost. There is no parity computation necessary - the drives handle it - you can read from each mirrored pair at the same time, and it's expandable. With smaller arrays, RAID-10 has less pain.

I.e. 4 disks in RAID-10 = 50% capacity.
I.e. 4 disks in RAID-5 = 75% capacity, with a heavy write penalty and CPU load (Intel Matrix RAID-5). Plus RAID-10 can survive a 2-disk failure (as long as the failed disks are in different mirror pairs), whereas RAID-5 cannot unless you add a second parity disk (RAID-6) or a hot spare, which then gives no space advantage over RAID-10.

It is generally a good idea, yes. When you do get ZFS, set it up with multiple RAID-1 (mirror) vdevs. Of course, if your specific needs are better met with RAID-5, RAID-6, hot spares, etc., ZFS can do all of that too.
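
A sketch of what that pool layout looks like at creation time (device names are placeholders; capacity grows later by adding more mirror pairs):

# Create a ZFS pool from two striped two-disk mirrors ("multiple RAID-1
# vdevs"); standard zpool syntax, placeholder device names.
import subprocess

subprocess.run(["zpool", "create", "tank",
                "mirror", "/dev/ada0", "/dev/ada1",
                "mirror", "/dev/ada2", "/dev/ada3"], check=True)
# Expand later with:  zpool add tank mirror /dev/ada4 /dev/ada5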
 