What drives for RAID use?

cmf21

Senior member
Oct 10, 1999
977
1
81
Does it matter what manufacturer or type of drive is used, or are there certain ones made for this purpose? I was thinking about getting some Samsung F3s in 500GB or 1TB but wasn't sure if they're made for this purpose.
 

corkyg

Elite Member | Peripherals
Super Moderator
Mar 4, 2000
27,370
240
106
You can use any drive you desire. For RAID 1 it is good to have them the same size.
 

Rifter

Lifer
Oct 9, 1999
11,522
751
126
There are RAID edition drives that usually come with a hefty price premium, but for home use they are not needed. I'm using three 1.5TB Seagate LPs in my home server and have had no issues.
 

FishAk

Senior member
Jun 13, 2010
987
0
0
The above is not entirely true.


It's very important that you do not use Western Digital disks in a RAID configuration unless you pay extra for the RE (RAID Edition) disks. WD changed the way they let users configure their disks' error recovery late last year. Google TLER if you want more details, but the short version is that if you use normal WD disks in an array you will have problems, because the RAID controller will kick a disk off the array when it takes more than 7 seconds to recover from an error. This happens to me once or twice per month.
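
For what it's worth, you can check whether a given drive still exposes a configurable error-recovery limit before trusting it in an array. A rough sketch with smartmontools (the device name /dev/sdX is a placeholder, and not every drive accepts the SCT ERC commands):

# query the drive's SCT Error Recovery Control settings (the feature behind TLER)
smartctl -l scterc /dev/sdX

# if supported, cap read/write recovery at 7.0 seconds (values are in tenths of a second)
smartctl -l scterc,70,70 /dev/sdX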
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
Actually, it depends on what controller you use. If you use advanced software such as Linux and FreeBSD software RAID, you won't need TLER and can use cheap disks.

If you are using hardware RAID or Windows-based software RAID, you would need TLER instead to prevent the array from being degraded/broken/split upon a bad sector or timeout.
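
As a rough sketch of the software route, a Linux md RAID5 built from three ordinary consumer disks looks like this (mdadm; the device names sdb/sdc/sdd are placeholders, adjust for your system):

# create a 3-disk RAID5 array out of plain consumer drives
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# watch the initial build and the array's state afterwards
cat /proc/mdstat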
 

RebateMonger

Elite Member
Dec 24, 2005
11,586
0
0
It likely depends on the drives, the controller, and, I believe, the type of RAID being used. I've had several "consumer" drives in RAID 1 arrays, using "professional" RAID cards, motherboard RAID chips, and Windows software RAID, with zero problems over several years.

Well---not completely true. The first "home" RAID 1 array I built, with a Promise home-type RAID card, had several RAID 1 rebuilds because one of the two Seagate 160 GB IDE drives wasn't spinning up fast enough. The RAID card dropped that disk several times on startup. I replaced the "malfunctioning" disk with a similar Hitachi disk and that RAID 1 array has been running for four years now with zero problems.

I'm not sure exactly why, but I get the feeling that RAID 5 arrays are more likely to drop disks, hence the trend towards dual products (non-RAID and Enterprise) with different approaches to on-disk error recovery and reporting.

I don't know how much of an effect using "non-RAID" grade disks has on RAID 0 arrays. Obviously, if a RAID controller thinks a disk in a RAID 0 array has failed, the entire RAID 0 array immediately fails when the controller drops that disk.

Also, the consequences of a dropped disk in RAID 1 aren't as obvious as in RAID 5. Rebuilding a RAID 1 disk has little effect on the usability of the array, while a RAID 5 rebuild drastically slows performance and can take a long time.

It's apparent that it's POSSIBLE to "get away with" non-RAID grade disks, since people have been running various types of RAID arrays (RAID 0, RAID 1, and RAID 5) with "just any old disk" for a long time now with fair success.
 
Last edited:

sub.mesa

Senior member
Feb 16, 2010
611
0
0
"Get away with" sounds like you really need TLER/CCTL. However, i do not consider TLER/CCTL to be a nice feature; it increases the BER and translates to more bad sectors. It also should never be used on non-redundant volumes like RAID0, JBOD or single disks.

It is sort of a 'bugfix' for RAID controllers that cannot distinguish between a failed disk and a disk that is simply not responding because it is trying to repair a bad sector. Because the disk stops responding during recovery, which can take more than a minute, the RAID controller fails it out of the array and updates the metadata on the other drives to record that disk X was detached and should not be used anymore. So when you reboot, the controller still won't re-add disk X; it shows up as a free disk member while the array stays degraded.

With TLER, if a disk encounters a bad sector it spends a maximum of 7 seconds trying to recover that sector, whereas most RAID controllers drop a disk that does not respond for 10 seconds or longer. The difference is that after those 7 seconds an I/O error is returned instead, which lets the RAID controller do various things:
- drop the drive anyway (not much different from the no-TLER case)
- correct the bad sector by writing redundant data over it, letting the hard drive remap the sector so all damage is fixed
- use redundant data to service the I/O request but leave the bad sector as it is.

Option 2 would be the best, but actually few controllers implement this.

All this applies to hardware RAID and onboard RAID. Software RAID on Linux/BSD platforms is advanced and properly implemented, so it never needs TLER/CCTL; you may see a small pause whenever a bad sector pops up, but it should never break the array.
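
A related practical note for Linux software RAID (my own addition; sdb is just an example device and the change does not survive a reboot): instead of relying on TLER you can raise the kernel's command timeout so a long in-drive recovery doesn't look like a dead disk:

# the kernel's SCSI command timeout for this disk, in seconds (default is usually 30)
cat /sys/block/sdb/device/timeout

# raise it so a minute-long error recovery is tolerated instead of failing the member
echo 120 > /sys/block/sdb/device/timeout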
 

FishAk

Senior member
Jun 13, 2010
987
0
0
All this applies to hardware RAID and onboard RAID. Software RAID on Linux/BSD platforms is advanced and properly implemented, so it never needs TLER/CCTL; you may see a small pause whenever a bad sector pops up, but it should never break the array.

If I'm using a Linux/BSD platform for my software RAID 10, and for some reason decide I need to revert my OS partition to a saved image, how would that affect the array? Would there be no effect on the array, would it need to be rebuilt, or would the data on the array be lost?


I'm assuming Windows can still be used with a Linux array. Is software RAID as fast as ICH10R?
 

alaricljs

Golden Member
May 11, 2005
1,221
1
76
Neither Linux nor BSD cares what data is on the array, as long as you are not downgrading the OS itself to a version that contains an earlier version of the software RAID code.

Windows can do nothing with Linux or BSD software RAID. The closest you could get would be running Windows as a VM stored on the RAID, under Linux with VirtualBox, Xen, or VMware.

ICH10R is software RAID with some minor hardware hooks for configuration. It's likely that Linux software RAID is faster, but only by a marginal factor, and I don't have tests to prove it.
 
Last edited:

mv2devnull

Golden Member
Apr 13, 2010
1,531
161
106
If I'm using a Linux/BSD platform for my software RAID 10, and for some reason decide I need to revert my OS partition to a saved image, how would that affect the array? Would there be no effect on the array, would it need to be rebuilt, or would the data on the array be lost?


I'm assuming Windows can still be used with a Linux array. Is software RAID as fast as ICH10R?
ICH10R is software RAID too, so your question is really whether the Linux software RAID driver is as efficient as the Windows one.


On the image restore, it depends on how you create the image in the first place. I would rather back up by copying files from the offline system, and restore at that level too. Then no rebuild is required and no data is lost.
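
For example, a file-level backup of the OS partition taken from a livecd could look roughly like this (device names and paths are only illustrative, and assume the backup target is already mounted):

# mount the offline root partition and copy its files to a backup location
mkdir -p /mnt/root /mnt/backup/root
mount /dev/sda1 /mnt/root
rsync -aAX /mnt/root/ /mnt/backup/root/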
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
If I'm using a Linux/BSD platform for my software RAID 10, and for some reason decide I need to revert my OS partition to a saved image, how would that affect the array? Would there be no effect on the array, would it need to be rebuilt, or would the data on the array be lost?
If you want to restore an image you could write that backed-up image directly to the raw RAID device, which under Linux software RAID is likely /dev/md0. This would restore the partition table, MBR, partitions and their contents.

No rebuild is required, and what data you store on your RAID array is irrelevant to the RAID driver, as long as you're not writing directly to a single disk member, which you should never do (and which would likely give a permission-denied error anyway).

One problem is that commercial ghost/image applications, including those that boot from CD, cannot access your Linux software RAID array. But what you can do:

1) boot Ubuntu from a livecd (ubuntu.com)
2) it should detect your existing /dev/md0 RAID array with data on it
3) you should be able to access the backup image you want to restore
4) you could then execute a command like:

dd if=/path/to/backup.img of=/dev/md0 bs=1M
(warning: do not execute this command without modifying it and double checking before you hit enter)

dd transfers data from point A (the if=...) to point B (the of=...) and is perfect for making disk images, including the partition table and all the boot stuff. But it is also a dangerous command; a user mistake can overwrite and destroy your precious data. So be careful; it's not recommended for those new to Linux!
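
If the livecd does not pick up the array by itself (step 2 above), you can usually assemble it by hand; mdadm may need to be installed on the live session first (Ubuntu package name assumed):

sudo apt-get install mdadm      # only needed if the livecd doesn't ship it
sudo mdadm --assemble --scan    # assemble arrays described in the member disks' superblocks
cat /proc/mdstat                # confirm /dev/md0 is up before restoring anything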
I'm assuming Windows can still be used with a Linux array. Is software RAID as fast as ICH10R?
Windows will see the disks as "RAW" and will ask to initialize them. If you do, you will lose the data on your array.

However, Linux software RAID arrays may be portable to FreeBSD, for example, or to other advanced RAID implementations, either manually or automatically. That's how Linux detects Windows RAIDs too: you can boot Ubuntu on a system with an Intel ICHxR RAID array created under Windows, and Ubuntu can access it.

The only thing that doesn't work is migrating to Windows; Windows generally supports only its own technologies.

Linux md RAID5 should be faster than Intel's, and also a lot safer, since Intel requires 'write caching' that can corrupt your filesystem on a crash or power failure. Without that option, RAID5 writes would be VERY slow.

Linux/BSD can use write-back mechanisms while keeping filesystems consistent across a crash or power failure. I'm not too familiar with Linux, but BSD does this with a BIO_FLUSH command that acts as a write barrier; I believe the ext4 filesystem works in a similar way.

Generally, you could say software RAID on Linux or BSD is superior. ZFS is a special case, though; to be fair, ZFS doesn't really implement the standardized RAID specs, as its on-disk layout can never be mimicked by an ordinary RAID controller; only the combination of RAID engine and filesystem can access the data. This is fundamentally different from conventional RAID, where the filesystem has no knowledge of where data is really located on the physical disks. All it knows is 'position 456 on SCSI volume 1', since RAID arrays are exposed to the OS as SCSI disks.

For those not familiar with ZFS, I highly recommend you read up on it; it's one of the hottest things in storage right now, except for SSDs of course. :)
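
If you want a small taste of it, here is a minimal sketch (FreeBSD-style device names ada0..ada2 assumed, and 'tank' is just an example pool name): a redundant pool plus filesystem is a single command, and the pool reports its own checksum/repair status:

# create a single-parity raidz pool (roughly RAID5-like) named "tank" from three disks
zpool create tank raidz ada0 ada1 ada2

# show pool health, including any checksum errors ZFS has found
zpool status tank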
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
21,076
3,577
126
In case you're wondering about drive speed.

Windows 7 w/ ICH10R:
Raptors x 3 in RAID 0: temp files / cache files / everything minus OS.
[attachment: Raptors.jpg]


Intel X25-V x 2 in RAID 0: OS
[attachment: HDscore.png]



Really depends on what you're going to do.
If you want absolute speed, you can see how fast even two Intel X25-Vs are in RAID.
If you want insanely fast sequential writes, you can get two C300s if you can afford them, or get three Raptors.

If you're looking to jumbo-size your storage, I would look at RAID 5 unless it's porn storage.
 
Last edited:

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Actually, it depends on what controller you use. If you use advanced software such as Linux and FreeBSD software RAID, you won't need TLER and can use cheap disks.

If you are using hardware RAID or Windows-based software RAID, you would need TLER instead to prevent the array from being degraded/broken/split upon a bad sector or timeout.

Allow me to make a minor correction here:
If you are using hardware RAID or Windows-based software RAID, you should stop using it and move to a proper data solution.
 

Dorkenstein

Diamond Member
Jul 23, 2004
3,554
0
0
I am in the same situation as the OP. I was going to get two Samsung F3 500GB drives for RAID 0. I was under the impression that software RAID was fine for such an application. I thought you would need hardware RAID only if you were using many drives in a mission-critical setting, rather than just a few drives for home use.
 

FishAk

Senior member
Jun 13, 2010
987
0
0
Allow me to make a minor correction here:
If you are using hardware RAID or Windows-based software RAID, you should stop using it and move to a proper data solution.

Neither hardware nor software? That doesn't leave many options. I know you're not totally against RAID, taltamir, but I'm also sure you don't mean to say that you only like non-Windows software RAID.
 

Dorkenstein

Diamond Member
Jul 23, 2004
3,554
0
0
So, would it be okay to RAID 0 two F3 500GB drives? I saw some reviews on Newegg that said you shouldn't RAID the F3 drives, but I don't know if that's true. My games drive is filling up, and I need to buy a replacement soon. Sorry to drag this back up; I just didn't think the question warranted a new thread. Thank you.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Neither hardware nor software? That doesn't leave many options. I know you're not totally against RAID, taltamir, but I'm also sure you don't mean to say that you only like non-Windows software RAID.

It leaves OS-based RAID using other OSes: Solaris, Linux, FreeBSD...

There are three kinds of RAID:
1. Pure hardware: the vaunted $300+ controllers.
2. Hybrid: has some hardware component, but isn't all hardware. Motherboard RAID falls in here, as well as $30 expansion cards.
3. Pure software/OS based: the drives appear completely independent, and the OS applies a RAID scheme to them via the file system. Available in Windows Server, Linux, Solaris, FreeBSD, Mac server, etc.

I would, first and foremost, recommend ZFS OS-based RAID in mirror mode, using OpenSolaris or, alternatively, FreeBSD.
If you don't want to, then use a Linux flavor, or even something like FreeNAS.
And no matter what you do, only make RAID1 arrays, and maybe RAID1+0 / 0+1 if you must. (It's better to have two completely separate RAID1 arrays if you don't absolutely NEED the extra speed.)
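
If you go the ZFS route, a minimal sketch of that mirror setup looks like this (FreeBSD-style device names ada0..ada3 are only placeholders; pool names are up to you):

# one pool per mirrored pair, i.e. two completely separate RAID1-style arrays
zpool create tank1 mirror ada0 ada1
zpool create tank2 mirror ada2 ada3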
 
Last edited:

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
Dorkenstein: I have two 500GB Samsung F3 HDDs in RAID 0 running fine. What's this statement that F3s should not be run in RAID?
 

Dorkenstein

Diamond Member
Jul 23, 2004
3,554
0
0
I didn't think it was true; I just wasn't sure why a few people would erroneously claim that a drive absolutely shouldn't be used in RAID. They must have just been bad drives. That basically settles it: I'm going to order two of them. I'll have to juggle my existing drives around because my case doesn't have as much room for drives as my old case, but it should be okay.
 

john3850

Golden Member
Oct 19, 2002
1,436
21
81
If you had two 500GB drives in RAID 0, how much disk space, if any, do you lose to striping, etc.?
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
If you had two 500GB drives in RAID 0, how much disk space, if any, do you lose to striping, etc.?

RAID 0 doesn't lose any space; you get all the space, and you also double your speed.
What you lose is reliability; it is extremely unreliable. You should have backups if you run RAID 0.
 

Dorkenstein

Diamond Member
Jul 23, 2004
3,554
0
0
What's the next best mode to RAID 0 in terms of speed while possibly being more reliable?
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
What's the next best mode to RAID 0 in terms of speed while possibly being more reliable?

RAID 1. RAID 1 gives double the read speed (like RAID 0), the same write speed as a single drive (unlike RAID 0, which has double the write speed), and is the most reliable form of RAID, bar none. It is also the easiest to upgrade, since drives simply come in pairs.
However, with RAID 1, 2 x 500GB drives = 500GB of storage. Make sure to make separate arrays if you have more than 2 drives. So if you get 4 drives, don't make a RAID 1 of all 4; make 2 separate arrays, each one a RAID 1 array of 2 drives.
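
Under Linux md, for example, four drives as two independent mirrors would look roughly like this (mdadm; the device names sdb..sde are just placeholders):

# first pair becomes /dev/md0, second pair becomes /dev/md1
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde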

RAID1 is aka "mirroring"
RAID0 is aka "striping"

read: http://en.wikipedia.org/wiki/RAID
 
Last edited:

Spikesoldier

Diamond Member
Oct 15, 2001
6,766
0
0
If you want to RAID 0 something, get some SSDs like aigo said; this avoids the risk of one spinning disk failing and losing the data on both. Adding a second SSD to the Intel RAID controller is pretty much a linear performance gain; however, diminishing returns begin at the third, since the controller tops out at around 550MB/s. This is two X25-M 80GB G1s:

[attachment: as-ssd-benchmark.png]