What is RAID and what does it do?


Lee Saxon

Member
Jan 31, 2010
91
0
61
raid-1/5 on consumer soft-raid is just too unstable in windows. it will let you down eventually.

THANK YOU.

I've had so many people deny that on these forums, and tell me I'd just had bad luck the 7054298032890325 times I've tried it. "Motherboard RAID will be fine" yeah right.
 

mv2devnull

Golden Member
Apr 13, 2010
1,526
160
106
There are websites for loathing RAID5 regardless of what controller is in use, so it is not just the Windows software that deserves blame.
 

Vic Vega

Diamond Member
Sep 24, 2010
4,535
4
0
3. raid is a burden we have to live with in business, it's not something anyone enjoys or brags about.

You're joking? RAID is awesome.

<------ SAN administrator.
 

Ben90

Platinum Member
Jun 14, 2009
2,866
3
0
Not to start a debate, but I find it hard to believe that ANY software RAID will match the throughput/performance of an "expensive" dedicated controller. I haven't implemented ZFS or other soft-RAID solutions (Windows has in fact sucked) and I would be interested in seeing some benchmarks that show it will keep up with a high-end controller.
Believe it or not, from a purely IO perspective, software RAID will outrun hardware RAID. While fixed-function hardware is no slouch, it's tough to keep up with a CPU capable of nearing 100 GFLOPS.

It is not until you get into RAID cards costing thousands of dollars that they start to outrun randomly striping drives together in Windows. From there you can push software even further, for example by using RAM as cache.

The above is from a strictly IO only perspective. There can be a decent amount of overhead on the more complex RAID solutions. If you are running low on CPU resources then dedicated hardware can flex its independent muscles. Not to mention all the RAS features you just can't get with software only. RAS is actually probably the biggest factor for going dedicated with modern CPUs.

I found a happy medium as a customer. ICH10R gives me quite pleasant RAID 0 performance without any CPU overhead. I do lose write cache reliability however. Not a big deal considering I have 6 drives tied together in a RAID 0. Wasn't expecting this to be reliable. I will say though, that at least for myself with a sample size of ~30 HDDs, they are more reliable than a lot of consumer hardware.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
not sure what biz you are in - raid is a necessity. i rock raid-10 because i've had two double drive failures and VM's love disk write iops.

exotic software raid is essentially what the NAS/SAN systems you buy are - with some RAS (redundancy) - there's a reason why my hp dl380 P410i controller has flash-backed write cache - no battery to fail. There's a reason why it uses two connectors for one set of dual ported SAS drives. There's a reason why there are two chassis and drive 1 is in one chassis while drive 2 is in the other (forming the first mirror set). If i had another chassis i spose i could hook up a second raid controller and make it even more redundant. but you know i still have old scsi 72gb raid-5 systems from 7 years ago rocking out at 100% duty cycle - good design and care definitely keeps the backup man away.

i think every sysadmin worries about coming in on monday to a failed disk, or about that page at night. it's a constant anxiety - thank god for quality (hp,ibm) gear that's well thought out.

trust me i tried ICH and for raid-0 - that's it. nothing more. no thank you sir.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
THANK YOU.

I've had so many people deny that on these forums, and tell me I'd just had bad luck the 7054298032890325 times I've tried it. "Motherboard RAID will be fine" yeah right.

Some motherboard fakeRAID is fine, I would say that most people consider the Intel controllers reliable. It all depends on the implementation though, just as with anything else.

Emulex said:
not sure what biz you are in - raid is a necessity. i rock raid-10 because i've had two double drive failures and VM's love disk write iops.

With a nice tradeoff of cost and space, not everyone is willing to do that for their personal machine.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
really? with drives being so cheap i can't imagine why someone would want 30mb/s ich raid-5 write speed. 30megabit
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
really? with drives being so cheap i can't imagine why someone would want 30mb/s ich raid-5 write speed. 30megabit

I can't imagine why someone would spend twice the money on drives when it's not necessary.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
really? with drives being so cheap i can't imagine why someone would want 30mb/s ich raid-5 write speed. 30megabit
The much better question is why any consumer would want to use Raid 5 to start with. Actually, an even better question is why anyone would want to use Raid 5 these days: problems with high BER, and the write hole, which basically necessitates an expensive HW controller.

RAID isn't a backup, so you still need that anyhow. So basically you're exchanging a higher uptime and possibly higher performance (well mostly throughput, not so much latency) against the additional trouble and time and effort every RAID solution brings with it. That may be the right approach for some (it obviously is for businesses..), but I can certainly see the other point of view - especially for consumers
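The write hole Voo mentions falls straight out of how RAID 5 parity works. A toy Python sketch (my own illustration, not any controller's actual code): parity is the XOR of a stripe's data blocks, so a crash between writing new data and rewriting the parity block leaves the stripe silently unrecoverable.

```python
# Toy RAID 5 parity sketch (illustration only, not a real implementation):
# parity is the XOR of the data blocks in a stripe, so any ONE lost
# block can be rebuilt from the survivors plus parity.

def parity(blocks):
    """XOR all blocks together."""
    p = bytes(len(blocks[0]))
    for b in blocks:
        p = bytes(x ^ y for x, y in zip(p, b))
    return p

def rebuild(surviving_blocks, p):
    """Recover a single missing block from parity plus the survivors."""
    return parity(surviving_blocks + [p])

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]  # 4 data drives' blocks
p = parity(data)

# Lose drive 2: its block is recoverable.
assert rebuild(data[:2] + data[3:], p) == data[2]

# The "write hole": new data hits one disk, but we crash before the
# parity block is rewritten. Parity is now stale, and a later rebuild
# returns garbage instead of the lost block.
data[1] = b"XXXX"   # ...crash here, so `p` is never recomputed
assert rebuild(data[:2] + data[3:], p) != data[2]  # silent corruption
```

Battery- or flash-backed write cache on hardware controllers exists largely to close exactly this window.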
 

classy

Lifer
Oct 12, 1999
15,219
1
81
As cheap as drives are now, I can't see why anyone wouldn't use some type of raid. The soft or onboard controllers are pretty decent and work well, especially Intel's. Raid 1 or Raid 5, I think, are a benefit even to a home user. Is it necessary? Of course not. But I firmly believe everyone should run Raid 1. Using SSDs for Raid would be totally overdone, but for regular hard drives I think it's a good idea. Raid 5 would be tough to move to another board, though, when using the soft onboard controller.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
classy said:
Raid 5 would be tough to move to another board, though, when using the soft onboard controller.

That's one reason why I prefer software RAID. I'm not tied to any one controller. I recently moved an encrypted mirror from my main Linux PC to a secondary one and it just worked for the most part, I didn't have to worry about any drivers, which chipsets were on either end, etc.
 

Red Squirrel

No Lifer
May 24, 2003
70,592
13,807
126
www.anyf.ca
really? with drives being so cheap i can't imagine why someone would want 30mb/s ich raid-5 write speed. 30megabit

Try more like ~200MB/sec write and ~2GB/sec read. :cool:

5x WD Black 1TB in raid 5 using Linux md raid.

3.7TB of usable space, 3-ish hour rebuild times. The people who say raid 5 performance is slow have not seen what more than 3 drives can do. It's not as good as raid 0, but at least there's redundancy. I'm looking into raid 6 if I add any more drives.
 

exdeath

Lifer
Jan 29, 2004
13,679
10
81
Try more like ~200MB/sec write and ~2GB/sec read. :cool:

5x WD Black 1TB in raid 5


I don't think so...

No drive based on 6-decade-old mechanical data storage is doing the 400-500 MB/sec individually it would take to obtain 2000 MB/sec with only 5 drives in any RAID version. You'd need about 10 spindles just to break 1 GB/sec.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Try more like ~200MB/sec write and ~2GB/sec read. :cool:

5x WD Black 1TB in raid 5 using Linux md raid.
2GB/sec read? Yeah, but only from the cache. I mean, think about it for one second: how would 5 drives, each getting at best 150MB/s when all the stars are aligned, deliver more than 5*150 = 750MB/s?

And 200MB/sec write? So you needed five drives just to be 30% faster than one drive alone?

Yeah you're right - amazing performance that :D
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
no way on ICH raid you'd get that speed. 30megabit a second is more like it. not having a cache hurts badly.

Post your #'s to back up your claim (assuming ICH and windows). AS CDM
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
no way on ICH raid you'd get that speed. 30megabit a second is more like it. not having a cache hurts badly.

Post your #'s to back up your claim (assuming ICH and windows). AS CDM

As annoying as Red Squirrel is most of the time, I believe your assumptions are wrong. I'd bet he's using mdadm on Linux.
 

T_Yamamoto

Lifer
Jul 6, 2011
15,007
795
126
well im not an enthusiast so i dont think i need it. lol

can you explain the different TYPES of raid and what each one of them does?
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
google/wiki raid and there are some great guides.

I don't doubt some of the ZFS solutions are better - but that is not typical for users here, since that would mean a dedicated storage server, while most people here want to run raid on a pc that does typical things (windows, ie/ff, mail/browsing/games/etc).
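For anyone skimming instead of hitting the wiki, the capacity and fault-tolerance math behind the common RAID levels is simple enough to sketch. This is generic RAID arithmetic under the usual assumption of n identical drives; real controllers vary, so check your controller's docs.

```python
# Usable capacity and guaranteed drive-failure tolerance for common
# RAID levels, assuming n identical drives of `size_tb` TB each.
# Generic arithmetic only -- not any specific controller's behavior.

def raid_summary(level, n, size_tb):
    if level == 0:        # striping: all capacity, no redundancy
        return n * size_tb, 0
    if level == 1:        # mirroring: one drive's worth of capacity
        return size_tb, n - 1
    if level == 5:        # striping + one drive's worth of parity
        return (n - 1) * size_tb, 1
    if level == 6:        # striping + two drives' worth of parity
        return (n - 2) * size_tb, 2
    if level == 10:       # striped mirrors (n even): half the capacity
        return (n // 2) * size_tb, 1  # 1 guaranteed; up to n/2 if lucky
    raise ValueError(level)

# Red Squirrel's array: 5 x 1TB in RAID 5
usable, tolerance = raid_summary(5, 5, 1.0)
print(usable, tolerance)  # 4.0 TB usable, survives any 1 drive failure
```

The ~3.7 TiB that mdadm reports for that array is the same 4.0 TB figure after the TB-to-TiB conversion.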
 

Red Squirrel

No Lifer
May 24, 2003
70,592
13,807
126
www.anyf.ca
I don't think so...

No drive based on 6 decade old mechanical data storage is doing 400-500 MB per sec individually in order to obtain 2000 MB/sec with only 5 drives in any version RAID. You'd need about 10 spindles just to break 1 GB/sec.

I bought them recently, far from 6 decades old.

[root@borg tmp]# pwd
/raid1/tmp
[root@borg tmp]# mount -l
/dev/sda3 on / type ext3 (rw) [/]
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw) [/boot]
tmpfs on /dev/shm type tmpfs (rw)
/dev/md0 on /raid1 type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
gvfs-fuse-daemon on /home/vmuser/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=vmuser)
[root@borg tmp]#
[root@borg tmp]#
[root@borg tmp]#
[root@borg tmp]# mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Sat Sep 20 02:15:28 2008
Raid Level : raid5
Array Size : 3907039744 (3726.04 GiB 4000.81 GB)
Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
Raid Devices : 5
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Sat Sep 10 22:20:35 2011
State : active
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

UUID : 11f961e7:0e37ba39:2c8a1552:76dd72ee
Events : 0.787795

Number Major Minor RaidDevice State
0 8 80 0 active sync /dev/sdf
1 8 16 1 active sync /dev/sdb
2 8 32 2 active sync /dev/sdc
3 8 48 3 active sync /dev/sdd
4 8 64 4 active sync /dev/sde
[root@borg tmp]#


[root@borg tmp]# bonnie++ -d bonnietest/ -s 5000 -r 1024 -g ryan
Using uid:0, gid:1044.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
borg.loc 5000M 67482 93 180586 41 182865 31 79496 98 2396792 98 +++++ +++
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 1755 3 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
borg.loc,5000M,67482,93,180586,41,182865,31,79496,98,2396792,98,+++++,+++,16,1755,3,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
[root@borg tmp]#


Maybe I'm interpreting it wrong, but that's where I got my numbers. The results vary slightly each run, given the disks are not 100% idle and different chunk sizes etc. have various effects. So in real-world usage I'm probably not pegging that high, but it shows the array is capable of it.
 


njdevilsfan87

Platinum Member
Apr 19, 2007
2,342
265
126
I use RAID1 on my 2TB storage drives. I've lost a 500GB drive of media/documents once, and that was my lesson learned.

Also I don't know what this "unstable in Windows" RAID is. I've been running RAID1 through my motherboard for 6 months now without any issues at all. But then again I don't actually RAID the boot drive. That I just image backup (onto the storage drives) about once a month.
 

exdeath

Lifer
Jan 29, 2004
13,679
10
81
I bought them recently, far from 6 decades old.




Maybe I'm interpreting it wrong, but that's where I got my numbers. The results vary slightly each run, given the disks are not 100% idle and different chunk sizes etc. have various effects. So in real-world usage I'm probably not pegging that high, but it shows the array is capable of it.

I don't care if the drive was made in 2011, it's still 6-decade-old technology and slow as turds rolling uphill.

You are interpreting it wrong or benchmarking it wrong. Five spindle drives are not going to deliver 2,000 MB/sec in any configuration. Think about it for a second: individually, each drive can probably barely sustain 100 MB/sec, so how does 4 x 100 = 2,000? (The 5th drive's worth of capacity is parity and doesn't add to user data rates in RAID 5.) It's obviously bursting from cache, which is pointless for benchmarking storage speed.