Recommend me a Raid 1 or 5 Card - 2-8 Port SATA

bob4432

Lifer
Sep 6, 2003
time to upgrade the home server for some serious space. looking to go hardware raid 5 but want it in a separate add-in card so i can move it around and not be m/b bound by the setup if/when i upgrade it. 32bit pci slot is preferred but i may have a look at pci-e and possibly rebuild the server if the pci-e stuff is that much better. no pci-x please.

i don't really care about write/read performance because i will be limited to 100Mb/s ethernet for a bit.

the array will hold images from other machines and video - so larger multi-GB files, but again, i understand i am ethernet limited and even when i switch to full Gb/E i would be happy with 40-60MB/s.
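The network ceilings mentioned above can be sanity-checked with a quick conversion; the figures below are theoretical line rates (real-world throughput will be lower due to protocol overhead):

```python
# Rough throughput ceilings for the network links discussed above.
# These are theoretical line rates; sustained real-world numbers run
# lower due to TCP/IP and filesystem-protocol overhead.

def line_rate_mbytes(megabits_per_s: float) -> float:
    """Convert a link's line rate from megabits/s to megabytes/s."""
    return megabits_per_s / 8.0

fast_ethernet = line_rate_mbytes(100)    # 100 Mb/s Ethernet
gigabit = line_rate_mbytes(1000)         # GbE

print(f"100 Mb/s Ethernet tops out around {fast_ethernet:.1f} MB/s")
print(f"GbE tops out around {gigabit:.1f} MB/s; 40-60 MB/s sustained is realistic")
```

So on 100 Mb/s the array can never be the bottleneck, and even on GbE the 40-60 MB/s target is well within what a modest RAID 5 can deliver.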

will probably put 4x 7200.10 320GB hdds in some type of hot-swap setup to begin with and possibly add more later if the 8-port cards aren't too expensive.

also recommendations on a 4x hdd sata hot-swap enclosure that fills 3-4 drive bays would be nice too. or even an external cage along with a card that can handle an external setup.

not trying to break the bank, but i want this card to last for some time.

rest of system - win2kpro, xp2000 (probably going to move up to a 2400 mobile or 2500 barton), 512MB-1GB ram, 10K scsi main drive, simple agp gpu and ~400-450W enhance/fps/enermax psu (haven't decided), or possibly will stick with my 350W antec - do you think 6 hdds and a dvd-rw is too much for that psu? this machine is always on a ups and is a 24/7 setup that only gets rebooted when m$ updates require it, so usually on 30 days in a row, then a reboot, then another 30 days.

would like a pretty popular card so acronis has the drivers built in but i can more than likely work around that.

reliability is much more important than speed if that makes a difference in the cards.

thanks in advance,
bob

edit: after seeing the prices on the cards, how are the cheaper raid 1 cards? maybe do 2x 500-750GB hdds instead?
 

bob4432

Lifer
Sep 6, 2003
Originally posted by: SuperNaruto
last long = areca

dont care = adaptec

looks like ~$200-$300 for the card alone....how about a nice reliable raid 1 card? same brands or are there others out there?
 

Madwand1

Diamond Member
Jan 23, 2006
Rocketraid 2310 4x SATA, x4 PCIe, ~ $140 USD. Probably one of the better buys around. Hybrid solution, but does decent RAID 5 writes.

Problem with growth: the 8x SATA version, the 2320, is almost double the price, but the difference is comparable to what you'd pay for additional drives (and perhaps the premium for bigger drives), and you'd spend more in total if you bought a 4x now and an 8x later.

400 and 500 GB drives are coming down in price; if you shop well, you can find some deals which though not at the 250-320 GB $/GB level, approach it. 4x500 GB RAID 5 = 1.5 TB, which should be enough. With this sort of volume, you're taking a fairly big risk with difficulty of backup & data loss. It might be better to be more conservative and build space taking into account external backup capability.
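The capacity arithmetic above follows directly from how the two RAID levels in this thread allocate redundancy; a small sketch (covering only RAID 1 and RAID 5, since those are the levels under discussion):

```python
def raid_usable_gb(drives: int, size_gb: int, level: int) -> int:
    """Usable capacity for the RAID levels discussed in this thread."""
    if level == 1:
        # mirroring: half the raw capacity (assuming a simple 2-way mirror)
        return drives * size_gb // 2
    if level == 5:
        # striping with one drive's worth of capacity lost to parity
        return (drives - 1) * size_gb
    raise ValueError("only RAID 1 and RAID 5 handled here")

print(raid_usable_gb(4, 500, 5))   # 1500 GB -- the 1.5 TB figure above
print(raid_usable_gb(4, 320, 5))   # 960 GB from 4x 320 GB drives
print(raid_usable_gb(2, 750, 1))   # 750 GB from a 2x 750 GB mirror
```

This is also why the larger array raises the backup problem: 1.5 TB of usable space is far more than any single external drive of the era can absorb.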

PCI-X is generally backwards-compatible with PCI. You need to check to be sure, but most should be. There's no advantage to PCI-X of course unless you get a workstation/server board with that capability.

Based on experience, I'd be wary of the PCI performance on socket A MBs, and suggest that server-class boards might be a safer bet going this route. And in that case, you should be able to get PCI-X capability and match that with a cheap eBay PCI-X GbE NIC -- these are being dumped in favour of on-board / newer boards.

I think the 23xx line is actually PCI-X with a PCIe bridge, so the 22xx PCI-X line should be similar. But for newer gear at low cost, I think the better, longer-term answer is PCIe with onboard video and GbE.

I'm running a 2320 on an nVIDIA 430/6150 939 board in the x16 PCIe slot (as my backup server). 6 drives in RAID 5 + OS drive + a few fans, controllers, etc., takes ~ 300W AC to boot up, and runs at around 145W. Staggered spin-up is not enabled (can't turn it on -- maybe a drive-compatibility issue). I'm using an Enermax Noisetaker 420W for this. Wouldn't recommend skimping on a cheap PSU, for stability and reliability reasons in this context.
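The measured figures above line up roughly with a back-of-envelope estimate. The per-device wattages below are generic rule-of-thumb assumptions, not measurements of any particular drive or board:

```python
# Back-of-envelope PSU budget for a multi-drive RAID box.
# All per-device wattages are assumed rule-of-thumb figures.

SPINUP_W_PER_DRIVE = 25   # 7200 rpm drive during spin-up (12V surge), assumed
IDLE_W_PER_DRIVE = 8      # same drive once spinning, assumed
BASE_SYSTEM_W = 120       # CPU, board, RAM, fans, controller, assumed

def boot_watts(n_drives: int) -> int:
    # worst case: every drive spins up at once (no staggered spin-up)
    return BASE_SYSTEM_W + n_drives * SPINUP_W_PER_DRIVE

def running_watts(n_drives: int) -> int:
    return BASE_SYSTEM_W + n_drives * IDLE_W_PER_DRIVE

print(boot_watts(7), "W peak at boot for 7 drives")
print(running_watts(7), "W once settled")
```

The boot-time surge is why staggered spin-up matters on bigger arrays, and why sizing the PSU to the idle number alone is a mistake.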

For RAID 1, I'd go with an on-board solution. RAID 1 is dead simple, and probably fairly easily portable -- easily within the same chipset family; maybe even cross chipsets (you only need a drive to be recognized as a simple drive to rebuild RAID 1). Intel might perform a bit better during RAID 1 reads than nVIDIA. 750 GB drives are expensive though; not sure that paying the premium makes much sense.
 

bob4432

Lifer
Sep 6, 2003
Originally posted by: Madwand1
Rocketraid 2310 4x SATA, x4 PCIe, ~ $140 USD. Probably one of the better buys around. Hybrid solution, but does decent RAID 5 writes.

Problem with growth: the 8x SATA version, the 2320, is almost double the price, but the difference is comparable to what you'd pay for additional drives (and perhaps the premium for bigger drives), and you'd spend more in total if you bought a 4x now and an 8x later.

400 and 500 GB drives are coming down in price; if you shop well, you can find some deals which though not at the 250-320 GB $/GB level, approach it. 4x500 GB RAID 5 = 1.5 TB, which should be enough. With this sort of volume, you're taking a fairly big risk with difficulty of backup & data loss. It might be better to be more conservative and build space taking into account external backup capability.

PCI-X is generally backwards-compatible with PCI. You need to check to be sure, but most should be. There's no advantage to PCI-X of course unless you get a workstation/server board with that capability.

Based on experience, I'd be wary of the PCI performance on socket A MBs, and suggest that server-class boards might be a safer bet going this route. And in that case, you should be able to get PCI-X capability and match that with a cheap eBay PCI-X GbE NIC -- these are being dumped in favour of on-board / newer boards.

I think the 23xx line is actually PCI-X with a PCIe bridge, so the 22xx PCI-X line should be similar. But for newer gear at low cost, I think the better, longer-term answer is PCIe with onboard video and GbE.

I'm running a 2320 on an nVIDIA 430/6150 939 board in the x16 PCIe slot (as my backup server). 6 drives in RAID 5 + OS drive + a few fans, controllers, etc., takes ~ 300W AC to boot up, and runs at around 145W. Staggered spin-up is not enabled (can't turn it on -- maybe a drive-compatibility issue). I'm using an Enermax Noisetaker 420W for this. Wouldn't recommend skimping on a cheap PSU, for stability and reliability reasons in this context.

For RAID 1, I'd go with an on-board solution. RAID 1 is dead simple, and probably fairly easily portable -- easily within the same chipset family; maybe even cross chipsets (you only need a drive to be recognized as a simple drive to rebuild RAID 1). Intel might perform a bit better during RAID 1 reads than nVIDIA. 750 GB drives are expensive though; not sure that paying the premium makes much sense.

thanks for the info.....some more thinking as i really don't want to rebuild the server and would like to use the skt a setup since it has a lot of tweaking on it - it runs apache, mysql, php and i really don't want to swap out a bunch of db-driven websites that are on it. i may still look at some type of raid 1 pci setup w/ a couple of 320GB hdds which would last me for a bit, i was just trying to set it up so i wouldn't have to mess with it for a couple years...decisions, decisions...
 

bob4432

Lifer
Sep 6, 2003
if i decide to just do raid 1, are the separate cards much better than the onboard solutions? being the frugal person i am, i could just add a pair of 320GB hdds to my main rig in raid 1 and have the other machines send the hdd images there. i would probably step up to GbE though...

thoughts please....
 

Madwand1

Diamond Member
Jan 23, 2006
I think it'd be fine. Write performance couldn't get any faster (than single-drive speed), and only read performance could improve with a good implementation, which "stripes" reads by taking advantage of the duplication across the two drives. But for the most part, inexpensive solutions won't have this feature, it's not easy to track down, and you'd compromise overall performance by adding a PCI controller.
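The read-striping idea above can be sketched in a few lines: both mirrors hold identical data, so independent reads can be dispatched to alternate drives. This is a toy model with a hypothetical interface, not any real driver's API:

```python
import io

class Raid1Reader:
    """Toy model of RAID 1 read balancing: round-robin across mirrors."""

    def __init__(self, mirrors):
        self.mirrors = mirrors   # e.g. two file-like objects with identical data
        self.next = 0            # round-robin pointer

    def read(self, offset: int, length: int) -> bytes:
        # Pick the next mirror in rotation; a smarter implementation would
        # pick the drive whose head is closest to `offset`.
        drive = self.mirrors[self.next]
        self.next = (self.next + 1) % len(self.mirrors)
        drive.seek(offset)
        return drive.read(length)

a = io.BytesIO(b"identical mirrored data")
b = io.BytesIO(b"identical mirrored data")
r = Raid1Reader([a, b])
print(r.read(0, 9))    # served from mirror 0
print(r.read(10, 8))   # served from mirror 1; parallel on real hardware
```

On real hardware the two requests can be in flight on different spindles at once, which is where the read speedup comes from; writes get no such benefit because every write must hit both mirrors.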

So in your place, not re-building anything, I'd probably go with onboard RAID 1 as-is. Yet again, no RAID is a backup, regardless of level or cost or quality, and RAID itself is added complexity (RAID 1 minimally so) and a potential point of failure (not least user error), so if the data mattered, I'd want an external backup as well.
 

bob4432

Lifer
Sep 6, 2003
Originally posted by: Madwand1
I think it'd be fine. Write performance couldn't get any faster (than single-drive speed), and only read performance could improve with a good implementation, which "stripes" reads by taking advantage of the duplication across the two drives. But for the most part, inexpensive solutions won't have this feature, it's not easy to track down, and you'd compromise overall performance by adding a PCI controller.

So in your place, not re-building anything, I'd probably go with onboard RAID 1 as-is. Yet again, no RAID is a backup, regardless of level or cost or quality, and RAID itself is added complexity (RAID 1 minimally so) and a potential point of failure (not least user error), so if the data mattered, I'd want an external backup as well.

this would be the 2nd full backup for the machines (basically a place to back up the backups). i already have all machines going to my home server, along with all of the drives in that machine being backed up to other drives in the same machine. this is an added layer of backups.

the likelihood of all the drives i am currently using dying at the same time (this would be about 4-6 drives) is near impossible unless there is a fire, in which case i would have much bigger problems. i have thought about off site but there really isn't any place i currently have access to that would offer better protection than what i have at home, but i may still look into a firewire setup for that as an easy grab if the shtf.