I've been in the server admin role for a long time and of course there are prepackaged arrays that can do this, however I'd like to do a home project on the cheap (relatively). My goal is to build my own external storage array with the following requirements:
Rackmountable
Hot-swappable SATA II (at least 12 bays)
Redundant power supply
Supports at least a 2-node direct-attached cluster, preferably active/active
I have multiple servers for the nodes, but I'm unsure of all the pieces I would need for the external array. I want it to be a dumb box (JBOD) that just connects directly to each of the nodes over eSATA (InfiniBand was brought up, but I'm a little fuzzy on how that works and was told it was going the way of the dodo). I don't want a full motherboard and processor in it. I'm iffy on whether the RAID card should sit in the external array or in each node. If in the external array, I would need a special one that runs on its own (not sure if these even exist). If the RAID card(s) go in the nodes (I think this is probably right), then it might go something like this.
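If the RAID ends up in the nodes, one alternative to a hardware RAID card is plain HBAs plus Linux software RAID over the JBOD disks. A minimal sketch with mdadm, where the device names and disk count are assumptions for illustration:

```shell
# Hypothetical: 12 JBOD disks appear on this node as /dev/sdb../dev/sdm.
# RAID 6 survives two simultaneous drive failures and leaves
# (N - 2) disks of usable capacity.
mdadm --create /dev/md0 --level=6 --raid-devices=12 /dev/sd[b-m]

# Watch the initial resync progress:
cat /proc/mdstat
```

Note that with software RAID and two active nodes, only one node can safely assemble and write the array at a time unless a cluster-aware layer sits on top, which is part of why the RAID-card placement question matters.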
I was looking at the Chenbro RM23212 as the enclosure. That takes care of the hot-swap bays, and I can get a redundant power supply with it, but that's where I start to get fuzzy.
From my research it seems like I would need a SATA multiplexer (a 2:1 port selector) on each drive so that both hosts can reach it, like these:
http://www.lsi.com/storage_hom...ics/sata_multiplexers/
http://www.mysimon.com/9015-11590_8-53656113.html
Then I would need SATA port multipliers to combine that many SATA connections into "trunked" eSATA connections to each node, like these:
http://www.cooldrives.com/cosapomubrso.html
http://usa.chenbro.com/corpora...oducts_line.php?pos=36
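One caveat with port multipliers: every drive behind one shares a single SATA II link, so sequential throughput divides across busy drives. A rough back-of-the-envelope (the drive-per-multiplier count is an assumption; real-world numbers are lower due to protocol overhead):

```shell
# All drives behind one port multiplier share one SATA II link.
# SATA II is roughly 300 MB/s usable; assume a hypothetical
# 6-drive multiplier per eSATA cable.
link_mbs=300
drives=6
echo "$((link_mbs / drives)) MB/s per drive when all are busy"
```

That works out to about 50 MB/s per drive under full load, which is worth factoring in before deciding how many drives to hang off each eSATA cable.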
I guess my question is: is it even possible for me to do this without developing my own firmware? Which way is it, RAID card in the external array or in each node, or can you do either? What are the pros/cons of each? Am I missing any components or overlooking problems? If I'm on the right track, what are some good, compatible components I may not have found? Any other advice besides "just buy a prebuilt OEM"?
I was looking at perhaps using OpenFiler (I've read it can do clustering) on the nodes. Remember, I want to go SATA II, and connectivity from the nodes will be iSCSI/NFS, so SCSI and Fibre Channel are out. Thanks for any and all constructive input.
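For the iSCSI side, OpenFiler drives this through its web UI, but the underlying export of a RAID device looks roughly like the following sketch using tgtadm from scsi-target-utils (the target IQN and backing device here are assumptions, and OpenFiler's own target stack may differ):

```shell
# Create iSCSI target #1 with a hypothetical IQN.
tgtadm --lld iscsi --op new --mode target --tid 1 \
    -T iqn.2010-01.local.filer:array0

# Attach the (assumed) RAID device as LUN 1 behind that target.
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
    -b /dev/md0

# Allow any initiator to connect (tighten this in practice).
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
```

The point is just that once a node owns the disks, exporting them over iSCSI is the easy part; the hard part of this build is the shared SATA path underneath.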
Moved to appropriate forum - Moderator Rubycon