Custom External Storage Array with Cluster Support

BaDaBooM

Golden Member
May 3, 2000
1,077
1
0
I've been in the server admin role for a long time, and of course there are prepackaged arrays that can do this; however, I'd like to do a home project on the cheap (relatively). My goal is to build my own external storage array with the following requirements:

Rackmountable
Hotswappable SATA II (at least 12 bays)
Redundant power supply
Supports at least a 2-node direct-attached cluster, preferably active/active

I have multiple servers for the nodes, but I am unsure of all the pieces I would need for the external array. I want it to be a dumb box (JBOD) that just directly connects to each of the nodes over eSATA (InfiniBand was brought up, but I'm a little fuzzy on how that works and was told it was going the way of the dodo). I didn't want a full motherboard and processor in it. I was iffy on whether the RAID card would sit in the external array or in each node. If it's in the external array, I would have to find a special one that runs on its own (not sure if these even exist). If the RAID card(s) go in the nodes (I think this is probably right), then it may go something like this.

I was looking at this RM23212 from Chenbro as the enclosure. That takes care of the hotswap bays, and I can get a redundant power supply with it, but that's where I start to get fuzzy.

From my research it seems like I would need a SATA multiplexer for each drive so that it could talk to multiple hosts, like these:

http://www.lsi.com/storage_hom...ics/sata_multiplexers/
http://www.mysimon.com/9015-11590_8-53656113.html

Then I would need SATA port multipliers to combine that many SATA connections into "trunked" eSATA connections to each node, like these:

http://www.cooldrives.com/cosapomubrso.html
http://usa.chenbro.com/corpora...oducts_line.php?pos=36

I guess my question is, is it even possible for me to do this without developing my own firmware? Which way is it, RAID card in the external array or in each node - or can you do either? Pros/cons of each way? Am I missing any components, or am I overlooking problems? If I am correct, what are some good components that I may not have found that have good compatibility? Any other advice other than "just buy a prebuilt OEM"?

I was looking at perhaps using OpenFiler on the nodes (I've read it can do clustering). Remember, I want to go SATA II, and connectivity from the nodes will be iSCSI/NFS, so SCSI and Fibre Channel are out. Thanks for any and all constructive input.

Moved to appropriate forum - Moderator Rubycon
 

BaDaBooM

Golden Member
May 3, 2000
1,077
1
0
I tried calling Chenbro for some information, but that was like calling first-level tech support. Anyone have some knowledge on this? Thanks.
 

MerlinRML

Senior member
Sep 9, 2005
207
0
71
Originally posted by: BaDaBooM
Rackmountable
Hotswappable SATA II (at least 12 bays)
Redundant power supply
Supports at least a 2-node direct-attached cluster, preferably active/active

I want it to be a dumb box (JBOD) that just directly connects to each of the nodes over eSATA (InfiniBand was brought up, but I'm a little fuzzy on how that works and was told it was going the way of the dodo). I didn't want a full motherboard and processor in it. I was iffy on whether the RAID card would sit in the external array or in each node.

Then I would need SATA port multipliers to combine that many SATA connections into "trunked" eSATA connections to each node, like these:

I was looking at perhaps using OpenFiler on the nodes (I've read it can do clustering). Remember, I want to go SATA II, and connectivity from the nodes will be iSCSI/NFS, so SCSI and Fibre Channel are out. Thanks for any and all constructive input.

Granted, you've already provided a fair amount of information, but I'm somewhat confused by seemingly contradictory configuration points. This may be my own lack of understanding on a few things, but I thought I'd ask.

So it sounds like you want a storage device. And you're looking to build a SAN using eSATA. However, you also mention running OpenFiler and iSCSI, and it seems like you've got a gobbledegook of protocols, interconnects, and storage access that don't really fit together. So I'm going to start by simplifying if I can.

So how about starting with the client connectivity side first? How are you planning to access the storage? Are your clients going to come in over an IP network using NFS or CIFS/SMB? Or do the clients require block-level access to the storage? Answering this question will tell you how to connect your storage and what your interconnect requirements are.

One additional point. Even though I'm avoiding interconnects so far, I don't believe eSATA will work that well for what you're hoping to do. It's really not designed to create a storage network. eSATA is really intended for a single connection, and a port multiplier is a way of enumerating multiple devices over that single connection, but it won't allow you to go beyond 5 devices total. SAS is really what you're looking for; SAS expanders and SAS switches are more in the realm of what you're describing. However, your desire for something affordable is likely impossible with that gear.
 

BaDaBooM

Golden Member
May 3, 2000
1,077
1
0
I will clarify. Note that the eSATA piece is not really being used as any kind of SAN protocol. It will only be used as direct-attached storage for the cluster nodes (more than likely only two, since I haven't found any SATA multiplexers that support more than two host connections). I understand that each port multiplier can only handle a maximum of 5 drives, so a 12-bay enclosure will require at least three external connections for each cluster node - six total. Assuming I have the RAID cards in the cluster nodes (otherwise I would probably need a different type of external interconnect), I should just need to make sure my RAID card or cards have three or more eSATA connections.
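To sanity-check the part count, here's a rough back-of-the-envelope sketch of the topology I have in mind (just Python used as a calculator). The 12 bays, 2-host multiplexers, and 5-drive port multipliers are the numbers from this thread, not any particular vendor's specs, so correct me if the model itself is off.

```python
import math

# Assumed topology from this thread (plan only, not verified hardware specs):
# - each drive gets a 2-host SATA multiplexer so both cluster nodes can see it
# - each node reaches its side of the multiplexers through 5-drive port multipliers
# - the enclosure has 12 hotswap bays and the cluster has 2 nodes

BAYS = 12              # drives / hotswap bays in the enclosure
NODES = 2              # cluster nodes (limited by 2-port multiplexers)
PM_FANOUT = 5          # max drives behind one SATA port multiplier

multiplexers = BAYS                                   # one multiplexer per drive
links_per_node = BAYS                                 # each node needs a path to every drive
port_multipliers_per_node = math.ceil(links_per_node / PM_FANOUT)
esata_ports_per_node = port_multipliers_per_node      # one eSATA cable per port multiplier
total_esata_cables = esata_ports_per_node * NODES

print(f"multiplexers:           {multiplexers}")            # 12
print(f"port multipliers/node:  {port_multipliers_per_node}")  # 3
print(f"eSATA ports per node:   {esata_ports_per_node}")       # 3
print(f"eSATA cables total:     {total_esata_cables}")         # 6
```

That works out to the same three eSATA connections per node (six total) I mentioned above.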

The clients will be using iSCSI, as they will be VMware ESXi hosts. I may also share some portions of the space out using CIFS/SMB as a secondary consideration (one reason I was looking at OpenFiler, as it can do both). Note that this isn't going to be a fully supported production network, so I won't need the performance of a full-blown SAS or Fibre Channel solution. It's a home project where I can play with making as efficient and redundant a system as possible while keeping it relatively affordable.

Does that help or does that make it less clear? ;)

 

SammyJr

Golden Member
Feb 27, 2008
1,708
0
0
You're not going to find SAS clustering solutions in the DIY market... not cheap anyway. Enhance has some units with BYO drives, but they're pricey.

For DIY, iSCSI/NFS is your choice. Look at the SuperMicro barebones chassis; there are a few with 12 and 16 hotswap bays, redundant PSUs, and some with SAS expanders. Add a server-type motherboard/RAM/RAID card. Personally, I'd go with an LSI hardware RAID card, or the SuperMicro UIO version if you get the appropriate SM motherboard. IMHO, LSI RAID has the best variety of OS support. Adaptec is also broadly supported, but a lot quirkier.

The problem is software. IETD, the iSCSI target used with OpenFiler, is pretty outdated; it doesn't support SCSI-3 (persistent reservations). FreeNAS has a better target in 0.7, but it isn't completely done.

Probably your best bet at this point is OpenSolaris. The new iSCSI target works very well and is very fast, and NFS and CIFS are also supported. OpenSolaris/ZFS folks talk about RAID-Z/software RAID, but I'm not 100% confident that hotswap with SAS HBAs works reliably on Solaris, just from what people have posted. Hardware RAID 6 is fine for me.

If you have a TechNet subscription, Windows Storage Server 2008 is another good choice. It has a fast, compatible iSCSI target. The biggest problem with it is the Windows Update reboots, but there are ways around that. There's also StarWind for Windows and Open-E (a Linux distro), but both limit you to 2 TB in the free version.
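The SCSI-3 bit matters for your cluster because active/active nodes normally fence each other with SCSI-3 persistent reservations, which IETD can't do. If you want to verify whether a given target actually supports them, a quick check from a Linux initiator looks something like this; it's only a sketch that shells out to sg_persist from sg3_utils, and /dev/sdb is just a placeholder for whatever device your iSCSI LUN shows up as.

```python
# Quick check for SCSI-3 persistent reservation support on an iSCSI LUN.
# Sketch only: requires sg3_utils on a Linux initiator, and /dev/sdb is a
# placeholder device node for the LUN presented by your target.
import subprocess
import sys

DEVICE = "/dev/sdb"  # example device; substitute your iSCSI LUN

def supports_persistent_reservations(device: str) -> bool:
    """Return True if PERSISTENT RESERVE IN (read keys) succeeds on the device."""
    result = subprocess.run(
        ["sg_persist", "--in", "--read-keys", device],
        capture_output=True,
        text=True,
    )
    # A target without SCSI-3 PR support typically rejects the command
    # (illegal request); a supporting target returns a (possibly empty)
    # list of registered keys and exits 0.
    return result.returncode == 0

if __name__ == "__main__":
    ok = supports_persistent_reservations(DEVICE)
    print(f"{DEVICE}: SCSI-3 persistent reservations "
          f"{'appear supported' if ok else 'not supported (or command failed)'}")
    sys.exit(0 if ok else 1)
```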

EDIT: Lio-target on Linux works really well, too, although the install is tricky if you've never built a kernel.