
large lvm filesystem with multiple iSCSI targets

Gooberlx2

Lifer
Our lab is trying to come up with adequate solutions for large and scalable storage. I think one easy solution for scalability would be multiple iSCSI arrays combined into an LVM volume group.

The infrastructure is definitely there to ensure adequate transfer/performance on the network side of things. Has anyone implemented such a solution or have comments? Any recommendations for which filesystem to use? The filesystem would be upwards of 30TB.
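The idea of pooling several iSCSI arrays into one volume group might look something like this minimal sketch, assuming open-iscsi and LVM2 on the server; the portal addresses, device names, and volume group name are all placeholders:

```shell
# Discover and log in to each iSCSI array (portal IPs are hypothetical)
iscsiadm -m discovery -t sendtargets -p 192.168.10.11
iscsiadm -m node -p 192.168.10.11 --login
iscsiadm -m node -p 192.168.10.12 --login

# Suppose the exported LUNs appear as /dev/sdb and /dev/sdc
pvcreate /dev/sdb /dev/sdc

# Combine them into one volume group and carve out a single large volume
vgcreate labdata /dev/sdb /dev/sdc
lvcreate -l 100%FREE -n storage labdata
```

Adding capacity later would just be another `iscsiadm` login plus `vgextend`, which is the scalability angle here.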

We are actually getting quotes from the local Dell rep for their EqualLogic iSCSI systems, but I think their prices are going to be way over what we can afford.

But there are certainly other options. For example, Pogo Linux has some iSCSI DAS units, as well as Nexenta-powered StorageDirectors (ZFS goodness). We've had good luck with some of Pogo's other products so far.
 
I haven't used any of those products personally, but I don't see why it wouldn't work. Generally, though, you'd do the LVM, RAID, etc. on the storage unit and just partition and/or mkfs the exported volume on the server. If you need to work around some limitation of the storage unit, you certainly could layer LVM and/or RAID over multiple devices via iSCSI, but that would increase the iSCSI bandwidth the server needs.

It looks like ext3 only goes up to 32TiB, so you'll probably want to look at something like XFS or JFS so that you have room to grow.
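For XFS that would be as simple as the following sketch (the volume path and mount point are examples from an assumed LVM setup):

```shell
# Format the logical volume with XFS, which handles multi-TB filesystems well
mkfs.xfs /dev/labdata/storage

# inode64 lets XFS place inodes anywhere on very large filesystems
mount -o inode64 /dev/labdata/storage /export
df -h /export
```

XFS also supports online growth via `xfs_growfs`, which pairs nicely with `lvextend` when more arrays get added.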

If the Nexenta devices use ZFS, that most likely means you'd just be mounting the exported filesystem directly from it via NFS, SMB, etc., which isn't what you're talking about here.

Wait, are you talking about using Linux to build a SAN instead of buying one from Dell? If so, I don't see why that wouldn't work, although you'd better brush up on your Linux software RAID and LVM. I can't see it being that difficult if you know what you're doing, but with enough disks to cover 30TB you're going to want multiple levels of redundancy. And I'm not sure how initiating an LVM snapshot on a volume exported as a block device would interact with the filesystem on the remote server's end; that would be something you'd need to test.
 
So if I understand it correctly, you want to set up an iSCSI server on Linux? I think that is a great idea. I personally wish we had gone that route instead of purchasing a Fibre Channel SAN. With a Linux iSCSI server you get more features, more flexibility, and I think you could get good throughput with dedicated switches and link aggregation. You can get external rackmountable eSATA enclosures that hold something like 15 drives. I would pack one or two of those full of drives, carve 'em up into some 7-disk RAID 5 arrays (and assign the leftover drives as global hot spares), then use LVM to define the logical block devices that you would export as iSCSI targets.
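The RAID 5 + hot spare + LVM + iSCSI-export recipe above could be sketched like this, assuming mdadm and tgt (scsi-target-utils) are available; the device names, volume sizes, and the IQN are all made up for illustration:

```shell
# Build a 7-disk RAID 5 array with one global hot spare (8 devices total)
mdadm --create /dev/md0 --level=5 --raid-devices=7 --spare-devices=1 \
    /dev/sd[b-i]

# Layer LVM on top so individual targets can be resized later
pvcreate /dev/md0
vgcreate iscsivg /dev/md0
lvcreate -L 2T -n target0 iscsivg

# Export the logical volume as an iSCSI target with tgtadm
tgtadm --lld iscsi --op new --mode target --tid 1 \
    -T iqn.2009-01.lab.example:target0
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
    -b /dev/iscsivg/target0
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
```

In production you'd restrict the initiator ACL instead of binding to ALL, and persist the target config rather than rebuilding it with tgtadm at each boot.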

Originally posted by: Nothinman

...
I'm not sure how initiating a LVM snapshot on a volume exported as a block device would interact with the filesystem on the remote server end, that would be something you'd need to test.
This should be basically the equivalent of how our Fibre Channel SAN takes snapshots. I've taken snapshots of logical volumes that were used as block devices in KVM virtual machines, and they work just fine. I can add the snapshot as a block device to a second virtual machine, and it sees the snapshot as a hard drive that is an exact point-in-time copy of the original logical volume. Exporting them via iSCSI should work just the same. It works great for making backups.
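The snapshot-for-backup workflow described above is just a couple of LVM commands; this sketch assumes the volume names from earlier in the thread and a filesystem LVM can mount locally:

```shell
# Take a point-in-time snapshot of the exported volume; the -L size is
# how much copy-on-write space the snapshot can consume before it fills
lvcreate -s -L 50G -n target0-snap /dev/iscsivg/target0

# Mount the snapshot read-only for backup while the origin keeps serving I/O
mount -o ro /dev/iscsivg/target0-snap /mnt/backup

# Remove it when done; an active snapshot slows writes to the origin
umount /mnt/backup
lvremove /dev/iscsivg/target0-snap
```

Note the caveat from the thread still applies: if the origin is exported as a raw block device, the remote server's filesystem may not be in a clean state at snapshot time, so a consistency check (or quiescing the initiator) is worth testing.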
 
Originally posted by: Nothinman
This should be basically the equivalent of how our fibre channel SAN takes snapshots.

Most likely, but I'd still run it through some tests before saying it's good. =)

Well of course. But that goes for everything... at least in my datacenter. 😉
 
Thanks guys. I figured something like that would work. LVM/RAID are pretty easy to configure and manage. I'd probably go with RAID 6 per iSCSI DAS unit and stripe across them for a RAID 60 setup. Yeah, I'd definitely have to do testing with the snapshots. My big concern was whether LVM would be a huge bottleneck at those group/volume sizes, or anything else that I may not have considered.
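The RAID 60-style layout — each DAS unit handling its own RAID 6 internally, with the server striping across the exported LUNs — could be expressed with LVM striping rather than a second md layer; a sketch with hypothetical device and volume names:

```shell
# Each iSCSI DAS unit presents one RAID 6 LUN, e.g. /dev/sdb, sdc, sdd
pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate labdata /dev/sdb /dev/sdc /dev/sdd

# -i 3 stripes the volume across all three PVs; -I 256 sets a
# 256 KiB stripe size, giving RAID 0 across the RAID 6 LUNs
lvcreate -i 3 -I 256 -l 100%FREE -n storage labdata
```

The tradeoff versus plain concatenation is that a striped volume can't be grown by simply adding a single new LUN later; you'd add PVs in matching sets or create a new striped volume.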

Though, we may actually end up going with Dell's arrays because of the massive discount our hospital apparently gets. I did get a chance to play around with them, and they are pretty slick. The concepts are simple and all the management can be done on the arrays themselves. Thin provisioning will probably prove to be quite useful as well, so that we can allocate for storage space up front in any studies, and expand as needed.
 
LVM is just a wrapper around device-mapper in the kernel these days; I really doubt it will become a bottleneck before the disks or the network do.
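That layering is easy to see on any host with LVM volumes active; each logical volume is just a device-mapper table entry:

```shell
# List device-mapper devices and their mapping tables; LVM volumes
# show up here as linear/striped dm targets
dmsetup ls
dmsetup table

# The same volumes from the LVM side
lvs
```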
 