
redundant block device between 5 servers

Brazen

Diamond Member
I need to synch a block device between 5 servers. Only one server will write to the block device and the other four just need to read from it. If this were only between 2 servers, drbd would be perfect, but it's my understanding that drbd will only do 2 or 3 nodes, and I need it to do 5. The replication must be immediate, preferably with the primary node not being told the writes are finished until the data is on all 5 nodes.

Any ideas?
 
Does it have to be a block device? I.e. could you get away with read-only NFS mounts on the other machines?
 
Does it have to be a block device? I.e. could you get away with read-only NFS mounts on the other machines?

This would be my first suggestion too. Have the device on a machine that serves a read-only copy of the data via NFS to your other machines.
 
Does it have to be a block device? I.e. could you get away with read-only NFS mounts on the other machines?

no, I actually just thought of that and was coming back to mention it. It does not have to be a block device. The important part is keeping the files in synch, but it needs to be immediate, preferably synchronous but I would settle for asynchronous.

NFS mounts would be great, but then how do I keep them all in synch? The only thing I can think of is create a cron job that copies any new files and have it run every 30 seconds, but that has obvious flaws to it.
 
no, I actually just thought of that and was coming back to mention it. It does not have to be a block device. The important part is keeping the files in synch, but it needs to be immediate, preferably synchronous but I would settle for asynchronous.

NFS mounts would be great, but then how do I keep them all in synch? The only thing I can think of is create a cron job that copies any new files and have it run every 30 seconds, but that has obvious flaws to it.

Perhaps I misread the original post, but you say you only need one host to write to it and the rest only read, right? Have your NFS server be the host that writes to the device, and since that volume is being served via NFS as read-only to your other hosts, the writes would be instantly available as soon as they're written.
 
Perhaps I misread the original post, but you say you only need one host to write to it and the rest only read, right? Have your NFS server be the host that writes to the device, and since that volume is being served via NFS as read-only to your other hosts, the writes would be instantly available as soon as they're written.

oh, well, the purpose of this is to replicate data out to sites over relatively slow lines (like 10mbit), so they don't have to read from the main server. So that won't work.
 
no, I actually just thought of that and was coming back to mention it. It does not have to be a block device. The important part is keeping the files in synch, but it needs to be immediate, preferably synchronous but I would settle for asynchronous.

NFS mounts would be great, but then how do I keep them all in synch? The only thing I can think of is create a cron job that copies any new files and have it run every 30 seconds, but that has obvious flaws to it.

ummm... why would you copy the files from the NFS mount? Simply mount the NFS share wherever you expect to have read access to the files, and then whatever needs to read a file simply does. For all intents and purposes, the file(s) in the NFS mount point would be seen on the system as if they were part of it. Just share it out as "ro" and you should be good to go.

You can set up the automounter to mount that NFS directory somewhere, but you can also do it as a hard mount (with the side effect that it can cause your system to hang when booting if the mount point cannot be established). Heck, you can even mount it under some other existing directory tree like "/usr/local/some_dir".
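To make that concrete, a read-only export plus a matching hard mount could look like the sketch below. The hostname "nfs-server", the subnet, and the paths are made up for illustration:

```shell
# On the NFS server, export /data read-only to the client subnet
# (hypothetical line in /etc/exports):
#   /data  192.168.1.0/24(ro,sync)

# On each client, a hard read-only mount via /etc/fstab:
#   nfs-server:/data  /usr/local/some_dir  nfs  ro,hard  0  0

# Or mount it by hand for a quick test:
mount -t nfs -o ro,hard nfs-server:/data /usr/local/some_dir
```

With "ro" set on the server side, the clients cannot write even if they mount read-write by mistake.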
 
oh, well, the purpose of this is to replicate data out to sites over relatively slow lines (like 10mbit), so they don't have to read from the main server. So that won't work.

Yeah, if that is the case, NFS mount will not help you. Best you can do is an "rsync", which is how things were done in the old days for ftp server replication.
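A minimal version of that, assuming the data lives under /data on the master (the hostname "master" and the paths are made up), would be a cron entry on each replica that pulls changes periodically:

```shell
# Hypothetical crontab entry on each read-only replica: pull from the
# master every minute, deleting files that were removed upstream.
# -a preserves permissions and timestamps; -z compresses, which helps
# over the slow 10mbit links mentioned above.
* * * * *  rsync -az --delete master:/data/ /data/
```

It's asynchronous rather than the synchronous replication originally asked for, but it's simple and only ships the deltas over the wire.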
 
Yeah, if that is the case, NFS mount will not help you. Best you can do is an "rsync", which is how things were done in the old days for ftp server replication.

Yeah, it looks like that may be the only simple solution until the next major version of drbd comes out (which supports multiple nodes, instead of just 2). The only other possibility would be to use a combination of gnbd, mdraid, and gfs, but I'm undecided if I want to go that route.
 