Drive Mounting

Anteaus

Platinum Member
Oct 28, 2010
2,448
4
81
I'm currently knee deep in Ubuntu Server. I'm building a new file server for home and have decided to make a serious effort to stick with Linux so hopefully I can get some guidance. I'm currently using a laptop as a practice machine for learning the ropes. My hardware should be shipping soon.

I have a few questions so if anyone has answers I would appreciate them. For our purposes, just assume a standard Samba file server without raid. Also, I want everyone who can see my shares to have the option to read/write to all directories within them.

1. When I create my mount points and mount my storage drives, is there anything I need to do up front with chmod to keep from having big issues with permissions later?

2. I plan to have four 4TB hard drives mounted, each drive as its own share. Each drive has a matching external drive for backup, which will be rsync-ed with its corresponding internal drive at least once per week. These drives will stay disconnected the rest of the time. Is there a way to plug these drives in and have Linux not only recognize which drive it is, but also mount each at a predetermined mount point? Currently they keep coming up as different device nodes, which makes all of this far harder than it should be. Juggling four external drives from a command line is not my idea of fun. I'm confident that one of you has a nice elegant solution to this, because if not, it's enough to push me over to WS2012E.

3. Where should I be mounting my permanent drives? It's hard to find a straight answer. From what I've found, /mnt is for temporary mounts and /media is for removable media in general, but I just wanted clarification. My current plan is to mount the hard drives under /media and have the externals also mount there as needed for scripted rsync.

i.e.

/media/drive1
/media/drive2
/media/drive3
/media/drive4
/media/drive1ext
/media/drive2ext
/media/drive3ext
/media/drive4ext

I'm open to suggestions for a more optimal layout.

Thanks in advance.
 

mv2devnull

Golden Member
Apr 13, 2010
1,539
169
106
2. Does Ubuntu have /dev/disk/by-id, /dev/disk/by-label, /dev/disk/by-path, and /dev/disk/by-uuid, and do your drives show the same id/path/uuid there every time?
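If those directories are there, a quick look plus a UUID-keyed fstab line is all it takes (the UUID and device name below are made up, just to show the shape):

```shell
# List the persistent names udev created (UUID shown is an example):
ls -l /dev/disk/by-uuid/
#   1f2e3d4c-0000-0000-0000-000000000000 -> ../../sdb1

# An /etc/fstab entry keyed on UUID mounts the same filesystem at the
# same place no matter which /dev/sdX the drive enumerates as:
# UUID=1f2e3d4c-0000-0000-0000-000000000000  /media/drive1  ext4  defaults  0  2
```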
 

theevilsharpie

Platinum Member
Nov 2, 2009
2,322
14
81
1. Not really. You'd need to better describe what it is that you want to do, but in general, permissions problems are simple to fix.

2. udev controls how USB devices are mounted. By default, it will automount a USB drive using the first alphabetical drive slot available, but you can write custom udev rules so that particular devices are mounted to particular directories. You can also have udev run scripts, which could trigger your backup program automatically when a drive is plugged in.
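As a sketch of such a rule (the filename, serial number, and script path are all hypothetical; `udevadm info /dev/sdX` shows the real `ID_SERIAL` for a given drive):

```
# /etc/udev/rules.d/99-backup-drives.rules  (hypothetical filename)
# When the partition with this serial appears, run a mount/backup script:
ACTION=="add", SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", \
    ENV{ID_SERIAL}=="WDC_WD40EZRX-EXAMPLE-SERIAL", \
    RUN+="/usr/local/bin/mount-and-backup.sh drive1ext"
```

One caveat: processes spawned via RUN+= are expected to be short-lived, so a long rsync is usually better launched detached (or via a systemd unit/at job) from that script rather than run directly in it.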

3. It doesn't really matter where you put it, but I personally stick file shares under /srv.
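For the "everyone can read/write everything" goal, a share definition along these lines keeps ownership out of the way (share name and path are placeholders) — files get created as a single designated user with permissive modes:

```
# /etc/samba/smb.conf -- example share (name and path are placeholders)
[drive1]
   path = /srv/drive1
   read only = no
   guest ok = yes
   force user = nobody
   create mask = 0666
   directory mask = 0777
```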
 

Anteaus

Platinum Member
Oct 28, 2010
2,448
4
81
1. Not really. You'd need to better describe what it is that you want to do, but in general, permissions problems are simple to fix.

2. udev controls how USB devices are mounted. By default, it will automount a USB drive using the first alphabetical drive slot available, but you can write custom udev rules so that particular devices are mounted to particular directories. You can also have udev run scripts, which could trigger your backup program automatically when a drive is plugged in.

3. It doesn't really matter where you put it, but I personally stick file shares under /srv.

1. So if I create the mount points and mount the drives with sudo, don't mess with ownership of said mount points, and then set them to share with Samba, I won't have any issues? For example, if I mount the drives, share the folder, and someone writes a file to the share, who owns the file on the local Linux server? Does it automatically become owned by the user who created the folder, or are the permissions automatically adjusted so that everyone can read/write? This is the area I am most murky on. With Windows I'm used to just moving files around....when I connected a drive, I never had to worry about which user was logged in to know whether I'd be able to access the files.

2. Going from what you guys said, I did some research. Please confirm what I found. If I use e2label to set the filesystem label on a particular external drive and connect it, I can configure udev (or fstab?) to read that label and mount it at a predetermined point based on that label? A sort of filesystem IF-THEN statement?
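That is the general idea. A minimal sketch of the label route (the device node and label are examples):

```shell
# Tag the filesystem once (ext2/3/4; /dev/sdX1 is whatever node the
# external happened to get *this* time -- the label survives re-plugging):
sudo e2label /dev/sdX1 drive1ext

# Then /etc/fstab can key on the label instead of the device node:
# LABEL=drive1ext  /media/drive1ext  ext4  noauto,nofail  0  0
```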

In theory, using this reasoning or something similar, I can connect all four external drives at the same time, have them all mount at predetermined points, and then run a script that will rsync all four of them sequentially?

I really appreciate all the help.
 

Scarpozzi

Lifer
Jun 13, 2000
26,392
1,780
126
Windows rights are kind of bunk because they use the everyone group to give "everyone" read/write access. You can essentially do the same thing in Linux. The real key is how the SMB service security is configured. I've not set it up in a long time because I typically mount CIFS on Linux instead of going the other way.

fstab can be tricky because you need to make sure you get it exactly right. If it tries to mount something that doesn't exist or hits an error, it just won't mount. It gets nasty when you're dealing with having to remount as read-write to fix errors in fstab.
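One way to reduce that risk is to dry-run fstab before rebooting: `mount` with `-f` (fake) parses every entry without mounting anything, and a plain `mount -a` applies them while the system is still up, so errors surface immediately instead of at boot:

```shell
# Parse /etc/fstab without actually mounting (fake + all + verbose):
sudo mount -fav

# Actually mount everything listed; errors appear now, not at boot:
sudo mount -a
```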

If you're trying to replicate data, have you considered software RAID? It might keep you from having to sync stuff around unless you're trying to keep historical copies of data.


Now I'm going to say what I always say with these kinds of posts. I bought a NAS device. It does CIFS/NFS/iSCSI and all three can be mounted on Windows and Linux...it works great, provides RAID, runs apps and a web server and burns 13 watts. Hardware was $150 + drives with firmware updates included....there are models from many companies between $400-500 with 4-5 drive bays and hot-swappable support. To me, you can't beat that because it's a set it and forget it solution that is less dependent on an OS. When you work with this stuff, you want to play with it less.
 

Red Squirrel

No Lifer
May 24, 2003
71,313
14,085
126
www.anyf.ca
I would strongly recommend using mdadm raid and not just a bunch of individual drives. It will be easier to manage, especially when (not if, but when) you have drive failures. If you keep any individual drives, only use them for backups, not for directly accessed data. Raid 5 for best bang for your buck, raid 10 for better performance and slightly better fault tolerance (depending on which drives fail).
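For reference, creating such an array with mdadm looks roughly like this — the device names are examples, and the first command destroys whatever is on those disks:

```shell
# Build a 4-disk RAID 5 array (example device names -- double-check yours!):
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put a filesystem on it and persist the array definition so it
# assembles at boot (Ubuntu keeps the config in /etc/mdadm/mdadm.conf):
sudo mkfs.ext4 /dev/md0
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
sudo update-initramfs -u
```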
 

Anteaus

Platinum Member
Oct 28, 2010
2,448
4
81
Well, I figured out how to mount by ID, so I have it working beautifully now. Now I'm trying to figure out how to auto-mount my external drives to specific directories. I think it's something easy that I'm missing....if I put the info in fstab, it stalls on boot.
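That boot stall is the classic symptom of fstab listing a drive that isn't attached. Marking the external entries `noauto` (don't mount at boot) and `nofail` (don't treat absence as an error) avoids it; the script then mounts them explicitly. Labels here are examples:

```
# /etc/fstab -- externals mount only when asked for (labels are examples):
LABEL=drive1ext  /media/drive1ext  ext4  noauto,nofail  0  0
LABEL=drive2ext  /media/drive2ext  ext4  noauto,nofail  0  0
```

With entries like these, a plain `mount /media/drive1ext` in the backup script does the right thing whenever that drive happens to be plugged in.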

Red Squirrel - I actually disagree with that.

First, with individual drives and individual backups, if any one drive dies it is extremely simple to replace that drive and copy the data back over from the backup. If any other component dies on the machine, it's a simple matter of moving the drives to another machine and mounting them. A drive failure in a hardware/software raid affects all storage, whereas with individual drives a single failure only affects that one drive. No rebuilds to worry about, and the drives can be mounted directly on any Linux system, or even Windows with the appropriate ext driver.

Second, raid is for redundancy, not backup. Why would I give up storage capacity for parity when I have a good backup in place? Better to just take the drive out of action and let the server continue on, instead of running in a degraded state at the risk of the other drives. Also, running individual drives instead of raid can increase their service life, because it doesn't require drives A, C, and D to spin up just because you need a file that's on drive B.

Third, Raid 5 is unnecessarily risky with consumer drives. Even a WD Green can live peacefully if it only spins up when needed, whereas with Raid 5 all drives must stay in action. One bad drive, and every minute you run degraded increases the risk of catastrophic failure. On top of that, for the price of one enterprise drive you can usually buy twice the capacity in consumer drives, with the downside that they aren't appropriate for Raid 5.....I mean, you can do it, but it's a gamble. The risk is well documented.

I never understood how Raid 5 became so virtuous. I get that there's a "cool" factor to it, but for many people it just adds unneeded complexity.

Scarpozzi - Took a serious look at NAS appliances but there were four things I didn't like.

1. They were usually based on Atom processors and had little RAM, which is a bit lacking for the amount they charge. I was able to build a much better low-power machine for less than $500, with much more RAM and a much better processor.

2. If your NAS dies, your drives go with it, since transplanting them into anything other than the exact same NAS type will likely require a wipe.

3. The motherboard I bought supports 6 SATA drives, and it's a simple matter to add another SATA controller for more. Expanding past the limits of a NAS unit usually means buying a new NAS or an expensive eSATA drive expander.

4. With some or all of them, if you don't use drives that are on their compatibility list (basically, NAS/RAID-level drives), you get zero support. I've already got plenty of high-capacity consumer drives to use...I'm not going to buy new drives just to use RAID.

Anyways, that's my take. I appreciate the advice nonetheless.
 

Red Squirrel

No Lifer
May 24, 2003
71,313
14,085
126
www.anyf.ca
With mdadm raid you can transplant the drives into another system. But I agree about the NAS boxes, and that's one of the reasons I don't use them. Relying only on backups, though, you still need to copy files around and possibly deal with permission issues, etc., when a drive fails. With raid, you don't have to do anything more than pop a new drive in. I've had a drive fail while I was in the middle of doing work. I'd get the alarm in my email or on my phone, check it out, confirm the alarm is true, then grab a spare drive, replace the failed one, and let it rebuild. Takes all of 5 minutes, then I can resume my work. Typically in this situation I'll also double-check that my backups are running properly, then move on to my work. No need to wait for anything to copy.
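That replacement dance is only a few commands with mdadm (array and device names here are examples):

```shell
# Mark the dying disk failed and pull it from the array:
sudo mdadm --manage /dev/md0 --fail /dev/sdc --remove /dev/sdc

# After physically swapping in the replacement disk:
sudo mdadm --manage /dev/md0 --add /dev/sdf

# Watch the rebuild progress:
cat /proc/mdstat
```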

Another advantage of raid is fewer volumes, just a few bigger ones. It is good to have some separation though. I currently have a raid 10 with 4 3TB drives and a raid 5 with 8 1TB drives. The raid 5 has had every single drive replaced at one point or another, and has even been transplanted to a new box, and I never had to touch backups. I accidentally deleted a folder once, and restoring from backups took hours, because not only did I have to wait for it to copy, I had to fix all the permissions and ownership and all that. Though that is a Linux-specific issue, as in Windows all that would just inherit properly and it would work. I can't imagine having to deal with that for an entire volume, knowing that it can happen at any given time.

In the case of databases or other data being actively accessed it's also important to not disrupt that access. Heck, if I'm watching a movie and a drive fails, I can finish watching my movie. It's simply crazy to NOT do raid. Even raid 5 with consumer drives has less chance of losing all data than a single drive. No matter what, you need to have backups anyway, but if you can reduce the chance of needing to use them, why not do that?

Green drives ARE crap for raid though, but who cares; there are other consumer drives that aren't any more expensive. Use the greens for backups, use slightly more expensive drives for raid. I personally use 1TB Blacks for my raid 5 and some Hitachi/Toshibas for my raid 10. I didn't really research the Toshibas that well; they were a good price and they were not green, so I bought them.
 

Anteaus

Platinum Member
Oct 28, 2010
2,448
4
81
I think it all depends on the purpose of said server, as well as how much money you are willing to put into drives. I already have 6TB worth of internal WD Greens in my desktop and the same in externals, which have served me well and will continue to serve me well once I put them in the new machine next week.

I'm glad you like your setup though. For myself, I'm just not going to bother with raid without appropriate drives to go with it. Thanks for the information though. :)
 

manly

Lifer
Jan 25, 2000
13,589
4,239
136
Linux software RAID doesn't care about TLER, so consumer drives are fine. I'm not so sure about the hypothesis that green drives that only spin when needed are perhaps longer-living than drives in constant rotation.

I tend to believe the Backblaze study that showed Seagate consumer drives have a significantly higher rate of failure than the other remaining brands. IIRC WD Blue drives still have a 2 year warranty vs Seagate's 1 year for Barracudas. Granted storage is fairly cheap and replacement cost is marginal in the grand scheme of things, but for these few reasons I'd lean away from Seagate drives.
 

Anteaus

Platinum Member
Oct 28, 2010
2,448
4
81
Linux software RAID doesn't care about TLER, so consumer drives are fine. I'm not so sure about the hypothesis that green drives that only spin when needed are perhaps longer-living than drives in constant rotation.

I'm not saying I agree with the theory, but it's not the rotation so much as the head parking that causes the wear and tear. WD Green drives aren't designed to be in constant demand over long periods of time. Electronically and physically, they are designed to serve files and then lie dormant until needed again, as their price reflects.

The Backblaze study is quite interesting, but it also mentions the following, which relates directly to my concerns. That said, I'm glad you brought it up, because I feel a bit better about software Raid and consumer drives, though I'm still not going to do it. I'm surprised about Hitachi drives being so reliable, but that's more down to product ignorance on my part. It's just not a brand I'd normally consider.

"For example, Backblaze said it will stop buying Seagate LP 2TB drives and Western Digital Green 3TB drives, because they just don’t work in the company’s environment. Part of the problem, Backblaze says, is these drives are designed to spin down when not in use to save power. That’s a great feature for a home PC user, but in an industrial environment Backblaze says the drive would spin down only to spin back up a few minutes later. The end result being more wear and tear on the drive than it was designed for."

http://www.pcworld.com/article/2089...eals-the-most-reliable-hard-drive-makers.html
 

Anteaus

Platinum Member
Oct 28, 2010
2,448
4
81
Just to push the conversation along, do any of you have experience with Greyhole? Instead of acting like a RAID array, it works much like Drive Extender from WHS v1 and lets you create a large storage pool of drives. However, instead of adding redundancy to all files contained within it, the user chooses which shares deserve redundancy, and Greyhole maintains a second copy on one of the other drives. The best part is that the drives can be removed from the pool and mounted separately if need be.

I kind of like this because it gives redundancy on demand without wasting space on less critical files. That said, it isn't RAID nor is it trying to be. It's basically LVM+.

http://www.greyhole.net/

Food for thought.
 

Red Squirrel

No Lifer
May 24, 2003
71,313
14,085
126
www.anyf.ca
Anything that allows you to create volumes with multiple drives and incorporates redundancy sounds good in my book. I am more old school and personally prefer traditional mdadm raid, but I would recommend any of those pooling solutions over single drives, provided they're proven stable and open source. (Look what happened to MS's Drive Extender - you don't want to rely on something that may vanish.)

No matter what solution you use, you want backups too. Backups within the same pool/system (for quick access/versioning) as well as backups outside the system, preferably offline backups such as removable drives. I'd say tapes, but hard drives are actually cheaper than tapes; get a drive dock and a bunch of cheap green drives.
 

Anteaus

Platinum Member
Oct 28, 2010
2,448
4
81
"...I would recommend any of those pooling solutions over single drives, provided the solutions are proved stable and are open source."

See that's only part of your logic I don't understand. If you are running 3 drives in a pooling situation without any sort of parity, how is that more beneficial to running with individual drives. Assuming the user chooses to not actually use take advantage of in pool backup, the failure rates are exactly the same as running with individual drives, except that now the user doesn't know which drives the files he/she needs resides on.

I see the virtue of Raid 5 or 10 if the user is willing to put the cash into appropriate drives because there is actual parity and extended up time at the cost of storage capacity. I really like that setup with enough drives. Without this though, the user actually loses control and at best doesn't have any idea which drives contains the drive he/she needs and at worse can be as dangerous as Raid 0. Pooling for the sake of pooling is misguided in my opinion. To me, splitting my movie collection over two drives and maintaining two shares is better than pooling the two drives and using share at the expensive of control. For what it's worth, miniDLNA allows the user to mount multiple video shares that it then combines into one that is visible client side, so ease of use is less compromised.

In my case I have two 3TB WD Greens and one 4TB Seagate, and I know precisely where all my files are. Raid 5 is not an option, and even pooling would only add unnecessary complication to an otherwise simple scenario. I maintain a solid backup regimen to external drives, which stay detached and put away when not in operation.

I respect what you are saying, and I'm obviously poking at you a bit. I just think it's irrational to say that solutions involving individual drives are never as good. Of course, at this point you're welcome to do the "oh, well I didn't realize what you were trying to do" dance, or even stick to your guns, which I respect you for, but I disagree with the premise.

For myself, it's far simpler to have 3 individual drives with their equal-sized matching external drives for backup. That way I can run Beyond Compare or rsync on each drive and maintain nice clean backups, as I have been for years now. :)
 

Red Squirrel

No Lifer
May 24, 2003
71,313
14,085
126
www.anyf.ca
See, that's the part of your logic I don't understand. If you are running 3 drives in a pooling setup without any sort of parity, how is that more beneficial than running individual drives? Assuming the user chooses not to take advantage of in-pool backup, the failure rates are exactly the same as with individual drives, except that now the user doesn't know which drive the files he/she needs reside on.

I see the virtue of Raid 5 or 10 if the user is willing to put the cash into appropriate drives, because there is actual parity and extended uptime at the cost of storage capacity. I really like that setup with enough drives. Without it, though, the user actually loses control and at best has no idea which drive contains the file he/she needs, and at worst it can be as dangerous as Raid 0. Pooling for the sake of pooling is misguided in my opinion. To me, splitting my movie collection over two drives and maintaining two shares is better than pooling the two drives and using one share at the expense of control. For what it's worth, miniDLNA lets the user mount multiple video shares that it then combines into one that is visible client-side, so ease of use is less compromised.

In my case I have two 3TB WD Greens and one 4TB Seagate, and I know precisely where all my files are. Raid 5 is not an option, and even pooling would only add unnecessary complication to an otherwise simple scenario. I maintain a solid backup regimen to external drives, which stay detached and put away when not in operation.

I respect what you are saying, and I'm obviously poking at you a bit. I just think it's irrational to say that solutions involving individual drives are never as good. Of course, at this point you're welcome to do the "oh, well I didn't realize what you were trying to do" dance, or even stick to your guns, which I respect you for, but I disagree with the premise.

For myself, it's far simpler to have 3 individual drives with their equal-sized matching external drives for backup. That way I can run Beyond Compare or rsync on each drive and maintain nice clean backups, as I have been for years now. :)


I was not clear; I am assuming a setup where the pooling solution is configured with parity of some sort, so that if a drive fails no data is lost. The advantages are no disturbance of data, and better organization. Why should /mnt/drive1 and /mnt/drive2 serve the exact same purpose when you could have /mnt/md0 with 3 or more drives in raid 5 and double the space of separate drives? You can back up to individual drives, but actively accessed data is better on a redundant pooling setup (whether it's raid or something else).

I can't imagine trying to organize my data on all these drives if they were mounted individually. (The screenshot of the drive list didn't survive.)

I guess I'm more anal though; if I have a folder called movies, I want it to have ALL my movies, not have another folder called movies somewhere else that has some other ones.

But I guess in the end you do what you're more comfortable with. I prefer raid so I don't even have to touch backups if a drive fails; some prefer to react to a failure as it happens and just restore from backups to a single drive.
 

Anteaus

Platinum Member
Oct 28, 2010
2,448
4
81
I can't imagine trying to organize my data on all these drives if they were mounted individually.

True, but then again I think we can both agree that you have a setup that can rival some enterprise systems. I'm talking about 3 drives....you are talking about 100....huge difference in scale lmfao. You obviously use your setup for business. I'm just talking about stashing some downloads and pushing video. As with everything, context is king.