AFS - very cool stuff

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
Anybody here use AFS in production? I've been playing around with it the last few days, and I'm pretty impressed so far. Security integration with Kerberos was relatively painless, there are lots of options for volume management, there's a simple Windows interface through WAKE, and even a decent ACL editor in the Windows context menus. Are there any big pitfalls to AFS I'm not seeing? Obviously, the ease of setup depends a lot on what kind of Kerberos/AD/LDAP infrastructure you've got in place, but that would seem inevitable for any project of this scope. Any other AFS experience out there?
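For anyone curious what the client side looks like, here's roughly what I've been doing to authenticate and tweak ACLs (the cell name, realm, user names, and paths below are made up; your site might use klog instead of aklog depending on how the Kerberos side is wired up):

    # get a Kerberos ticket, then turn it into an AFS token
    kinit jdoe@EXAMPLE.COM
    aklog -cell example.com
    tokens                                      # confirm the token is there

    # grant a user read/lookup rights on a directory, then check the ACL
    fs setacl /afs/example.com/proj/web jsmith rl
    fs listacl /afs/example.com/proj/web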
 

TonyRic

Golden Member
Nov 4, 1999
1,972
0
71
We use it at work and have for about 12 years. There is nothing like it that works as well. Thank god for OpenAFS, because IBM is SO SLOW about releasing working drivers for new kernel revs and distros. We use it in an environment with a multitude of Solaris file servers, Solaris database servers, NIS integration, and Kerberos authentication, with Windows 2K/XP, Linux (RH 7.1-9, SuSE, Debian, Mandrake), BSD, Solaris, HP-UX, and SGI Irix clients, and it all works the same EVERYWHERE. It is great knowing your data is always in the exact same place no matter what OS you are using or where on the network you are. No stale NFS file handles to deal with, etc.

Only two pitfalls I can think of: if one of your database servers goes offline, the system can hang until the server is reconnected or voted out of the maps when a quorum election takes place, and there is a 4GB AFS filesystem limit. In other words, when you allocate space for a project or whatever, you are limited there. This may be overcome in OpenAFS, but we use Transarc (IBM) AFS here and only the OpenAFS client code where needed.
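To give a rough idea of what the allocation side looks like, carving out space for a project is basically a quota on a volume (the server, partition, volume name, and sizes below are made up; quota values are in KB):

    # create a project volume with a 2GB quota
    vos create fs1.example.com /vicepa proj.widget -maxquota 2000000

    # later, check usage against the quota, or bump it
    fs listquota /afs/example.com/proj/widget
    fs setquota -path /afs/example.com/proj/widget -max 3000000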

The biggest plus is the nearly UNINTERRUPTED movement of data. In other words, if an AFS partition gets full and you have no more disk space on the system, you can transfer the data to another AFS file server with almost no downtime perceptible to the end user. Seconds only.
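The move itself is a single vos command; the cache managers just follow the volume to its new location (the volume, server, and partition names below are made up):

    # move a volume off a full partition to another file server
    vos move proj.widget fs1.example.com /vicepa fs2.example.com /vicepb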
 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
Originally posted by: TonyRic
... a 4GB AFS filesystem limit. In other words, when you allocate space for a project or whatever, you are limited there. This may be overcome in OpenAFS, but we use Transarc (IBM) AFS here and only the OpenAFS client code where needed.
Hmm... hard to find any direct info on that for OpenAFS. Looks like the volume size limit was 8GB, but that's from a message from 2001. It's probably buried in the release notes somewhere. In any event, that's only a per-volume limit. What kind of projects would have problems here? I would think that mounting another volume in a subdirectory would work out OK, but maybe I'm missing some complexity somewhere.
The biggest plus is the nearly UNINTERRUPTED movement of data. In other words, if an AFS partition gets full and you have no more disk space on the system, you can transfer the data to another AFS file server with almost no downtime perceptible to the end user. Seconds only.
Yes - very, very neat. I haven't yet looked at MS-DFS - does it have similar capabilities?

 

TonyRic

Golden Member
Nov 4, 1999
1,972
0
71
8GB is the limit, sorry. We are running with some legacy stuff here that limits us to 2GB (well, that's actually two mistakes in one). :) Mounting under a subdir is exactly what we do. Not a problem unless the space required has to be contiguous.
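In case the subdir trick isn't obvious, it's just a mount point for a second volume dropped inside the first one (the volume and path names below are made up):

    # hang another volume off a subdirectory of the full one
    fs mkmount /afs/example.com/proj/widget/data proj.widget.data
    fs lsmount /afs/example.com/proj/widget/data    # confirms it's a mount point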

We are an all-UNIX back-end shop, but we did check out DFS and it seemed OK, though not nearly as robust, and if I remember correctly MS dropped it in favor of Active Directory. MS bought DFS from IBM when IBM bought Transarc; it could have been a far superior product, but they dropped it. Oh well.

Also, regarding the loss of a server causing an outage until a new quorum is established: this only affects you if you were working on data sitting on that server. AND, if the data was on a replicated volume, you don't notice the outage at all, since a replica automatically takes over the load. All very cool stuff.
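For anyone who hasn't played with replication yet, the read-only sites are just a couple of extra vos commands, and clients fail over between the RO copies on their own (the server, partition, and volume names below are made up):

    # define read-only sites on two file servers, then push the current
    # contents of the read-write volume out to them
    vos addsite fs1.example.com /vicepa proj.tools
    vos addsite fs2.example.com /vicepb proj.tools
    vos release proj.tools

This is for read-mostly data, of course; the read-write volume itself still lives on one server.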