Want to build a *nix-based network file server...

Booty

Senior member
Aug 4, 2000
977
0
0
The network where I work is all MS-based at this point. We're looking at adding new file servers in two of the offices, so my co-worker priced out a couple of Dell servers... 2.8 GHz Xeon, 512 MB DDR SDRAM, 2x36 GB 10k RPM Ultra 320 SCSI HDDs, DDS4 20/40 GB tape, etc. With Windows 2000 Server they come out to be $3300 or so each.

To save some money, I suggested using Linux or a BSD instead - actually, I suggested using Windows 2000 (non-server) at first, but he thinks he might want these machines to do DHCP and DNS for their networks (currently handled by the firewall devices, I believe). Well, then we figured, while we were at it, we'd see how much we could save by building the server ourselves instead of going through Dell.

So I need some advice... first off, which OS? I was thinking FreeBSD, but am open to suggestions. It has to play nicely with the rest of the servers on the WAN, which are all Windows-based. I've only used *nix in a desktop environment, so this is new territory for me. I also need a backup package, preferably one that will handle open files... if you have any links to how-to articles or anything like that, post 'em. I'm googling the subject as well, but as always I came here looking for more input.

Secondly, I'm wondering if anyone has any hardware suggestions: changes to the configuration, specific hardware that works better with *nix, etc. I considered going SATA, but have yet to mess with a kernel that supports it, and am not really confident enough to attempt anything less than stable, time-tested, and well documented.

Not only will this be my first time using *nix on something other than a desktop, it could be my first time building a true (albeit low-end) server, if we choose to go that route. Advice is greatly appreciated. Thanks as always.
 

Booty

Senior member
Aug 4, 2000
977
0
0
Good to know... I just discussed that with my co-worker. Our main office houses our PDC running Active Directory, as well as a couple of other servers. The other offices (two at this point, more in the future) connect over VPN. He figured we'd set the other networks up to use the PDC for their primary DNS, and set up the local *nix file servers as secondary DNS in case something ever happened to the domain. Does that still sound like we're going to run into issues...?
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
If you're not concerned with tech support I would use Debian (or Fedora if you like Red Hat); everything you need should be packaged up already and you can upgrade for life for free. FreeBSD wouldn't be bad, but the fact that you have to compile things a lot is annoying, to me at least. For instance, there was recently a problem with the kernel that allowed a potential DoS attack, and the only real fix was to recompile the kernel on every FreeBSD box we have. If it was Linux, there would have been an updated kernel package released. I would avoid SATA in general because it's so new.
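
For what it's worth, pulling a fix like that down on Debian is a two-liner, sketched here assuming you run it as root (kernel package naming varies by release):

    apt-get update       # refresh the package lists
    apt-get upgrade      # install pending updates, patched kernel included
    # (if the kernel package name changed, apt-get dist-upgrade picks it up)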

For AD to work well with BIND, all you have to do is make sure it's a recent version that supports SRV records, and allow illegal names, because AD adds a bunch of RFC-illegal names. Since the updates will be happening on the master DNS, which is on Windows, you don't have to worry about update security; just don't allow updates from anyone.
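
A rough sketch of what that looks like in named.conf, with a made-up zone name and DC address (the check-names line is BIND 8 syntax; early BIND 9 doesn't enforce name checking at all, so there you can leave it out):

    options {
        directory "/var/named";
        // AD registers SRV records with underscores (_ldap._tcp, etc.);
        // on BIND 8 this keeps named from rejecting them.
        check-names slave ignore;
    };

    // Pull the AD zone from the Windows DC as a secondary.
    zone "corp.example.com" {
        type slave;
        masters { 192.168.0.10; };    // hypothetical DC address
        file "slaves/corp.example.com.db";
    };

A slave zone like that is also all the secondary-DNS setup described above amounts to.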

Any backup package for Unix supports open files, at least any I've used. Generally I just write a script to tar up the files I want backed up and run it via cron. If you go with Linux and ext3 or ext2, don't use dump to back up the filesystems, as you might get corrupt or incorrect data. I really like the XFS filesystem, and it comes with a tool called xfsdump that should be fine for backups; XFS also supports ACLs, which may or may not come in handy for you.
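
As a rough sketch of the tar-via-cron approach (the directory list, tape device, and schedule are all made up for illustration):

    #!/bin/sh
    # Nightly full backup to the DDS4 drive on /dev/st0.
    # The directories here are examples; list whatever you share out.
    DIRS="/srv/shares /etc /home"

    mt -f /dev/st0 rewind       # start at the beginning of the tape
    tar -cvf /dev/st0 $DIRS > /var/log/backup.log 2>&1

    # Kick it off from cron, e.g. in /etc/crontab:
    # 0 2 * * * root /usr/local/sbin/backup.sh

xfsdump can write to the same tape device if you go the XFS route.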
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
For an OS, I think your best bet is going to be Linux (although I am biased).

Red Hat specifically: they have the ES editions, which are the small/midrange server editions, and they offer different pricing structures for support. For $349 you get basic support; for $799 you get standard support.

Now mind you, this isn't just for the OS; this is a one-year support contract, and the only reason I am recommending it is your novice abilities. Red Hat will be in a position to help you get everything running well.

For the basic and standard setups you get web support and phone support. Turnaround time (supposedly) for basic is 1 day, and for standard is 4 hours. Phone lines are open standard business hours. The other big difference is that standard covers you for the full year, while basic only covers installation and setup for the first 30 days.

Personally I've never used any of Red Hat's support stuff, but I would give it a try if I were in a position to spend $300 on something like that. They have WS editions for workstations that are cheaper. Also see if Dell is offering anything with Red Hat pre-installed; for server lines it's not uncommon to have that option.

Otherwise, if you've got some time to get everything running well on your own and don't need support, then I'd look at SUSE, Fedora, or Debian, in no particular order, depending on what sort of server you want.

Fedora is going to have the widest user base for online help and such, but it mostly depends on what you're most familiar with.

For backups, I don't know. It depends heavily on how much you're going to back up, how often, how you want to do it, and when. Are you going to back up all your client files to a DVD on Friday evening? Or use a tape drive? Or whatever.

Here is one approach to the problem.

This is a weird little site I found quickly; I don't know how useful it would be, though.

It would probably be a good idea to do some studying, too. There are plenty of books available that cover administration basics, and reference books are handy to have around.

There are very good books, there are alright books, and there are plenty of bad books. Ones you can depend on for quality are usually from O'Reilly: good stuff, very informative, but usually pretty technical. There are plenty of other books that are useful to learn from.

Go to Borders, sit down, skim through a couple of them, and pick the ones that seem smart and useful. Maybe a couple of Unix books; they apply to Linux just like any other Unix or BSD OS. Plus they'll have good advice on backups and network issues, and show you tips and tricks that can help out a bit.

The Linux Documentation Project is a good place to look through. They have lots of HOWTOs and especially useful guides, like the administration and networking guides. Learn some bash scripting while you're at it.

Don't forget Samba.org, since you're probably going to be using that sort of thing.
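
Even a minimal smb.conf will get a basic share going; this is just a sketch with made-up names and paths, and joining the box to your domain takes more configuration than this:

    [global]
        workgroup = YOURDOMAIN
        server string = Office file server
        security = user

    # the share name and path are examples
    [shared]
        path = /srv/shares/shared
        # staff here is a hypothetical Unix group
        valid users = @staff
        read only = no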

Of course this is a bit overkill for a couple of basic file servers, but with some good links to informative websites, a few good reference books at hand, some trial and error, and lots of googling, it's possible to get a lot of learning done quickly.

This guy seems to have some good practical advice and probably knows a hell of a lot more than I do.

With a good Unix/Linux setup it's possible to get a server going and not touch it again for years, except for the occasional backup and update. Of course, a good disaster recovery system that you know works (because you practice it like a fire drill) is very cool to have.

His advice on backups may require a little bit of translating to modern Linux (most of his knowledge is about SCO operating systems, but that's what people used on x86 machines before Linux came along, so don't hold it against him. :p)
 

OffTopic1

Golden Member
Feb 12, 2004
1,764
0
0
$700 is cheap for peace of mind. I wouldn't go with anything other than an MS product, because management is going to come down hard on you if there is any problem, even if the problem isn't from the Unix/Linux machines. Most management is clueless and readily blames the IT guys if there's any computer problem.

<--- I am experiencing it at the moment because one of my computer-illiterate bosses is being a bitch.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
$700 is cheap for peace of mind. I wouldn't go with anything other than an MS product, because management is going to come down hard on you if there is any problem, even if the problem isn't from the Unix/Linux machines. Most management is clueless and readily blames the IT guys if there's any computer problem.

But technically it is your problem if you promised something that you can't get working with the software you have, whether it's supported or not. With the Linux solution you have more leeway, because now you have the source code; if by chance you have developers in house, you can fix what's broken or unsupported for just the cost of time.

I personally get no peace of mind from MS products; in fact, quite the opposite, because I trust them so little and they're such a PITA to debug when there's a problem. But to each his own, I guess.
 

spyordie007

Diamond Member
May 28, 2001
6,229
0
0
Originally posted by: Nothinman
$700 is cheap for peace of mind. I wouldn't go with anything other than an MS product, because management is going to come down hard on you if there is any problem, even if the problem isn't from the Unix/Linux machines. Most management is clueless and readily blames the IT guys if there's any computer problem.

But technically it is your problem if you promised something that you can't get working with the software you have, whether it's supported or not. With the Linux solution you have more leeway, because now you have the source code; if by chance you have developers in house, you can fix what's broken or unsupported for just the cost of time.

I personally get no peace of mind from MS products; in fact, quite the opposite, because I trust them so little and they're such a PITA to debug when there's a problem. But to each his own, I guess.
There are definitely two ways of looking at this. If I have a file server go down, I'm not going to have the time to debug the source to try and figure out why I'm having issues...

The other side of it is that Microsoft PSS does a very good job of sticking with you and your issue until it's resolved (at least that has been my experience the handful of times I've called). With open source it's hard to find people that dedicated to helping you solve your problem (sure, there are plenty of people willing to offer suggestions on forums, but that is not the same and doesn't have the same kind of turnaround time).

Back to the initial question: something else you are going to want to consider is integrated domain authentication to resources. If you use *nix you are going to have to either open up the file share to everyone, or set up an ACL that is separate from the domain. (If I'm wrong about this somebody please correct me; as I understand it you cannot easily do file-level security that integrates with AD.)

Another thing to think about is your backups. If you're anything like me you've got a domain-wide backup/recovery strategy (in my case Veritas); if you have a stand-alone file server you will need to set up a separate backup/recovery path for this machine by itself. As far as I'm concerned it would cost me more to spend the time on separate backups for a single machine than the cost of a 2003 Server license and the CALs.

-Erik
 

OffTopic1

Golden Member
Feb 12, 2004
1,764
0
0
Originally posted by: Nothinman
$700 is cheap for peace of mind. I wouldn't go with anything other than an MS product, because management is going to come down hard on you if there is any problem, even if the problem isn't from the Unix/Linux machines. Most management is clueless and readily blames the IT guys if there's any computer problem.

But technically it is your problem if you promised something that you can't get working with the software you have, whether it's supported or not. With the Linux solution you have more leeway, because now you have the source code; if by chance you have developers in house, you can fix what's broken or unsupported for just the cost of time.

I personally get no peace of mind from MS products; in fact, quite the opposite, because I trust them so little and they're such a PITA to debug when there's a problem. But to each his own, I guess.
I don't trust MS either, but unfortunately management seems to trust big corporations more than their own employees (see how people blindly trust Enron/KPMG/MS/etc.?)

My shop is an MS-centric shop & we have several users terminal in through VPN, & I'm not allowed to set up Linux/Unix thin clients even though I have demonstrated again & again that it works. It is very hard to break their familiarity.

Another thing is that we are using MS Great Plains Dynamics as the front end for our MSSQL server, and again I'm not allowed to use a Linux/Unix backend solution. I wanted to rewrite/customize (fix an MS fugeup) some of the VB code in the MS forms & SQL queries for GPD, but was stopped by management because they claimed MS would not support them if I did so. But MS, being the aholes that they are, isn't truly fixing the problem: with every upgrade/patch we wait for the fix & find out it breaks something else.

And I'm glad that I didn't implement anything other than MS here, because it would be more our fault if an MS patch broke something.

PS. All of our servers are Intel because management listens to computer salesmen who say AMD is inferior (slower & crash-prone).

Sorry for the rant, but I'm at the point where I want to quit IT, because logic and normal people who work little with computers don't play well together, and those are the people in the decision-making chair.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
There are definitely two ways of looking at this. If I have a file server go down, I'm not going to have the time to debug the source to try and figure out why I'm having issues...

I realize that, but have you ever seen the Windows file server service log anything useful to the event log? And have you ever seen the logging capabilities of Samba? They're worlds apart, and you don't have to touch the source. I would much rather debug a Linux box with no help from anyone than deal with a broken Windows box with Microsoft's highest level of support.
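
For comparison, cranking Samba's logging up takes a couple of lines in smb.conf (a sketch; level 3 is a reasonable debugging level, 10 is a firehose):

    [global]
        # %m expands to the client's NetBIOS name: one log file per machine
        log file = /var/log/samba/log.%m
        log level = 3
        # rotate at roughly 1 MB (the value is in kilobytes)
        max log size = 1000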

The other side of it is that Microsoft PSS does a very good job of sticking with you and your issue until it's resolved (at least that has been my experience the handful of times I've called). With open source it's hard to find people that dedicated to helping you solve your problem (sure, there are plenty of people willing to offer suggestions on forums, but that is not the same and doesn't have the same kind of turnaround time).

Thankfully I don't deal with the Windows boxes, so I've never used PSS, but I know recently they told us to ah heck off when we needed help with Exchange 2003 because we were only using the evaluation version, even though once we get through the trial run we'll be migrating our half-dozen or so Exchange servers to 2003.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
It depends what you want.

With Linux/Unix you get corporate-backed software and hardware if you want it.

So you end up with IBM vs. Microsoft or Red Hat vs. Microsoft. That is, if you want to pay for the support; either way it costs you.

Personally I like Red Hat's service model better: the OS is free, and you're paying for a one-year service contract. With MS you pay for the OS, the first service call is free, and then if you need any help with other stuff it's another $200 a phone call, or you pay more for a service contract.

Whatever; but for a file server?

Linux/Unix is going to be more stable, less trouble to work with, better performing, and cheaper. Linux seems harder to use because the options you have are much more varied than Windows', and it's tougher to decide what's best for you out of those selections.

It takes a lot of studying and work to know how to run Linux well, but then again, a nice GUI interface in Windows is no substitute for knowledge or hard work.

All you have to do is look at the stats for worms and viruses in corporate environments, for stuff that was obviously patched long ago, to realise lots of Windows admins are in over their heads, even with the option of paying for a phone call to MS.

Plus, not all bosses are ignorant enough to believe that their company is automagically better off depending on MS just because they're a major corporation.

I mean, it seems like a safe bet to depend on Microsoft nowadays. It's what everybody uses; it's an industry standard. Linux is something else, something that a minority uses.

So you can take the old cliché "Nobody got fired for buying IBM" and now say "Nobody got fired for buying Microsoft", but you have to remember that back in the day people DID get fired for buying IBM, even though it seemed like the safe bet. Otherwise, why is everybody using Dell PCs and HP servers instead of IBM PCs?

It's not like IBM didn't have good support or anything.

It all depends on what sort of solution the situation dictates. For a small business file server, an $800 PC built with good parts, some high-quality drives, and extra RAM stuck in it, running most any Linux distro, can do just as good a job as a $2000 workstation/server running a $700 W2K Server OS.

Obviously, being a newbie and setting up a production Linux server isn't something that is easy. There are a lot of variables to deal with, a lot to learn in a short time. But once you get it done, it's done, and the likelihood of having to mess with it again is pretty slim if you did a good job. Just enough work to keep it updated and backed up, and it will pretty much run itself.
 

Booty

Senior member
Aug 4, 2000
977
0
0
I really appreciate all the input... keep it coming.

$700 isn't a lot for peace of mind, this is true. However, if I can put this thing together, even if it does take some extra time to work out the kinks and whatnot, it'll be a heck of a lot easier to do it next time around. It will be a valuable learning experience for me.

The company I work with/for is growing fairly rapidly, so this won't be my last chance to do something of this nature; with that in mind, whether we go with MS or Linux this time around isn't that big of a deal. We're going to set up a demo file server before we ever put anything into production, so if we don't have the kinks worked out of the demo by the time we need the file servers in place, we'll just go with what we know how to do (by "we" I mean my co-worker and me... he has quite a bit more experience with networking, servers, and such than I do, but has had to use MS products so much in the last few years that he has inadvertently neglected Linux).

So with that in mind, I'm gathering that Debian is where I want to start... I'm more familiar with Slackware than any other distro, but if Debian's the way to go then I'll learn it. That's the point: I'm always wanting to learn more. That's why I hang around this forum instead of some of the less productive forums on this board. Anyway, I suppose I'll start working on the demo and be back with questions as I encounter them (and don't worry... I'll google before I come here for help).

Can I ask: where did some of you truly get your start in all of this? It sounds like a few of you are admins for some pretty decent-sized networks... I'm eager to learn, but with such a huge amount of information out there, it's kind of hard to figure out where to start sometimes. Since this will, hopefully, be my career, any suggestions on getting the ball rolling? At this point I just research problems as they come my way, but I'd rather start learning subjects systematically... kind of home-schooling myself, if you will.
 

lowpost

Member
Apr 22, 2002
164
0
0
You'll also have to look at the number of users. Windows (any version) can only support so many. Linux is coded to support more.

You'll have to do some tests to actually see what your network needs, and which OS can provide what's required.
 

mikecel79

Platinum Member
Jan 15, 2002
2,858
1
81
Originally posted by: lowpost
You'll also have to look at the number of users. Windows (any version) can only support so many. Linux is coded to support more.

Care to show some proof of this? I assume, since you make this statement, that you've seen the Windows source code? It's an awfully general statement to say Windows can only support so many users; it's more limited by hardware than by the OS.
 

spyordie007

Diamond Member
May 28, 2001
6,229
0
0
I think he was referring either to 2K/XP Pro, which has a software limit of 10 concurrent users, or to 2K/2K3 Server, which requires CALs for concurrent users. Whatever the case may be, Linux is certainly not "coded to support more"; they are both pretty much coded to support as many as possible.
 

spyordie007

Diamond Member
May 28, 2001
6,229
0
0
Originally posted by: lowpost
There is no need for actual proof. Run tests and see which one can handle the load.

But in case you feel like creating an argument, here is a bit of evidence.
Those Samba 3 throughput results are pretty impressive, especially as the client count increases, though I'd be curious to see results on more up-to-date hardware (say, a dual P4-HT Xeon or dual Opteron box with a decent SCSI array). Not that I'm saying Windows should perform any better on newer hardware; I'm just curious whether the gap would close at all. Note the hardware listed at the top of the page.

Of course, this doesn't mean that "Windows (any version) can only support so many"; it simply means that "Samba 3 is more efficient with your bandwidth given the same number of users". AFAIK Windows Server Enterprise does not have a hardcoded connection limit on its CIFS services (though it is dependent on the licensing service to ensure enough CALs), and I do have access to the Windows source ;)

-Erik
 

lowpost

Member
Apr 22, 2002
164
0
0
I don't think there are any real physical limitations in the Windows source... I think it's more an inability to get it any better than it is now. Linux was designed with networking in mind, whereas Windows was all about user-friendliness. I, once again, am making an 'ass'umption that micro$haft set the 10-user limit because Windows can't really handle anything higher in a "stable" fashion.

If anyone has any more links or info about this, it would be nice if you posted them.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
I, once again, am making an 'ass'umption that micro$haft set the 10-user limit because Windows can't really handle anything higher in a "stable" fashion.

You would be right, at least about the part about you being an ass. The 10-user limit is on the workstation-class OSes because they want you to pay if you want to share files with more than 10 people simultaneously.
 

spyordie007

Diamond Member
May 28, 2001
6,229
0
0
Linux was designed with networking in mind, whereas Windows was all about user-friendliness.
The Windows NT kernel was designed with networking in mind.

As Nothinman stated, the 10-user limit is a software limit enabled in 2K Pro/XP Pro so that they are not used in place of 2K Server/2K3 Server.
 

spyordie007

Diamond Member
May 28, 2001
6,229
0
0
Originally posted by: lowpost
There is no need for actual proof. Run tests and see which one can handle the load.

But in case you feel like creating an argument, here is a bit of evidence.
This article uses Samba 2.2.7 (not 3), but it uses more realistic server hardware and shows the exact opposite (that Windows Server performs better):
Veritest's Server 2003 vs. Linux File Server Comparison

I'm not trying to suggest that one is clearly better than the other; I just wanted to show that both do a very good job. I'm sure with both studies there are a lot of different contributing factors.

Note that the Veritest study was done specifically for Microsoft. And like any set of graphs or statistics, you should always consider methodology, presentation, etc. As the saying goes, "There are three kinds of lies: lies, damned lies, and statistics."

-Erik
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
This article uses Samba 2.2.7 (not 3), but it uses more realistic server hardware and shows the exact opposite (that Windows Server performs better):

There's a world of difference between Samba 2 and 3, but I'm only a little surprised they would have even considered Samba 2 for that 'study'.
 

spyordie007

Diamond Member
May 28, 2001
6,229
0
0
Originally posted by: Nothinman
This article uses Samba 2.2.7 (not 3), but it uses more realistic server hardware and shows the exact opposite (that Windows Server performs better):

There's a world of difference between Samba 2 and 3, but I'm only a little surprised they would have even considered Samba 2 for that 'study'.
Oh, I absolutely agree that the Veritest one was biased and set up in a way that made 2K3 Server look better; it's even noted in his earlier article.

My point was just that every article I've seen is either from the "Linux community" or the "Microsoft community", and they're all therefore inherently biased. Not to mention all the additional contributing factors like network infrastructure, client base, additional services, hardware, drivers, etc.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
I used to care more about networking benchmarks, until I realised that as long as the server can saturate the bandwidth under the maximum load of requests, the network is the limiting factor, not how well the server performs under theoretical conditions.

In both these instances, the Linux server and the W2K3 server would be perfectly capable of filling up a 100 Mbps Ethernet connection.
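
(100 Mbps works out to only about 12.5 MB/s of payload, and even a single 10k RPM SCSI disk of that era can stream faster than that sequentially, never mind the disk cache.)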

Anyway, the results could be caused by anything; Samba 3 being better at Windows file and print sharing than a Windows server could be irrelevant.

For example, what if the Windows drivers for the NIC were crap? Or what if DMA access on the hard drive was turned off by default on Windows, but not on Linux?

Who knows, maybe putting in a different NIC could have doubled the performance. It could be that Samba 3 didn't have anything to do with it; maybe the Linux server just handled disk caching much more efficiently. Maybe the test sucked; maybe he had a script that copied the same files over and over again for each client.


But then again, I hooked up an ancient server (a dual-CPU Pentium Pro motherboard with only one CPU and 32 MB of RAM) running Debian stable and some ancient version of Samba, and it worked fine on a bad network where the W2K servers (1 GHz+ Dell boxes) were dropping connections like madmen. :p

So who knows? The sucky part is that studies like this are so freaking expensive and time-consuming that almost nobody can do them properly.