OK, breaking this down into a couple of pieces here...
My Recommendations
With regard to a firewall, I wouldn't rule out SonicWALLs. They're easy to manage, full featured, reliable in my experience, and more than fast enough for your role (their low-end rack-mount firewall costs about $1,500 from an Internet discounter, passes traffic at 190 Mb/s, passes 3DES traffic at 25 Mb/s, and handles 30k simultaneous connections). I prefer SonicWALL products over some of the others because they're affordable (most of my clients are small businesses), they make it easy to implement web filtering if you run into users with "issues" about what they're doing at work, and they have an antivirus solution that's both inexpensive and tied into the firewall in such a way that anyone who isn't running the current definitions can't access the Internet.
With regard to firewalls in general terms, Linux-based firewalls are cheap, and there are a number of distributions made to keep the process simple (see Mandrake's Multi Network Firewall for a good-looking solution), but it's still a general-purpose OS under the hood that someone is going to have to administer occasionally. The firewall appliances will cost more, but they offer increased reliability and ease of use (provided you buy well -- most of my experience is with SonicWALLs). Ease of use is a big one here: if you want to set up two firewalls as a failover pair, with SonicWALL that's as simple as plugging the second one in and filling out five items of info. How do you do it with a Linux-based solution? It's possible, but you've just bumped up the complexity of the solution. Don't take this as an argument for SonicWALLs over the competition either, as I'm in favor of
any firewall appliance over a firewall built on a general-purpose OS. The exception to this rule comes in when you need the extra configurability that one of these solutions offers, and you have someone on staff who's capable of adminning the thing (or your company is willing to pay for a
good consultant).
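To give a taste of what "administering it occasionally" means on the Linux route, here's a minimal default-deny iptables sketch. The interface names (eth0 external, eth1 internal) and the 192.168.1.0/24 LAN subnet are assumptions for illustration, and running it requires root -- this is the kind of ruleset somebody on staff has to own, which is exactly the cost the appliance saves you:

```shell
# Default-deny: drop anything not explicitly allowed
iptables -P INPUT DROP
iptables -P FORWARD DROP

# Allow loopback and replies to connections we initiated
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Let the LAN (eth1) out to the Internet (eth0), and let replies back in
iptables -A FORWARD -i eth1 -o eth0 -s 192.168.1.0/24 -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# NAT the LAN behind the external interface
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```

Simple enough on paper, but note there's no failover, no web filtering, and no AV enforcement here -- each of those is another project.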
Seriously consider coming up with a standardized image for your users, or one for each department. Win2k/RIS is wonderful for this -- it makes adding a new machine or reimaging a machine as simple as booting from the network, selecting the image you want, and waiting 20 minutes for the installation to complete. If you think it through properly, you can ensure that all user data is stored on your fileserver so it can be backed up nightly -- imagine a network where the loss of a workstation didn't result in the loss of any data older than last night. Go a step further and set up Active Directory to automagically deploy software, and you can have a network where your graphic designers can log into any workstation and have access to their files, e-mail, and special applications (though there will be a delay as they're deployed); the next time someone else logs into that workstation, the application will be removed. Kinda nice, isn't it?
Go to the effort of ensuring that all your critical data is backed up and stored off-site. This might be a hard sell as far as management is concerned (especially considering the cost of something like a DAT changer, which it sounds like you'll need), but IMHO it's critical to have a system that allows you to recover from a site failure in a very short period of time. The ideal recovery after a fire or whatever would look like: find new site while getting insurance check; buy replacement hardware; wire new network together; restore servers from backup; boot new workstations from network to find that everything is as it was just before the previous night's backup. Beautiful (though you'll need to run some serious tests to make sure things are running as you anticipate).
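The nightly-backup idea above can be sketched in a few lines of shell. All the paths below are invented for illustration; in production the date-stamped archive would go to the DAT changer or be shipped off-site rather than to /tmp, but the rotation principle is the same:

```shell
# Stand-in for the file server's data share (path is hypothetical)
mkdir -p /tmp/demo-data
echo "Q3 figures" > /tmp/demo-data/report.txt

# Date-stamped archive, so each night's run keeps its own copy
# instead of overwriting yesterday's
tar czf /tmp/backup-$(date +%Y%m%d).tar.gz -C /tmp demo-data

# Verify the archive actually contains what you think it does --
# the "serious tests" part matters as much as the backup itself
tar tzf /tmp/backup-$(date +%Y%m%d).tar.gz
```

Cron the first two working steps nightly and you've got the skeleton; the off-site half is just carrying (or transferring) the archive somewhere that doesn't burn down with the building.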
I would prefer RIS to Ghost -- Ghost is very hardware-dependent; its images can only be restored to comparable hardware. RIS just requires that the workstation you're adding have a network card it has the drivers for (you can add 'em manually), and it will happily deploy an image to multiple computer configurations at once. That makes replacements cheaper and easier as well, since you aren't stuck buying the same (obsolete) configuration years later for more money than it's worth.
It's probably worth the expense to hire a consultant to come in and help you plan this out, even if you're going to deploy it all yourself. There's no substitute for proper planning, and there's no better teacher than having done it yourself in the past.
I wouldn't bother with WINS servers -- if everything is running Win2k/XP, then it's no longer necessary. Personally, I'm glad to be rid of it. In my experience NT-based DNS servers were flaky as hell, but Win2k does a much better job. I'm generally a pro-Linux-down-with-Bill-Gates kind of guy, but I don't feel much need to deploy Linux simply to run DNS -- DNS in 2k is solid.
Re: broadcast traffic... a couple of thoughts. If you aren't seeing NetBIOS broadcasts from the workstations (they can get all their info from DNS), are you really going to be seeing much more than regular ARP broadcasts? VLANs are neat, but they add to the complexity of the network and make problem diagnosis a little bit tougher -- I'd do some sniffing on the network before implementing this to see if it's necessary. Deployment of VLANs for security reasons is another issue entirely, and it can make a lot of sense to make sure HR is on its own (essentially physically separate) subnet. I know Cisco gear in particular can restrict VLAN membership by port and by MAC address; I'd guess that other vendors' gear can as well. Just make sure you set your file server to be a member of every VLAN (which, of course, brings security back into question...)
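For the sniffing step, a quick way to see whether broadcast traffic is actually worth segmenting (the interface name eth0 is an assumption for your setup, and capturing requires root):

```shell
# Capture only broadcast frames for a while, then Ctrl-C to stop
tcpdump -i eth0 -w broadcast.pcap 'ether broadcast'

# Rough volume: how many broadcasts did you catch?
tcpdump -nr broadcast.pcap | wc -l

# Top talkers by source MAC (-e prints link-level headers so the
# source MAC is the second field)
tcpdump -enr broadcast.pcap | awk '{print $2}' | sort | uniq -c | sort -rn | head
```

If the numbers are small and it's all ARP, VLANs for traffic reasons are probably a solution in search of a problem.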
Great advice on cabling in this thread. Hire someone competent, and lay more wire than you think you'll use. Believe me, it's incredibly convenient to have 2 network and 2 phone ports at every workstation. The additional cost of doing it this way is negligible.
I prefer to maintain web servers and what-not in-house, but I get a little paranoid if they're not on a DMZ with limited/no access to the local network. Make sure your SQL programmer understands the importance of this and doesn't code in such a way that the Internet server requires full, administrative access to the primary file server. That's just foolish, but you'd be amazed how many times it happens in smaller shops.
Now, for some questions for other posters:
- What can you tell me about the administration and configuration of equipment from Extreme Networks or Foundry? I learned IOS and find it works exceptionally well, though it locks me into Cisco solutions, which are oftentimes way overpriced. I really like the volumes of documentation available from Cisco, but I'd be happy to cut my hardware costs and keep my proposals priced about comparably. 🙂 Roughly how are these solutions priced against comparable Cisco gear -- 60% of Cisco's cost? (I don't want to contact a sales rep at Extreme Networks just to get pricing information.)
- Can you give me a rough estimate of NetApp costs? How easy are they to back up? I generally spec out a Dell/IBM (it's fun to get them to bid against each other) server running hot-swap RAID-5 SCSI when a file server is needed. I'd love to see a cheaper but robust solution available.
- Is 1000Base-T to the desktop really a viable solution? Last I heard, gigabit over copper was limited to fairly short cable runs.