Fatt, Cisco dominates the router and switch market for the same reasons that IBM used to dominate the PC market:
1. Nobody gets fired for buying Cisco
2. You can buy your whole network solution all-Cisco
Note that neither of these statements says that Cisco is the best. Extreme, Foundry, and sometimes even Enterasys and 3Com make better products that cost less money. Some of Cisco's switches are real junk - and some of them are very good. If you buy Cisco, it's very important to understand which are which. (I say the same about your "buy Dell" comment, but since the original poster isn't going to buy new PCs anyway, it's a non-issue.)
Your wiring suggestions are on the money. For this scale, one wiring closet would probably be the way to go though.
I seriously second the suggestion that outside-facing servers like WWW and external DNS be colocated. These days, colo is cheap.
WannaFly, if you have the budget, I would suggest that you build your network so that the (hopefully few) storage-heavy users can have switched 1000BaseT gigabit to the desktop, and everyone else gets switched 10/100 to the desktop. At 70 employees (call it 150 stations, to allow some room to grow plus extras for printers and such), this can all be brought into one switch without much trouble, and that'll make management far easier.
Home-run everything: centralize the wiring into one small room and organize it well. MAKE SURE YOUR NETWORK ROOM HAS COPIOUS COOLING. Run extra/dedicated AC into there, whatever it takes - talk with some real HVAC guys. Tell 'em you want to be able to dissipate heat equivalent to, say, 80A@120VAC (see the back-of-envelope sketch below for what that comes out to in BTUs). If you don't have adequate cooling for your network equipment, it is more likely to fail, and you don't want that, do you?

Similarly, make sure there's oodles of power in there, and make sure in particular you have several 15/20A@220VAC circuits, 'cause some higher-end networking gear and/or higher-end UPSs will need that (much easier to run the wires now than when the room is filled with gear in production). Put down anti-static tile on the floor of that room - raised flooring is probably not necessary, but several of the major manufacturers make glue-on tiles specifically for anti-static electronics applications; they're cheap, easy, and prevent ESD damage (most of which is subtle and cumulative, btw). Oh, and make sure the racks are properly grounded.
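For what it's worth, the BTU figure is just arithmetic: watts = volts x amps, and a watt is roughly 3.412 BTU/hr. Here's a quick back-of-envelope sketch, treating the 80A@120VAC figure above as an assumed worst-case load rather than a measurement of anything:

```python
# Back-of-envelope heat load for the network room.
# Assumes the gear could draw the full 80 A at 120 VAC mentioned above;
# essentially all of that electrical power ends up as heat in the room.
volts = 120.0
amps = 80.0

watts = volts * amps                   # electrical power drawn
btu_per_hr = watts * 3.412             # 1 W is roughly 3.412 BTU/hr
tons_of_cooling = btu_per_hr / 12000   # 1 "ton" of AC = 12,000 BTU/hr

print(f"{watts:.0f} W ~ {btu_per_hr:,.0f} BTU/hr ~ {tons_of_cooling:.1f} tons of cooling")
```

So 80A@120VAC works out to roughly 33,000 BTU/hr - call it three tons of dedicated cooling, which is a number your HVAC guys can work with directly.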
Be extremely picky about the wiring subcontractor. Make sure you've SEEN work done by that sub before you even allow 'em to bid. Some wiring subs are absolutely pedantic about neatness and about making your cable plant / cable management top notch. Those are the guys you want. Other guys simply run wires. Those are the guys you don't want. No matter what your wiring room starts out looking like, it ends up a jungle. Excellent cable organization and neatness to start out with, and good cable management hardware installed from day one, will make your life dramatically easier over the lifespan of this cable plant.
Consider separate racks for equipment and for cable management. At your scale you just aren't going to have that many ports, so maybe you can get away with top half/bottom half of one rack. The key is to clearly separate cable management from equipment AND to lay things out so you don't have to run bunches of long patch cables from the switches to the ports. Also, remember that heavy stuff goes at the bottom of the rack: UPSs at the very bottom, and if you get a high-end switch, that might go near the bottom too. Patch panels and finger ducts are nice and light and can go at the top, no problem.
If you have the budget, make sure the switch you get has full redundancy - redundant backplanes, redundant management modules, and the ability to hot-swap cards. Examples are Extreme's Black Diamond and Cisco's 65xx (I don't know which models/configurations get you redundancy), and I'm sure Foundry and Enterasys have something similar. Otherwise you have a single point of failure that will, inevitably, fail on you.
An alternative strategy which might be cheaper is to buy lower-end modular switches (Extreme's Alpine, Cisco's 47xx, etc.) and simply keep a cold spare of every part. The trick is that you should only need ONE spare of each part type in the box, not one spare for each part you have (so if you have three 10/100 cards, you only need one spare, not three). This means some downtime on a failure, but if you have the cold spare chassis physically racked right above/below the main chassis, you can swap spare parts into the real unit pretty darn fast under fire. Plus, in a pinch, you already have a box to grow into if you need more capacity (just make sure to buy a new "spare").
Another alternative strategy which might be cheaper is to buy several fixed-configuration switches. You could, for example, buy a higher-end L3 switch with 12 to 16 100/1000BaseT gigabit ports, and then buy a few dumb L2 (or managed L2) switches with 10/100 ports and 1000BaseT uplinks - one dumb switch per "VLAN" - and use the higher-end switch to route quickly between the VLANs and to service the smaller number of power users. If you buy two of the higher-end switches and cable things right, you can get some redundancy (yaaay). This sort of configuration will be much cheaper than a big modular switch, but it WILL be a lot more of a headache to administer.
The main reasons to use VLANs are:
1. Separates broadcast domains, so that ARP and true broadcast traffic doesn't get to be too much. If memory serves, the CSMA/CD Ethernet specs say no more than 100 stations on an Ethernet segment -- that's mostly because of stuff related to CSMA/CD, but it's not a horrible rule of thumb in the modern world.
The main problem with broadcasts is that every single station has to receive them and minimally process them, creating some CPU load on every box on your network (there's a back-of-envelope sketch of this after the list below). If the rate of broadcast traffic is sufficiently low, it doesn't really matter, but if you start seeing even hundreds of kilobytes per second of broadcasts, it can start to matter for embedded devices. So splitting your network into a few separate broadcast domains helps keep the rate of broadcasts each station sees down to a nominal level.
2. Makes it easier to implement administrative controls. This network is the HR network, it has this IP subnet, and now I can write IP address based server ACLs for it easily. Granted, IP address based ACLs are lousy to begin with, but they're easy so they're common. It also lets you create internal-only networks that can't see the outside world (for things like printers), separate from production networks that can. And you probably want to strictly separate systems that are inside the firewall from systems that are outside the firewall.
3. Allows you to split switches more easily. If you buy a big switch now sized for 150 users and someday end up with 1,500 users, it's often easier to grow by buying more switches and migrating a few whole VLANs onto the new ones. And maybe by that point, the new switches physically migrate closer to the users.
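To put rough numbers on the broadcast issue from item 1, here's a crude sketch. The per-host broadcast rate and frame size are made-up illustrative guesses - measure your own LAN before trusting anything here:

```python
# Crude estimate of the broadcast load every station sees on one flat LAN.
# Per-host rate and frame size are assumptions for illustration only
# (ARP, DHCP, NetBIOS chatter, etc. vary wildly by environment).
stations = 150                   # all stations in one broadcast domain
bcasts_per_host_per_sec = 0.5    # assumed average broadcasts originated per host
avg_bcast_frame_bytes = 100      # assumed average broadcast frame size

total_bcasts_per_sec = stations * bcasts_per_host_per_sec
kb_per_sec = total_bcasts_per_sec * avg_bcast_frame_bytes / 1024

print(f"{total_bcasts_per_sec:.0f} broadcasts/sec, ~{kb_per_sec:.1f} KB/sec, "
      "and EVERY station has to process all of it")
```

Split those 150 stations into, say, three VLANs and each station only has to process its own domain's third of that chatter.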
The switches you should be looking at anyway (but double-check for the feature) are "layer 3" switches, so they can do IP routing between VLANs within the switch chassis. All of them can do IP ACLs too, and some can even do ACLs without massive slowdowns (*ahem*). "Layer 3" switches tend to have more features all around than the really low-end switches, and you'll probably be happier buying in that class.
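To make the "one subnet per VLAN means easy ACLs" point concrete, here's a tiny sketch of the decision an IP-address-based ACL implements. The subnets are hypothetical, and a real L3 switch does this in hardware with its own config syntax - this is just the logic:

```python
# Sketch of what an IP-address-based ACL actually decides.
# The VLAN subnets below are hypothetical examples of one-subnet-per-VLAN addressing.
from ipaddress import ip_address, ip_network

HR_VLAN      = ip_network("10.1.10.0/24")   # hypothetical HR subnet
PRINTER_VLAN = ip_network("10.1.20.0/24")   # hypothetical internal-only subnet

def hr_server_allows(client_ip: str) -> bool:
    """Permit only clients whose source address falls inside the HR VLAN."""
    return ip_address(client_ip) in HR_VLAN

print(hr_server_allows("10.1.10.42"))   # True  - an HR workstation
print(hr_server_allows("10.1.20.7"))    # False - a printer has no business here
```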
"SANs" - I'd suggest instead of trying to do a SAN that you instead try to do a NAS (also known as a file server). Get a Network Appliance fileserver. Don't buy their lowest end boxes. Make sure that the box you buy has a whole bunch more potential capacity than you're actually using, 'cause your needs WILL grow. Also look at back-up solutions that run directly on the NetApps box.
VoIP - not baked. Maybe in the future it'll be a great thing, but not now. It's a phone. It should Just Work. Get a traditional phone system that's been around for a while and Just Works. If you want a good small- to mid-sized PBX on the cheap, check out Altigen. They make a PC-based phone system that can use your favorite POTS phones on people's desks and can scale to a few hundred stations. It's based on NT, which scares the heck out of me, but then again, you don't want to know what's inside most PBXs, only whether it ends up working or not. Friends of mine with Altigen systems have had good experiences, and that's not true of friends of mine with various other smaller PBXs (Avaya, NEC).
Firewall - Watchdog? You mean Watchguard? I got a friend who'll sell you a pretty high end Firebox, cheap. You will have to get to him before he puts it downrange of his AK-47, and promise him that he will never, ever see that box again. He had several major outages because the box hard locked and stopped passing all traffic. Simply unacceptable. It got ripped out.
Sonicwall is pretty simple and straightforward, but probably too low end for your application.
Netscreen is okay, user interface sucks and they don't have a whole lot of features, but they work.
Cisco PIX is okay, user interface sucks and it's expensive at your scale, but they work.
A PC running OpenBSD or Linux makes a great firewall if you have enough clues. IPCop and a few others are free Linux-based firewall distributions on a CD that are set up to be easy to use and powerful - check 'em out. Very cheap, and works as well as the commercial ones (IMO) - IF you're willing to put some time and clues into it. If you aren't, see Cisco PIX or Netscreen.
Any PCs in the network room? Re-case them into rack-mount cases: 2U if you don't need many cards, 4U otherwise. With 1U, the thermals are more delicate, as are the parts in general, and they're more prone to failure, so I'd avoid 1U unless you're truly hurting for rack space. Cheap 2U/4U rack-mount cases are okay (cheap 1U cases are NOT), but in the long run rack-mounting PCs will just make life far easier for you.