What kind of hardware do ISPs run? What's in those 3U rack cases?

NightFlyerGTI
I'm curious about what kinds of hardware you'd find running in, not necessarily an ISP, but any rack-mounted file server environment. What sizes/brands/speeds of various components are popular? How important is PSU redundancy? Where can you find barebone cases like this to buy? I really want to learn more about this area. TIA!
 

Evadman

Administrator Emeritus / Elite Member
A 3U case will allow regular add-on cards (PCI and such) to stand straight up.

90% of a file server is disk space. Expect a 4+ drive RAID array; 10k to 15k RPM SCSI drives are the norm. ISPs will generally have a powered backplane with tons of modems in it for connectivity. A backplane is just like a motherboard, but it has 10+ slots for PCI and ISA cards.

PSU redundancy is usually only used in mission-critical environments where a PSU failure would bring down a large portion of the network. My father works for IBM and he assures me that none of the servers he admins have PSU redundancy. They have entire-system redundancy :)
 
NightFlyerGTI
Ok cool. So IDE interfaces are out, eh? Do you know what kinds of SCSI RAID controller cards they use? What's the most popular OS? What kinds of CPUs do they use, and how much RAM?

EDIT: HILARIOUS sig, btw... LOL...
 

Evadman

Administrator Emeritus / Elite Member
Originally posted by: NightFlyerGTI
Ok cool. So IDE interfaces are out, eh? Do you know what kinds of SCSI RAID controller cards they use? What's the most popular OS? What kinds of CPUs do they use, and how much RAM?

EDIT: HILARIOUS sig, btw... LOL...

IDE used to be way too slow for the server world, but now it seems to be keeping up. I will not lie: I run 2 servers, both with IDE RAID on 5400 RPM drives (HA!). I use it for space and redundancy, not speed.

Popular OSes are all over the board. I would have to go with flavors of *nix (Unix and Unix-like OSes) being the most popular.

The vast majority of CPUs are going to be Intel's server-side chips, such as the Xeon series. RAM for a good-sized database server will be in the 5+ GB range. (I think AT's are at 1.5 GB, if I remember correctly.) The Athlon MP seems to be making inroads, but Intel still holds probably 80% of the market share. Most server chipsets are going to come from ServerWorks, I would think. I am unsure just how high-end the Intel mainboard selection goes. Probably only to 6-8 memory slots, but I could be wrong; it's nothing but a guess on my part.

It is very unlikely to see a single-CPU server. The majority are going to be duals, with quads and octs (4- and 8-way) out there as well. I don't know if Intel makes an 8-way; maybe only Sun does. Not sure.



 
NightFlyerGTI
Well, if I'm going to be building one, it isn't going to have RAID in it; at least not IDE, anyway. I've heard to shy away from those setups in mission-critical environments (like the one I want to build for). How does the Athlon MP compare to the Xeon? I would prefer using MPs. What kinds of RAM are best? AT uses Corsair, but I'm a Crucial zealot with Mushkin tendencies. :)
 

Evadman

Administrator Emeritus / Elite Member
Since I have never built a box on either a Xeon or an MP platform, I will be of little help there. The only thing I will say is that the Xeon has a greater operating temperature safety margin: a Xeon will run in the desert at 140 degrees while the MP will be a burned-out cinder. I tend to recommend AMD over Intel from a pricing standpoint, but from a mission-critical standpoint, go with Intel.

If you are building a mission-critical server, then you WILL have RAID. RAID = backups and RAID = speed. It can be both if you use enough drives (RAID 5). For mission-critical work, you had better use some kind of RAID mirroring so that if a drive fails, you are not up the creek.
 

Sunner

Elite Member
You're thinking of RAID-0, AKA striping, where you have, say, two 100 GB disks and create a stripe with a total size of 200 GB.
This will increase performance, since you'll be able to read and write to both disks simultaneously, but it will effectively double the chances of the whole array failing, since if one disk goes down, the entire stripe goes with it.

For a server, you'll typically use RAID-1 or RAID-5. 1 is mirroring, which is exactly what it sounds like: you mirror two (or more) disks, and if one fails, the other one is fine.
5 is a bit more complex, but in short, you spread data as well as parity over a number of disks (three is the minimum). This gives good performance as well as redundancy, and you don't take too big a hit in terms of disk space.
If you have 3 100 GB disks in a RAID-5 array, you'll end up with ~200 GB of space.

If you have cash to burn, you can go RAID 1+0; it's what it sounds like: striped mirrors.
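
If the parity part sounds like magic, here's a toy Python sketch of the idea (purely illustrative, nothing like real controller code): the parity block is just the XOR of the data blocks, so any single lost disk can be rebuilt by XOR-ing together everything that survives.

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Three data "disks" plus one parity block, like a 4-disk RAID-5 stripe.
disks = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(disks)

# Say disk 1 dies: rebuild its contents from the survivors plus parity.
rebuilt = xor_blocks([disks[0], disks[2], parity])
assert rebuilt == disks[1]  # the lost data comes back intact
```

Real RAID-5 also rotates the parity block across the disks so no single disk becomes the write bottleneck, but the XOR trick is the same.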

Anyways, for a server, I'd go with a good Intel motherboard, or maybe a Tyan one, and a pair of Tualatin P3's, or if you're feeling rich, P4 Xeons.

And finally, my eternal tip for servers: go with a good brand-name server instead of building it yourself. Compaq is my preferred brand, but IBM is good as well.
 

Nothinman

Elite Member
Well, if I'm going to be building one,

If this is for a real business, don't build it yourself unless you're extremely strapped for money and want to reuse old parts. Get a premade box from a big company with a nice warranty; you'll be happier that way. For our x86 boxes at work we use mostly Compaq stuff, and we've had very few problems with it, and when we did, the service was great.

RAID = backups and RAID = speed. It can be both if you use enough drives (RAID 5). For mission-critical work, you had better use some kind of RAID mirroring so that if a drive fails, you are not up the creek.

RAID != backups. I can't express that enough. Tape == backups, CDR == cheap backups, RAID == redundancy. If one drive fails, the whole array doesn't go down, and with SCSI RAID you can pull the bad drive and put the new one in without taking the server down.

Also, RAID 5 and mirroring (RAID 1) are very different; don't confuse them. RAID 1 mirrors one drive onto another, so if one fails the other can keep running; but remember, if the OS fails and writes bad data to one drive, it gets mirrored too, so backups are a must. With RAID 5, the data is spread across all the drives, with some space also used for parity; you lose about one drive's worth of space to parity, so that if one drive fails you can replace it and the missing drive can be rebuilt from the parity bits on the other drives. Again, if the OS fails and writes bad data, you'll need to restore from backup.

For an ISP you don't need a lot of horsepower; a P133 running Linux or FreeBSD can saturate a 10Mb line easily. Essentially, the box with the modems will be idle most of the time. The only really big boxes you'd need are for mail, news, and proxying (optional). News feeds are huge (I'm talking hundreds of gigs if you get a full feed), and mail is as big as you let your users make it.
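
Some quick back-of-the-envelope math on that (my own numbers, just a sketch):

```python
# Rough arithmetic: how much a saturated 10 Mb/s line moves in a day.
# Decimal megabits on the wire, binary gigabytes on the disk.
line_mbps = 10
bytes_per_second = line_mbps * 1_000_000 / 8   # 1.25 MB/s
gb_per_day = bytes_per_second * 86_400 / 1024**3
print(f"{gb_per_day:.0f} GB/day")              # ~101 GB/day
```

So a full news feed really can move that kind of volume; the pipe, not the CPU, is the limit.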
 
NightFlyerGTI
What's wrong with building one? I would only be using top-of-the-line hardware, and I know my stuff. It's not like I'm going to be building computers with ECS motherboards and Realtek network cards as servers; I can make a good, solid server.

RAID is sounding more appealing, and wow, I had no idea you could swap out SCSI drives on the fly! How often do drives fail? It almost seems like a common occurrence the way you guys talk about it...

I'm still new to the server end of the spectrum, but here's a brief rundown of what I think should go into one.

This 4U enclosure, possibly swapping out the PSU for a 350W Antec TruePower
AMD Athlon MP 2100+ CPUs with thin copper HSFs
Tyan MPX motherboard
4GB PC2100 ECC Crucial DDR
nVidia TNT2 M64 32MB video
Intel PRO/1000 S Gigabit NIC
(need recommendation on good SCSI RAID controller card- 3Ware?)
(2x) Fujitsu 18GB 10k RPM HDDs (maybe 3 for a RAID-5 setup)
Sony 52x CD-ROM
Sony 1.44MB floppy
etc...

 

bizmark

Banned
Well, it really depends on what you mean by "mission-critical" (well, really -- what "mission" is it so critical to). The main email server at my university has 4GB of RAM (proprietary Sun RAM). I'm not sure on the number of processors, but I do know that the whole setup is bandwidth limited (I/O bandwidth -- reading to/from the hard drives in particular) despite a massive RAID incorporating ... hmm, I just tried to look up the number of drives but I couldn't find it anywhere. Anyway, it ain't no ordinary 4-disk stripe-and-mirror RAID lemme tell ya ;)

Anyway, imagine if you had ~4,000 users, half of whom are using the inefficient POP email protocol (which reads/writes the entire inbox each time mail is checked) and 400 of whom log in interactively (i.e. using Pine, mutt, etc. as their email clients), and 95% of whom are connecting with a very fast connection (so there's no lag on the user's end).
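
For a feel of why POP is so heavy, here's roughly what every mail check boils down to (a Python sketch using the standard poplib module; the host and login are made up):

```python
# What a POP3 "check mail" session amounts to (illustrative only).
# POP has no server-side folders: clients typically pull messages down
# whole, and the server locks and rescans the mailbox for each session.
import poplib

conn = poplib.POP3("mail.example.edu")   # hypothetical server
conn.user("student")                     # hypothetical credentials
conn.pass_("hunter2")

count, mailbox_bytes = conn.stat()       # size of the entire mailbox
for i in range(1, count + 1):
    resp, lines, octets = conn.retr(i)   # RETR fetches the whole message
conn.quit()
```

Multiply that by a few thousand users polling every few minutes and the disk I/O adds up fast.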

But as said above, something "mission-critical" could easily be handled by a very 'lightweight' machine (w/ respect to processor, disk storage, etc.) if its 'mission' is something simple like routing.

Also, you shouldn't necessarily look at a rack-mount machine. Rack-mounts are only useful if you have a rack to mount them in, and you probably won't buy a rack unless you're looking at buying 3-4 machines and their accompanying rackmount equipment (e.g. external storage, power supply/UPS/etc., racks for modems, etc.). Dell sells PowerEdge servers that look just like desktop machines. (At my work we just got a new one w/ dual PIII-1400s, 1GB PC133 RAM, and 2x 70GB 10,000 RPM SCSI HDs -- it will be used for web and email serving -- its main job (for now) will be sending out email to our mailing list -- we're looking at around 25,000 (non-spam, thank you very much) emails in a batch.. heheh, the first time we tried it, we almost crashed the aforementioned mail server for our University :Q:eek:) If you're only going to be running one machine, it doesn't make any sense to go for a rackmount unless you have serious plans to expand later and you're going to actually *need* the rack for storage reasons.

reasons to go with SCSI for your RAID:

-hot-swappable
-much higher spindle speeds readily available (10k and 15k)
-faster data transfer rates

But the problem is that those higher spindle speeds also lead to a much greater failure rate. So you're right, disk failure does happen a lot more often in servers than in consumer machines. This is a necessary trade-off, however, and one that's expected (so folks running servers always have spare hard drives ready). Consumer machines, on the other hand, typically don't back up their data very often and don't have the sort of redundancy that servers have, so the spindle speeds are kept down to lower the failure rate. Faster spinning = heat and noise.
 

bizmark

Banned
Originally posted by: NightFlyerGTI
This 4U enclosure, possibly swapping out the PSU for a 350W Antec TruePower
AMD Athlon MP 2100+ CPUs with thin copper HSFs
Tyan MPX motherboard
4GB PC2100 ECC Crucial DDR
nVidia TNT2 M64 32MB video
Intel PRO/1000 S Gigabit NIC
(need recommendation on good SCSI RAID controller card- 3Ware?)
(2x) Fujitsu 18GB 10k RPM HDDs (maybe 3 for a RAID-5 setup)
Sony 52x CD-ROM
Sony 1.44MB floppy
etc...

I added more to my post above. Now for comments on what you have listed:

-case -- see what I wrote above, do you really *need* a rackmount chassis or does it just sound cool?

-Motherboard/processors -- see what Evadman wrote, AMD still has yet to prove itself in the server market, and with significantly higher temps it may be possible that they see a lot more frequent failure of other components (e.g. the hard drives, which make a lot of heat on their own and fail a lot on their own anyway) -- is raising the internal temp of the enclosure going to have a negative effect on component life?

-RAM -- Just as long as it's name-brand and ECC, you'll be fine. You just want to avoid failure, not go for ultimate performance.

-Video card -- anything more than 2MB is complete overkill. No need for AGP. Once you get it set up, you probably won't even run the machine from the console anyway.

-NIC -- just as long as it matches your connection. Do you have a Gigabit connection to the internet? Also, there are some NICs for servers that are redundant within the NIC: if one connection fails, it automatically switches to the other. You'd need multiple ethernet jacks for this, of course.

-SCSI RAID controller card: I've liked the Adaptecs that we have at work, they seem to be pretty standard as well as fast. Just make sure that all of your SCSI stuff is Ultra-160 compliant. (You'll need a mobo with 64-bit PCI if I'm not mistaken)

-HDDs -- I can't give brand recommendations... we have Fujitsus in the new server I talked about. Go for 15k RPM if you're looking at extremely I/O-intensive stuff. And buy an extra hard drive (or know where you can get one in less than 24 hours) for when one of them fails.

-CD, Floppy, etc. -- sure.

-Addition: a Tape Backup drive. As was mentioned before, RAID != Backup. Even if your RAID is working perfectly, if your server is hacked and your data is erased (among the many ways your data could be erased), no amount of RAIDage will help you get it back. Make backups weekly or daily, depending on how quickly and how much info is changed on a regular basis. You could arrange to do backups over the network (to another machine) but this will eat away at your bandwidth.
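
Even a dead-simple scripted archive shipped to tape or another box is worth having; something along these lines run nightly from cron (illustrative Python; the paths are made up):

```python
# Minimal nightly backup sketch (illustration only; not a tape driver).
# Archives the data directory under a dated name, ready to be written
# to tape or copied to another machine.
import tarfile
import time

src = "/var/www"   # hypothetical data directory
dest = f"/backups/www-{time.strftime('%Y%m%d')}.tar.gz"

with tarfile.open(dest, "w:gz") as tar:
    tar.add(src)   # recursive by default
print("wrote", dest)
```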
 

Nothinman

Elite Member
What's wrong with building one? I would only be using top-of-the-line hardware, and I know my stuff. It's not like I'm going to be building computers with ECS motherboards and Realtek network cards as servers; I can make a good, solid server.

So can I; it's easy. The problem is support. If you build one by hand and the NIC goes bad at 3AM, where do you get a replacement? Maybe you're lucky and have one lying around; maybe you're not, and your customers wait until 9-10AM when you can get to CompUSA for a replacement. If it were a Compaq server with a real support contract, you'd have a replacement there in hours (depending on the contract).

I had no idea you could swap out SCSI drives on the fly! How often do drives fail? It almost seems like a common occurrence the way you guys talk about it...

Of course, it requires the right enclosure; just unplugging a drive from an internal LVD cable would be bad =)

Drives fail often enough that you prepare for it happening; it's probably the most common problem you'll have (outside of software problems). SCSI drives are more reliable than IDE ones, but you still get one DOA every once in a while.

nVidia TNT2 M64 32MB video

Why? If it's all you can get, sure, but if I'm building it I'd rather put in an S3 ViRGE PCI or something equally old and stable.
 

Nothinman

Elite Member
-SCSI RAID controller card: I've liked the Adaptecs that we have at work, they seem to be pretty standard as well as fast. Just make sure that all of your SCSI stuff is Ultra-160 compliant. (You'll need a mobo with 64-bit PCI if I'm not mistaken)

We have one of the Adaptecs where I work and haven't had any problems, but I've heard some really bad stories about them, so I'd be iffy. Of course, almost all our boxes are Compaqs, so they have Compaq SmartArrays in them; that's what I've seen work, and of the dozens (never really counted...) of servers, I've only seen 2 or 3 have any problems related to the SmartArrays. 64-bit PCI slots are only required for certain cards; my workstation at home has a 29160N in it and it uses a 32-bit slot, although it's not RAID.
 
Jun 18, 2000
Originally posted by: Nothinman
Why? If it's all you can get, sure, but if I'm building it I'd rather put in an S3 ViRGE PCI or something equally old and stable.
Why not? A TNT2 M64 will cost about the same. Why limit yourself to a 2MB video card? That's ridiculous.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Why not? A TNT2 M64 will cost about the same. Why limit yourself to a 2MB video card? That's ridiculous.

Why waste 32MB on a 320x200 text console? Installing X on a Linux/FreeBSD server is pointless.

If you're using Windows then video drivers are the most common cause of random BSODs, so why not use something really old and known to work well?
 
NightFlyerGTI
Originally posted by: Nothinman
Why not? A TNT2 M64 will cost about the same. Why limit yourself to a 2MB video card? That's ridiculous.

Why waste 32MB on a 320x200 text console? Installing X on a Linux/FreeBSD server is pointless.

If you're using Windows then video drivers are the most common cause of random BSODs, so why not use something really old and known to work well?

Good points on both sides of the table, but most decent SMP motherboards are not going to come with built-in VGA. And what are the odds of finding a brand new (re: warranty) 2MB graphics card that probably requires an ISA slot? I think a TNT2 or original GeForce256 fits the bill nicely. They're cheap, reliable, still supported with drivers from nVidia, and have great 2D imaging. I don't know of a better solution for, what, 20-30 bucks today?
 
Jun 18, 2000
Originally posted by: Nothinman
Why not? A TNT2 M64 will cost about the same. Why limit yourself to a 2MB video card? That's ridiculous.

Why waste 32MB on a 320x200 text console? Installing X on a Linux/FreeBSD server is pointless.

If you're using Windows then video drivers are the most common cause of random BSODs, so why not use something really old and known to work well?
Waste? Am I taking the memory out of the hands of starving children in Africa? I can buy a new/retail TNT2 M64 for $30. Why get a used 2MB card that will cost just as much? Besides, the likelihood of getting a "random BSOD" while working at the desktop is virtually nil.
 

Nothinman

Elite Member
And what are the odds of finding a brand new (re: warranty) 2MB graphics card that probably requires an ISA slot?

S3 ViRGE 2MB cards are PCI and have known-good, stable drivers.

Besides, the likelihood of getting a "random BSOD" while working at the desktop is virtually nil.

Video driver BSODs are a lot more likely than pretty much any other kind. I've seen machines BSOD from the video driver when all the user was doing was browsing in IE.

The whole point is moot anyway; you should get a prebuilt box like a Compaq ProLiant, which will come with an onboard Rage128.
 
NightFlyerGTI
Well, the main point is that I'm looking to build some. I run a computer business with a focus on customized enterprise, business-grade hardware and consulting (think IBM, only cooler). I can do workstations, clients, and your basic server as far as computers go, but rackmount and extreme heavy-duty servers are the next step for me. I'm just trying to get a feel for how feasible they would be, and looking at the components, I don't think it's something that would be impossible to do and cover. My business isn't just some run-of-the-mill Mom-n-Pop place; I'm on call 24x7, and if a component goes down, I've got another in stock ready to go. If you're local, you could be back up within an hour depending on the task. If you're not local, I can outsource someone to the problem within a couple hours or UPS/FedEx you the component same-day, or overnight at worst. I've heard Compaq makes good servers, but it would be utter sacrilege for me to start utilizing those for my clients. ;)
 

Nothinman

Elite Member
I'm sure that if you had explained all of that in the beginning, the thread would have looked a lot different...