
Dual Athlon Web Server w/raid&scsi help!

DirtylilTechBoy

Senior member
Hello,

I hope someone can find the time to help me with this. I am building a server farm, and I want to use a combination of servers mainly built on the dual Athlon MP Tyan Tiger motherboard with four 512MB registered DDR RAM sticks. I want to use SCSI hard drives in a RAID 1 combination. I plan to get dual 36GB Seagate Cheetah 15k RPM drives and have them mirror each other in case one fails. I also want dual power supplies.

I have several questions and also requests for general advice and guidance.

1. How do I get a RAID card and a SCSI adapter to work together? If I have to plug the hard drives into the SCSI adapter, how in the heck do I hook them up to the RAID card too? Or do I buy a RAID card with SCSI built into it? I have no idea, so if someone can just pick out the two best cards and explain how I hook them up, you will save me a huge headache and help me sleep better at night.

2. One server will act as the backup component and will be active on the internal network, but inactive on the web. I want to use dual Athlons, get a case with 10+ bays, and stick in ten Western Digital 100GB hard drives for a total of 1 terabyte. I have read some things that make me think hooking up 10 IDE drives to one motherboard won't be as easy as it sounds. I have heard that the cable can only be so long before it won't work, and that most motherboards can't handle that many IDE devices. I don't need a whole lot of processing power since this box won't act as a web server, only a storage medium. Bluntly, how do I hook up ten Western Digital 100GB hard drives to a single motherboard? Do I need a power supply bigger than 300W if I will have that many drives? HELP!!!!!

What is better.... ASP or ColdFusion????


Thanks everyone!
 
holy crap... well I can't help ya with either of those, but those are some big-ass setups yer going for 🙂 good luck
 
<<1. How do I get a RAID card and a SCSI adapter to work together? If I have to plug the hard drives into the SCSI adapter, how in the heck do I hook them up to the RAID card too? Or do I buy a RAID card with SCSI built into it? I have no idea, so if someone can just pick out the two best cards and explain how I hook them up, you will save me a huge headache and help me sleep better at night.>>

You get a RAID card with SCSI built in. As for the best ones, well, I'm not too heavily into RAID, but Adaptec makes a good line of RAID cards, as does Mylex. You might want to look into which model would best suit your needs.

<<2. One server will act as the backup component and will be active on the internal network, but inactive on the web. I want to use dual Athlons, get a case with 10+ bays, and stick in ten Western Digital 100GB hard drives for a total of 1 terabyte. I have read some things that make me think hooking up 10 IDE drives to one motherboard won't be as easy as it sounds. I have heard that the cable can only be so long before it won't work, and that most motherboards can't handle that many IDE devices. I don't need a whole lot of processing power since this box won't act as a web server, only a storage medium. Bluntly, how do I hook up ten Western Digital 100GB hard drives to a single motherboard? Do I need a power supply bigger than 300W if I will have that many drives? HELP!!!!>>

UHHHHH, wait a second, didn't you just say that you were going to use a RAID card with SCSI built in??? Dude, the VERY LAST THING you want to do if you're serving a lot of files is use the IDE interface, because only one device on a channel can be active at any given time, and that causes slowdowns. Say one client wants a large file off a drive on one channel, and a second client wants a file that lives on a different drive on the same channel: that second client has to wait until it can get access to the channel. With SCSI you don't have this problem, because multiple devices on a chain can be active at the same time.

Plus, the hassle of putting 10 IDE drives on one board will be ENORMOUS. Normal boards have 2 channels, sometimes 4, which gives you a total of 8 drives on those channels; the other 2 will have to go on a separate controller card. If your board only has 2 channels, you'll need 2 controller cards, unless you can find a card with 4 channels on it. You are ASKING for problems by going with IDE for this setup. Stick with SCSI; it may be more expensive, but it will be a lot more reliable and faster.

As far as power supplies go, HECK YES you'll need one larger than 300W for that many drives. You're looking at power supplies in the arena of 600W, possibly 700W, to power them sufficiently, and for servers like this I ALWAYS recommend redundant power supplies.

I hope you remembered that it's very wise to have these servers hooked up to a UPS, right? Because if the power goes out and you haven't done your backups yet, well, let's just say it won't be a very good rest of the day. For the systems you've described, you're looking at a UPS in the arena of 2200VA for the one with all the drives, with an estimated runtime of 30 minutes.
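To put rough numbers behind the power-supply and UPS sizing above, here's a quick back-of-the-envelope sketch in Python. The per-drive wattages, base-system draw, headroom factor, and power factor below are all assumptions typical of hardware of this class, not manufacturer specs, so check your actual drive datasheets:

```python
# Rough power-budget sketch for a 10-drive IDE box.
# All wattage figures are ballpark assumptions, not vendor specs.

DRIVES = 10
SPINUP_W_PER_DRIVE = 25   # drives draw the most current at spin-up (assumed)
ACTIVE_W_PER_DRIVE = 10   # typical seek/read draw (assumed)
BASE_SYSTEM_W = 250       # dual CPUs, RAM, fans, cards (assumed)
HEADROOM = 1.3            # ~30% safety margin on the PSU rating

spinup_peak = BASE_SYSTEM_W + DRIVES * SPINUP_W_PER_DRIVE
steady_state = BASE_SYSTEM_W + DRIVES * ACTIVE_W_PER_DRIVE
psu_rating = spinup_peak * HEADROOM

print(f"Spin-up peak:  ~{spinup_peak} W")
print(f"Steady state:  ~{steady_state} W")
print(f"Suggested PSU: ~{psu_rating:.0f} W")

# UPS ratings are in VA, not watts: VA ~= watts / power factor.
# Assuming a power factor around 0.65 for PSUs of this era, even a
# modest load needs a bigger VA number than you'd guess -- and you
# need far more capacity again if you want ~30 minutes of runtime.
ups_va = steady_state / 0.65
print(f"Minimum UPS:   ~{ups_va:.0f} VA (much more for long runtime)")
```

The spin-up peak is what matters for the PSU rating: all ten drives spinning up at once is the worst case, which is why the answer lands in the 600W+ range the post suggests.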

<<What is better.... ASP or Coldfusion????>>

ColdFusion all the way; it's a much more flexible and powerful language.
 
To emphasize what was said earlier, and to expand on it:

1. You said you want to build a server farm. So, forget about IDE. Not an option.
2. RAID & SCSI. *All* server-class RAID cards are SCSI-only. Plug all the SCSI hard drives into the RAID card, and the SCSI CD-ROM into the plain SCSI controller.
3. Adaptec, Mylex, and the vendor cards (IBM, Compaq, etc.) are the most used cards.
4. Servers tend to use RAID 5 rather than RAID 1, because it's faster/easier to replace a drive with less (0) downtime.
5. I would push you towards a vendor package (Compaq Proliant?), because they come out of the box with lots of drive bays, SCSI & RAID support, redundant power supplies (hot-swap 🙂 ), and hot-swap NICs. They aren't cheap, but you get what you pay for. You should be able to get a decent sized server chassis on e*ay, from a dot-bomb sale.

If you have more questions, go ahead and post (or PM). (We have 3 server farms, >1500 servers)

--Woodie
 