
Advice on Building Server from Spare Parts

Current Server:
ASUS CUSL2 motherboard
512MB of RAM (2 x 256 PC133 SDRAM)
PIII-866 CPU
3Com 10/100 NIC
1 x Seagate 400GB ATA133 hard drive
Ubuntu Linux Server 5.10

What I want to do...

I want to form a gigabit internal network. So I want to get that D-Link Gaming router with four 10/100/1000 LAN ports. (My current gaming machine has an ASUS SLI motherboard with onboard gigabit LAN.) I also want data redundancy.

So, my thoughts...

ASUS CUSL2 motherboard
512MB of RAM (2 x 256 PC133 SDRAM)
PIII-866 CPU
2 x WD 500GB SATA3.0 hard drives
Ubuntu Linux Server 6.10

Which PCI gigabit NIC should I buy?

And more importantly, which PCI/SATA RAID card should I buy that will work with Ubuntu 6.10? My plan is to have a RAID 1 configuration with those two 500GB drives.
 
I know that Silicon Image based SATA RAID cards are supported by the current Linux kernel. And Netgear or Linksys have had decent and inexpensive LAN cards all along - don't see why it should be any different with Gigabit. IIRC, Intel based LAN cards are pretty decent too.

I've not heard of SATA 3.0 - what's that? You need some fancy bus slots to support SATA 2 (aka SATA-300) cards - either extended PCI (64-bit and 66MHz) or PCI Express slots - and those cards are generally expensive.

.bh.
 
Having a GbE NIC on PCI together with a storage controller on PCI will reduce your performance. An external backup is logically superior to RAID 1 by itself. So I'd recommend not getting a PCI storage controller and using regular external backups instead of RAID 1.

The Intel Pro/1000 GT or MT (older version) are generally recommended -- they're not too expensive, have good driver support, low CPU utilization and perform well.
 
Any Realtek 8169 based PCI Gigabit ethernet card will work well with Ubuntu (or any current Linux distro) these days. I use a Netgear GA311 without any issues. File transfers using FTP sustain around 20-24MB/sec. Using SMB it's about 20% slower. Using SFTP it's much slower. My "file server" has an AthlonXP 1800+ though, and I'm using old Seagate Barracuda ATA-IV drives. (still some of the quietest drives ever made)
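If you want to separate raw network speed from those per-protocol numbers, here's a crude sketch using dd piped through netcat. The hostname "server" and port are assumptions, and netcat flag syntax varies between versions (BSD netcat drops the -p):

```shell
# Hedged sketch: measure raw TCP throughput between two hosts, so
# protocol overhead (FTP vs SMB vs SFTP) can be separated from the
# network itself. "server" is an assumed hostname.

# On the server, discard whatever arrives on port 5001:
nc -l -p 5001 > /dev/null

# On the client, time pushing 1 GB of zeros across the wire:
dd if=/dev/zero bs=1M count=1024 | nc server 5001
# dd reports a bytes/sec figure when it finishes; compare it against
# the FTP/SMB/SFTP rates to see how much each protocol costs.
```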
 
Any GbE NIC will do if you're satisfied with 20-30 MB/s (or that's all your HDs can do, etc.). The Realteks are about the cheapest ones (VIA-based cards sometimes show up even cheaper), perform OK, but have high CPU utilization at the high end of performance (at the low end, 20-30 MB/s, it doesn't matter much).

The Intels are probably better all around, and if you can get them for around $20, why not?

http://www.pricegrabber.com/search_getp...70/sort_type=price/ut=d8633a8d0f4686c7
 
Originally posted by: GTaudiophile
Current Server:
ASUS CUSL2 motherboard
512MB of RAM (2 x 256 PC133 SDRAM)
PIII-866 CPU
3Com 10/100 NIC
1 x Seagate 400GB ATA133 hard drive
Ubuntu Linux Server 5.10

What I want to do...

I want to form a gigabit internal network. So I want to get that D-Link Gaming router with four 10/100/1000 LAN ports. (My current gaming machine has an ASUS SLI motherboard with onboard gigabit LAN.) I also want data redundancy.

So, my thoughts...

ASUS CUSL2 motherboard
512MB of RAM (2 x 256 PC133 SDRAM)
PIII-866 CPU
2 x WD 500GB SATA3.0 hard drives
Ubuntu Linux Server 6.10

Which PCI gigabit NIC should I buy?

And more importantly, which PCI/SATA RAID card should I buy that will work with Ubuntu 6.10? My plan is to have a RAID 1 configuration with those two 500GB drives.

Waste.of.money. You're not going to be able to push anything close to gigE over a single PCI bus.

gigE is (theoretically) 125MB/sec. Adjust down for overhead and packet loss/collision, and you're probably in the 100MB/sec range assuming you don't have a lot of devices or are using a lot of wire.

Anything from the P2/P3 era that wasn't heavy-duty server class is going to be architected like this:

NB->AGP
NB->SB->PCI->IDE

The IDE controller is going to be a peer on the same PCI bus as the slots. So, your gigE card is going to be sharing bandwidth with the IDE controller.

PCI 32/33 (standard PCI) is good for 127MB/sec on a good day. Notice that is almost identical to gigE's transfer rate. Now assume best case, that you can DMA transfer data from the gigE card to the HD without much CPU work, and that the HD can keep up. At best, you'd be looking at approximately 45MB/sec, real-world would be more like 30MB/sec.
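The back-of-the-envelope numbers above can be sketched as follows (the 65% bus-efficiency factor is my own assumption, not from the post):

```shell
# Rough arithmetic behind the shared-PCI-bus argument.
pci_peak=$((33 * 4))              # 32-bit bus @ 33MHz = 132 MB/s theoretical
gige_peak=$((1000 / 8))           # gigE = 125 MB/s theoretical, nearly identical

# An incoming file crosses the PCI bus twice: once NIC -> RAM,
# then again RAM -> IDE controller (a peer on the same bus).
shared=$((pci_peak / 2))          # ~66 MB/s ceiling

# Assume ~65% real-world bus efficiency (an assumption):
realistic=$((shared * 65 / 100))  # ~42 MB/s, in line with the estimate above

echo "$pci_peak $gige_peak $shared $realistic"
```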

There's a reason that all the consumer-grade stuff that supports gigE has it built into the NB and connected to a separate bus; you're just not going to get good performance otherwise.
 
Originally posted by: TerryMathews
Waste.of.money.

He already has the server, and is talking about getting a GbE switch/router and a GbE NIC, which should give at minimum of 2x file transfer performance. Relatively speaking, that sort of performance jump at this sort of price is a bargain, and it's his money anyways.

What's your problem?

 
Originally posted by: TerryMathews
<snip>

OK then - say you had kit from the same era, but it's entry level server stuff, what would you do then? Specifically talking about a Tyan Thunder LE-T dual P3 with 2 PCI-X slots...
 
Originally posted by: Atheus
Originally posted by: TerryMathews
<snip>

OK then - say you had kit from the same era, but it's entry level server stuff, what would you do then? Specifically talking about a Tyan Thunder LE-T dual P3 with 2 PCI-X slots...

Well, gigE PCI-X cards aren't super expensive, so that wouldn't be such a bad investment. Although I'm not too sure how those Serverworks chipsets are at pushing data around.

I guess maybe I was looking at it from a different view than some of the other people in this thread: not whether gigE is a good value proposition, but why 100bT isn't good enough. There's not really much you can do with a computer that won't work just fine streaming at 12.5MB/sec...

If you've got a task that you'd really see a benefit from running gigE, go for it. If it were me, considering the server you're talking about (I'm not knocking it), I'd wait to transition to gigE until I had some more gigE-ready systems. If you need to replace a router or switch anyway, sure go ahead and future proof. I just wouldn't go out of my way to switch Right Now. 🙂
 
Originally posted by: Atheus
OK then - say you had kit from the same era, but it's entry level server stuff, what would you do then? Specifically talking about a Tyan Thunder LE-T dual P3 with 2 PCI-X slots...

That's a strange board, and it doesn't have PCI-X slots -- they're 64/66 PCI slots, of 2 different types -- one is 3.3v, and the other is the older 5v. You need to be careful matching cards with the slots, and consider that if you use a 32/33 PCI card, the entire bus may run at 32/33.

I think you can find a workable and affordable solution with this setup, esp. with really careful eBay/etc. sourcing of old (affordable) 64-bit controllers. However, if you buy new PCI-X cards for this server - expensive by default, and assuming they're compatible - you should ask yourself: is it worth it? Are you going to take this further and buy another server board for the eventual upgrade?

Going with a newer desktop PCIe board could be faster and cheaper and more future-friendly.

Also, you didn't provide enough detail regarding usage, other equipment (client, network, CPUs, OSs), budget, etc. to make good recommendations here.
 
Originally posted by: GTaudiophile
I want to get that D-Link Gaming router with four 10/100/1000 LAN ports.

Another thing to consider is that the D-Link router doesn't support jumbo frames, and with slower CPUs and PCI bus, you're likely to have some performance impact because of this. You need jumbo frame support on both NICs as well as the switch for it to work.

While these figures won't necessarily predict the performance you'll see, as one data point I've seen 44 MB/s without jumbo frames and 58 MB/s with jumbo frames for very large file transfers using PCI NICs.

Inexpensive desktop switches such as the Netgear GS605 / GS105 version 2 support 9K jumbo frames.
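For reference, raising the MTU on the Linux side is a one-liner. This is a hedged sketch: "eth0" is an assumed interface name, and 9000 bytes assumes the NIC driver, the other end, and the switch all support 9K frames:

```shell
# Enable jumbo frames by raising the interface MTU ("eth0" assumed).
ifconfig eth0 mtu 9000
# On newer tools, the equivalent is: ip link set dev eth0 mtu 9000

# Verify the change took effect:
ifconfig eth0 | grep -i mtu
```

If any device in the path doesn't support the larger frames, transfers will stall or fall back, so change both ends together.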
 
Originally posted by: Madwand1
That's a strange board, and it doesn't have PCI-X slots -- they're 64/66 PCI slots, of 2 different types -- one is 3.3v, and the other is the older 5v. You need to be careful matching cards with the slots, and consider that if you use a 32/33 PCI card, the entire bus may run at 32/33.

Damn, I thought 64/66 PCI was PCI-X, but it looks like it isn't. Are PCI-X cards compatible then? At least with the 3.3v slot? I can't find much in 64-bit PCI on ebay, not SATA anyway, it's mostly SCSI and fiber channel.

Going with a newer desktop PCIe board could be faster and cheaper and more future-friendly.

Also, you didn't provide enough detail regarding usage, other equipment (client, network, CPUs, OSs), budget, etc. to make good recommendations here.

I won't be upgrading the board and processors; they're overkill anyway. I only bought it all because it was cheap and reliable - an uncommon combination. It's used as an SMB file server, media server, low traffic web server, and testing platform for web applications and Linux stuff. I basically want more space and RAID-1 just like the OP, but the onboard IDE is slow and the onboard SCSI is too expensive to get the drives for.

 
Originally posted by: Atheus
Are PCI-X cards compatible then? At least with the 3.3v slot? I can't find much in 64-bit PCI on ebay, not SATA anyway, it's mostly SCSI and fiber channel.

I think you're going to have to solve both problems at the same time -- GbE NIC and storage controller. For best performance, you'd need 2 64/66 cards. If you only got a fast storage controller, then you'd be limited by the onboard NIC, and you might as well not have a fast storage controller then. Likewise for just a fast NIC. If you got one 32/33 adapter, then (because there is no separate PCI-only bus) your entire bus would run at 33, so you'd lose at least 1/2 the speed. Maybe that'd be OK, but still is a waste.

So ideally you need to figure out how to get 2 64/66 cards that will work. And here, the problem is the 5v slot, as 5v is obsolete. I think the problem might be solvable with some old 64/66 server NICs -- these turn up quite often on eBay these days, as server pulls of various makes.

I don't have the time to solve this problem for you in detail -- I suggest starting off with Intel specs, to find out which PCI 66/64 cards would be compatible with 5v -- perhaps Intel documentation or pre-sales could help with this. Similarly, there are other good NIC vendors who might have such parts. Of course, starting with what's on eBay and looking up their specs is also a good way to proceed.

As you'll be using Linux, you can go with its software RAID, and as yuchai says, you don't really need a RAID controller then, just a SATA adapter. Supermicro makes a couple of these which are well-regarded, and come in 64-bit PCI-X by default and cost somewhere around $100 each. They're compatible with 64-bit PCI slots according to their FAQ. I'd believe they're only compatible with 3.3v, but you could try checking with Supermicro.

http://www.supermicro.com/products/accessories/addon/DAC-SATA-MV8.cfm
http://www.supermicro.com/products/accessories/addon/AoC-SAT2-MV8.cfm
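Since software RAID is suggested above, here's a minimal sketch of the mdadm side of a two-disk mirror. The device names, filesystem, and mount point are assumptions; substitute whatever your SATA adapter actually exposes:

```shell
# Hedged sketch: Linux software RAID 1 with mdadm.
# /dev/sda1 and /dev/sdb1 are assumed partition names.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# ext3 was the Ubuntu 6.10 default filesystem:
mkfs.ext3 /dev/md0
mkdir -p /srv/storage
mount /dev/md0 /srv/storage

# Watch the initial mirror resync progress:
cat /proc/mdstat
```

The array keeps working (degraded) if one drive fails, but as noted earlier in the thread, RAID 1 is not a substitute for external backups.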
 
Originally posted by: Atheus

I won't be upgrading the board and processors; they're overkill anyway. I only bought it all because it was cheap and reliable - an uncommon combination. It's used as an SMB file server, media server, low traffic web server, and testing platform for web applications and Linux stuff. I basically want more space and RAID-1 just like the OP, but the onboard IDE is slow and the onboard SCSI is too expensive to get the drives for.

If you're really serious about going gigE, I think the basic fact that we're all stepping around here is you should be looking at a S478 board based on the i875 chipset. That way, you get built-in fast (and properly connected) SATA and gigE. They're both connected to the Northbridge on their own PCI busses, so you're not going to get any weird bandwidth issues.

The stuff you've got is good, don't get me wrong, it's just 10/100 class equipment.
 