PCIe x16 to 2 PCIe x8 (or 4 x4) ?

Banderon

Member
Feb 29, 2000
I've been looking into building a file server. I'm planning on buying a low-power miniITX board with one PCIe x16 slot, and onboard GigE.

I'm going to throw an x4 (maybe x8) RAID card into the x16 slot and run a lovely 8-drive RAID5. The array will put out a lot of speed, more than enough to saturate the theoretical 125MB/s of GigE. And that's before considering that a number of PCs will be connecting to the server, and that LOM (LAN on motherboard) Ethernet adapters aren't usually that amazing.

So, here's my plan for alleviating the Ethernet bottleneck. I want to split the x16 lanes of the one slot into 2 x8s, or 4 x4s. Then I can have the RAID card running at full speed and throw in a few PCIe x1 GigE cards. Trunking the NICs bumps the theoretical bandwidth cap to 1Gb/s times the number of NICs, helping to alleviate the bottleneck. Ideally, I'd split the x16 into 2 x8s and run an x8 RAID card, then split the remaining x8 into 1 x4 and 4 x1s, with all 5 NICs trunked to 5Gbps (diagram below, plus a quick sanity check of the math after it).

x16
 |-- x8 -- RAID card
 |-- x8
      |-- x4 -- NIC
      |-- x1 -- NIC
      |-- x1 -- NIC
      |-- x1 -- NIC
      |-- x1 -- NIC
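
To sanity-check the trunking math, here's a quick back-of-the-envelope sketch in Python. The array throughput and the per-link overhead are my own placeholder assumptions, not measured numbers:

# Rough check: how many trunked GigE links would the array need?
GIGE_MBPS = 125.0  # theoretical GigE payload ceiling, MB/s

def links_needed(array_mbps, overhead=0.20):
    # Assume ~20% of each link is lost to protocol/trunking overhead.
    usable_per_link = GIGE_MBPS * (1 - overhead)
    return max(1, -(-array_mbps // usable_per_link))  # ceiling division

array_mbps = 500.0  # assumed sustained throughput of the 8-drive RAID5
print(f"{array_mbps:.0f} MB/s needs ~{links_needed(array_mbps):.0f} GigE links")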


Basically, my question is this: are there any standard riser cards out there that split an x16 slot up into 2 x8s or 4 x4s? And are there any x8 risers that split up into x4s, or x4s that split up into x1s? The miniITX mobos have one x16 slot to save vertical space, and I like that. But with flexible risers, I can certainly find the room for 4-5 PCIe cards and still take up less space than going with a larger motherboard.


So far, all I've found is the RSC-R2UE-2E4R (PCI-E x8 to 2x PCI-E x4) from Supermicro. The problem is that it's meant for right-hand slots. Take a look at the picture and you'll understand what I mean: http://www.wiredzone.com/itemdesc.asp?ic=10017841

Anyone know of anything that can help me out, or have any ideas or criticisms/advice?
 

Banderon

Member
Feb 29, 2000
Something similar, yes... but not that.

"ABKPCIEXPUP connects to the Intel Adaptive Slot on the Intel Entry Server Platform SR1425BK1-E 1 x expansion slot."

That doesn't connect to an actual PCIe slot, but rather to a proprietary expansion slot.
 

Zap

Elite Member
Oct 13, 1999
A case that can hold an 8-drive RAID 5 array should surely be able to hold a motherboard larger than mini-ITX? Just pick up a motherboard with onboard GbE and enough PCIe slots, and the lowest-end CPU it can handle. I think the power envelope will be close enough to a mini-ITX solution (after accounting for 8 drives), and it will be cheaper (mini-ITX with PCIe x16 ain't cheap).
 

QuixoticOne

Golden Member
Nov 4, 2005
I love mini-ITX systems where they're appropriate, but I don't see the sense in using one here if it gets in your way.
The CPU, chipset, and memory limitations inherent in most mini-ITX boards would severely compromise your performance with that class of file server. The RAM often isn't even dual-channel, and often can't be expanded to 8GB or even 4GB. The CPU is usually something on the low-end side, which is fine if most of the intelligence is offloaded to a smart RAID card and TCP-offloading NICs; but there is still a decent amount of work for the CPU to do in that kind of I/O-saturated environment, certainly if it's running an OS and providing a filesystem as opposed to just sharing essentially raw block devices over iSCSI or something.

Beyond that, the size and power consumption of a reasonable microATX or "average" ATX motherboard (which is often considerably smaller than the full ATX specification allows these days) is relatively attractive compared to the size and power consumption of the drives, case, fans, power supply, RAID card, and NICs you need anyway.

Lots of RAM (e.g. 8GB) would be very useful as a cache for that many drives. If your application is at all important, you might do well to consider ECC RAM (which you can't get on a common mini-ITX board AFAIK, unless someone designs one for 1U server rack use... maybe), as well as a redundant PSU.

And as for mini-ITX or even mid-size ATX cases: have you tried cramming 8, or even 4, drives into even a better-than-average mid-tower case? Even many full-tower ATX cases aren't exactly a joy to put that many drives in, especially if they're in hot-swap enclosures; the cabling gets messy, and the thermal environment gets messy too. So if you use anything approaching a full-tower case, the mini-ITX board is irrelevant, since a full-size board would work just as well in there. I can see a possible point if you want a 1U blade CPU server hooked over eSATA/SAS or whatever to an external disk enclosure... but even so.

As for a RAID card with a low-power CPU and non-ECC system RAM: you're awfully brave if you're not running ZFS to catch the silent checksum errors that the RAID level never detects (and which are pretty common). And if you were using ZFS for that, you'd have the drives in JBOD mode rather than under hardware RAID control anyway.
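
To make that concrete, here's a toy Python sketch of the end-to-end checksumming idea. It's nothing like the actual ZFS implementation, just the concept: store a checksum per block at write time and verify it on every read, which is exactly the check parity RAID never performs:

# Toy end-to-end checksumming: parity RAID recomputes parity but never
# verifies that the data it returns is the data you wrote.
import hashlib

def write_block(store, checksums, block_id, data):
    store[block_id] = data
    checksums[block_id] = hashlib.sha256(data).hexdigest()

def read_block(store, checksums, block_id):
    data = store[block_id]
    if hashlib.sha256(data).hexdigest() != checksums[block_id]:
        raise IOError(f"silent corruption detected in block {block_id}")
    return data

store, checksums = {}, {}
write_block(store, checksums, 0, b"important file data")
store[0] = b"important file dat\x00"  # simulate bit rot on disk
try:
    read_block(store, checksums, 0)
except IOError as e:
    print(e)  # silent corruption detected in block 0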
 

DSF

Diamond Member
Oct 6, 2007
How would you even have the physical space to run a riser off of a riser off of a riser? I must be missing something.

I know there are P35/P45 boards that have 2 PCI-e x16 and 3 PCI-e x1. You could probably even find one that has 4 PCI-e x1, and I imagine it's the same story on the AMD side. By the time you spend all that money on risers you could've just bought a board with all the slots hardwired.
 

Banderon

Member
Feb 29, 2000
Very good comments, all of these. I'll address the simplest first: to fit things, I was going to use flexible risers and link those to the splitter risers.

While I certainly care about function the most, looks and style do matter to me. I was even considering putting the actual PC system in its own tiny case, just big enough for the miniITX board and the NICs positioned parallel to the motherboard on the riser cards. The actual HDs I could put into an external enclosure connected via InfiniBand-style cables ( http://www.cooldrives.com/fonasamidren.html ), but that setup looks to be a tad too expensive.

QuixoticOne makes many very good points. I wanted to use miniITX because of the very low power usage (entire systems can run off of less than 70 watts). Throwing a few NICs into the mix certainly adds to the power draw, and having 4-8 drives will clearly add more. But a 7200rpm drive runs about 15 watts, so eight of them is roughly 120 watts, which isn't too bad: a total draw of around 190 watts plus the NICs (rough tally below). I figured that since I wouldn't need anything an average ATX motherboard has to offer, I might as well get something less power-hungry and smaller (smaller looks so much nicer).
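
Here's that power budget tallied in Python; every number is my own ballpark assumption, and the per-NIC wattage in particular is just a guess:

# Rough power budget (ballpark assumptions throughout).
system_w = 70   # mini-ITX board + CPU + RAM
drive_w  = 15   # one 7200rpm drive under load
drives   = 8
nic_w    = 5    # per PCIe GigE NIC (a guess)
nics     = 5

total = system_w + drive_w * drives + nic_w * nics
print(f"Estimated draw: {total} W")  # 70 + 120 + 25 = 215 W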

This is for a home/VPN-accessible file/media server. It will be accessed by myself, the other PCs in the house, and via ~5 VPN accounts. While it will be rare that everyone will be accessing the server at once, I do want the system to be ready for that. I wanted to run XPlite ( http://www.litepc.com/xplite.html ) with most of Windows' components removed; basically just use it as a glorified NAS, a USB->Ethernet bridge for the house printer, and to run whatever Windows-specific functions/programs I want to have available.

I want to run RAID5 for the speed, as well as the extra bit of redundancy. Just in case two drives from the RAID5 die on me (or the RAID card itself does), I plan on having a backup server running on a very old PC (K6-2 350MHz, 256MB PC100, 100Mbit Ethernet) with a RAID0 across 2-4 large drives. I'd have it boot up nightly, sync to the file server, and shut down again (a sketch of the wake-up part is below). Once a week, I'd have the backup server run a disk check on the RAID0, just to make sure everything's OK. Data access on the main server isn't so critical that I'd need a second PSU. I'll have a UPS, but that'll be just to let the system shut down properly instead of crashing in the event of a power failure.
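
For the nightly boot-up I'd use Wake-on-LAN, assuming the old box's NIC supports it. Here's a minimal Python sketch of sending the magic packet; the MAC address is a placeholder:

# Send a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by the
# target MAC repeated 16 times, broadcast over UDP.
import socket

def wake(mac):
    payload = bytes.fromhex(mac.replace(":", ""))
    packet = b"\xff" * 6 + payload * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, ("255.255.255.255", 9))  # WOL commonly uses port 9

wake("00:11:22:33:44:55")  # placeholder MAC for the backup server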

I guess it would be too optimistic and idealistic to assume that the NICs and the smart RAID card would do all of their own heavy calculations, leaving the CPU free to just run the OS? I was going to run the thing on Windows for the simplicity and ease of Windows file/printer sharing, so the file system is necessarily NTFS. I didn't realize that there was a real possibility of undetected errors when writing to a RAID; more info would definitely be welcome.


Perhaps I should also repost this as a stand-alone thread to get some more advice?

I wasn't planning on having more than 1-2GB of RAM, since I figured barely any of it would get used. Using some of the RAM as a cache for the HDs didn't occur to me... how would I do that?