Deciding on a motherboard for a media server.

benwood

Member
Feb 15, 2004
I'm thinking about putting together a new PC to act as a media server, as my old one is failing. I won't be doing any gaming, overclocking, or web surfing on it. I may run software RAID like FlexRAID or unRAID.

I want as many onboard SATA ports as possible so I can add lots of hard drives. My present media server has 16 internal hard drives plus 8 external ones, and I'd like to consolidate them onto a smaller number of larger-capacity drives. I'd also prefer a motherboard with two slots that can run in PCIe 2.0 x8 mode (x8/x8) so I can add one or two IBM M1015 HBAs, which require x8 slots, for even more SATA ports later if necessary.

I've found 8-SATA-port motherboards using both AMD and Intel chipsets. The Intel chipset boards need an extra SATA controller chip to reach 8 ports, while the AMD A85X chipset boards don't. So far I've found 3 motherboards that fit my needs:

1) ASRock Z77 Extreme4
2) ASUS P8Z77-V LX
3) MSI FM2-A85XA-G65 FM2

The ASRock was my first choice, but the large number of negative reviews on Newegg scares me. Then I looked at the ASUS, which seems mostly OK. Finally I looked at the AMD board from MSI, which got very good reviews.
 

dealcorn

Senior member
May 28, 2011
First, do you understand that in this context an IBM 5014 works great with only 4 PCI-e lanes? Your maximum transfer speed is limited to 150 MB/sec per drive rather than 300 MB/sec, so the available bandwidth better matches what is required in a home media server context. No current spinning drive needs even 200 MB/sec, so I do not see this as a loss. If you have several SSDs, use the motherboard ports.

Second, gigabit Ethernet peaks at 125 MB/sec, so unless you are committed to port aggregation or a different NIC, network transfer speed will limit performance regardless of your SATA port count.
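To put numbers on that (the per-drive sequential figure below is an assumed ballpark for a modern large-capacity spinner, not a measurement):

```python
# Rough bandwidth sanity check for a GbE-connected media server.
gbe_ceiling_mb_s = 1000 / 8        # 1 Gbit/s = 125 MB/s before protocol overhead
hdd_sequential_mb_s = 110          # assumed ~110 MB/s sequential for one modern spinner

print(f"GbE ceiling:        {gbe_ceiling_mb_s:.0f} MB/s")
print(f"One spinning drive: {hdd_sequential_mb_s} MB/s")
print(f"Drives to saturate GbE: {gbe_ceiling_mb_s / hdd_sequential_mb_s:.1f}")
```

In other words, a single modern drive already comes close to filling a gigabit link, so stacking SATA ports buys capacity, not network throughput.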

You are talking about 24 SATA ports, which I translate as a pair of 12-disk arrays using RAID 6. That provides 80 TB of usable storage using 4 TB drives, or 60 TB using the far more affordable 3 TB drives. Have you given any realistic thought to your target storage capacity in TB rather than a port count?
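As a quick sketch of that arithmetic (RAID 6 sets aside two drives' worth of capacity per array for parity):

```python
# Usable capacity of N-drive RAID 6 arrays (two drives' worth of parity each).
def raid6_usable_tb(drives_per_array: int, drive_tb: float, arrays: int = 1) -> float:
    return (drives_per_array - 2) * drive_tb * arrays

print(raid6_usable_tb(12, 4, arrays=2))  # 80.0 TB with 4 TB drives
print(raid6_usable_tb(12, 3, arrays=2))  # 60.0 TB with 3 TB drives
```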

Hard disks are guaranteed to break over time, always. It sure is a pain in the B* to replace them at odd hours. You can build anything you want, but as you move towards large-capacity drives, hot swap and either RAID 6 or ZFS RAID-Z2 or better are strongly recommended if you prefer not to lose data and have things to do other than restore from backups.

You do not state whether your media server will be on 24/7, which should affect motherboard selection.

I am awaiting parts delivery to build my archival storage server. An archival storage server is typically powered off, so I am not overly concerned with power requirements. I settled on a low-end Supermicro socket 1155 motherboard and a Pentium G2020 with 4 GB of ECC RAM and a used IBM BR10i (8 SATA ports @ 150 MB/sec for $40). If the server were going to be powered up 24/7, I would have selected an IBM 5014 and waited for Briarwood, which may actually be your best solution if the quad-core Crystal DMA engine really provides economical hardware support for RAID 6 parity calculations. Depending on model, you get either 32 or 40 PCI-e lanes to play with, so there is room for both SATA ports and a higher-end NIC. Selected pre-release information was leaked here: http://www.cpu-world.com/news_2012/2012111401_Specifications_of_Briarwood_Atom_processors.html. The upside is it will likely give you an opportunity to learn how to set up a headless server.
 

BuffaloChuck

Member
Mar 12, 2013
We use ASRock's AMD 990FX Extreme4 motherboards for media servers because they have 8 SATA III ports (unlike any Intel board) and they offer an IDE port.

SATA ports 1-6 are all RAID-able, so we put the CD drive and the boot drive on SATA 7 & 8. There is a startup boot-time lag when using either of these last two ports as a boot device - about 5 seconds - but with an SSD's boot time of 10-12 seconds, another five is NOT noticeable.

Often we also have a tried-and-true old Pioneer 112 or 116 as an IDE DVD burner, which frees up that other SATA port as an easy clone option for the SSD, or as a spot for a Blu-ray burner.

When we use Intel boards, we skip the onboard RAID and go with a RAID card that offers SATA III support - again, unlike Intel, which won't offer more than two SATA III ports until Haswell - 4 years after the rest of the world. Grrr... it's a shame to have the fastest processors hamstrung by the slowest possible bottleneck (HDDs) behind the slowest possible disk controller.
 

dealcorn

Senior member
May 28, 2011
The ASRock AMD 990FX Extreme4 is truly a superb board that is a wacky $180 suggestion in this context. First, it can support your non-gaming needs with support for AMD Quad CrossFireX™, 3-Way CrossFireX™ and CrossFireX™, and NVIDIA® Quad SLI™ and SLI™. Imagine the pride of ownership that results from saying "I paid through the nose for features I do not want and do not intend to use."

Second, the board gives you 8 SATA 3 ports at 6.0 Gb/sec (750 MB/sec). Assuming you put them in a RAID 6 configuration, that gives you roughly 4500 MB/sec of theoretical read throughput. However, the wonderfulness of that capability must be understood in the context of the 125 MB/sec speed limit imposed by your single Realtek gigabit Ethernet port. Thus your read speed is over 30 times what is required to saturate your Ethernet controller. Again you get the pride of ownership that comes from a grossly imbalanced system with wasted capabilities. With an IBM BR10i and its crummy PCI-e 1 configuration you are limited to a read speed of about 800 MB/sec, which is still 6+ times what is required to saturate a single gigabit Ethernet port like the one the 990FX provides. I do know that Intel Ethernet ports are universally held in high regard in the server community, and Realtek's do not enjoy the same reputation. Hopefully, in a later post someone will pipe up that this Realtek chip is different from the ones with a bad reputation and you should not worry.
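For what it's worth, the "over 30 times" figure follows directly from those numbers (taking 6.0 Gb/sec as 750 MB/sec of raw line rate, as above, and ignoring encoding overhead):

```python
# Theoretical RAID 6 read rate on 8 SATA 3 ports vs. one gigabit Ethernet port.
sata3_raw_mb_s = 6000 / 8            # 6.0 Gb/s raw line rate = 750 MB/s per port
data_drives = 8 - 2                  # RAID 6 keeps two drives' worth of parity
array_read_mb_s = data_drives * sata3_raw_mb_s   # 4500 MB/s theoretical
gbe_mb_s = 1000 / 8                  # 125 MB/s

print(array_read_mb_s / gbe_mb_s)    # 36.0 -> "over 30 times" the GbE ceiling
```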

I would rather save ten bucks and get a Supermicro X9SCM-F. You sacrifice a pair of SATA ports but gain a second, Intel, Ethernet controller and IPMI remote management, so you never need to connect a monitor or keyboard. This is a server, not a gaming console, and solid remote management capabilities are worth far more at home than benchmark-only features. Video comes as part of the IPMI package, so all four PCI-e slots are available for expansion. Supermicro has a good reputation for quality. Still, Briarwood is probably the best choice for 24/7 operation because of its claimed efficiency, although that has not yet been tested.
 

benwood

Member
Feb 15, 2004
First, do you understand that in this context an IBM 5014 works great with only 4 PCI-e lanes? Your maximum transfer speed is limited to 150 MB/sec per drive rather than 300 MB/sec, so the available bandwidth better matches what is required in a home media server context. No current spinning drive needs even 200 MB/sec, so I do not see this as a loss. If you have several SSDs, use the motherboard ports.

Second, gigabit Ethernet peaks at 125 MB/sec, so unless you are committed to port aggregation or a different NIC, network transfer speed will limit performance regardless of your SATA port count.

I'm not concerned about speed; I'm concerned about storage capacity. The reason I might want to use an IBM M1015 is that they are available cheap on eBay, and there's a lot of support for using them, once crossflashed to an LSI 9220-8i, with various software RAIDs like FlexRAID, SnapRAID, or the Linux-based unRAID. I want the extra SATA ports, not speed. And I may not buy the M1015 card immediately; I just want the option to add one later without having to buy another motherboard.

You are talking about 24 SATA ports, which I translate as a pair of 12-disk arrays using RAID 6. That provides 80 TB of usable storage using 4 TB drives, or 60 TB using the far more affordable 3 TB drives. Have you given any realistic thought to your target storage capacity in TB rather than a port count?

I'm retiring 12 of the hard drives in my current media server as they are parallel ATA (PATA) hard disks on a PCI (not PCIe) 3Ware 7506-12 hardware controller running as two RAID-5 arrays. I've had several hard drive failures, including one during a rebuild, and I've run out of replacements. It's getting hard and expensive to find PATA drives nowadays, and the 3Ware controller isn't supported under anything later than Windows XP. I don't need 60 TB of storage (thus far), but I do need lots of ports. I have a mix of eight 1 TB and 1.5 TB drives in external cases which I want to re-use to save money. Plus I've bought three 3 TB hard drives to hold what was stored on the drives attached to the 3Ware controller. I won't be running RAID-6; the software RAIDs I mentioned (SnapRAID, unRAID, etc.) use one or more dedicated parity disks.
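For anyone unfamiliar with how these parity-based pools are laid out, here is a minimal SnapRAID-style configuration sketch. The mount points and disk names are made up for illustration, and unRAID and FlexRAID use their own setup tools rather than a file like this:

```
# snapraid.conf -- illustrative only; all paths are hypothetical
parity /mnt/parity1/snapraid.parity      # one dedicated parity disk
content /var/snapraid/snapraid.content   # metadata; keep copies on more than one disk
content /mnt/disk1/snapraid.content

data d1 /mnt/disk1/                      # data disks may be different sizes
data d2 /mnt/disk2/
data d3 /mnt/disk3/

exclude *.tmp
```

The main constraint in SnapRAID's model is that the parity disk must be at least as large as the largest data disk, which is why a mixed pool of 1, 1.5, and 3 TB drives works.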


Hard disks are guaranteed to break over time, always. It sure is a pain in the B* to replace them at odd hours. You can build anything you want, but as you move towards large-capacity drives, hot swap and either RAID 6 or ZFS RAID-Z2 or better are strongly recommended if you prefer not to lose data and have things to do other than restore from backups.

You do not state whether your media server will be on 24/7, which should affect motherboard selection...

My media server will be on 24/7. Another reason for building a new server is that my power company says my electricity usage is a lot higher than that of other houses in my area.


I am awaiting parts delivery to build my archival storage server. An archival storage server is typically powered off, so I am not overly concerned with power requirements. I settled on a low-end Supermicro socket 1155 motherboard and a Pentium G2020 with 4 GB of ECC RAM and a used IBM BR10i (8 SATA ports @ 150 MB/sec for $40). If the server were going to be powered up 24/7, I would have selected an IBM 5014 and waited for Briarwood, which may actually be your best solution if the quad-core Crystal DMA engine really provides economical hardware support for RAID 6 parity calculations. Depending on model, you get either 32 or 40 PCI-e lanes to play with, so there is room for both SATA ports and a higher-end NIC. Selected pre-release information was leaked here: http://www.cpu-world.com/news_2012/2012111401_Specifications_of_Briarwood_Atom_processors.html. The upside is it will likely give you an opportunity to learn how to set up a headless server.

I'll look into that IBM BR10i card as I hadn't heard of it before. $40 is certainly cheaper than the $80-100 for the IBM M1015 card. I'll have to see if the software RAIDs support it. And since those software RAIDs will be using the CPU to perform parity calculations, I wonder how cheap and low-power a CPU I can get away with.

Thanks for all your help.
 

Goros

Member
Dec 16, 2008
I did it because mine operates as a Plex transcoder for multiple devices and an HTPC running MPC-HC with madVR, SVP, and ReClock at the same time. 40 PCIe lanes are also great for being able to add more HBAs without sacrificing anything.

And, screw hardware RAID. Software RAID for media servers is where it's at.
 

dealcorn

Senior member
May 28, 2011
I live in an archipelago with expensive electricity. For planning purposes, I estimate that every watt burned in 24/7 mode has a net-present-value cost of $30 over a 10-year use period. That is why I will not use an IBM BR10i in 24/7 mode. I believe it burns a couple more watts than an IBM 5014, so after factoring in the cost of powering the card, the $100 IBM 5014 costs about the same as the $40 IBM BR10i. However, the IBM 5014 will deliver 150 MB/sec per drive using 4 PCI-e lanes; the same 4 PCI-e lanes will only get you 75 MB/sec with the BR10i. That is why I limit the IBM BR10i to use in an archival server rather than 24/7 duty.
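A rough check of that $30-per-watt rule of thumb, ignoring discounting (the electricity rate below is an assumed figure for an expensive market, not an actual rate):

```python
# Ten-year cost of one watt drawn 24/7, with an assumed electricity price.
hours_per_year = 24 * 365            # 8760 h
rate_per_kwh = 0.34                  # assumed $/kWh for an expensive market
yearly = hours_per_year / 1000 * rate_per_kwh
print(f"${yearly:.2f} per year, about ${yearly * 10:.0f} over 10 years")  # ~$3/yr, ~$30
```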

I do not use and am not familiar with unRAID, but my impression is that it provides parity (redundancy) across disks of dissimilar sizes. If so, it is a great choice for your old SATA drives. If there is an option to specify dual redundancy, I say go for it.

The bulk of your savings will come from replacing the old PATA array with a more contemporary, large-capacity SATA array. I selected 3 TB WD Reds, which were recently promoted at Newegg for $135 each. When a disk in the unRAID array dies, replace the expired capacity on the new array. Depending on the relationship between drive deaths and the capacity expansion you need because you have more media, you may require fewer SATA ports than you think. When a 1 or 1.5 TB disk dies, you replace it with a more economical 3 TB disk as needed.

As you value array expansion (adding a drive to an existing array), do not consider ZFS: it does not support that kind of expansion. Do hang out on Wikipedia or a reasonable SOHO site (e.g. ServeTheHome) long enough to get an understanding of why it is important to go with RAID 6 over RAID 5 as you move to larger-capacity drives. I may be wrong, but on the narrow question of who is most responsible for the widespread recognition that software RAID is preferred over hardware RAID, I have to give the nod to Patrick Kennedy at ServeTheHome. The site is misnamed in that it should be ServeTheSoho, but I think of it as the place where former System V coneheads hang out. For some reason, knowledgeable folks with skin in the game are better sources of perspective than fanboys.

That's the place to go for guidance on flashing all manner of SAS/SATA cards. Yes, the IBM BR10i is widely used for software RAID, but I did not bother to check whether it works well as a boot device.
 

Goros

Member
Dec 16, 2008
FlexRAID allows for future disk expansion and lets multiple different disk capacities be installed in the same pool.

It also offers expandable redundancy (RAID 6/RAID 60).