PCIe SATA card performance

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
I have been tasked to find out whether there are substantial savings to be gained from using a normal CPU/mobo/PSU/case loaded with as many HDs as possible (7-10, e.g. an Antec Titan/Atlas with 5.25" to 3.5" adaptors) versus using a real NAS box (the one in question is a Synology 7xx model with an expansion unit, for a total of 7 drive bays, at about $1,400 for the units only, no hard disks).

The limitation I ran into is the number of SATA ports on a mobo, which is usually just 4, 5, or 6. If the case is tall enough, that won't be enough to fill up all possible HD slots in the case. I believe this is easily remedied by using a PCIe SATA card (are there any other alternatives?).

Questions:
1.) Is there a performance penalty for filling up all the SATA ports on the motherboard?
2.) If there is, is it negligible, or at least the same as what you'd see on a multi-bay (5 and above) NAS box?
3.) Is there a performance penalty in using PCIe SATA cards? I'm assuming there isn't, since the existence of PCIe SSDs means the interface is probably just as fast, if not faster, than SATA II.

Thanks.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
The main performance issue would be if you were running a RAID array off a PCIe x1 1.0 card, with bandwidth of "only" 250MB/s.
Since a single mechanical drive tops out at ~100MB/s sustained, you'd need a 3- or 4-drive RAID array before the link becomes the bottleneck, performance-wise.
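
A back-of-the-envelope version of that in Python (the link speed and per-drive figures are rough assumptions, not measurements):

```python
# Rough sketch: how many mechanical drives it takes to saturate a host link.
# Both figures below are ballpark assumptions from the discussion above.

PCIE_X1_GEN1_MBPS = 250   # usable bandwidth of a PCIe 1.0 x1 link, MB/s (approx.)
DRIVE_MBPS = 100          # sustained transfer of a typical mechanical drive

drives_to_saturate = PCIE_X1_GEN1_MBPS / DRIVE_MBPS
print(f"drives needed to saturate the link: {drives_to_saturate:.1f}")
# -> 2.5, so a 3- or 4-drive array is the first point where the x1 link,
#    not the drives, becomes the bottleneck.
```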

If you get a decent card, and you are using RAID (presumably you would want to use RAID), then the controller on the card will likely be better than what you'd get built into a consumer-level motherboard, so you might get better performance (no idea how it would compare to a NAS box though).


You might want to check out http://www.storagereview.com/
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
Thanks.

Let me see if I understood what you said:

1.) No performance penalty for filling up all the SATA ports on a mobo. So for a mobo with 6 SATA ports, even if I plugged in six drives, they could all top out at their respective read/write speeds at the same time.

2.) No performance penalty, so Q#2 is N/A.

3.) As long as the card (and slot) is better than PCIe x1 1.0, there should be no problem, and this is easy to verify by checking the bandwidth. I suppose the 4-port cards are faster than the 2-port cards; anyway, this is easily checked.

Did I get it right?

By the way, I'm not yet sure about RAID. This may be software RAID (Linux MD RAID), or just separate volumes, or maybe even separate RAID volumes. The evaluation of "to RAID (and which type of RAID) or not to RAID" comes after this one, since it applies both to a generic PC case and to a real NAS unit, so it's not a real concern for now. Right now the focus is just on overcoming the SATA port limitation of a motherboard, and making sure the possible solutions don't introduce a bandwidth bottleneck that otherwise wouldn't be there (hence the questions about filling up all SATA ports and the performance limitations, if any, of PCIe SATA cards).

Thanks.
 

notposting

Diamond Member
Jul 22, 2005
3,498
33
91
Somewhat off topic, but check out the Cooler Master Centurion 590. Pretty cheap and has 9 5.25" bays. Big and well ventilated too. I'm using it as my home server... which I need to expand, but I just filled up my 6th (and last) SATA port :p
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
Just an update: it seems there actually are SATA port multipliers, so you can split each SATA port on your mobo across multiple hard drives, up to 3 without losing speed (SATA II = 300MB/s, and each HD tops out at around 100MB/s, assuming all of them are reading/writing at the same time; with SATA III, even more). Given that availability is not a problem and the price is simply the price of a regular cable (not as expensive as PCIe SATA cards that run from $40-$70), even mobos with 4 SATA ports could effectively service 8-10 HDs easily, 12 at most.

@notposting: Thanks for the case recommendation. Can those 5.25" bays also house 3.5" HDs? Or does the case come with 5.25"-to-3.5" adaptors?
 

NP Complete

Member
Jul 16, 2010
57
0
0
1) No performance penalty, but your total bandwidth is limited by the chipset's connection to the northbridge/CPU. I believe for C2D systems that bandwidth is 1GB/s in each direction (shared with all southbridge devices). Not sure if the bandwidth was increased on 1156/1366 systems, and I have no experience with AMD-based systems (though it should be relatively easy to look up the chipset-to-CPU bandwidth).

2) Depends on how drives are attached to the CPU.

3) Same as #1 - on most motherboards all external PCIe devices are connected to the southbridge/chipset, and therefore share the bandwidth with all USB/Sound/Hard drives.

Additionally, with a port multiplier, all drives behind it share the SATA connection's bandwidth, which may be an issue with faster drives on SATA I or SATA II connections (1.5 Gb/s or 3 Gb/s). There's also a very small overhead for using a port multiplier (negligible for the most part): all drives behind it share the same set of control registers, which means the driver needs to perform extra steps to determine which drive is responding on the port multiplier.

In short, unless you plan on using lots of SSDs (more than 3), or try filling up a port with SSDs (through a port multiplier), all the drives should be able to function at full speed.
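
A quick sanity check of that shared-uplink point, sketched in Python (the 1GB/s figure is the C2D-era estimate mentioned above, and the per-drive number is a rough assumption):

```python
# Sketch: will N drives at full speed exceed the shared chipset-to-CPU uplink?
# Both numbers are rough assumptions, not measured values.

DMI_MBPS = 1000     # chipset-to-CPU link, ~1 GB/s each way (C2D-era estimate)
DRIVE_MBPS = 100    # per-drive sustained throughput (mechanical)

def uplink_headroom(num_drives, per_drive=DRIVE_MBPS, uplink=DMI_MBPS):
    """Leftover uplink bandwidth if all drives stream simultaneously."""
    return uplink - num_drives * per_drive

for n in (6, 10):
    print(f"{n} drives: {uplink_headroom(n)} MB/s of headroom left")
# 6 drives leave 400 MB/s spare; 10 drives consume the whole link,
# before counting the USB/sound/network traffic that shares it.
```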
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
In short, unless you plan on using lots of SSDs (more than 3), or try filling up a port with SSDs (through a port multiplier), all the drives should be able to function at full speed.
Thanks a lot for taking the time to post and all the explanations. As for the quoted part, this was what I was thinking as well. Thanks, I'm glad that got validated.

Good luck getting SATA port multipliers working with Intel chipset SATA ports and drivers.
I was not aware there were problems here (not surprising, since I just heard of the things a few hours ago). What sort of WTF moments/experiences have you had with them?
 

NP Complete

Member
Jul 16, 2010
57
0
0
If you're not using a port multiplier that the manufacturer built and validated against the Intel chipset, I'd recommend staying away from them as well. Let's just say that my dealings with some individuals working on port multiplier drivers (in regards to Intel chipsets) led me to believe that, at least in the Arrandale generation and earlier, the drivers weren't to be trusted too much.

Things may have changed since I last heard, and newer drivers may be better.
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
Hmm... I guess PCIe SATA cards would be the better option. Any 4-port card with 400-500MB/s of bandwidth would be perfect, if I could find one (what would that need, PCIe 2.0?). When I did some browsing I think they were close to $100 each; not sure what kind of PCIe slot they used, hopefully not 1.0.
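
For reference, the rough per-lane numbers (a sketch; real-world throughput is somewhat lower after protocol overhead):

```python
# Approximate usable bandwidth per PCIe lane, in MB/s.
PCIE_LANE_MBPS = {"1.0": 250, "2.0": 500}

def slot_bandwidth(gen, lanes):
    """Approximate total bandwidth for a slot of the given generation and width."""
    return PCIE_LANE_MBPS[gen] * lanes

# A 4-port card pushing 400-500MB/s total needs a PCIe 2.0 x1 slot,
# or a 1.0 slot that is x2 or wider.
print(slot_bandwidth("1.0", 1))  # 250 -- not enough
print(slot_bandwidth("2.0", 1))  # 500 -- enough
```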
 

notposting

Diamond Member
Jul 22, 2005
3,498
33
91
The case does come with one 4-in-3 adapter to start (fan included, of course), but it's not an externally accessible setup, or hot-swap, or anything like that. I'm pondering getting myself some nicer ones with those features... but do I really need them? ;)

Actually, I might look at the 5-in-3 for that matter. And I have a DVD drive as well, so I'll be looking for a 2-bay adapter to go with the rest (I'm ripping my entire DVD and CD collections to WHS).

If you want to feel inadequate, check out http://www.servethehome.com/ (he's posted on here). Pretty impressive (look at the 30-drive WHS). I point it out to my wife when she thinks I'm being excessive.
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
Thanks, that link is great. I was doing this for work, but after reading that blog you linked to, especially the 30-drive WHS posts, I started wondering about trying something similar at home, but far less ambitious.
 

notposting

Diamond Member
Jul 22, 2005
3,498
33
91
If you have the time and inclination to fiddle with crap before getting into production (especially at home), I would suggest trying out virtualization. I wish I had done it with our WHS, but as the beast gets bigger and bigger, and the wife relies on it more and more, the odds of me trying it out get less and less ;)

Sounds like you can get it working pretty well; USB drives are the one sore spot, if I remember right. I've never bothered with them... the only thing I would use them for is to back up the backups.
 

alaricljs

Golden Member
May 11, 2005
1,221
1
76
Here's some more info to make choosing port multipliers harder: http://www.serialata.org/technology/port_multipliers.asp. And the bit about choosing a port multiplier based on which SATA controller you're connecting it to is true. There's a standard, but I regularly read about people not being able to mix and match.

If you're looking at lots of SATA ports, I recommend the Supermicro AOC-SASLP-MV8. It's a PCIe x4, 8-port SAS/SATA card. It works in Linux if you decide to go that route now or in the future. I have the previous version, which is a PCI-X 8-port SATA card, and it's been rock solid for years now.
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
I've done the math, and so far, what I've come up with is this:

DIY NAS vs. real NAS (Synology DS7xx series + 5-bay expansion unit)

DIY NAS: cost advantage
-Units only (diskless), with an equal number of drive bays available: ultimately, only 1/3 of the cost of the "real" NAS.
-In both cases, the 2TB disks are the more expensive part. In a 5-bay case, the disks would amount to 3x the price of the DIY NAS, and almost equal the price of the NAS expansion unit.
-A more cost-effective approach would be 1.5TB disks. They are by far the sweet spot, at least at the WD and Seagate prices available to me.

Real NAS: feature advantage
-Small and compact form factor: it houses the disks in essentially less than 1/4 the space of a DIY NAS based on consumer PC parts.
-NAS intelligence (if you need it to be more than just dumb storage): in this case, Synology's DiskStation Manager software, a very powerful, mature, AJAX-powered web interface.
-Better RAID than probably all on-board RAID controllers on consumer-level desktops.

This doesn't take PCIe SATA cards into account yet, which may further decrease the cost per drive bay for a DIY NAS, because I haven't been able to source any of them locally, and local suppliers are all I'm stuck with. If good PCIe SATA cards reach $200, though, all they may be good for is cramming more disks into a case while keeping the cost per drive bay the same (not necessarily a bad thing).
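
The shape of that cost-per-bay math, sketched in Python (every dollar figure here is a hypothetical placeholder, not one of the actual quotes I got):

```python
# Sketch of the cost-per-bay comparison. All prices are hypothetical
# placeholders for illustration, not real quotes.

def cost_per_bay(diskless_cost, usable_bays):
    """Diskless hardware cost divided by the drive bays you can actually use."""
    return diskless_cost / usable_bays

real_nas  = cost_per_bay(1400, 7)       # NAS unit + expansion, 7 bays
diy_plain = cost_per_bay(450, 6)        # PC build, limited by 6 onboard ports
diy_card  = cost_per_bay(450 + 100, 9)  # PCIe SATA card unlocks all 9 bays

print(f"real NAS ${real_nas:.0f}/bay, DIY ${diy_plain:.0f}/bay, "
      f"DIY + card ${diy_card:.0f}/bay")
```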

The comparison I made is mostly for work. For home, I'm definitely fiddling with a DIY NAS. It gives me an excuse to buy that Antec 300 case I've been eyeing and fill it up with 1.5TB disks. I also like the idea of having two 120mm fans blowing on them, which makes the case even more attractive.

If you're looking at lots of SATA ports, I recommend the Supermicro AOC-SASLP-MV8. It's a PCIe x4 8 port SAS/SATA card. It works in Linux if you decide to go that route now or in the future. I have the previous version which is a PCI-X 8 port SATA and it's been rock solid for years now.
We love both Supermicro and Linux at the office. Thanks for the recommendation; I suppose this won't be hard to source through our Supermicro supplier.

If you have time and inclination to fiddle with crap before getting into production (especially the home) I would suggest trying out virtualization. I wish I had done it with our WHS but as the beast gets bigger and bigger and the wife relies on it more and more the odds of me trying it out get less and less
Copy. Will definitely fiddle around with it before I declare it "production-ready" at home. We have a plan to connect the NAS "intelligence" and interface to the ERP to mine data better (and give context to the files/data in the NAS), and it would be nice to have a prototype at home.
 

alaricljs

Golden Member
May 11, 2005
1,221
1
76
My home "NAS" is based on an older Supermicro server board with PCI-X and multiple 4- and 8-port controllers. With 6GB of RAM and a Q6600, I put Xen on it and run 3 VMs consistently. One VM is a production mail/web server for 5 domains. One VM is my file server, which at its largest had 10 spindles (500GB). Currently it's spinning 4x 2TB spindles. The last VM runs XP and is used for downloading large stuff, since the system is on 24x7 already.

The suggested SAS/SATA card is ~$105 from buy.com, but it requires cables with SFF-8087 connectors. If you went with a Norco 4220 case (or something with similar hot-swap hardware), it's 1 cable per 4 ports. Otherwise, you get a breakout cable that turns the 1 connection into 4 SATA connectors. The cost of the 4220 is not to be beat if you are looking at doing hot-swap bays. I don't have one, though; I started this whole thing before I ever heard of Norco. I now have a 4-in-1 2.5" hot-swap bay and 2x 5-in-3 3.5" hot-swap bays inside a super cheap Xion Stacker ($30 shipped when I got it).

I just upgraded the OS drive to a Vertex 2 E 60GB and it made a huge difference in the mail/web VM.

Feel free to PM me with questions.