
Where are all the good SATA cards?

Something that plugs into a PCI-e 16x slot and takes like 32 drives and presents them to the system as if they were plugged in the motherboard would be pretty awesome. I don't know why that is not already something easy to buy, given the availability of 24+ bay chassis. It could either take some SAS connectors, or individual sata connectors.

ASRock has motherboards with up to 12 SATA ports. Pair one with an Intel RES2SV240 expander card https://m.newegg.com/products/9SIA24G28M7361 and a RAID or non-RAID controller, and you can have over 32 ports, I guess.

No personal experience though.
 
LSI sells 16-port HBAs. I have one in a Xeon E3 server and it's great. Since it's a pure HBA, no cross-flashing required. They are spendy, though, but if you are really limited on PCIe slots, that might be the way to go.

https://www.newegg.com/Product/Product.aspx?Item=N82E16816118142

Wow, that's exactly what I was looking for when building my server; I could not find anything like it. The best I could find were 2-port ones in the $900 range. That card would do 16 drives, so with two you could do 32, which would be enough for most high-density chassis.
 
That's probably the best way to do it for 32 drives: either two of these 16-port cards, or an 8-port HBA + expander.
I don't know of any HBAs denser than 16 ports.
 
Just keep in mind, you're now splitting your HBA's bandwidth across 4x the drive count. If you're running SSDs, you may notice the drop in throughput.
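To put rough numbers on that (a sketch only; the figures are assumptions, not from this thread: a SAS2 x4 uplink at 6 Gbps per lane with 8b/10b encoding, roughly 600 MB/s usable per lane):

```python
# Rough oversubscription estimate for an HBA + expander setup.
# Assumed figures: SAS2 lane = 6 Gbps with 8b/10b encoding,
# ~600 MB/s usable; a 4-lane wide port between HBA and expander.

def per_drive_bandwidth_mb_s(lanes: int, lane_mb_s: float, drives: int) -> float:
    """Usable uplink bandwidth split evenly across all drives."""
    return lanes * lane_mb_s / drives

per_drive = per_drive_bandwidth_mb_s(lanes=4, lane_mb_s=600, drives=32)
print(f"~{per_drive:.0f} MB/s per drive")  # plenty for spinners, tight for SSDs
```

At 32 drives behind one x4 uplink, each drive gets about 75 MB/s if all are busy at once, which is why HDD arrays tolerate expanders much better than SSD arrays do.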
 
I just ran a quick AS-SSD bench with 12 7200 RPM drives in RAID 6 during a patrol read and got 730 MB/s read and 780 MB/s write.

9260-4i and the HP sas expander.
 
Isn't SAS like 3 Gbps or something? Most drives won't do more than about 300 MB/s, and that's what they're advertised at; they won't actually hit that.

That expander card does look pretty awesome... though is that something easy to get new, or do you need to go with eBay? Come to think of it, I'm pretty sure there was a version of the Supermicro 24-bay case that had one built in; to me that would be the best bet for that type of case. At near $2k for a case, you'd think they could add that in. 😛
 

SAS comes in up to 12 Gbps flavors. It needs it, since you can chain something like 168 drives off a single controller.

Older controllers (the kinds of controllers that people shopping for used server gear on eBay get) are slower, of course.
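For reference, a quick sketch of the SAS generations discussed here (the usable-throughput figures are approximations based on 8b/10b encoding, which these three generations all use):

```python
# SAS line rates by generation, with approximate usable throughput
# per lane. SAS-1 through SAS-3 use 8b/10b encoding (10 bits on the
# wire per data byte), so usable MB/s is roughly line rate / 10.

SAS_GENERATIONS = {"SAS-1": 3.0, "SAS-2": 6.0, "SAS-3": 12.0}  # Gbps

for name, gbps in SAS_GENERATIONS.items():
    usable_mb_s = gbps * 1000 / 10
    print(f"{name}: {gbps:g} Gbps line rate, ~{usable_mb_s:.0f} MB/s usable per lane")
```

So even a single 6 Gbps lane outruns any 7200 RPM drive, and a 4-lane wide port has headroom for dozens of them.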
 
I agree, although if you happen to get one of the hardware RAID PERCs, most of the time it has to be cross-flashed to LSI/Avago firmware because the firmware interface won't initialize in non-Dell motherboards. I have to use an old OptiPlex from a recycle pile for this 😀. The LSI firmware also performs better.
Huh, so they have DRM on the Dell PERC cards?
Well, that blows.
Can you still flash them on a non-Dell board, or won't it even allow that?
 
The problem is this.

http://yannickdekoeijer.blogspot.com/2012/04/modding-dell-perc-6-sas-raidcontroller.html

http://blog.asiantuntijakaveri.fi/2013/09/turning-dell-perc-h310-to-dumb-biosless.html

It's an SMBus issue with Intel chipsets that can be easily fixed. If you have an AMD system, you probably don't need this mod.

https://www.google.com/webhp?source...736&ion=1&espv=2&ie=UTF-8#q=DELL+PERC+SMBus&*

I used to have a PERC 5i and I did the mod; I don't own it anymore. I don't know if newer PERC controllers still have the same issue.
 
Seems that the IBM/Lenovo M1015 and the Dell PERC H310 are all clone cards that can be flashed/cross-flashed to IT mode, the same as the LSI 9211-8i.
Some more info on those https://forums.servethehome.com/index.php?threads/perc-h310-lsi-9211-8i-50.4540/

I have seen prices for these cards range from $45 - $260 (some used/refurbs and some new)


The ASMedia chipset that is having issues is the 106x. It just drops SSDs for no reason; I spent quite a few hours trying different drivers, and it made no difference at all. Heck, I even tried Linux, and the suckers still dropped. While the ASMedia seems to work OK for HDs, they suffer a speed penalty there as well.
Sticking the same SSD on the native chipset ports never has an issue.

What the guy is trying to do is hook up 6 SSDs and probably 2 spinners, in addition to the 4 he already has on the mobo's native ports.

The problem with the StarTech ones is that they are only PCIe x2, which isn't enough to have all ports run at full speed. Dunno what chipset they are using, but I am betting either ASMedia or Marvell.



The Supermicro one is another clone of the LSI card.
And yeah, that is what I need, non-RAID SATA controller.

* Now I see I'm answering you as the "respondent" mentioned below, rather than a subsequent "respondent." Just to clarify that confusion on my part...

And that's just it, exactly. What do you need? What performance do you want to guarantee? In the post answering my suggestion of the SuperMicro 8-port and StarTech 4-port controllers, the respondent noted that the StarTech is only an x2 card. But if x4 is sufficient to guarantee full bandwidth on an NVMe drive -- only one device -- what do you need in bandwidth for, say, a 4-port SATA controller? What sort of bandwidth limitation is there for PCIe 2.0 on such a card with that many lanes? If you plan on loading it up with SSDs in RAID, that's one thing. If you need to connect a backup disk and an ODD to your system, that's another.
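A back-of-the-envelope answer to the lane question (a sketch only; assumed figures: PCIe 2.0 runs 5 GT/s per lane with 8b/10b encoding, roughly 500 MB/s usable each way, and SATA III tops out near 550 MB/s per port):

```python
# How many SATA III ports a PCIe 2.0 link can actually feed at full
# rate. Assumed figures (approximate): 500 MB/s usable per PCIe 2.0
# lane, 550 MB/s max per SATA III port.

PCIE2_LANE_MB_S = 500
SATA3_PORT_MB_S = 550

def full_speed_ports(lanes: int) -> float:
    """SATA III ports a PCIe 2.0 link of this width can saturate at once."""
    return lanes * PCIE2_LANE_MB_S / SATA3_PORT_MB_S

for lanes in (2, 4, 8):
    print(f"x{lanes}: ~{full_speed_ports(lanes):.1f} SATA III ports at full speed")
```

In other words, an x2 card can feed roughly two SSDs flat out; for HDDs that only pull 150-200 MB/s each, even x2 covers four ports comfortably.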

And the little 4-port StarTech offers up RAID 0/1/10 -- the SuperMicro doesn't. Yet I'd use either for a drive pool configured under AHCI mode.

If you wanted to totally disable an onboard Intel controller, you could spend more on a PCIe SAS/SATA controller, but you'd have to assure yourself it would work in AHCI mode if that's all you wanted. Of course, I'd be surprised if a pricier hardware controller with a beefy cache didn't offer up a JBOD configuration.
 
I appreciate this thread. Years ago I built my first file server out of my old main rig. When I needed expansion (more than the few onboard SATA ports I had), I bought a couple of Rosewill SATA II PCI cards. When I built a from-scratch file server in 2013, I just found the cheapest board with a bunch of PCI slots and reused my Rosewill cards. I'm sure they use Marvell controllers. It was one of the last PCI boards I could find. When I upgrade later this year, I'll just get a server board with a lot of SATA ports built in.

Are all you guys running rackmount servers in your homes? I guess used hardware is cheap(er) on eBay.
 
Yes and yes. The electricity savings of running the latest and greatest don't come close to offsetting how cheaply you can get surplus mobos and processors.
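That trade-off is easy to sanity-check (a sketch with an assumed rate of $0.12/kWh and 24/7 operation; both the wattages and the rate are illustrative, not from this thread):

```python
# Annual electricity cost of running a box 24/7, to weigh against the
# price premium of newer, more efficient hardware. The $0.12/kWh rate
# is an assumption; it varies widely by region.

HOURS_PER_YEAR = 24 * 365

def annual_cost_usd(watts: float, usd_per_kwh: float = 0.12) -> float:
    """Yearly cost in dollars for a constant draw of `watts`."""
    return watts / 1000 * HOURS_PER_YEAR * usd_per_kwh

print(f"200 W surplus box: ${annual_cost_usd(200):.0f}/yr")
print(f"75 W efficient build: ${annual_cost_usd(75):.0f}/yr")
```

A difference of roughly $130/yr takes a long time to offset a few hundred dollars saved buying surplus gear.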
 
Some rackmount stuff I got used, but most of it I built. My 24 bay server cost close to 4 grand when all was said and done, but it was built with expandability in mind. As long as SATA remains a standard the server is going to have a useful life. It's also incredible how efficient the newer Xeon stuff is. I've actually managed to build servers that run below 100w. I clocked my file server in at about 75w when I first built it. I never retested with drives though so it might be a bit over 100w now.

My next upgrade probably won't be server related, but power related. I want to go with a dual conversion 48v setup. But I also want to do solar, so I may just save up and do both at same time. Looking at various costs of solar equipment I figure about 15-20k though so it will be a while before I can afford that. 😛
 