
A Cheap Controller of Interest, and a "Review Analysis"

I try to consolidate my purchases, for various and wise reasons.

I am still looking at some HDDs >= 2TB, balking at the WD Reds and the Seagate NAS offerings. Sooner or later, I'll make a choice there.

Meanwhile, I have an idea to further upgrade my server and three workstations in the house which are all LGA-775 C2D "Wolfdale" cores. The users are still quite happy. But, by my own assessment, they either have too much "slower" storage, or just enough "fast" storage without leveraging any slow capacity storage.

For such an old chipset and platform, we don't really want to spend a lot of money, and the money we spend should offer recyclable "extras" when systems are replaced later with skt-1150 or even 1155.

Everybody remembers how satisfied I've been with ISRT, leveraging a 60GB SSD and a 0.5TB HDD even while skeptics and critics abounded. Back in 2011, when I chose to use ISRT, there were rumors about firmware hacks for Marvell controllers, and mention that Marvell had implemented its own version of acceleration caching. Since I had an onboard Marvell SATA controller in addition to the Intel one, I tried to find a hack or firmware upgrade that would work with it, but I don't think I had any luck. If I had, I'd be using it.

Mom's LGA-775 system is using more than 2/3 of her Elm Crest SSD boot drive. Bro's 775 system has RAID0 with two of my favorite SATA-II WD Black drives, but he doesn't need the TB of space. So for maybe $80+ each, plus the purchase of another small-capacity SSD, I can give both systems speed, capacity, and what appears to be bootable reliability.

Note the customer reviews on this item link at the Egg:

http://www.newegg.com/Product/Produc...82E16815158365

First, despite conflicting reports from a sample so small it barely qualifies for a Student's t-test, it is obvious that the ISRT-like configuration is viable for a Win 7 or Win 8 boot drive. Second, the controller has multiple applications, allowing even for limited RAID setups. It doesn't have a gigabyte of cache, or even half that, with its own processor, but it's priced at only $80. So I can see adding it to my WHS rig, perhaps to create a second drive pool or add to the existing one, or even to allow me to pursue some clever backup scheme without contributing much to power consumption.

I started looking for a serious hardware controller the other day, perhaps propelled by another member's post here about an Adaptec card. But you're going to spend over $300 for such a controller with RAID5 and 6, a cache, and an onboard processor.

Overall, my best guesses include these. Marvell's "HyperDuo" would allow for SSD caching of an HDD RAID0, or maybe any of the RAID variations the card allows. And they would've taken special pains to make sure HyperDuo worked with Intel SSDs: after all, it's their response to a technology Intel worked on for at least a few years.

The only uncertainty I can think of did not come up in any of the four reviews. Best put in these terms: "Is HyperDuo as reliable and useful as Smart Response?"

Eager for comments. $80 a card seems neither a large investment nor much of a financial or technical risk, though. I'll go on record to say I had been looking to expand "Smart Response"-style improvements around the house without replacing any motherboards or processors soon. I looked into giving the three LGA-775 workstations a 240GB Chronos drive. But since I've got plenty of spare HDDs, I don't need the 240GB size, and even half that would be worth the money over really small SSDs, since HyperDuo will use the full drive size as a cache! [Was I wrong about that? I think I read it correctly . . . ]
 
If it was me, I'd get an IBM M1015 off eBay for about 100 bucks.

For my server box, maybe. But the unit I described offers the Marvell "HyperDuo" feature. For the other dated LGA-775 systems, I can leverage small SSDs to get "acceleration," and the throughput would not be limited to the motherboard's old SATA-II ports. Otherwise, I'd spend double or triple on the SSD components and I'd still need another controller card just to squeeze SATA-III performance out of those disks.

The server is mostly going to get SATA-II speed for disks connected to a SATA-III controller (like the StarTech or any other), even though the disks are "SATA-III." The only other speed enhancement would come from a RAID configuration off that controller. For this server and the number of household connections it has, drive pooling seems to be just as good an option for all. Of course, I might have better options with a controller like the IBM, but I see "new" prices for that unit that are double any eBay price. I guess the question is whether it makes sense to get a used controller that offers those features but which probably took a beating in some enterprise server farm.
 
Interesting little card. Only sticking point for me is that it requires an x2 slot, and on my Q9300 rigs, with P35-DS3R mobos, I only have x1 slots, in addition to the primary x16 slot, which is occupied with a GPU (especially since that chipset doesn't have onboard video).

So I would have no way to utilize that card.

Do they make one with two SATA6G ports that runs off a PCI-E 2.0 x1 slot instead, and still supports the "HyperDuo" feature? Perhaps for half the price, since it would have half the ports?

Edit: I found this:
http://www.addonics.com/products/ad2sahssd.php
Some more info about the technology at that link. Apparently, it works at the filesystem level, not the block level like SRT does. It scans the filesystem (probably by last access time) for "hot files" and copies them to the SSD.

http://www.cnet.com/news/hands-on-with-the-marvell-hyperduo-hybrid-storage-controller/

https://origin-www.marvell.com/storage/system-solutions/assets/Marvell-HyperDuo-Product-Brief.pdf
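To make the file-level approach concrete, here's a minimal sketch of what "scan for hot files and copy them to the SSD" could look like. This is purely illustrative (the ranking by access time and the top-N cutoff are assumptions); the real controller does this in firmware, not in a script:

```python
import os
import shutil

def find_hot_files(root, limit=10):
    """Rank files under `root` by last-access time (newest first),
    a stand-in for the controller's 'hot file' detection."""
    files = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                atime = os.stat(path).st_atime
            except OSError:
                continue  # skip files that vanish mid-scan
            files.append((atime, path))
    files.sort(reverse=True)
    return [path for _atime, path in files[:limit]]

def promote_to_ssd(hot_files, ssd_mount):
    """Copy hot files onto the faster tier, keeping the originals
    on the HDD -- analogous to a 'safe' (read-cache) mode."""
    for path in hot_files:
        dest = os.path.join(ssd_mount, os.path.basename(path))
        shutil.copy2(path, dest)
```

That also hints at why file-level tiering behaves differently from block-level SRT: a huge file is promoted whole or not at all, whereas a block cache can keep just the hot pieces of it.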
 

Thanks for the links.

Go figure. Intel must have an army of patent attorneys. How would it boil down to something that almost seems -- well -- inane?

Figure you can't do that with an Adaptec or LSI card. I had long thought that maybe Intel would produce such a PCI-E add-in card. Frankly, I might have been looking for Marvell's "ISRT"-like solution, but that wasn't how it turned up. I was looking for less expensive SATA-III and RAID solutions that might hold promise for avoiding any obstacles to 3TB x [n] GPT partitioning.

Also, I found an article comparing the performance of RAID5 and RAID 10 (1+0), contrasted with 0+1. 1+0 wins out, and its safety and reliability are a lot better than 0+1's. It may even offer the best of both worlds, balancing granular data access against large-file access through the RAID "chunk" -- I think that's the stripe size.
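The reliability gap between 1+0 and 0+1 can actually be checked by brute force for four disks. This sketch (the particular pair/stripe assignments are assumed) enumerates every possible two-disk failure:

```python
from itertools import combinations

DISKS = range(4)
# RAID 1+0: two mirror pairs (0,1) and (2,3), striped together.
MIRROR_PAIRS = [(0, 1), (2, 3)]
# RAID 0+1: two stripes (0,1) and (2,3), mirrored against each other.
STRIPES = [(0, 1), (2, 3)]

def raid10_survives(failed):
    # Survives as long as no mirror pair loses BOTH members.
    return all(not set(pair) <= failed for pair in MIRROR_PAIRS)

def raid01_survives(failed):
    # Survives as long as at least one whole stripe is intact.
    return any(not (set(stripe) & failed) for stripe in STRIPES)

def survival_rate(survives):
    """Fraction of two-disk failures the layout survives."""
    pairs = list(combinations(DISKS, 2))
    return sum(survives(set(p)) for p in pairs) / len(pairs)
```

Running it shows 1+0 survives 4 of the 6 possible two-disk failures, while 0+1 survives only 2 of 6 -- one failed disk in 0+1 takes its entire stripe down, so almost any second failure is fatal.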

Someone might even find a reason to leverage Marvell HyperDuo. Now I have to think twice about whether it would be a happy sort of fix for those other Wolfdale systems in our house. Even so, it would make sense to get larger SSDs as standalone boot units on the StarTech SATA-III cards. A lot better than this . . . curse of a WHS box I'm fiddling with on old NVidia SATA-II controllers.

Let me know if you find a PCI-E Intel card that offers ISRT, though.

TOOK A BETTER LOOK: Those LGA-775 systems' days are numbered. They may die more slowly than my server, though. That old motherboard is PCI-E 1.x, which will be no faster than the SATA-II ports in the onboard chipset. And that holds true for all the rest of those systems.

I'm going to start looking for another processor and mobo that uses my spare G.SKILL modules. Maybe not an ITX, but maybe an mATX. I'll have to look into this. It might not even need a Z77 chipset. Then again . . .
 
Also, I found an article comparing the performance of RAID5 and RAID 10 (1+0), contrasted with 0+1. 1+0 wins out, and its safety and reliability are a lot better than 0+1's.

Rebuild times are WAAAAAY better as well vs. parity-based RAID. Faster, less disk/CPU intensive, and as a result, less likely to run into a URE.
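The URE point can be quantified with a back-of-the-envelope estimate. The 1-error-per-1e14-bits rate below is the commonly quoted consumer-drive spec, and the drive sizes are just for illustration:

```python
def p_ure(bytes_read, ure_rate_per_bit=1e-14):
    """Chance of hitting at least one unrecoverable read error while
    reading `bytes_read` bytes, assuming independent bit errors at
    the commonly quoted consumer rate of 1 per 1e14 bits."""
    bits = bytes_read * 8
    return 1 - (1 - ure_rate_per_bit) ** bits

TB = 10 ** 12
# RAID5 rebuild on 3x4TB: must read BOTH surviving disks end to end (8 TB).
raid5_risk = p_ure(8 * TB)
# RAID10/mirror rebuild: only the surviving partner is read (4 TB).
mirror_risk = p_ure(4 * TB)
```

Under these assumptions the parity rebuild has to read twice as much data, so it carries a substantially higher chance of tripping a URE mid-rebuild -- on top of being slower and harder on the CPU.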
 
Rebuild times are WAAAAAY better as well vs. parity-based RAID. Faster, less disk/CPU intensive, and as a result, less likely to run into a URE.

I'm still trying to sort it out, though. Wasn't there a way of having RAID 1+0 (RAID10) using only two disks? If not, what was the configuration I'm vaguely remembering described somewhere?

There's a simple fact of probability with drive failure or longevity: the more disks, the more likely a failure becomes. On the other hand, there's a minimum number of disks you need for redundancy, and "better" RAID configurations call for more disks. I'd rather have four than three in RAID5. On top of that, I want to minimize 24/7 power consumption, which a 4TB pool or array built from 1TB disks makes worse than two 4TB drives would.

I'm even thinking of "no RAID": two 4TB drives in a drive pool, with duplication for certain folders. Of course, if both drives fail at once -- "SOL."
 
I'm still trying to sort it out, though. Wasn't there a way of having RAID 1+0 (RAID10) using only two disks? If not, what was the configuration I'm vaguely remembering described somewhere?

I don't think so. You can have RAID1 or RAID0 of two devices. You can mirror them. You can stripe across them. You can't mirror AND stripe without 4 devices. I don't think you can typically do anything fancy with two disks other than mirror them or stripe across them, but it's possible I'm not remembering something.

OK, well, with two disks you could partition each one into devApart1, devApart2, devBpart1, devBpart2, then combine the devA/devB part1's into a RAID0, put the devA/devB part2's into another RAID0, and have the two RAID0's mirror each other. But that is really, really, really silly...

I'd rather have four than three in RAID5.

Why? RAID5 provides exactly 1 disk of redundancy no matter how many disks you throw at it. Every additional disk beyond the 3rd just increases the likelihood of simultaneous failures, or of a URE during resilvering for a rebuild.
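The "every additional disk increases the risk" point falls out of a simple binomial estimate. The 5% annual per-disk failure rate here is purely an assumption for illustration, and this crude model ignores rebuild windows:

```python
from math import comb

def p_at_least_k_failures(n, k, f):
    """P(at least k of n disks fail in a year), assuming independent
    failures at annual per-disk rate f."""
    return sum(comb(n, i) * f**i * (1 - f)**(n - i)
               for i in range(k, n + 1))

f = 0.05  # assumed 5% annual failure rate per disk
# Any RAID5 set tolerates exactly one failure, so (crudely) the
# array is lost once 2 or more disks fail in the same period.
loss_3disk = p_at_least_k_failures(3, 2, f)
loss_4disk = p_at_least_k_failures(4, 2, f)
```

With these numbers the 4-disk set is roughly twice as likely as the 3-disk set to suffer the two concurrent failures that kill a RAID5 -- the extra disk buys capacity and some speed, never extra redundancy.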
 
Why? RAID5 provides exactly 1 disk of redundancy no matter how many disks you throw at it. Every additional disk beyond the 3rd just increases the likelihood of simultaneous failures, or of a URE during resilvering for a rebuild.

Well, that was a "statement of whim." I think you get a little boost of speed with the fourth drive. I know I refreshed my understanding of it in 2007 when I built that rig. But I could also be wrong. Even so, I'm sure my choice of four instead of three was tutored by something to that effect that I'd read.

Hard to believe that was six years ago. Seems like almost yesterday.
 
I think you get a little boost of speed with the fourth drive.

Ah, well, I suppose that's certainly possible in reads at least; I'm not sure offhand how (or if) writes would be affected.
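For what it's worth, a back-of-the-envelope model suggests why the fourth drive helps reads. The 150 MB/s per-disk figure is purely an assumption, and this ignores controller overhead entirely:

```python
def raid5_sequential(n_disks, disk_mb_s=150):
    """Rough sequential-throughput model for an n-disk RAID5,
    assuming (hypothetically) 150 MB/s per disk and no controller
    bottleneck."""
    return {
        # Sequential reads stripe across every disk in the set.
        "read_mb_s": n_disks * disk_mb_s,
        # Full-stripe writes lose one disk's worth to parity.
        "full_stripe_write_mb_s": (n_disks - 1) * disk_mb_s,
    }

three = raid5_sequential(3)
four = raid5_sequential(4)
```

So under these assumptions the fourth disk adds one disk's worth of sequential read and write throughput; small random writes are a different story, since RAID5's read-modify-write turns each one into roughly four disk operations regardless of array size.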

I don't trust RAID 5 further than I can throw a back-up drive 😛
 