
What's the difference between $40 SATA 3 RAID cards and the $250 ones?

alanwest09872

Golden Member
What's the difference between $40 SATA 3 RAID cards and the $250 ones?

I know one has on-board memory, but what does that really do for me?
 
The more expensive ones tend to have a hardware processor and RAM on-board that perform the RAID calculations. The cheaper ones typically use your CPU.
 
3 major differences:

1. A dedicated processor. The cheap cards don't have a CPU to handle the RAID work; the expensive ones do. There is a certain amount of overhead in RAID: additional calculations to perform (for RAID 5 and 6), and additional data that needs to be sent to the drives (e.g. with RAID 1, if the OS saves a 100 MB file, 200 MB actually has to be sent to the drives).

The on-board RAID processor can perform the calculations (saving host CPU time). In addition, it is directly connected to the ports, so there is no bottleneck transmitting the data from processor to drives.

Both of these were a problem with slow CPUs and slow buses (e.g. PCI, with a 133 MB/s maximum). If you used the host CPU for RAID 1 and saved a 133 MB file, 266 MB would have to be sent over the PCI bus, which would take 2 seconds. With an expensive RAID card, the PC would send 133 MB (taking 1 second), and the on-board processor would duplicate the data and send it to the drives without a bottleneck.

Similarly, with slow Pentium CPUs, the RAID calculations (particularly for RAID 6) could eat 30-40% of a very expensive CPU. With a dedicated RAID accelerator processor, the PC's CPU could be kept available for useful apps.

With modern PCs (especially Core i5/i7 CPUs), the CPU can perform the RAID calcs very quickly, and modern PCI-E v2 x8 and x16 slots are so fast that there is no bottleneck there. In fact, modern CPUs are so fast that they outrun even top-end RAID accelerator processors.
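The bus arithmetic above can be sketched as a quick back-of-the-envelope calculation. This is only an illustration using the figures from the post (133 MB/s for classic PCI, and an assumed ballpark of ~4000 MB/s for a PCIe 2.0 x8 slot), not a benchmark:

```python
# Back-of-the-envelope: time to mirror (RAID 1) a file over a given bus,
# comparing host-based RAID (data crosses the bus twice) with a card that
# duplicates the data on-board (data crosses the bus once).
# Figures are illustrative, not measured.

def mirror_time_seconds(file_mb, bus_mbps, on_card_duplication):
    """Seconds of bus time to write a RAID 1 mirror of file_mb."""
    bus_traffic = file_mb if on_card_duplication else 2 * file_mb
    return bus_traffic / bus_mbps

# Classic 32-bit PCI: 133 MB/s shared bus
print(mirror_time_seconds(133, 133, on_card_duplication=False))  # 2.0 s
print(mirror_time_seconds(133, 133, on_card_duplication=True))   # 1.0 s

# Modern PCIe 2.0 x8 (~4000 MB/s assumed): the bus bottleneck vanishes
print(mirror_time_seconds(133, 4000, on_card_duplication=False))
```

The point is simply that doubling the bus traffic only hurts when the bus itself is the limiting factor.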

2. Better software. The cheap cards use drivers to perform the RAID processing, and these drivers are often low quality and poorly optimised. Some RAID levels, e.g. 5 and 6, require very careful optimisation, otherwise they will be horribly slow (but when correctly optimised are almost RAID 0 speed). The low-quality drivers for a cheap SATA RAID card can give STR of 15 MB/s on a 3-drive RAID 5, whereas the same drives with good software RAID or a high-end card might get 200 MB/s.
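The calculation those drivers must optimise is, at its core, just a byte-wise XOR across the data blocks. A minimal sketch of RAID 5 parity and rebuild (pure Python, so vastly slower than any real driver, but it shows the principle):

```python
# Minimal RAID 5 parity sketch: the parity block is the XOR of the data
# blocks. If one drive dies, XORing all the survivors (parity included)
# regenerates the lost block.

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks on three data drives
parity = xor_blocks(data)            # block stored on the parity drive

# Drive holding "BBBB" dies: rebuild it from the survivors plus parity
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == b"BBBB"
```

A tuned driver does the same XOR with wide SIMD instructions and careful read scheduling, which is where the 15 MB/s vs. 200 MB/s gap comes from.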

The better software on high-end cards may also offer advanced features (e.g. adding a drive to a RAID without reformatting, or automatically checking the drives for corruption that would otherwise go undetected while the RAID is running correctly, but would mean total data loss if a drive in the RAID died). The high-end card software is also subject to strict quality control by the manufacturers (these cards are aimed at critical servers, so quality and stability are essential).

3. Cache RAM. The RAM on high-end cards is there for speed. If you need to save lots of small files (or update thousands of entries in a database), you need to make loads of writes to the drive. Certain critical data (e.g. databases) and critical system files (e.g. directory indexes) must be saved in a specific order, otherwise a system crash or power failure may corrupt the data. However, updating thousands of files in order requires thousands of drive seeks and can be very slow (30-100 updates per second).

By using a battery pack to keep the RAM alive, a high-end card can hold data in RAM temporarily, tell the PC it is safe to proceed with the next step, and then write the data to disk in the most efficient order possible, dramatically boosting speed for this type of work (5,000-10,000 updates per second). The battery is essential: without it, if the power goes off, all that data is toast, and since it could include critical system files (e.g. directory indexes), you risk losing all the data on the drive with no hope of recovery.
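The idea can be sketched in a few lines: acknowledge each scattered write immediately into (battery-backed) RAM, then flush in sector order so the heads sweep the platter once instead of seeking per update. A toy illustration, not how any real firmware is written:

```python
# Sketch of a write-back cache: small random writes are acknowledged
# straight into RAM, then flushed sorted by sector so thousands of seeks
# become one near-sequential pass. Illustrative only.

class WriteBackCache:
    def __init__(self):
        self.pending = {}              # sector -> data, in battery-backed RAM

    def write(self, sector, data):
        self.pending[sector] = data    # ack to host immediately; no seek yet

    def flush(self):
        # Flush in ascending sector order to minimise head movement
        ordered = sorted(self.pending.items())
        self.pending.clear()
        return ordered                 # what would actually go to the disks

cache = WriteBackCache()
for sector in [900, 17, 530, 2]:       # scattered database-style updates
    cache.write(sector, b"row")
print([s for s, _ in cache.flush()])   # [2, 17, 530, 900]
```

The battery matters precisely because `pending` holds data the host already believes is safe on disk.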

The main point is that a cheap RAID card may be of very poor quality: it may be very slow, and it may have stability problems. With any RAID, it's important to be able to use the same software/hardware if the card dies. With a big manufacturer, getting a replacement should be easy. With a no-name cheapie, it may be impossible to find a compatible replacement, leaving your data trapped and unrecoverable.

With a big-name RAID card (e.g. LSI, Adaptec, Areca), you know you're getting a good-quality product with good software and high stability (but you pay for it). The battery-backed RAM option is there if you want it (but unless you are doing massive database work, it probably isn't worth it).
 
Caution: there are some fairly pricey RAID cards out there that are software RAID, albeit very good software RAID (a.k.a. "hybrid" or "intelligent" software RAID as used in some marketing brochures). e.g.

HighPoint RocketRAID 2320 PCI Express x4 - $260.00 + shipping

This card (and others like it) feature a fairly modest XOR offload engine (in this case, HPT601) but no IOP, thus making it both OS and host CPU dependent. The chip hiding under the heatsink is just an Intel PCI-X to PCI-E bridge chip.
 
The CCISS RAID controllers (pretty much the last 10 years of HP RAID) are nice: one driver, and no worries about moving drives around (as long as they can be physically connected somehow). Battery-backed write cache IS EOL this year; flash-backed write cache is in.
 
Haha friends don't let their friends use Highpoint. 😉

OP: If your goal is simply to increase STR and space by striping a pair of disks, then the onboard controller is fine.

If you need a dedicated host for a more complex scenario and require hardware cache with tunable write-back features, then a mid-range (SAS) host is a good bet. They do get expensive.

Using a write-back cache without a battery is very risky, even if the system is connected to a UPS. Most enthusiasts are fine without the BBU, but if you're doing work that's relatively important, it can certainly save you pain and aggravation down the road. 😉

 