Slow RAID 5 Performance

Owls

Senior member
Feb 22, 2006
I'm using an Adaptec 2610SA in a 32bit PCI slot.

I have five Seagate SATA II 7200.10s, and RAID 0/5 performance is really in the crapper. Is it due to the 32-bit PCI slot?

We're talking 80.1 MB/s; I think sustained is more like 28 MB/s.
 

Mondoman

Senior member
Jan 4, 2008
Cheap controller cards can't be expected to give good performance, and this card in particular seems to be a known poor performer.
 

Owls

Senior member
Feb 22, 2006
Hm. Do you know of any good PCI controller cards? PCIe ones are kind of expensive right now. I guess I'll take suggestions for either.
 

Owls

Senior member
Feb 22, 2006
735
0
76
Actually, never mind. For good RAID 5 performance I'd need to look at the $350+ range for cards with a dedicated XOR processor for parity calculations.

I guess I'll just stick to RAID 0+1.
 

MerlinRML

Senior member
Sep 9, 2005
I typically see PCI performance around 100MB/sec. Depending on which devices are sharing your PCI bus, 80MB/sec is not unrealistic.
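That 100 MB/s figure lines up with the bus arithmetic. A quick back-of-the-envelope calculation in Python (just illustrating the math for a conventional 32-bit/33 MHz PCI bus):

```python
# Theoretical peak of a conventional 32-bit, 33 MHz PCI bus:
# one 4-byte transfer per clock cycle, shared by every device on the bus.
bus_width_bytes = 32 // 8      # 4 bytes per transfer
clock_hz = 33_000_000          # 33 MHz clock
peak_mb_s = bus_width_bytes * clock_hz / 1_000_000

print(peak_mb_s)  # 132.0 MB/s theoretical ceiling
```

In practice arbitration and other devices on the shared bus eat into that 132 MB/s ceiling, which is why ~100 MB/s real-world (and 80 MB/s on a busy bus) is about what you can expect.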

 

pugh

Senior member
Sep 8, 2000
I'm very pleased with the Areca 1210 RAID controller. Paid less than $320 for it at the Egg.
 

Owls

Senior member
Feb 22, 2006
I'm giving the Adaptec card another try with a smaller stripe size. I remember using an Ultra320 SCSI card in a PCI slot a few years back and I consistently got about 120 MB/s with a RAID 5 array. If it doesn't work out I will swap out for a board with ICH9R and Intel Matrix Storage. The RAID 5 performance on that chipset is very respectable considering it doesn't have dedicated memory or a processor.
 

imported_wired247

Golden Member
Jan 18, 2008
Originally posted by: pugh
I'm very pleased with the Areca 1210 RAID controller. Paid less than $320 for it at the Egg.


Same here. Awesome card, but you need a PCIe x8 slot.

very fast... & very easy to use...


FWIW I chose the maximum stripe size available, because that seemed to give the best performance for my purposes based on benchmarks I've seen on Tom's Hardware. Unless most of the files you access are under 2-3 MB, larger stripe sizes are usually fastest.
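One way to see why stripe size matters: a file that fits in a single strip keeps only one disk busy, while a larger stripe size avoids splitting medium-sized files across many disks. A rough sketch of that relationship (hypothetical sizes, aligned worst case):

```python
import math

def disks_touched(file_size_kb, strip_kb, n_data_disks):
    """How many data disks a sequential read of one aligned file hits."""
    return min(math.ceil(file_size_kb / strip_kb), n_data_disks)

# 4 data disks (5-disk RAID 5), comparing 64 KB vs 256 KB strips
# for a 128 KB file.
print(disks_touched(128, 64, 4))   # 2 disks with 64 KB strips
print(disks_touched(128, 256, 4))  # 1 disk with 256 KB strips
```

Small random reads finish faster when each one stays on a single disk, which is roughly why the large-stripe configurations tend to win in those benchmarks unless the workload is all tiny files.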
 

taltamir

Lifer
Mar 21, 2004
Originally posted by: Owls
Actually, never mind. For good RAID 5 performance I'd need to look at the $350+ range for cards with a dedicated XOR processor for parity calculations.

I guess I'll just stick to RAID 0+1.

Actually, it's the NVRAM that makes it faster.
It's due to the RAID 5 write hole problem.

Basically, RAID 5 needs to read the data currently in the strips it is writing to, merge in the new data, calculate the parity of the combined data, THEN write the whole stripe again. And you are also risking losing data if a power outage hits mid-write.
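The parity math behind that read-modify-write cycle is just byte-wise XOR. A minimal Python sketch of the idea (illustrative only, not any controller's actual firmware):

```python
# RAID 5 parity is the byte-wise XOR of all data strips in a stripe.
def parity(strips):
    """XOR all data strips together to produce the parity strip."""
    result = bytearray(len(strips[0]))
    for strip in strips:
        for i, byte in enumerate(strip):
            result[i] ^= byte
    return bytes(result)

# A full stripe on a 5-disk RAID 5: 4 data strips + 1 parity strip.
old_data = [bytes([d]) * 4 for d in (1, 2, 3, 4)]
old_parity = parity(old_data)

# Updating one strip requires reading the old strip and old parity first:
# new_parity = old_parity XOR old_strip XOR new_strip
new_strip = bytes([9]) * 4
new_parity = bytes(p ^ o ^ n for p, o, n in
                   zip(old_parity, old_data[0], new_strip))

# Same result as recomputing parity over the whole updated stripe.
assert new_parity == parity([new_strip] + old_data[1:])
```

The "write hole" is the window between writing the data strip and writing the parity strip: if power dies between the two, parity no longer matches the data, which is what battery-backed NVRAM on the expensive cards protects against.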

In the $350+ card range you get a dedicated processor and non-volatile RAM (expensive!).
When the card receives the data it tells the computer it was written, then takes its time actually writing it to disk. You can catch up to the buffer and then it gets slow again, so the solution is more NVRAM so the buffer never fills. Very expensive, but it works, sort of.
RAIDZ + ZFS completely bypasses this issue and allows good performance. But any other RAID 5/6 implementation is going to cost a lot of money.

Any cheap controller is actually WORSE than using your motherboard's built-in RAID ability.
If you don't want to build a dedicated Solaris server, or pay for a $350+ controller, then your only choice is RAID 1 or RAID 1+0.
ZFS is being ported to Mac OS X and slowly to other operating systems, so in a few years this issue will hopefully be solved for everyone.