RAID 5 transfer problems for bulk storage application

DragonKin

Junior Member
Oct 10, 2012
3
0
0
So I'm somewhat new to the RAID concept.
I currently have an ASUS P8Z77-V LK motherboard with 6 SATA connections, 2 of them 6 Gbps and the other 4 3 Gbps.
Connected to that motherboard are five 3TB hard drives, one of them on a 6 Gbps port; the other 6 Gbps SATA connection is reserved for my boot drive.

All five 3TB hard drives are in a RAID 5 array controlled through the motherboard. The array was 100% initialized and then set up as a GPT disk so that I can have one large volume, about 11 terabytes, formatted NTFS.

I am running Windows Server 2008 R2 Standard. All drivers installed fine, Intel Rapid Storage Technology is monitoring the array without complaint, and everything is green.

Now to the problem:
When transferring from one networked array (RAID 5) to the new array (the RAID 5 system described above), the transfer starts great, with a nice flat transfer speed around 90% network utilization (gigabit). After about 10 seconds it starts dropping toward 0%, then randomly spikes back up to 90% or does nothing for minutes at a time. Eventually the file will finish, but it takes about 40 minutes or more instead of 5. The files being transferred are large, around 10 to 16 GB each.

The old array has been working flawlessly for about a year. It uses a HighPoint RocketRAID controller card connected to three 2TB hard drives; the problem is that it just got full and I need something bigger. The reason I went with the new motherboard was UEFI BIOS support for the 3TB hard drives.

I cannot figure out what is causing this problem.
* I have tried enabling and disabling write-back cache with no success.
* I have tried a different network switch.
* I have tried copying from the local boot drive to the new array, with the same problem.
Super fast transfer rates are not a major priority, but I need something reasonable; on my old array I would average 60%-80% utilization with minimal fluctuation and a constant transfer.
I have to have data redundancy; this is absolutely important.
But mostly I just need a huge amount of space, all on one large volume.
I hope I haven't left any information out; please let me know if more is needed.
 

hhhd1

Senior member
Apr 8, 2012
667
3
71
I had the exact same problem once, although it wasn't a RAID. After a lot of debugging it turned out to be not enough power: the PSU was flaky, and I was connecting more than 2 drives on the same cable from the PSU. I reduced the number of disks on the same power cable and the problem went away.
 

DragonKin

Junior Member
Oct 10, 2012
3
0
0
Interesting. How many drives did you have, and what kind of power supply?
I'm running 6 drives total, an i3 processor, 8 GB of RAM, and onboard video on a 680 W Thermaltake, with 2 drives per power cable. I do notice that the gauge of wire on each cable is quite small. I will try a better quality power supply. Thank you.
 

hhhd1

Senior member
Apr 8, 2012
667
3
71
The power supply I had was a 'cheap' one, not from a reputable brand. I think it was 650 W, and there were about 3 drives per cable. To verify this you can:

1. Using RDP or the machine itself, try to access the RAID array, especially when the network utilization drops to 0%, and see if it is still accessible.

2. Go near the drives and check whether they spin up/down during the transfer. You can either touch them to feel them spinning up/down, or watch the SMART start/stop count for each drive and see if it increases while the copy is running (see the rough sketch below).
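If you want to watch the start/stop counts without sitting next to the machine, something like the following rough sketch should work. It assumes smartmontools (smartctl) and Python 3 are installed on the server and that it runs with admin rights; the device names are placeholders, so adjust them for your disks.

import subprocess
import time

# Placeholder device names -- list yours with `smartctl --scan`.
DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

def start_stop_count(device):
    # Parse the raw Start_Stop_Count attribute (last column of `smartctl -A`).
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Start_Stop_Count" in line:
            return int(line.split()[-1])
    return None

baseline = {d: start_stop_count(d) for d in DRIVES}
print("baseline:", baseline)

while True:
    time.sleep(30)  # poll every 30 seconds while the copy is running
    for d in DRIVES:
        now = start_stop_count(d)
        if now is not None and now != baseline[d]:
            print(f"{d}: start/stop count {baseline[d]} -> {now} "
                  "(drive spun down/up mid-transfer)")
            baseline[d] = now

If a drive's count climbs during a transfer, that drive is losing power or dropping off the bus, which fits the flaky PSU theory.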
 

greenhawk

Platinum Member
Feb 23, 2011
2,007
1
71
Some other ideas, not that they will be easily addressed or checked:

An issue with the software RAID implementation (motherboard RAID 5 is not the best).

An alignment issue with the HDDs.

As for the full speed, then dropping, then going again: that sounds exactly like write caching, so you are not seeing the actual array performance, because data sits in memory for an extended period before being flushed to the disks.
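If you want to rule alignment out quickly, a rough sketch along these lines reads the partition starting offsets via wmic and flags anything not on a 4 KiB boundary. It assumes Python 3 on the box; Server 2008 R2 normally creates partitions at a 1 MiB offset anyway, so this is mostly a sanity check.

import subprocess

# Ask Windows for every partition's starting offset in bytes.
out = subprocess.run(["wmic", "partition", "get", "Name,StartingOffset"],
                     capture_output=True, text=True).stdout

for line in out.splitlines()[1:]:
    tokens = line.split()
    if not tokens or not tokens[-1].isdigit():
        continue  # skip blank lines
    offset = int(tokens[-1])
    name = " ".join(tokens[:-1])
    status = "OK" if offset % 4096 == 0 else "MISALIGNED"
    print(f"{name}: starts at {offset} bytes -> {status} (4 KiB boundary)")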
 

mrpiggy

Member
Apr 19, 2012
196
12
81
Try not using the Intel RAID and see what happens. Basically, use Server's built-in software RAID 5 on the same hard drives. Both the Intel RAID and the Server OS RAID are software based. I haven't had good luck with Intel's fake hardware RAID for anything beyond RAID 0/1, but I've never had issues with the built-in OS RAID, especially when the drives are all connected to the motherboard. (It will never be as fast as a dedicated caching RAID controller like an LSI unit, but it's as good as the Intel solution.) There should be no performance penalty, since both use the CPU for the RAID parity calculations.
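Roughly, switching to the OS-level RAID 5 looks like the sketch below, which drives diskpart from Python. The disk numbers and drive letter are placeholders (check them with diskpart's "list disk" first), you would have to delete the Intel array beforehand so Windows sees five bare disks, and it wipes whatever is on the listed disks. The Disk Management GUI's "New RAID-5 Volume" wizard does the same thing if you'd rather click through it.

import subprocess
import tempfile

DISKS = [1, 2, 3, 4, 5]   # placeholder disk numbers from `list disk`
LETTER = "E"              # placeholder drive letter for the new volume

# Build a diskpart script: clean each disk, make it GPT and dynamic,
# then create one striped-with-parity (RAID 5) volume across all of them.
script_lines = []
for d in DISKS:
    script_lines += [f"select disk {d}", "clean", "convert gpt", "convert dynamic"]
script_lines += [
    "create volume raid disk=" + ",".join(str(d) for d in DISKS),
    "format fs=ntfs quick label=Storage",
    f"assign letter={LETTER}",
]

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("\n".join(script_lines) + "\n")
    script_path = f.name

# diskpart runs the whole script non-interactively with /s
subprocess.run(["diskpart", "/s", script_path], check=True)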
 

DragonKin

Junior Member
Oct 10, 2012
3
0
0
Well, shucks... this LSI unit, is it a PCI Express card type of controller, or do I need a new motherboard?

Also, just an update: one of the 5 drives went bad and won't spin up anymore. I'm pretty sure it was related to the power supply. I have a new one coming in today, along with a replacement 3TB drive. I will try the new power supply with the good drives and see if the array becomes stable. I will also look into a dedicated caching RAID controller.

I might get lucky and just have had a low-quality power supply that was slowly killing the drive: when amperage spiked as the transfer rate peaked, maybe the supply just couldn't handle it, and the voltage drop on that one line killed the drive.

Thank you everyone for your help and advice; I will post my results as soon as I have some.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Well, shucks... this LSI unit, is it a PCI Express card type of controller, or do I need a new motherboard?
PCI-e cards. The most common ones are the Dell PERC series. They're good hardware RAID controllers, with reasonable performance, good drive and OS compatibility, and they aren't too expensive.