Insanely slow speeds in RAID 0

skyman66

Junior Member
May 16, 2012
6
0
0
After running Windows 7 fine for four days, my new computer randomly slowed down to a crawl. It takes a few minutes just to click on anything. I was using it fine today right up until I installed Adobe Reader, and then it instantly went slow (I think that's a coincidence).

I'm running ATTO at the moment and I'm seeing 3 KB/s write and 5 KB/s read at the 0.5 KB transfer size. Yesterday I was getting 1 GB/s.

In safe mode it has the same slowness. I restored to a restore point from before yesterday, and the same problem was still there. I've heard the RAID card making single, long beeps, usually during shutdown. It boots into Windows as fast as usual and then goes slow after a few seconds in Windows.

My guesses:
-the RAID card decided it can't handle the speeds
-the RAID card is overheating from the high speeds
-the RAID card is "rebuilding" even though it says it's fine
-Windows Update is messing with things (I see the update icon when shutting down, but I rarely have the patience to shut the computer down normally...when I do wait, the RAID card beeps)

Any ideas?

Thanks.

System:
-i7-3770K
-GTX 680
-Highpoint 2720SGL RAID card + 4x OCZ Agility 3 in RAID 0
-3x 3TB in RAID 5 (mobo RAID)
-32GB RAM
-ASUS P8Z77 WS mobo
 
Last edited:

bryanW1995

Lifer
May 22, 2007
11,144
32
91
I got that "super slow-mo" slowdown last week on my RAID 0 M4s. One of my disks wasn't sitting flat (they're just lying in the bottom of my case) and it hit a disk error. I eventually had to delete the RAID array and re-seat the disks properly.
 

skyman66

Junior Member
May 16, 2012
6
0
0
Well, on my old computer I have 3x OCZ Solid SSDs. They still run fine except when playing HD videos online, and they've been running without any erase or reformat since 2009.

@Bryan - My SSDs are secured in my 3.5" drive bays (two per bay) with adapters. I managed to get a blue screen with a beep from the RAID card, and now Windows is in a boot loop...I may just have to install two of the SSDs in RAID 0 on the mobo and give up on the RAID card.
 
Last edited:

philipma1957

Golden Member
Jan 8, 2012
1,714
0
76
4 OCZ SSDs in RAID = fail.

Try to do it over without Adobe and see if it runs well without Adobe. Another option is to use 3 Crucial M4s in RAID 0; since you don't have Crucials, try 3 of the OCZs.
 

Railgun

Golden Member
Mar 27, 2010
1,289
2
81
4 OCZ SSDs in RAID = fail.

Really...

All of the OCZ naysayers are really starting to sound clueless.

5284daf4-9d75-906f.jpg


A RAID 0 array won't rebuild. There's nothing to rebuild from: no parity information, so it would just die.

Pushing higher speeds through the card doesn't necessarily make it run hotter.

If you have one drive that's dying, I suppose it's possible. Get a backup image and check your drives individually. Secondly, the card would throw an audible error for a reason; the owner's manual should say what that specific beep means. Also, if that card has per-disk LED activity and error headers, connect them up to see if it's complaining about a particular disk.
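If you want to script that per-drive check, something like this rough sketch will dump a few SMART attributes from each disk with smartmontools. The device names and attribute names below are placeholders only; drives sitting behind the Highpoint card may need smartctl's controller-specific device type, or a temporary move to the motherboard SATA ports.

# Rough sketch only: dump a few SMART attributes from each member drive so a
# single sick disk stands out. Assumes smartmontools (smartctl 7+ for JSON
# output) is installed; device and attribute names are placeholders and vary
# by system, vendor, and controller.
import json
import subprocess
DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # example names
WATCHED = {"Reallocated_Sector_Ct", "Program_Fail_Count",
           "Erase_Fail_Count", "Unexpect_Power_Loss_Ct"}    # example names
for dev in DEVICES:
    result = subprocess.run(["smartctl", "-A", "--json", dev],
                            capture_output=True, text=True)
    try:
        report = json.loads(result.stdout)
    except json.JSONDecodeError:
        print(f"{dev}: could not read SMART data")
        continue
    print(f"== {dev} ==")
    for attr in report.get("ata_smart_attributes", {}).get("table", []):
        if attr["name"] in WATCHED:
            print(f"  {attr['name']}: raw={attr['raw']['string']}")

Anything with climbing reallocated sectors or program/erase failures on just one drive is your prime suspect.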
 
Last edited:

slow_poke

Junior Member
Dec 26, 2011
22
0
0
I got that "super slow-mo" slowdown last week on my RAID 0 M4s. One of my disks wasn't sitting flat (they're just lying in the bottom of my case) and it hit a disk error. I eventually had to delete the RAID array and re-seat the disks properly.

The error has nothing to do with the way the drive was seated, or not seated.
 

DominionSeraph

Diamond Member
Jul 22, 2009
8,386
32
91
Really...

All of the OCZ naysayers are really starting to sound clueless.

>comments are about OCZ's horrible reliability
>posts screenshot that has no statistical significance in regards to reliability

Good job there, champ. :rolleyes:
 

Railgun

Golden Member
Mar 27, 2010
1,289
2
81
Sorry...they've been running for over a year in RAID 0. Thanks for the scrutiny over me replying to nothing you had mentioned previously, and to nothing that had anything to do with my post. See the quote I made? That's what I was referring to.

If I'm the champ, I guess that makes you the loser? :rolleyes:

I would be willing to bet that the vast majority of these drives are running on crap controllers.
 
Last edited:

philipma1957

Golden Member
Jan 8, 2012
1,714
0
76
Really...

All of the OCZ naysayers are really starting to sound clueless.

5284daf4-9d75-906f.jpg


A RAID 0 array won't rebuild. There's nothing to rebuild from: no parity information, so it would just die.

Pushing higher speeds through the card doesn't necessarily make it run hotter.

If you have one drive that's dying, I suppose it's possible. Get a backup image and check your drives individually. Secondly, the card would throw an audible error for a reason; the owner's manual should say what that specific beep means. Also, if that card has per-disk LED activity and error headers, connect them up to see if it's complaining about a particular disk.

RAID 0 with 4 = fail is not clueless; in the case of the OP, it's the truth.

I did give him free advice on a fix:

RAID 0 with 3 drives, and don't use Adobe.

His RAID 0 is dead; my advice is free, repeat, free.

4 SSDs in a RAID 0 don't need much of a push to mess up.

The OP mentioned that it messed up after Adobe was added.

So a rebuild without Adobe is free. Then, if it fails with 4 drives and no Adobe, he can use 3 drives and no Adobe.

If that fails, he needs to look into another set of SSDs, 3 or 4 Crucials; if that fails, another RAID card.

The sentence above might be better the other way around: get a RAID card first, then the SSDs.

Now, you are using a really good RAID card, and you state your OCZ RAID 0 has worked for a year. Your card is very costly, and it's nice that it works for you. What model is it? Some cost more than $1k.

The OP seems to have a Highpoint card that costs about $160, not that good a card.

Areca cards are $500 to $1,500 and very high quality. I could tell him to buy one and try the 4 OCZ drives.

http://www.newegg.com/Product/Product.aspx?Item=N82E16816151039 = $450, the cheapest one that may work with his 4 SSDs.
 
Last edited:

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
Here's even more fail!

as-ssd-benchArecaNORWEGIAN514201212-28-31AM.png


:D

On a serious note, if you're experiencing those kinds of slowdowns, you're going to need to test each drive independently and make sure there's no issue with any of them. Just one seemingly tiny issue with ONE drive will bring the array to its knees. It's like a chain: only as strong as its weakest link...
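If you do end up hanging the drives off the motherboard ports to test them one at a time, a quick-and-dirty timed raw read from each one will usually expose the slow member. A rough Python sketch (needs admin rights; the device paths are placeholders, not anything specific to your setup):

# Rough sketch: time a raw sequential read from the start of each drive so a
# single slow member stands out. Needs admin rights; device paths are
# placeholders (Windows-style here since the OP runs Windows 7).
import os
import time
DEVICES = [r"\\.\PhysicalDrive1", r"\\.\PhysicalDrive2",
           r"\\.\PhysicalDrive3", r"\\.\PhysicalDrive4"]  # placeholders
CHUNK = 1024 * 1024          # 1 MiB per read call (keeps reads sector-aligned)
TOTAL = 256 * 1024 * 1024    # read 256 MiB from each drive
for dev in DEVICES:
    try:
        fd = os.open(dev, os.O_RDONLY | getattr(os, "O_BINARY", 0))
    except OSError as err:
        print(f"{dev}: cannot open ({err})")
        continue
    done = 0
    start = time.perf_counter()
    while done < TOTAL:
        buf = os.read(fd, CHUNK)
        if not buf:
            break
        done += len(buf)
    elapsed = time.perf_counter() - start
    os.close(fd)
    print(f"{dev}: {done / elapsed / 1e6:.1f} MB/s over {done // (1024 * 1024)} MiB")

Whichever drive comes back dramatically slower than its siblings is the one to pull.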
 
Last edited:

skyman66

Junior Member
May 16, 2012
6
0
0
"Alarm (speaker): the speaker emits and audible alarm in the case of Drive/array failure."

I'll go with Rubycon's suggestion and check out each drive.
 

Old Hippie

Diamond Member
Oct 8, 2005
6,361
1
0
"Alarm (speaker): the speaker emits and audible alarm in the case of Drive/array failure."
There ya have it although I'm very suprised the array even booted or doesn't point you to the failed drive.

I would think there'd be some Highpoint RAID managment software that would point you in the correct direction.

Good Luck!
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
Rubycon knows his stuff and his advice is spot on.

Personally speaking, and knowing these drives more than most...I think it's a particular drive causing an issue in that array and, as was mentioned, it's cumulatively affecting the stripe down the line and chopping the array's legs off. Better to get your data backed up sooner rather than later.

I have that same card with many OCZ drives and have no issues whatsoever with longer-term stability in RAID or single pass-through. If I did have an issue, though, I wouldn't waste more than about 10 minutes trying to figure it out.

Simple recovery options to confirm that the drives themselves are not to blame: secure erase (this is CRITICAL, since it's the ONLY way to wipe the drive's internal mapping and eliminate corruption); force-flashing the firmware is optional but highly recommended, as it rewrites the firmware's base code to make sure no corruption exists there either; rebuild the array; restore the image. Then retest once again.
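For anyone who hasn't done a secure erase before, the generic ATA route from a Linux boot disc with hdparm looks roughly like this sketch. The device name and password are placeholders, and it wipes the drive completely, so do one drive at a time outside the array:

# Sketch of a generic ATA secure erase via hdparm from a Linux boot disc.
# THIS DESTROYS ALL DATA on the target drive. The device name and password
# are placeholders; do one drive at a time, outside the array.
import subprocess
DEV = "/dev/sdX"     # placeholder: the drive being wiped
PASSWORD = "p"       # temporary ATA password; the erase clears it again
def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)
# 1. The drive must report "not frozen" in the security section of hdparm -I;
#    a suspend/resume cycle or hot-replugging the SATA cable usually unfreezes it.
run("hdparm", "-I", DEV)
# 2. Set a temporary security password, then issue the erase command.
run("hdparm", "--user-master", "u", "--security-set-pass", PASSWORD, DEV)
run("hdparm", "--user-master", "u", "--security-erase", PASSWORD, DEV)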

PS. are you running the latest Highpoint drivers and firmware on that card?
 

Railgun

Golden Member
Mar 27, 2010
1,289
2
81
RAID 0 with 4 = fail is not clueless; in the case of the OP, it's the truth.

I did give him free advice on a fix:

RAID 0 with 3 drives, and don't use Adobe.

His RAID 0 is dead; my advice is free, repeat, free.

4 SSDs in a RAID 0 don't need much of a push to mess up.

The OP mentioned that it messed up after Adobe was added.

So a rebuild without Adobe is free. Then, if it fails with 4 drives and no Adobe, he can use 3 drives and no Adobe.

If that fails, he needs to look into another set of SSDs, 3 or 4 Crucials; if that fails, another RAID card.

The sentence above might be better the other way around: get a RAID card first, then the SSDs.

I also suggested that he check his drives. He could easily have had three to start, and if one is in fact toast, and that one was part of the original three, then the same could be said that three in a RAID = fail. The point is that ANY drive in a stripe, HDD, SSD, whatever, can fail and kill your array. Yes, the more you have, the greater the chances, but three is not the magic number of discs in an array.


Now, you are using a really good RAID card, and you state your OCZ RAID 0 has worked for a year. Your card is very costly, and it's nice that it works for you. What model is it? Some cost more than $1k.

The OP seems to have a Highpoint card that costs about $160, not that good a card.

Areca cards are $500 to $1,500 and very high quality. I could tell him to buy one and try the 4 OCZ drives.

http://www.newegg.com/Product/Produc...82E16816151039 = $450, the cheapest one that may work with his 4 SSDs.

My point there, again, is that there are a lot of folks who knock OCZ for whatever reason. I contend that the controllers are partly to blame. I don't disagree that the FW on the SSDs plays a part, perhaps a large part. I would love to see the communication between the host and SSD across various controllers to see if there are any differences. But that's neither here nor there and going way OT.

And I'm running an 1880-i. While I appreciate the fact that not everyone will have higher end cards, I feel that many are stepping into a realm that they don't necessarily fully understand and when things go wrong, they're not prepared and start throwing stones.

Yep.

Start by finding out what the beeps mean?

What I said as well.
 
Last edited:

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
PCIe lanes can renegotiate (there are problems with PCIe 3.0, btw) - like IDE and SATA, they clock themselves down. I just had an advisory for the new HP Gen8 on lane throttling due to a bug in something. Basically your x8 slot slows down to x1 PCIe 1.0 and you wonder WTF happened to those SSDs that were hauling butt.

Analogy: the WiFi signal spiral of death.
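On a Linux live disc you can read what the slot actually negotiated straight out of sysfs. A rough sketch; the bus address is a placeholder, find the real one with lspci:

# Rough sketch: read the negotiated PCIe link speed/width for a card from
# Linux sysfs to see whether the slot has trained down. The bus address is a
# placeholder; find the real one with lspci.
from pathlib import Path
DEVICE = "0000:03:00.0"   # placeholder PCI address of the RAID card
base = Path("/sys/bus/pci/devices") / DEVICE
for attr in ("current_link_speed", "current_link_width",
             "max_link_speed", "max_link_width"):
    node = base / attr
    if node.exists():
        print(f"{attr}: {node.read_text().strip()}")
    else:
        print(f"{attr}: not exposed by this kernel")

If the current link speed/width comes back below the maximums while the card is under load, that's the renegotiation I'm talking about.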
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
PCIe lanes can renegotiate (there are problems with PCIe 3.0, btw) - like IDE and SATA, they clock themselves down. I just had an advisory for the new HP Gen8 on lane throttling due to a bug in something. Basically your x8 slot slows down to x1 PCIe 1.0 and you wonder WTF happened to those SSDs that were hauling butt.

Analogy: the WiFi signal spiral of death.


That seems very strange...I've never heard of a board losing lanes from the ICH/PCH. I've seen many an instance where the bandwidth is hogged or impinged upon by multiple and/or very fast PCIe devices (it usually requires several)...but never on a board which doesn't have them attached.

I would guess that auto-negotiation would come into play when the system is being taxed for lane availability, which surely doesn't seem to be the case with the OP's system.

At least I sure as hell hope it doesn't, because I was planning on buying that same board and running 3 RAID cards, one of them being the same as the OP is using.
 
Last edited:

Railgun

Golden Member
Mar 27, 2010
1,289
2
81
Most of that is a power-saving technique; it should not be renegotiating while in use. You can frequently see that with video cards in particular.

I don't think the card is at fault if this is the case.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
The error has nothing to do with the way the drive was seated, or not seated.

I get that; I just mentioned that mine wasn't seated properly because that was the reason I got the disk error. My point was that with a disk problem in a RAID 0 system I experienced the same issue he's having, so I assumed the OP likewise had some sort of disk issue.

Edit: @groberts, I did not SE my failed drive; I got lazy and just formatted it. I haven't done much more than update Win 7 so far with the new/fixed array. Do you recommend that I go back and perform an SE now, or would you see if everything holds up OK on the new array?
 
Last edited:

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
@groberts, I did not SE my failed drive; I got lazy and just formatted it. I haven't done much more than update Win 7 so far with the new/fixed array. Do you recommend that I go back and perform an SE now, or would you see if everything holds up OK on the new array?

I have to assume you're talking about your M4 RAID. I would either run a dedicated garbage-collection session over a few days' time (maybe 12-18 hours of constant power to the drives, which requires no sleep/hibernate, or S1 sleep only, depending on array capacity), which will better allow the drives to settle and leverage all the tricks the firmware designers included for optimum performance...or, if it were me, I would just spend the 15 minutes to SE/reimage and know it's as good as it'll get for my OS volume afterwards.

Either way you go, you're rebuilding the fresh block pool quickly enough, unless you write the hell out of them while you're trying to recover fresh blocks during idle time.

I would guess the array should be fine for major write capability/stamina, so long as you don't overfill it or write major GBs beyond the array's available space over shorter work sessions. Latency on a fresher drive is usually a slight bit better too. Bonus.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
I don't sleep/hibernate anyway, and it's already had some good GC time. I'll wait until tomorrow to install seti@home.

Thank you for the advice.
 

skyman66

Junior Member
May 16, 2012
6
0
0
More info:
-All of the drives had the current firmware update, as did the RAID card
-Sleep/hibernate were turned off right away
-ATTO benchmarks were always wacky...the read and write speeds would randomly drop very low mid-test and then increase thereafter, not the "consistent" increase seen in most SSD tests done by others.

See below for the SMART data on the 4 drives. I see drive #1 has some flash failures. I'm also wondering why the unexpected power-loss counts are high on all of them.

Any thoughts?

OCZ_SmartData2.jpg


Here is the ATTO test, though it was run with SATA 2 cables at the time. I was using SATA 3 cables when the problem happened. I reinstalled Windows once using SATA 3, just in case.
ATTO4xSSD.jpg
 
Last edited: