Anyone familiar with RAID0-to-RAID5 conversion for an nVidia on-board controller?

BonzaiDuck

Lifer
Jun 30, 2004
Title and topic summary should be self-explanatory.

I thought I saw somewhere that you could simply add a 3rd HD to a RAID0 setup (for some controllers, anyway) and it would rebuild itself into a RAID5.

Does anyone have an idea of the steps I need to go through? I assume it's all done through the RAID setup-screen in the nVidia BIOS.

 

brinox

Member
May 4, 2006
Been there, done that. It's pretty simple, but depending on your array size, it could take quite a while. If things haven't changed, conversions can only be done through the nVidia control panel in Windows; only basic operations are available in the nVidia RAID BIOS.

The steps you need to go through are really just adding the drive and then converting it. It's only a couple of menu clicks in the nVidia control panel, but it WILL take quite a while to morph the array from RAID 0 to RAID 5.
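
For a rough sense of why it takes so long: the morph has to read every existing stripe and write it back in the new layout with parity, so the time scales with array size over effective rewrite speed. A back-of-the-envelope sketch (plain Python; the throughput figure is an assumption, not anything measured from the nVidia software):

    # Rough estimate of how long a RAID 0 -> RAID 5 morph might take.
    # The effective throughput is a guess; real migrations are often
    # slower because the array stays usable while it reshapes.

    def migration_hours(array_size_gb, effective_mb_per_s=20.0):
        """Hours to read and rewrite array_size_gb of striped data."""
        total_mb = array_size_gb * 1024
        return total_mb / effective_mb_per_s / 3600

    for size_gb in (500, 1000, 1500):
        print(f"{size_gb} GB array: ~{migration_hours(size_gb):.1f} hours")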

This is my experience from about a year and a half ago, but I don't recall hearing from friends who have done nVidia-based RAID conversions that much has changed...
 

BonzaiDuck

Lifer
Jun 30, 2004
Brinox -- your reply almost suggests that I don't need to do ANYTHING in the BIOS setup? I'm guessing I most certainly need to enable RAID for the new drive in the main BIOS. What do I need to do in the nVidia RAID setup screen at system POST?

I'll probably spend more time planning this than the 5+ (maybe more?) hours it takes to rebuild the array.

Since 2003, we've had six RAID0 arrays set up among my extended family's computers. One of them failed after two years; another failed after six months due to an HD hardware failure. The first of the lot is still running after six years, having weathered an emergency when I broke the four Molex solder joints on one drive and ran to my brother for repair.

Don't know for sure why I chose to configure this 780i board in RAID0 initially. I thought I might use it exclusively for games, but the system seems rock-solid, so I'm considering that a "migration" might be in order, and I'll feel better with a 3-drive RAID5.
 

taltamir

Lifer
Mar 21, 2004
I wouldn't run RAID5 on a mobo controller (well, I wouldn't run RAID0 either)... it's best to do it at the OS level or on a pure hardware controller...

Anyway, I find the most practical "conversion" method is:
1. Buy new drives.
2. Build a new array.
3. Burn in the new array.
4. Copy the data to the new array and test that it's working (one quick way to check the copy is sketched below).
5. Take apart the old array and sell the drives on eBay :).
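
Re step 4: a minimal sketch of one way to check the copy, assuming plain Python and placeholder mount points for wherever the old and new arrays live -- hash every file on both sides and compare:

    # Hash every file under each mount point and compare the results.
    # The mount points below are placeholders, not real paths.

    import hashlib
    import os

    def hash_tree(root):
        """Map each file's path (relative to root) to its SHA-256 digest."""
        digests = {}
        for dirpath, _dirs, filenames in os.walk(root):
            for name in filenames:
                full = os.path.join(dirpath, name)
                h = hashlib.sha256()
                with open(full, "rb") as f:
                    for block in iter(lambda: f.read(1 << 20), b""):
                        h.update(block)
                digests[os.path.relpath(full, root)] = h.hexdigest()
        return digests

    old = hash_tree("/mnt/old_array")   # placeholder mount points
    new = hash_tree("/mnt/new_array")
    print("copies match" if old == new else "MISMATCH -- don't wipe the old array yet")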
 

BonzaiDuck

Lifer
Jun 30, 2004
Originally posted by: taltamir
I wouldn't run RAID5 on a mobo controller (well, I wouldn't run RAID0 either)... it's best to do it at the OS level or on a pure hardware controller...

Anyway, I find the most practical "conversion" method is:
1. Buy new drives.
2. Build a new array.
3. Burn in the new array.
4. Copy the data to the new array and test that it's working.
5. Take apart the old array and sell the drives on eBay :).

[I'm half-joking, but...] are you still running that [old] Q6600 in your sig?

I've got a hardware [3ware] controller on my Q6600 system. This E8600 with Vista 64 just seems like a vast improvement, so I'm looking at options... maybe just buy an OEM Vista 64 and make the full conversion on the old quad system. Dunno... I have a spare drive to convert this RAID0, even so...
 

taltamir

Lifer
Mar 21, 2004
Ha... interesting that you would mention it... I bought a Q9400 on eBay for $164.99 last week, and my Q6600 sold TODAY for $156.58.
I figure I'll make up the difference in electricity savings alone :).

The fileserver is actually running a low-power AMD X2 EE (I mean specifically one of the low-power models, not that the X2 in general is a power-efficient chip).

But you know, it's not the AGE of the controller that is the issue here...
There are three ways of doing RAID: pure software, pure hardware, and hybrid.

Pure software, aka OS-level RAID (e.g. ZFS on OpenSolaris, Windows software RAID, Linux md): slow (comparatively), reliable, free, no single point of failure, easily migratable, etc.
I use pure software RAID.

Pure hardware (e.g. $300+ controllers): blazing fast, but a single point of failure (repairable only by an exact replacement of potentially hard-to-find hardware in the future), proprietary, non-migratable, etc.

Hybrid (motherboard RAID): all the disadvantages of pure software, all the disadvantages of pure hardware, none of the benefits of either, plus many new disadvantages unique to hybrid (such as losing the array on a CMOS clear), and it's even slower than a pure software implementation.

Hybrid (mobo RAID) has one and only one unique advantage: Windows cannot boot off a software RAID array (some other OSes can!), so only pure hardware or hybrid RAID can be used as a Windows boot drive. Pure hardware is expensive, while hybrid is usually "free" (included in the mobo price) and comes with a ton of disadvantages and risks.
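
Just to illustrate what ends up on the host CPU in the software and hybrid cases, here is a minimal sketch (plain Python, not any vendor's actual code): RAID5 parity is an XOR across the data chunks of each stripe, and any single lost chunk can be rebuilt by XOR-ing the survivors.

    # Toy illustration of RAID5 parity: parity = XOR of the data chunks,
    # and a missing chunk is rebuilt by XOR-ing whatever is left.

    def xor_chunks(chunks):
        """XOR a list of equal-length byte strings together."""
        result = bytearray(len(chunks[0]))
        for chunk in chunks:
            for i, b in enumerate(chunk):
                result[i] ^= b
        return bytes(result)

    # Three-drive RAID5 stripe: two data chunks plus one parity chunk.
    d0, d1 = b"chunk-from-disk0", b"chunk-from-disk1"
    parity = xor_chunks([d0, d1])

    # If disk 1 dies, its chunk comes back from the survivors.
    assert xor_chunks([d0, parity]) == d1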

Anyone who knows anything would tell you to never, ever, EVER use mobo-based RAID. Actually, I went against such advice from several professionals (what with me being a professional myself, mostly thinking "it can't be THAT bad")... well, I was wrong; it was worse than I ever imagined. I didn't give up easily, either: I tried various configurations such as RAID5 and RAID1 on a variety of mobos over the years (nForce 2, 4, 5, and Intel chipsets, ICH#R), and all were bitter disappointments despite having been given plenty of time.
 

BonzaiDuck

Lifer
Jun 30, 2004
Yes -- I've never been oblivious to the imperatives. That's why I originally implemented RAID5 with hardware controllers.

And I also had flirted with the motherboard offerings: my first RAID0 was done that way.

The E8600/780i build was a "casual" project, done with spare parts excluding the motherboard itself. So it seemed like the "natural" thing to do.

Of course, I just ran the Everest disk benchmark "read-test suite" on the array in question, and it bears out all of our assumptions: the performance increase isn't anywhere near proportional to the number of drives.
 

Madwand1

Diamond Member
Jan 23, 2006
nForce RAID 5 is a different kettle of fish from nVIDIA RAID 0, Intel RAID 5, etc., and is about as bad as Windows OS RAID 5, which is horrible for write performance -- and write performance is the Achilles' heel of RAID 5; read performance is typically not bad.
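
The root of the write problem is the read-modify-write penalty: a small RAID5 write typically costs four disk operations (read old data, read old parity, write new data, write new parity). A rough sketch of that arithmetic, with a made-up per-drive IOPS figure rather than a measured one:

    # Back-of-the-envelope RAID5 small-write penalty. The per-drive
    # IOPS number is a placeholder, not a benchmark result.

    def raid0_small_write_iops(drives, per_drive_iops=80):
        """RAID0 has no parity, so each write is one disk operation."""
        return drives * per_drive_iops

    def raid5_small_write_iops(drives, per_drive_iops=80):
        """Each RAID5 small write costs ~4 disk operations
        (read data, read parity, write data, write parity)."""
        return drives * per_drive_iops / 4

    print(raid0_small_write_iops(3))   # ~240 IOPS
    print(raid5_small_write_iops(3))   # ~60 IOPS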

Here's a thread which goes into the specifics with nVIDIA RAID 5.

http://forums.storagereview.ne...ex.php?showtopic=25786
 

taltamir

Lifer
Mar 21, 2004
OS RAID5 at least has advantages to go with the disadvantages; hybrid does not.

As for your example... it's a different kettle of fish only in that one is rotten mackerel and the other is rotten salmon. I wouldn't eat either, because they're both rotten.
 

Madwand1

Diamond Member
Jan 23, 2006
Originally posted by: BonzaiDuck
I just ran the Everest disk benchmark "read-test suite" on the array in question, and it bears out all of our assumptions: the performance increase isn't anywhere near proportional to the number of drives.

I haven't used that benchmark, but practically linear improvement is easily demonstrable with RAID for simple sequential access at least.

E.g.: 2 vs. 3 drive tests done a couple of years ago showing essentially linear scalability (with drives of that generation):

http://i89.photobucket.com/alb...0-2drive-vs-3drive.png

Did you mean some other measurements, or are you perhaps using drives so fast that the chipset, interface, etc. becomes the bottleneck?

Here's a simple 3-drive nVIDIA RAID 5 bench, which also shows such scalability with access sizes >= 64 KiB:

http://i89.photobucket.com/alb...d0/atto-nvr53-3264.png
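
For sequential transfers the ideal numbers are easy to reason about: RAID0 streams from all N drives, while RAID5 sequential writes only get (N-1) drives' worth of bandwidth because one chunk per stripe is parity. A rough sketch with a placeholder per-drive speed, not a measured figure:

    # Ideal sequential scaling, ignoring controller and bus overhead.
    PER_DRIVE_MB_S = 80  # assumed sequential speed of a single drive

    def raid0_sequential(drives):
        """RAID0 reads and writes stream from every drive."""
        return drives * PER_DRIVE_MB_S

    def raid5_sequential_write(drives):
        """One chunk per RAID5 stripe is parity, so only (drives - 1)
        chunks carry data on a full-stripe write."""
        return (drives - 1) * PER_DRIVE_MB_S

    for n in (2, 3):
        print(f"{n} drives: RAID0 ~{raid0_sequential(n)} MB/s, "
              f"RAID5 seq write ~{raid5_sequential_write(n)} MB/s (ideal)")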
 

BonzaiDuck

Lifer
Jun 30, 2004
It's obvious to me that I'll either have to live with the compromises, (a) shell out for another PCI-E hardware controller, or (b) disassemble my 680i system and move the controller to the 780i. Once I've built one of these things and it's working fine, I'm not inclined to take it apart.

I think my disenchantment with the old 680i system comes down to the Kentsfield B3 CPU. The board itself is spec'd to 1333 MHz FSB, and will handle a Wolfdale dual-core just fine up to 1600 FSB -- WITH the right BIOS revision, but that's not a problem either. And frankly, Kentsfield notwithstanding, I'd bet the system would become really snappy if I just junked the old 32-bit XP OS for either Vista 64 or Windows 7.

I jumped on the RAID5 bandwagon for both speed and reliability, but considering the number of disk failures I've had over 20 years, and the number of RAID0 arrays I've had which didn't fail (and I've always been careful about backups), it was probably overkill.

You figure most of the hott-dawgs here at the forums who post threads about either their over-clocking exploits or their over-clocking problems just plug their SATA2 drives into the motherboard. I saw where AigoMorla just completed a super-water-cooled extravaganza with the eVGA Classified X58 board, and I don't know if he used a hardware controller.

Right now, any successor to my 3Ware 9650SE controller still looks pretty expensive. The more modestly priced alternative is a HighPoint 35x0 controller (the 3510 was reviewed by Maximum PC a year or so ago). Figure you're going to shell out in excess of $300 for a four-port RAID5/RAID6 card.

Frankly -- and I don't think I'm alone here, judging by how quickly our forum buddies turn over computer parts every year -- I'm so saturated with hardware that I feel like the kid stomping down the street with one of those giant Hershey bars, chocolate drooling down his chin onto the ubiquitous striped T-shirt, hands covered in muck.

Too many computers... but my vast accumulation of grrr-eat writings, e-mails, and manuscripts is safe for posterity, even if nobody else ever reads them once I've passed into the cyberspace afterlife.