Raid 0 advice for future i7 system

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
8,313
3,178
146
I am probably going to be upgrading to a Core i7 soon, and could use advice on what to use for hard drive(s). I have considered the 1 TB WD Caviar Black, though I would think using two 750 GB RE2s would get me better performance and an extra 500 GB, but I am not sure. I have heard that for RAID it is best to get good enterprise-class drives. I could probably get the two 750s for about the same price.

Something to take into consideration is that I might want a triple-boot system: Vista Ultimate 64, Ubuntu, and maybe Windows 7. I have very little critical data, mostly games and apps, and some movies, which I would back up to other drives/media anyway. My concerns with the array are how much of a performance increase I would get, and how easy it would be to install the latest Ubuntu on it (probably x64, probably from a flash drive, and with no floppy disk used).

If the array would have a considerable problem with Linux, or if the performance increase is small, I would probably go with the WD1001FALS.

Upcoming system preliminary specs:
Gigabyte X58-UD5
i7 920
3x2 GB OCZ Gold 1600 MHz
GTX 260 C216
TRUE w/ high-power Scythe fan (OC)
LG SATA DVD burner
Probably would use my current PSU
 

DSF

Diamond Member
Oct 6, 2007
4,902
0
71
What's the system going to be used for? RAID 0 isn't going to do much for gaming.
 

BonzaiDuck

Lifer
Jun 30, 2004
16,889
2,208
126
Originally posted by: DSF
What's the system going to be used for? RAID 0 isn't going to do much for gaming.

I'm not sure just how much or to what degree I agree with this, and that doesn't mean that I don't agree. RAID 0 increases the chance that any single disk failure takes out the array as a whole, but throughput is significantly increased; as you say, though, that may not be much of an improvement for gaming. You can define the array with different block sizes depending on anticipated usage: small block sizes favor random-access program-loading performance; large block sizes favor sequential read/write speed for large files, as with video rendering.
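To make the block-size point concrete, here's a tiny sketch (not from the thread; the function name and stripe sizes are just illustrative) of how RAID 0 striping maps a byte offset to a drive:

```python
# Illustrative sketch of RAID 0 striping: each consecutive stripe unit
# (block) lands on the next drive in turn.

def drive_for_offset(offset_bytes, stripe_kb, n_drives):
    """Which drive in a RAID 0 set holds the byte at this offset."""
    block = offset_bytes // (stripe_kb * 1024)
    return block % n_drives

# With a 64 KB stripe on 2 drives, a large sequential read alternates
# between both drives (good parallelism), while a small 4 KB random
# read touches only one drive.
print(drive_for_offset(0, 64, 2))          # 0
print(drive_for_offset(64 * 1024, 64, 2))  # 1
print(drive_for_offset(4096, 64, 2))       # 0 -- small read stays on one drive
```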

Since disk storage is (relatively) low-cost, large volume "memory" with slow-speed bottlenecks due to its electromechanical nature, it makes sense to widen the bottleneck. But as we've discussed in another thread, some RAID configurations (like RAID0) increase the risk to data-reliability, increase the price-per-GB ratio, increase the investment in parts and the amount of power consumption.

With a good hardware controller you can get proportionately more sustained throughput as the size of a RAID0 increases by adding drives to it, but chance of failure also increases proportionately. So if I were to choose between a 3-drive RAID0 and a 3-drive RAID5, I'd pick the RAID5 option with slightly degraded performance so that I could increase data-reliability.
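As a rough illustration of the reliability trade-off described above (the numbers are made up, and the model assumes independent drive failures, which real-world failures often aren't):

```python
# Rough, illustrative model of the RAID 0 vs RAID 5 reliability trade-off.
# Assumes each of n drives fails independently with probability p over
# some period -- a simplification.

def raid0_failure(n, p):
    """RAID 0 loses the array if ANY of the n drives fails."""
    return 1 - (1 - p) ** n

def raid5_failure(n, p):
    """RAID 5 (n >= 3) loses data only if two or more drives fail."""
    none = (1 - p) ** n
    exactly_one = n * p * (1 - p) ** (n - 1)
    return 1 - none - exactly_one

def usable_capacity(n, size_gb, level):
    """Usable space: RAID 0 keeps all n drives; RAID 5 gives one to parity."""
    return n * size_gb if level == 0 else (n - 1) * size_gb

# Example: three 750 GB drives, 5% chance each fails in the period.
print(raid0_failure(3, 0.05))  # ~0.1426 -- worse than any single drive
print(raid5_failure(3, 0.05))  # ~0.00725 -- survives a single failure
print(usable_capacity(3, 750, 0), usable_capacity(3, 750, 5))  # 2250 1500
```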
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
8,313
3,178
146
Well, as I stated in the OP, it would be two drives, and an array failure isn't a huge problem, as I can always reinstall stuff; I have little critical data, and that is backed up elsewhere anyway. I suppose the biggest problem a failure would present isn't so much downtime or loss of data, but rather the fact that I would need to replace a drive, and that can cost money, whether I need to pay for RMA shipping or buy a replacement. On my current main system I have been running a 2x 160 GB Seagate Barracuda ES RAID 0 array, which I use for games, apps, and some storage. I run it on the NVIDIA RAID controller on my 780i, and before that on my XFX 680i LT. I have had no problems with it, and I have liked the performance.

I guess my biggest remaining question is how Ubuntu will install on it; hopefully with the same ease as Vista. Perhaps that is a question for the Linux forums?
 

darkenedsoul

Member
Oct 16, 2007
128
0
0
Last I saw, Ubuntu 8.04 did NOT support fakeraid, so I was unable to do a triple boot (XP / Vista 64 / Ubuntu installed from within Vista; because of the boot loader difference, you would need to install from Vista if it's a secondary install). The install went just fine, but rebooting into Ubuntu was a no-go. I flamed about it in the Ubuntu forum (well, not flamed, just bitched about it not having fake RAID/software RAID support in this day and age). I mean, COME ON, how long has motherboard-level RAID been around, and they have yet to incorporate it into Ubuntu (or probably other flavors)? I ended up just reinstalling VMware 5.5.3 and loading up Red Hat FC7/8/9 (I ran into issues with FC9 and need to go back to 7, as that worked just fine; 8 was a bit wonky for some reason, and the update from 8 to 9 didn't work well).

Anyways, just something to think about *if* you are going to be using a RAID configuration from the ICH* chipset on the motherboard. If not, then you should be *OK* as far as I can tell. I may even try that once I reset my other system from RAID 1 back to normal drives, once some memory arrives.

Mike
 

darkenedsoul

Member
Oct 16, 2007
128
0
0
Originally posted by: BonzaiDuck
Originally posted by: DSF
What's the system going to be used for? RAID 0 isn't going to do much for gaming.

I'm not sure just how much or to what degree I agree with this, and that doesn't mean that I don't agree. RAID 0 increases the chance that any single disk failure takes out the array as a whole, but throughput is significantly increased; as you say, though, that may not be much of an improvement for gaming. You can define the array with different block sizes depending on anticipated usage: small block sizes favor random-access program-loading performance; large block sizes favor sequential read/write speed for large files, as with video rendering.

Since disk storage is (relatively) low-cost, large volume "memory" with slow-speed bottlenecks due to its electromechanical nature, it makes sense to widen the bottleneck. But as we've discussed in another thread, some RAID configurations (like RAID0) increase the risk to data-reliability, increase the price-per-GB ratio, increase the investment in parts and the amount of power consumption.

With a good hardware controller you can get proportionately more sustained throughput as the size of a RAID0 increases by adding drives to it, but chance of failure also increases proportionately. So if I were to choose between a 3-drive RAID0 and a 3-drive RAID5, I'd pick the RAID5 option with slightly degraded performance so that I could increase data-reliability.

Adding drives to a RAID 0 configuration? I didn't think that could be done; once it's striped, that's it, in all my experience with RAID configurations. RAID 5 needs 3 drives MINIMUM but has the ability to rebuild/boot if you have to yank a drive. RAID 1 is usually 2 drives, and I've never tried doing otherwise. This is in a corporate/testing environment vs. a home situation. For me, RAID 0 = 2 (maybe more) drives striped at initial configuration, not after. Same for RAID 1, typically 2 drives for mirroring. RAID 5 = 3 drives for striping w/ parity. It's a nice feature that I was exposed to back in the early/mid-'90s at a company I worked for. I thought it was the cat's meow when I could pull a disk on a downed system, boot it up, and it would come up just fine; slap the drive back in and rebuilding would start while the system continued to function.

So, enlighten me on adding drives to a stripe set, as I have never heard of being able to do that on the fly without running into possible issues. I'd be glad to learn of this. But to me, RAID 0 = a minimum of 2 drives (or more), and that's it once it's set up. Same for RAID 1 or 5. I've never set up RAID 10 (1+0) or 50 (5+0), so I don't know offhand the configuration that would be required. I am sure I'd find this info in a mobo manual or online someplace for a better explanation.
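For what it's worth, the conventional drive minimums and usable capacities Mike describes can be sketched like this (an illustrative summary only; the names and layout here are my own, and specific controllers may differ):

```python
# Minimum drive counts and usable capacity per RAID level, matching the
# conventional definitions discussed above. Illustrative sketch only.

RAID_LEVELS = {
    # level: (min_drives, usable-drives formula given n drives)
    "0":  (2, lambda n: n),       # striping: all capacity, no redundancy
    "1":  (2, lambda n: 1),       # mirroring: one drive's worth
    "5":  (3, lambda n: n - 1),   # striping with parity: lose one drive
    "10": (4, lambda n: n // 2),  # mirrored stripes: half the drives
}

def usable_gb(level, n_drives, drive_gb):
    """Usable capacity in GB, or an error if too few drives for the level."""
    min_drives, usable_drives = RAID_LEVELS[level]
    if n_drives < min_drives:
        raise ValueError(f"RAID {level} needs at least {min_drives} drives")
    return usable_drives(n_drives) * drive_gb

print(usable_gb("0", 2, 750))  # 1500 -- the 2x 750 GB RE2 option from the OP
print(usable_gb("5", 3, 750))  # 1500 -- same usable space, one-drive redundancy
```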

No offense intended. I've just set things up that way (corporate testing) over the past 10+ years. I never ran RAID on a PC until I built the two new desktops in the past 12+ months (one RAID 0 with two 500 GB SATA-II drives, and one RAID 1 with two 640 GB SATA-II drives, which will be broken up and set back up as normal drives soon).

Mike
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
8,313
3,178
146
Originally posted by: darkenedsoul
Last I saw, Ubuntu 8.04 did NOT support fakeraid, so I was unable to do a triple boot (XP / Vista 64 / Ubuntu installed from within Vista; because of the boot loader difference, you would need to install from Vista if it's a secondary install). The install went just fine, but rebooting into Ubuntu was a no-go. I flamed about it in the Ubuntu forum (well, not flamed, just bitched about it not having fake RAID/software RAID support in this day and age). I mean, COME ON, how long has motherboard-level RAID been around, and they have yet to incorporate it into Ubuntu (or probably other flavors)? I ended up just reinstalling VMware 5.5.3 and loading up Red Hat FC7/8/9 (I ran into issues with FC9 and need to go back to 7, as that worked just fine; 8 was a bit wonky for some reason, and the update from 8 to 9 didn't work well).

Anyways, just something to think about *if* you are going to be using a RAID configuration from the ICH* chipset on the motherboard. If not, then you should be *OK* as far as I can tell. I may even try that once I reset my other system from RAID 1 back to normal drives, once some memory arrives.

Mike

Hmm, I did a little research myself, and it said that Ubuntu 8.10 makes fakeraid (or whatever the mobo RAIDs are called) "easy" to support, though the step of installing GRUB does seem to be harder.

Here is a good helper page I found: https://help.ubuntu.com/community/FakeRaidHowto
 

darkenedsoul

Member
Oct 16, 2007
128
0
0
Originally posted by: Shmee
Originally posted by: darkenedsoul
Last I saw, Ubuntu 8.04 did NOT support fakeraid, so I was unable to do a triple boot (XP / Vista 64 / Ubuntu installed from within Vista; because of the boot loader difference, you would need to install from Vista if it's a secondary install). The install went just fine, but rebooting into Ubuntu was a no-go. I flamed about it in the Ubuntu forum (well, not flamed, just bitched about it not having fake RAID/software RAID support in this day and age). I mean, COME ON, how long has motherboard-level RAID been around, and they have yet to incorporate it into Ubuntu (or probably other flavors)? I ended up just reinstalling VMware 5.5.3 and loading up Red Hat FC7/8/9 (I ran into issues with FC9 and need to go back to 7, as that worked just fine; 8 was a bit wonky for some reason, and the update from 8 to 9 didn't work well).

Anyways, just something to think about *if* you are going to be using a RAID configuration from the ICH* chipset on the motherboard. If not, then you should be *OK* as far as I can tell. I may even try that once I reset my other system from RAID 1 back to normal drives, once some memory arrives.

Mike

Hmm, I did a little research myself, and it said that Ubuntu 8.10 makes fakeraid (or whatever the mobo RAIDs are called) "easy" to support, though the step of installing GRUB does seem to be harder.

Here is a good helper page I found: https://help.ubuntu.com/community/FakeRaidHowto

I'll check the link out in a bit. I was trying to use 8.04, which at the time (last year) wasn't working with fakeraid/software RAID. Thanks for the link!

 

darkenedsoul

Member
Oct 16, 2007
128
0
0
Hmm, after a quick read, this doesn't look like it would work for how I want to install Ubuntu. I want to install it from within Windows Vista as a Windows program, so that the OS lives in a file rather than on an actual partition on the RAID set (there isn't a spare partition; I tried the Windows install). So it looks like I'll stick with VMware for the time being; I know that works fine, as I've used it off and on for some time now. Let me know if I missed that part (install as a Windows program so it can be easily uninstalled down the road, should I choose to, or to do an update)!