Originally posted by: Mellman
Hm... not a bad option, I suppose...
Of course I could just grab two more 750 GB drives and add them to the array when I'm done with the move... maybe the better option, especially since I'm almost out of room on the array anyway?
Ya.. It depends on how much room you have in your server, I suppose.
Let's see.. Say you find some used DDS4 drive for 150 dollars. Now I suppose a lot of what you have on your drives is media and such, which is all already very highly compressed. So assume that each tape is going to hold 20 gigs regardless of what you do. That's 60 tapes to do your entire backup.. which is a crapload of tapes.
So at Newegg each tape is going to cost around 7 bucks or so. Maybe you can find some place that sells them in packs for 6 bucks apiece (with shipping).
Total cost for drive and media is around 510 dollars. (ouch)
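Just to show my work, here's that arithmetic as a quick sketch (all the figures are the rough guesses from this post, not real quotes):

```python
import math

# Back-of-the-envelope cost of the DDS4 option described above.
# Every number here is a guess from the post, not a real price quote.
DRIVE_COST_USD = 150     # used DDS4 drive
TAPE_CAPACITY_GB = 20    # assume data is already compressed, so no tape compression gain
DATA_GB = 1200           # roughly the size of the array being backed up
TAPE_COST_USD = 6        # per tape, bought in packs with shipping

tapes_needed = math.ceil(DATA_GB / TAPE_CAPACITY_GB)
total_cost = DRIVE_COST_USD + tapes_needed * TAPE_COST_USD
print(tapes_needed, total_cost)  # 60 tapes, 510 dollars
```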
And that is not including the controller card. Most of these drives are SCSI, which adds cost. Some are USB 2.0, but I think those would be much harder to find used, and new ones are going to run closer to 300 dollars or more.
Now other formats may make sense, but DDS4 looks like the most inexpensive.. I don't know a whole lot about these tapes though. Somebody with experience working with them will know much more.
Now there are advantages to tapes..
You have external media. If you need more capacity you can add more tapes, so storage is very cheap. You can do things like copy files you don't use much off to tape and store them in a closet.. that way you still have access to things you may occasionally need, while freeing up a lot of drive capacity for stuff you need more immediate access to.
Multiple copies of important data is great.
If your drives fail spectacularly, or your machine catches on fire or something like that, then you have a copy somewhere else you can use. This is very nice.
But still, you're looking at 600-700 dollars here... so that isn't very cool at all.
Now hard drives are a lot cheaper... But there is a problem.
Once you get up to very high capacities, hard drives have limitations. Sure, their capacity has massively increased, but their speed hasn't kept pace.
So say you set up a RAID 5 array with a hot spare. If a drive fails, the rebuild takes much, much longer at these capacities, and it runs the remaining drives harder for longer while the hot spare is brought in as a member of the array. If by chance you have a second drive failure during that window, then your array and all the data on it is hosed. The longer it takes to get that spare up and running, the higher the likelihood of failure.
And if you look at the reasons hard drives fail or get thrown off the array.. mechanical failure, overheating, controller malfunction, etc., then two out of three of those are not unlikely to cause a second drive failure given enough time.
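To put a rough number on the rebuild-window risk: here's a toy model that treats drive failures as independent with a made-up 5% annual failure rate (real correlated failures from shared heat, power, or the same manufacturing batch make it worse, not better). The drive count and rebuild times are hypothetical:

```python
# Toy model: odds of a second drive dropping out while a RAID 5 rebuild runs.
# Assumes independent failures and a guessed annual failure rate (AFR);
# correlated failures (heat, shared PSU, same batch) would push these up.
AFR = 0.05       # assumed 5% annual failure rate per drive
SURVIVORS = 5    # hypothetical: drives that must all survive the rebuild

def p_second_failure(rebuild_hours):
    hourly_rate = AFR / (365 * 24)
    # Probability a single drive survives the whole rebuild window
    p_one_survives = (1 - hourly_rate) ** rebuild_hours
    # Probability at least one of the remaining drives fails
    return 1 - p_one_survives ** SURVIVORS

for hours in (6, 24, 72):
    print(hours, p_second_failure(hours))
```

The exact probabilities don't matter much; the point is that they scale roughly linearly with rebuild time, which is why slow rebuilds on huge arrays are the danger.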
(edit:
this guy knows a hell of a lot more than me about it and he really hates RAID 5)
So people running very massive amounts of data on a single RAID 5 array are setting themselves up for disaster.
So one of the ways of dealing with these issues is by moving to RAID 6 or RAID 10.
RAID 6 involves a lot of extra overhead. RAID 10 is very fast and has low overhead.
So ideally you'd want to create a RAID 10 array.
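The capacity side of that trade-off is easy to sketch. Assuming a hypothetical six-drive array of 750 GB disks (matching the drives mentioned above), the usable space works out like this:

```python
# Usable capacity for the RAID levels discussed above, assuming n identical
# drives of size_gb each. The six-drive, 750 GB setup below is hypothetical.
def usable_gb(level, n, size_gb):
    if level == "raid5":    # one drive's worth of parity
        return (n - 1) * size_gb
    if level == "raid6":    # two drives' worth of parity
        return (n - 2) * size_gb
    if level == "raid10":   # everything mirrored, half the raw space
        return (n // 2) * size_gb
    raise ValueError(f"unknown level: {level}")

for level in ("raid5", "raid6", "raid10"):
    print(level, usable_gb(level, 6, 750))  # raid5 3750, raid6 3000, raid10 2250
```

So RAID 10 costs you the most raw capacity, which is the price of the speed and the simpler, faster rebuilds.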
The Linux MD stuff has limited support for converting between different types of RAID arrays or expanding existing arrays. But I expect you're using hardware RAID, so that doesn't really count.
(edit: If you're curious.. the Linux MD RAID10 driver supports interesting layouts that go beyond the standard 1+0 design)
...so I think your best bet is going with a second RAID 5 array.
This is nice because it's the most economical way of doubling your capacity while retaining good performance and convenience...
However, you still have the danger of all your data disappearing from a bad power supply or something like that. So it's not ideal.
Tough stuff.