
Convert from NTFS/FAT32 to a Linux FS

Mellman

Diamond Member
I highly doubt this is possible... but I was curious if anyone here has done it, or has suggestions on how to do it.

Problem: I have 3x750GB drives in RAID 5, with about 300GB free. I am pondering converting my server from Win2003 to Linux.

Why? No real reason other than I've become much more comfortable in Linux, and I can run a VM for the Windows stuff I want to do.

Ideas?

I do NOT have a spare 1.5TB lying around to move the data onto. I feel that with some tricky partition resizing I could achieve what I want, but thought I could hope for better suggestions on how to do this.

-Matt
 
No way to convert it that I am aware of.

I hate mucking around with partitions.


But with 1TB I don't know what to do about that. Normally the best way is to copy everything to DVD and then restore it after you install Linux. This is nice since you end up with backups when you're finished, but right now you're looking at over 200 DVDs, so that is insane, and the probability of at least one bad burn is very high.
 
Originally posted by: drag
No way to convert it that I am aware of.

I hate mucking around with partitions.


But with 1TB I don't know what to do about that. Normally the best way is to copy everything to DVD and then restore it after you install Linux. This is nice since you end up with backups when you're finished, but right now you're looking at over 200 DVDs, so that is insane, and the probability of at least one bad burn is very high.

Yeah, I thought about that, then laughed. Not to mention the time it would take to burn 200 DVDs, and trying to efficiently organize files onto them.

I'm familiar with Windows partition tools; what does Linux give you to resize partitions while maintaining data integrity?

Of course, I'd have all my 'critical' data backed up beforehand to other drives and DVD (I do that anyway, even with my RAID 5 array).
 
Well, if the filesystem is ext3 you can shrink it no problem. You can do it manually with resize2fs and then change the partition table to match, or use parted/gparted.
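For reference, the manual resize2fs-then-repartition workflow mentioned above looks roughly like this. This is a sketch, not a recipe: the device name and sizes are placeholders, and you'd want a backup before touching the partition table.

```shell
# Sketch of shrinking an ext3 filesystem, then its partition, to match.
# /dev/sdb1 and the sizes are placeholders -- substitute your own.
umount /dev/sdb1           # the filesystem must be offline to shrink
e2fsck -f /dev/sdb1        # resize2fs requires a clean filesystem
resize2fs /dev/sdb1 400G   # shrink the filesystem FIRST...
# ...then shrink the partition, leaving it a little larger than the
# filesystem as a safety margin, e.g. with parted:
#   parted /dev/sdb resizepart 1 410GB
```

gparted does both steps for you in the right order, which is why it's usually the safer choice.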
 
I was thinking about it a bit...

Maybe your best bet is to go check out eBay for some tape drives.

I was looking and found a DDS4 drive for 120 bucks, which does 40GB backup tapes, and the tape media is about 6 bucks apiece... but I don't know if that is truly 40 gigs or if that is the compressed figure.

There are other types with higher capacity, of course, and not that expensive. Probably worth it if you have that much data you want to save.
 
Hm... not a bad option, I suppose...

Of course, I could just grab two more 750GBs and add them to the array when I'm done with the move... maybe the better option, especially since I'm almost out of room on the array anyway?
 
drag, would you happen to have a 'lineup' of the different filesystems?

I've mostly been using reiserfs, but want to compare and contrast the benefits of the different ones.
 
I've mostly been using reiserfs, but want to compare and contrast the benefits of the different ones.

I would really, really recommend against reiserfs. My personal past experiences aside, it's got some pretty huge design issues. The worst is that if you have a reiserfs filesystem image, say a VM disk with a reiserfs filesystem on it, sitting on a reiserfs filesystem and you need to run reiserfsck, it won't be able to tell that the image isn't supposed to be part of the parent filesystem, and if you use the --rebuild-tree option it'll try to knit the two together and make things a whole lot worse. And there are some issues with the name hashing that can cause you to lose files, and I believe the way they misuse the hashing makes the filesystems non-portable between some architectures. Obviously the last two aren't easy to run into, but if you happen to be one of the unlucky people that does, you'll wish you hadn't taken the chance.
 
Originally posted by: Mellman
Hm... not a bad option, I suppose...

Of course, I could just grab two more 750GBs and add them to the array when I'm done with the move... maybe the better option, especially since I'm almost out of room on the array anyway?

Ya.. It depends on how much room you have in your server, I suppose.

Let's see... say you find a used DDS4 drive for 150 dollars. Now I suppose a lot of what you have on your drives is media and such, which is already very highly compressed. So assume each tape is going to hold 20 gigs regardless of what you do. That's 60 tapes to do your entire backup... which is a crapload of tapes.

So at Newegg each tape is going to cost around 7 bucks or so. Maybe you can find some place that sells packs for 6 bucks apiece (with shipping).

Total cost for drive and media is 510 dollars. (Ouch.)
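The arithmetic above checks out in a couple of lines of shell (the 1.2TB figure is my assumption for the amount of data to back up, i.e. the 1.5TB array minus the ~300GB free):

```shell
# Back-of-the-envelope check of the tape figures in this post.
DRIVE=150                    # used DDS4 drive, dollars
TAPES=$((1200 / 20))         # ~1.2TB of data at ~20GB usable per tape
COST=$((DRIVE + TAPES * 6))  # tapes at roughly 6 dollars apiece
echo "$TAPES tapes, $COST dollars total"
```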

And that is not including the controller card. Most of them are SCSI, which adds additional cost. Some are USB 2.0, but I think those would be much harder to find used, and new ones are going to run closer to 300 dollars or more.

Now other formats may make sense, but DDS4 looks like the least expensive... I don't know a whole lot about these tapes, though. Somebody with experience working with them will know much more.

Now there are advantages to tapes..
You have external media. If you need more capacity you can add more tapes, so storage is very cheap. You can take files you don't use much, copy them off to tape, and store them in a closet; that way you still have access to things you may need, while freeing up a lot of drive capacity for the stuff you need more immediate access to.

Multiple copies of important data is great.

If your drives fail spectacularly, or your machine catches on fire or something like that, then you have a copy somewhere else you can use. This is very nice.

But still, you're looking at 600-700 dollars here... so that isn't very cool at all.



Now harddrives are a lot cheaper... But there is a problem.

Once you get up to very high capacities, hard drives have limitations. Sure, their capacity has massively increased, but their speed hasn't.

So say you set up a RAID 5 array with a hot spare: if a drive fails, the rebuild takes much, much longer, and it runs the drives at a higher load for longer in order to get that hot spare working as a member of the array. If by chance you have a second drive failure during that time, then your array and all the data on it is hosed. The longer it takes to get that spare up and running, the higher the likelihood of failure.

And if you look at the reasons hard drives fail or get thrown off the array (mechanical failure, overheating, controller malfunction, etc.), two out of three of those are not unlikely to cause a second drive failure given enough time.

(edit: this guy knows a hell of a lot more than me about it, and he really hates RAID 5)

So people running very massive amounts of data on a single RAID 5 array are setting themselves up for disaster.


So one of the ways of dealing with these issues is by moving to RAID 6 or RAID 10.

RAID 6 involves a lot of extra overhead and such. RAID 10 is very fast and has low overhead.

So ideally you'd want to create a RAID 10 array.

The Linux md stuff has limited support for converting between different types of RAID arrays or expanding existing arrays. But I expect you're using hardware RAID, so that doesn't really apply.

(edit: If you're curious, the Linux md RAID 10 driver supports interesting layouts that go beyond the standard 1+0 design)
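For software RAID, the "add two more drives" approach Mellman mentioned would look something like the sketch below. The device names are placeholders, this only applies to Linux md arrays (not hardware RAID), and a reshape on terabytes of data takes many hours.

```shell
# Sketch of growing a Linux md RAID 5 array by one disk.
# /dev/md0 and /dev/sdd are placeholders for your array and new drive.
mdadm --add /dev/md0 /dev/sdd            # add the new disk as a spare
mdadm --grow /dev/md0 --raid-devices=4   # reshape the 3-disk array to 4
cat /proc/mdstat                         # watch the reshape progress
# Once the reshape finishes, grow the filesystem into the new space,
# e.g. resize2fs /dev/md0 for ext3, or xfs_growfs for XFS.
```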

...so I think your best bet is going with a second RAID 5 array.

This is nice because it's the most economical way of doubling your capacity while retaining good performance and convenience...

However, you still have the danger of all your data disappearing with a bad power supply or something like that. So it's not ideal.

Tough stuff.
 
Originally posted by: Nothinman
I've mostly been using reiserfs, but want to compare and contrast the benefits of the different ones.

I would really, really recommend against reiserfs. My personal past experiences aside, it's got some pretty huge design issues. The worst is that if you have a reiserfs filesystem image, say a VM disk with a reiserfs filesystem on it, sitting on a reiserfs filesystem and you need to run reiserfsck, it won't be able to tell that the image isn't supposed to be part of the parent filesystem, and if you use the --rebuild-tree option it'll try to knit the two together and make things a whole lot worse. And there are some issues with the name hashing that can cause you to lose files, and I believe the way they misuse the hashing makes the filesystems non-portable between some architectures. Obviously the last two aren't easy to run into, but if you happen to be one of the unlucky people that does, you'll wish you hadn't taken the chance.

Reiserfs has a bad reputation. Not without reason.

For normal servers and desktops and such, I'd normally say use ext3, which is probably the best general-purpose FS for Linux for a variety of reasons.

However, for big things XFS has a very good reputation. Good performance and easy to deal with. Just make sure to have a UPS for your machine when using it. Ext3 tends to be a bit more resilient to corruption caused by power loss or crashes, I believe.
 
XFS has had its share of problems too; I know a lot of people in the Ars Linux forums won't touch it anymore either. But most of those problems have been odd cases AFAIK, and I've been using it for years without any problems, so I'm more apt to trust it. In fact I've got ~1TB across 4 XFS filesystems on dm-crypt on LVM devices; they've only been running for a few months, but I have yet to see a problem.
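For anyone curious what that XFS-on-dm-crypt-on-LVM stack looks like, a rough sketch is below. The volume group, volume name, and mountpoint are all placeholders, and luksFormat destroys whatever is on the volume, so this is for a fresh device only.

```shell
# Sketch: stacking XFS on dm-crypt (LUKS) on an LVM logical volume.
# vg0, "media", and /srv/media are placeholder names.
lvcreate -L 250G -n media vg0                 # carve out an LVM volume
cryptsetup luksFormat /dev/vg0/media          # create the LUKS layer
cryptsetup luksOpen /dev/vg0/media media_crypt  # map the decrypted device
mkfs.xfs /dev/mapper/media_crypt              # XFS on top of dm-crypt
mount /dev/mapper/media_crypt /srv/media
```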
 
That's about why I prefer Ext3.

Ext3 isn't fancy or the fastest, but it seems to be the most robust FS Linux has. I wouldn't use anything else on my laptop, for instance.

But I think XFS has advantages when you move into very large setups. Probably once Ext4 stabilises there will be little reason to care about XFS anymore...
 
I've been using XFS on my notebook since I got it and I use suspend2 to hibernate/resume regularly. I honestly don't know what it takes to hit the few XFS bugs there have been in the past.

ext3's no panacea either; if you look on lkml recently you'll see that it has some major performance problems with reading directories, and I believe it was the only filesystem where people noticed corruption due to the mmap() bug found in 2.6.19. But the latter could have just been luck, since the bug was hidden for a very long time.
 
I wasn't thinking of bugs so much, just how well the software reacts when very bad things start happening. Although XFS is better now than it was at one point in its Linux history. Ext3 has some extra features that are designed to protect data a bit more than other FSs.
 
http://linuxmafia.com/faq/Filesystems/reiserfs.html

If you lose power at a bad time with XFS, you can get corrupt files, empty files, and all sorts of fun things.

Why doesn't ext3 get hit by this? Well, because ext3 does physical-block journaling. This means that we write the entire physical block to the journal, and only when the updates to the journal are committed do we write the data to the final location on disk. So, if you yank out the power cord, and inode tables get trashed, they will get restored when the journal gets replayed.
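As a footnote to the quote above: the quote describes ext3's default metadata journaling (data=ordered). The journaling behavior is a mount option, so you can trade speed for even more data protection. The device and mountpoint below are placeholders.

```shell
# ext3's journaling mode is chosen at mount time; data=ordered is the
# default. data=journal also pushes file DATA through the journal
# (safest against power loss, but slowest).
mount -o data=journal /dev/sdb1 /mnt   # placeholder device/mountpoint

# Or persistently, via an /etc/fstab entry:
#   /dev/sdb1  /mnt  ext3  data=journal  0 2
```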



edit:

Here is another guy talking about the differences between ext3 and other journalling FSs.
http://www.gentoo.org/doc/en/articles/afig-ct-ext3-intro.xml

I know it's from Gentoo, but it makes sense to me.
 
Yeah, it's possible, but it's not as big of a deal as Ted makes it out to be; IME it's a pretty rare occurrence. And it's funny that you specifically mentioned a notebook, since that's one of the few types of devices with a battery backup built in. =)

I know it's from Gentoo, but it makes sense to me.

Actually it's from IBM, the very top of the page says: Disclaimer : The original version of this article was first published on IBM developerWorks, and is property of Westtech Information Services. This document is an updated version of the original article, and contains various improvements made by the Gentoo Linux Documentation team.
This document is not actively maintained.


And the article is also very old; it even lists data=ordered as "a new journaling mode" for ext3, and ext3 has defaulted to ordered mode for as long as I can remember.

Sure, if you have flaky power or a machine that crashes all the time, ext3 might save you a little bit of work, but you should probably fix the real problems too, because eventually even ext3 won't be enough.
 
Well yeah, but I still have issues with running out of battery juice unexpectedly and such.

My battery is borked such that it reports the wrong voltage to the OS, so Linux doesn't know when it's about to cut out. (I need a new battery.) Silly Apple PMU stuff.
 
You guys forgot to tell the OP the best part about Linux RAID: namely, you can move your drives between controllers/computers as much as you want :0) I've had to move my array 4 times in the last month (btw, anyone want a faulty mobo or two?? they still mostly work...)
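This works because md writes its metadata onto the member disks themselves, so the kernel can reassemble the array no matter what device names the disks get on the new machine. A sketch of what that looks like after moving the drives (the explicit device names are placeholders):

```shell
# Re-assembling a Linux md array after moving the disks to a new
# controller or machine. md reads its superblocks off the disks, so
# cabling order and device names don't matter.
mdadm --assemble --scan      # auto-detect and assemble known arrays

# Or explicitly, with whatever names the disks got this time:
#   mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1

cat /proc/mdstat             # verify the array came up clean
```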
 