
Linux gains lossless filesystem

Well, it doesn't look like it would be useful for a normal Linux desktop or server filesystem.

Maybe it would be useful as a separate filesystem for keeping track of system logs, though.

It looks like it's designed specifically for 'carrier-grade' systems. These are generally very specialized computing environments where the requirements are much different from normal server or desktop systems, and where high availability and fault tolerance are at a premium above all else.

With this filesystem you can get as close to a 0% chance of data loss as possible.

Say you're recording a wav file from a conversation... and then suddenly the computer explodes in a ball of flaming bits... but miraculously most of the drive remains intact.

You could take that drive, plug it into a computer, then pull that entire conversation off of the drive all the way up to the point of the explosion.

In a normal file system most of that would either be missing or not obtainable by normal means.

If you accidentally delete a file, you can almost always recover it, for instance. That information will never be overwritten unless it absolutely has to be.

On ext3, for instance, security is a higher priority, so when you delete a file it's done in a way that automatically makes it as difficult as possible to recover even a portion of that file, and that space will end up being reused as quickly as possible. That's why you'll generally never see an undelete tool on Linux.

So that's one trade-off: security is lower, but data recovery is much easier.

Another trade-off is that a high degree of fragmentation is possible, especially when you start to run out of space. Normal Linux filesystems don't suffer from fragmentation ailments; they're advanced enough to avoid it.

Another trade-off is a very noticeable slowdown in performance once you start to run out of space. Normally this isn't very noticeable with Linux filesystems.

So it seems like a very useful filesystem, but only for specific purposes. It's not going to be for general use.

But it could be useful for things like keeping track of system logs. Or maybe as scratch space for video editing... if the program crashes as it's writing out the multi-gig media file, you can save as much work as possible... You'd just reformat that partition between uses (it will also have very fast write performance until you start to run out of space). It may be useful for storing security camera video (so if a burglar found the computer and smashed it, you'd still likely be able to get the video recording of him doing it). Or for recording information under rough conditions where the computer can be destroyed or damaged at any point, like in a military or scientific exploration robot, or maybe a black box recorder for an airplane. Stuff like that.

At least that's my impression. I've never read much about logging-style filesystems.

edit:

PS. There is a very high likelihood that most of what I said was wrong.
 
Here is a wikipedia entry for log-structured file systems...
http://en.wikipedia.org/wiki/Log-structured_file_system


They basically treat the entire disk drive in a sequential fashion... like a tape drive. They start writing at the beginning of the drive and just continue writing more and more data, bit after bit after bit. They never go back over old data, and they never seek around the drive for new blocks large enough to fit a file or anything like that. They just go from front to back, beginning to end.
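That append-only behavior can be sketched in a few lines. This is my own toy illustration (not actual NILFS code): every write lands at the current head of the "disk", and bytes written earlier are never touched, which is exactly why everything up to the moment of a crash is still recoverable.

```python
# Toy append-only log: the whole device is used like a tape.
class Log:
    def __init__(self, size):
        self.disk = bytearray(size)  # the raw "device"
        self.head = 0                # next free byte; only ever moves forward

    def append(self, data: bytes) -> int:
        """Write a record at the head; return its offset, or -1 if full."""
        if self.head + len(data) > len(self.disk):
            return -1                # out of space: time for the cleaner
        off = self.head
        self.disk[off:off + len(data)] = data
        self.head += len(data)       # never seek backwards, never overwrite
        return off

log = Log(4096)
first = log.append(b"version1")      # original file contents
second = log.append(b"version2")     # an "update" is just a new copy
assert (first, second) == (0, 8)
assert bytes(log.disk[first:first + 8]) == b"version1"  # old copy intact
```

Note that "updating" a file never destroys the previous version; the old bytes sit untouched earlier in the log, which is what makes undelete and crash recovery nearly free.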

Once they reach the end, mature log-structured filesystems have a garbage collector that goes through and starts flagging the oldest unused parts of the recorded data as 'dirty', allowing them to be overwritten.

NILFS supports unique features such as:
no data loss.
instantaneous snapshots of the filesystem.
You can do snazzy stuff like mount the filesystem read-only on a different directory... and it can be from any point in the filesystem's lifespan. So you could go back to yesterday, before you accidentally deleted all your photos or important emails in your home directory. You can also back up snapshots of the filesystem while it's in use, without disruption to anybody using the system, even if they're running something like a database and doing multiple writes to files at the same time.

Other systems can do snapshots, like Solaris's filesystem... but for that to work you have to take the filesystem offline for a bit. With this system you should technically be able to do snapshots and time-traveling while the system is in use.
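Here's a toy sketch (my illustration, not real filesystem code) of why snapshots are essentially free on a log-structured layout: a snapshot is just a remembered log offset (a checkpoint). "Mounting yesterday read-only" means reading only the bytes written before that offset; nothing is copied and writers are never interrupted.

```python
# Snapshots as checkpoints into an append-only log.
class SnapshotLog:
    def __init__(self):
        self.head = 0                # current end of the log
        self.checkpoints = []        # saved head positions = snapshots

    def write(self, nbytes):
        self.head += nbytes          # appends only, as before

    def snapshot(self) -> int:
        """O(1) and needs no downtime: just remember where the log ends."""
        self.checkpoints.append(self.head)
        return len(self.checkpoints) - 1   # snapshot id

    def visible_in(self, snap_id, offset) -> bool:
        """A read-only mount of a snapshot sees only earlier writes."""
        return offset < self.checkpoints[snap_id]

fs = SnapshotLog()
fs.write(100)                        # yesterday's photos
yesterday = fs.snapshot()
fs.write(50)                         # today's accidental deletion records
assert fs.visible_in(yesterday, 99)       # old data: still there
assert not fs.visible_in(yesterday, 120)  # later writes: invisible
```

Copy-on-write filesystems need extra machinery to get this; on a pure log it falls out of the design, because old data was never going to be overwritten anyway.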

Bizarre stuff.
 
Interesting. I had a very similar idea many years ago. Everyone told me that I was crazy. Well, when you consider projects like MS Research's "digital life" or "my life" or whatever it was called, recording everything that happened in someone's life - considering how cheap and abundant digital storage is these days, and getting cheaper - it doesn't seem as crazy anymore.

Plus, the more important benefit is the ability to roll back and remove unwanted system-state modifications. Such a thing could essentially make modern OSes impervious to malware. Once any malware was identified, you'd roll the system back to the point just before, and then roll forward, minus the malware-initiated transactions.
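The roll-back/roll-forward idea can be sketched as a log replay. This is a hypothetical illustration of the concept (no shipping OS works this way): rebuild the state by replaying the transaction log from the beginning while skipping entries that were later flagged as malware-initiated.

```python
# Selective replay of a transaction log: roll forward, minus the bad ones.
def replay(txn_log):
    """Each transaction is (delta, malicious); delta stands in for an
    arbitrary state modification. Return the rebuilt state."""
    state = 0
    for delta, malicious in txn_log:
        if not malicious:            # skip malware-initiated transactions
            state += delta
    return state

history = [(+10, False), (+5, True), (+3, False)]  # middle entry flagged
assert replay(history) == 13         # the malicious +5 never happened
```

The catch, of course, is that legitimate later transactions may have depended on the skipped ones; a real system would need conflict detection on replay, which this sketch ignores.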

(In fact, I developed several FS designs: one being the archival "save everything" log-like FS, one being an object-caching FS, and then a third, essentially the VM pagefile, for things that weren't fixed in size yet. It's interesting what optimizations you can make to FSes when you're able to segregate out the different sorts of data-manipulation semantics and create an optimized FS for each, instead of a single general-purpose FS that is used for all the different semantics, none of them particularly well, as we have today. The biggest problem is that the ability to delete files, and thus create "holes", is what leads to fragmentation. By factoring out the access semantics and specializing for them, you can largely avoid physical-layer fragmentation of data. That was my primary design goal at the time. The pagefile FS was allowed to be fragmented, because the app's view in RAM is remapped such that it isn't fragmented, so that isn't quite as critical.)
 
Originally posted by: MrChad
Originally posted by: VirtualLarry
...

:Q I haven't seen you around here in ages. How's it going?

I'm not sure how I would even begin explaining, so I'm not, but I guess the answer to your question is, "not so well". But I'm hoping things will get better soon. Both myself, and the state-of-the-art of OSes. I'm getting to the point that I can barely stand computers these days. I'm either a dreamer, or a madman, or both. Software technology needs to make a leap soon, otherwise we will all be mired in a world filled with crapware (malware, bloated useless apps, etc.). My dream is to "clean things up", so to speak, in regards to the software world. Everything is designed so utterly, horribly, backwards. :|

Or to put it another way - I was on the verge of one of the biggest breakthroughs of my existence, or so I thought, when I had a bit of a breakdown myself. So I'm in the process of recuperating, and trying to get back up to speed.

Software, and OSes in particular, need to be organic, self-similar, robust in the presence of failure, and still be easy to manipulate by end-users. No small task. Interestingly, I discovered a paper discussing the origins of virtual memory and the "working set", back in the 60s, and discovered that several techniques that I had envisioned for my OS design, were already invented, way back then, and then fell into disuse.

The idea was that every subroutine has its own protected memory space, sort of an O-O virtual-memory design, which also effectively used segmented pointers, a sort of global:local hybrid. Really, from a CS POV it all makes perfect sense, but it boggles the mind that current systems aren't even designed with these sorts of principles in mind, apparently. Here's the link: Origin of Virtual Memory and Working Set (PDF)

My idea, building on top of that, was that the OS could in fact undo and re-route subroutine calls to an alternate implementation with an identical interface contract, should one of the implementations encounter an exception. If done properly, it would also make it trivial to hibernate/persist individual processes, and then migrate those processes to other systems. Just like relocatable self-relative object/machine code can be moved to different memory addresses and still function, so too can self-relative processes and their resources be moved to different systems. To say nothing of the utility of a save/load-state feature for individual apps. (Much like game emulators let you save the state of the system.) That would also likely serve as the basis for a periodic point-in-time snapshot feature, in case there was any sort of exception or incident. Likewise, global system state changes should be fully transactional; the global system state should never be allowed to be in an indeterminate/intermediate state.

But the software guys seemingly NEVER LEARN from the hardware guys, who learn from the real (theoretical) CS guys. Sure, software systems are complex and getting more so, but that doesn't change the fundamental underpinnings of the system architecture and design. I mean, who just starts building houses without understanding gravity and the other forces of physics, and how the load-bearing elements of the structure have to be placed in order for the building not to collapse? But yet, our computer systems still sometimes collapse and fold up like a house built from a deck of cards. (That was more true in the days of Win9x.)

I'm just rambling again, but there are all these beautiful ideas, most of them heavily/fully-researched in the 60s and 70s, and then the PC revolution happened, and it seems like most "cowboy" programmers decided to forget all about real CS, and just go their own way, and that's how we ended up with the disorganized kitchen-sink OS we call "Windows", where every programmer seemingly created their own new API call to suit their need.

HRESULT STDCALL _WashDishes(void * kitchenSink, void ** pUnknown);

I wouldn't be surprised to find out something like that actually existed, somewhere deep in the bowels of NT...
 