Ideally I would like to be able to "cp -a" the system partition of a laptop as a backup, but obviously this doesn't work, and using "dd" for the whole partition is wasteful if you have lots of free space. Permissions, junction points, and links get f(&*ed up, and many Linux applications just don't work on NTFS. Then there are the shadow copy inconsistencies people talk about, which sound scary; I'm not sure about the specifics.
If you need that stuff to work in Windows, you need NTFS, and in time, ReFS. In Linux, you need a *n*x FS. They are not interchangeable for anything but directories of data files w/o special permissions, and that goes both ways (or would, if Windows had good FS drivers for non-native FSes 😉). IE, use NTFS just like you would FAT32, but take advantage of it being journaled. For anything like using links, or applying permissions at any granularity lower than the whole mount point, an *n*x-native FS is going to be a must.
In my experience, ntfs-3g is CPU-limited in the worst possible way: it's single-threaded. Filesystem overhead should be minimal, not bring my Sandy Bridge to its knees because it's writing zeroes to disk.
Using a fast SSD? It's CPU-heavy, but I've never seen it use more than maybe 30% on old Core Duos and Athlon64s, copying files at 80+MB/s.
I also want an equivalent to journaling on anything I can. For dealing with low-memory embedded systems, I can understand wanting to make something different than NTFS, but a transactional-logging system with block-level copy-on-write (IE, write to a new cluster, update a copy of the metadata, only make that copy current once the write is done, and then free the old cluster; IoW, no in-place data changes) should be quite doable, in any number of ways, and offer the benefits of a journal, without a distinct journal, and without needing several MBs of RAM to manage. The only cost would be a few % of clusters kept in reserve, which would go largely unnoticed. By not overwriting an old chunk of data until a new chunk has been successfully written, most anything could be rolled back, as the old data would all still be there.
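To make the ordering concrete, here's a minimal toy sketch of that copy-on-write update in Python, assuming a made-up in-memory "volume" (the Volume/write_cow names are hypothetical, not any real filesystem API); the point is just the sequence: new data first, metadata commit second, free the old cluster last.

```python
# Toy model of the copy-on-write update described above. Nothing here is a real
# FS structure; it only shows the ordering that makes rollback essentially free.
from dataclasses import dataclass, field

@dataclass
class Volume:
    clusters: dict = field(default_factory=dict)  # cluster number -> data bytes
    free: set = field(default_factory=set)        # clusters available for new writes
    metadata: dict = field(default_factory=dict)  # file name -> cluster number ("current" metadata)

def write_cow(vol: Volume, name: str, data: bytes) -> None:
    """Update a file without ever modifying its old cluster in place."""
    new_cluster = vol.free.pop()            # 1. claim a reserved free cluster
    vol.clusters[new_cluster] = data        # 2. write the new data there
    old_cluster = vol.metadata.get(name)    # 3. remember where the old data lives

    vol.metadata[name] = new_cluster        # 4. commit: the metadata copy becomes current
                                            #    (a crash before this point leaves the old
                                            #    data and old metadata untouched)

    if old_cluster is not None:             # 5. only now is the old cluster released
        vol.free.add(old_cluster)

vol = Volume(free={10, 11, 12})
write_cow(vol, "a.txt", b"version 1")
write_cow(vol, "a.txt", b"version 2")  # the "version 1" cluster is freed only after the commit
```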
A journaled FS in general is really not ideal for internal HDDs, either, but back in the 80s and 90s, software techniques to do otherwise were still wild crazy hippie ideas, like JIT 🙂.
exFAT doesn't have journaling, file compression, or logging. For a fixed disk (especially a system disk) these are incredibly useful features. But for a removable disk you actually don't want these things, which is why we have exFAT.
I don't care about compression or audit-quality logging, but why wouldn't you want a nominally transaction-safe filesystem? At worst, it should be able to know what failed (not merely that something might have), and at best roll it back. Being removable makes me want that more, not less, and it's the 2nd main reason I use NTFS now (the 1st being cross-platform compatibility, but FAT32 has that, too).
It doesn't need to actually be a journal. A journal causes more writes than may be necessary (it's not the MBs, it's the separate addressing, some of which may need to be synchronized, which does slow things down). What's really needed is to be reasonably sure that old data will stick around until after new data has finished writing, however that can be done. exFAT kinda sorta might be able to do that, sometimes, with small volumes and files, due to the legacy FAT table, but that apparently breaks with big files, big cluster sizes, etc. (and the whole point is to be able to use it on media that's hundreds of GBs in size).
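The same rule, "keep the old data around until the new data is safely down," is what applications already fake at the file level with the write-a-temp-file-then-swap pattern. A minimal sketch, standard library only, nothing exFAT- or NTFS-specific; the atomicity of the final swap is only as good as the underlying FS makes it:

```python
# File-level version of "never touch old data until the new data is safely
# written": write a sibling temp file, flush it, then swap it over the original.
# A crash at any point leaves either the complete old file or the complete new one.
import os
import tempfile

def replace_file_safely(path: str, data: bytes) -> None:
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)   # temp file on the same volume
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())                    # push the new data to disk first
        os.replace(tmp_path, path)                  # then swap it into place
    except BaseException:
        os.unlink(tmp_path)                         # on failure, drop the temp copy; old file is untouched
        raise
```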