Cerb
Elite Member
- Aug 26, 2000
- 17,484
- 33
- 86
*sigh* OK, let's see if I can break this down. So chkdsk/scandisk can alter the video, but only if you set it to repair? And by "altering" I mean totally corrupting, not changing the quality or adding slight glitches to the film.
Someone please reply.
Your file needs to appear to you, and any programs, as a specific ordered string of bytes, with a specific name, and other properties.
So, you might have MyRec.ts, a 500MB OTA capture, stored as C:\Users\Me\My Videos\MyRec.ts.
How does it know there even is a folder called Users, or Users\Me, much less Users\Me\My Videos? How does it know there's a file there? How does it know what it's called? How does it know when it was made? How does it know when it was last written to? How does it know who owns it? And so on...
The file system handles that. The file system, however, is little more than ordered strings of bytes, too, just like files are. To the hard drive itself, there's no difference.
Now, on top of that, your file may be made up of many parts, scattered about the system (the more free space you have, the less likely this is). When reading the file, the file system driver finds a list of those parts (fragments), and stitches them together as needed.
While it's recording, it is, every so many seconds, writing out a new part of the file's data, and then also matching that by writing out an updated list of file parts (if there's plenty of space next to where it had been writing, it will update it as increasing the last fragment's size, instead). So when it records another 30s of video, it has to update not only the raw data, but also pointers to that data. Both sets of information, the file data itself, and the file system's metadata, need to be in sync. They need to match each other.
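To make that concrete, here is a minimal sketch (in Python, with made-up names like `FragmentList` and `append_chunk` -- no real file system driver looks like this) of the write loop described above: every chunk of raw data written is immediately matched by a metadata update, and a write that lands right after the previous fragment just grows that fragment instead of adding a new one.

```python
# Hypothetical sketch: keeping file data and file-system metadata in sync.
# Every chunk write is followed by a fragment-list update.

SECTOR = 512  # bytes per sector (a common, but assumed, value)

class FragmentList:
    """The file system's list of (start_sector, sector_count) extents."""
    def __init__(self):
        self.fragments = []  # e.g. [(6500, 800), (8999, 12000)]

    def extend_or_append(self, start, count):
        # If the new data lands right after the last fragment, grow it;
        # otherwise, record a brand-new fragment.
        if self.fragments:
            last_start, last_count = self.fragments[-1]
            if last_start + last_count == start:
                self.fragments[-1] = (last_start, last_count + count)
                return
        self.fragments.append((start, count))

def append_chunk(frags, next_free_sector, chunk_bytes):
    """Write a chunk of recording, then immediately update the metadata."""
    count = (len(chunk_bytes) + SECTOR - 1) // SECTOR
    # 1) write the raw data at next_free_sector (simulated away here)...
    # 2) ...then update the fragment list so the metadata matches the data.
    frags.extend_or_append(next_free_sector, count)
    return next_free_sector + count

frags = FragmentList()
free = 6500
free = append_chunk(frags, free, b"x" * (800 * SECTOR))   # contiguous write
free = 8999                                               # pretend something else used the gap
free = append_chunk(frags, free, b"x" * (12000 * SECTOR))
print(frags.fragments)  # [(6500, 800), (8999, 12000)]
```

If the machine dies between step 1 and step 2, the data and the metadata disagree -- which is exactly the inconsistency chkdsk exists to find.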
Those lists of pointers, with the file's name, logical location (i.e., C:\Users\Me\etc.), physical location(s) (800 sectors starting from sector 6500; 12,000 sectors starting from sector 8999; etc.), and other info, are what make that raw data usable. Otherwise, it's just random data on the disk. Now, in normal operation, the file system does a fair bit of work to make sure they don't become incorrect in the first place (NTFS more than FAT, ReFS more than NTFS, in Windows), but dirty unmounts (use the Safely Remove Hardware icon!), dirty shutdowns (normally unpredictable, but you can use a UPS), or failing hardware (not always the HDD!) can throw a wrench in the works. Journaling file systems (such as NTFS or HFS+) are pretty good about recovering from such events, but nothing's perfect.
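"Stitching the fragments together" amounts to walking that extent list in order. A toy sketch (illustrative only -- real drivers read the extents out of on-disk structures, not a Python list):

```python
# Reassemble a file from a raw "disk" given a list of
# (start_sector, sector_count) extents, like the ones described above.
import io

SECTOR = 512  # a common sector size; modern drives may use 4096

def read_file(disk, extents):
    """Concatenate each extent's sectors, in order, into one byte string."""
    data = bytearray()
    for start, count in extents:
        disk.seek(start * SECTOR)
        data += disk.read(count * SECTOR)
    return bytes(data)

# A toy disk image where "HELLO" is scattered across two fragments.
disk = io.BytesIO(bytes(20 * SECTOR))
disk.seek(3 * SECTOR); disk.write(b"HEL")
disk.seek(9 * SECTOR); disk.write(b"LO")
payload = read_file(disk, [(3, 1), (9, 1)])
print(payload[:3] + payload[SECTOR:SECTOR + 2])  # b'HELLO'
```

Lose the extent list, and those same sectors are still on the platter -- but nothing knows they belong together, in that order, under that name.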
Chkdsk's concern is solely that those lists of information are in a proper state. If some is missing, or duplicated but different (without a way to figure out what's newer), or has impossible values (like multiple files or fragments overlapping others, or values past the end of the drive), it needs to fix it, in whatever way it can. If everything was properly written, and can be properly read, then chkdsk does nothing but set a value stating that the file system is actually OK, after reading however much it or you deemed necessary.
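The kinds of "impossible values" described above are easy to test for mechanically. A simplified sketch of that idea (this is not chkdsk's actual algorithm, and the drive size and layout here are invented): every extent must lie within the drive, and no two extents may claim the same sectors.

```python
# Illustrative consistency checks of the sort a chkdsk-like tool performs:
# extents within bounds, and no two extents overlapping.

DRIVE_SECTORS = 30000  # hypothetical drive size

def check_extents(files):
    """files: {name: [(start_sector, sector_count), ...]} -> list of problems."""
    problems = []
    claimed = []  # (start, end, owner) for every extent seen so far
    for name, extents in files.items():
        for start, count in extents:
            end = start + count
            if start < 0 or end > DRIVE_SECTORS:
                problems.append(f"{name}: extent past end of drive")
            for s, e, owner in claimed:
                if start < e and s < end:  # half-open intervals overlap
                    problems.append(f"{name}: overlaps {owner}")
            claimed.append((start, end, name))
    return problems

files = {
    "MyRec.ts":  [(6500, 800), (8999, 12000)],
    "Other.dat": [(7000, 100)],    # overlaps MyRec.ts's first extent
    "Huge.bin":  [(29500, 1000)],  # runs past sector 30000
}
print(check_extents(files))
# ['Other.dat: overlaps MyRec.ts', 'Huge.bin: extent past end of drive']
```

Detecting the problem is the easy half; as the next paragraph explains, deciding *which* of two conflicting claims to keep is where repairs become unpredictable.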
If chkdsk comes across errors for which there are no obvious correct values, the results are effectively unknown. Given one particular set of errors, you should always get the same repair results, but you can't know in advance what those will be. So, if something isn't right, you can't necessarily predict exactly what it will do: it might screw up some files, it might not. That's an unknown, if it encounters errors and goes about fixing them. On the other hand, if it finds such errors, you also have no guarantee that you'd be able to correctly read the file without repairing it via chkdsk. So if something is amiss with the file system, you already want to have made backups--an ounce of prevention, and all that.
If chkdsk is run without repair options, on a healthy drive, it will do nothing but spend time reading. It will then report errors it found, if any, but not fix them. Now, with a failing drive, chkdsk (or scandisk, or any fsck), being seek-heavy, will generally cause more strain on the drive, risking more data loss than trying to perform selective file recovery, or imaging, will. So, it's not a good idea to use it in that scenario, on the drive that may be failing.
The good news: drives are cheap, so back up important data. The benefit of all that complexity is that the same file, copied from a Windows NTFS partition to an Apple HFS+ partition to a FreeBSD ZFS partition to an Android ext4 partition, will, barring faulty hardware along the way, be the same file you started with. All you'll lose along the way is Windows-specific security settings. A simple spare external drive, simple NAS, fancy NAS, custom NAS, DVD, tape, etc., all result in basically the same thing.
As long as it can have power removed from it when not in use, and you have a way to check the files themselves (look up SFV tools, and QuickPar, for practical-yet-inelegant ways to handle it without commercial software--not that commercial software is bad or anything, but having a process you can make a habit of matters more than the specific tool), you can keep them as safe as you are willing to.
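There's no magic to the SFV approach: an .sfv file is just "filename CRC32" lines. A minimal sketch of checking one with nothing but the standard library (note this only *detects* corruption -- tools like QuickPar layer recovery data on top so you can also repair it):

```python
# Verify files against a simple .sfv-style checksum list using CRC32.
import zlib

def crc32_of(path, bufsize=1 << 16):
    """Incremental CRC32, so large video files aren't read into RAM at once."""
    crc = 0
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF

def verify_sfv(sfv_path):
    """Yield (filename, ok) for each non-comment line of an .sfv file."""
    with open(sfv_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(";"):  # ';' marks SFV comments
                continue
            name, expected = line.rsplit(None, 1)
            yield name, crc32_of(name) == int(expected, 16)

# Tiny demo: write a file, record its checksum, then verify it.
with open("MyRec.ts", "wb") as f:
    f.write(b"not really video data")
with open("backup.sfv", "w") as f:
    f.write(f"MyRec.ts {crc32_of('MyRec.ts'):08X}\n")
print(dict(verify_sfv("backup.sfv")))  # {'MyRec.ts': True}
```

Run it again after each backup pass: a `False` means the copy no longer matches what you originally stored, and it's time to restore from another copy.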