Originally posted by: n0cmonkey
Originally posted by: thorny169
I'm really not interested in reading up on it, I'm recalling something I was told years ago from memory. If I'm wrong, I'm sorry for spreading misinformation, but please at least explain it instead of just saying I'm wrong.
Multiple overwrites help, and random overwrites help more. The only really reliable method is destroying the drives.
To quote from a paper presented at USENIX:
In conventional terms, when a one is written to disk the media records a one, and when a zero is written the media records a zero. However the actual effect is closer to obtaining a 0.95 when a zero is overwritten with a one, and a 1.05 when a one is overwritten with a one. Normal disk circuitry is set up so that both these values are read as ones, but using specialised circuitry it is possible to work out what previous "layers" contained. The recovery of at least one or two layers of overwritten data isn't too hard to perform by reading the signal from the analog head electronics with a high-quality digital sampling oscilloscope, downloading the sampled waveform to a PC, and analysing it in software to recover the previously recorded signal. What the software does is generate an "ideal" read signal and subtract it from what was actually read, leaving as the difference the remnant of the previous signal. Since the analog circuitry in a commercial hard drive is nowhere near the quality of the circuitry in the oscilloscope used to sample the signal, the ability exists to recover a lot of extra information which isn't exploited by the hard drive electronics (although with newer channel coding techniques such as PRML (explained further on) which require extensive amounts of signal processing, the use of simple tools such as an oscilloscope to directly recover the data is no longer possible).
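The subtraction trick described in the quote can be illustrated with a toy numeric model. This is purely illustrative: the 0.95/1.05 levels come straight from the quote, and the model assumes the attacker can sample the analog level and knows the current bit, which is exactly what the oscilloscope setup provides.

```python
import random

EPS = 0.05  # residual influence of the previous bit (illustrative value from the quote)

def record(prev_bit, new_bit):
    """Toy model of the analog level left on the media after overwriting
    prev_bit with new_bit: e.g. 0 overwritten with 1 reads as 0.95."""
    return new_bit + EPS * (2 * prev_bit - 1)

def recover_previous(analog_level, current_bit):
    """Subtract the 'ideal' signal for the current bit; the sign of the
    residue hints at what was stored before."""
    residue = analog_level - current_bit
    return 1 if residue > 0 else 0

# Simulate overwriting one "layer" and recovering it from the residue.
prev = [random.randint(0, 1) for _ in range(16)]
new = [random.randint(0, 1) for _ in range(16)]
levels = [record(p, n) for p, n in zip(prev, new)]
recovered = [recover_previous(lvl, n) for lvl, n in zip(levels, new)]
assert recovered == prev
```

In this idealized model the previous layer is recovered perfectly; on real media the residue is buried in noise, which is why the quote talks about high-quality sampling hardware.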
So, with multiple overwrites of random data (0s and 1s, obviously), it'd be harder to recover the original. If you overwrite it with just 0s, it'd be easier to find the original.
9. Conclusion
Data overwritten once or twice may be recovered by subtracting what is expected to be read from a storage location from what is actually read. Data which is overwritten an arbitrarily large number of times can still be recovered provided that the new data isn't written to the same location as the original data (for magnetic media), or that the recovery attempt is carried out fairly soon after the new data was written (for RAM). For this reason it is effectively impossible to sanitise storage locations by simply overwriting them, no matter how many overwrite passes are made or what data patterns are written. However by using the relatively simple methods presented in this paper the task of an attacker can be made significantly more difficult, if not prohibitively expensive.
IIRC the paper explains that the read/write head in the hard drive won't hit exactly the same spot every time, which makes finding the original easier.
The DoD doesn't generally sit on its butt. I'm sure their standards have changed over the years.
😉