
Does writing zeros to a hard disk completely wipe out data?

I didn't see the "click next" link for the next page, so I kept clicking reply and thought my message wasn't being displayed, only to realize that it was being displayed on the second page.

Sorry.



 
Originally posted by: thorny169
Originally posted by: n0cmonkey
Originally posted by: thorny169
Originally posted by: meatfestival
Wouldn't overwriting the hard disk with random data be better than zeros?

Once you write 0s seven times, the polarity of the platters makes it almost impossible to recover ANY data on the disk down to the bit level. Random writes could still leave clusters that haven't been written enough times to be totally clean.

It's kinda the same logic as a paper shredder: you don't want to leave any pieces big enough to read...

You're wrong.


Explain it then

Ok, some of you guys don't get it. Writing 0s or even 0s and 1s in random fashion across the whole hard drive will not do the trick. Even if you overwrite the whole hard drive, you're still not set.

If you disassemble the platters and use a magnetic head, you can still pick up low-level readings, and you can bet some data as well. When data has been stored on the hard drive for a long time, it makes an imprint, like wet-erase markers: if you write on a slide and wash it off within the hour or the day, it's fine. If you leave it on for like two years and then come back, you'll have one hell of a time getting it completely off without traces.

7 passes is considered very secure, but I recommend random 0s and 1s, not just 0s. Finally, if you're dealing with any sensitive data, just smash the platters. It's your best bet.
 
You can probably still see the files because it's not erasing the file table. The clusters of the files are probably gone, but the file names may still be intact. I'm just guessing, I really don't know as much as these other dudes.
 
Alas... and that is my problem: the file names and folder names are still on my disk after removing them with a Blancco shredder program.

And so... how do I get rid of these file and folder names? That is, how do I erase the file table, or remove them from the file table?

Note: I'm referring to deleting information without wiping out the entire disk.

To simplify: I want to erase files and folders off my hard disk without completely wiping out the disk data. I have acquired a program which erases all the data, but it doesn't seem to be removing the file and folder names, which has something to do with the file table. So my question is simple: how can I remove those file and folder names from the file table?
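One possible workaround (not mentioned in the thread, and very much file-system-dependent: FAT reuses freed directory slots, while NTFS may allocate fresh MFT records instead) is to create and then delete a large number of junk files in the affected directory, so the file system reuses, and thereby overwrites, the free slots still holding the old names. A hedged sketch of the idea; the function name and file-naming scheme are invented for illustration:

```python
# Sketch: overwrite stale directory-table slots by churning throwaway files.
# Whether the file system actually reuses the old slots depends on the
# file system and driver -- this is a heuristic, not a guarantee.

import os
import tempfile

def churn_directory_entries(path, count=1000):
    """Create then delete `count` junk files so the file system reuses
    (and overwrites) free directory slots that still hold old names."""
    names = []
    for i in range(count):
        name = os.path.join(path, f"JUNK{i:08d}.TMP")
        with open(name, "w") as f:
            f.write("x")                 # tiny payload; the names do the work
        names.append(name)
    for name in names:
        os.remove(name)                  # leaves the slots freshly overwritten

with tempfile.TemporaryDirectory() as d:
    churn_directory_entries(d, count=50)
    print(os.listdir(d))                 # [] -- directory is empty again
```

The same churn idea is what some commercial wipe tools apply to free directory entries after scrubbing the cluster data.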
 
Originally posted by: V00D00
You can probably still see the files because it's not erasing the file table. The clusters of the files are probably gone, but the file names may still be intact. I'm just guessing, I really don't know as much as these other dudes.

Which file system(s) are you referring to?

In the case of FAT, the directory entry points at the first cluster. The allocation table is a map of all clusters, and the entry for cluster X will point to the next cluster allocated for the file in question.

When you delete a file, the first byte of the name is changed to a reserved marker (0xE5) and all clusters associated with the file are marked as available.

The file's entry still points at the first cluster, so what happens during undelete is that the undelete utility simply guesses where the next cluster resides. On a partition with little fragmentation, the next cluster is usually the neighbouring cluster. The old Norton Undelete (mid-80s) would let the user choose clusters during the unerase operation. (I have no idea what modern unerasers do.)
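The FAT delete-and-undelete behaviour described above can be sketched as a toy model. The field names and table layout below are simplified for illustration, not the real on-disk FAT format:

```python
# Toy model of FAT deletion and undelete: deleting marks the name's first
# byte 0xE5 and frees the cluster chain, but the name text and the start
# cluster survive, which is exactly why undelete (and name leakage) works.

FREE = 0     # cluster marked available in the allocation table
EOF = -1     # end-of-chain marker

def delete(entry, fat):
    """Free the cluster chain and stamp the 0xE5 deleted marker."""
    cluster = entry["start"]
    while cluster != EOF:
        nxt = fat[cluster]
        fat[cluster] = FREE
        cluster = nxt
    entry["name"] = "\xe5" + entry["name"][1:]   # rest of the name survives!

def undelete_guess(entry, fat, n_clusters):
    """Guess the old chain as contiguous clusters from the start cluster."""
    start = entry["start"]                        # entry still points here
    return list(range(start, start + n_clusters))

# A 3-cluster file stored contiguously at clusters 5..7.
fat = {5: 6, 6: 7, 7: EOF}
entry = {"name": "SECRET.TXT", "start": 5}
delete(entry, fat)
print(entry["name"])                   # '\xe5ECRET.TXT' -- readable remnant
print(undelete_guess(entry, fat, 3))   # [5, 6, 7] -- right if unfragmented
```

On a fragmented partition the contiguous guess fails, which is why the old interactive undelete tools let the user pick clusters by hand.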

I doubt NTFS contains mechanisms to aid recovery further.

But for the sake of this discussion, even if the whole file proves unobtainable, a small fragment alone can spell disaster in certain cases.

The layer theory of deletion has been asserted several times for (at least) the past two decades, but an example where it has actually been applied always seems to escape mention.

I guess all I'm saying is that overwriting the deleted data several times is all fine and dandy, but I would not pay a single shilling for a utility to perform this duty.

As for DLeRium's comment: Taking a hammer to a platter is easier said than done. The platters I've encountered do not simply break like a mirror or similar. They can take quite a punishment, so make sure you've overwritten the thing first! (kinda difficult re-assembling the drive after a botched-up hammer job)
 
Originally posted by: n0cmonkey

The government spends money.

Overwriting hard drives better costs more money.

The government overwrites their disks better.

Hell, they probably just incinerate the things (Nuke'em from orbit, it's the only way to be sure). 😉

Or else they just sell the things without overwriting (or even formatting) them, like that infamous batch of laptops that got sold with secret data fully intact on the hard drives. The best overwriting methods in the world mean squat if they aren't used at all. 😀

I figure a great way of destroying hard drives would be a large belt sander. Drilling holes, sure, will make reading anything horridly time-consuming (like maybe a hundred years), but there are still portions intact.
Feed the platters through a belt sander, converting them to dust; that'd be fun, and as effective as high-temp incineration. Plus the metallic dust would just look neat. 🙂
 
Originally posted by: n0cmonkey
Originally posted by: thorny169
I'm really not interested in reading up on it, I'm recalling something I was told years ago from memory. If I'm wrong, I'm sorry for spreading misinformation, but please at least explain it instead of just saying I'm wrong.

Multiple overwrites are good, and random overwrites help. The only really good method is destroying the drives.

To quote from a paper on USENIX:
In conventional terms, when a one is written to disk the media records a one, and when a zero is written the media records a zero. However the actual effect is closer to obtaining a 0.95 when a zero is overwritten with a one, and a 1.05 when a one is overwritten with a one. Normal disk circuitry is set up so that both these values are read as ones, but using specialised circuitry it is possible to work out what previous "layers" contained. The recovery of at least one or two layers of overwritten data isn't too hard to perform by reading the signal from the analog head electronics with a high-quality digital sampling oscilloscope, downloading the sampled waveform to a PC, and analysing it in software to recover the previously recorded signal. What the software does is generate an "ideal" read signal and subtract it from what was actually read, leaving as the difference the remnant of the previous signal. Since the analog circuitry in a commercial hard drive is nowhere near the quality of the circuitry in the oscilloscope used to sample the signal, the ability exists to recover a lot of extra information which isn't exploited by the hard drive electronics (although with newer channel coding techniques such as PRML (explained further on) which require extensive amounts of signal processing, the use of simple tools such as an oscilloscope to directly recover the data is no longer possible).

So, with multiple overwrites of random types (0s and 1s, obviously), it'd be harder to get the original piece. If you overwrite it with just 0s, it'd be easier to find the original.

9. Conclusion
Data overwritten once or twice may be recovered by subtracting what is expected to be read from a storage location from what is actually read. Data which is overwritten an arbitrarily large number of times can still be recovered provided that the new data isn't written to the same location as the original data (for magnetic media), or that the recovery attempt is carried out fairly soon after the new data was written (for RAM). For this reason it is effectively impossible to sanitise storage locations by simply overwriting them, no matter how many overwrite passes are made or what data patterns are written. However, by using the relatively simple methods presented in this paper the task of an attacker can be made significantly more difficult, if not prohibitively expensive.

IIRC the paper explains that the read/write arm in the hard drive won't hit the exact same place every time, which makes finding the original easier.
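The "subtract the ideal signal" trick in the quoted passage can be sketched as a toy numeric model. The 0.05 residue scale comes from the quote's 0.95/1.05 figures; the function names and the noise-free setup are invented for illustration, and a real analog channel would be far messier:

```python
# Toy model of overwrite residue: the analog level mostly reflects the new
# bit, nudged slightly by the previous bit. Subtracting the ideal signal
# leaves a remnant whose sign reveals what was stored before.

RESIDUE = 0.05   # per the quote: 1 over 1 reads ~1.05, 1 over 0 reads ~0.95

def analog_write(old_bits, new_bits):
    """Analog read level: the new bit plus a small imprint of the old bit."""
    return [new + RESIDUE * (2 * old - 1)
            for old, new in zip(old_bits, new_bits)]

def recover_previous(levels, new_bits):
    """Subtract the ideal (new) signal; the remnant's sign is the old bit."""
    return [1 if (lvl - new) > 0 else 0
            for lvl, new in zip(levels, new_bits)]

old = [1, 0, 1, 1, 0, 0, 1]
new = [1, 1, 0, 1, 0, 1, 0]
levels = analog_write(old, new)
print(recover_previous(levels, new) == old)   # True in this noise-free toy
```

In this idealised model one pass is fully recoverable; the point of multiple random passes is that each pass attenuates the remnant further and buries it in noise.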

The DoD doesn't generally sit on its butt. I'm sure their standards have changed over the years. 😉

So you're saying when I run a random 7 pass wipe on my hard drives, then disassemble the drive and sand each platter down, I'm pretty safe, huh? 😉 Seriously, that's my process. I also shred all my documents, credit cards, CDs and floppies (don't have many of these left though).

Also, n0cmonkey, what you've posted is, I'd say, more theoretical than anything else. Data recovery methods like you have described have been rumored for a LONG time now, but I've never seen anyone recover data from a 7-pass-wiped hard drive, and neither have data recovery companies. When asked about recovering data that has been written over ONCE, the data recovery tech responded, "we've heard rumors of recovering data after a disk has been written over, but we've never seen it done." I'm not saying the NSA is not capable of recovering data that has been wiped with 7 passes, but I'd say they would be the ONLY ones capable of recovering that data.
 