What is the performance penalty for full disk encryption on an SSD?

FishAk

Senior member
Jun 13, 2010
987
0
0
What are the effects of full disk encryption using TrueCrypt on an SSD? Obviously a SandForce drive won't be able to compress the data, and will perform well below its optimum, but what about wear leveling?

With FDE, the entire usable portion of the disk is filled with random data. Also, the storage device is never fed any data that doesn't look completely random. The only space not filled is the reserve. I assume this would affect performance, but by how much?

Which controller would be the best candidate for this type of “abuse”?
 

FishAk

Senior member
Jun 13, 2010
987
0
0
Thanks, vrxtd. I understand there is hardware-based encryption, but it is proprietary. There is no way to determine whether proprietary (as opposed to open source) encryption has a "back door" or not.

I am specifically referring to encryption that only presents the drive with what it sees as just random bits.
 

Mark R

Diamond Member
Oct 9, 1999
8,513
16
81
As you rightly say, the issue is that the drive appears to be full of data, and therefore the 'free pool' of blocks is relatively depleted, compared to the same drive, unencrypted and with TRIM.

How much of an issue this is depends on how aggressively the drive reclaims free blocks and whether or not it does it in the background (or only on write), and also how much 'overprovisioning' there is on the drive.

Enterprise-level drives frequently provide 20% or so of overprovisioning. This ensures that even when the drive is full (or used on a non-TRIM-enabled system, e.g. in RAID or with FDE), there are always more than enough free pages available for most foreseeable write bursts.

Consumer-level drives tend to use less overprovisioning; 7% is typical, and first-generation SandForce used up to 12%. Newer-generation drives are tending to provide even less overprovisioning, in order to appear better value and give customers more usable capacity. This matters less now that TRIM is more prevalent. For instance, SandForce now permits OEMs to configure reduced overprovisioning in custom firmware, with options of 3% and 0% available. For consumer use with TRIM, there are likely to be at least a few percent of free blocks available - and you really only need 2-3% free pages to stop write amplification going insane and performance plummeting.

In terms of free-block reclamation, SandForce does it automatically in the background. If you 'hammer' a drive so that it runs out of free blocks, it will slow down, because it has to free them up 'on the fly'. However, if allowed to rest for an hour or so, it will recover, as the drive automatically frees blocks when idle.

One thing that affects write amplification (and flash wear) is how much of the data is static. On a drive with large amounts of static data (e.g. a very low-capacity drive that can barely fit an OS and application bundle), wear levelling is less effective, meaning that writes tend to require very time-consuming and wear-inducing levelling operations. If, however, you are working with large amounts of dynamic data, then the wear levelling tends to be more effective, faster, and to cause less overall flash wear.

Ideally, you want a drive with a lot of overprovisioning, especially if you are going to demand maximum performance. Don't forget that you can manually overprovision the drive by secure erasing it and then partitioning only part of the space, leaving the unpartitioned area untouched. If you leave 10-15% unused on a consumer drive, you'll get a total overprovisioning level similar to that of enterprise drives.
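As a minimal sketch of that manual over-provisioning procedure (assuming a Linux system with hdparm and parted; /dev/sdX and the 15% reserve are placeholders, not recommendations, and the secure erase destroys all data on the drive):

```shell
# Manual over-provisioning sketch. DANGEROUS on a real drive.
DEV=/dev/sdX            # placeholder device name
RESERVE_PCT=15
END_PCT=$((100 - RESERVE_PCT))

if [ -b "$DEV" ]; then
    # ATA secure erase (drive must not be "frozen"; a suspend/resume
    # cycle often unfreezes it). "p" is a throwaway password.
    hdparm --user-master u --security-set-pass p "$DEV"
    hdparm --user-master u --security-erase p "$DEV"
    # Partition only the first 85% and never touch the remainder.
    parted -s "$DEV" mklabel gpt mkpart primary 0% "${END_PCT}%"
else
    echo "no block device at $DEV; would partition 0% to ${END_PCT}%"
fi
```

The unpartitioned tail is never written to, so the controller can treat it as extra spare area.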

Some FDE tools do provide their own compression (compression is a useful pre-processing step prior to encryption, as it makes certain types of cryptographic attack much more difficult or impossible), but you could always use software compression at the filesystem level (which is relatively lightweight in terms of resources, and will probably be significantly more effective than the hardware compression in SandForce controllers).
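A quick illustration of why compression has to happen before encryption (a sketch using gzip and OpenSSL as stand-ins for the FDE layer; nothing here is specific to TrueCrypt or SandForce): ciphertext looks random, so compressing it afterwards gains nothing.

```shell
# 1 MiB of zeros: highly compressible plaintext
head -c 1048576 /dev/zero > plain.bin
gzip -kf plain.bin                      # compress first: shrinks dramatically
# encrypt the compressed data (throwaway demo password)
openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo -in plain.bin.gz -out enc.bin
gzip -kf enc.bin                        # compressing ciphertext gains nothing
ls -l plain.bin plain.bin.gz enc.bin enc.bin.gz
```

This is the same reason a SandForce controller can't compress FDE output: by the time the data reaches the drive, it is already indistinguishable from random bits.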
 

FishAk

Senior member
Jun 13, 2010
987
0
0
Thank you for the detailed response, Mark.

Don't forget that you can manually overprovision the drive...

This is a very good point. However, I foresee a problem in the actual implementation.

As I understand it, one must perform the secure erase, then partition the drive, and then must not allocate (or otherwise disturb) the portion meant for self over-provisioning.

I need to recover the OS from an image file in plain text, and then encrypt it. I can back up an image, and recover it from an encrypted container, but the recovered OS is in PT until it gets encrypted. I don't know of a way to start with a recovered image that is pre-encrypted and still able to boot.

Because of this, when the system partition is encrypted, the over-provisioned space will end up holding part - maybe even all - of the OS in PT. A larger self-OP space lets more of the pre-encryption OS remain available in PT until it is eventually overwritten, through use, with encrypted bits. The bigger the over-provisioned space, the greater the chance that some personal data will be leaked to that space as plain text.

Since you can't secure erase the unallocated space again and still have it serve as over-provisioning, what would be the best way to force it to turn over and be cleaned of the PT? I'm guessing one could simply pass a large file through the empty space of the OS partition a few times to clear the OP space, but I'm not sure how effective that would be.
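One possible way to force that turnover, as a sketch: from inside the encrypted OS, repeatedly fill the volume's free space with files and delete them. The FDE layer encrypts the zeros before they reach the drive, so every flash page the controller recycles gets overwritten with random-looking bits. The FILL_MB cap here is only so the sketch is safe to run; for a real pass you would keep writing until the volume is actually full.

```shell
MNT=${MNT:-.}       # any directory on the encrypted volume (placeholder)
PASSES=3
FILL_MB=1           # raise until the volume is actually full on a real run

for i in $(seq 1 "$PASSES"); do
    # zeros here become ciphertext by the time the SSD sees them
    dd if=/dev/zero of="$MNT/fill.$i" bs=1M count="$FILL_MB" 2>/dev/null
    sync
done
rm -f "$MNT"/fill.*
```

How many passes are needed depends on the controller's garbage collection; there is no guarantee every spare page gets recycled on any given pass.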

My OS runs around 22-23GB, so even a 40GB partition would leave plenty of space left for self OP on an 80 or 120GB drive.
 

Mark R

Diamond Member
Oct 9, 1999
8,513
16
81
Thank you for the detailed response, Mark.
As I understand it, one must perform the secure erase, then partition the drive, and then must not allocate (or otherwise disturb) the portion meant for self over-provisioning.

I need to recover the OS from an image file in plain text, and then encrypt it. I can back up an image, and recover it from an encrypted container, but the recovered OS is in PT until it gets encrypted. I don't know of a way to start with a recovered image that is pre-encrypted and still able to boot.

My OS runs around 22-23GB, so even a 40GB partition would leave plenty of space left for self OP on an 80 or 120GB drive.

There are two potential solutions:
1. Use a low-level HD management tool to 'hide' some of the SSD space. IDE/SATA drives have a command that can make space 'disappear'; it is normally used by OEMs for hiding system-recovery data, making it very difficult for the user or a virus to nail the recovery partition. A tool like 'hdat2' can issue the 'SET MAX ADDRESS' command, which limits the capacity reported to the OS. So, immediately after secure erasure, use the tool to issue 'SET MAX ADDRESS' with a capacity lower than the full drive capacity. Once this command has been issued, the OS will be unable to see or touch the reserved capacity.

2. Install the OS as planned onto a suitably undersized partition. Once the system is up and running, use a low-level imaging tool to image the drive to a file on another drive, taking care that the unpartitioned space does not get imaged. Secure erase the drive, then restore the low-level image.

I've done similar things with the Linux dd command, which makes a total bit-for-bit copy of the drive. It doesn't care whether it's copying encrypted data, partitions, or just junk.
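A sketch of that dd round-trip, demonstrated on an ordinary file standing in for the drive (substitute /dev/sdX and a real image path for actual use):

```shell
# fake 4 MiB "drive" filled with random bytes
dd if=/dev/urandom of=src.img bs=1M count=4 2>/dev/null
# low-level image: dd copies every bit, encrypted or not
dd if=src.img of=backup.img bs=1M 2>/dev/null
# verify the copy is identical, as a restore would be
cmp src.img backup.img && echo "images are identical"
```

Because dd works below the filesystem, the restored image boots exactly as the original did, encrypted or not.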

In my case, I took a master OS image from a spindle disk, imaged it onto an SSD, optimized, defragged, fixed permissions etc. on the SSD, then imaged the SSD back onto the master disk. The defrag, scripts, permissions, etc. ran so much faster on the SSD that it was well worth it.
 

FishAk

Senior member
Jun 13, 2010
987
0
0
That sounds like an excellent idea.

I have control of which sectors are written to on a HDD, and can sanitize it on demand. But on the SSD, the level of controller management means I can't control which sectors are physically written to or wiped.

I can restore the OS to the HDD, shrink the partition to the smallest size possible, then encrypt it. Next, I can copy that partition bit-for-bit to the already over-provisioned SSD. With this method, the SSD is never exposed to plain text, and it only adds a couple of steps to the method I use now.
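That final copy step might look like the following sketch (device names are placeholders; the SSD partition must be at least as large as the source, and dd to the wrong device destroys data):

```shell
SRC=/dev/sdA1    # encrypted system partition on the HDD (placeholder)
DST=/dev/sdB1    # partition on the secure-erased, over-provisioned SSD
if [ -b "$SRC" ] && [ -b "$DST" ]; then
    # every bit the SSD receives is already ciphertext
    dd if="$SRC" of="$DST" bs=1M conv=fsync
    sync
else
    echo "placeholder devices; would copy $SRC onto $DST bit for bit"
fi
```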