A word about off-line back-ups

Essence_of_War

Platinum Member
Feb 21, 2013
Take a moment to visit:

http://www.codespaces.com/

And read the strongest argument for off-line back-ups that I've seen in a while.

On Tuesday the 17th of June 2014 we received a well-orchestrated DDoS against our servers; this happens quite often, and we normally overcome them in a way that is transparent to the Code Spaces community. On this occasion, however, the DDoS was just the start.

An unauthorised person, who at this point is still unknown (all we can say is that we have no reason to think it's anyone who is or was employed with Code Spaces), had gained access to our Amazon EC2 control panel and had left a number of messages for us to contact them using a Hotmail address.

Reaching out to the address started a chain of events that revolved around the person trying to extort a large fee in order to resolve the DDoS.

Upon realising that somebody had access to our control panel, we started to investigate how access had been gained and what access that person had to the data in our systems. It became clear that, so far, no machine access had been achieved, because the intruder did not have our private keys.

At this point we took action to take back control of our panel by changing passwords. However, the intruder had prepared for this and had already created a number of backup logins to the panel; upon seeing us attempt to recover the account, he proceeded to randomly delete artifacts from the panel. We finally managed to get our panel access back, but not before he had removed all EBS snapshots, S3 buckets, all AMIs, some EBS instances and several machine instances.

In summary, most of our data, backups, machine configurations and offsite backups were either partially or completely deleted.

The entire business is in ruins. There are a ton of places where this could have been prevented. It sounds like all of their Amazon EC2 settings were either too lax or simply not configured to take advantage of all the security options available, and their attempt to regain control of their account was sort of half-baked.

But I think the whole event transitioned from annoyance to catastrophe when they had no off-line back-ups.

This is something we should all consider as we plan out how to meet our storage needs.
 

corkyg

Elite Member | Peripherals
Super Moderator
Mar 4, 2000
Interesting. But they apparently did have offsite backups; for them to be affected, those must have been online. Yeah, offline is more bulletproof. It's what I use in my microcosm LAN: each computer has an offline reserve drive, and I rotate them all weekly. Your point is taken.
 

hoorah

Senior member
Dec 8, 2005
The problem with off-line backups is that they don't get updated. Why just take a drive offsite when I can use rsync or FTP or CrashPlan or yadda yadda to sync my files over the internet to a small NAS box at my father's office, and then... the need for the data to be current is my undoing.

A hard drive full of data a month old is infinitely better than bad data that gets propagated to all 4 of your regular online backups.
 

corkyg

Elite Member | Peripherals
Super Moderator
Mar 4, 2000
The problem with off-line backups is that they don't get updated. ...

I assume you mean in real time. My off-line reserve drives get updated weekly, and sometimes daily. I also assume "off line" means no live connection to a computer: my off-line drives are only powered up and connected when being updated and checked.
 

Elixer

Lifer
May 7, 2002
Take a moment to visit http://www.codespaces.com/ and read the strongest argument for off-line back-ups that I've seen in a while. ...

Hard to feel sorry for them when it was their own fault. I feel sorry for all the people who used them, but the company itself? Nope. They should have known better: you never keep all your eggs in one basket.
 

Jeff7181

Lifer
Aug 21, 2002
Not having offline backups is a bad thing, but that's not the real failing here. The real failing is that this was allowed to happen and Amazon had no way to help their customer rebuild their data. I work for a cloud computing provider, and we make use of SAN-based snapshots which get replicated offsite for disaster-recovery (DR) purposes and could be used to quickly reconstruct data, even if entire LUNs had been deleted.

The fact that this doesn't appear to be possible for Code Spaces is bad. I don't know much about EC2, but I have a feeling this type of protection is possible - if they chose not to pay for it, shame on them. If their Sales Rep at Amazon didn't strongly suggest they pay for it, shame on them.

Either way, it's an example of people putting too much faith in "the cloud."
 

KingFatty

Diamond Member
Dec 29, 2010
I see a bit of an analogy to how you should design your backup plan to protect against yourself, like when you delete something and later realize you want it back. If you propagate that deletion to all your backups, you are stuck. Here, it seems their offsite backups were designed to be updated really fast. So it's almost like you need to design a system that has a built-in delay or annoyance factor, like physically removing drives from access, or at least making them inaccessible for a while. That way nobody can accidentally overwrite them, or intentionally delete them.
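That built-in-delay idea can be sketched as a toy retention policy in Python: a delete request only marks an item, and the physical purge happens after a grace period has elapsed, so neither an accident nor an intruder can destroy everything instantly. The class and its parameters below are illustrative, not any real provider's API:

```python
from datetime import datetime, timedelta

class DelayedDeleteVault:
    """Toy backup store where deletion is a two-step process."""

    def __init__(self, grace: timedelta):
        self.grace = grace
        self.items = {}  # name -> time deletion was requested (None = live)

    def store(self, name: str) -> None:
        self.items[name] = None

    def request_delete(self, name: str, now: datetime) -> None:
        self.items[name] = now  # only marked; still recoverable

    def purge(self, now: datetime) -> None:
        # Physically remove only items whose grace period has expired.
        self.items = {n: t for n, t in self.items.items()
                      if t is None or now - t < self.grace}

    def recoverable(self, name: str) -> bool:
        return name in self.items
```

Even if an attacker issues delete requests for everything, the operator has the full grace window to notice and cancel them.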
 

Revolution 11

Senior member
Jun 2, 2011
And that length of delay will vary with the organization and its requirements for the data. If it is a small mom-and-pop business, you may want a delay of a few days. But if the owner fails to notice data damage or deletion, the backup period will need to be longer.

A large corporation may take several days to diagnose the extent of the damage. A month-old backup will not be ideal, but it is better than having backups that only go back a week and are also corrupted.

Ideally, you want different age tiers to backups. Some should be quite old, some will be weeks-old, and some will be hourly or daily.
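Those age tiers can be sketched as a simple retention function in Python. The boundaries below (7 days, 31 days) are arbitrary examples, not a standard:

```python
from datetime import date, timedelta

def tiers_to_keep(backups, today):
    """Pick which backup dates to keep under a three-tier policy:
    every backup from the last 7 days, one per ISO week for the last
    month, and one per calendar month before that."""
    keep = set()
    weekly_seen, monthly_seen = set(), set()
    for d in sorted(backups, reverse=True):  # newest backup wins each slot
        age = (today - d).days
        if age < 7:
            keep.add(d)                       # daily tier: keep everything
        elif age < 31:
            slot = d.isocalendar()[:2]        # (year, ISO week)
            if slot not in weekly_seen:
                weekly_seen.add(slot)
                keep.add(d)                   # weekly tier: one per week
        else:
            slot = (d.year, d.month)
            if slot not in monthly_seen:
                monthly_seen.add(slot)
                keep.add(d)                   # monthly tier: one per month
    return keep
```

This is essentially the classic grandfather-father-son rotation: the older a backup gets, the sparser the retained copies become.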
 

Elixer

Lifer
May 7, 2002
BTW, I was talking to someone who uses Amazon's services, and they said that part of the package they have from Amazon includes backups. They then take those backups and send them, encrypted, to another company for long-term storage, and that company makes a backup of the backup as well.
 

Jeff7181

Lifer
Aug 21, 2002
It would also make sense, if budget permits, to have two redundant cloud environments: half the environment in EC2, the other half in Azure.
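The two-provider idea boils down to fanning every backup write out to independently-credentialed targets and only calling the backup good when every target acknowledges it. A hedged Python sketch; the target names and upload callables are stand-ins, not real SDK calls:

```python
from typing import Callable, Dict

def fan_out(blob: bytes,
            targets: Dict[str, Callable[[bytes], bool]]) -> Dict[str, bool]:
    """Push `blob` to every target; return per-target success.

    Each target should use its own credentials, so compromising one
    provider's control panel cannot touch the copy at the other.
    """
    results = {}
    for name, upload in targets.items():
        try:
            results[name] = upload(blob)
        except Exception:
            results[name] = False  # one provider failing must not abort the rest
    return results
```

An alert on any `False` result then tells the operator that redundancy has degraded, before the second copy is ever needed.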