We’ve been working on a project using Amazon Web Services for the past few years. One of my main concerns is the inability to move a server from one security group to another once it’s deployed. I’m sure there are very good reasons for security groups being designed this way (behind the scenes, the setup is most likely iptables on the host node), but when maintaining long-lived EC2 instances, not having full control over firewall configuration is a System Engineer’s worst nightmare. Add in the fact that all general EC2 customers share a single private address space, and the need for flexibility becomes even more important.


Consider this scenario: you launch 20 MySQL servers in a single security group called “MySQL Servers”. Somewhere down the line (and this does happen), someone needs port 3306 open on one specific server to a random subnet (let’s call it 192.168.0.0/16).

You now have three options:

  1. Re-launch the one server in its own security group, and grant access. While launch automation and central management should be one of an EC2 user’s top priorities, this option may not always be practical.
  2. Open up port 3306 to 192.168.0.0/16 inside the “MySQL Servers” security group, and then use local iptables on the remaining servers to block off access on port 3306.
  3. Open up port 3306 to 192.168.0.0/16 inside the “MySQL Servers” security group, and be okay with the fact that all 20 of your MySQL servers are now open to that subnet (at which point, put your faith in your MySQL user configuration to restrict access appropriately).

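Option 2’s local-firewall workaround can be sketched as follows. The subnet and port are the hypothetical values from the scenario above, and the commands are built as strings and printed rather than executed, so you can review them before applying them (running them requires root, and the persistence step shown is the Debian/Ubuntu `iptables-persistent` convention):

```python
# Sketch of option 2: on the 19 servers that should NOT accept the new
# traffic, drop TCP connections to port 3306 from the granted subnet.
SUBNET = "192.168.0.0/16"  # hypothetical subnet from the scenario above
PORT = 3306

def iptables_block_commands(subnet=SUBNET, port=PORT):
    """Return shell commands that drop TCP traffic to `port` from `subnet`."""
    return [
        f"iptables -A INPUT -p tcp -s {subnet} --dport {port} -j DROP",
        # Persist the rule across reboots (distribution-specific):
        "iptables-save > /etc/iptables/rules.v4",
    ]

for cmd in iptables_block_commands():
    print(cmd)
```

Keep in mind this inverts the usual model: the security group is now permissive, and correctness depends on every remaining server carrying the local DROP rule.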
In order to avoid encountering this issue in the future, I’ve recently been experimenting with putting every EC2 instance in its own security group. Not only can this be used to easily “tag” an instance with a name, but it allows you to individually control security rules for each server independently. Note that this does increase overhead, and most rules will be repeated for all servers of the same type, so I only recommend this solution if you’re automatically managing your instances via the EC2 API.