I'm using Amazon's tools to build a web app. I'm very happy with them, but I have a security concern.
Right now, I'm using multiple EC2 instances, S3, SimpleDB and SQS. In order to authenticate requests to the different services, you include your Access Identifiers.
For example, to upload a file to S3 from an EC2 instance, your EC2 instance needs to have your Access Key ID and your Secret Access Key.
That basically means your username and password need to be in your instances.
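To make the concern concrete, here is a minimal sketch (Python SDK / boto3; the bucket name and file path are hypothetical) of what an upload looks like when an instance holds the account's long-lived keys:

```python
# Sketch only: the instance holds the account-wide credentials in order to call S3.
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",          # the account's Access Key ID
    aws_secret_access_key="wJalr...",     # the account's Secret Access Key
)

# Anything on a compromised instance that can read these values can now act
# as the whole account (S3, SimpleDB, SQS, EC2, ...), not just do this upload.
s3.upload_file("/tmp/report.csv", "my-app-bucket", "uploads/report.csv")
```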
If one of my instances were to be compromised, all of my Amazon assets would be compromised. The keys can be used to upload or replace S3 and SimpleDB data, start and stop EC2 instances, etc.
How can I minimize the damage of a single compromised host?
My first thought was to get multiple identifiers per account so I could track changes made and quickly revoke the 'hacked' credentials. Unfortunately, Amazon doesn't support more than one set of credentials per account.
My second thought was to create multiple accounts and use ACLs to control access. Unfortunately, not all the services support granting other accounts access to your data. Plus, bandwidth is cheaper the more you use, so having it all go through one account is ideal.
Has anyone dealt with, or at least thought about this problem?
What you can do is have a single, super-locked down 'authentication server'. The secret key only exists on this one server, and all the other servers will need to ask it for permission. You can assign your own keys to the various servers, and lock it down by IP address as well. That way if a server gets compromised, you simply revoke its key from the 'authentication server'.
This is possible because of the way AWS authentication works. Say your webserver needs to upload a file to S3. First, it generates the AWS request and sends that request, along with your custom server key, to the 'authentication server'. The authentication server authenticates the request, doing the crypto magic stuff, and returns the authenticated string back to the webserver. The webserver can then use this to actually submit the request, along with the file to upload, to S3.
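As a rough illustration of this pattern for the S3 upload case, here is a minimal sketch that has the 'authentication server' hand back a short-lived presigned URL instead of a raw signature (Flask and boto3; the per-server keys, bucket, and endpoint path are all made up):

```python
# Sketch of the 'authentication server': only this host holds the real AWS keys.
import boto3
from flask import Flask, request, jsonify, abort

app = Flask(__name__)
s3 = boto3.client("s3")  # real AWS credentials are configured only on this host

# Keys you issue to your own servers; revoke one here if that host is compromised.
SERVER_KEYS = {"web-1": "k1-secret", "web-2": "k2-secret"}

@app.route("/sign-upload", methods=["POST"])
def sign_upload():
    body = request.get_json(silent=True) or {}
    if SERVER_KEYS.get(body.get("server_id")) != body.get("server_key"):
        abort(403)  # unknown or revoked server key
    # Return a short-lived, pre-signed PUT URL the web server can use directly.
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "my-app-bucket", "Key": body["object_key"]},
        ExpiresIn=300,
    )
    return jsonify({"url": url})
```

The webserver then does an ordinary HTTP PUT of the file to the returned URL; the AWS secret key never leaves the authentication server.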
AWS allows you to create multiple users with Identity and Access Management. This will allow you to implement either of your scenarios.
I would suggest defining an IAM user per EC2 instance. That lets you revoke access for a specific user (or just their access keys) if the corresponding EC2 instance is compromised, and it lets you use fine-grained permissions to restrict which APIs the user can call and which resources they can access (e.g. only permit the user to upload to a specific bucket); see the sketch below.
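For example, a minimal sketch of setting that up with boto3 (the user name, policy name, and bucket are hypothetical):

```python
# Sketch: one IAM user per instance, limited to uploads into a single bucket.
import json
import boto3

iam = boto3.client("iam")

iam.create_user(UserName="web-1")

# Inline policy: this user may only put objects into the app's upload bucket.
iam.put_user_policy(
    UserName="web-1",
    PolicyName="upload-only",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-app-bucket/*",
        }],
    }),
)

# These keys go on the corresponding instance only; if that instance is
# compromised, delete just this user's keys (or the user) to contain it.
new_key = iam.create_access_key(UserName="web-1")["AccessKey"]
print(new_key["AccessKeyId"], new_key["SecretAccessKey"])
```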
Furthermore, AWS IAM roles allow you to assign permissions to an EC2 instance rather than having to place keys on the instance.
See the blog post at http://aws.typepad.com/aws/2012/06/iam-roles-for-ec2-instances-simplified-secure-access-to-aws-service-apis-from-ec2.html
Most of the SDKs automatically pick up the temporary credentials that the role provides.
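For instance, with a role attached to the instance, the upload from the question needs no credentials in code or config at all; boto3 falls back to the instance metadata service and uses the role's auto-rotated temporary credentials (a minimal sketch, assuming the role's policy allows s3:PutObject on the hypothetical bucket):

```python
# Sketch: no keys configured anywhere on the instance.
import boto3

s3 = boto3.client("s3")  # credentials come from the attached IAM role
s3.upload_file("/tmp/report.csv", "my-app-bucket", "uploads/report.csv")
```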
AWS offers "Consolidated Billing", which addresses the concern in your second thought.
https://aws-portal.amazon.com/gp/aws/developer/account/index.html?ie=UTF8&action=consolidated-billing
"Consolidated Billing enables you to consolidate payment for multiple Amazon Web Services (AWS) accounts within your company by designating a single paying account. You can see a combined view of AWS costs incurred by all accounts, as well as obtain a detailed cost report for each of the individual AWS accounts associated with your paying account. Consolidated Billing may also lower your overall costs since the rolled up usage across all of your accounts could help you reach lower-priced volume tiers more quickly."