I have a Terraform configuration targeting deployment on AWS. It applies beautifully when using an IAM user that has permission to do anything (i.e. {actions: ["*"], resources: ["*"]}).
In pursuit of automating the application of this Terraform configuration, I want to determine the minimum set of permissions necessary to apply the configuration initially and effect subsequent changes. I specifically want to avoid granting overbroad permissions in the policy, e.g. {actions: ["s3:*"], resources: ["*"]}.
So far, I'm simply running terraform apply until an error occurs. I look at the output or at the Terraform log to see which API call failed and then add it to the deployment user's policy. EC2 and S3 are particularly frustrating because the names of the actions don't necessarily align with the API method names. I'm several hours into this with no easy way to tell how far along I am.
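To illustrate, each iteration of that loop currently looks roughly like this (the grep pattern is only a rough guess at the error strings AWS returns for my resources):
# capture the provider debug log, then pull out the call that was refused
TF_LOG=debug terraform apply 2> tf.log
grep -iE "AccessDenied|UnauthorizedOperation" tf.log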
Is there a more efficient way to do this?
It'd be really nice if Terraform advised me what permission/action I need, but that's a product enhancement best left to HashiCorp.
As I guess that there's no perfect solution, treat this answer a bit as the result of my brainstorming. At least for the initial permission setup, I could imagine the following:
Allow everything first and then process the CloudTrail logs to see which API calls were made during a terraform apply / destroy cycle. Afterwards, you update the IAM policy to include exactly those calls.
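For example, a rough sketch of that lookup with the AWS CLI (the user name terraform-deployer is just a placeholder, and keep in mind that the recorded API names don't always map one-to-one onto IAM action names):
# list the unique service/operation pairs CloudTrail recorded for the deployment user
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=Username,AttributeValue=terraform-deployer \
  --query 'Events[].[EventSource,EventName]' --output text | sort -u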
Here is another approach, similar to what was said above, but without getting into CloudTrail:
- Give full permissions to your IAM user.
- Run TF_LOG=trace terraform apply --auto-approve &> log.log
- Run cat log.log | grep "DEBUG: Request"
You will get a list of all AWS Actions used.
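If you want to boil that down to a deduplicated list, something like the following works, assuming the request lines look like DEBUG: Request ec2/DescribeInstances Details: (the exact format can differ between provider versions, so treat the sed pattern as a starting point):
grep "DEBUG: Request" log.log | sed -E 's|.*DEBUG: Request ([a-z0-9-]+)/([A-Za-z0-9]+).*|\1:\2|' | sort -u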
Here's an efficient way I followed.
The way I deal with it is to allow all permissions (*) for each service first, then deny some of them if they're not required.
For example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSpecifics",
      "Action": [
        "ec2:*",
        "rds:*",
        "s3:*",
        "sns:*",
        "sqs:*",
        "iam:*",
        "elasticloadbalancing:*",
        "autoscaling:*",
        "cloudwatch:*",
        "cloudfront:*",
        "route53:*",
        "ecr:*",
        "logs:*",
        "ecs:*",
        "application-autoscaling:*",
        "events:*",
        "elasticache:*",
        "es:*",
        "kms:*",
        "dynamodb:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Sid": "DenySpecifics",
      "Action": [
        "iam:*User*",
        "iam:*Login*",
        "iam:*Group*",
        "iam:*Provider*",
        "aws-portal:*",
        "budgets:*",
        "config:*",
        "directconnect:*",
        "aws-marketplace:*",
        "aws-marketplace-management:*",
        "ec2:*ReservedInstances*"
      ],
      "Effect": "Deny",
      "Resource": "*"
    }
  ]
}
You can easily adjust the list in the Deny section if Terraform doesn't need certain AWS services or your company doesn't use them.
The method you are attempting is a bit unusual in cloud environments. Instead of giving fine-grained control to the Terraform user running the API calls, you could run Terraform on an EC2 instance with an IAM instance profile that allows the instance itself to call the APIs with the correct set of AWS service permissions.
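For what it's worth, a minimal sketch of the trust policy for the role behind that instance profile (the actual service permissions would live in policies attached to the role):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}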
Each call is made to a specific API endpoint. Some Terraform operations performed during an apply would be different if you pushed an update, and you also need to know what happens during a modify operation where a resource is updated in place.
Instead of restricting that API key to specific actions, give it more leeway: if it needs to create and destroy EC2 resources, grant it full EC2 permissions, perhaps with conditions that restrict it to the account or a particular VPC, as sketched below.
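As a sketch of that last idea, a statement like this grants broad EC2 access but only for resources in one VPC (the ARN is a placeholder, and not every EC2 action supports the ec2:Vpc condition key, so a few calls may still need their own statement):
{
  "Sid": "EC2WithinOneVpc",
  "Effect": "Allow",
  "Action": "ec2:*",
  "Resource": "*",
  "Condition": {
    "ArnEquals": {
      "ec2:Vpc": "arn:aws:ec2:us-east-1:123456789012:vpc/vpc-0example"
    }
  }
}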