Terraform local-exec provisioner on an EC2 instance

Posted 2020-05-01 00:57

I'm trying to provision an EKS cluster with Terraform.

terraform apply fails with:

module.eks_node.null_resource.export_rendered_template: Provisioning with 'local-exec'...
module.eks_node.null_resource.export_rendered_template (local-exec): Executing: ["/bin/sh" "-c" "cat > /data_output.sh <<EOL\n#!/bin/bash -xe\n\nCA_CERTIFICATE_DIRECTORY=/etc/kubernetes/pki\nCA_CERTIFICATE_FILE_PATH=$CA_CERTIFICATE_DIRECTORY/ca.crt\nmkdir -p $CA_CERTIFICATE_DIRECTORY\necho \"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNE1URXhPVEUxTXpFeE5Wb1hEVEk0TVRFeE5qRTFNekV4TlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTG5MClhyT3ByL0dpeDFNdXJKTE1Za3Z5Mk83TDdnR1U2VVp0a2QvMmg1WmxYMUxUNVBtUk5XeTllaEErMW9KWE9MaHgKNWQ5L2tRMWcxdHJ3LzlpV0lPeENMN1oydjh5WVU2bjFuUTV3VXRSUWlIelFkcTlDL0ZNMmUzNVFGOFE5QWpFbQpkcGFob1JKVWk4UHJVYXRwV1NmYmZqM2F6RVFDcWgrMFF6TDdzUE1tS2dlOVpqbUw2VmFqNTNBSHZtcUkweUJYClQyR1ZySFJLUW9zZ2JwTHdIZE95andCejlvS3RScml6UnN2U2dPSVNNdDRIbVhDcDBwVGxBK2NGUFE4azBYdFoKaTFlcEc4aklCMGw0VFV3eGJROFFEUUxET25iUHFEdTFVV3cxSmIvaUZIZkV2Z0JrUTFpVjJDQmRhNzZkZjhDSgpSYzFqOERzeCtnYkFYdjhadzZNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFIcWxLZUdoN3dUM2U0N0RweEZhVW91WnYvbGwKbUtLR1V3Ly9RNWhSblVySEN6NU1Pc1gvcU5aWVM1aGFocjNJS2xqZnlBMkNJbk82cE9KeEcxVEVQOVRvdkRlcApnWlZhYVE0d3dqQTI0R2gwb1hrNGJ5TWNKaEhpRURmT0QvemlWMlN4ZEN4MHhGS0hQL3NRcm9RR1JyM0RPeTFwCmUvRzN6cndHN0FoSEg4dWJTQlZFMUVpZ0tvaXlpWTVnMGJVZTZUT2FobEdaMkxkTytheVg4UWdYOTNXOUVzdWoKTTIzaFA2T3pnRjhhbVpGZEpFRGNkR0dhdm5wMFBlbTJGb1dpY3hvZlFsWUhacnA4WGk3Z1JHY0RjakJuWFZxcAptQkpnSkNuSkJ2UWZ0WXJPbG1mZUZMMXpKckM5WUdUYmxjbUNuTi95UW5VcVdZZHJDLzNTYWVHOEZTbz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=\" | base64 -d >  $CA_CERTIFICATE_FILE_PATH\nINTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)\nsed -i s,MASTER_ENDPOINT,https://232F45B8AFA817FD332477B84D964B2F.yl4.eu-west-1.eks.amazonaws.com,g /var/lib/kubelet/kubeconfig\nsed -i s,CLUSTER_NAME,terra-dev,g /var/lib/kubelet/kubeconfig\nsed -i s,REGION,eu-west-1,g /etc/systemd/system/kubelet.service\nsed -i s,MAX_PODS,20,g /etc/systemd/system/kubelet.service\nsed -i s,MASTER_ENDPOINT,https://232F45B8AFA817FD332477B84D964B2F.yl4.eu-west-1.eks.amazonaws.com,g /etc/systemd/system/kubelet.service\nsed -i s,INTERNAL_IP,$INTERNAL_IP,g /etc/systemd/system/kubelet.service\nDNS_CLUSTER_IP=10.100.0.10\nif [[ $INTERNAL_IP == 10.* ]] ; then DNS_CLUSTER_IP=172.20.0.10; fi\nsed -i s,DNS_CLUSTER_IP,$DNS_CLUSTER_IP,g /etc/systemd/system/kubelet.service\nsed -i s,CERTIFICATE_AUTHORITY_FILE,$CA_CERTIFICATE_FILE_PATH,g /var/lib/kubelet/kubeconfig\nsed -i s,CLIENT_CA_FILE,$CA_CERTIFICATE_FILE_PATH,g  /etc/systemd/system/kubelet.service\nsystemctl daemon-reload\nsystemctl restart kubelet\n\nEOL"]
module.eks_node.null_resource.export_rendered_template (local-exec): /bin/sh: /data_output.sh: Permission denied

Error: Error applying plan:

1 error(s) occurred:

* module.eks_node.null_resource.export_rendered_template: Error running command 'cat > /data_output.sh <<EOL
#!/bin/bash -xe

CA_CERTIFICATE_DIRECTORY=/etc/kubernetes/pki
CA_CERTIFICATE_FILE_PATH=$CA_CERTIFICATE_DIRECTORY/ca.crt
mkdir -p $CA_CERTIFICATE_DIRECTORY
echo "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNE1URXhPVEUxTXpFeE5Wb1hEVEk0TVRFeE5qRTFNekV4TlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTG5MClhyT3ByL0dpeDFNdXJKTE1Za3Z5Mk83TDdnR1U2VVp0a2QvMmg1WmxYMUxUNVBtUk5XeTllaEErMW9KWE9MaHgKNWQ5L2tRMWcxdHJ3LzlpV0lPeENMN1oydjh5WVU2bjFuUTV3VXRSUWlIelFkcTlDL0ZNMmUzNVFGOFE5QWpFbQpkcGFob1JKVWk4UHJVYXRwV1NmYmZqM2F6RVFDcWgrMFF6TDdzUE1tS2dlOVpqbUw2VmFqNTNBSHZtcUkweUJYClQyR1ZySFJLUW9zZ2JwTHdIZE95andCejlvS3RScml6UnN2U2dPSVNNdDRIbVhDcDBwVGxBK2NGUFE4azBYdFoKaTFlcEc4aklCMGw0VFV3eGJROFFEUUxET25iUHFEdTFVV3cxSmIvaUZIZkV2Z0JrUTFpVjJDQmRhNzZkZjhDSgpSYzFqOERzeCtnYkFYdjhadzZNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFIcWxLZUdoN3dUM2U0N0RweEZhVW91WnYvbGwKbUtLR1V3Ly9RNWhSblVySEN6NU1Pc1gvcU5aWVM1aGFocjNJS2xqZnlBMkNJbk82cE9KeEcxVEVQOVRvdkRlcApnWlZhYVE0d3dqQTI0R2gwb1hrNGJ5TWNKaEhpRURmT0QvemlWMlN4ZEN4MHhGS0hQL3NRcm9RR1JyM0RPeTFwCmUvRzN6cndHN0FoSEg4dWJTQlZFMUVpZ0tvaXlpWTVnMGJVZTZUT2FobEdaMkxkTytheVg4UWdYOTNXOUVzdWoKTTIzaFA2T3pnRjhhbVpGZEpFRGNkR0dhdm5wMFBlbTJGb1dpY3hvZlFsWUhacnA4WGk3Z1JHY0RjakJuWFZxcAptQkpnSkNuSkJ2UWZ0WXJPbG1mZUZMMXpKckM5WUdUYmxjbUNuTi95UW5VcVdZZHJDLzNTYWVHOEZTbz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=" | base64 -d >  $CA_CERTIFICATE_FILE_PATH
INTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
sed -i s,MASTER_ENDPOINT,https://232F45B8AFA817FD332477B84D964B2F.yl4.eu-west-1.eks.amazonaws.com,g /var/lib/kubelet/kubeconfig
sed -i s,CLUSTER_NAME,terra-dev,g /var/lib/kubelet/kubeconfig
sed -i s,REGION,eu-west-1,g /etc/systemd/system/kubelet.service
sed -i s,MAX_PODS,20,g /etc/systemd/system/kubelet.service
sed -i s,MASTER_ENDPOINT,https://232F45B8AFA817FD332477B84D964B2F.yl4.eu-west-1.eks.amazonaws.com,g /etc/systemd/system/kubelet.service
sed -i s,INTERNAL_IP,$INTERNAL_IP,g /etc/systemd/system/kubelet.service
DNS_CLUSTER_IP=10.100.0.10
if [[ $INTERNAL_IP == 10.* ]] ; then DNS_CLUSTER_IP=172.20.0.10; fi
sed -i s,DNS_CLUSTER_IP,$DNS_CLUSTER_IP,g /etc/systemd/system/kubelet.service
sed -i s,CERTIFICATE_AUTHORITY_FILE,$CA_CERTIFICATE_FILE_PATH,g /var/lib/kubelet/kubeconfig
sed -i s,CLIENT_CA_FILE,$CA_CERTIFICATE_FILE_PATH,g  /etc/systemd/system/kubelet.service
systemctl daemon-reload
systemctl restart kubelet

EOL': exit status 1. Output: /bin/sh: /data_output.sh: Permission denied

My eks_node module:

data "aws_ami" "eks-worker" {
  filter {
    name   = "name"
    values = ["amazon-eks-node-v*"]
  }

  most_recent = true

  owners = ["602401143452"] # Amazon
}

data "aws_region" "current" {}

data "template_file" "user_data" {
  template = "${file("${path.module}/userdata.tpl")}"

  vars {
    eks_certificate_authority = "${var.eks_certificate_authority}"
    eks_endpoint              = "${var.eks_endpoint}"
    eks_cluster_name          = "${var.eks_cluster_name}"
    workspace                 = "${terraform.workspace}"
    aws_region_current_name   = "${data.aws_region.current.name}"
  }
}

resource "null_resource" "export_rendered_template" {
  provisioner "local-exec" {
    command = "cat > /data_output.sh <<EOL\n${data.template_file.user_data.rendered}\nEOL"
  }
}

resource "aws_launch_configuration" "terra" {
  associate_public_ip_address = true
  iam_instance_profile        = "${var.iam_instance_profile}"
  image_id                    = "${data.aws_ami.eks-worker.id}"
  instance_type               = "t2.medium"
  name_prefix                 = "terraform-eks"
  key_name                    = "Dev1"
  security_groups             = ["${var.security_group_node}"]
  user_data                   = "${data.template_file.user_data.rendered}"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "terra" {
  desired_capacity     = 1
  launch_configuration = "${aws_launch_configuration.terra.id}"
  max_size             = 2
  min_size             = 1
  name                 = "terraform-eks"
  vpc_zone_identifier  = ["${var.subnets}"]

  tag {
    key                 = "Name"
    value               = "terraform-eks"
    propagate_at_launch = true
  }

  tag {
    key                 = "kubernetes.io/cluster/${var.eks_cluster_name}-${terraform.workspace}"
    value               = "owned"
    propagate_at_launch = true
  }
}

AWS currently documents this userdata as required for EKS worker nodes, so that Kubernetes is configured correctly on the EC2 instance: https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-nodegroup.yaml

My userdata.tpl:

#!/bin/bash -xe

CA_CERTIFICATE_DIRECTORY=/etc/kubernetes/pki
CA_CERTIFICATE_FILE_PATH=$CA_CERTIFICATE_DIRECTORY/ca.crt
mkdir -p $CA_CERTIFICATE_DIRECTORY
echo "${eks_certificate_authority}" | base64 -d >  $CA_CERTIFICATE_FILE_PATH
INTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
sed -i s,MASTER_ENDPOINT,${eks_endpoint},g /var/lib/kubelet/kubeconfig
sed -i s,CLUSTER_NAME,${eks_cluster_name}-${workspace},g /var/lib/kubelet/kubeconfig
sed -i s,REGION,${aws_region_current_name},g /etc/systemd/system/kubelet.service
sed -i s,MAX_PODS,20,g /etc/systemd/system/kubelet.service
sed -i s,MASTER_ENDPOINT,${eks_endpoint},g /etc/systemd/system/kubelet.service
sed -i s,INTERNAL_IP,$INTERNAL_IP,g /etc/systemd/system/kubelet.service
DNS_CLUSTER_IP=10.100.0.10
if [[ $INTERNAL_IP == 10.* ]] ; then DNS_CLUSTER_IP=172.20.0.10; fi
sed -i s,DNS_CLUSTER_IP,$DNS_CLUSTER_IP,g /etc/systemd/system/kubelet.service
sed -i s,CERTIFICATE_AUTHORITY_FILE,$CA_CERTIFICATE_FILE_PATH,g /var/lib/kubelet/kubeconfig
sed -i s,CLIENT_CA_FILE,$CA_CERTIFICATE_FILE_PATH,g  /etc/systemd/system/kubelet.service
systemctl daemon-reload
systemctl restart kubelet

1 Answer

太酷不给撩
2020-05-01 01:45

The local-exec provisioner runs the cat command on your local system (the machine running Terraform) to write out the rendered user-data script, presumably for later reference. The problem is that the user running Terraform does not have permission to write to the / directory.
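
You can reproduce the failure outside Terraform; on a typical Linux system, a non-root user cannot create files directly under /:

$ touch /data_output.sh
touch: cannot touch '/data_output.sh': Permission denied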

If you don't care about having the rendered user-data script written to a local file, you can comment out the whole resource "null_resource" "export_rendered_template" block.

But if you do want it, try changing the output path from /data_output.sh to ./data_output.sh, or some other path your user can write to.
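
For example, the resource from the question could become the following (a minimal sketch; only the path changes, and ./data_output.sh ends up in the directory you run terraform apply from):

resource "null_resource" "export_rendered_template" {
  provisioner "local-exec" {
    # Write the rendered template to the current working directory
    # instead of /, which a non-root user cannot write to.
    command = "cat > ./data_output.sh <<EOL\n${data.template_file.user_data.rendered}\nEOL"
  }
}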

Note: this may not work cleanly on Windows, where you may need to adjust the paths.
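
If you want to avoid the shell heredoc (and the Windows caveat) entirely, one alternative sketch, assuming the local provider is available, is to let Terraform write the file itself with a local_file resource:

resource "local_file" "rendered_user_data" {
  # Hypothetical replacement for the null_resource above:
  # Terraform writes the file directly, no shell involved.
  content  = "${data.template_file.user_data.rendered}"
  filename = "${path.module}/data_output.sh"
}

Because no shell command runs, this behaves the same on Linux and Windows.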
