Control the order of container termination in a single pod

Asked 2020-06-04 07:47

I have two containers inside one pod. One is my application container and the second is a Cloud SQL proxy container. My application container depends on this Cloud SQL proxy container.
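
For context, the pod spec looks roughly like this (the container names, the application image, and the proxy flags below are simplified placeholders, not my real manifest):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  # Application container; it reaches the database through the proxy on localhost.
  - name: app
    image: my-app-image                    # placeholder
  # Cloud SQL proxy sidecar listening on 127.0.0.1:5432.
  - name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy
    command: ["/cloud_sql_proxy", "-instances=<INSTANCE_CONNECTION_NAME>=tcp:5432"]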

The problem is that when the pod is terminated, the Cloud SQL proxy container is terminated first, and my application container is only terminated a few seconds later.

So, before my application container is terminated, it keeps sending requests to the Cloud SQL proxy container, resulting in errors like:

could not connect to server: Connection refused
        Is the server running on host "127.0.0.1" and accepting
        TCP/IP connections on port 5432?

That's why I thought it would be a good idea to specify the order of termination, so that my application container is terminated first and only then the Cloud SQL proxy container.

I was unable to find anything in the documentation that could do this, but maybe there is some way.

1 Answer
Answered 2020-06-04 08:19

This is not directly possible with the Kubernetes pod API at present. Containers may be terminated in any order. The Cloud SQL proxy container may die more quickly than your application, for example if it has less cleanup to perform or fewer in-flight requests to drain.

From Termination of Pods:

When a user requests deletion of a pod, the system records the intended grace period before the pod is allowed to be forcefully killed, and a TERM signal is sent to the main process in each container.
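
Note that this grace period applies to the pod as a whole, not per container: every container receives TERM at roughly the same time and shares the same deadline. You can lengthen the window, but you cannot sequence the shutdowns. A minimal sketch, with an illustrative value:

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  # All containers receive SIGTERM together and share this window before
  # being force-killed (the default is 30 seconds).
  terminationGracePeriodSeconds: 60
  containers:
  - name: app
    image: my-app-image                    # placeholder
  - name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy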


You can work around this to an extent by wrapping the main and Cloud SQL proxy containers in entrypoint scripts that signal termination to each other through a file system shared at the pod level.

A wrapper like the following may help with this:

containers:
# Your application container (its name: and image: are omitted here for brevity).
- command: ["/bin/bash", "-c"]
  args:
  - |
    # Record that the main process has exited, whatever the cause.
    trap "touch /lifecycle/main-terminated" EXIT
    <your entry point goes here>
  volumeMounts:
  - name: lifecycle
    mountPath: /lifecycle
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy
  command: ["/bin/bash", "-c"]
  args:
  - |
    # Start the proxy in the background so the script can supervise it.
    /cloud_sql_proxy <your flags> &
    PID=$!

    # Poll for the marker file written by the main container's EXIT trap,
    # then shut the proxy down.
    function stop {
        while true; do
            if [[ -f "/lifecycle/main-terminated" ]]; then
                kill $PID
                break
            fi
            sleep 1
        done
    }
    trap stop EXIT
    # We explicitly call stop to ensure the sidecar will terminate
    # if the main container exits outside a request from Kubernetes
    # to kill the Pod.
    stop &
    wait $PID
  volumeMounts:
  - name: lifecycle
    mountPath: /lifecycle

You'll also need a local scratch space to use for communicating lifecycle events:

volumes:
- name: lifecycle
  emptyDir: {}

Of course, this approach depends on the containers you are running having a shell available to them; this is true of the Cloud SQL proxy, but you may need to make changes to your builds to ensure the same is true of your own application containers.
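
If you're not sure whether a given image ships a shell, a quick way to check is to exec into a running container (the pod and container names below are placeholders):

# Fails with an error if /bin/bash is not present in the image.
kubectl exec my-pod -c app -- /bin/bash -c 'echo bash is available'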
