Gunicorn continually booting workers when run in a Kubernetes pod

Posted 2019-08-30 07:53

Question:

I've dockerized a Flask app, using gunicorn to serve it. The last line of my Dockerfile is:

CMD source activate my_env && gunicorn --timeout 333 --bind 0.0.0.0:5000 app:app
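
For what it's worth, the deployment below doesn't override command or args for the backend container, so the same CMD is what runs under k8s. Spelled out in the pod spec it would look roughly like this (a sketch only; the override itself is hypothetical, and it assumes bash is available in the image):

containers:
  - name: my-backend
    image: my-registry/my-backend:latest
    # equivalent of the Dockerfile CMD above
    command: ["/bin/bash", "-c"]
    args: ["source activate my_env && gunicorn --timeout 333 --bind 0.0.0.0:5000 app:app"]
    ports:
      - containerPort: 5000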

When running the app locally – either straight in my console, without docker, or with

docker run -dit \
           --name my-app \
           --publish 5000:5000 \
           my-app:latest

It boots up fine. I get a log like:

[2018-12-04 19:32:30 +0000] [8] [INFO] Starting gunicorn 19.7.1
[2018-12-04 19:32:30 +0000] [8] [INFO] Listening at: http://0.0.0.0:5000 (8)
[2018-12-04 19:32:30 +0000] [8] [INFO] Using worker: sync
[2018-12-04 19:32:30 +0000] [16] [INFO] Booting worker with pid: 16
<my app's output>

When running the same image in k8s, I get:

[2018-12-10 21:09:42 +0000] [5] [INFO] Starting gunicorn 19.7.1
[2018-12-10 21:09:42 +0000] [5] [INFO] Listening at: http://0.0.0.0:5000 (5)
[2018-12-10 21:09:42 +0000] [5] [INFO] Using worker: sync
[2018-12-10 21:09:42 +0000] [13] [INFO] Booting worker with pid: 13
[2018-12-10 21:10:52 +0000] [16] [INFO] Booting worker with pid: 16
[2018-12-10 21:10:53 +0000] [19] [INFO] Booting worker with pid: 19
[2018-12-10 21:14:40 +0000] [22] [INFO] Booting worker with pid: 22
[2018-12-10 21:16:14 +0000] [25] [INFO] Booting worker with pid: 25
[2018-12-10 21:16:25 +0000] [28] [INFO] Booting worker with pid: 28
<etc>

My k8s deployment YAML looks like:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: my-frontend
          image: my-registry/my-frontend:latest
          ports:
            - containerPort: 80
        - name: my-backend
          image: my-registry/my-backend:latest
          ports:
            - containerPort: 5000

Here, the container in question is my-backend. Any ideas why this is happening?

Update: as I was writing this, the events list printed by kubectl describe pods was updated with the following:

Warning  FailedMount            9m55s                  kubelet, minikube  MountVolume.SetUp failed for volume "default-token-k2shm" : Get https://localhost:8443/api/v1/namespaces/default/secrets/default-token-k2shm: net/http: TLS handshake timeout
Warning  FailedMount            9m53s (x2 over 9m54s)  kubelet, minikube  MountVolume.SetUp failed for volume "default-token-k2shm" : secrets "default-token-k2shm" is forbidden: User "system:node:minikube" cannot get secrets in the namespace "default": no path found to object
Normal   SuccessfulMountVolume  9m50s                  kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-k2shm"

Not sure if it's relevant to my issue.

Answer 1:

I solved this by adding resources under the container spec; mine needed more memory. When a worker gets killed for using too much memory, the gunicorn master spawns a replacement, which is exactly the repeated "Booting worker" pattern in the log above.

resources:
  requests:
    memory: "512Mi"
    cpu: 0.1
  limits:
    memory: "1024Mi"
    cpu: 1.0
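
For reference, here's roughly how that block slots into the backend container from the question's deployment; the numbers are just the ones above, so size them to whatever the app actually needs:

containers:
  - name: my-backend
    image: my-registry/my-backend:latest
    ports:
      - containerPort: 5000
    resources:
      requests:
        # reserved at scheduling time, so the pod only lands on a node with this much free
        memory: "512Mi"
        cpu: 0.1
      limits:
        # hard cap; give the workers enough headroom that they don't get killed under load
        memory: "1024Mi"
        cpu: 1.0

Setting the request also keeps the scheduler from packing the pod onto a node that can't actually give it that much memory.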

Hope that helps.