Daemonize Celerybeat in Elastic Beanstalk (AWS)

Published 2020-07-10 08:12

Question:

I am trying to run celerybeat as a daemon in Elastic beanstalk. Here is my config file:

files:
  "/opt/python/log/django.log":
    mode: "000666"
    owner: ec2-user
    group: ec2-user
    content: |
      # Log file
    encoding: plain
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      # Get django environment variables
      celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      celeryenv=${celeryenv%?}

      # Create celery configuration script
      celeryconf="[program:celeryd]
      ; Set full path to celery program if using virtualenv
      command=/opt/python/run/venv/bin/celery worker -A avtotest --loglevel=INFO

      directory=/opt/python/current/app
      user=nobody
      numprocs=1
      stdout_logfile=/var/log/celery-worker.log
      stderr_logfile=/var/log/celery-worker.log
      autostart=true
      autorestart=true
      startsecs=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998

      environment=$celeryenv"

      # Create celerybeat configuration script
      celerybeatconf="[program:celerybeat]
      ; Set full path to celery program if using virtualenv
      command=/opt/python/run/venv/bin/celery beat -A avtotest --loglevel=INFO

      ; remove the -A avtotest argument if you are not using an app instance

      directory=/opt/python/current/app
      user=nobody
      numprocs=1
      stdout_logfile=/var/log/celerybeat.log
      stderr_logfile=/var/log/celerybeat.log
      autostart=true
      autorestart=true
      startsecs=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=999

      environment=$celeryenv"

      # Create the celery and celerybeat supervisord conf scripts
      echo "$celeryconf" | tee /opt/python/etc/celery.conf
      echo "$celerybeatconf" | tee /opt/python/etc/celerybeat.conf

      # Add configuration scripts to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
          then
          echo "[include]" | tee -a /opt/python/etc/supervisord.conf
          echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
          echo "files: celerybeat.conf" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Reread the supervisord config
      supervisorctl -c /opt/python/etc/supervisord.conf reread

      # Update supervisord in cache without restarting all services
      supervisorctl -c /opt/python/etc/supervisord.conf update

      # Start/Restart celeryd through supervisord
      supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd

This file daemonizes both celery and celerybeat. Celery is working fine, but celerybeat is not. I don't see a celerybeat.log file being created, which I think suggests that celerybeat is not running.
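
One way to check, assuming SSH access to the instance, is to ask supervisord which programs it is managing:

supervisorctl -c /opt/python/etc/supervisord.conf status

If celerybeat does not show up in that output, supervisord never loaded celerybeat.conf.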

Any ideas about this?

I will post more code if needed. Thanks for the help.

Answer 1:

Your supervisord syntax is a bit off. First of all, you may need to SSH into your instance and edit the supervisord.conf file directly (vim /opt/python/etc/supervisord.conf) to fix these lines:

echo "[include]" | tee -a /opt/python/etc/supervisord.conf
echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
echo "files: celerybeat.conf" | tee -a /opt/python/etc/supervisord.conf

should be

echo "[include]" | tee -a /opt/python/etc/supervisord.conf
echo "files: celery.conf celerybeat.conf" | tee -a /opt/python/etc/supervisord.conf

EDIT:

To run celerybeat and make sure that it runs only ONCE across all your machines, you should place these lines in your config file:

04_killotherbeats:
  command: "ps auxww | grep 'celery beat' | awk '{print $2}' | sudo xargs kill -9 || true"
05_restartbeat:
  command: "supervisorctl -c /opt/python/etc/supervisord.conf restart celerybeat"
  leader_only: true
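
For context, a minimal sketch of how these commands would sit in an .ebextensions config file, assuming they live under container_commands (leader_only is only honored there):

container_commands:
  # kill any stray beat processes on every instance
  04_killotherbeats:
    command: "ps auxww | grep 'celery beat' | awk '{print $2}' | sudo xargs kill -9 || true"
  # then start exactly one beat, on the leader instance only
  05_restartbeat:
    command: "supervisorctl -c /opt/python/etc/supervisord.conf restart celerybeat"
    leader_only: true

With leader_only: true, the restart command runs only on the leader instance during a deployment, so only one celerybeat process ends up scheduling tasks.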