
Start SQS celery worker on Elastic Beanstalk

Published 2019-10-30 03:57

I want to start a celery worker on EB but get an error with very little explanation.

The command in the config file in the .ebextensions dir:

03_celery_worker:
  command: "celery worker --app=config --loglevel=info -E --workdir=/opt/python/current/app/my_project/"

The listed command works fine on my local machine (with only the workdir argument changed).

Error from EB:

Activity execution failed, because: /opt/python/run/venv/local/lib/python3.6/site-packages/celery/platforms.py:796: RuntimeWarning: You're running the worker with superuser privileges: this is absolutely not recommended!

Starting new HTTPS connection (1): eu-west-1.queue.amazonaws.com (ElasticBeanstalk::ExternalInvocationError)

I have updated the celery worker command with the parameter --uid=2, and the privileges error disappeared, but the command execution still fails because of

ExternalInvocationError

Any suggestions as to what I am doing wrong?

Answer 1:

ExternalInvocationError

As far as I understand, it means that the listed command cannot be run as an EB container command. You need to create a script on the server and run celery from that script. This post describes how to do it.

Update: a config file needs to be created in the .ebextensions directory. I called it celery.config. The post linked above provides a script that works almost fine. It needs a few small additions to work 100% correctly. I had problems with scheduling periodic tasks (celery beat). Here are the steps for how to fix them:

  1. Install django-celery-beat (add it to requirements): `pip install django-celery-beat`, add it to the installed apps, and use the `--scheduler` parameter when starting celery beat. Instructions are here.

  2. Specify in the script which user runs each process. The celery worker runs as the user `celery`, which is created earlier in the script (if it doesn't exist). When I tried to start celery beat I got a PermissionDenied error. That means the `celery` user does not have all the necessary rights. Using ssh I logged into EB, looked at the list of all users (`cat /etc/passwd`), and decided to use the `daemon` user.
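The user check from step 2 can be sketched as a small shell helper (a sketch; the `user_exists` function is not part of the original script, it just demonstrates the `/etc/passwd` lookup described above):

```shell
# Sketch of the step-2 check: make sure the user you put into the supervisord
# config actually exists on the instance before deploying.
user_exists() {
    grep -q "^$1:" /etc/passwd
}

# root exists on every Linux box; a made-up name does not
user_exists root && echo "root ok"
user_exists no_such_user_xyz || echo "pick another user from /etc/passwd"
```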

The listed steps solved the celery beat error. The updated config file with the script is below (celery.config):

```
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash

      # Create required directories
      sudo mkdir -p /var/log/celery/
      sudo mkdir -p /var/run/celery/

      # Create group called 'celery'
      sudo groupadd -f celery
      # add the user 'celery' if it doesn't exist and add it to the group with same name
      id -u celery &>/dev/null || sudo useradd -g celery celery
      # add permissions to the celery user for r+w to the folders just created
      sudo chown -R celery:celery /var/log/celery/
      sudo chown -R celery:celery /var/run/celery/

      # Get django environment variables
      celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      celeryenv=${celeryenv%?}

      # Create CELERY configuration script
      celeryconf="[program:celeryd]
      directory=/opt/python/current/app
      ; Set full path to celery program if using virtualenv
      command=/opt/python/run/venv/bin/celery worker -A config.celery:app --loglevel=INFO --logfile=\"/var/log/celery/%%n%%I.log\" --pidfile=\"/var/run/celery/%%n.pid\"

      user=celery
      numprocs=1
      stdout_logfile=/var/log/celery-worker.log
      stderr_logfile=/var/log/celery-worker.log
      autostart=true
      autorestart=true
      startsecs=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 60

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998

      environment=$celeryenv"

      # Create CELERY BEAT configuration script
      celerybeatconf="[program:celerybeat]
      ; Set full path to celery program if using virtualenv
      command=/opt/python/run/venv/bin/celery beat -A config.celery:app --loglevel=INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler --logfile=\"/var/log/celery/celery-beat.log\" --pidfile=\"/var/run/celery/celery-beat.pid\"

      directory=/opt/python/current/app
      user=daemon
      numprocs=1
      stdout_logfile=/var/log/celerybeat.log
      stderr_logfile=/var/log/celerybeat.log
      autostart=true
      autorestart=true
      startsecs=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 60

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=999

      environment=$celeryenv"

      # Create the celery supervisord conf script
      echo "$celeryconf" | tee /opt/python/etc/celery.conf
      echo "$celerybeatconf" | tee /opt/python/etc/celerybeat.conf

      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "celery.conf" /opt/python/etc/supervisord.conf
        then
          echo "[include]" | tee -a /opt/python/etc/supervisord.conf
          echo "files: uwsgi.conf celery.conf celerybeat.conf" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Enable supervisor to listen for HTTP/XML-RPC requests.
      # supervisorctl will use XML-RPC to communicate with supervisord over port 9001.
      # Source: https://askubuntu.com/questions/911994/supervisorctl-3-3-1-http-localhost9001-refused-connection
      if ! grep -Fxq "[inet_http_server]" /opt/python/etc/supervisord.conf
        then
          echo "[inet_http_server]" | tee -a /opt/python/etc/supervisord.conf
          echo "port = 127.0.0.1:9001" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Reread the supervisord config
      supervisorctl -c /opt/python/etc/supervisord.conf reread

      # Update supervisord in cache without restarting all services
      supervisorctl -c /opt/python/etc/supervisord.conf update

      # Start/Restart celeryd through supervisord
      supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd
      supervisorctl -c /opt/python/etc/supervisord.conf restart celerybeat

commands:
  01_killotherbeats:
    command: "ps auxww | grep 'celery beat' | awk '{print $2}' | sudo xargs kill -9 || true"
    ignoreErrors: true
  02_restartbeat:
    command: "supervisorctl -c /opt/python/etc/supervisord.conf restart celerybeat"
    leader_only: true
```

One thing to note: in my project the celery.py file is in the config directory, which is why I write `-A config.celery:app` when starting the celery worker and celery beat.
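The env-flattening pipeline near the top of the script (the `celeryenv=` lines) can be tried standalone. Below is a sketch run against a fake env file so it works anywhere; the real file on the instance is /opt/python/current/env, and the variable names here are made up for the demo:

```shell
# Write a fake env file in the same "export KEY=VALUE" format EB uses.
cat > /tmp/fake_env <<'EOF'
export DJANGO_SETTINGS_MODULE=config.settings
export SECRET_KEY=abc%def
EOF

# Flatten to one comma-separated line for supervisord's environment= key,
# escape literal % signs (supervisord treats % specially), drop "export ".
celeryenv=$(cat /tmp/fake_env | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g')
# Strip the trailing comma left by the final newline.
celeryenv=${celeryenv%?}
echo "$celeryenv"
# → DJANGO_SETTINGS_MODULE=config.settings,SECRET_KEY=abc%%def
```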



Source: Start SQS celery worker on Elastic Beanstalk