Puma restart fails on reboot using EC2 + Rails + N


Question:

I have successfully used Capistrano to deploy my Rails app to an Ubuntu EC2 instance. Everything works great on deploy. The Rails app name is deseov12. My issue is that Puma does not start on boot, which will be necessary because production EC2 instances will be instantiated on demand. Puma will start when deploying via Capistrano, and it will also start when running

cap production puma:start

on local machine.

It will also start on the server after a reboot if I run the following commands:

su - deploy
[enter password]
cd /home/deploy/deseov12/current && ( export RACK_ENV="production" ; ~/.rvm/bin/rvm ruby-2.2.4 do bundle exec puma -C /home/deploy/deseov12/shared/puma.rb --daemon )

I have followed the directions from the Puma jungle tools to make Puma start on boot via Upstart, as follows:

Contents of /etc/puma.conf

/home/deploy/deseov12/current

Contents of /etc/init/puma.conf and /home/deploy/puma.conf

# /etc/init/puma.conf - Puma config

# This example config should work with Ubuntu 12.04+.  It
# allows you to manage multiple Puma instances with
# Upstart, Ubuntu's native service management tool.
#
# See workers.conf for how to manage all Puma instances at once.
#
# Save this config as /etc/init/puma.conf then manage puma with:
#   sudo start puma app=PATH_TO_APP
#   sudo stop puma app=PATH_TO_APP
#   sudo status puma app=PATH_TO_APP
#
# or use the service command:
#   sudo service puma {start,stop,restart,status}
#

description "Puma Background Worker"

# no "start on", we don't want to automatically start
stop on (stopping puma-manager or runlevel [06])

# change setuid/setgid below to match your deployment user if you want to run this as a less privileged user
setuid deploy
setgid deploy

respawn
respawn limit 3 30

instance ${app}

script
# this script runs in /bin/sh by default
# respawn as bash so we can source in rbenv/rvm
# quoted heredoc to tell /bin/sh not to interpret
# variables

# source ENV variables manually as Upstart doesn't, eg:
#. /etc/environment

exec /bin/bash <<'EOT'
# set HOME to the setuid user's home, there doesn't seem to be a better, portable way
  export HOME="$(eval echo ~$(id -un))"

  if [ -d "/usr/local/rbenv/bin" ]; then
    export PATH="/usr/local/rbenv/bin:/usr/local/rbenv/shims:$PATH"
  elif [ -d "$HOME/.rbenv/bin" ]; then
    export PATH="$HOME/.rbenv/bin:$HOME/.rbenv/shims:$PATH"
  elif [ -f  /etc/profile.d/rvm.sh ]; then
    source /etc/profile.d/rvm.sh
  elif [ -f /usr/local/rvm/scripts/rvm ]; then
    source /usr/local/rvm/scripts/rvm
  elif [ -f "$HOME/.rvm/scripts/rvm" ]; then
    source "$HOME/.rvm/scripts/rvm"
  elif [ -f /usr/local/share/chruby/chruby.sh ]; then
    source /usr/local/share/chruby/chruby.sh
    if [ -f /usr/local/share/chruby/auto.sh ]; then
      source /usr/local/share/chruby/auto.sh
    fi
    # if you aren't using auto, set your version here
    # chruby 2.0.0
  fi

  cd $app
  logger -t puma "Starting server: $app"

  exec bundle exec puma -C current/config/puma.rb
EOT
end script

Contents of /etc/init/puma-manager.conf and /home/deploy/puma-manager.conf

# /etc/init/puma-manager.conf - manage a set of Pumas

# This example config should work with Ubuntu 12.04+.  It
# allows you to manage multiple Puma instances with
# Upstart, Ubuntu's native service management tool.
#
# See puma.conf for how to manage a single Puma instance.
#
# Use "stop puma-manager" to stop all Puma instances.
# Use "start puma-manager" to start all instances.
# Use "restart puma-manager" to restart all instances.
# Crazy, right?
#

description "Manages the set of puma processes"

# This starts upon bootup and stops on shutdown
start on runlevel [2345]
stop on runlevel [06]

# Path to the file listing the apps to manage
# (one app path per line)
env PUMA_CONF="/etc/puma.conf"

pre-start script
  for i in `cat $PUMA_CONF`; do
    app=`echo $i | cut -d , -f 1`
    logger -t "puma-manager" "Starting $app"
    start puma app=$app
  done
end script
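
For reference, the manager can also be exercised without waiting for a reboot (assuming the standard Upstart tooling on Ubuntu):

# start every app listed in /etc/puma.conf, then inspect the job/instance states
sudo start puma-manager
sudo initctl list | grep puma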

Contents of /home/deploy/deseov12/shared/puma.rb

#!/usr/bin/env puma

directory '/home/deploy/deseov12/current'
rackup "/home/deploy/deseov12/current/config.ru"
environment 'production'

pidfile "/home/deploy/deseov12/shared/tmp/pids/puma.pid"
state_path "/home/deploy/deseov12/shared/tmp/pids/puma.state"
stdout_redirect '/home/deploy/deseov12/shared/log/puma_error.log', '/home/deploy/deseov12/shar$


threads 0,8

bind 'unix:///home/deploy/deseov12/shared/tmp/sockets/puma.sock'

workers 0

activate_control_app

prune_bundler


on_restart do
  puts 'Refreshing Gemfile'
  ENV["BUNDLE_GEMFILE"] = "/home/deploy/deseov12/current/Gemfile"
end

However, I have not been able to make Puma start up automatically after a server reboot. It just does not start.

I would certainly appreciate some help.

EDIT: I just noticed something that could be a clue:

When running the following command as the deploy user:

sudo start puma app=/home/deploy/deseov12/current

ps aux shows a puma process for a few seconds before it disappears:

deploy    4312  103  7.7 183396 78488 ?        Rsl  03:42   0:02 puma 2.15.3 (tcp://0.0.0.0:3000) [20160106224332]

This puma process is different from the working process launched by Capistrano (note the TCP bind instead of the unix socket):

deploy    5489 10.0 12.4 858088 126716 ?       Sl   03:45   0:02 puma 2.15.3 (unix:///home/deploy/deseov12/shared/tmp/sockets/puma.sock) [20160106224332]
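
A way to dig further into why it dies (assuming Ubuntu's default Upstart behaviour of logging each job's console output under /var/log/upstart; the exact file name depends on the job instance) is:

# find and tail the Upstart log for the puma job instance
# (the glob is expanded as root so directory permissions don't get in the way)
sudo ls /var/log/upstart/ | grep puma
sudo sh -c 'tail -n 50 /var/log/upstart/puma*.log'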

Answer 1:

This was finally solved after a lot of research. It turned out the issue was threefold:

1) The proper environment was not being set when running the Upstart script.

2) When using Capistrano, the actual production puma.rb configuration file lives in the /home/deploy/deseov12/shared directory, not in the /current/ directory.

3) The Puma server was not being daemonized properly.

To solve these issues:

1) This line should be added near the top of /etc/init/puma.conf and /home/deploy/puma.conf (it is a job-level Upstart stanza, so it goes outside the script block):

env RACK_ENV="production"

2) and 3) This line:

exec bundle exec puma -C current/config/puma.rb

should be replaced with this one:

exec bundle exec puma -C /home/deploy/deseov12/shared/puma.rb --daemon
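
Putting the pieces together, the relevant parts of the patched /etc/init/puma.conf look roughly like this (paths as in the question; the rbenv/rvm sourcing block from the jungle script is unchanged and elided here):

description "Puma Background Worker"

# 1) run the job with the production environment
env RACK_ENV="production"

stop on (stopping puma-manager or runlevel [06])

setuid deploy
setgid deploy

respawn
respawn limit 3 30

instance ${app}

script
exec /bin/bash <<'EOT'
  # ... rbenv/rvm sourcing as in the original jungle script ...

  cd $app
  logger -t puma "Starting server: $app"

  # 2) + 3) use the Capistrano-generated config in shared/ and daemonize
  exec bundle exec puma -C /home/deploy/deseov12/shared/puma.rb --daemon
EOT
end script

With that in place, sudo start puma app=/home/deploy/deseov12/current (or a reboot, via puma-manager) should bring Puma up bound to the unix socket from shared/puma.rb rather than the default tcp://0.0.0.0:3000.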

After doing this, the Puma server starts properly on reboot or when a new instance is spun up. Hope this helps someone avoid hours of troubleshooting.