We have two instances behind a load balancer running the same Rails app with Passenger. When we deploy, the server startup time causes requests to time out. As a result we have a script that updates each webserver individually: it takes one off the LB, deploys with cap, tests a dynamic page load, and puts it back on the LB.
How can we get capistrano to do this for us with one command? I have been able to set it up to deploy to all instances simultaneously but they all restart at the same time and cause the site to be unavailable for 20 seconds.
What am I missing here? Seems like this should be a common pattern.
It's not actually that straightforward to serialize deployment in Capistrano, which likes to parallelize all of its operations across servers. To restate the issue: you have a handful of servers and want to take each one offline in sequence to update its deployment.
The trick is to override the `deploy:create_symlink` task in your deployment configuration:
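Something along these lines (a minimal Capistrano 2 sketch; the bodies of `perform_task_offline` and `create_symlink_task`, the load-balancer commands, the health-check URL, and the Passenger restart line are placeholders you would adapt to your own setup):

```ruby
set :deployed_servers, []

# Flip the "current" symlink on a single host (the stock step, scoped to one server).
def create_symlink_task(options)
  run "rm -f #{current_path} && ln -s #{release_path} #{current_path}", options
end

# Take one server out of the load balancer, yield, smoke-test it, then re-add it.
# The lb commands and health-check URL are placeholders for whatever your LB uses.
def perform_task_offline(options)
  host = options[:hosts]
  run "/usr/local/bin/lb remove #{host}", options             # placeholder: drain node from LB
  yield
  run "curl -sf http://#{host}/status > /dev/null", options   # placeholder: test a dynamic page
  run "/usr/local/bin/lb add #{host}", options                 # placeholder: put node back on LB
end

namespace :deploy do
  # Override the built-in task so the symlink flip (and Passenger restart)
  # happens on one server at a time instead of on all servers in parallel.
  task :create_symlink, :except => { :no_release => true } do
    find_servers(:roles => :app).each do |server|
      options = { :hosts => server.host }
      perform_task_offline(options) do
        create_symlink_task(options)
        run "touch #{current_path}/tmp/restart.txt", options   # restart Passenger on this host
        deployed_servers << server.host
      end
    end
  end
end
```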
In this case, `perform_task_offline` contains the commands, executed on the server specified in `options`, that remove that server from the load balancer while it `yield`s to the block containing `create_symlink_task`, which creates the deployment symlink.

You should then be able to run the standard `cap` command to deploy, and you'll see the servers sequentially go offline, create the "current" symlink, then come back up.

Note that the above code tracks the servers that have successfully been deployed to in `deployed_servers`. If you want to be able to roll back an active failed deployment (that is, one where the failure happens during the deployment itself) on only the servers that had already been deployed to, you'll need a similar loop inside an `on_rollback do` block, but iterating over only the `deployed_servers`.
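For example, something like this at the top of the overridden task (again a sketch; `previous_release` is Capistrano 2's built-in variable for the prior release, and the helpers are the same placeholders as above):

```ruby
namespace :deploy do
  task :create_symlink, :except => { :no_release => true } do
    # If the deploy transaction fails partway, undo only the servers already updated.
    on_rollback do
      deployed_servers.each do |host|
        options = { :hosts => host }
        perform_task_offline(options) do
          # Point "current" back at the previous release and restart Passenger there.
          run "rm -f #{current_path} && ln -s #{previous_release} #{current_path}", options
          run "touch #{current_path}/tmp/restart.txt", options
        end
      end if previous_release
    end

    # ... the serialized per-server loop from the snippet above ...
  end
end
```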