Mitigating the risks of auto-deployment

Posted 2019-09-01 09:28

Question:


I currently work for a company that deploys through GitHub. However, we have to log in to all 3 servers and update them manually with a shell script. When I talked to the CTO, he made it very clear that auto-deployment is like voodoo to him. Which is understandable. We have developers in 4 different countries working remotely. If someone were to accidentally push to the wrong branch, we could experience downtime, and with our service we cannot be down for more than 10 minutes. And with all of our developers in different timezones, our CTO wouldn't know until the next morning, and because of the vast time differences we'd have trouble meeting with the developers who caused the issue.

My Problem: Why I want auto-deploy

While working on my personal project I decided that it may be in my best interest to use auto-deployment; still, my project is mission-critical, and I'd like to mitigate downtime and human error as much as possible. The problem with manual deployment is that I simply cannot deploy by hand to up to 20 servers via SSH in a reasonable amount of time. The problem compounds when I consider auto-scaling: I'd need to spin up a new server from an image and deploy to it.

My Stack

My service is built on the Node.js Express framework. These environments are rich in deployment and bootstrapping utilities. My project uses npm's package.json to uglify my scripts on deploy, and it runs my service as a daemon using forever-monitor. I'm also considering Grunt to further bootstrap both my production and testing environments.
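For illustration, the kind of `scripts` section I mean looks roughly like this (script names and exact commands are simplified placeholders, not my actual config):

```json
{
  "scripts": {
    "build": "uglifyjs src/*.js -o dist/app.min.js",
    "start": "forever start dist/app.min.js",
    "stop": "forever stop dist/app.min.js",
    "deploy": "npm run build && npm run stop && npm run start"
  }
}
```

The point is that a single `npm run deploy` on a box does the whole uglify/restart cycle, which is what any auto-deploy hook would end up invoking.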

Deployment Methods

I've considered so far:

  • Auto-deploy with git, using webhooks
  • Deploying manually with git via shell
  • Deploying with npm via shell
  • Docker

I'm not well versed in technologies like Docker, but I'm interested and I'd definitely give points to whoever gave me a good description as to why I should or shouldn't use Docker, because I'm very interested in its use. Other methods are welcome.

My Problem: Why I fear auto-deploy

In a mission-critical environment, downtime can put your business on hold, and to make matters worse there's a fleet of end users hitting the refresh button. If someone pushes something that doesn't pass the build to the production branch and it's auto-deployed, then I'm looking at a very messy situation.

I love the elegance of auto-deployment, but the risks make me skeptical. I'm very much in favor of making myself as productive as possible, so I'm looking for a way to deploy to many servers with ease and in a very efficient manner.
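The mitigation I can picture is a rolling rollout that stops at the first failed health check, so a bad build takes down at most one box instead of the whole fleet. A sketch, with the per-server deploy and health-check steps injected as functions (the structure is my own invention, not any particular tool's API):

```javascript
// Rolling deploy: update servers one at a time and halt on the first failure.
// deployTo(server) and isHealthy(server) are injected, so the strategy is
// independent of how a single server is actually updated (SSH, image swap, ...).
async function rollingDeploy(servers, deployTo, isHealthy) {
  const updated = [];
  for (const server of servers) {
    await deployTo(server);
    if (!(await isHealthy(server))) {
      // Abort: the remaining servers still run the old, known-good version.
      return { ok: false, failedOn: server, updated };
    }
    updated.push(server);
  }
  return { ok: true, failedOn: null, updated };
}
```

With 20 servers, a push that breaks the health check stops the rollout after one machine, and the load balancer can keep routing around it.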

The Answer I'm Looking For

Explain to me how to mitigate the risks of auto-deployment, or explain to me an alternative which is better suited to my project. Feel free to ask for any missing details in the comments.

Answer 1:

No simple answer here. I offer a set of slides published by Mike Brittain from Etsy, a company that practices continuous deployment:

http://www.slideshare.net/mikebrittain/mbrittain-continuous-deploymentalm3public

Selected highlights:

  • Deploy frequently and in small batches
  • Use config/feature flags to control system behaviour and "dark release" major features
  • Code review all changes to the production branch
  • Invest in monitoring and improve the feedback loop
  • Manage "services" separately from the "application", and be mindful of run-time versions and backward-compatible changes.
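The feature-flag point can be this small in practice; a sketch (the flag store and flag names are invented for illustration):

```javascript
// Tiny feature-flag check: ship code "dark", enable it per flag at run time.
// The flags object would normally come from a config file or a flag service.
function isEnabled(flags, name) {
  return flags[name] === true;
}

function renderCheckout(flags) {
  // The new flow is already deployed but stays dark until the flag is
  // flipped, which decouples "deploy" from "release".
  return isEnabled(flags, 'newCheckout') ? 'new checkout' : 'old checkout';
}
```

Because flipping a flag is instant and reversible, a bad "release" becomes a config change to undo rather than an emergency redeploy.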

Hope this helps.