I've been developing a workflow for practicing a mostly automated continuous deployment cycle for a PHP project. I'd like some feedback on possible process or technical bottlenecks in this workflow, suggestions for improvement, and ideas for how to better automate and increase the ease-of-use for my team.
Core components:

- Hudson (CI server)
- Git and GitHub
- PHPUnit (unit tests)
- Selenium RC
- Sauce OnDemand (for automated, cross-browser, cloud testing with Selenium RC)
- Puppet (for automating test server deployments)
- Gerrit (for Git code review)
- Gerrit Trigger (for Hudson)
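For context, here is a rough sketch of the kind of unit test Hudson runs via the PHPUnit CLI on every build; the Calculator class is a made-up placeholder, not part of the project:

```php
<?php
// Rough sketch of a unit test executed by the CI server via `phpunit`.
// Calculator is a hypothetical placeholder class used only for illustration.
class CalculatorTest extends PHPUnit_Framework_TestCase
{
    public function testAddReturnsSumOfOperands()
    {
        $calculator = new Calculator();
        $this->assertEquals(5, $calculator->add(2, 3));
    }
}
```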
EDIT: I've changed the workflow graphic to take ircmaxwell's contributions into account by: removing PHPUnit's extension for Selenium RC and running those tests only as part of the QC stage; adding a QC stage; moving UI testing after code review but before merges; moving merges after the QC stage; moving deployment after the merge.
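To illustrate what now runs only in the QC stage, here is a rough sketch of a UI test built on PHPUnit's Selenium RC extension; the browser, URL, and page details are made-up placeholders, and with Sauce OnDemand the RC host would point at their service rather than a local server:

```php
<?php
// Rough sketch of a UI test using PHPUnit's Selenium RC extension.
// These tests live in a separate suite so they run only during the QC
// stage, not on every commit. Host, URL, and page details are placeholders.
class LoginPageTest extends PHPUnit_Extensions_SeleniumTestCase
{
    protected function setUp()
    {
        $this->setBrowser('*firefox');
        $this->setBrowserUrl('http://staging.example.com/');
        // With Sauce OnDemand you would point the RC host/port at their
        // endpoint instead of a locally running Selenium RC server.
    }

    public function testLoginPageLoads()
    {
        $this->open('/login');
        $this->assertTitle('Log in');
    }
}
```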
This workflow graphic describes the process. My questions / thoughts / concerns follow.
My concerns / thoughts / questions:

- Overall difficulty using this system.
- Time involvement.
- Difficulty employing Gerrit.
- Difficulty employing Puppet.
- We'll be deploying on Amazon EC2 instances later. If we're setting up Debian packages with Puppet and deploying to Linode slices now, is there a potential for a working deployment on Linode to break on EC2? Should we instead be doing our builds and deployments on EC2 from the get-go? (A minimal Puppet manifest sketch follows this list.)
- Another question re: EC2 and Puppet. We're also considering using Scalr as a solution. Would it make as much sense to avoid the overhead of Puppet for this alone and invest in Scalr instead? I have a secondary (ha!) concern here about cost; the Selenium tests shouldn't run so often that EC2 build instances would be up 24/7, but for something like a five-minute build, paying for an hour of EC2 usage seems a bit much.
- Possible process bottlenecks on merges.
- Could "A" be moved?
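Regarding the Linode-to-EC2 concern above, here is a minimal, hypothetical Puppet manifest sketch; assuming the Linode slices and the EC2 instances run the same Debian release, the same manifest should apply to either, so the portability risk lies mostly in things outside Puppet's scope (base images, networking, storage):

```puppet
# Hypothetical sketch of a provider-agnostic manifest; package and service
# names are placeholders for whatever the real stack needs.
class webserver {
  package { ['apache2', 'libapache2-mod-php5', 'php5-cli']:
    ensure => installed,
  }

  service { 'apache2':
    ensure  => running,
    enable  => true,
    require => Package['apache2'],
  }
}

node default {
  include webserver
}
```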
Credits: Portions of this workflow are inspired by Digg's awesome post on continuous deployment. The workflow graphic above is inspired by the Android OS Project.
How many people are working on it? If you only have maybe 10 or 20 developers, I'm not sure it will make sense to put such an elaborate workflow into place. If you're managing 500, sure...
My personal feeling is KISS: Keep It Simple, Stupid... You want a process that's both efficient and, more importantly, simple. If it's complicated, either nobody is going to do it right, or over time parts will slip. If you make it simple, it will become second nature and after a few weeks nobody will question the process (well, the semantics of it, anyway)...
And my other personal feeling is: always run all of your UNIT tests. That way you can skip a whole decision tree in your flow chart. After all, what's more expensive: a few minutes of CPU time, or the brain cycles needed to understand the difference between a passing partial test run and a failing full run? Remember, a fail is a fail, and there's no practical reason code that could fail the build should ever be shown to a reviewer.
Now, Selenium tests are typically quite expensive, so I might agree to push those off until after the reviewer approves. But you'll need to think about that one...
Oh, and if I were implementing this, I would put a formal QC stage in there. I want human testers to look at any changes that are being made. Yes, Selenium can verify the things you know about, but only a human can find things you didn't think of. Feed their findings back into new Selenium and integration tests to prevent regressions...
It's important to make your tests extremely fast, i.e. no I/O, and able to run in parallel and distributed. I don't know how applicable this is to PHP, but if you can test units of code against an in-memory DB and mock the environment, you'll be better off.
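For example, in PHP a rough sketch along these lines keeps a test entirely in memory (UserRepository and Mailer are hypothetical placeholder classes):

```php
<?php
// Sketch of an IO-free unit test: SQLite in memory instead of a real
// database, and a PHPUnit mock instead of a real mail service.
// UserRepository and Mailer are hypothetical placeholders.
class UserRepositoryTest extends PHPUnit_Framework_TestCase
{
    public function testSaveStoresUserAndSendsWelcomeMail()
    {
        // No disk or network IO: everything lives in this process.
        $pdo = new PDO('sqlite::memory:');
        $pdo->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)');

        // Mock the environment rather than talking to a real mail server.
        $mailer = $this->getMock('Mailer');
        $mailer->expects($this->once())
               ->method('sendWelcome');

        $repo = new UserRepository($pdo, $mailer);
        $repo->save('alice@example.com');

        $this->assertEquals('alice@example.com',
            $repo->findByEmail('alice@example.com')->email);
    }
}
```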
If you have QA/QC or any human in the way between commit and production, you'll have a problem getting to full continuous deployment. The key is trusting your testing, monitoring, and auto-response (immune system) enough to eliminate error-prone processes involving humans from your system.
All handovers between functions slow things down, and with that comes an increase in the amount of change (and hence risk) that goes into a deployment.
Manual quality gates are by definition an acceptance that quality has not been built in from the start. The only reason code needs to be reviewed later is because there is some belief that the quality is not good enough already.
I'm currently trying to remove formal code review from our pipelines for exactly this reason. It causes feedback delays, and quoting Martin Fowler:
"The whole point of Continuous Integration is to provide rapid feedback. Nothing sucks the blood of a CI activity more than a build that takes a long time. "
Instead, I'd like to make code review something that submitters request if required, or that is otherwise done at the time of coding by team members, perhaps à la XP pair programming.
I think it should be your goal that once the code is merged to source control, there is absolutely no more manual intervention.
I don't know whether this is relevant to PHP, but you can replace at least some of the code review stage with static analysis.
The quality of code reviews relies on the quality of the reviewers, while static analysis relies on best practices and patterns, and is fully automatic. I'm not saying that code reviews should be abandoned; I simply think they can be done offline.
See
http://en.wikipedia.org/wiki/Static_code_analysis
http://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis