I am working to rebuild my company's dev/test/QA environment. We have 10-15 programmers who are involved in a number of projects. They currently all develop locally on their PCs and use the dev environment for testing. We currently do not have a QA environment, so deployments are frequently a pain because bugs are usually found only after something has gone live. Here's what I envision:
- Doing away with everyone's local admin privileges and making everyone develop on a dev server
- Creating a QA environment that is identical to our production systems. This will allow them to test deployments.
- Creating a new test environment that is more locked down than the dev server so that proper testing can be done.
What are your thoughts? What is the best way to set up an environment like this? We develop ASP .NET applications using MS Visual Studio 2008 (if that helps).
This seems to be screaming for Continuous Integration, to my mind. This is where you'd have a second part of the dev environment that isn't someone's local machine but can be used to show what is currently being done and to make sure code merges aren't breaking things. That is separate from a test environment, which is what QA would use. Then there should be another environment, a quasi-production (staging) level, so that hotfixes can be published separately from larger releases, which may need more time for QA to run enough regression testing to ensure the new functionality isn't breaking a lot of other stuff.
Of course, you should force them to develop as non-admins locally, but that's another story.
The first task you want to work on is to ensure every product your company works on has a **deployment package that you can deploy in an automated manner on a squeaky clean machine**. If you don't have such a package, you will have a lot of trouble enforcing the process above, since every deployment will require manual intervention that costs your company so much time and resources that nobody will bother with it.
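To make that concrete, here is a minimal sketch of what an automated, environment-agnostic deployment could look like for an ASP.NET site. Everything in it (the `Deploy.exe` name, the folder layout, the `web.<env>.config` convention) is an assumption for illustration, not something from the question:

```csharp
// Minimal deployment sketch (hypothetical paths and file names).
// Copies a pre-built site to a target folder and drops in the
// environment-specific web.config, so the same package can be
// pushed to dev, QA, or production without manual edits.
using System;
using System.IO;

class Deploy
{
    static void Main(string[] args)
    {
        // e.g. Deploy.exe \\buildserver\drops\MyApp\1.0.42 D:\Sites\MyApp qa
        string package = args[0];   // folder produced by the build
        string target  = args[1];   // IIS site folder on the target box
        string env     = args[2];   // "dev", "qa" or "prod"

        CopyDirectory(package, target);

        // Each environment keeps its own config inside the package,
        // e.g. web.qa.config; the right one wins at deploy time.
        File.Copy(Path.Combine(package, "web." + env + ".config"),
                  Path.Combine(target, "web.config"), true);

        Console.WriteLine("Deployed {0} to {1} as {2}", package, target, env);
    }

    static void CopyDirectory(string source, string dest)
    {
        Directory.CreateDirectory(dest);
        foreach (string file in Directory.GetFiles(source))
            File.Copy(file, Path.Combine(dest, Path.GetFileName(file)), true);
        foreach (string dir in Directory.GetDirectories(source))
            CopyDirectory(dir, Path.Combine(dest, Path.GetFileName(dir)));
    }
}
```

Once a package can be pushed to a clean machine with a single command like that, the same command works for dev, QA and production, and the rest of the process becomes much easier to enforce.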
The second task is to come up with a definitive configuration for your deployment servers and prepare images that anyone can deploy on a local or a virtual machine. This will be the baseline for any of your testing and should be as close to the actual production configuration as possible.
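One cheap way to keep those images honest is a small check that runs on each freshly provisioned box and compares it to the documented baseline. The sketch below only checks the OS and CLR versions, and the expected values are made-up examples; in practice you'd check whatever your production baseline actually specifies:

```csharp
// Baseline sanity check (illustrative only): run on a freshly imaged
// dev/QA box to confirm it matches the documented production baseline
// before anyone starts testing on it. The expected values are examples.
using System;

class BaselineCheck
{
    static int Main()
    {
        int failures = 0;

        // Example baseline: Windows Server 2003-era OS (major version 5)
        // and the 2.0 CLR that .NET 2.0/3.5 applications run on.
        if (Environment.OSVersion.Version.Major != 5)
        {
            Console.Error.WriteLine("Unexpected OS: " + Environment.OSVersion);
            failures++;
        }
        if (Environment.Version.Major != 2)
        {
            Console.Error.WriteLine("Unexpected CLR: " + Environment.Version);
            failures++;
        }

        Console.WriteLine(failures == 0
            ? "Machine matches the baseline."
            : failures + " difference(s) from the baseline.");
        return failures;
    }
}
```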
I'm not a big fan of centralizing developers onto a dev server. Folks should be able to edit and merge from wherever they happen to be, with whatever system they happen to be on. What problem are you trying to solve with this? There might be another solution to that problem.
The QA server is a must. Your QA team needs a place they can go to break things that won't impact development.
I'm assuming that by "TEST" server you mean a place where nightly builds can be put so developers can do their own testing before releasing to QA? That's a very good idea, and as Cen mentions, a build task that pushes nightly builds to that server could help this process along.
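For what it's worth, the nightly job doesn't have to be anything fancy to start with. Here is a rough sketch of the kind of program a Windows scheduled task could run each night; the MSBuild path is the stock .NET 3.5 location used by VS2008 projects, while the solution path and the \\testserver share are placeholders:

```csharp
// Nightly-build sketch: run from a Windows scheduled task. Builds the
// solution in Release and copies the web project to the TEST server,
// but only if the build succeeded. All paths and shares are placeholders.
using System;
using System.Diagnostics;

class NightlyBuild
{
    static void Main()
    {
        ProcessStartInfo build = new ProcessStartInfo(
            @"C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe",
            @"C:\Source\MyApp\MyApp.sln /t:Rebuild /p:Configuration=Release");
        build.UseShellExecute = false;

        using (Process msbuild = Process.Start(build))
        {
            msbuild.WaitForExit();
            if (msbuild.ExitCode != 0)
            {
                Console.Error.WriteLine("Build failed - nothing was pushed to TEST.");
                Environment.Exit(msbuild.ExitCode);
            }
        }

        // Crude publish step: copy the web project (including bin) to the
        // TEST web server's share. A real setup would hand off to the
        // deployment package described in the other answer instead.
        using (Process copy = Process.Start("xcopy",
            @"C:\Source\MyApp\MyApp.Web\*.* \\testserver\MyAppSite /E /Y"))
        {
            copy.WaitForExit();
        }
    }
}
```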
Remember - there is no such thing as the "ideal" general development environment that can be applied in all cases. Often, technical limitations prevent full adoption of these ideas. As a contractor of some years' standing, I have found that the worst system I worked with had no local admin rights: every little install required a call to tech support, and they always cursed us developers for asking too much of them.
The best scenario I've had is this: if you are going to remove local admin rights, give each developer a powerful locally hosted virtual machine. Put the VMs on a DMZ on the network so developers can do whatever they want with them. If they mess something up, you can simply restore a VM from file. The important thing with this scenario is to use a good source repository like Git, Team Foundation Server, SVN, etc. This is the way development is supposed to be done - without any reliance on the developer workstations beyond actually typing in the code.
A list of this and other tips:
- Allow developers total freedom within their virtual machines (Internet access, application installation, etc.).
- Use a good source control repository that each developer can branch from at will. Enforce frequent check-ins (say, once an hour) and have a build server (continuous integration, or "CI") that checks for broken builds. The CI server should email everyone on the team when the build is broken (see the sketch after this list).
- Give each local machine the best resources you can afford. I hear the argument that 4GB is enough for Visual Studio. Nothing could be further from the truth. You may decide to stick with 4GB anyway, but trust me - when your developers' machines are paging to disk over and over again because each build takes up a lot of memory, you are losing minutes each hour - hours each week - in lost productivity because of slow machines.
- Try not to look down your nose at your developers - they'll smell it a mile off and resent you for it (want to be responsible for disgruntled developers deleting source code or introducing bugs?). Chances are that the reason they are "sloppy developers" is that nobody else in the company is able to manage people. The best teams are led by intelligent, open, educated project managers. They get what they need when they need it. The cost of software is NOTHING compared to the cost of a developer's wages - yet I still hear of this or that manager refusing a product because it costs a grand. Last time it was XML Spy - because "Notepad will suffice". Sure it will - just as legs suffice instead of cars, but I don't want to walk everywhere dammit!
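As promised in the CI tip above, the "email everyone when the build breaks" part needs very little code. This is only a sketch: the exit code and committer name would come from whatever build script or CI tool you use, and the addresses, SMTP host and build.log name are placeholders:

```csharp
// Sketch of the notification step a CI job can run after compiling:
// if MSBuild returned a non-zero exit code, mail the whole team and
// name the last person to check in.
using System;
using System.IO;
using System.Net.Mail;

class BuildNotifier
{
    static void Main(string[] args)
    {
        int exitCode = int.Parse(args[0]);   // MSBuild's exit code
        string committer = args[1];          // whoever checked in last

        if (exitCode == 0) return;           // green build, stay quiet

        string log = File.Exists("build.log")
            ? File.ReadAllText("build.log")
            : "(no build log found)";

        MailMessage mail = new MailMessage(
            "buildserver@example.com",
            "devteam@example.com",
            "BUILD BROKEN by " + committer,
            "The last check-in broke the build.\r\n\r\n" + log);

        new SmtpClient("smtp.example.com").Send(mail);
    }
}
```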
To go against the grain - I actually think removing admin rights from all developers is a good thing IF you can create a power-user role with most of the abilities they need. The biggest issue I find with a team is people applying patches or installing additional software that they haven't cleared with management. Last time, someone installed ReSharper and then complained that the machine was running slowly. They had a 2GB machine, and ReSharper 5 needs 4GB minimum to run on top of Visual Studio 2010.
Additionally - learn to develop without the use of a mouse. This is a radical concept, I know, but the mouse is slower than keyboard shortcuts. Unless the icon is in the very corner of the screen, it takes on average a second or two to find an icon and click it. Remembering a shortcut is quicker.
I wouldn't deal with this by removing local admin privileges - that will do more harm than good - but I would recommend setting up a build server to verify builds in a controlled environment. The toolset I've adopted, which I've found works very well for me and my team of a dozen or so volunteers, is built around TeamCity for continuous integration and Atlassian's Crucible for code review, along with other supporting tools.
I can't recommend those tools highly enough. TeamCity deals with your "we found the bug after we shipped" problem by moving your builds off the developers' machines and building in a clean, controlled environment. It'll also run unit tests and ensure that you always have a working build (by naming and shaming the build breaker).
Crucible is a valuable product that lets you do peer code reviews easily and with full auditing, so you can verify that they are being done correctly.
The other items I think are self-explanatory; they all contribute towards 'best practices' that will move your shop a long way up the 'Joel Test'. Atlassian has an offer of 10 licenses for $10 on some of their products, which is hard to beat.
All that notwithstanding, the solution to your development woes is going to be at least partly cultural. You can put these (or other) tools in place, but you'll need buy-in from the team and management, and you'll need to address your practices to ensure developers use them. Some re-education might be required, because sloppy developers don't usually like being forced to up their game. I would start with the build server and insist that no code can be released unless it comes from an automated build. A lot of good practices will fall out of that. You can adopt unit testing and code reviews as and when it suits your organization - but plan to do it from the outset.
If you really want to streamline things, take a look at the Continuous Deployment concept, which is becoming very popular. Here's a good introductory post; the overall goal is to deploy directly from development to production, using strict automated checking to eliminate errors.
A full continuous deployment setup might be a bit much to bite off initially, but you could start by trying out this process to go from development to QA.
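If you do try that, the dev-to-QA step can start life as nothing more than a chain of commands where each step must succeed before the next runs. This sketch assumes the hypothetical Deploy.exe from the earlier answer and made-up paths for MSBuild and NUnit; any test runner that returns a non-zero exit code on failure would do:

```csharp
// Continuous-deployment-style gate (a sketch, not a full pipeline):
// promote a build from dev to QA only if the automated checks pass.
// Tool paths, project names and the QA target are all placeholders.
using System;
using System.Diagnostics;

class PromoteToQa
{
    static void Main()
    {
        Run(@"C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe",
            @"C:\Source\MyApp\MyApp.sln /t:Rebuild /p:Configuration=Release");

        // NUnit is just one option; MSTest or any runner with an exit code works.
        Run(@"C:\Tools\NUnit\bin\nunit-console.exe",
            @"C:\Source\MyApp\MyApp.Tests\bin\Release\MyApp.Tests.dll");

        // Only reached if both steps above returned 0.
        Run("Deploy.exe", @"\\buildserver\drops\MyApp\latest \\qaserver\MyAppSite qa");
        Console.WriteLine("Build promoted to QA.");
    }

    static void Run(string tool, string arguments)
    {
        ProcessStartInfo info = new ProcessStartInfo(tool, arguments);
        info.UseShellExecute = false;

        using (Process p = Process.Start(info))
        {
            p.WaitForExit();
            if (p.ExitCode != 0)
            {
                Console.Error.WriteLine(tool + " failed - promotion stopped.");
                Environment.Exit(p.ExitCode);
            }
        }
    }
}
```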