Database Deployment Strategies (SQL Server)

Posted 2019-01-20 23:06

I am looking for a way to do daily deployments and keep the database scripts in line with releases.

Currently, we have a fairly decent way of deploying our source: we have unit code coverage, continuous integration and rollback procedures.

The problem is keeping the database scripts in line with a release. Everyone seems to try a script out on the test database and then run it on live; when the ORM mappings are updated (that is, when the changes go live), the application picks up the new column.

The first problem is that none of the scripts HAVE to be recorded anywhere. Generally everyone "attempts" to put them into a Subversion folder, but some of the lazier people just run the script on live, and most of the time no one knows who has done what to the database.

The second issue is that we have 4 test databases and they are ALWAYS out of sync; the only way to truly bring them back in line is to do a restore from the live database.

I am a big believer that a process like this needs to be simple, straightforward and easy to use in order to help a developer, not hinder them.

What I am looking for are techniques/ideas that make it EASY for the developer to want to record their database scripts so they can be run as part of a release procedure. A process that the developer would want to follow.

Any stories, use cases or even a link would be helpful.

15 Answers
叼着烟拽天下 · 2019-01-20 23:18

For this very problem I chose to use a migration tool: Migratordotnet.

With migrations (in any tool) you have a simple class used to perform your changes and undo them. Here's an example:

using System.Data;
using Migrator.Framework;

[Migration(62)]
public class _62_add_date_created_column : Migration
{
    public override void Up()
    {
        // add it nullable first, so rows that already exist are valid
        Database.AddColumn("Customers", new Column("DateCreated", DbType.DateTime));

        // seed it with data
        Database.ExecuteNonQuery("update Customers set DateCreated = getdate()");

        // now that every row has a value, add the not-null constraint
        Database.ChangeColumn("Customers",
            new Column("DateCreated", DbType.DateTime, ColumnProperty.NotNull));
    }

    public override void Down()
    {
        Database.RemoveColumn("Customers", "DateCreated");
    }
}

This example shows how you can handle volatile updates, like adding a new not-null column to a table that has existing data. This can be automated easily, and you can easily go up and down between versions.
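To wire this into a build or deployment step, you point the migrator at the assembly containing your [Migration] classes. A rough sketch (the Migrator.Migrator constructor and MigrateToLastVersion() are the Migrator.NET calls as I remember them; double-check against the version you pick up):

using System.Reflection;

class MigrateToLatest
{
    static void Main(string[] args)
    {
        // connection string supplied by the CI server or a config file
        string connectionString = args[0];

        // "SqlServer" selects the SQL Server provider; the assembly is the one
        // that contains the [Migration]-attributed classes
        var migrator = new Migrator.Migrator("SqlServer", connectionString,
            Assembly.GetExecutingAssembly());

        // applies every pending Up() in version order;
        // migrator.MigrateTo(62) would move up or down to a specific version
        migrator.MigrateToLastVersion();
    }
}

Run that right after the code deploy and the schema always matches the release.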

This has been a really valuable addition to our build, and has streamlined the process immensely.

I posted a comparison of the various migration frameworks in .NET here: http://benscheirman.com/2008/06/net-database-migration-tool-roundup

闹够了就滚 · 2019-01-20 23:18

I've written a .NET-based tool to handle database versioning in an automated fashion. We have been using it in production to roll out database updates (including patches) to multiple environments and to keep a log in each database of which scripts have been run, all automatically. It also has a command-line console, so you can create batch scripts that use it. Check it out: https://github.com/bmontgomery/DatabaseVersioning

闹够了就滚 · 2019-01-20 23:25

Gus off-handedly mentioned DB Ghost (above) – I second it as a potential solution.

A brief overview of how my company is using DB Ghost:

  • After the schema for a new DB has been reasonably settled during initial development, we use the DB Ghost 'Data and Schema Scripter' to create script (.sql) files for all the DB objects (and any static data), and we check these script files into source control (the tool separates the objects into folders such as 'Stored Procedures', 'Tables', etc.). At this point, we can use either the DB Ghost 'Packager' or the 'Packager Plus' tool to create a stand-alone executable that creates a new DB from these scripts.
  • All changes to the DB schema are made by checking in edits to the specific script files.
  • At any time we can use the packager to create an executable to either (a) create a new DB or (b) update an existing DB. Some customization is required for certain path-dependent changes (e.g. changes that require data to be updated), but we have pre-update and post-update scripts that are run.

The 'update' process involves the creation of a clean 'source' DB and then (after the pre-update custom scripts run) a comparison between the schemas of the source DB and the target DB. DB Ghost then alters the target DB to match the source.

We routinely make changes to production DBs (we have 14 customers in 7 different production environments), but sooner or later every DB gets a large-enough set of changes deployed via a DB Ghost update executable (created during our build process). Any production changes that were not checked into source (or that were not checked into the appropriate branch being released) are LOST. This has forced everyone to check in changes consistently.

To summarize:

  • If you enforce a policy that all DB updates be deployed using a DB Ghost update executable, you can 'force' developers to consistently check in their changes, regardless of whether they deploy them manually in the interim.
  • Adding a step (or steps) to your build process to create a DB Ghost update executable in effect verifies that a DB can be created from the scripts (because DB Ghost builds a 'source' DB even when it is only creating the update package). If you then add a step (or steps) to execute the update package on any of the four test DBs you mentioned, you can keep your test DBs in line with source.

There are some caveats and some limitations in what changes are 'easily' deployed with this tool (really a suite of related tools), but they are all fairly minor (at least for my company):

  • Renaming objects must be done in one of the custom scripts
  • The entire DB is always updated (e.g. objects in a single schema can't be updated alone) making it difficult to support customer-specific objects in the main application DB
唯我独甜 · 2019-01-20 23:26

We use a modified version of the database versioning described by K. Scott Allen. We use the Database Publishing Wizard to create the original baseline script, then a custom C# tool based on SQL SMO to dump the stored procedures, views and user functions. Change scripts, which contain schema and data changes, are generated by the Red Gate tools. So we end up with a structure like

Database\
    ObjectScripts\ - contains stored procs, views and user funcs, one per file
    \baseline.sql - database snapshot which includes tables and data
    \sc.01.00.0001.sql - incremental change scripts
    \sc.01.00.0002.sql
    \sc.01.00.0003.sql

The custom tool creates the database if necessary, applies baseline.sql if necessary, adds a SchemaChanges table if necessary, and applies the change scripts as needed based on what's in the SchemaChanges table. That process occurs as part of a NAnt build script each time we do a deployment build via CruiseControl.NET.
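The heart of the tool is small. Here's a simplified sketch of the apply loop (illustrative only; the column names and folder path are assumptions, not the real code):

using System.Data.SqlClient;
using System.IO;
using System.Linq;

class SchemaChanger
{
    static void Main()
    {
        // connection string is illustrative
        string connectionString = "server=.;database=MyApp;Integrated Security=SSPI";
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // create the SchemaChanges log table on first run
            new SqlCommand(
                @"if object_id('SchemaChanges') is null
                  create table SchemaChanges (
                      ScriptName nvarchar(255) not null primary key,
                      AppliedOn datetime not null)", conn).ExecuteNonQuery();

            // apply sc.*.sql files in order, skipping any already recorded
            foreach (string script in Directory.GetFiles(@"Database", "sc.*.sql").OrderBy(f => f))
            {
                string name = Path.GetFileName(script);

                var check = new SqlCommand(
                    "select count(*) from SchemaChanges where ScriptName = @name", conn);
                check.Parameters.AddWithValue("@name", name);
                if ((int)check.ExecuteScalar() > 0)
                    continue; // already applied to this database

                // NOTE: a real runner has to split the file on GO batch separators,
                // since SqlCommand can't execute multi-batch scripts directly
                new SqlCommand(File.ReadAllText(script), conn).ExecuteNonQuery();

                var record = new SqlCommand(
                    "insert into SchemaChanges (ScriptName, AppliedOn) values (@name, getdate())", conn);
                record.Parameters.AddWithValue("@name", name);
                record.ExecuteNonQuery();
            }
        }
    }
}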

If anyone wants the source code to the schemachanger app I can throw it up on codeplex/google or wherever.

Evening l夕情丶 · 2019-01-20 23:30

Go here:

https://blog.codinghorror.com/get-your-database-under-version-control/

Scroll down a bit to the list of 5 links to the odetocode.com website. Fantastic five-part series. I would use that as a starting point to get ideas and figure out a process that will work for your team.

Summer. ? 凉城 · 2019-01-20 23:32

Red Gate has a paper describing how to achieve build automation: http://downloads.red-gate.com/HelpPDF/ContinuousIntegrationForDatabasesUsingRedGateSQLTools.pdf

This is built around SQL Source Control, which integrates with SSMS and your existing source control system.
