Best Database Change Control Methodologies

Posted 2020-05-20 08:41

As a database architect, developer, and consultant, I can answer many questions. One that I was asked recently, and still can't answer well, is...

"What is one of, or some of, the best methods or techniques to keep database changes documented, organized, and yet able to roll out effectively either in a single-developer or multi-developer environment."

This may involve stored procedures and other object scripts, but especially schemas: from documentation, to the new physical update scripts, to rollout, and then full circle. There are applications that make this happen, but they require schema hooks and add overhead. I would rather learn about techniques that don't need a lot of extra third-party involvement.

5 Answers
兄弟一词,经得起流年.
#2 · 2020-05-20 09:18

As long as you are only adding columns and tables to your database, this is an easy task: script the changes in advance in SQL files and simply execute them, possibly in a defined order.

A good approach is to keep one file per table, so that all changes belonging to that table are visible to whoever is working on it (much like working on a class). The same goes for stored procedures and views.
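As an illustration, such a per-table change file might look like the sketch below; the Customers table, column names, and dates are only placeholders:

    -- Customers.sql: all schema changes for the Customers table, in the order they were made
    -- 2020-04-02: initial creation
    CREATE TABLE Customers (
        CustomerId INT IDENTITY(1,1) PRIMARY KEY,
        Name       NVARCHAR(200) NOT NULL
    );

    -- 2020-05-11: add an email column for notifications
    ALTER TABLE Customers ADD Email NVARCHAR(320) NULL;

    -- 2020-05-18: index to support lookups by email
    CREATE INDEX IX_Customers_Email ON Customers (Email);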

A more difficult task (and one where tools might help) is stepping back. As long as you have only added tables and columns, this may not be a big issue. But if an update dropped columns and you now have to undo that update, the data is gone, and you will need to recover it from a backup. Keep in mind that with more than a few tables this can be a big job, and normally you have to undo an update very quickly.

If you can simply restore the backup, you are fine at that moment. But if you roll out on Monday, your clients work until Wednesday, and only then notice that some data is missing (the data you dropped from a table), you can no longer just restore the old database.

I have a model-based approach in mind (sorry, not implemented at the moment) in which schema changes are "modeled" (e.g. in XML) and, during an update, a processor (e.g. a C# program) generates all the necessary SQL and, for example, moves data into a "dropDatabase". The data can stay there, and if for some reason I need to restore some of the dropped data, I can do it with the processor. I think that over time (years) this approach pays off, because otherwise developers stop touching "old" tables since they no longer know whether a table or column is really necessary. With this approach you don't risk much when you drop something.
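A rough sketch of that idea in plain SQL (leaving out the XML model and the C# processor); the table, column, and dropDatabase names are only placeholders:

    -- Before dropping a column, copy its data (with the key) into the archive database,
    -- so the dropped data can be restored later if needed.
    SELECT CustomerId, LegacyCode
    INTO dropDatabase.dbo.Customers_LegacyCode_20200520
    FROM dbo.Customers;

    ALTER TABLE dbo.Customers DROP COLUMN LegacyCode;

    -- Restoring later: re-add the column and copy the archived values back.
    -- ALTER TABLE dbo.Customers ADD LegacyCode NVARCHAR(50) NULL;
    -- UPDATE c SET c.LegacyCode = a.LegacyCode
    -- FROM dbo.Customers AS c
    -- JOIN dropDatabase.dbo.Customers_LegacyCode_20200520 AS a
    --   ON a.CustomerId = c.CustomerId;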

冷血范
#3 · 2020-05-20 09:22

In my case, I generate a script every time I change the database. I name the scripts 00001.sql, 00002.sql, ..., n.sql, and I keep a table with the number of the last script I executed. You can also see Database Documentation.
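A minimal sketch of such a tracking table and how it is consulted before a rollout; the SchemaVersion name and the script numbers are only illustrative:

    -- One-row table recording the number of the last script that was executed.
    CREATE TABLE SchemaVersion (
        LastScript INT NOT NULL
    );
    INSERT INTO SchemaVersion (LastScript) VALUES (3);

    -- Before a rollout, check which numbered scripts still have to run:
    SELECT LastScript FROM SchemaVersion;
    -- ...execute 00004.sql, 00005.sql, ... in order, then record the new value:
    UPDATE SchemaVersion SET LastScript = 5;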

Rolldiameter
#4 · 2020-05-20 09:24

What I do is:

  • All the DDL commands required to recreate the schema (plus the stored procedures, the indexes, etc.) are in a script.
  • To be sure the script is OK, it is tested from time to time (create a database, run the script, restore a backup, and check that the database works well).
  • For change control, the script is kept in a version control system (I typically use Subversion).

The trick is that, if the database cannot be brought down and recreated when, say, a column is added, I have two changes to make: an ALTER TABLE plus a modification to the creation script. A bit more work but, in the long term, it wins.
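A sketch of that paired change, with a made-up Orders table: the ALTER TABLE runs against the live database, and the same column is added to the CREATE TABLE statement in the versioned script:

    -- Change applied to the running database:
    ALTER TABLE Orders ADD ShippedDate DATETIME NULL;

    -- The same column is also added to the create script kept under version control,
    -- so a schema recreated from scratch still matches production:
    CREATE TABLE Orders (
        OrderId     INT IDENTITY(1,1) PRIMARY KEY,
        CustomerId  INT NOT NULL,
        OrderDate   DATETIME NOT NULL,
        ShippedDate DATETIME NULL  -- added 2020-05-20
    );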

Bombasti
#5 · 2020-05-20 09:25
Ridiculous、
#6 · 2020-05-20 09:36

The easiest way I have seen this done without the aid of an external tool is to create what you might call a "schema patch". The schema patch is just a simple T-SQL script. Each patch is given a version number within the script, and that number is stored in a table in the database receiving the changes.

Any new change to the database means creating a new schema patch. The patches can then be run in sequence: the process detects what version the database is currently on and runs all schema patches in between. Afterwards, the schema version table is updated with the version and the date/time the patch was executed, ready for the next run.
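A minimal sketch of one such patch in T-SQL; the SchemaVersions table, the patch number, and the column being added are assumptions, not a fixed convention:

    -- Schema patch number 12 (the SchemaVersions table is assumed to already exist).
    IF NOT EXISTS (SELECT 1 FROM SchemaVersions WHERE Version = 12)
    BEGIN
        BEGIN TRANSACTION;

        -- The actual change carried by this patch:
        ALTER TABLE Customers ADD PhoneNumber NVARCHAR(30) NULL;

        -- Record that patch 12 has been applied and when:
        INSERT INTO SchemaVersions (Version, AppliedAt)
        VALUES (12, GETDATE());

        COMMIT TRANSACTION;
    END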

A good book that goes into details like this is called Refactoring Databases.

If you wish to use an external tool, you can look at Ruby's Migrations project or a similar tool in C# called Migrator.NET. These tools work by creating Ruby or C# classes with a "Forward" and a "Backward" migration. They are more feature-rich because they know how to move forwards as well as backwards through the schema patches. As you stated, however, you are not interested in an external tool, but I thought I would add this for other readers anyway.
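For readers who want the same forward/backward idea without an external tool, it can be approximated with paired plain SQL scripts; the file names, table, and column here are only illustrative:

    -- 0007_add_order_tracking.up.sql   (forward: apply the change)
    ALTER TABLE Orders ADD TrackingNumber NVARCHAR(50) NULL;

    -- 0007_add_order_tracking.down.sql (backward: undo the change)
    ALTER TABLE Orders DROP COLUMN TrackingNumber;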
