How do you share SQL changes within your team?

Whenever you make database changes, how do you apply them to the other developers' databases on the team (and also to your servers)?
Currently we are using a file called changes.sql where we put all our changes, separated with ISO date comments.
Is there a better way?

We use an expanded version of your approach.
We have a database upgrade folder for each release, containing all the scripts that are part of that release. There is one index file in the folder, which contains pseudo links to all the scripts that should be run.
We have a CruiseControl job which runs each night to restore a copy of the current production database, then runs the current release's upgrade scripts against it (by executing the scripts listed in the index file). There's also a CI job which runs whenever anyone checks anything into the upgrade folder for the current release.
The scripts need to be re-runnable, obviously: e.g. they should check for the existence of an object before dropping or creating it.
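For SQL Server, a re-runnable script might guard every operation like this (a minimal sketch; the table, column, and procedure names are illustrative):
-- Add a column only if it is not already there
IF NOT EXISTS (SELECT 1 FROM sys.columns
               WHERE object_id = OBJECT_ID('dbo.Customer') AND name = 'Email')
    ALTER TABLE dbo.Customer ADD Email VARCHAR(255) NULL;
-- Drop a procedure only if it exists, so the script can run repeatedly
IF OBJECT_ID('dbo.GetCustomer', 'P') IS NOT NULL
    DROP PROCEDURE dbo.GetCustomer;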

Take a look at http://dbmaintain.org/overview.html - it is quite a powerful tool for managing database updates. It basically works by executing several SQL scripts in the correct order, and it remembers which scripts were already executed. If an executed script is changed, it either reports an error (in production mode) or clears the database and executes all scripts again (in testing mode). There is a good tutorial too.
Edit: You can also group the SQL scripts (i.e. by release). The big advantage here is that you can use the same scripts for your unit tests, testing environments, continuous integration, near-live, and production environments.

Not at my current job, but in the past I used a database project in Visual Studio 2010, which was then published to SVN. We had an SOP rather than software automation to push changes from development to QA, staging, and production.
The team I worked with was small - five developers with shared responsibility for DB design and .NET development.

You should also consider using version control on your database. One example is Liquibase. With version control you can comment all the changes to the table structure, so you don't need a changes.sql file.

We use a migration tool (migratordotnet - other alternatives exist) that lets you write C# classes that execute database commands. The migrations run locally on each invocation of the program or of the integration tests, and on the servers on each deployment. The migration framework automatically keeps track of which migrations have been applied. Of course, the migrations are a part of the version control repository.
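The bookkeeping behind this is simple: the framework keeps a table of the migration versions that have been applied and only runs what is missing. Conceptually it does something like the following (a sketch; migratordotnet's actual table name and schema may differ):
-- Created once by the framework
CREATE TABLE SchemaInfo (Version BIGINT NOT NULL);
-- On each run: read the applied versions...
SELECT Version FROM SchemaInfo;
-- ...execute every migration not yet listed, recording each one
INSERT INTO SchemaInfo (Version) VALUES (20090301120000);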

How to set up Redgate SQL Source Control with Continuous Integration

My question is this:
What is the best setup for managing SQL changes in a development team?
Our team consists of 4 developers, each with their own copy of a database.
When committing SQL/Application changes to our TFS server, we wish to ensure that any build errors do not get propagated to other developers. So, we are going to implement continuous integration to assist with this.
The idea is that:
1. SQL and application code changes are committed to TFS.
2. A central database gets the SQL updates, and we build the application.
3. Unit tests are executed on the build server.
4. If any of these steps fail, the check-in is rejected and the database is rolled back to the state it was in before the commit.
What is the best way to set up Redgate SQL Source Control to implement this?
If you want to use SQL Source Control, here is a possible setup to consider based on your requirements.
For each developer machine:
Install SQL Source Control
Link each developer database to your TFS repository using the Dedicated database development model
Install SQL Prompt to write your SQL more easily
Configure SQL Test for writing unit tests for SQL Server
On the build server:
Install Redgate DLM Automation (there are Add-Ons to simplify setup)
Configure a Validate build task to validate the schema by checking the database can be built successfully from scratch
Configure a Test build task to run SQL tests
Configure a Sync build task to update your central database with the SQL updates
Run your application unit tests
If the tests fail, you can run a custom script that reverts the last check-in, use the Sync build task again to roll back the database changes, and trigger a new build. You can use the Redgate DLM Automation PowerShell cmdlets to do this.
The last step could be tricky. I honestly prefer and recommend using branches instead of relying on a single central database. That way, each developer can work fully independently, and you merge new changes into the master branch only once the work has been validated on each individual branch.
If you want to go further and also implement deployment you can use Redgate DLM Automation Deployment to create a release database package and deploy your database changes to production directly from your build server or using a release tool like Octopus Deploy.
Finally, I would also advise you to have a look at Redgate ReadyRoll, especially if you are considering a migration-first approach to database changes.
As you can see, there are different ways of using the Redgate tools to manage database changes, and there is no single best way of setting them up. It always depends on the specific requirements and the problems you need to solve.
Hope this helps.
You can use a Database Project. It can contain the entire database schema plus stored procedures. During a build, it will verify that the stored procedures match the schema.
Then enable the Gated Check-in option in the build definition; it accepts check-ins only if the submitted changes merge and build successfully.
As for data written to the database, it depends on your test method: you can have the method delete the data if the test fails, or better, avoid writing to a real database at all and mock the database classes instead. That way you never actually connect to or modify the database, so no cleanup is needed.
For more information you can refer to the articles below:
How To Unit Test – Interacting with the Database
Database cleanup after Junit tests

What are the different approaches for deploying DB changes using TFS 2015?

Currently, we are manually running DB scripts (SQL Server 2012) outside of our CI/CD deployment. What are some ways (including toolsets) to automate deployment of DB changes using TFS 2015 Update 3?
There are really two approaches here, both of which work with TFS. TFS just facilitates the execution of whatever scripting you will use to update your database, including your custom, handcrafted scripts.
There is the state-based approach, which uses a comparison technology to look at your VCS/dev/test/staging database and compare it to production. SQL Source Control and the DLM Automation Suite from Redgate Software do this, as do other comparison tools. You would use a command line or programmatic interface to set your source and target, capture the output, and then use this as an artifact in your release process; you might include a review of that script artifact as a step in your flow.
Note that there are some changes state-based comparisons don't handle well: renames, splits, merges, data movement, and a few others. Some comparison tools have ways around this, some do not, so be aware this may be an issue. If you have a more mature database, perhaps not, but you should consider it. SQL Source Control allows custom migration scripts, which can handle these cases.
The other approach is a script runner or migration strategy where each change you make to a dev database is captured as an ordered script and a framework executes these in order, if they are needed. This is preferred by some people since you can see exactly what code will be executed at dev and deployment time. ReadyRoll from Redgate Software, Liquibase, Rails Migrations, DBUp, FlywayDB, all use this strategy.
Neither of these is better or worse. Both work, both have pros and cons, but really the choice comes down to your comfort level and preference.
Disclosure: I work for Redgate Software.
If deploying DB changes just means using SQL Server Database Projects (.sqlproj files) with Team Foundation Build in Team Foundation Server, there are several ways to achieve this:
Use an MSBuild task with some arguments to publish your SQL project during the build.
Add a deploy target in your .sqlproj file and run the target after the build completes.
Or add a "Batch Script" step in your build definition to run SqlPackage.exe to publish the .dacpac file.
For more details, refer to this blog: Deploying SSDT During Local and Server Build.
As for using TFS 2015, you can also try the SQL Server Database Deployment task.
Use this task to deploy a SQL Server database to an existing SQL Server instance. The task uses a DACPAC and SqlPackage.exe, which provides fine-grained control over database creation and upgrades.

Track changes made to a database

Background:
I have a MS SQL Server database and I want to track changes to it. For example if a column needed to be added or removed or a table needed to be dropped. Something similar to Version control for regular code.
The problem:
While looking around I saw that there were some tools that can be used:
RedGate SQL Source Control
Visual Studio Database project
I am more interested in knowing whether either of these tools will track changes to my database. More specifically, I have a TFS server that is the source control for my MVC code; can I use either of these with TFS? Will it allow us to restore from older versions? Will it allow multiple developers to work on the database simultaneously?
For this type of work, ApexSQL Source Control has shown itself to be all that you need. With this SSMS add-in you can work directly on a database, and all of your changes will be tracked in real time.
Yes, several developers can work at the same time on the same database. When one developer is working on one or more objects, the others can see which objects those are, and until the first developer finishes the change, the others are not allowed to modify those objects.
If an object is changed incorrectly, the previous version, or any earlier version, can be restored at any moment.
The add-in has all the options and features developers need to work without losing time checking which changes were made against an object, since the add-in does that for them. And you can always see who made each change, when, and what it was.
Having been in the database version control space for 5 years (as director of product management at DBmaestro) and having worked as a DBA for over two decades, I can tell you one simple fact: you cannot treat database objects the way you treat your Java, C#, or other files, saving the changes in simple DDL scripts.
There are many reasons and I'll name a few:
Files are stored locally on the developer's PC, and the changes one developer makes do not affect other developers; likewise, a developer is not affected by changes made by colleagues. In a database this is (usually) not the case: developers share the same database environment, so any change committed to the database affects the others.
Publishing code changes is done using Check-In / Submit Changes / etc. (depending on which source control tool you use). At that point, the code from the developer's local directory is inserted into the source control repository, and a developer who wants the latest code needs to request it from the source control tool. In a database, the change already exists and impacts others even if it was not checked into the repository.
During file check-in, the source control tool performs a conflict check to see whether the same file was modified and checked in by another developer while you modified your local copy. There is no such check in the database: if you alter a procedure from your PC and at the same time I modify the same procedure from mine, we overwrite each other's changes.
The build process for code works by getting the label / latest version of the code into an empty directory and then performing a build (compile). The output is binaries, which we simply copy over the existing ones without caring what was there before. With a database we cannot recreate it from scratch, because we need to preserve the data! Deployment instead executes SQL scripts that were generated in the build process.
When executing the SQL scripts (with DDL, DCL, and DML (for static content) commands), you assume the current structure of the environment matches the structure that existed when you created the scripts. If not, your scripts can fail, for example by trying to add a new column that already exists.
Treating SQL scripts as code and generating them manually leads to syntax errors, database dependency errors, and scripts that are not reusable, all of which complicate developing, maintaining, and testing those scripts. In addition, those scripts may run on an environment which is different from the one you thought they would run on.
Sometimes the script in the version control repository does not match the structure of the object that was tested, and then errors happen in production!
There are many more, but I think you got the picture.
What I found that works is the following:
Use an enforced version control system that enforces check-out/check-in operations on the database objects. This ensures the version control repository matches the code that was checked in, because it reads the object's metadata during the check-in operation rather than in a separate manual step. It also allows several developers to work in parallel on the same database while preventing them from accidentally overriding each other's code.
Use an impact analysis that utilizes baselines as part of the comparison, to identify conflicts and to determine whether a change (when comparing the object's structure between the source control repository and the database) is a real change originating from development, or a change that originated from a different path, such as a different branch or an emergency fix, and should therefore be skipped.
An article I wrote on this was published here; you are welcome to read it.
If you're looking for a product that will track changes into TFS from your SQL Server automatically, I'd invite you to take a look at our product, Sql Historian. It's different from most other SQL version control systems (including the ones you've listed) in that it does not require developers to perform a check-in ritual to synchronize version control with what's already committed to the db.
However, the features Sql Historian has in common with the other two systems you mention are: working with TFS, the ability to view older versions of your db objects, and allowing multiple users on the db at the same time.

Building SQL deployment scripts into the application?

We currently have a rather manual, fiddly, messy & error prone way of running SQL deployment scripts when we update our clients' software installations. We're considering finding a 3rd party SQL deployment tool to automate this process.
However, I'm pushing the idea of building our own SQL deployment tool into the application itself. It would be simple - on application startup, it would:
1) Check the existing database schema version (eg. "35")
2) Check against "up to date" database schema version (eg. "38")
3) Retrieve relevant SQL deployment scripts from resource files (eg. "36", "37", "38")
4) Lock the database and run each required SQL deployment script
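A sketch of the version bookkeeping this relies on, assuming a single-row SchemaVersion table (all names are illustrative):
-- Created once when the database is first deployed, seeded with the current version
CREATE TABLE SchemaVersion (Version INT NOT NULL);
-- On startup the application reads the current version...
SELECT Version FROM SchemaVersion;
-- ...runs scripts 36, 37, 38 from its resources in order,
-- bumping the version after each one succeeds:
UPDATE SchemaVersion SET Version = 36;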
Note that this would still be run by an IT technician in case any errors occurred, not by end users.
It seems unorthodox but I don't really see any problem. Your thoughts?
I don't see anything inherently wrong with this.
At a company I've worked for, they built a custom SQL-script installer that would allow them to automatically apply changes to the database, roll back the changes if necessary, and keep tabs on the version of what's been applied.
Whatever the desired result, you'll need to set conventions (e.g. database releases should have this folder structure) and identify the needs and processes that will be used in running the tool (e.g. just how automated you'll make it).
Don't build your own. Far too common a problem for a bespoke solution.
You're looking for a database migration tool; my recommendation would be Liquibase. It can be run from the command line or integrated into the build process. A feature that is especially valuable to me is the generation of SQL upgrade (and downgrade) scripts, which are often demanded from us when supporting production installs.
For a more detailed listing of alternative migration tools see the following answer:
Migrations for Java

Is there a version control system for database structure changes?

I often run into the following problem.
I work on some changes to a project that require new tables or columns in the database. I make the database modifications and continue my work. Usually, I remember to write down the changes so that they can be replicated on the live system. However, I don't always remember what I've changed and I don't always remember to write it down.
So, I make a push to the live system and get a big, obvious error that there is no NewColumnX, ugh.
Regardless of the fact that this may not be the best practice for this situation, is there a version control system for databases? I don't care about the specific database technology. I just want to know if one exists. If it happens to work with MS SQL Server, then great.
In Ruby on Rails, there's a concept of a migration -- a quick script to change the database.
You generate a migration file, which has rules to upgrade the db (such as adding a column) and rules to downgrade it (such as removing a column). Each migration is numbered, and a table keeps track of your current db version.
To migrate up, you run a command called "db:migrate" which looks at your version and applies the needed scripts. You can migrate down in a similar way.
The migration scripts themselves are kept in a version control system -- whenever you change the database you check in a new script, and any developer can apply it to bring their local db to the latest version.
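The up/down pair translates to plain SQL roughly like this (a sketch; the migration name and columns are illustrative):
-- up: 002_add_phone_to_users
ALTER TABLE users ADD COLUMN phone VARCHAR(20);
-- down: 002_add_phone_to_users
ALTER TABLE users DROP COLUMN phone;
-- A bookkeeping table records which numbered migrations have been applied,
-- so db:migrate only runs the ones that are missing.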
I'm a bit old-school, in that I use source files for creating the database. There are actually two files - project-database.sql and project-updates.sql - the first for the schema and persistent data, and the second for modifications. Of course, both are under source control.
When the database changes, I first update the main schema in project-database.sql, then copy the relevant info to the project-updates.sql, for instance ALTER TABLE statements.
I can then apply the updates to the development database, test, iterate until done well.
Then, check in files, test again, and apply to production.
Also, I usually have a table in the db - Config - such as:
CREATE TABLE Config
(
    cfg_tag   VARCHAR(50),
    cfg_value VARCHAR(100)
);
INSERT INTO Config (cfg_tag, cfg_value) VALUES
    ('db_version',  '$Revision: $'),
    ('db_revision', '$Revision: $');
Then, I add the following to the update section:
UPDATE Config SET cfg_value='$Revision: $' WHERE cfg_tag='db_revision';
The db_version only gets changed when the database is recreated, and the db_revision gives me an indication how far the db is off the baseline.
I could keep the updates in their own separate files, but I chose to mash them all together and use cut & paste to extract the relevant sections. A bit more housekeeping is then in order, i.e., removing the ':' from $Revision 1.1 $ to freeze them.
MyBatis (formerly iBATIS) has a schema migration tool for use on the command line. It is written in Java, though it can be used with any project.
To achieve a good database change management practice, we need to identify a few key goals.
Thus, the MyBatis Schema Migration System (or MyBatis Migrations for short) seeks to:
Work with any database, new or existing
Leverage the source control system (e.g. Subversion)
Enable concurrent developers or teams to work independently
Make conflicts very visible and easily manageable
Allow for forward and backward migration (evolve, devolve respectively)
Make the current status of the database easily accessible and comprehensible
Enable migrations despite access privileges or bureaucracy
Work with any methodology
Encourage good, consistent practices
Redgate has a product called SQL Source Control. It integrates with TFS, SVN, SourceGear Vault, Vault Pro, Mercurial, Perforce, and Git.
I highly recommend SQL Delta. I just use it to generate the diff scripts when I'm done coding my feature, and check those scripts into my source control tool (Mercurial :))
They have both a SQL Server and an Oracle version.
I'm surprised that no one has mentioned the open source tool Liquibase, which is Java based and should work with nearly every database that supports JDBC. Compared to Rails, it uses XML instead of Ruby to describe the schema changes. Although I dislike XML for domain-specific languages, the very cool advantage of XML is that Liquibase knows how to roll back certain operations, such as:
<createTable tableName="USER">
<column name="firstname" type="varchar(255)"/>
</createTable>
so you don't need to handle this yourself.
Pure SQL statements or data imports are also supported.
Most database engines should support dumping your database into a file. I know MySQL does, anyway. This will just be a text file, so you could submit that to Subversion, or whatever you use. It'd be easy to run a diff on the files too.
If you're using SQL Server it would be hard to beat Data Dude (aka the Database Edition of Visual Studio). Once you get the hang of it, doing a schema compare between your source controlled version of the database and the version in production is a breeze. And with a click you can generate your diff DDL.
There's an instructional video on MSDN that's very helpful.
I know about DBMS_METADATA and Toad, but if someone could come up with a Data Dude for Oracle then life would be really sweet.
Have your initial CREATE TABLE statements in version control, then add ALTER TABLE statements, but never edit files - just add more alter files, ideally named sequentially, or even as a "change set", so you can find all the changes for a particular deployment.
The hardest part that I can see is tracking dependencies: e.g., for a particular deployment, table B might need to be updated before table A.
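A sketch of the sequential-file convention (file names and schema are illustrative); sequencing within a change set is also what resolves the table-B-before-table-A problem:
-- 014_add_customer_email.sql
ALTER TABLE Customer ADD Email VARCHAR(255) NULL;
-- 015_backfill_customer_email.sql (must run after 014)
UPDATE Customer SET Email = '' WHERE Email IS NULL;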
For Oracle, I use Toad, which can dump a schema to a number of discrete files (e.g., one file per table). I have some scripts that manage this collection in Perforce, but I think it should be easily doable in just about any revision control system.
Take a look at the oracle package DBMS_METADATA.
In particular, the following methods are particularly useful:
DBMS_METADATA.GET_DDL
DBMS_METADATA.SET_TRANSFORM_PARAM
DBMS_METADATA.GET_GRANTED_DDL
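For example, dumping one table's DDL with the noisy storage clauses suppressed (a minimal sketch; the HR schema and EMPLOYEES table are illustrative):
BEGIN
  -- Omit physical storage details so dumped DDL diffs cleanly
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE', FALSE);
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SEGMENT_ATTRIBUTES', FALSE);
END;
/
SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMPLOYEES', 'HR') FROM dual;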
Once you are familiar with how they work (pretty self explanatory) you can write a simple script to dump the results of those methods into text files that can be put under source control. Good luck!
Not sure if there is something this simple for MSSQL.
I write my db release scripts in parallel with coding, and keep the release scripts in a project specific section in SS. If I make a change to the code that requires a db change, then I update the release script at the same time.
Prior to release, I run the release script on a clean dev db (copied structure wise from production) and do my final testing on it.
I've done this off and on for years -- managing (or trying to manage) schema versions. The best approaches depend on the tools you have. If you can get the Quest Software tool "Schema Manager" you'll be in good shape. Oracle has its own, inferior tool that is also called "Schema Manager" (confusing much?) that I don't recommend.
Without an automated tool (see other comments here about Data Dude) then you'll be using scripts and DDL files directly. Pick an approach, document it, and follow it rigorously. I like having the ability to re-create the database at any given moment, so I prefer to have a full DDL export of the entire database (if I'm the DBA), or of the developer schema (if I'm in product-development mode).
PL/SQL Developer, a tool from Allround Automations, has a plug-in for repositories that works OK (but not great) with Visual SourceSafe.
From the web:
The Version Control Plug-In provides a tight integration between the PL/SQL Developer IDE and any Version Control System that supports the Microsoft SCC Interface Specification. This includes most popular Version Control Systems such as Microsoft Visual SourceSafe, Merant PVCS and MKS Source Integrity.
http://www.allroundautomations.com/plsvcs.html
ER Studio allows you to reverse-engineer your database schema into the tool, and you can then compare it to live databases.
Example: Reverse your development schema into ER Studio -- compare it to production and it will list all of the differences. It can script the changes or just push them through automatically.
Once you have a schema in ER Studio, you can either save the creation script or save it as a proprietary binary and keep it in version control. If you ever want to go back to a past version of the schema, just check it out and push it to your db platform.
There's a PHP5 "database migration framework" called Ruckusing. I haven't used it, but the examples show the idea: if you use the language to create the database as and when needed, you only have to track source files.
We've used MS Team System Database Edition with pretty good success. It integrates with TFS version control and Visual Studio more-or-less seamlessly, and allows us to manage stored procs, views, etc. easily. Conflict resolution can be a pain, but version history is complete once it's done. Thereafter, migrations to QA and production are extremely simple.
It's fair to say that it's a version 1.0 product, though, and is not without a few issues.
You can use Microsoft SQL Server Data Tools in Visual Studio to generate scripts for database objects as part of a SQL Server Project. You can then add the scripts to source control using the source control integration that is built into Visual Studio. SQL Server Projects also let you verify the database objects using a compiler and generate deployment scripts to update an existing database or create a new one.
In the absence of a VCS for table changes I've been logging them in a wiki. At least then I can see when and why it was changed. It's far from perfect as not everyone is doing it and we have multiple product versions in use, but better than nothing.
I'd recommend one of two approaches. First, invest in PowerDesigner from Sybase - the Enterprise Edition. It allows you to design physical data models and a whole lot more, and it comes with a repository that lets you check in your models. Each new check-in can be a new version; it can compare any version to any other version, and even to what is in your database at that time. It will then present a list of every difference and ask which should be migrated… and then it builds the script to do it. It's not cheap, but it's a bargain at twice the price, and its ROI is about 6 months.
The other idea is to turn on DDL auditing (works in Oracle). This will create a table recording every change you make. If you query the changes from the timestamp you last moved your database changes to prod up to right now, you'll have an ordered list of everything you've done. A few WHERE clauses to eliminate zero-sum changes, like create table foo; followed by drop table foo;, and you can EASILY build a mod script. Why keep the changes in a wiki? That's double the work. Let the database track them for you.
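One way to set this up in Oracle is a schema-level DDL trigger that writes to your own log table (a sketch - names are illustrative, and the built-in AUDIT facility is an alternative):
CREATE TABLE ddl_log (
  ddl_date  DATE,
  user_name VARCHAR2(30),
  operation VARCHAR2(30),
  obj_type  VARCHAR2(30),
  obj_name  VARCHAR2(30)
);
CREATE OR REPLACE TRIGGER ddl_log_trg
AFTER DDL ON SCHEMA
BEGIN
  -- ora_* event attributes are supplied by Oracle inside DDL triggers
  INSERT INTO ddl_log (ddl_date, user_name, operation, obj_type, obj_name)
  VALUES (SYSDATE, ora_login_user, ora_sysevent, ora_dict_obj_type, ora_dict_obj_name);
END;
/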
Schema Compare for Oracle is a tool specifically designed to migrate changes from one Oracle database to another. Please visit the URL below for the download link, where you will be able to use the software for a fully functional trial.
http://www.red-gate.com/Products/schema_compare_for_oracle/index.htm
Two book recommendations: "Refactoring Databases" by Ambler and Sadalage and "Agile Database Techniques" by Ambler.
Someone mentioned Rails Migrations. I think they work great, even outside of Rails applications. I used them on an ASP application with SQL Server which we were in the process of moving to Rails. You check the migration scripts themselves into the VCS.
Here's a post by Pragmatic Dave Thomas on the subject.