How to set up Redgate SQL Source Control with Continuous Integration

My question is this:
What is the best setup for managing SQL changes in a development team?
Our team consists of 4 developers, each with their own copy of a database.
When committing SQL/Application changes to our TFS server, we wish to ensure that any build errors do not get propagated to other developers. So, we are going to implement continuous integration to assist with this.
The idea is that:
1. SQL and application code changes are committed to TFS.
2. A central database gets the SQL updates, and we build the application.
3. Unit tests are executed on the build server.
4. If any of these steps fail, the checkin is rejected and the database gets rolled back to the state it was in before the commit.
What is the best way to set up our Redgate SQL Source Control to implement this?

If you want to use SQL Source Control, here is a possible setup to consider based on your requirements.
For each developer machine:
Install SQL Source Control
Link each developer database to your TFS repository, using the dedicated database development model.
Install SQL Prompt to write your SQL more easily
Configure SQL Test for writing unit tests for SQL Server (a sample test is sketched below)
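SQL Test is built on the open-source tSQLt framework, so a minimal test might look like the following sketch (the test class, procedure, and table names are illustrative):

EXEC tSQLt.NewTestClass 'OrderTests';
GO
CREATE PROCEDURE OrderTests.[test order amounts are summed]
AS
BEGIN
    -- Replace the real table with an empty fake so the test is isolated.
    EXEC tSQLt.FakeTable 'dbo.Orders';
    INSERT INTO dbo.Orders (OrderId, Amount) VALUES (1, 10), (2, 15);

    DECLARE @actual INT = (SELECT SUM(Amount) FROM dbo.Orders);

    EXEC tSQLt.AssertEquals @Expected = 25, @Actual = @actual;
END;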
On the build server:
Install Redgate DLM Automation (there are Add-Ons to simplify setup)
Configure a Validate build task to validate the schema by checking the database can be built successfully from scratch
Configure a Test build task to run SQL tests
Configure a Sync build task to update your central database with the SQL updates
Run your application unit tests
If the tests fail, you can run a custom script that reverts the last check-in, then use the Sync build task again to roll back the database changes and trigger a new build. You can use the Redgate DLM Automation PowerShell cmdlets to do this.
The last step could be tricky. I honestly prefer and recommend using branches instead of relying on a single central database. This way, each developer can work fully independently, and you can merge the new changes into the master branch only when the work has been validated on each individual branch.
If you want to go further and also implement deployment you can use Redgate DLM Automation Deployment to create a release database package and deploy your database changes to production directly from your build server or using a release tool like Octopus Deploy.
Finally, I would also advise you to have a look at Redgate ReadyRoll, especially if you are considering a migration-first approach to database changes.
As you can see, there are different ways of using Redgate tools to manage database changes, and there is no single best way of setting them up. It always depends on the specific requirements and problems you need to solve.
Hope this helps.

You can use a Database Project. It can contain the entire database schema plus stored procedures. During a build, it will verify that the stored procedures match the schema.
Then enable the Gated Check-in option in the build definition; it accepts check-ins only if the submitted changes merge and build successfully.
As for the data written to the database, it depends on your test method: you can have the test delete its data if it fails, or better, you shouldn't be writing to a real database at all. Instead you should mock the database classes. This way you don't actually have to connect to and modify the database, and therefore no cleanup is needed.
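If some tests do have to touch a real database, one common pattern is to wrap each test in a transaction and roll it back at the end (a sketch in T-SQL; the table and values are illustrative):

BEGIN TRANSACTION;

-- Arrange/act: the code under test writes to the database.
INSERT INTO Customers (Name, Email) VALUES ('Test User', 'test@example.com');

-- Assert: verify the expected state while the transaction is still open.
SELECT COUNT(*) FROM Customers WHERE Email = 'test@example.com';

-- Undo everything the test did, leaving the database unchanged.
ROLLBACK TRANSACTION;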
For more information you can reference below articles:
How To Unit Test – Interacting with the Database
Database cleanup after Junit tests

Related

What's the recommended way to do database migrations with Ktor + Exposed (Kotlin)?

Neither Ktor nor Exposed has built-in support for database migrations. What's the recommended way to do this?
If you are using Ktor with Gradle, I would recommend using Flyway programmatically inside the entry point of your Application. This way it can easily be part of your Continuous Delivery pipeline. You can see the Flyway API docs here: https://flywaydb.org/documentation/api/
What I essentially do though is add the dependency (using Kotlin DSL):
implementation("org.flywaydb:flyway-core:6.5.2")
And then all you need to do is create an instance of Flyway and call migrate when you load your module:
import io.ktor.application.Application
import io.ktor.routing.routing
import org.flywaydb.core.Flyway

fun Application.module() {
    // Run any pending migrations before the app starts serving requests.
    Flyway.configure().dataSource(/*config to your DB*/).load().migrate()
    // the rest of your application
    routing {
    }
}
You could of course extract the creation of Flyway to your DI tool (e.g. Koin) and add some logging to show progress.
This way your DB will be migrated (if necessary) every time just before your app is started.
As for writing the actual migrations, the official docs are very helpful. What you essentially need to do is:
Make sure you have the required directory for migration files (src/main/resources/db/migration by default).
Write plain SQL in separate migration files in the above directory. The filenames need to stick to the convention too (by default: a capital V, then a number which you increment for each new migration, then a double underscore - this tricked me at the beginning ;) - and a snake_case description, e.g. V1__Create_person_table.sql).
Run the app and observe magic :D
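To illustrate, the V1__Create_person_table.sql file from the naming example above could contain plain SQL like this (the columns are just an example):

-- V1__Create_person_table.sql
CREATE TABLE person (
    id INT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);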
Database tables are migrated at deployment via Flyway, and scripts can be added in the db/migrations folder to add new tables or execute queries (like inserting data) on server startup.
(https://github.com/arun0009/kotlin-ktor-exposed-sample-api)
Here's how Flyway works: https://flywaydb.org/getstarted/how
Download, install, and configure Flyway (See https://flywaydb.org/getstarted/firststeps/commandline)
Point it to a database, which will be the location of the flyway_schema_history table
Write your migration files for create table or insert data following their naming convention: V<version number>__<migration description> and run with flyway migrate
Write your repeatable migration files for create view following their naming convention: R__<migration description> and run with flyway migrate
Write your undo migration files for drop table or delete data following their naming convention: U<version number>__<migration description> and run with flyway undo (examples of these file types are sketched after this list)
Check your migration status with flyway info and commit your files if you're happy
Make any necessary modifications and rerun the migration. Repeat and commit.
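For illustration, here is what the repeatable and undo files described above might contain (the view and table are hypothetical; CREATE OR REPLACE assumes a database that supports it, e.g. PostgreSQL, and undo migrations are a paid Flyway feature):

-- R__Person_summary_view.sql (repeatable: reapplied whenever its checksum changes)
CREATE OR REPLACE VIEW person_summary AS
SELECT id, name FROM person;

-- U1__Create_person_table.sql (undoes the V1 migration)
DROP TABLE person;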
If this is a one-time activity, you can try using an off-the-shelf utility like SQL Data Compare.
To make this happen, you need to ensure that both databases are accessible locally from your machine, so that you can create two DB connections and run a comparison against them.
At the end of the comparison, you can get an auto-generated SQL script out of it to run against your new schema and make it synchronized.
If you wish to compare schema objects as well, Red Gate provides a similar schema comparison tool, which they have now started calling SQL Compare (God knows why!!). This utility also provides a similar auto-generated script to help you.
But again, Red Gate is good for one-time migration, and you can use it with their trial version for a period of 30 days. For similar activity on a regular basis, you would need to buy the licensed version of the same software.
For data migration, I use Navicat Premium, which I find very easy to use, but it is not open source. If you are looking for an open-source tool, you can use SQLines Data, which is open source (Apache License 2.0): a scalable, parallel, high-performance data transfer and schema conversion tool that you can use for database migrations and ETL processes.
SQLines
It is available for Linux and Windows, on both 64-bit and 32-bit platforms.
You can also use SQLines Data for cross-platform database migration. The tool migrates table definitions, constraints, indexes and transfers data.
This is how you can start with SQLines:
Download and unzip the file, no installation is required
Run sqldataw.exe on Windows to launch the GUI version
Run ./sqldata on Linux to launch the command line tool.
There are also migration guidelines available for specific databases: Guidelines

What are the different approaches for deploying DB changes using TFS 2015?

Currently, we are manually running DB scripts (SQL Server 2012) outside of our CI/CD deployment. What are some ways (including toolsets) we can use to automate deployment of DB changes using TFS 2015 Update 3?
There are really two approaches here, both of which work with TFS. TFS just facilitates the execution of whatever scripting you use to update your database, including your custom, handcrafted scripts.
There is the state-based approach, which uses a comparison technology to look at your VCS/dev/test/staging database and compare it to production. SQL Source Control and the DLM Automation Suite from Redgate Software do this, as do other comparison tools. What you would do is use a command-line or programmatic interface to set your source and target, capture the output, and then use this as an artifact in your release process. I might include a review of the generated script as a step in your flow.
Note that there are some changes that state-based comparisons don't handle well: renames, splits, merges, data movement, and a few others. Some comparison tools have ways around this, some do not. Be aware this may be an issue. If you have a more mature database, perhaps not, but you should consider this. SQL Source Control allows custom migration scripts, which can handle these issues.
The other approach is a script runner or migration strategy, where each change you make to a dev database is captured as an ordered script and a framework executes these in order, if they are needed. This is preferred by some people since you can see exactly what code will be executed at dev and deployment time. ReadyRoll from Redgate Software, Liquibase, Rails Migrations, DbUp, and FlywayDB all use this strategy.
Neither of these is better or worse. Both work, both have pros and cons, but really the choice comes down to your comfort level and preference.
Disclosure: I work for Redgate Software.
If deploying DB changes just means using SQL Server Database Projects (.sqlproj files) with Team Foundation Build in Team Foundation Server, there are several ways to achieve this:
Use an MSBuild task with some arguments to publish your SQL project during the build.
Add a deploy target in your .sqlproj file and run the target after the build completes.
Or add a "Batch Script" step in your build definition to run "SqlPackage.exe" to publish the .dacpac file.
For more details, please refer to this blog: Deploying SSDT During Local and Server Build.
As for using TFS2015, you can also try to use SQL Server Database Deployment task.
Use this task to deploy a SQL Server database to an existing SQL Server instance. The task uses a DACPAC and SqlPackage.exe, which provides fine-grained control over database creation and upgrades.

Building SQL deployment scripts into the application?

We currently have a rather manual, fiddly, messy & error prone way of running SQL deployment scripts when we update our clients' software installations. We're considering finding a 3rd party SQL deployment tool to automate this process.
However, I'm pushing the idea of building our own SQL deployment tool into the application itself. It would be simple - on application startup, it would:
1) Check the existing database schema version (eg. "35")
2) Check against "up to date" database schema version (eg. "38")
3) Retrieve relevant SQL deployment scripts from resource files (eg. "36", "37", "38")
4) Lock the database and run each required SQL deployment script
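A minimal sketch of the version bookkeeping this scheme relies on (the table name is illustrative; the version numbers follow the example above):

-- One-row table recording which schema version is currently installed.
CREATE TABLE SchemaVersion (
    Version INT NOT NULL
);
INSERT INTO SchemaVersion (Version) VALUES (35);

-- On startup the application reads the current version...
SELECT Version FROM SchemaVersion;

-- ...then runs each embedded script from 36 up to 38 in order, and each
-- script finishes by bumping the recorded version, e.g. script "36" ends with:
UPDATE SchemaVersion SET Version = 36;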
Note that this would still be run by an IT technician in case any errors occurred, not by end users.
It seems unorthodox but I don't really see any problem. Your thoughts?
I don't see anything inherently wrong with this.
At a company I've worked for, they built a custom SQL-script installer that would allow them to automatically apply changes to the database, roll back the changes if necessary, and keep tabs on the version of what's been applied.
No matter the desired result of the application, you'll need to set conventions (e.g. database releases should have this folder structure, etc.) and identify the needs and processes that will be used in running the tool (i.e. just how automated you'll make it).
Don't build your own. Far too common a problem for a bespoke solution.
You're looking for a database migration tool; my recommendation would be Liquibase. It can be run from the command line or integrated into the build process. A feature that is especially valuable to me is the generation of SQL upgrade (and downgrade) scripts, which are often demanded from us when supporting production installs.
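For example, Liquibase can work directly from plain SQL changelogs, with the downgrade declared inline (a minimal sketch; the table is illustrative):

--liquibase formatted sql

--changeset alice:1
CREATE TABLE widget (
    id INT PRIMARY KEY,
    name VARCHAR(50) NOT NULL
);
--rollback DROP TABLE widget;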
For a more detailed listing of alternative migration tools, see the following answer:
Migrations for Java

How do you share SQL changes within your team?

Whenever you make database changes, how do you apply them to the other databases on the team (and also to your servers)?
Currently we are using a file called changes.sql where we put all our changes, separated with ISO date comments.
Is there a better way?
We use an expanded version of your approach.
We have a database upgrade folder for each release, which contains all the scripts that are part of the release. There is one index file in the folder, which contains pseudo-links to all the scripts that should be run.
We have a CruiseControl job which runs each night to restore a copy of the current production database, then runs the current release's upgrade scripts against it (by executing the scripts defined in the index file). There's also a CI job which runs whenever anyone checks anything into the upgrade folder for the current release.
The scripts need to be re-runnable, obviously; e.g. they should check for the existence of something before dropping or creating it.
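For example, in SQL Server a re-runnable script can guard every operation (object names are illustrative):

-- Only create the table if it doesn't already exist.
IF OBJECT_ID('dbo.Orders', 'U') IS NULL
BEGIN
    CREATE TABLE dbo.Orders (
        OrderId INT IDENTITY PRIMARY KEY,
        CustomerId INT NOT NULL
    );
END;

-- Only drop the procedure if it is present before recreating it.
IF OBJECT_ID('dbo.GetOrders', 'P') IS NOT NULL
    DROP PROCEDURE dbo.GetOrders;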
Take a look at http://dbmaintain.org/overview.html - it is a quite powerful tool for managing database updates. It basically works by executing several SQL scripts in the correct order, and it remembers which scripts were already executed. If an executed script is changed, it either reports an error (in production mode) or clears the database and executes all scripts again (in testing mode). There is a good tutorial too.
Edit: You can also group the SQL scripts (e.g. by release). The big advantage here is that you can use the same scripts for your unit tests, testing environments, continuous integration, near-live and production environments.
Not at my current job, but in the past I used a database project in Visual Studio 2010, which was then published to SVN. We had an SOP rather than software automation to push changes from development to QA, staging, and production.
The team I worked with was small - five developers with shared responsibility for DB design and .NET development.
You should also consider using version control on your database. One example is Liquibase. By using version control you can comment all the changes to the table structure, so you don't need a changes.sql file.
We use a migration tool (migratordotnet - other alternatives exist) that lets you write C# classes that execute database commands. The migrations run locally on each invocation of the program or of the integration tests, and on the servers on each deployment. The migration framework automatically keeps track of which migrations have been applied. Of course, the migrations are a part of the version control repository.

Automatic incremental SQL Script generation for incremental, nightly builds when using Team Build in TFS 2008 and Visual Studio 2008?

hope that everybody here is OK.
We are using VS 2008 as our development tool and TFS 2008 for version control as well as build automation. Some of our developers use DBPro for database changes and some use SQL Server Management Studio.
I am trying to automate build for Web Application built using C# and VB.Net.
Our scenario is such that we have a central database to which our web application connects.
Whenever we supply our clients with a new functionality or a bug fix, we supply them incremental builds.
The SQL script is checked into source control for every incremental build once the developers have made and tested their changes on our central DB server.
I want to generate a differential script that can be run at the client as an incremental update script. Getting there is the problem: sometimes our developers forget the database change-sets, and the script in source control is missing an SP or two.
Also, we sometimes need to insert default data into some of the tables that must hold strict, predefined values rather than test values. For example, in a table that contains services provided by the panel, we add a new service name, signature, credentials, service address, and so on to the ServiceTable. Besides this, many other tables may have test data that may not be needed.
If we use DataCompare, it will generate a changeset for the required data (important for the client to enable certain services) as well as for the test data that was added to the database as a result of our testing of the functionality or bug fix.
Currently I am using SQLSchemaCompareTask (from the Visual Studio 2008 Team Database Professional Power Tools API) in the TFSBuild.proj file of the build definition for TFS 2008.
Using SQLSchemaCompareTask, the generated script contains names like [dbo]. etc., which are not desired, as the script fails when run against SQL Server 2000 databases (some of our clients still use SQL Server 2000 as the backend of the application).
Also, default data can't be generated by this process.
To overcome this problem, I have to come up with a solution that can compare databases and generate a script automatically that does not have to be manually reviewed again before being sent to the client.
Please suggest an effective methodology for such SQL script generation, and whether two different databases should be used, or something else. Is there any toolkit or API that can enable build automation for SQL Server databases?
Thank you all.
Regards
Steve
Try to use SQL Examiner Suite for this:
http://www.sqlaccessories.com/SQL_Examiner_Suite/
The tool compares both schema and data and produces synchronization scripts (or differential scripts). You can automate script creation with the supplied command-line tool.
Rather than collating many individual change-set scripts (and therefore occasionally missing objects), why not use schema compare and data compare to create a single script from your database project, using a database equivalent to your client's as the target? This should create a script tailored to their requirements.
In data compare you can exclude test data records that you don't want pushed to your client by unchecking them in the lower grid.
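If you prefer to script required reference data by hand instead, a guarded insert keeps the script safe to re-run (a sketch; the column names and values are illustrative, modeled on the ServiceTable mentioned above):

-- Insert the service row only if it is not already present.
IF NOT EXISTS (SELECT 1 FROM ServiceTable WHERE ServiceName = 'PaymentService')
BEGIN
    INSERT INTO ServiceTable (ServiceName, Signature, Credentials, ServiceAddress)
    VALUES ('PaymentService', 'v1-signature', 'svc-account', 'http://services/payment');
END;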